Do you mean during an interactive AFNI run? What are you doing -- running some program outside AFNI, and overwriting an existing dataset (e.g., moo+orig.HEAD is replaced by a new file of the same name), then doing "Rescan"? If so, this will confuse the poor program, since every dataset has a unique IDcode inside its header, which is used to set up inter-dataset pointers for various purposes. So when you re-create moo+orig.HEAD, it will have a new IDcode, and when you reload it, AFNI will become unhappy -- as you have observed.
However, you CAN replace just the .BRIK file, assuming the dimensions of the dataset are the same and only the voxel values differ. Then if you do "Purge Memory" from the Misc menu, all dataset bricks will have to be reloaded from disk. So if you are re-creating moo in a script, something like this would work:
... create moo_temp+orig.HEAD and moo_temp+orig.BRIK somehow
cp -f moo_temp+orig.BRIK ./moo+orig.BRIK
... followed by "Datamode->Misc->Purge Memory"
... loop back to re-create moo_temp+orig
Note that if you will be doing "Rescan", the dataset moo_temp+orig should be kept in a separate directory, so that its continual changes don't cause problems in the main session.
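The steps above can be sketched as a shell loop. The dataset-rebuilding step is of course a placeholder (faked here with touch), and the work/ subdirectory name is my own assumption for the "separate directory":

```shell
#!/bin/sh
# Sketch of the re-create-and-copy loop: rebuild moo_temp in its own
# directory, then swap in ONLY the .BRIK file -- moo+orig.HEAD (and
# therefore its IDcode) is never replaced, so AFNI stays happy.
mkdir -p work
for iter in 1 2 3 ; do
    # ... create work/moo_temp+orig.HEAD and .BRIK somehow ...
    touch work/moo_temp+orig.HEAD work/moo_temp+orig.BRIK   # stand-in step
    # replace only the brick of the dataset AFNI already knows about
    cp -f work/moo_temp+orig.BRIK ./moo+orig.BRIK
    # now do "Datamode->Misc->Purge Memory" in AFNI before viewing
done
```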
If you want, an option could be added to force a memory purge via an external program, such as plugout_drive, which can send various commands to AFNI (cf. README.driver or afni.nimh.nih.gov). That would be pretty easy. Then the whole thing above would be scriptable (i.e., you wouldn't have to click the "Purge Memory" button).
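For illustration, this is the general shape of driving AFNI from a script with plugout_drive, using an existing driver command (RESCAN_THIS, from README.driver) -- the purge command itself is hypothetical until it actually gets added:

```shell
#!/bin/sh
# plugout_drive sends driver commands to a running AFNI
# (AFNI must have plugouts enabled, e.g. started as 'afni -yesplugouts').
# RESCAN_THIS is a real command listed in README.driver; a purge-memory
# command is hypothetical -- it would have to be added to AFNI first.
cmd="plugout_drive -com RESCAN_THIS -quit"
if command -v plugout_drive >/dev/null 2>&1 ; then
    $cmd                       # talk to the running AFNI
else
    echo "would run: $cmd"     # AFNI not installed here
fi
```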
I suppose an "Expunge Dataset" menu item could be added. That requires a fair amount of thought, to be able to clean up inter-dataset references (e.g., dealing with someone who is drawing on a dataset with the "Draw Dataset" plugin and then who chooses to Expunge that dataset without closing the plugin first -- the program has to catch that kind of thing to avoid a horrible crash).