Hello,
Let me jump in :)
Regarding point 3), I think 'Karelo' refers to the definition of 'srms' in 3dTto1D, where srms = scaled rms = dvars/mean, and it is suggested that "SRMS survives both a resampling and scaling of the data. Since it is unchanged with any data scaling (unlike DVARS), values are comparable across subjects and studies." (taken from [afni.nimh.nih.gov])
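The scale invariance is easy to verify numerically. The sketch below uses the definition quoted above (srms = DVARS / mean); the toy data and the DVARS formulation (RMS of the backward temporal difference across voxels) are my own illustration, not 3dTto1D's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "voxels x time" array standing in for an fMRI dataset
data = rng.normal(1000.0, 10.0, size=(50, 200))

def dvars(x):
    # DVARS: RMS across voxels of the backward temporal difference
    return np.sqrt(np.mean(np.diff(x, axis=1) ** 2, axis=0))

def srms(x):
    # scaled RMS: DVARS divided by the overall mean of the data
    return dvars(x) / np.mean(x)

# Multiplying the data by any constant changes DVARS but not srms
assert not np.allclose(dvars(data), dvars(data * 7.5))
assert np.allclose(srms(data), srms(data * 7.5))
print("srms is invariant to data scaling")
```

This is why srms values can be compared across subjects and studies while raw DVARS values cannot.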
My suggestion is that you explore the range of the srms timecourse across the subjects in your dataset, and try to infer the most appropriate value to clean your data of very large global fluctuations. Note that these global effects will probably also be captured by -regress_censor_outliers or 3dToutcount. This kind of exploration is equally valid for enorm (or FD, or any other metric). As Paul indicates, a threshold of 0.2 on enorm might be a good start, but it may be too strict, for instance, for clinical populations. Note also that there are multiple definitions of FD in the literature (see Figure 9 in [pubmed.ncbi.nlm.nih.gov]), so a fixed threshold is uninformative if the paper does not state which definition was used.
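One simple way to do that exploration is to summarize the distribution of the timecourse and see where the bulk of the values lies versus the spikes. This is a minimal sketch; the synthetic timecourse and the filename in the comment are placeholders for your own 1D output:

```python
import numpy as np

# Stand-in for a real srms (or enorm/FD) timecourse. In practice you
# would load your own file instead, e.g.:
#   srms_tc = np.loadtxt("srms.1D")   # hypothetical filename
rng = np.random.default_rng(1)
srms_tc = np.abs(rng.normal(0.05, 0.02, size=300))

# Percentiles show where the bulk of the values sits; a censor
# threshold should sit above that bulk so only spikes are removed.
for p in (50, 90, 95, 99):
    print(f"{p:2d}th percentile: {np.percentile(srms_tc, p):.4f}")
print(f"max: {srms_tc.max():.4f}")
```

Running this per subject (and plotting the timecourses) gives you a data-driven feel for a sensible threshold before committing to one in your processing script.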
Hope this helps
Cesar