Gang Wrote:
-------------------------------------------------------
> Hi Michael,
>
> > 1)Why is session not considered a fixed effect?
>
>
> Nobody had ever before asked for the option of
> fixed effects in the model. I can add such an
> option soon.
Yes, if you could add that option it would be great. Why would one consider the session effect random rather than fixed?
>
> > Does 3dICC_REML only include the (2, 1) ICC
> > model and not the (3, 1) ICC model?
>
> The ICC computation is an extension of the
> traditional methods such as ICC(2,1) and ICC(3,1).
> See more discussion in Chen, G., Saad, Z.S.,
> Britton, J.C., Pine, D.S., Cox, R.W. (2013).
> Linear Mixed-Effects Modeling Approach to FMRI
> Group Analysis.
> NeuroImage 73:176-190.
I see.
>
> > 2) Do the results reflect consistency or
> > absolute agreement? Is there an option for either
> > one?
>
> I'm not so sure about the difference between the
> two. The interpretation I'm familiar with is
> discussed on the manual website
> [afni.nimh.nih.gov]
Here is an excerpt about the difference between the two from the McGraw and Wong 1996 paper, "Forming Inferences About Some Intraclass Correlation Coefficients":

"Understanding the conceptual difference between them begins by noting their formal distinction, which is in the definition of the ICC denominator. For consistency measures, column variance is excluded from denominator variance, and for absolute agreement measures, it is not..."

and also

"...when measurements differ in absolute value, regardless of the reason, they are viewed as disagreements. Thus paired scores (2,4), (4,6), and (6,8) are in perfect agreement using a consistency definition [ICC(C,1) = 1.00] but not an absolute agreement definition [ICC(A,1) = .67]."
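For concreteness, the two definitions can be checked numerically on the paired scores from that excerpt. This is a plain-Python sketch using the standard two-way ANOVA formulas for ICC(C,1) and ICC(A,1); the variable names are my own:

```python
# Paired scores from the McGraw & Wong excerpt: three subjects (rows),
# two measurements each (columns).
scores = [(2, 4), (4, 6), (6, 8)]
n = len(scores)       # number of subjects (rows)
k = len(scores[0])    # number of measurements (columns)

grand = sum(x for row in scores for x in row) / (n * k)
row_means = [sum(row) / k for row in scores]
col_means = [sum(row[j] for row in scores) / n for j in range(k)]

# Two-way ANOVA sums of squares and mean squares
ss_rows = k * sum((m - grand) ** 2 for m in row_means)
ss_cols = n * sum((m - grand) ** 2 for m in col_means)
ss_total = sum((x - grand) ** 2 for row in scores for x in row)
ss_err = ss_total - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = ss_err / ((n - 1) * (k - 1))

# Consistency: column (measurement) variance excluded from the denominator
icc_c = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
# Absolute agreement: column variance kept in the denominator
icc_a = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k / n * (ms_cols - ms_err)
)

print(icc_c)  # 1.0
print(icc_a)  # ~0.667
```

The raters disagree on every subject by a constant offset of 2, which the consistency definition forgives (ICC = 1.00) and the absolute-agreement definition penalizes (ICC = .67), matching the paper's numbers.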
>
> > 3) For each factor, there is a correlation
> > coefficient, but no corresponding p-value?
> > Is there a roundabout way to calculate the
> > appropriate p-value?
>
> I've never seen a way to associate the ICC with a
> p-value. Do you happen to know any literature
> about this?
>
When running ICC in SPSS, you get the ICC value plus a p-value under the heading "F test with True Value 0".
The McGraw and Wong 1996 paper describes how the F-tests for the ICC are derived.
> > 4) If I wanted to run separate 3dICC_REML's for
> > two groups, how would I compare the results
> > from both groups (i.e., comparing ICC
> > correlation coefficients between groups)?
>
> What else are you looking for?
Well, we want to generate separate ICC maps for two tasks. We want to compare ICC values from both tasks to see if they are significantly different from each other.
>
> > 5) We have 3 sessions; a .75 value for session
> > indicates that 75% of the variability
> > is accounted for across 3 sessions, which would
> > seem like a good indicator of reliability.
> > If there were differences among the 3 sessions,
> > then the ICC value would be low, correct?
>
> Some misunderstanding here. The ICC value is a
> relative measure in the sense that it's relative
> to the other variables you incorporated in the
> model. So an ICC value of 0.75 for session means
> that, among all variables, session accounts for
> 75% of total variability while 25% comes from
> other sources (e.g., subjects). So if the
> differences among the 3 sessions were big, the ICC
> for session would be high.
Hmm, I'm still confused. Aren't we interested in reliability, though? If so, wouldn't that suggest we'd want low ICC values for session? I was under the impression that high ICC values indicate strong reliability/reproducibility, no?
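Just to pin down the arithmetic of the variance-partition reading described above: each factor's ICC is that factor's variance component divided by the total variance, so the components sum to 1. A sketch with invented variance components (not from any real data):

```python
# Hypothetical variance components (invented numbers for illustration)
var_subject = 6.0   # between-subject variance
var_session = 2.0   # between-session variance
var_error   = 2.0   # residual variance
total = var_subject + var_session + var_error

icc_subject = var_subject / total   # 0.6
icc_session = var_session / total   # 0.2
icc_error   = var_error / total     # 0.2

# In the classic test-retest sense, "reliability" is the subject ICC:
# it is high when subjects differ much more than sessions do.
print(icc_subject, icc_session)
```

So a high ICC for *subject* (not session) is what the usual test-retest reliability reading corresponds to, since it means session-to-session differences contribute little to the total variance.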