Hi Paul,
regarding your question #2.
Well, I will report the actual difference once I have the results. In the meantime, I'd like to hear your opinion on the matter.
It is important to note that I am using the Glasser parcellation (360 parcels, from the Nature paper). Each parcel is an ROI.
In general, one can expect a decrease in correlation values, especially for intra-parcel connectivity, when averaging correlations. However, I am worried about the non-homogeneity of the parcels (their size varies from roughly 30-40 to 400-500 voxels), which may lead to large differences between the two approaches (averaging then correlating vs. correlating then averaging).
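To make the two approaches concrete, here is a minimal sketch comparing them on synthetic data (random timecourses, not real fMRI; the parcel sizes of 40 and 450 voxels are just illustrative of the size imbalance):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "voxel" timecourses for two parcels of very different size
n_timepoints = 200
parcel_a = rng.standard_normal((n_timepoints, 40))   # small parcel (~40 voxels)
parcel_b = rng.standard_normal((n_timepoints, 450))  # large parcel (~450 voxels)

# Approach 1: average the timecourses within each parcel, then correlate
r_avg_then_corr = np.corrcoef(parcel_a.mean(axis=1),
                              parcel_b.mean(axis=1))[0, 1]

# Approach 2: correlate all voxel pairs across the two parcels, then average
voxel_corr = np.corrcoef(parcel_a.T, parcel_b.T)  # (40+450) x (40+450)
cross_block = voxel_corr[:40, 40:]                # inter-parcel voxel pairs
r_corr_then_avg = cross_block.mean()

print(r_avg_then_corr, r_corr_then_avg)
```

Note that Approach 2 scales with the product of the parcel sizes, which is where the computational cost concern comes from.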
My opinion is that the choice between the two approaches depends on how much the user trusts the actual parcellation (i.e., how well the average timecourse reflects the biological entity 'parcel').
By default, I would use the 'averaging then correlating' approach, because (i) I trust the parcellation, and (ii) its computational cost is acceptable.
What do you think about this dilemma?
Best,
Simone