AFNI Message Board

Re: Best Way to Calculate PSC Given Other Constraints
Mahen_N, December 04, 2012 11:39PM

Hi Rick,

Thanks for the reply. Regarding the necessity of the PSC - yes, I'd like to have it for my subsequent analyses. What I'm struggling to wrap my head around is what I actually want for those analyses: some sort of "scaled" and "clean" signal. The appeal of the second idea was the logic of (1) extract the task-related signal change only, then (2) work out the PSC per condition as a percentage of its OWN mean signal change. Now that I've written it out, it seems a little more dodgy.
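
For comparison, the conventional route I'd be deviating from is to scale each run to its own voxelwise mean before the regression, so that the betas come out directly in percent-of-mean units. A rough sketch (dataset names like epi_r1 are just placeholders, not my actual files):

  # scale one run to percent of its own voxelwise mean (names are placeholders)
  3dTstat -prefix mean_r1 -mean epi_r1+orig

  # each time point as a percentage of the run mean, clipped at 200%
  3dcalc -a epi_r1+orig -b mean_r1+orig \
         -expr 'min(200, a/b*100)*step(a)*step(b)' \
         -prefix scaled_r1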

Further, I tried subtracting the baseline and then computing the mean of what was left. The means weren't 0, but they did vary a lot, and of course, once I divided the subtracted values by the mean of those subtracted values, I got very odd numbers. I think it's just flawed logic on my part?
As for the decision to leave scaling until later, it was mainly about not wanting to use the raw mean, instead opting for something more "accurate" (given concerns about trend).
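
One thing I can check concretely: if the baseline I subtracted included the Pol#0 (constant) term, then the mean of what's left should sit at or near zero, which would explain why dividing by it produces such odd numbers. A quick way to look at that (names are placeholders):

  # mean over time of the baseline-subtracted run ("clean_r1" is a placeholder)
  3dTstat -prefix clean_mean_r1 -mean clean_r1+orig

  # average that mean image over the whole volume; it should be close to 0
  3dmaskave -quiet clean_mean_r1+orig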

To that end, I have another question: I want to know how well the model (especially the baseline aspects) fits the data. In an earlier version of some of my later analyses, I calculated a PSC for time points of interest by dividing by the mean of the first 2 time points of each trial. I thought this was redundant and removed it, but if the baseline I now divide by for PSC is in some sense inaccurate, it may be worthwhile to go back to that earlier method of establishing a mean per trial.
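
If I do go back to the per-trial baseline, I picture something like the rough sketch below, where the trial is assumed to start at sub-brick 40 and its baseline is the mean of the first 2 time points (all names and indices are placeholders):

  # per-trial baseline = mean of the trial's first 2 time points
  3dTstat -prefix trial_base -mean 'epi_r1+orig[40..41]'

  # percent change of the trial's time points relative to that baseline
  3dcalc -a 'epi_r1+orig[40..49]' -b trial_base+orig \
         -expr '100*(a-b)/b' -prefix trial_psc
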
I used a polort value of 4 with "-bout" in 3dDeconvolve, and got a whole lot of extra t-statistics, one for each polort term in each run. How do I interpret them? Using run#1pol#0_coef and its t-stat (linear trend?), most of the voxels (basically all) are highly significant. Am I to assume this means that trend is more or less fully accounted for in the model? Fewer voxels are highly significant using run#1pol#1, etc. I know I can also use the "-cbucket" output in the grapher to check the fit. Are there any other methods? I suppose my question here is: how few significant voxels would I have to see for a given pol# to justify dropping my polort value?
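
For reference, the baseline-related pieces of what I ran look roughly like the sketch below (the stimulus specification, file names, and prefixes are placeholders rather than my actual script). My understanding is that Pol#0 is the per-run constant (mean) term and Pol#1 is the linear trend, so Pol#0 being significant almost everywhere is probably expected on unscaled data. If I understand the tools correctly, 3dSynthesize can also rebuild just the baseline part of the fit from the -cbucket output and the design matrix, for overlaying in the grapher:

  # polynomial baseline of order 4 per run, with baseline betas and t-stats
  # (-bout -tout) and all regression coefficients saved via -cbucket
  3dDeconvolve -input epi_r1+orig epi_r2+orig              \
               -polort 4 -bout -tout                       \
               -num_stimts 1                               \
               -stim_times 1 stim_times.1D 'BLOCK(20,1)'   \
               -stim_label 1 task                          \
               -x1D X.xmat.1D                              \
               -cbucket all_betas                          \
               -fitts fitts -errts errts                   \
               -bucket stats

  # plot the design matrix columns to see what each Run#/Pol# regressor is
  1dplot -sepscl X.xmat.1D

  # rebuild only the baseline portion of the fit, to overlay on the data
  3dSynthesize -cbucket all_betas+orig -matrix X.xmat.1D \
               -select baseline -prefix baseline_fit
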
My data was quite messy, and since I am unable to re-collect data, I am trying to get the best AFNI-processed results before running ROC/SVM/MVPA type analyses.

Thanks a bunch,
Mahen
Subject | Author | Posted
Best Way to Calculate PSC Given Other Constraints | Mahen_N | December 03, 2012 11:40PM
Re: Best Way to Calculate PSC Given Other Constraints | rick reynolds | December 04, 2012 09:58AM
Re: Best Way to Calculate PSC Given Other Constraints | Mahen_N | December 04, 2012 11:39PM
Re: Best Way to Calculate PSC Given Other Constraints | rick reynolds | December 05, 2012 02:01PM