AFNI Message Board


May 16, 2019 05:49PM
Many thanks for your comments! What follows is a detailed account of our situation. I don't know how to present things any more succinctly, so I apologize for the length and understand if you don't have the time or inclination to read it. At the same time, you might find the issue interesting. We're increasingly struck by how much more informative it is to assess the breadth of activation rather than its strength, and what we're asking about is how to appropriately assess differences in breadth. We think the solution to this problem could be of interest and use to other researchers (although it wouldn't be the first time I've been wrong about something like this!).

Here's the situation. This is now the third experiment in which we have observed what I'm about to describe. We have a task of interest (most recently, food choice) that is typically performed by 20-30 participants over large numbers of trials (so we have reasonable power). Critically, we use an active baseline that is well matched to the target task in the scanner, with respect to low-level visual processing of the stimuli, comparable cognitive processing, and a similar motor response. As a result, the activation map for the target task only contains voxels that are significantly more active than in the closely matched active baseline task. The results are much better controlled and more interpretable than if we had used a resting-state baseline, and we get a broad, relatively complete sense of all the brain areas important for the task.

In the experiment I'm asking about now, the critical task was a food choice task: participants saw a food image, decided whether they would want to eat it now, and then responded yes or no. We know a lot about the brain areas that this kind of task activates. The active baseline involved viewing scrambled object images, detecting whether a target circle fell on the left or right side of each image, and making a binary response to indicate which side. Significant clusters for the food choice task, relative to the active baseline, thus indicated areas that became more active for processing food choice than for processing which side a circle fell on in a scrambled image. We found significant activations above the active baseline all over the brain, in areas typically associated with processing food cues. Indeed, we find areas that other researchers typically haven't found (using signal-intensity analyses), along with much larger areas of activation.

In the current experiment, we further included several manipulations of interest, such as whether the pictured foods were tasty or healthy, and whether participants had been asked to adopt a normal viewing strategy (the control condition) or an "observe" strategy (i.e., a simple mindfulness strategy). Of interest was whether these various manipulations affected activations in ROIs associated with eating (e.g., the insula for taste, OFC for predicted reward, etc.).

Take the insula, for example, which is the primary taste area. Often, relevant areas in the insula become more active for tasty foods than for healthy foods. This is now a widely obtained result (that Kyle Simmons, Alex Martin, and I initially reported in 2005). In our recent experiment, though, with 20 participants, linear contrasts found no difference for tasty vs. healthy foods. When, however, we compared the number of voxels significantly active above baseline, there were large differences in the predicted direction, with tasty foods activating the insula more broadly than healthy foods.

In our three most recent experiments, we have repeatedly found what I just described: often we fail to observe differences in overall signal intensity in an ROI but observe large differences in the breadth of activation above a well-matched active baseline. The first attached slide illustrates this for two conjunction analyses, each contrasting activations in food-related ROIs for tasty vs. healthy foods. The left two columns show the results for the normal viewing (control) group; the right two columns show the results for the observe (mindfulness) group. As you can see, there are substantial differences in breadth of activation, with tasty foods activating the ROIs much more broadly than healthy foods. These differences were also much larger for the observe condition than for the normal viewing condition, and the observe condition activated the ROIs more broadly overall, across both food types, than the normal viewing condition. All of these effects were predicted: when we measure breadth of activation, the two manipulations produce substantial, predicted differences in how much they activate the ROIs.

Notably, many of these differences in breadth of activation for the ROIs don’t produce significant effects for linear contrasts of signal intensity. Often we fail to find clusters that differ in signal intensity, even when observing large differences in breadth of activation. The second attached slide illustrates what we have found from examining activations in these ROIs. Panel A at the top of this figure illustrates the most important case. The two conditions both activate an area significantly above baseline, each with considerable breadth of activation, but there are no differences in signal intensity between them, perhaps because of how the BOLD response gets squashed as it reaches asymptotic levels. Thus, no significant clusters emerge. What gets lost, however, is that both conditions have activated the ROI above baseline, with one condition activating it much more.
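The squashing idea in Panel A can be illustrated with a toy simulation (entirely made-up numbers, not our data, and a tanh saturation chosen purely for illustration): if the BOLD response saturates, a condition that recruits more voxels can show a clearly larger suprathreshold extent even though the mean intensity within the active voxels barely differs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ROI of 1000 voxels; condition A drives somewhat more underlying
# neural activity than condition B (hypothetical values).
n_vox = 1000
neural_a = np.clip(rng.normal(1.2, 1.0, n_vox), 0, None)
neural_b = np.clip(rng.normal(0.8, 1.0, n_vox), 0, None)

def bold(drive, ceiling=1.0):
    """Saturating BOLD response: compresses strong drive toward a ceiling."""
    return ceiling * np.tanh(drive)

thr = 0.5  # suprathreshold criterion in BOLD units
active_a = bold(neural_a) > thr
active_b = bold(neural_b) > thr

# Breadth (suprathreshold extent) differs clearly between conditions...
extent_a, extent_b = active_a.sum(), active_b.sum()

# ...but mean intensity within the active voxels is squashed toward the
# ceiling, so the intensity difference between conditions is small.
mean_a = bold(neural_a)[active_a].mean()
mean_b = bold(neural_b)[active_b].mean()
```

In this sketch the extent difference is large relative to its variability, while the intensity difference within active voxels is compressed toward the ceiling, which is the Panel A pattern.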

So, going back to my earlier message, what we're looking for is a way to establish the probability of obtaining the number of total voxels that each of two conditions activated above the active baseline in an ROI (or the number of unique voxels that the two conditions activated). In other words: if the two conditions didn't actually differ, what would be the probability of observing a given pair of voxel counts for them in the ROI?
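For what it's worth, one simple way to approximate that null-distribution question is a paired permutation (sign-flip) test on per-subject voxel counts: under the null that the conditions are exchangeable, each subject's pair of counts could equally well have been swapped. This is only a sketch under assumed inputs (the function name and the per-subject count arrays are hypothetical, not AFNI output):

```python
import numpy as np

def voxel_count_permutation_test(counts_a, counts_b, n_perm=10000, seed=0):
    """Paired permutation (sign-flip) test on suprathreshold voxel counts.

    counts_a, counts_b: one entry per subject, giving the number of ROI
    voxels active above the matched baseline in each condition. Sign-flipping
    the per-subject count differences builds the null distribution of the
    mean difference under exchangeability of the two conditions.
    """
    rng = np.random.default_rng(seed)
    diff = np.asarray(counts_a, float) - np.asarray(counts_b, float)
    observed = diff.mean()
    # Each row of `signs` is one random relabeling of the conditions.
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))
    null = (signs * diff).mean(axis=1)
    # Two-sided p-value, with +1 so p can never be exactly zero.
    p = (1 + np.sum(np.abs(null) >= abs(observed))) / (n_perm + 1)
    return observed, p

# Hypothetical per-subject counts (tasty vs. healthy in an insula ROI):
tasty   = [152, 98, 203, 175, 120, 88, 160, 140, 131, 99]
healthy = [ 90, 70, 150, 130, 100, 60, 120, 105,  95, 80]
obs, p = voxel_count_permutation_test(tasty, healthy)
```

A subtlety this sketch ignores: voxel counts in neighboring voxels are spatially correlated, so a subject-level resampling scheme like this (rather than treating voxels as independent observations) seems the safer framing, but the AFNI team would know better whether something like 3dttest++ with permutation options is the more principled route.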

Thanks again for your time, expertise, and patience :) and please let me know if you’d like any further information.

Larry
Attachments:
2019-05-16 afni BB - fig 1.png (226 KB)
2019-05-16 afni BB - fig 2.png (126.2 KB)
Thread: comparing cluster sizes within an ROI
- Larry Barsalou, May 16, 2019 07:53AM
- ptaylor, May 16, 2019 02:30PM
- gang, May 16, 2019 03:08PM
- Larry Barsalou, May 16, 2019 05:49PM (attachments)
- gang, May 16, 2019 10:40PM
- Larry Barsalou, May 17, 2019 08:19AM (attachments)
- gang, May 17, 2019 05:55PM
- Larry Barsalou, May 20, 2019 10:48AM