Hi,
thanks again for your patience and thorough support! In the end it turned out there was a typo in the cat_matvec command. It works now and the transformed atlas ROI looks beautiful in native space :)
"And I ran:
align_epi_anat.py \
-dset1 short_epi+orig. \
-dset2 FT_anat+orig. \
-dset1to2 \
-partial_axial \
-dset1_strip None \
-dset2_strip None \
-edge \
-cost lpa
... getting "short_epi_al_mat.aff12.1D" (-> mat_EPI_to_anat). Note that in this step, the whole EPI time series was selected---is that really what you want? Normally we use 3dvolreg for across-EPI motion correction, and just select a single volume from the EPI time series to align to the anatomical. (It will take a much longer time to have the whole EPI aligned in the way currently coded here; in your case, it sounds like you should already have a processed/motion-corrected EPI time series, anyways?) "
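For reference, my understanding of the single-volume workflow described in that comment would be something like the following sketch (dataset names here are placeholders, not my actual files):

```shell
# 1) motion-correct the EPI time series with 3dvolreg
3dvolreg -prefix epi_vr -base 0 epi+orig

# 2) extract a single volume (sub-brick [0]) from the corrected series
3dTcat -prefix epi_one_vol epi_vr+orig'[0]'

# 3) align only that one volume to the anatomical
align_epi_anat.py          \
    -dset1 epi_one_vol+orig \
    -dset2 FT_anat+orig     \
    -dset1to2               \
    -dset1_strip None       \
    -dset2_strip None       \
    -cost lpa
```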
Regarding your comment above, I do not really have an EPI time series. I "only" have one 0.4^3mm native image (covering a part of the brainstem) that looks very similar to an EPI image. I used align_epi_anat.py because it seems to be a very flexible tool that can be used for many non-T1 images.
"Note that I don't think I would use @SUMA_AlignToExperiment in this way, if I were wanting to bring a volumetric ROI dataset from one space to another. It should be OK, but I would probably use 3dAllineate if I only wanted affine, and (as noted above) a nonlinear program for closer alignment."
Do I understand it correctly that 3dAllineate is a more parsimonious approach compared to @SUMA_AlignToExperiment? Would it look like this:
3dAllineate \
-prefix anatT1_to_MNI_brain \
-base MNI \
-source anat \
-twopass \
-cost lpa \
-1Dmatrix_save anatT1_to_MNI.aff12.1D \
-autoweight \
-fineblur 3 \
-cmass \
-twobest MAX \
-source_automask
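And then, if I understand correctly, the saved matrix could be inverted with cat_matvec (as in the step that just worked for me) and applied to the atlas ROI with nearest-neighbor interpolation. A sketch with placeholder dataset names:

```shell
# invert the anat->MNI affine to get MNI->anat
cat_matvec anatT1_to_MNI.aff12.1D -I > MNI_to_anat.aff12.1D

# apply it to the ROI; NN interpolation keeps the integer labels intact
3dAllineate \
    -1Dmatrix_apply MNI_to_anat.aff12.1D \
    -source atlas_ROI+tlrc \
    -master anat+orig \
    -final NN \
    -prefix ROI_in_native
```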
Finally, out of curiosity, in what situation would it have been appropriate to use 3dNwarpApply? It seems it was not necessary at any step after all.
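My guess from the documentation is that 3dNwarpApply would come in once a nonlinear warp has been computed (e.g. with 3dQwarp or @SSwarper), to carry a dataset such as an ROI through that warp. A hypothetical sketch, with placeholder names:

```shell
# apply a nonlinear warp (e.g. from 3dQwarp) to an ROI;
# -ainterp NN preserves the ROI's integer labels
3dNwarpApply \
    -nwarp anat_WARP+tlrc \
    -source atlas_ROI+tlrc \
    -master anat+orig \
    -ainterp NN \
    -prefix ROI_in_native_nl
```

Is that roughly the intended use case?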
Thanks again and best,
Philipp