nick Wrote:
-------------------------------------------------------
> If you combine cross-validation with other
> processing steps such as feature selection, please
> consider that it's important to make sure that you
> are not mixing up data used both for training and
> testing. For example, you might bias your results
> if you select informative features on the whole
> dataset and then run cross-validation. The correct
> approach is to run feature selection on the
> training set in each cross-validation fold. See
> [
miplab.unige.ch]
> tti_brain_decoding_biases-ERRATA.pdf for details.
I don't really understand why we shouldn't run the feature selection on the whole dataset. How could the results be biased? Isn't it also possible to run the feature selection on the testing set?
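(For anyone following along: here is a minimal sketch of the difference between the two approaches described above. The thread doesn't mention any particular library, so the use of scikit-learn here is my own assumption, purely for illustration.)

```python
# Illustration (not from the original post) of why selecting features
# on the whole dataset before cross-validation biases the results.
# The data are pure noise, so true decoding accuracy is chance (0.5).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 1000))   # 50 samples, 1000 noise features
y = rng.integers(0, 2, size=50)       # random class labels

# Biased: feature selection has already seen the test folds,
# so spuriously "informative" features leak into every fold.
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
biased = cross_val_score(LogisticRegression(), X_sel, y, cv=5).mean()

# Unbiased: the Pipeline refits the selection step on the training
# portion of each fold, so the test fold stays unseen.
pipe = make_pipeline(SelectKBest(f_classif, k=10), LogisticRegression())
unbiased = cross_val_score(pipe, X, y, cv=5).mean()

print(biased, unbiased)  # biased accuracy lands well above chance on noise
```

On noise data the first (incorrect) approach typically reports accuracy far above chance, while the fold-wise pipeline stays near 0.5; that inflation is exactly the bias nick is warning about.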
>
> Other than that I would suggest to start simple,
> maybe with a region of interest that you expect to
> be informative, or if you have no idea which area
> is informative, consider using a searchlight
> (information mapping).
>
> Hope this helps.
You suggested starting simple, either with an ROI or with a searchlight. Where can I learn more about this "information mapping"?