Program: 3dsvm
+++++++++++ 3dsvm: support vector machine analysis of brain data +++++++++++
3dsvm - temporally predictive modeling with the support vector machine
This program provides the ability to perform support vector machine (SVM) learning on AFNI datasets using the SVM-light package (version 5) developed by Thorsten Joachims (http://svmlight.joachims.org/).
3dsvm [options]
Training: basic example

   3dsvm -trainvol run1+orig \
         -trainlabels run1_categories.1D \
         -mask mask+orig \
         -model model_run1

Training: obtain model alphas (-alpha) and model weights (-bucket)

   3dsvm -alpha a_run1 \
         -trainvol run1+orig -trainlabels run1_categories.1D \
         -mask mask+orig -model model_run1 \
         -bucket run1_fim
Training: exclude some time points using a censor file

   3dsvm -alpha a_run1 \
         -trainvol run1+orig -trainlabels run1_categories.1D \
         -censor censor.1D -mask mask+orig -model model_run1 -bucket run1_fim
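A censor file is a single-column .1D text file with one row per time point. The sketch below generates one in Python, assuming the usual AFNI censor convention (1 = use this sample, 0 = exclude it); the time-point indices chosen are purely hypothetical.

```python
# Sketch: write a censor .1D file for 3dsvm.
# Assumed convention: one row per time point, 1 = keep, 0 = censor.
n_timepoints = 10
exclude = {0, 1, 9}          # hypothetical: drop first two and last volumes

rows = ["0" if t in exclude else "1" for t in range(n_timepoints)]
with open("censor.1D", "w") as f:
    f.write("\n".join(rows) + "\n")

print(rows)
```

The number of rows must match the number of time points in the training dataset.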
Training: control SVM model complexity (C value)

   3dsvm -c 100.0 \
         -alpha a_run1 -trainvol run1+orig -trainlabels run1_categories.1D \
         -censor censor.1D -mask mask+orig -model model_run1 -bucket run1_fim
Training: using a kernel

   3dsvm -c 100.0 \
         -kernel polynomial -d 2 \
         -alpha a_run1 -trainvol run1+orig -trainlabels run1_categories.1D \
         -censor censor.1D -mask mask+orig -model model_run1
Training: using regression

   3dsvm -type regression \
         -c 100.0 -e 0.001 \
         -alpha a_run1 -trainvol run1+orig -trainlabels run1_categories.1D \
         -censor censor.1D -mask mask+orig -model model_run1
Testing: basic example

   3dsvm -testvol run2+orig \
         -model model_run1+orig \
         -predictions pred2_model1
Testing: compare predictions with 'truth'

   3dsvm -testvol run2+orig \
         -model model_run1+orig \
         -testlabels run2_categories.1D -predictions pred2_model1
Testing: use -classout to output integer thresholded class predictions
(rather than continuous valued output)

   3dsvm -classout \
         -testvol run2+orig -model model_run1+orig \
         -testlabels run2_categories.1D -predictions pred2_model1
------------------- TRAINING OPTIONS -------------------------------------

-type tname        Specify tname:

                       classification [default]
                       regression

                   to select between classification or regression.
-trainvol trnname A 3D+t AFNI brik dataset to be used for training.
-mask mname        A byte-format brik file used to mask voxels in the
                   analysis. For example, a mask of the whole brain can
                   be generated with 3dAutomask, or more specific ROIs
                   could be generated with the Draw Dataset plugin or
                   converted from a thresholded functional dataset. The
                   mask is specified during training but is also
                   considered part of the model output and is
                   automatically applied to test data.

                   ++ If '-mask' is not used, '-nomodelmask' must be
                      specified.

-nomodelmask       Flag to enable the omission of a mask file. This is
                   required if '-mask' is not used.
-trainlabels lname lname = filename of class category .1D labels
                   corresponding to the stimulus paradigm for the
                   training data set. The number of labels in the
                   selected file must be equal to the number of time
                   points in the training dataset. The labels must be
                   arranged in a single column, and they can be any of
                   the following values:

                       0    - class 0
                       1    - class 1
                       n    - class n (where n is a positive integer)
                       9999 - censor this point

                   See also -censor.
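The label-file rules above (single column, one label per time point, 9999 to censor) can be sketched as a short generator in Python; the particular paradigm below is hypothetical.

```python
# Sketch: write a -trainlabels .1D file.  One non-negative integer class
# label per time point, single column; 9999 censors that time point.
labels = [9999, 0, 0, 0, 1, 1, 1, 0, 0, 9999]   # hypothetical paradigm

with open("run1_categories.1D", "w") as f:
    for lab in labels:
        f.write(f"{lab}\n")

# The file must contain exactly as many rows as the training dataset
# has time points.
print(len(labels))
```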
-kernel kfunc      kfunc = string specifying type of kernel function:

                       linear     : <u,v> [default]
                       polynomial : (s<u,v> + r)^d
                       rbf        : radial basis function
                                    exp(-gamma ||u-v||^2)
                       sigmoid    : tanh(s<u,v> + r)
                   note: kernel parameters use SVM-light syntax:

                       -d int   : d parameter in polynomial kernel
                                  3 [default]
                       -g float : gamma parameter in rbf kernel
                                  1.0 [default]
                       -s float : s parameter in sigmoid/poly kernel
                                  1.0 [default]
                       -r float : r parameter in sigmoid/poly kernel
                                  1.0 [default]
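The kernel formulas listed above translate directly into code. This is a minimal sketch using the help text's own parameter names (s, r, d, gamma), with u and v as plain feature vectors; it is an illustration of the formulas, not of 3dsvm's internals.

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def linear(u, v):
    return dot(u, v)                                  # <u,v>

def polynomial(u, v, s=1.0, r=1.0, d=3):
    return (s * dot(u, v) + r) ** d                   # (s<u,v> + r)^d

def rbf(u, v, gamma=1.0):
    sq = sum((ui - vi) ** 2 for ui, vi in zip(u, v))  # ||u-v||^2
    return math.exp(-gamma * sq)

def sigmoid(u, v, s=1.0, r=1.0):
    return math.tanh(s * dot(u, v) + r)               # tanh(s<u,v> + r)

u, v = [1.0, 0.0], [1.0, 1.0]
print(linear(u, v))        # 1.0
print(polynomial(u, v))    # (1*1 + 1)^3 = 8.0
```

Note the defaults above mirror the ones in the option table (d = 3, gamma = s = r = 1.0).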
-alpha aname Write the alphas to aname.1D
------------------- TRAINING AND TESTING MUST SPECIFY MODNAME ------------

-model modname     modname = basename for the model brik.
                   Training: modname is the basename for the output
                   brik containing the SVM model

                       3dsvm -trainvol run1+orig \
                             -trainlabels run1_categories.1D \
                             -mask mask+orig -model model_run1

                   Testing: modname is the name for the input brik
                   containing the SVM model.

                       3dsvm -testvol run2+orig \
                             -model model_run1+orig \
                             -predictions pred2_model1
------------------- TESTING OPTIONS --------------------------------------

-testvol tstname   A 3D or 3D+t AFNI brik dataset to be used for testing.
                   A major assumption is that the training and testing
                   volumes are spatially aligned and have the same
                   number and size of voxels.

-classout          Flag to specify that pname files should be integer-
                   valued, corresponding to class category decisions.
-multiclass mctype Specify the multiclass algorithm for
                   classification. Current implementations use 1-vs-1
                   two-class SVM models.

                   mctype must be one of the following:

                       DAG  : Directed Acyclic Graph [default]
                       vote : Max Wins from votes of all 1-vs-1 models

                   see http://lacontelab.org/3dsvm.html for details
                   and references.
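The 'vote' (Max Wins) scheme described above can be sketched in a few lines: every pair of classes gets its own 1-vs-1 model, each model casts one vote, and the class with the most votes wins. Here `decide` stands in for a trained 1-vs-1 SVM's decision, and the toy decisions are hypothetical.

```python
from itertools import combinations

def max_wins(classes, decide):
    # decide(a, b) returns the winning class of the (a, b) 1-vs-1 model.
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[decide(a, b)] += 1
    # Class with the most pairwise wins is the prediction.
    return max(classes, key=lambda c: votes[c])

# Hypothetical decisions for 3 classes: (0,1) -> 1, (0,2) -> 2,
# (1,2) -> 2, so class 2 wins with two votes.
toy = {(0, 1): 1, (0, 2): 2, (1, 2): 2}
print(max_wins([0, 1, 2], lambda a, b: toy[(a, b)]))   # 2
```

The DAG variant evaluates the same 1-vs-1 models but eliminates one class per comparison along a fixed graph instead of tallying all votes.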
------------------- INFORMATION OPTIONS ----------------------------------

-help              this help
-------------------- SVM-light learn help --------------------------------

SVM-light V5.00: Support Vector Machine, learning module, 30.06.02
Copyright: Thorsten Joachims, thorsten@ls8.cs.uni-dortmund.de
This software is available for non-commercial use only. It must not be modified and distributed without prior permission of the author. The author is not responsible for implications from the use of this software.
usage: svm_learn [options] example_file model_file
General options:
         -v [0..3]   -> verbosity level (default 1)
Learning options:
         -z {c,r,p}  -> select between classification (c), regression (r),
                        and preference ranking (p) (default classification)
         -c float    -> C: trade-off between training error and margin
                        (default [avg. x*x]^-1)
         -w [0..]    -> epsilon width of tube for regression
                        (default 0.1)
         -j float    -> Cost: cost-factor, by which training errors on
                        positive examples outweigh errors on negative
                        examples (default 1) (see [4])
         -b [0,1]    -> use biased hyperplane (i.e. x*w+b>0) instead
                        of unbiased hyperplane (i.e. x*w>0) (default 1)
         -i [0,1]    -> remove inconsistent training examples
                        and retrain (default 0)
Performance estimation options:
         -x [0,1]    -> compute leave-one-out estimates (default 0)
                        (see [5])
         -o ]0..2]   -> value of rho for XiAlpha-estimator and for pruning
                        leave-one-out computation (default 1.0) (see [2])
         -k [0..100] -> search depth for extended XiAlpha-estimator
                        (default 0)
Transduction options (see [3]):
         -p [0..1]   -> fraction of unlabeled examples to be classified
                        into the positive class (default is the ratio of
                        positive and negative examples in the training data)
Kernel options:
         -t int      -> type of kernel function:
                        0: linear (default)
                        1: polynomial (s a*b+c)^d
                        2: radial basis function exp(-gamma ||a-b||^2)
                        3: sigmoid tanh(s a*b + c)
                        4: user defined kernel from kernel.h
         -d int      -> parameter d in polynomial kernel
         -g float    -> parameter gamma in rbf kernel
         -s float    -> parameter s in sigmoid/poly kernel
         -r float    -> parameter c in sigmoid/poly kernel
         -u string   -> parameter of user defined kernel
Optimization options (see [1]):
         -q [2..]    -> maximum size of QP-subproblems (default 10)
         -n [2..q]   -> number of new variables entering the working set
                        in each iteration (default n = q). Set n<q to
                        prevent zig-zagging.
         -m [5..]    -> size of cache for kernel evaluations in MB
                        (default 40) The larger the faster...
         -e float    -> eps: Allow that error for termination criterion
                        [y [w*x+b] - 1] >= eps (default 0.001)
         -h [5..]    -> number of iterations a variable needs to be
                        optimal before considered for shrinking (default 100)
         -f [0,1]    -> do final optimality check for variables removed
                        by shrinking. Although this test is usually
                        positive, there is no guarantee that the optimum
                        was found if the test is omitted. (default 1)
Output options:
         -l string   -> file to write predicted labels of unlabeled
                        examples into after transductive learning
         -a string   -> write all alphas to this file after learning
                        (in the same order as in the training set)
More details in:
[1] T. Joachims, Making Large-Scale SVM Learning Practical. Advances in
    Kernel Methods - Support Vector Learning, B. Schoelkopf and C. Burges
    and A. Smola (ed.), MIT Press, 1999.
-------------------- SVM-light classify help -----------------------------
SVM-light V5.00: Support Vector Machine, classification module 30.06.02
Copyright: Thorsten Joachims, thorsten@ls8.cs.uni-dortmund.de
This software is available for non-commercial use only. It must not be modified and distributed without prior permission of the author. The author is not responsible for implications from the use of this software.
usage: svm_classify [options] example_file model_file output_file
         -v [0..3]   -> verbosity level (default 2)
         -f [0,1]    -> 0: old output format of V1.0
                        1: output the value of decision function (default)
Jeff W. Prescott, William A. Curtis, Ziad Saad, Rick Reynolds, R. Cameron Craddock, Jonathan M. Lisinski, and Stephen M. LaConte
Original version written by JP and SL, August 2006
Released to general public, July 2007
Questions/Comments/Bugs - email slaconte@vtc.vt.edu
Reference:
   LaConte, S., Strother, S., Cherkassky, V. and Hu, X. 2005. Support vector
   machines for temporal classification of block design fMRI data.
   NeuroImage, 26, 317-329.
Specific to real-time fMRI:
   S. M. LaConte. (2011). Decoding fMRI brain states in real-time.
   NeuroImage, 56:440-54.

   S. M. LaConte, S. J. Peltier, and X. P. Hu. (2007). Real-time fMRI using
   brain-state classification. Hum Brain Mapp, 28:1033-1044.
Please also consider referencing:
   T. Joachims, Making Large-Scale SVM Learning Practical. Advances in
   Kernel Methods - Support Vector Learning, B. Schoelkopf and C. Burges
   and A. Smola (ed.), MIT Press, 1999.