Hello Serge:
Perhaps I should elaborate. The partial R^2 value for each regressor is
more precisely referred to as the "coefficient of partial determination".
That is, the partial R^2 measures the proportional reduction in the error
sum of squares achieved by introducing that regressor AFTER all of the other
regressors have already been included in the model.
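In symbols: if SSE(reduced) is the error sum of squares without the regressor,
and SSE(full) is the error sum of squares with it, the partial R^2 is
(SSE(reduced) - SSE(full)) / SSE(reduced). A one-line Python sketch of that
definition (the function name is just illustrative, not part of AFNI):

```python
def partial_r2(sse_reduced, sse_full):
    """Proportional reduction in SSE from adding one regressor.

    sse_reduced: SSE of the model WITHOUT the regressor of interest
    sse_full:    SSE of the model WITH it (all other regressors in both)
    """
    return (sse_reduced - sse_full) / sse_reduced
```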
Using the "Body Fat" example from Applied Linear Statistical Models (Neter,
Wasserman, and Kutner, 2nd ed.), Table 8.3:
X1 X2 Y
19.5 43.1 11.9
24.7 49.8 22.8
30.7 51.9 18.7
29.8 54.3 20.1
19.1 42.2 12.9
25.6 53.9 21.7
31.4 58.5 27.1
27.9 52.1 25.4
22.1 49.9 21.3
25.5 53.5 19.3
31.1 56.6 25.4
30.4 56.7 27.2
18.7 46.5 11.7
19.7 44.2 17.8
14.6 42.7 12.8
29.5 54.4 23.9
27.7 55.3 22.6
30.2 58.6 25.4
22.7 48.2 14.8
25.2 51.0 21.1
Save the above (without the column headings) to file BodyFat.1D. Then execute
the following script:
3dDeconvolve -input1D BodyFat.1D'[2]' \
-polort 0 -num_stimts 2 \
-stim_file 1 BodyFat.1D'[0]' -stim_label 1 X1 \
-stim_file 2 BodyFat.1D'[1]' -stim_label 2 X2
Note that polort = 0 since this is NOT time series data. The screen output is
as follows:
------------------------------------------------------------------------------
Results for Voxel #0:
Baseline:
t^0 coef = -19.1742 t^0 t-st = -2.2934 p-value = 3.4843e-02
Stimulus: X1
h[ 0] coef = 0.2224 h[ 0] t-st = 0.7328 p-value = 4.7368e-01
R^2 = 0.0306 F[ 1, 17] = 0.5370 p-value = 4.7368e-01
Stimulus: X2
h[ 0] coef = 0.6594 h[ 0] t-st = 2.2646 p-value = 3.6899e-02
R^2 = 0.2318 F[ 1, 17] = 5.1284 p-value = 3.6899e-02
Full Model:
MSE = 6.4677
R^2 = 0.7781 F[ 2, 17] = 29.7972 p-value = 2.7742e-06
------------------------------------------------------------------------------
From the above, we see that the partial R^2 for X2 is 0.2318.
We can calculate this directly from the definition. First, note that the
error sum of squares when both variables are included in the model is given by:
SSE(X1,X2) = MSE * dfF
= 6.4677 * 17 = 109.951
What happens if only variable X1 is included in the model?
3dDeconvolve -input1D BodyFat.1D'[2]' \
-polort 0 -num_stimts 1 \
-stim_file 1 BodyFat.1D'[0]' -stim_label 1 X1
------------------------------------------------------------------------------
Results for Voxel #0:
Baseline:
t^0 coef = -1.4961 t^0 t-st = -0.4507 p-value = 6.5756e-01
Stimulus: X1
h[ 0] coef = 0.8572 h[ 0] t-st = 6.6562 p-value = 3.0243e-06
R^2 = 0.7111 F[ 1, 18] = 44.3046 p-value = 3.0243e-06
Full Model:
MSE = 7.9511
R^2 = 0.7111 F[ 1, 18] = 44.3046 p-value = 3.0243e-06
------------------------------------------------------------------------------
So, when only variable X1 is present, the error sum of squares is:
SSE(X1) = MSE * dfF
= 7.9511 * 18 = 143.12
Therefore, the proportional reduction in the SSE obtained by adding X2 to the
model, given that X1 is already in the model, is:
R^2 (X2|X1) = (SSE(X1) - SSE(X1,X2)) / SSE(X1)
= (143.12 - 109.951) / 143.12
= 0.231757
Therefore, adding variable X2, given that variable X1 is already in the model,
reduces the error sum of squares by 23.2%.
A similar calculation yields R^2 (X1|X2) = 0.0306.
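As a cross-check (not part of the AFNI output), the same numbers can be
reproduced with ordinary least squares in Python using only numpy; the three
arrays below are copied from Table 8.3:

```python
import numpy as np

# Body Fat data, Table 8.3 of Neter, Wasserman, and Kutner (2nd ed.)
x1 = np.array([19.5, 24.7, 30.7, 29.8, 19.1, 25.6, 31.4, 27.9, 22.1, 25.5,
               31.1, 30.4, 18.7, 19.7, 14.6, 29.5, 27.7, 30.2, 22.7, 25.2])
x2 = np.array([43.1, 49.8, 51.9, 54.3, 42.2, 53.9, 58.5, 52.1, 49.9, 53.5,
               56.6, 56.7, 46.5, 44.2, 42.7, 54.4, 55.3, 58.6, 48.2, 51.0])
y  = np.array([11.9, 22.8, 18.7, 20.1, 12.9, 21.7, 27.1, 25.4, 21.3, 19.3,
               25.4, 27.2, 11.7, 17.8, 12.8, 23.9, 22.6, 25.4, 14.8, 21.1])

def sse(y, *regressors):
    """Error sum of squares for an OLS fit with an intercept."""
    X = np.column_stack([np.ones_like(y)] + list(regressors))
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return resid @ resid

sse_full = sse(y, x1, x2)                # SSE(X1,X2), ~109.95 above
sse_x1   = sse(y, x1)                    # SSE(X1),    ~143.12 above
sse_x2   = sse(y, x2)                    # SSE(X2)

r2_x2_given_x1 = (sse_x1 - sse_full) / sse_x1   # ~0.2318
r2_x1_given_x2 = (sse_x2 - sse_full) / sse_x2   # ~0.0306
r2_full = 1.0 - sse_full / np.sum((y - y.mean())**2)  # ~0.7781

print(r2_x2_given_x1, r2_x1_given_x2, r2_full)
```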
Note that the R^2 for the full model (0.7781) is much greater than either of
the partial R^2 values. This is because X1 and X2 are correlated with each
other: each regressor, entered last, can only account for the error variation
not already explained by the other.
Doug Ward