AFNI Message Board


July 06, 2009 08:43PM
Hi Yisheng,

It would probably be more polite if I were to quickly try to prove the
point to some degree, rather than just stating it. So...

---

Given a data vector Y and N linearly independent regressors R1, ..., RN
(all of the same length, at least N), there is a unique least squares
solution vector A to Y = RX, i.e. Y = RA + E (1), or:

Y = a1*R1 + a2*R2 + ... + aN*RN + E

where E is minimal (as a sum of squared elements). This is presumably
known to us all (stating which almost assures that I mis-stated or
forgot some aspect, such is life).
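As a quick numerical sketch of statement (1) (not part of the original
post; this assumes NumPy, with random data standing in for real
regressors), the least squares solution and its minimal residual can be
computed directly:

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 50, 4                      # M time points, N regressors (M >= N)
R = rng.standard_normal((M, N))   # columns R1..RN, linearly independent
Y = rng.standard_normal(M)        # data vector

# unique least squares solution A to Y = R @ A + E
A, *_ = np.linalg.lstsq(R, Y, rcond=None)
E = Y - R @ A                     # residual, minimal in sum of squares

# at the least squares solution, the residual is orthogonal to every
# regressor (the normal equations)
print(np.allclose(R.T @ E, 0))    # True
```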

---

Now suppose that you modify regressor Ri by adding some multiple m of
Rj to it. Then the only effect on the solution will be a corresponding
change in the beta weight for regressor Rj; the beta weight for the
modified Ri stays at ai.

---

Proof:

First, rearrange the above equation into equivalent ones, by adding
and subtracting ai*m*Rj on the right side:

original:
Y = a1*R1 + ... + ai*Ri + ... + aj*Rj + ... + aN*RN + E

add and subtract ai*m*Rj:
Y = a1*R1 + ... + ai*Ri + (ai*m*Rj - ai*m*Rj) + ... + aj*Rj + ... + aN*RN + E

regroup:
Y = a1*R1 + ... + ai*(Ri+m*Rj) + ... + (aj-m*ai)*Rj + ... + aN*RN + E


So the ith term is the original beta ai times the modified regressor
Ri+m*Rj, and regressor Rj has a new beta weight (aj - m*ai). This is
equivalent to the original equation, and is still formed as a sum of
the regressors. If these newly grouped regressors are considered to be
a regression matrix S, then the columns of S span the same space as the
columns of R, so the minimal achievable residual is the same, and the
new coefficients B form a least squares solution to Y = SX, where
Y = SB + E (2).

Lastly, note that the solutions are unique (because the vectors in R,
and hence in S, are linearly independent). If there were a different
solution to (2), then reversing the regrouping above would yield a
different solution to (1), a contradiction. So B is the unique
solution to (2).
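The proof can be checked numerically (a sketch not from the original
post; it assumes NumPy and uses arbitrary random regressors, with i, j,
and m chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 50, 4
R = rng.standard_normal((M, N))
Y = rng.standard_normal(M)
A, *_ = np.linalg.lstsq(R, Y, rcond=None)

i, j, m = 1, 2, 0.7               # modify Ri by adding m*Rj to it
S = R.copy()
S[:, i] += m * S[:, j]

B, *_ = np.linalg.lstsq(S, Y, rcond=None)

# only the jth beta changes, from aj to aj - m*ai; all others match
expected = A.copy()
expected[j] = A[j] - m * A[i]
print(np.allclose(B, expected))   # True
```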

---

I guess it can be called a corollary that the same argument applies to
adding multiples of one subset of the vectors in R to another subset,
such as your adding multiples of the constant and maybe linear polort
regressors to the 6 motion regressors.
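That corollary is the case of interest here: demeaning/detrending the
motion regressors amounts to subtracting multiples of the polort
regressors from them, so the motion betas should not change. A toy
illustration (not from the original post; assumes NumPy, with random
stand-ins for the motion parameters and data):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 60
t = np.arange(M)

# polort regressors: constant and (centered) linear trend
polort = np.column_stack([np.ones(M), t - t.mean()])
motion = rng.standard_normal((M, 6))   # 6 motion regressors (toy data)
R = np.column_stack([polort, motion])
Y = rng.standard_normal(M)

A, *_ = np.linalg.lstsq(R, Y, rcond=None)

# demean and detrend each motion regressor by subtracting its
# projection onto the polort columns (i.e. adding multiples of the
# polort regressors)
P = polort @ np.linalg.lstsq(polort, motion, rcond=None)[0]
S = np.column_stack([polort, motion - P])

B, *_ = np.linalg.lstsq(S, Y, rcond=None)

# the 6 motion betas are unchanged; only the polort betas differ
print(np.allclose(A[2:], B[2:]))       # True
```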

---

More babbling than anticipated, but I have to wait for pictures to be
copied off my camera anyway... oops that's done. There are more
exciting things to do now. :)

- rick

Thread: De-mean/trend motion parameters for regression?

  Yisheng Xu       July 04, 2009 09:49AM
  bob cox          July 04, 2009 12:22PM
  rick reynolds    July 05, 2009 09:30PM
  Yisheng Xu       July 06, 2009 10:17AM
  rick reynolds    July 06, 2009 10:47AM
  Yisheng Xu       July 06, 2009 05:43PM
  rick reynolds    July 06, 2009 07:16PM
  rick reynolds    July 06, 2009 08:43PM