History of AFNI updates  

bob cox
May 04, 2003 04:32PM
I have just modified 3dDeconvolve to be able to use multiple CPUs on a shared-memory SMP system. The only such system I have easy access to is my dual-Athlon Linux tower. On this system, a run that normally takes 226 sec took 119 sec with the new "-jobs 2" flag. Results were identical (which is good, of course).

I'd like to have some people with other types of SMP systems (SGI, more than 2-way Linux, Mac OS X?) test this modification. The source code is now in the AFNI distribution on this server. The only change needed in a script is to add the command-line flag "-jobs J", where J is the number of processes you want to use. Of course, you'll have to compile 3dDeconvolve (or the entire AFNI distribution) first.
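For example, a typical 3dDeconvolve command just gets the new flag tacked onto the end. The dataset, mask, and stimulus file names below are made up for illustration; only the "-jobs 2" part is new:

3dDeconvolve -input epi_run+orig      \
             -mask mask+orig          \
             -polort 2                \
             -num_stimts 1            \
             -stim_file 1 stim1.1D    \
             -stim_label 1 task       \
             -fout -tout              \
             -bucket hc_speedo_jobs2  \
             -jobs 2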

Inter-process communication is via shared memory. This means that on some systems, you might have to tell the OS to raise the maximum allowed size of a shared memory segment. If you run into this kind of trouble, let me know.
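On Linux, for example, the relevant limit is the kernel.shmmax parameter (the largest allowed shared memory segment, in bytes). Something along these lines should do it; the 128 MB value is just an illustration, and other operating systems have their own ways of setting this:

# see the current per-segment limit (in bytes)
cat /proc/sys/kernel/shmmax

# raise it (as root) to something bigger than the "Shared memory: ... bytes"
# figure that 3dDeconvolve reports
sysctl -w kernel.shmmax=134217728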

The program prints some progress information when it is using multiple jobs. A sample from my dual-Athlon run is below:

++ Shared memory: 6684672 bytes at id=10551297
++ Voxels in dataset: 98304
++ Voxels in mask: 19222
++ Voxels per job: 9611
++ Job #1: processing voxels 48081 to 98303
++ Job #0: processing voxels 0 to 48080
++ Job #1 finished.
++ Job #0 waiting for children to finish.
Writing `bucket' dataset into ./hc_speedo_jobs2+orig.HEAD
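If you try it out, a simple sanity check is to run the same analysis with and without "-jobs" and subtract the two bucket datasets; the difference should be zero everywhere. The prefix names here are just placeholders:

3dcalc -a serial_run+orig -b jobs2_run+orig -expr 'a-b' -prefix diff_check

# every sub-brick of diff_check+orig should be identically zero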


Any volunteers out there? -- Bob Cox

Subject / Author / Posted

Parallized 3dDeconvolve - testers needed -- bob cox, May 04, 2003 04:32PM
Re: Parallized 3dDeconvolve - testers needed -- Marci Flanery, May 04, 2003 07:04PM
Re: Parallized 3dDeconvolve - more info -- bob cox, May 05, 2003 02:28PM
Re: Parallized 3dDeconvolve - testers needed -- bob cox, May 06, 2003 10:56AM