Hello!
The X11 install on the course machine was incomplete: one file, used by one of the thread classes, needed Print.h, which doesn't exist on the course machine.
I work at the San Diego Supercomputer Center (SDSC) and got an OK from my boss to use the SDSC TeraGrid supercomputer for this project. 'make totality' appears to work, but I ran out of disk quota while installing :] I was using my work account, which has space used up by other things. I noticed that 3dDeconvolve compiled, so I cleaned up and I'm now building 3dDeconvolve alone.
My partner and I are getting user accounts, hopefully with enough quota space to make totality. SDSC's TeraGrid cluster currently consists of 256 IBM cluster nodes, each with dual 1.5 GHz Intel® Itanium® 2 processors, for a peak performance of 3.1 teraflops. Each node has four gigabytes (GB) of physical memory. The cluster runs SuSE Linux and uses Myricom's Myrinet cluster interconnect network.
For MPI-zing, we've read up on how it's done with shared memory. We were planning on having each rank store its results locally and then using a gather to collect everything at the end. If you know of anything else we should look out for, we would love to hear it.
The dataset we are currently planning on using is something I downloaded from one of the build guides I found, AFNI_data1.tgz. I'm pretty sure the processors are 64-bit, and the scratch space on the SDSC TG machine has enough room for a 2 GB file. If you have a larger dataset we can work with, we would love to use it :)
Thanks for all the help!!! I'll be sure to post our final report and new 3dDeconvolve_MPI source online when we finish.
-Omid
(BTW, the build for 3dDeconvolve just finished)