> Could you please elaborate? By a new compressor, do you mean code to be integrated with AFNI that specifies how sparse masks are loaded and saved?
To my understanding, the way compression works in AFNI is that for each compression type an external program is executed to compress and expand the BRIK, i.e. "gzip", "bzip2", or "compress" is executed. I believe these are run pipe-style: the data to convert is fed to stdin and the converted data is read from stdout.
For reading sparse data as a compressed form, I think all you would need is a program that reads the sparse data on stdin and writes non-sparse data on stdout.
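As a rough sketch of what such a filter could look like, here is a minimal stdin-to-stdout expander. The record format it assumes (a 4-byte little-endian zero-run length, a 4-byte literal length, then the literal bytes) is purely hypothetical for illustration, not anything AFNI defines:

```python
import struct
import sys

def expand(src, dst):
    """Expand a hypothetical run-length sparse stream into dense bytes.

    Each record is: 4-byte LE count of zero bytes to emit, 4-byte LE
    count of literal bytes, then the literal bytes. Repeats until EOF.
    (This format is an assumption for illustration, not an AFNI format.)
    """
    while True:
        header = src.read(8)
        if not header:
            break
        nzero, nlit = struct.unpack("<II", header)
        dst.write(b"\x00" * nzero)
        dst.write(src.read(nlit))

if __name__ == "__main__":
    # AFNI would invoke this pipe-style: sparse form on stdin,
    # dense BRIK bytes on stdout.
    expand(sys.stdin.buffer, sys.stdout.buffer)
```

Because the expander only ever emits bytes, it never needs to know the dataset's dimensions, which is why the reading direction seems straightforward.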
Now, I could have sworn I saw an environment variable or something to specify a shell command to use as a compressor, but I can't find it in README.compression or README.environment so I'm beginning to think I hallucinated it.
In any case, I've realized there's a problem with writing the data, namely that the dimensions of the BRIK aren't stored in the BRIK itself, so you couldn't create a sparse form without somehow getting additional data from the HEAD. So I'm less certain that writing of sparse compressed data would work. But maybe afni could do something like add the dimensions to the environment, and then it would be up to the compressor to either care or not care.