I have just finished watching an extended talk by Guy Steele about how we should be designing our programs for parallelism. The gist of his message is that we should not be thinking explicitly about low-level operations like dispatching our code to cores, but should instead focus on structuring our programs with divide-and-conquer methods so that compilers can do the parallelizing for us.
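To make the point concrete, here is a tiny sketch of the kind of restructuring I understand Steele to be advocating. A linear fold threads an accumulator through every element, so each step depends on the previous one; a tree-shaped sum over the same data splits it into two independent halves, which a compiler or runtime is free to evaluate in parallel (since `+` is associative). The tree representation here (pairs as internal nodes, numbers as leaves) is just my own assumption for illustration:

```scheme
;; Sequential: each addition waits on the result of the one before it.
(define (sum-list xs)
  (if (null? xs)
      0
      (+ (car xs) (sum-list (cdr xs)))))

;; Divide-and-conquer: the recursive calls on (car t) and (cdr t)
;; are independent of each other, so they could run in parallel.
(define (sum-tree t)
  (if (pair? t)
      (+ (sum-tree (car t))
         (sum-tree (cdr t)))
      t))

(sum-tree '((1 . 2) . (3 . 4)))  ; => 10
```

Nothing here is parallel yet; the claim is only that the second shape exposes parallelism that the first hides.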
Okay, obviously there are some things about this that do not quite work. Principally, humans are still better than machines at determining when to parallelize. Nonetheless, in those cases when I know that I want to parallelize something, I very much care about making that process as seamless and comprehensible as I can. The approach championed by Fortress, which is Steele’s research language, does have some appeal, but I do not want to use Fortress to get these benefits. I want to use Scheme.
So, my question then becomes: what is the appropriate general interface (read: what library ought to exist) to facilitate this sort of parallelism? Given a Scheme with appropriately capable multi-tasking primitives, such as pthreads or MPI access, what would a high-level interface look like? Maybe someone has done this already for languages like Haskell? Can anyone point me to what this should look like?
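For what it's worth, here is the rough shape I imagine such a library taking: a small fork/join layer over whatever threading primitive the Scheme provides, with `par-map` and friends built on top. The `spawn` and `join` procedures below are assumed primitives, not part of any standard (in Racket, `future` and `touch` play roughly this role; SRFI 18 threads could also be used); this is a sketch of an interface, not a working implementation:

```scheme
;; Assumed low-level primitives (hypothetical):
;;   (spawn thunk) -> handle   ; begin evaluating thunk, possibly in parallel
;;   (join handle) -> value    ; block until the spawned thunk finishes

;; Evaluate two expressions in parallel, returning both results.
(define (par-call f g)
  (let ((h (spawn g)))
    (let ((a (f)))            ; do one half on the current thread
      (values a (join h)))))  ; wait for the other half

;; Map f over lst with each element evaluated in parallel.
(define (par-map f lst)
  (let ((handles (map (lambda (x) (spawn (lambda () (f x)))) lst)))
    (map join handles)))
```

The appeal of this style, as I understand it, is that the user-facing operations (`par-map`, `par-call`) say nothing about cores or scheduling; those decisions stay in the runtime, which matches the spirit of Steele's talk. I believe Haskell's `par`/`pseq` and the evaluation strategies in `Control.Parallel.Strategies` (e.g. `parMap`) are existing examples of this idea, which is part of why I ask about Haskell above.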