Here are the slides for the talk, as promised. Note that Slideshare is not showing some of the images, so you may be better off downloading the PDF from Slideshare.
A great article on the Wolfram blog about Twittering with Mathematica. A while ago I had investigated a Mathematica Twitter bot for doing “micro-calculations”, with the results from Mathematica coming in at under 140 characters. Not very useful, but a fun bot.
Anyway, if you are interested, I made a gist for it. It's in Java and uses J/Link to communicate with Mathematica. It was never running for long, as I suspect it violated some end-user license, but basically you would send a Mathematica command to @mathematica and it would tweet back your result as evaluated by the MathKernel. I am hoping Wolfram might create a similar bot themselves, for when you need to know the value of a special function quickly.
Since there is a MATLAB plug-in for CUDA that provides some examples of off-loading computation to the GPU, I thought it might be neat to have something similar for Mathematica. So, as a start, I decided to try out a simple scalar-product example using MathLink.
The initial template of my function is in the scalarProd.tm file:
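The original listing isn't reproduced here; for reference, an mprep template matching that description would look something like the following. The pattern and argument types are my assumption, based on the two real-valued arrays and single real result described below:

```
:Begin:
:Function:       scalarProd
:Pattern:        ScalarProd[a_List, b_List]
:Arguments:      {a, b}
:ArgumentTypes:  {RealList, RealList}
:ReturnType:     Real
:End:
```

With `RealList` arguments, mprep passes each list to the C function as a `double *` pointer together with a `long` length.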
which describes the ScalarProd function in Mathematica and links it to the scalarProd() C function, which is where we receive the two arrays from Mathematica, use CUDA to calculate their scalar product, and send the result back. This, and the main() function for Linux and Mac (which is what I was using), are in the scalarProd.cu file. Note that Windows needs a slightly different main() function.
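The Unix-style main() is essentially the standard boilerplate from the MathLink documentation; a sketch:

```c
#include "mathlink.h"

/* On Linux and Mac, main() simply hands control to MLMain(),
   which mprep generates from the .tm template. */
int main(int argc, char *argv[])
{
    return MLMain(argc, argv);
}
```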
and in the same scalarProd.cu we now include the scalarProd_kernel.cu kernel from CUDA’s SDK together with our scalarProd() C function:
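A sketch of what such a wrapper might look like, under stated assumptions: the signature follows what mprep generates for two `RealList` arguments and a `Real` return, the two lists are assumed to have equal length, and `scalarProdGPU()` and its launch configuration are taken from the CUDA SDK's scalarProd sample (the actual listing may differ):

```c
#include <stdlib.h>
#include <cuda_runtime.h>

/* scalarProdGPU() is assumed to come from the SDK's scalarProd_kernel.cu,
   included earlier in this file. */

double scalarProd(double *a, long alen, double *b, long blen)
{
    int i, n = (int)alen;                 /* assumes alen == blen */
    size_t bytes = n * sizeof(float);
    float *h_A = (float *)malloc(bytes);
    float *h_B = (float *)malloc(bytes);
    float h_C, *d_A, *d_B, *d_C;

    /* MathLink delivers doubles; the SDK kernel works in floats. */
    for (i = 0; i < n; i++) {
        h_A[i] = (float)a[i];
        h_B[i] = (float)b[i];
    }

    cudaMalloc((void **)&d_A, bytes);
    cudaMalloc((void **)&d_B, bytes);
    cudaMalloc((void **)&d_C, sizeof(float));
    cudaMemcpy(d_A, h_A, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, bytes, cudaMemcpyHostToDevice);

    /* One vector pair: vectorN = 1, elementN = n; grid/block sizes
       borrowed from the SDK sample. */
    scalarProdGPU<<<128, 256>>>(d_C, d_A, d_B, 1, n);

    cudaMemcpy(&h_C, d_C, sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(d_C); cudaFree(d_B); cudaFree(d_A);
    free(h_B); free(h_A);
    return (double)h_C;
}
```

With `:ReturnType: Real` in the template, returning the `double` is enough; mprep takes care of sending it back over the link.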
Now we are ready to run mprep, MathLink's pre-processor, to generate a scalarProdtm.cu file; we then compile everything with CUDA's nvcc compiler, linking against the appropriate CUDA and MathLink libraries, to produce our scalarProd binary, which we can call from within Mathematica:
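The build steps would look roughly like this — the include/library paths and the MathLink library name vary by platform and version, so treat them as placeholders:

```shell
# Pre-process the template, then compile and link everything with nvcc.
mprep scalarProd.tm -o scalarProdtm.cu
nvcc scalarProd.cu scalarProdtm.cu -o scalarProd \
     -I${MLINKDIR}/include -L${MLINKDIR}/lib -lML
```

Once built, the binary can be installed into a session and called like any other Mathematica function (illustrative values):

```
In[1]:= Install["./scalarProd"]
In[2]:= ScalarProd[{1., 2., 3.}, {4., 5., 6.}]
Out[2]= 32.
```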