Saturday, 8 October 2011

Java v OpenCL/CPU

I've been using the AMD CPU driver a bit for debugging and testing: I never really considered it for performance, but for various reasons late tonight I ended up poking around with a simple routine and wondering how it compared.

At first I thought I'd discovered a disaster, but that was because I wasn't initialising the data: too many operations on non-normal (denormal) floating point values were slowing it down significantly. Oops, glad I checked that before posting. Although it's getting late, so who knows what else I may have stuffed up.
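
If you want to see the effect for yourself, a quick Java sketch along these lines shows the denormal penalty - the array size, value choice and timing method are only illustrative, and micro-benchmark caveats apply:

    import java.util.Arrays;

    public class DenormalCheck {
        // Scale-and-sum over an array: with subnormal inputs the multiply
        // (and many of the adds) operate on denormal values, which x86
        // typically handles in slow microcode.
        static float run(float[] a) {
            float sum = 0;
            for (int pass = 0; pass < 64; pass++)
                for (int i = 0; i < a.length; i++)
                    sum += a[i] * 0.5f;
            return sum;
        }

        public static void main(String[] args) {
            float[] normal = new float[1 << 20];
            float[] denorm = new float[1 << 20];
            Arrays.fill(normal, 1.0f);
            Arrays.fill(denorm, Float.MIN_NORMAL / 2); // a subnormal value

            float sink = run(normal);       // warm up the JIT first
            long t0 = System.nanoTime();
            sink += run(normal);
            long t1 = System.nanoTime();
            sink += run(denorm);
            long t2 = System.nanoTime();
            System.out.printf("normal %.3fs denormal %.3fs (%g)%n",
                              (t1 - t0) / 1e9, (t2 - t1) / 1e9, sink);
        }
    }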

I was testing with a simple matrix multiply: a 4096x4096 matrix stored in row-major order, multiplied by a 4096-element column vector. It isn't something I'm in any need of, but after poking around this site, which I've read a few times, and with nothing on TV, I decided to play around a bit. Then, after exhausting my interest in the GPU, I tried the CPU version - I was originally going to see whether just doing it locally with the CPU driver would be quicker than a device copy and back, but it isn't: the GPU is still 5-10x faster.
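
In Java terms the core of the problem is just this (my sketch, nothing special):

    // y = A*x: A is a 4096x4096 matrix stored row-major in a flat
    // array, x and y are 4096-element vectors.
    static void matVec(float[] a, float[] x, float[] y, int n) {
        for (int r = 0; r < n; r++) {
            float sum = 0;
            int off = r * n;               // row r starts at offset r*n
            for (int c = 0; c < n; c++)
                sum += a[off + c] * x[c];
            y[r] = sum;
        }
    }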

I tested 4 implementations:
  1. OpenCL written for a CPU target using float types: one work-group with one work-item per row, 4096 work-groups.
  2. OpenCL using float4 types, otherwise the same.
  3. Java, single-threaded.
  4. Java, using a ThreadPoolExecutor with 12 threads and 32 jobs (sketched below).
  Code             Time (s)
  Java single      1.5
  Java pool        0.39
  OpenCL float     0.43
  OpenCL float4    0.37

So I had to resort to float4 types to beat the thread-pool code, and then only just. It's debatable which is easier to write: the Java code must explicitly deal with the range allocation and job launching. But then it's all built-in, and it doesn't require a different language, runtime, interface and foreign memory management ... one that's prone to crashing with zero information, and excruciatingly difficult to debug besides. Ok, scratch that: the Java clearly wins here.
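
Concretely, the thread-pool version went something like the following - a reconstruction rather than the original code, with illustrative names and an even row split:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class PooledMatVec {
        // One job: compute rows [r0, r1) of y = A*x.
        static void matVecRange(float[] a, float[] x, float[] y,
                                int n, int r0, int r1) {
            for (int r = r0; r < r1; r++) {
                float sum = 0;
                int off = r * n;
                for (int c = 0; c < n; c++)
                    sum += a[off + c] * x[c];
                y[r] = sum;
            }
        }

        // Split n rows into 'jobs' equal ranges and run them on the pool.
        static void matVecPooled(ExecutorService pool, final float[] a,
                                 final float[] x, final float[] y,
                                 final int n, int jobs) throws Exception {
            int step = (n + jobs - 1) / jobs;
            List<Future<?>> work = new ArrayList<Future<?>>();
            for (int r = 0; r < n; r += step) {
                final int r0 = r, r1 = Math.min(r + step, n);
                work.add(pool.submit(new Runnable() {
                    public void run() {
                        matVecRange(a, x, y, n, r0, r1);
                    }
                }));
            }
            for (Future<?> f : work)
                f.get();                    // wait for every job to finish
        }

        public static void main(String[] args) throws Exception {
            int n = 4096;
            float[] a = new float[n * n], x = new float[n], y = new float[n];
            // ... initialise a and x properly (see above) ...
            ExecutorService pool = Executors.newFixedThreadPool(12);
            matVecPooled(pool, a, x, y, n, 32);
            pool.shutdown();
        }
    }

(Executors.newFixedThreadPool returns a ThreadPoolExecutor under the covers.)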

One can either conclude that the AMD compiler is a bit below par to start with (most likely true), and that it was only by using vectorised code that it could beat the Java. Or perhaps that the HotSpot compiler is rather good at this particular problem (again, most likely true), and is possibly using SSE opcodes to implement the loop too. Not that SSEn really seems to add much of a boost in general apart from a few extra registers - it's not like on an SPU, where vectorised code can be 10x faster than scalar.

I had until this point thought of the CPU drivers for OpenCL as providing a sort of 'portable assembly language' for higher-level languages, but if you already have a decent compiler it doesn't seem worth it - at least for some problems.

I suppose another implementation might do better, but you're still stuck with a pretty hostile debugging environment, and if you're after performance you'll be using a GPU anyway. So about all it seems useful for is debugging/verifying code. Given that, perhaps it would be more useful to add extra checking to the compiled code to help with debugging, rather than worrying about performance ... unlike C, OpenCL has a much simpler memory model, for which accurate and complete run-time address-range checking could be ?easily? added.

2 comments:

Igor Suhorukov said...

If your task has a small amount of input data and a large amount of SIMD operations on it (with few memory read/write operations), OpenCL/CUDA are great for such tasks.
If you want to develop/debug OpenCL with Java, JavaCL is the best choice: http://suhorukov.blogspot.com/2011/12/opencl-kernel-debugging-for-java-host.html

NotZed said...

I think you're missing the point of the post, but no matter.

On the matter of which Java binding to use - I think JOCL is excellent and I wouldn't consider JavaCL. I've been using it for a few years now with no troubles and have a lot of code that uses it.

That you feel the need to advertise another library on a rarely-read obscure blog is a little strange.