Monday, 26 September 2011

LU Decomposition 2

So I finally needed to get that matrix solver working for work ... and revisited the LU decomposition code I had played with a few weeks ago.

It turned out the code was broken: first a simple typo, and then a deeper problem - I hadn't noticed that the loop which multiplies out the rows depends on earlier results. Confusingly, of course, this worked just fine using a CPU driver, since the work items within a work-group are actually executed serially.

So I had to come up with another solution.

First I wrote a bit of code that just printed out the calculations actually being performed.

For column 0 this amounted to a no-op.

For column 1, it was some edge cases, then something like:
  for i : 0 to n
    col1[i] -= col0[i] * col1[0]

For column 2, edge cases plus:
  for i : 0 to n
    col2[i] -= col0[i] * col2[0] + col1[i] * col2[1]

And so on.

As the multiplier terms - col1[0] in the first case, col2[0] and col2[1] in the second - are produced by earlier iterations of the same loop, each column result cannot be calculated independently.

Since the amount of calculation is small, using the shared memory to propagate the partial results didn't seem viable, so instead I simply calculate the required previous values manually in every thread, for each column. I fiddled with the code which printed out the calculations and turned it into a code generator that expands all the loops: it's only a 6x6 matrix so it isn't very much code. For the edge cases I use some logic that zeros out the multiplicands, so the entire algorithm is completely branchless.

For example, column 2 is calculated using:
 // partial results from the earlier rows of column 2, calculated
 // redundantly by every thread rather than shared via local memory
 tmp0 = getlu(lu, 2);            // row 0, column 2
 tmp1 = getlu(lu, 8);            // row 1, column 2
 tmp1 -= getlu(lu, 6) * tmp0;    // subtract row 1, column 0 times tmp0
 // edge cases: zero the multiplicands rather than branch
 tmp0 = cid >= 1 ? tmp0 : 0;
 tmp1 = cid >= 2 ? tmp1 : 0;
 v = getlu(lu, cid * 6 + 2);
 v -= getlu(lu, cid * 6 + 0) * tmp0;
 v -= getlu(lu, cid * 6 + 1) * tmp1;
 barrier(CLK_LOCAL_MEM_FENCE);
 putlu(lu, cid * 6 + 2, v);
 barrier(CLK_LOCAL_MEM_FENCE);
 pivotexchange(lu, piv, 2, cid);
 barrier(CLK_LOCAL_MEM_FENCE);

Where cid is the column id (get_local_id(0) % 6). I use macros to access the array since I am also flattening it out to avoid (or at least reduce) bank conflicts. The barriers are needed since some of the rows span wave-fronts, but in any event they only add 3% or so to the processing time.
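
The macro definitions aren't shown here; as a sketch of the idea only (the row padding below is illustrative, not necessarily the layout my code uses):

    /* sketch: hide the index mapping behind macros so the physical
       layout can be tuned without touching the algorithm. Here each
       row of 6 is padded by one float, so the 6 threads on a matrix
       walk different banks. */
    #define getlu(lu, i)    ((lu)[(i) + (i) / 6])
    #define putlu(lu, i, v) ((lu)[(i) + (i) / 6] = (v))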

I've had to write code more than once to understand the data pattern of an algorithm: multi-dimensional array access is bad enough to visualise without adding multi-dimensional processing elements as well. In some cases I've managed to remove huge amounts of redundant calculations this way - matlab seems to encourage code that has very poor data flow for example.

I otherwise stuck to the same thread topology: 6 threads work on each 6x6 matrix together, and 192 threads in the local work group for a total of 32 matrices.
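
In index terms it looks something like this (the names beyond cid are just for illustration; the arithmetic follows from the layout):

    int lid = get_local_id(0);              // 0 .. 191
    int cid = lid % 6;                      // column this thread owns
    int mat = lid / 6;                      // which of the 32 matrices in this group
    int gmat = get_group_id(0) * 32 + mat;  // global matrix index, assuming a 1-D grid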

For 4096 solutions of the same 6x6 matrix (i.e. best-case branch divergence), it takes 26uS on the GTX 480. That seems reasonable enough to me. Although I still have a small number of bank conflicts, I think this result is good enough for an early minute today.

Thursday, 15 September 2011

Control the Roll

I think I hit a bug in the NVidia OpenCL compiler today, and lost a bit of faith in compilers in general ..

What was a fairly small routine with a simple loop wanted to use 63 registers per work item - no matter how I tried to unroll the loop using #pragma unroll, re-arrange the work to use vectors or not, and so on.
// local group size = 16, 16, 1
kernel void
somefunc(..., constant float *f0b, ...) {
    local float localdata[LDSIZE];      // size elided in the original

    ... load local data ...

    for (int i = 0; i < 9; i++) {
        float a0 = localdata[i*2];
        float a1 = localdata[i*2+1];
        ...
        v0 += f0a[i*2]   * a0 + f1a[i*2]   * a1;
        v1 += f0b[i*2]   * b0 + f1b[i*2]   * b1;
        v2 += f0a[i*2+1] * a0 + f1a[i*2+1] * a1;
        v3 += f0b[i*2+1] * b0 + f1b[i*2+1] * b1;
    }
}
Looking at the intermediate code, I found about a thousand(!) redundant register moves to/from other registers. For the small problem I had it was taking about 100uS, which probably wouldn't have bothered me apart from the weird compiler output.

So I removed the loop entirely by hand, using C macros to implement each step.
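
Something along these lines - the names follow the sketch above, and the real macros carry a few more terms:

    /* one step of the old loop body; I is a literal constant, so
       every index below folds at compile time */
    #define STEP(I) do { \
        float a0 = localdata[(I)*2];   \
        float a1 = localdata[(I)*2+1]; \
        v0 += f0a[(I)*2]   * a0 + f1a[(I)*2]   * a1; \
        v2 += f0a[(I)*2+1] * a0 + f1a[(I)*2+1] * a1; \
    } while (0)

    STEP(0); STEP(1); STEP(2); STEP(3); STEP(4);
    STEP(5); STEP(6); STEP(7); STEP(8);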

Result: 73uS & 21 registers.

And the intermediate code was much smaller and more compact.

NVidia's compiler seems to do a pretty crappy job with vectors in any event - the vector version was even worse, half the speed of the scalar version at around 200uS. It's not normally this extreme, but it seems it's almost always faster not to use vector code. It would also (only sometimes!) hang for 20 seconds or more whilst compiling this file, and these changes fixed that too.

Aparapi freed

AMD finally decided to release Aparapi as free software on google code.

This is an interesting project that basically translates Java bytecode so it can execute on a GPU device. With some fairly obvious, heavy limitations ...

I won't be rushing to convert any of my code over, but when I get time I will have to have a look at it. Maybe it would be useful for some prototyping - although I find JOGL a real cinch to use and I quite like C, so it might not be for me.

For doing mathematical calculations, C is simply a nicer language than Java too. With Java you're forced to use simple arrays for performance, and the lack of a primitive complex type is a real pita.

(i.e. I think I've just talked myself out of it: I also love using local memory and so on).

Wednesday, 14 September 2011

Shared Bank Conflicts

So yesterday I was working on some code for a wavelet transform - essentially a bunch of convolutions performed in a specific way.

One routine I have does 4 interleaved convolutions simultaneously, and had a large number of shared bank conflicts - 200% or so. So I spent quite a while trying to remove them. I got rid of most of them - but for some weird reason I still get about 14% and I ran out of day (and patience) to work out what was going on there.

I find this kind of problem quite tricky - trying to juggle dozens of sets of numbers in your head and still come up with something that works. I've developed a few rules of thumb but I still haven't done it often enough to make it easy.
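
For what it's worth, the main rule of thumb I lean on is padding: keep the stride between the addresses a half-warp generates co-prime with the number of banks. A minimal sketch, not from the actual kernel:

    /* 16 threads each walking a 'column' of a 16-wide local array.
       With a stride of 16 floats every access lands in the same bank
       (16 banks, pre-Fermi); padding the stride to 17 spreads the
       accesses across all 16 banks. */
    #define W 16
    local float tile[W * (W + 1)];

    int lx = get_local_id(0);
    int ly = get_local_id(1);
    float v = tile[lx * (W + 1) + ly];   /* conflict-free thanks to the +1 */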

But for all that effort I got a very minor performance improvement: barely 4%, from 70uS to 67uS kernel time. Hardly seemed worth it.

Saturday, 10 September 2011

Masked Loops & LU Decomposition

Update 26/11/11: So it turns out the timing is completely bogus, because the code not only had a typo in it - which probably allowed the loop to be optimised away - but the algorithm was incorrect as well. So I have a feeling this is all just wrong now ... That's what I get for timing without validating first.

I have since re-coded this algorithm, and I think I have something that works. I resorted to building a code generator and manually unrolling all the loops using branchless code. For 4096 6x6 matrices it takes 28uS to perform the LU decomposition (they're the same matrix again).

Also see a follow-up on the LU decomposition problem.


I've been having trouble getting some code going for work, so I thought I'd have a 'break' and port some of the bits and pieces to OpenCL.

The LU decomposition seemed obvious.

Not that I really intended to use it - I thought I'd just try an almost direct translation to OpenCL to start with. All I did was have each thread work on a full solution independently. I used local memory for the column cache, but that was about it.

Fairly impressed - even disregarding all possible optimisations and memory issues, it only took 220uS (4096 6x6 matrices). It can no doubt be made to run faster, but I was so surprised I almost considered just leaving it at that - relative to the other operations it is 'fast enough' ... but of course I didn't. (I was also solving the same matrix copied 4096 times, so it was 'best case' wrt thread divergence.) I guess it isn't really so surprising though: apart from the poor memory access pattern, the code path is consistent across threads.

I did a bit more work on the serial version before tackling the parallel one: e.g. by copying the whole matrix to LS first, and telling the compiler to unroll the loops, I got it down to 118uS.
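
i.e. roughly this shape - the group size and buffer names here are assumptions, not the actual code:

    /* sketch: each work item takes a 36-float slice of local store
       and solves its own matrix from there */
    #define WG 64                       /* assumed work-group size */
    local float lsbuf[WG * 6 * 6];
    local float *LU = lsbuf + get_local_id(0) * 6 * 6;

    #pragma unroll
    for (int k = 0; k < 6 * 6; k++)     /* copy the whole matrix in first */
        LU[k] = matrices[get_global_id(0) * 6 * 6 + k];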

Anyway, along the way to parallelising this I came across an interesting and not entirely obvious optimisation: using the sort of result-masking you need when vectorising algorithms, but just for a simple loop.

Part of one inner loop of the algorithm has this construct:
#define m 6
#define n 6

int kmax = min(i, j);
float s = 0.0f;

for (int k = 0; k < kmax; k++) {
    s += LU[i*n+k] * LU[k*n+j];
}

LU[i*n+j] -= s;

In this case six threads work together on the problem, one row each: the i index is thus the matrix-local work-item id, and j is the current column.

The problem with this code is that the loop bound isn't known at compile time, so the address calculations need to be done on the fly and the loop can't be unrolled very efficiently. And although the compiler could do what I do below, it seems a bridge too far at present.

So I changed the loop to this:
int kmax = min(i, j);
float s = 0.0f;

for (int k = 0; k < n; k++) {
    float v = LU[i*n+k] * LU[k*n+j];

    s += k < kmax ? v : 0;
}

LU[i*n+j] -= s;

And even though it appears to be doing more work, things just don't work like that on a GPU: each 'thread' is still executing all parts of all loops anyway (or at best, sitting idle waiting for the longest-running one to finish).
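
With the bound fixed at n, the compiler can expand the loop completely; conceptually it becomes six predicated steps where every address is a compile-time offset:

    s += 0 < kmax ? LU[i*n+0] * LU[0*n+j] : 0;
    s += 1 < kmax ? LU[i*n+1] * LU[1*n+j] : 0;
    s += 2 < kmax ? LU[i*n+2] * LU[2*n+j] : 0;
    s += 3 < kmax ? LU[i*n+3] * LU[3*n+j] : 0;
    s += 4 < kmax ? LU[i*n+4] * LU[4*n+j] : 0;
    s += 5 < kmax ? LU[i*n+5] * LU[5*n+j] : 0;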

This simple change led to a 25% improvement in the overall execution time of my test case.

My 'final' code executes in 38uS (I haven't verified that it works, so this number might be nonsense!). And I still have further to go: the way I access memory isn't coalesced, and I have a lot of local-store bank conflicts to nut out.

So, assuming the code works, maybe that original 220uS wasn't so hot after all.

Wednesday, 7 September 2011

Fixing what isn't really broken.

Blah, so Google have decided they're going to mess up another of their products for me.

I've already stopped using the web interface to gmail - I just use pop and thunderbird - and now they're playing with blogger.

Blogger's existing interface is pretty crap - but it's simple and it's fast and easy to use. But the 'improvements' hardly seem improvements to me.

Harder

First, the composer. About the only 'good' things are that the editing area is bigger - not that it makes much difference to me - and that it is a fixed size - that's the nicest bit.

But it's now more difficult to attach labels to posts, as you need to click on the labels tab first - it seems like a simple thing, but the old interface is very simple if you use a few common labels most of the time. The post options 'pane' in general is pretty pants: it somehow manages to take up about 4x as much space as the previous one while only showing 1/4 as much information at a time.

They've broken the 'preview' feature - ok, it was always fairly broken - but now it's a full stylised preview which takes quite a while to load on a separate page/tab. The old in-line quick preview was much more useful to me than the full preview for reviewing the content of the text when proof-reading, editing, and trying to get the paragraph white-space right. What used to take no time now takes a second and a tab switch.

Bigger

The stats pages now no longer fit in my browser, and they seem to take longer to load. Too many annoying tooltips and popups as well.

The settings/dashboard is weird in general - everything is double-spaced, and a huge chunk of the top of the window is dedicated to a fixed area that doesn't really add anything useful by being fixed. For some reason people seem to love this kind of crap but it just gives me the shits.

For me blogger is just another tab of dozens - not a stand-alone full-screen application. Everyone seems to want to move away from being able to put stuff in a window which doesn't take up the whole screen - you know, to multi-task?

Apple had a reason to force full-screen applications - first in macos which had a shit multi-tasking system, and then on iphone/itab since the machines aren't that powerful. Microsoft did the same with windows - the OS was really just a glorified application switcher. But I thought those days were long gone ...

Slower & Hotter

One reason I dropped gmail is that it was starting to make my laptop hot - firefox was busy all the time doing pointless shit you couldn't see or would rather not. It will be interesting to see if this new interface on blogger is also heavier than the old one. Whilst typing this post I've already noticed a bunch of pauses and freezes which I can't recall having occurred in the previous incarnation.

This could be a real deal-breaker for me: although the stats are fun to watch, by far the most time I spend in blogger is simply writing posts. If that becomes too painful (and I have to say, a half-second cursor pause every 20 seconds gets tiring VERY VERY fast) then I won't be too pleased. The pause seems to get longer the more you write, too ...

For now I'll go back to the old blogger, and I provided some feedback, for what little that will be worth. But I know eventually I'll be forced onto something I don't really want.

(this sort of forced-upgrade stuff is the sort of thing that scares me about firefox's 'no version' plans. I'm still using an 'old' firefox because the new ones are not to my liking, and in any event aren't packaged for my distro - but at least I can use an old version if I so want).

Update: And in a further twist, the 'no, I'd rather go back to the old interface' feedback form failed to work.

Java 2D arrays

I had to find a bit of code to solve a set of simultaneous equations for some prototype code I'm working on.

Having to do this gives me the willies really because linear algebra was never much fun for me ...

I only have to solve a pretty simple system - 6x6 - and I settled on using Jama, mostly because it's a small and simple library. The code is also fairly clean and I need to eventually port this to OpenCL too.

The code uses 2-D arrays to store its matrices, but I know 2-D arrays in Java aren't particularly efficient - they are implemented much the way you would do it in C, that is, as an array of pointers which point to the rows of the array. Thus every 2-D access requires two array dereferences. Since I need to port it to OpenCL eventually anyway, I tried converting the Matrix and LUDecomposition classes to use linear arrays, with simple arithmetic to map 2-D indices onto the linear array (i.e. i*n + j).
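
The access-pattern change itself is trivial - sketched in C here, since that's where this code is headed anyway:

    /* 2-D style: two dereferences (plus two range checks, in Java) per element */
    float get2d(float **a, int i, int j) { return a[i][j]; }

    /* flattened: one dereference plus a small index calculation */
    float get1d(const float *a, int n, int i, int j) { return a[i*n + j]; }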

I got pretty much exactly a 2x performance boost from this simple change, which is in line with what I expected - though I didn't quite expect it to be so close to 2x. The memory accesses far outweigh any arithmetic on a modern CPU, and since 2-D arrays require twice as many memory accesses (and twice the range checks, I presume), halving the memory accesses leads to a 2x speed-up - even though the twin array dereference was replaced by the more complex multiply-and-add index calculation.

Jama is wildly out of date, and I didn't look at the alternatives which might focus more on performance, but it shows that at least in this case 2-D arrays come at quite a cost.

Not really looking forward to getting it working in OpenCL either, trying to parallelise it is going to be a bit of a hassle. Then again maybe the challenge will be refreshing - I need something to spark me up at the moment.

This whole chunk of work is getting me down a bit - I have a big pile of hard to decipher ('matlabotomised') code to convert before I even get something to test, and then I have to try to remove some of the matlabisms that don't make sense in a real language, or require unnecessary excess memory. Then I have to get it working. And once that's done I have to re-do it all again from Java to OpenCL and get that working ... but I'm just not into it these last few weeks. Lack of sleep mostly (I keep waking at sun-up, I really must exercise), but also some other distractions - a few days of nice weather, family visiting, and so on.

This is also why I haven't had time to work on any of the other projects - I just don't have the energy. Lawn is looking good though.

The problem with teaching abstractions

For want of distraction, I've been hanging around some of the OpenCL forums of late. Boy do they get some ninny questions.

From people trying to drop plain C (with full stdio and all) into the compiler and expecting it to work, to someone asking if you can 'write functions' in the host language ... (err, I mean, seriously - maybe the guy is a matlab guru, but it certainly isn't that hard to find out about C or whatever host language he's using).

But those are just the most extreme in the last couple of days. What is more worrying is just how many people don't seem to understand computer architecture at all - much less a system containing a 'remote' processing device like a GPU.

Really basic things like cache, registers, stack, memory latency, memory banks & contention, I/O bus latency, call invocation overheads, and so on. Not to mention the less obvious but not really more complex ideas that GPUs bring to the table, such as memory coalescing, thread divergence/masking (i.e. SIMT branches), local memory, and so on.

Also, a rough idea of just how fucking fast modern hardware is.

I presume most of the queries are from students, but they just seem to have absolutely NFI what is going on 'under the bonnet' of their shiny new bit of hardware. E.g. the reason your code is slower on a GPU should be bleedingly obvious before you go anywhere near a compiler.

Before trying to work with such a complex system, you really need to know some basics of computer system architecture - otherwise none of the api will make any sense, nor will any of the results.

The manuals are good: just read them.

Experiment on your own; it's easy, it's fast, it's cheap (it only takes your time, and a student's time isn't worth anything). You don't learn anywhere near as much if you just copy someone else, or forever ask inane questions.