And so it goes ...

Update: 29.11.16
The following few lines are now the complete makefile for dez. It supports `jar' (the normal build target), `sources' (IDE source jar), `javadoc' (IDE javadoc jar), `dist' (complete rebuildable source), and now even `test' or `check' (unit and integration tests via JUnit 4) targets. The stuff included from java.make is reusable and is under 200 lines once you exclude the voluminous comments and documentation.
  java_PROGRAMS = dez
  dez_VERSION=-1
  dez_JAVA_SOURCES_DIRS=src
  dez_TEST_JAVA_SOURCES_DIRS=test

  DIST_NAME=dez
  DIST_VERSION=-1.3
  DIST_EXTRA=COPYING.AGPL3 README Makefile

  include java.make
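With that in place the targets work as you'd expect (assuming GNU make, which java.make is written for):

  $ make jar
  $ make check          # or `make test'
  $ make dist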
The article is over on my home page at Using GNU Make for java under my software articles section.
This morning I wrote and published an article about writing an image container class for Java which supports efficient use of Streams. It is on my local home page under Pixels - Java Images, Streams.
Although the article says a lot, there is still quite a bit unsaid about how many wrong-footed experiments it took to arrive at the seemingly obvious final result. The code itself is now (or will be) part of an unpublished library I apparently started writing just over 12 months ago, for reasons I can no longer recall. It doesn't have enough guts to make publishing it worthwhile as yet.
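The gist is something along these lines (a stripped-down sketch only, not the actual class from the article; the names here are made up):

  import java.util.Arrays;
  import java.util.stream.IntStream;

  // A flat-array image which can hand its pixels out as an IntStream,
  // avoiding per-pixel boxing.
  public class PixelImage {
          final int width, height;
          final int[] pixels;             // packed ARGB

          PixelImage(int width, int height) {
                  this.width = width;
                  this.height = height;
                  this.pixels = new int[width * height];
          }

          // The whole image as a stream of packed pixel values.
          IntStream stream() {
                  return Arrays.stream(pixels);
          }

          // e.g. a per-channel reduction without any temporary objects.
          double meanRed() {
                  return stream().map(p -> (p >> 16) & 0xff).average().orElse(0);
          }
  }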
I'm also still playing with fft code and toying with some human-computer-interaction ideas.
After posting the result I kept experimenting with the code I `live blogged' about yesterday. I did some inlining but was primarily experimenting with multi-threading. I also looked at a decimation in time algorithm (which I kept fucking up until I got it working today), and a couple of other things too, as will become apparent.
First a picture of a thousand words.
Now the words.
This picture shows the CPU load over time (I'm sure you all know what that is) as I ran a specific set of tests: 8x runs of a 2^24 complex forward transform. The lines represent each core available on this computer. I put some 2s sleep calls between steps to make them distinguishable. Additionally each horizontal pixel represents 250ms, which is about the minimum sampling time that gives usable results.
Refer to earlier posts as to what the names mean.
That's the setup out of the way, the first third or so of the plot. It's important but not as important as the next bit.
Well that's it I suppose. It utilises more of the available cpu resource and executes in a shorter time. But how was that achieved?
As Deane would say, ``Well Rob, i'm glad you asked''.
It only required a couple of quite simple steps. Firstly I copied the "radix4" routine into the inner loop of "radix4_pass". The JVM compiler won't do this itself without some options, and it makes quite a difference on its own. Then I copied this to another "radix6_pass" which takes additional arguments defining a sub-set of a full transform to calculate. I then just invoke this in parts from 4x separate threads, and keep doing that sort of thing until I hit the "logSplit" point and subsequently proceed as before. It was quick-and-dirty and could be cleaned up, but cleaning it up probably won't add much performance.
It took a bit of mucking about with the addressing logic, but once done it's actually a fairly minor change; yet it results in the best performance by far.
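In outline it looks something like this (a minimal sketch: "radix6_pass" here is an empty stand-in for the real inlined pass, and the splitting arithmetic is illustrative only):

  import java.util.ArrayList;
  import java.util.List;
  import java.util.concurrent.Callable;
  import java.util.concurrent.ExecutorService;

  public class SplitPass {
          static final int NTHREADS = 4;

          // Stand-in for the real inlined pass: process the independent
          // sub-transforms with outer-loop indices [lo, hi).
          static void radix6_pass(float[] data, int logN, int pass, int lo, int hi) {
                  for (int i = lo; i < hi; i++) {
                          // ... radix-4 butterflies for sub-transform i ...
                  }
          }

          // Split one pass of `count' independent sub-transforms across 4 threads.
          static void parallelPass(ExecutorService pool, float[] data,
                          int logN, int pass, int count) throws InterruptedException {
                  List<Callable<Void>> jobs = new ArrayList<>();
                  int step = (count + NTHREADS - 1) / NTHREADS;
                  for (int t = 0; t < NTHREADS; t++) {
                          int lo = t * step;
                          int hi = Math.min(lo + step, count);
                          if (lo >= hi)
                                  break;
                          jobs.add(() -> {
                                  radix6_pass(data, logN, pass, lo, hi);
                                  return null;
                          });
                  }
                  // invokeAll() waits for every job: each pass must complete
                  // before the next one starts.
                  pool.invokeAll(jobs);
          }
  }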
At this point I've explored all the issues and am working on a complete implementation which ties it all together. I think I will write two implementations: one using fully expanded tables for "ultimate performance" and another which calculates the W^n exponents on the fly for "ultimate size". Today I got a DIT algorithm working so I will fill out the API with forward/inverse, pairs-of-real, real-input, perhaps in-order results, and a couple of other useful things to aid convolution performance. Oh, and 2D versions of all that.
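The "ultimate size" variant would generate successive factors by complex multiplication rather than table lookup; roughly like this (illustrative only):

  public class TwiddleOnTheFly {
          // Visit the twiddle factors W^j = e^(-2*pi*i*j/n) for j = 0 .. count-1
          // without a table, using W^(j+1) = W^j * W^1.
          static void generate(int n, int count, float[] out) {
                  double cr = Math.cos(2 * Math.PI / n);
                  double ci = -Math.sin(2 * Math.PI / n);
                  double wr = 1, wi = 0;
                  for (int j = 0; j < count; j++) {
                          out[j * 2 + 0] = (float) wr;
                          out[j * 2 + 1] = (float) wi;
                          double t = wr * cr - wi * ci;
                          wi = wr * ci + wi * cr;
                          wr = t;
                  }
                  // in practice you'd re-seed from Math.cos/Math.sin every so
                  // often to stop rounding error accumulating
          }
  }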
But now for a little rant.
So as part of this effort, I ended up having to write my own cpu load monitor. The only one I had available on slackware only has a tiny graph and is mostly just a GUI version of top. In the plot, dark slate blue is user time, the grey area is idle, crimson is cpu load, medium sea green is irq, and goldenrod is io wait. The kernel doesn't report particularly accurate values in /proc/stat but it sufficed.
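The sampling side of such a monitor is only a few lines anyway; a minimal sketch (the plotting is the bulk of the real thing):

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;

  public class CpuLoad {
          // One row per "cpuN" line of /proc/stat:
          // {user, nice, system, idle, iowait, irq, softirq}.
          static long[][] sample() throws IOException {
                  List<long[]> rows = new ArrayList<>();
                  try (BufferedReader r = new BufferedReader(new FileReader("/proc/stat"))) {
                          String line;
                          while ((line = r.readLine()) != null) {
                                  if (!line.matches("cpu\\d+\\s.*"))
                                          continue;
                                  String[] tok = line.split("\\s+");
                                  long[] v = new long[7];
                                  for (int i = 0; i < 7; i++)
                                          v[i] = Long.parseLong(tok[i + 1]);
                                  rows.add(v);
                          }
                  }
                  return rows.toArray(new long[0][]);
          }

          public static void main(String[] args) throws Exception {
                  long[][] a = sample();
                  Thread.sleep(250);      // ~the minimum useful sampling period
                  long[][] b = sample();
                  for (int c = 0; c < a.length; c++) {
                          long total = 0;
                          for (int f = 0; f < 7; f++)
                                  total += b[c][f] - a[c][f];
                          long idle = b[c][3] - a[c][3];
                          if (total > 0)
                                  System.out.printf("cpu%d: %.0f%%%n",
                                                  c, 100.0 * (total - idle) / total);
                  }
          }
  }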
But I had intended to annotate this image with some nice 'callouts' and shadowed boxes and whatnot so I wouldn't have to write those 1000 words just to explain what it was showing. However ...
gimp has turned into a "professional photographer editing suite" - i.e. a totally useless piece of junk for most of the planet (and pro photographers won't use it anyway?). So the only other application I had handy was openoffice "dot org" (pretentious twats) draw. I even started to track down the dependencies of inkscape to build that, but gtkmm? Yeah ok. But openoffice: Jesus H fucking-A-cunt-of-a-christ, what a load of shit that is. I can't imagine how many millions of dollars that piece of rubbish has cost in terms of developer hours and wasted customer time (aka luser `productivity'), but I'm astounded by just how terrible it is. It runs very slow. Has some weird-arse modality/GUI update bullshit going on. Is a total pain to use (in terms of the number of mouse clicks required to do the most trivial of operations). And above that it's just buggy as hell. I'd call up the voluminous settings "dialogue" to change a background colour and then it would decide to throw away any changes I'd made if I didn't explicitly set them every time.
But it's in good company and about as shit as any `office' software has ever been since the inane marketroid idea to lock users into fucktastic `software ecosystems' was first conceived of. Fuck micro$oft and the fucking hor$e it fucking rode in on.
I was so pissed off I spent the next 4+ hours (till 5am) working on my own structured graphical editor. Ok, maybe that's a bit manic, and it'll probably go about as far as the last 4 times I did the same thing after trying a bit of similar software to accomplish a similarly simple-seeming goal ... but ``like seriously''?
To phrase it in the parlance of our time: what in the actual fuck?
Just for something a bit different, this morning I had an idea to record developing some software from the point of view of a "live blog". I was somewhat inspired by a recent video I saw from Media Molecule where they were editing shader routines for their outstandingly impressive new game "Dreams" on a live video stream.
Obviously I didn't quite do that but I did have a hypothesis to test and ended up with a working implementation to test that hypothesis, and recorded the details of the ups and downs as I went.
Did I get a positive or negative answer to my question?
To get the answer to that question and to get an insight into the daily life of one cranky developer, go have a read yourself.
I started writing a decent post with some detail but I'll just post this plot for now.
Ok, some explanation.
I tried to scale the performance by dividing the execution time per transform by N log2 N. I then normalised using a fudge factor so that the fastest N=16 result is about 1.0.
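That is, each plotted point is roughly:

  scaled(N)  = time_per_transform(N) / (N * log2(N))
  plotted(N) = scaled(N) / k

where k is the fudge factor, chosen so the fastest scaled(16) comes out at about 1.0.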
My first attempt just plotted the relative ratio to jtransforms but that wasn't very useful. It did look a whole lot better though, because I used gnuplot; this time I was lazy and just used openoffice. Pretty terrible tool though, it feels as clumsy as using microsoft junk on microsoft windows 95. Although I had enough pain getting good output from gnuplot via postscript too, but nothing a few calls to netpbm couldn't fix (albeit with its completely pointless and useless "manual" pages which just redirect you to a web page).
Well, some more on the actual information in the picture:
  int l1 = i << logStep;
  int l2 = i << (logStep + 1);
  int l3 = l1 + l2;
  int mw = w.length - 2;
  int nw = w.length - 1;

  // W^1 straight from the table
  float w1r = w[l1];
  float w1i = -w[nw - l1];

  // W^2 and W^3 from a reduced table plus a rotation via cosmap
  float w2ra = w[l2 & mw];
  float w2ia = w[nw - (l2 & mw)];
  int cs2 = (l2 >> (logN - 1)) * 4;
  int cs3 = (l3 >> (logN - 1)) * 4;
  float w2r = cosmap[cs2 + 0] * w2ra + cosmap[cs2 + 1] * w2ia;
  float w2i = cosmap[cs2 + 2] * w2ra + cosmap[cs2 + 3] * w2ia;

  float w3ra = w[l3 & mw];
  float w3ia = w[nw - (l3 & mw)];
  float w3r = cosmap[cs3 + 0] * w3ra + cosmap[cs3 + 1] * w3ia;
  float w3i = cosmap[cs3 + 2] * w3ra + cosmap[cs3 + 3] * w3ia;
So it turns out that the twiddle factors are the primary performance problem, not the data cache. At least up to N=2^20. I should have known this, as it is what ffts was addressing (if I recall correctly).
Whilst a single table allows for quick lookup "on paper", in reality it quickly becomes a wildly sparse lookup which murders the data cache. Even attempting to reduce its size has little benefit and too much cost; however 'tab_1' does beat 'tab_0' at the end. And while fully pre-calculating the tables looks rather poor "on paper", in practice it leads to the fastest implementation; although it uses more memory it's only about twice the size of a simple table, and around the same size as the data it is processing.
In contrast, the semi-recursive implementation has only a relatively weak bearing on the execution time. This could be due to poor tuning of course.
The rotation implementation adds an extra 18 flops to a calculation of 34 but only has a modest impact on performance so it is presumably offset by a combination of reduced address arithmetic, fewer loads, and otherwise unused flop cycles.
The code is surprisingly simple, I think? There is one very ugly routine for the second-to-last pass, but even that is merely mandraulic inlining and not complicated.
Well that's forward, I suppose I have to do inverse now. It's mostly just the same in reverse so the same architecture should work. I already wrote a bunch of DIT code anyway.
And I have some 2D stuff. It runs quite a bit faster than 1D for the same number of numbers (all else being equal) - in contrast to jtransforms. It's not a small amount either, it's like 30% faster. I even tried using it to implement a 1D transform - and actually got it working - but even with the same memory access pattern as the 2D code it wasn't as fast as the plain 1D. Big bummer for a lot of effort.
It was those bloody twiddle factors again.
Update: I just realised that I made a bit of a mistake with the way I've encoded the tables for 'tab0', which has propagated from my first early attempts at writing an fft routine.
Because I started with a simple direct sine+cosine table, I just appended extra items to cover the required range when I moved from radix-2 to radix-4. But all this means is that I have a table 3x longer than it needs to be for W^1, with W^2 and W^3 sparsely scattered through it. So apart from adding complexity to the address calculation, it leads to poor locality of reference in the inner loop.
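In other words something like the difference below (illustrative layouts only, not the actual table code): a radix-4 pass wants W^j, W^2j and W^3j for each j, and indexing one long table directly by exponent strides all over it, where packing the three factors per j keeps the reads adjacent.

  public class TwiddleLayout {
          // Old: one (cos,sin) pair per exponent, indexed directly. Covering
          // exponents up to 3j makes it ~3x longer than W^1 alone needs, and
          // the W^2j/W^3j reads stride through it at 2x and 3x.
          static void loadSparse(float[] tab, int j, float[] w) {
                  w[0] = tab[2 * j];
                  w[1] = tab[2 * j + 1];          // W^j
                  w[2] = tab[2 * (2 * j)];
                  w[3] = tab[2 * (2 * j) + 1];    // W^2j
                  w[4] = tab[2 * (3 * j)];
                  w[5] = tab[2 * (3 * j) + 1];    // W^3j
          }

          // Packed: the three factors for each j stored consecutively, so the
          // inner loop reads 6 adjacent floats.
          static void loadPacked(float[] tab3, int j, float[] w) {
                  for (int k = 0; k < 6; k++)
                          w[k] = tab3[6 * j + k];
          }
  }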
It still drops off quite a bit after 2^16 though, to just under jtransforms at 2^20.
Well I did some more mucking about with the fft code. Here's some quick results for the radix-4 code.
First, the runtime per transform, in microseconds. I ran approximately 1s worth of a single transform size in a tight loop and took the last of 3 runs. All algorithms executed sequentially in the same run.
               16     64     256     1024      4096     16384      65536      262144
  jtransforms  0.156  1.146  6.058   27.832   135.844   511.098  3 037.140  14 802.328
  dif4         0.160  0.980  5.632   27.503   138.077   681.006  3 759.005  20 044.719
  dif4b        0.136  0.797  4.713   22.994   120.915   615.623  3 175.115  17 875.563
  dif4b_col    0.143  0.797  4.454   21.835   117.659   593.314  3 020.144  22 341.453
  dif4c        0.087  0.675  4.255   21.720   115.550   576.798  2 775.360  15 248.578
  dif4bc       0.083  0.616  3.760   19.596   108.028   547.334  2 810.118  16 308.047
  dif4bc_col   0.137  0.622  3.699   19.629   107.954   550.483  2 820.234  16 323.797
And the same information presented as a percentage of jtransforms' execution time with my best implementation highlighted.
               16     64     256    1024   4096   16384  65536  262144
  jtransforms  100.0  100.0  100.0  100.0  100.0  100.0  100.0  100.0
  dif4         102.4   85.5   93.0   98.8  101.6  133.2  123.8  135.4
  dif4b         86.7   69.6   77.8   82.6   89.0  120.5  104.5  120.8
  dif4b_col     91.3   69.6   73.5   78.5   86.6  116.1   99.4  150.9
  dif4c         55.9   58.9   70.2   78.0   85.1  112.9   91.4  103.0
  dif4bc        53.3   53.8   62.1   70.4   79.5  107.1   92.5  110.2
  dif4bc_col    87.7   54.3   61.1   70.5   79.5  107.7   92.9  110.3
Executed with the default java options.
  $ java -version
  java version "1.8.0_92"
  Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
  Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)
The CPU is a kaveri clocked at its minimum (1.7GHz) with only a single DIMM; it is quite slow as you can see.
A summary of the algorithms follows.
The final (trivial) pass is hand-coded in all cases. dif4 requires N/2 complex twiddle factors and the rest require N/2+N/4 due to the factorisation used.
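For scale: at N=2^18 (262144) that's 128Ki vs 192Ki complex factors, i.e. 1MiB vs 1.5MiB as float pairs - comparable to the 2MiB of complex float data being transformed.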
In all cases only a forward complex transform which leaves the results unordered (bit-reversed indexed) is implemented.
All the implementations do full scans at each pass and so start to slow down above 2^12 elements due to cache thrashing (together with the twiddle table). Depth-first ordering should help in most cases.
Despite requiring fewer twiddle lookups in dif4c, the extra complexity of the code leads to register spills and slower execution. That it takes the lead at very large sizes is also consistent with this and the point above.
Twiddle factor lookups are costly in general but become relatively costlier at larger sizes. This needs to be addressed separately to the general cache problem.
Whilst the re-ordering for the "col" variants was a cheap and very simple trick to gain some performance, it can't keep up with the hand-coded variants. Further tuning may help.
For addressing calculations sometimes it's better to fully spell out the calculation each time and let the compiler optimise it, but sometimes it isn't.
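For example (illustrative only), the first form here sometimes beats the second, and sometimes it doesn't:

  public class Addressing {
          // Spelled out in full: the JIT can fold and hoist as it sees fit.
          static float sumQuad(float[] d, int i, int n4) {
                  return d[i] + d[i + n4] + d[i + 2 * n4] + d[i + 3 * n4];
          }

          // Factored through a helper: sometimes identical after inlining,
          // sometimes measurably slower.
          static int addr(int i, int q, int n4) {
                  return i + q * n4;
          }

          static float sumQuad2(float[] d, int i, int n4) {
                  float s = 0;
                  for (int q = 0; q < 4; q++)
                          s += d[addr(i, q, n4)];
                  return s;
          }
  }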
It is almost always a good idea to keep loops as simple as possible. I think this is one reason such a trivial / direct implementation of the FFT beats out a more sophisticated one.
There's a pretty clear winner here!
In most cases the calculations are expanded in full inside the inner loops, apart from the W^0_N case. In all other cases the compiler generated slower execution if the code was modularised, sometimes significantly slower (10%+). I suspect that even the radix-4 kernel is starting to exhaust the available registers and scheduling slots, together with the constraints of the java memory model. The code also includes numerous possibly questionable and sometimes a little baffling micro-optimisations. I haven't validated the results yet apart from a test case which "looks wrong" when things go awry, but I have reasonable confidence it is functioning correctly.
After I found a decent reference I did start on a radix-8 kernel, but I soon realised why there is so little written about doing it in software: it just doesn't fit on the cpu in a high-level language. Even the radix-4 kernel seems to be a very tight fit for java on this cpu.
Still playing ...