Friday, 28 February 2014

javafx + internet radio = partial success?

After a long and pointless wild-goose chase through different versions of ffmpeg and libav ... I got my internet radio thing to work from JavaFX.

It turned out to be at least partly a problem with my proxy code. The Icecast stream doesn't include a Content-Length header (because, you know, it's a stream), and this was causing libfxplugins to crash as in my previous post on the subject. It shouldn't really cause a crash - at most it should throw a protocol error exception. Not sure if it has JavaFX security implications (if media on a web page loaded through the web component goes through the same mechanism, it certainly does - Update: I filed a bug in JIRA, and it seems to have been escalated - even at this late stage of Java 8).

If I add a made-up Content-Length value I at least get the music playing, but the 'user experience' isn't very good because the player seems to change how much it pre-buffers depending on the reported length (rather strange, I think). If I report 1-10M it starts after a few seconds, but if I report some gigantic number it buffers for minutes (or maybe it never starts; I lost patience). The problem is that the data length is honoured (as it should be), which means that too small a number causes the stream to finish quickly. And it seems to read the same stream twice at the same time, which is also odd.
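The work-around itself is only a few lines in the proxy. This is a hypothetical sketch, not my actual proxy code - the class name, the helper, and the particular fake length are all made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the work-around: if the upstream (streaming)
// response carries no Content-Length, inject a fabricated one so the
// media stack doesn't fall over. Names and the value are invented.
public class HeaderFixer {
    // a few MB seems the sweet spot: enough that playback lasts a
    // while, small enough that pre-buffering doesn't stall for minutes
    static final long FAKE_LENGTH = 8 * 1024 * 1024;

    static List<String> fixHeaders(List<String> headers) {
        List<String> out = new ArrayList<>(headers);
        boolean present = false;
        for (String h : headers)
            if (h.toLowerCase().startsWith("content-length:"))
                present = true;
        if (!present)
            out.add("Content-Length: " + FAKE_LENGTH);
        return out;
    }
}
```

The trade-off described above is all in the choice of FAKE_LENGTH: too small and the stream ends early, too large and the pre-buffering never finishes.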

To be blunt I find it pretty perplexing 'in this day and age' that javafx doesn't support streaming media to start with. Or that g-"so-called-streamer" seems to be the reason for this. If I could work out how to compile from the openjfx dist I would have a poke, but alas ...

I was going to go ahead with the application anyway, but because of this streaming issue I think I won't - or if I do, I won't be using JavaFX to do the decoding, which is a bit of a pain.

Update 2: The bug has just been patched - 4 working days from reporting and, if the OpenJDK schedule is still correct, just 2 weeks from release. So I guess it was reasonably important, and even just mucking about has value.

Tuesday, 25 February 2014

Simple Java binding for OpenCL 1.2

Well, I have it to the point of working - it still needs some functions filled out, plus helper functions and a bit of tweaking, but it's mostly there. So far it's under 2KLOC of C and less of Java. I went with the 'every pointer is 64 bits' implementation, using non-static methods, and passing objects to the JNI rather than the pointers (except for a couple of APIs). This allows me to implement the raw interface fully in C with just an 'interface' in Java - and thus write a lot less code.

Currently I'm mapping a bit closer to the C API than JOCL does. I'm only using ByteBuffers to transfer memory asynchronously; for any other array arguments I'm just using arrays.

This example uses the raw API with no helpers coming into play - there are some obvious simple ones to add which will make it a bit more comfortable to use.

// For all the CL_* constants
import static au.notzed.zcl.CL.*;


  CLPlatform[] platforms = CLPlatform.getPlatforms();
  CLPlatform plat = platforms[0];
  CLDevice dev = plat.getDevices(CL_DEVICE_TYPE_CPU)[0];
  CLContext cl = plat.createContext(dev);
  CLCommandQueue q = cl.createCommandQueue(dev, 0);

  CLBuffer mem = cl.createBuffer(0, 1024 * 4, null);

  CLProgram prog = cl.createProgramWithSource(
    new String[] {
      "kernel void testa(global int *buffer, int4 n, float f) {" +
      " buffer[get_global_id(0)] = n.s1 + get_global_id(0);" +
      "}"
    });

  prog.buildProgram(new CLDevice[]{dev}, null, null);

  CLKernel k = prog.createKernel("testa");

  ByteBuffer buffer = ByteBuffer.allocateDirect(1024 * 4).order(ByteOrder.nativeOrder());
  k.set(0, mem);
  k.set(1, 12, 13, 14, 15);
  k.set(2, 1.3f);
  q.enqueueWriteBuffer(mem, CL_FALSE, 0, 1024 * 4, buffer, 0, null, null);
  q.enqueueNDRangeKernel(k, 1, new long[] { 0 }, new long[] { 16 }, new long[] { 1 }, null, null);
  q.enqueueReadBuffer(mem, CL_TRUE, 0, 1024 * 4, buffer, 0, null, null);

  IntBuffer ib = buffer.asIntBuffer();
  for (int i = 0; i < 32; i++) {
    System.out.printf(" %3d = %3d\n", i, ib.get());
  }

Currently CLBuffer (and CLImage) is just a handle on the cl_mem - it has no local meta-data or a fixed Buffer backing. The way JOCL handles this is reasonably convenient, but I'm yet to decide whether I will do something similar. Whilst it may be handy to have local copies of data like 'width' and 'format', I'm inclined to just have accessors which invoke the GetImageInfo call instead - it might be a bit more expensive, but redundant copies of data aren't free either.

I'm not really all that fond of the way JOCL honours the position() of Buffers - it kind of seems useful but usually it's just a pita. And manipulating it from C is also a pain. So at the moment I treat them as one would treat malloc(), although I allow an offset to be used where appropriate.

Such as ...

public class CLCommandQueue {
   native public void enqueueWriteBuffer(CLBuffer mem, boolean blocking,
      long mem_offset, long size,
      Buffer buffer, long buf_offset,
      CLEventList wait,
      CLEventList event) throws CLException;
}

Compare to the C api:

extern CL_API_ENTRY cl_int CL_API_CALL
clEnqueueWriteBuffer(cl_command_queue   /* command_queue */, 
                     cl_mem             /* buffer */, 
                     cl_bool            /* blocking_write */, 
                     size_t             /* offset */, 
                     size_t             /* size */, 
                     const void *       /* ptr */, 
                     cl_uint            /* num_events_in_wait_list */, 
                     const cl_event *   /* event_wait_list */, 
                     cl_event *         /* event */) CL_API_SUFFIX__VERSION_1_0;
In C, "ptr" can just be adjusted before you use it, but in Java I need to pass buf_offset to allow the same flexibility. It would have been nice to be able to pass array types here too ... but then I realised that these calls can run asynchronously, which doesn't work from JNI (or doesn't work well).

I'm still not sure whether the query interface should be based only on the type-specific queries implemented in C, or whether I should have helpers for every value on the objects themselves. The latter makes the code size and maintenance burden a lot bigger for questionable benefit. Maybe just do it for the more useful types.

I haven't yet done the callback stuff or native kernels (I don't quite understand those yet), but most of that is fairly easy apart from some resource-tracking issues that come into play.

Of course now i've done 90% of the work i'm not sure i can be fagged to do the last 10% ...

more on JNI overheads

I wrote most of the OpenCL binding yesterday but now i'm mucking about with simplifying it.

I've experimented with a couple of binding mechanisms, but they all have various drawbacks. They work in basically the same way, in that there is an abstract base class for each type, then a concrete platform-specific implementation that defines the pointer holder.

The difference is how the jni C code gets hold of that pointer:

Passed directly
The abstract base class defines all the methods, which are implemented in the concrete class, which just invokes the native methods. The native methods may be static or non-static.

This requires a lot of boilerplate in the java code, but the C code can just use a simple cast to access the CL resources.

C code performs a field lookup
The base class can define the methods directly as native. The concrete class primarily is just a holder for the pointer value.

This requires only minimal boilerplate, but the resources must be looked up via a field reference. The field reference is dependent on the concrete type though.

C code performs a virtual method invocation.
The base class can define the methods directly as native. The concrete class primarily is just a holder for the pointer value.

This requires only minimal boilerplate, but the resources must be looked up via a method invocation. Here, though, the lookup is independent of the concrete type.

The last is kind of the nicest - in the C code it's the same amount of effort (coding-wise) as the second but allows for some polymorphism. The first is the least attractive as it requires a lot of boilerplate - 3 simple functions rather than just one empty one.

But a big chunk of the OpenCL API is dealing with mundane things like 'get*Info()' lookups, and to simplify its use I came up with a number of type-specific calls. Rather than write these for every possible type, I pass a type-id to the JNI code so a single function works for all of them. This works fine, except that I would like to have separate CLBuffer and CLImage objects - and in this case the second implementation falls down.
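The type-id idea looks something like this in plain Java. To be clear, in the binding the switch lives inside a single JNI C function; this pure-Java stand-in (with invented names and stub queries) just shows the dispatch shape:

```java
// Illustrative stand-in for the type-id dispatch. In the real binding
// the switch is one JNI C function; every name here is invented.
public class InfoQuery {
    static final int TYPE_LONG = 0;
    static final int TYPE_STRING = 1;

    // one entry point serves every get*Info() query; the type-id tells
    // the native side how to interpret and box the result
    static Object getInfo(int param, int typeid) {
        switch (typeid) {
        case TYPE_LONG:
            return queryLong(param);
        case TYPE_STRING:
            return queryString(param);
        default:
            throw new IllegalArgumentException("unknown type-id " + typeid);
        }
    }

    // stand-ins for what would be native queries
    static Long queryLong(int param) { return (long) param; }
    static String queryString(int param) { return "param-" + param; }
}
```

The pay-off is that adding a new queryable type means one more case in C, not another pile of per-type native methods in Java.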

To gain more information on the trade-off involved I did some timing on a basic function:

  public CLDevice[] getDevices(long type) throws CLException;

This invokes clGetDeviceIDs twice (first to get the list size) and then returns an array of instantiated wrappers for the pointers. I invoked it 10M times for each binding mechanism.
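This isn't the original harness, but the measurement was roughly this shape - time a tight loop over the call under test, with a sink so the JIT can't drop it:

```java
// Roughly the shape of the measurement (not the original harness):
// time many invocations of the call under test and report nanoseconds.
public class Bench {
    interface Work {
        long run();
    }

    static long timeNanos(Work w, int iters) {
        long sink = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < iters; i++)
            sink += w.run();        // accumulate so the call can't be elided
        long t1 = System.nanoTime();
        if (sink == Long.MIN_VALUE)
            System.out.println(sink);   // defeat dead-code elimination
        return t1 - t0;
    }
}
```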

Method                   Time
 pass long                13.777s
 pass long static         14.212s
 field lookup             14.060s
 method lookup            16.252s

So, some interesting points here. First, static method invocations appear to be slower than non-static ones, even when the pointer isn't being used. This is somewhat surprising, as 'static' methods seem to be quite popular as a mechanism for JNI bindings.

Second, a field lookup from C doesn't cost much more than a field lookup in Java.

Lastly, as expected, the method lookup is more expensive - and if one considers that the task does somewhat more than the pointer resolution, it is quite significantly more expensive. So much so that it probably isn't the ideal solution.

So ... it looks like I may end up going with the same solution I've used before: just use the simple field lookup from C. Although it's slightly slower than the first mechanism, it's a lot less work for me without a code generator and produces much smaller classes either way. I'll just have to implement the polymorphic getInfo methods some other way: using IsInstanceOf(), or just using CLMemory for all memory types. In general, performance is not an issue here anyway.

I suppose to do it properly I would need to profile the same stuff on 32-bit platforms and/or Android as well. But right now I don't particularly care and don't have any capable hardware anyway (apart from the parallella). I wasn't bothering to implement the 32-bit backend so far anyway.


This is just more detail on how the bindings work. In each case objects are instantiated from the C code - so the Java doesn't need to know anything about the platform (and is thus automatically platform-agnostic).

First is passing the pointer directly. The drawback is all the bulky boilerplate - it looks less severe here as there is only a single method.

public abstract class CLPlatform extends CLObject {

    abstract public CLDevice[] getDevices(long type) throws CLException;

    class CLPlatform64 extends CLPlatform {
        final long p;

        CLPlatform64(long p) {
            this.p = p;
        }

        public CLDevice[] getDevices(long type) throws CLException {
            return getDevices(p, type);
        }

        native CLDevice[] getDevices(long p, long type) throws CLException;
    }

    class CLPlatform32 extends CLPlatform {
        final int p;

        CLPlatform32(int p) {
            this.p = p;
        }

        public CLDevice[] getDevices(long type) throws CLException {
            return getDevices(p, type);
        }

        native CLDevice[] getDevices(int p, long type) throws CLException;
    }
}

Then having the C look up the field. The drawback is that each concrete class must be handled separately.

public abstract class CLPlatform extends CLObject {
    native public CLDevice[] getDevices(long type) throws CLException;

    class CLPlatform64 extends CLPlatform {
        final long p;

        CLPlatform64(long p) {
            this.p = p;
        }
    }

    class CLPlatform32 extends CLPlatform {
        final int p;

        CLPlatform32(int p) {
            this.p = p;
        }
    }
}

And lastly, having a pointer-retrieval method. This has lots of nice coding benefits ... but too much in the way of overheads.

public abstract class CLPlatform extends CLObject {
    native public CLDevice[] getDevices(long type) throws CLException;

    class CLPlatform64 extends CLPlatform implements CLNative64 {
        final long p;

        CLPlatform64(long p) {
            this.p = p;
        }

        public long getPointer() {
            return p;
        }
    }

    class CLPlatform32 extends CLPlatform implements CLNative32 {
        final int p;

        CLPlatform32(int p) {
            this.p = p;
        }

        public int getPointer() {
            return p;
        }
    }
}
Or ... I could of course just use a long for storage on 32-bit platforms and be done with it - the extra memory overhead is pretty much insignificant in the grand scheme of things. It might require some extra work on the C side when dealing with a couple of the interfaces but it is pretty minor.

With that mechanism the worst-case becomes:

public abstract class CLPlatform extends CLObject {
    final long p;

    CLPlatform(long p) {
        this.p = p;
    }

    public CLDevice[] getDevices(long type) throws CLException {
        return getDevices(p, type);
    }

    native CLDevice[] getDevices(long p, long type) throws CLException;
}

Actually I can then move 'p' to the base class, which simplifies any polymorphism too.
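That version might look something like this - a minimal sketch with invented class names, assuming a common base class holds the (always 64-bit) pointer:

```java
// Sketch: the pointer lives in the common base class; on 32-bit
// platforms the top half just goes unused. Names are illustrative,
// not the binding's real classes.
public abstract class NativeHandle {
    final long p;       // native pointer, widened to 64 bits

    NativeHandle(long p) {
        this.p = p;
    }
}

class PlatformHandle extends NativeHandle {   // hypothetical concrete type
    PlatformHandle(long p) {
        super(p);
    }
}
```

With one field at a fixed place in the hierarchy, the C side only ever needs one field lookup regardless of the object's concrete type - which is the polymorphism win.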

I still somewhat like the second approach for a hand-coded binding, since it keeps the type information and allows all the details to be hidden in the C code, where it's easier to hide using macros and so on. And the Java becomes very simple:

public abstract class CLPlatform extends CLObject {
    CLPlatform(long p) {
        super(p);
    }

    public native CLDevice[] getDevices(long type) throws CLException;
}


Another problematic part of the OpenCL API is cl_event. It's actually a bit of a pain to work with even in C, and the idea doesn't really map well to Java at all.

I think I came up with a workable solution that hides all the details without too much overhead. My initial solution was to have a growable list of items (the same as JOCL) managed on the Java side. It's a bit messy on the C side but really messy on the Java side:

public class CLEventList {
   static class CLEventList64 {
      int index;
      long[] events;
   }

   void enqueueSomething(..., CLEventList wait, CLEventList event) {
       CLEventList64 wait64 = (CLEventList64)wait;
       CLEventList64 event64 = (CLEventList64)event;

       enqueueSomething(...,
           wait64 == null ? 0 : wait64.index, wait64 == null ? null : wait64.events,
           event64 == null ? 0 : event64.index, event64 == null ? null : event64.events);

       if (event64 != null) {
           // ... record the output event
       }
   }
}
Yeah, maybe not - not when there are 20-odd enqueue functions in the API.

So I moved most of the logic to the C code - the logic isn't really any different on the C side, it just has to do a couple of field lookups rather than take arguments - and I added a method to record the output event.

public class CLEventList {
   static class CLEventList64 {
      int index;
      long[] events;

      void addEvent(long e) {
         events[index++] = e;
      }
   }

   void enqueueSomething(..., CLEventList wait, CLEventList event) {
      // all the mess now lives in the C code
   }
}

UserEvents are still a bit of a pain to fit in with this but I think I can work those out. The difficulty is with the reference counting.

Sunday, 23 February 2014


As a 'distraction' last night I started coding up a custom OpenCL binding for Java. This was after sitting and staring at my PC for a few hours wondering if I'd simply given up and 'lost the knack'. Maybe I have. Actually, in hindsight I'm not sure why I'm doing it, other than as some relatively 'simple' distraction to keep me busy. It's quite simple because it's mostly a lot of boilerplate mapping the relatively concise OpenCL API to a small number of classes, and there isn't too much to think about.

Not sure i'll finish it actually. Like I said, distraction.

But FWIW I took a different approach to the binding this time - all custom code, trying to use/support native Java types where possible (rather than forcing ByteBuffer for every interaction), etc. Also a different approach to the 32/64-bit problem compared to previous JNI bindings - spreading the logic between the C and Java code by having the C make the decisions about constructors, but having the Java define the behaviour via abstract methods (it's more obvious than I'm able to describe right now). Well, I got as far as some of CLContext, but there's still a day or two's work to get it 'feature complete', so we'll see if I get that far.

Was a nice day today, so after a shit sleep (I think the dentist hit some nerves with the injections and/or bruised the roots - the original heat-sensitive pain is gone but now I have worse to deal with) I decided to try to do some socialising. I dropped by some mates' houses unannounced, but I guess they were out doing the same thing; I did catch up with a cousin I haven't seen properly for years though. Pretty good for just off 70 - wish I had genes from his part of the family tree. Then had a few beers in town (and caught up with him and his son by coincidence) - really busy for a Sunday, no doubt due to the Fringe.

Plenty of food-for-the-eyes at least. Hooley dooley.

Saturday, 22 February 2014

Small poke

Started moving some of my code and whatnot over to the new pc and had a bit of a poke around DuskZ after fixing a bug in the FX slide-show code exposed by java 8.

I'm just working on putting the backend into Berkeley DB. Took a while to remember where I was at, then I made some changes to add a level of indirection between item (types) and where they exist (on-map or in-inventory). And then I realised I need to do something similar for active objects, but hit a small snag on how to do it ...

I'm trying to resolve the relationship between persistent storage and active instances whilst maintaining the class hierarchy and trying to leverage indices and referential integrity from the DB. And trying not to rewrite huge chunks of code. I think i'm probably just over-thinking it a bit.

Also now have too many importers/exporters for dead formats which can probably start to get culled.

So yeah, quick visit but (still) need to think a bit more.

Wednesday, 19 February 2014

Kaveri 'mini' pc

Yesterday I had to go somewhere near a PC shop I use, so I dropped in and ordered the bits for a new computer with an A10-7850K APU. I got one of the small Antec cases (ISK300-150) - for some reason I'd thought it had an external PSU, so I hadn't been considering it. I'm really over 'mini' tower cases these days, which don't seem too mini. Ordered a 256GB SSD - not really sure why now I think about it; my laptop only has a 100GB drive, and unless I keep movies or lots of DVD ISOs on the disk that is more than enough. 8GB of DDR3-2133 RAM, ASRock ITX board. Hopefully the 150W PSU will suffice even if I need to run the APU in a lower-power mode, and hopefully everything fits anyway. Going to see how it goes without an optical drive too.

I guess from being new, using some expensive bits like the case, being in Australia, and not buying from the cheapest place to get the parts ... it still added up pretty fast for only 5 items: about $850 just for the computer, with no screens or input peripherals. *shrug* It is what it is. As I suspected, the guy said nobody around here really buys AMD of late. Hopefully the HSA driver isn't too far away either; I'm interested in looking into HSA-enabled Java, plain old OpenCL, and - probably the most interesting to me - other ways to access HSA more directly (assuming Linux will support all that too ...). Well, when I get back into it anyway.

Should hopefully get the bits tomorrow arvo - if I'm not too rooted after the "root canal stage 1" in the morning (pun intended). I'll update this post with the build and OS install - going to try slackware on this one. I'm interested to see EFI for the first time and/or whether there will be problems because of it; I'm no fan of the old PC BIOS and it's more than about time it died (asrock's EFI GUI looks pretty garish from pics, mind you). Although if M$ and intel were involved I'm sure they managed to fuck it up somehow (beyond the obvious mess with the take-over-your-computer encrypted boot stuff. I'm pretty much convinced this is all for one purpose: to embed DRM into the system. I have a hunch that systemd will also be the enabler for this to happen to GNU/Linux. Making life difficult for non-M$ OSs was just a bonus.)

PS This will be my primary day-to-day desktop computer; mostly web browsing + email, but also a bit of hobby hacking, console for parallella, etc.

Dentist was ... a little disturbing. He wasn't at all confident he was even going to be able to save the tooth until the last 10 minutes, after poking around for an hour and a half. He was just about to give up. It was for resorption - and amounted to a very deep and very fiddly filling that went all the way through the top and out the side below the gum-line. Apart from being pretty boring it wasn't really too bad, except for a couple of stabs of pain when he went into the nerves before blasting them with more drugs ... until the injections wore off, that is. Ouch - I think mainly just bruising from the injections. Well, I hope it was worth it and it doesn't just rot away after all that; even with a microscope I don't know how he could see what was going on. Have to go back in 3 months for the root canal job :-( That's the expensive one too.

Anyway, had a few beers then went and got the computer bits.

Case is an antec ISK300-150. The in-built PSU is about 1/4 the size of a standard ATX PSU.

Motherboard is an ASRock FM2A88X-ITX - I haven't bought one for a while, so it seems to have an awful lot of shit on it. Not sure what use HDMI-in is ...

And everything fits fairly well. The main pain was the USB3 header connector, which is 2 fat cables and a tall connector. This is the first SSD I've installed and it's interesting to see how small/light they are. The guy in the computer shop was originally going to sell me some Crucial RAM, but I went with a lower-profile G.Skill kit - and just as well, I don't think the other would have fit.

Apart from that everything fits in pretty easy (I might cable-tie some of the cables to the frame though). I updated the firmware using the network bios update thing - which was nice.

Then I booted the slackware64 usb image, created the partitions using gdisk, and started installing directly from my ISP's slackware mirror. A bit slower than doing it locally but I'm in no rush.

So far it's so boringly straightforward there's nothing really to report. I presume the catalyst driver will be straightforward too.

I have an old keyboard I intend to use, and I was surprised the mobo comes with a PS/2 socket (I was going to use a USB converter). I got the keyboard at a mysterious pawn shop one Saturday afternoon - mysterious because I've never been able to find the shop again despite a few attempts. I must've wildly mis-remembered where it was. It's got a steel base and no m$ windoze keys.

Time passes ... (installs via ftp) ...

Ok, so it looks like I did make a mistake: one must boot the USB stick in EFI mode for it to install the EFI loader properly. Initially it must've booted in BIOS mode automagically, so it didn't prompt for the ELILO install. I just rebooted from the stick and ran setup again; it set up EFI and the BIOS boot menu fine.

And X took me a little while - the APU requires a different driver from the normal ones (search for the APU catalyst driver). And ... well, my test monitor's HDMI-to-DVI cable had slipped out a bit and caused some strange behaviour: it worked fine in text mode and for the EFI interface, but turned off when X started (how bizarre). Once I seated it properly it worked as expected. Now hopefully that HSA driver isn't too far away.

Now i've got it that far I don't feel like shuffling screens and cables around to set it up, maybe tomorrow.

Must've had too many coffees yesterday at the pub, I ended up installing the box in a 'temporary' setup and playing around till past midnight.

I've got another workstation on the main part of the desk so i'm just using the return which is only 450mm deep - it's a bit cramped but I think this will be ok - not sure on the ergonomics yet. This is where I had my laptop before anyway. There may be other options too but this will do for now.

And yeah, I really did buy some 4:3 monitors, although I got them a few years ago (at a slight premium over wide-screen models). For web or writing or pretty much anything other than playing games or watching movies, it's a much better screen shape. These 19" models have about as much usable space as a 24" wide-screen in much less physical area, and an even higher resolution at 1600x1200.

I also had a bit of a play with the thermal throttling and so on. With no throttling it gets hot pretty fast - the AMD heatsink is a funny vertical design that doesn't allow cross-flow from the case fan, so it doesn't work very well. Its radial design also seems to cause extra fan noise when it ramps up. The case fan is a bit noisy too. If I turn the case fan up flat-out it causes the CPU fan to slow down to a reasonable level, so I guess I could operate it that way if I really wanted the speed.

Throttling at 65W via the bios seems a good compromise, I can set the case fan to middle-speed (or low if i'm not doing much) and the machine is only about 10-15% slower (compiling linux).

I knew it was going to be a compromise when going for such a small case so this is ok by me.

Hmm, maybe I spoke too soon - although the X driver is working, GL definitely isn't. For whatever reason GL seemed to point to the wrong version (the link pointed to fglrx, but it was the old one).

But after fixing that ... nothing GL works at all. Just running glxinfo causes artifacts to show up, and anything that outputs graphics == instant (X) crash.

Trying newer kernels.

Initially I had no luck - I built a 3.12.12 kernel using the 'huge' config from testing/; the driver build failed due to the use of a GPL symbol. It turns out that was because kernel debugging was turned on in that config. Removing that let me build the driver.

While I was building 3.12.12 I also tried 3.13.4 ... but the driver interface won't build with that one, and it looks like it needs a patch. Or I missed some kernel config option in the byzantine xconfig (there's something that definitely hasn't improved over the years).

So with 3.12.12 and a running driver, GL still didn't seem to work and crashed as soon as I started any GL app. I was about to give up. Then, as one last thing, I tried turning the IOMMU back on; and voilà ... so far so good. Or maybe not - that lasted till the next reboot. Tried an ATX PSU as well. No difference.

Blah. I have no idea now?

Then I saw a new BIOS had come out between when I updated it yesterday and today, so I tried that.

Hmm, seems to be working so far. I reset the BIOS to defaults (oops, bad idea - it wiped out the EFI boot entry), fixed the boot entry, and fixed the RAM speed (the default uses 1600 instead of 2133). It doesn't need the IOMMU on, and doesn't seem to need thermal throttling to keep running OK. So maybe it was a bung BIOS.

Bloody PeeCees!

I decided to do a little cleanup of the cables to help with airflow and tidy up the main volume. One of the support chips had been getting a bit warm (the heatsink on the left corner of the mobo). The whole drive frame is a bit of a pain for that tiny SSD.

Since the BIOS update it's been running a lot cooler anyway. In general use I might be able to get away with the case fan on its lowest setting with all the default BIOS settings.

Update: It's been running solid for the 3 days since I put it back together. During 'normal use' the slowest fan setting is more than enough, and it runs quiet and cool (normal use == browsing with 20+ tabs, PDF viewers, NetBeans, a pile of xterms). And it's quite novel having a 'suspend to RAM' option that works reliably on a desktop machine (like I said: it's been a long time since I built a PC, and back then that just didn't work properly). Yay for slackware!

Monday, 17 February 2014

Well that kinda sucked ...

Yeah so ... nice birthday present: an hour in a dentist's chair while he tries to cause pain repeatedly - to isolate what the problem was. Apparently my pain threshold is lower than it should be because I don't go to the dentist regularly (somehow I don't follow that logic; and/or just as well I never fucking went to the dentist if getting used to sharp pain is one side-effect; it hurt a fucking lot more than a broken arm, that's for sure). And after all that, the original dentist had the correct diagnosis - the specialist just kept saying how unusual it all was. Just what one wants to hear ... Just as well humans can't actually remember pain.

Apparently the sleep apnoea device can't be a cause of the problems, and otherwise I have rather robust cavity-free teeth (which I'm pretty pleased with given how long it's been since I've been to a dentist).

Anyway, I'm now queued up for an hour-long operation later in the week to do some pretty nasty drilling which basically kills the inside of the tooth. What can you do, eh ...

Then I did a bit of a pub crawl on the way home. Probably should do that more often if only to perve on the hot pretty things walking past.

I no longer have a mobile, so I had no way to ping my friends (yes, I do have some) to catch up for a birthday drink; so it just turned into a pretty depressing and isolated few hours in the end. I wasn't sure how I was going to be after the appointment, so I didn't organise anything in advance, and I haven't been out for ages either.

slackware update oops

So I decided to update one of my old laptops the other day (IBM Thinkpad T40), I only use it for web browsing and it's running slackware 14.0.

Its CPU is quite old and doesn't support PAE kernels ...

But for some reason slackpkg decided to change to the PAE kernel when it ran lilo. Actually it's kind of funny that it still uses lilo; I thought that died a decade ago. By luck I found the install DVD relatively easily and managed to boot into single-user mode against the on-board HDD and point lilo to the correct kernel. Booting off DVD was a bit flakey though - I had to power down and disconnect the mains between reboots, otherwise the screen stayed black.

It's only got 512MB RAM, which sadly isn't really enough to do much these days. I looked into buying some more SODIMMs, but it looks like it isn't worth it (around here, anyway - if you can even find PC2100 SODIMMs). For a 10-year-old machine it still functions pretty well otherwise. Not sure it's worth upgrading the memory on my X61 Thinkpad either, and its fan seems to be getting worse - I don't want to have to pull the whole thing apart to see if I can fix that.

I've been continuing to look into getting a small-as-possible ITX Kaveri machine going to replace my day-to-day use of the X61. At first I was disappointed in the sizes of case available, but there are one or two that will probably suffice - with PSU, heatsink, HDDs and air-flow you just can't make it too small. Unless Gigabyte come out with a Kaveri-based BRIX anyway. Or I get keen enough to make my own case using a low-profile PSU. Most PC shops around here are all Intel, so the AMD stuff isn't that common, although it's still possible to get it. The fanless heatsink cases (can't remember the brand) looked interesting, until I realised they needed an external power brick and cost a bit too much. Not particularly attractive either. I have another workstation, but that's in a less convenient room and is pretty much relegated to being a shitty/unreliable MythTV server atm, so it can stay there (not sure why I bother; I haven't watched any recordings from it for months).

But for now I'm more preoccupied with dental issues. After some fuckups when I got braces back in my youth I don't have much enthusiasm for dentists, but after having a problem that wasn't fixing itself I finally went to one (after, err, 25 years or something) and found out I need root canal work done; well whatever, so long as it gets fixed. Seeing a specialist in a couple of hours. My shitty teeth have always been a pain since I was a kid and I have a feeling this won't be the last of it (and looking back, I'm sure it affected my life trajectory somewhat. There's a reason I only smile when I'm drunk), and I'm pretty sure the sleep apnoea device didn't help. At least the local dentist was quite good.

Wednesday, 12 February 2014

javafx + internet radio = sigh

I thought I'd look at porting the Android internet radio player I have over to JavaFX; although jjmpeg is an option I thought I would first try the JavaFX MediaPlayer. It seemed like it might be a simple distraction on a hot day.

Unfortunately it's a no go so far.

Firstly, it just doesn't accept the protocol from the shoutcast server I'm using: the server sends a response with ICY 200 OK rather than HTTP/1.1 200 OK. Because MPEG audio is streamable, players normally just ignore the non-standard status line and keep going even if they aren't aware of the shoutcast protocol (which they usually are).

Then I realised it requires the ffmpeg 0.10 libraries to function (the same version jjmpeg/head uses - which seems a strange choice for such a new product) - so I pointed it at my local build of those and at least got it playing local mp3 files ...

Since I had that going I hacked up a quick proxy server (something I wanted to look at anyway for Android, so the player can extract the current song info from the stream) and after mucking about with some silly bugs I managed to get it to the point of segfaulting. If I save the content of the stream it will play that fine; it just seems to have trouble loading from a network. The proxy server just rewrites the ICY response status line to say HTTP/1.1 instead.
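For what it's worth, the rewrite itself is trivial. Below is a minimal sketch of the kind of status-line fix-up such a proxy performs - illustrative code only, not the actual proxy from this post; the class and method names, and the idea of tacking on a fake Content-Length, are my own assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: rewrite a shoutcast/ICY response header block
// so that a strict HTTP client (such as JavaFX's MediaPlayer)
// will accept it.
public class IcyRewrite {

    // Replaces the "ICY 200 OK" status line with "HTTP/1.1 200 OK"
    // and appends a made-up Content-Length, since a live stream
    // doesn't have one.
    static String rewriteHeaders(String rawHeaders, long fakeLength) {
        String[] lines = rawHeaders.split("\r\n");
        List<String> out = new ArrayList<>();
        for (String line : lines) {
            if (line.startsWith("ICY ")) {
                // keep the status code and reason, swap the protocol tag
                out.add("HTTP/1.1 " + line.substring(4));
            } else {
                out.add(line);
            }
        }
        out.add("Content-Length: " + fakeLength);
        return String.join("\r\n", out) + "\r\n\r\n";
    }

    public static void main(String[] args) {
        String icy = "ICY 200 OK\r\n"
                   + "icy-name: Some Station\r\n"
                   + "Content-Type: audio/mpeg";
        System.out.print(rewriteHeaders(icy, 10_000_000L));
    }
}
```

A real proxy would do this once on the response headers and then pass the audio bytes through untouched (optionally stripping the in-band icy metadata as well).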

Steaming Poo.

I'm using the latest JDK 1.8.0 release candidate as of 12/2/14. I suspect it's a version compatibility issue with my ffmpeg build, or it could just be a bug in the media code - given it works with local files, and particularly since it's using gstreamer, the second would be no surprise to me at all because gstreamer is a pile of shit.

   Stack: [0x84dad000,0x855ae000],  sp=0x855ad1e4,  free space=8192k
   Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
   C  []  cache_has_enough_data+0xc
   C  []  progress_buffer_loop+0xbd
   C  []  gst_task_func+0x203
   C  []  default_func+0x29
   C  []  fileno+0x6c3a1
   C  []  fileno+0x69bd0
   C  []  abort@@GLIBC_2.0+0x5e99

My proxy code is also totally shit so that's always a possible cause, but it shouldn't cause a crash regardless.

Update: The above was on an x86 machine; I also tried on an amd64 machine and it did the same. So probably version incompatibilities. Unfortunately those versions (at least) of ffmpeg can be compiled in binary-incompatible ways even if the library is the same revision, and if it's been built against Ubuntu or Debian, well, they like to mess around with their packages.

Update 28/2/14: So I thought perhaps it was something to do with the Debian mess of using libav instead of ffmpeg. But trying that results in the same crash. I also tried the one that comes with the gst-ffmpeg-0.10.13 package ... but that doesn't include avcodec_open2(). Hmm, so much for version numbers eh.

Also tried ffmpeg 0.8.x and 0.9.x, but they all just crash in the same place.

On my new PC I also tried using the version of '' that comes with gst-ffmpeg-0.10.13 ... but that 'version 53' of ffmpeg doesn't include avcodec_open2(). Actually it looks like that is also using libav, for fuck's sake. Oh, so it seems it's actually using ffmpeg 0.7.2 instead. How odd.

What were Oracle thinking if it is built against libav - a buggy and insecure fork of another project, and a version that is well beyond maintenance at that.

I guess I will keep trying different versions and see if one sticks. Pretty fucked up though - libraries have a version number for a reason, so what's the point if the versions don't actually mean anything? Possibly the situation was compounded by the whole libav fork. Or it might just be a bug in gstreamer-lite.

Time passes ...

Ok, so 0.8.x and 0.9.x also provide '' ... and they all crash in the same place, so quite probably it's just the code in libfxplugins.

Shit, I even tried building openjfx ... but even though it has the source code for ... it doesn't seem to build it and just copies it from the installed JRE. It doesn't help that it uses gradle, which is both slow and opaque.

See follow-up post.

Tuesday, 11 February 2014

Habanero + Lime Cordial

So I finally got off my arse and made it to the beach on a week-day today. 1/2 hour easy ride (it's 41 today, too hot to rush) although I left it a bit late and caught some of the after-school traffic on the way home. Water was clear and cool and there were more people down there than I would have expected - not that any went into water deeper than their chest, and some people were sun-baking (the sun is so friggan hot, wtf would you want to sit right out in it for? In it??). Saw a couple of dolphins swim past slowly about 40m further out.

Anyway a 1/2 hour ride home is enough to get pretty warm so I went for a cool drink (rehydration before I start on some beer) and I only had an experimental bottle of lime cordial I made last time - I dropped half a large ripe red habanero chilli into the bottle when I sealed it. I wasn't sure if it would really be what I was after.

It's ... certainly ... interesting.

As you're drinking it, it's a typical cool refreshing tangy lime-flavoured drink. And then you stop. Your mouth, lips, and throat instantly start to gently and delicately burn.

So it makes you want to have more ...

... Ahh, nice cool refreshing tangy drink. And then you stop. The burning just intensifies.

Which makes you want even more.

Wow I said. Definitely something I'll do again next time I get some limes.

And as with most habanero-based heat, the burning just keeps increasing the more you have; I guess the capsaicin must get stuck in your soft tissues for a long time. I know after cutting a lot up I've had burning fingers for a few days despite aggressive soapy scrubbing - your fingertips burn quite noticeably (to the point of pain) when you press your fingers together, and the harder you press the hotter they feel. And it hasn't affected the flavour: sometimes chillies add a capsicum note, but habaneros have their unique sweet flavour so it is probably just complementing the sugar.

 is not authorised to copy this article?

Weird, looks like this blog has been put onto a site that sounds like 'various' and the posts are made to look like I write for them.

Never heard of them, nor are they permitted to re-post any of my posts.

Lets see if this one makes it on there ...

Monday, 10 February 2014

Internode Freezone Internet Radio Player v2.2

I decided to update the Android Internode Radio Player today. I kept putting it off because I needed to fix up the web page and build a source-code distribution. Fortunately i'd done most of the GPLv3 work already.

More over on the project page @ internode.

This version is GPLv3 and includes the source-code.

Update: I moved the home page.

Friday, 7 February 2014

The tech / enthusiast 'press' - if you're not paying, you're the product.

Just a short thought for the day - if you're not paying for it, you're not the customer but the product. So what does that make the tech and enthusiast 'press' - Tom's Hardware, AnandTech, IGN, YouTube, etc?

The tech and enthusiast press is just another big arm of the PR industry.

Unlike the ABC's charter, they do not exist to entertain and inform their readership; they're there to sell the products which provide the advertising dollars which pay for them to exist in the first place.

Their customers are the advertisers, not the readers.

I used to have some respect for technical articles on AnandTech and the like, but some recent technical articles led me to drop this completely. They convinced me that these sites don't have the technical competency to write such articles in the first place, which leads to the obvious conclusion that they were effectively written by the PR firms of the companies that make the products in question. And what's worse is that the trend toward access journalism means they'll bend over backward (or is that forward) to accommodate PR agendas just to get that next exclusive.

I'm sure some would label such an idea a 'tin-foil hat conspiracy', but that in itself is a typical FUD tactic used to silence dissenting thoughts. It's a fiendishly clever one too: first people are treated as idiots for believing such nonsense, and if the truth really does come out (such as with the recent YouTube M$/EA payola incidents) they're just dismissed as idiots again because 'everybody knew that'.

Some keywords to help you navigate this minefield:


Advertorials and sponsored content

Basically means the entire article is direct from the PR firm, or may as well be. It will only follow set talking-points and won't be overly critical of the product.

Apparently attaching conditions to this type of article is a widespread industry practice which at best leads to selection bias. Sites which don't mind literally prostituting themselves out in this way are going to get more money and crowd out any with ethics (in short, there's no room for ethics in publishing).

I'd rather just have the raw PR material so I can make up my own mind. At least you know where the PR starts and stops.


Previews and exclusives

Here the press has access to information before the general public. The preview may be in a controlled environment and possibly not even hands-on, so even if there were no other influence the previewer may not be getting a complete and accurate picture of the product themselves.

When did you last see a scathing preview? They are just part of the PR path and should be treated as such; nothing wrong with being interested in a product, but previews must be treated with a good dose of scepticism. Always.


Reviews

In general the review copy will be supplied by the vendor: this already introduces a conflict of interest. But reviews may also impact the ability to secure exclusives or previews - which allows advertiser interest to reach beyond the supposed chinese walls between the advertising and editorial sections of a reporting business.

Some sites even take pride in having their reviewers score inconsistently across reviews, which is utterly baffling. The job of the review editor should be to ensure the score at the end of the article actually means something and isn't unduly affected by the tastes, biases, or bribes of the reviewer.

You definitely can't trust 'user' reviews either: apart from widespread fraud there may be other mechanisms at work, such as filtering screens on 'apps' which only direct positive reviewers to the rating mechanism (e.g. Dungeon Keeper), or bribery with virtual junk to encourage distorted scores. And something about the immediacy of the mobile internet has created a 'great hysterical age' well beyond anything in previous human history: so you can't use popularity as any sort of reliable guide either.

This kind of sucks: it means you basically can't trust anything on the internet. There is no way to form an objective opinion when all of the information sources are poisoned. At best, if you are an expert in the field you can form your own opinion, but that will also be biased by your own life experiences (although it probably doesn't matter). The layman has no hope.

I don't think the 'mainstream' press is any better, tbh. Sure there are some journalistic ethics that might prevent the most egregious examples, but papers either exist at the whim of their advertisers (their real customers) or to push the political agenda of the insanely rich (literal meaning) arsehole who owns them for that purpose. And the ethics only cover news reporting anyway, which is only a part of any paper's content.


Well, lack thereof.

I guess I'm on holidays, but I've mostly been too lazy to even sit in the yard drinking cold beer on a hot day ... (it's kind of boring without something to do, and reading tends to put me to sleep).

I've been playing a good bit of Ni No Kuni over the last few days after buying it on PSN - some of it is a bit cringe-inducing and childish but enough of it is quite charming to keep going. The hand-holding never seems to stop (yes snot-nose, I do know that I should cast such-and-such here, you don't need to step me through it every single time), the whole pure-hearted help-everyone mend-hearts stuff is a bit vomit-inducing, and sometimes it seems there are just a few too many short battles every few seconds. But then you don't play JRPGs for the story. Technically it is excellent - short loading times (just 30 seconds to the main screen from the XMB, 5 seconds to load a saved game), very pretty and clean graphics, almost always a solid 25fps (only a bit of tearing sometimes when you're starting on the dragon).

I haven't played a decent RPG since Rogue Galaxy and I keep forgetting Level-5 make much better games technically than Square Enix (the stories are usually a bit better too, if only because you can work out what they are). The only Final Fantasy I really liked was 12, but the loading times for each tiny area sucked. I'm more of the lazy 'grind until the battle is winnable' type so I found FF-13 a pain, since it was easy to get stuck with no way to back-track in order to build up capabilities. It doesn't help that you hate most of the characters. I actually don't mind Lightning - if I was in her situation I would hate all the wankers around me too and just focus on getting shit done (not sure she deserves her own dress-up game though, that seems completely out of character). I thought Vanille was a cutesy bit of fluff until I found out the back story of her coping with something bad in her past - and then I hated her with a passion. Reminds me of someone I knew once; pretty and fakely shallow but ultimately a complete fuck-up living in a detached reality who would simply make shit up to try to please/impress those around her.

I didn't get very far in 10 so maybe I should try again, although the loading times there suck even more. Given how much time these games take as it is, the loading just kills it.

I've been to the beach a couple of times on a weekend, but although I always 'fully intend' to go during the week I never seem to get out of the house. Either too hot or too cold or too windy, but really I'm just too lazy. Today? Hmm, naah.

Although I'm not a complete lazy arse: I finally built a shaded frame over some of the garden - a couple of months too late, as 45 degrees in the shade is enough to do some real damage when there isn't any shade. The garden in general has been pretty disappointing this year. Tomatoes should be nearly over by now but I'm only just getting fruit from the most developed plants (well, I had one ripe fruit). And even then there haven't been many bees around, so lots of the flowers just aren't setting. I have some basil flowering now so that should help. I think I can count on one hand the number of chillies I've picked from last year's plants - the ones I planted this year are about 5cm high after 3 months. I thought I'd be inundated with beans this year as I was last, but even they aren't producing very well, although a lot of that has been the heat. I've got two cucumbers so far and only one plant (of 3) looks like ever fruiting. I got a few sweet corn but I think I'm down to the last one, and some I picked too late; it takes too much space and water to be worth it. No fruit set on the peach tree at all this year, which was a bummer - the weather was wrong at that time of year.

And I somehow completely killed my mandarin tree ;-(. My best guess is a tiny bit of stray glyphosate killed it, but I'm not sure how plausible it is for that to kill a whole tree - either that or something stressed it to death. It had nice juicy and tart fruit too. Maybe I'll drop in a valencia to replace it, or leave the spot bare for a while. I also killed the rosemary I had in a pot - I got sick of keeping it alive by watering it constantly, and it was root-bound anyway so it always grew poorly. I had one come up by seed in another pot though so all is not lost (I had a couple come up in the paving too; for such a tough plant it seems easy to kill). About the only things doing any good have been the annual herbs, so I've been using handfuls of mint/basil/lemon basil/parsley with pretty much every meal (particularly sandwiches, noodles/soups/pasta or curries). Sage is going well too.

It's been too hot to do much interesting cooking, although I baked a cake a few days ago to try to use up some stewed apricots I froze when I had a tree (before that one died - about 5 years ago!). A bit heavy but passable. It's been too hot to brew beer too. I did a couple of batches last month but due to the high temperature at the time they're not going to be beating any personal bests (they sat around 28-30 most of the time, which fucks up the flavour). Still, I might go put some in the fridge for something to do later today.

I haven't turned on my parallella for a few weeks. I thought I was going to get into the FFT hacking there for a bit before I went off on a tangent on the GA + OD stuff, but I can't muster enthusiasm for any coding right now. Since I've been playing games a bit lately I also thought about some game stuff, dusk, or a shoot-em-up; but that's as far as that went. My laptop is a bit of a pain to code on because it needs more memory, the fan rumbles loudly sometimes (a Thinkpad fault) and the hand rest gets hot, so I'm keeping an eye out for a small (micro/mini-ITX?) machine with one of the just-released HSA-capable APUs, but there aren't many of them around here yet.

I guess I just need a break anyway so I shouldn't be too hard on myself. But I think being tired all the time from sleep apnoea has to be a contributing factor, and unfortunately there's not much you can do about it. Losing weight or just exercising is supposed to help, but I had it even when I was a skinny-arsed cunt back in uni riding every day, and it's hard to lose weight when the sleepiness fucks with your appetite and ability to exercise. Makes one a bit fumbly and accident-prone at its worst too. Even something purportedly relaxing such as sitting under a tree on a warm day drinking a cold beer and reading a book can become a bit of a struggle and ends up being not particularly enjoyable.

Tuesday, 4 February 2014

Is VR really a good idea?

It looks like the technology is just about there to create affordable and usable 'virtual reality' hardware for the general public: but as with many other technological advances one has to ask whether the technology is ahead of society's ability to cope.

TV is already a pretty good conversation killer and mobile phones have become little cones of isolation even when "socialising" with friends or family, so how will it play out when your whole field of view (and hearing?) is encased in a helmet?

I can see a lot of aggro from brothers and sisters fighting over the one head-mounted display these things will only be able to support for the time being. And some angry mums when little Johnny or dear little Alice won't come out of his or her bedroom for dinner because they can't even hear the calls (and god knows what they're up to in there). And some pretty boring get-togethers with mates around the TV, getting sea-sick looking at a view from the one player's eyes.

Another more disturbing factor is the continual fine-tuning of the skinner-box trade - games which are pretty much just poker machines / gambling devices for extracting money from the vulnerable. Since those are making such a fuck-ton of money at the moment they are only going to get worse. If people can already get lost in a tiny screen on a phone, how will they cope when they're shut off completely from the outside world? The scope for manipulation of vulnerable or susceptible people is enormous. It's easy to blame people for being weak-minded but it's not entirely their fault: they are being manipulated without even knowing it, and intentionally, by manipulators who know what they're doing.

Or propaganda and religious or ideological indoctrination, both scourges of right-thinking citizens everywhere - cut off from immediate self-correcting factors like someone telling you what a dickhead you are.

As an aside I wonder when hollywood became such an obvious propaganda front for the neocon/zionist agenda? One of the recent transformers movies was on the other night on TV and I couldn't get over just how blatantly propagandist it was at every level: an [unemployed] 'nerd' who saves the day, with a super-model girlfriend, with happy middle-class parents, with nothing but leisure to keep them occupied, government secret organisations [being a good thing, by] protecting the [whole] world from bad stuff [that usually happens in the middle east], to an advanced alien race only wanting to deal with the USA [who are obviously the good guys], even to some [crazy] conspiracy nut not only being believed by everyone but also being incredibly wealthy. I guess we had some of that shit in the 80s and 90s but at least we had some stuff to counter it too (and a bit of fucking humour) and now it's just so overt it's bordering on sick. Although it's been an undercurrent for some time, I suppose it was around the turn of the century that it really took off so brazenly, by taking advantage of public malleability at the time. And the really sick part is they get upset when people don't want to pay to be advertised at and brainwashed (or maybe the sick part is that people go to the effort to get it in the first place). But I digress ...

Back to the VR stuff: potential health issues. Spending many hours staring at a screen with a fixed focal distance can't be good for your eyes. Modern lifestyles are already sedentary enough, and the lack of external vision enforces this even further as you don't have a choice but to sit while doing it, unless you live in a rubber room. And if socially retarded people (like me) can already get caught up in reading books or hacking code well into the next morning, what's going to happen when you can't tell if it is night or even know where you are? I wonder how long until someone dies using one? Or loses their job / fails at school because they'd rather spend time outside of IRL - because let's face it, IRL pretty much sucks for most people at least some of the time. Although it's not like both of these don't already happen with existing technology.

And imagine not being able to skip adverts or mute them - or even look away from them? That's nightmare material.

There are some potentially interesting non-game uses that spring to mind which seem to contradict some of these points, such as remote communication or aids for the mobility-impaired (whether through age or disablement). And others such as training. But most of these will be short-term or irregular uses.

But overall I'm just not sure on the whole idea for entertainment itself. It could be totally bloody awesome or it could be the beginning of the end for western civilisation (civilisations never last forever ...). Ok, probably not the latter, but there are big issues beyond the technological capability itself and those explored by some laughably fantastical stories by Neal Stephenson.

It'll certainly be interesting for a while anyway - a new area of technology to explore. Don't get me wrong, there are some really exciting possibilities for games and other uses that I'm looking forward to trying out one day. But in the end there may need to be mandatory breaks, minimum age restrictions, ambient input (e.g. external cameras or see-through screens/windows) or other tweaks just to protect people from themselves.

I guess we'll see in about a decade, assuming the experiments in the next few years become commercially successful.