Friday 24 May 2013

on google

So google have decided to disable downloads on google code.

So I have decided to stop using it.

... although as yet I have no concrete plans or timeline for when this decision will take effect.

Whilst they claim it's about abuse, one can only assume that's just a likely-sounding excuse - in reality another straight-up lie from the PR department of a supra-national conglomerate - and that it's really just a way to cut costs and promote their 'drive' service (a useless microsoft/apple-only service as far as i'm concerned).

Nobody seems to have reported that they also gimped their POP interface to gmail a couple of days ago. No more UID support (i.e. the standard UIDL command). This makes POP a lot less reliable/useful as a mail store (although in honesty it was never designed for that purpose). I proceeded to delete all the mail in gmail to help them free up some disk space.
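For anyone who wants to check for themselves, POP's unique-ids come from the UIDL command (RFC 1939), and you can poke at the server by hand. A sketch, assuming openssl is installed - pop.gmail.com:995 is gmail's documented POP3 endpoint, the credentials are obviously placeholders, and the exact responses may differ:

    # ask the server for unique-ids: with UID support, UIDL answers "+OK"
    # plus one "msgnum unique-id" line per message; without it, "-ERR".
    { sleep 1
      echo "USER you@gmail.com"; sleep 1
      echo "PASS your-password"; sleep 1
      echo "UIDL"; sleep 1
      echo "QUIT"; sleep 1
    } | openssl s_client -quiet -crlf -connect pop.gmail.com:995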

I guess over-all the writing is on the wall. We all know that at some point 'google account' will mean 'google+', and blogger may be retired at any time.

So it seems my on-going-but-totally-lax search for alternatives to 'everything google for convenience' just got another big kick up the rump-side.

As my projects are all pretty small and low-volume I might look at a local solution, because every network-based service faces the same problem: it can be taken away or 'redefined' at any moment. I have a couple of beagleboards doing nothing, although getting a running and secure-enough system onto them might be more pain than it's worth.

It's a bit of a pain to have to deal with.

Tuesday 21 May 2013

on build systems

So i'm kind of baffled by gradle.

"power and flexibility of ant" with [enforced] "conventions of maven".

Sounds like it cherry-picked the two worst parts of both, minus the XML!

Actually it looks ok enough for simple projects, but then again pretty much every tool does, because solving simple problems is always ... simple. However I think the decision to implement it in a scripting language is just going to lead to some pretty nasty long-term maintenance problems.

The only valid argument for something like ant is that the configuration files are machine-readable (even if they aren't human-readable!), which can lead to tooling support (ok, ant isn't very machine-readable anyway; i'm just saying the argument could be valid if they did it right). So it's kind of strange that gradle eschews that for something which is about as parseable as a batch file.

Of course it's the flash new kid on the block, so it will go through a rapid adoption phase, but, like every other tool before it, cracks will then start to appear.

I'm also a little baffled by the claim that somehow groovy is just java and so it's easier for java developers. Doesn't look anything like java to me. At all. Actually even if it were true, i think that would be a problem not a benefit. Java is just not the right language to use for the problems that build systems solve.

At least it's better than ant, but that's a pretty low bar. At best ant isn't much better than a 'build-all.sh' file, and demonstrably worse in many ways.

automake

I've put a few hours into getting somewhere on the java automake stuff. However I seem to have got stuck in an extended discussion on how a zip file works. The java build process is so simple I don't think anyone who is only familiar with C can grasp it.
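For context, the entire build for a small project boils down to a couple of commands. A sketch (the directory layout and jar name here are made up for illustration):

    # compile everything under src/ into a classes/ tree ...
    mkdir -p classes
    javac -d classes $(find src -name '*.java')

    # ... then pack that tree into a jar. a jar is just a zip file
    # with an optional META-INF/MANIFEST.MF inside, nothing fancier.
    jar cf project.jar -C classes .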

I guess the main impression I get is that there isn't a particularly strong desire for simplicity vs 'the way we do it', which is a bit frustrating. If I end up with something I wouldn't want to use myself there doesn't seem much point. And given that in the intersection of the sets of 'i write java' and 'i want to use makefiles' and 'using automake isn't utterly and completely out of the question' i'm probably one of about a dozen unique and beautiful snowflakes, there isn't much hope if i'm not interested myself. Actually i may not use it anyway.

So although earlier I was more optimistic, now i'm not sure where it's headed. I have some fragments which do part of the job, but given the difficulty i've had in explaining this simple external stuff i'm not sure I'm mentally up to trying to create and then explain any code inside automake.in. I'm not really that thrilled with the idea of trying to provide a complete patch anyway.

Most (big) projects seem to want every potential contributor to kow-tow to the whims of some god-like maintainer, as if you're the one who should feel privileged that they deign to even entertain the idea of you doing free work for them. I'm ashamed to say this is exactly how we did things in Evolution, and i now regret it. There's quite a difference between a casual contribution and a long-term maintainer. I have no idea if automake is like that, but my patience threshold is pretty low these days, so it wouldn't have to be for me to suddenly not give a shit (i get paid to put up with crap; it's not something I need to volunteer for).

Tuesday 14 May 2013

So I finally wrote a game ...

Ok, so it's just a bash version of hangman for the olimex weekend coding challenge, but it's still a complete game, including opening screen/instructions, a computer brain and even a closing animation on the credits.

Welcome to hang-man bash.

       /--|
       |  o /
       | /|
       | / \
      ---

I was going to do a java version with graphics, or even an android one for the hell of it, but the inner loop of the bash solution was too elegant to not just use that. Yay for grep and sed, and shuf is pretty neat too.
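To give a flavour of it - this is a reconstruction from memory, not a paste from the actual entry, and the word list path and game state values are just illustrative:

    # pick the secret word
    word=$(shuf -n1 /usr/share/dict/words)

    # the 'computer brain' when guessing: keep every dictionary word
    # that still matches the revealed pattern ('.' = unknown), split
    # the survivors into letters, drop ones already tried, and guess
    # the most frequent letter remaining.
    pattern="c..e"
    tried="cest"
    guess=$(grep -x "$pattern" /usr/share/dict/words |
            grep -o . | grep -v "[$tried]" |
            sort | uniq -c | sort -rn |
            sed -n '1s/.*\(.\)/\1/p')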

Although I don't have any olimex hardware I check up on the blog once in a while to see if anything interesting is happening, and the coding challenge is quite a nice little idea. I might suggest something similar for the parallella project.

In other news I thought I would look at trying to improve the Java support in automake as nobody else seems to want to. The main issue is just coming up with a tidy set of conventions and deciding what features it has. I'm hoping to come up with something tidy and useful, but with so many possible solutions it might take a while for a good one to coalesce. An on-going journey.

I've also started doing a bit of paid-for work on libffts after the JNI stuff I contributed. Won't replace my day job but the opportunity arose. An android app is one goal, but more on that later.

Wednesday 1 May 2013

Reading comprehension, hUMA, NUMA, HSA, FSA, WTF?

I really need to find something better to do in my spare time than read ars "tech" nica and the like, but whilst doing a pass over the confusing front-page I came across an article about AMD's hUMA press. At least the front page isn't as bad as anandtech - i'm not sure what 'pipeline stories' are supposed to be, and to be honest i'm not sure why I bother reading a site which is full of computer case and psu reviews (ffs) and otherwise rather personally biased coverage of pretty random topics.

Anyway back to the arsetechnica piece. Pretty lazy article all round but I guess it summarised some of the points.

The real laff is with the comments.

Quite a few people seem to be getting "hUMA" confused with "NUMA". Hint: The N is for "Non". Detail: Non-Uniform Memory Access is pretty much the exact opposite of the Uniform Memory Access that makes up the UMA part of the hUMA acronym.

NUMA is a way to add a lot of memory to a system with a lot of processors without being bottlenecked by concurrent access: each processor gets its own local memory it can reach quickly, at the cost of slower access to everyone else's (this is very much a good thing - it scales very well). UMA instead puts everything in the same memory and just makes that memory fast enough that the concurrent access shouldn't matter ... (but it can't scale anywhere near as well).

The rest of the comments just show that nobody knows what the 'h' means either. Probably understandable - it's a bloody horrid acronym, and the article does nothing to explain what's going on beyond the one set of slides in that press pack. However the information is readily available on AMD's site.

i.e. the h is for HSA, ... which is the other side of the coin. Another mouth-full: Heterogeneous System Architecture (off the top of my head, could be off a bit - i'm not a journalist).

In a nut-shell, AMD and the other HSA co-conspirators are working on turning their custom processors, DSPs, FPGAs, and GPUs into first-class CPU-compatible co-processors. They will all need to share the same virtual (and protected) address space that the CPU does. They will need to support a coherent cache (at some level; L2 at least). Obviously (like duh) this will require operating system support, although apart from the CPU side I suspect it can just be hidden in the driver. Personally I hope the coherency isn't too fine-grained, otherwise it will be a bottleneck on its own.

And the other big part (from the last information I read on it at least) is that HSA uses a common assembly language/binary format/bytecode (HSAIL) which can be re-targeted to different platforms cheaply, at run-time. So if the hardware provides the resources required, it will just run from a single compile. Although I suspect for performance it will have to target 'classes' of hardware, since to get good GPU performance you really need to write things very differently. I presume this will be capability-based, on things like the amount of LDS memory.

Obviously AMD have to do this so that developers are able to target legacy Intel/PC hardware for free as well, since neither Intel nor Nvidia are part of HSA - nor are they likely to be if they have any choice in the matter, given how big a benefit it is to AMD's technology.

I think the commenters are also missing the point on just how much GPUs and CPUs have already converged. CPUs keep getting ever-wider SIMD units (MMX, then SSE, then AVX), as well as 'hyper-threading' and so on. And GPUs now have scalar units running the show, pre-emptive threading (in addition to the super-hyper-threading they already have) and other processor features. The new GPUs will be capable of directly executing other languages like Java or Python or whatever - how those would handle vectorisation is another issue.

Anyway ... man, I hope they can pull it off. Right now working with a GPU is like trying to solve every transport problem with a freight train. Sure you can get a lot of work done, but it's not the best-suited tool for every transport job - sometimes you can just walk. Like everything in the peecee wintel world, getting to this point has been the product of throwing enough hardware and power at a problem until the architectural inefficiencies are inconsequential. This isn't good system design unless you're trying to sell the big hardware parts that drive it (i.e. you're intel).

The technology is great. The challenges are great. The wintel inertia which must be overcome is great too. The challenge of making the hardware easy enough to programme that all developers can take advantage of it ... is nigh on insurmountable.

With lambdas and the parallel collections Java could be a perfect fit. Well, the language will be, at least. With the JVM being so friggan complex, hopefully the implementation won't take a decade to get there, as it did with cpus.