Thursday, 28 July 2011
Well I've been a bit quiet of late. I wrote a few blog entries but never got around to publishing them - the mood had changed by the time I finished, or I couldn't get my words arranged in a readable manner ...
I also had the flu for a week, visitors and other distractions, and writer's block for the last few weeks - I was hitting some problems with work which paralleled some of the problems I was hitting with my hobby code, and everything ground to a bit of a halt. Well, such is the way of things. The flu is mostly gone now and I resolved the deadlock with my work code, so perhaps I will get back to hacking again soon.
So today I had a couple of spare hours and the motivation to make jjmpeg work on Windows - maybe if I have that working I can drop xuggle [for my work stuff], which is getting a bit out of date now. Actually the main problem is that it's too much hassle to build, and only available in a 32-bit version - and with the OpenCL code and other issues, the 32-bit JVM limits are starting to cramp the application a bit.
The biggest problem was working out how to compile it, and after a lot of buggerising around I found it was easiest to just install the MinGW 64-bit compiler as a cross compiler - that way I get to keep the nice coding tools I always use and keep myself in Linux. Trying to do any work at all - and particularly development - in Windows is like trying to ride a bike with one leg cut off and a broken arm. Unpleasant, and painful.
Apart from that it was mostly just re-arranging the code to call some simple macros which change depending on the platform - i.e. dlopen/dlsym or LoadLibrary/GetProcAddress. And then a bit of a rethink on how the binaries are built to support multiple targets via a cross compiler.
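The platform macros end up looking something like the following - a minimal sketch only, and the names here (DLLib, dl_open and friends) are illustrative rather than the actual jjmpeg ones:

    #ifdef _WIN32
    #include <windows.h>
    typedef HMODULE DLLib;
    /* resolve libraries and symbols via the Win32 loader */
    #define dl_open(name)     LoadLibrary(name)
    #define dl_sym(lib, sym)  GetProcAddress((lib), (sym))
    #define dl_close(lib)     FreeLibrary(lib)
    #else
    #include <dlfcn.h>
    typedef void *DLLib;
    /* resolve libraries and symbols via the POSIX dynamic linker */
    #define dl_open(name)     dlopen((name), RTLD_LAZY)
    #define dl_sym(lib, sym)  dlsym((lib), (sym))
    #define dl_close(lib)     dlclose(lib)
    #endif

The rest of the binding code then just calls dl_open/dl_sym everywhere; only the library file names differ between platforms.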
I have done very little testing, but when set up properly it found the library and decoded an mp3 file, which is good enough for me.
(and obviously, there will never be windows support for the linux-dvb code, only for the libavformat/libavcodec binding).
Friday, 1 July 2011
OpenCL killer application?
So I've been trying to think of some killer application that OpenCL could enable.
Sure you have video rendering or processing, signal analysis and the like - but for desktop use these sorts of things can already be done. And if it's a little slow you can always throw more cores and/or boxes at it.
But I guess the big thing is hand-held devices. This is probably why the ARM guys are starting to make noise of late: being able to put 'desktop power' into hand-held devices. Still, this is more of an evolutionary change than a revolutionary one - with mobile phones now being pocket computers we all expect that one day they'll be able to do everything we can on bigger machines, with Moore's Law and all that (which relates to the number of transistors, not processing performance).
I was also thinking again about AMD's next-gen designs - one aspect I hadn't fully appreciated is that they can scale up as well as down. Even just a single SM unit with 4x16 SIMD cores running at a modest and battery-friendly clock rate would add a mammoth amount of processing power to a hand-held device. It has some similar traits to the goals behind the CELL CPU - the design forces you to partition your work into chunks that fit on a single SPU. But once that's done, you gain the massive benefit of being able to scale the software up by (almost) transparently executing those discrete units of work on more processors if they're available.
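This is essentially the OpenCL model already: you write the kernel in terms of a single work-item, and the runtime spreads the work-items across however many compute units the device happens to have. A trivial sketch:

    /* each work-item scales one element; how many run concurrently
       is up to the runtime and the hardware it finds */
    __kernel void scale(__global float *data, const float k) {
        size_t i = get_global_id(0);
        data[i] = data[i] * k;
    }

The same kernel runs unchanged on a phone-class part with one compute unit or a desktop part with twenty - only the wall-clock time changes.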
So, I don't think there will be a 'killer application' - software that only becomes possible and popular because of OpenCL (for one, the platform support is going to be weak until the hardware is common, and even then micro$oft won't support it because they're wanker khunts) - rather it will be the hardware application of placing desktop power into your hand (and if such performance is only utilised to play flash games at high resolution, I fear the future of humanity is already lost).
It's funny 'cause it's true ...
From a little while ago, but I just flipped through a few weeks' worth of xkcd the other day and came across it.
When I was doing engineering at uni we talked about the reams of documentation and being able to pre-define the problem to such a degree that the coding itself would be an afterthought - a mere bullet-point to be performed by lowly trained, knuckle-dragging code monkeys somewhere between finalising the design and testing. Of course, this was proven to be immediately impractical during our final year project - and that was about the last time I ever saw an SDD. In one job I had we started with lofty goals of fully documenting it with SRSs and SDDs and the like, but in the end we just ended up with piles of junk. They were complete, and even sometimes up to date, but ultimately useless - they didn't add any value.
In reality of course there are many impediments to such an approach:
- The customer doesn't know what they ultimately want. Ever.
- New ideas come along which change or add requirements.
- You don't know the best way to solve a problem without trying it.
- You don't know where to even start solving problems without plenty of experience.
- The market or other outside circumstances force a change.
- That just isn't how the brain works - you continue to learn every second of every day and that changes how you would solve problems or present them.
- It's slow and too expensive for anyone who has to earn money and not just ask for it (i.e. outside of defence, govt).
It's not that development documentation isn't useful - I wouldn't mind a good SRS myself - but there needs to be a happy medium.
Back to the flow-chart - which to me has a deeper meta-meaning simply by being a flow-chart. The software engineering lecturers scoffed at flow-charts as being obsolete and out of date - yet they seem to be more useful than anything they claimed replaced them.
Personally I try to do it right but sometimes do it fast - because ultimately you always end up having to refresh a significant chunk of the code-base when the customer reveals what they really wanted from the start. Fortunately when I'm in the groove (say 30% of the time?) I can hack so fast and well (not to put tickets on myself, but I can) that the line is a bit blurred - writing and re-refactoring gobs of code on the fly as the design almost anneals itself into a workable solution. Pity I can't do that all the time.
Extra effort is usually worth it, but not always. And sometimes the knack is just knowing when you can get away with taking short-cuts. For isolated code at the tail-end of the call-graph it usually makes little difference so long as it works.
If you throw the front-end away and start from scratch and you have some well-designed code underneath, you can usually re-use most of it. Crappy code is much harder to re-use. But in the earlier stages of a project doing it right can be more of a hindrance - particularly with OO languages, which force you to create good data models to fit the problem, which means even a small change to the problem can be a big change to the data model. Of course, many coders never achieve good data models, so perhaps for them the cost isn't so high - at the price of perpetually low-quality code. Yes, I say data, not code - the data is always more important.
Annealing is probably a good way to describe the software design and maturation process - early stages punctuated by large, fluid changes due to high-energy experimentation, and over time the changes becoming smaller as it matures and solidifies. If the requirements change you have to put it back into the fire to liquefy the structure and reconsider how it fits into the new solution.
Simply bolting on new bits will only create an ugly and brittle solution.