Ahh, game demos. Do they really convince anyone to buy a game? Or do they just convince them not to buy it? I can feel a whinge coming on ...
A couple of new demos on the PSN today: Mercenaries 2 and Fracture. Neither is a game I thought looked interesting enough to buy before I tried its demo, but having tried them, they're even less likely to be swapped for some hard-earned plastic.
And the main reason? Controls. Both let you 'invert y', but not x! For some reason - I think Jak and Daxter - I learnt to use camera controls opposite to the rest of the world.
After a couple of minutes of looking at the floor or the sky or spinning in circles, I gave up on Fracture. It was just too frustrating and annoying - even for a demo. They've obviously put a lot of work into it, and from the small demo I saw it's probably a competent game, but without inverted controls all it ends up being for me is deleted from my hard drive. From what I could tell, the ground-altering mechanic is a little odd, as I expected it to be - a bit neither here nor there - but I guess it could 'work' ok if its use isn't too gimmicky or forced.
Mercenaries 2 wasn't much better. I ran around randomly and somehow ended up where I was supposed to be, but, well, died. It looks like it could be fun, but running around looking the wrong way isn't. I like the stylised graphics, and the explosions are nicely done.
Both suffered from another major turn-off for me too - screen tearing - where they were too memory-strapped or lazy to use multi-buffering. Some devs claim that double-buffering would halve the frame-rate whenever they drop a frame, which is true: if you just miss a frame of a 50fps animation, you have to wait a whole frame before you can flip, so you end up with a 25fps frame-rate and spend roughly half the available time/cpu power doing nothing. But triple buffering doesn't suffer this problem - it's mostly just a memory cost. Well, at least the tearing was only minimal, but it was still there, and it's a visual glitch I've found particularly irritating ever since I first saw it on crappy PC games that didn't have the hardware to easily avoid it on every frame (the way the Amiga hardware worked you had to go out of your way to make things tear, so it was a real shock). It's such a tiny little nod to quality that can't cost more than a tiny fraction of the ginormous bloody budgets they spend these days - I can't see how the art department could sign off on such a sloppy trade-off (versus dropping the texture resolution slightly, for instance).
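The frame-rate arithmetic above can be sketched in a few lines (a toy model only - the refresh rate, numbers and function names are invented for illustration, not from any real engine):

```python
import math

# Toy model of vsync'd frame delivery at 50Hz (20ms per refresh).
REFRESH_MS = 20.0

def effective_fps(render_ms, buffers):
    """Average frame-rate when every frame takes render_ms to draw."""
    if buffers == 2:
        # Double buffering: the flip can only happen on a vblank, so a
        # frame that misses by even 1ms waits for the next refresh,
        # with the hardware sitting idle in the meantime.
        frame_ms = math.ceil(render_ms / REFRESH_MS) * REFRESH_MS
        return 1000.0 / frame_ms
    # Triple buffering: rendering continues into the spare buffer, so
    # throughput is only capped by the refresh rate itself.
    return min(1000.0 / render_ms, 1000.0 / REFRESH_MS)

print(effective_fps(21.0, 2))  # just missed 50fps -> 25.0
print(effective_fps(21.0, 3))  # ~47.6 - only the near-miss is lost
```

Which is the whole point: with a third buffer, just missing the deadline costs you one frame's latency, not half your throughput.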
Ok, yes, they were only demos - and sometimes problems like this get fixed by release time (Burnout?), and often control inversion is also included in the final version - but that doesn't help in evaluating a game from its demo. It's not like I had planned to buy either of those games (I guess they're not my type of games), but the demos didn't help to convince me otherwise.
While I'm on demos, last week we had 'Pure', a sort of trick-bike off-road racing game. Weird choice of game mechanics: you have to do slow, clumsy and hard-to-pull-off 'aerial tricks', otherwise you don't get enough boost/juice or whatever they call it to be able to win a race. Sounds pretty tedious to me. Actually I may have had enough off-road racing with Motorstorm - I'm still not even sure I'll get Motorstorm 2. Well, the local split-screen would be nice, and maybe it'll load tracks and cars faster. Pure does look nice though.
Friday, 19 September 2008
Thursday, 11 September 2008
PS3 Linux
I bought a new HDD for my PS3 yesterday, backed it up and installed it. Very easy process, although the screws holding the drive in its caddy were a bit tight for the jeweller's screwdriver I was using, and it took a couple of hours to back it up and restore it (nearly 50GB used). Disturbingly though, it sometimes seems to start up a bit too slowly, and then goes 'missing' at power-up until a restart. No big deal I suppose - if that's all it does. It's a Western Digital 320GB drive (WD3200BEVT), fwiw. Ahh, I did a search, and it seems to be some sort of interface issue - I tried jumpering the 'RPS' mode on, and so far that looks to have done the job.
I didn't bother backing up the Ubuntu partition, beyond my source tree. I haven't been particularly happy with Ubuntu, and that was even before upgrading to 8.x broke everything. Even after I fixed the boot issue, all it did was leave me with an unusable amount of RAM detected. I had long since got rid of any Ubuntu on my laptops, so I didn't need much of an excuse to jump ship. As I've said elsewhere - I'm sure Ubuntu is just fine for plenty of people, but it certainly isn't for me.
So I spent a bit of time trying to work out what system to install. It was pretty depressing really - it was quite difficult to find ANY quality or useful information at all (or maybe it was that incredibly disturbing video and comments I'd seen earlier in the day?). There are a few blogs and news sites around for PS3 development, but many (most?) of them are quite stale - often started with a flourish and soon forgotten.
Of those still active, the developerWorks forum for Cell development is mostly full of newbies with very basic GNU or C/parallel programming questions, or weird arguments about performance (e.g. 'why is the ppe so slow?', 'how come i can't get the peak theoretical bandwidth in a memcpy?' - sigh). The beyond3d and ps2dev forums seem to be stuck on the fact that the GPU is inaccessible, or relying on and waiting for a couple of guys working on ps3-medialib to deliver some magic, and generally just don't seem to be all that helpful. There are a couple of queries about useful linux distributions, but they are either unanswered or not helpful to me (e.g. they recommended Ubuntu). I was starting to think the whole situation was a lost cause, and certainly nobody seems to be working together toward any common goal. I finally stumbled upon PS3 forums - which seems to be a bit more active, and a few of the sort of questions I was interested in at least had some sort of answer.
Anyway, since there was a decent article on developerWorks about installing FC7, and the IBM SDK 'support', I thought I'd give that a go. Burnt a DVD and away I went - I even checked the media. But unfortunately, when it came to looking for packages it couldn't find the DVD it had booted off, for some reason - so I couldn't get any further. Bit of a waste of time. I couldn't find any mention of this show-stopper on the 'net, so I gave up. I have FC9 on my laptop, and although I'm happy enough that it works, the default setup is far too fat for a PS3 and it took forever to install, so I thought I'd give it a miss.
Someone on ps3forums.com had suggested Ubuntu in reply to a query about which distro to use, but then changed his mind, complained about how much of a time-waster it was, and suggested YellowDog instead. I hadn't really considered YellowDog - it seemed a bit out of date, and, well, just different - but after my experience upgrading Ubuntu to get features I didn't really need, I figured stability and ease of setup would do over bleeding edge. So, YD downloaded (fortunately my ISP mirrors all these DVDs, so the download is as fast as the phone line can muster). Hmm, Fedora 6 based. Ok, so it's a bit old. Still - the install worked just fine first time. It was also pretty fast, and it boots up pretty fast too - both much faster than Ubuntu ever did. For some reason the default 'development' install doesn't include ppu-gcc and particularly ppu-binutils, but I found out what I needed, and it seems some of my test code can build and run. I can always compile a newer gcc if I need it.
Ahh well, that's done, and I've updated it too - now I can reboot back to the GameOS and forget about it for another few months!
Sunday, 7 September 2008
Chrome, again.
I recently posted my last entry on b.g.o, and I said I wasn't going to rant about what is wrong with the desktop (well, I did before I deleted it). But maybe I should have, as with fortuitous timing my second-to-last entry, about Chrome, should have reminded me what Chrome is capable of. I will say in my defence that I was only considering Chrome as a browser, and maybe as an 'ms office' replacement, and dismissing views otherwise (well, that is how I use a browser).
First, some background. I had been noticing the trend toward Python in GNOME in particular, and I haven't liked it. I know why developers like it (well, why they claim to like it), but as a user it leaves a lot to be desired - slow, extremely heavy applications that too often bomb out with meaningless backtraces. I had some ideas that could make it palatable to users (well, beyond just debugging), but they relied on some features which Python lacks, so I gave up thinking about it. But Python isn't the only problem.
The GNU desktop is in an awful state - and that's even if you stick to just one flavour and its attendant applications (I don't know about KDE, but the following is true of both GNOME and Xfce). If you take a default install of your average 'distribution' - for example, Ubuntu - after installing a rather large number of packages you end up with a pretty login window, a relatively pretty desktop, and quite a few applications, from basic to outstanding, from buggy to stable. But what is behind the actual desktop? A mish-mash of random programmes the packager/desktop team determined to be useful for themselves or some mythical 'average luser'. Some work well, some don't; some are necessary for the basic operation of the machine (auto-mounting and network selection), others are pure fluff, most are in-between. Also - it barely runs ok if you have only 256MB of memory, for example on that 'older machine' that GNU/Linux can supposedly take advantage of, or on embedded/special machines like a Playstation 3 - both of which actually affect me.
One problem is that the 'in thing' these days seems to be to write (or re-write!) many of the applets/applications that provide core desktop functionality in Visual BAS... oh oops ... Python. Now, Python is a 'scripting language'. This means that every time you run a Python app, it must compile the source code into byte-code, or perhaps machine code (I do not know if there are pre-compilers for it). This takes time and it takes memory, and to do it well it can take a lot of memory and time - which is one reason developers traditionally had much beefier machines than users: they were the only ones who had to do this step, and only once. If it only compiles to byte-code, then every basic instruction is emulated using a state machine - a 'virtual machine' (VM) - which is at least an order of magnitude slower than the physical machine. Any conversion to machine code, and further optimisations which make the running speed faster, also generally cost memory and cpu time during the compilation phase. For simple scripts and applications this is no big deal, but for more complex applications it can start to add up. Not only that, but because many of the libraries themselves are written in the scripting language, every application which uses those libraries needs to recompile the same libraries every time it runs - and more importantly, store its own copy of the byte/machine code. I will also mention in passing that many of these 'libraries' are just 'wrappers' - glue code which calls some C library to do the actual work; but someone has to write those too, so either the script-engine vendor or the library vendor must expend additional resources (which wouldn't otherwise be needed) on this work, so the cost isn't borne solely by the users.
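For what it's worth, CPython does cache its compiled byte-code (as .pyc files), but that byte-code is still interpreted by a stack-based VM every time it runs. The standard library will show you both halves of that story (the function and file names here are invented for illustration):

```python
import dis
import os
import py_compile
import tempfile

def applet_tick(count):
    # the kind of trivial function a desktop applet might run every second
    return count + 1

# Show the VM instructions the function compiles to - each one is
# dispatched by the interpreter loop at run time.
dis.dis(applet_tick)

# And byte-code can be produced ahead of time ('pre-compiled') by hand:
src = os.path.join(tempfile.mkdtemp(), "applet.py")
with open(src, "w") as f:
    f.write("print('tick')\n")
cached = py_compile.compile(src)
print(cached)  # path of the .pyc file holding the cached byte-code
```

The caching saves the parse/compile step on later runs, but it does nothing for the per-process VM and per-process copies of library state discussed above.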
Scripting languages are just fine for short-lived applications: they run, do their job, and finish, releasing the memory they used - even if it is excessive it doesn't usually matter. And often they are 'batch' processes anyway - non-interactive programmes which run by themselves, and so long as they run to completion they needn't be particularly speedy. But for applets and other trivial applications that run for the entire time you're at the computer, or that require interactive response, they are a potential disaster. You now have a separate VM for every application loaded, with all the non-shareable data that entails. Often scripting VMs haven't even been designed with this in mind, and in that case they may be quite cavalier with their use of memory, because it isn't an issue for the workloads for which they were designed. Most of these languages use garbage collection too - but garbage collectors are quite hard to write properly, so there are often bugs; and even when those are all fixed, to get performance they generally need more total memory than they're actually using (sometimes by a lot, but often about twice). And again, all of this overhead is duplicated for each VM running. Contrast that with, say, a C application. When an application is compiled in the normal way, all of its code, and all of the code of its libraries, can be shared in memory. Far more time and memory can be spent during the compilation phase, since it is only done once. And explicit memory management at least forces you to think about it, even if you don't take advantage of that opportunity for thought (even if explicit memory management has its own overheads for efficiency, it's a trade-off you can control). And finally, often the reason programmers use scripting languages in the first place is because they are easier - or to translate (in some cases): they don't know any better. Although they may have the enthusiasm and the ideas, they may just not have the skills to pull it off properly.
Another problem affects all languages: startup time and non-shared data overhead. Things such as font metric tables (sigh - and font glyph tables/glyph caches, now that the font server has basically been dropped; remote X sucks shit now, even though networks are much faster), display information, other global state tables, and other data which is loaded at run-time and could otherwise be shared among applications. This only gets worse when you have many versions of the same library present, and/or completely different libraries which do the same thing. Sure, you can run a KDE application on a GNOME desktop, but it isn't at zero cost, as even basic things like displaying a string of text involve an extraordinary amount of logic and data, little of which will be shared.
Having so many libraries to choose from, and indeed a continually changing set of libraries to choose from, is also a particular problem with GNU desktops (and Windows at least). Add to that - people keep coming up with their own `framework' which will `solve all the problems' in a specific domain, but all it really does is add yet another set of libraries (and versions over time) that we all have to put up with if we want to run a particular application that uses them (or worse, the poor developer is burdened with having to develop and maintain yet-another backend when they could be doing real - and more importantly; interesting - work). Even if the one library is the one everyone uses, new versions seem to come out every year or so.
So the result is that in 2008 we have a desktop with barely more features than one from 2000, yet consuming far more resources. Tiny little applets which could just as easily have been written in any language are dragging in millions of lines of code and megabytes of memory by virtue of being written in a scripting one. Lots of libraries - many of which do the same thing, or are even just different versions of the same one - often end up being installed as well.
There are at least a couple of ways to get around the scripting problem, and they also cover the shared state, and the libraries breeding like fundie children, as well. If you're not using scripting they don't help - but shared state could be addressed using traditional IPC mechanisms (i.e. use a server), though because of the complexity this is often not done. Fixing the breeding-library problem in general is tricky - each library needs to be far more disciplined in its design, and make use of ld features for backward/forward compatibility if required. Some duplication is still necessary - competition is generally good - although perhaps application developers should avoid using every new library that comes out just because it is new and promises to abolish world hunger.
First possibility: you have a separate process that compiles and executes all scripts - a script 'application server', in today's language. For a stand-alone script, a small client uploads/tells the server which script to execute, and the server sends the results back to the client using queues and/or rpc. Because the scripts are executed in the same address space, they can share libraries, the garbage collector, and other resources. You also have the benefit that if you want to extend your application with scripting facilities, any application can use the same mechanism to run its own scripts. This could also provide a powerful system whereby you can write meta-applications, talking between applications as well, if you design the system properly. Threading is an issue - but it's an issue that only has to be solved once, by people who probably have an idea, rather than by clueless application programmers.
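A minimal sketch of that first option, in Python (all the names, and the 'leave your answer in result' convention, are invented for illustration): one long-lived process owns the interpreter - and hence the libraries and the garbage collector - and clients just queue scripts at it.

```python
import multiprocessing as mp

def script_server(requests, results):
    """One shared VM: compile and run every script clients submit."""
    shared_env = {}           # libraries imported here are loaded once
    while True:
        job = requests.get()
        if job is None:       # shutdown sentinel
            break
        job_id, source = job
        code = compile(source, "<client-script>", "exec")
        local_env = {}
        exec(code, shared_env, local_env)
        # convention: the script leaves its answer in 'result'
        results.put((job_id, local_env.get("result")))

if __name__ == "__main__":
    requests, results = mp.Queue(), mp.Queue()
    server = mp.Process(target=script_server, args=(requests, results))
    server.start()
    # a 'client' is just an upload and a wait
    requests.put((1, "result = sum(range(10))"))
    print(results.get())      # (1, 45)
    requests.put(None)
    server.join()
```

The client side stays tiny - it never needs a VM of its own - and because every script runs in the one server process, the compile step, the libraries and the collector overhead are paid once, not per applet.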
The other way is to move your applications to the (one) server. All applications simply run in the same VM/address space, and again all code and much data can easily be shared among applications. Where you need additional non-scripted facilities you either build them in/use plugins, or use IPC mechanisms. And you only have to do it once too. Although meta-application programming is certainly possible, it would have to be an additional layer or protocol that needn't be there by design. And you can't really write an application that has a scripting `extension mechanism' either - since the app is the script.
The first way is sort of how ARexx worked. It can be quite simple, yet very powerful. Nobody wrote applications in ARexx, but they did write meta-applications which literally let completely unrelated applications 'talk' to one another. The second way, taken to the extreme, is something like JavaOS or that M$ thingy that does the same thing.
Hmmm. So I guess one potential realisation of the second idea is Chrome. It isn't a browser, it's an application framework - or rather, an os-independent application execution environment, a meta-operating system if you will. The sort of thing Java was capable of, but which didn't work so well because it was too fine-grained and had no central server. The sort of thing Flash is basically doing now, although it's too buggy and also has no central server. Probably the closest is the sort of thing GNOME was originally envisioned to be (as I fuzzily remember it - the NOM in GNOME) before being downgraded to basically a Gtk theme - although the glandular-fever infected among them are still thinking along those lines, I think. The sort of thing Firefox always claimed to be, but you couldn't take seriously because we all know what a bloaty pig's bum it was, and still is, even though they've made great strides in the swine's bun-tone. Well, at least the process model in Chrome makes sense now.
So watch out GNOME and KDE and Xfce. All of those little crapplets that deal with no or small amounts of data - they can all be re-written as trivial JavaScript applications, and probably with network transparency built in (I haven't mentioned 'google gadgets', because it should be obvious this is one and the same thing). E.g. post-it notes, a desktop clock/calendar which links into your planner, rss aggregators, umm, whatever it is people run on their desktop - file browsers aren't much different from an internet browser either. Maybe the 'start menu' (for native apps) can't be written - well, yet - because of the OS integration, so that is safe for now. Still, who knows - they've got the sandboxing, so there will perhaps be a mechanism for privilege escalation as well, and it can be made as secure as yum or apt-get (i.e. not very). If they implement the VIDEO tag, and SVG properly, then with any luck Flash and M$'s flash knock-off can get the bullets in the head they deserve as an added bonus. Good riddance to bad rubbish there.
Ok, so perhaps I was wrong in my second to last post on b.g.o. Chrome isn't just another featureless webkit browser after all (although it is still too featureless for me). But it isn't just Firefox that has to fear from another browser, it is not just desktop applications that have to fear from another browser, it is the desktop as we have come to know it - and thank fuck for that too.
Ahh well, maybe that isn't the idea `they' had. It has the potential though, if the VM and GC is as good as the claims on the box. And if Google doesn't do it, someone else can - because it's free software.
First, some background. I had been noticing the trend to move toward Python in GNOME in particular, and I haven't liked it. I know why developers like it (well why they claim to like it), but as a user it leaves a lot to be desired - slow, extremely heavy applications, that too-often bomb out with meaningless backtraces. I had some ideas that could make it palatable to users (well, beyond just debugging), but it relied on some features which Python lacks, so I gave up thinking about it. But Python isn't the only problem.
The GNU desktop is in an awful state - and that's even if you stick to just one flavour and it's attendant applications (I don't know about KDE, but the following is true of both GNOME and Xfce). If you take a default install of your average `distribution', for example, Ubuntu, after installing a rather large number of packages you end up with a pretty login window, and a relatively pretty desktop, and quite a few applications, from basic to outstanding, from buggy to stable. But what is behind the actual desktop? A mis-mash of random programmes the packager/desktop team determined to be useful for themselves or some mythical `average luser'. Some work well, some don't, some are necessary for the basic operation of the machine (auto-mounting and network selection), others are pure fluff, most are in-between. Also - it barely runs ok if you have only 256MB of memory, for example that `older machine' that GNU/Linux can supposedly take advantage of, or embedded/special machines, like a Playstation 3, both of which actually affect me.
One problem is that the `in thing' these days seems to be to write (or re-write!) many of the applets/applications that provide core desktop functionality using Visual BAS... oh oops ... Python. Now Python is a `scripting language'. This means that every time you run a python ap, it must compile the source-code into byte-code or perhaps machine code (I do not know if there are pre-compilers for it). This takes time, and it takes memory, and to do it well it can take a lot of memory and time, and this is one reason traditionally that developers had much beefier machines than users - because they're the only ones who had to do this step, once. If it only compiles to byte-code, then every basic instruction is emulated using a state machine - a 'virtual machine' (VM), which is at least and order of magnitude slower than the physical machine is. Any conversion to machine code and further optimisations which make the running speed faster, also generally cost in memory and cpu time during the compilation phase. For simple scripts and applications this is no big deal, but for more complex applications it can start to add up. Not only that, because many of the libraries themselves are written using the scripting language, every application which uses those libraries needs to recompile the same libraries every time they run - and more importantly store their own copy of the byte/machine code. I will also mention in passing that many of these `libraries' are just `wrappers' - glue code which just calls some `C' library to do the actual work; but someone has to write those too, so either the script engine `vendor' or the library `vendor' must expend additional resources (which wouldn't otherwise be needed) for this work, so the cost isn't born solely by the users.
Scripting languages are just fine for short-lived applications, they run, do their job, and finish, releasing the memory they used - even if it is excessive it doesn't usually matter. And often they are `batch' processes anyway - non-interactive programmes which run by themselves, and so long as they run to completion they needn't be particularly speedy. But now with applets and other trivial applications that run for the entire time you're at the computer, or they require interactive response, they are a potential disaster. You now have a separate VM for every application loaded, with all the non-shareable data that entails. Often scripting VM's haven't even been designed with this in mind, and in that case they may be quite cavalier with their use of memory because it isn't an issue for the workloads for which they were designed. Most of these languages use garbage collection too - but garbage collectors are quite hard to write properly, so there are often bugs, but even when those are all fixed, to get performance they generally need more total memory than they're actually using (sometimes by a lot, but often about twice). And again, all of this overhead needs to be duplicated for each VM running. Contrast that to say a C application. When an application is compiled in the normal way, all of the code, and all of the code of the libraries can be shared in memory. Far more time and memory can be spared during the compilation phase, since it is only done once. And explicit memory management at least forces you to think about it, even if you don't take advantage of that opportunity for thought (even if explicit memory management has spare/overheads for efficiency, it's a trade-off you can control). And finally, often the reason programmers use scripting languages in the first place is because they are easier - or to translate (in some cases) - they don't know any better. 
Although they may have the enthusiasm and the ideas, they may just not have the skills to pull it off properly.
Another problem affects all languages - that is the startup time/non-shared data overhead. Things such as font metric tables (sigh, and font glyph tables/glyph cache, now the font server has been basically dropped - remote X sucks shit now, even though networks are much faster), display information, other global state tables, and other data which is loaded at run-time, and could otherwise be shared among applications. This only gets worse when you have many versions of the same library present, and/or completely different libraries which do the same thing. Sure you can run a KDE application on a GNOME desktop, but it isn't at a zero cost, as even basic things like displaying a string of text involves an extraordinary amount of logic and data, little of which will be shared.
Having so many libraries to choose from, and indeed a continually changing set of libraries to choose from, is also a particular problem with GNU desktops (and Windows at least). Add to that - people keep coming up with their own `framework' which will `solve all the problems' in a specific domain, but all it really does is add yet another set of libraries (and versions over time) that we all have to put up with if we want to run a particular application that uses them (or worse, the poor developer is burdened with having to develop and maintain yet-another backend when they could be doing real - and more importantly; interesting - work). Even if the one library is the one everyone uses, new versions seem to come out every year or so.
So the result is, that in 2008 we have a desktop with barely more features than one in 2000, yet consuming far more resources. Tiny little applets which could just as easily been written in any language, are dragging in millions of lines of code and megabytes of memory by virtue of being written in a scripting one. Lots of libraries - many which do the same thing, even just different versions of the same one - often end up being installed as well.
There are at least a couple of ways to get around the scripting problem, and they also cover the shared state and library's breeding like fundie children as well. If you're not using scripting they don't help - but shared state could be addressed using traditional IPC mechanisms (i.e. use a server), but because of the complexity this is often not done. Fixing the breeding library problem in general is tricky - each library needs to be far more disciplined in their design, and make use of ld features for backward/forward compatibility if required. Some duplication is still necessary - competition is generally good - although perhaps application developers should avoid using every new library that comes out just because it is new and promises to abolish world hunger.
The first possibility: you have a separate process that compiles and executes all scripts - a script `application server', in today's language. For a stand-alone script, a small client uploads the script (or tells the server which one to execute), and the server sends the results back to the client using queues and/or RPC. Because the scripts are executed in the same address space, they can share libraries, the garbage collector, and other resources. You also get the benefit that if you want to extend your application with scripting facilities, any application can use the same mechanism to run its own scripts. Designed properly, this could also provide a powerful system for writing meta-applications which talk between applications as well. Threading is an issue - but it's an issue that only has to be solved once, by people who probably have a clue, rather than by clueless application programmers.
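A toy sketch of the idea: one long-lived process runs scripts on behalf of thin clients, so every script shares a single interpreter, its already-loaded libraries, and one garbage collector. The wire protocol here (length-prefixed source in, repr of a `result` variable out) and the shared-library table are invented purely for illustration.

```python
import socket
import struct
import threading

# Loaded once by the server; every client script sees the same copy.
SHARED_LIBS = {"math": __import__("math")}

def serve(conn):
    """Receive one script, exec it in a namespace seeing the shared libs."""
    size, = struct.unpack("!I", conn.recv(4))
    source = conn.recv(size).decode()
    ns = dict(SHARED_LIBS)           # scripts share already-loaded modules
    exec(compile(source, "<client-script>", "exec"), ns)
    reply = repr(ns.get("result")).encode()
    conn.sendall(struct.pack("!I", len(reply)) + reply)

def run_script(source):
    """Tiny client: upload a script, wait for the result."""
    client, server = socket.socketpair()
    t = threading.Thread(target=serve, args=(server,))
    t.start()
    data = source.encode()
    client.sendall(struct.pack("!I", len(data)) + data)
    size, = struct.unpack("!I", client.recv(4))
    out = client.recv(size).decode()
    t.join()
    client.close(); server.close()
    return out

if __name__ == "__main__":
    print(run_script("result = math.factorial(5)"))  # -> '120'
```

A real implementation would need a persistent socket, sandboxing, and the threading story mentioned above - the point is only that the client stays tiny while the heavy runtime lives in one place.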
The other way is to move your applications into the (one) server. All applications simply run in the same VM/address space, and again all code and much of the data can easily be shared among applications. Where you need additional non-scripted facilities you either build them in/use plugins, or use IPC mechanisms. And you only have to do it once, too. Although meta-application programming is certainly possible, it would have to be an additional layer or protocol that needn't be there by design. And you can't really write an application that has a scripting `extension mechanism' either - since the app is the script.
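The single-VM approach can be sketched even more simply: each `application' is just a thread inside one interpreter, so loaded modules and the allocator are shared automatically. The applet names below are made up; the second applet merely checks that the first one's import is already resident in the shared module cache.

```python
import sys
import threading

results = {}

def clock_applet():
    import time                      # first import loads the module...
    results["clock"] = time.strftime("%Y")

def calendar_applet():
    import time                      # ...later imports reuse sys.modules
    results["calendar"] = "time" in sys.modules

if __name__ == "__main__":
    apps = [threading.Thread(target=clock_applet),
            threading.Thread(target=calendar_applet)]
    for a in apps:
        a.start()
    for a in apps:
        a.join()
    print(results["calendar"])  # True - one shared copy of the library
```

The cost, as the text notes, is isolation: one misbehaving `applet' can take the whole address space down, which is presumably why a serious version needs a proper process model underneath.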
The first way is sort of how AREXX worked. It can be quite simple, yet very powerful. Nobody wrote applications in AREXX, but they did write meta-applications which literally let completely unrelated applications `talk' to one another. The second way, taken to the extreme, is something like JavaOS or that M$ thingy that does the same thing.
Hmmm. So I guess one potential realisation of the second idea is Chrome. It isn't a browser, it's an application framework, or rather, an OS-independent application execution environment - a meta-operating system, if you will. The sort of thing Java was capable of, but which didn't work so well because it was too fine-grained and had no central server. The sort of thing Flash is basically doing now, although it's too buggy and also has no central server. Probably the closest is the sort of thing GNOME was originally envisioned to be (as I fuzzily remember it - the NOM in GNOME) before being down-graded to basically a Gtk theme - although the glandular-fever infected among them are still thinking along those lines, I think. The sort of thing Firefox always claimed to be, but you couldn't take seriously because we all know what a bloaty pig's bum it was, and still is, even though they've made great strides in the swine's bun-tone. Well, at least the process model in Chrome makes sense now.
So watch out GNOME and KDE and Xfce. All of those little crapplets that deal with no or small amounts of data - they can all be re-written as trivial JavaScript applications, probably with network transparency built in (I haven't mentioned `google gadgets', because it should be obvious this is one and the same thing). E.g. post-it notes, a desktop clock/calendar which links into your planner, RSS aggregators, umm, whatever it is people run on their desktop - file browsers aren't much different from an internet browser either. So maybe the `start menu' (for native apps) can't be written - well, not yet - because of the OS integration, so that is safe for now. Still, who knows - they've got the sandboxing, so there will perhaps be a mechanism for privilege escalation as well, and it can be made as secure as yum or apt-get (i.e. not very). If they implement the VIDEO tag, and SVG properly, then with any luck Flash and M$'s Flash knock-off can get the bullets in the head they deserve as an added bonus. Good riddance to bad rubbish there.
Ok, so perhaps I was wrong in my second-to-last post on b.g.o. Chrome isn't just another featureless WebKit browser after all (although it is still too featureless for me). And it isn't just Firefox that has something to fear from this browser, nor just desktop applications - it is the desktop as we have come to know it. And thank fuck for that, too.
Ahh well, maybe that isn't the idea `they' had. It has the potential, though, if the VM and GC are as good as the claims on the box. And if Google doesn't do it, someone else can - because it's free software.