Monday, 22 June 2009
Just uploaded another Cell tutorial ... they've just about come to an end, and although there are probably things I could still write about, the urge to do it only comes in spurts. This one just introduces an absolutely bare-minimum library for accessing the ps3 frame-buffer.
And otherwise ... I had some strange thought at the pub one night that I should 'write a game', and unfortunately sobering up didn't dispel such a silly idea (it's not like I don't already have plenty of other ideas floating around to keep me busy). A mate of mine has wanted to write one since before I met him (damn, was that already 15 years ago?), so I asked him to join me, and maybe we'll get somewhere this time. Perhaps - he's a bit of a rabid Ninty fan (which I am not) so it may not end up something I'd play; but I'm not particularly worried about what comes out. I've never written one either, so it's the journey and not the outcome that counts. It's a whole new set of basically unfamiliar problems, so it's pretty much starting from scratch.
Thursday, 18 June 2009
Yawn.
Too damn tired, all the late nights have caught up with me. I don't think the uncounted beers at the pub last night helped either.
After the last post I mucked about with a couple of implementations of 'job queues' on the Cell. I wrote the whole lot up before testing on real hardware - and thought I'd really messed one up. But it turns out it was salvageable and I'd only made a small mistake. For the simple queue (one PPU writer, SPU readers only) I can send about 1.3m 'jobs' per second to a single SPU (the 128-byte 'jobs' are sent one way and, once received, simply marked as 'done'), and about 1m jobs/sec when all 6 SPUs are used. Which seems like reasonable scalability; the contention isn't getting in the way too much. For the more complex any-writer-any-reader queue (only being used with the same ppu-writer/n-spu-reader test driver), it drops to about 1m/s for 1 SPU and 750K/s for all 6. Which could probably be improved - but the jobs aren't actually doing anything, so they're creating artificially high contention anyway.
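The shape of that simple queue is roughly as follows - a rough, host-side C11 sketch of the one-writer slot-queue idea, not the actual Cell code (the real thing moves the slots around with MFC DMA, and the names here are made up for illustration):

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

enum { SLOT_EMPTY = 0, SLOT_READY = 1, SLOT_DONE = 2 };
enum { QUEUE_SLOTS = 16, JOB_BYTES = 112 };

/* Each slot is padded out to one 128-byte line: status word plus payload. */
struct job_slot {
    _Atomic uint32_t status;            /* EMPTY -> READY -> DONE, then reused */
    uint8_t payload[JOB_BYTES];
    uint8_t pad[128 - JOB_BYTES - sizeof(uint32_t)];
};

struct job_queue {
    struct job_slot slots[QUEUE_SLOTS];
    uint32_t head;                      /* writer-private: next slot to fill */
    uint32_t tail;                      /* reader-private: next slot to take */
};

/* Writer (PPU) side: publish a job into the next slot if it is free. */
static int queue_put(struct job_queue *q, const uint8_t *job, size_t len)
{
    struct job_slot *s = &q->slots[q->head % QUEUE_SLOTS];

    if (atomic_load_explicit(&s->status, memory_order_acquire) == SLOT_READY)
        return 0;                       /* still in flight: queue is full */

    for (size_t i = 0; i < len && i < JOB_BYTES; i++)
        s->payload[i] = job[i];

    atomic_store_explicit(&s->status, SLOT_READY, memory_order_release);
    q->head++;
    return 1;
}

/* Reader (SPU) side: take the next job; here it is simply marked done. */
static int queue_take(struct job_queue *q)
{
    struct job_slot *s = &q->slots[q->tail % QUEUE_SLOTS];

    if (atomic_load_explicit(&s->status, memory_order_acquire) != SLOT_READY)
        return 0;                       /* nothing waiting */

    /* ... do whatever the payload describes ... */

    atomic_store_explicit(&s->status, SLOT_DONE, memory_order_release);
    q->tail++;
    return 1;
}
```

Each SPU would own one of these in the simple case; the any-writer-any-reader version needs an atomic reservation of the head index on top of it, which is where the extra contention comes from.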
They seem to be stable and reliable, no races or deadlocks.
I'll keep poking to see if I can improve them.
Monday, 15 June 2009
On stuff, and other stuff.
Well I've updated the cell tutorial with another entry. This rounds out the optimisations for the Mandelbrot Set generator, with what I think are some impressive results.
I spent most of the weekend (another nasty cold and wet one) plugging away at my renderer and reading up on bits and pieces. As I suspected, 'it was harder than that'. Oh well. I'm losing interest on that track, so I think I will play with some Cell code for a while - I started work on completing the IPC chapter again, which I'd abandoned a couple of months ago. Then I might go back to the freetype renderer, or maybe agg (the freetype renderer I was looking at turned out to be based on that).
It's amazing how much one's productivity changes from day to day. When things are kicking along you can write several thousand lines of code in a week, and a lot of it can be good code. When you hit a wall everything seems to grind to a halt. I spent all weekend writing 50 lines of ... total worthless crap.
Oh well, another week gets under-way.
Thursday, 11 June 2009
A pixel as a unit square
So while I think about whether to stick with freetype or try something else, I've started playing with ideas for my own renderer. I actually have some screenshot-capable output, but I don't have it handy right now. Maybe next time.
After a few silly mistakes with list pointers, I've been surprised at just how simple it is to get something up that looks quite reasonable. I am most certainly missing something, because it can't be this easy; the few 'I'm not sure' edge cases are probably where the problems come in that'll keep me from ever finishing it. I've basically done a simple 'classic' scan-line renderer which keeps track of the X coords as it steps down the Y coords, then scans from one side to the other keeping track of edge crossings and using that to work out when to fill or not. The only interesting thing is that I compute exact pixel coverage as I go, so I can produce quite nicely anti-aliased lines with very little extra work. There are some artefacts with intersecting lines, and the non-zero fill rule has some big issues, but I'm not particularly worried about them right now. I wouldn't have a clue whether it's at all fast though; quite probably it isn't. The coverage calculation is particularly simple - since I treat each pixel as a unit square, much of the time the multiplications are just by a factor of 1, so the coverage is just a simple sum and a divide by 2.
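The coverage step looks something like the following - a sketch only, not the renderer's actual code, and it assumes the edge has already been clipped to a single scanline; fully covered pixels between crossings are handled by the crossing bookkeeping, and a later pass turns accumulated area plus winding into alpha:

```c
#include <math.h>

/* Accumulate, for each pixel column an edge passes through on one
 * scanline, the area of that unit-square pixel lying to the LEFT of the
 * edge.  The edge runs from x0 at the bottom of the line (y = 0) to x1 at
 * the top (y = 1); the caller scales the result by the edge's winding. */
static void accumulate_left_area(float *row, int width, float x0, float x1)
{
    /* Only the area matters, so we can always walk left-to-right. */
    float xa = x0 < x1 ? x0 : x1;
    float xb = x0 < x1 ? x1 : x0;
    float dx = xb - xa;

    int first = (int)floorf(xa);
    int last  = (int)floorf(xb);

    for (int p = first; p <= last; p++) {
        if (p < 0 || p >= width)
            continue;

        /* Clip the edge to this column, and find the matching y range. */
        float cx0 = xa > (float)p       ? xa : (float)p;
        float cx1 = xb < (float)(p + 1) ? xb : (float)(p + 1);
        float y0  = dx > 0.0f ? (cx0 - xa) / dx : 0.0f;
        float y1  = dx > 0.0f ? (cx1 - xa) / dx : 1.0f;

        /* Inside [y0,y1] the area left of the edge is a trapezoid: the
         * average of the two crossing positions times the strip height -
         * the "simple sum and a divide by 2". */
        float trap  = 0.5f * ((cx0 - p) + (cx1 - p)) * (y1 - y0);

        /* Above y1 the edge has left the column to the right, so the
         * whole remaining strip counts. */
        float above = 1.0f - y1;

        row[p] += trap + above;
    }
}
```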
I guess the more complicated part of the equation is the line stroker. The basic idea is simple enough, but there are a lot of nasty cases to handle if you have unusually fat lines, and the question of what to do about intersecting lines. Dashed lines seem like a hassle too.
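The 'simple enough' core is just pushing each segment out by half the line width along its unit normal - a toy sketch with butt ends, and none of the join, cap or dash handling where the nasty cases actually live:

```c
#include <math.h>

struct pt { float x, y; };

/* Expand the segment a->b into the four corners of a butt-ended quad. */
static int stroke_segment(struct pt a, struct pt b, float width, struct pt out[4])
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = sqrtf(dx * dx + dy * dy);

    if (len == 0.0f)
        return 0;                       /* degenerate segment: caller decides */

    /* Unit normal scaled to half the stroke width. */
    float nx = -dy / len * width * 0.5f;
    float ny =  dx / len * width * 0.5f;

    out[0] = (struct pt){ a.x + nx, a.y + ny };
    out[1] = (struct pt){ b.x + nx, b.y + ny };
    out[2] = (struct pt){ b.x - nx, b.y - ny };
    out[3] = (struct pt){ a.x - nx, a.y - ny };
    return 1;
}
```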
Well, at least having my own implementation might make it easier to look at making it work on a Cell B.E., which should keep me busy for the foreseeable future should I decide to investigate that. So the OS idea is on hold for now.
Tuesday, 9 June 2009
Butt stroking and other stuff
Well that was a long wet dreary weekend. I managed to avoid leaving the house (getting through some old stuff in the cupboard and freezer at last) ... and of course spent an inordinate amount of time hacking away.
OpenVG needs a lot of scaffolding to get started - so I spent a lot of time doing that. Lots of code for getting and setting attributes and whatnot. And the path type. So I had to brush up on my geometric algorithms, and I was back in the land of vectors and splines - again. I wrote a nice little non-recursive adaptive spline tessellator I could use for implementing some of the required features like path length.
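A non-recursive adaptive flattener for quadratic segments looks roughly like this - not the actual tessellator, just the shape of the idea, with a made-up flatness test and an emit callback; the same polyline walk can feed the path-length query mentioned above:

```c
struct p2 { float x, y; };
struct qseg { struct p2 a, c, b; };     /* start, control, end */

enum { FLATTEN_STACK = 32 };

/* Emit the polyline approximating a quadratic segment; the start point is
 * assumed to have been emitted already.  Subdivision state lives on an
 * explicit stack instead of the call stack. */
static void flatten_quad(struct qseg q, float tol2, void (*emit)(struct p2))
{
    struct qseg stack[FLATTEN_STACK];
    int top = 0;

    stack[top++] = q;

    while (top > 0) {
        struct qseg s = stack[--top];

        /* Cheap flatness estimate: how far the control point sits from the
         * midpoint of the chord (scaled).  Good enough for a sketch. */
        float dx = 2.0f * s.c.x - s.a.x - s.b.x;
        float dy = 2.0f * s.c.y - s.a.y - s.b.y;

        if (dx * dx + dy * dy <= tol2 || top >= FLATTEN_STACK - 2) {
            emit(s.b);
            continue;
        }

        /* de Casteljau split at t = 0.5; push the far half first so the
         * near half is processed (and emitted) first. */
        struct p2 c1 = { (s.a.x + s.c.x) * 0.5f, (s.a.y + s.c.y) * 0.5f };
        struct p2 c2 = { (s.c.x + s.b.x) * 0.5f, (s.c.y + s.b.y) * 0.5f };
        struct p2 m  = { (c1.x + c2.x) * 0.5f, (c1.y + c2.y) * 0.5f };

        stack[top++] = (struct qseg){ m, c2, s.b };     /* far half */
        stack[top++] = (struct qseg){ s.a, c1, m };     /* near half */
    }
}
```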
I originally wrote a vgpath that worked the same way as the reference implementation ... but just about the time I got most of it finished, I got sick of writing yet-another-loop that parsed the data in slightly different ways, so I decided to change it around a bit, focussing on simplifying the code. So basically now I only have to canonicalise the data once, and the other functions can work on a simplified data stream, without having to calculate relative or partial coordinates or smooth control points or arcs (although I haven't done those yet), or even do data conversion, every time they run. e.g. VLINE/HLINE are converted to LINE, all _RELs are converted to _ABSs, SCUBIC/SQUAD are converted to QUAD, ARCs into QUADs. I think doing that will still honour the API, and it simplifies other bits of code. But since this is only the 2nd attempt, I've probably got this wrong too - it always seems to take 3 goes to get something right (as I write this I'm already thinking of some things I did wrong).
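As a sketch of that pass (with toy enums and a packed layout that are not OpenVG's), it's just one walk that rewrites each segment into the simplest absolute form:

```c
#include <stddef.h>

enum seg { SEG_MOVE, SEG_LINE, SEG_HLINE, SEG_VLINE };  /* toy subset */

/* Canonical output: only SEG_MOVE/SEG_LINE, absolute, two coords each.
 * The output arrays are assumed to be big enough. */
struct canon {
    unsigned char *cmd;
    float *coord;
    size_t n;
};

static void canon_emit(struct canon *out, enum seg cmd, float x, float y)
{
    out->cmd[out->n] = (unsigned char)cmd;
    out->coord[out->n * 2 + 0] = x;
    out->coord[out->n * 2 + 1] = y;
    out->n++;
}

/* 'rel' holds one flag per segment; coords are packed per segment type. */
static void canonicalise(const unsigned char *cmd, const unsigned char *rel,
                         const float *coord, size_t nseg, struct canon *out)
{
    float cx = 0.0f, cy = 0.0f;         /* current point */

    for (size_t i = 0; i < nseg; i++) {
        float x = cx, y = cy;

        switch (cmd[i]) {
        case SEG_MOVE:
        case SEG_LINE:
            x = *coord++; y = *coord++;
            if (rel[i]) { x += cx; y += cy; }
            canon_emit(out, (enum seg)cmd[i], x, y);
            break;
        case SEG_HLINE:                 /* one coordinate; y unchanged */
            x = *coord++;
            if (rel[i]) x += cx;
            canon_emit(out, SEG_LINE, x, y);
            break;
        case SEG_VLINE:                 /* one coordinate; x unchanged */
            y = *coord++;
            if (rel[i]) y += cy;
            canon_emit(out, SEG_LINE, x, y);
            break;
        }
        cx = x; cy = y;
    }
}
```

The real version would have the quadratic/cubic/smooth/arc cases too, but they reduce the same way: resolve everything against the current point once, then never again.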
A few short lines of code later I had it hooked up to the FreeType renderer and discovered the headline bug above. Butt line endings weren't implemented, although it took a while to discover that since, because the enumeration existed, I just assumed it must be my code. I've submitted a patch which has already been applied (I must've been a bit tired, as it took me a while to realise why 'butt stroking bug freetype' didn't find anything relevant, but seemed to probe the darker regions of the internets instead). I played around with the stroker a bit more and found other issues, and ended up delving into the source code more deeply - now I'm not sure I will use the FreeType stroker and renderer as I'd hoped to. To start with, a couple of features are missing or different. And it is, as it says on the box, 'optimised for small sizes'. The algorithm is quite interesting though - basically it renders a band (ideally the whole image) into a 'sparse' bitmap, and then just steps through that to produce runs of filled pixels by keeping track of edge crossings. So unlike a normal scan-line renderer it doesn't need to keep track of active lists and so forth, or update the lines piecemeal. Unfortunately I don't have a real handle on how scalable the algorithm is to screen-size resolutions. I may have to 'suck it and see', otherwise I'll get nowhere ...
... Since, to cut a long story short, I've started delving into the mysterious and dark corridors of writing my own AA renderer. I originally looked at FreeType because 1. I intend to use it anyway for font glyphs, 2. it has few dependencies, and 3. it's easy to use. I'd also considered libart, but I thought it was more closely tied to glib than it is; it also no longer seems to be maintained, and apart from that it looks a bit over-engineered for my taste. And well, writing my own could be interesting - until you hit all the weird edge cases and numerical stability issues that throw it under a bridge. I still have at the back of my mind the idea of making this run well on a Cell BE too, so that's another reason to investigate, since I need to know how it works even if I just adapt another bit of code.
Thursday, 4 June 2009
Argh.
I've been umming and aahing about working on a ps3 version of the kernel, because although there is plenty of other stuff to do, I've gotten to the point where I'd like to have a framebuffer to work in. And since VGA hardware is such a PITA to work with ... maybe the ps3 is the go.
So I did a lot of reading and digging, about powerpc64 hardware (hmm, this is really a mainframe chip, not a game console one!), hypervisor calls (so little documentation), instruction sets, etc etc. Hmm, a lot of work, but it seemed like a good challenge.
But then I thought I'd look at extracting a vga driver from some library - I looked at svgalib only because it had the fewest dependencies. I thought maybe I could get something going just to get started, or at least evaluate the size of the task. And after wasting a couple of days and very late nights on this ... I finally (re)discovered that bochs and qemu have their own framebuffer device which is trivial to set up (I had seen the page a couple of weeks ago and, 'noting it for later', promptly forgot it). Ho hum, what a total waste of time that was, then. I still need a real driver if I want to get something up on real hardware, but it isn't exactly a priority right now.
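For what it's worth, the bochs/qemu device is driven through a little index/data port pair; a sketch of the setup, where outw() and the frame-buffer address are assumptions - the linear frame buffer address really wants to come from the VGA device's PCI BAR rather than the hard-coded default used here:

```c
#include <stdint.h>

#define VBE_DISPI_IOPORT_INDEX  0x01CE
#define VBE_DISPI_IOPORT_DATA   0x01CF

#define VBE_DISPI_INDEX_XRES    1
#define VBE_DISPI_INDEX_YRES    2
#define VBE_DISPI_INDEX_BPP     3
#define VBE_DISPI_INDEX_ENABLE  4

#define VBE_DISPI_ENABLED       0x01
#define VBE_DISPI_LFB_ENABLED   0x40

/* Assumed to be provided elsewhere by the kernel. */
extern void outw(uint16_t port, uint16_t value);

static void dispi_write(uint16_t index, uint16_t value)
{
    outw(VBE_DISPI_IOPORT_INDEX, index);
    outw(VBE_DISPI_IOPORT_DATA, value);
}

void fb_init(int width, int height)
{
    /* Mode parameters must be changed with the device disabled. */
    dispi_write(VBE_DISPI_INDEX_ENABLE, 0);
    dispi_write(VBE_DISPI_INDEX_XRES, (uint16_t)width);
    dispi_write(VBE_DISPI_INDEX_YRES, (uint16_t)height);
    dispi_write(VBE_DISPI_INDEX_BPP, 32);
    dispi_write(VBE_DISPI_INDEX_ENABLE,
                VBE_DISPI_ENABLED | VBE_DISPI_LFB_ENABLED);

    /* 0xE0000000 is only the usual default; read the PCI BAR for real.
     * Assumes the stride equals the requested width at 32bpp. */
    volatile uint32_t *lfb = (volatile uint32_t *)0xE0000000u;
    for (int i = 0; i < width * height; i++)
        lfb[i] = 0x000000FF;            /* fill the screen with blue */
}
```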
So given I now have a framebuffer to work with, I might postpone the ps3 stuff. I'm still mulling over some ARM-based stuff too, so maybe I will look at that next instead - I imagine it will be somewhat simpler, and I can use qemu for that too.
Now I have a framebuffer ... what to do with it. OpenVG looks interesting, I might start there.
Tuesday, 2 June 2009
Cells and ratty rodents
Got off me arse and posted a new intro to Cell tutorial. It gets into SIMD coding so it's one of the more interesting ones.
And I managed to get a PS/2 mouse driver working, of sorts (well, I'm getting the bytes from the port). And what a pile of shit the PS/2 AUX port is. There doesn't seem to be any official documentation, just a few old text files from the days before the internets. Anyway, after a lot of mucking about I got it to work on bochs, qemu and an old PC I have - so that's good enough for me, and it didn't take too much code. I'm still not sure if I should combine the keyboard and mouse 'devices' since they share the same I/O ports.
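The receive side boils down to pulling bytes off port 0x60 whenever the status port says they came from the AUX device, and stitching them back into the standard 3-byte packet - a sketch, with inb() and mouse_event() standing in for whatever the kernel actually provides:

```c
#include <stdint.h>

#define PS2_DATA         0x60
#define PS2_STATUS       0x64
#define STS_OUTPUT_FULL  0x01   /* a byte is waiting in the output buffer */
#define STS_FROM_AUX     0x20   /* ... and it came from the aux (mouse) port */

extern uint8_t inb(uint16_t port);
extern void mouse_event(int dx, int dy, unsigned buttons);

static uint8_t packet[3];
static int packet_pos;

void ps2_aux_poll(void)
{
    for (;;) {
        uint8_t status = inb(PS2_STATUS);
        if (!(status & STS_OUTPUT_FULL))
            break;

        uint8_t byte = inb(PS2_DATA);
        if (!(status & STS_FROM_AUX))
            continue;           /* keyboard byte; a real driver would hand it over */

        /* Byte 0 always has bit 3 set; use that to resynchronise. */
        if (packet_pos == 0 && !(byte & 0x08))
            continue;

        packet[packet_pos++] = byte;
        if (packet_pos == 3) {
            packet_pos = 0;
            /* Bits 4 and 5 of byte 0 are the X and Y sign bits. */
            int dx = packet[1] - ((packet[0] & 0x10) ? 256 : 0);
            int dy = packet[2] - ((packet[0] & 0x20) ? 256 : 0);
            mouse_event(dx, -dy, packet[0] & 0x07); /* flip Y for screen coords */
        }
    }
}
```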
Of course, in the process of testing on real hardware I found everything was broken. Everything. Ho hum. After much frobnication I found the APIC maybe isn't as easy to use as I thought - or it's just buggy. And my old laptop doesn't even have one (early Celeron). So I had to remove all the APIC code so it booted (I'm not doing any run-time stuff yet). And then I moved to using the RTC for timing instead. But that didn't work on real hardware either. Arg. In the end I got it to work, but I'm not sure whether the fix was setting the registers directly (rather than read-twiddle-write) or clearing the interrupts first.
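For the record, the RTC end of it is only a handful of register pokes: the rate goes in register A, the periodic-interrupt enable bit in register B, and register C has to be read in the handler (the 'clearing the interrupts' part) or no further ticks arrive. A sketch with assumed outb()/inb() helpers:

```c
#include <stdint.h>

#define CMOS_INDEX  0x70
#define CMOS_DATA   0x71

extern void outb(uint16_t port, uint8_t value);
extern uint8_t inb(uint16_t port);

static uint8_t cmos_read(uint8_t reg)
{
    outb(CMOS_INDEX, reg | 0x80);   /* bit 7 keeps NMI off while we poke */
    return inb(CMOS_DATA);
}

static void cmos_write(uint8_t reg, uint8_t value)
{
    outb(CMOS_INDEX, reg | 0x80);
    outb(CMOS_DATA, value);
}

void rtc_enable_tick(void)
{
    /* Rate 10 gives 32768 >> (10 - 1) = 64 Hz. */
    uint8_t a = cmos_read(0x0A);
    cmos_write(0x0A, (uint8_t)((a & 0xF0) | 0x0A));

    /* Set the periodic-interrupt-enable bit in register B. */
    uint8_t b = cmos_read(0x0B);
    cmos_write(0x0B, (uint8_t)(b | 0x40));

    /* Throw away any already-latched flags so the first tick arrives. */
    (void)cmos_read(0x0C);
}

void rtc_interrupt(void)
{
    /* Register C latches the cause and MUST be read to re-arm the RTC. */
    (void)cmos_read(0x0C);
    /* ... bump the tick counter here ... */
}
```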
The APIC thing is a bit of a bummer; I guess I'll need to use the PIT instead for timing. Which sucks, because it can only measure very short periods of time. I suppose if I use the RTC for longer periods - either by just using the 64Hz signal to count down, or the alarm function - and only resort to the PIT for the last bit if more accuracy is required, it should be an OK balance between accuracy, overhead, and simplicity.
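And if the PIT does end up providing the fine-grained tail, reading where its counter is up to is just a latch command and two byte reads (again with assumed port helpers); it counts down at roughly 1.193 MHz, so a 16-bit count wraps in about 55 ms - the 'very short periods' problem:

```c
#include <stdint.h>

#define PIT_CH0_DATA  0x40
#define PIT_COMMAND   0x43

extern void outb(uint16_t port, uint8_t value);
extern uint8_t inb(uint16_t port);

uint16_t pit_read_count(void)
{
    outb(PIT_COMMAND, 0x00);            /* latch channel 0's current count */
    uint16_t lo = inb(PIT_CH0_DATA);
    uint16_t hi = inb(PIT_CH0_DATA);
    return (uint16_t)(lo | (hi << 8));
}
```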
Hmm, I wonder what to do next. A framebuffer would be nice - I'd really rather piss off the text mode entirely. And a disk driver - although then I'd need a filesystem too. Maybe it'd be less effort working out the hypervisor on the ps3 ... Hmm, perhaps a Forth-based monitor? Something to think about ...