Wednesday, 12 June 2013

Into the cloud!

So yeah, I've been a bit bored/insomniacal[sic] lately and reading the nets ... and one topical topic is the next set of game consoles from microsoft and sony.

I still can't believe how much microsoft ball(mer)sed-up their marketing message, but I guess when you live in such a bubble as they do it probably seemed like a good idea at the time. Sad that sony gets such cheers for merely keeping things the same and letting people SHARE the stuff they buy with their hard-earned. But when microsoft intentionally avoid the 'share' word it isn't so surprising. Incidentally the microsoft used-game thing reeks somewhat of the anti-trust trouble that apple are currently in; although they seem to have made an effort to ensure it was regulator-safe by a bit of weaselling in the way they structured it - i.e. they facilitated the screwing of customers without mandating it.

But back to the main topic - the whole 'it's 4x faster due to the cloud' nonsense.

Ok, obviously they shat bricks over the fact that the PS4 is so much faster on paper. The raw FPU performance is 50% better, but I would suggest that the much higher memory bandwidth (~3x) and the sony hardware scheduler tweaks will make it more like 100% faster in practice, or even more ... but time will tell on that.

With 1080p framebuffers, 32MB vanishes pretty fast - it's only just enough for four 8-bit RGBA framebuffers (rough numbers below), or say a 16-bit depth buffer plus one RGBA colour and one RGBA accumulation buffer. With HDR, deferred rendering, high fp performance and so on, this will be severely limiting, and it's still only a relatively meagre 100GB/s anyway. microsoft made a big bet on the 32MB ESRAM thinking sony would somehow over-engineer the design; and they simply fucked up (the dma engines are of course handy, but even the beagleboard has a couple of those and it can't make up the bandwidth).

As another aside, I lost a great deal of respect for anand from anandtech when he came up with the ludicrous suggestion that the 32MB of ESRAM might actually be a hardware associative cache. For someone who claims to know a bit about technology and fills his articles with tech talk, he clearly has NFI about such a fundamental computer architecture component. It suggests PR departments are writing much of his material.
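
For reference, the back-of-the-envelope numbers for the 1080p case (assuming plain RGBA8 render targets with no padding or tiling overhead):

    1920 x 1080 x 4 bytes = ~7.9MB per RGBA8 buffer
    4 such buffers        = ~31.6MB - which only just squeezes into the 32MB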

It learns from its mistakes?

So anyway ... most of the talk about cloud performance boosting is just crap. Physics or lighting will not be moved to the internet because internet performance just isn't there yet - and the added overheads of trying to code for it just aren't worth it. Other things like global weather could be, but it's not like MMOs can't already do this kind of thing, and unless you're building 'Cyclones! The Game!' the level of calculation required will be minimal anyway. However ... there is one area where I think a centralised computing capability will be useful: machine learning.

Most machine learning algorithms require gads and gads of resources - days to weeks of computing time and tons of input data. However the result of all this work is a fairly dense set of rules that can then be sent back to the games at any time. Getting good statistical information to feed machine learning algorithms is a challenge, and having every machine on the network lets you gather exactly that, and then feed the results back.

So a potential scenario would be that each time you play through a single-player game, the AI could learn from you and from all other players as you go, reacting in a way that tries to beat you at the level you're playing at - i.e. it plays just well enough that you can still win, but not so poorly that it's too easy. Every time you play the game it could play differently, learning as you do. We could finally move beyond the fixed waves of the early 80s that are now just called 'set pieces'.

It would be something nice to see, although a cheaper and easier method is just to use multi-player games and real players to do the same thing. So we may not see it this iteration, but I think it's about time we did.

Maybe some indie developer can give it a go.

sony or microsoft could also use the same technique to improve the performance of their motion-based input systems, at least up to some asymptotic performance limit of a given algorithm.

But ...

However, microsoft have no particular advantage here: it isn't the 'cloud services' that are important, it's just that every machine has a network port. Sure, having some of the infrastructure 'done for you' is a bit of a bonus, but it's not like internet middleware is a new thing. There is a bevy of mature products to choose from, and microsoft is just one (mediocre) player among many. And a 3rd party could equally go to any other 3rd party for the resources needed (although I bet microsoft won't let them on their platform: which is another negative for that device).

Actually ...

Update: Actually I forgot to mention that I really think the whole 'always online' and kinect-required gig in microsoft's case is all about advertising: it can see who is in the room - age, gender, ethnicity - and if you have an account registered, even more details on the viewer. It could track what people in the room are doing during game-playing as well as while watching tv shows and advertising; probably even where they're looking, what they're talking about, and what those tv shows/advertisements are. It doesn't take much more to "anonymously" link your viewing habits with your credit card transactions.

A marketer's wet dream if ever there was one. A literal "fly on the wall" in every house that has "one".

And if you think this is hyperbole, you just haven't been paying attention. Google (and others) already do all this with everything you do on the internet or on your phone - why should your lounge room be any different? I was pretty creeped out when I started seeing adverts that seemed to be related to otherwise private communications.

Let's just see how long before people start seeing advertising popping up (perhaps over the TV shows they're watching, or within/over games?) that matches their viewing habits and lounge-room demographics or what they were doing last night on the coffee table. And even if they don't get there "this generation", it's clearly a long-term goal.

So whilst one can do some neat stuff with the network, that's just a side-effect and a teaser for the main game. Exactly like google and all its "free" services. Despite paying for it this time, you're still going to be the product. Even the DRM stuff is a side-show.

Prices

Well as usual $AUS gets shafted by the 'overheads' of the local market. But you know what? Who cares. They're both cheaper than the previous models even in face-value dollars, let alone real ones, and we don't treat our less fortunate workers as total slaves in this country (at least, not yet).

They might still be a luxury item but in relative terms they've never been cheaper. My quarterly electricity bill has breached $500 already and it's only going up next year.

The initial price of the device is always only a part of the cost (and for fuck's sake, it is NOT a fucking investment), and a pretty small part once you add the price of games, power, the tv/couch, and internets on top.

I'm still not sure if I'll get a PS4: given the number of games I've played over the last 12 months it would be pretty pointless. I've still got a bunch of unopened PS3 games - I think I just don't like playing games much; they're either too easy and boring, too much like work, or I hit a point I can't get past and I don't have the patience to beat a dumb computer and feel good about it (and I'm just generally not into 'competition', I'd rather lose than compete). A PS4 CPU+memory in a GNU/linux machine on the other hand could be pretty fun to play with.

Monday, 10 June 2013

Clamping, scaling, format conversion

Got to spend a few hours poking at the photo-effects app I'm doing in conjunction with 'ffts'. I ended up having to use some NEON for performance.

One interesting solution along the way was code that took 2x 2-channel float sequences (i.e. 2x complex number arrays) and wound them back into 4-channel bytes, including scaling and clamping.

I utilised the fixed-point variant of the VCVT instruction, which performs the scaling to 8 bits and the clamping below 0 in a single step. For the high bits I used the saturating VQMOVN variant of the narrowing move.

I haven't run it through the cycle counter (or looked up the details) so it could probably do with some jiggling, or widening to 32 bytes/iteration, but the current main loop is below.

        vld1.32         { d0[], d1[] }, [sp]    @ load the scale factor to all lanes of q0

        vld1.32         { d16-d19 },[r0]!       @ prime the first iteration
        vld1.32         { d20-d23 },[r1]!
1:
        vmul.f32        q12,q8,q0               @ scale
        vmul.f32        q13,q9,q0
        vmul.f32        q14,q10,q0
        vmul.f32        q15,q11,q0

        vld1.32         { d16-d19 },[r0]!       @ pre-load next iteration
        vld1.32         { d20-d23 },[r1]!

        vcvt.u32.f32    q12,q12,#8              @ to int + clamp lower in one step
        vcvt.u32.f32    q13,q13,#8
        vcvt.u32.f32    q14,q14,#8
        vcvt.u32.f32    q15,q15,#8

        vqmovn.u32      d24,q12                 @ to short, clamp upper
        vqmovn.u32      d25,q13
        vqmovn.u32      d26,q14
        vqmovn.u32      d27,q15

        vqmovn.u16      d24,q12                 @ to byte, clamp upper
        vqmovn.u16      d25,q13

        vst2.16         { d24,d25 },[r3]!       @ interleaved store, back to ABCD byte order

        subs    r12,#1                          @ decrement the loop counter
        bhi     1b

The loading of all elements of q0 from the stack was the first time I've done this:

        vld1.32         { d0[], d1[] }, [sp]

Last time I did this I think I did a load to a single-precision register or an ARM register and then moved it across, which I thought was unnecessarily clumsy. It isn't terribly obvious from the manual how the various versions of VLD1 differentiate themselves unless you look closely at the register lists: d0[],d1[] loads a single 32-bit value to every lane of the two registers, i.e. all lanes of q0.
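
For what it's worth, the intrinsics spelling of the same all-lanes load would be something like the following - the wrapper function and its name are just for illustration:

    #include <arm_neon.h>

    /* Broadcast one 32-bit float from memory to every lane of a q register;
     * the intrinsics equivalent of vld1.32 { d0[], d1[] }, [ptr]. */
    static float32x4_t load_scale(const float *ptr)
    {
        return vld1q_dup_f32(ptr);
    }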

The VST2 line:

        vst2.16         { d24,d25 },[r3]!

This performs a neat trick of shuffling the 8-bit values back into the correct order - although it relies on the machine operating in little-endian mode.

The data flow is something like this:

 input bytes:        ABCD ABCD ABCD
 float AB channel:   AAAA BBBB AAAA BBBB
 float CD channel:   CCCC DDDD CCCC DDDD   
 output bytes:       ABCD ABCD ABCD
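
In scalar terms, the two VQMOVN stages leave d24 holding the narrowed bytes from the a[] channel (A B A B ...) and d25 those from the b[] channel (C D C D ...), and the interleaving 16-bit store then deals them back out a pixel at a time. A rough C model of just that store step (illustrative only - the function and names are made up, and it assumes little-endian byte order within each halfword):

    #include <stdint.h>

    /* Model of the vst2.16 { d24,d25 } interleave: ab[] holds A B A B ...
     * from one d register, cd[] holds C D C D ... from the other; the
     * interleaved 16-bit store writes A B C D per pixel, 4 pixels per loop. */
    static void interleave_model(const uint8_t ab[8], const uint8_t cd[8],
                                 uint8_t out[16])
    {
        for (int p = 0; p < 4; p++) {
            out[p * 4 + 0] = ab[p * 2 + 0];   /* A */
            out[p * 4 + 1] = ab[p * 2 + 1];   /* B */
            out[p * 4 + 2] = cd[p * 2 + 0];   /* C */
            out[p * 4 + 3] = cd[p * 2 + 1];   /* D */
        }
    }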

As the process of performing a forward then inverse FFT ends up scaling the result by the number of elements (i.e. by width*height), the output stage requires scaling by 1/(width*height) anyway. This routine needs a further scaling by 1/255 so that the fixed-point 8-bit conversion works, and that is folded in 'for free' using the same multiplies.
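
To spell out why that works (assuming the pixels went in as plain 0..255 values, so a byte value b comes out of the forward+inverse FFT as b*N, where N = width*height):

    b*N * 1/(N*255) = b/255
    VCVT #8 (truncating): floor(b/255 * 256) = b   for b = 0..254
    b = 255: 255/255 * 256 = 256, which VQMOVN saturates back to 255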

This is the kind of stuff that is much faster in NEON than C, and compilers are a long way from doing it automatically.

The loop in C would be something like:

#include <complex.h>
#include <stdint.h>

/* clamp v to the inclusive range [l, u] */
float clampf(float v, float l, float u) {
   return v < l ? l : (v < u ? v : u);
}

    complex float *a;                       /* A (real) + B (imaginary) channel */
    complex float *b;                       /* C (real) + D (imaginary) channel */
    uint8_t *d;                             /* interleaved ABCD output bytes */
    float scale = 1.0f / (width * height);
    for (int i=0;i<width;i++) {
       complex float A = a[i] * scale;
       complex float B = b[i] * scale;

       float are = clampf(crealf(A), 0, 255);
       float aim = clampf(cimagf(A), 0, 255);
       float bre = clampf(crealf(B), 0, 255);
       float bim = clampf(cimagf(B), 0, 255);

       d[i*4+0] = (uint8_t)are;
       d[i*4+1] = (uint8_t)aim;
       d[i*4+2] = (uint8_t)bre;
       d[i*4+3] = (uint8_t)bim;
    }

And it's interesting to me that the NEON isn't much bulkier than the C - despite performing 4x the amount of work per loop.

I set up a github account today - which was a bit of a pain as it doesn't work properly with my main browser machine - but I haven't put anything there yet. I want to bed down the basic data flow and user interaction first.