At least along the way I discovered the profiler, the overhead of using a BlockingQueue, memcheck in the emulator and how to enable GLES2 in it, hooked up the decode_audio4() API, and sorted out a few other odds and sods.
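For reference, the decode_audio4() path ends up looking something like the sketch below. This is a minimal, from-memory sketch rather than the player's actual code; actx, pkt and the output handling are placeholders.

    #include <libavcodec/avcodec.h>

    static AVFrame *aframe;

    /* Decode one audio packet with the newer API, which fills an AVFrame
     * rather than writing into a caller-supplied sample buffer. */
    int decode_audio_packet(AVCodecContext *actx, AVPacket *pkt) {
        int got_frame = 0;

        if (!aframe)
            aframe = avcodec_alloc_frame();

        int used = avcodec_decode_audio4(actx, aframe, &got_frame, pkt);
        if (used < 0)
            return used;                /* decode error */

        if (got_frame) {
            /* aframe->data[] now holds aframe->nb_samples samples in
             * actx->sample_fmt; hand them to the audio output here. */
        }
        return used;                    /* bytes of the packet consumed */
    }

One packet can hold more than one audio frame, so the caller loops, advancing pkt->data and pkt->size by the return value until the packet is used up.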
But the more I look, the more it looks like I'm hitting a bug in the driver. Given how new the Tegra 3 stuff is, and given it's NVIDIA, it wouldn't surprise me. I still consider labelling it a driver bug a 'last resort', since it's such a basic feature, but I'm running out of other possibilities here.
I've tried glTexSubImage2D() from the Java side and from the C side; I've tried copying the AVFrames, or copying the AVFrame planes into pre-allocated or dynamically allocated ByteBuffers; I've tried using a single AVFrame for decoding, or a cycling buffer of them. Although sometimes one mechanism seems a bit more reliable (and I even managed a full run through the hour-ish video I've been using to test), eventually they all crash. If all I do is remove the texture loading (the glTexImage2D and glTexSubImage2D calls), then it doesn't.
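For the record, the C-side upload of a single plane boils down to something like the sketch below. It's a hedged, illustrative version, assuming a YUV420P frame going into one GL_LUMINANCE texture per plane; tex, scratch and the width/height arguments are mine, not the player's actual code. The annoying detail is that GLES2 has no GL_UNPACK_ROW_LENGTH, so a padded linesize has to be repacked before the upload.

    #include <string.h>
    #include <stdint.h>
    #include <GLES2/gl2.h>

    /* Upload one plane of a decoded frame into an existing luminance texture. */
    void upload_plane(GLuint tex, const uint8_t *data, int linesize,
                      int width, int height, uint8_t *scratch) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        if (linesize == width) {
            /* tightly packed: upload straight from the AVFrame plane */
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                            GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
        } else {
            /* repack the padded rows into a contiguous scratch buffer first */
            for (int y = 0; y < height; y++)
                memcpy(scratch + y * width, data + y * linesize, width);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                            GL_LUMINANCE, GL_UNSIGNED_BYTE, scratch);
        }
    }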
This has been just a huge waste of time and now I'm getting pissed off with it.
I could try writing a simpler bit of code to isolate the driver problem, but given it's crashing inside FFmpeg, a cut-down test probably wouldn't exercise enough of the system to trigger it. Which probably means returning to software colour conversion and resorting to a Bitmap as the image surface ...
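If it comes to that, the fallback path is simple enough: sws_scale the decoded frame straight into a locked android.graphics.Bitmap from the JNI side. A rough sketch under those assumptions (an RGB_565 bitmap, linking against -ljnigraphics; the function and variable names here are illustrative only):

    #include <jni.h>
    #include <android/bitmap.h>
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    static struct SwsContext *sws;

    /* Convert a decoded frame to RGB565 directly into the Bitmap's pixels. */
    void blit_frame_to_bitmap(JNIEnv *env, jobject bitmap, AVFrame *frame,
                              int width, int height) {
        AndroidBitmapInfo info;
        void *pixels;

        if (AndroidBitmap_getInfo(env, bitmap, &info) < 0
            || AndroidBitmap_lockPixels(env, bitmap, &pixels) < 0)
            return;

        /* on newer FFmpeg versions this enum is AV_PIX_FMT_RGB565 */
        sws = sws_getCachedContext(sws, width, height, frame->format,
                                   info.width, info.height, PIX_FMT_RGB565,
                                   SWS_BILINEAR, NULL, NULL, NULL);

        uint8_t *dst[4] = { pixels, NULL, NULL, NULL };
        int dst_stride[4] = { info.stride, 0, 0, 0 };

        sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                  0, height, dst, dst_stride);

        AndroidBitmap_unlockPixels(env, bitmap);
    }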
I wrote the above on Saturday morning but kept poking, and now I'm not so sure. Maybe it's something to do with the audio decode. Early on in the piece I had a packet checker verifying that the packets were padded properly, and it showed they weren't. But then I moved to the decode_audio4() API and haven't re-checked. Still, using GLES2 with no audio stuff at all still causes problems, so it can't be just that. And the solid valgrind results from the emulator show that it's not suffering from the memory nastiness I originally suspected. Assuming they can be trusted.
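The padding check itself was only ever a few lines; roughly reconstructed (not the original code), it checks that the FF_INPUT_BUFFER_PADDING_SIZE bytes FFmpeg expects after the packet data are present and zeroed:

    #include <libavcodec/avcodec.h>

    /* Return 1 if the packet carries the zeroed padding FFmpeg expects
     * after pkt->data, 0 otherwise.  memcheck will also flag the read
     * itself if the padding was never allocated in the first place. */
    int packet_padding_ok(const AVPacket *pkt) {
        const uint8_t *pad = pkt->data + pkt->size;

        for (int i = 0; i < FF_INPUT_BUFFER_PADDING_SIZE; i++)
            if (pad[i] != 0)
                return 0;
        return 1;
    }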