So I finally got around to trying out my LBP-P (pixel) classifier algorithm for the task of object recognition. This is the one I've mentioned in numerous previous posts and applied with moderate success to the problem of object detection.
I don't have any code or numbers to publish at this time, but unless I've made a mistake it appears to work much better than the histogram/block-based LBP algorithm (one of the algorithms implemented in OpenCV) with a similar-sized descriptor on the data I'm testing. The computational cost is somewhat lower too, particularly if one can delve into some assembly language and the CPU has a vtbl-like instruction (as NEON does). I have yet to incorporate scale into the calculation, but I can only assume that would improve it, if I can work out how to combine the probabilities in a meaningful way.
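To give an idea of where the cost goes (this is just a generic sketch, not the classifier code itself): the raw LBP 8,1 code for a pixel is nothing more than 8 comparisons packed into a byte, and any subsequent remapping of codes is a byte-wide table lookup, which is the sort of thing a vtbl-style instruction helps with.

    #include <stdint.h>

    /* Raw LBP 8,1 code for one pixel: compare the 8 neighbours against the
       centre and pack the comparison bits into a byte.  'stride' is the
       image row length in bytes; borders are not handled.  A generic
       sketch, not the actual classifier. */
    static uint8_t lbp_8_1(const uint8_t *img, int stride, int x, int y) {
        const uint8_t *p = img + y * stride + x;
        uint8_t c = p[0], code = 0;

        code |= (p[-stride - 1] >= c) << 0;   /* nw */
        code |= (p[-stride    ] >= c) << 1;   /* n  */
        code |= (p[-stride + 1] >= c) << 2;   /* ne */
        code |= (p[1]           >= c) << 3;   /* e  */
        code |= (p[ stride + 1] >= c) << 4;   /* se */
        code |= (p[ stride    ] >= c) << 5;   /* s  */
        code |= (p[ stride - 1] >= c) << 6;   /* sw */
        code |= (p[-1]          >= c) << 7;   /* w  */
        return code;
    }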
Given that it works at all as a detector, it must have a pretty good false-positive rate, and so I thought it should work quite well at the recognition task too.
And as I now know how to generate ROC curves ... one day I will do this for the object detector; but that is something for another day, as today is hot, and I am lazy.
I've also been trying various other LBP code variations.
The LBP 8,1 u2 code I started with is still clearly the best of the rest, but I found that the CS-LBP (centre-symmetric) variation usually works at least as well, and even better in many cases. This is pretty significant as it's only a 4-bit code, which means classifiers and/or descriptors can be a quarter of the size of those for the 59-code (~6 bit) LBP 8,1 u2 encoding. Rather than doing 8 comparisons around a centre pixel, it does just 4 comparisons between opposite pixels: n/s, ne/sw, e/w, se/nw.
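For reference, the CS-LBP code for a single pixel looks something like the sketch below (the published CS-LBP thresholds the differences by a small constant; here I'm just using a plain comparison as described above):

    #include <stdint.h>

    /* CS-LBP 8,1 sketch: 4 comparisons between opposite neighbours rather
       than 8 against the centre, giving a 4-bit code (0..15). */
    static uint8_t cs_lbp_8_1(const uint8_t *img, int stride, int x, int y) {
        const uint8_t *p = img + y * stride + x;
        uint8_t code = 0;

        code |= (p[-stride    ] > p[ stride    ]) << 0;   /* n  vs s  */
        code |= (p[-stride + 1] > p[ stride - 1]) << 1;   /* ne vs sw */
        code |= (p[1]           > p[-1]         ) << 2;   /* e  vs w  */
        code |= (p[ stride + 1] > p[-stride - 1]) << 3;   /* se vs nw */
        return code;
    }

A 16-bin histogram per block instead of 59 makes the storage saving obvious.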
For the LBP histogram algorithm I generally had no luck pre-processing the images: usually anything I did adversely affected the results, although that depended on the image size, which seems to have more of an impact on the histogram algorithms than anything else. Surprisingly, a fairly small image gave the strongest result; I suspect that scaling down is doing some of the pre-processing for me in that case. Unless I'm missing something, I have never understood why anyone would perform intensity-based normalisation (histogram equalisation and so on) for LBP-based algorithms: the LBP code is a local gradient-sign operator, depending only on the sign of local intensity differences, so any monotonic change of brightness or contrast leaves it untouched (that's kind of the whole point of using it).
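To illustrate the point (a toy example, not from any real pipeline): any strictly increasing remap of the grey levels, which covers gamma correction, contrast stretching, and the like, preserves every neighbour/centre comparison and hence every LBP code.

    #include <assert.h>
    #include <stdio.h>

    /* Basic LBP 8,1 code from pre-fetched neighbour values: one bit per
       neighbour, set when the neighbour >= centre. */
    static unsigned lbp8(const int n[8], int centre) {
        unsigned code = 0;
        for (int i = 0; i < 8; i++)
            code |= (unsigned)(n[i] >= centre) << i;
        return code;
    }

    int main(void) {
        int neigh[8] = { 10, 200, 50, 50, 90, 13, 255, 0 }, centre = 50;
        int remap[8];

        /* Apply a strictly increasing intensity remap to everything. */
        for (int i = 0; i < 8; i++)
            remap[i] = 3 * neigh[i] + 7;

        /* The comparison outcomes, and so the code, are unchanged. */
        assert(lbp8(neigh, centre) == lbp8(remap, 3 * centre + 7));
        printf("code unchanged: %u\n", lbp8(neigh, centre));
        return 0;
    }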
Although I'm now quite a fan of the LBP, I should also get a chance to try a similar set of algorithms using wavelets: it will be interesting to see how they perform, and even if they aren't as good they might still be useful if the individual results aren't correlated too closely. Given that they also perform localised directional gradient measures, though, I suspect they will correlate. Still, they'd have to be quite a bit better to justify the increased processing required.
Update: Ahh bummer, when I did a more valid study the LBP-P classifier turned out not to be quite as good as my first results suggested. That's a pity. I can still get a decent result, but it starts to take a lot of training and/or too much memory to get really good results.