This was a really great session to be involved with at FITC.
Storytelling x.0
There was a truly exceptional lineup of speakers. Everyone really brought something interesting to the discussions.
Thursday, April 29, 2010
Saturday, April 17, 2010
Gremlins are back - need proof?
Some strange things have been happening to the technology around me.
Of course, I blame gremlins...
Friday, April 2, 2010
"So close and yet..."
Hmmm... The good news or the bad news? Well here's both.
As you can see, the Ladybug2 can now talk directly to Max... but it's green and unhappy. There are two major problems: one I couldn't have foreseen, the other I'm an idiot for not considering.

First: we were lucky enough to have the Bayer filters included in libdc1394 v2 to decode the RGGB that the Ladybug is outputting, but as you can see it doesn't work as expected. Our theory is that the code expects the image to be upright, while our image straight out of memory is 90° from where it should be. So why don't we just rotate the image before processing? Well, as it is we are copying the matrix only once, from memory to the outlet, and this image is huge: a whopping 4608x1024. That's 6 x 768x1024.

Which brings me to the second problem. Under no circumstances have I ever been amazed by the framerate of an incoming image in Max at 1024x768, let alone six of them simultaneously. So is this even a good idea? Perhaps there's just too much visual information to process to be useful at all. Not one to give up too easily, I may have another solution: if we output the raw image into a jit.slab and crunch it all on the GPU (which is what they do at Point Grey, if I'm not mistaken), then maybe we can get the performance up to something halfway decent. Definitely open to suggestions.
Finally something to contribute to the Max Community after a long time. Thanks to Rob and Andrei from York and Rand at Intersense. Download at futurecinema.ca/arlab
Follow the instructions in the top corner of the .maxhelp patch to get the dylib installed and get up and running. Sorry, but I think the object only works on OS X 10.5 (Leopard) and up.