View Full Version : High Definition with Elphel model 333 camera
Matteo Pozzi May 21st, 2009, 01:45 AM @Biel
I think that something like this is already in development:
JP4 HDR - ElphelWiki (http://wiki.elphel.com/index.php?title=JP4_HDR)
and also in hardware; info from the Aptina data sheet:
Pixel dynamic range: 70.1 dB (full resolution), 76 dB (2x2 binning)
Biel Bestue May 21st, 2009, 05:48 PM Matteo, I've seen this, but my idea isn't exactly that. The idea is to use the information coming from the sensor before any pixel binning is done: take the original image and the "pixel-binned" one and merge them to get a little bit more dynamic range (only if pixel binning actually gathers more light; otherwise this process would be futile).
Anyway, if this process is not futile, then combined with the process Matteo mentioned one could get an awesome dynamic range gain from the same camera. What do you guys think?
Sebastian Pichelhofer May 22nd, 2009, 05:23 AM Interesting idea.
In theory it should be possible to gather higher dynamic range data with this.
But I see some potential trouble:
-) As you mentioned, this will not be possible at full fps. Changing the binning mode will require at least one, if not multiple, frames to execute.
-) Binning is a CMOS on-chip feature rather than a post-processing step, so it is not possible to get the data "before" it happens. You can only take two shots one after the other, one with binning on (2x2 for example) and one without.
Binning does not only mean more dynamic range but also more light sensitivity: four pixels gather more light than one.
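A minimal Java sketch of that two-shot merge idea (not anything the camera currently does): it assumes two registered grayscale frames taken back to back, a binned "bright" frame already upscaled to the same resolution as the un-binned "dark" frame, an 8-bit clipping threshold, and a made-up exposure-ratio constant. Wherever the bright frame is blown out, the dark frame is scaled up and used instead. Bayer handling, noise weighting and frame alignment are all ignored.

/*
 * Sketch only: merge a clipped "bright" (binned) frame with a "dark"
 * (un-binned) frame of the same scene, taken one after the other as
 * Sebastian describes. CLIP and GAIN are illustrative values, not
 * measured camera parameters.
 */
public class TwoShotMerge {

    static final int CLIP = 250;      // values above this are treated as blown out (8-bit data assumed)
    static final double GAIN = 4.0;   // assumed brightness ratio between the binned and un-binned shot

    /** Merge two same-sized grayscale frames into a higher-range float image. */
    static double[][] merge(int[][] bright, int[][] dark) {
        int h = bright.length, w = bright[0].length;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                out[y][x] = (bright[y][x] < CLIP)
                        ? bright[y][x]          // usable detail from the bright frame
                        : dark[y][x] * GAIN;    // reconstruct blown highlights from the dark frame
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] bright = { { 120, 255 }, { 200, 255 } }; // 255 = clipped
        int[][] dark   = { {  30,  80 }, {  50,  90 } };
        double[][] hdr = merge(bright, dark);
        System.out.println("merged pixel (0,1) = " + hdr[0][1]); // 320.0, above the 8-bit range
    }
}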
Biel Bestue May 23rd, 2009, 02:32 AM Even if it is not possible at a usable FPS, it would still be interesting to do as a proof of concept, no? Then, when chips get faster, it will already be proven.
Is there a way to get the information from the chip in order to pixel-bin, but do the binning as a later step? That is, take the image from a non-pixel-binned readout and bin it after the chip has captured the image (not in post-production), starting from the non-binned image:
lens - chip - stream data - pixel binning - stream data + pixel binning - raw encoding - file
Here pixel binning, and the operation joining the two different streams, would be a processing step before the raw encoding of the file,
instead of:
lens - chip - pixel binning - stream data - raw encoding - file
Or does pixel binning actually mean that the chip uses 16 photosites instead of the normal 4 for a final pixel? If that is pixel binning, then maybe it's not possible...
Sebastian Pichelhofer May 23rd, 2009, 02:37 AM Or does pixel binning actually mean that the chip uses 16 photosites instead of the normal 4 for a final pixel? If that is pixel binning, then maybe it's not possible...
Exactly, that is binning!
Instead of reading a single photosite, the sensor can add up the charges of blocks of 2x2, 3x3, 4x4, 5x5, etc. photosites.
Any resolution reduction that happens after the sensor is "resizing".
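To make the distinction concrete, here is a minimal Java sketch of 2x2 "resizing" done in software on already-digitized grayscale values (names and sample values are just for illustration). It reduces resolution and averages out some noise, but since it runs after readout it cannot add up charge before the read noise the way on-chip binning does.

/*
 * Sketch only: sum each 2x2 block of already-digitized pixel values.
 * This is "resizing" in Sebastian's terminology, not on-chip binning.
 * Grayscale ints are assumed; no Bayer pattern handling.
 */
public class SoftwareBin2x2 {

    /** Sum each 2x2 block of the input into one output value (even width/height assumed). */
    static int[][] bin2x2(int[][] in) {
        int h = in.length / 2, w = in[0].length / 2;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                out[y][x] = in[2 * y][2 * x] + in[2 * y][2 * x + 1]
                          + in[2 * y + 1][2 * x] + in[2 * y + 1][2 * x + 1];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] frame = { { 10, 12, 7, 9 }, { 11, 13, 8, 6 }, { 5, 4, 20, 22 }, { 6, 3, 21, 19 } };
        int[][] small = bin2x2(frame);
        System.out.println("top-left binned value = " + small[0][0]); // 46
    }
}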
Biel Bestue May 23rd, 2009, 03:15 AM Well then...
Dammit, I loved that idea because of its simplicity! :(
Oscar Spierenburg May 23rd, 2009, 10:15 AM Hi Biel,
I don't know if it's really impossible, but I guess it's not simple. Anyway, we were just talking about the Elphel's FPGA features; here's part of the discussion:
Over a year ago Andrey wrote: "For the viewfinder mode it will be
rather easy to make FPGA to provide 1/8 (or even 1/16) resolution (so
1/64 or 1/256 number of pixels) of uncompressed data in parallel to
the video, virtually without using additional resources (so not
slowing down the recording)."
"Anyone know if this still true?
....is this also one of the goals for the Elphel Vision interface?
... the 7" tablet PC has a lot of trouble with the high resolution stream. I don't think it'll ever show real time HD video.
Even a horribly blocky image is preferable, if it's real time, to a beautiful HD image that is one second behind.
Especially if we have the focus aid.
(Oscar)
"...apparently this has never been implemented. But as I also recently
learned is the FPGA only filled 40%. That means there are plenty of
resources. If we find someone who is a skilled verilog developer it
might be possible to have the feature added.
....After all I also think its the only way to enable us using something
like the beagleboard as viewfinder."
(Sebastian)
Regards,
Oscar
Biel Bestue May 23rd, 2009, 12:39 PM I don't get it, Oscar. What are you trying to say? Does resizing have the same benefit as pixel binning? Can the luminosity (or even the chroma component) of 16 pixels be combined to form 4 brighter (and maybe more color-correct) pixels?
And could you then combine that image with another resized one (16 to 4, without the post pixel-binning step)
and get a latitude similar to the process I wanted to do?
Sebastian Pichelhofer May 23rd, 2009, 03:14 PM Very first alpha version of ElphelVision Java Applet is ready!
This is still in a very rough and early state, but it should work and may already be kind of useful.
It lets you view the live video stream, shows camera info and records video files to the camera's internal HDD.
Detailed instructions on how to set it up and use it are included in README.txt.
Currently it only works under Windows.
The final version should be platform independent as long as your platform can run Java and has an mplayer build.
Please let me know if it worked, how it worked, etc.
http://community.elphel.com/Apertus/ElphelVision_Alpha01.zip
Oscar Spierenburg May 23rd, 2009, 05:16 PM Hey Sebastian, that's good news! I'll try to test it right now, or first thing tomorrow (I just came back from a trip to Holland). No problem that it's Windows only; I just installed Windows 7 (test version) on the tablet PC. I guess targeting Windows for the end user makes it a lot more attractive for lots of people.
I don't get it, Oscar. What are you trying to say? Does resizing have the same benefit as pixel binning? Can the luminosity (or even the chroma component) of 16 pixels be combined to form 4 brighter (and maybe more color-correct) pixels?
Sorry if I'm confusing things :-) I'm not a programmer, but I was trying to follow your idea: if the camera gets more light by pixel binning, then why not do two passes: first the normal ("un-binned") image, which gives you a "low luminosity" image to save, then the pixel-binned "high luminosity" image, and compare the burned-out parts of the "high" image against the "low" one.
I thought it might be relevant to mention that we were talking about two 'streams' in parallel.
Biel Bestue May 23rd, 2009, 06:59 PM I thought it might be relevant to mention that we were talking about two 'streams' in parallel.
Ah, OK, now I get it. But this "uncompressed stream" means exactly that, an uncompressed stream, not before the raw encoding but after it. I was referring to the stream out of the chip, that is, before creating any raw file. But, as Sebastian said, you can't have both binned and unbinned images out of the chip; you can only have one (or is it possible?).
Sebastian ---
I'm starting to program in Java and I would love to contribute to the program. I've written a little application that converts a given number of frames at a given FPS to an SMPTE timecode (at the moment it only handles integer frame rates, so I don't know how to do 29.97 yet, but I'll learn).
Are you interested in it?
Sebastian Pichelhofer May 24th, 2009, 01:40 AM Sebastian ---
I'm starting to program in Java and I would love to contribute to the program. I've written a little application that converts a given number of frames at a given FPS to an SMPTE timecode (at the moment it only handles integer frame rates, so I don't know how to do 29.97 yet, but I'll learn).
Are you interested in it?
Open source software development is all about collaboration and working together. So this is a good idea in any case.
My suggestion for your next lesson: read about the float datatype :)
Biel Bestue May 24th, 2009, 03:24 AM No, no, I know it :) But what I don't want to start getting is fractional frame numbers, so I need the computer to understand that you can't have 3.5 frames; there are 4 frames or 3, not 3.5, you know, all that kind of thing. Anyway, sorry to go off-topic; I'll send you the program and you decide.
I'm trying to write a Java library of SMPTE helpers, like letting the user enter a String and having the program interpret it as an SMPTE code, or throwing an exception back to the user if what they have written is not a valid SMPTE timecode... all that kind of thing.
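For reference, a minimal Java sketch of the integer-rate conversion Biel describes (class and method names are made up here, not his actual library): it turns an absolute frame count into HH:MM:SS:FF. 29.97 fps would additionally need NTSC drop-frame handling (skipping frame numbers 00 and 01 at the start of every minute not divisible by ten), which is left out.

/*
 * Sketch only: convert an absolute frame count to an SMPTE-style timecode
 * string for integer frame rates. Drop-frame (29.97) is not handled.
 */
public class SmpteTimecode {

    /** Convert an absolute frame count to HH:MM:SS:FF at an integer frame rate. */
    static String fromFrames(long frameCount, int fps) {
        if (fps <= 0 || frameCount < 0) {
            throw new IllegalArgumentException("fps must be > 0 and frameCount >= 0");
        }
        long frames  =  frameCount % fps;
        long seconds = (frameCount / fps) % 60;
        long minutes = (frameCount / (fps * 60L)) % 60;
        long hours   =  frameCount / (fps * 3600L);
        return String.format("%02d:%02d:%02d:%02d", hours, minutes, seconds, frames);
    }

    public static void main(String[] args) {
        System.out.println(fromFrames(1800, 25));  // 00:01:12:00
        System.out.println(fromFrames(86399, 24)); // 00:59:59:23
    }
}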
Oscar Spierenburg May 24th, 2009, 12:11 PM Very first alpha version of ElphelVision Java Applet is ready!
This is still in a very rough and early state, but it should work and may already be kind of useful.
It lets you view the live video stream, shows camera info and records video files to the camera's internal HDD.
Detailed instructions on how to set it up and use it are included in README.txt.
Currently it only works under Windows.
The final version should be platform independent as long as your platform can run Java and has an mplayer build.
Please let me know if it worked, how it worked, etc.
http://community.elphel.com/Apertus/ElphelVision_Alpha01.zip
OK, I've uploaded the files to the camera. Anyway, it's a great start! It seems to be working fine. I had to restart Firefox a couple of times until I figured out how to use it; it seems to hang when you do something wrong.
I noticed that it only works when camvc is also running and updating the image. So camvc is showing about a frame a second, but ElphelVision is streaming the video at full FPS. The stream (on my fast laptop) is not real time; I think it's about 0.5 sec behind.
Those are my findings so far; I'll test it on the netbook tomorrow.
Again, great job so far!
Chris Hurd May 30th, 2009, 10:48 AM Now that we have a dedicated forum for Apertus, it's time to close this particular
thread. Please continue any further discussion by creating new threads. Thanks all,