View Full Version : High Definition with Elphel model 333 camera
Robert Schiebel August 21st, 2006, 05:26 AM Hi,
one question, Phil: please tell me which lens you used in Rome.
I'm looking for a wide-angle lens like this.
Robert
Wayne Morellini August 21st, 2006, 05:54 AM Thanks Zsolt.
It might take a lot of time before I am ready; some unexpected things have come up. I'll probably email you when I am freer.
The register suggestion (also implying on-chip RAM as registers) was only on the basis of doing a pixel at a time, needing only a few memory words/registers for surrounding pixels and intermediate results, not on 20*20 blocks of pixels.
Wayne Morellini August 21st, 2006, 08:06 AM If anybody is interested: Michael Schoeberl, who has experience with noise removal in medical imaging, has put me onto some very good noise removal software and plugins over at the comp.compression thread; they also work on compression artifacts. There are both still and video versions.
http://www.neatimage.com/
http://www.neatvideo.com/index.html?snim
It would lead to cleaner, more compressible files in post. I am aiming to look for routines suitable for use on camera as well.
Wayne Morellini August 21st, 2006, 10:08 AM Have looked over the examples, and the results are pretty amazing. I have compared the before and after file sizes on their site, and mostly the reductions are down to less than half the original size, usually with less reduction for the stills. I must admit this does not entirely make sense; I think the re-compressor they are using is not doing such a good job, otherwise I would expect more reduction on average than this. There is some minor loss of detail at times, and gain in some other places, as it tries to predict what is what. But still very nice.
http://www.neatimage.com/examples.html
http://www.neatvideo.com/examples.html
http://www.neatimage.com/reviews.html
Reported to be very good too (see conclusions)
http://www.michaelalmond.com/Articles/noise.htm
Zsolt Hegyi August 21st, 2006, 02:29 PM Have looked over the examples, and the results are pretty amazing.
Yes they're not bad but consider that these are interpolated rgb images, not bayer. Interpolation introduces noise and jpeg compression introduces more. This noise is algorithmical and can be removed later by software if correctly recognized.
You're probably aware that the cmos chips we intend to use contain analog noise removal circuits and they're really good (removing the noise digitally is not nearly as efficient). Well, unless you push up the analog gain in the chip - that seems to be the case with several images posted on the above link. And the other set of images are just poorly jpeg-compressed.
By using correctly exposed sensors with normal analog gain levels we should have no significant noise on the raw bayer images.
Andrey Filippov August 21st, 2006, 11:56 PM The biggest noise visible on CMOS (compared to CCD) is FPN (fixed pattern noise) caused by differences between the pixels. Actually, the camera FPGA code in the 333 (and even in the older 313) has code to subtract an array from the image and multiply by another, reading that correction data from the same video memory. This processing can be done in parallel with the compression, so there is no slowdown.
But I never got the time to write the code to calculate those arrays without individual measurements for each particular setting (and sensor temperature), such as taking an image, closing the cap on the lens and getting a dark one to subtract. It should be possible to do the compensation without that - just while using the camera.
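Roughly, that per-pixel correction looks like this (a sketch in Java rather than the actual FPGA code; the array names, data layout and 12-bit range are only for illustration):

// Per-pixel FPN correction: subtract a dark-frame offset and apply a gain,
// mirroring the "subtract an array and multiply" step described above.
// offsets[] would come from an averaged dark frame, gains[] from a flat field.
public final class FpnCorrect {
    public static void apply(int[] raw, int[] offsets, float[] gains, int[] out) {
        final int max = 4095;                     // assuming 12-bit sensor data
        for (int i = 0; i < raw.length; i++) {
            int v = raw[i] - offsets[i];          // remove fixed-pattern offset
            if (v < 0) v = 0;
            v = Math.round(v * gains[i]);         // per-pixel gain (PRNU) correction
            out[i] = Math.min(v, max);
        }
    }
}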
Zsolt Hegyi August 22nd, 2006, 12:46 AM taking an image, closing the cap on the lens and getting a dark one to subtract.
Sounds good; I mean there's room for improvement. I think I'm going to do that when the camera is ready. It seems quite simple, especially if integrated into the post-processing instead of the FPGA.
If we use strictly interframe-difference compression then this noise won't hurt the encoding rates (because it's FPN and constant across frames, it cancels out in the differences). And if we use lossless compression, then we can reproduce these "black frames" without errors.
Wayne Morellini August 22nd, 2006, 10:18 AM Yes they're not bad but consider that these are interpolated rgb images, not bayer. Interpolation introduces noise and jpeg compression introduces more. This noise is algorithmical and can be removed later by software if correctly recognized.
There are far better ways than interpolation to remove noise; I understand they use a number of different methods, including temporal, where they compare across frames.
But you have to remember that this is only for post processing MJPEG and Theora at the moment, not bayer, though I expect you can do bayer through one of the camera noise removers listed in that review link.
You're probably aware that the cmos chips we intend to use contain analog noise removal circuits and they're really good (removing the noise digitally is not nearly as efficient).
Yes, but the purpose is to clean up after shooting, or, if I can find something suitably simple for the FPGA, to increase compression. According to the information I have read, noise limits intraframe compression performance and severely limits interframe lossless compression: extra, inconsistent complexity in the image limits intra compression, and changes in pixel values between frames reduce inter performance.
By using correctly exposed sensors with normal analog gain levels we should have no significant noise on the raw bayer images. (Eg. use lighting if there's too low light around and always use manual exposure control.)
On a film set, yes, but when used in the field, under various conditions, extremes of exposure have to be used, which produces noise as with most cameras. The S/N is also not very high on these chips compared to the IBIS (with a proper external setup), Altasens, or ENG cameras, but probably close to a number of HDV cameras.
I personally want to increase the aperture through a condenser 35mm adaptor, but these chips have a limit from their microlenses that restricts maximum aperture (another reason why the IBIS5a was so good: it had no microlenses, and you could use very fast apertures of less than 1 for more stops of light, and great latitude extension). I would like to minimise lighting (a personal experiment).
Thanks
Wayne.
Zsolt Hegyi August 22nd, 2006, 11:59 AM There are far better ways than interpolation to remove noise
I meant the source images not the results.
But you have to remember that this is only for post processing MJPEG and Theora at the moment, not bayer
Okay, I've got it now, sorry.
another reason why the IBIS5a was so good it had no microlens and you could use very fast apertures
And what do you think the maximum aperture is for the microns?
Switching topics: today I finished the development of the encoder. The source material for the tests were the following files:
http://3lib.ukonline.co.uk/cricket/imran1.mpg
http://www.elphel.com/3fhlo/samples/snakeriver.mov
I converted both of them to grayscale MPEGs before processing. The Java/JMF framework provided the decoded buffers with a bit depth of only 5, so I upconverted them to 12 bits (shifting 7 bits to the left, zeros coming in on the right) and replaced the lower 3 bits with random noise. I write this because it's not the ideal input - we could get better or worse results when trying it with a real 12-bit stream.
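Roughly, that upconversion looks like this in Java (a sketch for illustration only; the class and method names are not from the actual test code):

import java.util.Random;

// Expand 5-bit samples to 12 bits: shift left by 7, then replace the
// lowest 3 bits with random noise to mimic real sensor LSB noise.
public final class BitDepth {
    private static final Random rnd = new Random();

    public static int[] expandTo12Bits(int[] fiveBitPixels) {
        int[] out = new int[fiveBitPixels.length];
        for (int i = 0; i < out.length; i++) {
            int v = (fiveBitPixels[i] & 0x1F) << 7;   // 5 data bits, 7 zero bits
            v |= rnd.nextInt(8);                       // random noise in the lower 3 bits
            out[i] = v;                                // 12-bit result, range 0..4095
        }
        return out;
    }
}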
With both of the above files I've got the targeted 3:1 ratio with lossless interframe compression. So my prognosis was good: the much lower frequency content of the time domain really helped the same encoder routine to provide better results than using it in intraframe mode.
I'd like to add that in Java, the encoder routine is only 10 lines (!) long, so I should have no problems implementing it in Verilog... And it will be fast because it processes 1 pixel per clock cycle. And because of its small size I could even instantiate an encoder for every pixel in a 5x5 block.
I have some more ideas for optimization but I don't want to implement them until I have a camera and the test results demand better compression performance than the current one.
Now I quit posting to this forum for a while; you'll probably hear from me again when I have a working camera in my hands. Hope this happens very soon.
Thanks,
Zsolt
Andrey Filippov August 22nd, 2006, 12:32 PM You're probably aware that the cmos chips we intend to use contain analog noise removal circuits and they're really good (removing the noise digitally is not nearly as efficient). Well, unless you push up the analog gain in the chip - that seems to be the case with several images posted on the above link. And the other set of images are just poorly jpeg-compressed.
What do you mean by this magic analog noise removal? And what kind of noise do you mean?
CMOS have on-chip black level compensation (still not as good as "digital" - subtracting individual pixel black levels). The only analog processing that can be better than digital (and it mostly applies to CCD) is true binning (with reduction of resolution, of course). The S/N gain compared to averaging of the output pixel data (which can be done on chip for CMOS, always off-chip for CCD) can be up to the square root of the number of pixels binned (e.g. 2 for 2x2 binning). The reason is that binning by combining pixel charges does not introduce any noise, so each cluster of binned pixels gets noise from the output amplifier only once; if you average pixel values after the amplifier, each pixel will have the output amplifier noise added, so averaging of non-correlated noise will give you the square root.
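As a worked example of that square-root argument (my notation: S is the per-pixel signal and \sigma_r the read noise the output amplifier adds per readout, assumed to be the only noise source):

\mathrm{SNR}_{\mathrm{binned}} = \frac{4S}{\sigma_r} \qquad
\mathrm{SNR}_{\mathrm{averaged}} = \frac{4S}{\sqrt{4}\,\sigma_r} = \frac{4S}{2\sigma_r} \qquad
\frac{\mathrm{SNR}_{\mathrm{binned}}}{\mathrm{SNR}_{\mathrm{averaged}}} = \sqrt{4} = 2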
The above is true only for the noise of the output amplifier that can be a major factor with CCDs in some conditions (pixel thermal noise low - short exposure and/or cooling).
With CMOS the most visible is FPN that can be nicely removed by digital subtraction of the "dark" image.
Zsolt Hegyi August 22nd, 2006, 12:50 PM What do you mean by this magic analog noise removal?
It's not magic, I was talking exactly about the black level which has both analog and digital correction phases (at least in the micron). It is not amplifier noise that it removes but thermal noise generated by the pixels themselves. Sorry for not being so specific.
Anyway, are you talking about this method not being as good as the array subtraction because the pixels used as the base of the black level only surround the visible area, yet they're used for every pixel being read out?
Zsolt
Andrey Filippov August 22nd, 2006, 01:01 PM It's not magic, I was talking exactly about the black levels which has both analog and digital correction phases (at least in the micron). It is not amplifier noise that it removes but thermal noise generated by the pixels themselves. Sorry for not being so specific.
Zsolt
There is no way to reduce thermal noise other than cooling the chip. Subtraction of the dark level as in the Micron chips is OK as a first approximation, but measuring/calculating (and subtracting) per-pixel data produces much better results.
Wayne Morellini August 23rd, 2006, 10:20 AM And what do you think the maximum aperture is for the microns?
Andrey would know - I forget, but probably at around 1.4 it will start to wash out (or was that for 3-chip? One was closer to 1.6, I think).
Switching topics: today I finished the development of the encoder. The source material for the tests were the following files:
http://3lib.ukonline.co.uk/cricket/imran1.mpg
http://www.elphel.com/3fhlo/samples/snakeriver.mov
I converted both of them to grayscale MPEGs before processing. The Java/JMF framework provided the decoded buffers with a bit depth of only 5, so I upconverted them to 12 bits (shifting 7 bits to the left, zeros coming in on the right) and replaced the lower 3 bits with random noise. I write this because it's not the ideal input - we could get better or worse results when trying it with a real 12-bit stream.
With both of the above files I've got the targeted 3:1 ratio with lossless interframe compression. So my prognosis was good: the much lower frequency content of the time domain really helped the same encoder routine to provide better results than using it in intraframe mode.
I'd like to add that in Java, the encoder routine is only 10 lines (!) long, so I should have no problems implementing it in Verilog... And it will be fast because it processes 1 pixel per clock cycle. And because of its small size I could even instantiate an encoder for every pixel in a 5x5 block.
I am impressed by the ten lines of code, but exactly what sort of system are you doing, can you describe it? With only ten lines you could almost just use an array of small processing cells (look for MISC processor cores on the web, less than 16K gates each) to achieve it without a custom design.
Have you got any raw bayer files to test? There might be some raw bayer footage on the Silicon Imaging camera threads, or the Drake camera thread, or Obin Olson's original camera thread. Compressed 4:2:0 files already have much of their information removed, and the hardest data to compress is in the last 2 bits.
Zsolt Hegyi August 23rd, 2006, 11:46 AM I am impressed by the ten lines of code, but exactly what sort of system are you doing, can you describe it?
I will, as soon as the code works in a camera and produces the results I have now. As I wrote earlier it's an integer-difference compression with a bitstream output.
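Just to give a feel for the general idea, a generic integer-difference coder with a bitstream output might look something like this in Java (this is only a sketch of the general technique, not the actual routine, which hasn't been published yet):

import java.io.ByteArrayOutputStream;

// Generic interframe integer-difference coder: each sample is coded as the
// difference from the same pixel in the previous frame, written with a simple
// Elias-gamma-style variable-length code (small differences -> few bits).
public final class DiffEncoder {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();
    private int bitBuf = 0, bitCount = 0;

    public void encodeFrame(int[] current, int[] previous) {
        for (int i = 0; i < current.length; i++) {
            int diff = current[i] - previous[i];
            int mapped = diff >= 0 ? diff << 1 : (-diff << 1) - 1; // zig-zag to non-negative
            writeGamma(mapped + 1);                                 // gamma codes start at 1
        }
    }

    private void writeGamma(int value) {
        int bits = 32 - Integer.numberOfLeadingZeros(value);        // length of value in bits
        for (int i = 0; i < bits - 1; i++) writeBit(0);              // unary length prefix
        for (int i = bits - 1; i >= 0; i--) writeBit((value >>> i) & 1);
    }

    private void writeBit(int bit) {
        bitBuf = (bitBuf << 1) | bit;
        if (++bitCount == 8) { out.write(bitBuf); bitBuf = 0; bitCount = 0; }
    }

    public byte[] finish() {
        while (bitCount != 0) writeBit(0);                           // pad the last byte
        return out.toByteArray();
    }
}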
With only ten lines you could almost just use an array of small processing cells (look for MISC processor cores on the web, less than 16K gates each) to achieve it without a custom design.
For ten lines one doesn't need to do really heavy design work... What I'm more interested in is how this routine will fit into the Elphel design. If I take one of those cores then I might have a hard time fitting it into the 353.
There might be some raw bayer footage on the Silicon Imaging camera threads, or the Drake camera thread, or Obin Olson's original camera thread.
I don't remember bumping into any raw videos when I was browsing through those threads. Anyway, in what kind of format could they have posted them, if they did?
Compressed 4:2:0 files already have much of their information removed, and the hardest data to compress is in the last 2 bits.
I know, that's why I inserted 3 bits of random noise into the pixels - which the encoder couldn't really compress effectively, so basically I had the lower 3 bits uncompressed... Hence the only 3:1 ratio.
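A quick back-of-the-envelope check (my own illustration, not part of the encoder) of why that ratio is about the best one can expect:

// Rough upper bound on the lossless ratio when the lowest `noiseBits` of each
// `pixelBits`-bit sample are pure random noise and everything else compresses
// almost perfectly: the compressed size can't drop below the noise bits.
public final class RatioBound {
    public static double bestCaseRatio(int pixelBits, int noiseBits) {
        return (double) pixelBits / noiseBits;   // e.g. 12 / 3 = 4.0
    }

    public static void main(String[] args) {
        System.out.println(bestCaseRatio(12, 3)); // prints 4.0 - so 3:1 is close to the limit
    }
}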
Wayne Morellini August 23rd, 2006, 01:10 PM I will, as soon as the code works in a camera and produces the results I have now. As I wrote earlier it's an integer-difference compression with a bit stream output.
Cool, no DCT.
For ten lines one doesn't need to do really heavy design work... What I'm more interested in is how this routine will fit into the Elphel design. If I take one of those cores then I might have a hard time fitting it into the 353.
Doesn't the 353 have something like 1.2M gates? The MISC core is only around 16K (I think). www.ultratechnology.com/chips.htm has an archived list of Forth chips that had the FPGA MISC variants (there are other sites on the web with more info).
I don't remember bumping into any raw videos when I was browsing through those threads. Anyway, in what kind of format could they have posted them, if they did?
They did; I think they packed them in TIFFs and the like (there are also still-camera raw formats) and maybe custom formats. I know Obin's for certain. You could email him. But when you get your camera you can shoot small windowed raw frames output as monochrome bitmaps. You could even get somebody to do stills here.
I know, that's why I inserted 3 bits of random noise into the pixels - which the encoder couldn't really compress effectively, so basically I had the lower 3 bits uncompressed... Hence the only 3:1 ratio. Without the noise the ratios went as high as 12:1 (nonsense, the lower 7 bits were zeroes) and using the original 5-bit-wide data without upscaling, the ratio was around 4:1.
If you are not compressing 4 blank padded bits, we have a very promising start. The three bits of random noise should go some way to offsetting the lack of detail in the heavily compressed video. You could compare against the existing lossless video codecs that I linked to on the wiki before, even on their test footage.
Another interesting link I came across, in case it's useful:
http://www.data-compression.info/Algorithms/RC/
Thanks for your efforts Zsolt, keep it up. I wonder where everybody else has gone with their cameras?
Wayne.
Wayne Morellini August 25th, 2006, 04:47 PM Over at the comp.compression thread, a commercially oriented guy from this company: http://www.algopix.com/ has turned up and is interested in developing noise removal for increasing lossless compression ratios.
He is aiming to do a simple two-frame comparison, where he uses the noise removal to also identify interframe movement, and then compresses losslessly.
Thanks
Wayne.
Wayne Morellini August 28th, 2006, 01:30 AM I did a search for something and accidentally came across the thread I posted on the comp.compression newsgroup, for reference:
http://groups.google.com/group/comp.compression/browse_thread/thread/af007e9fd5951cc7/09ee766672dc7727
It seems to be turning up in different places:
http://archives.devshed.com/forums/compression-130/lossless-near-lossless-bayer-pixel-shift-and-noise-removal-advice-for-1937388.html
http://groups.google.co.jp/group/comp.compression/browse_thread/thread/af007e9fd5951cc7/09ee766672dc7727?lnk=raot&hl=ja
Thanks
Wayne.
Matteo Pozzi September 22nd, 2006, 03:25 AM look at this!
"Leveraging its heritage in high-speed, high-performance imaging, Micron will unveil a new 5-megapixel high-definition (HD) image sensor for mainstream digital cameras. The new sensor is capable of capturing video at 60 frames per second (fps) in 720p (progressive) format and 30 fps in 1080p format. Micron will begin sampling this sensor in the fall of 2006.
Designed specifically for digital video camcorder applications, Micron will also introduce a new HD video sensor that captures 60 fps in 720p format. Designed to work with long-range zoom lenses, the sensor was built using Micron’s stunning 2.2-micron pixel technology. Additionally, the sensor has additional pixel area for image stabilization, which reduces the effect of shaky and blurred images typically caused by jittery hands or camera-shake. This sensor is expected to begin sampling in the first quarter of 2007."
taken from:
http://www.micron.com/about/news/pressrelease.aspx?id=DFA76239EFA2B68E
Rob Scott September 25th, 2006, 12:02 PM is there a way to view the graphical user interface ... so I have some ideas for a touch-screen user interface for a 7" monitor ... I don't know Linux (for now), so what program do I need, and need to learn, to change the GUI of the Elphel recording program? And is there someone who wants to help me?
I'm working on something similar -- ObscuraCam (http://www.obscuracam.com/) -- which is a "heads-up" recording program specifically for filmmaking; and also a post-processing program to convert the raw files into something useful.
I've just recently discovered this thread, and I find it very intriguing. The Elphel 353 could be an excellent choice for a DIY cinema camera.
Wayne Morellini September 26th, 2006, 10:36 AM I just realised it wasn't Rob Lohman posting to the thread. Long time no hear - still got the same email?
So, still doing the Obscura - how is it going?
Actually, Zsolt, what is happening after a month?
Matteo, I think that is the one Andrey aims to use.
Rob Scott September 27th, 2006, 07:48 AM Long time no hear - still got the same email?
So, still doing the Obscura - how is it going?
Good to hear from you Wayne! Yup, same e-mail address.
I re-started the project a month or so ago and am working on it as I have time. I'd love to get my hands on a Sumix M73 or an Elphel camera ... wish I had some extra cash.
Wayne Morellini September 27th, 2006, 10:26 AM If you contact Andrey he has special pricing for developers, or maybe ask him for a loaner.
I find those SourceForge pages hard to follow - what's the level you are up to?
If you look at my technical thread, you will find updates on some interesting technologies, apart from three or so new camera threads. The last few pages have the most relevant info for you. I don't have the technical link here with me, but here is another link that I have open in the background, on a sensor technology that has 170 dB range (latitude?).
http://www.dvinfo.net/conf/showthread.php?p=545177&posted=1
Rob Scott September 27th, 2006, 02:43 PM I find those SourceForge pages hard to follow - what's the level you are up to?
It's basically alpha quality, currently only supporting the SI-1300CL.
The capture software will do 1280x720p 10-bit raw @ 24fps; on my system, this requires two hard drives in tandem. Multiple-drive support is built in; of course, you can use a RAID array or a very fast drive. It's a full-screen "heads-up" UI using DirectX. I'm going to investigate moving to OpenGL to be more portable.
The conversion software will process the raw images into 16-bit TIFF. Eventually it will support OpenEXR and DNG (for processing elsewhere). It is currently command-line only.
[Edit] One of the issues going forward is that nobody I know of has the SI-1300, and I doubt anyone will go out and buy one just to use this software. Obin was doing something similar (I didn't read the entire thread) and tried the SI-3300, then gave up (as far as I know).
On the other hand, the Elphel is attractive because its firmware is also open-source, making it an excellent match. If the 353 is affordable I'm going to be very tempted to buy one.
Noah Yuan-Vogel September 27th, 2006, 04:11 PM Hmm, Rob, I seem to have an M73 sitting around that I haven't gotten to use much lately. What are the chances you'll be able to support it? I've gotten a little way with my VB.net capture program and the Sumix API, but have run into some issues and now have next to no time to work on it, and may not in the near future. I might be able to lend you the camera or sell it to you for a low price or something. If not, I'm still very interested in your project, so please keep us updated on your progress. The Elphel 333 also looks like a good option for your program, but it is limited to outputting JPEGs over 100BT; still, it could be great given a good interface for controlling blanking and LUTs.
Rob Scott September 28th, 2006, 07:33 AM Hmm, Rob, I seem to have an M73 sitting around that I haven't gotten to use much lately. What are the chances you'll be able to support it?
I already downloaded the API and it looks quite straightforward -- certainly no more difficult than the CameraLink API I've already implemented. I would love to borrow it from you and work on support for it. If I have some success (and I can find some cash) I may be interested in buying it from you. Feel free to e-mail me directly.
The elphel 333 also looks like a good option for your program
I agree, the Elphel camera is very cool, but the 353 sounds nearly perfect, so I will probably wait for that one. Or I guess I could start with the 333 ... well, one thing at a time.
Thanks for your interest!
Wayne Morellini September 28th, 2006, 11:54 PM It's basically alpha quality, currently only supporting the SI-1300CL.
The capture software will do 1280x720p 10-bit raw @ 24fps; on my system, this requires two hard drives in tandem. Multiple-drive support is built in; of course, you can use a RAID array or a very fast drive. It's a full-screen "heads-up" UI using DirectX. I'm going to investigate moving to OpenGL to be more portable.
I would suggest supporting both, as DirectX is probably going to give the best hardware support options.
What computer hardware are you using to achieve this? From what you are saying, you don't have a camera to test it with.
The conversion software will process the raw images into 16-bit TIFF. Eventually it will support OpenEXR and DNG (for processing elsewhere). It is currently command-line only.
Yes, there are a number of options. If you approached Steve from SI you might be able to access some of their stuff and their workflow, and if you approach David from CineForm, CineForm RAW is available for license. The codec being developed here is probably a very good idea (not to mention previous work by Juan and Jason under different projects).
[Edit] One of the issues going forward is that nobody I know of has the SI-1300 and I doubt anyone will go out and buy one just to use this software. Obin was doing something similar (I didn't read the entire thread) and tried the SI-3300 then gave up (as far as I know).
You do not have a camera - what are you doing your testing on? Obin was offering to sell his; maybe there was somebody else too.
The latest advice on getting the thing to run at full speed (see my technical thread for more details) is that EPIX's (is that the name? I forget, but you know the one I mean) latest PCIe one-lane capture card has buffering, and will quite happily record continuously without problems, according to them. They claim it has just enough buffering to iron out the problems, but recording to hard disk requires the more advanced software, or rolling your own with the developer software.
Speaking to a local distributor of another camera (I think) about their setup and capture software, they informed me that there was a simple trick to getting their software to capture continuously without dropped frames or timing difficulties. I forget what my thoughts were on this - he wouldn't say, but I had an idea what it was about.
About main board hardware requirements: Blackmagic has an Intensity HDMI capture card, and looking at the requirements they mention a number of recommended boards and chipsets, but recommend against any that use integrated graphics in their chipset (though I imagine a discrete graphics card will be OK with them) because they find that the graphics access to main memory stuffs up the bandwidth flow. I imagine that a new chipset like the Intel G965 Express (I would wait for final drivers and see whether to wait for the stepping after C-2) could possibly be different though, because it uses dual-channel memory, and the bandwidth is 12.8GB/s.
I found a number of companies that could do what is required for film recording at significantly cheaper prices than what SI offers, even using Dalsa.
On the other hand, the Elphel is attractive because its firmware is also open-source, making it an excellent match. If the 353 is affordable I'm going to be very tempted to buy one.
Yes.
Wayne Morellini September 29th, 2006, 12:07 AM Hmm, Rob, I seem to have an M73 sitting around that I haven't gotten to use much lately. What are the chances you'll be able to support it? I've gotten a little way with my VB.net capture program and the Sumix API, but have run into some issues and now have next to no time to work on it, and may not in the near future. I might be able to lend you the camera or sell it to you for a low price or something. If not, I'm still very interested in your project, so please keep us updated on your progress. The Elphel 333 also looks like a good option for your program, but it is limited to outputting JPEGs over 100BT; still, it could be great given a good interface for controlling blanking and LUTs.
Noah, could you post a link to what these issues are?
Yes, Rob,
The 333 is good for experimenting with, but is too limited, unless you want to experiment with outputting greyscale JPEGs of the raw pattern over firewire. I am still interested in what results anybody could get out of that, compared to the same frame at 75%+. Andrey is prototyping the 353; you might be able to get a big developer discount for that, as he normally offers.
I forgot to mention, the EPIX camera card combos are cheap.
To everybody else: I have researched the JPEG issues, and not all codecs are created equal - for instance the Apple one. That 75% 4:2:2 is the minimum that we should go for (or greyscale of the bayer raw); I think 4:4:4 is above that.
Rob Scott September 29th, 2006, 07:55 AM I would suggest supporting both, as DirectX is probably going to give the best hardware support options.
Since I already have DirectX support written, that's probably a good idea.
What computer hardware are you using to achieve this? From what you are saying, you don't have a camera to test it with.
I do have the SI-1300CL, and the software works with it. I don't have a Sumix camera, which would be a far more affordable (though less capable) choice.
... SI workflow ... the cineform Raw is available for license.
It sounds like the Cineform codec is excellent, but my goal is for this project to be entirely open source.
The codec being developed here is probably a very good idea
There is a codec under development? I must have missed that -- where is it?
... run full speed ... Epix latest PCIE one lane capture card has buffering ...
I can get the camera to run full speed up to the limitations of my current system. I have only 32-bit PCI and no PCIe slots, and no funding to upgrade at the moment.
Blackmagic has an Intensity HDMI capture card...
HDMI is certainly interesting, but probably way beyond the scope of this project for now. Besides, are there any cameras out there which would squirt raw (Bayer) frames through HDMI? Otherwise there isn't really much point.
The 333 is good for experimenting with, but is too limited, unless you want to experiment with outputting greyscale JPEGs of the raw pattern over firewire.
Yes, it would be cool to experiment with, but the 100 Mbps is just too slow. Firewire? Last time I checked it just had Ethernet. Hmmm ... I wonder if the JPEG FPGA core supports 16-bit grayscale?
Wayne Morellini September 29th, 2006, 10:29 AM There is a codec under development? I must have missed that -- where is it?
Last few pages; Zsolt is working on it. You could probably get hold of the Juan/Obin one if you asked nicely. I think the more techniques you can combine, the better. I suspect Zsolt's, however, will surpass the other ones we had, with a richer feature set.
HDMI is certainly interesting, but probably way beyond the scope of this project for now. Besides, are there any cameras out there which would squirt raw (Bayer) frames through HDMI? Otherwise there isn't really much point.
I've questioned them carefully on custom modes, and this looks like the case (unless a camera could be made to output a bayer pattern as a monochrome image). It is 4:2:2 with up to 8 to 10 bits (depending on the camera). The card is $249. Now, most of these cameras will not have access to the quality sensors that the cameras we are talking about do. I also have an alternative possible high-quality HDMI capture in mind.
Yes, it would be cool to experiment with, but the 100 Mbps is just too slow. Firewire? Last time I checked it just had Ethernet. Hmmm ... I wonder if the JPEG FPGA core supports 16-bit grayscale?
Whoops, my bad. I think it does do above-8-bit monochrome JPEG, but if you consider that 8-bit 720p bayer is around 180 Mb/s+, the photo JPEG option makes sense (though the 333 is very restricted in bandwidth, so the 100 Mb/s might not be too useful - see the discussion several pages ago, probably 4 pages back). I would go for 10-bit packing, if available, and forget the 16 bits.
Rob Scott September 29th, 2006, 11:25 AM Last few pages; Zsolt is working on it.
D'oh ... wasn't paying very close attention there, sorry. Now that I've read back through the thread, that does sound very promising.
Thanks!
Zsolt Hegyi September 30th, 2006, 02:36 AM Hello,
Thanks for your question Wayne, I'm currently developing the video editor plugin for the "codec".
But I must ask you: please don't exaggerate. As I stated earlier, the encoder will only do the minimum to fit into the 353's disk write speed, nothing more. If that means only 2:1 compression then that's what it will be. I wouldn't even call it a "codec" at all, as it will be as dumb as possible. And if I have to do a really complex encoder then I won't do it.
Also, I'm only making a Verilog encoder and an importer for video editing software, so the term "codec" isn't appropriate, again. So if Rob is looking for a general-purpose codec then this might not be his choice.
And as I still don't have a camera I cannot test the encoder, so I don't know whether it will produce the desired results or not. Let's wait on this.
Zsolt
Juan M. M. Fiebelkorn September 30th, 2006, 02:36 PM Big Question is:
Why not give the Elphel camera a PCMCIA interface?
It shouldn't be so difficult; there is already Verilog/VHDL code for a PCI interface at OpenCores... just add a device driver and a grain of salt.
Juan M. M. Fiebelkorn September 30th, 2006, 02:51 PM BTW, is there any way for me to get uncompressed 10- or 12-bit bayer video?
I mean, it was impossible for me to get stuff like that to experiment with.
Rob Scott September 30th, 2006, 08:55 PM the encoder will only do the minimum to fit into the 353's disk write speed, nothing more. If that means only 2:1 compression then that's what it will be. I wouldn't even call it a "codec" at all, as it will be as dumb as possible.
Zsolt, that sounds like a very good approach. Will it be open source?
BTW, do you know if the 353 will support writing to disk and streaming Ogg Theora video simultaneously?
Thanks!
Wayne Morellini September 30th, 2006, 09:42 PM Zsolt, this is sucking time out of my tight schedule, but where did I exaggerate? I've asked you questions in the past about your claims of much more than 2:1 performance from your experiments, but received very little reply. I merely stated that, given your large claims, you could outdo the lossless codec that was worked on for Obin's camera, which was simple and around 2:1. The CineForm codec is visually lossless (not truly lossless, so not so much of interest here), which is a different matter from what I was talking about (but from your previous claims it sounds like you could have a go at that). The need for a codec that could, at a minimum (2:1), squeeze raw through was originally my proposal anyway.
So, please, I don't need anybody else needlessly coming down on me; I am aware of the limitations and the possibilities, which is usually more than we can say for most.
You and Rob have fun together. As you can see, I am mostly away from this, deliberately.
Thanks for your efforts.
But I must ask you: please don't exaggerate. As I stated earlier, the encoder will only do the minimum to fit into the 353's disk write speed, nothing more. If that means only 2:1 compression then that's what it will be.
Also, I'm only making a Verilog encoder and an importer for video editing software, so the term "codec" isn't appropriate, again. So if Rob is looking for a general-purpose codec then this might not be his choice.
I imagine he only wants something that will fit the data through and then can transcode to whatever he wants for editing.
External PC units for display etc:
If anybody wants to hook up a computer for display/control, 5-inch UMPCs (mini tablet Windows XP computers) have just been shown off for next year, and Apple is hoped to come out with one of these next year too. The Intel G965 chipset, with a unified-shader DirectX 10 GPU, I suspect will have its problems sorted out, and the mobile version is expected in Q1 '07. Eventually, I expect this will come to the UMPC. But for anybody wanting a small PC hooked to the unit without a graphics card, the G965 (G versions only) might be well positioned.
Zsolt Hegyi October 1st, 2006, 05:08 AM Will it be open source?
If it works as expected, then it will be open.
BTW, do you know if the 353 will support writing to disk and streaming Ogg Theora video simultaneously?
I think we can manage that; that's what I'm also planning to do. But we have to keep in mind the processor's bandwidth issues when dealing with two streams simultaneously.
The need for a codec that could, at a minimum (2:1), squeeze raw through was originally my proposal anyway.
Well, I'm sure we'll have 2:1 but not sure it will be enough for actual recording. I'm curious just as you are, probably, as I've never seen a raw bayer stream in my whole life...
So, please, I don't need anybody else needlessly coming down on me
Didn't mean to offend you. Actually I see nothing wrong in what you say but I fear that others might overestimate the things that I do.
Zsolt
Rob Scott October 1st, 2006, 03:14 PM If it works as expected, then it will be open.
Excellent. If there is any way I can help with this, I will.
But we have to keep in mind the processor's bandwidth issues when dealing with two streams simultaneously.
Yep, I'm really hoping it has enough horsepower to do both. Do you happen to know how big of a hard drive it will support? One of those 750GB drives would be cool, or perhaps a 1TB drive when they come out in a few months :-)
Well, I'm sure we'll have 2:1 but not sure it will be enough for actual recording. I'm curious just as you are, probably, as I've never seen a raw bayer stream in my whole life...
From my calculations, 2:1 should be just fine, unless it overloads the CPU. Also, I can provide you with some raw bayer images, though it may take me a few days to pull it together.
Thanks!
Rob Scott October 1st, 2006, 05:34 PM I've never seen a raw bayer stream in my whole life...
I've uploaded a single raw frame here: https://sourceforge.net/project/showfiles.php?group_id=168514&package_id=206179&release_id=452036
Wayne Morellini October 2nd, 2006, 10:39 AM Thanks Zsolt.
I think the effectiveness of JPEG compression on a grayscale of the bayer mask should be tested, to see how well bayer interpolation holds together, with adjustments made to the compressor's parameters. The main parameter to test is the quantisation stage (I forget the exact term - the stage where they divide down the DCT coefficients to even them out and make them more compressible). A maximum of 3:1 compression would probably be best; it would probably be near lossless at most, and nowhere near as good as a bayer compressor, but it would be better for existing camera owners. Very simple intraframe compression could then be added, as was being designed elsewhere.
Rob, a lot of those drives are power hungry and big, so select carefully.
I would like to move in directions other than camera/codec development, but I am interested in your bayer footage. If you could post 10-second scenes and bayer format information, that would be useful: a simple scene with simple shooting, a simple scene with complex movement, and a complex scene with complex movement. This would provide good test subjects for codec performance. I'm itching, but have too much else I should be doing.
Rob Scott October 2nd, 2006, 11:45 AM I think the effectiveness of JPEG compression on a grayscale of the bayer mask should be tested ...
Sure, I was just thinking that the 16-bit mode of JPEG compression would help to preserve the high bit depth, despite the quantization step.
Rob, a lot of those drives are power hungry and big, so select carefully.
Sure, but a drive is something you could easily swap depending on the particular application. For a studio camera on a tripod, a full-sized 3.5" 750GB drive is no big deal, as long as the noise could be kept down.
If you could post 10-second scenes and bayer format information, that would be useful: a simple scene with simple shooting, a simple scene with complex movement, and a complex scene with complex movement. This would provide good test subjects for codec performance.
That would be useful for me as well; I'll see if I can fit it into my schedule.
I'm itching, but have too much else I should be doing.
I hear you! :-)
Zsolt Hegyi October 2nd, 2006, 01:22 PM Do you happen to know how big of a hard drive it will support?
It has linux running on it so I don't think there's any limit.
From my calculations, 2:1 should be just fine
What calculations? I had some calculations earlier (they're on this thread somewhere) but Andrey instantly proved some of them wrong so I decided not to calculate but wait for the test results instead...
I've uploaded a single raw frame here:
Thanks. Actually I've seen single raw images before, while I was building my own camera but was unable to get a continuous stream of images.
as long as the noise could be kept down.
There are some simple but good ideas here: http://www.silentpcreview.com/
That would be useful for me as well;
...and for me, and probably for Juan here as well (although we don't know his intentions with those raws). The three types of videos Wayne mentioned would do just fine. Hope you find some time to record them.
Zsolt
Rob Scott October 2nd, 2006, 02:07 PM What calculations?
I saw the figure 15 MB/sec, and calculated that a 2:1 compression ratio applied to the raw frames (1280x720, 10 bit @ 24fps) would fit. Of course, if this ratio is an average, then we might have problems during particularly detailed or noisy scenes.
There are some simple but good ideas here: http://www.silentpcreview.com/
Thanks!
Hope you find some time to record them.
I'm sure I will. (Of course, they will probably have some nasty rolling shutter artifacts.)
Zsolt Hegyi October 3rd, 2006, 12:56 AM I saw the figure 15 MB/sec
Where? The theora streamer's output rate on ethernet is 70Mb/s=8.75MB/s.
Zsolt
Rob Scott October 3rd, 2006, 06:52 AM Where?
I was referring to hard drive (IDE) performance, not Ethernet. I thought I saw the 15 MB/sec figure in this thread but I guess it was this post (http://mhonarc.axis.se/dev-etrax/msg03101.html), which states the theoretical maximum for the ETRAX 100LX as 16.67 MB/s. I must have rounded it down to 15.
I did a bit of Googling and found this document (http://www.developer.axis.com/products/etraxfs/ds_etrax_fs_26181_en_0512_lo.pdf) which states that the FS supports UltraDMA mode 2 (33 MB/s).
The bandwidth required for 1280x720, 10 bits @ 24fps is 26.4 MB/second, so it may be possible for the FS to write the raw data to disk with no compression at all, or with only slight compression.
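As a quick sanity check of that figure (a throwaway Java calculation, using binary megabytes):

// Raw bandwidth for 1280x720, 10 bits/pixel, 24 fps, in MB/s (binary).
public final class Bandwidth {
    public static void main(String[] args) {
        long bitsPerSecond = 1280L * 720 * 10 * 24;       // 221,184,000 bits/s
        double mbPerSecond = bitsPerSecond / 8.0 / (1024 * 1024);
        System.out.printf("%.1f MB/s%n", mbPerSecond);    // ~26.4, matching the figure above
    }
}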
Zsolt Hegyi October 4th, 2006, 01:42 AM which states the theoretical maximum for the ETRAX 100LX as 16.67 MB/s.
But the guy also writes that 12MB/s is the practical maximum and another guy writes that he measured 7MB/s. So I guess the reality can be between the two.
The bandwidth required for 1280x720, 10 bits @ 24fps is 26.4 MB/second
The new micron sensor has 12 bit output so it's slightly larger than that.
so it may be possible for the FS to write the raw data to disk with no compression at all, or with only slight compression.
Andrey stated in this thread somewhere that we shouldn't expect much improvement if we write to disk instead of ethernet.
Based on all of the above, I expect something around 10MB/s and that requires 3:1 compression ratio. Anyhow, we're going to be on the edge, that's for sure.
Zsolt
Rob Scott October 4th, 2006, 05:15 AM But the guy also writes that 12MB/s is the practical maximum and another guy writes that he measured 7MB/s. So I guess the reality can be between the two.
True, but that's for the 100LX. The FS supports UltraDMA mode 2 (33 MB/s), so presumably the maximum would be somewhere between 14 and 24 MB/s.
The new micron sensor has 12 bit output so it's slightly larger than that.
I was assuming a LUT to reduce it to 10 bit. But if it turns out to be possible to handle the full 12 bits, that would be better of course.
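For illustration, one way such a LUT could be built (my sketch only, not the Elphel implementation; a square-root-style curve keeps more shadow detail than simply dropping the two lowest bits):

// Build and apply a 12-bit to 10-bit lookup table with a square-root companding curve.
public final class Lut12to10 {
    public static int[] build() {
        int[] lut = new int[4096];                        // one entry per 12-bit code
        for (int v = 0; v < 4096; v++) {
            double norm = v / 4095.0;                     // normalise to 0..1
            lut[v] = (int) Math.round(Math.sqrt(norm) * 1023.0); // 0..1023 (10 bit)
        }
        return lut;
    }

    public static void apply(int[] pixels, int[] lut) {
        for (int i = 0; i < pixels.length; i++) pixels[i] = lut[pixels[i] & 0x0FFF];
    }
}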
BTW, have you looked at some of the components available on OpenCores (http://www.opencores.org/) such as xmatchpro lossless data compressor (http://www.opencores.org/projects.cgi/web/xmatchpro/overview) or Video compression systems (http://www.opencores.org/projects.cgi/web/video_systems/overview)?
Thanks!
Zsolt Hegyi October 4th, 2006, 05:56 AM The FS supports UltraDMA mode 2 (33 MB/s)
And the 353 will use the FS? Anyway, Andrey knows the system better than us, so if he says "not much improvement can be expected" then I believe that. My encoder will be somewhat faster than the Theora stuff, but it could well be that some bottleneck comes from the FPGA itself, so the processor won't be able to drive the IDE interface to the maximum.
BTW, have you looked at some of the components available on
No. I don't know how easy it would be to insert a third party encoder into the Elphel design.
Zsolt
Rob Scott October 4th, 2006, 06:41 AM And the 353 will use the FS?
That's what I understand. You may be right about the increase in performance (or lack thereof) but I guess we'll see.
Edit -- Andrey mentions using the ETRAX FS in this post (http://www.dvinfo.net/conf/showpost.php?p=514959&postcount=186) and this post (http://www.dvinfo.net/conf/showpost.php?p=515202&postcount=192) and indicates that the CPU should be roughly twice as fast.
Juan M. M. Fiebelkorn October 6th, 2006, 05:30 AM I see that the ETRAX FS has PCI support built in, so does that mean you can just connect a PCI board to it?
Rob Scott October 6th, 2006, 05:32 AM I see that the ETRAX FS has PCI support built in, so does that mean you can just connect a PCI board to it?
Juan, I believe you'd have to design a PCI bus into your custom board; the Elphel won't have this.