Elphel image quality? - Page 3 at DVinfo.net
DV Info Net > Special Interest Areas > Alternative Imaging Methods > Apertus: Open Source Cinema Project

Old June 22nd, 2009, 02:22 PM   #31
Regular Crew
 
Join Date: Feb 2009
Location: San Francisco, CA
Posts: 82
Record 12bit sensor data only

I was wondering what exactly the members of the Apertus "team" would be willing to do to help in the development of a RAW port option on the Apertus camera.

It is easier for me to ask that than to go round and round about the minutiae. If you are not interested in your camera having a true RAW port option, then why not just say that from the start, rather than bring up all these non-issues? (Whatever issues JP4 has do not affect building a true RAW recording camera, since the JP4-encoded files would not be used for frames that end up in the finished motion picture.)

You brought up using DIMMs, and I asked if you understand them well enough to help me build a data recorder I could test; clear yes-or-no answers would make the development path clearer.

Last edited by Dan Hudgins; June 22nd, 2009 at 03:52 PM.
Dan Hudgins is offline   Reply With Quote
Old June 23rd, 2009, 10:37 AM   #32
Regular Crew
 
Join Date: Aug 2004
Posts: 91
Wow, there are a lot of patient and courteous guys in this thread...

If you want that RAW output that much (even if you don't understand why...), just go buy an Ikonoskop; it's the cheapest you will find. It looks like you have absolutely no idea how expensive electronics development can be.

Or dither the JP4 output to 32 bits per channel, if that's what you want... (you can't retrieve information that isn't there in the first place)

@Juan : the great thing with a Bayer filter (and JP4) is that you get a half-resolution video stream (almost) for free for display. My Avisynth filter supports that already (preview mode), and from what I've seen, so does the Bayer patch for MPlayer (but Sebastian told me there are still some performance issues even in that mode, which is strange... needs investigation)
Steven Mingam is offline   Reply With Quote
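The half-resolution preview Steven mentions is worth spelling out: in a Bayer mosaic, each 2x2 quad already holds one red, two green and one blue sample, so a quarter-area RGB frame falls out with no interpolation at all. A minimal sketch in Python/NumPy (the RGGB site layout is an assumption for illustration; a given sensor may order the quad differently):

```python
import numpy as np

def bayer_halfres_preview(raw):
    """Collapse each 2x2 RGGB quad into one RGB pixel (no interpolation).

    raw: 2D array with even dimensions, assumed RGGB pattern:
        R G
        G B
    Returns an (H/2, W/2, 3) RGB image; the two greens are averaged.
    """
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2].astype(np.uint32) + raw[1::2, 0::2]) // 2
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b]).astype(raw.dtype)

# Tiny synthetic mosaic: every R site = 100, G sites = 200/300, B = 400.
mosaic = np.zeros((4, 4), dtype=np.uint16)
mosaic[0::2, 0::2] = 100   # R sites
mosaic[0::2, 1::2] = 200   # G sites (even rows)
mosaic[1::2, 0::2] = 300   # G sites (odd rows)
mosaic[1::2, 1::2] = 400   # B sites
preview = bayer_halfres_preview(mosaic)
print(preview.shape)       # (2, 2, 3)
print(preview[0, 0])       # [100 250 400]
```

This is a single strided pass over the data, which is why the preview is (almost) free compared with full demosaicing.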
Old June 23rd, 2009, 11:32 AM   #33
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Added connector to 10373

Yesterday I added one more connector to the 10373 board I'm working on now. It is a small high-density one; I had just the right number of spare pins on the FPGA in the I/O bank designated for the sensor interface, and there was a nice place for such a connector on the board (you may compare before/after in my blog -
Elphel Development Blog, Andrey's Journal). So it was a really easy job, but it still took me some 10 hours for the circuit diagram update and partial PCB re-layout (which included FPGA pin swapping that had to go back into the circuit diagram). I would charge $1250 if I did that for a customer - and it does not require any additional manufacturing NRE costs, as I'm still working on the rev "0" board; no hardware is built yet :-)

That means that the model 373 camera will be capable of running 2 sensor boards with sensors the same as or similar to what we use now in the model 353, each at full speed. Of course there are also plans to use higher-performance sensors, as the system board is expected to process up to 250-300 MPix/sec (vs. 80 MPix/sec in JP4 / 53 MPix/sec in JPEG mode in the 353). But there is still a bottleneck in the link between the FPGA and the TI DaVinci CPU, which has a limit of 100 MB/sec - just 20% more than the Axis ETRAX FS has in the current camera. There are multiple channels connected between the CPU and FPGA (all that are available on the TMS320DM6467 chip and are not shared with other required interfaces), so the actual bandwidth is higher, but that 100 MB/s channel is the major part of it. The other channels will be used for FPGA commands and additional lower-bandwidth data (possibly for a low-res preview).
Andrey Filippov is offline   Reply With Quote
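The link-bandwidth figures above can be sanity-checked with a little arithmetic. This sketch only restates the numbers from the post; the compressed bits-per-pixel value for JP4 is an illustrative assumption, not an Elphel specification:

```python
# Back-of-the-envelope check of the FPGA -> DaVinci link constraint.
# The ~100 MB/s limit is from the post; the JP4 bits/pixel figure
# below is an illustrative assumption.

LINK_BYTES_PER_SEC = 100e6   # main FPGA -> CPU channel, ~100 MB/s

def max_pixel_rate(bits_per_pixel):
    """Pixel throughput (pix/s) that fits through the link at the
    given average compressed size per pixel."""
    return LINK_BYTES_PER_SEC * 8 / bits_per_pixel

# Uncompressed 12-bit data: the link, not the FPGA, is the bottleneck.
print(max_pixel_rate(12) / 1e6)   # ~66.7 MPix/s, well under 250-300 MPix/s

# JP4 at an assumed ~3 bits/pixel average clears the 250 MPix/s target.
print(max_pixel_rate(3) / 1e6)    # ~266.7 MPix/s
```

So raw 12-bit transfer through this link caps out well below the FPGA's processing capacity, while compressed JP4 fits, which is consistent with compression staying on the FPGA side.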
Old June 23rd, 2009, 02:23 PM   #34
Regular Crew
 
Join Date: Aug 2004
Posts: 91
Wow, that's an incredible job, Andrey! (The only PCBs I ever routed were analog guitar effect pedals with a few components and 1 layer, and that was hard enough.) Which software did you use?

You forgot one use case though: HDR?
(Too bad you can't fit 3 sensor ports for an RGB color camera :D)
Steven Mingam is offline   Reply With Quote
Old June 23rd, 2009, 03:23 PM   #35
Regular Crew
 
Join Date: Jan 2007
Location: Vienna, Austria
Posts: 112
Ohhhh!!! -> (possibly - for low-res preview)

The big issue we are currently trying to solve is debayering JP4 and displaying FullHD video on a mobile device like a netbook. Streaming a FullHD stream from the camera alone is too much for an Atom and those cheap 945GSE onboard video chips; it results in around 10-15 fps. You CAN already frame and shoot at 15 fps, but it's simply not the full deal.

We are considering an Nvidia Ion with the onboard GeForce 9400M. That should give the video performance a 1000% increase; the question is just whether JP4 debayering can be offloaded from the CPU to the video hardware (maybe with CUDA), or if the CPU still has to carry that burden and therefore be the bottleneck of the whole pipeline.

Another option would be for the camera itself to generate a lower-resolution preview video stream. This would enable us to use much lighter and smaller devices like a BeagleBoard (BeagleBoard.org - Apertus) or an iPhone, etc.
Sebastian Pichelhofer is offline   Reply With Quote
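The trade-off Sebastian describes is mostly a pixel-budget question: decode and display cost grows roughly with pixel count, so a camera-generated quarter-resolution preview cuts the work by about 4x. A rough model (the proportional-cost assumption is a simplification, not a benchmark):

```python
# Rough pixel budget behind the preview options. Assumes decode/debayer
# cost is roughly proportional to pixel count - a simplification.

def mpix_per_sec(width, height, fps):
    """Pixel throughput the playback device has to handle, in MPix/s."""
    return width * height * fps / 1e6

full_hd = mpix_per_sec(1920, 1080, 25)   # ~51.8 MPix/s at FullHD
preview = mpix_per_sec(960, 540, 25)     # ~13.0 MPix/s at quarter resolution

# If an Atom only reaches 10-15 fps at FullHD, the same budget would
# cover roughly 40-60 fps at quarter resolution - enough for framing
# on a netbook, a BeagleBoard or a phone.
print(round(full_hd, 1), round(preview, 1))   # 51.8 13.0
```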
Old June 23rd, 2009, 03:30 PM   #36
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Steven Mingam View Post
Which software did you use ?
Currently I use a rather old version of PADS that I finally made work with WINE on my computer running GNU/Linux. Before that I used an Athlon 700 running W2K - I never upgraded to Windows XP, as it required online registration that I do not like.

But there are some problems with that old software. They are probably resolved in the current releases, but I never paid for upgrades to the newer versions, waiting for Mentor Graphics to release software that runs on GNU/Linux - which has not happened so far. So now I'm considering a modern version of Altium Designer, but unfortunately I was not able to make it work on GNU/Linux myself, so I'm trying to pay professionals to get that done.

Quote:
Originally Posted by Steven Mingam View Post
You forgot one use case though : HDR ?
Yes, you are right - that might be a good application too, but I believe it is better if the sensor supports HDR mode internally - like the sensors made by FillFactory, which was later sold to Cypress.

Quote:
Originally Posted by Steven Mingam View Post
(too bad you can't fit 3 sensors port for RGB color camera :D)
It is possible with the 10359 - ElphelWiki board. But are 3-sensor (3-CCD) cameras still better than single-sensor ones?
Andrey Filippov is offline   Reply With Quote
Old June 23rd, 2009, 05:07 PM   #37
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
Andrey, using two sensors and the technique you described on the Elphel page of using two different gains on the same sensor, HDR is quite possible.
Merging the two half images from one sensor and the full one from the other should give you three brackets to work with...
Juan M. M. Fiebelkorn is offline   Reply With Quote
Old June 23rd, 2009, 07:40 PM   #38
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Juan M. M. Fiebelkorn View Post
Andrey, using two sensors and the technique you described on the Elphel page of using two different gains on the same sensor, HDR is quite possible.
Merging the two half images from one sensor and the full one from the other should give you three brackets to work with...
In that case you'll likely need something like this:
Pellicle Beamsplitters - Edmund Optics (NT39-478, with 8%/92%), so both sensors will use the same lens.
Andrey Filippov is offline   Reply With Quote
Old June 24th, 2009, 04:47 AM   #39
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
IMHO this should be better

TECHSPEC® VIS and NIR Plate Beamsplitters - Edmund Optics

50R/50T
Juan M. M. Fiebelkorn is offline   Reply With Quote
Old June 24th, 2009, 04:55 AM   #40
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Pellicle vs. thick beam splitters

I don't think so. Pellicle mirrors are really good because you can put them after a (regular) lens, while thick beam splitters require parallel beams or specially designed lenses. Additionally, HDR applications require different levels of light, and light is precious in most cases. Even if you use additional filters (again, you would need really thin filters to put them after the lens, and the point of using the beamsplitter was to use the same lens), 92% is still nearly twice 50%, so such a camera will be nearly twice as sensitive.

Andrey
Andrey Filippov is offline   Reply With Quote
Old June 24th, 2009, 12:04 PM   #41
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
You are perfectly right. The only thing is, seeing that light is so precious, what can I do with only 8% of the light on the other sensor?

I mean, the difference is about 3½ f-stops....

Having 50/50 or something around that, even 70/30, sounds much more logical to me...
Juan M. M. Fiebelkorn is offline   Reply With Quote
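For reference, a beamsplitter ratio converts to an exposure difference of log2(T_high / T_low) photographic stops. A quick check of the ratios discussed above (plain arithmetic, no camera specifics assumed):

```python
from math import log2

def split_stops(t_high, t_low):
    """Exposure difference, in stops, between the two outputs of a
    beamsplitter with transmission fractions t_high and t_low."""
    return log2(t_high / t_low)

# 92/8 pellicle: ~3.5 stops between the two sensors.
print(round(split_stops(0.92, 0.08), 2))   # 3.52

# 50/50 plate: both sensors see the same exposure.
print(round(split_stops(0.50, 0.50), 2))   # 0.0

# 70/30 split: only ~1.2 stops.
print(round(split_stops(0.70, 0.30), 2))   # 1.22
```

So the 92/8 pellicle puts about 3.5 stops between the two sensors, while 70/30 gives only about 1.2 - too little separation to extend dynamic range much.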
Old June 24th, 2009, 12:31 PM   #42
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
We are talking about HDR, aren't we?

Juan, but we are talking about an HDR application, aren't we? So one sensor should get significantly more light than the other; making it 10:1 can potentially add that order of magnitude when the exposure is the same. Analog gain can also help to some extent - but only for the cheap CMOS sensors with the ADC on-chip. When using CCDs, no regulated analog gain is needed - you just use an ADC that has enough resolution over the full CCD output range; fast, high-resolution ADCs are now easily available. Before that (some 15 years ago) I just used two 12-bit ADCs with two gains (~16:1) in parallel. One (high gain) was used in the darks, the other in the highlights, where the dominating shot noise made the full resolution unneeded. Then the data from the two channels was combined (with some calibration to "stitch" them smoothly). Here are some links to the results I got with that system - it was a 1K CCD with a Peltier cooler, slow-scan (~1 MPix/sec):

annotated 8-bit photo (processed to compress dynamic range)
original 16-bit image
He-Ne laser beam reflected between two 50% mirrors, 8-bit processed
original 16-bit image


So you can only use optical filters and exposure. And if we try to keep the "high gain" channel exposure at a minimum (which will define the overall motion blur), a 10:1 splitter can be perfect as a first step in giving the two channels significantly different gain.

And, BTW, you can order pellicles on custom frames; they do not need to be standard or round. It is possible to design a compact splitter with a mount for the two sensors (one can be non-adjustable, but the other needs precise ~1um adjustment to bring both sensors into focus).
Andrey Filippov is offline   Reply With Quote
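The two-ADC scheme Andrey describes can be sketched in a few lines: the high-gain code is used directly in the darks, and near its clipping point the value switches to the low-gain code scaled by the gain ratio, with one extra bit recording which channel was used. Every numeric constant here (gain ratio, threshold) is an illustrative assumption standing in for per-camera calibration:

```python
# Minimal sketch of dual-gain ADC "stitching": two 12-bit ADCs digitize
# the same CCD output at different analog gains (~16:1). Gain ratio and
# switch threshold are illustrative assumptions, not calibrated values.

GAIN_RATIO = 16          # assumed high-gain / low-gain ratio
ADC_MAX = 4095           # 12-bit full scale
SWITCH_AT = 3800         # use low-gain channel once high-gain nears clipping

def stitch(high_code, low_code):
    """Combine two 12-bit ADC codes into one linear ~16-bit value.

    Returns (value, which), where which is 0 for the high-gain ADC and
    1 for the low-gain ADC - the "12 bits + 1 bit" encoding.
    """
    if high_code < SWITCH_AT:
        return high_code, 0              # darks: full resolution preserved
    return low_code * GAIN_RATIO, 1      # highlights: shot-noise limited anyway

# Dark pixel: the high-gain code is used directly.
print(stitch(1000, 1000 // GAIN_RATIO))  # (1000, 0)

# Bright pixel: high-gain ADC saturated, low-gain code is scaled up.
print(stitch(ADC_MAX, 3000))             # (48000, 1)
```

In a real pipeline the low-gain channel would also get a calibrated offset and gain correction, so the two segments join smoothly at the threshold.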
Old June 24th, 2009, 07:06 PM   #43
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
the Devil lies in the details.

You should have mentioned all this some posts above!!!!! :D


BTW, reading it again, it seems that the best option quality-wise would be having a nice CCD, hopefully with no rolling shutter, connected to nice 16-bit ADCs. Dan will be happy.

Last edited by Juan M. M. Fiebelkorn; June 25th, 2009 at 03:00 AM.
Juan M. M. Fiebelkorn is offline   Reply With Quote
Old June 25th, 2009, 09:19 AM   #44
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Juan M. M. Fiebelkorn View Post
the Devil lies in the details.

You should have mentioned all this some posts above!!!!! :D


BTW, reading it again, it seems that the best option quality-wise would be having a nice CCD, hopefully with no rolling shutter, connected to nice 16-bit ADCs. Dan will be happy.
Juan, there is no absolute perfection - the sensor I used was a cooled CCD with slow scan and an FWC in the range of hundreds of thousands of electrons. It is in no way suitable for video. And still, 13 bits were more than enough (two 12-bit ADCs: 12 bits of data plus one bit indicating which ADC was used).

Later I used a nice (~$3K for full grade, less with more defects) Kodak 35mm CCD in the Model 323 camera.
That sensor's FWC is ~30ke-; it has a maximal frame rate of just 5 fps - and that with somewhat reduced performance - so we used it at just 1-2 fps with a lower pixel clock.

So unfortunately CCDs are generally slower than CMOS, and their single-sensor intra-frame dynamic range drops as soon as you go to smaller pixels.
Andrey Filippov is offline   Reply With Quote
Old June 25th, 2009, 11:03 PM   #45
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
So, this is not suitable?

http://www.kodak.com/global/plugins/...uctSummary.pdf

It says it runs at up to 64 fps, with 5.5 micron pixels.

Does it make any sense to have a Peltier cooler for the sensor box?
Juan M. M. Fiebelkorn is offline   Reply
DV Info Net -- Real Names, Real People, Real Info!
1998-2021 The Digital Video Information Network