Elphel image quality? at DVinfo.net
February 12th, 2008, 05:30 PM   #1
Elphel image quality?

What kind of image quality are users of Elphel cameras able to pull from them? What camera footage would you compare it to?
Brandt Wilson
February 14th, 2008, 01:46 PM   #2
There is some footage in the Elphel 333 thread and on this page. The new 353 camera hasn't really been tested for film purposes yet.
Oscar Spierenburg
February 14th, 2008, 02:29 PM   #3
Thanks Oscar. I had seen some images on the wiki, but not these.
Brandt Wilson
February 14th, 2008, 03:37 PM   #4
So, what do you think, Brandt? Maybe this is a good start to discuss and improve the image of the camera.
First of all, you'll notice the compression; if you zoom in, you'll also notice that the compression uses quite large blocks. At first sight it looks bad, but it's completely removable with a small box blur. The blur doesn't soften the actual image. See the test (zoom in some more to see the blocks).
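For anyone who wants to try that deblocking pass on their own frames, here is a minimal sketch using Pillow; the file names are just placeholders:

[code]
# Minimal sketch of the small box blur described above (Pillow).
# "frame.jpg" / "frame_deblocked.png" are placeholder file names.
from PIL import Image, ImageFilter

img = Image.open("frame.jpg")
# A radius-1 box blur averages each pixel with its 3x3 neighborhood,
# smoothing the edges of the compression blocks without visibly
# softening detail that is larger than a block.
deblocked = img.filter(ImageFilter.BoxBlur(1))
deblocked.save("frame_deblocked.png")
[/code]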
I've also attached an early shot with the 333. I guess I used 80% compression. Maybe Andrey can tell us if the hard disk recording on the 353 lets us use lower compression (a quality setting like 90 or 95%) without frame drops.
Attached Thumbnails: compressiontest.jpg, roof-16-1.jpg

Oscar Spierenburg
February 14th, 2008, 10:55 PM   #5
I'm not a coder, so becoming actively involved in the behind-the-scenes workings of this project has been a bit intimidating for me. I need to get myself up to speed on how to program and use the camera through the web interface, and how to modify the interface itself. Do you know of any tutorials I can learn from?

The blocks are larger than I would have expected; I had thought the compression was adaptive, depending on the nature of the adjacent pixels. I'm still learning the technology side, so I'm afraid my insights into this part of the project won't be very inspiring. Zooming in on the image of the woman with the buildings behind her, I can tell that the compression scheme leaves an image that looks a bit like reversal film: there is a lot of noise and the whites blow out quickly.

The interior shot of the girl has a pleasant curve in the skin tones, but the darker areas, particularly her hair, look really noisy.

How does this 20x20 pixel macroblock sampling compare to the Digic II or III processor's method? I notice that the HV20, even at 4:2:0 chroma subsampling, has a nice filmic curve without large compression blocks.

I did see the reference to the bloodsimple codec. Do we have examples of images or video taken with this codec?

I know that for digital cameras there are filters that reduce or remove digital noise. Some of these filters work on fixed pattern noise, so if the noise on the Elphel is fixed pattern, perhaps a grain remover would be better than a blur.

The curve is of particular interest to me... there is a sharpness and flatness in the transitions between tonal values. A cinematic curve could really improve some of these starker shots.

Also, Oscar, what was your "shutter speed" on the video of your girlfriend working on the path? I saw a lot of strobing in the movement of the gravel as it was shoveled into the wheelbarrow. A 1/48 second (20.83 millisecond) exposure would provide a nice filmic motion blur.

Maybe for really digging into this, it would be helpful to get some test shots without the ground glass adapter. If it is adding grain of its own, it will throw off my impression of what the sensor and FPGA are capable of.
Brandt Wilson
February 16th, 2008, 10:50 PM   #6
Brandt,

The current color demosaic in the camera is very basic - it uses only 3x3 blocks (even though the FPGA reads 5x5 from memory), but we have a "JP4" mode that provides slightly re-ordered raw Bayer pixels. It is a format that can be post-processed with a quality demosaic.
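To make the idea concrete, here is a sketch of undoing that re-ordering, assuming one plausible layout (each 16x16 macroblock stored as four 8x8 blocks, one per Bayer phase). The actual block order depends on the camera firmware, so treat this as illustrative only:

[code]
# Sketch: re-interleave same-color 8x8 blocks back into a Bayer
# mosaic. The block order inside a macroblock is an assumption here.
import numpy as np

def jp4_to_bayer(jp4, b=8):
    """jp4: 2-D array of decoded JP4 pixel data (dims multiples of 16)."""
    h, w = jp4.shape
    out = np.empty_like(jp4)
    for y in range(0, h, 2 * b):
        for x in range(0, w, 2 * b):
            mb = jp4[y:y + 2 * b, x:x + 2 * b]
            out[y:y + 2 * b:2, x:x + 2 * b:2] = mb[:b, :b]          # e.g. R
            out[y:y + 2 * b:2, x + 1:x + 2 * b:2] = mb[:b, b:]      # e.g. G
            out[y + 1:y + 2 * b:2, x:x + 2 * b:2] = mb[b:, :b]      # e.g. G
            out[y + 1:y + 2 * b:2, x + 1:x + 2 * b:2] = mb[b:, b:]  # e.g. B
    return out
[/code]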

Noise in the darks can be improved slightly by eliminating the FPN, but this sensor still has only ~8500 e- full well capacity (even with the noise of the analog amplifiers close to a single e-).

Andrey
Andrey Filippov
June 18th, 2009, 04:23 AM   #7
True RAW sensor data PLEASE!

With a few changes, the basic idea of the Elphel camera could be turned into a useful Digital Cinema camera. These are:

1) Record true RAW sensor data, all 12 bits. Use delta values if you need to reduce the bandwidth: you split the even and odd pixels to reduce the delta, that is, RED-RED=DELTA RED, and so on. You don't need to take the color into account, just use odd-odd=odd delta and even-even=even delta (see the sketch after this list). To make the bandwidth even smaller you can change the number of bits in the "word" size being recorded. This should be easy to program into the FPGA; it is not complex, and is arguably not even needed.

2) Use the Aptina MT9P401I12STC sensor, because it can(?) support high enough resolutions and frame rates for Digital Cinema use:

2592x1102x12bit@23.976/24fps

2560x1080x12bit@23.976/24/25fps

2048x1152x12bit@23.976/24/25fps

1920x1080x12bit@23.976/24/25/29.97/30fps

1280x720x12bit@23.976/24/25/29.97/30/47.952/48/59.94/60fps

And lower resolutions like 720x480 at higher speeds for slow motion.

3) Use shift registers to divide the bandwidth, recording to a DRAM/FLASH/SRAM FIFO that can be used in a "Shoot then Dump" camera concept. This reduces the cost over a "Shoot then Swap" camera, since you can off-load the buffer memory. Each of the 12 data bits would be sent to its own memory chip, CF or SD card, etc. You can use flip-flops to record even- and odd-address pixels to 24 memory units if needed, cutting the per-device bandwidth 24x.
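Here is a minimal sketch of the same-color delta idea from point 1, in Python rather than FPGA logic, just to show the arithmetic: each pixel is differenced against the previous pixel of the same Bayer color (two columns back).

[code]
# Minimal sketch of the same-color delta encoding from point 1:
# pixel n is differenced against pixel n-2, so even-column and
# odd-column (same Bayer color) pixels are each differenced
# against their own kind.
import numpy as np

def delta_encode_row(row):
    """row: 1-D uint16 array of 12-bit values from one sensor line."""
    d = row.astype(np.int16)
    d[2:] = d[2:] - row[:-2].astype(np.int16)
    return d  # the first two samples are stored as-is

def delta_decode_row(d):
    row = d.astype(np.int32)
    for n in range(2, len(row)):
        row[n] += row[n - 2]
    return row.astype(np.uint16)
[/code]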

==

The main problems with the Elphel cameras are the in-camera de-Bayer and the use of compression. If you get rid of both, the Aptina MT9P401I12STC sensor seems fully usable for 35mm motion picture production, as far as I have been able to tell while developing my de-mosaic software for it.

You can read more about where I am with my Digital Cinema projects in this post on ScarletUser:

Open Source Camera - Page 10 - Scarletuser.com

If you are open to talking about working together to make an off-shoot of the Elphel that records full uncompressed data, I have several camera design concepts that could work. Mostly I would need help with the memory chips, since their read/write cycles and the bad memory addresses that need to be bypassed become the major issues in the FIFO buffer memory: you need a low-cost 4GB or so buffer that records about 150MB/sec without errors.

My editing system does full uncompressed Digital Cinema filmmaking with color correction and sound mixing. It is available for download right now, and the main part runs under FreeDOS, so you can have many systems running in parallel to speed up processing of the many large frames used in making a feature film, without OS license fees, etc.

If anyone has questions about my NLE/DICC/MIX software, they can email me. If you have any true RAW sensor data from a Digital Cinema camera, you can contact me about supporting it in the de-mosaic program I am developing, and I can do some trial CC on your data so you can see some look options, etc.

Dan Hudgins
The official DANCAD3D (tm) Beta Test Web site.
tempnulbox [at] yahoo [dot] com
Dan Hudgins
June 18th, 2009, 09:38 AM   #8
Do you really need "raw"?

There are two articles on this topic that I wrote for LinuxDevices earlier this year.

The first deals with the amount of information each sensor pixel can provide and how many bits are required to represent it: How many bits are really needed in the image pixels?

The second one describes, among other things, the JP4 format that we originally developed to store high-resolution images of scanned documents. That compression mode bypasses the debayer in the camera, leaving it for higher-quality post-processing:
Elphel camera under the hood: from Verilog to PHP

And then there is some info on the model 373 camera under development:
Elphel Development Blog Circuit design - 10373
Andrey Filippov
June 18th, 2009, 06:07 PM   #9
Grading

The use of true RAW sensor data is important for these main reasons:

1) When you are going to project the images on a 30-meter-wide screen, some people will be sitting close enough to the screen to see any artifacts, even if those are just a loss of texture. If you are going to replace 35mm negative film with sensor data, you have the most control in post using true RAW sensor data, and you do not need to do real-time processing in the camera. There is no limit to how long you can process the data in post, so if you have the full true RAW data you can decide later how much time to spend getting the best results that can be obtained. This lets your camera stand out above the camcorders made by major companies in terms of the image quality of the result. For true Digital Cinema to equal or surpass ECN (the industry gold standard) requires the use of maximum image quality.

2) Although some compression artifacts may be hard to see in graded finished images using video gamma, they are not good to have in a camera "negative", since the process of grading increases the contrast, adjusts the gamma, shifts the brightness, and increases the saturation and sharpness, all of which can make compression artifacts more visible. It is not hard to record true RAW sensor data with today's memory chips at a resolution of 2.5K and a frame rate of 24fps. Rather than cutting 80% of the bandwidth with compression systems that leave artifacts in the images, if you use delta encoding you get 100% image quality, without any artifacts that can be made more visible through extreme grading.

3) Because shooting with uncontrolled high-dynamic-range lighting requires a wide-dynamic-range sensor, it is better to leave the grading to post rather than doing white balance and other grading tasks in the camera. Record everything the sensor can put out: you are too busy with the actors and other issues when filmmaking, and often do not get the exposure just right, or need to under-expose to hold the highlights, so fine adjustments to the grading are best left to post using the full sensor data, since various noise filters will be needed depending on the subject matter and the way the light falls on the subjects in that particular shot. Camcorders use automatic exposure to try to hold the mid-tones within the limited range of their compressed recording formats, but for true Digital Cinema use you need true RAW sensor data, since it is more like a color negative in its wider-than-"needed" range and resolution.

What is the argument against recording RAW data for Digital Cinema use?

1) TB disks are now very cheap compared to 35mm ECN for 2.5K 24fps recording.

2) The bandwidth is not that much higher, and today's parts can handle it.

3) The camera circuit is simpler and uses less power without the processing of the data.

4) You get the maximum quality the sensor can produce without any compromise.

5) Delta encoding can be used to reduce bandwidth without any negative effects on the image quality.

I understand that years ago there were limits that made compression useful in a network camera for surveillance, but a new mindset is needed for Digital Cinema use. There are many compressed HD cameras at low prices; Digital Cinema is about a level of quality that is above video in many respects and more like the 35mm movie film it is going to replace. Out of respect for the filmmakers of the past who worked to make such beautiful images, we owe it to their legacy to make Digital Cinema better and to carry on the uncompressed nature of motion picture images, rather than making just another compressed camcorder...

If you cannot figure out how to make a low-cost uncompressed camera, I can talk to you about the circuit details and how to eliminate unneeded parts from the circuit.

If you insist that the camera MUST produce artifacts in the images, I cannot understand why that must be the case. Why?

JP4 introduces brightness changes to the pixel values, which upset the surrounding pixels during the de-mosaic process; this is also a problem with R3D files, but with smaller sensors the artifacts produced are larger relative to the frame height. To avoid these extra artifacts you should not compress the sensor data at all.

In true RAW sensor data, noise is mixed with true data. Although the noise may be high in a single pixel, during the de-mosaic process several pixels are used to compute the resulting brightness for each pixel in the image, so the brightness accuracy can increase after de-mosaic; the data in the noise contributes to the accuracy of the final image, resulting in more bits of data rather than fewer.
Dan Hudgins
June 18th, 2009, 09:10 PM   #10
Raw video

The Elphel Model 353 can either send video over the network at up to 10MB/sec (100Mb/s Ethernet) or record it to an attached CF card or HDD. In that mode the data rate is limited to ~16MB/sec; the limitation comes from the ATA implementation in the ETRAX FS processor we use in that camera (the next model will have a different one).

Good quality JP4 images of the full 5 MPix frame can be ~1 MB each, so at the maximal frame rate the sensor is capable of (15fps) we can only record the full data rate to the CF/HDD; there is not enough bandwidth for the network.

Development of a new camera costs us several hundred thousand dollars, and what you are suggesting is a completely different camera that has nothing in common with what we are making currently. Right now I'm busy with the 373 development, and it will keep me busy for quite a while: after migrating to the new architecture and porting the existing code, I'll need to upgrade the sensor and put more features into the improved hardware.

We have very limited resources and cannot afford to engage in costly development that, to be honest, I do not really understand, either technically or financially.

Quote:
JP4 introduces brightness changes to the pixel values, which upset the surrounding pixels during the de-mosaic process,...
This I do not understand. In JP4 there is no de-mosaic, and it does not "change the pixel values". When quantization is turned off (i.e. 1.0 for each DCT coefficient) the compression is virtually lossless (the difference between the restored and the original image is less than 1 count). Regular JPEG (with the pixel array treated as monochrome, no color conversion) would do the same, but would produce larger files than JP4, as it does not exploit the "knowledge" that the pixels have color filters, and any color produces odd/even pixel modulation.
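That claim is easy to check numerically. A quick sketch of an 8x8 DCT round trip where "quantization" is just rounding the coefficients (a table of all 1.0s):

[code]
# Quick numerical check: an 8x8 DCT round trip with all quantization
# coefficients equal to 1.0 (i.e. plain rounding of the coefficients)
# restores the block to within about 1 count.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)

coeffs = dctn(block, norm="ortho")         # forward 2-D DCT
quantized = np.round(coeffs)               # divide by 1.0, then round
restored = idctn(quantized, norm="ortho")  # inverse 2-D DCT

print(np.abs(restored - block).max())      # typically well under 1 count
[/code]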


Taking the MT9P001/031 sensor (5MPix @ 15fps): it outputs an average of 75MPix/sec. If you use "raw" 12 bits (or maybe use the analog output from other sensors and put in some 16-bit ADC? Or an 18-bit one?) that is 112.5 MB/sec. You suggest "differences" between pixels to get what is called lossless compression - with that you can get 2x compression, or 2.5x with advanced algorithms (btw, JPEG/JP4 with all quantization coefficients at 1.0 induces an error of less than 1 count). OK, say you implement a really good lossless algorithm in the FPGA and get 2.5x. You then need to record 45 MB/sec - 4.5 times more than the network interface in the 353 camera and 3 times more than its ATA interface. So while providing extra bits (that do not provide more real data, by the laws of physics) and using an inappropriate compression algorithm (true lossless compression is only needed when the data is going to be edited multiple times), I would have to reduce the frame rate 3 times. And that would cause _very_ visible artifacts.
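The rate arithmetic in that paragraph, spelled out (using MB = 10^6 bytes):

[code]
# The data-rate arithmetic above, spelled out (MB = 10**6 bytes).
pix_per_sec = 5_000_000 * 15                   # 75 MPix/sec average
raw_mb_s = pix_per_sec * 12 / 8 / 1e6          # 112.5 MB/sec at 12 bits/pixel
lossless_mb_s = raw_mb_s / 2.5                 # 45 MB/sec with 2.5x lossless
print(raw_mb_s, lossless_mb_s)                 # 112.5 45.0
print(lossless_mb_s / 10, lossless_mb_s / 16)  # 4.5x the network, ~3x the ATA
[/code]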

Andrey Filippov
June 19th, 2009, 01:39 AM   #11
What makes it a Digital Cinema camera?

If you have decided that you will not make a new design for a Digital Cinema camera, then basically you are for the most part just calling a network compressed surveillance camera by a new name; it is something from the past, not taking full advantage of new parts, and not really in the spirit of Digital Cinema.

Can anyone agree on a definition of "Digital Cinema camera" - what it requires, and what separates it from a camcorder or a network surveillance camera?

I can see that you would like to find new markets for your network compressed surveillance camera, but that does not turn it into a good tool for 35mm filmmaking without major changes, as far as I can see so far.

==

I understand that the JP4 is to re-construct the Bayer pattern image, but in order to get high compression ratios the brightness of pixels changes, and so when the de-mosaic is done later on the re-constructed data, those changes affect the resulting pixels around the changed ones because of the interpolation used; in other words, the errors enlarge the artifacts. I.e. take a RED pixel: when interpolated it is then surrounded by 8 interpolated pixels, so if that RED pixel was off, so will be the 8 interpolated ones.

How many bits does your JP4 image hold? (8-bit data introduces histogram gaps after grading which are generally unacceptable for professional cinematography, due to tone banding and posterization, in addition to the compression artifacts etc. degrading the images)

In what program can I open the JP4 file? (do I get 12bits or 8bits)

Where can I download sample JP4 files to look at?

Do you have a converter program to convert JP4 to 16bit (48bpp) TIF files?

Do you have a converter program to convert JP4 to DNG files? (so DCRAW can be used to make 48bpp TIF files)

==

In order to compress the data in the JP4 file, you need to alter the brightness of pixels. Are you saying that the maximum error is in the 12th LSB (bit 0) for any pixel in the image, that is, that you get the top 11 bits with 100% accuracy in all frames at all times? (doing a pixel-by-pixel difference between the true RAW data and the de-compressed result; if you do not have true RAW data, how did you measure this?)

==

Is there a network adapter that can go at 100MBytes/second rather than 10MB/s?

Why can't you use the 100MB/s adapter and record to RAM in the PC? A PC should be able to hold 4GB now.

Storing the 12bit data with two pixels in 3 bytes you get:

2592x1102x1.5x24 = 103MB/s (the peak data rate would be faster, close to 150MB/s, without a FIFO on the sensor data lines)

4096/103 = 39 seconds of record time in 4GB.

39 seconds is more than enough per shot for making feature films. I think 2.5:1 is too optimistic for most images, but a 10% to 25% size reduction should be possible by encoding the data in "words" shorter than 12 bits, so the record time could be longer than 39 seconds.
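For reference, here is a sketch of the "two 12-bit pixels in 3 bytes" packing assumed in that rate estimate:

[code]
# Sketch of packing two 12-bit pixels into 3 bytes (1.5 bytes/pixel),
# as assumed in the 103MB/s estimate above.
import numpy as np

def pack12(pixels):
    """pixels: uint16 array of even length holding 12-bit values."""
    a = pixels[0::2].astype(np.uint32)
    b = pixels[1::2].astype(np.uint32)
    out = np.empty(3 * len(a), dtype=np.uint8)
    out[0::3] = a >> 4                       # high 8 bits of pixel A
    out[1::3] = ((a & 0xF) << 4) | (b >> 8)  # low 4 of A + high 4 of B
    out[2::3] = b & 0xFF                     # low 8 bits of pixel B
    return out
[/code]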

If a FIFO is used to store the data, then you can send it over your 10MB/s network connection; it would just take 15x as long to download. USB would seem to be faster than your 10MB/s connection, would it not? Something like 35MB/s to 40MB/s, so the download would be about 4x faster. With a 100MB/s network connection and a FIFO buffering the sensor data, there is almost no dump delay when recording to RAM in the PC.

==

I do not understand what would cost a bunch of money: you already understand how the parts work; you just remove the ones that are not needed and add more memory chips to make a buffer. Please tell me where all that money would go?

I might be able to lay out the circuit board if someone could explain some of the details of the operation of the memory chips and the timing cycles of the sensor programming. I do not have an FPGA, PIC, EPROM, or programmer, nor do I have the software to program those chips at this time. You have those - would you be able to burn the chips?

Are you paying salaries for the development of this so-called "Digital Cinema Camera"?

How many Elphels have you sold, i.e. what has been your gross and net? Are you running in the red for all the years of development costs? How much are you in the hole?

Would you help me for free or would you expect payment?

Not so much an open community project if you expect payments, is it?

As for 16bit or 18bit A2D output, well, noise would take up most of the bottom bits on those, would it not? Anyway, that would make the deltas larger, though if the sensor is clean the deltas would not be that much larger. I do not think there are sensors running at 24fps that give useful output at the 18th LSB; do you have a link for one?
Dan Hudgins
June 19th, 2009, 06:59 AM   #12
Quote:
Originally Posted by Dan Hudgins
I can see that you would like to find new markets for your network compressed surveillance camera, but that does not turn it into a good tool for 35mm filmmaking without major changes, as far as I can see so far.
Dan, I'm sure you're right from your point of view. But there is a little confusion here. Elphel is an independent company. They make their open source cameras for different purposes, depending on the end user: surveillance, 3D, panoramic imaging, etc.

Apertus is a project that tries to build on the 'open' qualities of the camera, but it has no commercial attachment to Elphel.
So Andrey Filippov (Elphel) is here to answer technical questions about the camera itself from time to time.
Take a look at our site and see how the project is set up: Home page | Apertus Open Source Cinema
Oscar Spierenburg
June 19th, 2009, 09:06 AM   #13
Quote:
Originally Posted by Dan Hudgins
If you have decided that you will not make a new design for a Digital Cinema camera, then basically you are for the most part just calling a network compressed surveillance camera by a new name; it is something from the past, not taking full advantage of new parts, and not really in the spirit of Digital Cinema.
Dan, there is some misunderstanding. Elphel does not manufacture (or market) any "Digital Cinema" cameras - you can go to our web site and look at the price list. On the other hand, we do work on a new camera; I already linked that info. That new camera will be capable of higher resolution/frame rate (at the expense of higher power consumption) and can be used in "Digital Cinema" projects, but that was never our only (or even main) application - you can use an Internet search to find other areas where Elphel cameras may be or already are used.

Quote:
Originally Posted by Dan Hudgins
I understand that the JP4 is to re-construct the Bayer pattern image, but in order to get high compression ratios the brightness of pixels changes, and so when the de-mosaic is done later on the re-constructed data, those changes affect the resulting pixels around the changed ones because of the interpolation used...
There is no interpolation involved in JP4 - each pixel is encoded separately, and when the quantization coefficients are all 1.0, no high spatial frequencies are cut. The only interpolation is in the post-processing; it is related to the Bayer pattern of the sensor, not to the camera.

Quote:
Originally Posted by Dan Hudgins
How many bits does your JP4 image hold? (8-bit data introduces histogram gaps after grading which are generally unacceptable for professional cinematography, due to tone banding and posterization, in addition to the compression artifacts etc. degrading the images)
Currently we use 8 bits - a number that perfectly matches the 12-bit sensors we use.

Later you write:
Quote:
Originally Posted by Dan Hudgins
As for 16bit or 18bit A2D output, well, noise would take up most of the bottom bits on those, would it not?
So you understand that if the sensor noise is about one 12-bit ADC count (in the darks), adding 4 extra ADC bits will just increase the noise to about 16 counts - so the 4 LSBs would not carry much additional information.
Let's come back to the 12-bit sensor we use. In the darks, yes, it has "true" 12 bits, so (at low analog gain settings) the ADC matches the noise performance of the pixels.

But what about the whites? If you get a signal close to saturation - let's say 4000 ADC counts - how many bits are "true" there? What is the noise level (in ADC counts)?
One?
Same 16 as if you use 16-bit ADC for the "12-bit" sensor?
Even more?

Do you have an answer ?
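A back-of-envelope sketch of the answer, assuming a shot-noise-limited sensor and that the 12-bit ADC range spans the ~8500 e- full well quoted earlier in the thread:

[code]
# Back-of-envelope shot-noise estimate near saturation, assuming the
# 12-bit ADC range maps onto the ~8500 e- full well mentioned earlier.
full_well_e = 8500
adc_range = 4096
gain = full_well_e / adc_range      # ~2.1 e- per ADC count

signal_counts = 4000                # near saturation, as in the question
signal_e = signal_counts * gain     # ~8300 electrons
shot_noise_e = signal_e ** 0.5      # photon shot noise ~ sqrt(signal)
print(shot_noise_e / gain)          # ~44 ADC counts - "even more" than 16
[/code]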

Quote:
Originally Posted by Dan Hudgins
In what program can I open the JP4 file? (do I get 12bits or 8bits)
Where can I download sample JP4 files to look at?
Do you have a converter program to convert JP4 to 16bit (48bpp) TIF files?
Do you have a converter program to convert JP4 to DNG files? (so DCRAW can be used to make 48bpp TIF files)
Unfortunately, when I created the original version of JP4 I made only the encoder part, while the decoder was made by our customers as non-GPL proprietary software that they used only in-house. But by now I'm sure there are programs and plugins available to do the job. Basically it consists of two parts: restore the sensor's Bayer mosaic data, then perform a quality de-Bayer. The first part is easy; the second is well known - multiple algorithms are described and implemented.


Quote:
Originally Posted by Dan Hudgins
In order to compress the data in the JP4 file, you need to alter the brightness of pixels. Are you saying that the maximum error is in the 12th LSB (bit 0) for any pixel in the image, that is, that you get the top 11 bits with 100% accuracy in all frames at all times? (doing a pixel-by-pixel difference between the true RAW data and the de-compressed result; if you do not have true RAW data, how did you measure this?)
This I tried to explain in How many bits are really needed in the image pixels?
You are probably confusing two things:
1 - noise-matched non-linear encoding of the individual pixels, where the encoded ADC data has the same extra resolution over the whole range of output values;
2 - compression of those values.

Quote:
Originally Posted by Dan Hudgins
Is there a network adapter that can go at 100MBytes/second rather than 10MB/s?
Yes, sure, there are now USB Gigabit adapters that provide 100MB/sec. But the USB port on the Axis ETRAX FS processor used in the Elphel model 353 camera is USB 1.1, not USB 2.0, so you would get some 1.5MB/sec, not the full speed the adapter is capable of (and it may be capable of less than 100MB/s - I do not have much experience with them).

The next camera will use a different CPU with GigE - directly, not through an adapter. And that higher network speed will definitely allow us to go to higher resolutions and frame rates.

Quote:
Originally Posted by Dan Hudgins
Why can't you use the 100MB/s adapter and record to RAM in the PC? A PC should be able to hold 4GB now.
So where do you suggest plugging the "adapter" in on the camera side?

Quote:
Originally Posted by Dan Hudgins
Storing the 12bit data with two pixels in 3 bytes you get:
2592x1102x1.5x24 = 103MB/s (the peak data rate would be faster, close to 150MB/s, without a FIFO on the sensor data lines)
The model 353 camera has 64MB of memory that you can use as a "FIFO on the sensor data lines". It has a peak bandwidth of 640MB/sec, with an average of more than 90% of that.

Quote:
Originally Posted by Dan Hudgins
With a 100MB/s network connection and a FIFO buffering the sensor data, there is almost no dump delay when recording to RAM in the PC.
Yes, there are currently dozens of cameras with a GigE (100MB/s) connection that you could use for your project; the Elphel 353 has just a 100Mb/s network. And the next model, the 373, will use the extra bandwidth to increase resolution/frame rate or to stream data from multiple sensors.

Quote:
Originally Posted by Dan Hudgins
I do not understand what would cost a bunch of money: you already understand how the parts work; you just remove the ones that are not needed and add more memory chips to make a buffer. Please tell me where all that money would go?
I do not think Elphel is all that different from other companies. And the cost of new product development is more than just the cost of a set of electronic components from Digi-Key to build a prototype :-)


Quote:
Originally Posted by Dan Hudgins
I might be able to lay out the circuit board if someone could explain some of the details of the operation of the memory chips and the timing cycles of the sensor programming. I do not have an FPGA, PIC, EPROM, or programmer, nor do I have the software to program those chips at this time. You have those - would you be able to burn the chips?
Maybe that "someone" is a college? Along with reading publications on the matter and component datasheets, studying other designs, and building your own?

And, by the way, the only chip "burnt" in the model 353 camera (the only one that requires a special programmer device) is a programmable clock generator that needs to generate specific frequencies at startup. All other programmable devices (flash memory, SRAM-based FPGA) are programmed by the camera itself and do not require any special equipment - you may find that information on our web site and on our Sourceforge project page.


Quote:
Originally Posted by Dan Hudgins
Are you paying salaries for the development of this so-called "Digital Cinema Camera"?
Definitely not.

Quote:
Originally Posted by Dan Hudgins
How many Elphels have you sold, i.e. what has been your gross and net? Are you running in the red for all the years of development costs? How much are you in the hole?
This information is not GPL-ed, but you can do some research on the Internet.

Quote:
Originally Posted by Dan Hudgins
Would you help me for free or would you expect payment?
What help exactly do you need from me or Elphel? Answering some questions, explaining details of the circuit boards or code?

Or develop a camera to your specs?

Quote:
Originally Posted by Dan Hudgins
Not so much an open community project if you expect payments, is it?
What is the relation between "open community" and "payments"? The Elphel project is not my hobby; it is a commercial company. We develop and manufacture products based on Free Software and Open Hardware (as our site says) - free as in freedom (Cameras - Free Software Foundation), not as in beer :-)
Andrey Filippov
June 19th, 2009, 04:54 PM   #14
APERTUS, issues on resolution and artifacts

Thanks for the detailed answers.

APERTUS and Elphel seem more than slightly related, in that it seems APERTUS will be a version of the Elphel camera that only records JP4 at compression ratios of about 10:1 or more? (I base the compression ratio on the fact that you have a 10MB/s network connection and the sensor puts out about 100MB/s of true RAW sensor data.)

If APERTUS could have other models, could one of them be one that records 100% true RAW sensor data? Or is there no interest in using anything other than JP4 and other lossy compressed data formats for recording?

==

I seem to grasp that JP4 seeks to record the Bayer pattern data in four compressed fields: one 1/4-size RED, one 1/4-size BLUE, and two 1/4-size GREEN. And that you have experimented with using two gain values for the two GREEN fields in order to do HDR. You group the pixels of the same color for compression to avoid interactions between the colors.

What I was asking was the number of bits each color field is recorded with; the answer seems to be 8, which is not enough to allow heavy color correction without major histogram gaps?

The other question is: are there sample JP4 frame files I can download, and do you have a program to convert the JP4 sample files into RAW, DNG, and/or (48bpp) TIF files that I can download for free?

If you do not have sample JP4 files and a converter, do you have sample 16bit (48bpp) TIF files made from JP4 that I can look at full size, 1:1, to check for artifacts?

If so, I could process the converted TIF files in my CC and look at the results. My system uses 16bit (48bpp) TIF files that are numbered like 00000000.TIF, 00000001.TIF, or 0.TIF, 1.TIF. The left-padded ones are better for Windows(tm), since they sort in the right order in a directory listing.

If the JP4 can be converted into a (48bpp) TIF file WITHOUT de-mosaic, I can convert the data into RAW and de-mosaic it myself to see how the compression artifacts look. In other words, the TIF would be a Bayer-color "checkerboard" image, with the colors not interpolated and set to black where not used; this way the JP4 image can be looked at before interpolation in any graphics viewer.

You could also have a conversion of JP4 to a TIF that gives a monochrome image in which the four color fields are output as 1/4-size quadrants; that would make looking for compression artifacts easier.
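That quadrant layout is simple to produce from any un-demosaiced mosaic; a minimal sketch:

[code]
# Sketch of the quadrant view described above: the four Bayer phases
# of an un-demosaiced mosaic shown as four 1/4-size monochrome tiles.
import numpy as np

def bayer_quadrants(mosaic):
    """mosaic: 2-D array of raw Bayer data (even width and height)."""
    top = np.hstack([mosaic[0::2, 0::2], mosaic[0::2, 1::2]])
    bottom = np.hstack([mosaic[1::2, 0::2], mosaic[1::2, 1::2]])
    return np.vstack([top, bottom])
[/code]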

==

In order to compress the four color "fields" in the JP4 file, some of the brightness values of the pixels must change? If not, how so?

==

In the bright areas of the image the 12-bit linear data is run through a curve to adjust the highlight detail; therefore the distance between the data levels will vary, and the fine histogram gaps in the 16bit x 3 output will be more or less pronounced. My system is mostly set up to adjust the highlights for video gamma output, in the range of gamma 2.2 to 2.4, but other gammas can be set with the adjustments.

This is why 12-bit data is needed and 8-bit data is not enough, even if the 8-bit data is run through a gamma LUT in the camera, since sometimes the highlights or shadows are expanded in contrast.
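A small illustration of those histogram gaps: stretch a region 2x in contrast and count how many output levels survive, starting from an 8-bit source versus a 12-bit source.

[code]
# Histogram-gap illustration: a 2x contrast stretch leaves every
# other output level empty when the source is 8-bit, while a 12-bit
# source still fills all 256 output levels.
import numpy as np

ramp8 = np.arange(256, dtype=np.float64)     # all 8-bit input levels
ramp12 = np.arange(4096, dtype=np.float64)   # all 12-bit input levels

graded8 = np.unique(np.clip(ramp8 * 2, 0, 255).astype(np.uint8))
graded12 = np.unique(np.clip(ramp12 * 2 / 16, 0, 255).astype(np.uint8))

print(len(graded8), len(graded12))  # 129 vs 256 distinct output levels
[/code]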

With the Aptina P401 sensor the A2D is inside, so if you record all 12 bits and use all of them in the de-mosaic and grading, you get the best results the sensor can produce.

==

The order of the circuit parts would be:

sensor > FIFO > network > PC > RAM/SSD > HD

or

sensor > FIFO > single_board_computer > HD/SSD

or

sensor > FIFO acting as memory card,

then

FIFO > reader > PC > HD

Some way of setting up the sensor would be needed, maybe through the network/USB, or, for stand-alone use, by way of a PIC chip and EEPROM.

The viewfinder can be a scan converter on a sensor data tap, or software in the PC.

sensor > scan_converter > SVGA/NTSC/PAL

==

What sort of memory chip are you using for your 64MB buffer/FIFO now? i.e. SRAM/DRAM/FLASH?

How much would it cost to use enough of those chips to make a 4GB buffer/FIFO?

Is there a lower cost memory chip for the 4GB buffer/FIFO that you understand how to hook up?

How much would that one cost for 4GB?

Since the FIFO is the most important part of the circuit, it should be prototyped first. If you can help me understand which chip would work best for a 150MB/s+ FIFO, I might be able to design a prototype buffer for testing. Information gained from this buffer trial might also help later with making the APERTUS camera work glitch-free on various computers.

As for programming other parts like the PIC, EEPROM, or FPGA: I do not have a programmer device, and was asking if you could program the chips for me from the data files generated by software designed for such chips.

==

My view is that there are many compressed camcorders/DSLRs etc. in the $1000 to $5000 range, and then there are the Scarlet/Acam dII and other higher-priced cameras in the Digital Cinema mold.

For use with my "freeish" Digital Cinema programs, I was looking for a lower-cost (maybe $250 to $750), simple, dependable, minimum-parts camera that would have true Digital Cinema specs, like:

1) Records full uncompressed RAW sensor data for 15 seconds or more (same as an Eyemo)

2) Records at exact frame rates for cinema use: 24.000fps, +/- one frame per shot length.

3) Shoots 2048x1152x12bit or better, like 2592x1102x12bit.

4) Has full manual exposure, color balance, gain, etc.

5) Has a changeable lens mount for various lens types

6) Has an autoslate beep/flash circuit for double-system recording

7) Has an LED bargraph on the sensor data bits for quick exposure adjustments

8) Has some way to set the shutter angle, ~0 to ~360 degrees

9) Not required, but a scan-converter viewfinder with both full-field and 1:1 zoom would make it more usable. Otherwise a reflex zoom lens, or an NTSC/PAL camera focused on the chip face, and focusing by scale.

10) Produces images of high enough quality for 35mm movie film prints to be made and look close to ECN in the end result, without "video" or compression artifacts, and with film-like dynamic range.

The needs of Digital Cinema feature filmmaking are not the same as for other uses of cameras like the Elphel; mostly, the record time per shot does not need to be long, and the quality of the images must be as high as the sensor can produce.

Digital Cinema is all about image quality and manual control; otherwise, why not just purchase a Canon 5D MkII or some HD camcorder?
Dan Hudgins
June 19th, 2009, 06:23 PM   #15
Quote:
Originally Posted by Dan Hudgins
I was looking for a lower-cost (maybe $250 to $750), simple, dependable, minimum-parts camera that would have true Digital Cinema specs [the ten-point list above]
I want that too! :-D Please let me know when you find it ;-)

Now, all your points are true as an ultimate goal, but we must be realistic and see that something called a 'project' is a growing thing, sometimes even growing slowly. Just go back a year and see the difference in technology and the progress that has been made so far. Then think a year ahead and imagine what is possible.
Oscar Spierenburg