DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Apertus: Open Source Cinema Project (https://www.dvinfo.net/forum/apertus-open-source-cinema-project/)
-   -   Elphel image quality? (https://www.dvinfo.net/forum/apertus-open-source-cinema-project/114676-elphel-image-quality.html)

Brandt Wilson February 12th, 2008 05:30 PM

Elphel image quality?
 
What kind of image quality are users of Elphel cameras able to pull from them? What camera footage would you compare it to?

Oscar Spierenburg February 14th, 2008 01:46 PM

There is some footage in the Elphel 333 thread and on this page. The new 353 camera hasn't really been tested for film purposes yet.

Brandt Wilson February 14th, 2008 02:29 PM

Thanks Oscar. I had seen some images on the wiki, but not these.

Oscar Spierenburg February 14th, 2008 03:37 PM

2 Attachment(s)
So, what do you think, Brandt? Maybe this is a good start to discuss and improve the image of the camera.
First of all, you'll notice the compression; if you zoom in you'll also notice that the compression uses quite large blocks. At first sight it looks bad, but it's completely removable with a small box blur. The blur doesn't soften the actual image. See the test (zoom in some more to see the blocks)
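
A box blur like the one described is just a local average; a minimal NumPy sketch (hypothetical, not the tool used on the attached tests) shows why it flattens block edges without disturbing flat areas:

```python
import numpy as np

def box_blur(img, k=3):
    """Apply a k x k box blur to a 2-D image array (edge pixels repeat)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# A flat region with a hard block-style step: the blur smooths the step
# while leaving genuinely flat areas untouched.
blocky = np.zeros((8, 8))
blocky[:, 4:] = 10.0
smoothed = box_blur(blocky)
```

On a hard 10-count step the blurred edge becomes a gentle ramp, while flat regions come back unchanged - which is why the blur hides block edges without softening real detail much.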
I've also attached an early shot with the 333. I guess I used 80% compression. Maybe Andrey can tell us if the Hard disk recording with the 353 enables us to use lower compression (like 90 or 95%) without frame drops.

Brandt Wilson February 14th, 2008 10:55 PM

I'm not a coder, so becoming actively involved in the behind-the-scenes workings of this project has been a bit intimidating for me. I need to get myself up to speed on how to program and use the camera with the web interface, and how to modify the interface itself. Do you know of any tutorials I can learn from?

The blocks are larger than I would have expected. I had thought that the compression was adaptive, depending on the nature of the adjacent pixels. I'm still learning the technology side, so I'm afraid that my insights into this part of the project won't be very inspiring. Zooming in on the image of the woman with the buildings behind her, I can tell that the compression scheme leaves an image that looks a bit like reversal film. There is a lot of noise and the whites blow out quickly.

The interior shot of the girl has a pleasant curve in the skin tones, but the darker areas, particularly her hair, look really noisy.

How does this 20x20 pixel macroblock sampling compare to the Digic II or III processor's method? I notice that the HV20, even at 4:2:0 chroma subsampling, has a nice filmic curve without large compression blocks.

I did see the reference to the bloodsimple codec. Do we have examples of images or video taken with this codec?

I know that for digital cameras, there are filters that reduce or remove digital noise. Some of these filters work with fixed pattern noise, so if the noise on the Elphel is fixed pattern, perhaps a grain remover would be better than a blur.

The curve is of particular interest to me...there is a sharpness and flatness in the transitions between tonal values. A cinematic curve could really improve some of these starker shots.

Also Oscar, what was your "shutter speed" on the video of your girlfriend working on the path? I saw a lot of strobing in the movement of the gravel as it was shoveled into the wheelbarrow. A 1/48 second (20.83 millisecond) exposure would provide a nice filmic motion blur.

Maybe for really digging into this it would be helpful to get some test shots without the assistance of the ground glass adapter. If it is providing additional grain, it will throw off my impression of what the sensor and FPGA are capable of.

Andrey Filippov February 16th, 2008 10:50 PM

Brandt,

The current color demosaic in the camera is very basic - it uses only 3x3 blocks (even though the FPGA reads 5x5 from memory) - but we have a "JP4" mode that provides slightly re-ordered raw Bayer pixels. It is a format that can be post-processed to perform a quality demosaic.
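
For illustration, a small-window interpolation of the kind described - here a hypothetical 3x3 bilinear fill of the green channel, not the actual FPGA code - looks like this:

```python
import numpy as np

def bilinear_green(mosaic):
    """Fill green at R/B sites of an RGGB Bayer mosaic by averaging the
    four green neighbours -- a 3x3-support interpolation like a basic
    in-camera one; quality post-processing demosaicers use larger windows."""
    g = mosaic.astype(float).copy()
    h, w = mosaic.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 0:  # R or B site (green sites have y+x odd)
                g[y, x] = (g[y-1, x] + g[y+1, x] + g[y, x-1] + g[y, x+1]) / 4
    return g

# Uniform green illumination: green sites read 100, R/B sites read 0.
mosaic = np.zeros((4, 4))
mosaic[0::2, 1::2] = 100.0
mosaic[1::2, 0::2] = 100.0
filled = bilinear_green(mosaic)
```

Under flat illumination the fill is exact; it is on edges and fine texture that small-window interpolation falls behind the larger-support algorithms possible in post.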

Noise in the darks - it can be improved slightly by eliminating the FPN, but still this sensor has only ~8500 e- full well capacity (even with the noise of the analog amplifiers close to a single e-).

Andrey

Dan Hudgins June 18th, 2009 04:23 AM

True RAW sensor data PLEASE!
 
With a few changes, the basic idea of the Elphel camera could be made into a useful Digital Cinema camera. These are:

1) Record true RAW sensor data, all 12 bits. Use delta values if you need to reduce the bandwidth: split the even and odd pixels to reduce the delta, that is RED-RED=DELTA RED, and so on. You don't need to take the color into account, just use odd-odd=odd delta and even-even=even delta. To make the bandwidth even smaller you can change the number of bits in the "word" size being recorded. This should be programmable in the FPGA; it is not complex, or even strictly needed.

2) Use the Aptina MT9P401I12STC sensor, because it can(?) support high enough resolutions and frame rates for Digital Cinema use:

2592x1102x12bit@23.976/24fps

2560x1080x12bit@23.976/24/25fps

2048x1152x12bit@23.976/24/25fps

1920x1080x12bit@23.976/24/25/29.97/30fps

1280x720x12bit@23.976/24/25/29.97/30/47.952/48/59.94/60fps

And lower resolutions like 720x480 at higher speeds for slow motion.

3) Use shift registers to divide the bandwidth and record to a DRAM/FLASH/SRAM FIFO that can be used in a "Shoot then Dump" camera concept. This reduces the cost over a "Shoot then Swap" camera, since you can off-load the buffer memory. Each of the 12 data bits would be sent to its own memory chip, CF or SD card, etc. You can use flip-flops to record even and odd address pixels to 24 memory units if needed to cut the bandwidth by 32x.
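
The delta scheme in point 1 can be sketched as follows (a hypothetical Python illustration of the idea, not FPGA code): because only same-parity pixels are differenced, each delta stays within one Bayer colour, so deltas stay small even across coloured edges, and the transform is exactly reversible.

```python
import numpy as np

def delta_encode_row(row):
    """Keep the first even and first odd pixel, then store differences
    between same-parity (same Bayer colour) pixels along the row."""
    row = row.astype(np.int32)
    out = row.copy()
    out[2::2] = row[2::2] - row[:-2:2]   # even - previous even
    out[3::2] = row[3::2] - row[1:-2:2]  # odd - previous odd
    return out

def delta_decode_row(deltas):
    """Exact inverse: running sums over the even and odd positions."""
    out = np.empty_like(deltas)
    out[0::2] = np.cumsum(deltas[0::2])
    out[1::2] = np.cumsum(deltas[1::2])
    return out

# One Bayer row: green pixels near 400, red pixels near 2100.
row = np.array([400, 2100, 408, 2090, 395, 2110, 402, 2095])
enc = delta_encode_row(row)   # [400, 2100, 8, -10, -13, 20, 7, -15]
```

All deltas after the two seed pixels fit in far fewer than 12 bits, which is where the bandwidth saving comes from.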

==

The main problem with the Elphel cameras is their in-camera de-Bayer and the use of compression. If you get rid of both of those problems, the Aptina MT9P401I12STC sensor seems fully able to be used for 35mm motion picture production, as far as I have been able to tell during development of my de-mosaic software for it.

You can read some more about where I am with my Digital Cinema projects in this post on ScarletUser,

Open Source Camera - Page 10 - Scarletuser.com

If you are open to talking about working together on an off-shoot of the Elphel that records full uncompressed data, I have several camera design concepts that could work. Mostly I would need help with the memory chips, since their read/write cycles and bad memory addresses that need to be bypassed become the major issue in the FIFO buffer memory: you need a low-cost 4GB or so buffer that records about 150MB/sec without errors.

My editing system does full uncompressed Digital Cinema filmmaking with color correction and sound mixing, and it is up for download right now. The main part runs under FreeDOS, so you can have many systems running in parallel to speed the processing of the many large frames used in making a feature film, without OS licence fees, etc.

If anyone has questions about my NLE/DICC/MIX software they can email me. If you have any true RAW sensor data from a Digital Cinema camera you can contact me about supporting it in my de-mosaic program I am developing, and I can do some trial CC on your data so you can see some look options maybe, etc.

Dan Hudgins
The official DANCAD3D (tm) Beta Test Web site.
tempnulbox [at] yahoo [dot] com

Andrey Filippov June 18th, 2009 09:38 AM

Do you really need "raw"?
 
There are two articles in LinuxDevices on the topic I wrote earlier this year.

The first deals with the amount of information each sensor pixel can provide and how many bits are required to represent it: How many bits are really needed in the image pixels?

The second one, among other things, describes the JP4 format that we originally developed to store high-res images of scanned documents. That compression mode bypasses the debayer in the camera, leaving it for higher quality post-processing:
Elphel camera under the hood: from Verilog to PHP

And then - there is some info on the model 373 camera under development -
Elphel Development Blog Circuit design - 10373

Dan Hudgins June 18th, 2009 06:07 PM

Grading
 
The use of true RAW sensor data is important for these main issues:

1) When you are going to project the images on a 30 meter wide screen, some people will be sitting close enough to the screen to see any artifacts, even if those are just a loss of texture etc. If you are going to replace 35mm negative film with sensor data, you have the most control in post using true RAW sensor data, and you do not need to do real-time processing in the camera. There is no limit to how long you can process the data in post, so if you have the full true RAW data you can decide later how much time to spend getting the best results that can be obtained. This lets your camera stand out above the camcorders made by major companies in terms of image quality. The needs of true Digital Cinema to equal or surpass ECN (the industry gold standard) require the use of maximum image quality.

2) Although some compression artifacts may be hard to see in graded finished images using video gamma, they are not good to have in a camera "negative", since the process of grading increases the contrast, adjusts the gamma, shifts the brightness, and increases the saturation and sharpness, all of which can make compression artifacts more visible. It is not hard to record true RAW sensor data with today's memory chips at a resolution of 2.5K and a frame rate of 24fps. Rather than spend 80% of the bandwidth budget on compression systems that leave artifacts in the images, if you use delta encoding you get 100% image quality, without any artifacts that can be made more visible through extreme grading.

3) Because shooting with uncontrolled high dynamic range lighting requires a wide dynamic range sensor, it is better to leave the grading to post rather than doing white balance and other grading tasks in the camera. Record everything the sensor can put out: you are too busy with the actors and other issues when filmmaking, and often do not get the exposure just right, or need to under-expose to hold the highlights, so fine adjustments to the grading are best left to post using the full sensor data, since various noise filters will be needed depending on the subject matter and the way the light falls on the subjects in that particular shot. Camcorders use automatic exposure to try to hold the mid-tones within the limited range of their compressed recording formats, but for true Digital Cinema use you need true RAW sensor data, since it is more like a color negative in its wider-than-"needed" range and resolution.

What is the argument against recording RAW data for Digital Cinema use?

1) TB disks are now very cheap compared to 35mm ECN at 2.5K 24fps recording.

2) The bandwidth is not that much higher, and today's parts can handle it.

3) The camera circuit is simpler and uses less power without the processing of the data.

4) You get the maximum quality the sensor can produce without any compromise.

5) Delta encoding can be used to reduce bandwidth without any negative effects on the image quality.

I understand that years ago there were limits that made compression useful in a network camera for surveillance, but a new mindset is needed for Digital Cinema use. There are many compressed HD cameras at low prices; Digital Cinema is about a level of quality that is above video in many aspects and more like the 35mm movie film it is going to replace. With respect for the filmmakers of the past who worked to make such beautiful images, we owe it to their legacy to make Digital Cinema better and to carry on the uncompressed nature of motion picture images, rather than making just another compressed camcorder...

If you cannot figure out how to make a low cost uncompressed camera I can talk to you about the circuit details and how to eliminate unneeded parts from the circuit.

If you insist that the camera MUST produce artifacts in the images, I cannot understand why that MUST be the case, why?

JP4 introduces brightness changes to the pixel values, which upsets surrounding pixels during the de-mosaic process. This is also a problem with the R3D files, but with smaller sensors the size of the artifacts produced is larger relative to the frame height. To avoid these extra artifacts you should not compress the sensor data at all.

In true RAW sensor data noise is mixed with true data. Although the noise may be high in a single pixel, during the de-mosaic process several pixels are used to get the resulting brightness data for each pixel in the image, so the brightness accuracy can increase after de-mosaic, and the data in the noise contributes to the accuracy of the final image, resulting in more bits of data rather than less.

Andrey Filippov June 18th, 2009 09:10 PM

Raw video
 
Elphel Model 353 can either send video over the network at up to 10MB/sec (100Mb/s Ethernet) or record it to attached CF or HDD. In that mode the data rate is limited to ~16MB/sec - the limitation comes from the ATA implementation in the ETRAX FS processor we use in that camera (next model will have a different one).

Good quality JP4 images for the full 5 MPix frame can be ~1 MB each, so at the maximal frame rate the sensor is capable of (15fps) we can only record the full data rate to the CF/HDD - there is not enough bandwidth for the network.
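
The arithmetic behind that limit, restated with the figures from this post:

```python
# Figures from the post: ~1 MB per 5 MPix JP4 frame, 15 fps sensor,
# ~16 MB/s ATA limit, ~10 MB/s usable on 100 Mb/s Ethernet.
jp4_rate = 1.0 * 15            # MB/s of JP4 data at full frame rate
fits_disk = jp4_rate <= 16.0   # CF/HDD can just keep up
fits_net = jp4_rate <= 10.0    # the network cannot
```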

Development of a new camera costs us several hundred thousand, and what you are suggesting is a completely different camera that has nothing in common with what we are making currently. And right now I'm busy with the 373 development, and it will keep me busy for quite a while - after migrating to the new architecture and porting the existing code I'll need to upgrade the sensor and put more stuff in the improved hardware.

We have very limited resources and cannot afford to engage in costly development that, to be honest, I do not really understand - both technically and financially.

Quote:

JP4 introduces brightness changes to the pixel values, which upsets surrounding pixels during the de-mosaic process,...
This I do not understand. In JP4 there is no de-mosaic and it does not "change the pixel values". When quantization is turned off (==1.0 for each DCT coefficient) the compression is virtually lossless (difference between the restored and original image is less than 1 count). Just regular JPEG (if pixel array is treated as monochrome, no color conversion) would do the same, but - produce larger files than JP4 as it does not exploit the "knowledge" that the pixels have color filters, and any color produces odd/even pixel modulation.


Taking the MT9P001/031 sensor (5MPix@15fps) - it outputs an average of 75MPix/sec. If you use "raw" 12 bits (or maybe use the analog output from other sensors and add some 16-bit ADC? Or an 18-bit one?) that will be 112.5 MB/sec. You suggest some "differences" between pixels to get what is called "lossless compression" - with that you can get 2x compression, 2.5x with advanced algorithms (BTW, JPEG/JP4 with all quantization coefficients of 1.0 induce an error of less than 1 count). OK, say you implement a really good lossless algorithm in the FPGA and get 2.5x. So you need to record 45 MB/sec - 4.5 times more than the network interface in the 353 camera and 3 times more than its ATA interface. So while providing extra bits (that do not provide more real data, by the laws of physics) and using an inappropriate compression algorithm (truly lossless compression is only needed when the data is going to be edited multiple times), I would have to reduce the frame rate 3 times. And that would cause _very_ visible artifacts.
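
Restating the arithmetic of this paragraph (the same figures, just spelled out):

```python
# MT9P001/031: 5 MPix at 15 fps, 12 bits per pixel (figures from the post)
raw_rate = 5e6 * 15 * 12 / 8       # 112.5e6 bytes/sec of packed raw data
lossless_rate = raw_rate / 2.5     # optimistic 2.5x lossless compression
net_rate = 10e6                    # 100 Mb/s Ethernet, ~10 MB/s usable
ata_rate = 16e6                    # model 353 ATA limit
shortfall_net = lossless_rate / net_rate   # 4.5x too much for the network
shortfall_ata = lossless_rate / ata_rate   # ~2.8x too much for ATA
```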

Dan Hudgins June 19th, 2009 01:39 AM

What makes it a Digital Cinema camera?
 
If you have decided that you will not make a new design for a Digital Cinema camera, then basically you are, for the most part, just calling a network compressed surveillance camera by a new name. It is something from the past, not taking full advantage of new parts, and not really in the spirit of Digital Cinema.

Can anyone agree on a definition of what "Digital Cinema Camera" means, what it requires, and what separates it from a camcorder or network surveillance camera?

I can see that you would like to find new markets for your network compressed surveillance camera, but that does not turn it into a good tool for 35mm filmmaking without major changes, as far as I can see so far.

==

I understand that the JP4 is to re-construct the Bayer pattern image, but in order to get high compression ratios the brightness of pixels changes, and so when the de-mosaic is done later on the re-constructed data, those changes affect the resulting pixels around the changed ones because of the interpolation used; in other words, the errors enlarge the artifacts. I.e. take a RED pixel: when interpolated it is then surrounded by 8 interpolated pixels, so if that RED pixel was off, so will be the 8 interpolated ones.

How many bits does your JP4 image hold? (8-bit data introduces histogram gaps after grading, which are generally unacceptable for professional cinematography due to the tone banding and posterization, in addition to the compression artifacts etc. degrading the images)

In what program can I open the JP4 file? (do I get 12bits or 8bits)

Where can I download sample JP4 files to look at?

Do you have a converter program to convert JP4 to 16bit (48bpp) TIF files?

Do you have a converter program to convert JP4 to DNG files? (so DCRAW can be used to make 48bpp TIF files)

==

In order to compress the data in the JP4 file, you need to alter the brightness of pixels, are you saying that the maximum error of the 12th LSB bit (bit 0) is the greatest error for any pixel in the image, that is you get top 11bits with 100% accuracy in all frames at all times? (doing a pixel by pixel difference between the true RAW data and the de-compressed result, if you do not have true RAW data how did you measure this?)

==

Is there a network adapter that can go at 100MBytes/second rather than 10MB/s?

Why can't you use the 100MB/s adapter and record to RAM in the PC, PC should be able to hold 4GB now?

Storing the 12bit data two pixels in 3 bytes you get:

2592x1102x1.5x24=103MB/s (peak data rate would be faster close to 150MB/s without the FIFO on the sensor data lines)

4096/103 = ~39 seconds of record time in 4GB.

39 seconds is more than enough per shot for making feature films. I think 2.5:1 is too optimistic for most images, but a 10% to 25% size reduction should be possible by encoding the data in "words" shorter than 12 bits, so the record time could be longer than 39 seconds.
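
The 3-bytes-per-two-pixels packing behind the 103MB/s figure can be sketched like this (an illustrative example, not camera code):

```python
def pack12(p0, p1):
    """Pack two 12-bit values (0..4095) into 3 bytes."""
    return bytes(((p0 >> 4) & 0xFF,
                  ((p0 & 0x0F) << 4) | ((p1 >> 8) & 0x0F),
                  p1 & 0xFF))

def unpack12(b):
    """Exact inverse of pack12."""
    p0 = (b[0] << 4) | (b[1] >> 4)
    p1 = ((b[1] & 0x0F) << 8) | b[2]
    return p0, p1

# 2592x1102 pixels, 1.5 bytes each, 24 fps -> ~103 MB/s as stated above
rate_mb = 2592 * 1102 * 1.5 * 24 / 1e6
```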

If a FIFO is used to store the data, then you can send it over your 10MB/s network connection; it would just take 15x as long to download. USB would seem to be faster than your 10MB/s connection, would it not? Like 35MB/s to 40MB/s, so the download would be about 4x faster. With the 100MB/s network connection and a FIFO buffering the sensor data there is almost no dump delay time if recording to RAM in the PC.

==

I do not understand what would cost a bunch of money, you already understand how the parts work, you just remove the ones that are not needed, and add more memory chips to make a buffer. Please tell me where all that money would go?

I might be able to lay out the circuit board if someone could explain some of the details of the operation of the memory chips and the timing cycles of the sensor programming. I do not have an FPGA, PIC, EPROM, or programmer, nor do I have the software to program those chips at this time. You have those - would you be able to burn the chips?

Are you paying salaries for the development of this so called "Digital Cinema Camera"?

How many Elphel have you sold, i.e. what has been your gross and net, are you running in the red for all the years of development costs? How much are you in the hole?

Would you help me for free or would you expect payment?

Not so much an open community project if you expect payments, is it?

As for 16-bit or 18-bit A2D output, well, noise would take up most of the bottom bits on those, would it not? Anyway, that would make the deltas larger, but if the sensor is clean then the deltas would not be that much larger. I do not think there are sensors running at 24fps that give useful output at the 18th LSB, do you have a link for one?

Oscar Spierenburg June 19th, 2009 06:59 AM

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
I can see that you would like to find new markets for your network compressed surveillance camera, but that does not turn it into a good tool for 35mm filmmaking without major changes, as far as I can see so far.

Dan, I'm sure you're right from your point of view. But there is a little confusion here. Elphel is an independent company. They make their open source cameras for different purposes (depending on the end user). This could be surveillance, 3D, panoramic imaging etc...

Apertus is a project that tries to build on the 'open' qualities of the camera, but has no commercial attachment to Elphel.
So Andrey Filippov (Elphel) is here to answer technical questions from time to time about the camera itself.
Take a look at our site and see how the project is set up: Home page | Apertus Open Source Cinema

Andrey Filippov June 19th, 2009 09:06 AM

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
If you have decided that you will not make a new design for a Digital Cinema camera, then basically you are, for the most part, just calling a network compressed surveillance camera by a new name. It is something from the past, not taking full advantage of new parts, and not really in the spirit of Digital Cinema.

Dan, there is some misunderstanding. Elphel does not manufacture (or market) any "Digital Cinema" cameras - you may go to our web site and look at the price list. On the other hand, we do work on a new camera - I already linked that info. And that new camera will be capable of higher resolution/frame rate (at the expense of higher power consumption) and can be used in "Digital Cinema" projects, but that was never our only (or even main) application - you can use an Internet search to find some other areas where Elphel cameras may be or are already used.

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
I understand that the JP4 is to re-construct the Bayer pattern image, but in order to get high compression ratios the brightness of pixels changes, and so when the de-mosaic is done later on the re-constructed data, those changes affect the resulting pixels around the changed ones because of the interpolation used...

There is no interpolation involved in JP4 - each pixel's data is encoded separately, and when the quantization coefficients are all 1.0 no high spatial frequencies are cut. The only interpolation is in the post-processing, and it is related to the Bayer pattern of the sensor, not to the camera.

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
How many bits does your JP4 image hold? (8-bit data introduces histogram gaps after grading, which are generally unacceptable for professional cinematography due to the tone banding and posterization, in addition to the compression artifacts etc. degrading the images)

Currently we use 8 bits - a number that perfectly matches the 12-bit sensors we use.

Later you write:
Quote:

Originally Posted by Dan Hudgins (Post 1160590)
As for 16-bit or 18-bit A2D output, well, noise would take up most of the bottom bits on those, would it not?

So you understand that if the sensor noise is about one 12-bit ADC count (in the darks), adding 4 extra ADC bits will just increase the noise to about 16 counts - so the 4 LSBs would not carry much additional information.
Let's come back to the 12-bit sensor we use. In the darks - yes, it has "true" 12 bits, so (at low analog gain settings) the ADC matches the noise performance of the pixels.

But what about the whites? If you get a signal close to saturation - let's say 4000 ADC counts - how many bits are "true" there? What is the noise level (in ADC counts)?
One?
Same 16 as if you use 16-bit ADC for the "12-bit" sensor?
Even more?

Do you have an answer ?
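
For a sense of scale, a back-of-envelope answer using the ~8500 e- full-well figure quoted earlier in this thread, and assuming purely Poisson photon statistics:

```python
import math

full_well_e = 8500                 # full-well capacity quoted earlier
gain = full_well_e / 4096          # ~2.1 e- per 12-bit ADC count

signal_counts = 4000               # near saturation
signal_e = signal_counts * gain
shot_noise_e = math.sqrt(signal_e)        # Poisson photon shot noise
shot_noise_counts = shot_noise_e / gain   # roughly 44 counts
```

Tens of counts of noise near saturation - far more than one ADC count - is the point of the questions above: extra ADC bits at the top end digitize noise, not signal.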

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
In what program can I open the JP4 file? (do I get 12bits or 8bits)
Where can I download sample JP4 files to look at?
Do you have a converter program to convert JP4 to 16bit (48bpp) TIF files?
Do you have a converter program to convert JP4 to DNG files? (so DCRAW can be used to make 48bpp TIF files)

Unfortunately, when I created the original version of JP4 I made only the encoder part, while the decoder was made by our customers as non-GPL proprietary software that they used only in-house. But by now I'm sure there are some programs and plugins available to do the job. Basically it consists of 2 parts: restore the sensor Bayer mosaic data, then perform a quality de-Bayer. The first part is easy; the second is well known - multiple algorithms are described and implemented.


Quote:

Originally Posted by Dan Hudgins (Post 1160590)
In order to compress the data in the JP4 file, you need to alter the brightness of pixels, are you saying that the maximum error of the 12th LSB bit (bit 0) is the greatest error for any pixel in the image, that is you get top 11bits with 100% accuracy in all frames at all times? (doing a pixel by pixel difference between the true RAW data and the de-compressed result, if you do not have true RAW data how did you measure this?)

This I tried to explain in How many bits are really needed in the image pixels?
You are probably confusing 2 things:
1 - noise-matched non-linear encoding of the individual pixels, where the encoded ADC data keeps the same useful resolution over the whole range of output values
2 - compression of those values
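
Point 1 can be illustrated with a simple square-root companding curve (a generic sketch of noise-matched encoding, not Elphel's actual table):

```python
import math

def compand(v, in_max=4095, out_max=255):
    """Square-root companding 12-bit -> 8-bit: the quantization step
    grows with the signal, roughly tracking shot noise (~sqrt(signal))."""
    return round(math.sqrt(v / in_max) * out_max)

def expand(c, in_max=4095, out_max=255):
    """Approximate inverse mapping 8-bit codes back to 12-bit values."""
    return round((c / out_max) ** 2 * in_max)
```

The reconstruction error stays within a count or two everywhere, well below the tens of counts of shot noise in the highlights, which is how 8 encoded bits can carry the useful content of a 12-bit sensor.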

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
Is there a network adapter that can go at 100MBytes/second rather than 10MB/s?

Yes, sure, there are now USB Gigabit adapters that provide up to 100MB/sec. But the USB port on the Axis ETRAX FS processor used in the Elphel model 353 camera is USB 1.1, not USB 2.0, so you would get some 1.5MB/sec, not the full speed that adapter is capable of. (And it may be capable of less than 100MB/s - I do not have much experience with them.)

The next camera will use a different CPU with GigE - on-board, not through an adapter. And that higher network speed will definitely allow us to go to higher resolution and frame rate.

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
Why can't you use the 100MB/s adapter and record to RAM in the PC, PC should be able to hold 4GB now?

So where do you suggest to plug the "adapter" on the camera side?

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
Storing the 12bit data two pixels in 3 bytes you get:
2592x1102x1.5x24=103MB/s (peak data rate would be faster close to 150MB/s without the FIFO on the sensor data lines)

Model 353 camera has 64MB of memory that you can use as a "FIFO on the sensor data lines". It has a peak bandwidth of 640MB/sec, with an average of more than 90% of that.

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
With the 100MB/s network connection and a FIFO buffering the sensor data there is almost no dump delay time if recording to RAM in the PC.

Yes, there are currently dozens of cameras that have a GigE (100MB/s) connection that you could use for your project; the Elphel 353 has just a 100Mb/s network. And the next model, the 373, will use the extra bandwidth to increase resolution/frame rate or stream data from multiple sensors.

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
I do not understand what would cost a bunch of money, you already understand how the parts work, you just remove the ones that are not needed, and add more memory chips to make a buffer. Please tell me where all that money would go?

I do not think Elphel is anything that different from other companies. And the cost of new product development is more than just the cost of the set of electronic components from Digi-Key to build a prototype :-)


Quote:

Originally Posted by Dan Hudgins (Post 1160590)
I might be able to lay out the circuit board if someone could explain some of the details of the operation of the memory chips and the timing cycles of the sensor programing. I do not have a FPGA, PIC, EPROM, programer, nor do I have the software to program those chips at this time, you have those, would you be able to burn the chips?

Maybe that "someone" is a college? And reading some publications on the matter, component datasheets, studying other designs, building your own?

And, btw, the only chip "burnt" in the model 353 camera (that requires a special programmer device) is a programmable clock generator that needs to generate specific frequencies at startup. All other programmable devices (flash memory, SRAM-based FPGA) are programmed by the camera itself and do not require any special equipment - you may find that information on our web site and on Sourceforge (on our project page).


Quote:

Originally Posted by Dan Hudgins (Post 1160590)
Are you paying salaries for the development of this so called "Digital Cinema Camera"?

Definitely not.

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
How many Elphel have you sold, i.e. what has been your gross and net, are you running in the red for all the years of development costs? How much are you in the hole?

This information is not GPL-ed, but you can do some research on the Internet.

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
Would you help me for free or would you expect payment?

What help exactly do you need from me or Elphel? Answering some questions, explaining details of the circuit boards or code?

Or develop a camera to your specs?

Quote:

Originally Posted by Dan Hudgins (Post 1160590)
Not so much an open community project if you expect payments, is it?

What is the relation between "open community" and "payments"? The Elphel project is not my hobby, it is a commercial company. We develop and manufacture products based on Free Software and Open Hardware (as our site says) - free as in freedom (Cameras - Free Software Foundation), not as in beer :-)

Dan Hudgins June 19th, 2009 04:54 PM

APERTUS, issues on resolution and artifacts
 
Thanks for the detailed answers.

APERTUS and Elphel seem more than slightly related, in that it seems APERTUS will be a version of the Elphel camera that only records JP4 at compression ratios of about 10:1 or more? (I base the compression ratio on the fact that you have a 10MB/s network connection and the sensor puts out about 100MB/s of true RAW sensor data.)

If APERTUS could have other models, then could one of them be one that records true RAW sensor data 100%? Or is there no interest in using anything other than JP4 and other lossy compressed data formats for recording?

==

I seem to grasp that JP4 seeks to record the Bayer pattern data in four compressed images: one 1/4-size RED, one 1/4-size BLUE, and two 1/4-size GREEN. And that you have experimented with using two gain values for the two GREEN fields in order to do HDR. You group the pixels of the same color for compression to avoid interactions between the colors.

What I was asking was the number of bits each color field is recorded with; the answer seems to be 8, which is not enough to allow heavy color correction without major histogram gaps?

The other question is, are there sample JP4 frame files I can download, and do you have a program to convert the JP4 sample files into RAW, DNG, and or (48bpp) TIF files that I can download for free?

If you do not have sample JP4 and a converter, do you have sample 16bit (48bpp) TIF files made from JP4 that I can look at 1:1 full size to check for artifacts?

If so, I could process the converted TIF files in my CC and look at the results. My system uses 16bit (48bpp) TIF files that are numbered like 00000000.TIF, 00000001.TIF, or 0.TIF, 1.TIF. The left-padded ones are better for Windows (tm) since they sort in the right order in a file directory listing.

If the JP4 can be converted into a (48bpp) TIF file WITHOUT de-mosaic, I can convert the data into RAW and de-mosaic it myself to see how the compression artifacts look. In other words, the TIF would be a Bayer color "checkerboard" image, with the colors not interpolated and set to black where not used; this way the JP4 image can be looked at before interpolation in any graphics-type viewer.

You could also have a conversion of JP4 to a TIF that gives a monochrome image in which the four color fields are output as 1/4-size quadrants; that would make looking for compression artifacts easier.

==

In order to compress the 4 color "fields" in the JP4 file, some of the brightness values of the pixels must change? If not how so?

==

In the bright areas of the image the 12-bit linear data is run through a curve to adjust the highlight detail, so the distance between data levels will be larger or smaller, and the fine histogram gaps in the 16bit x3 output will be larger or smaller. My system is mostly set up to adjust the highlights in the images for video gamma output, in the range of gamma 2.2 to 2.4, but other gammas can be set with the adjustments.

This is why 12-bit data is needed and 8-bit data is not enough, even if the 8-bit data is run through a gamma LUT in the camera, since sometimes the highlights or shadows are expanded in contrast.
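The point about in-camera 8-bit LUTs can be illustrated with a toy gamma encode/decode (the curve here is a plain power function chosen for illustration, not any camera's actual transfer curve):

```python
import numpy as np

linear = np.arange(4096)                                  # 12-bit linear codes
# Encode to 8 bits through a toy gamma 2.2 curve (stand-in for a camera LUT)
encoded = np.round(255 * (linear / 4095) ** (1 / 2.2)).astype(np.uint8)
# Decode back into a 16-bit working space, as a grading tool would
decoded = np.round(65535 * (encoded / 255.0) ** 2.2).astype(np.uint16)

levels = len(np.unique(decoded))
print(levels, "distinct levels out of 65536")   # at most 256 survive the LUT
```

However wide the output space, no grading step can recover more distinct tones than the 8-bit bottleneck passed through, which is where the histogram gaps come from.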

With the Aptina P401 sensor the ADC is inside, so if you record all 12 bits and use all of them in de-mosaic and grading, you get the best results the sensor can produce.

==

The order of the circuit parts would be:

sensor > FIFO > network > PC > RAM/SSD > HD

or

sensor > FIFO > single_board_computer > HD/SSD

or

sensor > FIFO acting as memory card,

then

FIFO > reader > PC > HD

Some way of setting up the sensor would be needed, maybe through the network/USB, or for stand-alone use by way of a PIC chip and EEPROM.

A viewfinder could be provided by a scan converter on a sensor data tap, or by software in the PC.

sensor > scan_converter > SVGA/NTSC/PAL

==

What sort of memory chip are you using for your 64MB buffer/FIFO now? i.e. SRAM/DRAM/FLASH?

How much would it cost to use enough of those chips to make a 4GB buffer/FIFO?

Is there a lower cost memory chip for the 4GB buffer/FIFO that you understand how to hook up?

How much would that one cost for 4GB?

Since the FIFO is the most important part of the circuit, it should be the part that is prototyped first. If you can help me understand which chip would work best for a 150MB/s+ FIFO, then I might be able to design a prototype buffer for testing? Information gained from this buffer trial might help later with the APERTUS camera working glitch-free on various computers.

As for programming other parts like PIC, EEPROM, FPGA: I do not have a programmer device, and was asking if you could program the chips for me from the data files generated by software designed for such chips.

==

My view is that there are many compressed camcorders/DSLR etc. in the $1000 to $5000 range, and then there are Scarlet/Acam dII and other higher priced cameras in the Digital Cinema mold.

For use with my "freeish" Digital Cinema programs, I was looking for a lower cost, maybe $250 to $750, simple, dependable, minimum parts camera that would have true Digital Cinema specs, like:

1) Records full uncompressed RAW sensor data for 15 seconds or more (same as an Eyemo)

2) Records at exact frame rates for cinema use 24.000fps, +/- one frame per shot length.

3) Shoots 2048x1152x12bit or better, like 2592x1102x12bit.

4) Has full manual exposure, color balance, gain, etc.

5) Has changeable lens mount for various lens types

6) Has autoslate beep/flash circuit for double system recording

7) Has LED bargraph on sensor data bits for quick exposure adjustments

8) Has some way to set the shutter angle ~0 to ~360

9) Not required but a scan converter viewfinder with both full field and 1:1 zoom would make it more usable. Otherwise a reflex zoom lens, or NTSC/PAL camera focused on chip face and focus by scale.

10) Produces images of quality high enough for 35mm movie film prints to be made and look close to ECN in the end result, without "video" or compression artifacts, and film like dynamic range.

The needs for Digital Cinema feature filmmaking are not the same as for other uses of cameras like the Elphel, mostly the record time per shot does not need to be long, and the quality of the images must be as high as the sensor can produce.

Digital Cinema is all about image quality and manual control; otherwise why not just purchase a Canon 5DMkII or some HD camcorder?

Oscar Spierenburg June 19th, 2009 06:23 PM

Quote:

Originally Posted by Dan Hudgins (Post 1160900)
I was looking for a lower cost, maybe $250 to $750, simple, dependable, minimum parts camera that would have true Digital Cinema specs, like:

1) Records full uncompressed RAW sensor data for 15 seconds or more (same as an Eyemo)
2) Records at exact frame rates for cinema use 24.000fps, +/- one frame per shot length.
3) Shoots 2048x1152x12bit or better, like 2592x1102x12bit.
4) Has full manual exposure, color balance, gain, etc.
5) Has changeable lens mount for various lens types
6) Has autoslate beep/flash circuit for double system recording
7) Has LED bargraph on sensor data bits for quick exposure adjustments
8) Has some way to set the shutter angle ~0 to ~360
9) Not required but a scan converter viewfinder with both full field and 1:1 zoom would make it more usable. Otherwise a reflex zoom lens, or NTSC/PAL camera focused on chip face and focus by scale.
10) Produces images of quality high enough for 35mm movie film prints to be made and look close to ECN in the end result, without "video" or compression artifacts, and film like dynamic range.

I want that too! :-D Please let me know when you find it ;-)

Now all your points are true as an ultimate goal, but we must be realistic and see that something called a 'project' is a growing thing, sometimes even growing slowly. Just go back a year and see the difference in technology and the progress that has been made so far. Then think a year ahead and imagine what is possible.

Andrey Filippov June 19th, 2009 09:55 PM

Dan please read carefully "How many bits are really needed in the image pixels?"
 
Quote:

Originally Posted by Dan Hudgins (Post 1160900)
I seem to grasp that JP4 seeks to record the Bayer pattern data in four compressed images: one 1/4-size RED, one 1/4-size BLUE, and two 1/4-size GREEN. And that you have experimented with using two gain values for the two GREEN fields in order to do HDR. You group the pixels of the same color for compression to avoid interactions between the colors.

It is nice that you are starting to grasp the JP4 idea.

Quote:

Originally Posted by Dan Hudgins (Post 1160900)
What I was asking was the number of bits each color field is recorded with; the answer seems to be 8, which is not enough to allow heavy color correction without major histogram gaps?

But unfortunately - not yet the idea described in How many bits are really needed in the image pixels?

Quote:

Originally Posted by Dan Hudgins (Post 1160900)
In the bright areas of the image the 12-bit linear data is run through a curve to adjust the highlight detail, so the distance between data levels will be larger or smaller, and the fine histogram gaps in the 16bit x3 output will be larger or smaller. My system is mostly set up to adjust the highlights in the images for video gamma output, in the range of gamma 2.2 to 2.4, but other gammas can be set with the adjustments.

Gamma conversion to compensate for CRT characteristics was needed decades ago; now the primary function of such a non-linear conversion is to match the digitized output to the noise performance of the sensor pixel, which in most of the output range (all but the very low, dark end) is dominated by shot noise. So the answer to my previous question has an easy-to-calculate numeric value, not just "more or less". The gaps in the histogram are a different story.

Let's consider a hypothetical sensor with analog output and just 100:1 dynamic range. If you put an 8-bit ADC on it (sufficient for such a sensor, agree?) then the histogram will be limited to just 256 levels, which may produce gaps if you try to use more levels.

First solution: use a 16-bit ADC (they are rather inexpensive already). Even though the 8 lower bits will not carry meaningful information about the image frame, some 3-4 of them may be nicely randomized (not so the other 4-5; it would be a challenge to design the circuit so that it has no harmonic interference visible as a periodic pattern on the image). But those 3-4 extra random bits will help you to smooth the histogram.

Or maybe there is a cheaper (less bandwidth) solution: still use an 8-bit ADC and then just add artificial noise on the decoder side. Those "natural" noise bits did not carry much of the pixel information, so why bother to send them over from the noise generator in the camera?

And now, please understand that the same happens when encoding 12-bit sensor output (with FWC ~= 8500) using 8 bits and a non-linear conversion. Look at the illustrations in my article. Instead of preserving shot noise (you remember, it is equal to the square root of the number of electrons in the pixel?) for smoothing the histogram, why not generate it in the decoder?
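A toy numerical version of this argument (the square-root encode curve and the noise scaling are simplifications for illustration, not Elphel's actual tables):

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat 12-bit patch with shot-like noise (sigma ~ sqrt(signal), simplified)
signal = np.full(100_000, 3000.0)
raw = signal + rng.normal(0, np.sqrt(signal))

encode = lambda x: np.round(255 * np.sqrt(np.clip(x, 0, 4095) / 4095))
decode = lambda c: 4095 * (c / 255.0) ** 2

code = encode(raw)
plain = decode(code)                      # decoded without any dither
# Regenerate roughly one encoding step of noise at the decoder to smooth
# the histogram, instead of transmitting the original noise bits:
step = decode(code + 0.5) - decode(code - 0.5)
dithered = plain + rng.uniform(-0.5, 0.5, raw.size) * step

print(len(np.unique(plain)), "coarse levels; dithered mean",
      round(float(dithered.mean())))
```

The dithered decode fills the gaps between the coarse levels while the mean stays at the true value, which is the substance of the "generate the noise in the decoder" point.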

Dan Hudgins June 20th, 2009 04:13 AM

Sooner than later?
 
Oscar Spier, the specs I listed are easy to get. That idea for the camera is low cost and does not require a user interface; you can just select the sensor program using DIP switches (etc.) on the PIC chip to read the right values from the EPROM.

If Andrey Filippov can help me find a memory chip that will work for the 4GB FIFO then we could be closer to making it real.

==

Andrey Filippov, the output of Digital Cinema is not a monitor, it is motion picture film, which can resolve more than 8 bits of brightness data when viewed at sync-sound running speed. My film-recorder program uses "gamma spread" (still in development) to exceed the range of the monitor used for the film exposure by making stacked exposures. So having more than 8 bits is useful because, as I pointed out, cinematography is ART not science, and part of that art is GRADING the images using curves to complement the lighting of the shot. Because you need to underexpose the sensor to hold the highlights for film-like rendering, you are exposing most of the mid-tones down two to three bits more than you would for video exposure.

Because film is shown at sync speed UNCOMPRESSED your eyes can mix details from one frame to the next and see details in the stream of frames that they cannot see in a single frame.

What looks like "noise" in one freeze frame reveals useful detail when the frames run at speed. If you compress frame images and "de-noise" them, you lose the low-contrast details in the so-called "noise" and take some of the "life" out of the images that gives film-based cinematography a quality that compressed, de-noised, limited-bit-depth video lacks. A goal in Digital Cinema is to be able to capture uncompressed images that are like the uncompressed images film captures, for printing on film prints for uncompressed projection.

==

You say you have a 64MB buffer chip now in your design, so you understand that chip.

How much does that 64MB memory chip cost?

Do you know of a larger memory chip that you understand how to hook up so that it could record at 150MB/s?

How much would that larger memory chip cost?

With a resolution of 2592x1102x1.5 each frame is about 4.3MB, so 64MB will buffer about 14 frames; since 360 frames is the minimum size for the FIFO/buffer, one would need 26 of the 64MB memory chips to get a minimum working camera design.
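Spelling that arithmetic out (decimal megabytes and 15 s at 24 fps assumed, per the figures above):

```python
import math

frame_bytes = 2592 * 1102 * 1.5       # 12-bit packed Bayer: ~4.28 MB/frame
chip_bytes = 64e6                     # one 64 MB buffer chip (decimal MB)
frames_per_chip = int(chip_bytes // frame_bytes)
min_frames = 360                      # 15 seconds at 24.000 fps
chips = math.ceil(min_frames / frames_per_chip)
print(frames_per_chip, "frames per chip,", chips, "chips for 360 frames")
```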

What would the cost of 26 of the 64MB memory chips be?

Is there a larger and lower-total-cost memory chip that you understand well that would work? If so, which one, and what is its cost?

What type of memory is the 64MB chip you are using, i.e. SRAM, DRAM, FLASH?

What is its part number and maker?

Thank you for your answers they are most helpful.

Sebastian Pichelhofer June 20th, 2009 04:34 AM

Please take the time to find and read the available information. Almost everything covered or discussed so far is publicly available on the Elphel website or wiki.

JP4 sample videos and images are available at Index of /videos/JP4_demo

There is a Linux command line tool to convert JP4 to DNG (I posted results of doing this in the big Elphel 333 thread) and 2 AviSynth plugins to do the demosaicing, available at: Software | Apertus Open Source Cinema

More details here: JP4 - ElphelWiki

Andrey Filippov June 20th, 2009 10:11 AM

Dan, please, read documentation on Elphel cameras on the Wiki
 
Quote:

Originally Posted by Dan Hudgins (Post 1161060)
If Andrey Filippov can help me find a memory chip that will work for the 4GB FIFO then we could be closer to making it real.

I believe you'll have to wait some 5-10 years. The largest x16 DDR3 chip I could find for the model 373 camera is just 2Gb (256MB)

Quote:

Originally Posted by Dan Hudgins (Post 1161060)
What looks like "noise" in one freeze frame reveals useful detail when the frames run at speed.

Yes, but that will never give you more than 1-2 extra bits. With the frame rate we have in the 353 it is just one bit (requires averaging of 4 frames); 2 bits require much higher fps and averaging of 16 frames. The noise in the "whites" of the 12-bit sensor with 8500e- FWC is more than 40 counts (>1%).

If the difference between output codes (ADC and non-linear encoder combined) is smaller than the pixel noise, then the same eye-integration effect will be achievable with fewer encoded bits. The problem can only arise if the output codes are so far apart that the same pixel is likely to "get stuck" at the same level in multiple consecutive frames. The encoding steps that we use in the cameras are smaller than the noise, so consecutive frames (when the camera is looking at the same static scene) will have different output values, and averaging will work the same way.
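The averaging gain quoted above (one extra bit per 4 frames, two per 16) follows from noise dropping as the square root of the frame count, which a short simulation confirms (purely illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
true_value, sigma = 1000.0, 40.0      # one pixel's value and per-frame noise

bits_gained = {}
for n in (1, 4, 16):
    frames = true_value + rng.normal(0, sigma, size=(n, 200_000))
    avg = frames.mean(axis=0)                      # eye-integration stand-in
    bits_gained[n] = float(np.log2(sigma / avg.std()))
    print(f"averaging {n:2d} frames -> ~{bits_gained[n]:.1f} extra bits")
```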

You see, a camera includes multiple elements, and performance is defined by the worst of them. This guides our designs: match the performance of the components and improve the weakest first.


Quote:

Originally Posted by Dan Hudgins (Post 1161060)
You say you have a 64MB buffer chip now in your design, so you understand that chip.
How much does that 64MB memory chip cost?

Have you ever studied Elphel camera designs? They are all released under the GNU FDL 1.3 (circuit diagrams and other hardware documentation) or the GNU GPLv3 (software and FPGA code). You are free to study them and create derivative products. You are free to use them for non-commercial as well as commercial purposes as long as they are compliant with the original license (http://wiki.elphel.com/index.php?title=GNU_GPL_FAQ).

If you go to our Wiki, you'll get to the 10353 - ElphelWiki page about the camera system board. Then, under the block diagram, there is a link to the 10353 Circuit Diagram, Parts List, PCB layout (just in front of the link to the Gerber files).

That PDF contains the circuit diagram and parts list (with all the prices I had at the time of development, as well as where to buy them). You may find current memory prices at DigiKey Corp. | Electronic Components Distributor | United States Home Page or other component distributors.

Quote:

Originally Posted by Dan Hudgins (Post 1161060)
Do you know of a larger memory chip that you understand how to hook up so that it could record at 150MB/s?

I'm confused by "understand how to hook up" - I just take the data sheet and application notes for the particular chip and follow what is written there.

Quote:

Originally Posted by Dan Hudgins (Post 1161060)
How much would that larger memory chip cost?

I believe there is none of the size you need - 4GB (or 32Gb; they measure capacity in bits, not bytes). I would recommend you use a computer DIMM module in your design (I did a similar thing with older memory in the mid-90s).


Quote:

Originally Posted by Dan Hudgins (Post 1161060)
With a resolution of 2592x1102x1.5 each frame is about 4.3MB, so 64MB will buffer about 14 frames; since 360 frames is the minimum size for the FIFO/buffer, one would need 26 of the 64MB memory chips to get a minimum working camera design.
What would the cost of 26 of the 64MB memory chips be?

OK, you want to put 26 memory chips in your camera. That would be a pain! I strongly recommend you use ready-made memory modules, and not try to design your own - that will be very expensive even if you do it as a hobby and don't account for your time spent. Each run of a multilayer board is expensive, and designing a multi-chip memory module is a challenging task. It would cost you many thousands of dollars, while you can buy a 4GB computer module much cheaper.


Quote:

Originally Posted by Dan Hudgins (Post 1161060)
Is there a larger and lower-total-cost memory chip that you understand well that would work? If so, which one, and what is its cost?

I'm not in the business of designing memory modules and can not help you much with that. As I wrote, I just use the available datasheets and application notes when I need to use a particular memory chip.

Quote:

Originally Posted by Dan Hudgins (Post 1161060)
What type of memory is the 64MB chip you are using, i.e. SRAM, DRAM, FLASH?
What is its part number and maker?

10353 Circuit Diagram, Parts List, PCB layout

Dan, recently you offered me help to design a camera (as it turned out, without having studied the "circuit" you plan to improve):
Quote:

Originally Posted by Dan Hudgins (Post 1160451)
If you cannot figure out how to make a low cost uncompressed camera I can talk to you about the circuit details and how to eliminate unneeded parts from the circuit.

So what exactly do you want from me personally, or from Elphel as a company?
a) You are writing just to share your vision of an "ideal" camera with other people?
b) You are offering to teach me circuit design of "uncompressed" cameras?
c) You are asking us for advice on your camera design (how to use "memory chips" and other components)?
d) You want us to develop a camera to your specs free of charge?
e) You want to place an order with Elphel to design a camera for you?
f) Something else?

Dan Hudgins June 20th, 2009 03:31 PM

Apertus camera will be only Elphel type or something else?
 
Sebastian Pichelhofer, Thank you for the links to the sample images and the software links.

==

Andrey Filippov, Thank you for taking the time to talk about the questions I asked.

As far as I can understand what Oscar Spier said:

"I want that too! :-D Please let me know when you find it ;-)"

About the specs I listed for the concept of an uncompressed camera, I understood that to mean that he would like a version of the Apertus camera to HAVE those specs, or better ones, such as 1 minute per shot rather than just 15 seconds.

It was also my understanding that the Apertus camera would not be an Elphel camera, but something else. I asked if the Apertus camera will ONLY be an Elphel JP4 camera, or whether there will be other models, and from what Oscar Spier said I understood that he was open to other models. If that is not the case, please give a clear reply on that question.

==

Since Oscar Spier seems interested in an uncompressed camera, I was going to ask Andrey Filippov a series of questions, each of which would develop information to be used in designing some new reference circuit that could become part of the Apertus or some other camera.

If all models of the Apertus MUST be developments of the Elphel, then I guess you would not be interested in helping me develop a new design that perhaps uses some of the parts the Elphel uses, but connected in other ways.

Since you are experienced in the use of the parts in the Elphel, you could answer a few questions about the parts you are using and understand, so I could see if they would meet the needs of a new circuit; in answering some questions and sharing your experience and tools, the development could perhaps go faster.

I asked if you would program chips for me, since I do not have the tools required, or to some extent the money to purchase such hardware at this time; I did not see a Yes or No answer.

I asked you whether the 64MB memory chip you use for a buffer in the Elphel is SRAM/DRAM/FLASH, its part number, and how much each one costs. How hard is it to just give a direct answer to that question?

If you don't want to share your experience and be helpful, then just say so. Development of an uncompressed camera circuit could go much faster if you would just answer a series of questions one by one.

==

With regard to the highlight noise, I know that phototubes are noisier with more light; that is not an issue, since the highlights are rolled off with an S curve, so the noise bits' amplitude in the result would be smaller steps. Film has an S curve, and one simulates 4th-generation film-like grading where the images have gone through several S curves.

Posterization is a bigger issue, and compression can contribute to that; if you have only 8 bits and underexpose to hold the highlights, posterization can contribute to histogram gaps in the graded images. Mid-tones and shadows are expanded in the gamma adjustments and grading for filmout in the film recorder.

The noise level of single pixels is not the issue, since many pixels are mixed in the de-mosaic and the downsizing of the images for filmout. If each pixel has noise mixed with true data, the pixels that result from the processing will center closer to their true value; if lossy compression adjusts the brightness values of adjacent pixels, then the mixing does not find the same center value.
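The distinction being drawn, that unbiased noise averages away while a systematic level shift does not, is easy to demonstrate (toy values; the 3-count offset is an arbitrary stand-in for a coding bias):

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 500.0

# Unbiased noise: mixing many pixels recovers the true value
noisy = true_value + rng.normal(0, 20, 100_000)
print("mean of noisy pixels:", round(float(noisy.mean()), 1))     # ~500

# A systematic shift survives any amount of averaging
biased = noisy + 3.0
print("mean of shifted pixels:", round(float(biased.mean()), 1))  # ~503
```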

Andrey Filippov June 20th, 2009 05:11 PM

Quote:

Originally Posted by Dan Hudgins (Post 1161244)
I asked if you would program chips for me, since I do not have the tools required, or to some extent the money to purchase such hardware at this time; I did not see a Yes or No answer.

No, I will not. I do not have any tools to do that myself - I don't like using such chips in my designs.



Quote:

Originally Posted by Dan Hudgins (Post 1161244)
I asked you whether the 64MB memory chip you use for a buffer in the Elphel is SRAM/DRAM/FLASH, its part number, and how much each one costs. How hard is it to just give a direct answer to that question?


All that information is in the circuit diagram and parts list for the 353 camera; I gave you the link since you failed to find that open information on our web site yourself.

Memory chips cost some $10-$20; each run of the multilayer board (and you'll have to re-run it after each bug detected) is ~$1500. PCB assembly of such prototype boards - a couple thousand (or you can do it yourself if you have experience mounting BGA components). That is why I recommended you use DIMM modules in your design, and not try to develop a 26-chip system yourself - that will most likely need several PCB runs before you learn how to do it.

Quote:

Originally Posted by Dan Hudgins (Post 1161244)
If you don't want to share your experience and be helpful, then just say so. Development of an uncompressed camera circuit could go much faster if you would just answer a series of questions one by one.

I share my experience in multiple ways - posting documentation on our wiki, publishing articles, answering your questions. And I expect you to read the available documentation before asking questions - that could really help you "go faster" :-)



Quote:

Originally Posted by Dan Hudgins (Post 1161244)
Posterization is a bigger issue...does not find the same center value.

Any numbers, formulae, or calculations to support your ideas?

Dan Hudgins June 21st, 2009 12:06 AM

Will Apertus use JP4 and be derived from Elphel?
 
Quote:

Originally Posted by Andrey Filippov (Post 1161280)
No, I will not. I do not have any tools to do that myself - I don't like using such chips in my designs.

So you do not use PIC chips, GAL, PAL, PLD, CPLD, FPGA, PROM, or EEPROM, other than those your PCB has provision for in-circuit programming?

You program the FPGA in circuit over the network cable?

What software do you use to design the wiring codes/program for your FPGA?

Do you know a link for a low-cost programmer for FPGA chips of the type you use (I would not have provision for in-circuit programming in a prototype)?

What FPGA part numbers do you use?


Quote:

Originally Posted by Andrey Filippov (Post 1161280)
So that is why I recommended you to use DIMM modules in your design.

Do you understand DIMM modules, like DDR333 or faster, well enough to help me with answers to build a 150MB/s data recorder that records 12 bits with random write times, then can read that data out at random read times, slower or as fast?

My first thought was to use DIMMs, but I could not find anyone who understands how to hook them up. For removable shooting they would require a battery, which is why most people want to use FLASH rather than DIMMs for the buffer.

Do you think FLASH is easier to use than DIMMs?

Does FLASH have bad memory addresses that need to be mapped around, whereas DIMMs are free of bad memory addresses?


Quote:

Originally Posted by Andrey Filippov (Post 1161280)
Any numbers, formulai, calculations to support your ideas?

Digital Voodoo | 10-bit Advantage

It seems beyond discussion that there are advantages to using more than 8 bits for images that will be processed. One of those issues is scaling, which is required for the gamma and other curve adjustments in the film recorder; the link above gives an example, but these issues matter more in Digital Cinema, since the scaling can be non-linear and extreme, resulting in histogram gaps and posterization.
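A minimal illustration of the histogram-gap effect under an extreme shadow-stretch grade, with made-up numbers:

```python
import numpy as np

# Stretch shadow codes 0..63 of an 8-bit image across a 10-bit output range,
# as an extreme grade might, and count the levels that survive.
shadows_8bit = np.arange(64)
stretched_8 = shadows_8bit * 1023 // 63
print(len(np.unique(stretched_8)), "levels from an 8-bit source")    # 64

# The same scene span captured with 12 bits has 16x the codes to start from
shadows_12bit = np.arange(64 * 16)
stretched_12 = shadows_12bit * 1023 // (64 * 16 - 1)
print(len(np.unique(stretched_12)), "levels from a 12-bit source")   # 1024
```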

Why does the RED ONE record 12bits if they are not useful?

If lossy compression alters the brightness values of pixels, then the average values of those altered pixels cannot be anything other than altered as well; that is simple math.

What is the lossy compression ratio of the JP4 you will be using in the Apertus camera, i.e. 10:1 or 20:1 or what?

==

Another issue is that sample JPG and MOV images are just about useless for checking for artifacts in Digital Cinema frame images, since both of those formats can be riddled with artifacts.

==

Could I please have a clear answer as to what kind of camera the Apertus will be, will it be a version of the Elphel camera using JP4, or will it be something unrelated to the Elphel camera?

Who will be making the circuit design for the Apertus cameras?

Could Apertus have an uncompressed version/model for sale as well?

Can anyone interested in the Apertus project help provide me with their experience to develop information focused on the parts required to build an uncompressed camera, or is that goal of no interest to the Apertus project, and did I misunderstand Oscar Spier's comments?

It might be useful to make a pool of knowledge outside of the Elphel project, a sort of "Idiots' guide to Digital Cinema camera design" that would enable any electronics hobbyist to understand the required parts well enough to develop his own camera circuits. Much of what is in manufacturers' spec sheets is insufficient, without experience, to get working results on the first try, as you have noted. Someone who has used the part knows better how to get the required results out of it.

Andrey Filippov June 21st, 2009 10:49 AM

Quote:

Originally Posted by Dan Hudgins (Post 1161359)
It might be useful to make a pool of knowledge outside of the Elphel project, a sort of "Idiots' guide to Digital Cinema camera design" that would enable any electronics hobbyist to understand the required parts well enough to develop his own camera circuits. Much of what is in manufacturers' spec sheets is insufficient, without experience, to get working results on the first try, as you have noted. Someone who has used the part knows better how to get the required results out of it.

Dan, please go ahead and write such a guide "for idiots". In many cases you should not wait for somebody else to do what you want, but just do it yourself.

Yes, I do have some experience in camera design - my first one in 1978 was just 0.000256 Mpix (16x16) - but I'm afraid I can not help you much. I do publish my "guides" in magazines like Linux Devices or Xilinx Xcell, but all of them require some background from the readers. And of course we publish all our code and other documentation under the GNU GPL and GNU FDL, so people can read the circuit diagrams, parts lists and code.

Then, if publications + documentation + code do not answer their particular questions, they write to support@ or to the wiki or IRC or the Sourceforge forum, and always get replies from Elphel people (in some cases email filters unfortunately put legitimate emails into spam, but we are working on that issue).

In your case, you repeated questions already covered in the available documentation (like what components we use and how they are "programmed").

So I'm pessimistic about the importance of an "Idiots' guide to Digital Cinema camera design" - camera design involves many tricky things that do require some knowledge, and readers who are not idiots. For "idiots" I would recommend:

Use your cell phone camera. You'll get more exciting footage even if the quality is not so good - it is small and always with you. You would see such footage broadcast on national/international TV channels much more often than footage from a hobby camera. To say nothing of Youtube.

Sebastian Pichelhofer June 21st, 2009 10:55 AM

To clarify a few things:

*) Apertus is based on an Elphel 353 camera and will in the future be based on the next generation model Elphel 373.

*) Apertus is a community driven project so we are thankful for every contribution and improvement of value both of the software as well as the hardware.

*) The project already has a decent history (around 3 years), so the development is pretty advanced (even if the website and this forum were newly formed). So we are not primarily interested in starting from scratch.

*) We all love throwing around ideas about the highest possible specifications (primarily of sensors, it seems, recently) of the present and future. Anyone can dream up his own ultimate camera specifications as he wishes (I love doing it myself), but you have to keep in mind that there is a lot more to a camera project of this size, and that many parts and factors have to work together to create something useful.


Complex questions result in complex answers.

Oscar Spierenburg June 21st, 2009 02:25 PM

Quote:

Originally Posted by Dan Hudgins (Post 1161244)
As far as I can understand what Oscar Spier said:
"I want that too! :-D Please let me know when you find it ;-)"

Since Oscar Spier seems interested in an uncompressed camera, I was going to ask Andrey Filippov a series of questions, each of which would develop information to be used in designing some new reference circuit that could become part of the Apertus or some other camera.

Dan, about my comment, it is best described here: Humour - Wikipedia, the free encyclopedia ;-)

But, seriously... in the end we are all looking for the same things. The fact is that there are a lot of DIY cinema camera projects, and even commercial projects (incl. Sparta and Scarlet), that try more or less exactly what you are talking about... but they all strand at some point. Don't get me wrong, I hope they all succeed one day, but we chose not to go this way with Apertus: not to build from scratch, but to find existing hardware and combine and improve it with custom software and creativity. Elphel is the base of the project and we work closely with them. (But again, we are not commercially related.)

I encourage everyone to bring in new ideas, suggestions and solutions, but also to read about the philosophy of the project on our website first.

Dan Hudgins June 21st, 2009 03:46 PM

RAW data port on Apertus camera for external RAW recorder
 
Quote:

Originally Posted by Oscar Spier (Post 1161534)
Elphel is the base of the project and we work closely with them. (But again, we are not commercially related)

Since you have decided to make the Apertus camera only an offshoot of the Elphel camera, it would seem possible to have a 25-pin D connector, or some other kind of connector like a ribbon cable connector, to take some data lines out of the Apertus camera so that an external true RAW sensor data recorder could be attached to your existing camera, for those who would like the full RAW sensor data to work with in shooting their motion picture.

The pins required for the external 25pin data port might be:

1) - supply ground
2) + supply
3) pixel write or read pulse (on pulse 1 rise or a short time after?)
4) pixel write or read mode (1=write 0=read)
5) address counter reset (1=reset 0=enabled for write/read counting in address counter)
6) frame start (optional if camera blocks pixel write pulse before frame start) 1=first pixel
7) line start (optional if camera blocks pixel write pulse before line start) 1=first pixel of line
8) Camera trigger, 1=shooting 0=idle
9) Frame active, 1=pixel pulse is in active pixel area 0=pixel pulse is outside active area
10)-21) D0-D11 I and O
22)-23) reserved (for 14bit sensors maybe? D12-D13?)
24) external status back to camera 1=connected and working 0=not connected (and end of read or error?)
25) Erase memory, resets high address latch to zero, 1=reset 0=enabled for write/read (this is used for partial read of an unfull recorder memory)

(It may be better to invert the Hi/Lo logic of the pins if pull-ups are used, so that no connection still reads as the correct state.)

It would seem an easy thing to include such a connector, since at this point the boards for the Apertus camera have not been finished.

Since Andrey Filippov has recommended using DIMM modules, he must understand how to connect them, so I could work with him to build a prototype DIMM module recorder board that connects to the external data connector on the Apertus camera.

I know how to read the parallel port one byte at a time, so I can write a program to read the raw data out of the external prototype recorder and to de-mosaic the true RAW sensor data in maybe a day or two. On readout the 12-bit data would end up converted to 16-bit data for processing, so the top 8 bits would be read, then the lower 4 bits, then a pulse sent to select the next 12 bits. Later other readers could be built using knowledge of the network-type cable or USB etc., but at least I would be able to shoot tests right away and process the true RAW sensor data.
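The two-transfer readout described above (top 8 bits, then the low 4 bits) comes down to a few lines of bit arithmetic. A minimal sketch, assuming that hypothetical read order; the function name and the bit-replication scaling are my own illustration, not part of any existing recorder:

```python
def assemble_16bit(hi_byte, lo_nibble):
    """Combine the top 8 bits and the lower 4 bits of a 12-bit
    sensor sample (read as two parallel-port byte transfers) and
    scale the result to the full 16-bit range."""
    v12 = ((hi_byte & 0xFF) << 4) | (lo_nibble & 0x0F)   # 0..4095
    # Replicate the top bits into the low bits so that 0xFFF maps
    # to 0xFFFF (a plain shift would top out at 0xFFF0).
    return (v12 << 4) | (v12 >> 8)

assert assemble_16bit(0xFF, 0x0F) == 0xFFFF  # full-scale sample
assert assemble_16bit(0x00, 0x00) == 0x0000  # black level
```

The same routine would work regardless of whether the bytes come from a parallel port, USB, or a network-type cable.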

If Andrey Filippov is unable to help me and others understand DIMM modules, maybe someone else could. At any rate, this can be done, and there is no reason I can see why the external data port should not be included in the Apertus camera design spec; can you tell me otherwise?

Also I invite any input on the required pins for the external data port connector.

The Apertus would operate as normal, not needing to know that the full sensor data is being recorded on the external recorder. The external recorder can use the frame start and trigger signals to mark the first frame for the beep/flash, and have an audio out for the double-system beep tone. The external recorder can also have the LED bar graph for quick exposure adjustments; the LEDs work like a VU meter off the sensor data lines, with an extra blue LED for the top three bits being high. The external recorder could also have a scan converter option to output to SVGA/NTSC/PAL. I understand how to build these parts of the circuit for the most part, although some help with the parts spec sheets and timing cycles might make things move along better.

The only change needed in the Apertus is to have its firmware switch the sensor from 8-bit output mode into 12-bit output. If the top 8 bits are used for the normal compressed Apertus recording, that does not matter since that recording would not be used, but it would allow some function of the viewfinder even if the shadow areas would be darker; a gamma fix in the viewfinder software could offset that issue as well.


Also, if there is a disable (or error disable) in the Apertus for the compression circuits, then the sensor could be programmed from the Apertus firmware to run at speeds that the compression circuits could not handle, so the full 12-bit output of the Aptina MT9P401I12STC could be used and the camera could shoot at 2592x1102x12bit@24fps and 2048x1152x12bit@24fps. If a pixel skip could be worked into the compression circuits, then a monochrome view on the viewfinder in the Apertus could be used, or, when operating the sensor above the Apertus bandwidth, the external live scan converter could be used as a viewfinder. The external scan converter could be an accessory not required for the external recorder to operate.
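Back-of-the-envelope arithmetic shows what an uncompressed recorder would have to sustain at those modes. This assumes tightly packed 12-bit samples with no line/frame blanking overhead (my own illustration, not a spec):

```python
def raw_rate_mib_s(width, height, bits, fps):
    """Uncompressed sensor data rate in MiB/s (packed samples)."""
    return width * height * bits / 8 * fps / 2**20

# The two 12-bit 24fps modes proposed above:
for w, h in [(2592, 1102), (2048, 1152)]:
    print(f"{w}x{h}: {raw_rate_mib_s(w, h, 12, 24):.0f} MiB/s")
# roughly 98 and 81 MiB/s - several GiB per minute of footage,
# which is why the recorder memory (DIMM or disk) sizing matters
```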

The compressed recording could be used for on-set playback to check action. If there is a scan converter on the recorder, it can also be used for playback from the memory recording at full uncompressed resolution, in monochrome, or, if the matrix is in an analog circuit, live to an RGB monitor like a computer monitor etc.

Juan M. M. Fiebelkorn June 21st, 2009 08:47 PM

I don't see the point of demonizing compression. RED is heavily compressing the RAW Bayer data using JPEG2000 (they just modify it in many ways to avoid people opening it). If you don't believe it, just look inside this forum and you'll find where the proposal came from.

BTW, Steven is telling us that wavelet compression is worse than the block-based transform in H.264.

Elphel right now is using something "similar" to that, and I myself think that using a block-based 4x4 integer transform should be better in quality and lower in computational cost than the wavelet one, unless you were using simple Haar.

The nice thing in favor of wavelets is this: if the camera head is compressing using Dirac, we can leave the realtime compression to the camera hardware and just decode part of those frames to get an instant lower-resolution full-color image for monitoring purposes. That is a really nice thing to have, and doable on the PC side.

Of course this procedure is also doable using H.264, but I guess the bitstream would have to be modified in some way to let us easily access the lower-frequency (lower-resolution) image.

Maybe someone is intrepid enough to produce a hybrid codec with wavelet compression for the lower resolutions and a 4x4 integer transform for the medium/higher ones?

Who knows? This is something I've been thinking on... Anyway, I don't clearly know if it brings any advantages in quality/computational cost.

BTW, I've been doing film-outs for around 10 years, and from my experience those artifacts Dan is talking about aren't a problem for professional formats. Maybe if you are using really compressed consumer stuff, but hey, even good HDV looks fine on film...


PS: Did anybody see compression artifacts in the Star Wars episodes?

Dan Hudgins June 22nd, 2009 12:42 AM

Choice
 
It will cost very little to put a data port on the camera, probably less than $5, maybe as little as $0.75 for a connector for a ribbon cable on the sensor board.

If you give users a choice between the compressed output and the true RAW output then they can see what difference there might be in the end results.

Star Wars was shot with three-chip cameras, not Bayer filter cameras; in that system the images were processed before compression, not so-called RAW sensor data through a Bayer filter.

One issue with the Bayer filter is that it is not saturated, so you need to decide if you are going to matrix before or after compression. If you matrix before compression, you reduce the luma in the four images and therefore the resolution; if you matrix after, then because of the limited bits the chroma becomes more posterized and the compression artifacts increase chroma artifacts.

The RED ONE uses a 4K sensor, so the artifacts are smaller in the final images, than if you use a lower resolution image sensor to start with. If you use a smaller pixel count sensor and fewer data bits then the artifacts become greater.

I do not see why anyone would oppose putting a simple data port on the sensor board so that the sensor can be run at its full bit range and at better resolutions since the cost of doing so is small.

Juan M. M. Fiebelkorn June 22nd, 2009 12:30 PM

You can't put a ribbon cable directly on the sensor board to get "RAW" data from it without needing the whole circuitry for the camera on the other end.
The sensor is not a self-supported camera, if you think about it that way.
So getting a "RAW" port for it needs a little bit more than a $0.75 connector.
Also, getting that huge amount of data out of the camera is a harder job than getting some small compressed stream, because of bandwidth issues. If you did it anyway, you would need some kind of "noise-immune" transfer system. That costs a bit more than $0.75 and, BTW, takes "real estate" on the PCB. I know Andrey can answer this in a better way, but at least I tried.

Andrey Filippov June 22nd, 2009 01:38 PM

Quote:

Originally Posted by Juan M. M. Fiebelkorn (Post 1161929)
I know Andrey can answer to this in a better way but at least I tried.

Juan, thank you - I do not think I could say much more.

1) Yes, components need real estate, so a D-SUB will definitely not fit in the camera. In the 373 camera I'm replacing most of the 0402 resistors and capacitors with 0201 (that is 0.6mm by 0.3mm) - otherwise there is not enough room on the board.

2) Yes, you are correct - the signals Dan wants do not exist on the sensor board, so he would need to redesign the board completely. Last year one company tried to modify the sensor board - rotate the sensor relative to the connector - and that did not work out.

3) Yes, you are correct - laying out the PCB is much more expensive than $0.75; PCB manufacturing, stencils, programming the PCB assembly and other NRE costs will run to thousands.

4) An additional consideration: the pixel size of the sensor is 2.2um (there are other sensors with even smaller pixels). With high-power lenses the depth of field on the sensor side is also about those 2um - move the sensor/PCB by 2 microns and it goes out of focus. A D-SUB is heavy, and the cable is rather rigid too, so shaking of the camera may cause the sensor board to bend slightly and blur the images. That is why we use light and flexible "flex cables" between the system board and the sensor board.

5) On the other hand, it is very easy to implement such signals for the cost of just a single camera (less than a thousand USD), without redesigning the camera system board, which with NRE costs would easily get to tens of thousands USD. Each 10353 board has two inter-board connectors (of course they are small and fine-pitch, not D-SUB). One is 40-pin for ATA; the second (30-pin) has multiple signals (USB, serial, power) and includes 12 GPIO from the FPGA. It is less than Dan wanted, but he could use DDR or just higher frequencies to fit the signals he wanted into 12 signal lines. That would save him a lot of money, as he would need "just" to modify the FPGA code, with no hardware changes to the camera itself. Of course he would still need to develop and manufacture his memory subsystem, and that will be expensive too.

Dan Hudgins June 22nd, 2009 02:22 PM

Record 12bit sensor data only
 
I was wondering what exactly members of the Apertus "team" would be willing to do to help in the development of a RAW port option on the Apertus camera?

It is easier for me to ask that than to go round and round about the minutiae. If you are not interested in your camera having a true RAW port option, then why not just say so from the start, rather than bring up all these non-issues (whatever issues JP4 has do not affect building a true RAW recording camera, since the JP4-encoded files would not be used for frames that end up in the finished motion picture).

You brought up using DIMMs, and I asked if you understand them well enough to help me build a data recorder I could test; clear yes and no answers would make the development path clearer.

Steven Mingam June 23rd, 2009 10:37 AM

Wow, there are a lot of patient and courteous guys in this thread...

If you want that RAW output so much (even if you don't understand why...), just go buy an Ikonoskop; it's the cheapest you will find. It looks like you have absolutely no idea how expensive electronic development can be.

Or dither the JP4 output to 32 bits per channel if that's what you want... (you can't retrieve information that wasn't there in the first place)

@Juan : the great thing with the Bayer filter (and JP4) is that you get a half-resolution video stream (almost) for free for display. My Avisynth filter supports that already (preview mode) and, from what I've seen, the Bayer patch for mplayer does too (but Sebastian told me there were still some performance issues even in that mode, which is strange... needs investigation)
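That "almost free" half-resolution preview amounts to collapsing each 2x2 Bayer cell into one RGB pixel. A minimal NumPy sketch, assuming an RGGB cell layout (the layout and function name are my own illustration, not tied to the Avisynth filter or the mplayer patch):

```python
import numpy as np

def halfres_preview(bayer):
    """Collapse each 2x2 Bayer cell (assumed RGGB layout) into one
    RGB pixel: R and B taken directly, G averaged from the two
    green sites. Half-resolution color with no interpolation."""
    r  = bayer[0::2, 0::2].astype(np.float32)
    g1 = bayer[0::2, 1::2].astype(np.float32)
    g2 = bayer[1::2, 0::2].astype(np.float32)
    b  = bayer[1::2, 1::2].astype(np.float32)
    return np.dstack([r, (g1 + g2) / 2, b])

# A single RGGB cell: R=100, G=80/120, B=60 -> one RGB pixel
cell = np.array([[100, 80],
                 [120, 60]], dtype=np.uint16)
rgb = halfres_preview(cell)
assert rgb.shape == (1, 1, 3)
assert rgb[0, 0].tolist() == [100.0, 100.0, 60.0]
```

Since it is just strided slicing and one add, this is cheap enough to run per frame even on weak hardware.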

Andrey Filippov June 23rd, 2009 11:32 AM

Added connector to 10373
 
Yesterday I added one more connector to the 10373 board I'm working on now. It is a small high-density one; I had just the right number of spare pins on the FPGA in the I/O bank designated for the sensor interface, and there was a nice place for such a connector on the board (you may compare before/after in my blog -
Elphel Development Blog Andrey’s Journal ). So that was a really easy job, but it still took me some 10 hours for the circuit diagram update and partial PCB re-layout (that included FPGA pin swapping that had to go back to the circuit diagram). I would have charged $1250 if I did that for a customer - and it does not require any additional manufacturing NRE costs, as I'm still working on the rev "0" board; no hardware is built yet :-)

That means that the model 373 camera will be capable of running 2 sensor boards, with sensors the same as or similar to what we use now in the model 353, each at full speed. Of course there are also plans to use higher-performance sensors, as the system board is expected to process up to 250-300MPix/sec (vs. 80MPix/sec in JP4 / 53MPix/sec in JPEG mode in the 353). But there is still a bottleneck in the link between the FPGA and the TI DaVinci CPU, which has a limit of 100MB/sec - just 20% more than the Axis ETRAX FS has in the current camera. There are multiple channels connected between the CPU and FPGA (all that are available on the TMS320DM6467 chip and are not shared with other required interfaces), so the actual bandwidth is higher, but the 100MB/s is the major part of it. Other channels will be used for FPGA commands and additional lower-bandwidth data (possibly - for low-res preview).

Steven Mingam June 23rd, 2009 02:23 PM

Wow, that's an incredible job Andrey!! (the only PCBs I ever routed were analog guitar effect pedals with a few components and 1 layer, and that was hard enough). Which software did you use?

You forgot one use case though : HDR ?
(too bad you can't fit 3 sensors port for RGB color camera :D)

Sebastian Pichelhofer June 23rd, 2009 03:23 PM

Ohhhh!!! -> (possibly - for low-res preview)

The big issue we are currently trying to solve is debayering JP4 and displaying FullHD video on a mobile device like a netbook. Streaming a FullHD stream from the camera alone is too much for an Atom and those cheap 945GSE onboard video cards; it results in around 10-15 fps. You CAN already frame and shoot at 15fps, but it's simply not the full deal.

We are considering an Nvidia Ion with onboard GeForce 9400M. That should give the video performance a 1000% increase; the question is just whether JP4 debayering can be offloaded from the CPU to the video hardware (maybe with CUDA) or whether the CPU still has to carry that burden and therefore be the bottleneck of the whole pipeline.

Another option would be if the camera itself can generate a lower resolution output preview video stream. This would enable us to use much lighter and smaller devices like a beagleboard: BeagleBoard.org - Apertus or iPhone, etc.

Andrey Filippov June 23rd, 2009 03:30 PM

Quote:

Originally Posted by Steven Mingam (Post 1162458)
Which software did you use ?

Currently I use a rather old version of PADS that I finally made work with WINE on my computer running GNU/Linux. Before that I used an Athlon 700 running W2K - I never upgraded to Windows XP, as it required online registration, which I do not like.

But there are some problems with that old software; they are probably resolved in the current releases, but I never paid for upgrades to the newer versions, waiting for Mentor Graphics to release software that runs on GNU/Linux - which has not happened so far. So now I'm considering using a modern version of Altium Designer, but unfortunately I was not able to make it work on GNU/Linux myself, so I'm trying to pay for that work to be done by professionals.

Quote:

Originally Posted by Steven Mingam (Post 1162458)
You forgot one use case though : HDR ?

Yes, you are right - that might be a good application too, but I believe it is better if the sensor can support HDR mode internally - like the sensors made by FillFactory, which was later sold to Cypress.

Quote:

Originally Posted by Steven Mingam (Post 1162458)
(too bad you can't fit 3 sensors port for RGB color camera :D)

It is possible with the 10359 - ElphelWiki board. But are 3-sensor (3-CCD) cameras still better than single-sensor ones?

Juan M. M. Fiebelkorn June 23rd, 2009 05:07 PM

Andrey, using two sensors and the technique you described on the Elphel page of using two different gains on the same sensor, HDR is quite possible.
Merging the two half images from one sensor and the full one from the other should give you three brackets to work with...

Andrey Filippov June 23rd, 2009 07:40 PM

Quote:

Originally Posted by Juan M. M. Fiebelkorn (Post 1162509)
Andrey, using two sensors and the technique you described on the Elphel page of using two different gains on the same sensor, HDR is quite possible.
Merging the two half images from one sensor and the full one from the other should give you three brackets to work with...

In that case you'll likely need something like this:
Pellicle Beamsplitters - Edmund Optics (NT39-478 with 8%/92%) so both sensors will use the same lens.

Juan M. M. Fiebelkorn June 24th, 2009 04:47 AM

IMHO this should be better

TECHSPEC&reg VIS and NIR Plate Beamsplitters - Edmund Optics

50R/50T

Andrey Filippov June 24th, 2009 04:55 AM

Pellicle vs. thick beam splitters
 
I don't think so. Pellicle mirrors are really good because you can put them after a (regular) lens, while thick beam splitters require parallel beams or specially designed lenses. Additionally, HDR applications require different levels of light, and light is precious in most cases. Even if you use additional filters (again, you need really thin filters to put them after the lens - and that was the point of using the beamsplitter, to use the same lens) - still, 92% is nearly twice 50%, so such a camera will be nearly twice as sensitive.

Andrey



DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network