Elphel image quality? - Page 2 at DVinfo.net
Old June 19th, 2009, 09:55 PM   #16
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Dan, please read carefully "How many bits are really needed in the image pixels?"

Quote:
Originally Posted by Dan Hudgins View Post
I seem to grasp that JP4 seeks to record the Bayer pattern data in four compressed images: one 1/4-size RED, one 1/4-size BLUE, and two 1/4-size GREEN. And that you have experimented with using two gain values for the two GREEN fields in order to do HDR. You group the pixels of the same color for compression to avoid interactions between the colors.
It is nice if you are starting to grasp the JP4 idea.

Quote:
Originally Posted by Dan Hudgins View Post
What I was asking was the number of bits each color field is recorded with; the answer seems to be 8, which is not enough to allow heavy color correction without major histogram gaps?
But unfortunately - not yet the idea described in How many bits are really needed in the image pixels?

Quote:
Originally Posted by Dan Hudgins View Post
In the bright areas of the image the 12bit linear data is run through a curve to adjust the highlight detail, therefore the distance between the data levels will be more or less and the fine histogram gaps in the 16bitx3 output will be more or less. My system mostly is set up to adjust the highlights in the images for video gamma output, in the range of gamma 2.2 to 2.4, but other gammas can be set with the adjustments.
Gamma conversion to compensate for CRT characteristics was needed decades ago; now the primary function of such non-linear conversion is to match the digitized output to the noise performance of the sensor pixel, which in most of the output range (all but the very low, dark end) is dominated by shot noise. So the answer to my previous question has an easy-to-calculate numeric value, not just "more or less". The gaps in the histogram are a different story.

Let's consider a hypothetical sensor with an analog output of just 100:1 dynamic range. If you put an 8-bit ADC on it (sufficient for such a sensor, agreed?) then the histogram will be limited to just 256 levels, which may produce gaps if you try to use more levels.

First solution: use a 16-bit ADC (they are rather inexpensive already). Even though the 8 lower bits will not carry meaningful information about the image frame, some 3-4 of them may be nicely randomized (not so the other 4-5; it would be a challenge to design the circuit so that it has no harmonic interference visible as a periodic pattern on the image). But those 3-4 extra random bits will help you smooth the histogram.

Or maybe there is a cheaper (less bandwidth) solution: still use an 8-bit ADC and then just add artificial noise on the decoder side. Those "natural" noise bits did not carry much of the pixel information, so why bother sending them over from the noise generator in the camera?

And now, please understand that the same happens when encoding 12-bit sensor output (with FWC of about 8500 e-) using 8 bits and a non-linear conversion. Look at the illustrations in my article. Instead of preserving the shot noise (you remember, it is equal to the square root of the number of electrons in the pixel?) to smooth the histogram, why not generate it in the decoder?
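The encode-in-camera / re-dither-in-decoder idea can be sketched numerically. This is my own toy model, not Elphel's actual gamma tables: it assumes a plain square-root companding curve so that the 8-bit step size roughly tracks the shot noise, and regenerates synthetic shot noise (sigma = sqrt(Ne)) on decode:

```python
import numpy as np

FWC = 8500        # full-well capacity, electrons (from the thread)
ADC_MAX = 4095    # 12-bit linear sensor output

def encode8(linear12):
    """Compand 12-bit linear data into 8 bits with a square-root curve,
    so the step size roughly tracks the shot noise (~sqrt(signal))."""
    return np.round(255 * np.sqrt(linear12 / ADC_MAX)).astype(np.uint8)

def decode16(code8, rng):
    """Decode to a 16-bit range, regenerating shot-like noise in the
    decoder instead of transmitting the sensor's own noise bits."""
    linear = (code8.astype(float) / 255) ** 2 * ADC_MAX
    electrons = linear / ADC_MAX * FWC
    shot = rng.normal(0.0, np.sqrt(np.maximum(electrons, 1.0)))  # sigma = sqrt(Ne)
    return np.clip((electrons + shot) / FWC * 65535, 0, 65535).astype(np.uint16)

rng = np.random.default_rng(0)
flat = np.full(100_000, 2000)   # a perfectly flat 12-bit patch
codes = encode8(flat)           # every pixel gets the same single 8-bit code
decoded = decode16(codes, rng)
print(len(np.unique(codes)), len(np.unique(decoded)))
```

The flat patch occupies a single 8-bit code on the wire, yet after decoding it spreads over hundreds of 16-bit levels, so the output histogram has no hard gaps.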
Andrey Filippov
Old June 20th, 2009, 04:13 AM   #17
Regular Crew
 
Join Date: Feb 2009
Location: San Francisco, CA
Posts: 82
Sooner than later?

Oscar Spier, the specs I listed are easy to get; that idea for the camera is low cost and does not require a user interface, since you can just select the sensor program using DIP switches (etc.) on the PIC chip to read the right values from the EPROM.

If Andrey Filippov can help me find a memory chip that will work for the 4GB FIFO then we could be closer to making it real.

==

Andrey Filippov, the output of Digital Cinema is not a monitor, it is motion picture film, which can resolve more than 8 bits of brightness data when viewed at sync-sound running speed. My film-recorder program uses "gamma spread" (still in development) to exceed the range of the monitor used for the film exposure by making stacked exposures, so having more than 8 bits is useful because, as I pointed out, cinematography is ART not science, and part of that art is GRADING the images using curves to complement the lighting of the shot. Because you need to underexpose the sensor to hold the highlights for film-like rendering, you end up with most of the mid-tones down two to three bits from where they would be for a video exposure.

Because film is shown at sync speed UNCOMPRESSED, your eyes can mix details from one frame to the next and see details in the stream of frames that they cannot see in a single frame.

What looks like "noise" in one freeze frame reveals useful detail when the frames run at speed. If you compress frame images and "de-noise" them, you lose the low-contrast details in the so-called "noise" and take some of the "life" out of the images that gives film-based cinematography a quality that compressed, de-noised, limited-bit-depth video lacks. A goal in Digital Cinema is to be able to capture uncompressed images that are like the uncompressed images film captures, for printing on film prints for uncompressed projection.

==

You say you have a 64MB buffer chip now in your design, so you understand that chip.

How much does that 64MB memory chip cost?

Do you know of a larger memory chip that you understand how to hook up so that it could record at 150MB/s?

How much would that larger memory chip cost?

With a resolution of 2592x1102 at 1.5 bytes per pixel, each frame is about 4.3MB, so 64MB will buffer about 14 frames; since 360 frames is the minimum size for the FIFO/buffer, one would need 26 of the 64MB memory chips to get a minimum working camera design.
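For what it's worth, the arithmetic above checks out (assuming decimal megabytes, 12-bit pixels packed at 1.5 bytes each, and the 24 fps rate discussed elsewhere in the thread):

```python
import math

width, height = 2592, 1102
frame_bytes = width * height * 1.5          # 12-bit packed pixels = 1.5 bytes each
print(round(frame_bytes / 1e6, 2))          # 4.28 MB per frame

frames_per_chip = int(64e6 // frame_bytes)  # one 64MB chip holds 14 whole frames
chips_needed = math.ceil(360 / frames_per_chip)
print(frames_per_chip, chips_needed)        # 14 frames/chip -> 26 chips for 360 frames

print(round(frame_bytes * 24 / 1e6, 1))     # 102.8 MB/s sustained at 24 fps
```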

What would the cost of 26 of the 64MB memory chips be?

Is there a larger and lower total cost memory chip that you understand well that would work, if so which one and what is its cost?

What type of memory is the 64MB chip you are using, i.e. SRAM, DRAM, FLASH?

What is its part number and maker?

Thank you for your answers they are most helpful.
Dan Hudgins
Old June 20th, 2009, 04:34 AM   #18
Regular Crew
 
Join Date: Jan 2007
Location: Vienna, Austria
Posts: 112
Please take the time to find and read the available information. Almost everything covered or discussed so far is publicly available on the Elphel website or wiki.

JP4 sample videos and images are available at Index of /videos/JP4_demo

There is a Linux command-line tool to convert JP4 to DNG (I posted results of doing this in the big Elphel 333 thread) and 2 avisynth plugins to do the demosaicing, available at: Software | Apertus Open Source Cinema

More details here: JP4 - ElphelWiki
Sebastian Pichelhofer
Old June 20th, 2009, 10:11 AM   #19
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Dan, please read the documentation on Elphel cameras on the Wiki

Quote:
Originally Posted by Dan Hudgins View Post
If Andrey Filippov can help me find a memory chip that will work for the 4GB FIFO then we could be closer to making it real.
I believe you'll have to wait some 5-10 years. The largest x16 DDR3 chip I could find for the model 373 camera is just 2Gb (256MB).

Quote:
Originally Posted by Dan Hudgins View Post
What looks like "noise" in one freeze frame reveals useful detail when the frames run at speed.
Yes, but that will never give you more than 1-2 extra bits. With the frame rate we have in the 353 it is just one bit (it requires averaging of 4 frames); 2 bits require a much higher fps and averaging of 16 frames. The noise in the "whites" of a 12-bit sensor with 8500 e- FWC is more than 40 counts (>1%).

If the difference between output codes (ADC and non-linear encoder combined) is smaller than the pixel noise, then the same eye-integration effect will be achievable with fewer encoded bits. A problem arises only if the output codes are so far apart that the same pixel is likely to "get stuck" at the same level in multiple consecutive frames. The encoding steps that we use in the cameras are smaller than the noise, so consecutive frames (when the camera is looking at the same static scene) will have different output values, and averaging will work the same way.
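The "stuck pixel" condition can be demonstrated with a toy simulation (my own illustration, with arbitrary numbers): averaging many frames recovers detail below the quantization step only when that step is smaller than the noise:

```python
import numpy as np

rng = np.random.default_rng(1)
true_level = 100.4      # true signal, deliberately between two output codes
n = 1000                # frames averaged (idealized eye/temporal integration)

# Code step (1) smaller than the noise (sigma = 1): the noise dithers the
# quantizer, so averaging recovers the fractional level (extra bits).
fine = np.round(true_level + rng.normal(0, 1.0, n))
print(fine.mean())      # close to 100.4

# Code step (16) much larger than the noise: nearly every frame quantizes
# to the same code, so averaging adds no resolution.
coarse = 16 * np.round((true_level + rng.normal(0, 1.0, n)) / 16)
print(coarse.mean())    # stuck near 96, regardless of n
```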

You see, a camera includes multiple elements, and performance is defined by the worst of them. This guides our designs: match the performance of the components and improve the weakest first.


Quote:
Originally Posted by Dan Hudgins View Post
You say you have a 64MB buffer chip now in your design, so you understand that chip.
How much does that 64MB memory chip cost?
Have you ever studied the Elphel camera designs? They are all released under the GNU FDL 1.3 (circuit diagrams and other hardware documentation) or the GNU GPLv3 (software and FPGA code). You are free to study them and create derivative products. You are free to use them for non-commercial as well as commercial purposes as long as they are compliant with the original license (http://wiki.elphel.com/index.php?title=GNU_GPL_FAQ).

If you go to our Wiki, you'll get to the 10353 - ElphelWiki page about the camera system board. Then, under the block diagram, there is a link to 10353 Circuit Diagram, Parts List, PCB layout (just in front of the link to the Gerber files).

That PDF contains the circuit diagram and parts list (with all the prices I had at the time of development, as well as where to buy the parts). You may find current memory prices at DigiKey Corp. | Electronic Components Distributor | United States Home Page or other component distributors.

Quote:
Originally Posted by Dan Hudgins View Post
Do you know of a larger memory chip that you understand how to hook up so that it could record at 150MB/s?
I'm confused by "understand how to hook up" - I just take the data sheet and application notes for the particular chip and follow what is written there.

Quote:
Originally Posted by Dan Hudgins View Post
How much would that larger memory chip cost?
I believe there is no single chip of the size you need - 4GB (or 32Gb; they measure capacity in bits, not bytes). I would recommend you use a computer DIMM module in your design (I did a similar thing with older memory in the mid-90s).


Quote:
Originally Posted by Dan Hudgins View Post
With a resolution of 2592x1102x1.5 each frame is about 4.3MB, so 64MB will buffer about 14 frames; since 360 frames is the minimum size for the FIFO/Buffer, one would need 26 of the 64MB memory chips to get a minimum working camera design.
What would the cost of 26 of the 64MB memory chips be?
OK, you want to put 26 memory chips in your camera. That would be a pain! I strongly recommend you use ready-made memory modules instead of trying to design your own - that will be very expensive even if you do it as a hobby and don't account for the time you spend. Each run of a multilayer board is expensive, and designing a multi-chip memory module is a challenging task. So that will cost you many thousands of dollars, while you can buy a 4GB computer module much cheaper.


Quote:
Originally Posted by Dan Hudgins View Post
Is there a larger and lower total cost memory chip that you understand well that would work, if so which one and what is its cost?
I'm not in the business of designing memory modules and cannot help you much with that. As I wrote, I just use the available datasheets and application notes when I need to use a particular memory chip.

Quote:
Originally Posted by Dan Hudgins View Post
What type of memory is the 64MB chip you are using, i.e. SRAM, DRAM, FLASH?
What is its part number and maker?
10353 Circuit Diagram, Parts List, PCB layout

Dan, recently you offered to help me design a camera (as it turned out, without studying the "circuit" you plan to improve):
Quote:
Originally Posted by Dan Hudgins View Post
If you cannot figure out how to make a low cost uncompressed camera I can talk to you about the circuit details and how to eliminate unneeded parts from the circuit.
So what exactly do you want from me personally or Elphel as a company?
a) You are writing just to share your vision of an "ideal" camera with other people?
b) You are offering help to teach me circuit design of the "uncompressed" cameras?
c) You are asking us for advice on your camera design (how to use "memory chips" and other components)?
d) You want us to develop a camera to your specs free of charge?
e) You want to place an order at Elphel to design a camera for you?
f) Something else ?

Last edited by Andrey Filippov; June 20th, 2009 at 12:14 PM.
Andrey Filippov
Old June 20th, 2009, 03:31 PM   #20
Regular Crew
 
Join Date: Feb 2009
Location: San Francisco, CA
Posts: 82
Will the Apertus camera be only an Elphel type, or something else?

Sebastian Pichelhofer, Thank you for the links to the sample images and the software links.

==

Andrey Filippov, Thank you for taking the time to talk about the questions I asked.

As far as I can understand what Oscar Spier said:

"I want that too! :-D Please let me know when you find it ;-)"

About the specs I listed for a concept of an uncompressed camera, I understood that to mean that he would like a version of the Apertus camera to HAVE those specs, or better ones, such as a 1-minute shot rather than just 15 seconds.

It was also my understanding that the Apertus camera will not be an Elphel camera, but something else. I asked if the Apertus camera will ONLY be an Elphel JP4 camera, or if there will be other models, and from what Oscar Spier said I understood that he was open to other models. If that is not the case, please give a clear reply to that question.

==

Since Oscar Spier seems interested in an uncompressed camera, I was going to ask Andrey Filippov a series of questions, each of which would develop information to be used in the design of some new reference circuit that could become part of the Apertus or some other camera.

If all models of the Apertus MUST be developments of the Elphel, then I guess you would not be interested in helping me develop a new design that uses perhaps some of the parts the Elphel uses, but connected in other ways.

Since you are experienced in the use of the parts in the Elphel, you could answer a few questions about the parts you are using and understand, so I could see if they would meet the needs of a new circuit; by answering some questions and sharing your experience and tools, you could perhaps make the development go faster.

I asked if you would program chips for me, since I do not have the required tools or, to some extent, the money to purchase such hardware at this time; I did not see a Yes or No answer.

I asked you whether the 64MB memory chip you use for a buffer in the Elphel is SRAM/DRAM/FLASH, its part number, and how much each one costs; how hard is it to just give a direct answer to that question?

If you don't want to share your experience and be helpful, then just say so. Development of an uncompressed camera circuit could go much faster if you can just answer a series of questions one by one.

==

With regard to the highlight noise, I know that phototubes are noisier with more light; that is not an issue, since the highlights are rolled off with an S curve, so the amplitude of the noise bits in the result would be smaller steps. Film has an S curve, and one simulates 4th-generation film-like grading where the images have gone through several S curves.

Posterization is a bigger issue, and compression can contribute to it; if you have only 8 bits and underexpose to hold the highlights, posterization can contribute to histogram gaps in the graded images. Mid-tones and shadows are expanded in the gamma adjustments and grading for filmout in the film recorder.

The noise level of single pixels is not the issue, since many pixels are mixed in the de-mosaic and the downsizing of the images for filmout. If each pixel has noise mixed with true data, the pixels that result from the processing will center closer to their true values; but if lossy compression adjusts the brightness values of adjacent pixels, then the mixing does not find the same center value.
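The posterization argument can be put in numbers with a toy example of my own (not either poster's actual pipeline): count the distinct 16-bit output levels that survive a strong gamma grade applied to clean, noise-free 8-bit versus 12-bit data:

```python
import numpy as np

def levels_after_grade(bits, gamma=2.4):
    """Apply a gamma-lift grade to every possible input code at the given
    bit depth and count the distinct 16-bit output levels that remain."""
    codes = np.arange(2**bits) / (2**bits - 1)            # all input codes, 0..1
    graded = (codes ** (1 / gamma) * 65535).round().astype(np.uint16)
    return len(np.unique(graded))

print(levels_after_grade(8))    # 256 levels spread over 65536 -> visible gaps
print(levels_after_grade(12))   # 4096 levels -> far finer steps after grading
```

With only 256 levels stretched across the 16-bit range, consecutive output codes sit far apart (histogram gaps, i.e. posterization); 12-bit input leaves 16x more levels after the same grade.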
Dan Hudgins
Old June 20th, 2009, 05:11 PM   #21
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Dan Hudgins View Post
I asked if you would program chips for me since I do not have the tools required or to some extent the money to purchase such hardware at this time, I did not see a Yes or No answer.
No, I will not. I do not have any tools to do that myself - I don't like using such chips in my designs.



Quote:
Originally Posted by Dan Hudgins View Post
I asked you if the 64MB memory chip you use for a buffer in the Elphel is SRAM/DRAM/FLASH, part number, and how much each one costs, how hard is it to just give a direct answer to that question?

All that information is in the circuit diagram and parts list for the 353 camera; I gave you the link after you failed to find that open information on our web site yourself.

Memory chips cost some $10-$20; each run of the multilayer board (and you'll have to re-run it after each bug detected) is $1500. PCB assembly of such prototype boards is a couple of thousand (or you can do it yourself if you have experience mounting BGA components). That is why I recommended you use DIMM modules in your design instead of trying to develop and make a 26-chip system yourself - that will most likely need several PCB runs before you learn how to do it.

Quote:
Originally Posted by Dan Hudgins View Post
If you don't want to share your experence and be helpfull, then just say so. Development of a uncompressed camera circuit could go much faster if you can just answer a series of questions one by one.
I share my experience in multiple ways - posting documentation on our wiki, publishing articles, answering your questions. And I expect you to read the available documentation before asking questions - that could really help you "go faster" :-)



Quote:
Originally Posted by Dan Hudgins View Post
Posterization is a bigger issue...does not find the same center value.
Any numbers, formulae, or calculations to support your ideas?
Andrey Filippov
Old June 21st, 2009, 12:06 AM   #22
Regular Crew
 
Join Date: Feb 2009
Location: San Francisco, CA
Posts: 82
Will Apertus use JP4 and be derived from Elphel?

Quote:
Originally Posted by Andrey Filippov View Post
No, I will not. I do not have any tools to do that myself - I don't like using such chips in my designs.
So you do not use PIC chips, GAL, PAL, PLD, CPLD, FPGA, PROM, or EEPROM, other than those that your PCB has provision for in-circuit programming?

You program the FPGA in circuit over the network cable?

What software do you use to design the wiring codes/program for your FPGA?

Do you know a link for a low-cost programmer for FPGA chips of the type you use (I would not have a provision for in-circuit programming in a prototype)?

What FPGA part numbers do you use?


Quote:
Originally Posted by Andrey Filippov View Post
So that is why I recommended you to use DIMM modules in your design.
Do you understand DIMM modules, like DDR333 or faster, well enough to help me with answers to build a 150MB/s data recorder that records 12 bits with random write times, and can then read that data out at random read times, slower or just as fast?

My first thought was to use DIMMs, but I could not find anyone who understands how to hook them up. For removable shooting they would require a battery, which is why most people want to use FLASH rather than DIMMs for the buffer.

Do you think FLASH is easier to use than DIMMs?

Does FLASH have bad memory addresses that need to be mapped around, whereas DIMMs are free of bad memory addresses?


Quote:
Originally Posted by Andrey Filippov View Post
Any numbers, formulai, calculations to support your ideas?
Digital Voodoo | 10-bit Advantage

It seems beyond discussion that there are advantages to using more than 8 bits for images that will be processed. One of those issues is scaling, which is required for the gamma and other curve adjustments in the film recorder; the link above gives an example, but these issues matter more in Digital Cinema, since the scaling can be non-linear and extreme, resulting in histogram gaps and posterization.

Why does the RED ONE record 12 bits if they are not useful?

If lossy compression alters the brightness values of pixels, then the average values of those altered pixels cannot be anything other than altered as well; that is simple math.

What is the lossy compression ratio of the JP4 you will be using in the Apertus camera, i.e. 10:1 or 20:1 or what?

==

Another issue is that sample JPG and MOV images are just about useless for checking for artifacts in the Digital Cinema frame images, since both of those formats can be riddled with artifacts.

==

Could I please have a clear answer as to what kind of camera the Apertus will be: will it be a version of the Elphel camera using JP4, or will it be something unrelated to the Elphel camera?

Who will be making the circuit design for the Apertus cameras?

Could Apertus have an uncompressed version/model for sale as well?

Can anyone interested in the Apertus project help provide me with their experience to develop information focused on the parts required to build an uncompressed camera, or is that goal of no interest to the Apertus project, and did I misunderstand Oscar Spier's comments?

It might be useful to make a pool of knowledge outside of the Elphel project, a sort of "Idiot's guide to Digital Cinema camera design" that would enable any electronics hobbyist to understand the parts well enough to develop his own camera circuits. Much of what is in manufacturers' spec sheets is insufficient, without experience, to get working results on the first try, as you have noted. Someone who has used the part knows better how exactly to get the required results out of it.
Dan Hudgins
Old June 21st, 2009, 10:49 AM   #23
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Dan Hudgins View Post
It might be useful to make a pool of knowlage outside of the Elphel project, a sort of "Idiots guide to Digital Cinema camera design" that would enable any electronics hobbyist to understand the parts required enough to develop his own camera circuits. Much of what is in manfactures spec sheets is insufficient without experence to get working results on the first try, as you have noted. Someone who has used the part knows better how exactly to get the required results out of it.
Dan, please go ahead and write such a guide "for idiots". In many cases you should not wait for somebody to do what you want, but just do it yourself.

Yes, I do have some experience in camera design - my first one, in 1978, was just 0.000256 Mpix (16x16) - but I'm afraid I cannot help you much. I do publish my "guides" in magazines like Linux Devices or Xilinx Xcell, but all of them require some background from the readers. And of course we publish all our code and other documentation according to the GNU GPL and GNU FDL; some people can read circuit diagrams, parts lists and the code.

Then, if publications + documentation + code do not answer their particular questions, they write to support@ or to the wiki or IRC or the Sourceforge forum - and always get replies from Elphel people (in some cases email filters unfortunately put legitimate emails into spam, but we are working on that issue).

In your case, you repeated questions already covered in the available documentation (like which components we use and how they are "programmed").

So I'm pessimistic about the importance of an "Idiot's guide to Digital Cinema camera design" - camera design involves many tricky things that do require some knowledge, and readers who are not idiots. For "idiots" I would recommend:

Use your cell phone camera. You'll get more exciting footage even though the quality is not as good, because it is small and always with you. You would see such footage on national/international TV channels much more often than footage from a hobby camera. To say nothing of Youtube.
Andrey Filippov
Old June 21st, 2009, 10:55 AM   #24
Regular Crew
 
Join Date: Jan 2007
Location: Vienna, Austria
Posts: 112
To clarify a few things:

*) Apertus is based on an Elphel 353 camera and will in the future be based on the next-generation model, the Elphel 373.

*) Apertus is a community-driven project, so we are thankful for every contribution and improvement of value, to the software as well as the hardware.

*) The project already has a decent history (around 3 years), so the development is pretty advanced, even if the website and this forum were newly formed. So we are not primarily interested in starting from scratch.

*) We all love throwing around ideas about the highest possible specifications (primarily of sensors, recently, it seems) of the present and future. Anyone can dream up his own ultimate camera specifications as he wishes (I love doing it myself), but you have to keep in mind that there is a lot more to a camera project of this size, and that many parts and factors have to work together to create something useful.


Complex questions result in complex answers.
Sebastian Pichelhofer
Old June 21st, 2009, 02:25 PM   #25
Major Player
 
Join Date: Jan 2005
Location: (The Netherlands - Belgium)
Posts: 735
Quote:
Originally Posted by Dan Hudgins View Post
As far as I can understand what Oscar Spier said:
"I want that too! :-D Please let me know when you find it ;-)"

Since Oscar Spier seems interested in an uncompressed camera I was going to ask Andrey Filippov a series of questions, each of which would be used to develop information to be used in the development of some new reference circuit that could become part of the Apertus or some other camera.
Dan, about my comment, it is best described here: Humour - Wikipedia, the free encyclopedia ;-)

But, seriously... in the end we are all looking for the same things. The fact is that there are a lot of DIY cinema camera projects and even commercial projects (incl. Sparta and Scarlet) that try more or less exactly what you are talking about... but they all get stranded at some point. Don't get me wrong, I hope they all succeed one day, but we chose not to go this way with Apertus - meaning not to build it from scratch, but to find existing hardware and combine and improve it with custom software and creativity. Elphel is the base of the project and we work closely with them. (But again, we are not commercially related.)

I encourage everyone to bring in new ideas, suggestions and solutions, but also to read about the philosophy of the project on our website first.
Oscar Spierenburg
Old June 21st, 2009, 03:46 PM   #26
Regular Crew
 
Join Date: Feb 2009
Location: San Francisco, CA
Posts: 82
RAW data port on Apertus camera for external RAW recorder

Quote:
Originally Posted by Oscar Spier View Post
Elphel is the base of the project and we work closely with them. (But again, we are not commercially related)
Since you have decided to make the Apertus camera only an offshoot of the Elphel camera, it would seem possible to have a 25-pin D connector, or some other kind of connector such as a ribbon-cable connector, to take some data lines out of the Apertus camera, so that an external true-RAW sensor data recorder could be attached to your existing camera for those who would like the full RAW sensor data to work with when shooting their motion picture.

The pins required for the external 25pin data port might be:

1) - supply ground
2) + supply
3) pixel write or read pulse (on pulse 1 rise or a short time after?)
4) pixel write or read mode (1=write 0=read)
5) address counter reset (1=reset 0=enabled for write/read counting in address counter)
6) frame start (optional if camera blocks pixel write pulse before frame start) 1=first pixel
7) line start (optional if camera blocks pixel write pulse before line start) 1=first pixel of line
8) Camera trigger, 1=shooting 0=idle
9) Frame active, 1=pixel pulse is in active pixel area 0=pixel pulse is outside active area
10)-21) D0-D11 I and O
22)-23) reserved (for 14bit sensors maybe? D12-D13?)
24) external status back to camera 1=connected and working 0=not connected (and end of read or error?)
25) Erase memory, resets high address latch to zero, 1=reset 0=enabled for write/read (this is used for partial read of a not-full recorder memory)

(It may be better to have the Hi/Lo logic of the pins inverted if pull-ups are used, so that no connection reads correctly.)

It would seem an easy thing to include such a connector; at this point the boards have not been finished for the Apertus camera.

Since Andrey Filippov has recommended using DIMM modules, he must understand how to connect them, so I could work with him to build a prototype DIMM-module recorder board that connects to the external data connector on the Apertus camera.

I know how to read the parallel port one byte at a time, so I can write a program to read the raw data out of the external prototype recorder and de-mosaic the true RAW sensor data in maybe a day or two. On readout, the 12-bit data would end up converted to 16-bit data for processing: the top 8 bits would be read, then the lower 4 bits, then a pulse sent to select the next 12 bits. Later, other readers could be built using knowledge of the network-type cable or USB etc., but at least I would be able to shoot tests right away and process the true RAW sensor data.
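The two-stage readout described above can be sketched as bit manipulation. This is a minimal sketch; the left-justification of the 12-bit value into 16 bits is my assumption about the "converted to 16-bit" step:

```python
def split12(sample):
    """Split a 12-bit sample into the two parallel-port reads described:
    the top 8 bits first, then the lower 4 bits."""
    assert 0 <= sample < 4096
    return (sample >> 4) & 0xFF, sample & 0x0F

def merge16(hi8, lo4):
    """Recombine the two reads into a 16-bit value for processing
    (12 bits left-justified, low 4 bits zero)."""
    return ((hi8 << 4) | lo4) << 4

hi, lo = split12(0xABC)
print(hex(hi), hex(lo))      # 0xab 0xc
print(hex(merge16(hi, lo)))  # 0xabc0
```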

If Andrey Filippov is unable to help me and others understand DIMM modules, maybe someone else could. At any rate, this can be done, and there is no reason I can see why the external data port should not be included in the Apertus camera design spec; can you tell me otherwise?

Also I invite any input on the required pins for the external data port connector.

The Apertus would operate as normal, not needing to know that the full sensor data is being recorded on the external recorder; the external recorder can use the frame-start and trigger signals to mark the first frame for the beep/flash, and have an audio out for the double-system beep tone. The external recorder can also have the LED bar graph for quick exposure adjustments; the LEDs work like a VU meter off the sensor data lines, with an extra blue LED for the top three bits being high. The external recorder could also have a scan-converter option to output to SVGA/NTSC/PAL. I understand how to build these parts of the circuit for the most part, although some help with the parts' spec sheets and timing cycles might make things move along better.

The only change needed in the Apertus is to have its firmware switch the sensor from 8-bit output mode into 12-bit output. If the top 8 bits are used for the normal compressed Apertus recording, that does not matter, since that recording would not be used, but it would allow some function of the viewfinder even if the shadow areas would be darker; a gamma fix in the viewfinder software could offset that issue as well.


Also, if there is a disable (or error disable) in the Apertus for the compression circuits, then the sensor could be programmed from the Apertus firmware to run at speeds that the compression circuits could not handle, so the full 12-bit output of the Aptina MT9P401I12STC could be used and the camera could shoot at 2592x1102x12bit@24fps and 2048x1152x12bit@24fps. If a pixel skip could be worked into the compression circuits, then a monochrome view on the viewfinder in the Apertus could be used; when operating the sensor above the Apertus bandwidth, the external live scan converter could be used as a viewfinder. The external scan converter could be an accessory not required for the external recorder to operate.

The compressed recording could be used for on-set playback to check action. If there is a scan converter on the recorder, it can also be used for playback of the memory recording at full uncompressed resolution, in monochrome, or, if the color matrix is implemented in an analog circuit, live to an RGB monitor such as a computer monitor.

Last edited by Dan Hudgins; June 21st, 2009 at 04:52 PM.
Dan Hudgins is offline   Reply With Quote
Old June 21st, 2009, 08:47 PM   #27
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
I don't see the point of demonizing compression. RED is heavily compressing the RAW Bayer data using JPEG2000 (they just modify it in many ways to keep people from opening it). If you don't believe it, just look inside this forum and you'll find where the proposal came from.

BTW, Steven is telling us that wavelet compression is worse than the block-based compression in H.264.

Elphel right now is using something "similar" to that, and I myself think that a block-based 4x4 integer transform should be better in quality and lower in computational cost than the wavelet one, unless you were using a simple Haar.
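For the curious, a minimal sketch of the kind of block-based 4x4 integer transform referred to here (this is the H.264 core transform matrix; quantization, scaling, and the inverse transform are omitted):

```python
# 4x4 forward integer transform as used in H.264 (integer-exact, no floats)
CF = [[1,  1,  1,  1],
      [2,  1, -1, -2],
      [1, -1, -1,  1],
      [1, -2,  2, -1]]

def matmul(a, b):
    """Plain 4x4 integer matrix multiply."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform4x4(block):
    """Y = Cf * X * Cf^T ; all arithmetic stays in small integers,
    which is why it is cheap compared to a float wavelet transform."""
    cf_t = [[CF[j][i] for j in range(4)] for i in range(4)]
    return matmul(matmul(CF, block), cf_t)
```

A flat block compacts all its energy into the single DC coefficient, which is the energy-compaction property block transforms rely on.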

The nice thing in favor of wavelets is this: if the camera head is compressing using Dirac, we can leave the realtime compression to the camera hardware and just decode part of those frames to get an instant lower-resolution full-color image for monitoring purposes. That is a really nice thing to have, and doable on the PC side.
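A toy example of that monitoring trick: decoding only the low-pass (LL) band of a one-level Haar decomposition yields a half-resolution preview without ever touching the detail bands (real Dirac decoding is of course more involved):

```python
def haar_ll(img):
    """One level of a 2D Haar decomposition, keeping only the LL
    (low-pass) band: each 2x2 block is replaced by its average, giving a
    half-resolution preview image."""
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x + 1] +
              img[2*y + 1][2*x] + img[2*y + 1][2*x + 1]) / 4
             for x in range(w // 2)]
            for y in range(h // 2)]
```

In a wavelet bitstream the LL band sits together near the start of each frame, so a monitor can stop decoding there; a block-based stream interleaves low and high frequencies per macroblock, which is why the poster below suggests the H.264 bitstream would need modification for the same trick.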

Of course this procedure is also doable using H.264, but I guess the bitstream would have to be modified in some way to let us easily access the lower-frequency (lower-resolution) image.

Maybe someone is intrepid enough to produce a hybrid codec with wavelet compression for the lower resolutions and a 4x4 integer transform for the medium/higher ones?

Who knows? This is something I've been thinking about. Anyway, I don't clearly know if it brings any advantages in quality or computational cost.

BTW, I've been doing film-outs for around 10 years, and from my experience, those artifacts Dan is talking about aren't a problem for professional formats. Maybe they would be if you were using really compressed consumer stuff, but hey, even good HDV looks fine on film.


PS: Did anybody see compression artifacts in the Star Wars episodes?
Juan M. M. Fiebelkorn is offline   Reply With Quote
Old June 22nd, 2009, 12:42 AM   #28
Regular Crew
 
Join Date: Feb 2009
Location: San Francisco, CA
Posts: 82
Choice

It would cost very little to put a data port on the camera, probably less than $5, maybe as little as $0.75 for a connector for a ribbon cable on the sensor board.

If you give users a choice between the compressed output and the true RAW output then they can see what difference there might be in the end results.

Star Wars was shot with three-chip cameras, not Bayer-filter cameras; in that system the images were fully processed before compression, not so-called RAW sensor data through a Bayer filter.

One issue with the Bayer filter is that its color filters are not fully saturated, so you need to decide whether to apply the color matrix before or after compression. If you matrix before compression, you reduce the luma in the four images and therefore the resolution; if you matrix after, then because of the limited bits the chroma becomes more posterized and the compression artifacts increase the chroma artifacts.
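A small sketch of the posterization effect described above, assuming a simple hypothetical saturation-style matrix applied after 8-bit quantization: each one-code step at the input becomes a step of roughly k codes at the output, leaving gaps in the histogram.

```python
def boost_saturation(r, g, b, k=1.6):
    """Hypothetical saturation boost: push each 8-bit channel away from
    a simple luma estimate by factor k. Applied after quantization,
    consecutive input codes map to outputs ~k codes apart, so some
    output codes can never occur (histogram gaps)."""
    y = (r + 2 * g + b) / 4                       # crude luma estimate
    sat = lambda c: max(0, min(255, round(y + k * (c - y))))
    return sat(r), sat(g), sat(b)
```

The exact matrix in a real camera differs, but any gain greater than 1 applied to already-quantized chroma has this gap-creating effect.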

The RED ONE uses a 4K sensor, so the artifacts are smaller in the final images than if you start with a lower-resolution sensor. If you use a sensor with a smaller pixel count and fewer data bits, the artifacts become greater.

I do not see why anyone would oppose putting a simple data port on the sensor board so that the sensor can be run at its full bit range and at better resolutions, since the cost of doing so is small.
Dan Hudgins is offline   Reply With Quote
Old June 22nd, 2009, 12:30 PM   #29
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
You can't put a ribbon cable directly on the sensor board and get "RAW" data from it without the whole camera circuitry on the other end.
The sensor is not a self-contained camera, if you think about it that way.
So getting a "RAW" port for it needs a little more than a $0.75 connector.
Also, getting that huge amount of data outside the camera is a harder job than getting some small compressed stream, because of bandwidth issues. If you did it anyway, you would need some kind of noise-immune transfer system. That costs a little more than $0.75 and, BTW, takes up real estate on the PCB. I know Andrey can answer this in a better way, but at least I tried.
Juan M. M. Fiebelkorn is offline   Reply With Quote
Old June 22nd, 2009, 01:38 PM   #30
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Juan M. M. Fiebelkorn View Post
I know Andrey can answer to this in a better way but at least I tried.
Juan, thank you - I do not think I could say much more.

1) Yes, components need real estate, so a D-SUB will definitely not fit in the camera. In the 373 camera I'm replacing most of the 0402 resistors and capacitors with 0201 (0.6mm by 0.3mm) - otherwise there is not enough room on the board.

2) Yes, you are correct - the signals Dan wants do not exist on the sensor board, so he would need to redesign the board completely. Last year one company tried to modify the sensor board - rotating the sensor relative to the connector - and that did not work out.

3) Yes, you are correct - laying out the PCB is much more expensive than $0.75; PCB manufacturing, stencils, programming the PCB assembly and other NRE costs will run to thousands.

4) An additional consideration: the pixel size of the sensor is 2.2um (there are other sensors with even smaller pixels). With fast, high-power lenses the depth of focus on the sensor side is also about those 2um - move the sensor/PCB by 2 microns and it goes out of focus. A D-SUB is heavy, and the cable is rather rigid too, so shaking the camera may cause the sensor board to bend slightly and blur images. That is why we use light and flexible "flex cables" between the system board and the sensor board.
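The micron-scale tolerance can be checked with the common depth-of-focus approximation DoF ≈ 2·N·c (N = f-number, c = circle of confusion); taking the 2.2 um pixel as the circle of confusion and an f/1.4 lens as an assumed example:

```python
def depth_of_focus_um(f_number, coc_um):
    """Total sensor-side depth of focus, using the common approximation
    DoF ~= 2 * N * c (N = f-number, c = circle of confusion in microns)."""
    return 2.0 * f_number * coc_um

# f/1.4 lens, 2.2 um pixel as circle of confusion:
tolerance_um = depth_of_focus_um(1.4, 2.2) / 2   # roughly +/-3 um
```

A few microns of allowed sensor movement is indeed on the order of the pixel pitch, which is why a heavy connector and rigid cable on the sensor board are a problem.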

5) On the other hand, it is very easy to implement such signals for the cost of just a single camera (less than a thousand USD), instead of redesigning the camera system board, which with NRE costs will easily run to tens of thousands of USD. Each 10353 board has two inter-board connectors (of course they are small and fine-pitch, not D-SUB). One is a 40-pin for ATA; the second (30-pin) carries multiple signals (USB, serial, power) and includes 12 GPIO from the FPGA. That is fewer lines than Dan wanted, but he could use DDR or just higher frequencies to fit the signals he wanted into 12 signal lines. That would save him a lot of money, as he would need "just" to modify the FPGA code, with no hardware changes to the camera itself. Of course he would still need to develop and manufacture his memory subsystem, and that will be expensive too.
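Whether 12 GPIO lines with DDR can carry the sensor stream is simple arithmetic; the 96 MHz pixel clock used here is an assumed example figure, not a quoted spec:

```python
def gpio_clock_needed_mhz(pixel_clock_mhz, bits_per_pixel, lines, ddr=True):
    """Clock rate each GPIO line must run at to carry the sensor stream.
    DDR moves two bits per line per clock cycle."""
    bits_per_clock = lines * (2 if ddr else 1)
    return pixel_clock_mhz * bits_per_pixel / bits_per_clock

# 12-bit pixels at an assumed 96 MHz pixel clock over 12 DDR lines:
clock = gpio_clock_needed_mhz(96, 12, 12, ddr=True)   # 48.0 MHz
```

So with DDR the 12 lines only need to be clocked at half the pixel rate, which is why the existing 30-pin connector could work without any camera hardware changes.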
Andrey Filippov is offline   Reply


DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network