Mirage Recorder - Page 11 at DVinfo.net

Old January 22nd, 2008, 02:48 AM   #151
Major Player
 
Join Date: Mar 2005
Location: canterbury
Posts: 411
Quote:
Originally Posted by John Papadopoulos View Post
This is some interesting DSP. We removed the bilinear debayer effects and did a debayer and lens correction from scratch. I didn't code the processing but I find it amusing that you can get from A to B :)

http://img255.imageshack.us/img255/6387/rebuildaq5.jpg
Very impressive work John, especially the edge reconstruction! So when can we buy it :) (I know...!)

What do you think of the Red debayer? It seems very naturalistic.

cheers
paul
Paul Curtis is offline   Reply With Quote
Old January 22nd, 2008, 03:07 AM   #152
Major Player
 
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
I found that I did some stupid things, like using a power function instead of an exponential function for the X-axis of my per-pixel LUT. Now I will try cubic interpolation to estimate the values better, and use adaptive pixel repair.

So I have some work to do.
__________________
VOSGAMES, http://www.vosgames.nl/
developer of Boom Recorder and Mirage Recorder
Take Vos is offline   Reply With Quote
Old January 22nd, 2008, 07:39 AM   #153
Major Player
 
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
Quote:
Originally Posted by Paul Curtis View Post
Very impressive work John, especially the edge reconstruction! So when can we buy it :) (I know...!)

What do you think of the Red debayer? It seems very naturalistic.

cheers
paul
It looks good but I wonder how fast it is. I also do not think anyone needs 4K. Hollywood has been quite happy with 2K, and many cinematographers are happy with 16:9 low-compression or uncompressed SD. We have to think about those who care about productivity. 800p is 10 times cheaper in CPU resources compared to 4K, and it will easily outperform any small-pixel solution (especially CMOS ones) in terms of low-light performance and pixel quality.

I have seen a night clip using the Red with a car moving through a well-lit part of the city. It was only 1024 pixels wide, more than 16 pixels summed to 1, but the sky was a noisy dark blue mess. A CCD in that situation would look OK at 1:1 pixel, and it would still look excellent with lots of gain if the output was uncompressed. I have also seen ISO 800-1000 shots (which were a true ISO 300-500 according to some users) and the output wasn't usable at all.

When designing a practical solution, it's not as easy as maximising the most commercial feature (pixels); the goal is to improve the performance of the package as a whole. When designing something for the independent and low-budget markets, you have to think about low-light quality, affordability of fast lenses, ease of editing etc., because that market cannot afford large crews, lots of lighting or expensive editing facilities. A digital cinema camera for low-budget features that has lighting requirements similar to low-sensitivity film is not very useful in the real world.
John Papadopoulos is offline   Reply With Quote
Old January 22nd, 2008, 09:26 AM   #154
Major Player
 
Join Date: Mar 2005
Location: canterbury
Posts: 411
Quote:
Originally Posted by John Papadopoulos View Post
It looks good but I wonder how fast it is. I also do not think anyone needs 4K. Hollywood has been quite happy with 2K, and many cinematographers are happy with 16:9 low-compression or uncompressed SD. We have to think about those who care about productivity. 800p is 10 times cheaper in CPU resources compared to 4K, and it will easily outperform any small-pixel solution (especially CMOS ones) in terms of low-light performance and pixel quality. I have seen a night clip using the Red with a car moving through a well-lit part of the city. It was only 1024 pixels wide, more than 16 pixels summed to 1, but the sky was a noisy dark blue mess. A CCD in that situation would look OK at 1:1 pixel, and it would still look excellent with lots of gain if the output was uncompressed. I have also seen ISO 800-1000 shots (which were a true ISO 300-500 according to some users) and the output wasn't usable at all. When designing a practical solution, it's not as easy as maximising the most commercial feature (pixels); the goal is to improve the performance of the package as a whole. When designing something for the independent and low-budget markets, you have to think about low-light quality, affordability of fast lenses, ease of editing etc., because that market cannot afford large crews, lots of lighting or expensive editing facilities. A digital cinema camera for low-budget features that has lighting requirements similar to low-sensitivity film is not very useful in the real world.
In my experience cinematographers sadly don't always get to choose (producers usually do), but in principle I agree with you. The Varicam still outputs very nice images at 720. One aspect of the Red that draws people, though, is the shallow depth of field offered by a 35mm-sized sensor. With the right lenses the same applies to 2/3" and 1", but unless you're at Zeiss Superspeed quality levels, a lot of lenses aren't so good around T1.3. I'm just starting to experiment with C-mount, so who knows what's possible.

Since you've been posting, your comments have made me rethink CCD, although I'm not *totally* convinced or converted yet :) because I only see CCD examples, with no like-for-like CMOS comparison to quantify the differences visually.

In broad terms, with generic CMOS/CCD sensors, I can see where you're coming from, but I'm interested in specific sensors: the Altasens ProCam HD vs the Kodak KAI-2093, for example. On paper the Altasens is actually more sensitive across a broader range of light, and CMOS, as I understand it, generally outputs a cleaner signal (less hardware is needed around the sensor because a lot of it is on-board). CCDs bloom, and CMOS sensors usually have a rolling shutter (mitigated by making sure the sensor is running fast enough). CMOS frame rates are higher, and that is a valid narrative requirement sometimes, especially for model shots and so on.

Have you found *any* CMOS sensor that you're impressed with? Aside from rolling shutter, what about the other aspects?

paul
Paul Curtis is offline   Reply With Quote
Old January 22nd, 2008, 11:42 AM   #155
Major Player
 
Join Date: Nov 2007
Location: Athens Greece
Posts: 336
Well, the producer is usually not an engineer or even the photographer, so I guess every single one of them will want the Red because of the higher resolution even if that's not required by the project.

Most people involved with advertising or special effects will also prefer the Red, because the higher chroma resolution gives more options and you can use as much light as you want in those shoots. They will only need CCD if there is lots of real camera motion (not synthetic).

Where CCD will always be better is in very active camera work, action shots, natural-light cinematography etc. On a good CCD outputting uncompressed, there is natural CCD noise that can be used creatively. You normally do not have to use gain, so there is no grain, but if you like it, it looks great and has excellent statistical properties that are very close to high-sensitivity film. On a compressed format or a CMOS sensor you would not want to do that.

On the other hand, not everyone has the same idea about what an image should look like. These days everything has too much grading and an artificial image is very common and even considered cool by some people. I personally think film is still the reference for color quality and this is what we are after. If the user is after desaturated, metallic, green/blue looking science fiction looks, any camera will be equally good I guess. But some people want realism out of the camera and most people interested in the camera so far are involved in pseudodocumentaries, drama, comedy and horror. Which is nice, since that was the intended market for this camera. We also have interest from film people who have never used a digital camera. Which made us extremely happy I admit.

When comparing sensors of the same resolution, the CCD is usually superior in every image-quality aspect except smear performance. You can expect the Kodak 2093 to significantly outperform any CMOS at 2K, including SI and Red in 2K crop.

To answer your question: in a direct comparison, the CCD will have higher sensitivity, an excellent statistical nature in its noise, higher saturation, better motion quality, more realism and a far more impressive image. In low-light situations the CMOS will quickly deteriorate to a flat, lifeless image. Every single CMOS I have seen so far has these characteristics. I have first-hand experience with many CMOS sensors, including the Altasens 1080p, but not with the Red sensor. We would only use CMOS if the reduction in cost was extremely important for the complete package. With an Altasens it can be very significant, so there is always room for CMOS even in our product.

The issue of depth of field has many sides. F1.4 is cheap on the 2/3" and 1" sensors, but how much does it cost on the full-frame Red? SLR lenses at that speed (beyond 50mm) do not exist, and film lenses are extremely expensive. You settle for f2.8 or slower, and there you go: there is no DOF advantage and you also get an enormous loss of light. There are many 35mm shooters who prefer to stay above f4 because of DOF limitations and consider working at larger apertures a problem. They will be very satisfied with a sensor like the Kodak that has much more sensitivity for the same DOF. 2/3" and 1" have good shallow-DOF capabilities. If a user comes from a 1/3" or 1/4" f1.6 or f1.8 camcorder he might be starved for some DOF flexibility, but Super 35mm is probably too much for most applications. Personally I find 2/3" DOF annoying sometimes at large apertures because it is too shallow.

CCDs are not made by startups or companies that were created yesterday; they are made by Sony and Kodak, companies that pioneered the imaging (even film) market and have dominated it for decades. There is some solid engineering behind these sensors and it is obvious in their output. Why is our Sony 2/3" CCD sensor used in camera heads that cost 20,000 euro and output a frame every 10 seconds? Why is it considered the highest-quality low-light CCD for scientific applications, if CMOS actually had a chance to compete at a fraction of the cost? Why do all serious microscopy cameras come with that sensor? Why does NASA choose CCDs for all space-based inspection instead of the affordable CMOS sensors? You would expect the engineers behind these solutions to have done their homework, to be able to read beyond the CMOS-related marketing, and to be free from cost restrictions of a few $100s. And you would be right :)

I only see CMOS in a few places on the market: mobile phones, consumer camcorders, and a couple of digital cinema cameras. Why are Sony and 95% of other companies still using the expensive CCD sensors in even the cheapest still cameras, if CMOS is up to the task? They don't want to save money or improve quality? CMOS is used in DSLRs now, but there is no camera motion involved in those, and the system can afford to do quite a lot of processing on the CMOS output. These cameras (body only) do not make any real profit; the companies survive on selling aftermarket lenses for their system, and there is obviously pressure to reduce cost and, why not, increase resolution on the side. The expensive medium-format camera backs still use CCDs, just like any serious camcorder and digital cinema camera from the big manufacturers.

Red are very smart. I can get an Altasens implementation and do a direct comparison with a CCD; we have evaluated an Altasens head and are in the process of evaluating another one. But since Red claim the sensor is not available to anyone else, they can claim anything, and we would have to spend 20,000 to buy a Red One camera and wait many months to get it in order to compare directly. We would never be trusted when publishing such results, so there is not much we can do except wait for the users to discover the quality differences in actual use. Which will be hard. The Red users thought the NoX samples looked bad, but they look excellent IMHO. We have a DVX user in the team and, damn, it is like questioning one's religion sometimes when comparing anything to the DVX!
John Papadopoulos is offline   Reply With Quote
Old January 23rd, 2008, 03:55 PM   #156
Major Player
 
Join Date: Mar 2005
Location: canterbury
Posts: 411
John,

I just wanted to say thank you for such a comprehensive thought out reply!

I'd like to add, since you talk about having real chroma and luma: the nice thing about a Bayered 2K or 4K sensor is the extra chroma information by the time it's downsampled to 2K. Although I know your debayer is exceptionally good.

You make a good point about the nature of 'uncompressed' noise.

Can smear be 'fixed' or minimised in CCD sensor design by good supporting electronics and hardware?

The point about lenses is important, especially with this sensor; I'm finding it quite difficult to locate glass that has a big enough image circle. Even the Zeiss Superspeeds and S16mm Cookes don't look like they'll cover it. There's a bit of a void in the market here, save for some machine-vision lenses (like the Fujinons): SLR lenses are too big, and all the masses of 2/3" lenses are too small. Have you found some quality lens manufacturers?

>We have evaluated an altasens head and are in the process of evaluating another one.

Don't suppose you care to mention what you have tested and your thoughts? I suspect you're in a unique position of really being able to test and understand these heads?

thanks again
paul
Paul Curtis is offline   Reply With Quote
Old January 25th, 2008, 07:33 AM   #157
Major Player
 
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
I've worked on my non-uniformity calibration algorithm and here is the result:
http://www.vosgames.nl/images/Mirage.../fr_cal_bp.png

I first had to find the best values for the x-axis of my per-pixel LUTs; as the errors are exponential on an exponential system (or is that logarithmic on a logarithmic system?), I had to choose the x-axis accordingly.
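A small sketch of that exponentially spaced x-axis idea (function names and the 12-bit range are my own illustration, not Take's code): with errors growing roughly with signal level, spacing the LUT breakpoints geometrically puts more samples where the relative error matters.

```python
import numpy as np

def make_lut_axis(black_level, white_level, n_points=16):
    """Return n_points LUT breakpoints spaced exponentially between levels."""
    return np.geomspace(black_level, white_level, n_points)

def apply_pixel_lut(value, axis, corrected):
    """Piecewise-linear lookup of one pixel's correction curve."""
    return np.interp(value, axis, corrected)

axis = make_lut_axis(16.0, 4095.0, 16)   # assumed 12-bit sensor range
corrected = axis * 1.02                  # stand-in per-pixel correction curve
print(apply_pixel_lut(1000.0, axis, corrected))
```

Cubic interpolation between the same breakpoints, as mentioned above, would just replace the `np.interp` call.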

Then I added a bad pixel detector. A bad pixel is marked when, after uniformity restoration, it deviates from the average by more than 6% (a threshold taken from a Kodak white paper). I do this for each white field that has been captured, and record for each bad pixel the brightest value at which it is off by more than 6%.

During rendering, the pixels are first made uniform. Then each pixel value is compared to its own bad-pixel brightness value. If the pixel is bad, it is interpolated from neighbour pixels that are good; if no good neighbour pixel is found, an average is taken from all the neighbours (I guess I could do this using weights).

There is also a bad line somewhere at the top of the image that goes all the way from left to right. Weirdly, this line does not show up in the white fields, otherwise the bad pixel detector would have detected it (I also checked it visually). I guess I will have to find a way to add manual pixels and lines to the bad pixel map.
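The repair step described above could be sketched roughly like this (my own simplified illustration: it averages the good 8-neighbours, and it ignores the fact that on a Bayer mosaic the same-color neighbours are two pixels away):

```python
import numpy as np

def repair_bad_pixels(img, bad_mask):
    """Replace each bad pixel with the mean of its good 8-neighbours."""
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(bad_mask)):
        # Clip the 3x3 window to the frame edges.
        ys = slice(max(y - 1, 0), min(y + 2, h))
        xs = slice(max(x - 1, 0), min(x + 2, w))
        patch = img[ys, xs]
        good = ~bad_mask[ys, xs]          # the bad centre is False here too
        if good.any():
            out[y, x] = patch[good].mean()
        # else: keep the raw value; a weighted fallback could go here
    return out
```

The manually flagged bad line from the post would just be extra `True` entries in `bad_mask`.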

The next step is getting color conversion to work. I am thinking of using a 3D LUT for color-space conversion instead of a color conversion matrix. I am not sure how to implement one, but I guess I will find out.
__________________
VOSGAMES, http://www.vosgames.nl/
developer of Boom Recorder and Mirage Recorder
Take Vos is offline   Reply With Quote
Old January 25th, 2008, 10:36 AM   #158
Trustee
 
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
Hey Take,

3D LUTs are not hard per se; it just depends on where you need to apply one.

The GPU will give you a 3D LUT for "free", at least sort of: linear interpolation is typically part of any GPU's architecture (i.e., it can linearly interpolate a texture map), so you can use a volumetric texture and the GPU will interpolate it trilinearly to get all the values you need out of the 3D LUT.

If you have to rely on the CPU for the 3D LUT, then trilinear gets a bit cumbersome, since there are a lot of operations (you need a total of 8 points to construct the interpolated point). Tetrahedral interpolation can be a lot easier, since you're now only dealing with the 4 points necessary for interpolation. Since you're not doing a scattered mesh, you don't need to worry about stuff like Delaunay triangulations: your 3D LUT should be a regular mesh of evenly spaced points, so you can simply split each cube into six tetrahedra using a single diagonal of the cube. In fact, it's probably a lot simpler than that: with an evenly spaced mesh you can just pick the 4 closest points and make a tetrahedron out of them. But I think you will need to make sure that no point can be addressed by two separate tetrahedra. If you don't pick an axis of the cube for creating the tetrahedra, one point can be assigned to one set of 4 points, the point right next to it to another set, and the point next to that back to the first tetrahedron, so now you have this odd cross-over situation. Depending on the precision of the math used, this could cause interpolation inconsistencies; theoretically it wouldn't, but in the real world it might.
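The 8-point trilinear lookup Jason describes the GPU doing can be written out on the CPU like this (a hedged sketch, with a LUT layout I chose for illustration: shape `(N, N, N, 3)`, nodes evenly spaced over [0, 1] per channel):

```python
import numpy as np

def trilinear_lookup(lut, rgb):
    """Trilinearly interpolate a regular (N, N, N, 3) 3D LUT at an RGB point."""
    n = lut.shape[0]
    p = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    i = np.minimum(p.astype(int), n - 2)   # lower corner of the enclosing cell
    f = p - i                              # fractional position inside the cell
    out = np.zeros(3)
    # Blend the 8 surrounding nodes with trilinear weights.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[i[0] + dr, i[1] + dg, i[2] + db]
    return out
```

The tetrahedral variant would replace the 8-node blend with a 4-node barycentric blend inside one of the six tetrahedra per cell.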

Thanks,

Jason
Jason Rodriguez is offline   Reply With Quote
Old January 25th, 2008, 12:06 PM   #159
Major Player
 
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
Hello Jason,

Thanks for the information.

I started and stopped with trilinear interpolation; it was too messy. I was actually doing a scattered mesh, i.e. only the colors from the ColorChecker would be in the LUT. And I think I had it working quite well, until I needed to do extrapolation, and then it became extremely weird.

So instead I am trying to find a 3x3 matrix for color conversion.
I am using Gauss-Jordan elimination to find the matrix.
Then I am going to repeat that for each combination of 3 ColorChecker colors.
Then I take the median of all the results, and that will be my correction matrix.
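The procedure could be sketched like this (a hedged illustration: the array names are mine, `np.linalg.solve` stands in for the Gauss-Jordan step, and the assumed layout is 24 ColorChecker patches as rows of camera-RGB and target-RGB arrays):

```python
import numpy as np
from itertools import combinations

def median_color_matrix(camera_rgb, target_rgb):
    """Element-wise median of the exact 3x3 matrices fitted to each
    triple of patches, where M @ cam_color = target_color."""
    mats = []
    for idx in combinations(range(len(camera_rgb)), 3):
        cam = camera_rgb[list(idx)]       # 3 x 3, one patch per row
        tgt = target_rgb[list(idx)]
        if abs(np.linalg.det(cam)) < 1e-9:
            continue                      # skip degenerate triples
        # cam @ M.T = tgt  =>  M = solve(cam, tgt).T
        mats.append(np.linalg.solve(cam, tgt).T)
    return np.median(mats, axis=0)
```

Taking the element-wise median over all triples, as described, makes the fit robust to a few badly measured patches.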

Cheers,
Take Vos
__________________
VOSGAMES, http://www.vosgames.nl/
developer of Boom Recorder and Mirage Recorder
Take Vos is offline   Reply With Quote
Old January 25th, 2008, 12:37 PM   #160
Major Player
 
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
Jason asked me for the exact steps used when creating the image fr_cal_bp.png, so here they are.

+ Read footage into Final Cut Pro
- Camera bayer 12 bit linear (already black corrected with a small offset to handle negative values)
- Apply Per Pixel uniformity correction (also linearizes each color channel).
- Fix bad pixels
- Debayer using a directional algorithm
- <--------- Here is where the color conversion will be.
- Add rec709 gamma correction
- Convert to YUV using rec709 YUV conversion
+ Final Cut Pro with "3 way color correction filter", with neutral settings, just to force high dynamic range rendering when previewing (not needed for normal export)
+ Export to my own intermediate codec from Final Cut Pro
- Convert to RGB with rec709 YUV conversion
- Remove rec709 gamma correction
- Saved as 16 bit float linear RGB
+ Read back into Final Cut Pro
- Add Apple native gamma correction (1.8) (because it is exported to .png)
- image is now in 24 bit RGB (because it is exported to .png)
+ Save as .png by Final Cut Pro.

Just to reiterate, the "3 way color correction filter" is not doing anything to the image; it is just there to force high-quality rendering for debugging purposes.
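Two of the export steps above can be written out directly; this is a sketch using the standard BT.709 constants, not claimed to match Mirage Recorder's actual code:

```python
import numpy as np

def rec709_gamma(linear):
    """BT.709 opto-electronic transfer function (linear light -> gamma)."""
    linear = np.asarray(linear, dtype=float)
    return np.where(linear < 0.018,
                    4.5 * linear,
                    1.099 * np.power(linear, 0.45) - 0.099)

def rgb_to_yuv709(rgb):
    """Y'CbCr-style conversion with BT.709 luma coefficients (full range)."""
    r, g, b = rgb
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    u = (b - y) / 1.8556
    v = (r - y) / 1.5748
    return y, u, v
```

Removing the gamma on import (step "Remove rec709 gamma correction") would be the inverse of `rec709_gamma`.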
__________________
VOSGAMES, http://www.vosgames.nl/
developer of Boom Recorder and Mirage Recorder
Take Vos is offline   Reply With Quote
Old January 25th, 2008, 12:39 PM   #161
Trustee
 
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
Quote:
I started and stopped with trilinear interpolation; it was too messy. I was actually doing a scattered mesh, i.e. only the colors from the ColorChecker would be in the LUT. And I think I had it working quite well, until I needed to do extrapolation, and then it became extremely weird.
Eek, yeah, that's going to be messy... First off, you can't really do effective trilinear interpolation on a scattered data-set; trilinear likes evenly spaced meshes. More effective and accurate would be something like tetrahedral interpolation, with only 4 points, over a triangulation of the mesh (like Delaunay). Secondly, the placement of the ColorChecker colors in device-independent color-spaces like CIELab is not beneficial for scattered data-set interpolation (at least not using literal interpolation; you would need to move to some regression method). Typically, after triangulation you end up with tetrahedra that do not allow you to weight the points in the color-space appropriately for the weight of the training point: you can end up with saturated colors that are also interpolating through the mid-tones, so your mid-tone range gets skewed as you move around the convex hull of the training points, which are the colors from the ColorChecker chart.

In an ideal scattered training data-set, the colors are spaced so that a nice even lattice/solid can be used to create the interpolated data, but the ColorChecker series typically "clumps" samples around the convex hull of the color-space, and 4 points are not enough to weight the interior points correctly. So what typically happens is that colors near the exterior of the color-space, closely aligned with the ColorChecker points, look fine, but as you move into the interior of the solid, the points are all skewed incorrectly, as there simply aren't enough points to create a nice interpolation lattice.
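For what it's worth, the 4-point tetrahedral step itself is compact. A minimal barycentric sketch (my own illustration, independent of any particular triangulation scheme):

```python
import numpy as np

def tetra_interp(corners_in, corners_out, p):
    """Interpolate p inside a tetrahedron of 4 input colors (corners_in,
    shape (4, 3)) by blending the 4 target colors (corners_out) with the
    barycentric weights of p."""
    corners_in = np.asarray(corners_in, dtype=float)
    # Solve [v1-v0, v2-v0, v3-v0] @ w = p - v0 for the last 3 weights.
    T = (corners_in[1:] - corners_in[0]).T
    w123 = np.linalg.solve(T, np.asarray(p, dtype=float) - corners_in[0])
    w = np.concatenate(([1.0 - w123.sum()], w123))  # weights sum to 1
    return w @ np.asarray(corners_out, dtype=float)
```

Negative weights indicate that `p` lies outside the tetrahedron, which is exactly where the extrapolation trouble described earlier begins.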

I think a better chart would be the IT-8 or something of that nature that creates a more "even" interpolation lattice with evenly spaced samples throughout the volume of the color-space, not clumps along the convex hull like the ColorChecker series.
Jason Rodriguez is offline   Reply With Quote
Old January 25th, 2008, 12:45 PM   #162
Trustee
 
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
Quote:
Just to reiterate, the "3 way color correction filter" is not doing anything to the image, it is just there for forcing high quality rendering for debugging purposes.
So, if I understand correctly, that image has only had gamma correction applied, and there has been no saturation added to the image? For instance, you mentioned:

- Add rec709 gamma correction
- Convert to YUV using rec709 YUV conversion

So these two steps are not applying some form of color-saturation multiplier to the image (they shouldn't, but I was just wondering)?

I'm just wondering if that's the level of saturation you're getting straight from the camera head, or if there is a multiplier somewhere in your color-conversion steps that gives the more saturated image I'm seeing as the end product. It sounds from your description like there aren't any saturation stages.

Thanks,

Jason
Jason Rodriguez is offline   Reply With Quote
Old January 25th, 2008, 12:51 PM   #163
Major Player
 
Join Date: Mar 2007
Location: Amsterdam The Netherlands
Posts: 200
Quote:
Originally Posted by Jason Rodriguez View Post
So if I understand correctly that image then only has had gamma correction applied, there has been no saturation added to the image? For instance, you mentioned:

- Add rec709 gamma correction
- Convert to YUV using rec709 YUV conversion

So these two steps are not applying some form of color saturation multiplier on the image (they shouldn't, but was just wondering)?
Indeed, these two steps are NOT adding color saturation.

No color saturation has been added anywhere, nor has any white balancing taken place; the colors are still in camera RGB.

The picture was taken in natural light from an overcast sky, at noon, in the Netherlands.
__________________
VOSGAMES, http://www.vosgames.nl/
developer of Boom Recorder and Mirage Recorder
Take Vos is offline   Reply With Quote
Old January 25th, 2008, 12:58 PM   #164
Trustee
 
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
Quote:
first-off you can't really do effective trilinear on a scattered data-set . . . trilinear likes evenly spaced meshes.
BTW, I'd just like to clarify the "tone" of that statement... it sounds a little harsh and didactic, and I wasn't meaning for it to sound like that. I'm sure trilinear can be done (and you obviously said it was working to some extent), but according to a lot of papers and sources I've read, scattered data-sets that are not evenly spaced tend to get better results from tetrahedral than from trilinear interpolation.

Thanks,

Jason
Jason Rodriguez is offline   Reply With Quote
Old January 25th, 2008, 01:01 PM   #165
Trustee
 
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
Quote:
No color saturation has been added anywhere, nor has any white balancing taken place; the colors are still in camera RGB.
Wow, that's pretty impressive then... it hopefully won't take you too much work to align those results to a proper color-space. A lot of your color vectors are already in the right spot, or very close to it, so you shouldn't need any crazy matrices like those needed to uncouple sensors with a lot of color-channel cross-talk.

Thanks,

Jason
Jason Rodriguez is offline   Reply


DV Info Net -- Real Names, Real People, Real Info!
1998-2017 The Digital Video Information Network