DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Alternative Imaging Methods (https://www.dvinfo.net/forum/alternative-imaging-methods/)
-   -   Mirage Recorder (https://www.dvinfo.net/forum/alternative-imaging-methods/89837-mirage-recorder.html)

Take Vos January 4th, 2008 07:06 AM

Hello John,

You made me think again about my black level subtraction. As it is now, noise that is darker than black will be clamped to black. Also if parts of the sensor are darker than the black lines (non uniformity) they will also be clamped to black.

So I will be adding a constant offset to the sensor data, which will enable me to represent a small range of negative values in 12 bits.

This should increase the quality of my per-pixel-non-linearity compensation algorithm.
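A small sketch of what that offset buys. The pedestal value (128) and the 12-bit range are illustrative assumptions, not the actual numbers from this project:

```python
import numpy as np

# Hypothetical pedestal: black-subtracted values slightly below zero
# survive instead of being clamped to black.
PEDESTAL = 128        # assumed offset; leaves headroom for negative noise
MAX_12BIT = 4095

def subtract_black(raw, black_level):
    """Subtract the measured black level but keep a constant pedestal,
    so noise darker than black stays representable in 12 bits."""
    shifted = raw.astype(np.int32) - black_level + PEDESTAL
    return np.clip(shifted, 0, MAX_12BIT).astype(np.uint16)

frame = np.array([90, 100, 110, 4000], dtype=np.uint16)
out = subtract_black(frame, black_level=100)
# 90 is 10 counts below black; it survives as PEDESTAL - 10 instead of 0
```

Without the pedestal, that first value would clamp to zero and bias any later per-pixel statistics.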

Cheers,
Take

John Papadopoulos January 4th, 2008 08:57 AM

The noise is a little higher than it should be. Could it be an issue with your trigger? How do you power it?

Take Vos January 4th, 2008 09:01 AM

Power and trigger are both over firewire.

John Papadopoulos January 4th, 2008 09:30 AM

You mean it's a software trigger or a firewire trigger? Or firewire just for power? If you are just connecting a camera, perhaps the computer ports are not properly designed.

Take Vos January 4th, 2008 09:42 AM

Hello John,

The trigger is based on the packet size you set and on how much data it wants to send. As the packets are put in the real-time stream on firewire, the trigger is basically the firewire clock. I will include a trigger based on SMPTE or word clock (simpler) at a later time.

I think the power/ground of the firewire connection is not clean. I've tried running the notebook from the internal battery, but there was no change, so the noise is picked up or caused by the notebook itself.

In any case, this is why I want to use fiber firewire and give the camera its own battery.

John Papadopoulos January 4th, 2008 10:26 AM

Depending on the camera and the interface, limiting free run with the packet size is not dependable. You could record a long-duration video to test. But this is not critical if you are just using a single camera and recording sync audio.

Take Vos January 4th, 2008 10:36 AM

John,

It is even worse: with the packet size I cannot even get within 1% of 24 frames/sec. So I will have to make my own trigger.

Taking a word clock of 48000 Hz and dividing it by 2000 would be an easy route. Or maybe ask Ambient to make a firmware change to output a frame trigger (their unit already does word clock, tri-level sync and SMPTE, and it is battery powered).
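The word-clock route is plain integer division; a sketch of the ratio, plus the modulo counter a small microcontroller would typically run on each clock edge (function and variable names are made up):

```python
WORD_CLOCK_HZ = 48_000
DIVISOR = 2_000
frame_rate = WORD_CLOCK_HZ / DIVISOR    # exactly 24.0 fps

def on_word_clock_edge(state):
    """Hypothetical interrupt handler: count word-clock edges and emit
    one frame trigger every DIVISOR edges."""
    state["count"] = (state["count"] + 1) % DIVISOR
    return state["count"] == 0           # True = fire the frame trigger
```

Because 48000 is an exact multiple of 2000, the trigger is phase-locked to the word clock with no drift, unlike the firewire packet-size scheme.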

In all probability I will have to design some electronics, maybe based on a small microcontroller like a PIC; then I can add some buttons to the camera to change mode and shutter time. I also need a thermostat for the Peltier element.

Cheers,
Take

John Papadopoulos January 4th, 2008 12:44 PM

1% is too much. What is the exact resolution and frame rate you want to get? 1920x800 24p 12bit?

Take Vos January 4th, 2008 12:59 PM

John,

Yip, that is too much. But it will be no problem when I make an external trigger.

I am capturing 1920 x 940 x 12 bit @ 24 fps.
Then I do black levelling in software and crop the image to 1920 x 800 x 12 bit @ 24 fps.

The 12 bit is of course a sort-of-logarithmic conversion from the 14 bit A/D converter.
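The exact companding curve isn't given in the thread; as an illustration of why a "sort-of-logarithmic" 14-to-12-bit packing works, here is a square-root-style curve (a common stand-in, not the camera's actual curve) that spends more of the 12-bit codes on the shadows:

```python
import numpy as np

def compand_14_to_12(x14):
    """Illustrative shadow-weighted packing of 14-bit codes into 12 bits
    (square-root curve; the real camera curve is not specified here)."""
    x = np.asarray(x14, dtype=np.float64) / 16383.0
    return np.round(np.sqrt(x) * 4095).astype(np.uint16)

def expand_12_to_14(x12):
    """Inverse: unpack 12-bit codes back to linear 14-bit values."""
    x = np.asarray(x12, dtype=np.float64) / 4095.0
    return np.round(x * x * 16383).astype(np.uint16)
```

The quantization error of the round trip is small in the shadows and grows toward the highlights, which is exactly where the eye tolerates it best.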

Cheers,
Take

John Papadopoulos January 4th, 2008 01:05 PM

That's very close to full frame. Nice:) You can actually get a very sharp 720p if you crop the scope sides to 16:9 and downscale.

Take Vos January 4th, 2008 01:16 PM

Hi John,

Yes, if the manufacturer releases firmware that does the black level correction the way I like, I could actually use all those 940 lines for the image; instead, 140 lines are covered by a piece of aluminium to get a couple of black lines.

Cheers,
Take

Take Vos January 4th, 2008 01:18 PM

Right now, I include a shoot-and-protect square for 16:9 720p in the 1920 x 800 frame preview. Scaling would be better, but as I said in the previous post, those 140 lines are unusable.

Take Vos January 6th, 2008 12:50 PM

Hi,

So I have started working on the calibration utility.
I will share some before-and-after statistics.

First I point the camera at a white piece of paper and overexpose it at 80 ms; then Boom Recorder creates a movie where it automatically reduces the exposure time and records the results. The calibration program reads this movie and averages the images at each exposure time.

The results are below. The statistics are split per color component; shown in parentheses are: the average pixel intensity, the standard deviation of pixel intensity (spatial noise, i.e. pixel non-uniformity), and the average per-pixel deviation over multiple images at the same exposure time (temporal noise, for use in ISO calculations). The values are in stopFS (FS stands for full scale), so -1.0 stopFS means 50% exposure and -2.0 stopFS means 25% exposure.

black movie, 0.0 ms:red (-10.7,-12.1,-10.1) stopFS; green (-10.7,-12.1,-10.1) stopFS; blue (-10.7,-12.0,-10.1) stopFS.
white movie, 0.1 ms:red (-8.1,-9.4,-9.4) stopFS; green (-7.7,-8.9,-9.3) stopFS; blue (-7.5,-9.1,-9.2) stopFS.
white movie, 0.2 ms:red (-7.8,-9.3,-9.3) stopFS; green (-7.3,-8.7,-9.1) stopFS; blue (-7.0,-9.0,-8.9) stopFS.
white movie, 0.3 ms:red (-7.4,-9.0,-9.2) stopFS; green (-6.7,-8.6,-8.9) stopFS; blue (-6.4,-8.8,-8.7) stopFS.
white movie, 0.4 ms:red (-7.0,-8.8,-9.2) stopFS; green (-6.2,-8.4,-8.8) stopFS; blue (-5.8,-8.5,-8.5) stopFS.
white movie, 0.5 ms:red (-6.6,-8.7,-9.0) stopFS; green (-5.7,-8.3,-8.4) stopFS; blue (-5.3,-8.3,-8.1) stopFS.
white movie, 0.7 ms:red (-6.2,-8.6,-8.9) stopFS; green (-5.2,-8.1,-8.2) stopFS; blue (-4.7,-8.1,-7.8) stopFS.
white movie, 1.0 ms:red (-5.7,-8.4,-8.6) stopFS; green (-4.7,-7.9,-7.8) stopFS; blue (-4.3,-7.9,-7.4) stopFS.
white movie, 1.4 ms:red (-5.3,-8.1,-8.2) stopFS; green (-4.2,-7.8,-7.5) stopFS; blue (-3.8,-7.6,-7.0) stopFS.
white movie, 2.0 ms:red (-4.8,-8.0,-8.0) stopFS; green (-3.8,-7.4,-6.9) stopFS; blue (-3.3,-7.2,-6.5) stopFS.
white movie, 2.8 ms:red (-4.3,-7.8,-7.6) stopFS; green (-3.3,-7.0,-6.5) stopFS; blue (-2.8,-6.8,-6.1) stopFS.
white movie, 3.9 ms:red (-3.9,-7.5,-7.1) stopFS; green (-2.8,-6.6,-6.0) stopFS; blue (-2.4,-6.3,-5.6) stopFS.
white movie, 5.5 ms:red (-3.4,-7.1,-6.7) stopFS; green (-2.3,-6.2,-5.6) stopFS; blue (-1.9,-5.9,-5.1) stopFS.
white movie, 7.6 ms:red (-3.0,-6.7,-6.2) stopFS; green (-1.9,-5.7,-5.1) stopFS; blue (-1.4,-5.4,-4.6) stopFS.
white movie, 10.7 ms:red (-2.5,-6.3,-5.7) stopFS; green (-1.4,-5.2,-4.6) stopFS; blue (-0.9,-4.9,-4.2) stopFS.
white movie, 14.9 ms:red (-2.0,-5.9,-5.3) stopFS; green (-0.9,-4.8,-4.2) stopFS; blue (-0.5,-4.5,-3.8) stopFS.
white movie, 20.8 ms:red (-1.5,-5.4,-4.7) stopFS; green (-0.4,-4.4,-3.9) stopFS; blue (-0.0,-4.6,-6.7) stopFS.
white movie, 29.0 ms:red (-1.0,-5.0,-3.2) stopFS; green (-0.1,-4.5,-4.9) stopFS; blue (-0.0,-8.0,-11.5) stopFS.
white movie, 40.5 ms:red (-0.0,-5.9,-11.9) stopFS; green (-0.3,-2.6,-5.4) stopFS; blue (-0.0,-8.5,-11.7) stopFS.
white movie, 56.6 ms:red (-0.2,-4.3,-5.6) stopFS; green (-0.0,-7.6,-11.4) stopFS; blue (-0.0,-8.6,-11.2) stopFS.
white movie, 79.0 ms:red (-0.0,-8.5,-12.3) stopFS; green (-0.0,-7.6,-12.5) stopFS; blue (-0.0,-8.9,-12.5) stopFS.
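The three statistics above could be computed roughly like this. This is a sketch of my reading of the description; the 4095 full-scale value and the exact estimators are assumptions:

```python
import numpy as np

def stopfs_stats(frames, full_scale=4095.0):
    """Per-channel stats in stopFS (log2 of the fraction of full scale):
    mean level, spatial noise (std-dev across pixels of the per-pixel
    mean), and temporal noise (mean across pixels of the per-pixel
    std-dev over frames)."""
    stack = np.asarray(frames, dtype=np.float64)   # (n_frames, H, W)
    per_pixel_mean = stack.mean(axis=0)
    mean_level = per_pixel_mean.mean()
    spatial_std = per_pixel_mean.std()
    temporal_std = stack.std(axis=0).mean()
    to_stopfs = lambda v: float(np.log2(max(v, 1e-12) / full_scale))
    return tuple(round(to_stopfs(v), 1)
                 for v in (mean_level, spatial_std, temporal_std))
```

A uniform stack exposed at exactly 50% of full scale should report a mean level of -1.0 stopFS, with both noise figures collapsing toward the floor.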

From these measurements a PPLUT (Per Pixel Look Up Table) with 8 values is calculated; this PPLUT is then applied to the above images to linearize the pixel values and reduce the spatial noise. As you can see, I gain around 2 stops of spatial noise reduction. Of course, when the pixels are overexposed the spatial noise increases.

I have found that this algorithm makes pixels that would normally be counted as bad (too hot or too cold) usable again. The bad pixels that cannot be rescued can now be found and restored from neighbouring pixels using a simple average.

It would be cool to do bad-pixel fixing inside the debayer algorithm, so that pixels can be fixed horizontally and vertically and the debayer algorithm can figure out which is best.
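Applying a PPLUT like the one described above could be sketched as piecewise-linear interpolation through each pixel's own measured curve. The shapes and the 8-point count follow the post; the function name is mine, and `np.interp` requires each pixel's measured codes to be increasing:

```python
import numpy as np

def apply_pplut(frame, measured, ideal):
    """frame: (H, W) raw codes; measured: (8, H, W) each pixel's measured
    response at the 8 calibration exposures (increasing per pixel);
    ideal: (8,) the target linear codes shared by all pixels.
    Maps every pixel through its own curve to linearize the response."""
    h, w = frame.shape
    flat = frame.reshape(-1)
    meas = measured.reshape(measured.shape[0], -1)
    out = np.empty(flat.size)
    for i in range(flat.size):             # every pixel has its own LUT
        out[i] = np.interp(flat[i], meas[:, i], ideal)
    return out.reshape(h, w)
```

Outside the measured range `np.interp` clamps to the end points, which matches clipping at black and at full scale.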

calib movie, 0.0 ms:red (nan,-10.7,-inf) stopFS; green (-15.8,-8.6,-inf) stopFS; blue (-15.5,-11.7,-inf) stopFS.
calib movie, 0.1 ms:red (-8.6,-10.3,-inf) stopFS; green (-7.8,-8.5,-inf) stopFS; blue (-7.3,-10.1,-inf) stopFS.
calib movie, 0.2 ms:red (-8.3,-10.5,-inf) stopFS; green (-7.4,-9.6,-inf) stopFS; blue (-6.9,-10.3,-inf) stopFS.
calib movie, 0.3 ms:red (-7.8,-10.4,-inf) stopFS; green (-6.7,-10.2,-inf) stopFS; blue (-6.2,-10.8,-inf) stopFS.
calib movie, 0.4 ms:red (-7.2,-10.3,-inf) stopFS; green (-6.2,-10.9,-inf) stopFS; blue (-5.7,-10.5,-inf) stopFS.
calib movie, 0.5 ms:red (-6.8,-10.5,-inf) stopFS; green (-5.7,-10.5,-inf) stopFS; blue (-5.2,-9.8,-inf) stopFS.
calib movie, 0.7 ms:red (-6.3,-10.5,-inf) stopFS; green (-5.2,-9.9,-inf) stopFS; blue (-4.7,-10.4,-inf) stopFS.
calib movie, 1.0 ms:red (-5.8,-10.3,-inf) stopFS; green (-4.8,-10.3,-inf) stopFS; blue (-4.3,-11.8,-inf) stopFS.
calib movie, 1.4 ms:red (-5.3,-9.9,-inf) stopFS; green (-4.3,-11.2,-inf) stopFS; blue (-3.8,-10.5,-inf) stopFS.
calib movie, 2.0 ms:red (-4.8,-10.3,-inf) stopFS; green (-3.8,-10.1,-inf) stopFS; blue (-3.3,-10.5,-inf) stopFS.
calib movie, 2.8 ms:red (-4.4,-11.7,-inf) stopFS; green (-3.3,-10.2,-inf) stopFS; blue (-2.8,-11.1,-inf) stopFS.
calib movie, 3.9 ms:red (-3.9,-10.5,-inf) stopFS; green (-2.8,-11.8,-inf) stopFS; blue (-2.4,-10.0,-inf) stopFS.
calib movie, 5.5 ms:red (-3.4,-10.3,-inf) stopFS; green (-2.3,-9.9,-inf) stopFS; blue (-1.9,-10.4,-inf) stopFS.
calib movie, 7.6 ms:red (-2.9,-11.4,-inf) stopFS; green (-1.9,-8.9,-inf) stopFS; blue (-1.4,-9.5,-inf) stopFS.
calib movie, 10.7 ms:red (-2.4,-10.2,-inf) stopFS; green (-1.4,-9.5,-inf) stopFS; blue (-0.9,-8.8,-inf) stopFS.
calib movie, 14.9 ms:red (-2.0,-10.1,-inf) stopFS; green (-0.9,-8.7,-inf) stopFS; blue (-0.4,-8.3,-inf) stopFS.
calib movie, 20.8 ms:red (-1.5,-9.4,-inf) stopFS; green (-0.4,-7.0,-inf) stopFS; blue (-0.0,-5.1,-inf) stopFS.
calib movie, 29.0 ms:red (-1.0,-7.8,-inf) stopFS; green (-0.0,-4.5,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.
calib movie, 40.5 ms:red (0.1,-3.4,-inf) stopFS; green (-0.2,-2.6,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.
calib movie, 56.6 ms:red (-0.1,-4.8,-inf) stopFS; green (0.0,-3.4,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.
calib movie, 79.0 ms:red (0.1,-3.4,-inf) stopFS; green (0.0,-3.4,-inf) stopFS; blue (0.0,-3.7,-inf) stopFS.

John Papadopoulos January 6th, 2008 12:53 PM

A before and an after image would be more useful:)

Take Vos January 6th, 2008 01:08 PM

Hi John,

I guess it would, but I don't have them yet; I first need to make my calibration program a little better. I already improved it a bit by moving my calibration point two stops down.

I also want to calculate the color correction matrix/lut from the ColorChecker image.

I learned a new term yesterday, "daylight balanced film stock"; I guess this calibration of the camera is pretty much like that. It should also be possible to calibrate the camera for tungsten, but you cannot really mix multiple calibrations.

Cheers,
Take

Take Vos January 10th, 2008 05:38 PM

I've spoken with the manufacturer and they told me that it is probably a ground loop problem caused by the firewire cable, something about the chassis not being connected to the shield of the cable.

Although the computer was running on batteries and not connected to ground at the time, you can still get a "ground loop"-like effect where the firewire cable acts as an antenna, especially because my firewire cable is rather long.

I guess running the camera on fiber with its own power will eliminate this problem so I'll be looking forward to testing it that way.

Next step on the agenda is adding reading of the calibration data by the codec, then I can make a couple of comparison pictures.

Cheers,
Take

John Papadopoulos January 11th, 2008 07:56 AM

I remember that the Pike has a shading correction feature. Doesn't that handle the pattern noise problems?

In audio installations, you usually disconnect the shield to break the loop, but I don't know much about electronics. Almost all firewire cameras I have seen have horizontal lines deep in the shadows, but you can't show them by just using gain. You need gain at very high values and extreme gamma settings.

Take Vos January 11th, 2008 08:31 AM

Hello John,

The standard way of reducing pattern noise is by taking a dark and a bright image (usually averaged out to remove the spatial noise). Basically you do linear interpolation using the two points captured by the dark and bright image.

However, this doesn't seem to work (not well enough for my taste) on my camera (maybe the camera is broken, but the manufacturer says this is normal when you use gamma correction). What happens is that when there is a small amount of light, the pixels start to behave non-linearly, and each pixel has a different non-linear curve. When it gets brighter the curve becomes linear again, but because of the first part of the curve the black image cannot properly be used to calculate the offset.
Either the non-linearity of the curve is normal and caused by non-linear effects of semiconductors on a per-pixel basis, or it is caused by the microlenses, which may cause non-linear effects at such scales.

Instead of interpolating over a line using only a black and a white image, my algorithm interpolates over a curve that is measured from multiple gray images and a black image, and thus follows the non-linear parts of the curve at dark levels. The algorithm works extremely well; even hot and cold pixels become usable again (as long as there is a little bit of life in them).
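For contrast, the standard two-point (dark/bright) correction described at the start of this post amounts to a per-pixel gain and offset. A rough sketch, with illustrative names:

```python
import numpy as np

def two_point_correct(frame, dark, bright, target=None):
    """Classic flat-field correction: per-pixel linear interpolation
    through an averaged dark frame and an averaged bright frame."""
    dark = dark.astype(np.float64)
    bright = bright.astype(np.float64)
    if target is None:
        target = (bright - dark).mean()   # flatten to the mean response
    gain = target / np.maximum(bright - dark, 1e-6)
    return (frame - dark) * gain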

As for ground loop problems: these are caused when two pieces of equipment are connected together and there is a voltage difference between the two grounds. This causes current to travel between the two, which is normally not a problem and is needed to eliminate the voltage difference. The actual noise problem occurs when the current travels in the ground wire of a signal pair.

Sound engineers used to break the shield on the cables to eliminate current travelling over the shield. As we now know the actual cause of the problem, well-educated sound engineers now make proper ground connections between the pieces of equipment, using heavy-gauge wires laid out in a star pattern, eliminating any voltage difference between the equipment. This makes everything a little safer, and fewer entertainers get electrocuted by their microphones and guitars (phantom power, which is a pretty high voltage, is sent over the microphone cables to feed the amplifiers inside the microphones).

Cheers,
Take

Take Vos January 11th, 2008 08:43 AM

Oh about the non-linear curve of the pixels.
Astronomers used to pre-flash a CCD before taking an image: when they start the integration they turn on a light in the telescope that gives off a known amount of light, filling the light buckets until they are no longer non-linear. A dark image taken with the same pre-flash is then subtracted from photographs that are also pre-flashed (thus compensating for the pre-flash).

I could have done the same, but that would have taken me some work in mechanics, optics and electronics. Instead I've chosen to make a better algorithm.

Cheers,
Take

Take Vos January 13th, 2008 05:32 PM

Hello everyone,

Here are some pictures, these are exported as PNG after rendering in Final Cut Pro.

The first one is without non-uniformity calibration data:
http://www.vosgames.nl/images/Mirage...alibration.png

The second one is with non-uniformity calibration data:
http://www.vosgames.nl/images/Mirage...calibrated.png

To see the difference, look at the black swatch on the color tester. In the first image there are some wiggly lines which are absent in the second image.

The images are demosaiced using my own debayer algorithm, which preserves more noise than, for example, AHD. If you zoom in you will see short horizontal and vertical lines caused by the directional interpolation; this happens in other algorithms as well.

I've made a modification to my algorithm that eliminates these short lines completely, but it makes the image slightly softer. I think I will need to teach my algorithm the difference between noise and lines.

John Papadopoulos January 14th, 2008 06:15 AM

What gain is this frame?

I think the uncalibrated one looks more natural. The noise looks more random. I assume both noise types are static, not changing from frame to frame. The bayer artifact should fluctuate more than the sensor artifact because it depends on the frame content. I remember an old comparison using eAHD (I think) that had a much better signal-to-noise ratio. It didn't have the maze artifact.

Have you kept the bayer frames?

Take Vos January 14th, 2008 07:30 AM

0 gain, I do not allow the user to change the gain.

There is no static noise left in the calibrated frame; that is the whole purpose of the calibration.

I actually want to keep all the noise that was originally on the sensor; noise reduction normally means that the image becomes less sharp. My algorithm uses high-frequency transplantation:
- I first find all the high frequency noise/texture.
- Then I interpolate a low frequency green.
- Then I interpolate red and blue using the full green image.
- Then I add the high frequency noise/texture to all the color components.
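A much-simplified sketch of the split/recombine idea in those steps, on a single channel. A real debayer works on the Bayer mosaic, and the 3x3 box blur standing in for the low-frequency interpolation is my assumption:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur with edge padding (low-pass stand-in)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def transplant_detail(low_freq_channel, detail_source):
    """Step 1: extract the high-frequency noise/texture from the source;
    step 4: add it back onto a smoothly interpolated channel."""
    high = detail_source - box_blur(detail_source)
    return low_freq_channel + high
```

The point of the split is that the interpolation steps in between can be as smooth as they like; the grain is restored afterwards, so sharpness-destroying smoothing never reaches the final image.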

I have an idea of using square interpolation when there is no edge to be seen; this would smooth out the greek-restaurant-pattern. I am not sure how to figure out which of the orientations to use, unless I do a homogeneity comparison like in AHD.

I also have an idea for reducing the color aliasing even further, by using a weighted average so the colors stay on the correct side of an edge.

Take Vos January 14th, 2008 07:33 AM

I still have the bayer frames, but they are quite large and in my own special movie format.

John Papadopoulos January 14th, 2008 11:53 AM

Greek restaurant pattern:P

http://www.mi.sanu.ac.yu/vismath/morrison/

It's sometimes called the greek key, but it was quite popular throughout the world. Someone found an ancient demosaicing description and thought it looked nice:)

Take Vos January 15th, 2008 04:10 AM

Hi,

I've been experimenting with a new algorithm for color interpolation to eliminate color aliasing. It worked, sort of: it removes a whole pixel of color aliasing in exchange for a zipper effect.

But the zipper effect was actually caused because the weird color changes were not color aliasing but color aberration from the lens. This caused my algorithm to behave weirdly.

Cheers,
Take

Djee Smit January 15th, 2008 04:52 AM

Hey Take, it's looking more promising every day. Maybe a bit too early to ask, but do you have any ideas on what the camera as a total package is going to look like? Not in an aesthetic way, more in a practical sense. Something like the SI-2k (mini)?

Take Vos January 15th, 2008 05:25 AM

Hello Djee,

Yes, I am pretty sure how it will look.

The camera head will be:
- The pike 210 C fiber
- A gate in front of the sensor for 2.40:1 filming
-   An aluminium block that will be the:
tripod mount
lens rods mount
the trigger and temperature controller casing
a mount for the exposure rotary switch and mode switch.
a mount for the screen
a mount for the battery
and will also function as a cool block.
- Temperature Controller
- Peltier element
- Battery for 4-8 hours running time
- Trigger controller for a stable 24fps

The camera head will only be connected to the computer by a fiber cable, which can be rather long.

The computer:
- MacBook Pro
- Firewire 800 fiber hub
- eSATA controller in the PCI Express Card slot
- single SATA disk to record on
- VGA->composite converter
- video transmitter
- A sort of a docking station to put all the extra equipment in.
- USB high quality audio interface.

The edit computer could attach to the recording computer using ethernet and its own SATA disk, so it could make backups of the data and start editing on-set.

The monitor:
- LCD monitor
- Battery mount
- Battery for 4-8 hours of operation
- video receiver

Of course things can change, I am not sure yet about the monitor solution.

Djee Smit January 15th, 2008 06:25 AM

sounds good, good luck with it. When do you think it's ready for use?

Take Vos January 15th, 2008 06:38 AM

Djee,

I will probably start using it with my friends in a month or two. In principle the recording application, calibration application and QuickTime component seem to be functioning pretty well.
So I could already start filming and improve the quality of the calibration application and QuickTime component as I get more experience with them.

But I would love to first electrically separate the camera from the computer before starting filming. And a tripod mount and screen would also be nice.

Creating a package that someone will be able to buy is a whole other can o' worms. I would love to assemble a package that includes everything (excluding the computer), even with a nice carrying case.

Take Vos January 20th, 2008 11:22 AM

So, I was quite annoyed by the maze pattern, so I built a new debayer algorithm that interpolates horizontally, vertically and crosswise.
The crosswise interpolation is used when there is no edge in the neighbourhood. This means that noise now shows up as noise, not as a decoration in a greek restaurant.

Of course, after this I found that the noise was actually fixed pattern noise caused by my own fixed-pattern-elimination algorithm; ironic, I guess.
Somehow the green pixels on the red lines are brighter than the green pixels on the blue lines. I seem to have fixed this bug, somehow.

The system now works in 12 bit linear, instead of 12 bit non linear.
Although the A/D converter is 14 bit, the sensor is only 12 bit. It makes processing much easier though.

Cheers,
Take

Take Vos January 20th, 2008 12:05 PM

Hi,

Here is a new image. I am not entirely happy with the calibration in the almost-black areas, like the black/white pillows on the right.

Also, it seems my debayer algorithm can't handle blacker-than-black (negative) values, so I will need to find out how to solve that.

http://www.vosgames.nl/images/Mirage...alibrated2.png

Cheers,
Take

John Papadopoulos January 20th, 2008 01:53 PM

I think you should leave resolution aside and compare with a commercial CCD camcorder. Try to reduce the resolution and see if the image has any punch or realism in comparison. The output reminds me of a CMOS sensor. The lighting looks normal, but there is a lack of saturation and the blacks have a very unnatural character. If you try to push the shadows you will not get the natural grain-like pattern of a CCD outputting uncompressed video. If you add saturation in post you will add more problems. If this happens at 0 dB of gain, how will a pushed 12 dB image handle the processing? The loss of power in the blue and green spectrum of an incandescent shot will quickly make it look very noisy. Generally, it is a good idea to check all algorithms at high gain with some CCD noise present. It's easier to fix the problems there.

Take Vos January 20th, 2008 02:58 PM

Hello John,

I cannot test my algorithms at high gain; I would need to do all the calibrations again.

The lack of saturation is normal; no color conversion to Rec. 709 space has been done yet, so the RGB is still in camera color space.

I explained that the fixed pattern noise in the dark areas is still apparent; with some tweaking I hope it will be gone. That is what you mean by unnatural: it is the fixed pattern noise of the sensor.
It also seems that because the fixed pattern noise here contains actual zero values, it doesn't look good when increasing the contrast in the dark areas.

If you look at the black patch of the ColorChecker, it does hold up well after pushing the contrast.

Below I include the same image, pushed a bit in Final Cut Pro. The original bayer image was first rendered by Final Cut Pro into the 16-bit float intermediate format. Then the 3-Way Color Corrector filter was used on this intermediate: added some saturation, pushed the mids (which increases contrast in the blacks) and white balanced.

http://www.vosgames.nl/images/Mirage...ed2_pushed.png

Cheers,
Take

John Papadopoulos January 20th, 2008 04:05 PM

You have a bug which creates a positive offset in the Red channel.

Take Vos January 20th, 2008 04:11 PM

Hi John,

How did you notice that? I did notice there is some mathematical/physical seepage of red into the green channels on the red/green lines.

I think I found more proof of the bug: my calibration program should operate almost as well with or without a black field. However, when I don't add a black field it goes completely bonkers. I hate it; I've been looking at this bug for a couple of days now.

Cheers,
Take

John Papadopoulos January 20th, 2008 05:41 PM

I just looked at the frame! There is a red cast all over the frame.

If you correct it, dip the noise to black and change gamma to something approaching video gamma, it will look like this:

http://img265.imageshack.us/img265/9...hed1coplo0.jpg

John Papadopoulos January 20th, 2008 05:49 PM

Btw, did you get the color correction coefficients from the manufacturer preset or calculate them yourself?

Take Vos January 21st, 2008 01:34 AM

Hello John,

I do have the color conversion matrix from the manufacturer, but I'd rather calculate it myself. First, though, I need my other algorithms to function correctly.

The red cast is strange, but I've got all sorts of strange things happening now; it almost seems like something is overwriting the calibration values. The red cast could be caused by wrong calibration values as well.

Cheers,
Take

Paul Curtis January 21st, 2008 04:52 AM

Take,

Great work, I assume all the later images are via the Pike 210? This uses the KAI-2093 CCD sensor at 1920x1080?

Are you able to get the full raw (un-debayered) data from this over firewire (at 30/32 fps)? I didn't think there would be enough bandwidth over firewire for this (or is this the reason for 2.40?).

Do you have any problems with getting lenses to cover the sensor (14.2 mm x 7.9 mm)? The image circle needed for that is bigger than 16mm and a little bigger than S16. Most C-mount and older-style cine lenses would vignette on that size.

cheers
paul

Take Vos January 21st, 2008 04:58 AM

Hello Paul,

Bandwidth is the reason for the 2.40 ratio, but I actually transfer at a 2.00 ratio to get some black bars. It is firewire 800, at 12 bits, and my frame rate is only 24 fps. Also, I get bayer data; debayering is done in the QuickTime codec.
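The bandwidth arithmetic behind that choice, using the figures from this thread (nominal FireWire 800 rate; the usable isochronous payload is lower, so this is only a rough check):

```python
width, height = 1920, 940   # captured frame, before cropping to 800 lines
bits_per_pixel = 12
fps = 24

payload_mbit = width * height * bits_per_pixel * fps / 1e6  # Mbit/s
firewire800_mbit = 786.432                                  # nominal S800
fits = payload_mbit < firewire800_mbit
```

At roughly 520 Mbit/s the 12-bit 24 fps stream fits under S800, while a full-height 1080-line stream at 30 fps would not.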

I have a lens designed for 1" sensors, and it doesn't show vignetting.

All my images are from the Pike 210, although the earlier images were screenshots, which had lower resolution. The newer images are exported from Final Cut Pro.

