Justin Molush
January 31st, 2012 03:49 PM
Re: Canon 4k Raw Codec Patent application
Gonna browse through it and pick out some tidbits....
Quote:
A method of reconstructing image data for successive frames of a moving scene, comprising: accessing a mixed resolution sequence of mosaiced frames of raw sensor data recorded on a recording medium, wherein the raw sensor data comprises pixel data for an image sensor having pixels arranged in correspondence to a mosaic of plural different colors in a color filter array; wherein each frame of the mixed resolution sequence of mosaiced frames of raw sensor data comprises pixel data sampled at one of at least first and second sampling mosaic patterns respectively corresponding to first and second resolutions in which the second resolution is lower than the first resolution
|
Quote:
The process steps comprise designating a first sampling mosaic pattern for sampling the pixel data at a first resolution, designating a second sampling mosaic pattern for sampling the pixel data at a second resolution which is lower than the first resolution, selecting one of the first or the second sampling mosaic patterns for a frame by applying a predetermined rule, sampling pixel data of the frame using the selected sampling mosaic pattern, and recording the sampled pixel data for the frame onto a recording medium.
|
So it records 4K and 2K frames in sequence, and uses the 4K sample to upscale the 2K frame in post?
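Just to make that claim concrete, here's a toy Python sketch of one possible "predetermined rule" — every Nth frame read out at full resolution, the rest at the lower one. The 1-in-4 interval and all the names are my guesses, not anything from the application:

```python
# Toy sketch of the "predetermined rule" for picking a sampling mosaic
# pattern per frame. My own guess at the scheme: every Nth frame is read
# out at full (4K) resolution, the rest at 2K.

HIGH, LOW = "4K", "2K"

def select_pattern(frame_index, high_res_interval=4):
    """Return which sampling mosaic pattern to use for this frame."""
    return HIGH if frame_index % high_res_interval == 0 else LOW

sequence = [select_pattern(i) for i in range(8)]
# -> ['4K', '2K', '2K', '2K', '4K', '2K', '2K', '2K']
```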
Quote:
Repeated application of the steps of selecting, sampling and recording can be performed, for successive frames of the moving scene, whereby a mixed resolution sequence of mosaiced frames of pixel data is recorded on the recording medium. The second sampling mosaic pattern can be a subset of the first sampling mosaic pattern.
|
I wonder what this means for color depth? If this is a method of getting a raw sensor dump, the color depth could be enormous. Can the second sampling pattern be used to bolster the data of the first frame? (As it says, this happens in very close temporal proximity, so the sensor readout HAS to be extremely fast so as not to introduce ghosting... *read: minimal jello*.)
I read this as: either it reads out a second frame for extra color data, OR it uses it as an extra frame for lower-resolution, higher-framerate output (i.e. 2K, as opposed to the blended, semi-upscaled 4K it seems to be talking about).
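The "subset" language is easy to picture with a tiny example. Here's a sketch where the low-res pattern keeps only some of the 2x2 Bayer quads of the full mosaic, so every 2K sample already exists in the 4K readout. The exact quad spacing is my assumption; the application doesn't spell it out:

```python
# Tiny illustration of "the second sampling mosaic pattern can be a subset
# of the first": from an 8x8 stand-in mosaic, keep only the 2x2 Bayer quads
# at every other block position. Every low-res sample then already exists
# in the high-res mosaic.

full = [[r * 8 + c for c in range(8)] for r in range(8)]   # "4K" mosaic
keep = [0, 1, 4, 5]               # rows/cols of the retained Bayer quads
low = [[full[r][c] for c in keep] for r in keep]           # "2K" subset

full_values = {v for row in full for v in row}
assert all(v in full_values for row in low for v in row)   # strict subset
```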
Quote:
Each frame of the mixed resolution sequence of mosaiced frames of raw sensor data comprises pixel data sampled at one of at least first and second sampling mosaic patterns respectively corresponding to first and second resolutions in which the second resolution is lower than the first resolution. The process steps further comprise identifying whether an active frame corresponds to the first sampling mosaic pattern or to the second sampling mosaic pattern, and applying first or second processing to the active frame in respective correspondence to the identification of whether the active frame corresponds to the first sampling mosaic pattern or to the second sampling mosaic pattern. The first processing demosaics the raw sensor data by using the raw sensor data for the active frame, so as to obtain image data for the active frame. The second processing demosaics the raw sensor data by using the raw sensor data for the active frame as well as raw sensor data for a high resolution frame in close temporal proximity to the active frame, so as to obtain image data for the active frame. In addition, the process steps comprise outputting the image data for each frame for at least temporary storage and rendering.
|
At first I read this as the camera creating internal proxies, ready to use right out of the camera, but I highly doubt it.
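The way I read the two processing branches in that claim, a 4K frame demosaics from its own raw data, while a 2K frame also borrows raw data from the temporally nearest 4K frame. Here's a rough sketch of that branching — `demosaic` is just a stand-in, not a real CFA interpolator, and the frame layout is my own:

```python
# Sketch of the two demosaic paths: "first processing" uses only the active
# frame's raw data; "second processing" also pulls in the raw data of the
# nearest high-resolution frame. demosaic() is a placeholder that just
# reports how many raw frames fed the reconstruction.

def demosaic(raw, *extra_raw):
    return {"sources": 1 + len(extra_raw)}

def process(active, frames):
    if active["pattern"] == "4K":
        return demosaic(active["raw"])                       # first processing
    nearest_high = min((f for f in frames if f["pattern"] == "4K"),
                       key=lambda f: abs(f["index"] - active["index"]))
    return demosaic(active["raw"], nearest_high["raw"])      # second processing

frames = [{"index": i, "pattern": "4K" if i % 4 == 0 else "2K", "raw": None}
          for i in range(8)]
```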
Quote:
More specifically, recording can be performed with a set of high resolution frames interlaced with one or more sets of lower resolution frames, where the pixels in the lower resolution frames are a subset of the high resolution pixels. Because of the interlacing of high and low resolution frames, the lower resolution frames can be reconstructed at the high resolution during demosaicing. Furthermore, data rate during recording can be reduced compared to recording all frames exclusively at the high resolution. In addition, the sensor data in each frame may be uncompressed, or compressed using an algorithm that introduces minimal artifacts, while the resolution of each frame may change with time.
|
Quote:
Thus, by a mixture of high and low resolution frames, the lower resolution frames can be reconstructed at the high resolution using temporally neighboring high-resolution frames during demosaicing. Furthermore, data rate during recording can be reduced compared to recording all frames exclusively at the high resolution. In addition, the sensor data in each frame may be uncompressed, or compressed using an algorithm that introduces minimal artifacts, while the resolution of each frame may change with time.
|
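The data-rate claim checks out with quick back-of-envelope arithmetic. Assuming a 1-in-N interleave and a 2K mosaic carrying a quarter of the samples of a 4K one (half the resolution in each dimension) — both my assumptions, not Canon's numbers:

```python
# Back-of-envelope on the recording data-rate saving from interleaving.
# Assumptions (mine): one 4K frame per group of N, and a 2K mosaic holding
# 1/4 the samples of a 4K mosaic. Purely illustrative.

def relative_rate(group_size, low_fraction=0.25):
    """Average per-frame data volume vs. recording every frame at 4K."""
    return (1.0 + (group_size - 1) * low_fraction) / group_size

relative_rate(4)   # -> 0.4375, i.e. well under half the all-4K rate
```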
Someone with a better mind for patent applications should go over it with a fine-tooth comb.