View Full Version : 4:4:4 12-bit Uncompressed DVX100



Richard Mellor
May 11th, 2004, 09:24 AM
one last link http://www.imperx.com/datasheets/IPX2M30H_Manual.pdf

Obin Olson
May 11th, 2004, 09:47 AM
Juan, what will the cost of this mod be? Do you have even a ballpark idea?

Obin Olson
May 11th, 2004, 09:49 AM
Richard, that one only does 33 fps... so yes, it would work, but at six grand I don't think it's worth it if you can't even get half-speed slo-mo from it.

Benjamin Palmer
May 11th, 2004, 10:50 AM
Hey folks,
the new version of GraphicConverter will convert the Cineon .cin files to whatever you like (TIFF sequence, etc.) so you can then open them in FCP.
great software, only $35, here: http://www.lemkesoft.de/en/graphdownld_en.htm

Ernest Acosta
May 11th, 2004, 01:02 PM
Great work Juan. I've been reading this thread since you started it. I have suggestions for a name. How about Pert4 Mod or JuaPer4 or JP444 or JP Mod4 or JuanMod4 or Pert Mod4. I like the last two. Any other suggestions?

Obin Olson
May 11th, 2004, 01:18 PM
how fast is it Mr. Palmer?

Randall Larsen
May 11th, 2004, 01:58 PM
Juan,

Keep us posted on your progress with the DVX100 mod. I am also interested in what you think the eventual price of the mod will be with the firewire 800 interface.

I am also interested in a High Def solution as are others on this list. If the DVX100 can be made to provide useable 720p that is great. We also need to look at available alternatives if uprezzing does really work for compositing and effects.

Les and listmembers,

The datasheet I have for the Altasens 3560 sensor is a PDF.

You could email me at randall@eye-net.net and I could send it to you or you may contact Doug at Altasens for a mfg. provided copy.

The Dalsa sensors also look interesting to me. They might be difficult to obtain, since Dalsa is trying to market their own expensive camera. I am still not sure whether I buy Dalsa's argument that a single-chip solution is better than a 3-chip solution. There is also the problem of prism optics compatibility with 35mm M.P. lenses.

One of the advantages of Juan's mod is that it provides 3-chip color in raw form. Perhaps it would be nice to look at Dalsa's L3 compression or some other lossless compression scheme to limit the size of the data capture needed, but we are quite happy to get 12-bit data in 16-bit TIFFs for now.

Benjamin Palmer
May 11th, 2004, 02:50 PM
Hey Obin,
GraphicConverter is as fast as any batch conversion would be... it's raw format to raw format, so it depends on your processor/disk speed, but it took under a minute to do the batch Juan posted, on an 800 MHz TiBook. There is a demo you can download at the URL previously posted.

Peter Plevritis
May 11th, 2004, 03:25 PM
Many probably have heard of huffyuv. It's a lossless compression codec. Seems to work. Don't know if it does 10, 12, or 16 bit per channel.

http://www.divx-digest.com/software/huffyuv.html

The source code is there also.

Ben Syverson
May 11th, 2004, 04:13 PM
A minute to compress five seconds of material is not exactly mindblowing... then again, the limiting factor might be hard drive speed with files this big.

- b

Peter Plevritis
May 11th, 2004, 06:57 PM
huffyuv. i meant it for sharing large numbers of files, not for capturing.

Juan P. Pertierra
May 11th, 2004, 09:19 PM
I wasn't aware that FCP and Photoshop could not open the Cineon files... I think I'm just going to upload a zip file with the TIFF frames and take the Cineons down...

Unless anyone has another suggestion, I will do that tonight. Can someone check if there is any 10-bit or 12-bit format that is handled by FCP? Maybe the DPX format works?

I'm also going to upload some more frames with film comparisons, and different HD uprezzes to get opinions from all of you on what looks best...that way I can upload the entire clip uprezzed. I was able to get the raw frames VERY close to the 35mm stills.

One thing to note, is that color correction after uprezzing is probably not optimal, so if anyone has any color correction suggestions for the frames let me know so I can apply them before uprezzing.

About the cost, I am working on that now. There are a lot of things I have to consider if I am going to sell this, so as soon as I have a figure I'll put it out there and see if all of this is worth doing.

Juan

Juan P. Pertierra
May 11th, 2004, 11:16 PM
I am uploading the TIFF frames now, will post when it is done.

One thing that we didn't note: the zip compression already takes care of the '0' bits that pad the 10-bit samples out to 16 bits, so the TIFF zip is the same size as the Cineon zip. :)

Juan
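
Juan's point about the padding bits can be sketched with synthetic data (a rough illustration only, not his actual pipeline): zip-style deflate largely absorbs the constant zero bits that pad 10-bit samples into 16-bit words, so the padded file compresses down toward the size of a tightly bit-packed one.

```python
import random
import zlib

random.seed(0)
N = 50_000
samples = [random.getrandbits(10) for _ in range(N)]

# 10-bit values stored in 16-bit words (as in the 16-bit TIFFs):
# the top 6 bits of every word are constant zero padding.
padded = b"".join(s.to_bytes(2, "little") for s in samples)

# The same values tightly bit-packed with no padding (Cineon packing
# differs in detail; this is just a zero-overhead baseline).
acc = 0
for s in samples:
    acc = (acc << 10) | s
packed = acc.to_bytes((N * 10) // 8, "big")

c_padded = zlib.compress(padded, 9)
c_packed = zlib.compress(packed, 9)

print(len(padded), len(c_padded))  # padding mostly compresses away...
print(len(packed), len(c_packed))  # ...close to the packed baseline
```

On real frames, which compress far better than this random noise, the two zips come out essentially equal, which matches what Juan saw.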

Les Dit
May 11th, 2004, 11:17 PM
This is an unfortunate part of the film business, there are no real standards for lossy compression that can do more than 8 bits per channel. My business uses a 12 bit jpeg format internally, but I don't think that any common applications handle this. If a non linear editor like Vegas started supporting >8 bits, it would revolutionize the budget digital intermediate world. If anybody has any new information on this, please pipe up!
-Les

Nick Hiltgen
May 12th, 2004, 01:06 AM
This may be a little off topic, but I'm wondering if anyone has a guess as to how much more information would be carried by three 16x9 SD chips as opposed to the ones in the DVX100. Would this be something comparable to the 40-50 MB/s we'd need with Juan's mod, or significantly more?

Juan P. Pertierra
May 12th, 2004, 07:06 AM
The TIFF frames are uploaded, at the same link as before:

http://expert.cc.purdue.edu/~pertierr/output.zip

Jason Rodriguez
May 12th, 2004, 10:13 AM
<<<-- Originally posted by Juan P. Pertierra : Can someone check if there is any 10-bit or 12-bit format that is handled by FCP? Maybe the DPX format works?-->>>

No, the only 10-bit formats that FCP supports are the 10-bit Quicktime Codecs. It also doesn't support frame sequences, those must be passed through Quicktime Pro to be made into a Quicktime movie file. FCP isn't frame-sequence friendly :-(

But of course you could use the timeline in something like combustion if you really need that type of support, and then render out quicktimes for editing in FCP.

Or with Shake you could do a match-frame edit from an offline.

There are ways around the DPX/16-bit Tiff problem in FCP.

Richard Mellor
May 13th, 2004, 11:56 AM
Hi everyone -

I think we are the future of image acquisition! The new modular camera will allow us the lenses of our choice, the chips of our choice (CCD, CMOS), raw capture, and eventual output to the codec of our choice.

All this with repair and upgrade similar to a PC.

This is a link to what that camera will look like: http://www.kinetta.com/home.php

Juan P. Pertierra
May 13th, 2004, 04:59 PM
In playing around with the green speckles, I decided to post a DV-RAW comparison that's fairer. The DV frame is clean in this one, and corresponds to the RAW frame within 24 frames. Note that I was playing around and there is some noise in the green channel that is not obvious to the eye, so the RAW frame is handicapped. :) Also, totally uncorrected...

http://expert.cc.purdue.edu/~pertierr/DVComp2_DV.tif
http://expert.cc.purdue.edu/~pertierr/DVComp2_RAW.tif

Juan P. Pertierra
May 15th, 2004, 11:59 AM
I am adding some functionality to the FPGA design for the prototype; does anyone here have the SMPTE SDI specifications?

Also, there definitely is a way to preview the raw material on the on-board LCD, but so far the only way I've figured it out is by means of modifying the camera... the best option is still an output of some sort. The on-board viewfinder is still useful, but it will clip at a lower luminosity than the RAW footage.

I've also added an LCD screen to the device, along with menu controls to adjust options such as RGB alignment, etc, and possibly compression.

Right off the bat I can implement some simple non-destructive compression like RLE, or maybe LZW if the FPGA runs fast enough. Anyone know if the 50 Mbps DV (DVCPRO50?) specs are available somewhere?

Juan
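
For reference, the kind of simple non-destructive RLE Juan mentions can be sketched in a few lines (a host-side illustration of the idea, not the FPGA implementation):

```python
def rle_encode(data: bytes) -> bytes:
    """Encode as (run_length, value) byte pairs; runs cap at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)


def rle_decode(enc: bytes) -> bytes:
    """Expand (run_length, value) pairs back to the original bytes."""
    out = bytearray()
    for j in range(0, len(enc), 2):
        out += bytes([enc[j + 1]]) * enc[j]
    return bytes(out)


# Flat areas collapse to almost nothing: 250 bytes -> 6 bytes here.
flat = bytes([0] * 100 + [255] * 50 + [0] * 100)
assert rle_decode(rle_encode(flat)) == flat
```

The catch is that RLE only wins on flat areas; sensor noise defeats it, which is why dictionary coders like LZW (or the Huffman coding in huffyuv, mentioned earlier in the thread) do better on real frames.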

Edon Rizvanolli
May 15th, 2004, 01:04 PM
Hi Juan and everyone else,

here is one link with info on DV50 and the other formats:

http://www.adamwilt.com/DV-tech.html

I hope it is helpful.

Randall Larsen
May 15th, 2004, 04:58 PM
Juan,


High Definition:
SDI SMPTE 292M (10-bit at 1.5 Gb/s), (1) female BNC

Standard Definition:
SDI SMPTE 259M (10-bit component at 270 Mb/s; 8-bit composite at 143 Mb/s)

Dual HD SDI SMPTE 292M is sometimes used with the Sony 950 camera to output to an AJA KONA 2 or Sony's 4:4:4 RGB HD recorder, the SR.

The full spec for SD SDI is available for purchase on the SMPTE site:

http://smpte.org/smpte_store/standards/index.cfm?scope=0&CurrentPage=10&stdtype=smpte

under standards by number:

SMPTE 259M-1997 Television - 10-Bit 4:2:2 Component and 4fsc Composite Digital Signals - Serial Digital Interface
$24.00 - Purchase this Document

Your university library might have the SMPTE cdrom or SMPTE
journal 1997 when this was published.

I have Poynton's technical intro to digital video; on page 249 he gives a brief description:

4:2:2 serial component interfaces use a TRS (timing reference signal) to achieve sync, rather than digitized analog sync (same as ITU-R BT.656).

The serial interfaces use ECL levels, 75-ohm impedance, BNC connectors and coax (we knew that).

I've manually added the '0001's to the TIFF download files. It plays back nicely in fcheck. I was able to uprez to 720p with good results, although the speckles are annoying.

Richard Mellor:

I've been to the kinetta site. Jeff Kreines has got a good idea and some expert help. There are some unanswered questions though.

The 35mm lens does not have the same depth of field and focal length characteristics with a 2/3" Altasens sensor. If Kreines could get the Dalsa 35mm-sized sensor it might work!

It's a single-chip camera, which may have advantages in avoiding the chromatic aberration inherent in Sony's F-950 prism block. But avoiding the prism block doesn't make 35mm lenses any more usable, since the image size at 2/3" is wrong. I don't see Kreines using a field lens or the PhotoTeknica ground glass solution!

I am not convinced a single chip camera can get the depth of modulation and the color possible from 3-chips. Although the argument that Digital still cameras are single chip has some weight.

Kreines' capture medium has not been described. Is he using solid-state flash memory? What compression scheme? Has any footage shot with the camera been seen? It looks like vaporware to me!

Nick H:

Just a guess but I suspect 16:9 might require 30% more bandwidth.

HD requires a 75 MHz clock for 16-bit output. Juan's clock is 20 MHz, I believe. His capture board won't go any higher, so he may have difficulty capturing 16:9. However, the plan is to build a FireWire interface, eliminating the necessity of a capture board.

I suspect unsqueezed 16:9 might require a 26 MHz clock. However, I just remembered that the way most dual systems work is by cropping the chip for 16:9, so maybe the bandwidth is about the same?

A better frame grabber would be the Matrox Helios XCL, a single full 12-bit Camera Link interface running at up to 280 MHz.

see:

http://matrox.com/imaging

Randall Larsen
May 15th, 2004, 05:17 PM
Is there a way to run Photoshop in a batch mode? I know Photoshop was originally designed by an ILMer for film work. Have the batch features been left out on purpose?

A quick and dirty color correction is just to run Equalize. The auto level, auto balance etc. don't seem to do the trick. Don't know if I would want to hand time all my footage frame by frame!

Is FCP's color correction adequate to correct Juan's TIFF files? Of course, we have to get the files into FCP, using 10-bit DVCPRO50?

I think after effects now accepts 16bit color files. Of course shake can handle 32bit floating point color files.

I agree it would be nice to build some level/color correction into the FPGA for monitoring purposes.

If it exists a very fast dsp setup might make real time uprezzing possible. This could cost big bucks though.

Randall Larsen
May 15th, 2004, 05:45 PM
Juan and listmembers,

Any news on a ballpark price for the mod? All we really need is FireWire 800 out (monitoring the raw output would be nice, though).

I talked to JVC; the list price on their new HD POV camera is $19,995, so it will eventually carry a street price of $18,000 or less without lens. She didn't know whether the camera would come with a raw or a dual SMPTE 292 out. They are pushing the new HDV, as is Sony. The idea is to partition the market into low end, mid range and high end. Sony would like to sell the high end their HD SR recorder. However, JVC sees that the market will eventually go to hard disk, solid state, or optical.

Tape will continue as a studio and archiving medium for a while.

If a DVX100 can get usable images for film transfer, Juan has just made the low end equivalent (in many respects) to the high end.

Juan P. Pertierra
May 15th, 2004, 06:01 PM
I'm looking at that right now. Adding a digital out such as SDI ~should~ not add anything to the price, but an analog out might add just a tad.

What's the consensus on this? Do we need a monitor out and what kind is best? I'd rather implement just one thing that everyone is going to use. S-video out?

It is clear by now that to really adjust the camera for a specific shot you need to view the RAW output, and the DV viewfinder/LCD only works for framing, and maybe focusing.

Randall Larsen
May 15th, 2004, 06:17 PM
Juan,

Once the camera is tested we could rate the sensor like film stock.

The DP can use a light meter to predict what the picture can be printed to.

Are you saying we need monitoring to decide what ND filter to apply? Perhaps we need a daylight 85B filter to precorrect color?

Perhaps we need to have many different NDs or set the iris manually from a light meter reading?

Once the performance of the chip is known, in theory the built-in monitoring in the DVX could be adjusted to roughly correspond.

I see the issue of iris and ND settings as something that won't vary much once the operating characteristics of the camera are known.

Now, some DPs and users like us might want to paint the color on the set. RAW files are supposed to make different looks possible in post. You don't want to overcorrect on the set, leaving yourself no options in post! There have to be default settings at different light levels. Gamma etc. should be applied in POST!

Having said this, filter effects are hard to predict. Will the image be too soft with a Pro-Mist filter? Unless you have a real HD monitor on the set, it's hard to tell! It's also hard to even tell if the camera is in focus for HD without an HD monitor.

Will we be getting better than the rated 500 horizontal TV lines of resolution out of the camera? If so, you need a good monitor even to focus!

Jeff Donald
May 15th, 2004, 06:22 PM
Is there a way to run photoshop in a batch mode. I know photoshop was originally designed by an ILMer for film work. Have the batch features been left out on purpose?
Randall, Photoshop was written by Thomas Knoll (http://www.photoshophalloffame.com/Thomas_Knoll.html). His brother John now works for ILM.

You can create actions or droplets to do batch processing in Photoshop.

Jason Rodriguez
May 15th, 2004, 07:07 PM
How can you monitor the output of the RAW files at 12 bits on an 8-bit monitor without some form of LUT? Is monitoring really that big of a deal, especially when you know that if you're getting a good image on the camera's built-in LCD (no crazy clipping), you're getting great highlight detail on the RAW output? Just don't clip too much on the LCD and you're fine. Think of the RAW files as added headroom, like 24-bit audio has over 16-bit audio.

Actually, if anything is added, I think the suggestion of an "up-rezzing" chip is a wonderful idea. The Varicam (Panasonic HD) only records to tape an image with a horizontal resolution of 960 pixels, not too much more than what Juan has with the DVX100 mod. So you could choose between standard RAW output and an uprezzed RAW output.
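
The viewing LUT Jason mentions can be sketched like this (a minimal gamma-style table; the 1/2.2 exponent is just an assumed example, not anyone's calibrated transform):

```python
# Build a 4096-entry table mapping 12-bit linear code values to 8-bit
# display values, lifting the shadows with a simple gamma curve.
GAMMA = 1 / 2.2
lut = bytes(round((v / 4095) ** GAMMA * 255) for v in range(4096))

def preview(frame):
    """Apply the viewing LUT to an iterable of 12-bit code values."""
    return bytes(lut[v] for v in frame)
```

A hardware equivalent would just be this table sitting in block RAM, indexed once per pixel, which is why per-pixel LUTs are cheap even in an FPGA.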

Jeff Donald
May 15th, 2004, 07:15 PM
The RAW convertor should do the up-rezzing, to take advantage of the more powerful CPU and better algorithms.

Juan P. Pertierra
May 15th, 2004, 07:51 PM
I agree with Jeff, I think by all means the uprezzing should be done on the PC, and the RAW SD output should be kept as the original or 'negative'.

I'm no expert in color correction, but I believe all color correction should be done on the SD original and then it should be up-rezzed. Up-rezzing creates some artifacts which I believe will be amplified if the footage is color corrected afterwards.

I might be wrong, any experience with this?

Juan

Jeff Donald
May 15th, 2004, 08:03 PM
Up-rez the file first, then color correct, enhance, etc.; sharpening should be last in the workflow. Sharpening will vary depending on the intended purpose of the files: sharpening for output to film will be different than sharpening for video projection, broadcast, etc.

Juan P. Pertierra
May 15th, 2004, 09:39 PM
That makes sense... I guess I was assuming that sharpening was part of the up-rezzing, since PhotoZoom does it under the preset settings...

If anyone knows the details of what algorithm PhotoZoom/S-Spline uses, that would be great... if it's not patented or something to that effect, I can probably implement it in my software.

Randall Larsen
May 15th, 2004, 11:18 PM
Juan and listmembers,

There is an interesting discussion of one-chip Bayer-filter cameras such as the Arri, Dalsa, and Kinetta on one of the CML lists. The verdict was that 3-chip cameras are best unless it's a 4K-resolution chip implementing the Bayer filter. Arri limits their output to 1920x1080 or so to limit the motion artifacts introduced by Bayer filtering.

http://www.siliconimaging.com/Specifications/SI6600%20Manual%20R1_7.pdf

has the spec for the 1.2-gigabit Camera Link interface used by most industrial and scientific camera manufacturers for their HD sensors.

Silicon Imaging has built a Camera Link to gigabit Ethernet interface. This allows the camera to be 100 meters from the computer with copper Cat 6 UTP cable, no fiber!

Now, Camera Link may be overkill for the signal coming out of the DVX100, but it would be nice to have a design that could be transported to a new HD-chip camera when they become cheap enough. Maybe gigabit Ethernet is better than 800 Mb/s FireWire?

Jason Rodriguez
May 16th, 2004, 12:41 AM
<<<-- Originally posted by Randall Larsen : The verdict was that 3 chip cameras are best unless its a 4K resolution chip that implementing the bayer filter.-->>>

I'm on the CML and I do not believe that was the verdict, except by certain individuals who make products for 3-chip cameras.

According to papers written by Dalsa, a Bayer sensor will have about 75% of the resolution that a 3-chip camera will have at any given resolution setting. So a Bayer sensor will give you around 1440 horizontal pixels of real resolution compared to the 1920 that a true HD camera can give you. Now, you ARE getting 1920 horizontal pixels out of the Bayer imager; it's just that, like HDCAM, the actual amount of real resolution is around 1440 horizontal pixels. Which isn't bad; we've been living with this for a while with HDCAM. In fact, all the HD formats right now except for HDCAM SR and D-5 pre-filter the image before compression.

I saw output from the Arri D-20 at NAB and it was great. I saw the output from the Dalsa Origin at NAB and it was great. The Arri, BTW, is only a 3K chip, and they're basically outputting 2K horizontally, a little less than the 75% I was talking about earlier (more like 66%), which is probably to play it safe.

Anyways, I'm sure the Kinetta will look great. And all this rumoring that Bayer "won't be any good" just doesn't seem to pan out with the experiences I've had viewing footage off these systems that have good Bayer-sampling algorithms.

Randall Larsen
May 16th, 2004, 12:46 AM
Jeff Donald,

Thanks for the photo batch tip.

Jason Rodriguez, and listmembers

Good point about the LUT (Juan would need to do this in an FPGA).

I like the idea of uprezzing right away. I don't know if that is going to complicate the color correction by introducing artifacts. As I understand it, the PhotoZoom algorithm is interpolating the missing pixels with some sort of S-Spline-based interpolation. We really have to find some articles on this, as well as on L3 and wavelet compression.

For transfer to film I'd say leave putting "detail" or enhancement in until last. You don't know how this is going to play with the light and filter effects the DP uses. Perhaps you don't want any "detail" correction at all if you've got 720P.

In designing the camera interface Juan really does have to think about the whole production pipeline. What will the workflow be?

Monitoring:

Juan,

Could the "monitor" have a histogram (digital) or waveform monitor overlay feature (analog). Maybe a vectorscope implemented in software would be nice (if we had analog).

The advantage of a waveform monitor would be to assure that
we are only lowering the gain enough (or NDing enough) to keep the highlights from clipping. If we lower the gain or the light levels too much we lose detail at the low end.

We might lower the light and simultaneously raise the gain because:

It's very difficult to get good rack-focus effects with 1/3" chips. You have to ND a lot or use long lenses to decrease the depth of field enough to get cinema-style DOF effects.

This is one of the advantages of the mini-35 approach to using the 1/3" sensors yet retaining cinema style DOF.

There is also an optimal setting for gain vs. the signal-to-noise ratio. As everybody knows, if we raise the gain more than necessary, we get more noise than necessary. If you have 12 bits, maybe you don't need to raise the gain so much.

Juan's 12-bit shot of the trees was amazing. Uncompensated, inside the foliage it was totally dark.

However, because of the extra bits you could actually pump up the picture in all but the darkest areas (at the sacrifice of clipping, as in the highway shot).

With some dodging, soft clipping, and some black stretch, you could create a picture that showed lots of detail and fit within the dynamic range of your viewing monitor (8-bit) or film stock (10-bit).

The image would simulate more realistically what the human eye and brain see with foveal sampling.

FOCUS:
Maybe the slow autofocus built into the DVX100 is good enough. There are also conversion tables to convert the percentages on the lens focal distances to feet or meters. A real camera crew "tapes" critical focal distances. Of course, that procedure assumes you have a well-calibrated lens.

Randall Larsen
May 16th, 2004, 02:18 AM
Jason Rodriguez,

Perhaps I overstated the "consensus" for 3 chip cameras on CML.

Do you know of any 35mm-image-area sensors that are publicly available? Dalsa and Arri have 35mm-image-area sensors, but the biggest ones I've found that one can order run at 24 fps in a 1" optical format.

Dalsa has one camera (maybe they sell the chip) that can be pushed to 20fps. The chip probably could be pushed to 24fps by reading sections in parallel and assembling the image in a frame store.

I am glad to hear a report on the Arri D-20. This camera, I believe, has the advantage of a 35mm motion picture imaging area. Dalsa also claims this advantage.

75% of the resolution of a 3-chip configuration may be the visual impact of Bayer to a viewer. One or two postings doubted that a Bayer camera of similar resolution to a 3-chip camera would be as good for green-screen and matte work. Some said it might be.

How did the Bayer cameras look when recording motion? Did you notice any artifacts?

I personally would rather have 1920x1080 than 1440 by 1080. Perhaps as you say it doesn't make a difference. But in the long run Display of HD will improve. 1440 may look soft in the future. Maybe I will want to perform a digital zoom on my footage to repurpose some of my shots. Maybe I will need to extract close ups from my wide shots. I need all the resolution I can get!

Did any single chip camera mfg. show 60fps images from any of these cameras?

I assume Kinetta did not show any footage or live demo? I like the Altasens sensor, but how will Kinetta get around its 2/3" format size in that small camera (no room for a field lens)?

1440-line HDCAM is not really good enough for tape-to-film transfer or for effects work. HDCAM SR should be OK.

However there is no need to buy expensive D6 or HDCAM SR or D5 recorders when you could output 12 bit uncompressed raw data to relatively cheap SATA and EIDE drives over firewire 800 or gigabit ethernet. There are lossless compression schemes that could be implemented (like Dalsa's L3) to save bandwidth and disk space without giving up the camera resolution you paid for.

I would like to avoid HDCAM or other compressed formats. These formats are part of the camera companies' business models now. It's a market segmentation scheme. Anybody should be able to record full-bit-depth, full-resolution signals if they really want to. That is what Juan's mod is all about.

If you need to archive the shot because you don't trust hard drives, use Blu-ray or blue-laser recorders.

The different compressed formats no longer reflect the technical limitations of recording digital video. They are an artifact of the market segmentation scheme camera vendors want to impose on us to extract consumer surplus (Broadcaster dollars) from the marketplace.

So Jason, I never said "Bayer won't be any good." I just said "Bayer won't be as good." Will Bayer be cost effective? Perhaps. There are yet roadblocks to be overcome.

There are roadblocks in building a good 3 chip camera as well.
Can Sony work out the problems it has with chromatic aberration in its prism system? Can film lenses be practically used with prism cameras using the ground glass technique perfected by the Germans?

Can CMOS fabricators build 14 megapixel chips that can be read out full resolution at 60 fps?

The whole beauty of Juan's mod is that for practical purposes a raw data file from a DVX100 may be "good enough" for HDTV and for low budget film transfers.

Capra Mauro
May 16th, 2004, 03:18 AM
Hi Juan,
I've downloaded the clip. In my opinion the speckles aren't random: they appear where the color is closest to white. Maybe the capture setup you are using can't handle that value and so "clips" the corresponding pixel. If you identify the value that clips, you can choose the right headroom for the capture (I think).
Furthermore, the RGB channels don't seem well aligned: the green channel is shifted too far (something like 2 pixels) down and to the right (look at the writing on the tank).
For me the standard-output solution is better than the uprezzed one: it avoids the artifact problems, it is lighter to process and store, and it's good enough for normal TV projects.
I hope this helps.
Bye

John Cabrera
May 16th, 2004, 04:49 AM
Juan,

I guess I haven't actually posted this, but please put me on the list for the mod... if you do actually have a list going.

Thanks,
John Cabrera

John Cabrera
May 16th, 2004, 06:49 AM
Juan,

I noticed something that maybe you're already aware of in regards to the alignment of the RGB channels. I'd been trying to move them to the correct alignment (as I'm sure everyone has) to no avail. One pixel one way shifts green to magenta. I tried upconverting so that I could work in essentially half and quarter steps, but still no luck.

Then I noticed something with your raw file DVComp2_RAW.tif. The green channel seems to have a slightly larger field of view (about a pixel's worth). If you look in channel mode you can see it by switching between the red channel and the green channel while focusing on the far right balcony post that's sticking out of frame right. There's a slight shift in the amount of the post that's viewable. However, if you look at the other side of the frame (pretty much any point, but I chose the upper left corner of the frame at the top of the sliding glass door), you'll notice that there isn't nearly that much of a shift anywhere along that side of the frame. Of course there's a little shift with all three channels, but it's clear to my eye that what's happening on the other side of the frame isn't a shift at all but rather a wider field of view in the green. The blue channel seems to be about right compared to the red.

I tried correcting the green by resizing it. I enlarged it by a pixel on the horizontal axis and this seemed to help a bit. Just thought I'd mention it. If that's being caused by the optics of the camera, you may be able to fix it in the software by doing a slight resize of the green channel... of course, if it's being caused by the prosumer optics, it may be slightly different in everyone's camera.

Ben Syverson
May 16th, 2004, 10:36 AM
If you resize the green channel (which carries the majority of the luminance information we see as "detail"), you get into issues of resampling, which introduce a generational loss. Also, there's no way the channel could be resampled in "real time" unless you used a very low-quality interpolator. Even an average interpolator, such as a 16-pixel bicubic or spline, would take too long...

- ben

Thomas Smet
May 16th, 2004, 02:35 PM
Hey Juan could you capture a still frame of an object in front of a solid blue or solid green background. It could just be a small piece of green or blue paper that is fairly saturated. Maybe you could just put your hand in front of the paper. It would be nice if we could test one of the most important reasons of having a 4:4:4 mod which would be keying. We may also have an easier time to figure out any alignment issues of color channels this way.

If you could just point one light at the object and color background. Shadows shouldn't matter because we will not be trying to get a perfect key. We just need enough to test the edging.

John Cabrera
May 16th, 2004, 02:54 PM
Ben,

According to what Juan proposed earlier about uprezzing, he seems to believe that an interpolator such as bicubic or spline could be implemented in his software. Take a look at some of Juan's last posts.

As for image quality loss, I did a resample in Photoshop of the green channel using bicubic and my eye didn't notice any quality loss (I only resized by 1 pixel)... what my eye did notice was an improvement in the green channel's chroma shift... and I don't know about you, but that lack of alignment is what I consider the poor quality. You gotta give a little to get a little, you know.

Ben Syverson
May 16th, 2004, 03:02 PM
John,

Any quality interpolation will not happen in real time on modern computers. It's just a mathematical reality -- at 24 fps, you're looking at over 7 million pixels a second passing through the interpolator, and each pixel needs to see a neighborhood of at least 16 pixels.

But maybe Juan has some awesome algorithms he should patent immediately. :)

I would hope that if he did implement interpolation, he'd use spline rather than bicubic. Juan, if you want any help with the interpolation code, let me know -- I've got some lying around for really nice 16- and 36-pixel spline interpolation...

- ben
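
As a concrete (unoptimized) sketch of the kind of cubic interpolation being discussed, here is a Catmull-Rom kernel used for a sub-pixel shift of one scanline. This is an assumed illustration, not Ben's code; real use would apply a 2-D (e.g. 4x4) neighborhood per pixel:

```python
import math

def catmull_rom(p0, p1, p2, p3, t):
    """Cubic Catmull-Rom interpolation between p1 and p2, 0 <= t <= 1."""
    return 0.5 * (
        2 * p1
        + (p2 - p0) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
        + (3 * p1 - p0 - 3 * p2 + p3) * t ** 3
    )

def subpixel_shift(row, shift):
    """Resample a scanline at x + shift using Catmull-Rom, clamping edges."""
    def px(k):
        return row[max(0, min(len(row) - 1, k))]
    out = []
    for x in range(len(row)):
        src = x + shift
        i = math.floor(src)   # integer part of the source position
        t = src - i           # fractional part drives the cubic
        out.append(catmull_rom(px(i - 1), px(i), px(i + 1), px(i + 2), t))
    return out
```

Each output pixel reads 4 neighbors here; doing it in both axes gives the 16-pixel neighborhood Ben mentions, which is where the 7-million-pixels-a-second arithmetic bites.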

Juan P. Pertierra
May 16th, 2004, 04:36 PM
Randall (waaay up there):
I just wanted to make this clear: ALL the raw captures I have posted so far are 10-BIT. Nothing I have posted so far is 12-bit, because my test gear doesn't handle the extra bits. So everything I have posted is handicapped in color precision, and has a little less latitude than the final prototype output will.

Now, about the interpolation... I think it is best to allow the user to decide, and Ben, if you can email some spline interpolation algorithm, that would greatly help. I'm sure it's not that difficult to implement, especially on the PC.

It would get really difficult to do it on an FPGA, especially since it will be running at around 100 MHz or so. I don't think real time is an option.

About the alignment: my current code, which the clip came out of, does not align the channels; right now it just puts them on top of each other as they are... it's a matter of three variables, but I haven't added that yet.

I did notice, however, what others have pointed out... that the green channel seems to be offset from one side of the frame to the other, but I think this is due to the level of manufacturing and alignment of the camera... we might just have to live with it.

The only solution I can think of is to uprez the R, G, B channels separately and then use the extra resolution to do a fractional alignment in the original image space.

Green/blue screen capture: sure! I just don't have a blue/green screen, so what do you guys suggest? Can I get something at Lowe's, or just a green piece of construction paper/poster board?

That's a good idea...probably one of the most important tests. I imagine lighting is also very critical for a good key?

John Cabrera
May 16th, 2004, 05:02 PM
For the last film I did, we used green fluorescent poster board from the local Walgreens drug store for one of our days of pickups when we didn't have the green screen stage anymore. It worked just as well as the stage we worked on with the exception of the seam lines between pieces of boards.

Also, I've looked at it several times, and it's not a fractional offset but rather a slightly different image, proportionally... everything on the left side of the frame is much better aligned than on the right... which tells me that the images themselves are too different in FOV to ever align right. I already tried uprezzing the footage by 400% to do a fractional alignment and it did not work. I agree that it's probably just an issue with the manufacturing, but when I did a resize of the green channel in Photoshop to compensate (enlarged it by 1 pixel), the alignment got noticeably better. I agree that using the algorithm for uprezzing the footage to HD would merely be a perk, but using the algorithm to compensate for this manufacturing defect (not an offset defect but rather a FOV defect) may be a crucial part of getting a good green-screen key with the camera. Having the green channel gradually shifting like that across the image will cause problems with green screen, I promise.
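
John's one-pixel-wider green channel amounts to resampling each green scanline to a slightly different width, which redistributes the FOV difference smoothly across the frame. A minimal linear-resample sketch (an assumed illustration; Photoshop's bicubic is higher quality):

```python
def resize_row(row, new_w):
    """Linearly resample a scanline from len(row) samples to new_w samples."""
    old_w = len(row)
    if new_w == 1 or old_w == 1:
        return [float(row[0])] * new_w
    out = []
    for x in range(new_w):
        src = x * (old_w - 1) / (new_w - 1)  # map endpoints to endpoints
        i = int(src)
        t = src - i
        j = min(i + 1, old_w - 1)
        out.append(row[i] * (1 - t) + row[j] * t)
    return out

# e.g. squeezing a 720-sample green line to 719 samples (or stretching to 721)
# applies a per-pixel shift that grows linearly toward one edge of the frame.
```

Run on a small ramp, `resize_row([0.0, 2.0, 4.0], 5)` fills in the midpoints while keeping both endpoints fixed, which is exactly the gradually increasing shift John describes.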

Juan P. Pertierra
May 16th, 2004, 05:06 PM
I wonder if this problem is also evident in the DV frame I posted? i.e., how does the camera originally deal with it? I'm thinking they just got it as close as possible...

John Cabrera
May 16th, 2004, 06:54 PM
I just took a look at it... from what I see, there's no difference between any of the three channels in the DV frame. The camera has to be doing something to compensate for what's happening in the green channel. I'm gonna keep studying the raw version and the DV version to see if there are more clues as to what's happening.

John

John Cabrera
May 16th, 2004, 08:32 PM
When I place the DV still on a layer above the raw version, they don't match up at all. The DV version is narrower on its horizontal axis than the raw version. I had to enlarge it by about 106% horizontally for the balcony posts to line up on the two.

Why is that? Could it be a clue? And what does that make the correct pixel aspect ratio for the RAW footage you're pulling off the chips? I guess I assumed it was the same as DV's 0.9.

Am I missing something?

John

Ben Syverson
May 16th, 2004, 08:53 PM
Green fluorescent posterboard is actually a near-perfect color for green screen.

Juan, you shouldn't have to uprez to correct the problem -- you should be able to just use interpolation to do a sub-pixel distort or scale or whatever.

I'll dig up those interpolation functions and send them over with my notes!

- ben

John Cabrera
May 16th, 2004, 09:08 PM
When I say narrower on the horizontal axis what I mean is it's more compressed horizontally than the RAW version.

John