DV Info Net

4:4:4 10bit single CMOS HD project (https://www.dvinfo.net/forum/alternative-imaging-methods/25808-4-4-4-10bit-single-cmos-hd-project.html)

Jason Rodriguez June 20th, 2004 03:47 PM

Hey Steve,

Quick question here on the rolling shutter (again :-)

Okay, please tell me if I have this straight. Right now the shutter is working like this:

line 1: Read, output, reset
line 2: Read, output, reset
line 3: Read, output, reset
. . . .
line 32 : Read, output, reset

so in other words, each line has to read, output its stuff, and then reset before it gets to the next line.

Can the camera work like this?:

line 1: Read
line 2: Read
line 3: Read
. . .
line 32: Read

and then, say, after getting to line 32, or line 102, or line whatever, it starts to "roll up", going like this:

line 1:Output, reset
line 2: Output, reset
line 3: Output, reset
. . .
line 32: Output, reset

If the shutter could work in this manner, then it'd be the exact same thing as a film camera's curtain shutter (the mechanical shutter in a film camera), and that should reduce any problems with the "rolling shutter" artifacts we're having, if I'm getting it straight.
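(To put numbers on the scheme above: a minimal sketch, assuming a 27 MHz pixel clock and a 720-line frame purely for illustration. Under the current rolling readout, each line's exposure window starts one line-readout-time after the line above it, and that offset is the skew:)

#include <cstdio>

int main() {
    const int    lines     = 720;                    // assumed 720-line frame
    const double pixel_clk = 27e6;                   // assumed 27 MHz pixel clock
    const double t_line    = 1280.0 / pixel_clk;     // time to read one 1280-pixel line
    const double exposure  = 1.0 / 48.0;             // example 1/48 s integration time

    for (int i = 0; i < lines; i += 240) {
        double start = i * t_line;                   // line i starts integrating this much later
        std::printf("line %3d: exposed %7.3f ms .. %7.3f ms\n",
                    i, start * 1e3, (start + exposure) * 1e3);
    }
    std::printf("top-to-bottom skew: %.1f ms\n", lines * t_line * 1e3);
    return 0;
}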

Les Dit June 20th, 2004 07:17 PM

The horror of rolling shutter explained ;)
 
Jason,
The following doc has a great write-up on the whole rolling shutter issue. Check out the picture of the school bus; it's what I meant by having each scan line moved over by a pixel or so.
(Not an interlacing artifact, which is every other line moved over.)

http://www.kodak.com/global/plugins/...Operations.pdf

-Les

Obin Olson June 20th, 2004 08:27 PM

Perfect! That school bus is what my camera looks like when you pan fast with the MHz on the camera set low - like 27 MHz.

Steve Nordhauser June 20th, 2004 09:10 PM

Jason on RS readout:
When a line is read, the charge is moved to a one line shift register and shifted out. There is no room for another read until that line goes through the shift register one pixel at a time (2x for double tap) to the A/D.

Wayne:
Camera Link does not limit the max clock rate - the on-board A/D and shift register do. This means a local buffer won't speed up the readout.
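(Rough numbers behind that, assuming a 1280-pixel line and a 27 MHz pixel clock: draining one line through the shift register takes 1280 / 27,000,000 ≈ 47 µs, or about 24 µs with the double tap. That per-line drain time is the floor on how fast the rolling read can advance, no matter how much buffering sits downstream.)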

Jason Rodriguez June 21st, 2004 12:32 AM

Okay, I read the Kodak paper, and everything makes sense; rolling shutters work the way I thought, that is, like a focal-plane shutter on a still camera.

So the question is, how come there's so much distortion in the image without motion blur? I think the real problem here is that you're getting this "slant" because there's no motion blur. That image of the bus, if it's really moving that fast, or the camera is moving that fast, should be very blurred in a traditional camera. With the rolling shutter in this camera, how come it's not?

I really think that's our problem: we should be getting some good motion blur here when we're panning around real fast or when something's moving through the frame quickly, and we're not. For instance, on a still camera, 1/48 of a second can produce a lot of blur if you're whipping the camera around. I haven't noticed any amount of blur like that on any of the rolling-shutter cameras I've seen so far from this thread.

Rob Scott June 21st, 2004 06:49 AM

Quote:

Jim Lafferty wrote:
Write even and odd frames to separate disks
That's a good idea, and I'm not sure why I didn't think of it :-)

In addition, to get the most performance out of the Capture phase, I've been thinking about "pre-allocating" large chunks of disk space before capturing.


I'm thinking most people using this system would want to dedicate a drive (or two) to capture anyway. So here's how I'm thinking it might work...
  • Format the drives to eliminate any possibility of fragmentation
  • Configure the Capture software to pre-allocate, say, 100 GB on each drive
    (The amount of space on each drive would need to be equal, even if the drives are not the same size).
  • Capture would pre-create files in 256MB chunks up to 100 GB.
  • Capture would take note of how long each file took to write and keep a catalog of this information.
My thinking is that this would help determine how many fps could be captured at any time. Toward the end of the drive we might not be able to handle higher frame rates, so we'd be able to warn the user.

Using a scheme like this, we should be able to support as many drives as practical without requiring RAID configuration.

The "Convert" software would recognize the scheme and automatically recombine the frames appropriately when doing its processing.

Obin Olson June 21st, 2004 09:58 AM

OK, I have 2 frames uploaded as 16-bit TIFF files:

www.dv3productions.com/test_images/studio095.tif
www.dv3productions.com/test_images/studio096.tif
5.2megs each

Rob S, have you found a good bayer filter yet?

Les Dit June 21st, 2004 10:41 AM

Obin, nice images. Can you upload the pre-Bayer ones that those came from? (B&W)
I'd like to look at the raw pixels from the camera for noise. The Bayer conversion smears it all out...
It's Monday, I know!

Edited: Actually, there seems to be an image pan on those.
Oh well.


-Les

Obin Olson June 21st, 2004 11:09 AM

Maybe... I am not sure I have the pre-Bayer file; I will check.

I am uploading a 180 MB 8-bit SheerVideo codec QuickTime file at full native HD resolution. It has some overexposure in the image, but I think some people would like to take a look.

Obin Olson June 21st, 2004 11:26 AM

OK Les, I have 2 images; the camera was not moving at all. It's flowers outside. I will upload as soon as the big 180 MB file is done.

Rob Scott June 21st, 2004 11:38 AM

Quote:

Obin wrote
have you found a good bayer filter yet?
I have several algorithms, but I haven't attempted to implement any in C/C++ yet. I'll be doing that soon.

I was going to start here: http://www-ise.stanford.edu/~tingchen/
... and implement a few of the better-performing algorithms such as "Linear Interpolation with Laplacian 2nd order color correction terms I" and "Pattern Recognition". If anyone has any better suggestions, please let me know.
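(For comparison, the simplest possible version of that inner loop -- plain bilinear interpolation of the green plane only, far cruder than the Laplacian-corrected method named above; an RGGB layout is assumed:)

#include <cstdint>
#include <vector>

// raw: W*H Bayer samples (10/12-bit values stored in uint16_t), RGGB layout assumed.
std::vector<uint16_t> demosaic_green(const std::vector<uint16_t>& raw, int W, int H) {
    std::vector<uint16_t> g(raw);                  // green sites keep their measured value
    for (int y = 1; y < H - 1; ++y) {
        for (int x = 1; x < W - 1; ++x) {
            bool greenSite = ((x + y) & 1) != 0;   // RGGB: green where x+y is odd
            if (!greenSite) {                      // red or blue site: average 4 neighbors
                int sum = raw[(y - 1) * W + x] + raw[(y + 1) * W + x]
                        + raw[y * W + x - 1] + raw[y * W + x + 1];
                g[y * W + x] = static_cast<uint16_t>(sum / 4);
            }
        }
    }
    return g;   // red and blue planes are interpolated analogously
}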

Obin Olson June 21st, 2004 01:39 PM

It's a big one!

http://www.dv3productions.com/video ...0bit-4-2-2.mov

180megs

Les for you:
www.dv3productions.com/test_images/flowers1.tif
www.dv3productions.com/test_images/flowers2.tif

flowers with no overexposure! ;)

http://www.dv3productions.com/test_i...flowersRAW.jpg
http://www.dv3productions.com/test_i...DflowersCC.jpg
from 8bit

Jason Rodriguez June 21st, 2004 03:47 PM

I'm getting a "page cannot be found" error on the movie file.

Les Dit June 21st, 2004 10:56 PM

Thanks Obin
 
Well, those have less motion in them.
Let me explain what I wanted to do:


Here is how you separate noise from image:
You take two images of a static scene. You difference them.
What is left over is just the camera noise.
Now, if something moved in the frame on one of the images, what you see in the diff is primarily the edges, with the rest being mostly black.
In order for this to work, there can be no motion between the two frames. Otherwise you have no way of telling if it's noise or just image detail being different on the two frames!

I know it's hard to not move the camera a few microns, so that is why I previously recommended a slightly out of focus setup. That way fine detail is gone, so it's not as sensitive to a very slight positional shift.

Shooting frames to look for noise will result in very crappy-looking images, artistically and compositionally, but they fit the need for studying whether the camera will fall apart under moderate color grading or when trying to get good pictures of a dark scene.

How about an indoor, windless, locked-off setup, throwing it so far out of focus that details are blurred to 10% of the image width?

No priority for me, but others may be interested in the results, especially if they may be getting the camera for a project or whatever. I did notice that one pair you posted had some broad noise bars near the top of the frame, almost like hum bars on a TV. Anyone see those?

-L


<<<-- Originally posted by Obin Olson : Les for you:
www.dv3productions.com/test_images/flowers1.tif
www.dv3productions.com/test_images/flowers2.tif -->>>
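(A minimal sketch of the difference test Les describes, assuming both frames are already decoded into equal-sized 16-bit grayscale buffers -- TIFF reading omitted. Since both frames contribute noise, the diff's sigma is the per-frame noise times sqrt(2):)

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

void noise_report(const std::vector<uint16_t>& a, const std::vector<uint16_t>& b) {
    double sum = 0, sum2 = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        double d = double(a[i]) - double(b[i]);    // static scene: d is mostly noise
        sum  += d;
        sum2 += d * d;
    }
    double n     = double(a.size());
    double mean  = sum / n;                        // non-zero mean hints at drift or flicker
    double sigma = std::sqrt(sum2 / n - mean * mean);
    std::printf("diff mean %.2f, sigma %.2f, est. per-frame noise %.2f counts\n",
                mean, sigma, sigma / std::sqrt(2.0));
}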

Rob Lohman June 22nd, 2004 05:03 AM

Les: I'm not seeing those noise bars.

What I'm seeing on those studio*.tif files, however, is some very apparent blocking on the red spots on the bottle.

Do you happen to have the original Bayer file for that, Obin?

I'm wondering if that is due to a low-quality Bayer conversion or not.

Thanks for posting the pictures, Obin. I'll see if I can run some tests on those regarding our software.

Obin Olson June 22nd, 2004 11:03 AM

What software? You and Rob S.?

Sorry guys, check the link again for the SheerVideo file; I fixed the URL.

<<<-- Do you happen to have the original bayer file on that Obin? -->>>

here it is:

http://www.dv3productions.com/test_i...andw-bayer.tif

slow-motion - 48 fps shot, playback at 24 fps

low-quality test footage

www.dv3productions.com/Video Clips/48fps.mov

OK, I think I am going with a Shuttle computer system! Check them out at: http://sys.us.shuttle.com/

It will cost me $900 for a Shuttle, a 2.8 GHz P4 CPU, 512 MB of 400 MHz RAM, two 7200 RPM SATA disk drives at 200 GB each, a dual-head graphics card, and a 40 GB OS disk drive. Then I will use FireWire to transfer all the footage from the Shuttle to a video server after the shoot. I am hoping to get 60 fps 8-bit on this system and 48 fps 12-bit, all at 1280x720p.

If anyone has an idea of a much better system for size/speed/price, please let me know soon!

Rob, get your camera yet??? :)

Rob Scott June 22nd, 2004 02:45 PM

Quote:

Obin Olson wrote:
Rob, get your camera yet??? :)
Nope, not yet! You'll be the first to know :-)

Juan M. M. Fiebelkorn June 23rd, 2004 12:13 AM

Hello everyone. This is my first post, so I guess I'll make some mistakes. Sorry in advance...
Going back in time, to a post whose location I don't remember...

About the bandwidth needed to record HiDef, I think there are a couple of useful solutions out there.
I believe recording the RAW Bayer image is the right way. Also, I think it could be really easily compressed.
Some ways could be these:
Use an FPGA (or something else which can do the trick) to separate the three color channels, giving you, for a 1280x720 sensor, a 640x720 image for green or luma (whichever you prefer to call it) and two 640x360 images for blue and red.
After this, go to three different compressors. I propose JPEG2000, which is available as DSP chips that can deal with these pixel rates and is meant to work with 10-bit depth.
Then compress the green channel losslessly, and maybe red and blue lossy (10:1 or 20:1 should be enough). This will also help with chroma noise, because of JPEG2000's internal wavelet transform.
Record to disk or a DLT tape (a DLT tape drive which records 8 MB per second costs around 1,500). It could also be stored on DVD with the new Philips 16x burner, giving 20 minutes of storage.
This approach would give us 5.5 MB per second @24p for green, 0.5 MB for red and 0.5 MB for blue. Total = 6.5 MB per second @24fps.
Then we can go the usual way: decompressing these three images, reassembling the Bayer pattern, and applying demosaicking on a normal computer.
This would give a really big bandwidth reduction without affecting image quality too much. (edited after some misunderstandings)
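(A minimal sketch of the channel split Juan proposes, assuming an RGGB layout; for a 1280x720 mosaic this produces exactly the 640x720 green plane and the 640x360 red and blue planes from the post:)

#include <cstdint>
#include <vector>

struct Planes { std::vector<uint16_t> g, r, b; };   // 640x720, 640x360, 640x360 for 1280x720

// RGGB assumed: even rows are R G R G ..., odd rows are G B G B ...
Planes split_bayer(const std::vector<uint16_t>& raw, int W, int H) {
    Planes p;
    p.g.reserve(W * H / 2);
    p.r.reserve(W * H / 4);
    p.b.reserve(W * H / 4);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            uint16_t v = raw[y * W + x];
            bool evenRow = (y % 2) == 0, evenCol = (x % 2) == 0;
            if (evenRow && evenCol)        p.r.push_back(v);
            else if (!evenRow && !evenCol) p.b.push_back(v);
            else                           p.g.push_back(v);   // both green sites
        }
    return p;   // each plane then goes to its own (lossless or lossy) compressor
}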

Rob Lohman June 23rd, 2004 01:56 AM

Juan: I'm not sure how this does NOT affect image quality when you are proposing a lossy compression method. It is lossy, right?

That's still a large reduction from 21 or 26 MB/s (see below), which can probably only be done through a lossy compression, which is what we are trying to avoid. The system will also record straight to harddisk, for now.

The thing you are forgetting (I think), no offense, is that the image we get now is 10 bits per "bayer pixel" and will increase to 12 bits this year if all goes well. Most compression algorithms do not support this. Yes, you could separate them, but then I'd highly doubt you could reconstruct anything meaningful from a lossy compression algorithm.

We appreciate all input on this, but I'm not seeing how this would work easily, or work at all.

How knowledgeable are you in regards to FPGAs? I have a bit of a hard time figuring out what could be a possible way for that in the near future. For now everything is computer based...

Juan M. M. Fiebelkorn June 23rd, 2004 02:26 AM

Well, seeing your comments, it sounds like, following your thoughts, CineForm's codec is really nonsense, since it is supposed to be used for processing and producing online-quality results. No problem :).
Sorry about my error saying "without affecting image quality"; I forgot to add "too much".
I guess I don't know what you think the results of a lossy compression like JPEG2000 on an image are. Anyway, lossless can be used for red and blue too, which would give around half the bitrate.
Notice that I said "lossless" compression for luma/green, not "lossy".
I work every day of my life with digital images, but maybe I'm really wrong or my English isn't good enough to express my ideas properly.
Sorry.

What do you mean by "bayer pixel"?
Why is it different from a grayscale pixel, 8 or 10 bit?
Isn't it just a discrete point with an intensity value ranging from 0 to 255 or 1023?

JPEG2000 supports lossless and lossy compression of 10-16 bit grayscale images.
PNG also supports 16 bit, and both are royalty-free. (I could be wrong about JPEG2000.)
About PNG, I don't know of any hardware for it.
There are chips for Huffman compression.

http://www.jpeg.org/jpeg2000/index.html

A Spartan-3 from Xilinx costs around $12 and a development board for it around $100.
Don't know the JPEG2000 DSP pricing yet.

Just another thing. Actually, a Bayer pattern array (like the ones these sensors have), or whatever name someone wants to put on it, isn't a 4:4:4 color scheme. In fact it is more like 4:2:0 or 4:2:2 sampling. I say this because I don't understand why the thread title says "4:4:4".

P.S. My knowledge about FPGAs is really limited. My best friend is an electronics engineer who works mostly with DSPs, and he suggested that to me for the task of decomposing the Bayer array into its color groups, to be able to compress them individually and to recover the Bayer structure after that.

Rob Lohman June 23rd, 2004 04:48 AM

Glad we agree on the lossy thing. The problem I have with lossy is that you remove vital information needed to reconstruct an RGB image from the Bayer pattern. As you indicated, the Bayer pattern produces something other than a true 4:4:4 signal. It's hard to compare, since those numbers usually apply to YUV encoding, not RGB. But for every 2 samples of green we have one sample of red and one of blue, so perhaps a 2:1:1 naming is more appropriate?

The reason the thread is called 4:4:4 is that it is meant to imply uncompressed / no further loss. Also, we (and the thread starter Obin) weren't too familiar with Bayer and the way CMOS chips operate back then. That's changed now.

Bayer data is indeed 10- or 12-bit grayscale pixels in the range of 0-1023 or 0-4095. If you use a lossy compression you will lose some important information needed to accurately reconstruct the full RGB image. It will probably work fine, but you will introduce even more errors on top of those the Bayer-to-RGB algorithm already introduces. Therefore, in my opinion, lossy is not the way to go for Bayer compression. It might certainly be an option after the reconstruction to full RGB (but I leave that choice up to the end users).

We are not going to decompose the Bayer to FULL RGB before storing. The process is divided into two phases/steps:

1. the camera system will record / compress raw Bayer and store this to harddisk(s) as fast as possible

2. after you are done shooting, you hook this system up to a "normal" computer (or, in case you recorded with a normal computer, you can use the same one) and start a post-processing application that will convert the format to full RGB.

The reasons for dividing these steps are these:

- we want to use as little processing power as possible to allow making the camera as small as possible (lower power consumption usually means lower speed)

- Bayer will allow smaller data rates in a lossless way, since there is a lot less data

- we will not have the power to do a high-quality Bayer to full RGB conversion, even with a high-speed DSP.

If we do these conversions in a later step we have the possibility to use a high-quality Bayer algorithm and allow further things like color tweaking and encoding to the final format (user's choice).

Anyway, if all goes as planned it will be open source, so if you or anyone else wants to do the conversion in camera and do a lossy compression on the fly, then by all means go right ahead.

Although I suspect it will be cheaper and easier to just get a consumer camera, because it will probably have similar qualities due to the compression etc.

Juan M. M. Fiebelkorn June 23rd, 2004 04:54 AM

When did I say to decompose the Bayer array to RGB?
The only thing I said was to decompose/cut/divide/disassemble (I don't know which is the right word for you) the array into its three color groups, corresponding to the GREEN filter in the array, the RED one, and the BLUE one, as three different groups of pixels "not related to an RGB colorspace file", just to LOSSLESSLY (my original idea was LOSSY for the less important planes) compress them; and then, after storage, IN A LATER STEP and with the help of a COMPUTER, decompress these three groups and re-arrange them to form the original Bayer structure again, and after that operation apply a demosaicking algo. Just that. NO LOSS of info in this situation.
JPEG2000 supports 16-bit LOSSLESS in a single chip that can process, if I'm not wrong, at around 133 MBytes/s.
In case it can't, that's where the idea of separating these three groups across three chips comes in.
Anyway, it seems my idea was really stupid.

Sorry again, but I never said the things you are responding to.
I guess you read too quickly, man! :D (just a joke)

Rob Lohman June 23rd, 2004 05:23 AM

Juan: relax. Any civil input is always welcomed on this board, and your points add to an interesting discussion. Through discussion we often arrive at things we hadn't considered before, so please don't say "my idea was really stupid". Why would it be stupid? Mis-communication just happens on boards like these because we can't speak and draw etc. to each other. No harm in that, we are a friendly place!

An idea is an idea. Whether I or anyone else "agrees" is a whole different matter. I'm actually very interested in FPGAs and chips that can help me out. Only not at this stage of the project, since we are just getting started, basically. But you've added your ideas to this thread, which we can always look back upon when we get to that stage! So don't be sorry or sad, please.

Back to your points...

You quoted your friend when you were talking about Bayer to full RGB on a chip, which suggests doing it BEFORE storing the data. In this case data expands (as you know). Quote:

" and he suggested that to me for the task of decomposing the Bayer array into its color groups "

It looks like I mis-interpreted that line. You meant to split each individual Bayer RGB grayscale plane before compressing, instead of compressing a signal where all the "colors" are mixed. Sorry about that. We are going to do tests with various algorithms to see what would be more efficient indeed. Thanks for suggesting it.

I'm a bit confused in regards to lossy versus lossless. You seemed to agree with me that it IS lossy due to this line in your previous post:

" Sorry about my error saying "without affecting image quality"; I forgot to add "too much". "

"Too much" implies a lossy algorithm. But if JPEG2000 supports a lossless form as well, then that is certainly interesting (in a chip).

So I do feel you "said" it is/was lossy.

Can you post links to this chip and to a place that clearly explains that JPEG2000 supports lossless compression? Not that I'm not believing you, but it will be nice to have for reference.

Again, thank you very much for your participation and discussion; it's just mis-communication we are experiencing. No worries!!

Juan M. M. Fiebelkorn June 23rd, 2004 05:29 AM

Posted far above, but again here:

http://www.jpeg.org/jpeg2000/index.html

also try lossless JPEG and lossless JPEG2000 in Google.

By the way, I work in the cine industry, both as a video technician on set and in postproduction, especially film recording.
I have experience with many DV cameras, DigiBeta, HDCAM, and Photosonic high-speed cameras. I also developed my own 35mm movie camera with a variation upon the Oxberry film transport.
If anyone is interested, look for me at www.imdb.com as Juan Manuel Morales.
Next post, DSP info....

Here it is:

http://www.analog.com/Analog_Root/pr...ADV202,00.html
http://www.amphion.com/cs6590.html
http://www.amphion.com/acrobat/PB6590.pdf

Steve Nordhauser June 23rd, 2004 05:36 AM

Rob and Juan:
My $.03 on this (inflation is everywhere). Juan is absolutely correct that if you are going to compress data from a Bayer sensor and not go to RGB or YUV space, pulling the image apart into an R, a B, and a twice-as-large G is the way to go. Rob is correct that too much final image quality depends on the RGB conversion, so we will not do that on the fly.

I think that any compression solution for this group has to be scalable - workable, with more horsepower, at 1280x720 in 8, 10 and 12 bits at 24, 30 and maybe up to 60 fps, and at 1920x1080 over the same range. Because a number of different developers are working on this with somewhat different ends in mind, generic and scalable is important.

FPGAs are the cheapest, fastest way to do any fixed-point algorithm. I've designed with them before, but I don't consider them on the table unless someone is willing to do the work. If I could get a Spartan design (schematic and FPGA-ware) to do the decompose, and even lossless compression (RLE, Huffman?), I would consider embedding it into the camera head.

Anyway, I think that DSPs might be good on the low end, but I doubt an aggregate of 3 channels totaling 150 Mpix/sec of 12-bit data will work (the high-end target). You might be right that multiple DSPs will be easier to manage than RAIDs that can deal with that rate, but part of this is balancing on the expertise of the group and the off-the-shelf-ness of the solution.

Maybe the collision has happened between you two because Juan is entering a bit late into the discussion and treading on ground we have already discussed or decided. They are good points and worth hashing out again before we get too deep.

There is a definite split here between completely lossless and "virtually lossless", because no one is willing to lose anything too early in the chain, but the gains in reducing system complexity are high if you can compress early in the chain.

Steve

Rob Lohman June 23rd, 2004 07:42 AM

Thanks Juan & Steve: I have no doubt in either of your knowledge on these matters.

If I came on a bit harsh, my apologies. I can only speak for myself, but I'm certainly open to options and nothing is set in stone.

However, Rob Scott and myself have started on some preliminary designs according to our (and I thought the group's) initial thoughts.

The primary focus is on getting a connection to the camera and having it write to disk.

Then we can start to look around at where to go next. Both Rob S. and myself are interested in a partial or full-blown FPGA solution, perhaps with embedded processors and whatnot. Problem is, neither of us knows anything about this (basically).

I'm quite sure both of us could program an FPGA (responding to you, Steve). We just don't know which FPGA from whom, and what the important things are to consider and keep in mind when picking a solution.

Personally I would love to see a couple of chips on a board with a harddisk array writing raw to disk. We are currently opting to go with a form of software RAID if that is warranted by the data bandwidth (i.e., compression). In this we will write frame 1 to disk 1, frame 2 to disk 2, etc., depending on how many disks there are.
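(A minimal sketch of that round-robin scheme in plain synchronous C++; the mount points and file naming are placeholders, and a real capture app would use asynchronous writes so the disks fill in parallel:)

#include <cstdio>
#include <string>
#include <vector>

struct FrameWriter {
    std::vector<std::string> roots;   // one mount point per capture drive
    size_t next = 0;                  // running frame number

    explicit FrameWriter(std::vector<std::string> r) : roots(std::move(r)) {}

    bool write(const void* data, size_t bytes) {
        // frame N lands on disk N % num_disks
        std::string path = roots[next % roots.size()] +
                           "/frame_" + std::to_string(next) + ".raw";
        ++next;
        FILE* f = std::fopen(path.c_str(), "wb");
        if (!f) return false;
        size_t n = std::fwrite(data, 1, bytes, f);
        std::fclose(f);
        return n == bytes;
    }
};
// The Convert step re-interleaves by reading frame_0 from disk 0,
// frame_1 from disk 1, and so on.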

We just aren't familiar with the whole world of FPGAs and other such things. So for that reason we are focusing on designing a good platform and programming (testing?) this on a computer first.

Thoughts?

P.S. As said, Rob S. and myself were definitely planning on working on each image plane separately (as you both suggest), pending algorithm testing. I'm hoping to test a dictionary/RLE-based custom compression this weekend to see what it can do.

Rob Scott June 23rd, 2004 08:27 AM

Compression
 
Quote:

Rob Lohman wrote ...
definitely planning on working on each image plane seperately (as you both suggest) pending algorithm testing.
OK, now that the dust has settled a bit ... :-)

I think there are definite possibilities with compression of the Bayer image planes. I found a real-time compression library called LZO and tried it, but it wasn't quite fast enough in my initial tests.

OTOH, I was using MinGW to compile at the time. I just purchased MS Visual C++ 2003, and I will try cranking up the optimizations to "11" :-) before giving up on LZO. LZO also contains a couple of different compressors; in my initial test I used the "simple" one. Other compressors may be more optimized for speed.
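(For reference, a minimal harness around LZO's "simple" compressor, lzo1x_1_compress -- this is the real LZO C API, with output sizing per the library documentation's worst case; the wrapper function itself is invented:)

#include <vector>
#include "lzo/lzo1x.h"

bool compress_plane(const unsigned char* in, lzo_uint in_len,
                    std::vector<unsigned char>& out) {
    if (lzo_init() != LZO_E_OK) return false;               // normally called once at startup
    std::vector<unsigned char> wrk(LZO1X_1_MEM_COMPRESS);   // scratch memory for the compressor
    out.resize(in_len + in_len / 16 + 64 + 3);              // documented worst-case growth
    lzo_uint out_len = 0;
    if (lzo1x_1_compress(in, in_len, out.data(), &out_len, wrk.data()) != LZO_E_OK)
        return false;
    out.resize(out_len);
    return true;
}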

I think JPEG2000 is also an excellent idea. I had also had the idea of developing an open-source codec based on using 16-bit JPEG2000 compression separately on the Y, U and V components (allowing 4:2:2 subsampling if desired). However, JPEG2000 is rather patent-encumbered, which could pose problems.

Recently I've also run across the Dirac project which also uses wavelet compression (but I'm not sure if it supports more than 8 bits per channel). Dirac is apparently designed to be un-patent-encumbered.

If it's high enough quality, and possible to do in real time, I'm all for doing "lossy-but-visually-near-lossless" (LBVNL?) compression at capture time. The commercial companies like CineForm (IIRC) do this, but they have lots of full-time developers, hardware, tools, money, and know-how. Given our resources, we can't possibly compete with them -- and I don't want to. I think we can do some cool things, but a fully embedded FPGA/DSP system is a long way away unless someone comes on board who has a great deal of experience with it.

Dirac: http://www.bbc.co.uk/rd/projects/dirac/
LZO: http://www.oberhumer.com/opensource/lzo/

Jason Rodriguez June 23rd, 2004 09:51 AM

Quote:

Personally I would love to see a couple of chips on a board
with a harddisk array writing raw to harddisk
Gosh, I hate to sound like a broken record on this, but that's basically what the Kinetta is.

I'm not trying to say "don't bother, it's already been done", but if you're going to sink that much cash into a project for all the R&D it's going to take to do FPGAs, etc., then you'd better think of a way to make your solution either much cheaper or able to fill a specific niche that the Kinetta doesn't. Because IMHO it makes no sense to spend $20K on a camera that can't do half of what a $30K camera can. That's just the nature of competition. If we can keep the FPGA "hard-disk dumping" solutions below, say, approx. $7K (with bundled Bayer conversion software), then I'd say very, very nice, you've got sales.

David Newman June 23rd, 2004 10:20 AM

Rob Scott -- "If it's high enough quality, and possible to do in real time, I'm all for doing "lossy-but-visually-near-lossless" (LBVNL?) compression at capture time. The commercial companies like CineForm (IIRC) do this, but they have lots of full-time developers, hardware, tools, money, and know-how. Given our resources, we can't possibly compete with them -- and I don't want to."

CineForm is not yet a huge player in the NLE and compression world, so I would prefer that we are not seen as competition; rather, we hope to be a good partner for this community. We are a startup company developing new technologies for new markets, and this market is pretty new; that's why you will see a company like CineForm rather than more established names.

The compression model Juan proposed is spot-on and very similar to the way I would compress Bayer data in its original form. Juan is also correct that the red and blue channels can be compressed more than green without any visual degradation -- good compression technologies typically exploit human visual characteristics. Here is a slightly better way to exploit human visual models and get better compression of a Bayer image.

Instead of storing all the green as one image, consider the green Bayer pairs as parts of separate channels. You now have four channels: R, G1, G2, B. For a 1280x720 image that is four planes of 640x360. Now that they are the same size, the compression can be optimized for the human eye by compressing the color planes in this form:

G = G1 + G2
Gdiff = G1 - G2
Rdiff = R - G
Bdiff = B - G

Using this lossless transform, G now contains a noise-reduced luma approximation of the green channel (apply only light compression). Gdiff contains green noise and edge detail (very compressible). Rdiff and Bdiff are approximations of the U and V chroma difference channels; these can be compressed more than the native red and blue channels, as the "luma" component has been removed. Consider a black-and-white image -- this is the type of image the human eye is most sensitive to; here the R, G, B channels contain the same data, and you would be compressing the same data three times. In a B&W scene, Rdiff and Bdiff would be mostly zero with some edge detail (due to Bayer misalignment). Now, we aren't shooting B&W scenes (the world is color), but our eye can be fooled. Moving the significant image data into G allows the compressor to optimize for the way you see.

All this can be done easily on the fly in software compression; the trick is to do it fast.
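(A minimal sketch of that transform, following the posted formulas literally -- note Rdiff and Bdiff subtract the summed green, so they are rough residuals; a real codec might subtract the green average instead. Planes are widened to signed 32-bit, since G gains a bit and the diffs go negative; the integer inverse noted at the end is exact, so nothing is lost:)

#include <cstdint>
#include <vector>

struct DiffPlanes { std::vector<int32_t> G, Gdiff, Rdiff, Bdiff; };

// Inputs are the four 640x360 planes (R, G1, G2, B) as flat arrays.
DiffPlanes forward(const std::vector<uint16_t>& R,  const std::vector<uint16_t>& G1,
                   const std::vector<uint16_t>& G2, const std::vector<uint16_t>& B) {
    size_t n = R.size();
    DiffPlanes d{std::vector<int32_t>(n), std::vector<int32_t>(n),
                 std::vector<int32_t>(n), std::vector<int32_t>(n)};
    for (size_t i = 0; i < n; ++i) {
        d.G[i]     = int32_t(G1[i]) + int32_t(G2[i]);   // noise-averaged "luma"
        d.Gdiff[i] = int32_t(G1[i]) - int32_t(G2[i]);   // green noise + edge detail
        d.Rdiff[i] = int32_t(R[i])  - d.G[i];           // chroma-like residual
        d.Bdiff[i] = int32_t(B[i])  - d.G[i];
    }
    // Exact inverse: G1 = (G + Gdiff) / 2, G2 = (G - Gdiff) / 2,
    // R = Rdiff + G, B = Bdiff + G -- all integer, hence lossless.
    return d;
}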

Rob Scott June 23rd, 2004 10:56 AM

Quote:

David Newman wrote:
I would prefer that we are not seen as competition rather we hope to be a good partner for this community.
Sorry about that -- I was trying to think of an example and yours was the first to come to mind -- mostly because you're HERE I guess :-)

... and I agree absolutely -- there is no need for us to compete directly with you or anyone else. I think we as a community can come up with solutions -- part open-source, part commercial -- to meet some of our needs at lower price points than before.

Thanks for being involved here!

Rob Lohman June 23rd, 2004 11:40 AM

<<<-- Originally posted by Jason Rodriguez : Gosh, I hate to sound like a broken record on this, but that's basically what the Kinetta is.

I'm not trying to say "don't bother, it's already been done", but if you're going to sink that much cash into a project for all the R&D it's going to take to do FPGAs, etc., then you'd better think of a way to make your solution either much cheaper or able to fill a specific niche that the Kinetta doesn't. Because IMHO it makes no sense to spend $20K on a camera that can't do half of what a $30K camera can. That's just the nature of competition. If we can keep the FPGA "hard-disk dumping" solutions below, say, approx. $7K (with bundled Bayer conversion software), then I'd say very, very nice, you've got sales. -->>>

That might be basically what the Kinetta is, but it's a lot more expensive. We don't have to pay for programming and research because we can all do it in our spare time. Yes, time is money, but in a different way in this case. Yes, the chips cost money, but I don't see how this would go to $20K with what we have in mind. At least for me personally. I'm not sure where you pulled those numbers from.

Les Dit June 23rd, 2004 11:57 AM

The Kinetta looks like it might be a nice product.
What I'd like this thread's effort to result in is a system that allows 75% of the camera cost to be sunk into the sensor. Over the longer haul, we might end up with better image quality than them.

Jason Rodriguez June 23rd, 2004 01:37 PM

Quote:

Yes, the chips cost money, but I don't see how this would go to $20K with what we have in mind. At least for me personally
Once we start including ProspectHD, Boxx dual Opterons, big RAIDs, etc., then we're talking $$$$. Of course, nothing against any of these products; I'm just saying that you will quickly find that by incorporating these products into the system, the cost will quickly climb.

BTW Obin,

Your latest slo-mo clip didn't seem to have any objectionable artifacts in the way that a rolling shutter would render motion. Also, I was doing a study of still-camera SLRs, and they too are rolling shutter, or focal-plane shutter. The only thing was that when shooting at 1/50th of a second for the shutter, I couldn't do the movements you're doing without a lot of motion blur, which basically hid any rolling shutter artifacts if they were even there. So how come your footage has no "natural" motion blur like I'd expect from a motion-picture camera? If that motion blur were there, then maybe there wouldn't be any problems with the rolling shutter artifacts.

Rob Scott June 23rd, 2004 01:50 PM

Quote:

Jason wrote:
So how come your footage has no "natural" motion blur like I'd expect from a motion-picture camera?
Are you referring to the young lady with the plastic sheet? I certainly see motion blur in that clip. It looks fairly natural to me, but then I don't have a good eye for this yet.

Obin Olson June 23rd, 2004 02:02 PM

Hmm, I am not sure. I still don't have my head around how MHz and shutter speed work on this camera... I understand fps and shutter speed, but camera MHz?? Not me... Steve?

I am not sure how you can say you don't see any motion blur?? It's all over that slo-motion stuff!

Steve Nordhauser June 23rd, 2004 03:17 PM

Obin:
Here is what you are juggling: the camera pixel clock. This sets how long it will take to read out a frame. Also, for the most part, the integration time (shutter speed for each line) can't be longer than a frame time. In rolling-shutter cameras, you can reset each line a certain amount prior to readout - less than a frame time if you want a fast shutter speed for minimum motion blur.

You could set the integration time very short, use a slow pixel clock, and pan if you wanted to really see rolling shutter artifacts. This would give you minimum motion blur and maximum artifacts. An artificial test, but good when you want to do images of school buses for application notes.
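(Rough numbers for that worst case, assuming a 27 MHz pixel clock and a 1280x720 frame: readout takes 921,600 / 27,000,000 ≈ 34 ms, so the bottom line is sampled ~34 ms after the top line. A very short integration time -- say 1 ms -- keeps motion blur minimal but does nothing to shrink that 34 ms top-to-bottom offset, which is why the skew shows at full strength.)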

I've been wondering if we are busting our butts trying to emulate film and all its eccentricities. Is the rolling shutter stuff objectionable, or just different? Does it add a feeling of speed? It would crack me up if Lucas added this effect in a feature film to be different from film effects. Just a thought.

Rob Lohman June 23rd, 2004 03:38 PM

Steve: well, it's something we aren't used to, so it always looks out of place. Now, the funny thing is that this only happens with fast-moving objects (or camera). In this case we should have more motion blur, but we don't.

If the integration time is low, the rolling shutter effect is as well, but we get almost no motion blur. But when we go to a higher integration time, the rolling shutter effect increases, but so does the motion blur?

Jason: I'm not thinking of a dual-Opteron box with a special RAID card. As it looks now, this will work on a single processor (nothing fancy) with 1 drive for 8-bit and probably 2 drives for 10-bit in our initial design (at maximum frame rates). This is at 1280x720. For higher resolutions and 12-bit, a third or fourth drive might be needed. But we aren't at that stage yet (since this sensor does 1280x720 x 10-bit x 60 fps max). We'll do "RAID" in a software "solution" as it looks... so no need for a fancy solution there either.

As for ProspectHD: nothing has been set in stone. If the price is too high, then it is too high...

Valeriu Campan June 23rd, 2004 03:41 PM

<<<-- Originally posted by Steve Nordhauser :
I've been wondering if we are busting our butts trying to emulate film and all its eccentricities. Is the rolling shutter stuff objectionable, or just different? Does it add a feeling of speed? It would crack me up if Lucas added this effect in a feature film to be different from film effects. Just a thought. -->>>

A projected film without "motion blur" becomes unwatchable after a few minutes. The eye is not perfect.
When film standards were set, they saw that 30 fps was sharper, but only marginally so and not enough to justify the extra production costs, so 24 fps was adopted. Motion blur is only a part of the "film look".

Steve Nordhauser June 23rd, 2004 03:55 PM

Rob L:
The integration time is fairly independent of the readout time, except that you can't integrate for longer than the readout time.

To remove rolling shutter you run fast. As you say, this limits the longest integration time (most motion blur).

I did check, and we can probably run at a 48 fps readout time and stretch the vertical blanking so the trash frame doesn't hit the bus. That would mean bursting a frame at about 55 Mpix/sec to memory, staying quiet for a frame time (1/48th sec), followed by the next frame.

The real win here (****drum roll ****) is that now you can integrate during the vertical blanking time. This means a full 1/24th sec integration (motion blur) with fast readout (minimal RS artifact).
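(Rough numbers, assuming the figures above: reading 1280x720 = 921,600 pixels in 1/48 s works out to about 44 Mpix/sec of active pixels, or roughly the 55 Mpix/sec burst quoted once line overhead is included. At a 24 fps frame rate, each 1/24 s frame is then half readout and half stretched vertical blanking, and since integration can continue through the blanking, the exposure can span the full ~41.7 ms while the top-to-bottom readout skew stays at the fast ~20.8 ms figure.)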

I will try to verify this in the next week but my guru on the 1300 said it should work.

Rob Lohman June 23rd, 2004 04:03 PM

Sounds cool, although I will need to think about what you are saying... Heh. Thanks a lot for your time, and please thank your guru as well! Keep us updated.

It is all MUCH appreciated!



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network