4:4:4 10bit single CMOS HD project - Page 26 at DVinfo.net
Old June 22nd, 2004, 11:03 AM   #376
Trustee
 
Join Date: Jan 2003
Location: Wilmington NC
Posts: 1,414
What software? You and Rob S.?

Sorry guys, check the link again for the SheerVideo file; I fixed the URL.

<<<-- Do you happen to have the original bayer file on that Obin? -->>>

here it is:

http://www.dv3productions.com/test_i...andw-bayer.tif

Slow motion: shot at 48fps, played back at 24fps.

Low-quality test footage:

www.dv3productions.com/Video Clips/48fps.mov

OK, I think I am going with a Shuttle computer system! Check them out at http://sys.us.shuttle.com/

It will cost me $900 for a Shuttle, a 2.8GHz P4 CPU, 512MB of 400MHz RAM, two 7200rpm 200GB SATA drives, a dual-head graphics card and a 40GB OS drive. Then I will use FireWire to transfer all the footage from the Shuttle to a video server after the shoot. I am hoping to get 60fps at 8-bit on this system, and 48fps at 12-bit, all at 1280x720p.

If anyone has an idea of a much better system in terms of size, speed and price, please let me know soon!

Rob, get your camera yet??? :)
Obin Olson is offline  
Old June 22nd, 2004, 02:45 PM   #377
Major Player
 
Join Date: May 2004
Location: Knoxville, TN (USA)
Posts: 358
Quote:
Obin Olson wrote:
Rob, get your camera yet??? :)
Nope, not yet! You'll be the first to know :-)
Rob Scott is offline  
Old June 23rd, 2004, 12:13 AM   #378
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
Hello everyone. This is my first post, so I guess I'll make some mistakes. Sorry in advance...
Going back in time, to a post whose location I don't remember...

About the bandwidth needed to record HiDef, I think there are a couple of useful solutions out there.
I believe recording the RAW Bayer image is the right way. I also think it could be compressed quite easily.
It could work like this:
Use an FPGA (or something else that can do the trick) to separate the three color channels, giving you, for a 1280x720 sensor, a 640x720 image for Green (or Luma, whichever you prefer to call it) and two 640x360 images for Blue and Red.
Then feed these to three different compressors. I propose JPEG2000, which is available as DSP chips that can handle these pixel rates and is meant to work at 10-bit depth.
Compress the Green channel losslessly, and maybe Red and Blue lossily (10:1 or 20:1 should be enough). This will also help with chroma noise, because of JPEG2000's internal wavelet transform.
Record to disk or to DLT tape (a DLT drive that writes 8 MB per second costs around $1,500). It could also be stored on DVD with the new Philips 16x burner, giving about 20 minutes of storage.
This approach would give us about 5.5 MB per second at 24p for Green, 0.5 MB/s for Red and 0.5 MB/s for Blue. Total = about 6.5 MB per second at 24fps.
Afterwards we can go the usual way: decompress these three images, reassemble the Bayer pattern and apply demosaicking on a normal computer.
This would give a really big bandwidth reduction without affecting image quality too much. (edited after some misunderstandings)
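
For illustration, here is a rough, untested sketch (in C++) of the plane-separation step, assuming an RGGB filter layout and 10-bit samples held in 16-bit words (the layout, names and types are just assumptions for the example):

Code:
#include <cstdint>
#include <vector>

// Split one RGGB Bayer frame (10-bit samples stored in uint16_t) into
// separate R, G and B planes so each can be compressed on its own.
// Green keeps both samples of every 2x2 cell, so it packs into 640x720
// for a 1280x720 sensor; Red and Blue pack into 640x360 each.
struct BayerPlanes {
    std::vector<uint16_t> r, g, b;
};

BayerPlanes split_rggb(const uint16_t* frame, int w, int h)
{
    BayerPlanes p;
    p.r.reserve((w / 2) * (h / 2));
    p.b.reserve((w / 2) * (h / 2));
    p.g.reserve((w / 2) * h);

    for (int y = 0; y < h; y += 2) {
        const uint16_t* row0 = frame + y * w;          // R G R G ...
        const uint16_t* row1 = frame + (y + 1) * w;    // G B G B ...
        for (int x = 0; x < w; x += 2) {
            p.r.push_back(row0[x]);
            p.g.push_back(row0[x + 1]);
            p.g.push_back(row1[x]);
            p.b.push_back(row1[x + 1]);
        }
    }
    return p;
}

Reversing this (writing the samples back into their original positions) restores the Bayer frame exactly, so the split itself loses nothing; only the per-plane compression decides whether the path is lossy or lossless.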
Juan M. M. Fiebelkorn is offline  
Old June 23rd, 2004, 01:56 AM   #379
RED Code Chef
 
Join Date: Oct 2001
Location: Holland
Posts: 12,514
Juan: I'm not sure how this does NOT affect image quality when you are proposing a lossy compression method. It is lossy, right?

That's still a large reduction from 21 or 26 MB/s (see below), which can probably only be achieved through lossy compression, which is what we are trying to avoid. The system will also record straight to hard disk, for now.

The thing you are forgetting (I think), no offense, is that the image we get now is 10 bits per "Bayer pixel" and will increase to 12 bits this year if all goes well. Most compression algorithms do not support this. Yes, you could separate the channels, but then I'd highly doubt you could reconstruct anything meaningful after a lossy compression algorithm.

We appreciate all input on this, but I'm not seeing how this would work easily, or work at all.

How knowledgeable are you in regards to FPGAs? I have a bit of a hard time figuring out what a feasible route would be there in the near future. For now everything is computer based...
__________________

Rob Lohman, visuar@iname.com
DV Info Wrangler & RED Code Chef

Join the DV Challenge | Lady X

Search DVinfo.net for quick answers | Buy from the best: DVinfo.net sponsors
Rob Lohman is offline  
Old June 23rd, 2004, 02:26 AM   #380
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
Well, reading your comments, it sounds as if, following your reasoning, CineForm's codec is nonsense, since it is lossy yet is supposed to be used for processing and for producing online-quality results. No problem :).
Sorry about my error in saying "without affecting image quality"; I forgot to add "too much".
I guess I don't know what you think lossy compression like JPEG2000 does to an image. Anyway, lossless compression can be used for Red and Blue too, which would give around half the bitrate.
Notice that I said "lossless" compression for Luma/Green, not "lossy".
I work every day of my life with digital images, but maybe I'm really wrong, or my English isn't good enough to express my ideas properly.
Sorry.

What do you mean by "Bayer pixel"?
Why is it different from a grayscale pixel, 8 or 10 bit?
Isn't it just a discrete point with an intensity value ranging from 0 to 255 or 1023?

JPEG2000 supports both lossless and lossy compression of 10- to 16-bit grayscale images.
PNG also supports 16 bit, and both are royalty free. (I could be wrong about JPEG2000.)
As for PNG, I don't know of any hardware for it.
There are chips for Huffman compression.

http://www.jpeg.org/jpeg2000/index.html

A Spartan-3 from Xilinx costs around $12, and a development board for it around $100.
I don't know the JPEG2000 DSP pricing yet.

Just another thing. A Bayer pattern array (like the ones these sensors have), or whatever name someone wants to give it, isn't actually a 4:4:4 color scheme. In fact it is more like 4:2:0 or 4:2:2 sampling. I mention this because I don't understand why the thread title says "4:4:4".

P.S. My knowledge of FPGAs is really limited. My best friend is an electronics engineer who works mostly with DSPs, and he suggested that approach to me for the task of decomposing the Bayer array into its color groups, so they can be compressed individually and the Bayer structure recovered afterwards.
Juan M. M. Fiebelkorn is offline  
Old June 23rd, 2004, 04:48 AM   #381
RED Code Chef
 
Join Date: Oct 2001
Location: Holland
Posts: 12,514
Glad we agree on the lossy thing. The problem I have with lossy compression is that it removes information that is vital for reconstructing an RGB image from the Bayer pattern. As you indicated, the Bayer pattern produces something other than a true 4:4:4 signal. It's hard to compare, since those numbers normally apply to YUV encoding, not RGB. But for every 2 samples of green we have one sample of red and one of blue, so perhaps 2:1:1 would be a more appropriate name?

The reason the thread is called 4:4:4 is that it is meant to imply uncompressed / no further loss. Also, we (and the thread starter Obin) weren't too familiar with Bayer and the way CMOS chips operate back then. That's changed now.

Bayer data is indeed 10- or 12-bit grayscale pixels in the range 0-1023 or 0-4095. If you use lossy compression you will lose some information that is important for accurately reconstructing the full RGB image. It will probably work fine, but you will introduce even more errors than the Bayer-to-RGB algorithm already introduces. Therefore, in my opinion, lossy is not the way to go for Bayer compression. It might certainly be an option after the reconstruction to full RGB (but I leave that choice up to the end users).

We are not going to decompose the Bayer data to FULL RGB before storing. The process is divided into two phases/steps:

1. the camera system will record / compress raw Bayer and store it to harddisk(s) as fast as possible

2. after you are done shooting, you either hook this system up to a "normal" computer (or, if you recorded with a normal computer, use the same one) and start a post-processing application that converts the footage to full RGB.

The reasons for dividing these steps are:

- we want to use as little processing power as possible, to allow making the camera as small as possible (lower power consumption usually means lower speed)

- raw Bayer allows smaller datarates in a lossless way, simply because there is a lot less data

- we will not have the power to do a high-quality Bayer to full RGB conversion in camera, even with a high-speed DSP.

If we do these conversions in a later step, we have the possibility to use a high-quality Bayer algorithm and allow further things like color tweaking and encoding to the final format (user's choice).

Anyway, if all goes as planned it will be open source, so if you or anyone else wants to do the conversion in camera and do lossy compression on the fly, then by all means go right ahead.

Although I suspect it would be cheaper and easier to just get a consumer camera, because it will probably have similar quality due to the compression etc.
Rob Lohman is offline  
Old June 23rd, 2004, 04:54 AM   #382
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
When did I say to decompose the Bayer array to RGB?
The only thing I said was to decompose/cut/divide/disassemble (I don't know which is the right word for you) the array into its three color groups - the pixels behind the GREEN filters, the RED ones and the BLUE ones - as three different groups of pixels (not related to an RGB colorspace file), just to LOSSLESSLY compress them (my original idea was LOSSY for the less important planes), and then, after storage, IN A LATER STEP and with the help of a COMPUTER, decompress these three groups and re-arrange them into the original Bayer structure again, and only then apply a demosaicking algorithm. Just that. NO LOSS of info in this situation.
JPEG2000 supports 16-bit LOSSLESS in a single chip that, if I'm not wrong, can process a datarate of around 133 MBYTES/s.
In case one chip can't keep up, that's where the idea of separating the three groups across three chips comes in.
Anyway, it seems my idea was really stupid.

Sorry again, but I never said the things you are responding to.
I guess you read too quickly, man! :D (just a joke)
Juan M. M. Fiebelkorn is offline  
Old June 23rd, 2004, 05:23 AM   #383
RED Code Chef
 
Join Date: Oct 2001
Location: Holland
Posts: 12,514
Juan: relax. Any civil input is always welcome on this board, and your points add to an interesting discussion. Through discussion we often arrive at things we hadn't considered before, so please don't say "my idea was really stupid". Why would it be stupid? Miscommunication just happens on boards like these, because we can't speak or draw to each other. No harm in that; we are a friendly place!

An idea is an idea. Whether I or anyone else "agrees" is a whole different matter. I'm actually very interested in FPGAs and chips that can help me out, just not at this stage of the project, since we are basically just getting started. But you've added your ideas to this thread, and we can always look back on them when we get to that stage! So don't be sorry or sad, please.

Back to your points...

You quoted your friend when you were talking about Bayer to full RGB on a chip, which suggested doing it BEFORE storing the data. In that case the data expands (as you know). Quote:

"and he suggested that approach to me for the task of decomposing the Bayer array into its color groups"

It looks like I misinterpreted that line. You meant to split out each individual Bayer grayscale plane before compressing, instead of compressing a signal where all the "colors" are mixed together. Sorry about that. We are going to run tests with various algorithms to see what is most efficient. Thanks for suggesting it.

I'm a bit confused in regards to lossy versus lossless. You seemed to agree with me that it IS lossy, due to this line in your previous post:

"Sorry about my error in saying "without affecting image quality"; I forgot to add "too much"."

"Too much" implies a lossy algorithm. But if JPEG2000 supports a lossless mode as well, then that is certainly interesting (in a chip).

So I do feel you "said" it is/was lossy.

Can you post links to this chip, and to a place that clearly explains that JPEG2000 supports lossless compression? Not that I don't believe you, but it would be nice to have for reference.

Again, thank you very much for your participation and discussion; it's just miscommunication we are experiencing. No worries!!
Rob Lohman is offline  
Old June 23rd, 2004, 05:29 AM   #384
Major Player
 
Join Date: Jun 2004
Location: Buenos Aires , Argentina
Posts: 444
Posted far above, but here it is again:

http://www.jpeg.org/jpeg2000/index.html

Also try searching Google for "lossless JPEG" and "lossless JPEG2000".

By the way, I work in the cine industry, both as a video technician on set and in postproduction, especially film recording.
I have experience with many DV cameras, DigiBeta, HDCAM and Photosonic high-speed cameras. I also developed my own 35mm movie camera with a variation on the Oxberry film transport.
If anyone is interested, look for me at www.imdb.com as Juan Manuel Morales.
Next post, DSP info....

Here it is:

http://www.analog.com/Analog_Root/pr...ADV202,00.html
http://www.amphion.com/cs6590.html
http://www.amphion.com/acrobat/PB6590.pdf
Juan M. M. Fiebelkorn is offline  
Old June 23rd, 2004, 05:36 AM   #385
Silicon Imaging, Inc.
 
Join Date: May 2004
Location: Troy, NY USA
Posts: 325
Rob and Juan:
My $.03 on this (inflation is everywhere). Juan is absolutely correct that if you are going to compress data from a Bayer sensor and not go to RGB or YUV space, pulling the image apart into an R, a B and a twice-as-large G plane is the way to go. Rob is correct that too much final image quality depends on the RGB conversion, so we will not do that on the fly.

I think that any compression solution for this group has to be scalable - workable, with more horsepower, at 1280x720, at 8, 10 and 12 bits, at 24, 30 and maybe up to 60fps, and at 1920x1080 over the same range. Because a number of different developers are working on this with somewhat different ends in mind, generic and scalable is important.

FPGAs are the cheapest, fastest way to do any fixed-point algorithm. I've designed with them before, but I don't consider them on the table unless someone is willing to do the work. If I could get a Spartan design (schematic and FPGA-ware) to do the decompose, or even lossless compression (RLE, Huffman?), I would consider embedding it into the camera head.

Anyway, I think that DSPs might be good on the low end, but I doubt they will handle an aggregate of 3 channels totaling 150Mpix/sec of 12-bit data (the high-end target). You might be right that multiple DSPs would be easier to manage than RAIDs that can deal with that rate, but part of this is balancing the expertise of the group against the off-the-shelf-ness of the solution.

Maybe the collision happened between you two because Juan is entering the discussion a bit late and treading on ground we have already discussed or decided. They are good points and worth hashing over again before we get too deep.

There is a definite split here between completely lossless and "virtually lossless", because no one is willing to lose anything too early in the chain, but the gains in reducing system complexity are high if you can compress early in the chain.

Steve
__________________
Silicon Imaging, Inc.
We see the Light!
http://www.siliconimaging.com
Steve Nordhauser is offline  
Old June 23rd, 2004, 07:42 AM   #386
RED Code Chef
 
Join Date: Oct 2001
Location: Holland
Posts: 12,514
Thanks Juan & Steve: I have no doubts about either of your knowledge of these matters.

If I came across a bit harsh, my apologies. I can only speak for myself, but I'm certainly open to options and nothing is set in stone.

However, Rob Scott and myself have started on some preliminary designs according to our (and, I thought, the group's) initial thoughts.

The primary focus is on getting a connection to the camera and having it write to disk.

Then we can start to look at where to go next. Both Rob S. and myself are interested in a partial or full-blown FPGA solution, perhaps with embedded processors and whatnot. Problem is, neither of us knows anything about this (basically).

I'm quite sure both of us could program an FPGA (responding to you, Steve). We just don't know which FPGA from whom, and which things are important to consider and keep in mind when picking a solution.

Personally I would love to see a couple of chips on a board with a harddisk array writing raw to harddisk. We are currently planning to go with a form of software RAID if that is warranted by the data bandwidth (i.e., whether we compress). In this scheme we will write frame 1 to disk 1, frame 2 to disk 2, and so on, depending on how many disks there are.
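
As a rough sketch of what I mean by that round-robin frame writing (untested, and the directory layout and naming are just assumptions for the example):

Code:
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Round-robin "software RAID": frame N is written to disk N % number_of_disks.
// Each disk is represented here by a directory (e.g. one per physical drive).
bool write_frame(const std::vector<std::string>& disk_dirs,
                 unsigned long frame_number,
                 const unsigned char* data, std::size_t bytes)
{
    const std::string& dir = disk_dirs[frame_number % disk_dirs.size()];
    char name[512];
    std::snprintf(name, sizeof(name), "%s/frame_%08lu.raw", dir.c_str(), frame_number);
    std::FILE* f = std::fopen(name, "wb");
    if (!f)
        return false;
    std::size_t written = std::fwrite(data, 1, bytes, f);
    std::fclose(f);
    return written == bytes;   // caller decides what to do if a frame fails
}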

We just aren't familiar with the whole world of FPGAs and similar things. So for that reason we are focusing on designing a good platform and programming (testing?) it on a computer first.

Thoughts?

p.s. As said, Rob S. and myself were definitely planning on working on each image plane separately (as you both suggest), pending algorithm testing. I'm hoping to test a dictionary/RLE based custom compression this weekend to see what it can do.
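
For what it's worth, this is the kind of baseline I mean - a minimal run-length encoder for one 10-bit plane held in 16-bit words (untested, just to show the idea before comparing it against dictionary coders):

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal run-length encoder for one Bayer plane (10-bit samples in uint16_t).
// Output is (count, low byte, high byte) triples; runs are capped at 255 so
// the count fits in one byte. Bayer data is noisy, so long runs are rare --
// this is only a baseline to compare other schemes against.
std::vector<unsigned char> rle_encode(const std::vector<uint16_t>& plane)
{
    std::vector<unsigned char> out;
    std::size_t i = 0;
    while (i < plane.size()) {
        uint16_t value = plane[i];
        std::size_t run = 1;
        while (i + run < plane.size() && plane[i + run] == value && run < 255)
            ++run;
        out.push_back(static_cast<unsigned char>(run));
        out.push_back(static_cast<unsigned char>(value & 0xFF));         // low byte
        out.push_back(static_cast<unsigned char>((value >> 8) & 0xFF));  // high byte
        i += run;
    }
    return out;
}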
Rob Lohman is offline  
Old June 23rd, 2004, 08:27 AM   #387
Major Player
 
Join Date: May 2004
Location: Knoxville, TN (USA)
Posts: 358
Compression

Quote:
Rob Lohman wrote ...
definitely planning on working on each image plane seperately (as you both suggest) pending algorithm testing.
OK, now that the dust has settled a bit ... :-)

I think there are definite possibilities in compressing the Bayer image planes. I found a real-time compression library called LZO and tried it, but it wasn't quite fast enough in my initial tests.

OTOH, I was using MinGW to compile at the time. I just purchased MS Visual C++ 2003, and I will try cranking the optimizations up to "11" :-) before giving up on LZO. LZO also contains a couple of different compressors; in my initial test I used the "simple" one. Other compressors may be more optimized for speed.
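
For reference, the basic calling pattern looks roughly like this - a sketch pieced together from the miniLZO documentation (not my actual test code), compressing one buffer (e.g. a single Bayer plane) with the fast LZO1X-1 compressor:

Code:
#include <cstddef>
#include <vector>
#include "minilzo.h"   // assumes miniLZO is on the include path

static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];   // scratch memory LZO requires

bool compress_plane(unsigned char* in, std::size_t in_len,
                    std::vector<unsigned char>& out)
{
    if (lzo_init() != LZO_E_OK)            // one-time library init
        return false;

    // LZO can expand incompressible data slightly; reserve the documented worst case.
    out.resize(in_len + in_len / 16 + 64 + 3);

    lzo_uint out_len = 0;
    if (lzo1x_1_compress(in, (lzo_uint)in_len, &out[0], &out_len, wrkmem) != LZO_E_OK)
        return false;

    out.resize(out_len);   // shrink to the actual compressed size
    return true;
}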

I think JPEG2000 is also an excellent idea. I had also had the idea of developing an open-source codec that applies 16-bit JPEG2000 compression separately to the Y, U and V components (allowing 4:2:2 subsampling if desired). However, JPEG2000 is rather patent-encumbered, which could pose problems.

Recently I've also run across the Dirac project, which also uses wavelet compression (though I'm not sure whether it supports more than 8 bits per channel). Dirac is apparently designed to be unencumbered by patents.

If it's high enough quality, and possible to do in real time, I'm all for doing "lossy-but-visually-near-lossless" (LBVNL?) compression at capture time. Commercial companies like CineForm (IIRC) do this, but they have lots of full-time developers, hardware, tools, money and know-how. Given our resources we can't possibly compete with them -- and I don't want to. I think we can do some cool things, but a fully embedded FPGA/DSP system is a long way away unless someone comes on board who has a great deal of experience with it.

Dirac: http://www.bbc.co.uk/rd/projects/dirac/
LZO: http://www.oberhumer.com/opensource/lzo/
Rob Scott is offline  
Old June 23rd, 2004, 09:51 AM   #388
Trustee
 
Join Date: Mar 2003
Location: Virginia Beach, VA
Posts: 1,095
Quote:
Personally I would love to see a couple of chips on a board
with a harddisk array writing raw to harddisk
Gosh, I hate to sound like a broken record on this, but that's basically what the Kinetta is.

I'm not trying to say "don't bother, it's already been done", but if you're going to sink that much cash into all the R&D it's going to take to do FPGAs etc., then you'd better think of a way to make your solution either much cheaper, or able to fill a specific niche that the Kinetta doesn't. Because IMHO it makes no sense to spend $20K on a camera that can't do half of what a $30K camera can. That's just the nature of competition. If we can keep the FPGA "hard-disk dumping" solutions below, say, approximately $7K (with bundled Bayer conversion software), then I'd say very, very nice - you've got sales.
Jason Rodriguez is offline  
Old June 23rd, 2004, 10:20 AM   #389
CTO, CineForm Inc.
 
Join Date: Jul 2003
Location: Cardiff-by-the-Sea, California
Posts: 8,095
Rob Scott -- "If it's high enough quality, and possible to do in real time, I'm all for doing "lossy-but-visually-near-lossless" (LBVNL?) compression at capture time. The commercial companies like CineForm (IIRC) do this, but they have lots of full-time developers, hardware, tools, money, and know-how. Given our resources, we can't possibly compete with them -- and I don't want to."

CineForm is not yet a huge player in the NLE and compression world, so I would prefer that we not be seen as competition; rather, we hope to be a good partner for this community. We are a startup company developing new technologies for new markets, and this market is pretty new - that's why you see a company like CineForm here rather than more established names.

The compression model Juan proposed is spot-on and very similar to the way I would compress Bayer data in its original form. Juan is also correct that the Red and Blue channels can be compressed more than Green without any visual degradation -- good compression technologies typically exploit the characteristics of the human visual system. Here is a slightly better way to exploit human visual models and get better compression of a Bayer image.

Instead of storing all the green as one image, consider the green Bayer pairs as parts of separate channels. You now have four channels: R, G1, G2, B. For a 1280x720 image, that is four planes of 640x360. Now that they are all the same size, the compression can be optimized for the human eye by transforming the color planes into this form:

G = G1 + G2
Gdiff = G1 - G2
Rdiff = R - G
Bdiff = B - G

Using this lossless transform, G now contains a noise-reduced luma approximation of the green channel (apply only light compression). Gdiff contains green noise and edge detail (very compressible). Rdiff and Bdiff are approximations of the U and V chroma-difference channels; these can be compressed more than the native Red and Blue channels, as the "luma" component has been removed. Consider a black-and-white image -- the type of image the human eye is most sensitive to. There, the R, G and B channels contain the same data, so you would be compressing the same data three times. In a B&W scene, Rdiff and Bdiff would be mostly zero, with some edge detail (due to Bayer misalignment). Now, we aren't shooting B&W scenes (the world is color), but our eyes can be fooled. Moving the significant image data into G allows the compressor to optimize for the way you see.

All of this can easily be done on the fly in software compression; the trick is to do it fast.
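
A minimal, untested sketch of that forward transform, taking the four equations above literally and assuming the four 640x360 planes have already been split out into 16-bit buffers:

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

// Forward transform on the four Bayer channels (R, G1, G2, B) as described
// above: G = G1 + G2, Gdiff = G1 - G2, Rdiff = R - G, Bdiff = B - G.
// int32_t gives headroom for the sums and signed differences of 10-12 bit
// data. The transform is lossless: G1 = (G + Gdiff) / 2, G2 = (G - Gdiff) / 2,
// R = Rdiff + G, B = Bdiff + G.
struct Transformed {
    std::vector<int32_t> G, Gdiff, Rdiff, Bdiff;
};

Transformed forward(const std::vector<uint16_t>& R,
                    const std::vector<uint16_t>& G1,
                    const std::vector<uint16_t>& G2,
                    const std::vector<uint16_t>& B)
{
    const std::size_t n = R.size();      // all four planes are the same size
    Transformed t;
    t.G.resize(n); t.Gdiff.resize(n); t.Rdiff.resize(n); t.Bdiff.resize(n);
    for (std::size_t i = 0; i < n; ++i) {
        int32_t g = int32_t(G1[i]) + int32_t(G2[i]);
        t.G[i]     = g;
        t.Gdiff[i] = int32_t(G1[i]) - int32_t(G2[i]);
        t.Rdiff[i] = int32_t(R[i]) - g;
        t.Bdiff[i] = int32_t(B[i]) - g;
    }
    return t;
}

Each output plane then goes to its own compressor, with G getting the lightest compression and Gdiff/Rdiff/Bdiff the heaviest.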
__________________
David Newman -- web: www.gopro.com
blog: cineform.blogspot.com -- twitter: twitter.com/David_Newman
David Newman is offline  
Old June 23rd, 2004, 10:56 AM   #390
Major Player
 
Join Date: May 2004
Location: Knoxville, TN (USA)
Posts: 358
Quote:
David Newman wrote:
I would prefer that we are not seen as competition rather we hope to be a good partner for this community.
Sorry about that -- I was trying to think of an example, and yours was the first to come to mind -- mostly because you're HERE, I guess :-)

... and I agree absolutely -- there is no need for us to compete directly with you or anyone else. I think we as a community can come up with solutions -- part open-source, part commercial -- to meet some of our needs at lower price points than before.

Thanks for being involved here!
Rob Scott is offline  