View Full Version : 4:4:4 12-bit Uncompressed DVX100



Juan P. Pertierra
January 24th, 2004, 07:49 PM
Success!

After a month or two of slow progress, I managed to set up my test equipment and capture 4:4:4 12-bit uncompressed output straight from the DVX100's A/D converters.

My current test equipment is only capable of capturing a portion of a frame, but I wanted to try it out first to make sure it works. I have ordered the gear I need to perform uninterrupted capturing.

There are some side effects: the signal out of the A/D converters is raw, so all the corrections (white balance, etc.) normally done by the DVX have not been applied yet and need to be done in post. This could be avoided by capturing the signal after it leaves the big chip that does all the color correction, but that signal is 10-bit Y/C, whereas the signal I am getting now is a very fat 12-bit RGB.
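Just to illustrate (a minimal sketch in Python with NumPy, not the camera's actual math, and with made-up gain values), a white balance correction in post is basically per-channel gains on the raw 12-bit samples:

import numpy as np

raw = np.random.randint(0, 4096, (480, 720, 3), dtype=np.uint16)  # stand-in 12-bit RGB frame
gains = np.array([1.8, 1.0, 1.4])                                  # hypothetical R, G, B gains
balanced = np.clip(raw * gains, 0, 4095).astype(np.uint16)         # white-balanced 12-bit frame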

It will be interesting to see what the difference is between the 12-bit RGB and the Y/C 10-bit signals.

I am going to attempt to put together a complete frame section by section and also post the image from tape for comparison.

Theoretically, this can be done with any camera, but the DVX is a great choice because of its nice progressive-scan CCDs. Once I have continuous capture running and tested, I plan to design a small portable interface (possibly FireWire) that can be attached to the camera for portable use.

The raw image is extremely fat, something like 38 MB per second at 24fps, but this can be thinned down considerably.

More to come....

Juan

Stephen van Vuuren
January 24th, 2004, 11:37 PM
Juan:

Very interesting - love to hear and see more about how you are doing this and what results you get.

I think the DV format is the DVX100's biggest problem. What lossy or lossless codec are you thinking of using for processing the image?

Jeff Donald
January 25th, 2004, 05:58 AM
He's going to have to find a RAW converter first, or use the Y/C signal (probably the better option).

Juan P. Pertierra
January 25th, 2004, 11:22 AM
Right now, all I have is a set of 12-bit R,G,B values, which can be trivially padded to 16-bit or 24-bit.

There is more than one graphics format that works with interleaved RGB values; BMP is one example. A simple program can create a BMP from each frame just by writing a standard header (since all frames are the same size) and dumping the interleaved RGB data straight from the camera.
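For the curious, here is a minimal sketch of such a program in Python; the frame data and dimensions are placeholders, since I don't have the exact raw frame size yet:

import struct

def write_bmp(path, rgb_bytes, width, height):
    """Write interleaved 8-bit RGB data as a bottom-up 24-bit BMP."""
    row = width * 3
    pad = (4 - row % 4) % 4                      # BMP rows are padded to 4-byte multiples
    image_size = (row + pad) * height
    with open(path, "wb") as f:
        # BITMAPFILEHEADER: signature, file size, reserved, offset to pixel data
        f.write(struct.pack("<2sIHHI", b"BM", 54 + image_size, 0, 0, 54))
        # BITMAPINFOHEADER: 24 bits per pixel, no compression
        f.write(struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24, 0,
                            image_size, 2835, 2835, 0, 0))
        for y in range(height - 1, -1, -1):      # BMP stores rows bottom-up, in BGR order
            line = rgb_bytes[y * row:(y + 1) * row]
            bgr = bytes(b for x in range(0, row, 3)
                          for b in (line[x + 2], line[x + 1], line[x]))
            f.write(bgr + b"\x00" * pad)

# Toy example: a 4x2 frame of mid-grey pixels. Real frames would come from the
# capture card, with the 12-bit samples scaled down to 8 bits first.
write_bmp("frame0001.bmp", bytes([128] * 4 * 2 * 3), 4, 2)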

The independent frames can then be assembled into a video using any app like Shake for example.

The question here is what digital video format can handle this fat a signal. I could encode it as raw 4:2:2 and it would still be many times better than what comes out of the DVX100, and technically even the SDX900 (ignoring lenses and CCD size). I am not sure if there is any format out there that handles 4:4:4 uncompressed, but I guess even if I do end up throwing out or averaging some of the samples it's still ~much~ better than DV.

If I had a choice, I would rather use the 12-bit RGB, because this is a standard DVX100 (non-A) and has 10-bit Y/C output, so I am losing some precision there. In the future, if I get my hands on a DVX100A, the 12-bit Y/C output will provide a signal that's 2/3 as fat as the RGB and probably just as good, since it has no overlapping data.

Juan

Stephen van Vuuren
January 25th, 2004, 11:38 AM
What would be interesting to know is how many lines of resolution you can get off this vs. DV signal.

Juan P. Pertierra
January 25th, 2004, 11:56 AM
That is also interesting, and I could answer it if I had the data sheet for the CCDs, which I do not. But I am working on capturing the entire frame at once, direct to a computer, so I should have the exact numbers soon. I assume there are dummy pixels around the image, but as of now I am not sure how many effective lines the CCD reads out, short of a rough estimate from the 440k pixel figure.

I tried this experiment initially with a single-CCD Sony camera. I did have the data sheet for that CCD, so I wrote a program to de-mosaic the image. The really interesting prospect, though, would be to get a higher-end single-CCD cam that also takes stills; I believe those have high-resolution CCDs that read out the complete frame (in interlaced mode), which is then downsampled to DV frame size.

The problem with doing this is that, since it was a single-CCD approach, the data rate coming out of the one A/D converter, for a CCD that's almost 1 MP in size, is way too fast for my equipment to capture. If I come up with a custom FireWire solution then this can be done, but for now I have to stick with 3-CCD systems, which normally have a much lower sampling clock rate.

Juan

Ken Tanaka
January 25th, 2004, 12:36 PM
Forgive me for asking but what in the world is the point of this endeavor?

Stephen van Vuuren
January 25th, 2004, 12:48 PM
The signal captured by the CCDs has far more info, i.e. bits of data, than is recorded on tape. To record to tape, that data must be compressed and processed into the DV format, so a significant amount of color and resolution goes out the window, lost forever.

Although I'm not as much of an electronics/video engineer as Juan obviously is, the point would be to use the DVX to capture a signal in a higher format. For example, using a 720p MPEG-2 based HD format, it would be fascinating to see how the DVX100 or 100A would compare to the JVC HD cam. I think it would be a significantly better image, in both resolution and latitude.

If Juan can come up with a system to get a 720p 24p stream from the DVX100, he might become a millionaire overnight.

Or even see if there's enough info to process a true 1080p signal.

Or output to several other RGB-based formats like the ones he outlined.

Read up on the net or here about the Thomson Viper (and recent similar cameras); it uses this basic idea to get a better image than is possible with conventional HD cameras.

Juan P. Pertierra
January 25th, 2004, 01:47 PM
The main goal of this endeavor stems from the fact that with any DV camera, the signal captured is of MUCH higher quality and contains much more information than the DV signal that is actually recorded. This is true of all but the highest-end cameras. The Thomson Grass Valley Viper is the ultimate example of this concept: the camera simply outputs the raw 4:4:4 uncompressed stream, and capture/processing is done on a separate computer unit.

I don't know the exact details, but the rough estimate is that 80% of the data is thrown away in the 4:1:1 quantizing and DV compression process. This experiment attempts to get the clean signal with no degradation.

After all, you PAID for the CCDs and the electronics. Why shouldn't you be able to get what you paid for?

It would be relatively simple for manufacturers to include an interface that lets the end user get this quality; the main reason they don't is that they do not want to outshine their high-end products.

If someone doesn't believe me, all you have to do is take a look at the DVX100 service manual: it has full board layouts, completely labeled, except for the side of the board one would need to carry out the experiment I'm doing.

Although I'm enthusiastic about this, I wouldn't go as far as saying I will gain anything monetary like Stephen says, but the encouragement is appreciated :)

It does raise an interesting question: how many people would actually be willing to void the warranty on their DVXs to get this quality of output from their cameras? It would be invaluable for transfers to film and green-screen work, among other things.

Juan

Ken Tanaka
January 25th, 2004, 02:02 PM
I see.

So, in summary Juan, your goal is to turn a $3,500 camera (in this case the Panasonic DVX100) into the equivalent of a $100,000++ Thomson Viper by intercepting the raw imaging signals from the 1/3" CCD block before the camera processes them?

I seem to recall someone attempting a similar maneuver with the XL1s a year or so ago here. If you can find the thread it might offer you some pointers.

OK, well thanks for that clarification and good luck with your project.

Stephen van Vuuren
January 25th, 2004, 03:35 PM
Juan:

Based on the number of people disassembling and altering their cameras with home-grown mini35s, I would guess a pretty reasonable amount.

Of course, it would depend on how much skill is required and/or whether someone could do the board work for them. It would also help if a reasonably elegant solution were devised that retains some measure of portability, durability for location shooting, and a recording time of at least 30 minutes.

The DVX100 already has great mindshare, and the only cams that enter that discussion are the JVC HD cams (a bear to shoot with, and the camera and codec leave a lot to be desired) and the XL1s with mini35, which costs a ton and isn't exactly portable either.

So, if you take those two cams as the primary target, I think you would grab a lot of interest.

I personally think that 24fps 720p is the holy grail for indie filmmakers. More rez than PAL, none of the 25 Mb/s DV crap, no 25 fps PAL problems, etc.

Justin Burris
January 25th, 2004, 06:06 PM
Wait, I'm really confused. How are you going to get 720p out of the DVX's CCDs? Is there that much info even being captured on the CCDs?

Juan P. Pertierra
January 25th, 2004, 06:15 PM
I'm pretty sure that I can't get a full 720p frame out of the CCDs on the DVX.

However, the CCDs ~do~ have more pixel sensors than there are pixels in a DV 720x480 frame, so the raw output will have a larger frame size than 720x480 (and 720x576), but once again I don't think it reaches a 720p frame. For example, I believe the DVX's CCDs have 440k pixels, while a 1280x720 frame is over 900k pixels.

The largest advantage will probably be the fact that it will have FULL color information and no compression artifacts.

I'm going back to the lab tomorrow, I will try to get an exact figure for raw frame size.

Juan

Justin Burris
January 25th, 2004, 06:17 PM
And more latitude?

Juan P. Pertierra
January 25th, 2004, 06:20 PM
I doubt it.

I estimate that the color will be better not only because there is full color sampling for each pixel, but also because it is 12-bit and not 10-bit. The data will be more precise, but the white and black points will sit at the same values.

There might be something about the quantizing and compression that I am overlooking, which limits the effective latitude of the DV output, but I'm not sure.

Stephen van Vuuren
January 25th, 2004, 06:23 PM
But since there are three CCDs, you actually have 1.2 megapixels total. With a little "pixel shift" like Canon uses, you might be able to get close enough to 720p.

The DVX100 does 480p in DV mode, but in thin mode, I'm guessing it gets more rez. The question is how much, and how close to 720p?

Juan P. Pertierra
January 25th, 2004, 06:26 PM
That is a good point. I am going to keep that in mind when I write the software.

I believe 720p uses less chroma decimation than 4:1:1 anyway, right?

Stephen van Vuuren
January 25th, 2004, 06:31 PM
Juan:

I thought 720p uses regular HD 4:2:2 color sampling, but depending on how you capture the signal, you could work in 4:4:4 12-bit and then, after post, output standard 720p.

Here's an interesting article:

http://videosystems.com/ar/video_ccd_counting_needed/

Ben Gurvich
January 25th, 2004, 08:59 PM
<<<-- Originally posted by Stephen van Vuuren : Juan:

Based on the number of people disassembling and altering their cameras with home-grown mini35s, I would guess a pretty reasonable amount.

-->>>


Just a question: what exactly is the mini35? I hear a lot of talk about it on this forum.

Cheers
Ben Gurvich

Stephen van Vuuren
January 25th, 2004, 09:30 PM
Ben:

Check the dedicated mini35 forum here.

Andre De Clercq
January 27th, 2004, 11:34 AM
Juan, I only read your first post. I wish you lots of success, but I doubt you are going to see any difference, apart from somewhat more compression artifacts in the DV-compressed mode for very specific test pictures. Do you know why the first digitization needs 12 (up to 16!) bits? Simply because WB, gamma correction, electronic masking, knee processing... all "eat" bit equivalents. With much luck (and knowledge of imager signal processing) you will end up with an 8-bit 4:4:4 datastream with somewhat more color depth and spatial color resolution than the compressed 4:1:1 stream, and this at the expense of a lower S/N. Camera designers always try to get the "best" picture out of a given CCD structure; only the DV compression sets some limits, which in the final result are pretty well hidden.

Stephen van Vuuren
January 27th, 2004, 11:50 AM
Andre:

I agree that it's not going to be magical, but I'm more optimistic. First, it's likely the same CCD block that produces the PAL signal, so there are more lines available than get encoded into NTSC DV. Secondly, using the thin mode in progressive really strains the DV codec.

I think it's worthwhile to see what signal is available.

Ken Tanaka
January 27th, 2004, 11:57 AM
And, after all, Juan has been generous enough to destroy his camera for the cause! That's like Jonas Salk injecting himself with an experimental polio vaccine. This is a man who must be cheered-on!

Juan P. Pertierra
January 27th, 2004, 12:04 PM
Haha Ken! :)

Actually, the procedure is completely non-destructive. The DVX is easy to disassemble, and the board of interest is the first one that comes out. All the probes clip on, and they don't have much effect on the rest of the camera's operation as long as the probe impedance is high enough.

I agree with Stephen: I think the difference will be extremely noticeable. I'm not saying it's going to be ~magical~ and generate stuff that's not there, but it IS going to be better. In the simplest terms, if DV were such a good algorithm, nobody would bother buying expensive DVCPRO50 or DigiBeta equipment. If the difference between 4:1:1 (5:1) and 4:2:2 (3.3:1) is so noticeable, well...

Andre De Clercq
January 27th, 2004, 01:27 PM
Agreed, Juan: 4:2:2 at 3.3:1 has better spatial color resolution and fewer compression artifacts (both important for post processing/editing), but 4:2:2 at 3.3:1 also looks better because it goes together with larger CCDs and better lenses, and is de facto shown/evaluated in component form. It's also chosen by pros for the more robust tape format. But please go on, Juan; I am really interested in your results.

Kaifoong Kok
January 29th, 2004, 06:24 AM
I am so surprised to see Juan say around 80% of the data is lost in DV compression! I'm keen to see the results!

Andre De Clercq
January 29th, 2004, 09:25 AM
It's even much more than 80%. If you compare to the already crippled 4:1:1 or 4:2:0 (PAL) at 8 bits/sample, then indeed the DV compressor only keeps 20% (5:1 compression). If you start from Juan's 4:4:4 at 12 bits/sample, almost 99.5% (~4 Gbits/sec vs 25 Mbits/sec for DV) of the picture content (in terms of data rate!) is "thrown away" in the DV concept.

Taylor Moore
February 1st, 2004, 04:50 PM
Juan,
Very excited about your undertaking, as I sure would like 4:4:4 12-bit for compositing and possible HD applications. If it cannot get to the full size of HD, I'm sure a tool like S-Spline can take it the rest of the way.

Great work.

Will Boggs
February 2nd, 2004, 10:47 PM
Juan,

What is your idea for an output? Would you use the Y/C, or do you think the DVX could be slightly modified to house an SDI or RGB (BNC type) connector? I am very interested in this process. Right now I am shooting a lot of content that would be much better suited to 4:2:2 or better color. I am running numbers and am thinking of taking the leap to a 4:2:2 system. I am in no rush, but your developments could drastically change the financial hole I would need to jump into if I can just modify and use my DVX rather than buy a 4:2:2 camera. Keep on truckin'!

Will Boggs

Juan P. Pertierra
February 2nd, 2004, 11:02 PM
I know for a fact that a FireWire interface can be devised (along with a driver) that would basically allow you to record the signal onto any computer.

I am not deeply familiar with how the SDI standard works; all I know is that it is unpacketed. I would think that yes, it is possible to adapt this to a single or dual SDI interface. However, since the output is most likely not going to comply with any known standard (non-standard frame size, 4:4:4, no compression), I doubt an SDI output would be of much use, because no tape deck would understand it.

So this is why my goal is a FireWire interface and a custom software driver. That way, you can record directly to hard disk and then edit as you wish. I am also planning an option for decimation in this interface, though probably not compression in the early stages; that way you can choose to record 4:2:2 or 4:1:1 to save space.
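To give a rough idea of what that decimation step involves (just a sketch in Python with NumPy, using the standard BT.601 coefficients on a random stand-in frame), going from 4:4:4 RGB to 4:2:2 is a color-space conversion plus averaging each horizontal pair of chroma samples:

import numpy as np

def rgb444_to_ycbcr422(rgb):
    """rgb: float array of shape (height, width, 3), values in 0..1."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b     # luma stays at full resolution
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr =  0.500 * r - 0.419 * g - 0.081 * b
    cb422 = (cb[:, 0::2] + cb[:, 1::2]) / 2     # 4:2:2 - halve horizontal chroma resolution
    cr422 = (cr[:, 0::2] + cr[:, 1::2]) / 2
    return y, cb422, cr422

frame = np.random.rand(480, 720, 3)             # stand-in for a captured frame
y, cb, cr = rgb444_to_ycbcr422(frame)
print(y.shape, cb.shape)                        # (480, 720) (480, 360)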

I'm waiting on a part right now, but will post updates here.

Juan

Rodger Marjama
February 3rd, 2004, 08:01 AM
Juan,

First off, let me salute you for the remarkable effort you are putting into this quest. I too would love to see success in the direction you're pushing. I do, however, have serious doubts about you ultimately reaching a usable goal. Considering the need to bypass ALL camera features to grab the signal directly from the CCDs, I just wonder what is left? If the ultimate goal is to develop a usable camera, where will all these features be returned, and how?

I almost think this effort would be better served by starting with a manufactured industrial CCD camera and then, once proven, looking for ways to develop your work into a usable camera. At best, all I can see this becoming is an interesting experiment in and of itself. Whether or not something useful shows up along the way is, of course, anyone's guess.

If you are determined to "build" a processing facility to handle this 40 MB per second data stream, and all that this entails, you might consider the alternative of not bypassing the camera's compression scheme, but rewriting it, so to speak.

A number of us had a discussion a year or so ago over at 2-pop about the compression methods used in today's DV codec and how they process black content in the in-camera letterbox mode. We wondered whether the upper and lower black bars are actually being compressed or are ignored entirely by the algorithm. The consensus was that they are still processed. If so, this is one area where considerable quality improvement could be gained by reworking the codec's algorithm to exclude this area during the compression cycle.

Just some thoughts here. Again, if you continue on your course (as you most likely will), I hope you achieve your goals. If not, then I hope you find some useful knowledge along the way and put it to some practical purpose.

Good luck and good fortune.

-Rodger

Juan P. Pertierra
February 3rd, 2004, 08:30 AM
Hi Rodger,

I appreciate the input.

First of all, I already have a 40 MB/sec capture system, which is commercially available for a relatively modest price. I am using this only as a test platform; once I get the prototype working, I intend to build a small FireWire interface/driver that easily handles this data rate.

Just to define 'modest', my entire test setup including the DVX does not exceed $5,000. The final device should not cost more than $50 to make.

Second, the experiment does not necessarily bypass all camera functions. There are two options, as described earlier in this thread: the 12-bit 4:4:4 raw RGB output from the A/D converters, and the 10-bit Y/C output (12-bit for the DVX100A) from the main camera processing chip.

I am starting with the 12-bit RGB simply because I want to try to get the cleanest signal for the experiment. An almost identical setup can be used with the Y/C output. The difference is that the Y/C output has most of the camera corrections applied, yet it is still uncompressed and non-decimated.

Why am I doing it in this order? Simple: the 10-bit Y/C output needs less bandwidth. If I can get the RGB output working, I can not only compare the two and see the quality difference, but it is also guaranteed that the system will work with the smaller Y/C data rate once it is proven on RGB.

About using an industrial camera: have you seen how much they cost? This is more cost-efficient, and I get great SD CCDs. Besides, I would like this to be something everyone can use, not some monster made from hard-to-find parts that sits in a closet.

Rodger Marjama
February 3rd, 2004, 09:15 AM
Good luck Juan. As I said, I hope you achieve your goals.

BTW, I am buying the DVX100A this week. I have been trying to buy one for over a year, but with life's struggles, the possibility has not presented itself until now.

If you succeed, I'll be waiting in line like everyone else.

Take care.

-Rodger

Rob Lohman
February 3rd, 2004, 05:57 PM
The DV algorithm does indeed compress the black bars. Why? Because it is a constant-bitrate encoding that always applies the same amount of compression to the same number of macroblocks.

Lucia de Nieva
February 7th, 2004, 02:14 AM
Ola Juan,


Congratulations on your challenging plan! I wonder if the DVX sports a programmable DSP or the DV25 hardware codec Panasonic used in the past. Those chips could be cascaded, so DVCPRO50 machines had two of them, while DV100 (HD) systems were built around a matrix of four. I assume you know that FCP 4 already supports DVCPRO50 over FireWire, so the question is whether it would be possible to alter the digital data, and hence the format, by means of the firmware, assuming there is a DSP capable of transformations beyond 4:1:1. This probably wasn't your exact intention, but imagine the benefits of a mobile DVX writing DVCPRO50 streams to a FireWire hard drive. From my experience, the colour compression has the highest impact on the image's look. Detail compression artefacts are normally inconspicuous, but the typical contrast halos strike the eye even in DigiBeta recordings. Naturally the best solution would be the 4:4:4 uncompressed way, but I am a bit afraid the D1 words won't be satisfied with FireWire, let alone the drives. So what is the state of the signal at the Y/C stage you mentioned? Is it still 4:4:4?

Juan P. Pertierra
February 8th, 2004, 12:42 PM
Lucia,

Thanks for the input. What do you mean by 'D1 words'? Do you mean the bandwidth of the uncompressed 4:4:4 signal?

From my rough calculations, it seems like any modern internally mounted hard drive should be able to keep up, as long as that's the only thing it is doing. I know FireWire 800 can handle it at 24fps, although FW400 is right on the edge and might not be able to maintain the data rate needed.

In the worst case, the portable interface can be designed to connect to triple FireWire or Ultra-ATA and record each color channel on a different hard drive, carried in a shoulder bag or something of the sort.

However, I am pretty confident that any computer with FW800 will do.

Reprogramming the on-board logic is indeed a very interesting question, the main problem being that in order to do that, I need tons and tons of technical information about the ICs used in the camera and how they are used. This information is usually not easy to come by. My approach relies on components present in all cameras nowadays, so it's more straightforward in that sense. Yes, it would be nice to be able to implement, say, DVCPRO50 right in the camera, but that would be a much more expensive and time-consuming endeavour, and my approach should still yield a cleaner signal.

Juan

Lucia de Nieva
February 8th, 2004, 01:52 PM
Juan,

The uncompressed signal you are getting from the A/D converter stage would most probably be some form of D1 data stream: 4:4:4 SD at 8 bits nominally, and 12 bits in your case, used as an intermediate step for CGI and the like. However, my point was that there are already simple FireWire 400 hard-drive recording devices designed for the very purpose of direct-to-disk streaming, and a DVCPRO50-over-FireWire protocol. Let us assume you could get a 4:4:4 8-bit quantized signal at the Y/C stage, which should equal about 30 MB/s. The best 7200 rpm single drives can sustain that in combination with the newer FW400-to-IDE bridges. For PAL, the total recording time on a cheap 160 GB drive would be about 90 minutes. Furthermore, I expect the enhanced headroom of the 12-bit resolution to make absolute sense during the image-taking process, but to be less necessary for the recording itself. Think of the DVD as a principle: high-quality input (film) and thus high-quality output in the dedicated environment (TV). So if the FireWire protocol could transport the untamed Y/C data, we would have a simple and mobile means of capturing undistorted images for mattes or compositing.
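The arithmetic behind those figures, as a rough check (Python, assuming 720x576 PAL at 25 fps, 3 bytes per pixel, and 160 GB in decimal gigabytes):

bytes_per_second = 720 * 576 * 3 * 25          # 4:4:4 at 8 bits/channel, PAL, 25 fps
print(bytes_per_second / 1e6)                  # ~31.1 MB/s
print(160e9 / bytes_per_second / 60)           # ~86 minutes on a 160 GB drive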

Juan P. Pertierra
February 8th, 2004, 02:04 PM
Ah, i get what you're saying...

I think you might be overestimating the output from the A/D converters. I am not knowledgeable about the technical details of how D1 is encoded, but the output from the A/D is completely dependent on the output of the CCDs. For example, it has a non-standard frame size along with many dummy and black pixel values. From the little I know, D1 SD is a standard NTSC frame size, so right there it wouldn't work. There are several other reasons, but I won't go into them because, like I said, I'm not sure what the D1 standard implies at the lowest level.

The Y/C 10-bit output from the main chip might be a better bet. However, I am unsure at this stage whether it is still 4:4:4. I ~am~ sure, however, that it is still uncompressed and contains all the camera corrections.

I guess on a DVX100A, even if the Y/C is already decimated to 4:1:1, it would be uncompressed 12-bit color, so it would still be worthwhile due to the much higher color precision and the lack of compression.

Juan

Taylor Moore
February 8th, 2004, 02:45 PM
<<<-- Originally posted by Juan P. Pertierra : I ~am~ sure, however, that it is still uncompressed and contains all the camera corrections. -->>>

Will this approach still retain all the great aspects of the DVX, for instance frame rate, shutter speed, gamma, etc.?

Juan P. Pertierra
February 8th, 2004, 02:52 PM
Either approach (RGB straight from the A/Ds, or the corrected Y/C) will have the progressive frame rate and shutter; only the Y/C would have the gamma corrections applied to it.

The idea is that with 12-bit 4:4:4 uncompressed footage, it is a trivial matter to load the frames into your favorite editing program and do gamma corrections as you like.
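As a trivial sketch of such a correction (Python with NumPy, an illustrative gamma of 2.2, and a random stand-in frame; a real grade would of course be done in the editing app):

import numpy as np

linear = np.random.randint(0, 4096, (480, 720, 3)) / 4095.0   # stand-in 12-bit linear frame, 0..1
graded = np.power(linear, 1 / 2.2)                             # simple gamma 2.2 curve
out_8bit = np.round(graded * 255).astype(np.uint8)             # scaled down for display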

Those of us who have tried to accomplish a film look with a camera other than the DVX know it's pretty simple. And with a full-color, uncompressed frame to start with, the results will be so much better than starting from DV.

Juan

Lucia de Nieva
February 8th, 2004, 05:22 PM
Juan,


Can you give me the name of the main chip in the DVX? I might be able to check my estimations, i.e. whether it is still the old DV25 hardware codec with fixed decimation or a new, more versatile DSP. Call it a hunch, but I wouldn't be astonished if the DVX uses the same DSP as the 900, for instance. In recent years, LSI development costs have significantly surpassed manufacturing costs; it is simply more economical to have one chip design for all purposes. From my adventures with various editing suites and gear, chroma compression can become really frustrating, especially when matching real-life shots and composited elements, and it is highly responsible for the video look. But don't take me wrong, it's definitely worth a try. Thank you again for your research and for sharing your insight.

Juan P. Pertierra
February 8th, 2004, 05:36 PM
It is an Atmel microcontroller, so it is hard to know what it does without somehow downloading the code it was programmed with.

Lucia de Nieva
February 9th, 2004, 07:32 AM
Very nice, so at least it is not the old all-in-one codec. What's the exact part name? By the way, if you happen to have a scope you might be able to check the sampling frequencies at the Y/C stage. These should be around 13.5 MHz for 4:4:4, and 6.75 MHz or 3.375 MHz on the chroma channels for 4:2:2 or 4:1:1 respectively.

Juan P. Pertierra
February 12th, 2004, 10:45 AM
Ok, this is a question to everyone who is interested in this idea.

I've gotten a ~lot~ of emails from people who are interested in doing this to their DVXs, asking how much it would cost. The reason I am writing this is that I am at a stage where I have to make a choice: whether to purchase one last piece of equipment for test purposes, or to actually design this to be encased in the DVX itself.

So I guess my question is: how many of you would be interested in having this installed on your DVX, and how much would you pay? The assumptions are that it provides 4:4:4 12-bit uncompressed video at the full CCD frame resolution, at no less than 24fps (progressive, of course), and that the DVX will work as normal, completely assembled, with the only changes being a ribbon cable coming out the back and hooking up to a small box mounted on the camera itself.

Note that the $25,000 (plus lenses) SDX900 puts out 4:2:2 with 3.3:1 compression at a standard frame size. The setup I am describing here would provide much better quality video, the only differences being the glass and the CCD size (correct me if I'm missing anything; I know there are extra gamma settings, etc., but I've never used an SDX).

Juan

Rodger Marjama
February 12th, 2004, 11:26 AM
Juan,

If you can produce what you're saying you think is possible, then money isn't (and won't be) the issue. What is at issue is whether or not it's possible.

I would therefore suggest you provide a working prototype, or at least prove your theory with an actual working design, even if it uses bulky parts that would never sell.

I would further suggest that those who want this to work so much that they are entertaining some form of financial support be very cautious. At this point, this is only a concept and in no way a reality.

Again, I hope this is possible for you, Juan, and that you succeed in your quest and develop a working model or prototype.

-Rodger

Rob Lohman
February 12th, 2004, 11:53 AM
I'd say the major problem is storing all of this. Even if you can get a RAID array to store the information that fast, you would have to carry your PC with you on your shoots. And that's not to mention how to edit it, but that's a more easily solved problem.

A working prototype would definitely be the way to go, also to see what the difference in quality would be. Put the camera on a tripod and, from a highly detailed scene, take a frame grab of the FireWire stream and at the same time one from the 4:4:4 12-bit output. That way you can easily see in any paint program what the gains are.

What camera controls will be missing with such a system? (Gain and white balance at least, I'd assume.)

Stephen van Vuuren
February 12th, 2004, 12:20 PM
Juan:

I encourage you to move full speed ahead. Based on what people pay for increased rez on the DVX100 and other cams via anamorphic adaptors and/or software (Magic Bullet's deartifacting etc.), I would think $1000 would be an easy starting price point for a system.

I also think that with hard disks beginning to replace tape (the new 2.5" drives from Seagate would be ideal for this), the DV codec's days are numbered.

It's the early part of this change, so your timing is just right.

Plus, some people are willing to drop $8,000 on a mini35 setup, or pay rental rates approaching those of 16mm film cameras, which I think is too much to spend for DV footage.

Juan P. Pertierra
February 12th, 2004, 01:01 PM
I understand why one might think this is closer to concept than reality, but that is not true. It is very close to being reality, and this is why:

Right now I have a digital capture system that captures at more than the speed necessary to take in all the data, and I am capturing sections of frames, but with very reduced precision, because of one simple problem: in order to get the full signal, I have to probe around 36 tiny pins on a surface-mount chip, and I only have a few surface-mount probes. So the capture system IS capturing all 36+ lines, but most of them are hooked up to a dummy signal because I don't have the probes.

The last piece I am referring to is a chip clip that snaps onto the surface-mount chip, allowing me to probe all 200+ pins at the same time and thus complete the test setup. I know that the card captures this fast; once I have the clip, it's just a matter of assembling frames from the R, G, B data, which is straightforward.

The reason I ask is that any clip of this kind is expensive, but the smaller kind that can stay on the chip with the camera closed up is even more expensive.

I think I'm going to go all out; even if I don't end up doing anything else with it, I can leave the setup on my own DVX.

Rob: I HIGHLY doubt you need a RAID array or anything that complex to capture this data. The data rates involved are well within what FireWire 800 supports. All you need is a laptop with a FireWire 800 card. I've designed FireWire interfaces before, and I'm positive this is not a hurdle.

Juan

Rob Lohman
February 12th, 2004, 02:30 PM
Well, let us do the math (correct me if something goes wrong). We are using 12 bits per channel. We have one sample per channel per pixel, which is 3 * 12 = 36 bits, or 4.5 bytes per pixel. Assuming you store the last 4 bits in a half byte and then continue with the next pixel (so 2 pixels pack into exactly 9 bytes), we multiply 4.5 by 720 and then by 480. This results in 1,555,200 bytes per frame, or 1.48 MB/frame. Now, assuming we do 24 frames of that each second, the data rate is 37,324,800 bytes per second, or 35.59 MB/s (megabytes!). Even if you were to drop to 3 bytes per pixel (8 bits per channel), this would still be 23.73 MB/s.
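The same numbers in a few lines of Python, for anyone who wants to plug in other frame sizes or bit depths (assuming the packed storage described above):

bits_per_pixel = 3 * 12                              # three channels, 12 bits each
bytes_per_frame = 720 * 480 * bits_per_pixel // 8    # 1,555,200 bytes (~1.48 MB)
print(bytes_per_frame * 24)                          # 37,324,800 bytes/s
print(bytes_per_frame * 24 / 2**20)                  # ~35.59 MB/s at 24 fps
print(720 * 480 * 3 * 24 / 2**20)                    # ~23.73 MB/s at 8 bits/channel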

Which hard disk will keep up with that *sustained* without dropping frames? Especially through a laptop! You can't use any internal laptop hard disks because they definitely aren't fast enough, and I think you'd be hard pressed to find any ATA hard disk that can keep up with that sustained rate. The FireWire protocol itself is fast enough; even a 400 Mbit connection should just about be able to do it (although probably not, due to overhead).

Let's just assume you find a SCSI, S-ATA, or 10,000 RPM ATA hard disk (just out) that can keep up with this; what about power consumption for it all? Personally, I'd much rather have the hard disk mounted directly on the camera, without the laptop.

Last time I checked, there were chips and boards available with complete ATA and FireWire interfaces, so this probably wouldn't be that difficult to do. I'd bet you would get a far better transfer rate out of that hard disk without going through a "real" computer and the extra overhead that brings.

Now, please do correct me if you think I mixed something up or am missing some points. I'm not trying to bring the project down or anything; I just have my "thoughts" about the actual real-world implementation at the end. That's all.

John Gaspain
February 12th, 2004, 02:42 PM
cool thread Juan!

I wanna do this to my DV953; it would be great if you made a PDF tutorial with some pictures of the chip and probes. I would love to help out in any way I can, because I was thinking about this exact type of project. The miniDV format should be replaced by a hard drive, simply because the CCDs are outperforming the tape's capabilities!

Also my home PC could handle the data rate, as it is the baddest ass mofo thing that ever existed!