View Full Version : 4:4:4 12-bit Uncompressed DVX100



Jon Yurek
April 19th, 2004, 09:02 PM
The most they could conceivably get you on is a DMCA violation for circumventing copy protection (which is what MS used against the Xbox Linux project). And since there's no copy protection you're trying to circumvent, you're free to modify the equipment you legally own.

As for the actual mod... I imagine (although I can't say for certain, since it's not even done, let alone set in stone) it would be a piggyback chip, which either clips on or solders on. A clip-on would be dead simple; you'd just have to know which chip to clip. A solder-on would be harder, but not impossible.

Juan,

Are you using the camera's clock to drive your chip and the capture card, or are you supplying your own to sync up and drive it? If you have your own, it could be that your clock is *slightly* out of phase and ends up systematically off on the important bits (producing the noise), but correct on the surrounding bits, which wouldn't be subject to that sync error.

Juan P. Pertierra
April 19th, 2004, 11:49 PM
I am using the clock already on the camera, so I guess either the clock is getting degraded somewhere, or the actual data is. However, since switching the capture card sync from rising to falling edge increases the speckles, I think it's the clock.

Robert Martens
April 21st, 2004, 09:14 AM
Ah, I see, good points. Guess it's not such a big deal after all.

<<<-- Originally posted by Isaac Brody : Don't you basically just void your warranty when you open the unit up?>>>

If you were referring to my fiddling with my camcorder, well, the warranty had expired before I did this disassembly, so there was no worry. Still risky, to be sure, but I have more curiosity than common sense.

Don't worry, though, I wouldn't do this to hardware I hadn't purchased myself; I won't be taking apart anyone else's cameras just to see what's inside. :P

Steve Ipp
April 22nd, 2004, 10:34 AM
Juan, much respect to what you're doing. Keep it up!
I'm sorry if this comment distracts you from the actual problem, but could it be that the "problem" isn't really present?
The thing is, CCDs transfer charges to registers far from the actual sampling locations (pixels). Unlike CCDs, CMOS sensors are better suited to sampling a value at each pixel, but raw values from those chips still require cunning algorithms to improve the quality of the final image.
My wild guess (I am not a professional) is that the Panasonic encoding chip also has a "retouching" facility built in.
Forgive me if I'm off point, just trying to help.

Juan P. Pertierra
April 22nd, 2004, 10:43 AM
Steve,

Thanks for your input.

I've considered that possibility, and so far the biggest supporting evidence for it is that the speckles are very easy to get rid of with a smart median algorithm...something like using the magic wand in Photoshop and then performing a median on those specific pixels using the surrounding values (like a mosaic CCD).

However, I think there is far more evidence so far to support otherwise, like the fact that the noise increases if I change the sync edge of the clock but stays in the same general areas, and the fact that I've seen the noise slightly reduced by using different grounding techniques.

I'm keeping an open mind, however.
-------
Also, to everyone else, I have a test this Friday along with a robot vision project, which is why I've been disconnected; once I get done with that next week I'll be back to work on this...

Just to put something else out there, I have uploaded the Photoshop CS file for the last low-speckles <g> frame I got. I think it's the best frame so far, because it was very dark to the naked eye yet it captured incredible detail in the sky and the dark areas.

This file is completely raw; I just put the layers in, and I think I might've shifted them to where I thought they were aligned. If you guys want to post any of your color corrections that'd be awesome...even though this frame doesn't come close to using all of the dynamic range available.

http://expert.cc.purdue.edu/~pertierr/cap5_RAW.psd

Oh, and Steve is definitely right that ~some~ processing is done to the raw frame...I'm not sure if this frame is slightly off focus, but it looks much better if you perform a Sharpen operation in PS...the color came out surprisingly good for a RAW frame.

Obin Olson
April 22nd, 2004, 01:04 PM
OMG Juan...I can do stuff with that image in PS that I have only DREAMED of!!!!! PLEASE PLEASE stay on this project...I have CASH in hand waiting for you to finish it!!!



Guys, you gotta download that image and play with it....it's AMAZING how far you can push it!!!!

Obin Olson
April 22nd, 2004, 01:15 PM
Juan!!! Got some news for ya! I just went outside and shot a DV test with my DVX100, VERY dark like your test.....guess what? I pushed it really hard in Premiere Pro color curves and it has the SAME dots yours does!!!

Everyone do this and see if some cameras have more dots than others....I think it's bad CCDs that ONLY show up when you shoot really dark and push it really hard...take a look:

http://www.dv3productions.com/uploads/dv_cap.jpg

Obin Olson
April 22nd, 2004, 01:20 PM
BTW, every frame has dots in a new location....very random, sometimes more dots, sometimes fewer.

Obin Olson
April 22nd, 2004, 01:50 PM
More info...check this out, Juan...BOTH are INVERTED and I took the RGB curve all the way DOWN...ahh, BOTH have RED, GREEN and BLUE bad pixels...and they look like the same type of pixel....the DV version is just muddy and Juan's is VERY CRISP because he has NO compression....maybe????

http://www.dv3productions.com/uploads/inverted_jaunRAW_16bit.jpg

http://www.dv3productions.com/uploads/inverted_dv_8bit.jpg

I hope this will help you find the answer...can you write some code that will "search and destroy" bad pixels? Better yet, try exposing your camera with a good light and see if the pixels go away....

Obin Olson
April 22nd, 2004, 02:09 PM
Here is what we get if Juan can make this MOD work for our cameras:

http://www.dv3productions.com/uploads/juancap_colorwork.jpg
http://www.dv3productions.com/uploads/dvcap_colorwork.jpg

I did the same color work to both pics

here is my original image:

http://www.dv3productions.com/uploads/dvcap_original.jpg

Robert Martens
April 22nd, 2004, 03:13 PM
Downloaded that PSD file, opened it in combustion (I don't have PS), and tinkered a bit. A little gain boost, Median noise filter with a .5 radius--completely eliminating the noise--saved the result, cropped and resized it, and boom! Looks pretty good by my standards. The JPEG compression artifacts help alleviate the smoothing done by the Median operator, and aren't too ugly.

You sure ain't kidding about the details in there. :)

I'd show off my result, but the FTP password for my website that I've been using for the past few months no longer works, courtesy of the geniuses at CI Host (the same people who changed my email password a month ago without telling me). But that's a whole other post.

Obin, I can't seem to access the URL you posted for the "juancap_colorwork" image. The other two show up just fine, but that one gives me a "Page cannot be found" error.

Stephen van Vuuren
April 22nd, 2004, 03:14 PM
Obin:

I can't get all your links to work. And do you know how to change them to be clickable? If not, let me know or search here for the code.

Juan P. Pertierra
April 22nd, 2004, 03:54 PM
Robert,

Email me the image and I will post it on my website. I just got extra space added to it, so now I can post some clips when I'm done with all the work I have this week.

The link to one of Obin's files doesn't work because my name is spelled wrong in the filename :) so it is:

http://www.dv3productions.com/uploads/jauncap_colorwork.jpg

Robert Martens
April 22nd, 2004, 04:02 PM
It's on its way, Juan, should be there soon.

I've contacted tech support for my website, and usually hear back within a day or so; as soon as I'm able to log in, I'll take the job of hosting my file off your hands.

Thanks a bundle!

Robert Martens
April 22nd, 2004, 04:39 PM
Talk about fast response -- got the correct password, uploaded the file, you can view my image here:

http://www.gyroshot.com/images/juancombustion.jpg

Update -- I've uploaded an additional (larger, better looking) version of the image; since everyone else seems comfortable with uncompressed TIFFs, I figure what the heck? You can still access the JPG, if you like.

http://www.gyroshot.com/images/juancombustion.tif

I would also like to know what's with the green and magenta edges on everything in the source photo...any ideas? It's especially noticeable on the sidewalk, as it passes behind the vertical bars of the guardrail. Does this have something to do with the alignment of the color channels, perhaps?

Juan P. Pertierra
April 22nd, 2004, 05:10 PM
Here is my somewhat 'corrected' version; I couldn't get rid of all the speckles using Photoshop alone...

http://expert.cc.purdue.edu/~pertierr/cap5_RAW.tif

Obin Olson
April 22nd, 2004, 06:57 PM
Juan, what do you think? Bad pixels, or something else?

Brett Erskine
April 22nd, 2004, 07:10 PM
This is way past my realm of knowledge, but when I mentioned this pixel problem to someone much more experienced, he said that manufacturers are allowed to have a certain number of dead pixels in a chip when they make them. Apparently they have a means of masking these dead pixels from showing up, but since you're pulling raw data off the CCDs you might be bypassing that masking and revealing the dead pixels.

Again, I don't know if this is correct. I'm only passing on the info that was told to me. I'm not even sure technically what they do to mask the pixels and whether that masking would or wouldn't come into play when you are capturing raw frames. Any thoughts?

Obin Olson
April 22nd, 2004, 08:59 PM
I would say yes, BUT the pixels are moving around in each frame...a dead pixel stays dead in every shot....

Dan Vance
April 22nd, 2004, 11:41 PM
Juan,

I've tried to read through all the posts to come up to speed on this thread, so maybe I've missed something that's already been said, but I'm not sure you're going to be happy with your results when you get where you're going, because:

You're starting with 4:1:1 and trying to get 4:4:4. But that first 4, which you already have, represents luminance samples, and that's where practically all your "resolution" is. So you can expect to get significantly better color rendition, but NOT better resolution. The resolution of your Y output (of the Y/C output) is the full resolution that the camera is capable of. There is essentially no resolution loss in the Y signal going from the chips to the Y output. That will be a little better than the equivalent compressed DV luminance, but not a whole lot. And of course you can get at that without going inside the camera.

Also, at one point you said you're limited "only by the glass and CCD size"!! Yes! But the overall performance can only be as good as the worst component. The DVX100 lens probably has a resolution of 500 lines or so, or at the very best, 600 lines. And who knows what it does to color? It's good glass FOR THE MONEY, but relative to pro lenses, it's pretty mediocre. So even if the CCD was capable of 1000 lines, the lens limits you to 500. You're improving the milkshake, but trying to drink it through the same tiny straw. You also seem to be neglecting the bandwidth limitation of the NTSC signal, which is 540 lines, and there's no way around that as long as the signal is NTSC.

So, you're doing a lot of work for a little more color info. Is it worth it? Maybe. I certainly would be the last person to dissuade anyone from experimenting. But the physics say it might not do what you're expecting.

Also, to clarify some other posted info: PAL CCDs and NTSC CCDs are not interchangeable, because the pixel aspect ratios are different. And "5:1" compression does NOT in any way "throw away 80% of the information." Compression is not truncation; it's an analytical processing method of getting rid of as much UNNEEDED info as possible.

As you've mentioned, the Canon XL1 is another candidate, and at least you could put better glass on that one, but unfortunately it doesn't do 24P. I don't mean to be throwing cold water on this, and I think you should continue and see how good you can make it, but just be aware of the limitations of the camera and don't get your hopes up too high.

Luis Caffesse
April 22nd, 2004, 11:47 PM
Actually, you bring up some good points.

But what about the perceived difference in dynamic range?

Juan's modification may not be increasing resolution, but getting a 12-bit signal vs. the compressed 8-bit signal should count for something, shouldn't it?

-Luis

Juan P. Pertierra
April 23rd, 2004, 12:11 AM
Dan,

Thanks for your input. However, I think there are some significant inaccuracies in your post...

1.) I am NOT starting with 4:1:1 and going to 4:4:4. I am literally capturing 4:4:4 straight out of the CCDs, at the full sampling rate of the A/Ds.

2.) You're right that the lens on the DVX might do 600 lines at the most. However, AFAIK that doesn't really pose a limitation, since the CCDs have 494 lines.

3.) I AM getting a little more resolution, because I am getting the full CCD frame. It's not much, but if there is one thing I have learned so far, it's that the more information you start with before correcting your video for a specific look, the better.

4.) I'm not really sure why you're talking about NTSC signals, because I am not dealing with that at all. What I am getting is a non-standard frame size, and it's up to the person working with the footage to decide what medium it goes on. If he/she wants to print the entire 773x494 frame to film, there is no NTSC standard compliance involved ~anywhere~.

To top it off, I think a lot of people will agree that the very significant increase in dynamic range, plus the fact that the 12-bit RAW output can display at least 48 times as many colors as the 8-bit DV output, are worth it on their own.

Finally, I recall some people doing up-rezzing experiments with the standard DVX output and getting results comparable to (with better color than) the JVC single-chip HD cam. I am almost 100% sure that up-rezzing this raw full-color footage will yield even better results while we wait for a 3-chip HD camera to apply this to.

Oh yeah, and NO COMPRESSION ARTIFACTS. Hate those big blocks on the reds and blues, and have you ever tried pulling a chroma key? You can say that the DV compression only gets rid of unneeded data, but I think very few people here will believe you.

Cheers!
Juan

Peter Plevritis
April 23rd, 2004, 12:39 AM
Just take a look at the images.

They are very impressive. And I believe these are only 10-bit instead of 12.

The dynamic range is outstanding. The added resolution is a great plus. All great for going out to film; if that is what you want.

Hopefully Juan can eliminate the digital specks and move on to getting this thing production-ready (at least I hope he plans on that).

John Cabrera
April 23rd, 2004, 12:49 AM
Juan,

I think what Dan meant was that you're starting with a camera that's already capable of 4:1:1 and you're trying to get 4:4:4 out of it (just to point out that the 4 is already there for you in luminance, and to make his point about resolution). I don't think he meant that you're trying to turn a 4:1:1 signal into a 4:4:4 signal.

Your other counterpoints do make sense, though.

Keep going with this... I check this thread every day. You're a true innovator, and aside from what this could do for indie filmmakers, the ongoing story itself makes for great drama. :)

John

Thomas Smet
April 23rd, 2004, 01:41 AM
I just took a look at the U and V channels of the latest sample image, and we are clearly getting real raw 4:4:4 frames instead of an upsampled 4:1:1 frame. The separate color channels are crisp, clean, and accurate. If you work with high-end visual effects, this is the ultimate feature I would want from this mod, and so far it is working better than I expected. The DVX100 might not be better than a 1/2-inch or 2/3-inch camera, but I would take 4:4:4 frames any day over 4:1:1 or even 4:2:2 frames. If you want to check out how much better 4:4:4 is, change your color mode in Photoshop to Lab color. The a and b channels carry the chroma (much like U and V) and L is the luma channel. Compare a raw frame to any DV frame and you will see the clear difference.
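
If anyone wants to run the same check outside of Photoshop, here is a minimal sketch that does roughly the same comparison. It assumes Python with Pillow and NumPy installed and 8-bit exports of the two frames; the file names are just placeholders, not files from this thread's exact pipeline.

import numpy as np
from PIL import Image

def chroma_detail(path):
    # Convert to YCbCr and measure the mean gradient magnitude of the chroma
    # planes. Material decimated to 4:1:1 and upsampled back should show
    # noticeably less fine chroma detail than a true 4:4:4 capture.
    ycbcr = np.asarray(Image.open(path).convert("YCbCr"), dtype=np.float64)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    energy = lambda c: np.mean(np.hypot(*np.gradient(c)))
    return energy(cb) + energy(cr)

print("raw 4:4:4 frame:", chroma_detail("raw_frame_8bit.png"))  # placeholder name
print("DV frame grab:  ", chroma_detail("dv_cap.jpg"))          # placeholder name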

Rodger Marjama
April 23rd, 2004, 08:16 AM
<<<-- Originally posted by Obin Olson : i would say yes BUT the pixels are moving around in each frame...a dead pixel stays dead in every shot.... -->>>

A pixel doesn't necessarily have to be "dead" to be bad. Most likely this is the root cause for these defective pixels and Juan will need to do a bit of research on pixel mapping. Anybody who has played with the 3x3 grid sliding puzzle will understand how you would need to account for a bad pixel in the 773x494 grid. Any pixel out of spec will need to be mapped out and either the x or y grid line will shift 1 pixel over and be remapped to make up for the bad pixel. There is obviously a "point of no return" where this cannot be done without excessive visual distortion, so during CCD QC testing any chips with more bad pixels on either an "X" or "Y" axis line would not be used. With the actual pixel count available on the CCDs, you have 53 "X" axis and 14 "Y" axis pixels available for remapping. Whether or not they would shuffle pixels over the full 53 amount I wouldn’t know, but the 14 for the "Y" axis should not be any problem I suppose.

Food for thought.

-Rodger

Milosz Krzyzaniak
April 23rd, 2004, 08:27 AM
Hello. I haven't written in a long time because I was away. As I can see, the problem still occurs and there has been no step forward...

Although I don't know the exact solution, I can give some thoughts in terms of methodology.

The first obvious thing is that the problem touches only the R or G component. The CCD that produces the B component is just fine (check the RGB values of the bad pixels and their surroundings; it's always the R or G component causing the problem). So I would check why the B CCD is read out correctly and the others are not.

Also, that means the CCDs __COULD__ be read correctly, so keep working toward a solution rather than substituting post-production clean-up for it.

Juan, I suggest preparing 20-30 test screens with very simple patterns, for example two fields of basic R and G (and maybe one with B to show there is no problem with it) with a straight border between them.

Then place them in various positions in front of the camera and shoot to check how the noise reacts. Then shoot at different exposures to check how the noise reacts to exposure level. And so on; the more, the better.

Then put them up as JPEGs to download, and maybe we can spot a pattern that helps.
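
If it helps, here is a minimal sketch of how such test screens could be generated. It assumes Python with Pillow; the sizes, colours and file names are arbitrary examples, not a prescribed test chart.

from PIL import Image

def two_field(left, right, name, size=(1920, 1080)):
    # Two solid colour fields with a straight vertical border down the middle.
    img = Image.new("RGB", size, left)
    img.paste(Image.new("RGB", (size[0] // 2, size[1]), right), (size[0] // 2, 0))
    img.save(name)

two_field((255, 0, 0), (0, 255, 0), "r_g.png")   # red | green
two_field((0, 255, 0), (255, 0, 0), "g_r.png")   # green | red
two_field((0, 0, 255), (0, 255, 0), "b_g.png")   # blue control screen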

Jon Yurek
April 23rd, 2004, 10:00 AM
Ok, after playing around with the Photoshop file for a few minutes, I've come to a few conclusions:

1. After converting to 8-bit, the pixels off the raw image are closer to 20 (00100000, 32) than what I said before, which discounts the "high bit error" theory, since it's not the high bits that aren't getting set. Of course, this may still mean the middle bits are off. In some sampled pixels the value of the dead channel was 0, and sometimes it was 1. In the color-corrected version, they could get as high as C (00001101, 13).

2. All the CCDs have different points of "deadness," but the blue one seems to have FAR fewer than the others, like Milosz said. But *unlike* what he said, there are still dead pixels in the blue channel; they're just much less common. Is there something electrical in your capture setup that would make a difference here?

3. The more I think about it, the more it makes sense that a simple filter could run on these images, checking the median value of the surrounding 8 pixels and, if a pixel is far enough off, flipping the bits. Basically a normal median filter, but specifically targeted at individual pixel-hunting so the whole frame doesn't get bogged down (also so it can be integrated into the capture sequence). Judging from these shots, it won't do any damage to the final frame. Unfortunately I don't have the time to write one, although I'll see what I can do this weekend (don't cross your fingers ;) ).
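
A minimal sketch of that kind of targeted filter, assuming Python with NumPy and SciPy and a frame held as an HxWx3 16-bit array; the threshold is a guess to be tuned against real captures, not a measured value.

import numpy as np
from scipy.ndimage import median_filter

def despeckle(frame, threshold=1024):
    # Replace a channel value with the median of its 3x3 neighbourhood only
    # when it is far off from that median, so ordinary detail is left alone.
    out = frame.copy()
    for c in range(3):
        chan = frame[..., c].astype(np.int32)
        med = median_filter(chan, size=3)
        bad = np.abs(chan - med) > threshold      # flag speckles only
        out[..., c][bad] = med[bad].astype(frame.dtype)
    return out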

Dan Vance
April 23rd, 2004, 11:06 AM
I still think you guys are missing something here. I didn't see anything about altering the CCD timing, maybe I missed it. That's the only way to get around the NTSC limitation, but even if you do that, you immediately run into the sampling rate limit of the CCD itself.

Juan,

Yes, I was saying you are extracting 4:4:4 COMPARED to the DV signal of 4:1:1, not that I thought you were trying to get 4:4:4 FROM 4:1:1.

And I also said you WOULD get a LITTLE better resolution.
It's not about NTSC "compliance," it's about the CCD timing that is designed to produce the NTSC signal.

Regarding "You can say that the DV compression only gets rid of unneeded data, but I think very few people here will believe you": my point is that the 5:1 compression cannot be translated to mean a loss of 80% of the info. Yes, compression is bad and the artifacts look horrible. I was only clarifying that misstatement, which I realize was not yours, but was in someone else's comment.

I really think this is a very worthwhile endeavor, and I'm interested in seeing the results. What would be even better, once you're done, is to get "someone" to go in and remove the DVX100 lens and put a nice Fujinon (or Canon) lens on one, just to see how much difference it makes.

Jon Yurek
April 23rd, 2004, 12:12 PM
The thing is, Dan, that you're posting stuff we basically already knew. We knew the limitations of the CCD, we knew how much resolution we were getting. The whole reason this is exciting is because we would finally have real depth of color and all the information our CCDs can record instead of the (admittedly not terrible, but definitely not perfect) information coming off the DV tape. And the more information you have, the better your end result will look after you're done tinkering with it.

I don't think anyone's had unreasonable expectations about what this project means.

Juan P. Pertierra
April 23rd, 2004, 02:03 PM
Dan,

> I didn't see anything about altering the CCD timing, maybe I missed it. That's the only way to get around the NTSC limitation

That's incorrect.

I assume you are talking about how much bandwidth NTSC allows for luminance and the two chroma signals. This is irrelevant at the stage where I am extracting the signal. The only difference between the PAL and NTSC CCDs in the DVXs is the pixel aspect ratio and the frame size.

There are 3 CCDs, all MONOCHROME. There's a color filter in front of each one. The CCDs output the entire frame, that's it. It's S/H'ed and then sampled by an A/D converter for each CCD. THUS, you get full RGB for each pixel at 12-bit/channel precision.

There is NO YUV signal at this point.

It doesn't matter if you time the CCDs for 25fps, 30fps, or 24fps, you STILL get FULL COLOR RGB FRAMES out of the A/Ds.

These frames would correspond to 4:4:4 if they were converted to YUV, but they are not.
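
Purely as an illustration of that point (a sketch with assumed array names and the 773x494 frame size quoted earlier in this thread, not the actual capture code): the three monochrome planes stack directly into a full-RGB frame, with no YUV step anywhere.

import numpy as np

H, W = 494, 773                        # active frame size quoted in this thread
red   = np.zeros((H, W), np.uint16)    # 12-bit samples in 16-bit containers,
green = np.zeros((H, W), np.uint16)    # one plane per CCD (R, G, B filters)
blue  = np.zeros((H, W), np.uint16)

rgb = np.dstack([red, green, blue])    # HxWx3: every pixel has all 3 channels
rgb16 = (rgb.astype(np.uint32) * 65535 // 4095).astype(np.uint16)  # 12-bit -> 16-bit scale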

About your compression statements, this is what I was referring to:

> Compression is not truncation, it's an analytical processing method of getting rid of as much UNNEEDED info as possible.

This is also incorrect. There are different types of compression, but the type used in DV is destructive. It's just a matter of what is "good enough" for prosumers. It's a tradeoff between degradation of the original signal and bandwidth.

> my point is that the 5:1 compression cannot be translated to mean a loss of 80% of the info.

Actually it's even more than that. However, I don't think anyone here was thinking of the effects of ~just~ the compression, but rather of the entire DV process, which is what I mean.

I can mathematically show that, given the original image sampled by the camera, you lose at least 80% of the info just in the decimation, frame resizing, and color precision, without even taking into consideration compression, Panasonic's chosen dynamic range, etc. If you want to see it I can do it, but I think we might be going off-topic here, and Stephen might use his samurai sword on us. <g>
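
A rough version of that arithmetic, using only figures already quoted in this thread (773x494 12-bit RGB versus 720x480 4:1:1 8-bit DV at roughly 5:1). Exactly where the percentage lands depends on what you count, so treat this as a back-of-the-envelope sketch rather than Juan's own derivation.

raw_bits   = 773 * 494 * 3 * 12               # full RGB frame, 12 bits/channel
dv_sampled = 720 * 480 * (8 + 8 / 4 + 8 / 4)  # 4:1:1 at 8 bits, before compression
dv_on_tape = dv_sampled / 5                   # after roughly 5:1 DV compression

print(raw_bits)                               # ~13.7 million bits per frame
print(1 - dv_sampled / raw_bits)              # ~0.70 gone before compression
print(1 - dv_on_tape / raw_bits)              # ~0.94 gone once compression is counted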

Juan

Juan P. Pertierra
April 23rd, 2004, 04:56 PM
I agree completely with Luis.

Also, it wasn't my intention to come off as rude at any point; if I did, I apologize. I'm just trying to explain things as best as I can. :)

Update:
I've started ordering the materials for the prototype. Like I said, right now I am stuck with work and finals, but once I'm done I will also dedicate some time to either eliminating the noise problem or writing a simple program that gets rid of it with a targeted median (pretty easy).

I am aware this is not a solution; however, I do not want to get stuck because of a small problem that I am almost 100% sure is in my test setup. In the end the prototype is set up so differently that it won't make a difference whether it works perfectly in the test or not.

Before any of this, 10-bit 4:4:4 uncompressed clips will be posted.... :)

Cheers,
Juan

Dan Vance
April 23rd, 2004, 05:29 PM
I was merely addressing some issues that I read in the thread. Luis expressed expectations of significantly better RESOLUTION, which won't be there. So how is that "already covered"?

Juan, you keep saying that "it's not NTSC," and in virtually the same sentence you say you're "taking the signal right off the chips AT THE A2D SAMPLING RATE." The A2D sampling rate of 13.5MHz *IS* THE NTSC-DEFINED SAMPLE RATE.

It would have been fun to be in on this project--I could even have helped, believe it or not. But it's not worth my time defending stuff that, because you guys don't understand it, you think I'M wrong! Pretty funny really.

I guess my 12 years as an electronics engineer and 35 years working with NTSC is just a fluke.

I'm sure you'll all be happy to know this is my last post.
Good luck, Juan. It's a commendable project, and I wish you all the best.

Juan P. Pertierra
April 23rd, 2004, 05:35 PM
Dan,

I'm sorry to see you go, and I really don't have control over what other people say on this board so all i can do is reply for myself.

The sampling rate of the A2Ds is not 13.5MHz. What else can I say? It's mathematical. I'm not doubting you have tons of experience, but I think you are making huge assumptions about a camera you haven't looked at the schematic for. I could sit here and flash degrees at you all day, but it means nothing if the math doesn't add up.

Sorry, I'm just being honest.

All the best,
Juan

Obin Olson
April 23rd, 2004, 08:36 PM
Juan, will the software you write take the R, G, B images and layer them into one 16-bit file and write that to disk? Or will we have to open all the RGB files and layer them on our timeline for editing?

Juan P. Pertierra
April 23rd, 2004, 08:40 PM
Obin,

My software right now takes a raw capture file and outputs raw 16-bit RGB frame files in a new directory, labeled with frame time code. Before I upload the clip I will write the remaining code for the TIFF header in each frame file so they can be opened with any program.

However, I am still unsure about my R, G, B frame shift values. That last frame I uploaded has my best guess so far as to how the RGB frames align.
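
For anyone wanting to experiment with the posted frames in the meantime, here is a rough sketch of the same idea. This is not Juan's actual code: it assumes Python with NumPy and the tifffile package, and the shift values and file naming are placeholders for whatever the real capture software ends up using.

import numpy as np
import tifffile

def write_frame(r, g, b, index, shifts=((0, 0), (0, 0), (0, 0))):
    # shifts: per-channel (dx, dy) alignment offsets, user-adjustable.
    def align(plane, dxdy):
        dx, dy = dxdy
        return np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
    rgb = np.dstack([align(p, s) for p, s in zip((r, g, b), shifts)]).astype(np.uint16)
    tifffile.imwrite("frame_%06d.tif" % index, rgb)   # one 16-bit TIFF per frame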

Juan

Luis Caffesse
April 23rd, 2004, 11:22 PM
Not that this matters at this stage of the testing, but do you anticipate that the RGB shift values you are using would be the same for every DVX100?

If not, that would mean you'd have to calculate the RGB shift values for each camera that was modified and rewrite your program on a one-to-one basis.

Juan P. Pertierra
April 23rd, 2004, 11:28 PM
Very good question.

I expect the alignment might not work for all DVXs. This is why I will be including user-adjustable settings for it. If I do install this on a case-by-case basis, then I will take care of entering the correct values in the documentation or the software I include.

The reason I think it will be different for each camera is that there is an alignment procedure in the DVX service manual, which basically calibrates the on-board software. Since my mod bypasses all on-board software, this calibration has to be done at the point where the layers are put together.

My capture program will have all calibration options user-accessible, including the option to decimate footage in order to save space.

Juan

Rob Lohman
April 24th, 2004, 04:34 AM
Okay, as some of you may have noticed, I've removed two posts and edited some others. Please stay on-topic and civil to each other. We all come from different backgrounds and may think differently about certain things, but that does not give us the right to call people names, insult them, etc. Such things will not be tolerated here.

Just discuss the matter. Perhaps Dan knows some things we all don't, and vice versa. Civil discussion can only lead to better things!

Thank you.

Chris Hurd
April 24th, 2004, 09:57 AM
Thanks, Rob. By the way, I'd like all participants in this thread to know that I met Dan Vance personally at NAB. He is an exceptionally talented engineer and filmmaker. Thinking that we would all benefit from his input, I invited him into this particular forum, in which he expressed significant interest. Because of the regrettable actions of a few people, I am now in the position of asking Dan to overlook the way he was rudely treated by some of us and please give us another chance. We need to remember that no matter how much we think we know about something, someone else may come along who knows even more. It is my hope for the DV Info community project that we attract open-minded individuals who are willing to consider alternative viewpoints that don't always agree with our own. And now, I'm off to apologize to Mr. Vance on behalf of a couple of members here who unfortunately could not act beyond their own zeal.

Be sure to review the VanceCam 25p camcorder (http://home.teleport.com/~gdi/vancecam.htm). While many of us discuss alternative imaging methods, Dan Vance has actually built a working homemade camcorder. He has a lot to offer.

Jon Yurek
April 24th, 2004, 03:38 PM
I'm sorry if I came across as unwelcoming; I just didn't understand what Dan thought Juan was trying to do. Dan's initial post made it sound like we were trying to squeeze blood from a stone in terms of increasing resolution and whatnot, when in fact resolution didn't really come into the picture. This whole thing (unless I'm mistaken) has been about getting (very) full color uncompressed video. The talk about the increase in resolution was about how well the image would look when uprezzed, not that Juan could suddenly get 1080p out of the CCDs (at least I don't remember that; I could very well be wrong, though).

I'm sure Dan has plenty of experience that may well be useful here; I just think we got off on the wrong foot.

Robert Martens
April 24th, 2004, 05:35 PM
That's the worry with internet message boards; they're only text. The intricacies of real world communication (body language, inflection, etcetera) are lost. Comments tend to seem a lot more caustic than they were intended to be, and it's easy for this stuff to get out of hand. This has happened to me more times than I can count; I'm currently on "sabbatical" from about thirty five thousand different boards for fear of getting involved in pointless arguments. I'm hoping to start anew on these boards, to start acting like an adult. We all know what they say about arguing on the internet.

But one thing there's no debating is that DV Info Net is quite the step up from DV.com (no offense to those of you "crossover" members :P ). This collection of forums has Bullpuckey® tolerances even lower than Something Awful--which is enormously impressive, for those not familiar with the site--and we're all better off for it.

Dan, if you see this, I implore you, give it another shot, noting especially that all parties involved have offered apologies and are being civil. That's something you're hard-pressed to find anywhere else these days, and it says a lot about the caliber of board member you'll be dealing with.

I'm proud to be a part of this wonderful community, and so will you, if you just spend a little more time with us.

Jon Yurek
April 24th, 2004, 06:11 PM
By the way, Juan, what did you decide on for the prototype's storage situation? IIRC, you were thinking of sticking a laptop SATA drive in there, right? What are the chances that you would be able to make it easily removable?

How are you going to interface with the drive? Directly, or via a 3rd party controller (running an embedded OS)? I ask because it may be easier to add features like a capacity meter and hot-swapping empty drives with an OS in between, although it would probably be much more complicated electronically.

Also, for the software side of storage, will you be streaming the data to the disk straight away, or will you try to put some kind of formatting on it? It seems as though it may be easier to stream the 12-bit data either completely raw or with a minimalist header and have a special codec for importing into NLEs.

Sorry if you've answered these already, I don't remember seeing a firm position in any direction.

Juan P. Pertierra
April 24th, 2004, 06:49 PM
Jon,

My current prototype design for the final device has a Firewire 800 port only. This port can be connected to a PC/Mac to capture raw data directly as with a standard DV camera.

However, the firewire port can ALSO be connected to a ~standard~ Firewire800 drive for capture. You do not need a special drive like those designed for DV capture. The circuitry automatically recognizes the drive and saves data accordingly.

The device will have a recording control separate from the recording control on the camera. I haven't decided yet, but the device will probably have an LCD screen with menu buttons, such that you can select if you want to decimate the data in order to save space, along with other options like color precision, etc.

The files will most likely be saved like I am doing now, using uncompressed RGB frames. I haven't found a video format capable of handling the data so frames will have to do.

As always, I'm open to suggestions...a SATA drive was harder to implement than the FW800 solution using the components I have.

Another option would be to include an SDI link, but since the raw data is not standard, either a compromise must be made or a proprietary capture driver written.

Juan

Obin Olson
April 24th, 2004, 10:43 PM
Juan, what about audio? Do you think we can still use the DVX to record DV and audio while we capture 4:4:4? It would be awesome to have sync sound recorded to tape with the DV preview video for matching it up in post....Also, how long do you think it will be until you have a working prototype? And how will we deal with 4:4:4 in post? Does anyone know if ANY NLE supports the bit depth this will have? I guess we could treat it like film and color-work all the footage in After Effects etc. BEFORE the edit....

Juan P. Pertierra
April 24th, 2004, 11:07 PM
Obin,

The DVX will work completely as usual, and separately from the capture box. This means that you can record to DV tape and 4:4:4 simultaneously, and as such the audio is captured to tape.

As far as the prototype goes, my aim is to have it ready within a month. I can give a better estimate in a week, when I'm done with most of my finals.

About the 4:4:4 footage, we really need the opinion of an FCP (or Vegas, Premiere, etc.) guru. I have seen that in FCP there is an option for 'Animation', which suggests that you can use any images as independent frames, but I am not sure. Also, there has to be a way, like you said, to treat it like film slides at full color sampling...I've just never done it before in an NLE. I know Shake does it.

Juan

Gordon Lake
April 24th, 2004, 11:28 PM
<<<-- Originally posted by Juan: we really need the opinion of an FCP (or Vegas, Premiere, etc.) guru. I have seen that in FCP there is an option for 'Animation', which suggests that you can use any images as independent frames...-->>>

I use the Digital Juice Jumpback files for background animations in Vegas. They're in the PNG format and work fine.

Gordon

Juan P. Pertierra
April 24th, 2004, 11:34 PM
I took a break, read a bit of my FCP manual, and tried some tests. It seems that at least FCP has no problem importing entire directories of frames; the question is whether it will work in 4:4:4 and at least 12-bit RGB. Does anyone know?

The render options allow for a high-precision rendering mode, but it doesn't say what precision is used...there is also an RGB render mode which does not specify the precision. I would assume the system works in 4:4:4; it makes sense for effects.

A good experiment, if anyone has the time, would be to drop one of the raw frames I posted into your NLE and render it as a short video...see if there is a way of setting a custom resolution, and whether any color precision is lost....
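
One way to make that check objective once a frame has made the round trip through an NLE (a sketch assuming Python with NumPy and tifffile, with placeholder file names and an export that matches the source frame size): count the distinct levels per channel before and after. A drop from thousands of levels down to 256 would mean the pipeline fell back to 8-bit somewhere.

import numpy as np
import tifffile

src = tifffile.imread("cap5_RAW.tif").astype(np.int64)     # source frame posted earlier
out = tifffile.imread("nle_export.tif").astype(np.int64)   # placeholder export name

for c, name in enumerate("RGB"):
    print(name, "levels in:", np.unique(src[..., c]).size,
          "levels out:", np.unique(out[..., c]).size)
print("max per-pixel difference:", np.abs(src - out).max())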

Juan

Obin Olson
April 25th, 2004, 12:40 AM
Juan, here are some really cool guys; if you ever want to build a custom-made HD camera, give them a call...I am working with them now to try and make/build/buy a high-speed camera that can shoot 60fps or more. I need this all the time for commercial productions...


http://www.illunis.com/index.html


I think it would be so cool if you took your ideas and applied them to an HD camera...all you need is the raw digital output, right? And you can build/code a system for capture?

Joel Corkin
April 25th, 2004, 01:37 AM
Juan,

I'm not sure about FCP, but Adobe After Effects imports image sequences and can work in and render out at 16 bits per channel. Apple products aren't always as transparent about what they are doing to your video as Adobe products are.

Good luck with this project; you must be a bona fide genius.