Downloaded that PSD file, opened it in Combustion (I don't have PS), and tinkered a bit. A little gain boost, a Median noise filter with a 0.5 radius--completely eliminating the noise--saved the result, cropped and resized it, and boom! Looks pretty good by my standards. The JPEG compression artifacts help offset the smoothing done by the Median operator, and aren't too ugly.
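For anyone without Combustion or Photoshop, that cleanup step is easy to sketch in code. Below is a minimal pure-NumPy 3x3 median (roughly what a 0.5-radius Median filter does), which is exactly the kind of operator that wipes out single-pixel speckle; the array sizes and values are made up for the demo:

```python
import numpy as np

def despeckle(channel: np.ndarray) -> np.ndarray:
    """3x3 median filter (edge-padded), enough to kill single-pixel speckle."""
    padded = np.pad(channel, 1, mode="edge")
    h, w = channel.shape
    # Stack the 9 shifted views of the image and take the per-pixel median.
    stack = np.stack([padded[y:y + h, x:x + w]
                      for y in range(3) for x in range(3)])
    return np.median(stack, axis=0).astype(channel.dtype)

# Tiny demo: a flat grey patch with one stuck-high sample.
patch = np.full((5, 5), 100, dtype=np.uint16)
patch[2, 2] = 4095
clean = despeckle(patch)
print(clean[2, 2])   # 100 -- the speckle is gone
```

The trade-off is exactly the smoothing mentioned above: a median this small is harmless on flat areas but will soften isolated legitimate detail too.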
You sure ain't kidding about the details in there. :) I'd show off my result, but the FTP password for my website that I've been using for the past few months no longer works, courtesy of the geniuses at CI Host (the same people who changed my email password a month ago without telling me). But that's a whole other post. Obin, I can't seem to access the URL you posted for the "juancap_colorwork" image. The other two show up just fine, but that one gives me a "Page cannot be found" error. |
Obin:
I can't get all your links to work. And do you know how to change them to be clickable? If not, let me know or search here for the code. |
Robert,
Email me the image and I will post it on my website. I just got extra space added, so now I can post some clips when I'm done with all the work I have this week. The link to one of Obin's files doesn't work because my name is spelled wrong :) thus it is: http://www.dv3productions.com/upload..._colorwork.jpg |
It's on its way, Juan, should be there soon.
I've contacted tech support for my website, and usually hear back within a day or so; as soon as I'm able to log in, I'll take the job of hosting my file off your hands. Thanks a bundle! |
Talk about fast response -- got the correct password, uploaded the file, you can view my image here:
http://www.gyroshot.com/images/juancombustion.jpg Update -- I've uploaded an additional (larger, better looking) version of the image; since everyone else seems comfortable with uncompressed TIFFs, I figure what the heck? You can still access the JPG, if you like. http://www.gyroshot.com/images/juancombustion.tif I would also like to know what's with the green and magenta edges on everything in the source photo...any ideas? It's especially noticeable on the sidewalk, as it passes behind the vertical bars of the guardrail. Does this have something to do with the alignment of the color channels, perhaps? |
Here is my somewhat 'corrected' version; I couldn't get rid of all the speckles using Photoshop alone...
http://expert.cc.purdue.edu/~pertierr/cap5_RAW.tif |
Juan, what do you think? bad pixels? or?
|
pixels and masking
This is way past my realm of knowledge, but when I mentioned this pixel problem to someone much more experienced, he said that manufacturers are allowed to have a certain number of dead pixels in a sensor when they make them. Apparently they have the means of masking these dead pixels from showing up, but since you're pulling raw files off the CCD, you might be bypassing this masking and revealing the dead pixels.
Again, I don't know if this is correct; I'm only passing on the info that was told to me. I'm not even sure technically what they are doing to mask the pixels, and whether that masking would or wouldn't come into play when you are pulling raw files. Any thoughts? |
I would say yes, BUT the pixels are moving around in each frame... a dead pixel stays dead in every shot....
|
Reality Check?
Juan,
I've tried to read through all the posts to come up to speed on this thread, so maybe I've missed something that's already been said, but... I'm not sure you're going to be happy with your results when you get where you're going, because: you're starting with 4:1:1 and trying to get 4:4:4. But that first 4, which you already have, represents luminance samples, and that's where, practically, all your "resolution" is. So you can expect to get significantly better color rendition, but NOT better resolution. The resolution of your Y output (of the Y/C output) is the full resolution the camera is capable of. There is essentially no resolution loss in the Y signal going from the chips to the Y output. That will be a little better than the equivalent compressed DV luminance, but not a whole lot. And of course you can get at that without going inside the camera.

Also, at one point you said you're limited "only by the glass and CCD size"! Yes! But the overall performance can only be as good as the worst component. The DVX100 lens probably has a resolution of 500 lines or so, or at the very best 600 lines. And who knows what it does to color? It's good glass FOR THE MONEY, but relative to pro lenses it's pretty mediocre. So even if the CCD were capable of 1000 lines, the lens limits you to 500. You're improving the milkshake, but trying to drink it through the same tiny straw.

You also seem to be neglecting the bandwidth limitation of the NTSC signal, which is 540 lines, and there's no way around that as long as the signal is NTSC. So you're doing a lot of work for a little more color info. Is it worth it? Maybe. I certainly would be the last person to dissuade anyone from experimenting. But the physics say it might not do what you're expecting.

Also, to clarify some other posted info: PAL CCDs and NTSC CCDs are not interchangeable, because the pixel aspect ratios are different. And "5:1" compression does NOT in any way "throw away 80% of the information." Compression is not truncation; it's an analytical processing method of getting rid of as much UNNEEDED info as possible. As you've mentioned, the Canon XL1 is another candidate, and at least you could put better glass on that one, but unfortunately it doesn't do 24P.

I don't mean to be throwing cold water on this, and I think you should continue and see how good you can make it, but just be aware of the limitations of the camera and don't get your hopes up too high. |
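For readers newer to the 4:1:1 notation, Dan's luma-versus-chroma point can be simulated in a few lines. This is a toy model (real DV chroma filtering is more sophisticated than plain averaging): luma passes through untouched while each chroma plane keeps one sample per four pixels horizontally.

```python
import numpy as np

def subsample_411(y, u, v):
    """Simulate 4:1:1: luma untouched; chroma averaged over horizontal
    4-pixel groups, then repeated back up to full width."""
    def squash(c):
        groups = c.reshape(c.shape[0], -1, 4).mean(axis=2)
        return np.repeat(groups, 4, axis=1)
    return y, squash(u), squash(v)

h, w = 2, 8
y = np.arange(h * w, dtype=float).reshape(h, w)   # fine luma detail
u = np.tile([0.0, 1.0, 0.0, 1.0], (h, 2))         # fine chroma detail
y2, u2, v2 = subsample_411(y, u, u.copy())
print(np.array_equal(y, y2))   # True: luma resolution survives
print(u2[0, :4])               # chroma detail flattened to its mean, 0.5
```

So the "resolution" (luma) really is already there in the DV signal; it is the color detail that 4:4:4 capture recovers.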
Actually, you bring up some good points. But what about the perceived difference in dynamic range? Juan's modification may not be increasing resolution, but getting a 12-bit signal vs. the compressed 8-bit signal should count for something, shouldn't it? -Luis |
Dan,
Thanks for your input. However, I think there are some significant inaccuracies in your post...

1.) I am NOT starting with 4:1:1 and going to 4:4:4. I am literally capturing 4:4:4 straight out of the CCDs, at the full sampling rate of the A/Ds.
2.) You're right that the lens on the DVX might do 600 lines at the most. However, AFAIK that doesn't really pose a limitation, since the CCDs have 494 lines.
3.) I AM getting a little more resolution, because I am getting the full CCD frame. It's not much, but if there is one thing I have learned so far, it's that the more information you start with before correcting your video for a specific look, the better.
4.) I'm not really sure why you're talking about NTSC signals, because I am not dealing with that at all. What I am getting is a non-standard frame size, and it's up to the person working with the footage to decide what medium it is going on. If he/she wants to print the entire 773x494 frame to film, there is no NTSC standard compliance involved here ~anywhere~.

To top it off, I think a lot of people will agree that the very significant increase in dynamic range, as well as the fact that the 12-bit RAW output can display at least 48 times as many colors as the 8-bit DV output, are worth it on their own. Finally, I recall some people doing up-rezzing experiments with the standard DVX output and getting results comparable (with better color) to the JVC single-chip HD cam. I am almost 100% sure that up-rezzing this raw full-color footage will yield even better results while we wait for a 3-chip HD camera to apply this to. Oh yeah, and NO COMPRESSION ARTIFACTS. Hate those big blocks on the reds and blues--and have you ever tried pulling a chroma key? You can say that the DV compression only gets rid of unneeded data, but I think very few people here will believe you. Cheers! Juan |
Just take a look at the images.
They are very impressive. And I believe these are only 10-bit instead of 12. The dynamic range is outstanding. The added resolution is a great plus. All great for going out to film, if that is what you want. Hopefully Juan can eliminate the digital specks and move on to getting this thing production-ready (at least I hope he plans on that). |
Dan's 4:1:1 point
Juan,
I think what Dan meant was that you're starting with a camera that's already capable of 4:1:1 and you're trying to get 4:4:4 out of it (just to point out that the 4 is already there for you in luminance, and to make his point about resolution). I don't think he meant that you're trying to turn a 4:1:1 signal into a 4:4:4 signal. Your other counterpoints do make sense, though. Keep going with this... I monitor this thread every day. You're a true innovator, and aside from what this could do for indie filmmakers, the ongoing story itself makes for great drama. :) John |
I just took a look at the U and V channels of the latest sample image, and we are clearly getting real raw 4:4:4 frames instead of an upsampled 4:1:1 frame. The separate color channels are crisp, clean, and accurate. If you work with high-end visual effects, this is the ultimate feature I would want from this mod, and so far it is working better than I expected. The DVX100 might not be better than a 1/2" or 2/3" camera, but I would take 4:4:4 frames any day over 4:1:1 or even 4:2:2 frames. If you want to check out how much better 4:4:4 is, then in Photoshop change your color mode to Lab color. The a and b channels are roughly the U and V color channels, and L is the Y channel. Compare a raw frame to any DV frame and you will see the clear difference.
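That eyeball check can also be automated. Here is one hedged heuristic, assuming the alternative being ruled out is naive 4x horizontal chroma upsampling: if every horizontal 4-pixel group of a chroma plane is constant, the plane was almost certainly upsampled from 4:1:1 rather than sampled at 4:4:4.

```python
import numpy as np

def looks_upsampled_411(u: np.ndarray) -> bool:
    """True if every horizontal 4-pixel group of the chroma plane is
    constant -- the telltale signature of pixel-repeated 4:1:1 chroma."""
    groups = u[:, : (u.shape[1] // 4) * 4].reshape(u.shape[0], -1, 4)
    return bool((groups == groups[:, :, :1]).all())

true_444 = np.arange(64).reshape(4, 16)                     # every sample differs
fake_444 = np.repeat(np.arange(16).reshape(4, 4), 4, axis=1)  # 4:1:1 blown up
print(looks_upsampled_411(true_444))  # False
print(looks_upsampled_411(fake_444))  # True
```

Note this only catches the simplest (pixel-repeat) upsampler; smarter interpolation would need a frequency-domain check instead.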
|
Pixel Shift or Pixel Remapping Scenario
<<<-- Originally posted by Obin Olson : i would say yes BUT the pixels are moving around in each frame...a dead pixel stays dead in every shot.... -->>>
A pixel doesn't necessarily have to be "dead" to be bad. Most likely this is the root cause for these defective pixels and Juan will need to do a bit of research on pixel mapping. Anybody who has played with the 3x3 grid sliding puzzle will understand how you would need to account for a bad pixel in the 773x494 grid. Any pixel out of spec will need to be mapped out and either the x or y grid line will shift 1 pixel over and be remapped to make up for the bad pixel. There is obviously a "point of no return" where this cannot be done without excessive visual distortion, so during CCD QC testing any chips with more bad pixels on either an "X" or "Y" axis line would not be used. With the actual pixel count available on the CCDs, you have 53 "X" axis and 14 "Y" axis pixels available for remapping. Whether or not they would shuffle pixels over the full 53 amount I wouldn’t know, but the 14 for the "Y" axis should not be any problem I suppose. Food for thought. -Rodger |
Noise problem
Hello. I didn't write for a long time because I was away. As I can see, the problem still occurs and there has been no step forward...
Although I don't know the exact solution, I can offer some thoughts on methodology. The first obvious thing is that the problem affects only the R or G component. The CCD that produces the B component is just fine (check the RGB values of those bad pixels and their surroundings--it's always the R or G component that causes the problem). So I would check why the B CCD is read out correctly and the others are not. That also means the CCDs __COULD__ be read correctly, so keep working toward a solution rather than substituting postproduction for it. Juan, I suggest preparing 20-30 test screens with very simple patterns, for example two of the basic colors R and G (and maybe one with B to show there is no problem with it), with a straight border between them. Then place them in various positions in front of the camera and shoot to check how the noise reacts. Then shoot at different exposures to check how the noise reacts to exposure level. And so on--the more the better. Then put them up as JPGs to download, and maybe we can spot a pattern that could help. |
Ok, after playing around with the photoshop file for a few minutes, I've come to a few conclusions:
1. After converting to 8-bit, the pixels off the raw image are closer to 20 (00100000, 32) than what I said before, which discounts the "high bit error" problems since, well, it's not the high bits that aren't getting set. Of course, this may still mean the middle bits are off. In some sampled pixels, the value of the dead channel was 0, and sometimes it was 1. In the color-corrected version, they could get as high as D (00001101, 13).

2. All the CCDs have different points of "deadness," but the blue one seems to have FAR fewer than the others, like Milosz said--though *unlike* what he said, there are still dead pixels in the blue channel; they're just much less common. Is there something electronic in your capture sequence that would make a difference here?

3. The more I think about it, the more it makes sense that a simple filter could run on these images that checks the median of the surrounding 8 pixels and, if the pixel is far enough off, flips it to that value. Basically a normal median filter, but specifically targeted at individual pixel-hunting so the whole thing doesn't get bogged down (also, so it can be integrated into the capture sequence). Judging from these shots, it won't do any damage to the final frame. Unfortunately I don't have the time to write one, although I'll see what I can do this weekend (don't cross your fingers ;) ) |
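Point 3 is straightforward to prototype. A sketch of that targeted filter: replace a sample only when it sits far from the median of its 8 neighbours (the threshold here is an assumed tuning knob, not a value from the thread), so ordinary detail is left alone.

```python
import numpy as np

def fix_stuck_pixels(frame, threshold=64):
    """Targeted median: only pixels that differ from their 8-neighbour
    median by more than `threshold` are replaced."""
    padded = np.pad(frame, 1, mode="edge")
    h, w = frame.shape
    neighbours = np.stack([padded[y:y + h, x:x + w]
                           for y in range(3) for x in range(3)
                           if (y, x) != (1, 1)])   # centre pixel excluded
    med = np.median(neighbours, axis=0)
    out = frame.astype(float)
    bad = np.abs(out - med) > threshold
    out[bad] = med[bad]
    return out.astype(frame.dtype), bad

frame = np.full((4, 4), 200, dtype=np.uint16)
frame[1, 2] = 0                          # a "dead channel" sample
fixed, bad = fix_stuck_pixels(frame)
print(int(bad.sum()), int(fixed[1, 2]))  # 1 200
```

Because only flagged pixels are rewritten, this could plausibly run per-frame inside a capture pipeline without the blanket softening of a full-frame median.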
I still think you guys are missing something here. I didn't see anything about altering the CCD timing, maybe I missed it. That's the only way to get around the NTSC limitation, but even if you do that, you immediately run into the sampling rate limit of the CCD itself.
Juan, Yes, I was saying you are extracting 4:4:4 COMPARED to the DV signal of 4:1:1, not that I thought you were trying to get 4:4:4 FROM 4:1:1. And I also said you WOULD get a LITTLE better resolution. It's not about NTSC "compliance," it's about the CCD timing that is designed to produce the NTSC signal. Regarding: "You can say that the DV compression only gets rid of uneeded data, but i think very few people here will beleive you," my point is that the 5:1 compression cannot be translated to mean a loss of 80% of the info. Yes, compression is bad and the artifacts look horrible. I was only clarifying that misstatement, which I realize was not yours, but was in someone else's comment. I really think this is a very worthwhile endeavor, and I'm interested in seeing the results. What would be even better, once you're done, is to get "someone" to go in and remove the DVX100 lens and put a nice Fujinon (or Canon) lens on one, just to see how much difference it makes. |
The thing is, Dan, that you're posting stuff we basically already knew. We knew the limitations of the CCD, we knew how much resolution we were getting. The whole reason this is exciting is because we would finally have real depth of color and all the information our CCDs can record instead of the (admittedly not terrible, but definitely not perfect) information coming off the DV tape. And the more information you have, the better your end result will look after you're done tinkering with it.
I don't think anyone's had unreasonable expectations about what this project means. |
Dan,
>I didn't see anything about altering the CCD timing, maybe I missed it. That's the only way to get around the NTSC limitation

That's incorrect. I assume you are talking about how much bandwidth NTSC allows for luminance and the two chroma signals. This is irrelevant at the stage where I am extracting the signal. The only difference between the PAL and NTSC CCDs in the DVXs is the pixel aspect ratio and the frame size. There are 3 CCDs, all MONOCHROME, with a color filter in front of each one. The CCDs output the entire frame, that's it. It's S/H'ed and then sampled by an A/D converter for each CCD. THUS you get full RGB for each pixel at 12-bit/channel precision. There is NO YUV signal at this point. It doesn't matter if you time the CCDs for 25fps, 30fps, or 24fps--you STILL get FULL COLOR RGB FRAMES out of the A/Ds. These frames would correspond to 4:4:4 if they were converted to YUV, but they are not.

About your compression statements, this is what I was referring to:

>Compression is not truncation, it's an analytical processing method of getting rid of as much UNNEEDED info as possible.

This is also incorrect. There are different types of compression, but the type used in DV is destructive. It's just a matter of what is "good enough" for prosumers. It's a tradeoff between degradation of the original signal and bandwidth.

>my point is that the 5:1 compression cannot be translated to mean a loss of 80% of the info.

Actually, it's even more than that. However, I don't think anyone here was thinking of the effects of ~just~ the compression, but rather the entire DV process, which is what I mean. I can mathematically show that, given the original image sampled by the camera, you lose at least 80% of the info just in the decimation, frame resizing and color precision, without even taking into consideration compression, Panasonic's chosen dynamic range, etc.
If you want to see it I can do it, but I think we might be going off topic here, and Stephen might use his samurai sword on us. <g> Juan |
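For the curious, here is one back-of-envelope accounting of that loss. The frame sizes come from the thread; the simple bit-counting model is my assumption, and it lands near 70% before compression rather than 80%, so Juan's higher figure presumably also weighs the decimation and dynamic-range choices he mentions.

```python
# 773x494 sensor frame, 12 bits per RGB channel, straight off the A/Ds:
raw_bits = 773 * 494 * 3 * 12

# 720x480 DV frame, 8-bit 4:1:1: full-rate luma plus two chroma planes
# carried at one sample per four pixels -- before ANY DCT compression.
dv_bits = 720 * 480 * (8 + 8 / 4 + 8 / 4)

loss = 1 - dv_bits / raw_bits
print(f"{loss:.0%}")   # roughly 70% of the sampled bits are gone already
```

Layer 5:1 DCT compression on top of that and the surviving fraction shrinks several times further, which is the combined "DV process" loss being argued about here.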
I agree completely with Luis.
Also, it wasn't my intention to come off as rude at any point; if I did, I apologize. I'm just trying to explain things as best as I can. :) Update: I've started ordering the materials for the prototype. Like I said, right now I am stuck with work and finals, but once I'm done I will also dedicate some time to either eliminating the noise problem, or writing a simple program that gets rid of it by doing a specific median (pretty easy). I am aware this is not a solution; however, I do not want to get stuck because of a small problem that I am almost 100% sure is in my test setup. In the end, the prototype is set up so differently that it won't make a difference whether it works perfectly in the test or not. Before any of this, 10-bit 4:4:4 uncompressed clips will be posted.... :) Cheers, Juan |
Last Post
I was merely addressing some issues that I read in the threads. Luis expressed expectations of significantly better RESOLUTION, which won't be there. So how is that "already covered"? |
Juan, you keep saying that "it's not NTSC," and in virtually the same sentence you say you're "taking the signal right off the chips AT THE A2D SAMPLING RATE." The A2D sampling rate of 13.5MHz *IS* THE NTSC-DEFINED SAMPLE RATE. It would have been fun to be in on this project--I could even have helped, believe it or not. But it's not worth my time defending stuff that, because you guys don't understand it, you think I'M wrong! Pretty funny really. I guess my 12 years as an electronics engineer and 35 years working with NTSC is just a fluke. I'm sure you'll all be happy to know this is my last post. Good luck, Juan. It's a commendable project, and I wish you all the best. |
Dan,
I'm sorry to see you go, and I really don't have control over what other people say on this board, so all I can do is reply for myself. The sampling rate of the A2Ds is not 13.5 MHz. What else can I say? It's mathematical. I'm not doubting you have tons of experience, but I think you are making huge assumptions about a camera you haven't looked at the schematic for. I could sit here and flash degrees at you all day, but it wouldn't matter if the math doesn't add up. Sorry, I'm just being honest. All the best, Juan |
Juan, will the software you write take the R, G, B images and layer them into one 16-bit file and write that to disk? Or will we have to open all the RGB files and layer them on our timeline for editing?
|
Obin,
My software right now takes a raw capture file and outputs raw 16-bit RGB frame files in a new directory, labeled with frame timecode. Before I upload the clip, I will write the remaining code for the TIFF header in each frame file so they can be opened with any program. However, I am still unsure about my R, G, B frame shift values. The last frame I uploaded has my best guess so far as to how the RGB frames align. Juan
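A note on putting 12-bit data in 16-bit files: one common convention (an assumption here, not necessarily what Juan's code does) is to shift each 12-bit A/D code left by 4 and replicate its top bits into the low nibble, so full scale maps to full scale instead of topping out at 1/16th of the range.

```python
import numpy as np

def expand_12_to_16(samples12: np.ndarray) -> np.ndarray:
    """Scale 12-bit codes into a 16-bit container: shift left 4 and
    replicate the top 4 bits, so 0xFFF -> 0xFFFF and 0x000 -> 0x0000."""
    s = samples12.astype(np.uint16)
    return (s << 4) | (s >> 8)

codes = np.array([0x000, 0x800, 0xFFF], dtype=np.uint16)
print([hex(v) for v in expand_12_to_16(codes)])
# ['0x0', '0x8008', '0xffff']
```

Plain zero-padding (just `s << 4`) also works but leaves images slightly dark at full scale; bit replication avoids that without any multiplies.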
RGB SHIFT VALUES
Not that this matters at this stage of the testing - but do you anticipate that the RGB shift values you are using would be the same for every DVX100?
If not, that would mean you'd have to calculate the RGB values for each camera that was modified, and rewrite your program on a one to one basis. |
Very good question.
I expect the alignment might not work for all DVXs. This is why I will be including user-adjustable settings for it. If I do install this on a case-by-case basis, then I will take care of entering the correct values in the documentation or software I include. The reason I think it will be different for all cameras is that there is an alignment procedure in the DVX service manual, which basically calibrates the on-board software. Since my mod bypasses all on-board software, this calibration has to be done at the time the layers are put together. My capture program will have all calibration options user-accessible, including the option to decimate footage in order to save space. Juan |
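The user-adjustable alignment could look something like this sketch (the shift values, plane sizes, and function names are all made up for illustration): each plane gets its own per-camera (dy, dx) offset before the three layers are stacked.

```python
import numpy as np

def align_planes(r, g, b, shifts):
    """Stack R, G, B planes into one RGB frame, rolling each plane by its
    own per-camera (dy, dx) calibration offset first."""
    def shift(plane, dy, dx):
        return np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
    return np.stack([shift(p, *s) for p, s in zip((r, g, b), shifts)], axis=-1)

# The same scene feature lands on a different pixel in each plane:
r = np.zeros((4, 4)); r[1, 1] = 1
g = np.zeros((4, 4)); g[1, 2] = 1
b = np.zeros((4, 4)); b[2, 1] = 1

# Calibration shifts bring them back into register.
rgb = align_planes(r, g, b, [(0, 0), (0, -1), (-1, 0)])
print(rgb[1, 1].tolist())   # [1.0, 1.0, 1.0] -- the planes line up
```

In practice the offsets would come from shooting a registration chart once per camera and storing the values, which matches the per-camera calibration idea above.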
Okay, as some of you may notice I've removed two posts and
edited some others. Please stay on-topic and civil to each other. That we all come from different backgrounds and may think differently about certain things does not give us the right to call people names, insult them, etc. Such things will not be tolerated here. Just discuss the matter. Perhaps Dan knows some things we all don't, and vice versa. Civil discussion can only lead to better things! Thank you. |
Thanks, Rob. By the way, I'd like all participants in this thread to know that I met Dan Vance personally at NAB. He is an exceptionally talented engineer and filmmaker. Thinking that we would all benefit from his input, I invited him into this particular forum, in which he expressed significant interest. Because of the regrettable actions of a few people, I am now in the position of asking Dan to overlook the way he was rudely treated by some of us and please give us another chance. We need to remember that no matter how much we think we know about something, someone else may come along who knows even more. It is my hope for the DV Info community project that we attract open-minded individuals who are willing to consider alternative viewpoints that don't always agree with our own. And now, I'm off to apologize to Mr. Vance on behalf of a couple of members here who unfortunately could not act beyond their own zeal.
Be sure to review the VanceCam 25p camcorder. While many of us discuss alternative imaging methods, Dan Vance has actually built a working homemade camcorder. He has a lot to offer. |
I'm sorry if I may have come across as unwelcoming, I just didn't understand what Dan thought Juan was trying to do. Dan's initial post made it sound like we were trying to squeeze blood from a stone in terms of increasing resolution and whatnot, when in fact the resolution didn't really come into the picture. This whole thing (unless I'm mistaken) has been about getting (very) full color uncompressed video. The talk about the increase in resolution was about how well the image would look when upressed, not that Juan could suddenly get 1080p out of the CCDs (at least I don't remember that. I could very well be wrong though).
I'm sure Dan has plenty of experience that may well be useful here, I just think we got started on the wrong foot. |
That's the worry with internet message boards; they're only text. The intricacies of real world communication (body language, inflection, etcetera) are lost. Comments tend to seem a lot more caustic than they were intended to be, and it's easy for this stuff to get out of hand. This has happened to me more times than I can count; I'm currently on "sabbatical" from about thirty five thousand different boards for fear of getting involved in pointless arguments. I'm hoping to start anew on these boards, to start acting like an adult. We all know what they say about arguing on the internet.
But one thing there's no debating is that DV Info Net is quite the step up from DV.com (no offense to those of you "crossover" members :P ). This collection of forums has Bullpuckey® tolerances even lower than Something Awful--which is enormously impressive, for those not familiar with the site--and we're all better off for it. Dan, if you see this, I implore you, give it another shot, noting especially that all parties involved have offered apologies, and are being civil. That's something you're hard pressed to find anywhere else these days, and it says a lot about the caliber board member you'll be dealing with. I'm proud to be a part of this wonderful community, and so will you, if you just spend a little more time with us. |
By the way, Juan, what did you decide on for the prototype's storage situation? IIRC, you were thinking of sticking a laptop SATA drive in there, right? What are the chances that you would be able to make it easily removable?
How are you going to interface with the drive? Directly, or via a 3rd party controller (running an embedded OS)? I ask because it may be easier to add features like a capacity meter and hot-swapping empty drives with an OS in between, although it would probably be much more complicated electronically. Also, for the software side of storage, will you be streaming the data to the disk straight away, or will you try to put some kind of formatting on it? It seems as though it may be easier to stream the 12 bit data either completely raw or with a minimalist header and have a special codec for importing into NLEs. Sorry if you've answered these already, I don't remember seeing a firm position in any direction. |
Jon,
My current prototype design for the final device has a FireWire 800 port only. This port can be connected to a PC/Mac to capture raw data directly, as with a standard DV camera. However, the FireWire port can ALSO be connected to a ~standard~ FireWire 800 drive for capture; you do not need a special drive like those designed for DV capture. The circuitry automatically recognizes the drive and saves data accordingly. The device will have a recording control separate from the recording control on the camera. I haven't decided yet, but the device will probably have an LCD screen with menu buttons, so you can select whether to decimate the data in order to save space, along with other options like color precision, etc. The files will most likely be saved like I am doing now, using uncompressed RGB frames. I haven't found a video format capable of handling the data, so frames will have to do. As always, I'm open to suggestions... a SATA drive was harder to implement than the FW800 solution using the components I have. Another option would be to include an SDI link, but since the raw data is not standard, either a compromise must be made or a proprietary capture driver written. Juan |
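On the format question: if no existing container fits, a minimalist per-frame header in the spirit Jon suggested earlier is easy to define. Every field below is an assumption for illustration, not Juan's actual format: a magic tag, width, height, and bits-per-sample, followed by raw big-endian 16-bit RGB data.

```python
import io
import struct

import numpy as np

def write_frame(stream, frame: np.ndarray, bits: int = 12):
    """Write one frame: 9-byte header (magic, width, height, bit depth),
    then the pixel data as big-endian 16-bit samples."""
    h, w, _ = frame.shape
    stream.write(struct.pack(">4sHHB", b"RAW0", w, h, bits))
    stream.write(frame.astype(">u2").tobytes())

buf = io.BytesIO()
write_frame(buf, np.zeros((494, 773, 3), dtype=np.uint16))
print(buf.getvalue()[:4], len(buf.getvalue()))
# magic b'RAW0'; 9 header bytes + 773*494*3*2 bytes of pixels
```

A format this simple streams straight to disk at capture time, and a small NLE import codec (or a converter to TIFF sequences) can be written against it later.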
dealing with 4:4:4
Juan, what about audio? Do you think we can still use the DVX to record DV and audio to tape while we capture 4:4:4? It would be awesome to have sync sound recorded to tape with the DV preview video for matching up in post... Also, how long do you think it will be until you have a working prototype? And how will we deal with 4:4:4 in post? Does anyone know if ANY NLE supports the bit depth this will have? I guess we could treat it like film and color-work all the footage in After Effects etc. BEFORE the edit....
|
Obin,
The DVX will work completely as usual, and separately from the capture box. This means you can record to DV tape and 4:4:4 simultaneously, and as such the audio is captured to tape. As for the prototype, my aim is to have it ready within a month; I can give a better estimate in a week when I'm done with most of my finals. About the 4:4:4 footage, we really need the opinion of an FCP (or Vegas, Premiere, etc.) guru. I have seen in FCP that there is an option for 'Animation', which suggests that you can use any images as independent frames, but I am not sure. Also, there has to be a way, like you said, to treat it like film slides at full color sampling... I've just never done it before in an NLE. I know Shake does it. Juan |
<<<-- Originally posted by Juan: we really need the opinion of an FCP (or Vegas, Premiere, etc.) guru. I have seen in FCP that there is an option for 'Animation', which suggests that you can use any images as independent frames... -->>>
I use the Digital Juice Jumpback files for background animations in Vegas. They're in the PNG format and work fine. Gordon |
I took a break, read a bit in my FCP manual, and tried some tests. It seems that FCP, at least, has no problem importing entire directories of frames; the question is whether it will work in 4:4:4 and at least 12-bit RGB. Does anyone know?
The render options allow for a high-precision rendering mode, but it doesn't say what precision is used... there is also an RGB render mode which does not specify the precision. I would assume the system works in 4:4:4; it makes sense for effects. A good experiment, if anyone has the time, would be to drop one of the raw frames I posted into your NLE and then render it as a short video... see if there is a way of setting a custom resolution, and whether any color precision is lost.... Juan |
Juan, here are some really cool guys--if you ever want to build a custom-made HD camera, give them a call. I am working with them now to try to make/build/buy a high-speed camera that can shoot 60fps or more; I need this all the time for commercial productions...
http://www.illunis.com/index.html I think it would be so cool if you took your ideas and applied them to an HD camera... all you need is the raw digital output, right? And you can build/code a system for capture? |
Juan,
I'm not sure about FCP, but Adobe After Effects imports image sequences and can handle and render out at 16 bits per channel. Apple products aren't always as transparent about what they are doing to your video as Adobe products are. Good luck with this project--you must be a bona fide genius. |
DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network