DV on the big screen
This is not going to devolve into an HDV vs. SD thread. This is a question for SD producers only.
I was reviewing some videos we have done on the new 50" TV and was rather disappointed with how they look. They look great on my 32" CRT TV, but the big LCD TV really seems to degrade the picture quite a bit. Enough that I now worry about a client playing it on their 50" screen. I fully understand why this happens: stretching an SD picture across 50 inches on an HDTV is pushing it. I have two questions: 1. Are there any tricks to overcoming this limitation that I am not employing? I shoot DVCAM and encode with ProCoder at Mastering Quality, so I am not sure how much better my DVD encoding can get. I currently use 2-pass encoding and have heard a rumor that 1-pass can be better. Any truth to this? 2. Does anyone warn clients about playing DV on big screens? DV being such a highly compressed format right at recording, we are simply not going to look as good as, say, DigiBeta. Does anyone address this at all? Thanks!
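For anyone who wants to test the 1-pass vs. 2-pass question outside of ProCoder, here is a minimal sketch of a two-pass DVD-compliant MPEG-2 encode using ffmpeg from Python. This is an assumption on my part, not the OP's actual ProCoder workflow; the filenames and bitrate are placeholders, and it assumes ffmpeg is installed.

import subprocess

SRC = "master.avi"        # placeholder: your edited SD master
OUT = "client_dvd.mpg"    # placeholder: DVD-compliant MPEG-2 program stream
BITRATE = "7000k"         # placeholder average video bitrate

# Pass 1: analysis only; the encoded output is discarded (use "NUL" instead of
# "/dev/null" on Windows).
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-target", "ntsc-dvd",
                "-b:v", BITRATE, "-pass", "1", "-an", "-f", "dvd", "/dev/null"],
               check=True)

# Pass 2: uses the pass-1 log to spend bits where the footage needs them.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-target", "ntsc-dvd",
                "-b:v", BITRATE, "-pass", "2", OUT],
               check=True)

The point of pass 1 is only to build a complexity log; at the same average bitrate, the second pass can then distribute bits more evenly than a single pass.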
Part of the problem is that the source (most likely) is interlaced video, and LCDs are native progressive displays. Good deinterlacing in post (easier said than done) can help, rather than depending on the player to do the deinterlacing. Also, the playback device makes a big difference in how it upsizes the image. Most DVD players do not upsize images well at all (another one of those lovely industry agreements). If you play the same DVD through a computer, using something like PowerDVD, it will look much better (I think they are using bicubic resizing).
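If you want to experiment with doing the deinterlace and bicubic upscale yourself rather than leaving it to the DVD player or the TV, here is a rough sketch with ffmpeg's yadif deinterlacer. This is my own illustration, not what the poster used; ffmpeg and the filenames are assumptions.

import subprocess

SRC = "sd_master.avi"         # placeholder: interlaced 720x480 source
OUT = "progressive_1080.mp4"  # placeholder: deinterlaced, upscaled test file

# yadif deinterlaces; scale upsizes with bicubic interpolation, similar in spirit
# to what a good software player does instead of a cheap DVD player's scaler.
subprocess.run(["ffmpeg", "-y", "-i", SRC,
                "-vf", "yadif,scale=1920:1080:flags=bicubic",
                "-c:v", "libx264", "-crf", "18", OUT],
               check=True)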
What camera did you use? If you shot with a PD-170/VX-2100, GL2 or XL1s, the footage is not going to look very good on a widescreen TV, because those cameras don't shoot native widescreen. I'm assuming that you're talking about 16:9 video here. Those cameras create 16:9 by cropping the image and then stretching it to make it anamorphic, which results in a 25% loss of vertical resolution.
Also, check the settings on the HDTV. Reducing the sharpness may give more pleasing results.
I'll agree with Robert on the interlacing issues. I've played interlaced footage from a GL2 on my mother-in-law's 53" HDTV and it looks OK. Footage from my DVX100A looks much better, though, as it is natively progressive. Boyd is also right about native 16:9 CCDs. If you displayed something shot with an XL-2 (for example) you would get a much better image, because it is shot in 16:9 AND it is progressive. SD, whose resolution served us well for a long time, can only be stretched so far before its limitations show. That's not a plug for HD (I'm still SD), it's just a fact.
Well, I've projected 16:9 footage that I shot with the little Sony PDX-10 on a 44-foot-wide screen with a 10,000-lumen DLP projector for one of our operas, and it looked surprisingly good; the reviewer even praised the video quality. The PDX-10 shoots interlaced video, but has native SD 16:9, and is a big improvement over the PD-170 in widescreen mode.
If you're concerned about how your video will look on a widescreen LCD or plasma screen, then it makes sense to use one when you edit as well. I've been doing this for several years and I think it helps a lot.
I've done this three times with different setups.
The first time, we used a FireWire transcoder device that gave us component video output, which was captured (from FCP on my laptop) by a pair of Doremi hard disk video recorders (one was a backup). The projectors were fed via component output from the Doremis, and they were controlled over a serial interface by some old software running on a PowerBook 170 which advanced the deck to the next cue.

The second time, for a production in South America, we used a variety of old and rather unreliable devices for multiple screens, including an ancient DVCAM deck and some sort of HP PC running Pinnacle software. This was all far from ideal, but we had to use what they had; the situation there included lots of cheap labor but no budget for new equipment.

Most recently we used the Catalyst Media Server from High End Systems for video on two screens. This is a terrific solution, and the one I plan to use in the future (just now working on a budget for a show next fall, in fact). There are a variety of output and input options on this system, but we used DVI. The software runs on a PowerMac G5 and you can download it for free - more info here: http://www.highend.com/products/digi...g/catalyst.asp

But the real beauty of this system is that it "speaks" the DMX512 protocol, which allows it to be programmed through the stage lighting board as though it were a moving light fixture. This is a huge advantage in that it centralizes control of all your lighting and video in one place.
I should also add that your question raises some important points about the original issue. Aside from the playback medium (DVD, etc.), the method of connecting to the big-screen TV or projector is very important. I've found that S-video yields significantly worse results than component video. If you're using a DVD player, use the component outputs. Or if you're playing from a camera that doesn't have component output, consider an external device that will transcode from FireWire. Here's a pretty inexpensive option:
http://www.adstech.com/products/API-...sp?pid=API-555
Sharpness at -1
ProCoder has a pretty darn good reputation - you should be OK in that respect. I also view my videos on a 53" screen. After viewing the very first video I ever did on the 53", I went through a myriad of tests of camera settings and encoders, because I also was not satisfied with the results. It looked great on a 32" CRT but not so hot on my 53" rear projection, just like you said, Mike. Ever since that round of testing, both of my Sony cameras (PD-170 & VX-2000) are on special custom presets which I use all the time: color in the middle and sharpness set to one notch below the center setting. IMHO most DV cameras' sharpness is just a hair too aggressive, and the result when viewed on a large screen is obvious... lots of jaggedness, flicker, and pixelation. TEST: If you video a brick building and pan slowly it's obvious - with sharpness at the middle setting you will see rapid flickering between the bricks as the camera struggles to redraw fine detail it can only handle efficiently in a still shot. Now try the same shot with your sharpness set one notch lower. View the results of both and the difference is like night and day. You lose some fine detail, and it is a trade-off to have the sharpness reduced, but IMO the end result throughout an entire DV production is a more acceptable picture when viewed on a large screen. You won't offend anyone with a slightly softer picture, and that may be more akin to the look of film anyway.
Hmm, all interesting. I am shooting on PD-150s, but widescreen is not the issue. I am not trying to stretch the image across the whole screen; I let the TV put the black bars on the side, just like an SD TV does (if you let it).
The issue is more the aliasing and, I am guessing, the interlacing. My DVD player is hooked up via component, so that should be OK. The sharpness setting is an interesting thought; I will have to test that. Another question regarding encoding: I have not looked, but I wonder if ProCoder can encode progressive. That might help, but then we are talking about having to make two different discs for clients, an interlaced one for CRT and a progressive one for LCD/plasma. Dang, I wish they would just settle the HD format and start selling true HD cams for 3k. Is that too much to ask?? Mike
Try the sharpness test, Mike, and you will see a lot of that crappy "DV look" improve.
This is an interesting topic. I am going to test some stuff and post what I find.
Meanwhile, do any of you "warn" a client about this sort of thing? I had a 55" CRT before and things looked normal; this is my first progressive TV, so it is an eye opener for me. I am concerned about a client popping in their DVD and being aghast. I must be getting old, I just used the word aghast..... Mike
Your average person has no understanding of video formats. It would be like talking to a wall. And what if your customer has both? The couple has a CRT and their parents have an LCD? What if they currently have a rear projection set and then buy an LCD TV in a year?
I have two home theater setups: downstairs a 53" rear projection and upstairs a 42" LCD. The 42" LCD is hooked up to just an Xbox, and the interlaced DVD video I play on it looks fine. I think Mike just wasn't aware of the true limitations of the heavily compressed DV25 format, and it became stunningly obvious when it was blown up on a 50" screen. DV25 is weak, no matter what you do. Unless you use some big-budget Hollywood DigiBeta conversion, like in the movie "Open Water" (which was shot in DV25), it's not going to look as good as a DVD you rent.
Well, actually I am well aware of the super-lossy DV25 limitations; I am just trying to figure out how people handle it, if at all. My assumption is that there is no way to make DV look good at 50" on an LCD (without Hollywood budgets), and I just wanted to verify that I am not missing something.
More importantly, I am trying to find out how people handle it with customers. Again, without delving into HDV or progressive shooting, what do people tell the client, if anything? We are intentionally holding off on HDV since I think it is a transitory format and will quickly be replaced by affordable HD. I don't really want to get into that argument, so hold off on that one. Since we are a little way off from HD distribution (for the average household), I didn't see a pressing need to switch gear just yet. Now for HDV: we are talking about an even more compressed format. Granted, a higher resolution, but heavily compressed. How does HDV (once put onto DVD) hold up? Is the DVD encoded 720p? What exactly is the difference with HDV once you get it to a standard DVD (not the new formats coming out)? Thanks guys, I really find all this interesting. Your insights are worth a lot.
Interesting point on the crunching of HDV. As I understand it, the only reason it is crunched so heavily is to keep it inside the 25 Mbps limit. It would seem that as HD players become common, it would be a simple software change (perhaps hardware, if we are talking about a physical chip doing the compression, which I suspect is the case) to get an HDV camera to create a far less compressed format.
But aren't we really talking about a whole new format? An HDV cam that created a 12 or 17 BPS stream would be something new, would it not? It would now be something between HDV (25) and true HD. Quasi-HD, if you will. Again, a new avenue that may open up makes me question running out to buy three HDV cams and the associated hardware/software to shoot and edit it. It seems we are very, very close to busting away from the 25 limit and the associated limits of current DVD technology. Now is a good time to work for someone else (who gets to buy all the gear) and wait till things settle down.
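For a rough sense of how compressed these formats are relative to one another, here is some back-of-the-envelope arithmetic. These are my own numbers, not figures from the thread, and note that DV25 is intraframe while HDV and DVD use long-GOP MPEG-2, so bits per pixel is only a crude comparison.

def bits_per_pixel(bitrate_mbps, width, height, fps):
    """Average coded bits per pixel: bitrate divided by pixels delivered per second."""
    return bitrate_mbps * 1_000_000 / (width * height * fps)

print(round(bits_per_pixel(25.0, 720, 480, 29.97), 2))    # DV25 SD:            ~2.41
print(round(bits_per_pixel(25.0, 1440, 1080, 29.97), 2))  # HDV 1080i:          ~0.54
print(round(bits_per_pixel(9.8, 720, 480, 29.97), 2))     # DVD max video rate: ~0.95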
Personally, if I were paying someone for a wedding video (and my daughter is getting pretty serious with her boyfriend :-), I wouldn't accept a 4:3 DVD with ugly black bars on my plasma screen. And I certainly wouldn't accept something shot with a PD-150 in widescreen mode. But then again, I'm not your usual "Joe Sixpack"... ;-)
No offense, but I am guessing you also wouldn't be willing to accept the price tag of a full HD production.
No, I wouldn't want HD. But I would expect "real" 16:9....
"But I would expect "real" 16:9...."
But for what we are talking about (camera price-wise), isn't the XL2 the only real solution here? Everything else is "fake" 16:9. (I don't know anyone who would shoot with an anamorphic adaptor in this situation.) Also, just to add my $0.02, I have a 57" rear projection, and 24p 16:9 and 2.35:1 from a DVX (non-anamorphic) looks pretty dang good on it. And yes, I do have a pretty discriminating eye myself. Also, I will say that I have yet to see a plasma/LCD screen that I thought looked good.
At the low end of the "pro" spectrum there's the Sony PDX-10, which was just discontinued. I've had one of these for years and it shoots remarkably good 16:9. A great value for $1,700: http://www.bhphotovideo.com/bnh/cont...ughType=search Or how about the inexpensive Sony HC1? http://www.bhphotovideo.com/bnh/cont...ughType=search Or the HC3 or HVR-A1? Then there's the FX1 in the $3,000 range, about the same as a PD-170. All the HDV cameras shoot SD 16:9 with great quality.
This kills me, from a consumer/enthusiast point of view. I think *all* televisions and video monitors should be calibrated correctly to one standard and shipped that way as a default. You can have one of the most expensive HDTV sets out there, but you will never see its full potential because it's in "full torch" mode. Most sets have their defaults set with the sharpness cranked too high, the brightness near blooming white, and the color levels cranked up so your "average" consumer will be "wowed."

Almost ten years ago, a LaserDisc from Joe Kane called "A Video Standard" was introduced. It provided all kinds of test patterns and color bars to set up your TV. I was skeptical at first, but once I "got it", every movie I viewed from that point on was a revelation. And of course video from my camcorder looked better too (except for the differences in black levels straight out of the cam). A couple of years back, Lucas's THX-certified DVDs started to include the "THX Optimizer," which attempts to help consumers dial in their sets. It's pretty good in my opinion, and free. Once a set is dialed in, I think the criticism of DV looking "bad" on a big screen will largely go away. And with a big screen, most viewers won't be watching it from 12 inches away; they are usually sitting a couple of feet back.

One last comment, gleaned from reading up on www.avsforum.com: recently there have been huge numbers of "upconverting" DVD players that upconvert the 480 signal to 1080i for HD sets. I was skeptical at first, but it really is an improvement, especially if your HD set's built-in scaler is not very good.
Isn't the issue that the cheaper consumer cams offer this, but not at the pro quality we want to deliver? I see that as going backwards a bit, IMHO. Taking the comments posted in this thread into consideration, doesn't this show that the higher-resolution cameras are becoming something to consider strongly for the future? I don't mean to hijack the thread for any purpose other than to illustrate that the SD market for footage is beginning to encounter a few issues. I too battled this the same way Mike did, and in the end relented. I encountered exactly what Mike has, and I wasn't able to find a good workable solution, which brought plenty of frustration. Converting footage for output on a large screen is an option, but the resulting quality was just not good enough for my liking. Cheers
Native 16:9? How does that change things?
This is still a 720x480 picture at about 5:1 compression. Stretching that across 55" will still not look good. True, you are taking a higher-def chip and downsampling it, but I would think three 4:3 chips are still better than one 16:9 chip.
The difficulty with most SD "prosumer" 3-chippers, for shooting 16:9, boils down to the effective pixel count, combined with the shape, of the CCDs. The only way to get 16:9 footage from a 4:3-shaped CCD with approx 400k effective pixels is to crop the image on the CCD, which then effectively gives you only about 360 lines of resolution (losing approx 120 lines, or 25% of SD resolution), regardless of whether it is recorded as 360 lines or stretched out and recorded over 480 lines. When played back letterboxed on a standard 4:3 CRT, that's not a problem, because the playback is essentially only at 360 lines of resolution anyway, but when you play back the same footage on a high-resolution 16:9 television or monitor, the loss of resolution is noticeable.
Many consumer camcorders made nowadays have much larger effective pixel counts on the CCD, and are capable of acquiring a full 480 lines from the CCD while shooting 16:9 (even if the CCD is not physically 16:9 in overall shape). It may seem odd, but in many situations a fairly cheap consumer camera can outperform a much more expensive professional camera for shooting 16:9. I have a TRV70 (1/3.6" CCD, with approx 1 million effective pixels), which I shoot 16:9 with on occasion (I believe it is a 4:3-shaped CCD, but I do not know that for certain - and I'm not going to take the camera apart to look!). It shoots a very clean, crisp image for a small consumer camera. In good light, I would use it for shooting 16:9 before using my GY-DV300U (3 x 1/3" CCDs, 410k effective pixels, similar in CCD resolution characteristics to a PD, VX, GL or XL1).
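The 25% figure quoted above is just geometry; a quick sketch of the arithmetic (nothing camera-specific assumed):

# A 16:9 window cropped out of a 4:3 frame keeps only 3/4 of the frame's height.
sensor_lines = 480                 # usable lines on a typical 4:3 SD chip
crop_factor = (9 / 16) / (3 / 4)   # height of a 16:9 window relative to the 4:3 frame
lines_in_16x9_crop = sensor_lines * crop_factor
print(lines_in_16x9_crop)          # 360.0 -> about 120 lines (25%) thrown away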
Ok, I must be missing something.
A 16:9 CCD or CMOS will capture a native 16:9 image. However, once it makes it to tape it is still a 720x480 image (in the DV25 world). For it to retain the 16:9 shape, one of two things must happen: 1. To fit the 16-unit width in, a top/bottom letterbox is added somewhere, reducing the actual picture to 400-something lines, just like letterboxing 4:3 in post. 2. The image is squeezed when recorded and then unsqueezed when played; only now you are stretching the 16 part across greater than 720 pixels. Or is there another option I am missing here?
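For what it's worth, option 2 is essentially how anamorphic DV and DVD work: all 720x480 samples are kept, and a flag tells the player what pixel aspect ratio to display them at, so the stretch happens at display time rather than costing recorded resolution. A rough sketch of the arithmetic (the pixel aspect ratios are the usual ITU-R BT.601 figures for NTSC, not something stated in this thread):

stored_w, stored_h = 720, 480   # what DV/DVD always records, whether 4:3 or 16:9
par_4x3 = 10 / 11               # NTSC 4:3 pixel aspect ratio
par_16x9 = 40 / 33              # NTSC 16:9 (anamorphic) pixel aspect ratio

for label, par in (("4:3", par_4x3), ("16:9", par_16x9)):
    display_w = stored_w * par  # width in square pixels after the player's stretch
    print(label, round(display_w), round(display_w / stored_h, 3))
# Prints roughly 655x480 and 873x480; using the 704-pixel active width instead of
# 720 gives 640x480 and 853x480, i.e. exactly 4:3 and 16:9.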
Per the specifications printed in GY-DV300U manual, there are 380,000 effective pixels, arranged in 492 rows with 768 columns (492x768 is actually 377,856, so I assume JVC was rounding off to obtain the 380,000 figure). I'm not sure why I had a 410,000 figure in my mind earlier today for the DV300U, but that might be the effective number of pixels for a VX2100, since I looked at those specs recently. Regardless, the number of effective pixels for either camera is close to 400,000 and arranged similarly. For the sake of simplicity, let's call it 480 rows with 720 columns (matches the specs for recording NTSC SD digital video, at "D1", in either 4:3 or 16:9 aspect ratios). Also, for purposes of this discussion, let's think of the 3 CCDs as simply constituting one pickup device.
If you take those 480 rows and 720 columns and arrange them as a grid that measures (with a ruler) 3/4 as tall as it is wide (4:3 aspect ratio), you will have pixels that are, in essence, taller than they are wide. That is basically how most of the prosumer SD cameras' pickup devices are laid out. To capture an image at a 16:9 aspect ratio using a pickup device laid out like that, the only way (unless an anamorphic lens is used) is to discard the output from 120 of the rows (cropping). To fit the D1 NTSC digital video recording spec, the recording can then either be of a cropped image also, or the output from the effective pixels can be stretched (either way, you lose effective resolution).

Conversely, you can lay out a grid of 480 rows and 720 columns arranged such that the grid measures (with a ruler) 9/16 as tall as it is wide. That would give you exactly enough effective pixels, properly laid out, to capture a 16:9 aspect ratio image and record it at NTSC SD D1 specifications with maximum resolution, and the pixels would be, in essence, wider than they are tall. To capture an image with a 4:3 aspect ratio using a pickup device laid out that way would require discarding the output from 180 of the columns (cropping). To fit the D1 NTSC digital video recording spec, the recording can then either be of a cropped image also, or the output from the effective pixels can be stretched (again, either way, you lose effective resolution).

With 1,000,000 effective pixels (an effective megapixel), you can lay out a grid of varying height/width ratios (measured with a ruler), with far more than 480 rows and 720 columns. That way (with a reasonable layout) you can capture either a 4:3 or a 16:9 aspect ratio image with enough individual effective pixels from the pickup device for "D1" NTSC digital video specifications. Essentially, you can attain full NTSC D1 resolution either way, simply by overkill (more effective resolution acquired by the pickup device than can be recorded under the specifications).

The NTSC D1 specification for either 16:9 or 4:3 recorded video (720x480) does involve stretching or squeezing to play back properly on a square-pixel display. For example, 4:3 SD LCD monitors/televisions commonly have a display resolution of 640x480 pixels (CRT resolution isn't actually measured in pixels - that's native analog technology, which involves different types of measurement for resolution). That's a lot of verbiage, but I hope it explains things well. I can't think of a more succinct way to put it, with sufficient detail for a thorough explanation, without using illustrations.
Rob, there are THREE ways to produce 16:9, depending on the camera - not just two, as you seem to believe: [1] cropped, [2] true 16:9 chip, [3] anamorphic squeeze. The 25% loss in resolution figure you speak of is only true if the camera crops the image to produce 16:9. But that isn't how it's done in most prosumer cameras; most achieve 16:9 using method 2 or 3. There is no loss in resolution with anamorphic, because the chips are not 4:3. I'll repeat... the chips ARE NOT exactly 4:3. This is a big assumption you made, and it's wrong. The anamorphic chip has more than 480 lines and squeezes the picture down to 480; then the pixels are stretched to produce 16:9. Your big assumption is that unless it's a 16:9 chip it is a "4:3-shaped chip", and that is just not true.
There is no physical way to lay out a sensor array (grid) with only 400,000 (or thereabouts) effective elements that can acquire a full D1-resolution image (720x480) at BOTH 4:3 and 16:9 aspect ratios (without use of an anamorphic lens in acquisition, for at least one of the aspect ratios). Can't be done. It's simply a physical impossibility. There are not enough elements in the array (no matter what the physical layout arrangement is) to do so.
Don't confuse cropping from the sensor with cropping in the recording (two entirely different things). An image acquired by cropping the output from the sensor can be recorded as appearing "squeezed" (and that is an option on some of the cameras mentioned). Actually, when that is done with one of the cameras mentioned (VX, GL, etc.), the use of the term "squeeze" can create confusion, since the image must first be stretched vertically before recording (the acquired image is not actually squeezed with one of these cameras). The resulting recording is of an image that appears to be squeezed horizontally (it must be stretched horizontally, or squeezed vertically, upon playback, to display at the proper aspect ratio).
The debate is about the loss of resolution you claim comes with using a camera that doesn't have a true 16:9 chip. You said that recording in 16:9 mode would result in a 25% loss in resolution compared to 4:3 mode, and I've said that isn't true. These cameras can acquire 16:9 with a pixel count not less than the standard 4:3 pixel count (or very, very close - nowhere near a 25% loss). Sony's own specs for this camera claim no loss in resolution.
Don't forget, when shooting progressive scan you effectively have twice the resolution PER FRAME compared with an interlaced source.
When shooting with a DVX100 in squeeze mode, progressive scan, vertical detail set to thin, using the native widescreen (squeeze), you CANNOT tell the difference between this and native 16:9 from an XL2. Maybe if you ran a res chart you might find a 50-odd line difference, but let's face it: in the real world, if it looks good and it does the job, WHO CARES?