PAL 1440?


Thomas Smet
May 26th, 2005, 12:36 PM
Any info on whether the PAL version of the HVX200 will actually use 1440x1080 instead of 1280x1080? I read a few rumors that this was being added for the PAL unit.

Rob Lohman
May 28th, 2005, 04:46 AM
It may be; however, as far as I know all HD(V, whatever) cameras use the same resolution across the different models. They finally did away with different resolutions for different regions of the world (for HD). However, different framerates will no doubt still apply (yikes).

Radek Svoboda
May 28th, 2005, 11:36 AM
The camera uses the DVCPRO HD format. The 50i resolution in this format is 1440x1080.

Rob Lohman
May 29th, 2005, 04:19 AM
Remember that 1280 x 1080 is *NOT* an HD resolution; 1920 x 1080 (alongside 1280 x 720) is. 1280 x 1080 is a non-widescreen format.

The reason the cameras are recording 1440 x 1080 has to do with pixel aspect ratios (i.e., the pixels are not square). If you convert that back to square pixels you will end up with 1920 x 1080 (which is an HD standard).
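
For illustration, here is a rough back-of-the-envelope sketch in Python of how a stored frame size plus a pixel aspect ratio gives you the square-pixel display size (my own toy example, not pulled from any camera spec):

def display_width(stored_width, pixel_aspect_ratio):
    # Display width in square pixels = stored width x pixel aspect ratio (PAR).
    return round(stored_width * pixel_aspect_ratio)

# 1440x1080 stored with a 4/3 PAR plays back as 1920x1080.
print(display_width(1440, 4 / 3))   # 1920
# 1280x1080 stored with a 1.5 PAR also plays back as 1920x1080.
print(display_width(1280, 1.5))     # 1920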

Kevin Dooley
May 29th, 2005, 05:59 AM
I'm sure most people are aware of that, but I think the question arises from the fact that DVCPRO HD in the 1080i format for 50i (not really PAL, but it uses a PAL framerate) is 1440x1080, versus the "NTSC" version, which is 1280x1080...

Radek Svoboda
May 29th, 2005, 04:36 PM
>>Remember that 1280 x 1080 is *NOT* an HD resolution, 1920 x 1080
(alongside 1280 x 720) is. 1280 x 1080 is a non-widescreen format.

Rob,

It is a widescreen format; 60i HDCAM HD uses it.

Radek

Rob Lohman
June 1st, 2005, 03:16 AM
Do you have a link to that Radek?

I'm not saying no cameras are recording at 1280 x 1080, I'm just saying it is not in the HD consumer spec.

Here are the screen aspect ratio numbers:

640 : 480 = 1.33
1280 : 1080 = 1.185 (even less "widescreen" than normal TV!!)

1280 : 720 = 1.77 (= 16:9)
1920 : 1080 = 1.77 (= 16:9)

I would not consider anything under 1.77 to be "widescreen". The two most common film widescreen formats are 1.85 and 2.35.

The only way the mentioned HDCAM is recording widescreen is if it is not using square pixels. So you guys are saying that 1280 x 1080 gets stretched to 1920 x 1080? That means it has a pixel aspect ratio of 1.5, versus 1.33 for 1440 (which you say is PAL?).

(Remember, DV NTSC 720 x 480 has a 0.9 pixel aspect ratio, resulting in 640 x 480 at a pixel aspect ratio of 1.0. The higher the screen aspect ratio (horizontal divided by vertical), the wider the "look".)
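
To check those pixel aspect ratio numbers quickly (just arithmetic, nothing format specific):

# Pixel aspect ratio needed to stretch a stored width to a square-pixel display width.
def par(display_w, stored_w):
    return display_w / stored_w

print(round(par(1920, 1280), 2))   # 1.5  -> 1280x1080 stretched to 1920x1080
print(round(par(1920, 1440), 2))   # 1.33 -> 1440x1080 stretched to 1920x1080
print(round(par(640, 720), 2))     # 0.89 -> DV NTSC 720x480 squeezed to 640x480 (the ~0.9 above)
print(round(1920 / 1080, 2))       # 1.78 -> screen aspect ratio either way (16:9)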

Barry Green
June 1st, 2005, 04:12 AM
HDCAM does indeed use non-square pixels. It records 1440x1080, which gets up-rezzed to 1920x1080 on playback.

DVCPRO-HD uses 1280x1080, which gets up-rezzed to 1920x1080 (but uses higher color sampling).

Square pixels are not the rule; in fact, square pixels are the extreme exception. Just running through formats in my head (DV, DVCPRO50, DVCPRO-HD, DigiBeta, MPEG-IMX, BetaSX, DVCAM, HDCAM, HDV), I can think of only ONE format that uses square pixels: HDV 720p. Every other one uses non-square pixels, and HDV 1080i, DVCPRO-HD, and HDCAM all record sub-sampled versions which need to be up-rezzed on playback.

HDCAM-SR is, I believe, a square-pixel 4:4:4 system, but that's about it.
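
For reference, here is how the stored vs. played-back sizes mentioned in this thread line up, written as a small Python table (my own notes, worth double-checking against the format documentation):

# Stored (on tape) frame size vs. square-pixel display size for the formats discussed above.
FORMATS = {
    "HDCAM 1080i":        ((1440, 1080), (1920, 1080)),
    "DVCPRO HD 1080/60i": ((1280, 1080), (1920, 1080)),
    "DVCPRO HD 1080/50i": ((1440, 1080), (1920, 1080)),
    "HDV 1080i":          ((1440, 1080), (1920, 1080)),
    "HDV 720p":           ((1280, 720),  (1280, 720)),  # the one square-pixel case
}

for name, (stored, displayed) in FORMATS.items():
    print(f"{name}: stored {stored}, displayed {displayed}, PAR {displayed[0] / stored[0]:.2f}")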

Rob Lohman
June 1st, 2005, 04:23 AM
Thanks for that Barry. That sums it up then.

Graeme Nattress
June 1st, 2005, 07:26 AM
HD is always square pixels, but various HD compression algorithms do resolution reduction, which makes the video look like it's using non-square pixels. However, it's not; it's just using a compression method that leaves exactly the same result. Confusing, isn't it? That's because when you decompress the video fully, you end up with square-pixel HD video.

Graeme

Steven White
June 1st, 2005, 08:38 AM
I don't buy that interpretation, Graeme. Tell me if this isn't correct:

In order to get 1920x1080 from any non-square pixel source, a digital up-sample has to be done on the horizontal dimension of the image.

Even if the image was acquired in native 1920x1080, when it's downsampled to 1440x1080 or 1280x1080, that resolution is lost forever - in fact, the image is softened horizontally compared to what a native 1440x1080 image would produce. Resampling back to 1920 softens the image again, resulting in an effective resolution that is less than 1920 (probably even less than 1440 or 1280).
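
A crude way to see that loss, if anyone wants to try it, is a quick numpy round trip (plain linear interpolation here, not whatever filtering a real codec uses):

import numpy as np

# One scanline with the finest possible horizontal detail: alternating dark/light pixels.
line_1920 = np.tile([0.0, 1.0], 960)

x_1920 = np.arange(1920)
x_1440 = np.linspace(0, 1919, 1440)

# Downsample 1920 -> 1440, then upsample back to 1920.
line_1440 = np.interp(x_1440, x_1920, line_1920)
line_back = np.interp(x_1920, x_1440, line_1440)

# The single-pixel alternation cannot be represented at 1440 samples, so the
# round trip comes back visibly softer (and aliased) rather than identical.
print("original detail amplitude (std):", line_1920.std())              # 0.5
print("round-trip detail amplitude (std):", round(line_back.std(), 3))  # noticeably lower
print("mean absolute error after round trip:", round(np.abs(line_back - line_1920).mean(), 3))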

This is a problem for all digital displays. In principle, the only system that doesn't have a problem with pixel-aspect ratios is the scanning CRT system - where the horizontal scan is an amplitude-modulated electron beam. (Though I expect this isn't done in the ideal way in the majority of displays).

Boy do I loathe the principle of pixel aspect ratios. Here's hoping the next set of codec standards is at least full 1920x1080 - if not uncompressed 4:4:4 at that resolution.

-Steve

Radek Svoboda
June 1st, 2005, 08:55 AM
>>HDCAM does indeed use non-square pixels. It records 1440x1080, which gets up-rezzed to 1920x1080 on playback.<<

The CCDs have square pixels, 1920x1080, recorded on tape at 1440x1080.

Radek

Graeme Nattress
June 1st, 2005, 10:41 AM
One of the reasons the video can get squashed and then expanded back out as part of the codec is that there's not much detail at that level anyway, and the lenses probably can't resolve up to that level anyway, so it's a very effective compression to just eliminate resolution that you're not going to see. Yes indeed, it is better not to do it, but given that you can only fit so many bits on tape, it's better to lose some resolution than to have full resolution with more compression artifacts. And in the big cameras whose CCDs produce the full raster, you get a super-sampling effect when the image is scaled down, which means lower noise, which might also compress more easily.
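
The averaging effect is easy to see in a toy simulation (numpy, a plain 2:1 box average rather than the actual 4:3 scale, but the principle is the same):

import numpy as np

rng = np.random.default_rng(0)

# A flat mid-grey scanline with additive noise standing in for sensor noise.
noisy = 0.5 + rng.normal(0.0, 0.05, size=1920)

# Crude 2:1 downscale by averaging neighbouring pairs of pixels.
downscaled = noisy.reshape(960, 2).mean(axis=1)

print("noise std before:", round(noisy.std(), 4))      # ~0.05
print("noise std after:", round(downscaled.std(), 4))  # ~0.05 / sqrt(2), i.e. ~0.035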

Compression is different from pixel aspect ratio. They can lead to the same visual result, but that does not mean they're the same thing.

Graeme

Steven White
June 1st, 2005, 11:11 AM
Hey Graeme:

My definition of compression is the following:
- given a fixed-resolution video source, a compression algorithm reduces the data stream by accounting for redundant and/or similar information, both spatially and/or temporally, within the image.

My definition of scaling (down-sampling, up-sampling) is the following:
- given a fixed-resolution video source, a scaling algorithm changes the resolution of the incoming image to a new fixed resolution.

Taking a 1920x1080 stream to a 1440x1080 stream would be a down-sampling operation followed by a compression operation.

While both operations discard information, the principle of a lossless compression algorithm is that with decompression the data can be fully recovered. There is no such thing as a lossless scaling algorithm*. Admittedly there are precious few lossless compression algorithms as well - but they are at least possible.
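
A toy illustration of that distinction, for anyone who wants to see it run (numpy plus zlib, not any video codec):

import zlib
import numpy as np

rng = np.random.default_rng(1)
line = rng.integers(0, 256, size=1920, dtype=np.uint8)  # one arbitrary 1920-sample scanline

# Lossless compression: every value comes back exactly after decompression.
restored = np.frombuffer(zlib.decompress(zlib.compress(line.tobytes())), dtype=np.uint8)
print("lossless round trip exact:", np.array_equal(line, restored))  # True

# Scaling: 1920 -> 1440 -> 1920 via linear interpolation is not invertible.
x_1920, x_1440 = np.arange(1920), np.linspace(0, 1919, 1440)
rescaled = np.interp(x_1920, x_1440, np.interp(x_1440, x_1920, line.astype(float)))
print("scaling round trip exact:", np.array_equal(line, np.round(rescaled).astype(np.uint8)))  # False in general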

It strikes me that super-sampling at 1920 and scaling down to 1440 fails on noise arguments as well. If you used a native 1440 grid and the imaging cells had the same noise properties per incident photon, the "signal averaging" effect would be the same. The main benefit of super-sampling would be to reduce aliasing artifacts... but this could be done with sub-pixel shifts on the CCDs to slightly blur the image. I would also argue that a 1.33x or a 1.5x super-sampling isn't really enough to get rid of aliasing in a meaningful way. I think you'd need at least 4x super-sampling to get a significant benefit (as seen in anti-aliasing algorithms in GPUs for computer games).

-Steve

*unless we are discussing an upsampling operation where the output resolution is an integer multiple of the input resolution - and even in this case, interpolation is often used.

Graeme Nattress
June 1st, 2005, 11:30 AM
So is 4:2:2 chroma sampling chroma compression, or chroma resolution reduction and expansion? Either way, it's irrelevant - they're both just the same thing.
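
In code terms, 4:2:2 is exactly that kind of resolution reduction applied to the chroma channels. A bare-bones numpy sketch (no filtering, not a real encoder path):

import numpy as np

def to_422(y, cb, cr):
    # Keep luma at full width; keep every other chroma sample (half the horizontal chroma resolution).
    return y, cb[:, ::2], cr[:, ::2]

def from_422(y, cb_half, cr_half):
    # Expand chroma back to full width by repeating each stored sample.
    return y, np.repeat(cb_half, 2, axis=1), np.repeat(cr_half, 2, axis=1)

y = cb = cr = np.zeros((1080, 1920))
_, cb_half, _ = to_422(y, cb, cr)
print(cb.shape, "->", cb_half.shape)  # (1080, 1920) -> (1080, 960)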

Compression is just reducing the amount of data. It doesn't matter how you do it, be it wavelet, DCT, resolution reduction or whatever - you can still compare the result back to the original and see how it looks.

By its nature, resolution reduction is lossy, but only if there are details that the reduced resolution cannot store, at which point you must filter them out to stop aliasing artifacts on the reduction.

In scaling, interpolation and filtering are nearly always used, unless you're using a "nearest neighbour" algorithm, which is going to produce vast visible artifacts. Whether an integer scaling factor is used or not is irrelevant.

If you want compression based upon scaling, you could look through each area of the image and decide by what factor (based upon level of detail) you could scale each section down, and back up again, without losing detail. By setting the allowable detail loss, you could then have variable levels of compression, all just using scaling.
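
A very rough sketch of that idea in Python, one-dimensional blocks only and purely illustrative (the max_error threshold is just a made-up knob):

import numpy as np

def compress_block(block, max_error=0.01):
    # Try progressively smaller sizes and keep the smallest one whose
    # scale-down/scale-up round trip stays within the allowed error.
    x_full = np.arange(block.size)
    best = block
    for size in range(block.size - 1, 1, -1):
        x_small = np.linspace(0, block.size - 1, size)
        small = np.interp(x_small, x_full, block)
        back = np.interp(x_full, x_small, small)
        if np.abs(back - block).max() <= max_error:
            best = small
        else:
            break
    return best

flat = np.full(64, 0.5)                    # no detail: shrinks a long way
busy = np.sin(np.linspace(0.0, 40.0, 64))  # lots of detail: barely shrinks, if at all
print(compress_block(flat).size, compress_block(busy).size)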

Graeme

Steven White
June 1st, 2005, 11:54 AM
Bah. This is just semantics now. The order and the methods ARE important. Ultimately, if you carry "scaling = compression" to its logical conclusion, you can state that scaled SD video = HD... which networks actually do.

-Steve

Graeme Nattress
June 1st, 2005, 12:06 PM
Steven, you're spot on right!

I guess what I'm getting at is that if you feed 1080i60 1920x1080 4:2:2 into a DVCPRO HD tape deck and then play it back out over SDI, what you get is 1920x1080 4:2:2, but it looks worse because it's been compressed. It's only internally that the tape holds the HD video at a non-square-pixel resolution; if you didn't know what was going on inside the deck, you'd only ever see square-pixel HD video.

As for broadcasters scaling SD to HD and calling it HD, yes, it happens :-(

Graeme