Please help me understand aspect ratio / 16:9 at DVinfo.net

January 22nd, 2007, 02:30 AM   #1
Please help me understand aspect ratio / 16:9

Ok I've been reading up on this and trying to wrap my head around it. I don't know why, but it still hasn't just "clicked."

I think I am clear on this: the focal length (I think that's the technical term for the mm setting) has nothing to do with 16:9. I could be at 2.5mm or 100mm and be in 16:9 no matter what...

But the problem I have is: why? How can an image that is so wide, shot on a wide angle lens, take up the same rectangle as what seems like a much narrower image shot at a 100mm focal length? It just doesn't make sense to me.

If anyone is good with words/teaching, I would appreciate it SO much if you could explain in detail exactly why this is...

and not just this, but how the aspect ratio works and what's going on: what's with the resolution differences, what does it have to do with the width of an image, where does the vertical space go, why doesn't that have to do with the lens, and so on... on and on it goes!

I just don't get it, and I really want to.


Please help and thank you so much!
Daniel Cegla
January 22nd, 2007, 03:32 AM   #2
Ok Daniel, I'll bite. Aspect ratio is nothing more than the ratio of a picture's width to its height. The first number is the horizontal and the second is the vertical.

If you want to do it purely by pixel counts, say for a graphics file to have the correct proportions, you would divide 16 by 9 and come up with 1.78. That's the number you then multiply the vertical pixel count by to arrive at the correct number of horizontal pixels.

Let's use SD as an example:

480 X 1.78 = 854.4

So if you were making a graphic in an image editor for your 16:9 SD sequence, it would need to be 854 x 480.
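
If it helps to see that arithmetic as code, here's a small Python sketch (the function name is just for illustration; using 16/9 exactly gives 853.3, while the rounded 1.78 gives 854.4, which is why you'll see both 853 and 854 quoted):

# Width of a 16:9 frame for a given height, square pixels assumed.
def width_for_height(height, aspect_w=16, aspect_h=9):
    return height * (aspect_w / aspect_h)

print(width_for_height(480))   # 853.33..., rounded up to 854 for the graphic
print(width_for_height(720))   # 1280.0
print(width_for_height(1080))  # 1920.0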

That works for graphics, but video is handled a little differently. Since the DV spec says the video must be 720 x 480 pixels, we are forced into the same pixel grid that is used for 4:3. What to do? Simply stretch the 720 pixels in the horizontal direction on playback while leaving the vertical size of the pixels as is.

This is where the term 'anamorphic' comes into play. It means 'to change shape'. There are two methods to accomplish this, optically or electronically. An anamorphic lens adapter does it optically and is meant to work on a native 4:3 camera such as the DVX100. If a camera has native 16:9 chips, such as the XL2, the anamorphic effect is created electronically by using more horizontal pixels (960 in the XL2's case) and mapping them onto a 720-pixel grid to fit the DV spec.

The end result is the same either way: you instruct your NLE that you have 'anamorphic' footage so it knows to stretch your 720 pixels back out into a 16:9 image. If you then author a DVD with this footage, you set an 'anamorphic' (widescreen) flag when you build the disc so that the player knows it's widescreen and can react appropriately.
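
As a rough Python sketch of what that playback stretch amounts to (illustrative only; a real NLE or DVD player works from a stored pixel aspect ratio value, and the numbers here are just the ones in this thread):

# Anamorphic DV: 720 stored pixels are displayed wide enough to make a 16:9 picture.
stored_w, stored_h = 720, 480
target_aspect = 16 / 9

display_w = round(stored_h * target_aspect)   # 853 square pixels wide
stretch = display_w / stored_w                # ~1.18, how much wider each stored pixel is shown
print(display_w, "x", stored_h, "stretch factor", round(stretch, 2))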

I hope this clears it up somewhat. There are several other threads here on the forum that discuss the issue in one way or another. You are correct about lenses having nothing to do with aspect ratio.

-gb-
Greg Boston
January 22nd, 2007, 05:02 AM   #3
Daniel,

I hope that I am answering your question, because I'm not sure what your question is.

Anyhow, a lens always projects a circle, no matter what shape the image is. This is called the image circle. A lens is designed for the image circle to be big enough to cover the shape of the chip (or film), and the chip sits in the middle of the image circle. That is why, generally, the larger the chip is on any given camera, the larger the lens is (1/4-inch chips like in my TRV900, 1/3-inch chips in the PD-170/Canon XH-A1 etc., 1/2-inch and 2/3-inch chips in full-size ENG cameras). For example, if you could find an adapter to put a lens designed for a 1/2-inch chip on a 2/3-inch camera, it might not work because the image circle may not be big enough, causing vignetting.

As for the angle of view, which is what I believe you are asking about with the reference to the 2.5mm or 100mm lens: that comes down to optical design, and that is why all those elements move around inside the lens. The best example I can use is with still lenses. If you look at a 28mm lens you will see that the rear element is quite rounded and protrudes from the back of the lens. A telephoto lens's rear element is recessed into the lens and quite flat. A zoom lens has to incorporate both of these designs and usually has more lens elements than a fixed focal length lens.

If you could actually see the image circle from a zoom lens, you would see that the image circle in wide angle is smaller than the image circle when set to telephoto. But both are big enough to cover the chip.

I hope this helps,

Duane
Duane Burleson
January 22nd, 2007, 05:12 AM   #4
Greg explained it very well. Let me just add a few words to answer this question:

Quote:
Originally Posted by Daniel Cegla
But the problem I have is, why? How can an image that is so wide, shot on a wide angle lens, take up the same rectangle as what seems a much narrow image shot on a 100mm focal length? It just doesn't make sense to me.
Every lens projects a circular image. The diameter of this circle depends on the film or chip size the lens was designed for. Let's assume a usable diameter of 45mm (this is typical for a lens that is used on classical 35mm still cameras). From this circle you can "cut" any rectangle with a diagonal of 45mm or less. This also includes a square with an aspect ratio of 1:1. The classical 35mm still frame is 36mm x 24mm, hence an aspect ratio of 3:2. And of course you could also have a frame with an aspect ratio of, say, 16:9, provided the diagonal is not larger than 45mm.
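
To check those numbers, the diagonal of any frame "cut" from the circle is just sqrt(width² + height²). A quick Python sketch (the 45mm circle is the example above; the 16:9 frame sizes below are made-up illustrations):

import math

def fits_in_circle(width_mm, height_mm, circle_diameter_mm=45.0):
    diagonal = math.hypot(width_mm, height_mm)
    return round(diagonal, 1), diagonal <= circle_diameter_mm

print(fits_in_circle(36.0, 24.0))    # classic 3:2 still frame: diagonal 43.3mm, fits
print(fits_in_circle(40.0, 22.5))    # a 16:9 frame: diagonal 45.9mm, slightly too big
print(fits_in_circle(38.4, 21.6))    # a smaller 16:9 frame: diagonal 44.1mm, fits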

I hope this answers your question. A simple illustration would be better, but I've got none at hand at the moment.

Edit: I just saw Duane's reply. It just takes me too long to type an answer ;-)
__________________
Keep rolling

Rainer
Rainer Hoffmann
January 22nd, 2007, 05:25 AM   #5
The aspect ratio is the ratio of width to height of the RECORDED image
(a sensor or film also has an aspect ratio, which is why the recorded image
has a certain aspect ratio; I just think it's easier to think about the images
you're working with).

So a 16:9 (or 16x9 / 16 by 9 as you say when you pronounce it) image has an
aspect ratio of 1.77777777778. I divided 16 by 9 to get that number.

This means that the RECORDED image is 1.77777777778 times as wide as it is
high. A standard 16:9 high definition image is 1280 x 720 or 1920 x 1080. If
you calculate 1280 : 720 or 1920 : 1080 you end up with the same 1.77777777778 number.

But it doesn't have to be a "standard" resolution to have that aspect ratio. An
image that is 16 pixels wide by 9 pixels high is also a 16:9 image, as is an image
with a 1600 x 900 resolution. You get the idea.
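
In Python terms, the same check is one loop (just a sketch):

# All of these divide out to the same ~1.778 screen aspect ratio.
for w, h in [(16, 9), (1280, 720), (1920, 1080), (1600, 900)]:
    print(w, "x", h, "=", w / h)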

We are talking about image (screen) aspect ratio here. The calculations I gave you
above assume the pixels are square (a pixel aspect ratio of 1.0). In other words, each
little pixel is as wide as it is high. For broadcast HD like 1280 x 720 & 1920 x 1080
this is indeed the case. It may not always be so.

The PAR (pixel aspect ratio) for NTSC DV is 0.9, for example. The image resolution
for NTSC DV is 720 x 480 pixels. To get the resolution of the image in square pixels
(PAR of 1.0) we multiply the horizontal resolution by the PAR. In this case 720 x 0.9.
This yields 640 (actually 648, but that's another story). So a PAR 0.9 720 x 480 image
has a resolution of 640 x 480 in PAR 1.0 (square pixels). The SCREEN / IMAGE aspect
ratio for this image is 4:3 or 1.33333333333 (640 divided by 480).

So PAR calculation is first, then you can calculate the screen/image aspect ratio.
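
Here is that order of operations as a small Python sketch (using the rounded 0.9 PAR value from above; the exact figure differs slightly, which is where the 640 vs 648 difference comes from):

# NTSC DV 4:3: convert the stored frame to square pixels, then take the screen aspect.
stored_w, stored_h = 720, 480
par = 0.9                                  # rounded NTSC 4:3 pixel aspect ratio

square_pixel_w = stored_w * par            # 648.0, usually treated as 640
screen_aspect = square_pixel_w / stored_h  # 1.35 here, 1.333 (4:3) with 640
print(square_pixel_w, round(screen_aspect, 3))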

Let's say you recorded that 640 x 480 image but you want to extract a widescreen
image from it in a 16:9 shape (aspect ratio of 1.78; other common ones are
1.85 & 2.40, and the higher the number the wider the image). You would need to
crop that image to 640 x 360 (640 divided by 16:9, that is, 640 / 1.7777777778 = 360).
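
And the crop from the last paragraph, sketched the same way (the 1.85 and 2.40 lines are just there to show the pattern):

# Crop height needed to cut a widescreen frame out of a 640-wide square-pixel image.
def crop_height(width, aspect):
    return round(width / aspect)

print(crop_height(640, 16 / 9))  # 360 -> 640 x 360
print(crop_height(640, 1.85))    # 346
print(crop_height(640, 2.40))    # 267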

This assumes the sensor / film was in a 4:3 aspect ratio. Obviously there are
widescreen film stocks and widescreen sensors which give you the image in a 16:9
or other aspect ratio.

Normally when talking about "aspect ratio" we mean screen / image aspect
ratio and not pixel aspect ratio (PAR). When talking about PAR you explicitly
say it is PAR you're talking about.

Depending on the work you do you may never encounter PAR. You will most
likely have to deal with screen / image aspect ratios. Hopefully this explanation
was of some use to you.

As a final note: lenses are designed for a specific sensor / film size. Normally
the image circle is designed around the width/diagonal of the sensor or film; the
height is smaller, so it will always fit within the image circle the lens projects
onto the sensor / film.

You can see an image of this image circle with two different sensor sizes (and
thus crop factors) on this Wikipedia page: http://en.wikipedia.org/wiki/Crop_factor
__________________

Rob Lohman, visuar@iname.com
DV Info Wrangler & RED Code Chef

Rob Lohman
January 22nd, 2007, 05:34 AM   #6
Link to illustration

Here is a little picture that illustrates the image circle and aspect ratios:

http://www.eyebiz.de/content/special/aspect_ratio.jpg

I hope this helps.
__________________
Keep rolling

Rainer
Rainer Hoffmann
January 22nd, 2007, 07:21 AM   #7
One thing that has occurred to me, although I've never had a chance to look into it: with all of the changes in aspect ratio during the process, does that imply an optical compression somewhere (in addition to any intra/inter-frame compression happening on the digital side of the process)?

That is, if I have a wide format, then narrow it down to fit on film, a chip, etc., then widen it back out for display, that seems to be a compression / decompression process.

In the analog (film) world, I would think that it does. E.g. using lenses, I compress the image in a way that uses fewer grains of film per unit of width than per unit of height (maybe the 'compression' is relative), with the lenses doing the work to compress (narrow) the width and then widen it again (uncompress).

In the digital world, I would think it would be similar when using the DV spec. If we take a native 16x9 camera, its chip(s) will produce a 16x9 image. So far we have not used the lens to change the aspect ratio; the chip(s) captured the image in the same proportions as the lens delivered it. Now I've got to take that 16x9 image and fit it into a 720x480 matrix of pixels. What process is used to do that?

In the analog world, I can see that we'd just use fewer grains for the width than for the height, but there are still many (thousands, millions?) of grains on the film to represent that space. With DV, however, we have only 720 (SD) discrete pixels of width (more with HD, but it's the same process: a digital representation of analog input, with much less resolution than film). Granted, we can change the aspect ratio of how that is displayed (e.g. use more screen width for each pixel), but each pixel can still have only one color associated with it. So what was the process used to resample or compress the pixel width into the DV format, which is 4:3?
Dave Stern
January 22nd, 2007, 08:12 AM   #8
Quote:
Originally Posted by Dave Stern
What was the process used to resample or compress the pixel width into the DV format which is 4:3?
Hmmm, this brings up an interesting thought. Obviously a native 16:9 SD camera like an XL2 has to downsample the horizontal resolution to 720 pixels. I have no idea what algorithm is used, but it's probably not much different from asking Photoshop to do the same thing.

So, some food for thought: what does a better job, electronically downsampling an image from a native 16:9 chip into the 4:3 SD space, or doing it optically with an anamorphic lens and capturing on a native 4:3 chip? I'm sure electronic is always cheaper, but can a good piece of glass actually be better?

-Terence
Terence Murphy
January 22nd, 2007, 08:20 AM   #9
I don't think it necessarily downsamples at all. If the CCDs are native 16:9, the camera can sample 720 pixels per line directly. The DV specification dictates that sampling is done at 720 pixels per line, too (equivalent to D1 sampling).

DV is ignorant of the concept of 4:3 vs 16:9 except for a flag. The image is always 720 x 480 or 720 x 576, neither of which is true 4:3 anyway!
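
A Python one-liner makes that point: with square pixels those rasters aren't 4:3 at all.

print(720 / 480, 720 / 576, 4 / 3)  # 1.5, 1.25, 1.333...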
John Miller
January 22nd, 2007, 08:29 AM   #10
John-Daniel,

See this thread for great info on your subject.

http://www.dvinfo.net/conf/showthread.php?t=83910

Mike
__________________
Chapter one, line one. The BH.
Mike Teutsch
January 22nd, 2007, 09:06 AM   #11
Quote:
Originally Posted by John F Miller
I don't think it necessarily downsamples at all. If the CCDs are native 16:9, it can sample 720 pixels directly.
If we're talking specifically about the XL2, the chip is 960 pixels wide, so something is going on. I think you will find that to be the case with any of the prosumer standard-definition cameras that do good 16:9. The chips in my PDX-10 are 1152 pixels wide, for example.
Boyd Ostroff
January 24th, 2007, 07:56 AM   #12
John: see my post above on pixel aspect ratio; so yes, those resolutions are 4:3 (screen aspect ratio).

As indicated, DV is always 720 x 480 (NTSC) or 720 x 576 (PAL). If the sensor has
more pixels than this, it either needs to crop or resample the image to a lower
resolution. That is the only way (and it is what native 16:9 DV cameras do).
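
What filtering a given camera actually uses isn't published, but purely as an illustration of the resampling idea, here is a minimal Python sketch that shrinks one 960-sample row (like the XL2's chip) down to the 720 pixels DV records, using simple linear interpolation; real cameras will use better filters:

# Illustrative only: resample one row of 960 sensor samples to 720 DV pixels.
def resample_row(row, new_width):
    old_width = len(row)
    out = []
    for i in range(new_width):
        pos = i * (old_width - 1) / (new_width - 1)  # position in the original row
        left = int(pos)
        right = min(left + 1, old_width - 1)
        frac = pos - left
        out.append(row[left] * (1 - frac) + row[right] * frac)
    return out

row_960 = list(range(960))           # stand-in for one row of sensor values
row_720 = resample_row(row_960, 720)
print(len(row_720))                  # 720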

Even an HDV camera might be downsampling (HDV 1080 records at 1440 x 1080, if I'm
not mistaken, which means downsampling if the camera actually has a
1920 x 1080 sensor).

Best quality would probably come from an anamorphic lens, or from a higher-resolution
sensor in a camera that does not downsample (like the RED camera we're building).
__________________

Rob Lohman, visuar@iname.com
DV Info Wrangler & RED Code Chef

Rob Lohman