Perhaps not a built-in support, but the capability of using the camera on the shoulder by having a forward V/F option. In the past, some cameras with a flat base (like the CP16) were extremely good for handheld work.
|
Quote:
The camera is designed to be used with a motorized zoom (once it is released). With the hand grip and zoom rocker in the position they are in, handholding is going to be very awkward. It may not be designed as an "ENG" camera, but the presence of the zoom rocker and motorized zooms suggests that it is aimed beyond just digital cinema. I can see such a camera being used extensively in EFP. From what I have seen with the limited time I spent with the prototype, Sony have worked wonders with the sensor; that must have been expensive. Why skimp on simple ergonomics? If a shoulder mount was too expensive, which I doubt because JVC managed it on far cheaper cameras, the design should have been more modular. I am looking forward to seeing what the NXCAM S35 camera brings in this regard. The Arri Alexa includes a shoulder mount and its ergonomics are widely praised among operators. Quote:
I think the big problem with handheld is not the amount it is used but the way it is used. The whole point is to mimic documentary. I can't stand constant, deliberate zoom adjustments and crazy long lenses shaking so much that I feel sick. |
Of course I agree that SPR was an awesome use of hand-held (the
exception rather than the norm)... that was Janusz Kamiński, after all. But the cameras he used didn't have *built-in* shoulder supports, either! |
But they had forward V/Fs. The built-in shoulder pad is optional; a Panaflex, for example, doesn't have one built in. It's heavy but still pretty good for handheld.
|
Quote:
As for supports, then Brian really says it all. You either need an external viewfinder (mounted much further forward) or accept the camera being well in front of the operator. Hardly ideal if the whole point is to have a smaller camera for use in tight spaces. Yes, no doubt it can (and will) be adapted to get by, but why not just design the ergonomics better in the first place? |
Ergonomics are everything. The point is not only to have the viewfinder in the correct (forward) position for handheld, but also to have it there so that you can 'hug' the camera for all sorts of operated shots from sticks. It is hugely beneficial to have your face near the nodal point of the camera.
It's the same reason why most operators do not use extension eyepieces when operating 35mm except out of necessity, as when they are using a geared head or are in a spot where they cannot reach the viewfinder. Hanging your head off the back of a camera is just silly. Not only for handheld, but just about all the time. Have a look at the Alexa, or any Aaton - these designers obviously care about the camera and take the time to consult with the users. Ben Ruffell DP www.ruff.co.nz/blog |
Quote:
|
I'm sure if Sony used the pixel count as a measure of resolution as Red do, the F3 would be approaching 3K or more. Sony's F35 has 12.4 million pixels to achieve 1920x1080 resolution. Compare that to the 8 MP of Red One used for "4k" or the 13.8 MP that Epic use for the headline figure of 5k. Pixel count does not equal resolution with bayer sensors. I also just noticed that Epic requires a whopping 60 Watts!
I don't know the pixel count for the F3, but as it is a Bayer pattern I expect (and hope) it will be considerably higher than 1920x1080. |
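To make the photosite-versus-output comparison concrete, here is a minimal Python sketch using the figures quoted in this thread; the output frame sizes are assumed round numbers for illustration, not official specifications.

```python
# Rough comparison of photosites vs. delivered output pixels, using the figures
# quoted in this thread (assumed, not official specs).
cameras = {
    "Sony F35 (RGB stripe, HD output)": (12.4e6, 1920 * 1080),
    "Red One (Bayer, '4K' output)":     (8.0e6,  4096 * 2304),   # assumed 4K frame size
    "Red Epic (Bayer, '5K' output)":    (13.8e6, 5120 * 2700),   # assumed 5K frame size
}

for name, (photosites, output_pixels) in cameras.items():
    ratio = photosites / output_pixels
    print(f"{name}: {ratio:.1f} photosites per delivered pixel")
```

The point of the thread stands out in the ratios: the three-chip-style F35 uses several photosites per delivered pixel, while the Bayer cameras deliver roughly one output pixel per photosite or more.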
Quote:
They designed it like a handycam for a reason... If they built it like a Red camera where it's just a brick, it would cannibalize their upper end. DPs might take it too seriously and end up having the F3 on productions where the 9000PL might be used. It was made to look a little consumer for a reason; this is not a mistake or poor judgment. The limitations are there so they don't ruin an entire line of cameras that cost much more. Face it, if the F3 had the exact same specs and insides, but came in a metal housing, with no ugly little viewfinder on the back, and just an option for a proper EVF, this camera would be taken just as seriously as the Alexa. Once an F3 has the 444 option enabled it's really a no-brainer. This is a powerful camera, but it was put into a consumer body so they don't hurt their sales. They might also be coming out with another F4 type camera that could be just what I described above... who knows. |
Interesting take, Giuseppe; I believe you are onto something. Still, the placement of the LCD?!
|
F3 ...
Quote:
|
Sounds like the upcoming RedRock Micro EVF is going to be a popular option if Sony enables simultaneous output of HDMI and SDI.
|
Please give me more details,
Quote:
Arriflex 35 III, Aaton A-minima, Canon 5DM2 and 7D(I feel so cheap) |
The term 4k started in film when a film maker would scan the frames of the film using a single row scanner that was 4,096 pixels wide. Each line of the film was scanned 3 times, once each through a red, green and blue filter, so each line was made up of three 4K scans, a total of just under 12k per line. Then the next line would be scanned in the same manner all the way to the bottom of the frame. For a 35mm 1.33 aspect ratio film frame (4x3) that equates to roughly 4K x 3K. So the end result is that each 35mm film frame is sampled using 3 (RGB) x 4k x 3k, or 36 million samples. That is what 4k originally meant, a 4k x 3k x3 intermediate file.
Putting that into Red One perspective, it has a sensor with 8 million pixels, so the highest possible sample size would be 8 million samples. Red Epic 13.8 million. But it doesn't stop there, because Red (like the F3) use a Bayer sensor where the pixels have to sample the 3 primary colours. As the human eye is most sensitive to resolution in the middle of the colour spectrum, twice as many of these pixels are used for green compared to red and blue. So you have an array made up of blocks of 4 pixels, BG above GR.

Now all video cameras (at least all correctly designed ones) include a low pass filter in the optical path, right in front of the sensor. This is there to prevent moire that would be created by the fixed pattern of the pixels or samples. To work correctly and completely eliminate moire and aliasing you have to reduce the resolution to close to half that of the sample rate. So if you had a 4K sensor the resolution would need to be dropped to around 2K to avoid aliasing altogether. BUT a 4k Bayer sensor is in effect a 2K green sensor combined with a 1K red and 1K blue sensor, so where do you put the low pass cut-off? If you set it to satisfy the green channel you will get strong aliasing in the R and B channels. If you put it so there would be no aliasing in the R and B channels, the image would be very soft indeed. So camera manufacturers will put the low pass cut-off somewhere between the two, leading to trade-offs in resolution and aliasing. This is why with Bayer cameras you often see those little coloured blue and red sparkles around edges in highly saturated parts of the image. It's aliasing in the R and B channels. This problem is governed by the laws of physics and optics and there is very little that the camera manufacturers can do about it.

In the real world this means that a 4k Bayer sensor cannot resolve more than about 1.5k without aliasing issues. Compare this with a 3-chip design with separate RGB sensors. With three 1920x1080 pixel sensors, even halving the resolution with the low pass filter to eliminate any aliasing in all the channels, you should still get at least 1k. That's one reason why Bayer sensors, despite being around since the 70's and being cheaper to manufacture than 3-chip designs (with their own issues created by big thick prisms), have struggled to make serious inroads into professional equipment. This is starting to change now as it becomes cheaper to make high quality, high pixel count sensors, allowing you to add ever more pixels to get higher resolution, like the F35 with its (non-Bayer) 12.4 million pixels.

This is a simplified look at what's going on with these sensors, but it highlights the fact that 4k does not mean 4k; in fact it doesn't even mean 2k, the laws of physics prevent that.

After all that, those that I have not lost yet are probably thinking: well hang on a minute, what about that film scan, why doesn't that alias as there is no low pass filter there? Well, two things are going on. One is that the dynamic structure of all those particles used to create a film image, which is different from frame to frame, reduces the fixed pattern effects of the sampling, which causes the aliasing to be totally different from frame to frame so it is far less noticeable. The other is that those particles are of a finite size, so the film itself acts as the low pass filter, because its resolution is typically lower than that of the 4k scanner.
Until someone actually does some resolution tests or Sony release the data, we are a bit in the dark as to the pixel count. IF it resolves around 1000 TVL, which is about the limit for a 1920x1080 camcorder, then it should have a 3.5k sensor or thereabouts. |
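For anyone who wants to check the film-scan arithmetic above, here is a small sketch using the round figures from the post (a 4,096-pixel scan line and three colour passes); the exact frame height of 3,072 lines is an assumed round number for a 1.33:1 frame.

```python
# Worked arithmetic for the "4K" film scan described above, using the round
# figures from the post; the 3,072-line frame height is an assumed round number.
scan_width = 4096            # single-row scanner, pixels per pass
passes = 3                   # one pass each through red, green and blue filters

samples_per_line = scan_width * passes
print(samples_per_line)      # 12288 -- "just under 12k per line"

frame_height = 3072          # roughly 3K lines for a 1.33:1 (4x3) frame
total_samples = scan_width * frame_height * passes
print(f"{total_samples / 1e6:.1f} million samples per frame")  # ~37.7M, i.e. roughly the 36M quoted
```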
Thanks Alister. That is the most cogent write up on the Bayer process I have read to date.
|
I have reviewed what I wrote and realised that some of it may appear incorrect, as I mixed up pixel resolution and TVL/ph resolution in the same sentence in a few areas, so I have re-written it and it should make more sense.
First, let's clarify a couple of terms. Resolution can be expressed two ways. It can be expressed as pixel resolution, i.e. how many individual pixels can I see, or as line pairs or TVL/ph, i.e. how many individual lines can I see. If you point a camera at a resolution chart, what you are talking about is line pairs, or at what point can I no longer discern one black line from the next. For the black lines to be separated there must be white in between, so TVL/ph is a combination of BOTH the black and white line and so will always be a lot less than the "pixel" resolution. With video cameras TVL/ph is the normally quoted term, while pixel resolution is often quoted for film replacement cameras. I believe the TVL/ph term to be preferable as it is a true measure of the visible resolution of the camera.

The term 4k started in film with the use of 4k digital intermediate files for post production and compositing. The exposed film is scanned using a single-row scanner that is 4,096 pixels wide. Each line of the film is scanned 3 times, once each through a red, green and blue filter, so each line is made up of three 4K pixel scans, a total of just under 12k per line. Then the next line is scanned in the same manner all the way to the bottom of the frame. For a 35mm 1.33 aspect ratio film frame (4x3) that equates to roughly 4K x 3K. So the end result is that each 35mm film frame is sampled using 3 (RGB) x 4k x 3k, or 36 million samples. That is what 4k originally meant, a 4k x 3k x 3 intermediate file.

Putting that into Red One perspective, it has a sensor with 8 million pixels, so the highest possible sample size would be 8 million samples. Red Epic 13.8 million. But it doesn't stop there, because Red (like the F3) use a Bayer sensor where the pixels have to sample the 3 primary colours. As the human eye is most sensitive to resolution in the middle of the colour spectrum, twice as many of these pixels are used for green compared to red and blue. So you have an array made up of blocks of 4 pixels, BG above GR.

Now all video cameras (at least all correctly designed ones) include a low pass filter in the optical path, right in front of the sensor. This is there to prevent moire that would be created by the fixed pattern of the pixels or samples. To work correctly and completely eliminate moire and aliasing you have to reduce the pixel resolution of the image falling on the sensor to less than that of the pixel sample rate. You don't want fine details that the sensor cannot resolve falling onto the sensor, because the missing picture information will create strange patterns called moire and aliasing. It is impossible to produce an optical low pass filter that has an instant cut-off point, and we don't want any picture detail that cannot be resolved falling on the sensor, so the filter cut-off must start below the sensor resolution, increasing to a total cut-off at the pixel resolution.

Next we have to consider that a 4k Bayer sensor is in effect a 2K green sensor combined with a 1K red and 1K blue sensor, so where do you put the low pass cut-off? As information from the four pixels in the Bayer pattern is interpolated left/right/up/down, there is arguably some room to have the low pass cut-off above the 2k of the green channel, but this can lead to problems when shooting objects that contain lots of primary colours. If you set the low pass filter to satisfy the green channel you will get strong aliasing in the R and B channels.
If you put it so there would be no aliasing in the R and B channels, the image would be very soft indeed. So camera manufacturers will put the low pass cut-off somewhere between a bit above green and a bit below, leading to trade-offs in resolution and aliasing. This is why with Bayer cameras you often see those little coloured blue and red sparkles around edges in highly saturated parts of the image. It's aliasing in the R and B channels. This problem is governed by the laws of physics and optics and there is very little that the camera manufacturers can do about it. In the real world this means that a 4k Bayer sensor cannot resolve more than about 1.5k TVL/ph (3k pixels ish) without serious aliasing issues.

Compare this with a 3-chip design with separate RGB sensors. With three 1920x1080 pixel sensors, even with a sharp cut-off low pass filter to eliminate any aliasing in all the channels, you should still get at least 1k TVL/ph. That's one reason why Bayer sensors, despite being around since the 70's and being cheaper to manufacture than 3-chip designs (with their own issues created by big thick prisms), have struggled to make serious inroads into professional equipment. This is starting to change now as it becomes cheaper to make high quality, high pixel count sensors, allowing you to add more pixels to get higher resolution, like the F35 with its (non-Bayer) 12.4 million pixels.

This is a simplified look at what's going on with these sensors, but it highlights the fact that 4k does not mean 4k; in fact it doesn't even mean 2k TVL/ph, the laws of physics prevent that. In reality even the very best 4k-pixel Bayer sensor should NOT be resolving more than 3k pixels or about 1.5k TVL/ph. If it is, it will have serious aliasing issues.

After all that, those that I have not lost yet are probably thinking: well hang on a minute, what about that film scan, why doesn't that alias as there is no low pass filter there? Well, two things are going on. One is that the dynamic structure of all those particles used to create a film image, which is different from frame to frame, reduces the fixed pattern effects of the sampling, which causes the aliasing to be totally different from frame to frame so it is far less noticeable. The other is that those particles are of a finite size, so the film itself acts as the low pass filter, because its resolution is typically lower than that of the 4k scanner.

Until someone actually does some resolution tests or Sony release the data, we are a bit in the dark as to the pixel count. IF it resolves around 1000 TVL/ph, which is about the limit for a 1920x1080 camcorder, then it should have a 3k sensor or thereabouts. |
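A minimal sketch of the per-channel sample counts behind this argument, assuming a hypothetical "4K" Bayer sensor 4096 photosites wide; the three-quarters figure is only the rule of thumb from the post, not a measured value.

```python
# Per-channel sample counts for a hypothetical "4K" Bayer sensor, 4096
# photosites across (assumed width, for illustration only).
width = 4096

green_samples = width // 2   # half the photosites are green  -> ~2K green
red_samples = width // 4     # a quarter are red              -> ~1K red
blue_samples = width // 4    # a quarter are blue             -> ~1K blue
print(green_samples, red_samples, blue_samples)        # 2048 1024 1024

# The post's rule of thumb: after demosaicing and a sensibly placed OLPF,
# usable luma detail ends up around three quarters of the photosite count.
print(f"~{width * 0.75:.0f} pixels of usable luma resolution")  # ~3072
```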
There's only one extra thing that I think you may need to add to that, Alister, and it's the definition of TVL/ph - which I understand to be "TV line pairs/horizontal".
If we are talking about 1920x1080, the "pixel resolution" you talk about will be (theoretically) just that - 1920x1080. It follows that you can expect the equivalent figures expressed in line pairs to be 960 horizontally and 540 vertically. The important thing to realise is that talking about 960 lp horizontally and 540 lp vertically are both referring to lines the same distance apart on a chart - albeit at 90 degrees to each other. Hence the introduction of lph - line pairs referenced to the horizontal. What this means is that resolving a pair of lines a given distance apart will always be given a fixed value, regardless of whether they are vertical or horizontal lines - or even diagonal. So, on the vertical axis, a resolution of 540 lp will be exactly the same thing as 960 lph.

This all becomes especially important when charts are used with circular resolution bands, or zone plates. It means that a ring can be given a unique lph figure which is equally valid at any point around the ring. It follows that for a 1920x1080 recording system, the maximum resolution that can be got is 960 lph. If anyone claims to see more than that, they must be seeing aliasing. |
Dave and Alister ...
Thanks much for this clarification.
|
Thanks, David, for adding that. One issue is that TVL/ph and lph can be a little higher than 1/2 the horizontal pixels because it is measured as the extinction point of the pair of pixels, i.e. the point where you can no longer see one black pixel separated from the next on the chart. This implies that the white pixels can no longer be seen (or measured), so you're actually looking at less than 2 pixels. When you measure using a scope you are looking for the point where both the white and black lines become 50% grey. That's why it is not impossible to see a measured lph resolution slightly higher than half of the pixel resolution.
|
Quote:
1k l/ph on a 3-chip + prism system is, for the reasons you explain above about optical low pass filters, only achievable with quite visible aliasing. 1k l/ph is 1.9k horizontal resolution, very close to the actual pixel resolution of the sensors, with nowhere near enough "room" for any reasonable amount of optical filtering to work in. Good optical filtering, as you point out, is a necessary component for all types of cameras, and will necessarily reduce resolution when implemented properly. This is tough when you only have as many samples as is necessary to produce your HD image, as you're now in a real battle between soft and aliasy.

Optical low pass filters don't come in sharp-cut-off varieties. I wish they did, as they'd make life oh-so-much easier. The lack of control over the roll-off independent of the strength of the low pass is due to the lack of negative photons. Such darkons would make lighting so much easier too :-)

The F35 is RGB stripe, with 12 million pixels used to make the image in a 1920x1080x3x2 array. Although using significantly more pixels in its colour filter array, it manages to produce strong vertical luma aliasing and strong horizontal chroma moire rainbows due to the RGB stripe pattern coupled with a low strength OLPF.

When talking about camera systems, it's vitally important to properly measure with a good high resolution circular zone plate to show resolution, aliasing and MTF performance of the system. All these factors go hand in hand and are readily visible and comparable with a single image shot on these charts. They are an invaluable tool in camera analysis. |
Zone plates don't tell the full story with Bayer sensors as they are looking at a mix of the R, G and B channels. They do not, as a result, show up all the issues that occur in areas of highly saturated colour. This is where Bayer sensors tend to fall behind 3-chip designs, as the resolution is not equal in the R, G and B channels. No matter how you look at it, the resolution in the R and B channels is half that of the G channel, and that presents the potential for serious moire issues in saturated colours. With "4k" cameras this is a little nonsensical, as many people are drawn to 4k for shoots that involve compositing and green screen work, where uniform resolution in all the colour channels is advantageous.
Nyquist's theorem demands that the highest frequency captured must be no more than half the sampling frequency to eliminate moire, so for a 4k Bayer sensor to have no moire in the green channel the frequency cut-off should be 2k. But designers cheat, as they use the assumption that there will never be a high frequency pure green image falling on the sensor, so they allow the cut-off of the LPF to sit above 2k, relying on cross talk between colours. For most situations they get away with it, but some simple scenes like vivid green foliage can cause havoc.

An OLPF doesn't have to stop light or reduce it to zero, so negative photons or darkons are not required. An OLPF simply prevents the frequency rising past the cut-off. The light still passes through, almost unattenuated, just frequency limited or, for want of a better word, "blurred". In effect a pair of black and white lines above the OLPF cut-off would still be seen through the filter, only they would be seen as mid grey. A good birefringent OLPF can have a fairly sharp cut-off. |
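As a numerical illustration of the frequency folding being described, here is a small sketch; the sample rate and detail frequency are arbitrary figures chosen only to show detail above the Nyquist limit reappearing at a lower frequency.

```python
import numpy as np

fs = 4000.0        # samples across the picture (think "4K of photosites")
f_detail = 2600.0  # fine detail above the Nyquist limit of fs / 2 = 2000

n = np.arange(64)
sampled = np.sin(2 * np.pi * f_detail * n / fs)

# The identical samples are produced by a sine at the folded frequency
# fs - f_detail = 1400: the unresolvable detail reappears as coarser detail.
folded = np.sin(2 * np.pi * (fs - f_detail) * n / fs)
print(np.allclose(sampled, -folded))   # True -- the 2600 detail aliases to 1400
```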
Alister, that's why we use multi-colour zone plates of primary R, G, B and black-and-white. They show up all the issues that we're talking about here.
To say "the resolution in the R and B channels is half that of the G channel" is an over-simplification because if the colour has any change in brightness then it will be pulling that detail from the green pixels on the sensor. Even in the worst possible case, you will still have more measured resolution and lower aliasing in R, G and B than a 1920x1080 3 chip camera system. There is potential for moire in all camera systems - but control over chroma moire on a bayer pattern sensor is not hard with some good algorithms and OLPF design, and the results are fine for extensive compositing use. Pulling keys is not something we hear VFX houses complaining about. With sampling theory, to avoid aliasing, you must have at least twice as many samples as the frequency you wish to sample. In other words, you must have at least twice as many samples as line pairs you wish to sample, which means you must have at least as many samples as lines you wish to sample. In practise, moire in our camera systems is utterly negligible and much lower than that with 3chip HD systems and RGB stripe systems. The requirements of sampling theory to avoid aliasing are very much harder to achieve in 3 chip systems where say for instance you have three 1920x1080 sensors on your prism. For an OLPF to achieve much reduction in MTF at 1920 you will necessarily reduce MTF at lower spatial frequencies and you will see a blurry image. In practise, a weaker OLPF is used which allows through a stronger MTF at 1920, producing a sharper image and allowing stronger aliasing too. The problem being that you cannot use a sensor of the final resolution you wish to capture, have an image that measures that same resolution and not have aliasing. When you put "4K" in quotes, you should also be putting "HD" in quotes as when such cameras are measured they either produce a resolution less than 1920x1080, or they have strong aliasing issues, or in the case of cameras that line skip, they have both low measured resolution and strong aliasing issues. |
Alister, you're missing the point that, practically speaking, the RED One has lower levels of aliasing and moire than HD cameras. I know. I've measured them. Yes, indeed there are theoretical issues with Bayer pattern systems (as there are theoretical issues with all camera systems), yet in practical measured circumstances on real world measured cameras they are negligible.
Colorimetry issues with Bayer pattern sensors are easily handled by appropriate colour correction matrices, and measured colorimetry errors are as low or lower than the 3-chip cameras I have measured. Cross colour leakage leads to advantages for colorimetry under discontinuous light sources, so it can be rather useful given the amount of discontinuous sources in use.

Is there a majority of sub-2k camcorders that have such great measured resolution and aliasing results? Looking through Alan Roberts' published zone plates of such cameras, I see significantly more aliasing than I would class as "negligible". Sure, OLPFs are compromises on Bayer sensors, just as they're compromises on 3-chip systems also, where you still have to balance aliasing / resolution. And because you're trying to achieve 1920 out of a sensor with 1920 pixels, this will lead to more aliasing as a much weaker OLPF is generally used. The theoretical issue with setting an OLPF to avoid chroma moire on a Bayer sensor is just that - and with a good demosaic algorithm the visibility of chroma moire is so reduced as to be a non-issue. Theoretical camera design is very different to practical camera design.

The issue with OLPF design for a sensor in a 3-chip design is that generally the sensor will have just as many pixels as the measured resolution that is desired - as in 1920 pixels across, and the hope is to be able to measure 1920 lines across an image. It's pretty obvious from this that if an OLPF is strong enough to reduce the MTF at 1920 to zero, you will not be able to measure 1920 resolution and the image will appear soft. Similarly, if you relax the OLPF to allow through a good MTF at 1920, you will allow aliasing to occur. This is the crux of the issue with optical filters and sensors. It is a battle you face with every sensor design type.

Now, for a 3-chip system, the answer would be to oversample. Have three sensors of 2880 x 1620 (oversample by 1.5), set the OLPF for negligible MTF at 2880, then use a good downsample filter to achieve a sharp image at 1920x1080 with negligible aliasing. The extra costs are higher resolution sensors, potentially lower dynamic range and a lot of extra horsepower for the good downsampling filter. However, the results would be visually excellent in the areas we're discussing - measured resolution and aliasing. Back in the standard def days, there were three chip systems that over-sampled and they did have superb results.

The main comment that drew me to post in this thread is: "In the real world this means that a 4k bayer sensor cannot resolve more than about 1.5k TVL/ph (3k pixels ish) without serious aliasing issues", which is not the case. Practical real world resolution in such a system is around 3.2k, which is around 1.8k l/ph. That a 3-chip HD camera can measure a resolution of 1k l/ph is often the case, but because of the above issues with optical low pass filters you will have stronger aliasing at such a resolution. Max resolution in l/ph of an HD camera is 1080 l/ph. There is no such thing as an optical filter that is strong enough to reduce MTF at 1080 to near zero while allowing through good MTF at 1000.

What it comes down to is that if you have x samples across on your sensor and hope to measure x lines of resolution you will get strong aliasing. To get negligible aliasing you probably want to aim to measure around 80% of x, or have at least 1.25x (more is better, but see above for drawbacks) the number of samples of the resolution you wish to measure.
In both cases you're building in enough of a buffer to allow for an OLPF to work in. OLPFs by their nature are slow filters. They don't have sharp cut-offs. I wish they did, as it would make camera design a fair bit easier, but that's just the way the physics of them is. Graeme |
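A quick sketch of the two margins Graeme describes, using his example figures (1.5x oversampling giving a 2880 x 1620 sensor, and the roughly 80% rule of thumb); both numbers come from the post above, the code just works them through.

```python
# Working through the two margins described above (figures from the post).
target_w, target_h = 1920, 1080

# Option 1: oversample the sensor by 1.5x, then downsample after the OLPF.
oversampled = (int(target_w * 1.5), int(target_h * 1.5))
print(oversampled)                      # (2880, 1620)

# Option 2: with a sensor of exactly the target pixel count, only aim to
# measure about 80% of it, leaving room for the OLPF to roll off.
print(int(target_w * 0.8), "measurable 'lines' across, roughly, before strong aliasing")
```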
Sadly the zone plates I've seen from Red tests have been poorly executed, very often forgetting that to tell the true story you have to take the plate frequency out to at least 2x the camera resolution. The ones I've seen say "hey look, no aliasing", but you can still see all the rings clearly, so the frequency is not high enough to truly show the aliasing performance; you must go out past the extinction point. In addition, the Red plates that I've seen do exhibit colour moire artefacts from well below the extinction point. Perhaps, Graeme, you have some links to correctly done tests?
|
Like these examples: Red and Moiré? - REDUSER.net. They're only showing a small section of the chart that goes out to 2k. The full charts (that go out to 4k) are often shown to visitors to the RED studios in full resolution so they can see the full effect.
|
Yes, I've seen those. They don't go out far enough on the Red to show whether it's aliasing or not in the luma, but they do show noticeable colour moire, typical of a Bayer sensor, starting at about 1k. If you download the image and boost the saturation a bit the colour moire becomes clearly visible.
Certainly the F35 produces strong aliasing in that example. |
Not seeing much in the way of any chroma moire here.
Have you shot the F3 on a zone plate yet? I'd be keen to see the results if you do, especially in comparison to its big brother, the F35, which is pretty poor on the aliasing front as noted. |
TVL/ph is not "line pairs horizontal." A TVL is either a dark line or a light line, not a pair. TVL/ph is "TV lines per picture height." For TV it was expressed as the number of lines, either vertical or horizontal, (light or dark) that could be resolved inside a circle with a diameter equal to the vertical dimension of the frame. 1080 is Nyquist, not 960. TVL resolution can be expressed at MTF50, or any other level between 0 and 100%. The resolution number varies accordingly. For photographic lenses and film, resolution can be expressed as lp/mm, line pairs per millimeter. I'm sorry for interjecting here, and if you feel I'm wrong feel free to correct me. I'm not sure how it affects the discussion for 4k/2k/1k bayer filters, but the term TVL/ph has a defined meaning. The use of TVL/ph (tv lines/pic height) is helpful because it removes the aspect ratio of the frame size from being a factor in the discussion of horizontal resolution versus vertical resolution. If you had perfect theoretical resolution for a 1920x1080 sensor, you would have 1080 TVL in both axis, but you would measure 1920 lines across the frame width. |
Thanks for jumping in and putting us straight.
I don't think it changes the argument, but it's good to have the terms corrected. Of course, while TVL/ph is, as you say, individual lines, to be able to see one line from the next you do have to be able to see or measure the complementary line. Not sure how you would express MTF below 50%, as once you get down to 50% grey any further MTF reduction would just be more of the same 50% grey, unless I've missed something? If you boost the saturation a little of the zone plates on the Red site, the colour moire is plainly visible. I have not yet seen a zone plate from an F3; it will be interesting to see. |
If you take your zone plate and plot a scanline you'll see a series of sine waves increasing in frequency and decreasing in amplitude from the centre out. Although printed with equal amplitude, the imaging system will see them with reducing amplitude - this is the MTF of the system.
If we call the largest peak to peak amplitude on our plot 100%, we can travel down the sines of increasing frequency until we get a peak to peak amplitude of 50% (of our largest amplitude), and now we've found our MTF50 point. At this frequency we're still seeing a strong MTF, and a good figure for frequency here will appear visually as a sharp image. If we keep on going until our peak to peak is 0%, we now have mid-grey and no detail at all, although for limiting resolution I'd probably call it below ~10% as it gets hard to tell below that. Graeme |
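For anyone who wants to experiment, here is a minimal sketch (NumPy only, not RED's actual measurement tooling) of reading MTF off a zone-plate-style scanline: a one-dimensional chirp of rising frequency stands in for the chart, a simple box blur stands in for lens plus OLPF, and the local peak-to-peak contrast is tracked until it falls to 50%.

```python
import numpy as np

n = 4096
x = np.linspace(0.0, 1.0, n)
chirp = 0.5 + 0.5 * np.sin(2 * np.pi * 300 * x ** 2)   # frequency rises left to right

# Stand-in for lens + OLPF softening: a simple 9-tap box blur.
seen = np.convolve(chirp, np.ones(9) / 9, mode="same")

# Local peak-to-peak contrast measured in short windows along the scanline.
win = 64
contrast = np.array([np.ptp(seen[i:i + win]) for i in range(0, n - win, win)])
mtf = contrast / contrast.max()

start = int(np.argmax(mtf))             # skip the near-DC windows at the left edge
mtf50_window = start + int(np.argmax(mtf[start:] < 0.5))
print(f"MTF50 reached around window {mtf50_window} of {len(mtf)}")
```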
I know, Graeme; I was getting muddled by Tom's references to both MTF50 and 50% grey in the same sentence. I was referring to the fact that once you get down to 50% grey (the extinction point) the frequency response becomes zero so can't be measured, or at least there is nothing to measure. So MTF50 is before you get to 50% grey. Getting muddled, as zero contrast, which is zero MTF, normally means 50% grey, assuming the lines or rings were originally at 100% and zero.
Doh.. head hurts. You have to be so careful how you read and how you express all this stuff. |
It's tricky when you're dealing with 50% MTFs and mid grey (50% grey), but mid grey is when you get to 0% MTF :-) It's so much easier to show visually rather than in textual comments.
One nice thing you can do with your circular sine zone plate is plot MTF, and also you can look at the area under the MTF curve which strongly correlates with what we perceive as overall image sharpness. Graeme |
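Following on from that, the area under the normalised MTF curve can be reduced to a single sharpness number as Graeme suggests; in the sketch below a hypothetical exponential roll-off stands in for a measured MTF curve, purely for illustration.

```python
import numpy as np

# Sharpness proxy: area under the normalised MTF curve, as suggested above.
# "mtf" here is a stand-in roll-off; in practice it would come from a zone
# plate measurement like the scanline sketch earlier in the thread.
freq = np.linspace(0.0, 1.0, 64)         # normalised spatial frequency
mtf = np.exp(-3.0 * freq)                # hypothetical measured MTF roll-off

area = np.mean(mtf)                      # proportional to the area for evenly spaced points
print(f"relative area under MTF curve: {area:.2f} (higher reads as sharper)")
```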
Gray (half way between white and black) would be extinction, 0% MTF.
I'm sorry, I didn't think that through. Thanks for the correction. |
My own references were to lph, which I understood to be "line pairs/horizontal", and shouldn't really be confused with terminology along the lines of TVL etc.
In other words, for a 1920x1080 image, the reference is to 1920 LINES res horizontally ("either a dark line or a light line, not a pair") OR 960 LINE PAIRS. (Being able to resolve a white/black pair of lines.) My understanding is that it's relevant as this is what most res charts are nowadays marked in? With the outer ring of an HD chart typically 1000 lph? If the scale was TVL/ph (tv lines/pic height), it would be given the value 2000. |
This is a little tough to write from my Blackberry, but here goes:
Lph still refers to lines per pic height, not line pairs horizontal. LW/PH, LPH, TVL are all really expressions of the same quantity, lines not pairs. 400 TVL would mean that if you had a 4x3 TV set, you would measure 200 black and 200 white lines from east to west over a length equal to the north-south dimension. LPH is saying the same thing. ISO 12233 charts for digital still cameras use LW/PH for the scale. Analog TV EIA-1956 charts use TVL, but they are the same as LW/PH or LPH. TVL is always stated as horizontal resolution, dating back to interlaced broadcast, where the vertical rez can be no better than the number of horizontal scan lines. For progressive images, it's valid to state the vertical resolution the same way, TVL or LPH. For film and photography, LP/mm (line pairs per millimeter) is sometimes used.

You can sometimes find a spec for Sony HD cams where they state the resolution as 1000 TV Lines. What that's saying is you should be able to count 500 black and 500 white, vertically oriented lines, over a horizontal distance equal to the vertical dimension. That would also equate to 1778 lines if you counted them across the full width of the panel, and I suppose 889 line pairs to your way of counting. |
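A small conversion sketch using Tom's definitions and the numbers above (1000 TVL on a 16:9 camera with square pixels); the helper name is just for illustration.

```python
# Convert a TVL/ph figure to lines across the full frame width, then to line
# pairs, using the definitions Tom gives above (assumes square pixels, 16:9).
def tvl_to_full_width_lines(tvl, aspect=16 / 9):
    return tvl * aspect

tvl = 1000                                  # "1000 TV lines" spec for an HD camera
lines_full_width = tvl_to_full_width_lines(tvl)
print(round(lines_full_width))              # ~1778 lines across the panel
print(round(lines_full_width / 2))          # ~889 line pairs, counted David's way
```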
Quote:
Where are you getting that 8 Megapixel number from? Perhaps the old sensor? The latest Red One (MX) is 13.8 megapixels, FWIU. CAMERAS / RED ONE |
Yes, that was for the old sensor. If I had meant MX I would have said MX. It still doesn't change the fact that Red like to boast about pixel count as resolution, which it is not. Yes, the MX should achieve higher resolution than the old sensor, but it doesn't make it a "5k" camera in the true sense of the meaning. Working on the Red principle, Sony, Panasonic etc. could claim a 1080 camcorder to be "2k", near as damn it, but they don't. All the headline talk of megapixels does is create confusion, as you have to factor in many other things, including sensor type, pixel offsets etc.
|
Alister,
Come on, this is really not being fair to Red. First off, the Red One currently comes with the MX sensor. So you've been talking about a sensor that the camera does not even come with anymore. And owners with the old sensor can upgrade to the new one; MANY have. But more than that, anyone w/ $25K to spend on a camera should pretty quickly be able to understand that Red is quoting the horizontal pixel count of the image area. They claim the actual measured luma resolution is about 3.2K. And any resolution chart you can pin up is going to show that the Red has significantly more resolution than any other digital camera. Please show an example if I'm wrong. Where I do find Red misleading is the 4:4:4 claim. It's interpolated 4:4:4. You can claim "4K" or you can claim 4:4:4 at a lower resolution, but Red really can't claim both. If chroma resolution is less than luma resolution, then it's not 4:4:4. |
Let's be fair please. Any discussion of RED One should refer to its current shipping configuration which is in fact the MX sensor.
|