Sony NEX-VG10 AVCHD E-Mount Lens Camcorder - Page 6 at DVinfo.net

Sony NEX-VG10 / VG20 / VG30 / VG900
Interchangeable lens AVCHD camcorders using E-Mount lenses.

Old July 16th, 2010, 03:45 AM   #76
Inner Circle
 
Join Date: Jan 2006
Posts: 2,699
Daniel has hit the nail on the head as regards sensitivity. With the stock lens, you are likely to get performance not too different to a conventional 1/3" camera - similar low light capability, similar depth of field, similar zoom range etc. That's because making an 11x zoom reasonably small and economical is not possible except at smaller apertures.

The big difference here is that you can take that lens off and replace it with a prime or small range zoom, which it should be possible to make with a fairly large max aperture. And that will give the possibility of cinematic depth of field and very high sensitivity.

As far as the progressive/interlace matter goes, I look at it from the other side to most on this board. This is a consumer camera (albeit an expensive one) and I would have thought 25p/30p would not be so good for this market - I can see a lot of complaints about "my video's jerky!" It does look as if it is what has been described as "25p in a 1080/i25 wrapper" - which is more properly described as 1080psf/25. Again, that means it is exactly the same as films (or 25p video) shown on broadcast TV, and hence compatible with the domestic equipment it's likely to be shown on. It is also possible to seamlessly and losslessly reconstruct the original 25p from it.

The real question people should then be asking is why ONLY 25p (even if as psf)? Why not 1080psf/25 AND 1080i/25? (Incidentally, "1080/50i" is old terminology - "50i" should now be replaced by "i25".) It's only a theory, but MAYBE, just maybe, it could be because they can read out the entire chip prior to downconversion at 25Hz, but not at 50Hz? Doing that, rather than the arrangement used in most current DSLRs, would be a huge step forward since it should take away many of the aliasing problems and give far better sensitivity. Just a thought.....
David Heath is offline  
Old July 16th, 2010, 04:07 AM   #77
Trustee
 
Join Date: Dec 2002
Location: Gwaelod-y-garth, Cardiff, CYMRU/WALES
Posts: 1,215
David,
Surely the depth of field is more a factor of the chip size, rather than the lens itself (OK, there's a perception of less DoF with a wide angle compared to a telephoto...) but I can't see why there will be a difference between the stock lens and others...
Robin Davies-Rollinson is offline  
Old July 16th, 2010, 04:31 AM   #78
Inner Circle
 
Join Date: Jan 2006
Posts: 2,699
Robin - it's both, chip size and lens. Yes, dof is a factor of chip size - if you keep the aperture and angle of view constant. If you increase the dimensions of the chip by 4x (the area by 16x), you will get exactly the same dof if you decrease the relative aperture by 4 stops at the same time. (Assuming a focal length to give the same angle of view.)

As far as sensitivity goes, then obviously f8 will be a disadvantage compared to f2 - but (all else equal) a chip with 16x the area should exactly compensate. This is exactly the point Daniel Browning is making a few posts earlier.

Practically, the designers seem to be allowing the lens to ramp (have a much lower max aperture when zoomed in). All I've said above assumes a small aperture throughout the range, so it should show sensitivity improvements over a comparable 1/3" camera at the wide end of the zoom, if not at the tight end.
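
Just to put numbers on that compensation, here's a back-of-envelope sketch (Python, with purely illustrative figures - nothing to do with any actual chip or lens spec):

```python
# Illumination on the chip falls as 1/N^2 (N = f-number), so the total light
# collected scales as chip_area / N^2. A chip with 16x the area, stopped down
# 4 stops (f/2 -> f/8), gathers the same total light as the small chip wide open.
def total_light(chip_area, f_number):
    return chip_area / f_number ** 2   # arbitrary units

print(total_light(chip_area=1.0, f_number=2.0))    # small chip at f/2   -> 0.25
print(total_light(chip_area=16.0, f_number=8.0))   # 16x the area at f/8 -> 0.25
```

Same total light gathered, and (with the focal length scaled to keep the angle of view) the same depth of field - which is the equivalence Daniel and I are describing.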
David Heath is offline  
Old July 16th, 2010, 05:25 AM   #79
Trustee
 
Join Date: Oct 2008
Location: Byron Bay, Australia
Posts: 1,155
Quote:
Originally Posted by Robin Davies-Rollinson View Post
David,
Surely the depth of field is more a factor of the chip size, rather than the lens itself (OK, there's a perception of less DoF with a wide angle compared to a telephoto...) but I can't see why there will be a difference between the stock lens and others...
Different lenses, and wider apertures, have a huge effect on the DoF. You can check this with a simple test by flicking your camcorder into AV/aperture priority mode, zooming in and focusing on an object, then stepping through the aperture range. Even with small-chipped cameras you will notice a change in the DoF. Do it on a full-frame camera with an f/1.4 lens and you'll notice a huge variation as the image goes from knife-sharp all over to a very thin focal plane. Even going from f/3.5 to f/1.8 you will notice a huge difference on DSLRs.

The focal length is a part of it too... a lens at f/2.8 & 200mm will have a shallower DoF than one at f/2.8 & 28mm. But the biggest factors are the sensor size & maximum aperture of the lens used.

Regarding the sensitivity, one other note is that this camera will likely have a much cleaner image when gain is added than camcorders with smaller chips. With the DSLRs it is not uncommon to shoot at or upwards of ISO 800 and get a usable image. You can effectively use ISO as one of your variables, so if you want to keep the aperture closed down a bit, you can bump the ISO upwards to compensate. With most camcorders you would only use gain as a last resort, but with this thing you will be able to play around with it a lot more freely.
John Wiley is offline  
Old July 16th, 2010, 07:47 AM   #80
New Boot
 
Join Date: Jun 2008
Location: Charlestown, RI
Posts: 14
I just hope that Sony has added a visual audio level meter in the viewfinder.
This is something that is seriously lacking in my SR11 right now.
Rick DeBari is offline  
Old July 16th, 2010, 08:06 AM   #81
Major Player
 
Join Date: Mar 2007
Location: Sussex, UK
Posts: 317
When I download the video file from Vimeo, or the sample test shot of the NEX-5 from DPReview, I can't see any interlacing at 100%.

Footage from the NEX-5 reports back as “Video Tracks:
H.264, 1920 × 1080, 25 fps, 36.20 Mbps”

As the NEX-VG10 uses the same chip, what gives?

I could understand Sony’s film on Vimeo being de-interlaced but not the raw sample from the NEX-5 on dpreview.

Am I missing something here?

Reference:

– 1.13GB
Sony NEX-3 & NEX-5 Review: 12. Photographic tests: Digital Photography Review -92MB

Cheers, James
James Miller is offline  
Old July 16th, 2010, 09:28 AM   #82
Major Player
 
Join Date: Jan 2007
Location: Portland, OR
Posts: 949
Quote:
Originally Posted by Brian Drysdale View Post
I suppose calling it a higher ISO, rather than gain, is less confusing to stills photographers and it looks more impressive for marketing.
That's probably it.

Quote:
Originally Posted by Brian Drysdale View Post
Perhaps gain could be considered as an amplification of the sensor signal before recording, whereas increasing the ISO could be something that is applied in post to the recorded RAW, as per the RED.
That would be one possibility. The bigger problem in my mind is the sorry state of the implementation of gain in almost all raw cameras, which use one or more of:
  • Idiot gain
  • Numpty analog gain
  • Useful analog gain
  • Metadata gain

Idiot gain is digital amplification applied to the raw file in-camera. It causes clipped highlights, larger files, increased quantization error (for non-integer gain factors), and slightly increased post-processing requirements. There is no excuse for ever using it in a raw camera, yet most manufacturers do it.

Numpty analog gain is the kind that does not reduce noise (as a ratio to signal -- not in absolute terms). In order to have any use whatsoever, the analog gain must reduce either quantization error, late-stage read noise (e.g. ADC self-noise), or some other noise source. If there is no reduction, then it would be the same as using idiot gain and therefore harmful to the image.

Useful analog gain, in contrast, does reduce the noise ratio. This does not always make it the right choice, since it still clips one stop of highlights for every one-stop increase in gain, but at least it has a benefit. Ideally, all of the noise sources (e.g. self-noise of the ADC) would be reduced to a level where they do not contribute to the total read noise at all. Then there would be no reason to ever use analog gain (with its attendant loss in highlight headroom). But in most cameras, that is not achieved (probably for a good engineering reason). The next best thing is to offer the ability to eliminate the contribution of late-stage read noise by feeding the late-stage electronics with a stronger signal.

For example, here are two images with the exact same exposure and brightness, one that has linear digital gain (applied in post) and the other that has useful analog gain:

[the two comparison images are not reproduced here]

In that particular case (5D2 ISO 1600 vs 100), there is a very nice reduction in noise, but the cost is 4 stops of blown highlights. The ISO 100 shot could be processed to preserve those highlights (rather than clip them as in the example) with nonlinear digital gain if desired.
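
For anyone who wants to play with the idea, here is a toy simulation of that trade-off. All of the numbers (bit depth, noise levels, unity gain of one count per photon) are made up for illustration - this is not a model of the 5D2's actual electronics:

```python
import numpy as np

rng = np.random.default_rng(0)

FULL_SCALE = 16383        # 14-bit clip point (illustrative)
EARLY_NOISE = 4.0         # sensor/pre-amp read noise, in counts (made up)
LATE_NOISE = 20.0         # ADC / late-stage self-noise, in counts (made up)

def capture(photons, analog_gain):
    """One exposure: shot noise + early read noise, then analog gain,
    then late-stage noise and clipping at full scale."""
    x = rng.poisson(photons).astype(float)        # photon shot noise
    x += rng.normal(0.0, EARLY_NOISE, x.shape)    # early read noise
    x *= analog_gain                              # analog gain before the ADC
    x += rng.normal(0.0, LATE_NOISE, x.shape)     # late-stage read noise
    return np.clip(x, 0, FULL_SCALE)

photons = np.full(100_000, 200.0)                 # a dim patch, same exposure both times

low_iso = capture(photons, analog_gain=1.0) * 16.0   # brightened digitally in post
high_iso = capture(photons, analog_gain=16.0)        # brightened before the ADC

for name, img in (("digital gain in post", low_iso), ("analog gain x16", high_iso)):
    print(f"{name:22s} mean {img.mean():6.0f}  noise (sd) {img.std():5.0f}")

# The analog-gain frame is cleaner because the late-stage noise was added to a
# 16x stronger signal - but anything above FULL_SCALE/16 counts before gain now
# clips, i.e. 4 stops of highlight headroom are gone.
```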

The noise-vs-headroom balancing act is the same one that sound engineers do when choosing the gain level on their mixer if the ADC (or another component) has high self-noise.

Metadata gain is when the data from the sensor is left as-is until post production.

Most digital cameras offer a mix of all four types of gain. For example, on my 5D2:
  • Idiot gain at f/2.8 or faster f-numbers (compensation for sensor angle of response).
  • Metadata gain any time Highlight Tone Priority is used.
  • ISO 50: metadata gain
  • ISO 125: idiot gain
  • ISO 160: useful and idiot gain
  • ISO 200: useful gain
  • ISO 250: useful and idiot gain
  • ISO 320+HTP: useful, idiot, and metadata gain.
    [...]
  • ISO 2000: useful and idiot gain
  • ISO 2500: useful, numpty, and idiot gain

It would be much better if manufacturers got rid of idiot and numpty gain altogether. They never have any useful purpose in a raw camera. Then the only choice would be between useful analog gain and metadata gain. Ideally, the user would have full control over which type to use, so that it would be possible to shoot ISO 6400 using only metadata gain, only analog gain, or a combination of both. Of course, the camera would choose the most sensible option by default to make it easy to use.

In non-raw cameras (i.e. most video cameras), the possible gain implementation types are far less varied:
  • Linear digital or analog gain
  • Non-linear digital gain

I can measure raw files to determine what types of gains are used, but it's not possible to do that with non-raw cameras, so I'm not sure what my XH-A1 is using, for example.

One big difference between raw/non-raw is that after useful analog gain has been exhausted, digital gain is no longer a bad option. Non-linear gain does the same job as linear gain on the midtones, but preserves highlights through tonal compression (also tending to reduce contrast). The video cameras I've used only expose this type of feature through settings such as gamma, knee, etc., but it can also be implemented directly as a gain control.
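
As a sketch of what I mean by that last sentence - an arbitrary soft-knee curve, not the way any particular camera implements its knee or gamma:

```python
import numpy as np

def nonlinear_gain(x, gain, knee=0.6):
    """Apply 'gain' linearly below the knee, then roll off exponentially so
    the output approaches, but never reaches, clipping. x is 0..1 video level."""
    y = x * gain
    over = y > knee
    y[over] = knee + (1.0 - knee) * (1.0 - np.exp(-(y[over] - knee) / (1.0 - knee)))
    return y

levels = np.array([0.05, 0.2, 0.4, 0.8, 1.0])
print(nonlinear_gain(levels, gain=2.0))
# Midtones come up roughly 2x, while the brightest inputs are compressed to
# just under 1.0 instead of clipping - the same trade a knee/gamma setting makes.
```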

As video and stills cameras get closer together, I hope they bring the best of both worlds (metadata and nonlinear digital gain) instead of the worst (numpty and idiot gain).
Daniel Browning is offline  
Old July 16th, 2010, 10:16 AM   #83
Inner Circle
 
Join Date: Sep 2002
Location: Vancouver, British Columbia (formerly Winnipeg, Manitoba) Canada
Posts: 4,088
For those who care:
ISO (International Organization for Standardization) ratings "evolved" (I'm going to hear from the non-North Americans on that one...) from the ASA (American Standards Association) ratings for film, which measured the sensitivity to light.

Film was sensitive to light due to the formulation of the light-reactive particles in its emulsion. The larger the particles, the more receptive to light they were (higher ASA/ISO), but the grainier the image. As well, in chemical processing, one could "push" the exposure higher by developing longer or at higher temperatures. This affected the contrast of the images on the film as well as the overall exposure.

It's best to think of video as being more akin to colour positive film (slides) than colour negative film (prints), due to the extra step available to "correct" or "enhance" exposure in the printing process associated with colour negative film. Although a RAW workflow does begin to approximate a colour negative workflow.

Why ISO in digital video/cinema? For folks USED to working with motion picture film, the analogous nature of calling sensitivity an ISO rating allows for a more seamless transition. For people who use light meters for exposure and for calculating contrast ratios, it's a no-brainer.

When one changes the ISO setting on a digital video/cinema camera, one is NOT inherently changing the sensitivity of the sensor assembly - one is adjusting the electronic processing that is done to the image. Or conversely, when one RATES a RED ONE at a certain ISO, one is making a learned call on how much one wants to protect highlights.

It isn't magic - higher ISO means more gain. More gain means more noise. Film OR video. HOWEVER, just like in film, sensor size and technological wizardry CAN VASTLY affect grain signature at a given sensitivity. ISO 400 film in a medium format (120 film for example) shows MUCH less grain than ISO 400 in 35mm (or 110 film or Kodak Disc for those who remember those...) as the individual particles are smaller COMPARED TO THE OVERALL FRAME SIZE.

The interesting thing is just how well some dSLRs with video capability handle increased sensitivity. I hear of Canon 5D and 7D cameras being exposed at ISOs that stagger me, with little discernible noise. Even on video cameras, gain has changed dramatically (but not AS much...) - I remember when choosing to add gain while shooting really was a risky move in production (not so much in news...). Now 6 - 12dB of gain is negligible in all but the highest of standards environments (I'm thinking National Geographic and the like).

Someone want to help me down off this soapbox?
__________________
Shaun C. Roemich Road Dog Media - Vancouver, BC - Videographer - Webcaster
www.roaddogmedia.ca Blog: http://roaddogmedia.wordpress.com/
Shaun Roemich is offline  
Old July 16th, 2010, 12:09 PM   #84
Trustee
 
Join Date: Mar 2005
Location: Coronado Island
Posts: 1,472
Quote:
Originally Posted by James Miller View Post
When I download the video file from Vimeo, or the sample test shot of the NEX-5 from DPReview, I can't see any interlacing at 100%.

Footage from the NEX-5 reports back as “Video Tracks:
H.264, 1920 × 1080, 25 fps, 36.20 Mbps”

As the NEX-VG10 uses the same chip what gives?

I could understand Sony’s film on Vimeo being de-interlaced but not the raw sample from the NEX-5 on dpreview.

Am I missing something here?

Cheers, James
I'm really thinking, from the bits I have read, that the camera is recording progressive frames (30p) and dividing each frame into 2 fields for 60i output.
One clue I noticed was in some of the sample footage where a bird was flying across the frame very quickly. It had the distinct juddering motion that I associate with 30p. It definitely did not look like the motion rendering you see with 60i. This was Flash for web, so it may have been some sort of encoding artifact, but it looked exactly like 30p motion to me.
If this is true, you could do a simple deinterlace in post and end up with a pure 30p file.
__________________
Bob
Robert Young is offline  
Old July 16th, 2010, 02:07 PM   #85
Regular Crew
 
Join Date: May 2010
Location: Spring Valley CA
Posts: 55
So, in response to all this news about the camera: looking at the features, I am ready to buy another camera, and was wondering what you guys thought between the Sony FX1000 and the new Sony VG10. I know their shooting capabilities and love that the FX1000 shoots at 30p and 24p. Your thoughts?
Kristian Roque is offline  
Old July 16th, 2010, 04:37 PM   #86
Inner Circle
 
Join Date: Jan 2006
Posts: 2,699
Quote:
Originally Posted by Robert Young View Post
I'm really thinking, from the bits I have read, that the camera is recording progressive frames (30p) and dividing each frame into 2 fields for 60i output.
Yes - this is what is properly called psf - "progressive, segmented frame". Exactly how films have always been shown on TV.
Quote:
If this is true, you could do a simple deinterlace in post and end up with a pure 30p file.
"De-interlace" is a specific term and it's exactly what you'd NOT do on 1080psf. To go between psf and p is a straightforward matter of reordering lines (shuffling 1,3,5,7 etc and 2,4,6,8 etc to become 1,2,3,4,5,etc) - the actual data doesn't get altered at all.

De-interlacing involves forming frames from fields, and involves such things as interpolation. Key difference is the data DOES get altered.
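
To put that line reordering in code terms - just a sketch, assuming top-field-first and that the two psf fields are already separated into their own arrays:

```python
import numpy as np

def psf_to_progressive(top_field, bottom_field):
    """Weave two psf fields back into one progressive frame by re-interleaving
    their lines. No interpolation - every pixel is copied, none are invented."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)
    frame[0::2] = top_field       # original lines 1, 3, 5, ...
    frame[1::2] = bottom_field    # original lines 2, 4, 6, ...
    return frame

# e.g. two 540-line fields from a 1080psf/25 stream become one 1080p frame
top = np.zeros((540, 1920), dtype=np.uint8)      # stand-in luma data
bottom = np.ones((540, 1920), dtype=np.uint8)
frame = psf_to_progressive(top, bottom)
assert frame.shape == (1080, 1920)
```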
David Heath is offline  
Old July 16th, 2010, 04:55 PM   #87
Trustee
 
Join Date: Mar 2005
Location: Coronado Island
Posts: 1,472
Kristian
That's a tough question.
My first thought is that you are comparing a real camera (FX1000), about which everything is known, to a somewhat theoretical camera (NEX-VG10) - although we do have the Sony-supplied specs and some sample footage.

FX1000 - if you love the camera, have the post production workflow for it, and it does what you need, how can you go wrong? On the other hand, HDV is kind of yesterday's format, and 1/3" chips are looking pretty tiny these days.

NEX-VG10 - Full raster 1920x1080 24 Mbps AVCHD with an APS-C sized chip (roughly 20x the surface area of a 1/3" chip), probably a good lens, smaller & lighter (2.5lb with lens vs 5lb), no servo zoom, but the ability to change lenses and possibly, at some point, to adapt existing 35mm Canon, Nikon, etc. lenses. Looks like it has a fair amount of manual control. Probably an excellent still camera as well. Could be the real game changer in small HD video cameras - the new paradigm.

Myself... I preordered one yesterday :-)
__________________
Bob

Last edited by Robert Young; July 16th, 2010 at 06:09 PM.
Robert Young is offline  
Old July 16th, 2010, 05:03 PM   #88
Trustee
 
Join Date: Mar 2005
Location: Coronado Island
Posts: 1,472
Quote:
Originally Posted by David Heath View Post
De-interlacing involves forming frames from fields, and involves such things as interpolation. Key difference is the data DOES get altered.
I appreciate the detailed reply.
I had assumed, intuitively, that if you did a simple (non-interpolated) deinterlace, the two artificially separated fields would be put back together into the original frame.
Like many things, it's apparently more complicated than it would appear at first glance :(
__________________
Bob
Robert Young is offline  
Old July 16th, 2010, 05:04 PM   #89
Inner Circle
 
Join Date: Jan 2006
Posts: 2,699
Quote:
Originally Posted by John Wiley View Post
Different lenses, and wider apertures, have a huge effect on the DoF. You can check this with a simple test by flicking your camcorder into AV/aperture priority mode, zooming in and focusing on an object, then stepping through the aperture range. Even with small-chipped cameras you will notice a change in the DoF.
That's all correct, but I think you're missing the point of what Daniel and I are implying. The example you give is correct in that FOR A GIVEN CHIP SIZE dof will change with aperture, no question about it.

But what Daniel and I are saying is that for the same angle of view, the same dof can be achieved with a variety of chip/aperture combinations. So if a particular dof look is achieved at (say) f2 with a 1/3" chip, the exact same dof can also be got at f4 with a 2/3" chip, and also at f8 with a chip size of 4/3"!
Quote:
You can effectively use ISO as one of your variables, so if you want to keep the aperture closed down a bit, you can bump the ISO upwards to compensate.
Yes again, and that's the other part of what we're saying. Taking the example above, a 2/3" chip should be two stops more sensitive than a 1/3" chip, so f4 with a 2/3" chip will give the same results, exposure-wise, as f2 and the 1/3" chip. Convenient, eh?

Optically, the focal length for the 2/3" case will be twice that for the 1/3" chip to give the same angle of view. If we assume simple lenses, and (say) 10mm for the 1/3" case and 20mm for the 2/3", it works out that the lens diameter will be the same in each case, by definition. Practically, for a given chip technology, the ONLY way to improve the sensitivity is to increase the lens diameter - the chip size is irrelevant.
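
A quick worked check, using the 10mm and 20mm example above plus the 4/3" case from my earlier post (illustrative numbers only, nothing to do with any real lens design):

```python
setups = [('1/3"', 10.0, 2.0), ('2/3"', 20.0, 4.0), ('4/3"', 40.0, 8.0)]

for chip, focal_mm, f_number in setups:
    pupil_mm = focal_mm / f_number   # entrance pupil = focal length / f-number
    print(f'{chip:5s} {focal_mm:4.0f}mm at f/{f_number:.0f} -> pupil {pupil_mm:.0f}mm')

# All three give a 5mm pupil: same angle of view, same depth of field, and
# the same amount of light collected from the scene.
```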

Of course, in practice it's impossible to get much better than about f1.4 (approx), even in a prime lens, and regardless of chip size. If we were to compare f1.4 lenses on both 1/3" and 2/3" chips, the latter would be more sensitive - but it also follows that for the 2/3" case the lens would have twice the focal length, and hence twice the diameter to maintain the f stop.
David Heath is offline  
Old July 16th, 2010, 05:15 PM   #90
Inner Circle
 
Join Date: Jan 2006
Posts: 2,699
Quote:
Originally Posted by Robert Young View Post
I had assumed, intuitively, that if you did a simple (non-interpolated) deinterlace, the two artificially separated fields would be put back together into the original frame.
I think you've got it right in your head; it's just the use of the word "de-interlace" to describe reconstructing p from psf that I disagree with. It may sound pedantic, but "de-interlace" means something quite specific and not at all what's involved here. Start referring to de-interlacing, and that's exactly what some people will start doing, and they won't like the results......!
David Heath is offline  