Some questions on the current viability of 3D indie features at DVinfo.net


3D Stereoscopic Production & Delivery
Discuss 3D (stereoscopic video) acquisition, post and delivery.


Old November 5th, 2008, 07:09 PM   #1
Major Player
 
Join Date: Apr 2006
Location: Melbourne, Australia
Posts: 785
Some questions on the current viability of 3D indie features

First of all, thanks to Tim and Chris for having the foresight to set up this forum. I've finally taken an interest in 3D (my DP and I are currently discussing the possibility of shooting the next feature in 3D) and I find this great resource already set up and waiting for me on DV Info!

That Autodesk white paper that Tim referenced is terrific for a newcomer (while I concurrently struggle through Lenny Lipton's book) and it's also great to see early practitioners such as Alister Chapman posting their work here. But I was a bit alarmed when watching the Lenny Lipton video interview (which Tim gave the link to) where he seemed to be saying that some current 3D features have ghosting problems when shown in the cinema (presumably these were not RealD projects). And that triggered a series of questions that I have (concerning the current state of play):

1/ I'm wondering if it's possible to shoot 3D handheld. Do you know of any small "single body" 3D cameras (with at least 720p and the ability to adjust the "interaxial" separation and angle)?
(Otherwise, maybe two lighter cameras, such as PMW-EX1s, on a lightweight frame might do?)

2/ Stunts. I was originally thinking (for a 2D movie) of actually trying rear projection in the studio for certain stunts. But, for 3D I'd imagine greenscreen would be the only option.
There aren't any known problems which would prevent greenscreening for a 3D background, are there?

3/ NLE for 3D. I saw your post about Quantel providing 3D support. But are there any plug-ins (that you know of) for Final Cut Studio so that you can edit and export in 3D?

4/ The final distribution package to send to the cinemas.
If the Digital Cinema Package (DCP) for 3D has to be prepared by an expensive outside facility, that could be a viability-killer right there. I know that QuVIS's Wraptor plug-in for Compressor allows the indie to prepare the DCP himself (or herself) for a 2D movie, but it doesn't do 3D, does it? (I was just wondering if you might have trialled it on any of your own 3D test projects. I'll probably be downloading a trial of Wraptor this weekend, but I don't have any 3D project to test it with.) Or does this get into the area that Lenny Lipton was talking about with ghosting (presumably from improperly prepared DCPs?), where it's currently best to go with RealD? (Which would probably rule out indies.)


Thanks, Tim.
David Knaggs is offline
Old November 7th, 2008, 10:49 AM   #2
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Hi David,

I may have some of the answers to your questions.

There is only one true single-body 3D camera that I know of and that is the 3DVX, which is basically two heavily modified DVX100s attached very close together in a modified body. It has quite a few limitations, in particular it is SD NTSC only. You can add devices such as the NuView, which adds a prism and mirror and uses LCD shutters to record left and right views on alternate fields. I got some OK results with one of these on a DV camcorder, but it is problematic when you move to HD. First, the HD processing in many cameras introduces timing issues, and any camera that uses a long-GOP codec (HDV, XDCAM) will suffer from a lot of artifacts.
As Tim has proved, you can use just about any matched pair of camcorders to shoot good stereoscopic video. Interocular separation might be an issue depending on what you are shooting; I have found that, provided you are not working too close to your subject, the kind of separation you can get with many small HD cameras is usable. A further issue is sync. Unless you have some way of syncing the two cameras you will get issues with motion. Even a half-frame delay will cause problems with any fast action or motion such as handheld wobble.

I've done a couple of 3D projects now using an EX3 and an EX1, both side-by-side and over-under with a mirror rig. On a side-by-side rig the EX3 viewfinder makes it difficult to get two EX3s close together. By using an EX3 and an EX1 and removing the zoom handle from the EX3, I can get the lens centers around 4" apart. The rig could be adapted for shoulder-mount use and, since the EX3 has genlock, there are no sync issues.

One of my projects included some green screen work. Provided you assume your background to be at infinity, it's not too difficult to do.

So far I have used FCP to edit my 3D projects. There is a plug-in that allows you to convert pairs of clips into anaglyphs, but for any serious work it's of little use. I edit by first working with one channel to make my edit decisions, then duplicate that edit for the other channel. If you use EXs you can sync the clip names so that you will have two sets of clips with matching numbers. I then convert my two channels to an anaglyph using the levels filter method (red only and blue/green only, then composite using screen mode). I can then tweak the 3D using the motion tab. I then export the separate channels as needed for projection. So far this has been as 1920x1080 30fps JPEG still sequences for polarized projection, using a Sun workstation feeding a pair of Sony SXRD 4K projectors. I have been very pleased with the final results, even on cinema-sized screens.
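For anyone wanting to preview this step outside FCP: the levels-filter composite described above (a red-only layer screened over a blue/green-only layer) reduces to simple channel selection, since screening anything with black leaves it unchanged. A minimal NumPy sketch (function name and array shapes are my own assumptions, not an FCP API):

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Build a red/cyan anaglyph from left/right RGB frames
    (H x W x 3 uint8 arrays). A red-only layer composited over a
    blue/green-only layer in screen mode reduces to exactly this
    channel selection, because screen(a, 0) == a."""
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]     # red channel from the left eye
    anaglyph[..., 1:] = right[..., 1:]  # green and blue from the right eye
    return anaglyph
```

Viewed through red/cyan glasses, each eye then sees only its own channel's image, which is why this works as a quick monitoring and tweaking format.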

Next year I plan on shooting my storm-chasing expeditions in 3D using my side-by-side rig plus a pair of the new Sony mini-cams.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman is offline
Old November 7th, 2008, 04:31 PM   #3
Major Player
 
Join Date: Apr 2006
Location: Melbourne, Australia
Posts: 785
Hi Alister.

Absolutely brilliant response! Thank you so much. That was solid gold.

So to recap:

An EX1 paired with an EX3 could work because the EX3 has genlock, which can maintain the (essential) sync between them. (I'm hoping for a lot of handheld plus a number of action sequences.) Also, for 24p would a traditional shutter of 1/48 be okay for 3D?

With the artifacting from the long GOP of the EX1/3, did you mention it because the artifacts tend to get magnified with the 3D process? Or was it relating to doing any type of general shooting with a long GOP camera?

I guess next week's RED announcement about the new specs for Scarlet could prove interesting. If it's indeed a "pocket" pro camera then it could prove lightweight enough to mount two of them for easier handheld work and closer spacing between the lens centers. Plus I assume it'll use REDCODE rather than a GOP structure for compression (fewer artifacts). Although I fear that it'll have limited zoom capabilities and no genlock. But, hey, you never know!

I noticed that the new 3D cinema pioneered by RealD (Lenny Lipton) uses a method called the ZScreen, which uses a single projector in the cinema that alternately projects the right-eye and left-eye frames. I have no idea if this requires an entirely different NLE workflow!

And it's great to hear about your results with the Sony 4K projectors. I look forward to seeing your next storm-chasing results!
David Knaggs is offline
Old November 7th, 2008, 05:22 PM   #4
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
The artifact issue only relates to using the field-alternating shutter type of adapter (NuView etc.). As each field contains separate left/right-eye information, the codec gets really stressed. I wouldn't use this type of device for any serious work anyway.

I shot at 30P with a 180 degree (1/60th) shutter. You get less eye strain with 3D at higher frame rates in my opinion.

Sony also have a new adapter lens for the 4K projectors that allows a single projector to be used; I'm not exactly sure how it works. You can either feed the SXRDs with a computer or use the dual-stream function of HDCAM SR VTRs. The SR VTRs can record and play back two simultaneous 4:2:2 HD streams.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman is offline
Old November 7th, 2008, 10:03 PM   #5
Major Player
 
Join Date: Apr 2006
Location: Melbourne, Australia
Posts: 785
Thanks, Alister.

Quote:
Originally Posted by Alister Chapman View Post
I shot at 30P with a 180 degree (1/60th) shutter. You get less eye strain with 3D at higher frame rates in my opinion.
That's interesting. From what I was reading about the ZScreen, its projection requires a very high frame rate of 72 frames per second per eye, which apparently means that, for a 24 fps project, each frame has to be projected three times to reduce flicker.
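The triple-flash arithmetic is easy to see in a quick Python illustration (the function name and the (eye, frame) tuple format are my own, just to show the counting; this is not RealD's actual scheduling code):

```python
def triple_flash_sequence(frame_count: int) -> list:
    """Order in which a single alternating-eye projector shows images:
    each stereo frame pair is flashed three times, alternating eyes.
    At 24 fps this yields 24 * 3 = 72 flashes per second per eye
    (144 flashes per second in total)."""
    sequence = []
    for frame in range(frame_count):
        for _ in range(3):                 # triple flash per frame
            sequence.append(("L", frame))  # left-eye image
            sequence.append(("R", frame))  # right-eye image
    return sequence
```

So one second of 24 fps material produces 144 projector flashes, 72 per eye, which is where the flicker reduction comes from.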

Quote:
Originally Posted by Alister Chapman View Post
Sony also have a new adapter lens for the 4K projectors that allows a single projector to be used, not exactly sure how it works. You can either feed the SXRD's with a computer ...
This adapter lens sounds promising. If the final mastering of the DCP (Digital Cinema Package) in 3D proves too financially daunting for indie use (it currently seems to be mostly the province of Disney, James Cameron and other top-liners), an alternative might be to rig up your own computer with installed anti-piracy measures and then do the dual feed, as you said, to a projector with this adapter. That would mean you could bypass the DCI-spec server, which also means bypassing any special NLE requirements plus the DCP mastering step.

I'm really just thinking out loud with this idea, but it is very plain to me that the widespread adoption of digital cinema around the world is going to be driven by 3D. So it makes strategic sense for an indie (or at least it does to me) to shoot the next one in 3D. You'll still do your cut (using footage from only one of the cameras) for regular "2D" cinema showings, but you'll have the advantage of an additional 3D cut of the movie. This additional cut can only increase the chances of a movie being taken up for exhibition, not lessen them. The extra costs would seem to be: buying one extra camera, a mounting rig for the two cameras, extra memory cards, double the hard drives, and extra editing time for the additional 3D cut (and if you sync the clip names, as Alister said, you could cut down on a lot of that time).
David Knaggs is offline
Old November 8th, 2008, 10:10 AM   #6
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
I've been working closely with a UK operation called Holovis. HoloVis International - Products: Apollo
They use Sun workstations to stream dual feeds; each feed is simply a 30 fps sequence of numbered JPEGs. Any NLE can produce these streams, and while a Sun workstation is more expensive than your average PC, the cost is not prohibitive, certainly cheaper than an HDCAM SR deck. The quality achievable is quite remarkable. I was at Disney World last week and saw most of the 3D movies; frankly, I was seriously disappointed by the image quality and believe I have been getting much sharper, clearer 3D than any of the Disney films. Admittedly most of those are 10 or more years old, but for many people they are the only 3D movies they have seen.
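Producing the numbered still sequences mentioned above is trivial to script from any NLE export. A hypothetical sketch of just the naming step (the prefix and zero-padding convention here are my own assumptions, not Holovis's actual format):

```python
def jpeg_sequence_names(channel: str, frame_count: int) -> list:
    """Generate numbered JPEG filenames for one eye's 30 fps still
    sequence, e.g. left_000001.jpg, left_000002.jpg, ... One such
    sequence is produced per eye, and the playback workstation
    streams the two in lockstep."""
    return [f"{channel}_{n:06d}.jpg" for n in range(1, frame_count + 1)]
```

One minute of 30 fps material per eye would be 1,800 stills per channel, so consistent zero-padded numbering matters for keeping the two streams in sync.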

I also saw the 3D IMAX film "Magnificent Desolation". This was better than the Disney films but suffered from terrible motion judder.
When you see some of the Pace or Cameron digital footage projected by a 4K projector, it blows you away compared to most film solutions. Most of this material is no more than two streams of HDCAM, which is 1440x1080, so well set-up EXs should be pretty darn close in terms of picture quality.

I think great things can be achieved with a very simple EX-based rig and an off-the-shelf NLE, provided you have the time and commitment to make it work. So far my experience has been that it takes 30-40% longer to shoot in 3D. That's mainly down to the extra time needed to check camera alignment. Then editing takes around two to three times as long depending on the complexity of the project. The last 3D edit I did had 10 layers of video in each channel and took a lot of planning to get right.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman is offline
Old November 9th, 2008, 12:04 AM   #7
Tourist
 
Join Date: Aug 2008
Location: Fresno, CA
Posts: 2
This won't help the projects of today, but going back to single-body 3D cameras: about 15 years ago Toshiba put out a limited number of single-body 3D camcorders (in VHS-C) that interlaced the fields.
Mike Louie is offline
Old November 9th, 2008, 03:43 AM   #8
Major Player
 
Join Date: Apr 2006
Location: Melbourne, Australia
Posts: 785
That was great info. Thanks, Alister.

Quote:
Originally Posted by Alister Chapman View Post
So far my experience has been that it takes 30-40% longer to shoot in 3D. That's mainly down to the extra time needed to check camera alignment. Then editing takes around two to three times as long depending on the complexity of the project.
Hmmm. The extra editing time might not be too bad (hiring an assistant editor), but the 30-40% extra shooting time is obviously where it can quickly become unviable. If the camera alignment can be checked between shots, while the DP and gaffers set up the lighting for the next shot, perhaps the focus puller can do double duty as the alignment checker? (Unless the alignment has to be checked after each take within a shot. I have no idea how often it has to be checked.) If lighting and alignment checking are done concurrently, then I guess it could be viable. But this is where we'd really have to do a smaller 3D project beforehand to know what we're up for and to better plan and prepare (as you said).

I just checked out (on the web) the Pace/Cameron camera (Reality System) and found a good link with a video demonstrating it:
James Cameron Stereoscopic 3D camera

They seem to have filed down the side of each lens (where they "touch") to bring the spacing of the lens centers down to 70 mm. I take it that, in your considered opinion, the 100mm that you've got the EX1/3 spacing down to should be adequate to get a good result with a 3D feature?

The other thing I was wondering about (seeing how I was just talking about a focus puller) is whether it's practical to install some kind of Follow Focus on the type of EX1/3 rig you've been using?
David Knaggs is offline
Old November 9th, 2008, 09:05 AM   #9
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
The often-quoted rule is that the interocular separation should be no more than 1/30 of the distance from the camera to the subject. So 4" of separation should allow you to shoot objects no closer than 10 ft away, and the projects I have done so far have confirmed this to be a realistic guide. I've been using simple drive belts looped over both lenses to link the focus together. I am working on a more sophisticated gear-based mechanism: basically a single rod that runs over the lenses and drives both with a worm gear; this can be used for both focus and zoom.
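The 1/30 rule is easy to tabulate for any rig. A tiny Python helper (the function name and unit choices are mine) shows the arithmetic behind the 4-inch / 10-foot figure:

```python
def min_subject_distance_ft(separation_inches: float) -> float:
    """Closest comfortable subject distance, in feet, for a given
    lens-center separation, using the common 1/30 rule: the
    separation should be no more than 1/30 of the camera-to-subject
    distance."""
    return separation_inches * 30 / 12  # 30x the separation, inches -> feet

# 4" separation -> subjects should stay at least 10 ft from the camera
```

By the same arithmetic, the roughly 70 mm (about 2.75") spacing mentioned earlier for the Pace/Cameron rig would bring the minimum subject distance down to around 7 ft.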

If you had an EX3 (or a pair) that you could afford to dedicate to a permanent 3D rig, it would be pretty easy to get the interocular down even further, but at this stage I'm not sure there is much to be gained for the kinds of projects I am doing.

Unless you choose to shoot parallel and then sort out the convergence in post, you need to set the convergence for each shot. On some shots it may even be necessary to pull the convergence during the shot, for example if you are getting closer to the main subject. How easy this is will depend on the design of your rig; I have a knob that turns to adjust the convergence. On-set 3D monitoring is another thing to consider. You need some way of seeing the 3D on set to be able to align the cameras. One way to do this is to use a vision mixer to mix both camera outputs together while applying a colour effect to each channel to give a pseudo-anaglyph. Other options include head-worn 3D video glasses (i-3D Video Glasses).
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman is offline



DV Info Net -- Real Names, Real People, Real Info!
1998-2017 The Digital Video Information Network