View Full Version : Canon 3D?
Sareesh Sudhakaran February 10th, 2010, 10:15 PM Is it possible to shoot 3D or create a rig with the Canon 5D/7D/Rebel T2i? Are there any custom-made rigs for them? If so, can we change the lenses on these cameras? Do they use a beam splitter, side-by-side, etc.?
The opportunity is fantastic - to shoot full HD with a lightweight system. That's one big dream coming true.
Adam Stanislav February 10th, 2010, 10:39 PM Aren't the bodies of the Rebels too wide? I do not have a digital Rebel, but my film Rebel is 14 cm wide, more than double the span of human eyes. Though, I suppose if you mounted one of them upside down, you might be able to get the lenses close enough for 3D.
The tricky part would be to get both cameras focused the same.
Alister Chapman February 12th, 2010, 02:16 PM You could use a beam splitter to get the interaxial down, but there is no way to accurately sync the cameras that I know of.
Ryan Farnes February 15th, 2010, 09:39 PM Syndicate - Canon 7D BNC-R Cine HD. (http://www.syndicate.se/Default.aspx?Id=294)
Scroll down a bit and behold the magic. Preliminary, but promising. They even have some red/cyan 1080p and 720p clips of just shooting on a street.
I was pretty impressed.
Adam Stanislav February 15th, 2010, 11:42 PM Looks like they did just what I suggested and mounted one of them upside down.
Sareesh Sudhakaran February 16th, 2010, 12:16 AM But the Syndicate one doesn't allow for locking the two cameras together. Don't they shoot H.264? If that's the case, how do they guarantee the same frames are being shot? What about syncing the shutters?
Also, we can't control convergence or bring the interaxial below 12 cm - we can't achieve the 65 mm distance. ET does have a 5D beam-splitter rig (it should work for the 7D) that looks more robust.
Ryan Farnes February 16th, 2010, 02:13 AM I have no idea how accurate their "rig" is for professional stereo production. I had just been at their site recently and thought I'd pass it along. The sample videos did seem pretty cool as tests go. If you've got red/cyan glasses, check them out.
I wonder if they've taken their 3D rig ideas any further.
Adam Stanislav February 16th, 2010, 02:15 AM They do say "The next step is a mirror solution covering 0-120 mm eye distance."
As for syncing, they seem to be using the nanoFlash. You can sync two nanoFlashes, so maybe that is how they do it.
Bruce Schultz February 16th, 2010, 08:30 PM I've been shooting some 3D footage with 2 Canon 7Ds for the past month or so with pretty good results.
I'm using a simple rail system, placing the cameras next to each other and toeing them in to a common frame. I use fixed prime lenses - two 28mm f/2.8 and two 50mm f/1.4 - and shoot at 1080/23.98P. For outdoor photography I find it necessary to use a 0.3 or 0.6 ND filter on bright days, as ISO 200 at a 1/50 shutter still lets in too much light.
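To estimate how many stops of ND a bright day calls for, here is a quick Python sketch. The EV 15 "full sun" baseline and the target f-numbers are illustrative assumptions, not my actual settings:

```python
import math

def nd_stops_needed(iso, shutter_s, target_fstop, scene_ev100=15.0):
    """Stops of ND needed to reach target_fstop, given ISO, shutter,
    and scene brightness in EV at ISO 100 (EV 15 is roughly full sun)."""
    # EV implied by the chosen settings, normalized to ISO 100:
    settings_ev100 = math.log2(target_fstop ** 2 / shutter_s) - math.log2(iso / 100)
    return scene_ev100 - settings_ev100

# ISO 200 at 1/50 s, as above. Wide open at f/2.8 in full sun:
print(round(nd_stops_needed(200, 1/50, 2.8), 1))  # ~7.4 stops of ND
# Stopped down to f/11 instead:
print(round(nd_stops_needed(200, 1/50, 11), 1))   # ~3.4 stops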
All in all, this rig shoots long shots and scenics with very good results. I sync the cameras with a simple film slate clapper and line the clips up easily in post. For post I have found the Stereo 3D Toolbox FCP plug-in to work very well; however, I am curious about the Bororo 3D plug-in for Sony Vegas. If there is a more complete tutorial on this software, I'd appreciate being directed towards it.
For any subjects closer than, say, 20-30 feet, a mirror rig is required, as the interaxial of the lenses is far too wide even at the closest spacing on the rail. I too am curious about the smaller form factor of the new Rebel and await its arrival to be able to measure the interaxial distance.
For those closer shots and for sync-sound shots, I'm putting together a mirror rig much like Alister's, using 2 Sony EX cameras and 2 nanoFlash recorders, genlocked and timecode-synced. I'll be using the 4:2:2 100 Mb/s I-frame-only codec to ensure frame-to-frame accuracy, but I'd be delighted to hear that a lower-data-rate nanoFlash codec would work as well, if anyone knows.
I'll be uploading a few sample clips to Vimeo in the next few days and I'll start a new thread to post the URL - the clips will be in full-color anaglyph mode.
Adam Stanislav February 17th, 2010, 12:17 AM I am curious about the Bororo 3D plug-in for Sony Vegas. If there is a more complete tutorial on this software I'd appreciate being directed towards it.
There will be eventually. I have diabetes, so I have days when I can just code, code, code, or write, write, write the documentation, and then I have days when my energy is so low I am a zombie and cannot do anything. And the last few days my energy has been at the bottom.
But I will get my high energy days again and will write more documentation. I have also ordered the nanoFlash in the hope of using it to grab video directly off my computer screen. I am hoping to produce a DVD which explains everything about the plug-in. Of course, I still have more work to do with the plug-in itself and see no point in producing the DVD before the plug-in has everything I want in it.
For now, if all you need to do is combine a left and a right video, follow these instructions (http://www.pantarheon.org/Bororo3D/Composite/). If you need to combine multiple left and right tracks, there is a Bororo 3D Single View under the Video FX section of Sony Vegas. Apply the appropriate left or right view to each track. Choose Left/Right or Above/Below as needed for the left tracks, and the same with Swap Right/Left selected for the right tracks. You can use that on any number of tracks. In that case, your Compositing Mode should be Source Alpha (this is the default anyway).
Or for anaglyphs, with the same Bororo 3D Single View pick the appropriate anaglyph and, again, select Swap Right/Left for the right tracks. This time your Compositing Mode needs to be Add with one of the two tracks being the child of the other.
If you need to place a 2D background behind it all, just place it at the bottommost track for anaglyphs and everything other than Above/Below or Left/Right. For the latter two, select the Bororo 3D Double Vision from Video FX and select either Left/Right or Above/Below. This is only for 2D tracks that you want in the background. You do not need it for anaglyphs because the anaglyph of a 2D video is the 2D video itself.
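If you are curious what that anaglyph compositing amounts to at the pixel level, it boils down to simple channel mixing: red from the left eye, green and blue (cyan) from the right. A minimal Python/Pillow sketch - the file names are placeholders, and a real plug-in may use more sophisticated color matrices:

```python
from PIL import Image

def anaglyph(left_path, right_path, out_path):
    """Build a basic color anaglyph from a left and a right frame."""
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")
    r, _, _ = left.split()    # keep the red channel from the left eye
    _, g, b = right.split()   # keep green and blue (cyan) from the right eye
    Image.merge("RGB", (r, g, b)).save(out_path)

anaglyph("left_frame.png", "right_frame.png", "anaglyph.png")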
Just make sure you have the latest version (currently 1.2.1, build 7 - this appears on all the Bororo 3D dialogs).
I hope this can get you started for now. I will keep you all posted when I write more docs.
Bruce Schultz February 17th, 2010, 10:36 AM Thanks for that information and the URL, Adam. I hope you are feeling better soon.
Sareesh Sudhakaran February 19th, 2010, 11:18 PM I've been shooting some 3D footage with 2 Canon 7Ds for the past month or so with pretty good results...
I am keen on getting a couple of Rebel T2is myself and starting to experiment. My main concern is definitely the 'closer than 20-30 feet' bit. Being interested in telling stories on a low budget, a beam-splitter rig is probably what I need. ET has a rig for the 5D/7D, but I have no clue how they manage sync and other things.
You mentioned you used a clap to sync. When the camera records, it does so in H.264 in a MOV file - this, I'm assuming, is not broadcast quality, but I guess it is good enough for films. How can you precisely match each frame? Since each camera is slightly different, does the encoding make things worse, or is it a non-issue? What if I'm using a 16GB SD card and recording a single take for about 30 minutes (4GB gives 12 minutes)? Will I still have sync at the tail end of my video?
Can we use a zoom lens at all for this 3D rig? I understand it would be very difficult to set the lenses to match, but are there any systems in place that make it achievable?
Would love to see those test shots you're doing! Thanks.
Alister Chapman February 20th, 2010, 02:27 PM Just a couple of points:
While you can sync a pair of nanoFlashes to start and stop recording at the same time, it is the cameras that must be synced together, via genlock or some other means, to prevent the stereo from breaking down during camera moves or any on-screen motion. The nanoFlash is just a recorder; it is the signals arriving at the recorder that must be in sync, so for frame-to-frame accuracy the cameras must be genlocked or controlled in such a way that they run in sync.
I use the nanoFlashes at 50 Mb/s Long GOP when I need longer record durations, and the pictures are just fine. For most projects I shoot at 100 Mb/s. Make sure you have the new firmware that allows you to name the cards and keeps the card names after a format, to help you keep track of your left/right clips.
If you're going to use a mirror rig with a vDSLR, you need to check that the mirror has as near to a 50/50 split as possible. Many low-cost rigs use prompter mirrors, which have a 30/70 or 60/40 split. This means the cameras will see different light levels, and thus need different apertures, giving different depth of field. With small-chip video cameras the difference in DoF is less noticeable, but with vDSLRs and other large-sensor cameras it is a much bigger issue.
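To put a figure on that mismatch, here is a quick Python sketch of the exposure difference between the two paths of a beamsplitter for a few split ratios:

```python
import math

def split_mismatch_stops(path_a, path_b):
    """Exposure difference, in stops, between the two paths of a beamsplitter."""
    return abs(math.log2(path_a / path_b))

print(split_mismatch_stops(0.5, 0.5))  # 0.0  - an ideal 50/50 mirror
print(split_mismatch_stops(0.6, 0.4))  # ~0.58 stop
print(split_mismatch_stops(0.7, 0.3))  # ~1.22 stops
```

A 30/70 prompter mirror puts the cameras more than a stop apart, so once the apertures are matched for exposure, the depths of field no longer match.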
Bruce Schultz February 20th, 2010, 08:31 PM You mentioned you used a clap to sync... How can you precisely match each frame? Will I still have sync at the tail end of my video?
Can we use a zoom lens at all for this 3D rig?
I have not used, nor do I expect to use, the Canon 7D rail rig with sync sound. It is a "training wheels" rig for me, so I have just been using it to obtain footage that I can analyze in FCP with Mr. Dashwood's 3D plug-in to see where the flaws in my camera adjustments are. So far, however, I haven't detected any picture-sync problems between the two data streams on short 2-5 minute shots. I recently shot a series of boats and kayaks in a harbor (at a distance, with the 50mm lenses) and had no trouble syncing the two streams at all. Using a clapper at the head and a tail slate is also a very easy way to find the first common frame of any clip, and my tests so far have held sync right up to the tail slate - as I mentioned, for short clip bursts.
Since I use prime lenses, zooming is not part of the equation. I recently toured a very high-end 3D facility in Los Angeles. Their zoom lenses are computer-locked, and their proprietary software not only tracks the two zooms but also adjusts the camera mounts for physical discrepancies between the cameras. Very sophisticated and very expensive, at roughly $50,000 USD rental per day per two-camera rig. So zooming seems to be a slippery slope in 3D. I'd love to hear that it isn't, though.
Alister Chapman February 21st, 2010, 09:14 AM Getting the two cameras in sync is crucial for 3D if anything moves in the shot, especially across the frame. Imagine a shot with a car traveling through the frame. If the cameras are not completely in sync, the position of the car at the moment of capture will be slightly different in the left and right views. This effectively moves the car forwards or backwards in 3D space, because the car's positional difference between the left and right frames alters its convergence/divergence, while the static parts of the frame are unaffected. Imagine also a shot of a person walking or running: if one camera takes its shot slightly behind the other, the person's legs will be at a different point in their stride, so the left eye will see the legs in one position while the right eye sees them in a slightly different position, and the 3D will break down.
Also consider what happens when you pan. If one camera captures its image slightly ahead of the other, the 3D depth will appear to either compress or expand as the cameras are panned, because the left and right images will be shifted slightly left or right with respect to each other. Even at 30 frames per second, a half-frame sync difference equates to roughly a half-degree offset between the two cameras on a five-second 180-degree pan, which is not all that fast.
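Putting numbers on that pan example (the figures are the ones above; the function is plain arithmetic):

```python
def pan_offset_deg(pan_deg, pan_seconds, fps, sync_error_frames):
    """Angular offset between the two cameras' exposures during a pan."""
    pan_rate = pan_deg / pan_seconds         # degrees per second
    sync_error_s = sync_error_frames / fps   # sync error in seconds
    return pan_rate * sync_error_s

# 180 degrees in 5 seconds, half a frame of sync error at 30 fps:
print(pan_offset_deg(180, 5, 30, 0.5))  # ~0.6 degrees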
It's not just a case of getting the cameras to start recording together; it is the video streams the cameras are producing that need to be in sync, so you need both cameras to switch on and power up in sync.
At 24P (the slowest typical frame rate) it is traditional to use a 1/48th shutter. To ensure that both cameras are exposing together for at least half of the open-shutter period, they must be within roughly 1/100th of a second of each other. That is the absolute minimum and won't be ideal for any fast movement; ideally you want the cameras running within 1/1000th of a second of each other. This can be achieved with a LANC controller or genlock, but I know of no way of doing it with the Canon vDSLRs.
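The two tolerance figures above fall straight out of the shutter arithmetic. A small Python sketch - the "minimum overlap" framing is just one way of expressing it:

```python
def max_sync_offset_s(shutter_s, min_overlap=0.5):
    """Largest sync offset that still leaves the two exposures
    overlapping for at least min_overlap of the shutter period."""
    return shutter_s * (1.0 - min_overlap)

shutter = 1.0 / 48   # typical 24P shutter (180 degrees)
print(max_sync_offset_s(shutter))        # ~0.0104 s, i.e. roughly 1/100th
print(max_sync_offset_s(shutter, 0.95))  # ~0.001 s, the 1/1000th target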
Adam Stanislav February 21st, 2010, 10:26 AM You can sync two nanoFlashes, so maybe that is how they do it.
Alister is right. Now that I have my own nanoFlash I realize that the only way to sync two nanoFlashes is by syncing the cameras themselves.
Bruce Schultz February 21st, 2010, 03:06 PM I'm all for theory; it would be difficult, if not impossible, to move ahead technologically without it.
Here is a link to a clip I've posted on Vimeo of 3D footage shot with 2 Canon 7D cameras.
HerrSchultz 3D on Vimeo (http://vimeo.com/channels/88114) (click on the description for details, and watch full screen if you have a good monitor system). It's an anaglyph output, so put on your red/cyan glasses to watch.
It appears to turn some of the theory about non-genlocked cameras a little on its head. It's not perfect by any stretch, and yet throughout all of the 7D 3D tests I've shot, I've yet to see the type of artifacting that the lack of genlock should cause - for MOS (no sound) shots, of course. Granted, these are short clips, but I think for the 3D enthusiast or home user this setup works pretty well.
Now that my 2 nanoFlash units have arrived, I'll most likely be abandoning the 7Ds for EX cams, but the 7Ds have been invaluable as training tools for 3D motion photography.
Adam Stanislav February 21st, 2010, 04:49 PM It looks nice, Bruce.
Sareesh Sudhakaran February 22nd, 2010, 11:54 PM If genlocking is absolutely crucial, can a LANC controller be built for the Canon 7D? There are many DIY tutorials on the web.
There is a lot of 3D imagery and footage on the web. How does one really test it to see if it is correct 3D? Since each eye is different, how do people on set know whether their 3D is right or wrong?
One thing I've noticed in my basic 3D work is that it is important for the eye to adjust and focus on an image as soon as possible. How does one guarantee that? In Avatar, James Cameron really played it safe with the 3D. It subtly enhances the world he was creating, and that in itself was a great achievement. Did he avoid any radical 3D because he had no way of truly judging the result?
Sareesh Sudhakaran February 22nd, 2010, 11:56 PM I recently toured a very high-end 3D facility in Los Angeles... roughly $50,000 USD rental per day per two-camera rig.
Wow! I was hoping to make a movie for about $250,000! If you don't mind, which 3D facility was it? Or are all of them this pricey?
What about movies like Valentine 3D or Final Destination 3D? The second was made with the PACE system, but I'm sure they couldn't afford 'Avatar rates'. Here's hoping...
Alister Chapman February 23rd, 2010, 01:57 AM It may be possible to build a controller for the Canon cameras, as they do have remote control via USB; whether it is possible to use this to bring the cameras into sync, I don't know. The LANC controllers for the Sony cams work by turning on both cameras in sync; however, after a few minutes the sync will drift off, requiring a restart. Genlock locks the cameras together, and they remain in sync while still connected.
There are many on-set 3D monitoring solutions. Most of them only work with genlocked cameras, but there are a few dual-monitor and mirror-type systems that will work with non-genlocked cameras.
Cameron uses camera systems designed by Vince Pace; these use modified Sony HDC950 (I think) HD cameras on very expensive, computer-controlled side-by-side mounts. There is no "radical" 3D in Avatar because it's not realistic and it detracts from the story, turning the film into a trick show. Cameron's aim was to produce an immersive experience where the 3D makes you believe you are in the film, not merely watching it, which IMHO he succeeded in doing.
Sareesh Sudhakaran February 23rd, 2010, 02:03 PM Is 3D doomed to be judged only subjectively on set? Are there no objective standards or tools that can help determine an 'acceptable' range that can then be fine-tuned in post?
Giroud Francois February 23rd, 2010, 04:22 PM Yes, some tools exist for correcting 3D after the shoot, but if your budget is tight, they cost a fortune (several thousand dollars at the low end).
And frankly, I would not take the risk of ruining a scene that cost time and money to shoot just because I skipped a simple check in the field.
Bruce Schultz February 23rd, 2010, 10:55 PM Vince Pace says that Avatar was shot entirely on 2/3" chip cameras, specifically the Sony F23, Sony F950 and Sony 1500. The last two were broken out of their normal housings and placed in new, thinner case mountings external to the recorder (in the case of the 1500) and the rest of the electronics. They were still not small enough for close-up 3D work, so a mirror rig was designed.
The genius of Avatar, as I see it, is that Cameron chose to marry focus and convergence on the same plane for most of the film. If you pay close attention, you will notice that most of the action, especially the dialogue scenes, takes place near the camera and not in the mid or deep background. This seemed to have the twin effects of easy audience focusing and only a few semi-jarring cuts to deep focus/convergence shots.
Sareesh Sudhakaran February 24th, 2010, 12:50 AM Assuming one shoots with long lenses even for close-ups and two-shots (if there's enough set space), why would anyone want to shoot on a beam-splitter?
I agree 20 feet is important ground for ENG and documentaries, but for feature films, can't there be a workaround just for side-by-side? What's missing here?
Nick Hiltgen February 24th, 2010, 04:47 AM You're going to lose a lot of your 3D effect shooting close-up work with long lenses. Your subjects will appear to be cardboard cutouts instead of feeling actually 3D. I think the mirror rig really is your best option. For what it's worth, if you don't want to pay $50k a day, you could rent a beam splitter and an experienced stereographer for significantly less. Furthermore, I don't know that zoom lenses are really your ideal choice, as you are dealing with significantly more elements of glass.
3D is definitely not doomed to be judged subjectively on set; in fact, I would say that's probably the worst way to do it, as most likely what you're judging from is a screen nowhere near the size your material will be projected on. There are a few programs and 3D calculators out there: Florian, the inventor/designer of the P+S Technik rig, is supposedly releasing a 3D calculator for that contraption, and there is another program (a calculator) called Stereoscopic Master that lets you enter all of your parameters to design your shots effectively.
On the Canon front, I think you might get far more effective 3D if you use a camera with a smaller sensor (as that's closer to the iris of your eye); however, that won't give you the 2D movie look that a lot of people like.
Sareesh Sudhakaran February 25th, 2010, 01:25 AM I understand what you mean by the 'forced' focus that a long lens gives - but isn't that what Avatar predominantly has? It had very safe 3D, where your eyes had to focus on the screen and nothing else. Judging by its success, I guess revolutionary 3D is less welcome. If we use video lenses instead of film lenses, we will get more depth of field for the same frame, so close-ups zoomed in on those lenses will have more depth and wide-angle distortion in them, won't they?
I guess judging 3D with the 3D calculators is like doing film cinematography with a light meter. I'm sure one day there will be a standard for this. It's difficult to judge on set because there are so many ways in which 3D can be viewed, plus the monitor you use on set won't really tell you how it's going to look on another screen (cinema or HDTV).
Would the Canon Rebel T2i have a smaller sensor than the 5D? Otherwise we might be better off with SI-2Ks or REDs with video lenses.
Bruce Schultz February 25th, 2010, 03:57 PM Excuse me if this is a Simpsons "D'oh" moment, but I have noticed that the 3D "effect" changes with the size of the display. When I converge two 3D streams in FCP so they look good on the 23" computer screen and a 23" external HDTV, and then make a 1080P Blu-ray and watch the same shots on a 50" HDTV, it doesn't have the same look - it generally looks flatter.
The Stereoscopic Master calculator allows you to define the final viewing screen size, but I don't see any change in the calculation when switching from a 42" HDTV to theater size, so I'm not sure what to make of that.
Nick Hiltgen February 25th, 2010, 11:20 PM Bruce, you're absolutely right on both counts. I've been told you can take 3D that works on the big screen and bring it down to the small screen without much difficulty; the problem is that when you do the reverse, it causes too much divergence - which is most likely what you're seeing with your work going from your home editing system to the Best Buy monitor.
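The divergence limit is easy to check with numbers: positive (background) parallax on screen should stay at or below the human interocular distance, roughly 65 mm, and the same pixel disparity grows with screen width. A rough Python sketch, with approximate screen widths:

```python
def parallax_mm(parallax_px, image_width_px, screen_width_mm):
    """Physical on-screen parallax for a given pixel disparity."""
    return parallax_px / image_width_px * screen_width_mm

EYE_MM = 65.0  # rough adult interocular distance

# The same 40 px background disparity on three (approximate) screen widths:
for screen_mm, name in [(510, '23" monitor'), (1100, '50" HDTV'), (10000, 'cinema screen')]:
    p = parallax_mm(40, 1920, screen_mm)
    verdict = "OK" if p <= EYE_MM else "diverges"
    print(f"{name}: {p:.0f} mm -> {verdict}")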
I hope to play around with the Stereoscopic Master calculator and see what kind of results I can get (i.e. see if I agree with its findings...).
Sareesh, I think maybe I'm misunderstanding you, or you me. While long lenses will give you a shallower depth of field, as far as 3D goes they also really do make the characters seem like cardboard cutouts; it appears to be layers of paper instead of an immersive 3D environment - kind of like looking at something through binoculars.
I think pretty much any lens over, say, a 50mm (in motion-picture 35mm terms; divide by 2.5 for the 2/3" equivalent of about 20mm, or by roughly 5.6 for the 1/3" equivalent of about 9mm) will start to give you more of that binocular vision. To me this isn't really groundbreaking 3D, but then again I'm not really sure what is.
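For reference, those divide-by factors expressed as a quick converter - the crop factors are approximate and vary with exact sensor dimensions:

```python
# Rough focal-length equivalents across formats, using the divisors above.
CROP_VS_35MM_CINE = {"35mm cine": 1.0, '2/3" chip': 2.5, '1/3" chip': 5.6}

def equivalent_focal_mm(focal_35_cine_mm, fmt):
    return focal_35_cine_mm / CROP_VS_35MM_CINE[fmt]

for fmt in CROP_VS_35MM_CINE:
    print(fmt, round(equivalent_focal_mm(50, fmt), 1), "mm")
# 35mm cine 50.0 mm / 2/3" chip 20.0 mm / 1/3" chip 8.9 mm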
Sareesh Sudhakaran February 25th, 2010, 11:52 PM Can we really achieve a sort of standard 3D viewing experience by using a 3D calculator, when the result is viewed on screens of different sizes and technologies (internet, TV, HDTV and cinema screens)?
Sareesh Sudhakaran February 26th, 2010, 03:09 AM While long lenses will give you a shallower depth of field, as far as 3D goes they also really do make the characters seem like cardboard cutouts...
You're right about the cardboard-cutout effect. It's anything but 3D.
Is it possible to use video lenses (on the longer side, but still with more depth of field) at a distance of 20-30 feet to achieve an 'immersive' 3D effect? If so, it would avoid the use of beam-splitter rigs. I'm just hoping here. Designing a beam splitter seems like too much of a risk, and that is before actual shooting.
Bicky Singh March 2nd, 2010, 08:13 PM Any new discoveries about syncing two Canon vDSLRs together?
The potential of using these cameras for 3D is huge, if only we can overcome this sync problem.
Nick Hiltgen March 2nd, 2010, 10:28 PM I don't know that DSLRs are really the thing you want to use, even if it is possible to genlock two of them. I think maybe having a camera with something closer to the size of a human iris might make for a really immersive environment; unfortunately, that will probably limit any sort of 2D release. But hey, that's really more of an aesthetic thing.
You really don't have to design a beam splitter; there are quite a few DIY guides out there, and if you want a ready-made low-cost one, check out 3D Film Factory (http://www.3dfilmfactory.com/3d_camera_rigs.html#3dbsindie). I'm not sure of the legitimacy of the company, but it's something I'm looking into. At $3,500 with offset adapters, that's about as cheap as I think you'll find a beam-splitter rig.
Sareesh Sudhakaran March 2nd, 2010, 10:35 PM I agree with you on buying the rig, which also comes with the experience those guys have. I wanted to make one cheaply to learn the ropes myself, so to speak.
What cameras in your opinion resemble the human eye, and which can be used for 3D? If DSLRs are out, then I guess so are ARRI lenses. Is that right?
Adam Stanislav March 2nd, 2010, 11:00 PM I think maybe having a camera with something closer to the size of a human iris might make for a really immersive environment
Interesting. That would be the Iconix Studio 2k, I suppose.
Nick Hiltgen March 3rd, 2010, 10:11 AM Please keep in mind this is strictly untested opinion on my part. I think that 1/3" and 1/4" CCD or CMOS sensors will give more of a "you are there" effect, as you will have a depth of field similar to that of the human eye. Of course, the issue may be that we (as an audience and as filmmakers) have come to expect movies to have a certain depth of field (35mm motion-picture depth).
I think it really depends on the type of filmmaking you're doing. To me, immersive usually means bordering on virtual reality as far as camera work goes. If you just want a very deep screen - a large distance between the front of the screen and the back - that requires careful setup of your shots and cameras so as not to get divergence. I think you'll also want to start off with fewer cuts, to let your audience get used to the notion of 3D cutting, but again, that's just me. I'm sure there are others out there who can offer other solutions.
Ben Winter July 1st, 2011, 03:18 PM To clear the air:
Canon 5Ds and 7Ds can be closely synced, but not certifiably genlocked. This is the method, assuming all camera settings are the same: press start to begin recording, then fire both shutters simultaneously with a dual-wired remote or wireless PocketWizards. The sensors will reset their scans from the top line simultaneously and resume recording video within a few milliseconds of each other. The sync can be verified by firing a flash at the cameras: because of the CMOS rolling shutter, the flash should fill only a portion of the frame, but it should be the same part of the frame on each camera.
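The flash test can even be read quantitatively: the offset of the flash band between the two frames, as a fraction of the frame height, maps onto a fraction of the sensor readout time. A hedged Python sketch - the ~25 ms readout figure is an assumed illustration, not a published Canon spec:

```python
def implied_sync_error_ms(band_line_offset, frame_lines, readout_ms):
    """Sync error implied by the flash band landing band_line_offset
    scan lines apart on the two cameras."""
    return band_line_offset / frame_lines * readout_ms

# Hypothetical: 1080-line frame, ~25 ms rolling-shutter readout (assumed),
# flash bands 40 lines apart:
print(round(implied_sync_error_ms(40, 1080, 25.0), 2))  # ~0.93 ms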
With one camera flipped and the other right side up, the lenses' centers can be mounted a minimum of 5" apart. Using the 1/30 rule of thumb for 3D shooting (the interaxial distance is 1/30 of the distance to the nearest object in the frame), this allows shooting objects as close as 12.5 feet. Any closer requires a split mirror.
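The 1/30 rule is easy to tabulate in both directions. A small Python sketch using the same units as above:

```python
def nearest_subject_ft(interaxial_in, ratio=30):
    """Closest subject, in feet, for a given interaxial in inches,
    per the 1/30 rule of thumb."""
    return interaxial_in * ratio / 12.0

def required_interaxial_in(nearest_ft, ratio=30):
    """Interaxial, in inches, needed for a subject at nearest_ft feet."""
    return nearest_ft * 12.0 / ratio

print(nearest_subject_ft(5))      # 12.5 ft, the figure above
print(required_interaxial_in(6))  # 2.4 in - why close work needs a mirror rig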
Hope this helps.
David M. Cole July 2nd, 2011, 11:30 AM T...Canon 5Ds and 7Ds can be closely synced but not certifiably gen-locked
One method that works well to minimize initial drift is to power the cameras up at exactly the same time from an external power source. This, combined with the ganged hard-wired shutter release to reset the sensor timing, is the best approach to sync. My 5Ds come up with less than 0.05 ms of sync offset but, unfortunately, drift out of sync fairly quickly. The key seems to be to have two cameras that were manufactured at the same time (mine were not).
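A rough model of that drift: if the two cameras' free-running clocks differ by some parts per million (the 10 ppm figure below is a hypothetical), the time until they exceed the ~1/1000 s tolerance mentioned earlier in the thread is simple to estimate:

```python
def seconds_within_tolerance(tolerance_s, clock_mismatch_ppm):
    """How long two free-running cameras stay within tolerance_s of each
    other, if their clock rates differ by clock_mismatch_ppm."""
    return tolerance_s / (clock_mismatch_ppm * 1e-6)

# Hypothetical 10 ppm mismatch against the ~1/1000 s figure quoted earlier:
print(seconds_within_tolerance(0.001, 10))  # 100 s before a re-sync is needed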
We have had MUCH better luck with Panasonic GH1s and GH2s. There is an active group shooting stereo with hacked GH1s at GH1 Stereoscopic Rig Collaboration (http://www.dvxuser.com/V6/showthread.php?221553-GH1-Stereoscopic-Rig-Collaboration). You can find the design for a simple mechanical sync controller there, as well as sync-testing methodology and advice.
Alister Chapman July 3rd, 2011, 04:39 AM Just consider that if one of the cameras is upside down, any rolling-shutter artefacts will go the opposite way on each camera, further complicating alignment.
Jesse Blanchard July 6th, 2011, 10:15 AM I have been shooting on a beam-splitter rig with two T2is. I had a 2.5mm splitter cable made for me (I should get it this week) to split the remote signal to the cameras. I've been told this will give me perfect sync once the stills are fired.
If that doesn't work, I'm going to try powering up both cameras off the same external battery. They are the same voltage as the larger XH A1 batteries, so I think I have a good chance of getting that to work. I'll post my progress once I get there. The T2is do not output HD video via the HDMI port, so they are not a good candidate for nanoFlash recording. Also, the white focus box stays up; you can clear that with Magic Lantern, though.
David M. Cole July 7th, 2011, 09:13 AM The T2is will sync fairly well with a Y-cable on the shutter release; however, the cameras internally poll the shutter-release port for the switch closure. This results in some offset, depending on the polling cycle of each camera. Using an external battery to fire up the cameras will increase your chances of achieving good sync.
Another note on using mechanical switches for powering up the cameras and triggering the shutters: mechanical switches are prone to bounce, which could possibly introduce some variance between cameras. Because of this, "purists" use solid-state switches for the purpose. I remain unconvinced that this matters and have achieved perfectly acceptable results with plain old mechanical switches. Your mileage may vary.
Finally, most camera manuals warn that you should NOT leave the camera power switch in the on position when introducing external power, because a surge could fry the camera. I've never had a problem with battery or regulated A/C power sources on Canon or Panasonic cameras, nor have I ever heard of anyone else having a problem. Again, your mileage may vary.