DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   3D Stereoscopic Production & Delivery (https://www.dvinfo.net/forum/3d-stereoscopic-production-delivery/)
-   -   To toe in or not to toe in, that's the question (https://www.dvinfo.net/forum/3d-stereoscopic-production-delivery/482803-toe-not-toe-thats-question.html)

Adam Stanislav August 2nd, 2010 12:16 PM

To toe in or not to toe in, that's the question
 
Yes, that is quite a common question that seems to come up in this forum with a certain regularity. In reality, this question is absurd and should not be asked except on the very rare occasion when you may want to take your audience for a brief spin. Under any other circumstances, the answer to that question is a resounding no.

And no, I am not saying this because of the keystoning problem usually associated with toeing in. I say that is not a problem at all because if you ever want to take your audience for a toe-in spin, you may as well add the keystone spin to the mix.

The reason why toe in is almost never a good idea has to do with how our vision works. We all know that when our eyes gaze to infinity, they gaze in parallel. That is a given that no one argues with. What most people forget is that infinity is not hundreds of millions of gazillions of miles or kilometers away. As far as the eyes are concerned, infinity is quite close, so close indeed that we almost always gaze to infinity. Why? Because our eyes are only 55-65 mm apart, while the objects we are looking at are almost always meters away. One meter is 1000 mm, a gigantic distance compared to the distance between the eyes.

When calculating the angle to toe in, we need to consider each eye separately, that is, as if it were some 30 mm away from an imaginary center in between the eyes. And we need to pretend the object we are looking at is precisely in front of this mathematical center. That way we can consider the right-angle triangle formed by the eye, the center and the object. That allows us to use simple trigonometry.

Now, so we are all on the same page here, I shall refer to parallel axes as being 0º (zero degrees) apart, hence a 0º toe in. And I shall call nº, the number of degrees each of the eyes (or cameras) converges, an nº toe in (n-degree toe in).

This angle can be calculated by taking the arctangent of 30 (the distance of the eye from the center) divided by the distance to the object in millimeters, then multiplying the result by 180 and dividing by π (pi = 3.14...) to convert from radians to degrees. Bear with me, I do not expect you to do the math, I'll do some for you. Of course, if you want to do it, more power to you. :) (and in that case the PowerToy Calculator for Windows XP is your friend).
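
If you would rather let a few lines of Python do the work than reach for a calculator, here is a minimal sketch of the same calculation (the function name and the 30 mm half-spacing default are just illustrative):

Code:

import math

def toe_in_deg(distance_mm, half_eye_mm=30.0):
    """Toe-in per eye (or camera), in degrees, for an object straight
    ahead at distance_mm, with each eye half_eye_mm from the center."""
    return math.degrees(math.atan(half_eye_mm / distance_mm))

print(toe_in_deg(2000))  # roughly 0.859 degrees for a subject 2 m away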

So, if we are looking at an object that is 30 mm in front of our eyes, we do not even need trigonometry; we know from basic geometry that the angle is 45º, something we can achieve very easily with two cameras. But, hey, how often do we look at something that close? Once a year during the eye exam perhaps.

What if we are shooting an object standing 2 meters (2000 mm) in front of the camera? That is a good close-up that can happen a couple of times in a movie. So, we enter atan(30/2000)*180/pi into our trusted PowerToy Calculator, press Enter, and instantly the result of 0.859... appears (the calculator itself uses an insane number of decimal places, so I only copied a few). Yes, folks, that is less than 1º. Can you even use such a precise toe in and converge each camera by 0.859º without using some microprecision gear?

Well, what if the object is 3 meters (3000 mm) away? Then the angle is 0.57º. For 5 meters it is 0.34º, for 10 meters 0.17º, and for 20 meters 0.086º. That is less than a tenth of a degree, so I'll stop here.
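
A short loop reproduces the whole list, in case you want to extend it to other distances (again just a sketch, using the same 30 mm eye-to-center figure):

Code:

import math

# Toe-in per eye/camera for a centered subject at various distances.
for meters in (0.03, 2, 3, 5, 10, 20):
    angle = math.degrees(math.atan(30.0 / (meters * 1000)))
    print(f"{meters:>5} m  ->  {angle:.3f} degrees")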

I think by now you get the point. As far as our eyes are concerned, infinity is so close that any toe in, if done right, would require extreme precision. Can you even tell if your two cameras are parallel within an error of less than 1º without using some very expensive and very precise angle measuring equipment?

So, now you understand why I have always said to stay away from toe in and keep using the inter-axial distance adjustment for whatever 3D needs you have.

Maybe someday someone will start manufacturing precision equipment that allows us to use toe in the same way our eyes use it. When that happens, I will reconsider my advice (though what I have described is not my only objection against it). But for now, my answer to the question, especially considering it is usually asked by people new to 3D, is to steer clear of toe in.

And of course, all rules can be broken. But you should only break them given two conditions: (1) You understand why the rule is in place, and (2) You are breaking the rule to achieve a special effect. So, as long as you know exactly what you are doing and what you are trying to accomplish, by all means do it. But for everything else, toe in is a bad idea, or at least an idea which is not yet technically feasible to do properly.

Petri Teittinen August 2nd, 2010 01:20 PM

Thank you for this nicely argued post which was easy enough to understand even for a doofus like myself. It was also nice because my 3DFF Indie rig has no convergence adjustment at all - in fact, it doesn't even make sure the cameras are both pointing straight ahead at zero toe-in or toe-out. A slight problem there, as you can imagine.

So, using I-A exclusively. What was the formula again? Distance to picture plane divided by 30 or so equals distance from lens center to lens center?
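
(For reference, the guideline Petri is recalling is the rough "1/30 rule". A minimal sketch of it, exactly as he states it and nothing more; the 2 m figure is only an example:)

Code:

def one_in_thirty_ia_mm(picture_plane_distance_mm):
    """Rule of thumb only: interaxial ~ distance to the picture plane
    divided by 30.  Not a substitute for measuring disparity on a monitor."""
    return picture_plane_distance_mm / 30.0

print(one_in_thirty_ia_mm(2000))  # ~67 mm for a picture plane 2 m away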

Alister Chapman August 2nd, 2010 02:55 PM

The thing that gets forgotten again and again is that we are not creating 3D images but a pair of 2D images presented in 2D to each eye. We are creating an optical illusion, not a 3-dimensional image. Because of this it is not always desirable to exactly mimic the way our eyes work or what they would see in the real world. In addition, what seems to have been forgotten in Adam's explanation is what is happening to the image our eyes see beyond the point of convergence. Even if the angle of toe in is minute when we converge on an object that is 10m away, objects that are 20m away will have the equivalent of 65mm of disparity. It's not all just about what is converged but also what is diverged beyond the convergence point. If I were talking to someone 2m from me and there were mountains several miles distant, there would be the equivalent of a large amount of disparity in those mountains in my visual system, whether I am conscious of it or not. There will also be differences in the parallax seen by both eyes, with each eye seeing a slightly different vanishing point. Parallel shooting does a very poor job of mimicking this.
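
To put rough numbers on the disparity that remains behind the point of convergence, here is a small sketch of the relative angular disparity for a 65 mm eye/camera separation (purely an illustration of the idea, not a rig calculation; the distances are the ones mentioned above):

Code:

import math

EYE_MM = 65.0  # assumed eye/camera separation

def background_disparity_deg(converged_mm, object_mm):
    """Angular disparity of a more distant object relative to the
    converged point: the difference between the two vergence angles."""
    near = 2 * math.atan(EYE_MM / 2 / converged_mm)
    far = 2 * math.atan(EYE_MM / 2 / object_mm)
    return math.degrees(near - far)

print(background_disparity_deg(10_000, 20_000))    # converged at 10 m, object at 20 m
print(background_disparity_deg(2_000, 5_000_000))  # person 2 m away, distant mountains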

While I can't easily measure the angle that my cameras are set to on my rigs in sub-1-degree increments, I can measure the foreground and background disparity on my monitor, and this is what is important. I can then make the tiny fraction-of-a-degree angulation adjustments and I-A adjustments necessary to give me full control of the depth of my scene, convergence and roundness while staying within the appropriate disparity limits for my target audience. Using a mix of I-O and angulation allows me to adjust the roundness of the image, which changes with focal length, and to compensate for image foreshortening at longer focal lengths. If you only shoot parallel it's far harder to control roundness, convergence and disparity together. The narrow I-A required for convergence in close-up shots if you shoot parallel often leads to a very flat image. This undesirable flattening is particularly noticeable on faces shot close up with parallel cameras where a narrow I-A has had to be used to allow for close convergence. Simply increasing the I-A a little and adding a small amount of toe in will make the image rounder and deeper.

Pete Bauer August 2nd, 2010 03:58 PM

I don't shoot stereo video (yet?) so I don't have an opinion about when or if one should toe in the cameras in a given situation.

However, stereopsis is effective to around 100m in a person with a normal visual system...no tropias and 20/20 "best corrected" vision in both eyes. So at least for a matched pair of eyes, "infinity" is generally 100m or more, not 1m, even though the calculated subtended angle is quite small. Apparently some parts of our brains are pretty good at geometry, no matter our classroom grades!

Bruce Schultz August 2nd, 2010 05:27 PM

Quote:

Originally Posted by Petri Teittinen (Post 1554739)
3DFF Indie rig has no convergence adjustment at all - in fact, it doesn't even make sure the cameras are both pointing straight ahead at zero toe-in or toe-out. A slight problem there, as you can imagine.

Your 3DFF rig will do toe-in, here's how. Set both cameras at 0 and line them up so there is no offset between them and you are seeing essentially a 2D image on the 3D monitor in superimpose mode. Move your upper camera to the desired I-A distance, then undo the two round 3DFF camera plate knobs on the front and back of the lower camera's plate and rotate it ever so gently to the left until the disparity of your foreground image is resolved and it looks more like a 2D image at that point (usually the point of foreground focus). You have now converged on that point.

If you don't have these plates, buy them from them (overpriced, I know): http://3dfilmfactory.com/index.php?o...d=56&Itemid=79 (top item on this page). Then you can converge with caution.

Bruce Schultz August 2nd, 2010 05:37 PM

Alister, not to hijack this thread, but will your Hurricane rig have measuring indicators for I-A and convergence? Are you still on target for sales this year?

Adam Stanislav August 2nd, 2010 05:56 PM

Alister, I cannot think of anyone who better fits what I said about breaking the rules. You are a very experienced 3D filmmaker; you probably know what you are doing better than any of us here.

I aimed my message at the people who come to this forum trying to learn how to get into 3D. They should stay away from toe in like the plague. They need to learn to work with the interaxial distance before they do anything else. Once they have as much 3D experience as you, they will be able to tweak all the variables. Until then, toe in will only cause headaches for both themselves and their audience.

Nice discussion, everyone.

Giroud Francois August 3rd, 2010 12:31 AM

While some numbers look small (the angles in degrees calculated here), you have to take into account that they can influence other numbers by several orders of magnitude (especially through the tangent function).
So a 0.5-degree difference can represent A LOT of horizontal shift if the subject is far enough away.
This leads to window violations (objects visible in one view are not visible in the other view) or to an inability to fuse distant objects (ghosting).
The easy way is to shoot with parallel views and a very small I-O distance, something that seems to be commonly accepted today.
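
To put a rough figure on that, a throwaway sketch (each camera axis displaced by roughly distance × tan(angle); the 0.5-degree error and the distances are only illustrative):

Code:

import math

# How a small angular error grows into a large horizontal offset at distance.
for meters in (5, 20, 100):
    shift_mm = meters * 1000 * math.tan(math.radians(0.5))
    print(f"0.5 degrees at {meters:>3} m  ->  ~{shift_mm:.0f} mm of horizontal shift")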

Alister Chapman August 3rd, 2010 01:20 AM

Hey Adam. There is nothing wrong with shooting parallel as such, and it is certainly one of the safest ways to shoot S3D, especially if you don't have accurate monitoring. Indeed, one of my handheld rigs is fixed I-A (35mm) parallel for run-and-gun type shooting.

I favour a hybrid shooting method that uses (in most cases) a fixed toe in for the scene and then uses I-A for convergence. I start by setting up for the deepest shot in the scene with the cameras set to zero I-A. I then toe in the cameras until the measured on-screen background disparity hits the limits I am shooting to. So at this point I have convergence on the sensor plane and the background diverged by my on-screen maximum (hypothetically 65mm on screen). This means that no matter what else I do, provided I don't change the angulation (convergence), I will never exceed my diverged disparity maximum.

Then I bring the cameras apart and use the I-A to set the desired convergence point. The further apart the cameras, the further away the convergence point will be. One of the beauties of this method is that it is pretty "safe" and you don't need to constantly adjust the angulation. It also works well when cutting between close-converged and distant-converged shots, as the overall scene depth only changes by a very small amount compared to what would happen if you used the toe-in (angulation) to adjust the convergence. Angulation changes have the effect of compressing and stretching the scene, and cutting between different scene depths is unpleasant for the viewer.
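
For anyone who wants to see the geometry behind that sequence, here is a simplified numerical model of it (a sketch only: small angles, thin lens, subject centred in frame, keystone ignored; the sensor width, focal length, 2% limit and 3 m convergence distance are illustrative numbers, not a prescription):

Code:

import math

SENSOR_W_MM = 6.3       # assumed active sensor width
FOCAL_MM = 10.0         # assumed focal length in use
MAX_DISPARITY = 0.02    # background disparity limit, as a fraction of image width

# Step 1: with I-A = 0, toe each camera in until the (uniform) background
# disparity reaches the limit.  Disparity from pure rotation ~ 2*f*tan(theta).
theta = math.atan(MAX_DISPARITY * SENSOR_W_MM / (2 * FOCAL_MM))
print(f"toe-in per camera: {math.degrees(theta):.3f} degrees")

# Step 2: keep that angle and open the I-A until the axes cross at the desired
# convergence distance:  C = (t/2) / tan(theta)  =>  t = 2 * C * tan(theta).
converge_at_mm = 3000.0
ia_mm = 2 * converge_at_mm * math.tan(theta)
print(f"interaxial for convergence at 3 m: {ia_mm:.1f} mm")

# Step 3: sanity check.  For a centred object at distance Z the disparity
# fraction is roughly (2*f*tan(theta) - f*t/Z) / sensor_width; it tends to the
# limit as Z goes to infinity and never exceeds it (positive = behind screen).
for z_mm in (3000.0, 10000.0, 100000.0):
    d = (2 * FOCAL_MM * math.tan(theta) - FOCAL_MM * ia_mm / z_mm) / SENSOR_W_MM
    print(f"Z = {z_mm/1000:>5.0f} m  ->  disparity {100*d:+.2f}% of width")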

It's not always as simple as this, as there will be shots where you need to compensate for roundness or to force a little more or less depth. But for most shots this works very well and is straightforward to do, provided you have some way of monitoring your S3D output. This could be as simple as a regular monitor with an A/B switch to flick from one camera to the other and a ruler or grid on the screen (chinagraph pencil or printed on acetate?) that allows you to physically measure the disparity. If you work in percentages it's very easy to scale up and down for different screen sizes. I use a Transvideo Cinemonitor 3D which can add grids etc., so it's easy, but it's not a cheap piece of kit. I like the idea of using a stereoscopic multiplexer on a tablet PC with a couple of USB dongles; that would work too.
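
As a small illustration of why working in percentages travels so well between screen sizes (the screen widths are only examples; the 2% figure is the sort of working limit Alister quotes later in the thread):

Code:

# 2% of screen width expressed in millimetres on screens of different widths.
for name, width_mm in (("field monitor", 510), ("living-room TV", 1100), ("cinema screen", 10000)):
    print(f"2% disparity on a {name}: {0.02 * width_mm:.0f} mm")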

My Hurricane Rigs will have calibrated I-A scales. We are working on calibrated angulation, but there is huge variation in the lens centering of most video cameras and it will depend on how accurately you mount the cameras to the quick release plates. In fact as you change the focal lengths of most lenses the centering drifts from side to side as well. We hope to have a pointer and adjustable scale so you can at least see whether you have more or less toe in from shot to shot. Angulation will be done by a single knob on the rigs. Really though there is no substitute for a good monitor. As with all these things we have had a couple of delays. I should be in possession of the first 4 pre-production rigs in the next two to three weeks. Once these have been thoroughly tested we will have a small production run that will again need to be tested before full scale mass production. I'm still hoping that we will be in full swing by late September and we are still on target for a truly stunning price. We've even managed to add a couple of very clever tricks that no other rig I know of can do, but I can't tell you what yet as it's subject to a patent application.
We should also be releasing the dual focus kit and motorisation kit for the rig at around the same time. The motorisation kit will add full remote powered I-A and angulation adjustment, but for most applications this isn't essential.

Petri Teittinen August 3rd, 2010 11:59 AM

Quote:

Originally Posted by Bruce Schultz (Post 1554800)
Your 3DFF rig will do toe-in, here's how (...) If you don't have these plates, buy them from them (overpriced I know)

Thanks for posting, Bruce. OK, this is off-topic for this thread for which I apologize. Bruce, my Indie rig does not have those plates, and yes, they are overpriced. So was the Indie rig, horribly so. Unfortunately all this shooting 2D/3D video lark is only a hobby to me and I need to finance it from my other income - which does not exist at the moment, as I was laid off back in February. So, yeah, not going to buy those plates :)

Bruce Schultz August 3rd, 2010 03:27 PM

Petri, you only need one of them for the bottom camera and although they are overpriced, I think you can call Karl over there and see if he can do better than list $ for you. If you get one, your rig will be 2X more efficient at getting quality 3D than without them. You've gone this far . . .

Leonard Levy August 3rd, 2010 06:20 PM

Well that's a really interesting post Alister.

OK please bear with me if I totally misunderstood you:

Do I understand correctly that with the cameras at 0 I-A you set a toe-in for the most distant part of the shot, which diverges everything to the max parallax you can accept for your set-up and final screen size etc.? This will diverge the background. Then you pull the cameras apart until they converge on your chosen convergence point? This puts your background somewhere between pure parallel and converged, and this method also determines the I-A instead of, say, consulting a table or using a 1/30-type rule. Would this also mean your background will have a bit less than maximum allowable parallax and just a touch of keystoning?

If I have this right, then I was wondering about your saying that you set this for a "scene", not just a "shot". Assuming you are maybe using different focal lengths and moving the camera around in a scene, would you keep that initial toe-in the same?

I gather also that maybe you adjust for shots that are more extreme - long or short lenses or close or more distant shots?

Quoting your previous post: by "close-up" do you mean a close-up of an actor or interview subject in a scene, i.e. typically a somewhat longer lens shot, or did you mean a shot that is just close to the camera? Sounds like the former:

"The narrow I-A required for convergence in close up shots if you shoot parallel often leads to a very flat image. Simply increasing the I-A a little and adding a small amount of toe in will make the image rounder and deeper."



BTW - I'm not going to try this yet, I'll remain safe with parallel for a while, but I will test it.

Thanks

Lenny

Petri Teittinen August 4th, 2010 04:50 AM

Quote:

Originally Posted by Bruce Schultz (Post 1555129)
Petri, you only need one of them for the bottom camera and although they are overpriced, I think you can call Karl over there and see if he can do better than list $ for you. If you get one, your rig will be 2X more efficient at getting quality 3D than without them. You've gone this far . . .

Hang on, bottom camera? Oh, I see. No, I have the side-by-side 3DFF Indie rig, the cheapest of the lot at $499. Couldn't afford anything fancier.

Prech Marton August 4th, 2010 07:23 AM

Hi,

I work with two HV10s; the minimum base distance is 6 cm.
When recording close subjects, I have to toe in, because when shooting parallel
I have to crop from the left and right of both pictures in editing.
Then if I want the full 1080 resolution back, I have to zoom in, and that degrades quality.
Instead I converge the cameras, and for me the result is not hard to view.
When there is big disparity in the far-away objects, I use a shallower depth of field.
Like in Avatar :)

"This could be as simple as a regular monitor with an A/B switch to flick from one camera to the other"

That's exactly what I wanted to make last week!
With a very simple circuit that switches two composite inputs to one output, the display has to re-sync, and that means half a second of noise in the picture when switching, so I cannot compare the two videos.
I think I need a genlocked circuit. How hard is it to make such a circuit?
Marton

Alister Chapman August 4th, 2010 02:00 PM

Leonard: Yes, you do seem to have understood me correctly. My starting point at zero I-A does give you a scene with everything diverged, but that is only for setup. My background disparity will always be less than my maximum allowable; that's the beauty of this method: it's difficult to mess up! Convergence is adjusted using a monitor and not tables, although generally you find that the I-A etc. will tie in pretty closely with most convergence calculations.

Because I'm toed in there will be some small keystone errors, but these are normally tiny, as I'm generally working to 2% average / 4% max disparity limits, which results in quite narrow I-As.

Yes, by close-up I did mean a shot where a longer focal length is being used to shoot a head-and-shoulders type shot, but the same flattening can also happen if you have a narrow I-A (less than 60mm) and the cameras are very close to the subject.

By scene I mean a sequence of shots that will be cut together and represent the same location or subject. Change subject or location and you need to re-set. Also, changing focal length will require a reset, just as it would if you were using a calculator or tables. One of the challenges is getting from, say, an interior scene with only 20ft of depth to an exterior with many miles of depth. This is where other depth cues, such as scale, really help. So for my last shot in the room I would try to include a person in the front of the scene (not in positive space, just well away from the back of the room) and then in the following exterior have a tree or something else in the foreground. These scale cues then tell the brain how to interpret the stereoscopic cues it's getting. When working in a narrow disparity range you tend to use almost all of it for every scene, so if you only had stereoscopic cues the interior shot would appear to have the same depth as the exterior landscape. But fortunately the brain relies on scale cues for depth even more than on stereoscopic cues. The stereoscopic cues tell our brain what's in front or behind, or that "A" is twice as far away as "B", but scale is what the brain uses to work out what the distance between A and B actually is.

Prech: Good old-fashioned CRT tube monitors without any fancy electronics work best for A/B switching, as they switch almost instantly and will re-sync without any big delay.

