View Full Version : 35mm adapter - double cameras.


Oscar Spierenburg
January 12th, 2005, 04:45 PM
I'm making a new 35mm adapter, only the projected image will be split in two and filmed by two (identical) cameras.
My first tests (just a still frame) gave a very sharp picture after stitching the halves back together on the PC.

I'll try to do it with one mirror to split the screen and the cameras on their sides.

Matt Champagne wrote:
The one thing I'd be worried about if you are compiling images like this is whether your frame rates will properly sync up... if not, you're going to get something akin to very serious interlacing.


I thought about it, and came up with this:
Filming with two cameras will give you 4 audio channels (2x stereo).
If you have a portable audio player of some kind (even mono) and let it play a SMPTE sync signal, you could record that on the left channel of the first camera and the right channel of the second camera, for example.

I know this code from music sync; it's a tick on every frame and is supported by most audio and video software.
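In post, lining up the two takes from that sync tone could look something like this (just a rough sketch, not something I've tested; it assumes each clip's audio is exported to a mono WAV at the same sample rate, and the file names are placeholders):

import numpy as np
from scipy.io import wavfile

# Read the sync-tone channel exported from each camera (placeholder names).
rate_a, audio_a = wavfile.read("camera_a_sync.wav")
rate_b, audio_b = wavfile.read("camera_b_sync.wav")
assert rate_a == rate_b, "both recordings must use the same sample rate"

# Use only the first few seconds so the correlation stays cheap.
n = rate_a * 5
a = audio_a[:n].astype(np.float64)
b = audio_b[:n].astype(np.float64)

# The peak of the cross-correlation tells us how far B lags (or leads) A.
corr = np.correlate(a, b, mode="full")
lag_samples = int(np.argmax(corr)) - (len(b) - 1)
lag_seconds = lag_samples / rate_a
print(f"Camera B offset: {lag_seconds * 1000:.1f} ms "
      f"({lag_seconds * 25:.2f} frames at 25 fps)")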


I'll post a test frame soon, I'm new here and have to figure out how.

Oscar

Aaron Shaw
January 12th, 2005, 05:01 PM
Oscar, forgive my ignorance, but why would you want to do this? To create a 3D cinema type experience?

Oscar Spierenburg
January 12th, 2005, 05:47 PM
No, not for 3D. It's to double the resolution (you'll have the frames side by side, not mixed). The test I made was with my consumer DV camcorder, and it gave a very sharp 1040x720 image.

Two cameras film one projected image (Agus35 idea) and the halves are stitched together again on the PC.

One problem would be if you need a slight overlap to get a seamless stitch. Maybe tilt the split-mirror a bit more than 45°.
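To make the overlap idea concrete, the stitch with a cross-faded seam could look roughly like this (only a sketch, not my actual workflow; it assumes both halves are exported as stills of equal height, and the overlap width is a guess to tune on the real rig):

import numpy as np
from PIL import Image

left = np.asarray(Image.open("left_half.png"), dtype=np.float32)
right = np.asarray(Image.open("right_half.png"), dtype=np.float32)
overlap = 32  # shared pixel columns between the two halves (placeholder)

# Linear cross-fade over the overlapping strip to hide the seam.
alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
blend = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)

stitched = np.concatenate([left[:, :-overlap], blend, right[:, overlap:]], axis=1)
Image.fromarray(stitched.astype(np.uint8)).save("stitched_frame.png")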

Oscar

Matt Champagne
January 12th, 2005, 05:51 PM
That's not so much what I'm talking about... yes, that will solve your problems in post, but I'm talking about the synchronization of the cameras themselves. They have to start their frames at the exact same moment; otherwise camera one may start recording its first frame and camera two might start its first frame a few milliseconds later. It won't be that much of a problem unless you get into high shutter speeds or do a lot of motion, but like I said, if you did those things it would be like seriously strange half-screen "interlacing", or more likely, people's bodies being temporarily cut in two.

At first I thought you meant projecting the exact same image to both cameras... not one on one half, one on the other. If you did this, and say started two cameras that shot 30p 1/60th of a second apart from one another, you could effectively create 60p. (This was what my dual camera idea was intending to do. I found machine vision cameras that shot hi-def with 1" CCDs... but they only shot 15fps.)

On that last note... man, do I wish cameras had MIDI in/out/thru. That would make external sound sync so easy, plus they could even use it as another control mechanism with some creativity. Can you imagine hitting a C# on your keyboard and your camera zooms in? lol

Matt Champagne
January 12th, 2005, 06:00 PM
Another thing occurred to me... theoretically, would it be possible to take the raw image from a 35mm lens (not in a typical adapter as we currently think of it), use mirrors in a similar way and project it directly onto multiple CCDs? It would be a similar idea, probably with a lot more engineering skill involved, but you may be able to resolve the lens's image directly onto the CCDs to make a DOF adapter with no focusing screen/GG necessary. Just a wild idea... no idea if it would work.

But anyway... unless that extremely inspires anyone, it's probably best not to worry about it.

Do you have any framegrabs to post of your results so far?

Oscar Spierenburg
January 12th, 2005, 06:03 PM
But if you deinterlace in post, it would be easy to synch with full frames, right?
The SMPTE code will match every frame, if they are full non-interlaced frames.

Maybe I'll get to that point in the next few days and really understand the problems I'm facing.

Aaron Shaw
January 12th, 2005, 06:10 PM
You can get the timecode to match no problem. The problem arises from the fact that the information in each frame would have been acquired at a slightly different time between the two cameras. You really need to find a way to start and stop them together.

Matt Champagne
January 12th, 2005, 06:24 PM
Maybe interlace is a bad word... I just mean the effect would be similar. I should preface this by saying I may be totally wrong. You have the thing built, so give it a try, I guess. But what I am saying is, let's say you start both cameras at what you think is the exact same time, but in fact they are not started at perfectly the same time. As long as the delay between cameras is a multiple of 1/30 second, you're OK, because you can correct that in post to some extent. But if it's more like 1/50 of a second, then what I am describing will probably occur. Let's just assume camera one shoots the top half of the image and camera two shoots the bottom. If you panned quickly, or something ran across the screen quickly, you would notice the top half detach from the bottom half because it is delayed.

Now, I must admit, now that I do the math... I think it won't be that much of a factor. Maybe only a 7ms delay max between halves, since greater delays could be offset by shifting the frames.
But deinterlacing definitely won't do anything.

All of that is also assuming you're shooting progressive to begin with... I really don't even have an idea of what you'd have to consider if it's interlaced.

Also, shooting 24p, the delay would be a little worse.
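A back-of-envelope check of that worst case (my rough arithmetic, not an exact figure): if the only correction available is shifting whole frames or fields, the leftover offset between the two halves is at most half of one frame or field period.

def worst_case_offset_ms(rate_hz: float) -> float:
    # After shifting by whole frames/fields, at most half a period remains.
    return (1000.0 / rate_hz) / 2.0

print(f"60i fields : {worst_case_offset_ms(59.94):.1f} ms")   # ~8.3 ms
print(f"30p frames : {worst_case_offset_ms(29.97):.1f} ms")   # ~16.7 ms
print(f"24p frames : {worst_case_offset_ms(24.0):.1f} ms")    # ~20.8 ms - hence "a little worse"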

Oscar Spierenburg
January 12th, 2005, 06:27 PM
I think if the cameras record the same SMPTE signal at the same time and are synced in editing software, the frames would be exactly in place.
But maybe it could be done with a clapboard.

Here is a frame (it was only one frame anyhow):

http://s01.picshome.com/52a/frame.jpg_t.jpg (http://s01.picshome.com/view.php?image=/52a/frame.jpg)

or:

http://s01.picshome.com/view.php?image=/52a/frame.jpg

Oscar Spierenburg
January 12th, 2005, 06:38 PM
Sorry, the link was to a thumbnail. Here it is again (it's a compressed JPG, so don't zoom in):

http://s01.picshome.com/52a/frame.jpg


I think I meant de-field. And I would film the left and right halves of the image, and maybe the artefact Matt is talking about will be less noticeable if I can make a bigger overlap that fades the two images together.

And I haven't built it yet; I started today. I've built the normal 35mm adapter, but it doesn't have space for the second camera.

Matt Champagne
January 12th, 2005, 06:39 PM
Looks very nice. I don't know if the SMPTE signal is even necessary if the frames line up. But what I'm saying is not so much about the frames not being able to line up in post... it's the fact that they will likely always have a delay that is a fraction of a frame, so there is no shifting of frames you can do to synchronize them perfectly. Unless you go through the trouble of synchronizing the clocks of the cameras.

It's like if you start two people adding 10 over and over again, but one starts at 0 and the other starts at 2. At the end of the day you are only allowed to add or subtract multiples of ten from their numbers, so there is no way you can ever make them say the same number.

But like I said, the delay may be negligible... probably 7ms or less. And I know audio delays of 30ms are hardly noticeable, so I can imagine the same is true of video. Before I did that calculation I thought it would be a serious problem, but now I feel like I'm just being nitpicky lol... it's probably really no big deal.

Oscar Spierenburg
January 12th, 2005, 07:25 PM
I think it would be like fields in video on TVs and monitors: you can't see it. (I hope)
Maybe some frame-blending will help too.
I'll start and stop the two cameras with the same remote, but that won't be millisecond-accurate, I'm sure.
Thanks for the replies. I'll post my progress, and every suggestion is welcome.

Oscar

Brett Erskine
January 12th, 2005, 07:49 PM
Oscar-
I thought about a similar idea, but before you kill yourself trying to make one of these I might recommend that you try these two tests first:

1) I have a feeling that after you put your higher definition image onto a standard definition DVD you're going to lose just about all that you gained. Try it with your still image test and find out. Make sure you also encode an example of the image from just one camera so you can compare it to the two cameras side by side. DVD requires you to downrez to standard definition. Unlike in the film world, when dealing with video it doesn't really help to start with an HD master over an SD master if you're going to SD DVD. Most people will tell you otherwise, but that's because they are seeing the effects of the HD camera's better/less compressed image, wider latitude CCD, etc. But when all things are the same between two cameras except the resolution, they will end up looking the same when put to DVD, as long as both of them are shooting at least to the maximum resolution that SD can handle. People that have bought the new Sony HDV camera will unfortunately come to realise this in the months to come. Here's the proof:

http://www.dvxuser.com/cgi-bin/DVX2/YaBB.pl?board=sony;action=display;num=1103104690

2) Try shooting another test shot, but this time at wide angle. You should notice that you can't stitch the two together anymore due to minor distortion problems. You will have to bend the image completely flat again in post for them to come together seamlessly, and doing so will result in some loss of resolution.

As to how to sync up two wild video cameras perfectly - hit up these guys. Hopefully they will be willing to share.

http://www.camcorderinfo.com/content/3DVX-Announced-Finally-Portable-3-D-Made-Easy.htm

Bob Hart
January 13th, 2005, 06:34 AM
Brett.

In the same way you get a seemingly better resolution if you blend or layer four or five frames of standard DV, would not two cams framed on the same actual image through a splitter prism achieve something similar, though probably not to the same degree as a joined frame?

If splitting a single image, the optical splitter system of a single-input, two-output head-mounted night-vision unit would be an obvious place to start as a design base and possible source of parts.

According to a DOP I spoke with, Disney once made a special splitter lens for 35mm for making frame-accurate images and mattes. Two films were shot, or something like that. I did not understand the tech at the time.

An interesting experiment is to print two consecutive frames from MiniDV on photo paper. Mount them side by side, then make yourself cross-eyed and look at both frames together.

You know, stick your finger up close to your eyes, then focus on the centre image of the three you see. The apparent resolution and contrast of that centre image is much improved.

Another variation on this experiment is to print two frames of the same subject, one sharp on the subject, the other sharp on the background, both shot through a long lens. An interesting 3D effect occurs.

Provided you de-interlace and apply motion blur by the method described on this very site, and then layer the two timelines at 50% transparency, I suspect there should not be too much trouble even if they were up to four frames out of sync, when there is little movement.
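Layering two timelines at 50% transparency amounts to averaging the aligned frames; a tiny illustrative sketch of that (file names are placeholders, and both stills must be the same size):

import numpy as np
from PIL import Image

a = np.asarray(Image.open("take_a_frame.png"), dtype=np.float32)
b = np.asarray(Image.open("take_b_frame.png"), dtype=np.float32)

# A 50/50 blend of the two layers; noise that differs between the takes
# averages down, which is where the apparent gain comes from.
Image.fromarray(((a + b) / 2.0).astype(np.uint8)).save("blended_frame.png")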

Aaron Shaw
January 13th, 2005, 07:47 AM
Bob, couldn't you just duplicate the footage from a single camera on the timeline for the same effect?

Oscar Spierenburg
January 13th, 2005, 07:47 AM
I made a quick drawing of the setup from the top view:

http://s01.picshome.com/52a/double35mmadapter.jpg

Brett Erskine
January 13th, 2005, 01:13 PM
Bob Hart wrote-
"In the same way you get a seemingly better resolution if you blend or layer four or five frames of standard DV, would not two cams framed on the same actual image through a splitter prism achieve something similar, though probably not to same degree as a joined frame?"


Yes, this will increase your resolution. In fact, I've heard of software that does just that. People sometimes use it to make higher quality photo prints from the combined resolution of a short series of extremely SIMILAR images from a video camera. So yes, this works sometimes, BUT my point was that we lose all of our efforts and are back to standard definition when we finish the project to DVD, because of DVD's current limitations on resolution.

Now, if you're shooting 60i you might be able to make some form of 30p from Oscar's device when you make the final DVD, BUT once again you can NOT make it higher resolution than that, and 60p isn't possible either when it comes time to make the DVD. Basically I believe you will only see your efforts on your computer or some HD format tape.


Aaron wrote-
"Bob, couldn't you just duplicate the footage on the timeline then from a single camera for the same effect?"



That will give you a higher pixel count but not higher resolution/clarity, because you're not giving the image any NEW data to make up its total image - just more of the same. Having an HD camera or two different cameras making up a larger image introduces both more pixels and new/different ones.
Hope I made sense.

What do you think of the 3DVX rig? I know I want one.

Donnie Wagner
January 13th, 2005, 04:18 PM
I DON'T think you lose "all" of the benefits of the two camera recording system when it is compressed to standard size for DVD on a television. My understanding is that starting with more pixels equals higher resolution and more color space. Compression of some sort is part of every form of presented video, whether it's for a DVD, cable TV, or a streaming video. Without going into a technical discussion, my thought is that to get your best "looking" final result, you need to start with the highest resolution and color space you can achieve. Consider a cool car commercial that you can stream from the company's website: they no doubt used the highest resolution (maybe film) they could afford to shoot the content, even though it would be presented on TV and even more compressed on the web. I think the two camera concept has merit, even if it's for DVD.

Oscar Spierenburg
January 13th, 2005, 06:05 PM
I didn't want to start a thread on DVD quality. Anyway, if you double the resolution, the video noise is half as large (or finer, really) and the detail is twice as high. And that would also carry over to DVD.

The system I'm currently building is possibly just good enough for a transfer to 35mm film. That's just a dream for me of course, but still, that's also the thing I would like people's opinion on: what resolution and quality would be the minimum for a transfer?

Note that I have two 'consumer' cameras, not HDV or anything.

Donnie Wagner
January 13th, 2005, 07:34 PM
A 1040x720 image is pretty close to a 4:3 aspect ratio. Your frame looks 16:9. Did you letterbox the image for the widescreen look? If so, you're losing about 20% of your vertical resolution and color space, leaving you with 1040x580. Another question is how you are determining that two consumer cameras turned on their sides will add up to 1040x720? I guess that would mean each camera had an effective 720x520 lines of resolution. I don't think consumer cameras have nearly that many "effective" horizontal or vertical lines of resolution. Anyone else know more about this?

Brett Erskine
January 14th, 2005, 01:47 AM
Donnie-
Take another look at my whole post and you will see that we agree. Look at what I said about film and compression (and its relationship with color space). My argument is simple. When all things are the same EXCEPT resolution (as is the case here), you will lose that extra resolution when you down-res it to DVD. Film is different. Take a look at my links; they prove this is true for video (when compression, color space, latitude etc. are the same but one is HD and one is SD). This isn't true for film that has been telecined to video. 35mm will look sharper than 16mm even though 16mm has much more relative resolution than a TV can show. So when it comes to the basic concept of starting with a higher-res format than the ultimate format, video and film just aren't the same and don't follow the same rule. Pixels vs. organically shaped images.

Anyway, none of this matters if you're not making DVDs out of your work and instead plan on doing a film-out. For a film-out you're definitely going to see a HUGE improvement if you use twice the resolution of SD (two cameras). As to what resolution you need for a quality film-out...? Well, when they do a digital intermediate of a 35mm print they say that scanning it at 2K is pretty damn good, but at 4K you have a "perfect" digital copy of a 35mm print. But in reality, how many people shooting DV end up doing a film-out... seriously? If it's extremely unlikely, then making something like this 2-camera rig won't be of much benefit. That's why I brought up the whole DVD thing in the first place - to save you a lot of headaches, money and time. But don't just take my word for it. Look at the link I posted and read up. Also run a test yourself. Remember to look at the final DVD, NOT the combined image on a PC. Let us know.

Oscar Spierenburg
January 14th, 2005, 06:32 AM
The test image I posted is a quick test, so I squeezed it a bit to get an idea of the final aspect ratio. I don't know how I will do this for real. I made a very big anamorphic lens some time ago; it works very well, but not when light hits the glass. I'm also thinking about a smaller anamorphic lens inside the adapter, between the GG and the 35mm lens, but I don't know if that's possible. Another way would be to put the cameras not on their sides, but that would give a very widescreen 1400 x 576 pixels.

I would like to end the discussion about resolution and DVD. Maybe it's mathematically correct, but it's also about what you compress: DV has a lot of artefacts, like staircasing on edges and ugly color noise.

Brett Erskine
January 14th, 2005, 02:21 PM
Sure thing. So to answer your question about rear-mounted anamorphics - yes, they exist. They are somewhat of a rare thing these days. They used to be more popular because they could create a unique effect: they could often be rotated behind the lens while shooting to create a "drunk" or "dazed" POV shot. When you don't rotate them they act like a traditional anamorphic lens, just mounted in the back like you want. I don't know for sure what they are using for elements, but my guess is an achromatic cylindrical lens. But using a film anamorphic lens will give you a 2.66 image in the end, due to video's CCD aspect ratio being different from an anamorphic film print. I'm not sure if you want a traditional 2.35 image in the end. If you do, then for video you're going to need an anamorphic lens that squeezes the image by 1.78x in order to end up with a squeezed 2.35 image on the CCDs. Good luck with your project.
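The squeeze figures are easy to sanity-check (simple arithmetic, assuming a 4:3 CCD):

ccd_aspect = 4.0 / 3.0       # ~1.33, a 4:3 video CCD
print(ccd_aspect * 2.0)      # ~2.67 - a standard 2x film anamorphic, hence the "2.66"
print(2.35 / ccd_aspect)     # ~1.76 - squeeze needed to land on 2.35, roughly the 1.78x quoted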

Aaron Shaw
January 14th, 2005, 02:29 PM
Brett's right if you use a premade rear anamorphic. You should be able to set this compression yourself, though, if you build your own system. The compression depends on the 35mm lens, the cylinder lens, the distance between the two, and the focal length.

Oscar Spierenburg
January 14th, 2005, 04:59 PM
So, everything changes if you put another 35mm lens (like a zoom or wide angle) in front of the rear anamorphic lens?
That would be bad, because I like being able to use different lenses on the 35mm adapter.
Thanks for the replies. Some time ago I made a (two-triangle) water prism as an anamorphic lens, but those would be way too big for a rear element.

Isn't it possible to take a lens protection glass (or UV filter), glue a curved piece of acrylic glass on top, and fill that with water or transparent oil? Or is another lens needed besides one curved element?
These are just my stupid thoughts.

I made a sturdy construction today with a lens mount and a plate to put the cameras on. I'm testing a GG: a CD dipped in an oil that attacks the plastic, with one side protected by tape.

Aaron Shaw
January 14th, 2005, 05:21 PM
Sorry, I wasn't very specific. You should be able to use any 35mm lens without trouble if you were to build such an adapter.

That is to say, it depends on you using 35mm lenses. Using a 16mm cine lens, for instance, will give a different result. Now, if we were to use an anamorphic adapter which went in front of the 35mm lens, then we would have to focus both the anamorphic adapter and the lens for any given focal length - in both the horizontal and vertical planes. Since we are talking about a rear system (called, in its simplest form, a Bravais system), this is not a problem.

Now, all a Bravais system does is increase the image size while maintaining the image's position. In our case, with cylinder lenses, this gives an anamorphic image (expansion in one plane). You can either compress the image horizontally or expand it vertically. With the compression technique you end up with an even smaller image area than a normal 35mm frame on its own would give, which is bad for our purposes. The expansion gives a larger area in the vertical direction. Not very useful in itself, but it allows us to retain our 35mm image area on the GG at the cost of a little bit of light.

Bob Hart
January 15th, 2005, 06:53 AM
I recently read somewhere that Canon may have an anamorphic product which goes between the front lens and the CCD. I can't recall where now, unfortunately.

Barend Onneweer
January 15th, 2005, 01:20 PM
Besides possible visible seams due to the lens distortion at the edges of the frame, color differences, etc., there's one big problem with this concept:

Synchronisation of the left half and right half of the image. Of course you can line up the shots in your compositing app, but since the clocks of the two cameras aren't genlocked, there can be up to 1/60th or 1/50th of a second time difference between the left half of the image and the right half.

No problem with static shots, but if there are fast-moving objects across the frame, or even worse, camera movement, you'll get very visible seams where you stitch the images together.

Bar3nd

Oscar Spierenburg
January 15th, 2005, 03:15 PM
Possibly, but I'm not so sure, because the difference may not show up within the length of a normal take - you don't usually hold one camera position for two minutes.
And if Barend means the time difference also mentioned by Matt, I'm sure this (less than a 30th of a second) difference can be dealt with in After Effects or whatever.

I have a setup with GG and lens, so maybe tomorrow I can make some tests with two cameras.

Aaron Shaw
January 15th, 2005, 04:11 PM
Bob, that's precisely a rear anamorphic. I read about that too. It's meant to allow their 16:9 cams to shoot 2.35.

Oscar Spierenburg
January 15th, 2005, 05:00 PM
The first thing that's clear to me after a quick test is that you lose a lot of light with a mirror.
So the next test I'll make tomorrow is to put the two cameras almost parallel (side by side), facing the GG. They must be panned a little towards the centre, and I'll lose some pixels when I correct the perspective in post.
If this turns out to be the best method, would there be an optical solution for this distortion?
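Correcting that perspective in post could be as simple as a four-point warp, something like this sketch (illustrative only; the corner coordinates are placeholders you'd pick by eye from the footage):

import cv2
import numpy as np

frame = cv2.imread("right_camera_frame.png")
h, w = frame.shape[:2]

# Corners of the GG half as seen by the angled camera (hand-picked guesses).
src = np.float32([[40, 10], [w - 5, 0], [w - 5, h], [40, h - 10]])
# Where those corners should land in the rectified half-frame.
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

matrix = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(frame, matrix, (w, h))
cv2.imwrite("right_camera_rectified.png", rectified)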

I'm also curious about rear anamorphics; does anyone know the curve of such a lens?

Barend Onneweer
January 15th, 2005, 05:30 PM
I'm not referring to 'drift', which would cause the images to run out of sync during a long shot.

I'm talking about the cameras being out of sync due to a different phase of their internal clocks. A 1/60th of a second time difference between the two halves of the frame would be a big problem.

I doubt this would be easily fixed in post, but tests will tell...

Bar3nd

Oscar Spierenburg
January 16th, 2005, 06:59 PM
I'm very glad to say that after two thorough tests on synchronisation, the effects we've discussed did not occur. I didn't test it in the 35mm adapter, because it's not finished yet, but I made a simple panorama (a bit like the old three-camera Cinerama).
Of course I have a weird lens distortion in the centre, but whether panning slowly, fast or very fast, no shifting happens.


By the way, the sync between the two clips is set up in no time, even without a SMPTE signal. I just flashed a light.
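That flash trick could even be automated: look for the first frame in each clip whose brightness jumps well above the previous one. A rough sketch (file names and the threshold are placeholders):

import cv2
import numpy as np

def flash_frame(path: str, jump: float = 40.0) -> int:
    """Return the index of the first frame whose mean brightness jumps sharply."""
    cap = cv2.VideoCapture(path)
    prev_mean, index = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mean = float(np.mean(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
        if prev_mean is not None and mean - prev_mean > jump:
            cap.release()
            return index
        prev_mean, index = mean, index + 1
    cap.release()
    return -1

offset = flash_frame("camera_a.avi") - flash_frame("camera_b.avi")
print(f"Shift camera B by {offset} frames to line the clips up")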

Oscar Spierenburg
January 17th, 2005, 05:54 PM
So everything works with sync and no side effects. Is anyone interested in discussing ways to film the projected image with two cams?
The best thing would be to have a reasonable overlap to blend the images.
I also thought of filming the whole frame twice and doing some sort of pixel shift (if that's the right word), but that's probably a stupid idea.
So, does anyone know of some sort of beam splitting without much light loss?

Aaron Shaw
January 17th, 2005, 06:01 PM
You're going to lose half your light if you split the beam. There is no way around this. Whether you lose more than that depends on how exactly you do the splitting. In any case, it seems like it would be very hard to pull off well.

Do you have some test footage you can post from the two-cam shoot?

Oscar Spierenburg
January 17th, 2005, 06:21 PM
The footage isn't spectacular, so I won't bother you with it. But the main thing it showed me was no shifting - none when looking frame by frame and none when playing normally. Besides that it's crap, because it's without the 35mm adapter and I didn't work on the lens distortion in the centre, so it looks like a raindrop has run vertically down the frame. How they handled that in Cinerama (not CinemaScope) I don't know.

If I'd lose half the light, I won't split the beam.

What if I place the cams almost parallel, like I said - is there a way to correct the perspective with a lens? I mean, you'd be filming the image a little from the side.

Oscar Spierenburg
January 21st, 2005, 07:29 PM
I got the setup finished, more or less. Everything is still uncovered, but the cameras are in place and I have a mirror just like in the drawing I made (if the mirror is at exactly 45°, the light loss is not too bad).

I hope I get some more people interested in this concept.
These are the advantages so far:


-If you have a pretty good (consumer) DV camera which is a few years old, you can probably get the same one for a bargain second-hand.
-You have high resolution, 720 x 1040 in my case.
-The system is not much more work to build than a normal 35mm adapter.
-The cameras share the same remote (and in my case start and stop at exactly the same time).
-Once you have the adjustments right in your post software, they can be reused every time (just replace the footage), unless you take the cameras off.
-You could have surround sound if you want.
-And of course you have the superb DOF of the Agus35 system.

I'll do some more testing tomorrow and post some of it if it is good enough. Of course, no one should buy a second camera before I show a full test on sync, color and other corrections.

Bob Hart
January 22nd, 2005, 07:33 AM
Oscar.

It would be a bitch to set up and get focused correctly, but two perspective-correcting SLR lenses might fix the center problem. Otherwise, with separate images from separate identical conventional lenses onto separate Agus GGs, it might be possible to skew the left lens and right lens and their GGs enough to impart the perspective correction that the specialised lenses achieve, so that the edges coincide at the new center.

This might also be contrived to take advantage of the hotspot issue with one lens fading as the other gains.

The downside may be a focal shift from the center to the outside when pulling focus, plus a parallax error.

Oscar Spierenburg
January 23rd, 2005, 05:12 PM
Bob, I don't think it will be necessary. I made the setup like I first intended, so without the perspective distortion.
I made a (very) quick test shoot. One of the things I did wrong was setting the two cameras to different apertures.

Here is the test (very much compressed):

http://www.degalerie.nl/1/test4b.wmv