View Full Version : High Definition with Elphel model 333 camera
Wayne Morellini February 11th, 2007, 08:42 PM Sorry for interrupting, but I couldn't understand the reason why you are not using an HD webcam sensor that is ready to connect by USB2 to a laptop to capture directly?
Sorry if you mentioned the answer before, but my English is not very good and I might have missed it...
Because the camera is a networked security camera, designed primarily to be shared over a network. USB2 is not capable of the multiplexed extension that Ethernet is, so you would be limited. Secondly, the chips incorporated don't support it. USB2 is also a notorious processor hog, as it is designed to be looked after by the processor (more brilliance that slows PCs). At the rates USB2 can work at, it can seriously tax a portable PC device (dropped frames etc.).
Hurcan Emre February 12th, 2007, 01:45 AM thanks for the detailed explanation :)
Matteo Pozzi February 12th, 2007, 02:39 AM Finally - got first images from the 353
wow! when you can, post a link
many thanks
Matteo
Odd Nydren February 12th, 2007, 04:10 AM ..I second that! :)
Thanks!!
//O.
Andrey Filippov February 12th, 2007, 08:53 AM Images are really ugly (so I do not want to post the camera IP here :-) ) - the only 5MPix sensor board I had left has a malfunctioning sensor (probably not all the sensor chip pins were hand-soldered nicely) and I have not yet finished porting all the software (some parts are still missing - like setting the gamma tables).
My goal was to test hardware before going to larger batch, especially interface to the new CPU - PIO, interrupts, DMA. I even had to solve some problems with the new Spartan 3E (vs. Spartan 3) FPGAs to make them work as needed.
So this part works; the boards are being tested as they are built, and so far the 10353 can do at least what the 333 could. There are some features not tested yet (IDE, USB, fast system bus acquisition) - but that will be done later, in parallel with the camera manufacturing.
Andrey Filippov February 12th, 2007, 11:17 PM I hope it can make development of the camera interface easier - PHP now works in the 353
Matteo Pozzi February 13th, 2007, 01:57 AM Cool! I think it is possible to use it to make a database to save settings
and to load presets... do you have suggestions for more features that could use this new capability (PHP rocks)?
Andrey Filippov February 13th, 2007, 09:49 AM Cool! I think it is possible to use it to make a database to save settings
and to load presets... do you have suggestions for more features that could use this new capability (PHP rocks)?
Matteo, there is no database (yet) - just PHP itself, to simplify the server (camera)-side interface software. I'm thinking of a number of smaller binaries (instead of the ccam.cgi) for low-level hardware-related functions, with the interface level implemented in PHP.
Odd Nydren February 13th, 2007, 01:22 PM Andrey,
PHP in-camera - very cool...
..would it be possible to start an in-camera timer that executes a PHP script, say, every 5 minutes or something... taking a photo or doing something else?
Just curious
//O.
John Wyatt February 15th, 2007, 05:18 AM I believe there are some light Linux distros which can be installed and run from a USB memory stick. Compared to running the OS from a live CD, this would be a way to save camera settings. I imagine though there are a number of reasons why this is not suitable for the job in hand...
Andrey Filippov February 15th, 2007, 10:21 AM Yes, we have one on a 4GB USB stick. The problem is that not all computers can boot from a USB stick yet.
Oscar Spierenburg February 15th, 2007, 02:34 PM Andrey, how did you install Knoppix on the USB stick?
Matteo Pozzi February 15th, 2007, 02:43 PM Hi Andrey, have you seen that in April the new version of Ubuntu (the Feisty Fawn project) will be released, and in the same period Ubuntu Studio will also be released? http://ubuntustudio.org/
"Ubuntu Studio. A multimedia creation derivative of Ubuntu.
Ubuntu Studio is aimed at the Linux audio, video and graphic enthusiast as well as professional."
Maybe we can start making a new version of the live CD starting from this distro... it is more motion-picture oriented!
Andrey Filippov February 15th, 2007, 09:34 PM Andrey, how did you install Knoppix on the USB stick?
Spectr (spectr) and Dmitry (d.belimov) - both at our company did that - you may email them directly. I'll ask them to post to wiki.
Andrey Filippov February 15th, 2007, 09:36 PM Hi Andrey, have you seen that in April the new version of Ubuntu (the Feisty Fawn project) will be released, and in the same period Ubuntu Studio will also be released? http://ubuntustudio.org/
"Ubuntu Studio. A multimedia creation derivative of Ubuntu.
Ubuntu Studio is aimed at the Linux audio, video and graphic enthusiast as well as professional."
Maybe we can start making a new version of the live CD starting from this distro... it is more motion-picture oriented!
Yes, we are considering using Ubuntu and actually our partners in Switzerland already tried that. We are eager to donate some hardware and sponsor such development by Ubuntu community.
Charles Hurley February 16th, 2007, 01:12 AM If you guys are still looking for a viewfinder solution you might think about going old-school.
http://www.imagehosting.com/out.php/i224091_Picture4.png
http://www.imagehosting.com/out.php/i224090_Picture3.png
I've loosely followed this thread and enjoyed it. Take Care, Chuck.
Matteo Pozzi February 21st, 2007, 01:02 PM Very interesting lenses, but they are very expensive and you lose a lot of light with them, and for this type of adapter we need as much light as we can get!
If you use one directly with the camera, because the sensor is much smaller than 16mm film, you will get a tele lens starting from a wide one.
So it depends on what you want to do!
Ron Lemming February 22nd, 2007, 07:06 AM So, if this is a network camera, will it work with wireless ethernet?
Andrey Filippov February 22nd, 2007, 11:14 AM So, if this is a network camera, will it work with wireless ethernet?
http://www.google.com/search?q=wl330g. It is ~4W and can work as a client, not only as an AP. Unfortunately it does not have an external antenna connector, so I had to add one. And a small hack in the camera - it should be activated from the wired connection (like a simple ping to its IP) before it will start responding over wireless.
With dish antennas I connected over some 3km
Andrey
Rob Scott March 1st, 2007, 04:40 PM Andrey,
I've been thinking about the best way to utilize a 333/353 camera for filmmaking using an IDE hard drive. Storing the raw data -- even compressed -- seems like a long shot.
A while back, Wayne mentioned the possibility of using 16-bit grayscale JPEG to compress each of the Bayer "channels" separately, thus preserving the high bit depth and (hopefully) introducing very little noise.
What do you think of that idea? Would it be possible to adapt the existing JPEG FPGA code to support 16-bit grayscale?
Thanks,
Rob
Phil Stone March 2nd, 2007, 03:48 AM So, if this is a network camera, will it work with wireless ethernet?
I had the older 313 working over wireless to XP and the VLC player/recorder, but streaming video is limited by the lower bandwidth.
Wayne Morellini March 2nd, 2007, 09:26 AM $@%%$^ Crashed during my post, 756323847 Opera.
What are people currently working on? Maybe we should wait. But for now, why not test the quality of saving raw Bayer frames as grayscale JPEGs - no modification needed. Scaling the red and blue pixels to equivalent green values before grayscale compression might improve quality a lot, though, and they can be restored to Bayer in post-production. These are things we can do simply, now.
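Wayne's scaling idea can be modeled in software. The sketch below is a hypothetical Python/numpy illustration (not camera code), assuming an RGGB mosaic layout and per-scene white-balance gains; the function names are made up for this example:

```python
import numpy as np

def scale_bayer_to_green(mosaic, r_gain, b_gain):
    """Multiply R and B sites of an RGGB mosaic up toward green levels.

    Flattening the channel levels removes the strong pixel-to-pixel
    checkerboard that a grayscale JPEG would otherwise spend bits on.
    The operation is exactly invertible (before requantization), so the
    original Bayer data can be restored in post.
    """
    out = mosaic.astype(np.float64).copy()
    out[0::2, 0::2] *= r_gain   # R sites: even rows, even columns
    out[1::2, 1::2] *= b_gain   # B sites: odd rows, odd columns
    return out

def unscale_bayer(scaled, r_gain, b_gain):
    """Inverse operation, applied after decoding in post-production."""
    out = scaled.astype(np.float64).copy()
    out[0::2, 0::2] /= r_gain
    out[1::2, 1::2] /= b_gain
    return out
```

In practice the gains would come from the camera's white-balance estimate, and rounding after JPEG decode would make the round trip only approximately lossless.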
Scott, that is a good idea in the longer run (but I can't remember which one that was). Was it: take three separate color channels, do all the elimination stuff that JPEG does, compressing the green channel first, then do difference compression of the remaining color channels against the green (first scaling to green, preferably even comparing to the average of the surrounding green pixels (simple interpolation))? There are already people doing projects - what are they? Maybe we should wait, and do the simple grayscale experiments in the meantime. When you look at it, you could treat it like a 4:4:4 image: a full-resolution interpolated green frame could be produced, then red and blue interpolated frames compressed through difference. The interpolation is already done to get from Bayer to JPEG anyway; this is one extra step.
We should not get tied up on intra versus inter; simple inter compression can give us huge gains in quality, as it allows more into the current bandwidth, which means more quality. What will really make a codec shine at small bandwidths, though, is a sensor with a high signal-to-noise ratio, to eliminate noise and keep noise low in dark situations. Using a memory buffer to store the compressed frames and average out the flow would allow bigger, higher-quality GOP frames, and more space for the GOP at the start of scene changes etc. I typed this up before, but cannot remember it now. The inter compression could be done between subsequent adjacent JPEG-preprocessed images held in the memory buffer, to maintain even image quality. What is saved is space and the final steps of JPEG processing.
I have approached Andrey before about the possibility of using the main camera controller as a component and HDMI compressor/storage unit; with IDE we finally have the bandwidth to do this well. I have also been in contact with Tzero Technologies (and Analog Devices) in times past about their UWB wireless wavelet HDMI technology as a way to record HDMI to a computer, or through a direct USB version. Such a device could be ultra cheap and hook directly to your laptop or ultra-mobile PC - a no-brainer, saving directly to a portable hard disk enclosure. From memory, 100-200Mb/s 4:2:2 wavelet should be expected (faster is possible). Most of the work would already have been done; it is just a matter of interfacing USB to a reference design and arranging the driver, if that has not been done already. The price could beat the pants off a PCIe-based portable computer system, and it could attach to a camera.
Unfortunately, engineers tend to be more interested in their latest glamorous projects than in these simpler, faster solutions.
Thanks
Wayne.
Rob Scott March 2nd, 2007, 10:01 AM Was it: take three separate color channels, do all the elimination stuff that JPEG does, compressing the green channel first, then do difference compression of the remaining color channels against the green (first scaling to green, preferably even comparing to the average of the surrounding green pixels (simple interpolation))? There are already people doing projects - what are they? Maybe we should wait, and do the simple grayscale experiments in the meantime.
From my own experience, the two green channels should not be combined; you would separate the four channels -- R, B, G1, G2 -- and compress each one as a separate 16-bit-deep grayscale image. No interpolation would be done at this stage; the resulting 4 mini-JPEGs would be written to disk.
Offline, you would decompress and recombine the 4 channels, resulting in a true raw image with (hopefully) just a tiny bit of noise/distortion from the JPEG compression. Then, finally, the Bayer interpolation would be done in order to end up with a high-bit-depth file such as 48-bit TIFF or OpenEXR.
Obviously, this would only work if it was possible to adapt the JPEG FPGA code to support 16-bit grayscale.
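Rob's channel separation is straightforward to model. The following is a Python/numpy sketch under the assumption of an RGGB mosaic; the JPEG step itself is omitted, only the lossless split/merge around it is shown, and the function names are illustrative:

```python
import numpy as np

def split_bayer(mosaic):
    """Split an RGGB mosaic into four quarter-resolution planes,
    each of which could be fed to a grayscale JPEG encoder."""
    return {"R":  mosaic[0::2, 0::2], "G1": mosaic[0::2, 1::2],
            "G2": mosaic[1::2, 0::2], "B":  mosaic[1::2, 1::2]}

def merge_bayer(planes):
    """Offline recombination of the four decoded planes back into
    the original mosaic, before Bayer interpolation is applied."""
    h, w = planes["R"].shape
    mosaic = np.empty((2 * h, 2 * w), dtype=planes["R"].dtype)
    mosaic[0::2, 0::2] = planes["R"]
    mosaic[0::2, 1::2] = planes["G1"]
    mosaic[1::2, 0::2] = planes["G2"]
    mosaic[1::2, 1::2] = planes["B"]
    return mosaic
```

With lossless codecs the round trip is exact; with JPEG in between, each plane picks up only its own compression noise, which is the point of the scheme.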
Wayne Morellini March 2nd, 2007, 11:19 AM I suspected that might be the case. I think the two greens could be combined, but it might be tricky to get right mathematically; I suspect some combination where the red and blue are matched to alternate channels. The rest of my suggestions are only to improve performance over the scheme you mentioned, but it does not need to be done that way.
Wayne Morellini March 2nd, 2007, 11:29 AM Sorry, posted before I finished.
The JPEG compression routine might not act the same way across all four images; I think a normalised single grayscale image might do better. If we went the extra step to normalise the image, the existing interpolation section in the camera might handle it, by telling it to interpolate to green in a fashion that just scales the red and blue values.
Andrey has mentioned compressing the Bayer image as a grayscale JPEG before, so I think it would be supported. Did you get a camera?
What do you think, Scott?
Andrey Filippov March 2nd, 2007, 01:19 PM We used compressed color images as monochrome (with Bayer processing later) in our model 323 cameras (http://elphel.cvs.sourceforge.net/elphel/camera323/fpga/x3x3/color_proc.v?view=markup). Actually we rearranged the pixels in each 16x16 macroblock to reduce high-frequency components caused by color tone - that made compression more efficient.
As for more bits per pixel - I do not see any need for it with current sensors. The Micron 5MPix has about 8 ke- of pixel FWC, so even with its 12-bit output the number of levels that can be distinguished is far less than 4096. So I believe a "gamma" table (actually, a table optimized for noise performance) can compress the 12-bit range into 256 levels without sacrificing sensor data.
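One way such a noise-optimized "gamma" table could look is square-root encoding, since shot noise grows as the square root of the signal. This is a hedged Python sketch, not Elphel's actual table:

```python
import numpy as np

def sqrt_lut(adc_bits=12, out_levels=256):
    """Map 12-bit ADC codes to 8-bit values with square-root encoding.

    Shot noise grows as sqrt(signal), so equal steps in sqrt(signal)
    are roughly equal steps measured in units of noise - the sense in
    which a "gamma"-style table can fold 12 bits into 8 without
    dropping distinguishable levels.
    """
    codes = np.arange(2 ** adc_bits, dtype=np.float64)
    lut = np.round(np.sqrt(codes / (2 ** adc_bits - 1)) * (out_levels - 1))
    return lut.astype(np.uint8)
```

A real camera table would also account for read noise and the sensor's measured response, but the monotone sqrt shape captures the idea.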
Matteo Pozzi March 2nd, 2007, 05:48 PM I think that a good quality MJPEG file is more than enough... if JPEG is the standard in digital still cameras, why do we need more than good JPEG compression for video, where a picture remains for only 1/24 of a second? ... I prefer simplicity over big files!
Andrey Filippov March 2nd, 2007, 08:38 PM I made a simple JavaScript program that calculates the number of bits needed to represent image pixels without losing sensor data:
http://www.elphel.com/actualbits.html
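The calculation behind such an estimate can be sketched in Python. Assuming shot-noise-limited pixels with noise sigma(S) = sqrt(read_noise^2 + S) in electrons, the number of distinguishable levels is the integral of dS/sigma(S) over the signal range, which has a closed form. This is my formula for illustration, not necessarily the one on Andrey's page:

```python
import math

def distinguishable_levels(fwc, read_noise=0.0):
    """Count signal levels spaced one noise sigma apart.

    Integral of dS / sqrt(read_noise**2 + S) from 0 to FWC,
    evaluated in closed form. All quantities in electrons.
    """
    return 2.0 * (math.sqrt(read_noise ** 2 + fwc) - read_noise)

def effective_bits(fwc, read_noise=0.0):
    """Bits needed to represent the distinguishable levels."""
    return math.log2(distinguishable_levels(fwc, read_noise))
```

For the FWC of about 8500 e- mentioned later in the thread, and zero read noise, this gives roughly 184 levels (~7.5 bits) - consistent with Andrey's point that 256 output levels are enough for a 12-bit ADC on this sensor.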
Rob Scott March 3rd, 2007, 07:32 AM I think that a good quality MJPEG file is more than enough... if JPEG is the standard in digital still cameras, why do we need more than good JPEG compression for video, where a picture remains for only 1/24 of a second? ... I prefer simplicity over big files!
For simplicity, you'd be better off with a regular DV or HDV camera! :-)
But seriously, there are reasons to press for (nearly) raw images with as few compression artifacts as possible -- for example: video effects work, where reduced color space and noise will interfere with the quality of a chroma key mask.
Rob Scott March 3rd, 2007, 07:36 AM I made a simple javaScript program that calculates number of bits that are needed to represent image pixels without losing sensor data
Thanks for putting that together, Andrey, it's very instructive.
Question: How does pixel binning affect this? For example, you can configure the Micron 5MP sensor for 2x2 binning, resulting in a 1.25 MP image with less noise. How many effective bits of resolution would this image have?
Wayne Morellini March 3rd, 2007, 08:54 AM As for more bits per pixel - I do not see any need for it with current sensors. The Micron 5MPix has about 8 ke- of pixel FWC, so even with its 12-bit output the number of levels that can be distinguished is far less than 4096. So I believe a "gamma" table (actually, a table optimized for noise performance) can compress the 12-bit range into 256 levels without sacrificing sensor data.
Yes, the current sensors won't even do 8 bits of S/N. 10 bits is only really needed for professional colourisation etc., and for pushing images in other ways. Higher S/N than this is useful for low light, when you have to increase the gain, so it is not completely wasted. Another factor is that there are, mathematically, more bits to work with, as long as they can register real differences between adjacent pixels.
Need for a better picture versus straight MJPEG
To answer the other questions: it is about a higher quality image - the more something is blown up, the more obvious the defects, and it simply looks more stunning. If we can't get a higher quality image from a given camera than from a tape-based HDV camera, then why use it? I see the possibility for the Elphel to match the XDCAM ENG cameras, but beating them is what we should look at. 50-70Mb/s 8-10 bit 4:2:0 (100Mb/s 4:2:2 or 4:4:4) MPEG2 performance is what to aim at beating. At this level (with a sensor to match, even binned) we can hold our heads high, take our cameras out there, and never blame our equipment again ;). With simply compressed raw Bayer we could get into this range; with advanced Bayer compression we could achieve it. Most of these figures look meaningless on their own, but visually lossless - like Cineform RAW Bayer on the SI camera - is definitely the target we should aim for.
Another important reason is component and HDMI recording. Yes, you can do it, but you also have to hike a mini computer with you - and when I say mini, it is probably more akin to a shoe box, and also a significant expense on top of the camera. This restricts the type of work you can do conveniently - mainly to tripod work. Being able to do something the size of an HDV camera itself, with disk and screen, at just the price of the HDV camera, is an amazing advantage over an HDMI recording system. With this you can go portable and do regular small production work, versus having a computer box in a backpack or on the ground.
But this is a project of the people. A couple of years ago Andrey could have profited enough from it to justify doing it all himself, but those days I think are gone. Modern cameras and HDMI recording are the distractions that will limit sales, but there is still substantial potential.
Re-edit:
Matteo,
I forgot to mention, there is not enough bandwidth to do really good JPEG on the camera. We are limited to around DVCPRO HD quality levels - not quite as much as we need for pro stuff. With Bayer we should, hopefully, be able to double or even triple performance.
Wayne Morellini March 3rd, 2007, 09:02 AM Andrey,
About your grayscale Bayer JPEG: what form of grayscale scheme was it, can you describe it, and what quality of results were achieved?
Are there any sample pictures?
Thanks
Wayne.
Andrey Filippov March 3rd, 2007, 11:06 AM Thanks for putting that together, Andrey, it's very instructive.
Question: How does pixel binning affect this? For example, you can configure the Micron 5MP sensor for 2x2 binning, resulting in a 1.25 MP image with less noise. How many effective bits of resolution would this image have?
We have not tried this mode with the 5MPix yet, and I expect it would help only to lower the relative pixel readout noise. In the ideal case (as in a CCD), if you join the charges of 4 pixels into one _before_ the output amplifier tract (which adds noise), you gain 2x in S/N compared to just adding the pixel values in the picture (in that case the signal will be the same - 4x the individual pixel value - but the noise will also increase as the square root of 4, that is, two).
I'm not sure about the FWC in this sensor (BTW, we measured it to be about 8500 e-). Theoretically it is possible to make the output path capable of handling 4x the maximal value of individual pixels (as is pretty common with CCDs, which have 3 values for FWC - pixel, output register pixel and output capacitor - each larger than the previous) - I just don't think it is implemented. I believe the binning there is a supplemental mode targeted at low-light conditions, not at increasing S/N for large signals. But, as I wrote above, we haven't tested this mode yet.
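Andrey's factor-of-two arithmetic can be checked with a small Python sketch for the read-noise-limited case (function names and parameters are illustrative, not camera code):

```python
import math

def snr_digital_sum(signal_e, read_noise_e, n=4):
    """S/N when n pixel values are read out separately and added:
    signal scales by n, read noise by sqrt(n) (quadrature sum)."""
    return n * signal_e / (math.sqrt(n) * read_noise_e)

def snr_charge_bin(signal_e, read_noise_e, n=4):
    """S/N when the charge of n pixels is combined before the output
    amplifier: one read, so the read noise is paid only once."""
    return n * signal_e / read_noise_e
```

For n = 4 the ratio between the two is exactly sqrt(4) = 2, the gain Andrey describes; shot noise, which both schemes share, would reduce the advantage in practice.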
Andrey Filippov March 3rd, 2007, 11:10 AM Andrey,
About your grayscale Bayer JPEG: what form of grayscale scheme was it, can you describe it, and what quality of results were achieved?
Are there any sample pictures?
Thanks
Wayne.
Wayne, I just started to put together some design ideas for http://code.google.com/soc/ that we plan to apply for next week. It is in our wiki - http://wiki.elphel.com/index.php?title=SoC. It includes a demosaic project.
The current implementation is in
http://elphel.cvs.sourceforge.net/elphel/camera333mjpeg/fpga/x3x3/color_proc333.v?view=markup
(unfortunately the tab lengths mismatch, so the formatting falls apart).
If you scroll down the code you'll see comments describing the calculations.
Andrey Filippov March 3rd, 2007, 12:21 PM About your grayscale Bayer JPEG: what form of grayscale scheme was it, can you describe it, and what quality of results were achieved?
Are there any sample pictures?
Wayne, in each group of 16x16 Bayer pixels:
R001 G002 R003 G004 ... R015 G016
G017 B018 G019 B020 ... G031 B032
...
G241 B242 G243 B244 ... G255 B256
they are rearranged:
R001 R003 ... R015 G002 ... G016
R033 R035 ... R047 G034 ... G048
...
G017 G019 ... G031 B018 ... B032
...
G241 G243 ... G255 B242 ... B256
So each of the 4 8x8 blocks consists of the same color component. Then the data is compressed with the regular monochrome JPEG encoding.
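A software model of this reordering might look as follows - a Python/numpy sketch of what the FPGA does per 16x16 macroblock, assuming the RGGB layout shown above (the actual implementation is the Verilog linked earlier):

```python
import numpy as np

def rearrange_macroblock(mb):
    """Gather each Bayer component of a 16x16 RGGB macroblock into
    its own 8x8 quadrant, so the monochrome JPEG DCT sees four
    single-color blocks instead of a high-frequency color mosaic."""
    assert mb.shape == (16, 16)
    out = np.empty_like(mb)
    out[0:8, 0:8] = mb[0::2, 0::2]     # R
    out[0:8, 8:16] = mb[0::2, 1::2]    # G from even rows
    out[8:16, 0:8] = mb[1::2, 0::2]    # G from odd rows
    out[8:16, 8:16] = mb[1::2, 1::2]   # B
    return out
```

Since each 8x8 quadrant now holds a single color component, the checkerboard energy that would otherwise land in the highest DCT frequencies disappears, which is why the compression becomes more efficient.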
Odd Nydren March 4th, 2007, 07:06 AM ..very interesting discussion about colours and raw bayer!
I am very inclined to buy the new camera whenever it is released - hopefully April... but I have a few issues that I need to know how to solve, or whether they will be solved by Andrey or the team.
I do not mind being without a viewfinder, and I wouldn't even mind controlling the camera via wifi from my mobile phone using a custom GUI. The phone has a 320x240 pixel touchscreen + wifi.
..however, I would very much like to avoid having a laptop with me when shooting.
I know this is early days still and that I probably ask for way too much...but I need to ask:
1. Hard disk adapter: is this something that you need to design specifically, or can it be bought somewhere else?
2. Will some kind of raw or compressed Bayer stream written to the camera disk be developed this year? If not, what would be the next best thing available?
I do not mind extensive post processing once out of camera...and I do not mind using flashcards instead of harddrive.
If there were a solution to these issues... I would not hesitate to buy. I can live with all the other limitations... as there is really no other camera for that price that can do these resolutions and framerates... plus have an open architecture... which to me is really the way to go.
Am I wishing for too much too early? :) I hope not.
please let me know
thanks!!
//O.
Wayne Morellini March 4th, 2007, 07:48 AM Thanks for that, Andrey. I can see that - pooling similar pixels to get a compression advantage. How much advantage does it get out of that scheme? Does it have the ability to take one 8x8 green block and do difference compression against the other 8x8 green block, and the red and blue? In that case, you may well be achieving close to the best from JPEG.
A good question would be: if exactly the same image data were repeated four times in an image, would JPEG pick that out and use one copy as the basis for difference compression against the remaining copies? If that were the case, we would not need to treat the 4 Bayer components as four separate frames and do a difference between them; we could just make an extended frame with the four images one after the other, or a row of red and green images, then a row of green and blue images ;).
But with four separate images, a more intelligent prediction can be built in for difference compression (even arranged as three separate images in an image, or as 8x8 blocks as you have done). The first image, from the first green Bayer pixels, can be compressed by normal JPEG, and the next green could be stored as a difference from that, then compressed, and so forth for the other colours. A way to reduce the difference further, raising the compression ratio, might be to store the remaining colour images as the difference from a predicted value made by interpolating the adjoining surrounding pixels (by averaging), which is then compressed. This prediction could go further by examining the image for details that move in a direction, rise and fall in a certain way, areas, and edges, to decide which pixels to interpolate and use for the difference, but that is too elaborate and complex for what we need. Trying to get rid of the first green by storing it as the difference from the interpolation of the other pixels is where my brain starts to melt down, but I do believe there is an actual mathematical method somewhere that can store all four Bayer images as differences from each other and still restore all four.
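The simplest version of this - keeping the first green plane as-is and storing the second only as a residual against it - can be sketched as follows (Python/numpy, RGGB layout assumed; the helper names are made up for illustration):

```python
import numpy as np

def green_difference(mosaic):
    """Keep G1 and store G2 only as a residual against G1.

    On smooth image areas the two green planes are nearly equal, so
    the residual is close to zero and compresses much better than the
    raw G2 plane would on its own.
    """
    g1 = mosaic[0::2, 1::2].astype(np.int32)  # G on even rows
    g2 = mosaic[1::2, 0::2].astype(np.int32)  # G on odd rows
    return g1, g2 - g1

def restore_greens(g1, residual):
    """Lossless inverse of green_difference."""
    return g1, g1 + residual
```

The same pattern extends to predicting R and B from an interpolated green, at the cost of the prediction logic Wayne describes.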
It is amazing that the digital still camera industry never concentrated on Bayer-related codec performance, as most of their cameras are purely single-chip.
Wayne Morellini March 4th, 2007, 07:58 AM Odd, that gives me a thought. I have been looking at doing a third-world computer system based on media player or mobile phone architectures. Phones and media players offer a cheap way to view and control the camera (except for the general lack of network interfaces on them), with the disk being on the camera itself. The Sony PSP has a wireless network interface, and there are some Linux-based gaming/media handhelds (Game Park and related companies).
Andrey Filippov March 4th, 2007, 10:55 AM Wayne, yes - it is possible to do something like that - it may turn out to be a wavelet variation for Bayer data. In any case it makes sense to simulate everything in software on the model images before trying to implement it in FPGA - in software it is much easier.
Andrey Filippov March 4th, 2007, 11:10 AM 1. Hard disk adapter: is this something that you need to design specifically, or can it be bought somewhere else?
We will make something to start experimenting with hard drives. Microdrives could even fit in the same camera body, but with the new 16GB flash cards - will they survive at all? Using laptop disks (more practical) will need a larger camera body.
Another concern - I haven't tested the IDE port on the 10353 board yet, and we will get to the 10357 (http://wiki.elphel.com/index.php?title=10353#10357) probably in April. The first 353 cameras are designed to have an IDE port, but it is not tested, so there is some probability that a new board revision may be needed (if I did something wrong in that simple part).
2. Will some kind of raw or compressed Bayer stream written to the camera disk be developed this year? If not, what would be the next best thing available?
The algorithm described above, with reordering of the Bayer components and treating the result as monochrome, will be available very early - it will be the main mode for the larger sensors.
I do not mind extensive post processing once out of camera...and I do not mind using flashcards instead of harddrive.
If there were a solution to these issues... I would not hesitate to buy. I can live with all the other limitations... as there is really no other camera for that price that can do these resolutions and framerates... plus have an open architecture... which to me is really the way to go.
Am I wishing for too much too early? :) I hope not.
please let me know
thanks!!//O.
Your requirements overlap with those I'm already working on. So I don't believe it is "too much" - maybe only somewhat "too early" :-)
Wayne Morellini March 4th, 2007, 11:37 AM Wayne, yes - it is possible to do something like that - it may turn out to be a wavelet variation for Bayer data. In any case it makes sense to simulate everything in software on the model images before trying to implement it in FPGA - in software it is much easier.
Do you mean that you are working on wavelet compression? I have talked to the open-source BBC Dirac people about their open FPGA work and your camera - are you working with that?
The Cineform RAW technology is worth looking at, to see their version and how they did difference compression and wavelets (not that I have read it yet, but I heard this was the case). The interesting thing is that the licensing for their version is quite low; I don't know about FPGA licensing:
http://www.cineform.com/technology/Cineform_RAW.htm
http://www.cineform.com/technology/Cineform_RAW_060413.pdf
I am myself curious about 2D and 3D wavelet compression as an alternative way of doing inter compression.
Congratulations, wish you the best Andrey.
Wayne Morellini March 4th, 2007, 11:58 AM Hmm, it doesn't look like that link has the actual raw tech mechanism. I thought David Newman over at Cineform posted a link; I think he might even have sent me a copy (I still don't think I got to read it beyond a skim).
Odd Nydren March 4th, 2007, 12:01 PM Andrey: Excellent! I really look forward to what is coming! I'll be patient. A laptop hard drive, I think, would be optimal!
Wayne: I'm glad you find the mobile phone solution interesting...
..designing my own control GUI that works on a 320x240 pixel touch screen will be a breeze. It's a small screen... but extremely portable + has a slip-out keyboard. (I read & post on this forum using it & 3G.)
None of the other solutions we have seen here on dvinfo can boast an in-camera hard disk solution... and controlling the camera like this makes for a very portable setup.
Regarding lenses:
I plan to make a ground glass solution for the camera using off-the-shelf Thorlabs adjustable tubing, a moving Canon Ee-S/Ee-A focusing screen like the one from http://www.jetsetmodels.info/news.htm
and a Canon lens to CS mount adapter (found on eBay for around 40 USD) - this means no custom parts... just screw the parts together & a bit of work to adjust focus etc. I still need a macro solution between the ground glass and the Elphel... I'll post more when I have found something.
If at a later stage Andrey adds Canon lens control to the Elphel board, this lens adapter would be easy to modify to add lens control via the GUI!! (basically a wired connection between the Canon CS adapter & the 353 board + an update to the GUI)
No other camera solution I've found out there is this promising! :)
Yes... initially I would have no viewfinder (maybe I can stream a small preview to the phone, though) and would have to measure distance to adjust focus... but that's a small trade-off for having a camera that is open, adaptable, portable and can truly grow feature-wise. A cool bonus to have the control GUI on a wireless device.
Andrey: I know we are not a large group of customers at the moment... but I'm sure that can change very quickly once there are a few working setups that show people what can be done! (especially if they are built with off-the-shelf standard parts, which is what I intend to do)
Thanks for posting your camera developments so regularly!
//O
Andrey Filippov March 4th, 2007, 12:35 PM Do you mean that you are working on wavelet compression ? I have talked to the Open source BBC Dirac, about their open FPGA and your camera, are you working with that?
Wayne, I'm not yet working on wavelet compression - just keeping it in mind. I also count on somebody else being able to implement it with the help of our hardware. The Dirac people had one of the first 333 cameras, but it does not seem it was useful for them.
Andrey Filippov March 4th, 2007, 01:06 PM Odd,
you probably know that there are many motorized C/CS lenses (just iris; iris+focus; iris+focus+zoom), but there is no single standard for them - different iris controls, different voltages, no standard connectors for anything but the 4-pin iris. It is complicated by the fact that C/CS is a thread, not a bayonet, so most lenses come to camera manufacturers with no connector at all, and each manufacturer uses a proprietary solution.
I was thinking of developing a bayonet mount that is closer to the sensor than the CS-mount, so it would be possible to make a this_proposed_bayonet-to-CS adapter. Then mount a small (5mm wide) PCB with a programmable microcontroller inside this adapter, solder the wires from the motorized lens directly to the PCB, and have a motorized lens with a bayonet mount. We designed such a board - http://wiki.elphel.com/index.php?title=10331 - it has just 2 contact pads that provide both power and data; different programs might be used to accommodate different lens controls.
But we got stuck with the mechanical design for the bayonet - it would be nice to have it really strong (motorized lenses are heavy), and it would also be nice to be able to seal such a mount (with the adapter converted to a hermetic lens enclosure). The 333/353 boards themselves are designed to use a sealed network connector ( http://www.rjfield.com/ethernet_connectors_rjf_en.htm ).
Maybe such bayonet mount should have additional bolts that could be used if the lens is heavy and/or sealed connection is needed.
Andrey Filippov March 4th, 2007, 01:12 PM It is possible to add EOS control to some universal extension board for the 10353, but the CS-to-Canon adapter does not have electrical contacts, so there is no easy solution for that.
Andrey Filippov March 4th, 2007, 01:19 PM Andrey: I know we are not a large group of customers at the moment...but I'm sure that can change very quickly once there are a few working setups that show people what can be done! (especially if they are built with off-the-shelf standard parts, which is what I intend to do)
I also believe it is very important to be able to use as many off-the-shelf components (like the cell phone you suggested) as possible instead of building everything in-house. This will provide much more flexible, easily upgradeable solutions. And those solutions require fewer development resources.
Thanks for posting your camera developments so regularly!
You are welcome
Odd Nydren March 4th, 2007, 04:03 PM Thanks for all the info Andrey! Some of it I knew...other things, I sure didn't :)
The main reason I'm interested in EF lenses, despite the obvious problems with them being electronically controlled, is that I already have several of them for my Canon 20D...another reason is that I hope to control them electronically in the future. (only focus can be manually controlled when they aren't powered...and on some of the older ones not even that... as you most probably already know)
..also I saw the article on the wiki about your board after you first mentioned it and it sounds very exciting!
http://www.birger.com/Merchant2/merchant.mvc?Screen=ef232_home
This company makes an adapter that can control and power EF lenses and adapts them to C-mount...although it costs something like 1000 USD. That puts it totally out of reach for me.
However...they sell a version without the controller part for something like 150 USD, and that might still have the connector on it. This made me think I really should get an EF lens extension ring and see how hard it would be to modify it. (maybe this is what they do)
So the way I see it there are two possible solutions:
- try to add a lens connector to the EF-to-CS mount adapter I found on eBay...like you say, it comes without a connector. (Probably too hard...and strays too far from the off-the-shelf thinking in my opinion.)
- look at modifying a canon EF extension ring.
I will order one of these and see what I can come up with.
A few questions:
1. When a lens is connected to your extension board and then connected to the 10353...can you send commands from a web browser to the lens? ..also - can you read values from it?
2. how much would that extension board be? and if/when will it be available for purchase?
please let me know! :)
//O.
Andrey Filippov March 4th, 2007, 05:13 PM This company makes an adapter ...
I have one.
- look at modifying a canon EF extension ring.
That was what I initially did myself.
A few questions:
1. When a lens is connected to your extension board and then connected to the 10353...can you send commands from a web browser to the lens? ..also - can you read values from it?
So far I do not have such a board - lens control was implemented in our model 323C camera (the plain 323 used Nikon lenses with no control) and it is now part of the 10347 that I'm working with.
As the protocol is not published, it requires some guesswork and experimentation to control particular lenses, and our in-camera web page definitely allows this control (and reading back too). Here is the web page code (the application source is in the same directory) - it is not too helpful without the actual hardware.
http://elphel.cvs.sourceforge.net/elphel/camera323/apps/ccd/lensraw.html?view=markup
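Since the camera serves its control pages over HTTP, driving the lens from a script boils down to fetching a URL with the right query parameters. The sketch below only shows the general idea: the `/lensraw` path and the `cmd`/`val` parameter names are made-up placeholders, not the actual interface exposed by lensraw.html and its companion application.

```python
from urllib.parse import urlencode

def lens_command_url(camera_ip, command, value):
    """Build a GET URL for a hypothetical in-camera lens-control CGI.

    The path and parameter names are illustrative only; the real
    interface is whatever the in-camera application exposes.
    """
    query = urlencode({"cmd": command, "val": value})
    return "http://%s/lensraw?%s" % (camera_ip, query)

# A browser (or urllib.request.urlopen) would then fetch this URL:
url = lens_command_url("192.168.0.9", "focus", 120)
print(url)  # http://192.168.0.9/lensraw?cmd=focus&val=120
```

Reading values back would work the same way: issue a GET and parse whatever the camera returns, which again depends entirely on the in-camera application.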
2. how much would that extension board be? and if/when will it be available for purchase?
I cannot promise such a board, as there is no solution so far for production use of it (I explained the reasons in a previous post). So I'll add such circuitry (similar to that of the 10347 board) only if there is some extra room on the board and if it has 5V power (preferably 6V - 5V is not enough for some lenses), i.e., for a USB port.
I'm also planning to make a universal extension board with fine-pitch connectors included, for experimentation. The electrical part of the lens interface is rather simple, so you could add it yourself if we don't have the board you need (at least in the near future).
Wayne Morellini March 4th, 2007, 11:35 PM Wayne, I'm not yet working on wavelet compression - just keeping it in mind. I also count on somebody else implementing it with the help of our hardware. The Dirac people had one of the first 333 cameras, but it does not seem it was useful for them.
Maybe the FPGA inside the 333 was not enough, but then there is the 353. I can tell you that the BBC would probably prefer a sensor with an S/N ratio of 48 dB+ for 8-bit, 60 dB+ for 10-bit, and latitude similar to a professional camera. These days the S/N and latitude of older cameras have been surpassed, and the next generation should be great, but those old characteristics are important. The BBC managed many great productions on Digital Betacam and had problems with DVCPRO HD, so they probably expect something like an HD version of Digital Betacam. If you wanted to talk to them, asking about a better sensor might spark their interest. I am sure there is a suitable cheap option without having to resort to an Altasens.
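As a rough sanity check on those numbers: each clean output bit corresponds to about 20*log10(2) ≈ 6.02 dB of signal-to-noise, so the figures above line up with the stated bit depths. This is only the usual rule of thumb, not a claim about any particular sensor:

```python
import math

def snr_db_to_bits(snr_db):
    # One bit of clean output per 20*log10(2) ~= 6.02 dB of S/N
    return snr_db / (20 * math.log10(2))

print(round(snr_db_to_bits(48), 1))  # ~8.0 -> matches 8-bit
print(round(snr_db_to_bits(60), 1))  # ~10.0 -> matches 10-bit
```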