View Full Version : High Definition with Elphel model 333 camera



Phil Stone
December 18th, 2006, 05:56 AM
Controlling the camera is a bit more tricky, one has to write some interface sw for that, but that's not very hard.

Zsolt

The 333 does have an HTML control screen. It takes a bit of getting used to, but it's all there, from white balance & auto exposure to a histogram & colour correction. It doesn't look all that different from the one on the Silicon Imaging camera, just a tad more complex as you really can adjust everything. With a laptop it's a bit fiddly, so I use a trackball taped to the laptop when filming on the move with it. But for real film making with actors etc. you will want something much simpler with all the options hidden away. Perhaps a way to save the settings for different filming conditions?

Rob Scott
December 18th, 2006, 08:05 AM
Perhaps if it's just for preview a wireless server would work best? Imagine using a Pocket PC to watch a low-bandwidth video stream direct from the camera and being able to adjust it remotely, without any cables.
Perhaps a Nokia 770 (http://www.nokiausa.com/770/1,7841,feat:1,00.html) would be suitable for the viewfinder. It's based (almost?) entirely on an open-source stack and appears to be fully customizable.

Oscar Spierenburg
December 19th, 2006, 05:47 AM
Hi Phil,

I've been away from the board and the camera for too long...
I remember you talking about using the camera on the windows OS. How did you get that working?

I'm slowly picking things up, so I'll post things again when I have the time. My 35mm adapter is working fairly well with the 333 and I have ordered a wide angle lens.

Zsolt Hegyi
January 5th, 2007, 06:50 AM
I gave up the thought of using 35mm lenses with converters because of their low resolution. Therefore I won't have control over depth of field, but what the heck. Image quality is more important to me.

So I began thinking about C-mount lenses: Fujinon has some new 5-megapixel models designed for 2/3" sensors. The bad news is that they have no lens shorter than 12.5mm (except the 1.8mm fisheye). On a 1/3" sensor this corresponds to roughly a 95mm telephoto (in normal 35mm-film terms), which is unusable in most recording situations. These lenses are expensive as well.

If we go down to the normal "megapixel" range, we can find lenses from several manufacturers down to 4mm focal length, but usually rated for no more than 1.5 megapixels. Since with 2x2 binning we'll only get 1.25 megapixels out of the sensor this might be enough; however, those four-pixel groups won't have as much dynamic range as they could with a better lens.
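
For anyone who wants to sanity-check those numbers, here is a minimal Python sketch of the arithmetic; the sensor diagonals are approximate nominal values (the exact crop factor depends on which figures you assume), so it only roughly matches the ~95mm estimate above.

    # Rough focal-length equivalence and binning arithmetic (illustrative values only).
    FULL_FRAME_DIAG_MM = 43.3   # diagonal of a 36x24mm "35mm film" frame
    THIRD_INCH_DIAG_MM = 6.0    # nominal diagonal of a 1/3" sensor

    crop_factor = FULL_FRAME_DIAG_MM / THIRD_INCH_DIAG_MM      # about 7.2x
    equiv_focal_mm = 12.5 * crop_factor                        # about 90mm, i.e. a telephoto
    print(f'12.5mm on a 1/3" sensor acts like ~{equiv_focal_mm:.0f}mm on 35mm film')

    # 2x2 binning quarters the pixel count of the 5-megapixel sensor:
    print(f"5 Mpixel binned 2x2 -> {5.0 / 4} Mpixel")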


Zsolt

John Wyatt
January 13th, 2007, 02:28 PM
Zsolt -- you might consider taking a chance with secondhand lenses. If you get one of the better C-mount lenses (Angenieux, Schnieder, etc), but the image looks softer than you would have expected, it probably means the lens needs collimating. This can happen with the older lenses; many of the popular secondhand ones are from 16mm cameras dating from the 60's and 70's (elements can get a little loose over time). With these lenses it might be considered as part of the cost to get the lens professionally serviced (set once again to the factory ideal). This adds to the cost but might still be good value compared to some of the new HD resolution lenses...

Zsolt Hegyi
January 14th, 2007, 04:02 AM
These 16mm-format lenses project onto a 16mm image circle, so on a 1/3" sensor they behave as telephoto lenses. So image quality is only the second consideration, after focal length.

Zsolt

Juan M. M. Fiebelkorn
January 15th, 2007, 05:20 PM
Sorry for saying this, but I've got the feeling that you are all going the way we followed 2-3 years ago.
Anyway, you all have my support.

Zsolt Hegyi
January 16th, 2007, 02:55 AM
Sorry for saying this, but I've got the feeling that you are all going the way we followed 2-3 years ago.
Juan, could you please be more specific? I think we all have followed a lot of threads here on dvinfo for quite a while so if you write a few things about your way then we'll understand.

But anyway I think it's clear to all of us that this Elphel-based camera of ours won't change the world as, for example, the SiliconImaging or the Red cameras do. We're aiming at smaller things than those guys, and even than the guys on the DIY-HD threads here. Our ambitions are modest. Personally, however, I think the price/performance ratio of this thing, if we can make it, will beat those other cameras'.

Thanks,
Zsolt

Juan M. M. Fiebelkorn
January 19th, 2007, 12:15 AM
Ok, I will try again. The first time I wrote a lot of things, but I don't know how it all got lost.

I will try being more "specific" :)

In case nobody knows it, I was quite involved, outside of this forum, in the early stages of planning/development of SI's camera.
Together with a partner we manufactured a mechanical shutter test-bed and PL mounts that we sent to Ari Presler.
Our resources were quite limited at that time so maybe our stuff wasn't the best out there but, hey, at least we did it!!
Now we have a Haas CNC machining center, much better...

Ari Presler is a really smart guy and he has a sharp sense for business.
At the beginning I was heading for FPGA compression at the camera head level. Ari decided to go for the x86 software route, so he could get technical and commercial support from Intel and Cineform, and even have the option of Adobe Premiere installed on camera so you can use the camera for editing, Ikegami style.
The rest of things are a really long story, and I'm not in the mood right now for writing them down here.
End of story is that, after several delays caused by hardware limitations relative to Cineform's processing requirements, and after waiting for the arrival of the Pentium-M/Core Duo and PCI Express, the camera got ready.

Right now the thing most similar to what I first envisioned (even though I don't like its external design) is the RED camera. (If you don't believe me, just search my posts going back almost 3 years.)

Now my own opinion:

USB may sound nice at first, but it is an asynchronous, crappy connection with too limited a bandwidth. So we are forced to compress heavily using Theora (which is 8-bit limited, BTW).
If we just had a way of connecting the Elphel 353 through PCI to a mobo and a driver for it, we would have a whole system ready.
Instead of needing Theora compression, we could use a much simpler compression scheme. I've tested an integer-transform lossy compression myself, based on expired patents mostly from the 70's, which is not computationally intensive and which doesn't produce blocky artifacts when compressing a Bayer pattern. If I can do it, anyone out there can. That is why I was always asking for 10-bit raw Bayer output, with no success.
Need a little bit more compression with unnoticeable losses? You could also apply some ideas I posted here when discussing a codec with D. Newman long ago.
If Theora works on the Elphel camera, this one will also.
Of course the codec SHOULD be open source so anybody out there can implement it in anything from VFW to DShow to QuickTime. So no worries about NLE support.
The old and hated PCI bus gives us at least 100 MB per second to play with, so we can send two streams from the camera head (if necessary): one is the full-resolution compressed Bayer, and the other could be a quarter-resolution, 8-bit-per-color-channel image which wouldn't need debayering and could also be lightly compressed so it can be sent to the viewfinder.
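
A rough Python sketch of that bandwidth budget, just to show that both streams fit in 100 MB/s; the resolutions, bit depths and compression ratios below are illustrative assumptions, not measured figures.

    PCI_BW = 100e6                              # ~100 MB/s to play with on the PCI bus

    # Stream 1: full-resolution compressed Bayer, e.g. 1920x1080 at 24fps, 10 bit,
    # assuming a modest 4:1 compression ratio.
    bayer_raw = 1920 * 1080 * 24 * 10 / 8       # bytes/s before compression
    stream1 = bayer_raw / 4

    # Stream 2: quarter-resolution (960x540) 8-bit-per-channel viewfinder feed,
    # lightly compressed at 2:1.
    stream2 = 960 * 540 * 3 * 24 / 2

    print(f"{stream1 / 1e6:.0f} + {stream2 / 1e6:.0f} = "
          f"{(stream1 + stream2) / 1e6:.0f} MB/s of {PCI_BW / 1e6:.0f} MB/s available")
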
Let's say the Elphel 353 costs 1800 dollars, and a Nano-ITX board, which is 12cm x 12cm (4.5 inch), goes for 400.
For those 400 dollars you get everything you could need: two VGA outputs (one for a viewfinder, 3D glasses or whatever, and the other maybe for an external monitor), TV-encoded output (PAL or NTSC), HDTV output, several audio inputs and outputs, and the option of compressing them with many different codecs.
You don't need the biggest CPU for these tasks.
Want to use NTFS, HFS+, EXT3? You have it. You can install a full Linux.
Want to use RAID, so as to avoid any possible data loss? You have it.
Want to output a DV stream through FireWire at the same time? You probably can.
Anything you can do with a PC is at hand.
Want to customize your camera because you know C programming? Go ahead! If anything goes wrong, just reinstall the software and you are ready.

As final words: this configuration isn't in fact a lot different from what the SI camera is. It is just much more flexible and powerful.

I hope my thinking is clearly expressed. Because I'm not a native English speaker, I'm sure I made a lot of mistakes.

Zsolt Hegyi
January 19th, 2007, 04:32 AM
Thanks for the detailed answer Juan. Our projects have much in common (Elphel, fpga lossless encoder) but they're still fundamentally different at the same time.

As for PCI, the Gerber files will be available, so if someone has the intention it's fairly easy to design a PCI interface, make some boards, plug one into the ITX, write a driver and recording software for it, maybe a Premiere plugin for the proprietary format, and go.

What I'm planning is much simpler: write an FPGA encoder, disable the camera's streamer and replace it with hard disk recording software which can be controlled through USB with a numeric keypad, and put a MiniDV camera beside the Elphel to function as a viewfinder. And I need a Premiere plugin to process the recorded data, but I've already written that.

I don't think that either of the projects should stop. Neither of them is futile. I think that the more people and the more projects are involved in modifying the Elphel, the better the chance that someone will come up with a usable result, so I'm glad that you too are working on it. Moreover, if we have two types of encoders for the FPGA then we can test both of them and choose the better of the two.


Let's say Elphel 353 costs 1800 dollars

The planned price won't be much different from the 333's, which is around $800.


You don't need the biggest CPU for this tasks.

I'm not sure about that. The ITX boards have fairly limited CPU performance compared to desktop ones. If you push huge amounts of data in through the PCI bus you have the chance to record it without involving the CPU, but if you want a real-time viewfinder image then you must do a lot of computation on the fly. If you're going HD then you might find the ITX boards too weak. Have you got any test results so far?


Zsolt

Juan M. M. Fiebelkorn
January 19th, 2007, 07:09 AM
Ok. Not trying to sound aggressive, but I feel that we got lost in translation.
Most of the explanations I wrote don't seem to have come across at all. If you can, please read my post above again.

Second try:

You said

"I'm not sure about that. The ITX boards have a fairly limited CPU performace compared to the desktop ones and if you push huge data in through the PCI bus although you have the chance to record it without involving the CPU but if you want a real-time viewfinder image then you must do a lot of computations on the fly. If you're going HD then you might find the ITX boards too weak. Have you got any test results so far? "

In reply to this

"we can send from the camera head two streams (if necessary) one is the full resolution compressed bayer and the other one could be a quarter resolution 8 bit per color channel image which wouldn't need debayering and could also be lightly compressed so to be sent to viewfinder."

My question is, where are those X86 computations you mention applied?

The second, quarter-resolution, 8-bit-per-color-channel, slightly compressed stream is supposed to come from the camera head and doesn't need demosaicing because it already has one color sample per pixel. The only computation, if any, would be decompressing it. You could even send a 4:1:1 or 4:2:0 stream instead. The conversion to RGB will be performed inside the GPU (Graphics Processing Unit), as will scaling.

(SI at the beginning said that they needed a GeForce FX just to display a crappy 1024x768 2D video.)

If a 1 GHz Nano-ITX can decode MPEG-2, why not decompress something a lot simpler?

(SI's camera, as I said before, needs a Pentium-M because of the wavelet-based Cineform codec, which is really heavy, not because of problems displaying the incoming video stream.)

On another note, 99.9% of people don't know how to design a PCB or code an FPGA (in VHDL or Verilog, whatever you like). I'm among that number. I don't even know how to code in C, don't know how to perform operations at the bit level, and can't even implement a Huffman table. What I did was just some internal testing to check image quality and compression ratio.

There are enough examples of things not working because of the lack of a suitable platform for coders to work with, and I fear that this one, at least within the video camcorder area, could become one of them.

Nano-ITX

http://www.via.com.tw/en/products/mainboards/motherboards.jsp?motherboard_id=361

http://www.digit-life.com/articles2/via-epia-n/via-epia-n.html

Mini-ITX

http://www.via.com.tw/en/products/mainboards/motherboards.jsp?motherboard_id=450

Final Note: MS Windows is a no go for this system.

See you.

Zsolt Hegyi
January 19th, 2007, 11:19 AM
"we can send from the camera head two streams (if necessary)"

Whoops, my bad. It seems I accidentally deleted this part when replying, so it got left out. Answering this point: I'd be glad if we could send two streams from the camera head. After a quick review of the 333's FPGA architecture I came to think that we can't, but we need Andrey's answer on this because I'm still not sure what modifications would be needed to achieve this.


Final Note: MS Windows is a no go for this system.

Agreed...


Zsolt

Wayne Morellini
January 21st, 2007, 12:14 PM
..
Ari Presler is a really smart guy and he has a sharp sense for business.
At the beginning I was heading for FPGA compression at the camera head level. Ari decided to go for the x86 software route, so he could get technical and commercial support from Intel and Cineform, and even have the option of Adobe Premiere installed on camera so you can use the camera for editing, Ikegami style.

Hmm, exactly what I was suggesting back at the start of the digital cinema camera projects. I was against doing an FPGA and a custom camera for the first cameras (due to the complexity), and also suggested using the Elphel back then because it was already designed. Unfortunately, it turned out that modifying machine vision software for live recording was actually a relatively simple thing, according to people from machine vision companies (which now have software versions that do exactly that) whom I talked to in the last year or so. The only people who listened to me, and asked for my advice, went on to actually make commercial cameras.

I think you should all get together, in a spirit of open source and brotherly love, and make as many codec versions as you like, and make the camera work, of course?

Wayne Morellini
January 21st, 2007, 12:39 PM
I forgot: the Elphel really needs better sensors than the previous Microns; this is the single biggest problem. While we can jazz the color up, even normal CMOS HDV cameras are starting to produce good latitude and low noise. With a better sensor this camera could maintain an edge over HDMI capture solutions on cheap cameras; the problem is that this may just be infeasible. The solution would be a much more expensive sensor board for indie people, using the best of the second-class solutions (even the special Ibis implementation as implemented on the Drake). In reality, I have only come across a few sensor technologies that lead for the price: FF and Kodak are the old ones, Foveon, and another that I posted here before with 170dB latitude and super, super sensitivity (the name slips me). Any others I can't remember.

I have talked with Andrey before about the possibility of using the Elphel unit as an encode/capture device for other cameras. I think, as far as Andrey is concerned, if you want to do it, then you are welcome to. Taken to its natural conclusion, if the camera were designed to take not just component/HDMI but also USB/FireWire/GigE/machine-vision inputs, then it could replace a computer for acquisition from other box cameras.

Jason Rodriguez
January 21st, 2007, 05:13 PM
Juan is right . . . for "quadlet"-based demosaic you don't need a powerful graphics chipset . . . we've been able to do it on any of the modern Intel embedded chipsets such as the GMA900 and 950. Quadlet is REALLY easy since you're just either averaging or picking one of the green Bayer channels, and then combining that with an R and a B channel to make an RGB pixel . . . the resolution is a quarter of the full HD resolution, but it's VERY fast to do.
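
A minimal Python sketch of that quadlet mapping, assuming a GRBG pixel order (the actual readout order of the sensor may differ); each 2x2 Bayer group collapses to one RGB pixel, so no interpolation is needed and the output is quarter resolution.

    def quadlet_demosaic(bayer, width, height):
        """bayer: flat, row-major list of sensor values; GRBG pattern assumed."""
        out = []
        for y in range(0, height, 2):
            row = []
            for x in range(0, width, 2):
                g1 = bayer[y * width + x]            # green
                r = bayer[y * width + x + 1]         # red
                b = bayer[(y + 1) * width + x]       # blue
                g2 = bayer[(y + 1) * width + x + 1]  # green
                row.append((r, (g1 + g2) // 2, b))   # average the two greens
            out.append(row)
        return out                                   # (height//2) x (width//2) RGB tuples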

You'll have issues on an 855GME-based system because of the way it was constructed, with its limited memory bandwidth and northbridge/southbridge interlinks, but the 915 and 945 chipsets will work just fine for ingest and display of HD quadlet data in real time.

With the Core 2 Duo, Mini-ITX boards can be VERY fast. And if you're not requiring CineForm compression or something of the equivalent, then using one of the LV versions of those chips (so only 10-15W max TDP) would seem like a pretty no-brainer solution as it could be passively cooled (no fan required, just a really big heat-sink) and easily battery powered.

Zsolt Hegyi
January 22nd, 2007, 08:28 AM
even the special Ibis implementation as implemented on the Drake

What's with the Drake nowadays? Are they still in business? If they aren't, then we might just ask them to provide us with the design of that sensor board and attach it to the Elphel...

Zsolt

Matteo Pozzi
January 23rd, 2007, 02:04 AM
For me the images from the Elphel 333 look better than the Drake's... but this is my opinion.

Wayne Morellini
January 23rd, 2007, 03:27 AM
Well, I'm back. I have discovered a new CCTV sensor promising 2000 times more low-light sensitivity, and a color hi-res version:

http://www.marketwire.com/mw/release_html_b1?release_id=198786

Good news, there are more options now.

Wayne Morellini
January 23rd, 2007, 03:51 AM
What's with the Drake nowadays? Are they still in business? If they aren't then we might just ask them to provide us with the design of that sensor board and attach it to the Elphel...

Zsolt

I don't know, Rai has never told me (just as nobody has ever mentioned what happened to the Sumix camera, which was a very good opportunity for them). The Drake was a very good opportunity, but unfortunately it would need to be priced against the Red/Silicon, which means higher resolution or a lower price. I take it that the Drake is gone. Whether you can pry the sensor board design from anybody's dead hands is another thing. But even though it had a decent well capacity and S/N (through the external circuit) and good latitude, it would be good to look at alternatives too. It would be interesting if Cirrus re-targeted the FF Ibis chip towards higher quality or lower price.

I am now of the mind that 50/60fps and a pocket camera would also be useful for portable production, on the video side, alongside a full digital-cinema-type camera. Unless you can spend the time to do cinematic-style shooting (to reduce judder in moving objects), the smaller size and higher frame rate matter.

Wayne Morellini
January 23rd, 2007, 04:51 AM
More news on that sensor: it appears that it uses a quantum effect to move thousands of electrons for each photon. I mentioned other research, a number of years ago, that got two electrons moved by one photon; now it looks like quantum efficiency (QE) ratings have just gone out the window.

http://www.pcmag.com/article2/0,1895,1908418,00.asp

Matteo Pozzi
January 23rd, 2007, 05:53 AM
I also saw this info 2 weeks ago, but I've found no images produced by this sensor and no spec info about it. Let's keep an eye on it.

Nikos Olsen
January 23rd, 2007, 06:06 AM
Wow, I guess I managed to read the whole thread! This project seems truly interesting.

What I was wondering is: when the 353 arrives it will most likely have support for an HDD, and you could use something like a Nokia 770 to control it. But how are you going to monitor the video? Through the Nokia's screen? Has it got enough resolution to adequately pull focus and such? It is, after all, HD. Or would it be possible to add an additional monitor? Or would a UMPC be the best solution, if it has a high-res screen?

Zsolt Hegyi
January 23rd, 2007, 07:14 AM
I always forget that with the new Micron sensor we can go up to 60fps, which is around the minimum if we want to avoid the use of a mechanical shutter. I think fps is the first criterion to choose by, and only then can we take a look at other parameters. (And its resolution is also really good for the Elphel, because it's around the limit we might be able to compress and record with the camera.)

Btw, about this new sensor they write:

the sensor can capture images when light level is less than 1 lux (or up to 0.1 lux).

This can mean that it won't work when the light level is higher than that threshold.

Zsolt

Nikos Olsen
January 23rd, 2007, 04:30 PM
I made a GUI mockup designed for touchscreen use. I wanted the GUI to look simpler and not as "frightening" as the current display, and also less industrial.

It's just a quick sketch, but any suggestions for improvement? Am I heading in the right direction?

Oscar Spierenburg
January 24th, 2007, 07:25 AM
Nikos, do you mean to design this for the Elphel? I own the 333 and I'm really looking forward to a more simple setup/control panel, dedicated to film making.
Maybe something that also implements audio recording.

Matteo Pozzi
January 24th, 2007, 07:44 AM
Hi Oscar, have you got some screengrabs to show us what is possible to do with the Elphel 333 + 35mm adapter? I want to do the project too, but for now I'm waiting for the 353... I don't know much about programming, but I'm trying to understand how the "ajax-html" control panel of the Elphel works so that I "can" change the GUI, but right now I have very little time to spend on it.
Nice to see so many people looking at this project!

Nikos Olsen
January 24th, 2007, 07:55 AM
Nikos, do you mean to design this for the Elphel? I own the 333 and I'm really looking forward to a more simple setup/control panel, dedicated to film making.
Maybe something that also implements audio recording.

Yes, it would be for the Elphel, dedicated to filmmaking as you say. Although I do not have the skills to program the interface, I would like to come up with ideas that maybe someone else can implement.

Since you own and use the 333, please come up with suggestions for which main buttons should be in the interface, if you have any.

Also, like Matteo asked, I would love to see some pics shot with the 333 with the adapter.

Oscar Spierenburg
January 24th, 2007, 09:01 AM
Matteo, I'll climb on the roof and shoot something.... (and I don't mean pigeons)

Nikos, that's great.. I'll write down what buttons I use when I shoot a clip.

Igor Babic
January 24th, 2007, 12:17 PM
This is perfect. I know it's hard to do, but it's a good guide for how to do it.
http://www.siliconimaging.com/DigitalCinema/SiliconDVR_Interface.html
The only thing I would add is a disk-free-space indicator shown on screen all the time.

I think it is on this touchscreen LCD:
http://www.cappuccinopc.com/parts/mr-xe-700y.asp

Matteo Pozzi
January 24th, 2007, 01:18 PM
The Silicon Imaging GUI is the best one; I said that on this thread a long time ago :-)
For the LCD, the best one is the one from Lilliput (http://www.lilliputweb.net/product8.html). I've made a projector based on one of these to watch DVDs on the wall, and it works awesomely and is not so expensive! You can find one new on eBay for about $230.

Nikos Olsen
January 24th, 2007, 02:00 PM
I agree the SI GUI is great, but I didn't think it would be possible to code something like that in Ajax, with overlays and all. Or is it?

Igor Babic
January 24th, 2007, 03:52 PM
The 353/333 interface doesn't have to be so fancy; all that matters is functionality, and this shows how to do it. I wish SI would sell their software at a lower price, without the Cineform stuff, and compatible with a couple of machine vision cameras, so we could have the choice of a camera to fit our budget and resolution.

Matteo Pozzi
January 25th, 2007, 08:03 AM
http://ubuntustudio.org/
Maybe the time has arrived to switch to Linux.

Oscar Spierenburg
January 25th, 2007, 07:45 PM
When I put all my equipment on the roof and set up the elphel software, a dark cloud stopped by and it began to rain very badly...

I shot some low-light tests indoors, but I forgot to crank up the JPEG quality (it defaults to 70%). That's the reason for the large compression artifacts.
Besides that, you can see the result of my 35mm wax adaptor and the Elphel 333.

http://s09.picshome.com/d35/comp_1__0-00-08-04_.jpg
http://s09.picshome.com/d35/comp_1__0-00-30-00_.jpg


I'll write more about the software/control panel later when I have more time.

Matteo Pozzi
January 26th, 2007, 07:19 AM
Good result, Oscar. I've tried resizing the pictures to 1280x720 and they look much sharper than the results from an HDV camera with a 35mm adapter. Cool!
One question: you said these are low-light tests shot indoors... but how low is the light!? Can you tell us more about the light setup!?
And last... could you shoot a test in complete darkness with only the light of a wax candle at a distance of about 1 meter, with a 50mm lens, so that it's possible to compare with some other setups?
Many thanks, Matteo Pozzi

Oscar Spierenburg
January 26th, 2007, 04:01 PM
Matteo, I used one 150W light.
I'll write more when I have more time. I don't have the camera with me for the coming three days.

Oscar Spierenburg
January 27th, 2007, 08:27 AM
Nikos, here are some thoughts on the software.

Most of the time you spend before actual filming goes into setting all the basic settings to a 'default' for film. Default would be, i.m.o.: 24fps, shutter at max 30, 900x1600 pixels, JPEG quality of at least 80%, streamer set to kuklin-rtsp at 24fps.

I think everyone who wants to use the camera for film making, would want these settings set up as default.
This would also mean that you don't need these settings on the 'main page' anymore.

The only buttons you need are:

-exposure control (auto and manual)

-White balance (which can be automatic while pointing at a white object or something, with an option to press 'hold' so it won't change while filming)

-Brightness, contrast and gain

-color/BW (or color saturation)

-STREAMER ON/OFF (very important)

-REC (somehow I can't use the rec button on the AJAX panel, so I use a button called stream.sdp; maybe Andrey knows more about this)


I think that's it. It would be nice to have big buttons because of the clumsy touchpad on most laptops.
Nikos, you say ' for touchscreen purposes', but I hope -not only- for touchscreen?

(Edit) For those who want to see how the current AJAX controls work, take a look at the online Elphel cameras: http://www3.elphel.com/online_cameras

And here is a good article on the software: http://linuxdevices.com/articles/AT5951285077.html

Matteo Pozzi
January 29th, 2007, 08:43 AM
Hi Oscar, I haven't understood what the streamer on/off is for.
If I have time I'll try to develop something!
Best regards, Matteo Pozzi

Oscar Spierenburg
January 29th, 2007, 06:54 PM
Matteo,
Normally, the camera will give about one picture per second. But when you start the streamer (Theora) it will start streaming images at a given FPS. The nice thing about the AJAX interface is that it shows a big image and a magnified smaller window for focusing when the streamer is OFF. Of course a 'film' control panel could start the streamer by default, but I don't know if I'd prefer that.

Here you can see the streamer on/off button in the AJAX setup:
http://s09.picshome.com/0b9/ajax_streamer.jpg

Matteo Pozzi
January 30th, 2007, 02:20 AM
Perfect, I've understood. So the interface is exactly the HTML page that is on the elphel.com page (right now with Windows I can't use it... don't know why... but I used it a while ago). Can you tell me more about the firmware upgrade of the camera? Is the HTML included in it, or is there a different way to load it onto the camera? How many KB is it possible to use for the HTML? And last: is it possible to make a big HTML page with good graphics (big images) to install on the PC, not in the camera, to control it with buttons that link to something like http://192.168.0.9/axis-cgi/jpg/imag...compression=30 (to obtain an image with the assigned set of parameters)? Maybe I'll also write to Andrey to ask for more specific info.

Oscar Spierenburg
January 30th, 2007, 09:27 AM
I've asked Andrey if he has some time to clear some things up over here...
If it's 'plain' HTML language, I can possibly help myself.

Andrey Filippov
January 30th, 2007, 10:03 AM
I recommend using ccam.cgi, as we do not have anybody to support the Video API now; the FPGA/driver code went ahead and the Video API is lagging behind.

The latest production firmware is 6.4.35 (available at SourceForge). The default (working) streamer was made by Khlutchin; you may try it with the computer booted from our Live CD/DVD, where the Mozilla plugin to view the RTP stream with MPlayer is already configured.
With MS Windows you may try the software; it should probably work with the same firmware 6.4.35.

https://sourceforge.net/project/showfiles.php?group_id=105686&package_id=217626

Meanwhile I'm busy with the 353: I had to change the design to use "large page" flash, as the chips I had used became obsolete within several months of appearing. A good thing: the new flash is twice as big (128MB).

Now I'm testing the second revision of the 353 board - hope to get real images from it next week.

I also released multiple other boards that will work in/with the 353 camera - these boards will be tested in the next couple months:

10357 - a board with 8 CF slots - with 16GB cards it will be possible to have 128GB solid state "disk";
10359 - splitter/preprocessor for up to 3 sensor boards. It has the same FPGA+DDR SDRAM as the main board and should be capable of some image pre-processing (like some 3D extraction from stereo imagers). I plan to use it to make an optical camera position tracker to compensate for the ERS distortion effect. This board is connected to the 10353 board instead of a single sensor board and has 3 of the same flex cable connectors for connecting up to 3 sensor boards to it.
The next 3 boards are designed for non-video applications (2 fps, 11 and 16 MPix with Kodak KAI-11002 and KAI-16000 CCDs), improved from our model 323 camera:
10347 - Timing/control for KAI series sensors;
10342 - KAI-11002 sensor front end - in pair (sandwich) with 10347 allows connection of KAI-11002 to 10353 board;
10344 - same for the KAI-16000 sensor.

Igor Babic
January 30th, 2007, 11:23 AM
Andrey,
Can you add support for the Kodak KAI-2093? This would be very interesting for us! If you can do it, how much would a 353 with this sensor cost? Anyway, have you ever seen footage from this sensor? Is it any good for us (film making)?

I also have an interest in a 353 & KAI-11002; what would be a cost estimate for this combo?

Oscar Spierenburg
January 30th, 2007, 01:46 PM
But can we change ccam.cgi and save it in some way?
If you look at the page of one of the online cameras as HTML source, you see a very complex script, but still, it's an HTML webpage. Can it be edited and saved to the camera or the LiveCD?

Andrey Filippov
January 30th, 2007, 07:39 PM
Oscar,

You can change ccam.cgi, but you do not need to. It is a binary application that processes HTTP requests in the camera; the requests themselves are generated from the HTML+JavaScript files (camvc.html, *.js) in the ftp://<camera_ip>/usr/html/ directory (visible as http://<camera_ip>). This directory resides in the JFFS file system and is write-enabled, so you may change/add files there. For experimenting I would recommend using the /var/html directory (linked as /usr/html/var), which is on a RAM-disk, so it is perfectly safe to change data there (it will be erased after a reboot); files will be visible as http://<camera_ip>/var/
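
In case anyone wants to script that, here is a small Python sketch of pushing an experimental page into the RAM-disk area described above; the camera IP and the anonymous FTP login are assumptions, so adjust them for your own setup.

    from ftplib import FTP

    CAMERA_IP = "192.168.0.9"                    # example address, use your camera's

    with FTP(CAMERA_IP) as ftp:
        ftp.login()                              # anonymous login assumed here
        ftp.cwd("/var/html")                     # RAM-disk area, erased on reboot
        with open("test.html", "rb") as page:    # your experimental control page
            ftp.storbinary("STOR test.html", page)
    # the page should then be visible as http://<camera_ip>/var/test.html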

Andrey

Andrey Filippov
January 30th, 2007, 07:43 PM
Andrey,
Can you add support for the Kodak KAI-2093? This would be very interesting for us! If you can do it, how much would a 353 with this sensor cost? Anyway, have you ever seen footage from this sensor? Is it any good for us (film making)?

I also have an interest in a 353 & KAI-11002; what would be a cost estimate for this combo?

Igor, maybe it is possible, but I cannot work on this right now. I can help if somebody is eager to do it; it is likely that the 10347 will work as is, and just the 10342 would need to be replaced by a sensor-specific board.

I have the design files for the not-yet-released boards in the password-protected directory, but we do give access upon request.

As for the price: these cameras are more expensive, especially the sensors themselves. I estimate it may be 50% more than the 333 camera, plus the cost of the sensor.

Matteo Pozzi
January 31st, 2007, 08:08 AM
Many thanks, Andrey. So if we make an HTML page with buttons that link to HTTP requests in the camera, we can make a different GUI, right? OK, you say it is better to use ccam.cgi, so the HTTP requests behind the buttons of the new GUI will have to start with http://<camera-ip-address>/admin-bin/ccam.cgi? + the parameters they change, as described in http://wiki.elphel.com/index.php?title=Ccam.cgi#HTML.2C_XML_or_VRML

Or can we also use the calls described in http://wiki.elphel.com/index.php?title=Video_API ?

How big (in KB) can the HTML page be? Is it possible to use an .swf Flash file to save space and make better graphics? I've seen F4L (http://f4l.sourceforge.net/?q=node&from=30) and maybe.....

And last, do you plan to produce a 5-megapixel camera based on the 333 hardware?

Andrey Filippov
January 31st, 2007, 09:57 AM
Many thanks, Andrey. So if we make an HTML page with buttons that link to HTTP requests in the camera, we can make a different GUI, right? OK, you say it is better to use ccam.cgi, so the HTTP requests behind the buttons of the new GUI will have to start with http://<camera-ip-address>/admin-bin/ccam.cgi? + the parameters they change, as described in http://wiki.elphel.com/index.php?title=Ccam.cgi#HTML.2C_XML_or_VRML


Yes, that is correct
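
A minimal Python sketch of what one of those GUI buttons would actually do; the parameter names used at the bottom are hypothetical placeholders, the real ones are documented on the Ccam.cgi wiki page linked above.

    from urllib.request import urlopen
    from urllib.parse import urlencode

    CAMERA_IP = "192.168.0.9"                    # example address

    def ccam(**params):
        """Send one ccam.cgi request and return the camera's reply."""
        url = f"http://{CAMERA_IP}/admin-bin/ccam.cgi?{urlencode(params)}"
        with urlopen(url) as reply:
            return reply.read()

    # e.g. a 'film defaults' button might call (placeholder parameter names):
    # ccam(fps=24, quality=80)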


Or can we also use the calls described in http://wiki.elphel.com/index.php?title=Video_API ?



This API is lagging behind ccam.cgi; we do not have anybody to work on it now. As for ccam.cgi, I always upgrade it myself when I change the drivers or FPGA code. So it may be ugly (being incrementally patched since our first model 303 camera), but it is current.


How big (in KB) can the HTML page be? Is it possible to use an .swf Flash file to save space and make better graphics? I've seen F4L (http://f4l.sourceforge.net/?q=node&from=30) and maybe.....


Actually, quite a lot. In the 333 it is probably around 8MB (the flash is 16MB vs. 8MB in the model 313, and the amount of software is not much larger). In the 353 the flash is 128MB.


And last, do you plan to produce a 5-megapixel camera based on the 333 hardware?

Yes, the 5MPix board was designed for the 333 (and is supported by the current firmware). The only difference between the 5MPix sensor board (10338) for the 333 and the 353 is the interface voltage, and a single linear regulator IC on the board determines that.

Igor Babic
January 31st, 2007, 12:19 PM
Thanks Andrey, very good info, you are most helpful.
So for the KAI-2093 we are talking about under $3000?
Is GigE in your plans for the 353 or any other model?
I think you mentioned an IDE interface before CF? What happened?
I like the interchangeable CF idea, but I like a fixed IDE more.
Keep up your good work.

Solomon Chase
February 1st, 2007, 09:35 AM
I'm also interested in a camera using the KAI-2093 or KAI-4021 sensor.