View Full Version : High Definition with Elphel model 333 camera



Andrey Filippov
October 22nd, 2006, 03:19 PM
I had to deal with some personal matters that slowed down the 353 project. But now I'm again working on putting life into the first unit. CPU+network+memory are tested; now we have to write a NAND flash boot loader for the ETRAX FS - it turned out that it is not yet available from Axis.

For those who are interested, I can email circuit diagrams - I don't want to post them until I'm sure they don't have errors (2 already found/fixed so far).

We also made some noise measurements with Micron 3 and 5 megapixel sensors and plan to make this software part of the camera firmware. The 3 MPix has about 22 ke- full-well capacity, the 5 MPix about 10 ke-, with rather low readout noise - just above a couple of electrons (at high analog gain settings).
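For a rough sense of what those figures mean, dynamic range in dB is 20*log10(full well / read noise). A quick sketch using the approximate numbers from the post (back-of-the-envelope, not measured specs):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB from full-well capacity and read noise (both in electrons)."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Approximate figures quoted in the post
print(round(dynamic_range_db(22000, 2), 1))  # 3 MPix sensor: ~80.8 dB
print(round(dynamic_range_db(10000, 2), 1))  # 5 MPix sensor: ~74.0 dB
```

This ignores shot noise and dark current, so it is an upper bound on usable latitude.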

Rob Scott
October 22nd, 2006, 07:33 PM
But now I'm again working on putting life into the first unit. CPU+network+memory are tested; now we have to write a NAND flash boot loader for the ETRAX FS - it turned out that it is not yet available from Axis.
Andrey, if I was able to find funding and time to help with this project, what would I need to purchase and roughly how much would it cost?

Thanks!

Andrey Filippov
October 22nd, 2006, 10:25 PM
Andrey, if I was able to find funding and time to help with this project, what would I need to purchase and roughly how much would it cost?
Thanks!
Rob, when somebody wants to help with the project we usually provide hardware free of charge, but I don't have any extra 353 boards right now. The one I'm playing with is (by tradition) mostly built by myself - only BGA chips are mounted professionally - I don't have an oven.

As soon as I test the board electrically (with minimal software running), we'll be able to order a small batch assembled - then it will go to developers.

Rob Scott
October 23rd, 2006, 01:13 AM
Rob, when somebody wants to help with the project we usually provide hardware free of charge, but I don't have any extra 353 boards right now.
I would love to help, if I possibly can. In the meantime I will get familiar with the 333 source code. What else should I do/read up on?

Thanks!

Wayne Morellini
October 23rd, 2006, 06:23 AM
We also made some noise measurements with Micron 3 and 5 megapixel sensors and plan to make this software part of the camera firmware. The 3 MPix has about 22 ke- full-well capacity, the 5 MPix about 10 ke-, with rather low readout noise - just above a couple of electrons (at high analog gain settings).


Andrey,

What did you find for average noise in dB and latitude, for the 3 MPix, and for the 5 MPix binned to 720p?


Have you seen this brand? They claim 170 dB latitude and 0.001 lux sensitivity:

http://www.hdrc.com/sensors.htm
http://www.hdrc.com/hdrctech.htm

Some of the demo pictures are pretty interesting, and I wonder when they will go HD.


Thanks

Wayne.

Juan M. M. Fiebelkorn
October 24th, 2006, 03:00 AM
Guys, is there any way for me to get a small video clip of the Bayer kind with 10-bit depth?

Andrey, I know I can sound really stupid sometimes, but I will ask this question anyway.

Is there any advantage to using an ATMEL AVR32 RISC instead of the actual ETRAX FS?

Second stupid question:

http://www.latticesemi.com/products/fpga/

Lattice has some interesting and inexpensive FPGAs with gigabit input/output. Is there any use for them inside the camera framework?

Rob Scott
October 24th, 2006, 07:29 AM
Guys, is there any way for me to get a small video clip of the Bayer kind with 10-bit depth?
It's on my list of things to do :-)

Andrey Filippov
October 24th, 2006, 11:07 AM
Andrey,

Have you seen this brand? They claim 170 dB latitude and 0.001 lux sensitivity:

http://www.hdrc.com/sensors.htm
http://www.hdrc.com/hdrctech.htm

Some of the demo pictures are pretty interesting, and I wonder when they will go HD.


There is virtually no information there, and I am not impressed. The logarithmic transfer function is an old idea, easily implemented in CMOS sensors. As for sensitivity - it is not so easy to fool Nature; the quantum efficiency/amplifier noise levels of modern CMOS sensors (like Micron's) are not that far from the theoretical limit.

0.001 lx? Easy - any regular sensor can do it with a long enough exposure.

Zsolt Hegyi
October 25th, 2006, 05:19 AM
I would love to help, if I possibly can.
Thanks!
Rob, what exactly do you want to develop for the camera?

Rob Scott
October 25th, 2006, 06:06 AM
Rob, what exactly do you want to develop for the camera?
I am open to helping in any way I can, but in particular I am interested in implementing cinema-related features. Andrey mentioned that the board would have an IDE connector; I would like to help implement writing to disk.

Zsolt, you mentioned working on a lossless compressor for the FPGA (to reduce the size of the raw image data) and I'd love to help with that as well. I've never had a chance to do any FPGA work, and it sounds like an interesting challenge.

I'd also like to write front-end (or is that back-end?) software to provide a viewfinder/heads-up interface. Perhaps this would involve a modification/plugin for ObscuraCap; I'm not sure yet.

Rob Scott
October 25th, 2006, 10:49 AM
I just saw an announcement for Hitachi's new CinemaStar hard drives (http://www.hitachigst.com/portal/site/en/menuitem.542259f3e0a45215e85c1a70eac4f0a0/). They could be good choices to use with the 353.

Wayne Morellini
October 25th, 2006, 11:04 AM
There is virtually no information there, and I am not impressed. The logarithmic transfer function is an old idea, easily implemented in CMOS sensors. As for sensitivity - it is not so easy to fool Nature; the quantum efficiency/amplifier noise levels of modern CMOS sensors (like Micron's) are not that far from the theoretical limit.

0.001 lx? Easy - any regular sensor can do it with a long enough exposure.

I've seen a distinct lack of this outside of SD CCDs. It is also a matter of noise and gain, which CMOS is not as good at.

The QE of the Microns is about 40% or lower, isn't it (not to mention low fill factors)? I've seen a sensor with up to 90% QE, and experiments in recent years have succeeded in getting one photon to move two electrons (QE*2). Then there is loss from the color filter. Another advantage the FF sensor had: because of the 100% fill factor, a microlens was not required, so a very fast-aperture lens could be used. Rai, who designed the Drake camera, could get an f/0.85 lens or something close to it. So there is a lot of latitude over what the Micron can currently do. If HDRC go HD and keep some of this advantage, I think it could be a sweet deal. I can't get a reply from them yet; I am interested in one of these cameras for my own project and some research purposes.


Thanks

Wayne.

Zsolt Hegyi
October 26th, 2006, 03:37 AM
I would like to help implement writing to disk.
Zsolt, you mentioned working on a lossless compressor for the FPGA

Implementing my encoder is a one-man job, so I'd like to do it myself. But if you can do the disk recording part (which can be tough if the processor is too weak to handle two streams), that's okay with me.


I'd also like to write front-end (or is that back-end?) software

The Elphel already has a client which can be used as a viewfinder, I suppose. It's not easy to write a streamer, and our stream will be huge, so for starters we should try to use the current Theora stuff.

The problem is that the FPGA will have to create two streams simultaneously, and I don't know whether the architecture and software will support that. Andrey?

And if we somehow manage to create two streams, the Theora stream will be of very bad quality - I don't know if it will be enough for precise focusing. I'm afraid we'll have to modify the Theora encoder too (maybe dropping all the color information), and that's also not very easy.

Zsolt

Rob Scott
October 26th, 2006, 06:34 AM
Implementing my encoder is a one-man job, so I'd like to do it myself. But if you can do the disk recording part (which can be tough if the processor is too weak to handle two streams), that's okay with me.
Sounds good.
The Elphel already has a client which can be used as a viewfinder, I suppose. It's not easy to write a streamer, and our stream will be huge, so for starters we should try to use the current Theora stuff.
I agree, no reason to reinvent the wheel. Perhaps a custom version of VLC? I'll have to look into the options.
And if we somehow manage to create two streams, the Theora stream will be of very bad quality - I don't know if it will be enough for precise focusing.
I was thinking perhaps two modes --

Focus mode: No disk writes, high-quality Theora @ low frame rate (if necessary)
Capture mode: Disk writes, lower-quality Theora @ full frame rate

What do you think?

Zsolt Hegyi
October 26th, 2006, 10:50 AM
Sounds good.

You can develop the disk recording part with Theora while I'm working on my encoder.


Focus mode: No disk writes, high-quality Theora @ low frame rate (if necessary)
Capture mode: Disk writes, lower-quality Theora @ full frame rate

Follow focus would be nice, but if there's no other way then we must do it as you've said.

Zsolt

Rob Scott
October 26th, 2006, 11:35 AM
You can develop the disk recording part with Theora while I'm working on my encoder.
Good idea -- and perhaps also raw frames, to see how many FPS it can handle without any compression at all.
Follow focus would be nice, but if there's no other way then we must do it as you've said.
Given that the CPU and FPGA will be quite a bit more powerful, it may be possible to do both. I guess we'll have to see how close to the edge we are.

Wayne Morellini
October 28th, 2006, 08:20 AM
Good ideas guys.


Andrey, I know that you are somewhat into sensor design - I am not - but I had a few ideas today, and some older ones.

I was thinking that the sampling mechanism between four or more Bayer sensor pads could be shared, reducing fill-factor loss. Using a multiplexer, the inputs of different pixels could be sampled sequentially, or combined in different combinations to produce a new value for binning and even debayering (4 MPix binned 2x2 to either red, green or blue for the 720p Bayer position). Because of the speed of the internal sampling, the delay between pixels should be minimal. A larger number of pixels could share circuitry, depending on the noise induced by trace length, to further reduce fill-factor loss. A large amount of data could be fed out in parallel because of the serial digital nature of the data; if you used a serial-like A/D conversion, the fill-factor loss could be reduced to a few percent.

I then turned to the thought of storing charge, and realised that the chip manufacturing process is compatible with extra vapour layering (or magnetic-level layering). As for over-charge effects, the sampling method itself gives some resistance, continual sampling and dumping/discharge of over-charge would give further resistance, and an over-charge sink can be made to earth it (I have a unique process to do this - to handle large amounts of over-charge - but it has greater potential and I have it in mind for other things). The circuits can be protected, and the pads can be covered with layers promoting light gathering and capacitance. This allows for very high capacitance.

I realise that some of this sounds suspiciously like a CCD and existing CMOS methods, but I think it is an alternative method that can easily be added to a CMOS assembly line without having to substantially change existing components, keeping the line open for production runs of other normal chip items.

What do you think about this process, your comments?


Thanks

Wayne.

Andrey Filippov
October 29th, 2006, 10:03 AM
Good ideas guys.

Andrey, I know that you are somewhat into sensor design, I am not, but I had a few ideas today, and some older ones.

Wayne, I'm sorry - I never worked on sensor design myself, only used sensors. So I cannot provide you with an educated opinion on your ideas.

Wayne Morellini
October 29th, 2006, 10:20 AM
That's O.K.

I saw your paper on some sensor function some time ago, and thought you might be into the sensor hardware side.


Thanks

Wayne.

Régine Weinberg
October 29th, 2006, 01:55 PM
Do you have a clue about the Playstation 3 and the Linux disk?
In France maybe, maybe the middle of next year?

Wayne Morellini
October 31st, 2006, 04:52 AM
I have seen a news article about it. Somebody is preparing a Fedora-based distribution; I suspect we will see a number of them. But before it is going to be of use to us, they really are going to have to have a method for normal Linux programs and codecs to automatically take advantage of the extra parallel processing power, or the developers are going to have to customise their codecs and other software. Which means somebody will have to ask them, or write it themselves.

http://news.spong.com/article/10969?cb=94


Ronald, I tried to email you a little while ago to ask for some advice. Have you updated your mail redirection in your profile?

Zsolt Hegyi
October 31st, 2006, 12:29 PM
The new Micron sensor is only 1/2.5" in size. I think without a ground glass we won't have an adequate DOF. The 2/3" size is around the minimum that we could use. Their previous sensor, at 1/2", might have been worth a try (with different resolution and fps, of course), but this one has just become too small. Moreover, we lose 25% of the vertical resolution if we need a 16:9 aspect ratio, so in the end we get the size of a 1/3" sensor or even worse.
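As a sanity check on the crop argument, here is a rough calculation using assumed active-area dimensions for a 1/2.5" 4:3 sensor (about 5.76 x 4.29 mm; the actual Micron part may differ slightly):

```python
import math

# Assumed active-area dimensions for a 1/2.5" 4:3 sensor (approximate, in mm)
w, h = 5.76, 4.29

h_169 = w * 9 / 16            # cropped height for a 16:9 frame
diag = math.hypot(w, h_169)   # effective diagonal of the cropped area, in mm
print(round(h_169, 2), round(diag, 2))  # 3.24 6.61
```

For comparison, a 1/3" optical format has a diagonal of roughly 6 mm, so the cropped area is indeed in that neighbourhood, as the post argues.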

Any suggestions for a good GG? I've seen there are a lot of them on this forum - I just can't choose... All I know is that I don't want any DIY stuff; I'll have more than enough work with the camera itself.

The maximum aperture we can use with a C-mount lens without causing problems with the sensor's microlenses is also a good question. The GG loss and this aperture will add up to give the total loss in f-stops (probably around f/1.5-2 if the glass loses half a stop and we have an f/1.2 lens). I suspect we'll almost always have to use the maximum shutter width the sensor is capable of, i.e. something around 1/24 (this includes the readout time). This will cause motion blur as a side effect.

Luckily, the readout time can be really short with the new sensor. At 96 MHz it runs at 12 fps at maximum resolution (as opposed to the previous sensor's 48 MHz maximum). We'll use it in 2x2 binning mode, so 60 fps can be reached at the planned 1280x720 resolution - especially good for overcranking. I suppose this readout speed no longer causes rolling shutter artifacts, so that's good news.
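The binning claim can be sanity-checked with a rough scaling rule: at a fixed pixel clock, frame rate is roughly inversely proportional to the pixels read per frame. The 2592x1944 full resolution is an assumption for the 5 MPix part, and the model ignores per-row overheads, so the real figure will be somewhat lower:

```python
# Assumed full resolution for the 5 MPix Micron sensor: 2592x1944 at 12 fps.
full_pixels = 2592 * 1944
binned_pixels = 1280 * 720   # 2x2 binned and cropped to 720p

fps_full = 12
fps_binned = fps_full * full_pixels / binned_pixels
print(round(fps_binned))  # ~66, consistent with the ~60 fps claimed after overheads
```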

Andrey, what are your experiences with the new sensor? Could we do it as described above, or do you see some problems? Also, one of my earlier questions was left unanswered: can we create two simultaneous video streams in the FPGA (one for disk recording and the other for network preview)?

Thank you,
Zsolt

Rob Scott
October 31st, 2006, 01:00 PM
Any suggestions for a good GG?
I don't have experience with any of them, but I've noticed that none of them have explicit support for a C-mount camera; they all assume a camcorder with a built-in lens. I e-mailed one of the guys about this and they said to use a C-to-Nikon adapter, then a macro lens and a 10-inch (!) extension in front of the GG adapter. I know, 10" is awfully long - I don't know if that can be shortened.
The readout time can be really short with the new sensor. At 96 MHz it runs at 12 fps at maximum resolution (as opposed to the previous sensor's 48 MHz maximum). We'll use it in 2x2 binning mode, so 60 fps can be reached at the planned 1280x720 resolution
That was my thinking too. Sounds good!

Zsolt Hegyi
October 31st, 2006, 01:29 PM
I've noticed that none of them have explicit support for a C-mount camera

I was thinking of putting a C-mount lens on the camera and the GG after that. I don't know which solution is better; perhaps it depends on the quality of the lenses.


and a 10-inch (!) extension in front of the GG adapter

It isn't a problem if we need those classic 15mm metal bars anyway to mount the whole thing.

Zsolt

Wayne Morellini
November 1st, 2006, 07:44 AM
Any suggestions for a good GG? I've seen there are a lot of them on this forum - I just can't choose... All I know is that I don't want any DIY stuff; I'll have more than enough work with the camera itself.


Most of the static projects struggle to get HD resolution, so look carefully through the threads for HD-capable adapters. Even with moving ones, I don't know what the quality will be.


Thanks

Wayne.

Zsolt Hegyi
November 9th, 2006, 01:18 PM
I've been browsing through the M-Audio/E-MU catalogs and it seems like they don't write Linux drivers, so we won't be able to connect them to the camera. I think the audio will have to be recorded with the PC instead (we need one anyway for the camera control/viewfinder functions).

Zsolt

Wayne Morellini
November 10th, 2006, 09:37 AM
Eliminating the need for a PC would be quite good.

E-MU and M-Audio are not the only ones; Terratac?? and a number of others exist. Try looking on the www.via.com.at website under their Envy24 audio processors for manufacturing partners and products based on them; those manufacturers also produce items like these. www.digit-life.com may also still have reviews of such products (their reviews tend to be quite good), but the companies mentioned are worth a look.

If you look through the Linux stuff you might also find drivers and APIs for the USB sound cards, even for specific models like the Audigy series (now much improved). I think I remember something about Audigy and Linux.


You don't really need a computer; most of the work is now in-camera. You could use a PDA, a microcontroller board, a Sony PSP, or something like the http://www.gamepark.com/ pocket game machines (some are Linux). The PSP is the best, though not strictly open; it has a re-programmable processing line used for sound and 3D, a bit like an FPGA, and has good processing abilities. It can provide controls through its own buttons and display, and the camera could supply a live video feed port if made that way.

Andrey Filippov
November 11th, 2006, 10:02 PM
Extra work was needed to make the board boot from the NAND flash - that software is not yet available from Axis. But that seems to be done, and the camera now boots Linux from its flash memory. Next week I'll be working on trying the FPGA.

Régine Weinberg
November 15th, 2006, 10:25 AM
That sounds great, and maybe a dream comes true.
A stupid question anyway: for 10 minutes of footage, what would the estimated disk space and sustained data rate be?
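A rough back-of-the-envelope answer, assuming uncompressed 10-bit Bayer frames at 1280x720 and 24 fps (packed bits, no container overhead; lossless compression would divide these numbers by the achieved ratio):

```python
# Assumed recording format: uncompressed 10-bit Bayer, 1280x720, 24 fps
width, height, bits_per_pixel, fps = 1280, 720, 10, 24

bytes_per_frame = width * height * bits_per_pixel / 8
rate_mb_s = bytes_per_frame * fps / 1e6       # sustained rate, MB/s
ten_min_gb = rate_mb_s * 600 / 1000           # storage for 10 minutes, GB

print(round(rate_mb_s, 1))   # ~27.6 MB/s sustained
print(round(ten_min_gb, 1))  # ~16.6 GB for 10 minutes
```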

Serge Victorovich
November 16th, 2006, 09:32 AM
Using the KWEII wavelet-based compressed RAW format for the camera's internal storage has the following advantages over JPEG:

1. Wavelet compression takes less time than JPEG compression does. RAW data compression speeds up the capturing process.
2. If two same-size compressed images are compared, the compressed RAW file has better quality than the JPEG does. (Or, for the same quality, the compressed RAW file is smaller.)
3. Preparing a small version of an image for display on a phone screen, for a quick check or for sending over a radio channel, becomes easier and does not require extra storage space. A wavelet-compressed file has a special structure: it contains smaller versions of the image as parts of itself. For example, if the camera provides a 1.2 MPix image, only a part of the compressed file needs to be decompressed and processed to get a VGA-size image. Even smaller parts are used for 1/4 VGA and 1/16 VGA images.
Two processes should be clearly distinguished:
* Image processing for data storage;
* Image processing for display on a cell phone screen.
These processes are independent, and the second does not assume that the first has been completed.
4. Because of wavelet specifics, the noise level is automatically reduced upon resizing.
5. For data storage purposes, an image can be processed by the phone CPU in the background (at lower priority) over a longer period of time, yielding better image quality. (http://www.kweii.com/site/services/raw_compression/raw_compression.html)
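Point 3 - that a wavelet file embeds smaller versions of the image - can be illustrated with a minimal 2D Haar transform. This is a toy sketch, not the KWEII codec: the approximation subband at each level is a half-size low-pass thumbnail, so a decoder that reads only approximation data gets a preview without touching the detail subbands.

```python
import numpy as np

def haar_level(img):
    """One level of a 2D Haar transform: returns the half-size approximation
    (a low-pass thumbnail) and the three detail subbands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, (h, v, d)

img = np.arange(16.0).reshape(4, 4)   # stand-in for a sensor frame
approx, details = haar_level(img)     # level 1: 2x2 thumbnail embedded in the data
thumb, _ = haar_level(approx)         # level 2: even smaller, from even less data
print(approx.shape, thumb.shape)      # (2, 2) (1, 1)
```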

The text above is about phones :) But the same type of compression is used in the RED, SI2K...
All correction of the recorded RAW happens in post! If you really want a cheap but good HD cam, choose the right compression - wavelet-based RAW!

Rob Scott
November 16th, 2006, 11:06 AM
Wavelet Based RAW
That certainly would be cool. Aside from Andrey's work on Theora, the only similar free/open FPGA project I found was dwt2d (http://www.opencores.org/projects.cgi/web/dwt2d/overview) - no files were released, though. Anyone else know of something similar?

Serge Victorovich
November 16th, 2006, 12:52 PM
Rob, google: raw to wavelet+open source ;)
Some useful links also:
http://www.compression-links.info/Wavelets
www.openraw.org
www.openfpga.org
www.doom9.org

Very interesting "geometric wavelets" (http://scholar.google.com/scholar?hl=en&lr=&c2coff=1&q=%22geometric+wavelets%22)

http://www.cs.tau.ac.il/~amir1/PS/gwcoding.pdf

Wayne Morellini
November 17th, 2006, 05:41 AM
A good post, Serge. From my understanding, at lower compression ratios the quality difference narrows for the standard JPEG pixel format. The present differential compression should be a lot simpler and faster, but using wavelets for the differential over a number of frames should compress more. Cineform does this, and their RAW codec gets around 6:1, but I do not know whether that is just visually lossless or truly lossless, as David's answers were a bit obscure. I imagine at least 3:1+ on average.
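The intuition that differential (inter-frame) coding compresses better can be shown with a toy experiment on synthetic data: two random "frames" that differ only by small noise, mimicking the temporal redundancy between adjacent video frames, with zlib standing in for a real entropy coder.

```python
import zlib
import random

random.seed(0)
# Two synthetic 8-bit "frames": the second differs from the first by small noise.
frame1 = bytes(random.randrange(256) for _ in range(10000))
frame2 = bytes(min(255, b + random.randrange(3)) for b in frame1)

# Per-pixel difference, wrapped modulo 256 so it stays a byte
delta = bytes((b2 - b1) % 256 for b1, b2 in zip(frame1, frame2))

direct = len(zlib.compress(frame2, 9))        # compress the frame itself
differential = len(zlib.compress(delta, 9))   # compress only the change
print(differential < direct)  # True: the delta stream compresses far better
```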

There are many wavelet codec projects; a number are listed on Wikipedia, and I posted a number of links here a few months ago. 2D and 3D inter-frame wavelet (used for security cameras and webcams) compresses more, but I do not know if they are any good for visually lossless or truly lossless work. I had also been in contact with a guy from the BBC's Dirac open wavelet codec (and FPGA design); they are good to contact (though theirs is a normal codec so far).

Andrey Filippov
November 26th, 2006, 11:17 PM
I finished the tests I wanted to perform with the 353 prototype - upgraded the FPGA software to Xilinx WebPack 8.2i (it was not so easy to make it produce the same timing), updated some of our drivers to work with kernel 2.6 (it was 2.4) and with the new ETRAX FS processor. FPGA configuration works; the attached 64MB DDR SDRAM is also OK. So now the corrected Gerber files are released to production.

Wayne Morellini
November 27th, 2006, 07:55 AM
Andrey, I still get people enquiring about projects like this. What is the new timeline you expect for the camera's arrival, and are there any new additions and sensors that you might be considering?

It is hard to answer people about the cameras when I'm not in the project and information is incomplete. If there were basic information on the performance, capabilities, limitations, and pricing of the camera, and on the Digital Cinema projects surrounding it, I could simply point people there.

Andrey Filippov
November 29th, 2006, 01:56 AM
Andrey, I still get people enquiring about projects like this. What is the new timeline you expect for the camera's arrival, and are there any new additions and sensors that you might be considering?

It is hard to answer people about the cameras when I'm not in the project and information is incomplete. If there were basic information on the performance, capabilities, limitations, and pricing of the camera, and on the Digital Cinema projects surrounding it, I could simply point people there.

Wayne,

I'm trying to update the development/production status here, and I do (usually) answer direct emails. At the moment I do not know the exact price (I just hope to keep it about the same as it was for the 333). I hope to have a first batch of the camera boards in about a month - it is now our highest priority.

The first sensor available with the camera will be the Micron 5 MPix; older sensor boards will need some modifications, as the 353 board uses 2.5 V for the interface signals - not 3.3 V as earlier models did. I am also planning to use slower CCDs (11 and 16 MPix) - similar to our model 323 camera - and the 8 MPix when it becomes available from Micron, and to do some work on dedicated hardware/firmware for compensating the ERS effect of a moving camera.

We'll post the info on the web site when the camera is actually available; the preliminary info is only here, in this thread.

Andrey

Wayne Morellini
November 29th, 2006, 10:37 PM
Thank you, Andrey, but it might be more efficient to house a summary of the broad/preliminary information in one place (because it is very scattered and forgotten in this long thread) than to repeatedly answer the same questions by email.

I have suggested to DVinfo in times past a modifiable sticky post, or link, at the head of threads like this, for the same reason. Then thread starters/moderators could post a summary of where projects stand, saving people from reading through entire project threads. I think this could cut traffic on these long threads by 50% or more.

Andrey Filippov
December 7th, 2006, 08:08 PM
Wayne, we can start something about that on our wiki.elphel.com. Right now I'm pretty busy with new hardware development - several new boards will be released soon.

Wayne Morellini
December 9th, 2006, 09:44 AM
Cool!

Andrey Filippov
December 12th, 2006, 01:18 PM
I finished the Gerber files for 2 new boards - one with 8 Compact Flash slots, and the other with FPGA+DDR SDRAM (64MB) - exactly like on the 353 board - and 4 flex cable connectors (one to the 353, 3 to sensor boards). It allows attaching up to 3 sensor boards to a single 353, making it possible to perform some image processing there (i.e. stereo).

Juan M. M. Fiebelkorn
December 12th, 2006, 05:15 PM
Nice; now I just need a PCI interface to connect the camera to a PC, to get higher bandwidth than 100 Mbit and greater flexibility.
That way I can really make a camera system. :)

Zsolt Hegyi
December 14th, 2006, 01:19 PM
Hi all,

What do you think about this one: using a small camcorder as a viewfinder (as close to the Elphel as possible, its lens set to the same distance) and a USB numeric keypad plugged into the Elphel to control the recording? We won't be able to record sound, though.

Zsolt

Rob Scott
December 14th, 2006, 02:19 PM
What do you think about this one: using a small camcorder as a viewfinder (as close to the Elphel as possible, its lens set to the same distance) and a USB numeric keypad plugged into the Elphel to control the recording?
How about the Nostromo SpeedPad N52 (http://www.extremetech.com/article2/0,1697,2034221,00.asp)?

Zsolt Hegyi
December 15th, 2006, 03:01 AM
How about the Nostromo SpeedPad N52 (http://www.extremetech.com/article2/0,1697,2034221,00.asp)?
Sounds good, but we need Linux drivers - the article doesn't mention them. If we can use it as a regular keyboard then it's OK.

Zsolt

Rob Scott
December 15th, 2006, 06:05 AM
If we can use it as a regular keyboard then it's ok.
Without a driver it generates keystrokes and mouse wheel events. It looks affordable too, at around $20.

Juan M. M. Fiebelkorn
December 15th, 2006, 03:23 PM
So you are indeed talking seriously about using a parallel viewfinder and that keyboard?? Oh, my God...

Serge Victorovich
December 16th, 2006, 05:10 AM
I finished the Gerber files for 2 new boards - one with 8 Compact Flash slots, and the other with FPGA+DDR SDRAM (64MB) - exactly like on the 353 board - and 4 flex cable connectors (one to the 353, 3 to sensor boards). It allows attaching up to 3 sensor boards to a single 353, making it possible to perform some image processing there (i.e. stereo).

Very interesting! Andrey, when can we see more information on your website?

Zsolt Hegyi
December 17th, 2006, 04:47 AM
So you are indeed talking seriously about using a parallel viewfinder and that keyboard?? Oh, my God...
If we think of the two types of video streams (recorded and displayed), the changes in the software (switching between streams, because I'm not sure we'll be able to run both of them at the same time), the additional mini-PC (or some other device) requirements, plus the need for simultaneous sound recording (I still haven't found an external USB1 device with Linux drivers), then we're talking about some big changes.

First I want to reach the simplest solution, and that might be a separate viewfinder + sound recorder, no matter how weird it sounds... However, what's good about the Elphel is that a lot of people can contribute to it, so we might end up with a perfect camcorder sometime. I'll do my part (the raw video encoder), others can do other parts; but for the simplest solution the encoder will be enough in itself.

Zsolt

Phil Stone
December 18th, 2006, 03:14 AM
If we think of the two types of video streams (recorded and displayed), the changes in the software (switching between streams, because I'm not sure we'll be able to run both of them at the same time), the additional mini-PC (or some other device) requirements, plus the need for simultaneous sound recording (I still haven't found an external USB1 device with Linux drivers), then we're talking about some big changes.

First I want to reach the simplest solution, and that might be a separate viewfinder + sound recorder, no matter how weird it sounds... However, what's good about the Elphel is that a lot of people can contribute to it, so we might end up with a perfect camcorder sometime. I'll do my part (the raw video encoder), others can do other parts; but for the simplest solution the encoder will be enough in itself.

Zsolt

So it's almost like two CPUs (camera bodies) but with just one sensor: one camera doing low compression and pumping the images directly to a hard drive, the other doing very heavy MJPEG compression and pumping these small images out to anything with a LAN port and Linux on it?

I was able to get usable video over a wireless LAN in my house to XP from the 333. Perhaps if it's just for preview, a wireless server would work best? Imagine using a pocket PC to watch low-bandwidth streaming video direct from the camera, being able to adjust it etc. remotely, without any cables. You could put the camera anywhere and get great video. Sound is not such an issue: there could be a marker button that creates a point in the video (clapper board?) that you can use to line up the recorded sound with the video images.

Zsolt Hegyi
December 18th, 2006, 04:12 AM
So it's almost like two CPUs (camera bodies) but with just one sensor.

That's a good idea but I have my concerns as usual...

Andrey, can we do such a thing? Use just one sensor board, divide its output in two and sync the two cameras?


I was able to get usable video over a wireless lan

That's what I was thinking too. The Elphel pushes the stream out onto a wireless network, and you can have as many receivers as you want. Controlling the camera is a bit more tricky - one has to write some interface software for that - but that's not very hard.

Zsolt