View Full Version : High Definition with Elphel model 333 camera



Andrey Filippov
April 15th, 2006, 04:49 PM
Welcome Andrey.
OK, I understand, so you're saying that USB2 will work at less than 100Mb/s on your camera. The idea of having an external sound box, like M-Audio and EMU/Creative, is very helpful.

It will be more like USB 1 - maybe a little faster. There is no dedicated circuitry/DMA access for it in the current design. But for sound it is an easy solution.

Welcome Andrey.
Something that nobody has answered is the maximum data rate of the compressed stream the camera can send over Ethernet for VP3 and for Mpeg. From this we can calculate the minimum compression achievable for any resolution, as compression heavily affects the quality of the image.

ETRAX100LX does not have hardware checksum calculation for the Ethernet, so TCP speed is limited to approximately 30Mbps. For streaming we use now UDP (no checksums) and we can get to about 70Mbps.


That will change as I'm planning an upgrade to a newer CPU (FX) that is both faster and has hardware checksum calculation. That camera will also have a somewhat bigger FPGA and twice the memory (64MB system, 64MB video, 32MB system flash) - it will likely have faster USB, but it will still be host, not device.

Current Ogg Theora implementation is not really good for general filming - it was intended for fixed-view network camera applications, so only two types of frames - INTRA (key) and INTER_NOMV (inter, no motion vectors). It gives a lot of extra volume savings only if the background is not moving. Motion vectors will wait for the next bigger FPGA :-)

The precise bandwidth for current Ogg Theora depends on multiple factors, I would say about 1-2 MB/sec is usually enough.


I know people think Mpeg is good, but for cinema even 100Mb/s Mpeg is not high grade for a 4:2:2 1920*1080 frame (though it is not bad at 720p, and lossless Bayer could fit in 100Mb/s, though not quite at 1080). Once the image is blown up to the field of view used in cinema, compression artifacts can be ten times more evident than on a computer monitor. So, VP3 performance is the deciding factor for this application, I think.

We do not have 4:2:2 - only 4:2:0 - anyway the sensor has a Bayer pattern (so only one, not 3, color components in each physical pixel) and 4:2:2 will require 3 times the amount of bits to compress compared to raw sensor data (4:2:0 - 1.5 times). The additional data is interpolated, so I believe it is a waste to calculate it in the camera and increase bandwidth and storage - you can do the same by post-processing the recordings.
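As a sanity check on these ratios, the color samples per 2x2 block of physical pixels can be tallied directly; this is a small illustrative sketch, and note that by this standard counting 4:2:2 comes to 2x raw Bayer while full 4:4:4 is the 3x case:

```python
# Color samples carried per 2x2 block of physical pixels.
formats = {
    "raw Bayer": 4,   # one color sample per pixel (e.g. R, G, G, B)
    "4:2:0": 6,       # 4 Y + 1 Cb + 1 Cr
    "4:2:2": 8,       # 4 Y + 2 Cb + 2 Cr
    "4:4:4": 12,      # 4 Y + 4 Cb + 4 Cr
}
bayer = formats["raw Bayer"]
for name, samples in formats.items():
    print(f"{name}: {samples / bayer:.1f}x the raw Bayer data")
```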

Welcome Andrey.

But, if people want to use 3Mp sensor instead of 1.3Mp they have to consider this. But so far people experiment and play.

The Micron 1.3Mp we experimented with a year or so ago had problems with blooming etc, which the 3Mp solved with a new circuit structure. Do any of the newer Microns 1.3Mp sensors solve these problems?

(1.3Mp sensors, with larger sensor pads, are an important consideration because of larger well capacity and lower noise, giving greater latitude and sensitivity.)

1.3 MPix sensors are out of production, 3.0 will be discontinued soon and we will try the new, faster 5MPix sensors. It seems to me that the quality of Micron CMOS sensors is now the best, but they are mostly interested in the high-volume mobile phone market.

On the other hand, each of their new sensors so far has been better than the previous one, so 3MPix with binning is better than 1.3, and 5MPix with binning will have approximately the same resolution as the 1.3MPix one.


Anyway, this on-camera Axis ETRAX100LX processor: would it be fast enough to stream/control the current compressed stream to an Ethernet/USB caddy?

Hope to work on the new ETRAX100FX soon

Wayne Morellini
April 15th, 2006, 09:44 PM
- it will likely have faster USB, but it will still be host, not device.
I have seen devices that allow slaves to act as hosts; I imagine there is also a way for a master to act as slave. But if it is controlling a drive caddy it would be master anyway, wouldn't it?

Current Ogg Theora implementation is not really good for general filming - it was intended for fixed-view network camera applications, so only two types of frames - INTRA (key) and INTER_NOMV (inter, no motion vectors). It gives a lot of extra volume savings only if the background is not moving. Motion vectors will wait for the next bigger FPGA :-)
That is not such a problem; some sacrifice on movement should still make it better than Mpeg. If there is converter/transcoding software out there in the Linux domain, or some editing support, then it is workable for a workflow. On the cinema project there was no direct software, but transcoding, or using a RAW format, was all that was needed.

Have you considered raising the data rate (GigE or USB2) and implementing Bayer based Lossless compression routines in the FPGA?

The precise bandwidth for current Ogg Theora depends on multiple factors, I would say about 1-2 MB/sec is usually enough.
That is not so good for cinema. I am still waiting to see what happens with RED, and various HDV and H264 cameras this year, before I make decisions on my personal path. I will have to wait for this camera then; 36Mb/s+ is preferable for Mpeg2 video work (I don't know VP3), 100Mb/s+ for Mpeg1, and double those rates are ideal for cinema work. Pity VP 4 to 7 are not available in the public domain.

We do not have 4:2:2 - only 4:2:0 - anyway the sensor has a Bayer pattern (so only one, not 3, color components in each physical pixel) and 4:2:2 will require 3 times the amount of bits to compress compared to raw sensor data (4:2:0 - 1.5 times). The additional data is interpolated, so I believe it is a waste to calculate it in the camera and increase bandwidth and storage - you can do the same by post-processing the recordings.
I thought it was compressed 4:2:0; are you saying it is Mpeg/Ogg-compressed Bayer output rather than 4:2:0?

1.3 MPix sensors are out of production, 3.0 will be discontinued soon and we will try the new, faster 5MPix sensors. It seems to me that the quality of Micron CMOS sensors is now the best, but they are mostly interested in the high-volume mobile phone market.
We noted a drop in latitude and sensitivity with the move from 1.3Mp to 3Mp; 5Mp might find it hard to keep up with the optical picture quality of 3Mp. Micron's sensors did not impress me too much; a good Ibis 5a has much more potential (suitable potential for film). A camera based on this was developed (the Drake camera), but there is a problem with poor implementations of the Ibis 5a on cameras that use internal ADCs and poor support circuits, which really destroyed the performance of a sensor that should trample Micron. The specs of the Micron are great for a phone, but consumer/prosumer grade for video work.

so 3MPix with binning is better than 1.3, and 5MPix with binning will have approximately the same resolution as 1.3MPix one.
Binning doesn't regain the fill factor lost to circuits around the sensor pad. Binning makes it around 1000 pixels across, doesn't it? Maybe binning on the 5Mp+ would get a true 1280*720; that would be a good compromise.

Andrey Filippov
April 16th, 2006, 12:37 AM
I have seen devices that allow slaves to act as hosts; I imagine there is also a way for a master to act as slave. But if it is controlling a drive caddy it would be master anyway, wouldn't it?

We are making it master only; slave USB functions are not planned (we use Ethernet to communicate with the camera).

Have you considered raising the data rate (GigE or USB2) and implementing Bayer based Lossless compression routines in the FPGA?

GigE - yes, but there was no good PHY with the documentation available w/o signing NDA. Now there is, so I'm considering it as one of the projects. USB - no, we are making network cameras.

That is not so good for cinema.

What exactly do you mean by "not good"? You want higher bandwidth or lower?


36Mb/S+ is preferable for Mpeg2 video work (I don't know VP3) 100Mb/s+ for Mpeg1, and double those rates are ideal for cinema work.

It seems we are using different units. b=bit, B=byte.

So what I meant was that at full speed (like 1280x1024x30fps) I need 1-2MB (megabytes)/s for "good" quality, when most of the background stays the same. There are ways to decrease it even more.
With the camera moving, the current implementation will give very little advantage over plain motion JPEG - I do not have real measurements, but would estimate it as under 50% difference.
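As a rough sketch of what that figure implies, assuming 8-bit raw Bayer data (one byte per pixel, an assumption for illustration):

```python
# Raw sensor data rate at 1280x1024, 30fps, one byte per Bayer pixel,
# compared against a 1-2 MB/s compressed stream.
width, height, fps, bytes_per_pixel = 1280, 1024, 30, 1
raw_rate = width * height * fps * bytes_per_pixel   # ~39.3 MB/s off the sensor
for stream_bytes in (1_000_000, 2_000_000):
    ratio = raw_rate / stream_bytes
    print(f"{stream_bytes / 1e6:.0f} MB/s stream -> roughly {ratio:.0f}:1 compression")
```

So 1-2MB/s corresponds to roughly 20:1 to 40:1 compression of the raw sensor data.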

Pity VP 4 to 7 are not available in public domain.

We do not have a VP3 implementation, only Ogg Theora, and they are not exactly the same. And Ogg Theora is licensed software (not public domain) - it comes with a BSD-style license.


I thought it was compressed 4:2:0; are you saying it is Mpeg/Ogg-compressed Bayer output rather than 4:2:0?

No, it is 4:2:0 and we send 50% more components than are actually available from the sensor (raw Bayer), creating the additional ones by interpolation. If the camera were to interpolate to 4:2:2 it would need 3 times more color components compared to raw Bayer, so in that case it would be better to compress just raw Bayer (with possible re-arrangement of blocks) and do the color conversion as post-processing.

We noted a drop in latitude and sensitivity with the move from 1.3Mp to 3Mp; 5Mp might find it hard to keep up with the optical picture quality of 3Mp. Micron's sensors did not impress me too much; a good Ibis 5a has much more potential (suitable potential for film). A camera based on this was developed (the Drake camera), but there is a problem with poor implementations of the Ibis 5a on cameras that use internal ADCs and poor support circuits, which really destroyed the performance of a sensor that should trample Micron. The specs of the Micron are great for a phone, but consumer/prosumer grade for video work.

I never tried the IBIS5 with an external ADC, but I believe CMOS sensors should work with internal ADCs - they are not CCDs (the CCDs I prefer are from Kodak - like the one in our model 323 camera). It is one of the advantages of CMOS technology that the ADC can be on-chip (you can even have an individual slow ADC for each row or column of the sensor). And the IBIS5 did not perform as well as the Micron does, and, as I wrote, the 3MPix is better than the 1.3MPix ones. BTW it has many undocumented features that, as we found experimentally, do work. Such as flip-X and flip-Y. Or binning not only by 2 and 3, but by any number up to 8 in each direction.

As for the "grade for phones" - this technology really benefits from higher volume and one of the best IC manufacturer (we all trust their memory, don't we?)

Binning doesn't regain the fill-factor lost to circuits around the sensor pad.
That is wrong. When they move to smaller elements it applies to the multiplexer transistors as well as the photo-diodes, so the fill factor stays about the same. And the dark current in 3MPix is lower (saturation by thermal current takes longer in 3MPix than in 1.3 ones). And even the same 3MPix was made in several chip release versions; each successive one had some bugs fixed.

Binning makes it around 1000 pixels across doesn't it?

I did not understand about 1000 pixels.

Frank Hool
April 16th, 2006, 01:23 PM
How big is the image plane of this thing? I guess it's standardized somehow with the C-mount?
Does it have a fiber-optic taper in front of the CCD?

Wayne Morellini
April 16th, 2006, 10:35 PM
We are making it only master, slave USB functions are not planned (we use Ethernet to communicate to the camera)

I think we are also talking about different things here. When I say master I mean it acts as master controlling slaves etc. I am no longer suggesting that you change it; I was just noting that people can buy external converters to do it if they wanted to reprogram it. But at USB1 speeds there is little need.

As far as sound goes, there are many modules, USB 2.0 as well. It should be able to control/sync external sound recording modules, or be good for minimum cinema sound: 48kHz, 2-8 channels, or stereo at 96kHz uncompressed.

What exactly do you mean by "not good"? You want higher bandwidth or lower?
Yes, 2-3MBytes/s is good for security video (unless you want high-quality identification). For consumer video it needs to be another grade again (3MB/s VP3 with motion vectors would do it). For high-quality professional video it needs to be roughly double again, for quality cinema double yet again. The highest quality cinema is lossless, double again, but I don't think low-end productions need to necessarily go that far; between pro-quality video and quality cinema should be enough (please note that a few major film releases have been transferred from consumer video to film, but even though they generally go through very heavy computer picture processing, in film transfer labs by professionals, to make them look a lot better, they still look low quality). The problem is that the larger the screen, the more of the field of vision it covers, making the resolution look smaller. So, a cinema screen can take up many times more of the field of vision than a security screen, making quality differences many times more noticeable.

With the camera moving, current implementation will give very little advantage over plain motion JPEG - I do not have real measurements, but would estimate it as under 50% difference
50% is preferable to no improvement.

Bayer compression: yes, good. If you can get the next camera up to 12.5-25MBytes/s with RAW Bayer compression, that would be really good for this market. We did find a number of lossless routines, some open source; I don't know about Bayer-specific ones, but even visually lossless is good. But I think you are more oriented to purely security video and don't really need anything more than visually lossless 99% of the time, and most of the time only up to consumer-grade video.

I never tried IBIS5 with external ADC, but I believe CMOS sensors should work with internal ADCs
That is the problem: the signal-to-noise ratio from the on-chip ADCs is lower. Steve, from Silicon Imaging, showed us samples from their camera (and the Drake camera did even better), and the difference between that and what we got from the internal one on the Sumix was day and night (well, it looked like dusk a lot of the time, actually ;).

The Kodak CCDs, are they better than the Micron, and are they still available?

As for the "grade for phones" - this technology really benefits from higher volume and one of the best IC manufacturer (we all trust their memory, don't we?)
Yes, volume helps pricing; I think the pricing is under half of FillFactory's, but they are not a very good grade compared to good film and video sensors, and even to FillFactory (which is being used by a top cinema camera company, and has been by Kodak for their top digital camera, and probably many more under NDA; the internal ADC is the "low cost" option). The Altasens, which was a high-grade sensor from the previous year, is reported to achieve up to 96dB S/N during testing; the Ibis5a can achieve something roughly in between, at the level of the previous best professional video cameras or just ahead, and 37dB-43dB is not good for low light (I still have a documentary filming interest). The Ibis5a also had other distinct advantages because of its 100% fill-factor scheme (whereas the Micron is much lower than 50% with microlenses, I believe) and global shutter. Because of these, it could get a much more even image (no "fly screening" effect that requires interpolation and filtering to cover up) and you could use a super-wide lens (under 1.0 aperture) that was a stop or two ahead of what HD microlens sensors could achieve and still get a good quality image. The larger pad and well capacity also help with range, apart from the multi-slope feature. This made the Ibis5a a good compromise for cinema cameras over more costly, higher-performing sensors. I know, because I was in contact with the engineer of the Drake camera from the very beginning, before it became the Drake.

I still think the Micron is good for a cheap cinema/doco camera, as it is as good as or better than some prosumer HDV cameras. I would be surprised if it could match a mid-range camera like the Sony XDCAM HD 1/2 inch, though.

That is wrong. When they move to smaller elements it applies to the multiplexer transistors as well as the photo-diodes, so the fill factor stays about the same. And the dark current in 3MPix is lower (saturation by thermal current takes longer in 3MPix than in 1.3 ones). And even the same 3MPix was made in several chip release versions; each successive one had some bugs fixed.
I agree with you from that perspective, but if they use the same smaller process to make a 1.3Mp, its circuits shrink, allowing for even more pad space, and there are other issues that I won't get into here. But concentrating on the mobile market, I suspect they have little reason to keep older resolutions in the same sensor format size.

I did not understand about 1000 pixels.
The sensor is around 2 thousand pixels wide, and a binning of two halves that, which is why I am holding out hope that the 5Mpixel chip, with a binning of two, will be closer to 720p's 1280 pixels. Maybe the situation will turn out much better for the Micron chips; this now interests me. If they could only raise latitude and S/N, and add multi-sampling, it would turn the situation around a lot.
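For reference, here is the binning arithmetic, assuming the usual array sizes for Micron parts of that era (1280x1024 for 1.3MP, 2048x1536 for 3MP, 2592x1944 for 5MP; the exact array sizes are my assumption, not stated in the thread):

```python
# Output size after 2x2 binning for each assumed sensor array size.
sensors = {"1.3MP": (1280, 1024), "3MP": (2048, 1536), "5MP": (2592, 1944)}
for name, (w, h) in sensors.items():
    print(f"{name}: {w}x{h} -> 2x2 binned: {w // 2}x{h // 2}")
# The 3MP part bins to 1024 across, short of 720p's 1280;
# the 5MP part bins to 1296, just clearing it.
```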

Well Andrey, thanks for clearing these things up for me, I had been wondering about them for a while; I can stop now and wait to see the next camera and 5Mp sensor. I have had a little voice in me for a while telling me not to buy the present model, and now I understand why: it can process the frame rate but not the data rate I desire.


Thanks

Wayne.

Andrey Filippov
April 17th, 2006, 12:22 AM
But at USB1 speeds there is little need.
As far as sound, there are many modules, USB2.0 as well. It should be able to control/sync external sound recording modules, or be good for minimum cinema sound 48Khz 2-8 channels, or stereo at 96Khz uncompressed.

USB1 will easily handle 96kHz audio.
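A quick rate check supports this, comparing uncompressed stereo PCM against USB 1.1's 12 Mb/s full-speed bus:

```python
# Uncompressed stereo audio bitrates vs the USB 1.1 full-speed bus (12 Mb/s).
usb1_mbps = 12.0
for rate_khz, bits in [(48, 16), (96, 16), (96, 24)]:
    mbps = rate_khz * 1000 * 2 * bits / 1e6   # 2 channels
    print(f"{rate_khz}kHz/{bits}-bit stereo: {mbps:.3f} Mb/s "
          f"({mbps / usb1_mbps:.0%} of USB1 bandwidth)")
```

Even 96kHz/24-bit stereo needs only about 4.6 Mb/s, well within USB1's raw bandwidth.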

Yes, 2-3MBytes/s is good for security video (unless you want high quality identification).

I'm still confused with your question. I think I wrote both - the bandwidth required by Ogg Theora with the setting I consider good and the data rate we can send from the current (ETRAX100LX-based) camera (70Mbps).

That is the problem, signal to noise ratio from the on chip ADCs is lower...

You mean - lower in FillFactory sensors or in any sensor? If you mean the first - yes, probably. If the second - I would not agree. The CMOS technology is the same in the sensors as in the ADC, so if the company is good in both areas (or licenses the ADC design) it should be better from the S/N point of view. To say nothing of the fact that (as I wrote last time) you can put a thousand slow ADCs on-chip (one for each column) - something completely impossible for the off-chip solution.

The Kodak CCDs, are they better than the Micron, and are they still available?
You mean CCDs? Yes, they are - and we use some of them in our model 323 cameras (http://www.elphel.com/3fhlo/), but I could not find one that combines resolution and speed of the Micron CMOS imagers.


Yes, volume helps pricing, I think the pricing is under half of fill-factory's,

It might change now, but when I was buying them the price difference was more like 10x, not 2x :-)

... and has been by Kodak for their top digital camera, and probably many more under NDA, but the internal ADC, is the "low cost" option). ...

That camera was not really "the top" from the performance point of view, but I do agree that FillFactory has interesting sensors and nice features like multi-slope. And it is not about FillFactory (now Cypress) vs. Micron - it is that a high-performance ADC should be part of the sensor I believe. And all the CMOS imagers will have it sooner or later.

If they could only raise latitude and S/N, and add multi-sampling, it would turn the situation around a lot.

Or Cypress will have a decent ADC on-chip :-)


Well Andrey, thanks for clearing these things up for me, I had been wondering about them for a while, I can stop now and wait to see the next camera and 5Mp sensor.
The design I'm working on right now has 12x that resolution, but the frame rate is way smaller. Hope to get my hands on the 12-bit, 96MHz, 5MPix Micron sensor soon too.

it can process the frame rate but not the data rate I desire.

That will stay the same in the next camera too - 100Mbps network connection.

Wayne Morellini
April 17th, 2006, 02:43 AM
I'm still confused with your question. I think I wrote both - the bandwidth required by Ogg Theora with the setting I consider good and the data rate we can send from the current (ETRAX100LX-based) camera (70Mbps).
I'm sorry, I confused what you said; I thought you meant that the throughput of the Ethernet was a max of 70Mb/s (rather than 100Mb/s) and that the codec was limited to 3MBytes per second. So, are you saying that the Ogg codec can do 70Mbit per second (around 9MB/s)? That definitely helps. I was actually aiming to look at the length-to-size ratio of some of your sample footage to verify this anyway.

ADCS quality:
In particular it is lower on the Ibis5a, but also on many chips, because of thermal noise considerations etc. and the quality of high-end ADCs. There is more to silicon sensor quality (and ADCs) than normal silicon circuits (on good ADCs they go beyond silicon); just because a company is good at one does not mean it is good at another. But, seriously, I don't think Micron aims to make costly top-quality sensors to put in mobile phones and security cameras; I think they might aim for cheap top-quality mobile and security sensors instead.

Kodak:You mean CCDs? Yes, they are - and we use some of them in our model 323 cameras (http://www.elphel.com/3fhlo/), but I could not find one that combines resolution and speed of the Micron CMOS imagers.
But will it do a 720p or 1080p frame at 25fps?

Ibis5a price:
It might change now, but when I was buying them the price difference was more like 10x, not 2x :-)
I am speaking of price drops last year for mass quantity on the Ibis, maybe the Micron price was older, so maybe it was not the best comparison.

That camera was not really "the top" from the performance point of view

I thought it was the top of the Kodak range for a 35mm sensor when released, but things, of course, have now moved on.

it is that a high-performance ADC should be part of the sensor I believe. And all the CMOS imagers will have it sooner or later.
I agree. I was shocked at the quality coming out (I would imagine that some of the external circuitry issues I heard about might have something to do with it too). Even for a "me too" on-chip ADC for lower-cost applications, I was not impressed; I think the on-chip ADC should be much better. I don't have S/N figures for it, but I would not be surprised if it was not too far away from the Micron's 37dB (6-bit accuracy). But it doesn't matter; the FF chip is just too expensive for you to put in your cameras at their price point. Though if you wanted a really cheap chip with a multi-slope-like feature (apart from decent S/N, 48dB minimum, 60dB+ preferable; multi-slope is worth looking at because of its latitude-extending properties), then Smalcamera is now owned by Cypress as well. Their feature is called Autobrite, and I think it adjusts the gain on a pixel-by-pixel basis instead, but I am not sure. Though I expect the quality might be a bit low for non-security applications. There is another company with sensors for security cameras with a multi-slope-like feature that sounds like the Smalcamera one, but I can't locate the web link at the moment.

The design I'm working on right now has 12x of that resolution, but frame rate is way smaller.
That doesn't really worry me, as long as it can bin down to close to at least the horizontal size of 720p or 1080p frame.

There has been talk of upcoming 5GHz programmable gate array technology; is that anywhere close to a commercial product?

That will stay the same in the next camera too - 100Mbps network connection.
Well, with 100Mb/s and a full implementation of Ogg with motion etc., at least the quality should be good for cinema. I don't know whether you can get good lossless Bayer results in 100Mb/s for 1080p, though. I have just realised I have some compression ideas that might help this situation.

Before I get to these techniques I will share another one that I had previously, which it also turns out people are using in film restoration. Noise reduction should improve existing performance, and the performance of lossless compression, immensely. Most compression performance is lost in the last 2 bits of an image, because they contain the most noise. If you eliminate this noise you ramp up the compression that should be achievable at the same quality. Basically, rather than just finding a pixel of noise and interpolating it out with the surrounding pixels, the pixel itself might still contain some information (in 3-chip, the other channels might contain the information), and the preceding and succeeding frames contain information about the piece of image that should be in that pixel. By using this extra information you can restore the pixel with great accuracy, producing a cleaner image to compress. This would be of great performance benefit to the techniques below.
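A toy demonstration of this point, with zlib standing in for the camera's lossless coder: noise confined to the two least-significant bits of an otherwise smooth synthetic "image" wrecks the compression ratio.

```python
import random
import zlib

random.seed(0)
width = 256
# A smooth horizontal ramp repeated for 256 rows: a very compressible "image".
clean = bytes(x * 255 // (width - 1) for x in range(width)) * 256
# The same image with random noise added in the two LSBs of each pixel.
noisy = bytes(min(255, b + random.randrange(4)) for b in clean)

clean_size = len(zlib.compress(clean, 9))
noisy_size = len(zlib.compress(noisy, 9))
print(f"clean: {clean_size} bytes, noisy: {noisy_size} bytes")
```

The noisy version compresses far worse even though the visible picture content is essentially identical; removing such noise first is what buys the compression back.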


Thanks

Wayne.

Wayne Morellini
April 17th, 2006, 02:47 AM
While I wish to keep the most efficient ones for commercial reasons, I have also been talking about some lesser, helpful ideas around here that might help. I will attempt to get back later with links to previous discussions that outline them. But the basic idea is to store succeeding pixel values as the difference from preceding ones, and to store the differential between succeeding frames. Now, all this data is compressed with run-length encoding and the usual sorting/mnemonic representation compression techniques used in regular video compression and in fax-like compression techniques.

Now, the beauty is the next method, to reduce the differential even more. We know that in an image, luminance generally changes more often than chrominance, so colours are less variable pixel to pixel than luminance. This generally helps a debayer algorithm predict the other primaries for each pixel position. But with this scheme, what I propose is that the preceding/surrounding pixels' values be used to establish what the pixel should be in that primary colour (using the previous/surrounding proportion of that colour present) as the base value for the differential, thus reducing the amount of data needed drastically. We also use the surrounding pixels to estimate an interpolated prediction to modify the base value. The whole basis is to use estimation/prediction (which does not have to be recorded, as the decompression software makes the same prediction) to reduce the data size before final compression, in a format hopefully more compressible by the final compression. There are more sophisticated things that could be done than this, some of that commercial stuff I mentioned, but as you see, the work would mostly be simple comparative circuits, plus the run-length/mnemonic coding you already use.

I'll just summarise, so I can remember: prediction based on the previous pixel and interpolation of surrounding pixels, plus the previous proportion of that primary colour, modified for the primary colour at the present pixel. Once the Bayer data is reconstructed, it is then debayered for display.

Of course, the interpolation of surrounding pixels that have not yet been calculated, in the decompression algorithm, would require some fancy maths, but a simpler, effective form of it can be done without interpolation of unprocessed pixels.
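A minimal sketch of the core differential idea (my simplification: prediction from the previous same-colour pixel only, leaving out the surrounding-pixel interpolation and colour-proportion refinements described above):

```python
# On a Bayer row the same colour repeats every 2 pixels, so each sample is
# predicted from the value 2 columns back and only the residual is stored.
# In a real codec the residuals would then be run-length/entropy coded.

def bayer_row_residuals(row):
    """Forward transform: residuals after same-colour prediction."""
    return [v - (row[i - 2] if i >= 2 else 0) for i, v in enumerate(row)]

def reconstruct(residuals):
    """Inverse transform: the decoder repeats the same prediction."""
    row = []
    for i, r in enumerate(residuals):
        row.append((row[i - 2] if i >= 2 else 0) + r)
    return row

row = [100, 50, 102, 52, 104, 54, 103, 55]   # made-up interleaved G/R samples
res = bayer_row_residuals(row)
print(res)                        # residuals cluster near zero
assert reconstruct(res) == row    # lossless round trip
```

Because the prediction uses only values the decoder already has, nothing extra needs to be recorded, which is exactly the property the scheme above relies on.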

I think I have posted a 3-chip version of this scheme somewhere as well.

This is just one of the several different areas of high performance compression technique I would like to use. It is also one of the most expendable, and potentially one of the least effective in compression performance.


Thanks

Wayne.

Frank Hool
April 17th, 2006, 06:08 AM
How big is the image plane of this thing? I guess it's standardized somehow with the C-mount?
Does it have a fiber-optic taper in front of the CCD?

Answers to my own questions:
image plane = 6.55mm*4.92mm
registration distance = 17.52mm
FO taper = no
am i right?

Oscar Spierenburg
April 17th, 2006, 11:39 AM
Jef, what compression artifacts can you see? Do you mean the image looks soft, or do you see JPEG squares (like the grid-looking thing)?

To come back to this discussion... don't forget after effects plug-ins like 'Re:Vision SmoothKit - Staircase Suppress' can reduce those artifacts greatly. I made a quick(!) test on one of Forrest's framegrabs. I did a big contrast and color saturation boost to show those blocks. On the right is the one with Staircase Suppress.
http://s03.picshome.com/d29/staircasesuppress.jpg

Andrey Filippov
April 17th, 2006, 11:47 AM
Answers to my own questions:
image plane = 6.55mm*4.92mm
registration distance = 17.52mm
FO taper = no
am i right?

yes, you are right

Forrest Schultz
April 17th, 2006, 07:36 PM
I do not even know how this is possible, but I was able to stream

1600 x 896 frame size @ 24fps (16:9 aspect ratio; note: 900 had to be 896)

and 1920 x 816 @ 24fps (2.35:1 aspect ratio)

This is unreal! But I checked everything in the AVI file on a timescale and made sure there are 24 frames for every second of footage, and there are! It is true 24 at 1600 x 896 and 1920 x 816.

The only downside I see of using more of the sensor is that the electronic rolling shutter artifact is more noticeable. (At 1280 x 720 you're only using a small portion of the sensor, so the electronic rolling shutter can sweep through unnoticed.) Even at 1600 x 896 the rolling shutter isn't too bad, but at 1920 x 816 you see its impact a lot because of the widescreen. For shots without quick whips or shaky handheld footage, though, the rolling shutter shouldn't be too bad.

I am just amazed that the camera can actually stream these sizes at these speeds. I find it unbelievable. Is there a reason why this is possible, Andrey? Thanks.

Andrey Filippov
April 17th, 2006, 09:02 PM
I do not even know how this is possible, but I was able to stream

1600 x 896 frame size @ 24fps (16:9 aspect ratio; note: 900 had to be 896)


can be up to 26.93fps


and 1920 x 816 @ 24fps (2.35:1 aspect ratio)


up to 24.57, that's correct

BTW - I've just made some FPGA mods to provide an exact (to 1 usec) timestamp for each frame (measured at the start of the first line in each frame) - I just need to add software to deliver this info through the stream.

is there a reason why this is possible Andrey? thanks

333 can fully support the 3MPix Micron MT9T001 - there are timing calculation formulae in the datasheet (http://download.micron.com/pdf/datasheets/imaging/MT9T001_3100_DS.pdf)
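The datasheet's timing formulae boil down to frame time = (width + horizontal blanking) x (height + vertical blanking) pixel clocks. A hedged sketch of that calculation (the 48MHz clock and the blanking values here are illustrative assumptions, not the MT9T001's actual register defaults):

```python
# Approximate maximum frame rate from sensor timing:
# fps ~= pixel_clock / ((width + hblank) * (height + vblank)).
def max_fps(width, height, pixel_clock_hz=48e6, hblank=400, vblank=30):
    pixels_per_frame = (width + hblank) * (height + vblank)
    return pixel_clock_hz / pixels_per_frame

for w, h in [(1600, 896), (1920, 816)]:
    print(f"{w}x{h}: roughly {max_fps(w, h):.1f} fps")
```

With these assumed numbers, both window sizes land in the mid-20s fps, in the same range as the 26.93 and 24.57 figures quoted above.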

The new 5MPix 96MHz sensor will need an upgrade in the camera to keep up with the maximal frame rate (the pixel rate can be 96MHz even with the 333), and it can also benefit from 12 (vs. 10) bits - the model 323 camera, which uses the older 313 boards, receives 14-bit data.

Frank Hool
April 18th, 2006, 12:54 AM
One more question, Andrey. Do you have any experience using non-C-mount lenses with such a camera? In other words, is it possible to use an FO taper in front of the CCD to get a larger image plane? That would be very helpful for using 35mm photo lenses with full functionality.

Wayne Morellini
April 18th, 2006, 01:17 AM
Did I say something wrong?

Speed:
So, are you saying that the Ogg codec can do 70Mbit per second (around 9MB/s)? That definitely helps.

Kodak sensor:
..will it do a 720p or 1080p frame at 25fps?

There has been talk of upcoming 5GHz programmable gate array technology; is that anywhere close to a commercial product?

The Bayer differential compression algorithm concept?


Thanks

Wayne.

Konstantin Serafimov
April 18th, 2006, 03:25 AM
I apologize for nitpicking here, but is the MT9T001 the only Micron sensor that does global shutter? So why do you have rolling shutter issues at widescreen resolution then?

And, honestly, all the footage placed in this thread is either overburn or noisy due to low light. Is it camera, or shooter or setup difficulties? In this form, it slightly misses main reason of sample footage - to show the device in its best.

to go a bit conceptual... i think from point of this forum, there are 2 features absolutely nesessary to make a good industrial camera head usable for digital filmmaking. First is direct recording of quality stream to harddrive and second is a way to monitor the picture on a device attached to the camera.

As far as i understand, the present camera is a kind of sbc with fpga, so ideally it should be sbc with vga-out and ide/sata, where fpga doing debayer and resize function for preview. But it is different concept (from network camera) and camera board redisign. Not good. And even that ideal system, would require 2 person team to operate. 1 to control the camera settings via laptop and 1 to actually point and shoot. To eliminate this problem there must be all in one place: lense, head, recorder and screen. Why not attach a compact capable laptop to the head then, like a Tablet PC? And a battery bank.

But in general, how could you summarize the reasons to use such a camera in real life shooting. Except very special areas, like permanent multicam video system in small music club, where it is probably excelent option to take already, and a toy for filmmaker wannabe who loves cameras more than above mentioned fimmaking? Iam not offending anybody here, of course, except myself :).

Wayne Morellini
April 18th, 2006, 04:48 AM
Konstantin,

I have covered most of these points a few times, even with Andrey by email in times past. And yes, I have noted what you said, but I was being kind not to point it out, because it does not prove how much the camera can do (and the sensor and compression have a lot to do with it).

What I have been discussing here are ways to maximise performance to get it up to quality output, and defining what the limits are - including a practical Bayer scheme.

The problem is that it is a security camera (not industrial), so it is made for good security quality, and limited. A little extra quality and we can get to at least pro video quality. The Kodak sensor might deliver a lot better performance, but I note that Micron has extended-latitude features now (their MT9V032 wide-VGA security chip PDF shows 32K lux and 2 lux test charts in the same shot). So there is hope for this to turn up on the HD chips.

Computer:
If you want a small platform, then car PCs (about the size of a DVD drive), Origami/UMPC devices, and small cheap Tablet PCs are worth looking at. There might be one out there with a higher-res screen and microphone input/s.

Have fun with it, we need your sort of thinking around here.


Thanks

Wayne.

Forrest Schultz
April 18th, 2006, 11:57 AM
Yes, you are correct that none of the test footage was shot under ideal conditions. Also, I recently found that the zoom on the lens I am using has to be turned just a nudge away from 13.5mm to actually be at true infinity focus, where everything is in focus. I think the reason is that I can go past 13.5mm into the macro function, and I was too close to macro. And note my lens is at f1.8, wide open.

Can I ask if you have seen all the footage posted at the link? There is one shot called guitar chair, and I wouldn't call that blown out. I had some focus issues on it perhaps, but nothing much else.

And of course we can say there are improvements to be made, but that can always be said. What Andrey has here is a high-definition network camera that he probably didn't plan to be used for this purpose. And he has made a great camera capable of a beautiful picture, and I applaud him for that. He has been very generous and helpful in moving this project forward. I will have to try a little harder to get shots out that can please you a little better. And it's not a bad thing; it keeps me on my toes, which is good. I am going to shoot some stuff right now, in fact. I'll see what I can do. Thanks.

Oscar Spierenburg
April 18th, 2006, 01:30 PM
Yes... but please don't turn this forum into a commercial discussion. We are discussing the first attempts at using a security camera for film making.
Forrest is sharing his experiments here; he's not selling a camera or anything. Please share all footage, good and bad.

I haven't received my camera yet, but they say it should arrive in a few days.

Forrest Schultz
April 18th, 2006, 01:39 PM
Yes, thank you, Oscar. We should discuss and show what the camera is capable of, not "what the camera should be capable of in a perfect world".

That's what this thread is about: to show the progress of using this as a film camera. And here is some more footage I shot this morning.

The setup was this: bright sunny morning. The subject is under the shade of the tree in my backyard; the sky in the background was basically white to my eyes, with a little blue tint, but it was very bright. I tried my best to show that the camera can capture a subject in the shade while still preserving the colors of a bright background such as the sky.
Here is the file: http://www.savefile.com/files.php?fid=2614781

It is at full 1600 x 896 resolution, but I converted it to WMV for space. Slower computers (like mine) might have a bit of trouble playing back the high bit rate of the video. If anyone has this problem and wants to see a lower-res version, let me know.

Konst Seraf
April 18th, 2006, 03:01 PM
Well, I should apologize for being extra critical; the "chair" file is correctly exposed and overall visually pleasant. But there was one problem that became apparent in the last file, with the girl: a lack of crispness. May I assume that the lens you are using is not even an industrial-purpose megapixel lens? You wrote in your initial post that it was some combination of a 35mm model with a wide-angle adapter. It may not be adequate.

I had no intention of starting a commercial discussion. But I think that, even as a thing in itself, a project like this still requires some perspective. What is it good for, and what is the goal? The only reason to use such a modular system in a real shooting process is exceptional video quality - let's say noticeably better than present prosumer HDV. An interesting task.

Eric Gorski
April 18th, 2006, 03:27 PM
That last clip is very nice. The footage you're shooting reminds me a lot of clips that a board member by the name of Obin was shooting a year ago with a similar camera, possibly the same chip?

Forrest Schultz
April 18th, 2006, 03:37 PM
Konstantin, you do not need to apologize at all. Your comments were very helpful in reminding me that I must continue to improve the progress and quality.

The lens I am using now is an f1.8 C-mount zoom lens. It is the correct type of lens, but perhaps its glass isn't good enough for the camera's resolution. That might be the case. I just finished ordering a 25mm f1.4 C-mount prime lens. With that lens, I am going to build the 35mm adapter onto the camera, with the prime serving as the relay lens.

I will post footage with that f1.4 to see how it handles the resolution also. Thank you.

And thank you, Eric. I can't remember what chip Obin used, but the process we're following is quite similar. The chip in this model camera is a 3MPix Micron MT9T001 1/2" CMOS.

Andrey Filippov
April 18th, 2006, 07:47 PM
http://www.savefile.com/files.php?fid=2614781
It is at full 1600 x 896 resolution, but I converted it to WMV for space.

It seems some frames were still lost - MPlayer shows 21.75 fps average (696 frames in 32 sec).

I hope it will soon be easier to troubleshoot - each frame will have a precise timestamp.

Forrest Schultz
April 18th, 2006, 08:41 PM
Oh wow, I didn't check that one. Let me see if that has happened to some more recent footage I did. Thanks, Andrey.

EDIT: well, it seems that things got a bit weird when I converted to .wmv

The original footage, converted to AVI from OGM via VirtualDub, shows a full 24 fps - 24 frames for every 1 second of footage. For some reason the WMV version doesn't want to play right; also, Windows Media Player states a 29 sec duration for the WMV. Perhaps when MPlayer says 32 sec it is not playing it right? I don't know.

Forrest Schultz
April 18th, 2006, 09:14 PM
I also think at 1600 by 896 I am really maxing it out. It seems on certain footage it will do 24fps fine, but when I have footage with motion and pans, I lose frames. I think I need to scale down the resolution; 1600 by 896 is not really needed anyway. Could this also cause the excess electronic rolling shutter artifact - the fact that I am trying to grab more frames than possible at that resolution? Thanks

Andrey Filippov
April 18th, 2006, 10:58 PM
I also think at 1600 by 896 I am really maxing it out. It seems on certain footage it will do 24fps fine, but when I have footage with motion and pans, I lose frames.
Forrest,

I think it will be easier to locate problems with frame timestamping. The only thing you can try right now is to set a low quality and see if it helps (eliminates frame drops). You see, the FPGA compressor can now process all the data from the sensor at any quality setting without drops - this frame rate is calculated and reported in the web interface (of course there might be bugs, and with some settings the calculation of the frame rate can be wrong). But sending the compressed data out requires CPU activity, and higher bandwidth needs more CPU power, so it can limit the frame rate. If reduced quality helps, that is probably the case.

Did you use autoexposure? It also uses the CPU and can steal some of it from the streamer - especially if the illumination of the scene really changes (when you move the camera).

Forrest Schultz
April 18th, 2006, 11:27 PM
I think the main problem is my CPU speed; I'm only at 1.5 GHz right now. I will buy a 2.8 GHz processor soon, and that should help a lot, because it's when I go for better-quality compression that it usually drops. Thanks, Andrey.

Andrey Filippov
April 19th, 2006, 02:50 AM
I think the main problem is my CPU speed; I'm only at 1.5 GHz right now. I will buy a 2.8 GHz processor soon, and that should help a lot, because it's when I go for better-quality compression that it usually drops. Thanks, Andrey.

No, Forrest, I meant the CPU in the camera (with "Axis" written on it - http://wiki.elphel.com/index.php?title=Image:Board_overside.png ), not in your PC. Just recording is not such a demanding application on the PC side, so the frame drop is probably happening in the camera in that case.

Forrest Schultz
April 19th, 2006, 09:27 AM
Oh OK, I see now. Thanks, Andrey.

Forrest Schultz
April 21st, 2006, 01:05 AM
Hi guys. I was looking at my footage compared to the Elphel cameras Andrey has set up.

camera4.elphel.com

and such

and I am noticing a big difference in resolution, and I'm pretty sure it's the lens I'm using. I don't think it was the best quality C-mount lens made; it doesn't even have a brand name written on it.

So I ordered two new lenses. One is a 12mm f1.4 C-mount prime lens and the other is a Computar f1.3 5-50mm CS-mount zoom lens. The Computar costs 135 brand new, so I think that might do the trick. As for the other C-mount (the f1.4 one I bought), eh, I don't know if it will be good either, but the Computar I am hoping will get 'er done.

Both lenses are on their way, so when I receive them I will test with them.
PS, I have already tested with a 35mm adapter, and the results are good - great DOF. But I need a better lens, because the quality of the glass really matters in this case. Also, the ground glass I used for the test was just clear plastic diffused on one side. It was a stationary piece, so you can see all the scratches and such. When I get all the new lenses and a new ground glass, I will post tests with that rigged up. And after that, it's off to build an optical viewfinder for the 35mm adapter. More on that later.

Wayne Morellini
April 21st, 2006, 09:42 AM
On Obin's 4:4:4 thread he had the same problem. They eventually tracked down the best C-mount lens, but found lenses from good film cameras very good (what format would that be, 8mm?). Have a read there; it should be within the first 1-10% of the thread.

Régine Weinberg
April 21st, 2006, 01:41 PM
A silly question:
on the Axis page
there is nothing to be found about their CPUs.
Maybe I'm stupid, it seems so.

Forrest Schultz
April 21st, 2006, 03:30 PM
I did a test today, so that when I get the new lenses I will be able to compare the focus abilities. The chart was hung in the shade, out of direct sunlight.

http://i11.photobucket.com/albums/a162/vericetti/colortest1.jpg

Please note the colors on that chart are not your standard R G B; the blue is a deeper blue and the green is a deeper green also. That is how they are on the paper.
When I get a chance, I'll print out color-correct R G B and shoot that.

Here is the same thing with a longer exposure:

http://i11.photobucket.com/albums/a162/vericetti/colotest2.jpg

John Wyatt
April 22nd, 2006, 05:18 AM
Forrest -- a while back I started a thread called "Call for C-mount lens info" here in Alt Imaging, there might be some info there for you...
John.

Régine Weinberg
April 22nd, 2006, 06:51 AM
Dear Wayne,
16mm lenses are available with really very good quality;
C-mount to PL adapters are there also.

I used Nikon lenses on an Aaton S16;
the wide angle was the best for the money
we could get.

C-mount is industrial - for watching chicken farms or so it's quite OK LOL

Wayne Morellini
April 22nd, 2006, 07:19 AM
I think that was another of Obin's threads.

Ronald, 16mm lenses - but aren't these sensors far smaller than 16mm?

Be prepared: I have found something interesting, and if I get it home, I may post on it in future.

Wayne Morellini
April 22nd, 2006, 07:21 AM
I found nothing but trouble trying to view camera4.elphel.com. I get one photo with a box, and one web page that says "loading".

Noah Yuan-Vogel
April 22nd, 2006, 09:01 AM
I'm impressed with the color and ease of use of this camera, and I know it can create sharp images using a nicer lens, since I am using the same sensor. What kind of IR filter is being used? Is it all built in? Because I've had some problems getting good color out of my camera. Otherwise, what I'm really interested in is the quality of the MJPEG compression: what is the least compression it can apply? All the way up to the ~7MB/s that 100BT can handle? That was a little unclear. Another really important thing: how is the framerate control? Accurate? You really need an accurate and consistent framerate for sound sync. Also, does it give you control over vertical blanking to keep rolling shutter artifacts under control?


BTW, this is the same sensor Obin was using and decided to scrap, due to rolling shutter or something. Also, these sensors aren't that much smaller than 16mm; this one is 1/2", and 16mm I think is somewhere between 2/3" and 1". Although when running at 1600x896, only about a 1/3" area is being used.

Also, Forrest, I noticed you mentioned rolling shutter artifacts at your high resolutions. Rolling shutter is definitely a big problem with this sensor; I've even run it at 720x540 with 1200+ lines of vertical blanking and still been able to notice significant rolling shutter artifacts in fast horizontal pans. It makes me kind of worry this sensor will never be appropriate for filmmaking. Now, a 96MHz sensor sounds more like it, but if it's 5MP it will likely have to be run at a <2MP windowed resolution, using only about 1/4" of sensor area, if we want minimal rolling shutter artifacts and the framerate we want...
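The size of the artifact Noah describes is easy to estimate: the skew between the top and bottom of the frame is just the number of active rows times the line readout time. A quick sketch (the line time here is an assumed round number, not a measured value):

```python
# Rolling-shutter skew between the first and last row of the frame.
# line_time_us is an assumed illustrative value, not a measured one.

def skew_ms(active_rows, line_time_us):
    return active_rows * line_time_us / 1000.0

# 896 active rows at an assumed ~40 us per line: tens of milliseconds
# of skew, which is why fast pans show visible shear.
print(round(skew_ms(896, 40.0), 2))
```

Note that vertical blanking lowers the frame rate but does not shrink this skew, which is why reducing the window height (fewer rows to read) helps more than blanking does.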

Forrest Schultz
April 22nd, 2006, 03:02 PM
I agree with everything you said, Noah. It's also nice to know I can get a sharper picture with a better lens attached.

At 1600x896 you see more of the sensor, therefore the rolling shutter is more evident. But that resolution isn't needed, plus it isn't too stable; 1440 by 800 is much more stable, and even 1280 by 720. I will do tests with all of these when I get the new lens. I haven't seen frames drop except in 1600 by 896 mode. I don't know if there is control over vertical blanking; I don't know how that works. Thank you for your help.

Andrey Filippov
April 22nd, 2006, 06:22 PM
I found nothing but trouble trying to run camera4.elphel.com . I get one photo with box and one web page that says loading.

Wayne, there might be problems connecting to these cameras - the interface is all open, and visitors sometimes leave it with some weird settings. I need to make a simple script to periodically reset it to a known state.

What browser did you use when you saw only "loading"? The interface is currently designed only for Mozilla Firefox, but I've heard that it also works with Opera.

Andrey Filippov
April 22nd, 2006, 06:53 PM
I'm impressed with the color and ease of use of this camera, and I know it can create sharp images using a nicer lens since i am using the same sensor. what kind of IR filter is being used? is it all built in?

It is a dielectric filter (t=0.3mm) and it is glued in. We can machine a version of the front-end piece (a little shorter, to compensate for the optical length difference) - it should be rather easy to replace.

Otherwise what im really interested in is the quality of the mjpeg compression, what is the least compression it can apply?

Any standard JPEG quality up to 100% - virtually lossless (all quantization coefficients equal to 1.0) - but in most cases it will not work because of the limit in network bandwidth. The maximum we've got with the ETRAX100LX is about 70 Mbps; that will increase with the new processor.

All the way up to the ~7MB/s that 100BT can handle? That was a little unclear. Another really important thing: how is the framerate control? Accurate? You really need an accurate and consistent framerate for sound sync. Also, does it give you control over vertical blanking to keep rolling shutter artifacts under control?

Yes, it is possible. When you set an fps limit, it adds vertical blanking to match the frame period. The frame rate is not set very precisely - only to an integer number of lines - but we now have a project where we need much better, so it will be done. Currently I have added precise timestamping of the frames (to one microsecond), but we still need to make a software interface to it.
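The blanking arithmetic the driver performs might look roughly like this (the clock and horizontal-blanking numbers are assumptions for illustration, not the driver's actual constants):

```python
# How many extra blanking lines are needed to stretch the frame period to
# a target frame rate.  Clock and horizontal-blanking figures are assumed.

def vblank_for_fps(pixel_clock_hz, width, height, hblank_px, target_fps):
    line_clocks = width + hblank_px                   # clocks per line
    total_lines = pixel_clock_hz / (target_fps * line_clocks)
    return max(0, int(total_lines) - height)          # lines beyond the window

# 48 MHz, 1600x896 window, assumed 300 px horizontal blanking, 24 fps target
print(vblank_for_fps(48e6, 1600, 896, 300, 24))
```

Because blanking comes in whole lines, the achieved rate only approximates the target - which is exactly the "integer number of lines" precision limit mentioned in the post.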

significant rolling shutter artifacts in fast horizontal pans.

This I strongly believe can be solved with software post-processing, especially if the distortion is related to the camera pan, not the object moving. It can use algorithms similar to those used when frames are interpolated in conversion from 24 to 30fps (or the opposite). Each line is exposed at an exactly known time, so it is possible to interpolate between two frames.
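The per-line interpolation idea can be illustrated with a toy example (this only shows the weighting scheme on 1-D rows; it is not Elphel's code, and a real de-skew would also need motion estimation between the frames):

```python
# Toy per-line rolling-shutter blend between two consecutive frames.
# Each "frame" here is a list of per-row brightness values; a real
# implementation would operate on images with motion compensation.

def deskew(frame_a, frame_b):
    n = len(frame_a)
    out = []
    for row in range(n):
        # Row 0 is exposed at the frame start and the last row almost a
        # full frame later, so later rows lean more on the next frame.
        w = row / n
        out.append((1 - w) * frame_a[row] + w * frame_b[row])
    return out

print(deskew([0.0, 0.0, 0.0, 0.0], [8.0, 8.0, 8.0, 8.0]))
```

The known per-line exposure times are what make the weights well-defined; without precise timestamps, the blend factors would have to be guessed.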

Wayne Morellini
April 23rd, 2006, 07:40 AM
Wayne, there might be problems connecting to these cameras - the interface is all open, and visitors sometimes leave it with some weird settings. I need to make a simple script to periodically reset it to a known state.

What browser did you use when you saw only "loading"? The interface is currently designed only for Mozilla Firefox, but I've heard that it also works with Opera.

Opera. I didn't even know anything about the controls; I just saw a picture in the pop-up, and some sort of bordered region in that.

Wayne Morellini
April 23rd, 2006, 07:54 AM
Obin adjusted more than the frame rate to reduce rolling shutter. He shot at double the frame rate and had to adjust the other parameters of the frame timing to reduce the rolling shutter further (big tip). I doubt the Elphel is set up for any of this normally (like overclocking the sensor, compressing every second frame, and dropping the others to get the 180-degree reduced rolling shutter). It is all mentioned in the first part of Obin's thread; I asked some questions to clarify it, and it is worth reading to find out what to change.


Thanks

Wayne.

Noah Yuan-Vogel
April 23rd, 2006, 08:44 AM
Yes, I recall that, but when it all comes down to it, isn't it all just keeping the frequency as high as possible while capturing a certain number of pixels? I mean, in terms of reducing rolling shutter, isn't capturing a 1000x500 frame with 500 lines of vertical blanking essentially the same as capturing twice as many 1000x500 frames and throwing every other one out? Either way, the frame time for the active sensor area is the same, right?

Andrey, so you are saying that the camera does at least use vertical blanking (and only vertical blanking) to control frame rate? That is a good thing, at least in terms of rolling shutter, although maybe not as useful for accurate framerates.

Also, could you tell me why, if the camera runs at 48MHz, the max framerates listed for 1600x896 etc. correspond to a pixel rate more like ~35MHz? I have had this same issue in my camera running the same sensor. Is the camera clock not reliable? Or is there some bottleneck?

Andrey Filippov
April 23rd, 2006, 11:34 AM
Yes, I recall that, but when it all comes down to it, isn't it all just keeping the frequency as high as possible while capturing a certain number of pixels? I mean, in terms of reducing rolling shutter, isn't capturing a 1000x500 frame with 500 lines of vertical blanking essentially the same as capturing twice as many 1000x500 frames and throwing every other one out? Either way, the frame time for the active sensor area is the same, right?

Approximately, yes (there is a fixed minimal blanking of 4 lines or so for the MT9T001 sensor). And the current FPGA (or software) does not support throwing away every other frame (not so difficult to implement).

Andrey, so you are saying that the camera does at least use vertical blanking (and only vertical blanking) to control frame rate? That is a good thing at least in terms of rolling shutter, although maybe not as useful for accurate framerates.

The driver uses vertical blanking, not horizontal - exactly because of the rolling shutter issues - but it is still possible to use horizontal blanking by specifying the "virtual frame" width and height directly (available in the CGI interface, but not in the JavaScript code - easy to add). As for precise frame rate, we will implement it by skipping/adjusting the pixel clock during vertical blanking. That will be done at the FPGA level, and precision will be limited only by the thermal drift of the quartz master clock. For timestamping we already have a real-time clock (it counts microseconds - 20 bits - and seconds - 32 bits), and it is digitally adjusted in steps of +/-0.125ppm.
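A 32-bit seconds field plus a 20-bit microseconds field (0..999999 fits, since 2^20 = 1048576) packs naturally into a single value; a sketch of how such a timestamp might be packed and unpacked (the concrete bit layout is an assumption for illustration, not the camera's actual register format):

```python
# Pack/unpack a 52-bit timestamp: 32 bits of seconds, 20 bits of
# microseconds.  The bit layout is assumed for illustration only.

def pack_ts(seconds, micros):
    assert 0 <= micros < 1_000_000      # fits in 20 bits
    return (seconds << 20) | micros

def unpack_ts(ts):
    return ts >> 20, ts & ((1 << 20) - 1)

ts = pack_ts(1145836800, 123456)        # some time in April 2006
print(unpack_ts(ts))
```

With one such stamp per frame, frame drops show up immediately as gaps larger than one frame period, which is the troubleshooting use Andrey mentions earlier in the thread.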

Also, could you tell me why it is that if the camera runs at 48mhz, the max framerates listed for 1600x896 etc correspond with a pixel rate more like ~35MHz? I have had this same issue in my camera running the same sensor, is the camera clock not reliable? Or is it some bottleneck?

Sensors need extra horizontal blanking - you may find the formulae in the Micron datasheet (PDF on their web site). And the horizontal blanking is much bigger than the vertical (only 4 lines, but some 200-300 pixels).
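That blanking accounts for most of the gap Noah measured, since only active pixels count toward the effective rate. A back-of-the-envelope sketch (the blanking figures are assumptions within the range quoted above):

```python
# Effective pixel rate: only active pixels count; horizontal blanking
# (a few hundred clocks per line) and a few blank lines are dead time.
# Blanking figures below are assumed, in the 200-300 px range quoted.

def effective_mhz(clock_hz, width, hblank_px, height, vblank_lines):
    active = width * height
    total = (width + hblank_px) * (height + vblank_lines)
    return clock_hz * active / total / 1e6

# 48 MHz clock, 1600x896 window, 300 px hblank, 4 blank lines
print(round(effective_mhz(48e6, 1600, 300, 896, 4), 1))
```

This brings 48 MHz down to roughly 40 MHz; the remaining drop toward the ~35 MHz observed would come from any additional per-frame overhead (shutter timing, extra blanking at the configured frame rate).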

Régine Weinberg
April 23rd, 2006, 01:58 PM
Dear Wayne, that's quite true,
but as you are at the center there is the least distortion.

One of my silly, stupid ideas:
why not use a mechanical shutter? Somebody did it with a DVCAM once.
There are no rolling shutter artifacts; Kreins (??) I guess does it this way.

A viewfinder from a Bolex would be fine, as you would see what the picture will be.

Otherwise, is there any RGB out on the board to connect a small CRT?

Régine Weinberg
April 23rd, 2006, 01:59 PM
I cannot post much; sick with my teeth.

Andrey Filippov
April 23rd, 2006, 02:10 PM
otherwise is on the board any RGB out to have a small CRT connect ??
No, there is none - there is no "video signal" (and no RGB; Bayer data is directly converted to YCbCr 4:2:0, and this happens in macroblock order in front of the compressor, not in scan-line sequence) inside the camera, so making such an output would require something similar to a computer video card.

But there is a solution - just use a separate hand-held computer with WiFi as an external monitor (with reduced resolution/frame rate), e.g. a Nokia 770.

Forrest Schultz
April 23rd, 2006, 07:16 PM
I have thought about the external 180-degree mirror shutter, so you can also have a viewfinder image. But I wouldn't think it would be easy to sync the movement of the shutter with the 24 fps to make sure it happens exactly as it should. And for the syncing part, I wouldn't know at all how to do that. I could easily build a 180-degree spinning shutter, though.