DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Apertus: Open Source Cinema Project (https://www.dvinfo.net/forum/apertus-open-source-cinema-project/)
-   -   High Definition with Elphel model 333 camera (https://www.dvinfo.net/forum/apertus-open-source-cinema-project/63677-high-definition-elphel-model-333-camera.html)

Phil Stone July 23rd, 2006 03:42 PM

http://www.tacx-video.com/raw-demo/R...3Raw-mjpeg.avi here is something I found on this laptop. Not ideal lighting (washed-out sky) & it's 4:3.

Matteo Pozzi July 24th, 2006 08:46 AM

Thanks Phil for the link; if you find any, can you post some raw Elphel footage in Ogg Theora, exactly what you get from the camera? I want to experiment with the whole process (from conversion to AVI, to color correction and editing with Vegas).
Best regards, Matteo

Phil Stone July 24th, 2006 09:22 AM

Quote:

Originally Posted by Matteo Pozzi
Thanks Phil for the link; if you find any, can you post some raw Elphel footage in Ogg Theora, exactly what you get from the camera? I want to experiment with the whole process (from conversion to AVI, to color correction and editing with Vegas).
Best regards, Matteo

That is raw MJPEG (it was in an Ogg container) from the camera. I've not yet used the Theora codec, which I think is a form of Xvid. All I have done is change the container to AVI.

It's 27.1fps, 1280x1024 & very slightly out of focus.

Here is the same clip compressed 10x so it looks a bit like HD1 video: http://www.tacx-video.com/raw-demo/R...Compressed.avi

Andrey Filippov July 24th, 2006 11:13 AM

353 in the process
 
Here is a screenshot of mostly routed 353 board - http://www.elphel.com/3fhlo/10353/353pcb.png

Matteo Pozzi July 24th, 2006 12:24 PM

for Phil:
I thought you were shooting in Ogg Theora and not Ogg MJPEG. Can you tell me the maximum bitrate (more or less; I know it is variable for JPEG compression) and the maximum JPEG quality you can achieve from the camera with your portable setup? (You used a laptop, right?)
for Andrey:
many thanks for what you are doing for us, and also for this picture. I don't know anything about electronic schematics, but it is really cool that you keep us informed of your progress in your work.
In the final version of the 353, how do you intend to implement audio in the Ogg container files? With a minijack, or with a USB external box? Is the USB 1.1 transfer rate able to record more than two tracks (stereo sound)?
best regards
Matteo Pozzi

Wayne Morellini July 25th, 2006 01:39 AM

Quote:

Originally Posted by Phil Stone
That is raw MJPEG (it was in an Ogg container) from the camera. I've not yet used the Theora codec, which I think is a form of Xvid. All I have done is change the container to AVI.

It's 27.1fps, 1280x1024 & very slightly out of focus.

Here is the same clip compressed 10x so it looks a bit like HD1 video: http://www.tacx-video.com/raw-demo/R...Compressed.avi

:)

To me, the 100Mb/s MJPEG restricts it to HVX200 quality at most, which uses the DV adaptation of MJPEG (the compressor runs at a lower rate, I understand). Double the HVX200 data rate is a good zone, and that is why I have always been interested in maximum Ogg Theora performance: because it is more efficient, you can cram as much data as possible into 100Mb/s, but I understand the maximum rate of the Ogg codec here is around 30Mb/s. So the next camera, I understand, will have much better throughput, because it is now probably faster, and Andrey mentioned improving motion vectors etc., but that looks like the model after the 353. But to get the equivalent of double the per-pixel data rate of DVCPROHD at 1920*1080 (not the HVX200's 1280*1080) is going to require very good Ogg Theora. Then again, we have not tested Ogg Theora to prove the quality.

I understand that Cinelerra Linux NLE supports Ogg Theora, found here:

Ogg Theora, at its fullest, is supposed to be an alternative to MPEG-2, but closer to the performance of MPEG-4, I think. It is an advanced form of the VP3 codec, which is now up to VP7, which is supposed to be better than H.264. The camera to match and beat today is the HVX200, a 100Mb/s DVCPROHD camera (much less than that in 720p). But this old codec is being replaced by a frame-based H.264 Intra codec next year, which has indications of getting double DVCPROHD quality at 100Mb/s, with 10-bit 4:2:2. So there is a possibility that a full Ogg Theora implementation might stand against it, except in severe scene changes. There is also AVCHD, based on H.264 inter coding (intermediate frames with motion estimation etc.). It is possible that at 24Mb/s it will be better than 100Mb/s DVCPROHD, again except in severe scene changes with motion.

The one big advantage of the present bandwidth is that, theoretically, 100Mb/s can be used for 720p, which the DVCPROHD cameras do not support; but how high was the compressor throughput compared to this?

I understand that Ogg can do 4:2:2 and 4:4:4, but I forgot to check for 10 bits. Is 4:4:4/4:2:2 in this camera, Andrey?

Wayne Morellini July 25th, 2006 01:54 AM

Andrey,

Recently I have been looking at component-to-GigEthernet converters. I thought you could make one with an ADC front end, an FPGA, and a GigE-conditioned back end (actually, I am trying to find some kind soul who might want to do it for us). Such a thing could be done DIY-style for perhaps $100 with a simple FPGA, and stream out to a hard disk or laptop in 10+ bits. Some simple, mild compression might be possible, or proper lossless compression with a bigger FPGA and more expense (like $500). You could then use it on a lot of HD cameras.

It then occurred to me that your camera base could do this to hard disk (dropping the need for GigE) with only a mild adjustment to accept component, and record control signals from a Firewire/LANC jack. All other camera and sensor control is done in the main body, so: controls/battery/Elphel main board, plus a hard disk and component recording solution. If it can reach at least a 100Mb/s compression rate, it is useful, cost-effective equipment for some people. The only real thing needed on top of this is to be able to input custom settings for the component readout sync and format configuration (or even auto-sync and remember), as some cameras output a unique component format and resolution (up to 14 bits) that can be processed into a better image than normal component.

Does this sound doable?

Matteo Pozzi July 26th, 2006 04:34 PM

Hi to all.
http://photos1.blogger.com/blogger/3...gp0736_std.jpg
This is what I'm planning to do with the Elphel 353 (this photo is of a Silicon Imaging 1920 camera). Right now I'm working on a project for a Canon FD static adapter with a Nikon F3 type D screen, and if it works properly I'll go for a Mamiya 645 medium-format adapter, because I want to achieve a 2.35 aspect ratio.
The LCD will be a touchscreen Lilliput 7" 800x480 (I did a DIY projector some time ago, and after some internet research I think it is the best-quality and cheapest panel you can own, for about 200 euro).
All the cables go to a notebook PC, so:
camera --> Ethernet to PC --> PC to LCD (VGA cable) --> and from the LCD (USB cable) back to the PC for touchscreen mouse control.
What do you think?

Wayne Morellini July 26th, 2006 10:49 PM

Nice. Did you say it was a rig for an SI? What did you take the picture with?

Matteo Pozzi July 27th, 2006 01:37 AM

The photo of the SI camera is from the blog of David Newman, who won the "Best Cinematography" award with it at the 48 hour film festival. http://cineform.blogspot.com/

Wayne Morellini July 28th, 2006 02:14 AM

Quote:

Originally Posted by Phil Stone
http://www.tacx-video.com/raw-demo/R...3Raw-MJpeg.avi here is something I found on this laptop. Not ideal lighting (washed out sky) & its 4:3.


I have finally got the video downloaded. Do you have anything recorded at full quality settings? This is around 23.6Mb/s, and having taken snapshots in VLC and examined the stills, I can see lots of blocking and faults, as you would expect in highly compressed footage. Being 1.3MP and 27fps also puts further load on it. Could you post something at the highest bitrate at 720p and 25fps? That would be a better indication.

I take it that the color etc. looks lovely, much better than some footage I have seen. I can judge the true balance between this sort of man-made colour and some grass. The Micron sensor's rendition is giving it a boost over the compression appearance, but I think that the camera has done a lovely job of compression at 23.6Mb/s.


Thanks

Wayne..

Phil Stone July 28th, 2006 06:33 AM

Quote:

Originally Posted by Wayne Morellini
I have finally got the video downloaded. Do you have anything recorded at full quality settings? This is around 23.6Mb/s, and having taken snapshots in VLC and examined the stills, I can see lots of blocking and faults, as you would expect in highly compressed footage. Being 1.3MP and 27fps also puts further load on it. Could you post something at the highest bitrate at 720p and 25fps? That would be a better indication.

I take it that the color etc. looks lovely, much better than some footage I have seen. I can judge the true balance between this sort of man-made colour and some grass. The Micron sensor's rendition is giving it a boost over the compression appearance, but I think that the camera has done a lovely job of compression at 23.6Mb/s.


Thanks

Wayne..

Not sure what you mean about artifacts; are you sure that's not the VLC player? If you zoom into a frame you can see a grid-like pattern, but that's not visible at full resolution. To me the video is not quite as clear as I have had with the 313 in good weather, but it did the job. The auto exposure was on steroids here also; I find it better to do this by hand until I figure out all its functions. I think the problems with the clip are my fault & not the camera's.

There is no MPEG-4-type compression that's visible, as it's simply a stream of 70%-quality JPEGs taken at 1280x1024. The image quality of each frame will be the same as that from one of the early digital cameras set to 70%, but here there are 27 of them each second, for about 3hrs!

I've never managed to film at 100%. I do have some nice 85%-quality film from the Alps with a fisheye: http://www.tacx-video.com/images/HD2...assic-HD/DV-HD There are also some crap-looking Canon GL2 images in the same spots for comparison.

Wayne Morellini July 28th, 2006 10:49 PM

100% is probably the only mode we should be looking at. The percentages are not representative of the compression ratio; 100 is far off 1:1 (but how far?). 85% might be good, but I would have to see the compression ratio.

So, you would have to do better than DVCPROHD's 6:1, and 4:1 or better (like 2:1) is preferable.
If you frame for 1280*720 @ 25/24fps, it would be interesting to see how high you can go in the quality modes.

It has to do with the maximum throughputs of the compressor and the data path/transmission independently. By lowering the frame rate and screen size, you reduce the capacity used, possibly allowing a higher quality mode to be used.

I was viewing snapshots from VLC in the normal Windows picture viewer, and there were no grids, but there were inaccurate blocks of pixels. Snapshot a number of frames throughout the sequence; they should turn up in the Pictures folder. Zoom into the road surface and you will see a disjointed pattern of blocks with lines and shapes in them; look around the car lights and riders' suits for colored patterns as well. This video has a lot of plainish surfaces, which could have boosted the compressibility of the detailed ones quite a bit.

MJPEG gets artifacts as well, because it is not lossless, and maybe not even visually lossless, though that might be the 100% mode. At 23.6Mb/s, it is less than DV's 25Mb/s, and much less than DVCPROHD. DVCPRO gets true SD pro quality at 50Mb/s, which is somewhere around 3.1:1 (I think), and Digital Betacam got around 2.3:1 at 80+Mb/s, I think (it has been a long time since I read up on these).

You can understand why I dropped out of the current generation of cameras and am waiting for the future models; my heart is in close-to-visually-lossless territory or better. If it can't produce quality better than AVCHD, then I am not bothered; if it can produce DVCPROHD quality I am mildly interested (it takes substantial time and effort setting up and coping with a custom rig). If it can exceed it, then it is worth it, and the target is the next 100Mb/s H.264 frame-based Intra codec, which you would be able to match by doing an inter-coding format at 100Mb/s. As for MJPEG, the advantage is really if you can run 720p at 100Mb/s, which is pretty high in quality compared to what DVCPROHD will normally let you run at (due to being locked into lower resolutions/data rates for smaller screen formats etc., from my understanding).

Wayne Morellini July 28th, 2006 11:39 PM

I had a look at those stills, did you shoot them in 640*512?

The 512 ones, which I guess are not the GL2 ones, look substantially better (though this has a lot to do with the way the cameras render the image), but there are still artifacts present. In web graphics I think the rule is to use 100% quality (but rather use GIFs etc.); I definitely found a substantial drop-off below 85% in testing.

I notice that one camera has a lower compression ratio than the other, which is reflected in the image.

But it is hard to get a comparison here, because they are shot at two different resolutions, and the Canon frame is not a native straight frame, so recompression comes into it. If you shoot in high res and then downscale the Elphel footage, you automatically also get some advantage. Have you got the native, un-retouched frames?

Definitely makes the GL photos look second rate.

Can you shoot 85%+ at a higher resolution than this, like 720/25p?

Wayne Morellini July 29th, 2006 01:29 AM

Microdrive pricing:
 
http://www.theinquirer.net/default.aspx?article=29927

Surprisingly cheap, the sort of thing that could go into the 353.

Andrey Filippov July 29th, 2006 09:38 PM

Quote:

Originally Posted by Wayne Morellini

These are CF and USB2; the 353 needs IDE - I meant http://www.seagate.com/docs/pdf/data...c/ds_st1_3.pdf - yes, they will fit into the small body. The other option is to use much larger capacity laptop drives in a bigger housing.

Andrey Filippov July 29th, 2006 09:40 PM

Quote:

Originally Posted by Wayne Morellini
What do you think of this 3D Wavelet, that is being used in Surveillance and other industries, is it any good?

I have not worked with any wavelet compressors so far. They need somewhat more computational power - I'm waiting for the 65nm Spartan 4.

Andrey Filippov July 29th, 2006 09:46 PM

Quote:

Originally Posted by Wayne Morellini
: Is 4:4:4/4:2:2 in this camera Andrey?

No, currently it is 4:2:0. If you need 4:4:4 (the sensors are Bayer anyway) you can compress as if monochrome and then do color post-processing on a PC. 4:4:4 requires up to 3 times the bandwidth to send interpolated data: a 3MPix sensor is 2048x1536, not 3x2048x1536.
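Andrey's "up to 3 times the bandwidth" point follows from counting samples per pixel. A quick arithmetic sketch (my own counting, not anything from the Elphel sources):

```python
# Samples that must pass through the compressor per frame, for a 3MPix sensor.
w, h = 2048, 1536

bayer_samples = w * h             # raw mosaic: one value per photosite
yuv420_samples = w * h * 3 // 2   # 4:2:0: full-res Y plus quarter-res U and V
yuv444_samples = w * h * 3        # 4:4:4: interpolated R, G, B at every pixel

# 4:4:4 is 3x the Bayer data; 4:2:0 is only 1.5x.
ratio_444 = yuv444_samples / bayer_samples   # 3.0
ratio_420 = yuv420_samples / bayer_samples   # 1.5
```

This is why compressing the mosaic "as if monochrome" is attractive: it is the 1x row above, with the colour reconstruction deferred to the PC.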

Andrey Filippov July 29th, 2006 09:48 PM

Quote:

Originally Posted by Wayne Morellini
Does this sound doable?

we'll see :-)

Wayne Morellini July 29th, 2006 10:17 PM

Quote:

Originally Posted by Andrey Filippov
These are CF and USB2, 353 need IDE - I meant http://www.seagate.com/docs/pdf/data...c/ds_st1_3.pdf - yes, they will fit into the small body. Other option - use much larger capacity laptop drives in a bigger housing

It is about the cheap Microdrive pricing, with the CF version of the IDE Seagate drive the cheapest. This article is linked from an article on the IDE Seagate.

Re-edit: I forgot to ask, what pixel formats does the Elphel do MJPEG in, just 4:2:0 and monochrome?

Wayne Morellini July 29th, 2006 11:28 PM

Quote:

Originally Posted by Andrey Filippov
No, currently it is 4:2:0. If you need 4:4:4 (the sensors are Bayer anyway) you can compress as if monochrome and then do color post-processing on a PC. 4:4:4 requires up to 3 times the bandwidth to send interpolated data: a 3MPix sensor is 2048x1536, not 3x2048x1536.

Thanks for answering the questions, Andrey. The monochrome image conversion looks good; how does its compression compare between MJPEG and Ogg?

I mentioned a way before to convert the Bayer data into a normalised colour image (I think through pixel-by-pixel conversion, to bring the colours closer together for compression). But you can normalise the whole Bayer pattern to one colour, or to full monochrome, at once; you then have a much smoother, more compressible image, to which you can also apply simpler compression techniques on a bit-plane basis, and difference compression etc., like JBIG (I think). To restore to Bayer, you already know the Bayer pattern to reverse the process. You could do this by simply working out the normal colour value from the estimated difference to the pixel value itself, or through normal interpolation (R 30%, B 10%, Green 60% etc.).
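As a rough illustration of the normalisation idea above (a hypothetical sketch only: the GRBG-style pattern and the R30%/G60%/B10% weights are taken loosely from the post, not from any real camera):

```python
# Sketch: scale each Bayer site by a per-colour gain so the mosaic looks
# like one smooth grey plane, compress that, then divide the gains back
# out after decompression. The pattern and gains here are illustrative.

BAYER = [['G', 'R'], ['B', 'G']]           # assumed 2x2 mosaic layout
GAIN = {'R': 0.30, 'G': 0.60, 'B': 0.10}   # illustrative channel weights

def colour_at(x, y):
    return BAYER[y % 2][x % 2]

def normalise(mosaic):
    """Map raw Bayer values toward a single grey scale."""
    return [[mosaic[y][x] * GAIN[colour_at(x, y)]
             for x in range(len(mosaic[0]))] for y in range(len(mosaic))]

def denormalise(norm):
    """Exact inverse: the Bayer pattern is known, so divide the gains out."""
    return [[norm[y][x] / GAIN[colour_at(x, y)]
             for x in range(len(norm[0]))] for y in range(len(norm))]

raw = [[100, 200], [50, 120]]
restored = denormalise(normalise(raw))     # round-trips back to the input
```

The point is only that the transform is cheap and perfectly reversible, so whatever grey-scale compressor follows it sees a smoother signal.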

There were some other schemes mentioned in the Digital Cinema threads:
One is to split the Bayer pattern into 3-4 separate bit-mapped images, one for each of the sub-pixel RGB colours, and compress them separately, or do differential compression between the images and compress the base image. I think CineForm might have eventually done something like this (I've got to read their white paper sometime). Good results were reported.

A second alternative: with a 3864-wide sensor, through interpolation (to stop fly-screen image problems), you can extract a 1280*720 Bayer image, then compress and send that?
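The first scheme (splitting the mosaic into per-colour planes) can be sketched in a few lines. Hypothetical Python, assuming an RGGB layout, which is an illustration only:

```python
def split_planes(mosaic):
    """Split an RGGB mosaic into four quarter-size planes (R, G1, G2, B).
    Each plane is smooth in one colour, so it compresses better alone."""
    r  = [row[0::2] for row in mosaic[0::2]]
    g1 = [row[1::2] for row in mosaic[0::2]]
    g2 = [row[0::2] for row in mosaic[1::2]]
    b  = [row[1::2] for row in mosaic[1::2]]
    return r, g1, g2, b

def merge_planes(r, g1, g2, b):
    """Reassemble the mosaic; the split is exactly reversible."""
    out = []
    for y in range(len(r)):
        top, bottom = [], []
        for x in range(len(r[0])):
            top += [r[y][x], g1[y][x]]
            bottom += [g2[y][x], b[y][x]]
        out += [top, bottom]
    return out

mosaic = [[10, 20, 11, 21],
          [30, 40, 31, 41],
          [12, 22, 13, 23],
          [32, 42, 33, 43]]
planes = split_planes(mosaic)
rebuilt = merge_planes(*planes)
```

Differential compression between the planes (e.g. coding G2, R and B as differences from G1) would then be a small step on top of this.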


Thanks

Wayne.

Wayne Morellini July 29th, 2006 11:31 PM

I forgot again:

I have been meaning to ask: is there any way to feed a compression routine, or formula, into the Elphel without having to learn FPGA code to do it?


Thanks

Again.

Andrey Filippov July 30th, 2006 12:22 AM

Quote:

Originally Posted by Wayne Morellini
I forgot again:

I have been meaning to ask: is there any way to feed a compression routine, or formula, into the Elphel without having to learn FPGA code to do it?


Thanks

Again.

No, I don't see how you can do that - it is in the FPGA code.

Wayne Morellini July 30th, 2006 10:50 AM

What are the existing results for monochrome compression? What data rate/low compression ratio is possible?

I have been mining for gold in the web encyclopedias, and things are a bit confusing (as you can understand). From the 'pedia, JPEG is 24 bits per pixel; is this so, or is it 4:2:0 or 4:2:2 in MJPEG? They also mention 12-bit color for JPEG; is that supported in monochrome or colour modes?

This seems very interesting for us, because 100Mb/s is under 2:1 for 720p, or 4:1 for 1080p, in monochrome Bayer - well and truly in quality territory. Even with 10-12 bits it is still looking good for 720p, provided the compression holds up. Colour accuracy under Bayer compression will suffer a fair bit unless very low compression ratios are used under JPEG (so 720p would be the sweet spot at 100Mb/s). Using the alternative techniques mentioned before would perhaps allow that at higher compression.
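Those ratios can be reproduced with a small calculator. A sketch; the 8-bit and 25fps assumptions are mine, chosen because they match the quoted "under 2:1" and "4:1" figures:

```python
def ratio_needed(width, height, bits, fps, link_mbps):
    """Compression ratio needed to fit a raw monochrome-Bayer stream
    (one value per photosite) into a link of link_mbps megabits/s."""
    raw_mbps = width * height * bits * fps / 1e6
    return raw_mbps / link_mbps

# Assumed: 8-bit Bayer samples, 25 fps, a 100 Mb/s link.
r720 = ratio_needed(1280, 720, 8, 25, 100)    # ~1.84:1, i.e. "under 2:1"
r1080 = ratio_needed(1920, 1080, 8, 25, 100)  # ~4.15:1, i.e. roughly "4:1"
```

At 10-12 bits the 720p figure rises to roughly 2.3:1 to 2.8:1, which is why 720p remains the comfortable case.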

I must be such an idiot to have missed this.


Is anybody willing to give this a burl on the existing 333, to see what the results are at its data rate (was that 70 or 30Mb/s)?


Ben Syverson had a debayer plug-in, I think, but no longer has it listed there:
http://web.archive.org/web/*/http://...gins/linbayer/

http://www.google.com/search?q=linba...ls=en&filter=0


He might still be around to contact. Here are some interesting links on debayering for people:

http://www.insflug.org/raw/software/tools/dcraw.php3
http://www.path.unimelb.edu.au/~ders...erpolator.html
http://www-ise.stanford.edu/~tingchen/


Thanks Andrey


Wayne.

John Wyatt July 30th, 2006 01:20 PM

linbayer
 
Wayne -- I tried to find a contact for Ben Syverson to buy a copy of his linbayer (After Effects plug-in). Google turned up nothing new, but Ben had a plug-in called dvMatte (a keyer not a debayer) advertised on a video website (www.dvgarage.com). I emailed them (info@dvgarage.com) to ask if they could give me a contact for Ben, or forward my request about linbayer to him, but no reply after about a month now. All leads to Ben seem dead...
John.

Jamie Varney July 31st, 2006 12:15 AM

Questions
 
Hello Andrey. I recently read through this entire thread and I must say I am very impressed, both by the quality of video that the 333 is capable of and by your willingness to help out the community. But now that the 353 is on its way, I have a few questions about it. So here goes:

1. With the IDE connector built in, I am assuming that we are going to see much higher bandwidth than on the 333?
2. If the image data is going to be stored on the hard drive, I assume that we are still going to need a computer with a network interface to control the camera?
3. I know your priority is to use this as a security camera, so I was wondering how much code development you are going to do for the IDE interface. Are you going to fully implement it, or are you going to leave it up to those of us who want to use it for other purposes?

Well these are all the questions I can think of for now. Thanks for your time.

Andrey Filippov July 31st, 2006 01:35 AM

Quote:

Originally Posted by Jamie Varney
1. With the IDE connector built in, I am assuming that we are going to see much higher bandwidth than on the 333?

Not "much higher". Currently the compressor speed is limited to 125MHz/2 = 62.5MPix/sec; for 4:2:0 that corresponds to 2/3*62.5MPix/sec ~= 42MPix/sec of the original sensor pixels. The 353 will have a slightly (20%) larger FPGA, but I'm planning first to add more features, not to increase the speed - that I plan for the new generation of Xilinx chips.

What will increase is the CPU speed, network operation (Ethernet checksum calculation in the dedicated hardware of the CPU chip) and the FPGA->system memory transfer. And writing to disk is faster too - the biggest difference compared to the 333 will be for low compression ratios/high quality.


Quote:

Originally Posted by Jamie Varney
2. If the image data is going to be able to be stored on the hard drive I assume that we are still going to need a computer with a network interface to control the camera?

Are there any computers w/o network interfaces :-)?

The other option will be a USB host (unfortunately the CPU has only USB 1.1, not 2.0) that might be connected to a USB WiFi adapter (one of those that has an open-source driver, so it can be compiled to run on the non-x86 camera CPU).

Quote:

Originally Posted by Jamie Varney
3. I know your priority is to use this as a security camera, so I was wondering how much code development you are going to do for the IDE interface? Are you going to fully implement it or are you going to leave it up to us who want to use it for other purposes?

No, it is not the only application I'm personally interested in. There are many others, but what is true is that there are too many projects I'm already involved in, so I cannot take on any additional custom development - just provide minor consultations to others who have the time and ability to work on related projects. And we usually can provide hardware (with discounts or free of charge) to those who are eager to get involved.

As for the IDE interface itself - I'll definitely make sure it actually works. After that, it is just a hard drive connected to the GNU/Linux computer (in the camera) - you can use it with any of the existing file systems and other software.

Jamie Varney July 31st, 2006 05:16 PM

Excellent, thanks for the reply. I have not used FPGAs since back when I was in school, so I may be a little rusty with them. But I have a friend who works with them on a regular basis, so I may get him to give me a refresher course. Then I will start looking over the great information on your site to get a better idea of how your setup works. I know you have posted the image of the routed 353 board, so would you be willing to share the actual schematic at this stage?

Thanks again.

Andrey Filippov July 31st, 2006 06:34 PM

Quote:

Originally Posted by Jamie Varney
so would you be willing to share the actual schematic at this stage?

No, not yet. It will be available (as for the previous models) when it is actually manufactured and tested. So far it may have errors, and I don't want misleading information to be set free on the Internet - somebody might use it as a reference and get fooled.

Phil Stone August 1st, 2006 11:06 AM

Quote:

Originally Posted by Wayne Morellini
It is about the cheap Microdrive pricing, with the CF version of the IDE Seagate drive the cheapest. This article is linked from an article on the IDE Seagate.

Re-edit: I forgot to ask, what pixel formats does the Elphel do MJPEG in, just 4:2:0 and monochrome?

For me I would prefer a 7200rpm laptop drive mounted outside the camera, perhaps via a cable?

Andrey Filippov August 3rd, 2006 08:16 PM

Quote:

Originally Posted by Phil Stone
For me I would prefer a 7200rpm laptop drive mounted outside the camera, perhaps via a cable?

We'll make a larger body for that.

Zsolt Hegyi August 10th, 2006 05:09 AM

compression
 
Hello all,


In the last few months I've been busy building my own camera based on a Micron sensor. It didn't go really well, as I'm mainly a software guy. Thanks to this thread I found out that Elphel is building nearly the same device that I wanted to build. So I decided to leave the hardware work to the professionals and develop my own software.

I'm interested only in the 1280x720 resolution as: 1) the new sensor will be able to provide that with binning, and 2) this amount of data might be compressed losslessly to fit into the bandwidth of the camera (at 24fps).

What we know:

-   current camera bandwidth is 8.75MB/s on Ethernet. The new camera will have a faster processor, which will slightly raise this number.
-   direct-to-disk recording is not useful for those who want to see and control the picture while recording - that can be handled by a separate PC only.
-   current memory I/O is slow, and although they plan to increase it by a huge factor, there'll still be memory I/O during compression using Theora.
-   a LUT can be used to drop the bit depth down to 10.
-   1280x720x10bitx24fps = 26MB/s, so using the 333 we would need a 3:1 compression ratio, but with the 353 a smaller value might also be sufficient.
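As a quick sanity check on the arithmetic in the last bullet (a sketch in Python; the 8.75MB/s Ethernet figure is the one quoted above):

```python
def mib_per_s(width, height, bits, fps):
    """Raw stream rate in MiB/s, one value per photosite."""
    return width * height * bits * fps / 8 / 2**20

raw_rate = mib_per_s(1280, 720, 10, 24)   # ~26.4 MiB/s, matching the ~26MB/s above
needed_ratio = raw_rate / 8.75            # vs the 333's ~8.75MB/s link -> ~3:1
```

So the 3:1 lossless target follows directly from the link budget; the 353's faster network path is what would relax it.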

After I have an encoder I plan to write the decoder part. This will be written in C and realized as a plugin for a popular video editing application running on a PC. This plugin will never modify the actual raw data beneath; it'll store its changes as metadata.

If all goes well, I intend to release the stuff. Andrey told me that he needs a few weeks to complete the camera, so I'm planning to be ready with my things at the same time. I'll release sample images when I have my camera and all my software working with it.

I don't have experience in image compression, so if you have suggestions feel free to submit them here; and Andrey, if you feel that I am about to do something stupid, please correct me :-) I don't want to run into dead ends.

And another thing: I'm only interested in simple algorithms; if the complexity reaches a certain level I'll stop development and rather use the built-in codec of the 353...


Zsolt

Wayne Morellini August 12th, 2006 10:55 PM

Welcome
 
Quote:

Originally Posted by Zsolt Hegyi
Hello all,
In the last few months I've been busy building my own camera based on a Micron sensor. It didn't go really well, as I'm mainly a software guy. Thanks to this thread I found out that Elphel is building nearly the same device that I wanted to build. So I decided to leave the hardware work to the professionals and develop my own software.

You will find this camera imposes its own unique set of constraints because of the Ethernet throughput, but it has good potential because of the built-in compressor and Bayer output.

Zsolt, are you doing this compressor in software or hardware? Most compression schemes are difficult. I don't know if it has enough processing power for software.

One of the easiest improvements is to buffer the frames (to smooth out the data rate) and to store the difference between frames. Using the internal compressors as well, this would give you an advantage much of the time in disk space and data rate. One of the guys in the Digital Cinema threads was doing a simple lossless Bayer codec with frame-difference compression, and reported very good results. I don't know if it was one of the Juans, or Jason Rodriguez who is now at Silicon Studios, but it is best to contact them. I think the person mentioned it in my Red codec suggestions thread in the Red camera sub-forum.

Read my previous suggestion posts here; I believe I posted links to the web wikipedia, which has pages listing many open as well as lossless codecs. It might be easier to drop one in from existing software, if software is what you plan. As you can see, I mentioned that the BBC has one coming along in FPGA; there is also more behind it, so it is worth looking at.

Quote:

I'm interested only in the 1280x720 resolution as: 1) the new sensor will be able to provide that with binning, and 2) this amount of data might be compressed losslessly to fit into the bandwidth of the camera (at 24fps).
Read my previous post on binning, converting to Bayer and compressing Bayer. Please note, to get lossless with JPEG I think you have to increase the precision of some of its data paths (see the newsgroup lossless compression FAQ, via Google).

Quote:

-direct-to-disk recording is not useful for those who want to see and control the picture while recording - that can be handled by a separate pc only.
If you can tap the image feed, you can duplicate it off to an external display; an external controls port, with some buttons and switches through it, would give you full independent camera control.

You can record direct to disk on the camera and send a feed out through Ethernet to a portable computer (even a handheld) for viewing. This way, all the handheld has to do is decode, display, and send control signals back, so a lower-cost device is all that is needed. An uncompressed image could even be saved to disk. With a simple compression algorithm like the one Juan/Jason (whichever it was) was working on, you could save heaps of disk space while sending a JPEG/Theora version to the portable.

Quote:

-current memory I/O is slow, and although they plan to increase it by a huge factor, there'll still be memory I/O during compression using Theora.
If you plan on using Theora, how much better do you think it will be for lossless (as normal JPEG is very poor at lossless compression; see the newsgroup FAQ)?

Quote:

After I have an encoder I plan to write the decoder part. This will be written in C and will be realized in a form of a plugin of a popular video editing software running on pc. This plugin will never modify the actual raw data beneath, it'll store it as metadata.
Good.

Quote:

If all things are successful I intend to release the stuff. Andrey told me that he needs a few weeks to complete the camera so I'm planning to be ready with my things at the same time. I'll release sample images if I have my camera and all my software working with it.
You are getting a 353 camera? I thought that was next year; whatever happened to the 343? You are getting much information from Andrey.


I wish you success, Zsolt. Congratulations.

Wayne Morellini August 12th, 2006 11:16 PM

I forgot to mention low-cost devices that can be used instead of a computer for viewing and control, if they have Ethernet. Most will have some form of official/unofficial Linux, so Linux development can be ported between systems:

Intel's UMPC (Ultra Mobile PC) platform, MS Origami (cheaper version coming).

Some PlayStations (I don't think the Portable has Ethernet??)
PDA
Nintendo Wii (I think it may have Ethernet)
Embedded industrial micro-controllers - hundreds, if not thousands, to search through.


A future machine:
Nintendo Gamecube Portable (maybe called GBA2), which I expect will have a higher-res screen than the PSP, but its Ethernet status is unknown (and maybe wireless only).

If there is a cheap Ethernet-to-USB/SD/?? card adaptor, then most portable devices without Ethernet could be used.

Search for my technical thread for much information about previous Digital Cinema cameras here:

http://www.dvinfo.net/conf/showthread.php?t=28781

Wayne Morellini August 13th, 2006 02:19 PM

Hello Zsolt,

I have spent some time posting a thread over at the comp.compression newsgroup requesting advice on lossless Bayer compression etc. You might like to go over there and see what people say; there should be a lot of expert professionals there. If you don't have a newsgroup reader set up, you can find it through Google's newsgroup reader. The subject starts with "Lossless/Near lossless, Bayer".

There is a thread there with information on significant JPEG recompression. There were a number of techniques, but the best is covered by a patent. Unless Elphel has an arithmetic coding license you probably will not be able to use most of them.

Don't be discouraged if nobody replies to your post; it happens around here. Best to just keep looking into your ideas in the short term.

Andrey Filippov August 14th, 2006 02:26 AM

Quote:

Originally Posted by Wayne Morellini
whatever happened to the 343?

That number was reserved for a different type of camera (slower) - like 323.

Wayne Morellini August 14th, 2006 04:47 AM

Thanks. I just got a post from a guy over at the newsgroup who has some good suggestions for lossless Bayer compression, which he worked on and implemented with 1000 LUTs in an FPGA for his thesis. His suggestion sounds a bit like what came up in discussions with Cineform a while ago.

Otherwise I am not really getting the depth or breadth of answers I am looking for over there.

I wish I could post a link, but I don't know how to; maybe through Google, but it's a bit of a chore.


What do you think of the idea of converting the different-coloured Bayer pixels into one colour (reversibly, so they can be restored after decompression) and then compressing the result as grey scale? Would that be simple, and would it help much with compression?

Zsolt Hegyi August 14th, 2006 06:59 AM

Quote:

Originally Posted by Wayne Morellini
Zsolt, are you doing this compressor in software, or hardware?

I don't know yet, but Verilog is the most likely, as the processor cannot handle all the raw data. Currently I'm in the process of understanding how the Elphel architecture works; we'll see.

Quote:

Originally Posted by Wayne Morellini
One of the easiest improvements is to buffer the frames (to smooth out the data rate) and to store the difference between frames.

If I'm unable to reach the targeted compression ratio with lossless intraframe, the thing you mentioned will probably be the solution. However, this method will only be applied to get the data through the Ethernet; the recording software must convert the incoming stream to an intraframe-compressed one so linear editing will be possible later on.

The algorithm I'm currently using for intraframe will also be sufficient for the interframe compression. Only the previous frame needs to be stored with this method, not a whole group of frames, so it's easier to implement. The first frame must be stored without compression, though, but that's not a problem, as the average bandwidth usage will not increase in the long term.
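The interframe scheme described above (keep the first frame intact, then store only per-pixel differences from the previous frame) can be sketched in a few lines of Python. This is an illustrative toy, not Elphel code; frames are flat pixel lists and the function names are made up for the example.

```python
def delta_encode(frames):
    """Keep the first frame as-is; every later frame becomes
    a per-pixel difference from the frame before it."""
    encoded = [list(frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        encoded.append([c - p for p, c in zip(prev, cur)])
    return encoded

def delta_decode(encoded):
    """Reverse the encoding by cumulatively adding differences."""
    frames = [list(encoded[0])]
    for diff in encoded[1:]:
        frames.append([p + d for p, d in zip(frames[-1], diff)])
    return frames

# Three tiny 4-pixel "frames" with small motion between them:
frames = [[10, 20, 30, 40], [11, 20, 29, 40], [12, 21, 29, 41]]
enc = delta_encode(frames)   # later frames are mostly 0 and +/-1,
dec = delta_decode(enc)      # which an entropy coder packs tightly
assert dec == frames         # the round trip is lossless
```

As Zsolt notes, only the previous frame has to be kept in memory, and the first frame is the only one stored at full magnitude.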

Quote:

Originally Posted by Wayne Morellini
You can record direct to disk on the camera and send a feed out through Ethernet to a portable computer (even a handheld) for viewing.

Good idea, but with different transmitting and recording formats I still cannot use the built-in disk recording in the Elphel. Plus, getting an SDK for handhelds usually costs money. But as I stated, I have no problem using a separate mini-PC with a touchscreen.

Quote:

Originally Posted by Wayne Morellini
An uncompressed image could even be saved to disk.

I'm afraid that won't be possible with the Elphel. The processor still has to participate in the disk transfers to some extent, so we have limited data throughput there.

Quote:

Originally Posted by Wayne Morellini
If you plan on using Theora, how much better do you think it will be for lossless (as normal JPEG is very poor at compression, see the newsgroup FAQ)?

If I end up using Theora, it won't be lossless. Tune it up to maximum quality (maybe a bit better than the 333) and use it, that's it.

Quote:

Originally Posted by Wayne Morellini
You are getting a 353 camera, I thought that was next year

Andrey only told me that he needs a few weeks for the hardware to be finished. If I can get my hands on a proto, that would be nice, but no promises so far...

Quote:

Originally Posted by Wayne Morellini
congratulations.

Don't congratulate me until I have something to show :-) There are so many people here with failed attempts; I just hope I won't be one of them.

Zsolt

Zsolt Hegyi August 14th, 2006 07:09 AM

Quote:

Originally Posted by Wayne Morellini
What do you think of the idea of changing the different colored bayer pixels into one colour, which can be restored after decompression, and then compressing as a grey scale, would that be simple, would it help much in compression?

I remember you've already posted this somewhere... And I think we can achieve better ratios by compressing the channels separately. This is because the frequency content varies greatly between the three channels and one particular algorithm (whatever that might be) can always work better with data containing only similar frequencies.
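Compressing the channels separately, as Zsolt suggests, amounts to de-interleaving the Bayer mosaic into its colour planes before compression. A minimal sketch, assuming an RGGB layout and a flat row-major pixel list (both purely illustrative):

```python
def split_bayer(mosaic, width):
    """De-interleave an RGGB Bayer mosaic (flat list, row-major)
    into four planes: R, G (on red rows), G (on blue rows), B."""
    rows = [mosaic[i:i + width] for i in range(0, len(mosaic), width)]
    r  = [rows[y][x] for y in range(0, len(rows), 2) for x in range(0, width, 2)]
    g1 = [rows[y][x] for y in range(0, len(rows), 2) for x in range(1, width, 2)]
    g2 = [rows[y][x] for y in range(1, len(rows), 2) for x in range(0, width, 2)]
    b  = [rows[y][x] for y in range(1, len(rows), 2) for x in range(1, width, 2)]
    return r, g1, g2, b

# A 4x2 mosaic: R G R G / G B G B
mosaic = [1, 2, 3, 4,
          5, 6, 7, 8]
r, g1, g2, b = split_bayer(mosaic, 4)
assert (r, g1, g2, b) == ([1, 3], [2, 4], [5, 7], [6, 8])
```

Each plane is then a smooth single-colour image, so a grey-scale compressor no longer sees the pixel-to-pixel colour alternation of the raw mosaic.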

Wayne Morellini August 15th, 2006 01:07 AM

Quote:

Originally Posted by Zsolt Hegyi
I remember you've already posted this somewhere... And I think we can achieve better ratios by compressing the channels separately. This is because the frequency content varies greatly between the three channels and one particular algorithm (whatever that might be) can always work better with data containing only similar frequencies.

Hi Zsolt, have you looked at the newsgroup thread? It's starting to go well.

As I understand it, colour follows brightness, which is why Bayer interpolation works: the brightness carries the high-frequency data, while the colour tends to stay the same over large areas, so each colour channel contains mostly the same frequency information. Converting to one reversible colour eliminates most of the differences between channels, leaving only the real underlying frequency differences and the odd colour change. When you convert to one colour you bring the pixel values closer to one another and can apply more efficient grey-scale compression. But I see what you mean: retaining backwards compatibility means there will be some extra frequency variation. The interesting thing is that that variation reflects real differences in the data.

This was only a simple, stop-gap idea that could be very easily implemented on the existing compression architecture, to give an extra boost to the performance of compressing the Bayer pattern as grey scale (which Andrey recommended for the existing setup) by reducing the differences.

I was thinking of a new method some time back. Here you split into channels as you suggest, convert to a normalised colour, compress one channel (this means compressing disjointed detail, as the intervening pixels are missing), then record the difference from the other channels. But with my idea you base the difference on the interpolation between the pixels of the first channel, i.e. the interpolated value is the assumed value of the pixels of the other channels. This is not as simple as doing the conversion and using the existing grey-scale compression.

Approaching the above system from another direction, you can leave all channels in their existing colours; for the assumed interpolated pixel used for comparison, you can also convert it into the colour of the channel of the pixel being addressed.

I should have stated this earlier: the reason I designed the above like that is that there are different response curves for each colour, which means exactly the same brightness on an object has a different value in each colour. This is surplus, predictable information, and my methods are designed to eliminate it and increase compression.
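The prediction idea sketched above can be illustrated in miniature. Assuming a single scanline and a simple two-neighbour average as the interpolator (both simplifications of mine, not Wayne's exact scheme): compress the first channel normally, then store each second-channel pixel only as its residual from the interpolated first-channel value.

```python
def predict_residuals(base, other):
    """Predict each pixel of `other` as the average of its two
    neighbouring `base` pixels; keep only the residual."""
    preds = [(base[i] + base[min(i + 1, len(base) - 1)]) // 2
             for i in range(len(other))]
    return [o - p for o, p in zip(other, preds)], preds

def restore(residuals, preds):
    """Decoder side: re-derive preds from the decoded base
    channel, then add the residuals back."""
    return [r + p for r, p in zip(residuals, preds)]

base  = [100, 102, 104, 106]   # e.g. green samples along a row
other = [101, 103, 105, 107]   # neighbouring red samples
res, preds = predict_residuals(base, other)
assert restore(res, preds) == other   # lossless round trip
```

Because colour follows brightness, the residuals hover near zero even where the image itself is busy, which is exactly the surplus predictable information Wayne wants to strip out.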

Going one step further, into the realm of FAX compression standards (see JBIG for the best), there is area compression, with a lot less processing I believe: the difference across a line is recorded, then the difference between lines, which can then be easily run-length encoded.
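The line-difference-then-run-length idea reads roughly like this (a toy sketch of the general technique, not JBIG itself):

```python
def line_diff(rows):
    """Keep the first row as-is; store each later row as its
    difference from the row above. Flat areas become zero runs."""
    out = [list(rows[0])]
    for prev, cur in zip(rows, rows[1:]):
        out.append([c - p for p, c in zip(prev, cur)])
    return out

def rle(values):
    """Run-length encode a list into [value, count] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

rows = [[5, 5, 5, 5], [5, 5, 5, 5], [5, 6, 6, 5]]
diffed = line_diff(rows)   # [[5,5,5,5], [0,0,0,0], [0,1,1,0]]
runs = rle([v for row in diffed[1:] for v in row])
assert runs == [[0, 5], [1, 2], [0, 1]]
```

On image content with large flat areas, almost everything collapses into runs of zeros, which is where this family of schemes gets its compression.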

To get over corruption, there are file and transmission techniques to fully recover data from corrupt files, for maybe a 25% increase in final compressed size, which can be more than compensated for by compression savings, either by using difference compression on the original channel or by longer GOPs in other compression, etc. JBIG tends to get good compression with less noise and detail, but I think that by combining it with the normalised-colour and assumed-value techniques I describe above you can get better compression than Cineform.

Forgive me if my reasoning here is not completely clear; I have trouble recalling exactly what I was thinking when I designed this strategy some time ago.


All times are GMT -6. The time now is 10:51 PM.

DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network