View Full Version : High Definition with Elphel model 333 camera



Phil Stone
July 23rd, 2006, 03:42 PM
http://www.tacx-video.com/raw-demo/RothTri-333Raw-mjpeg.avi here is something I found on this laptop. Not ideal lighting (washed-out sky) and it's 4:3.

Matteo Pozzi
July 24th, 2006, 08:46 AM
Thanks Phil for the link. If you can, could you post some raw Elphel footage in Ogg Theora, exactly what you get from the camera? I want to experiment with the whole process (from conversion to AVI to colour correction and editing with Vegas).
Best regards, Matteo

Phil Stone
July 24th, 2006, 09:22 AM
Thanks Phil for the link. If you can, could you post some raw Elphel footage in Ogg Theora, exactly what you get from the camera? I want to experiment with the whole process (from conversion to AVI to colour correction and editing with Vegas).
Best regards, Matteo

That is raw MJPEG (it was in an Ogg container) from the camera. I've not yet used the Theora codec, which I think is a form of Xvid. All I have done is change the container to AVI.

It's 27.1fps at 1280x1024 and very slightly out of focus.

Here is the same clip compressed 10x, so it looks a bit like HD1 video: http://www.tacx-video.com/raw-demo/RothXvid10xCompressed.avi

Andrey Filippov
July 24th, 2006, 11:13 AM
Here is a screenshot of the mostly-routed 353 board: http://www.elphel.com/3fhlo/10353/353pcb.png

Matteo Pozzi
July 24th, 2006, 12:24 PM
For Phil:
I thought you were shooting in Ogg Theora and not Ogg MJPEG. Can you tell me the maximum bitrate (more or less; I know it is variable for JPEG compression) and the maximum JPEG quality you can achieve from the camera with your portable setup? (You used a laptop, right?)
For Andrey:
Many thanks for what you are doing for us, and also for this picture. I don't know anything about electronic schematics, but it is really cool that you keep us informed of your progress.
In the final version of the 353, how do you intend to implement audio in the Ogg container files? With a minijack or with a USB external box? Is the USB 1.1 transfer rate able to record more than two tracks (stereo sound)?
Best regards,
Matteo Pozzi

Wayne Morellini
July 25th, 2006, 01:39 AM
That is raw MJPEG (it was in an Ogg container) from the camera. I've not yet used the Theora codec, which I think is a form of Xvid. All I have done is change the container to AVI.

It's 27.1fps at 1280x1024 and very slightly out of focus.

Here is the same clip compressed 10x, so it looks a bit like HD1 video: http://www.tacx-video.com/raw-demo/RothXvid10xCompressed.avi

:)

To me the 100Mb/s MJPEG restricts it to HVX200 quality at most, which uses the DV adaptation of MJPEG (the compressor runs at a lower rate, I understand). Double the HVX200 data rate is a good zone, and that is why I have always been interested in maximum Ogg Theora performance: it is more efficient, so you can cram as much data as possible into 100Mb/s, but I understand the maximum rate of the Ogg codec here is around 30Mb/s. So the next camera, I understand, will have much better throughput, because it is now probably faster, and Andrey mentioned improving motion vectors etc., but that looks like the model after the 353. But getting the equivalent of double the per-pixel data rate of DVCPROHD at 1920x1080 (not the HVX200's 1280x1080) is going to require very good Ogg Theora. Then again, we have not tested Ogg Theora to prove the quality.

I understand that the Cinelerra Linux NLE supports Ogg Theora.

Ogg Theora, at its fullest, is supposed to be an alternative to MPEG-2, but closer to MPEG-4 in performance, I think. It is an advanced form of the VP3 codec, whose lineage is now up to VP7, which is supposed to be better than H.264. The camera to match and beat today is the HVX200 100Mb/s DVCPROHD camera (much less than that in 720p). But that old codec is being replaced next year by a frame-based H.264 Intra codec that has indications of getting double DVCPROHD quality at 100Mb/s, with 10-bit 4:2:2. So there is a possibility that a full Ogg Theora implementation might stand against it, except under severe scene changes. There is also AVCHD, based on H.264 inter coding (intermediate frames with motion estimation etc.). It is possible that at 24Mb/s it will be better than 100Mb/s DVCPROHD, again except under severe scene changes like fast motion.

The one big advantage of the present bandwidth is that theoretically the full 100Mb/s can be used for 720p, which the DVCPROHD cameras do not support; but how high is the compressor throughput compared to this?

I understand that Ogg can do 4:2:2 and 4:4:4, but I forgot to check for 10 bits. Is 4:4:4/4:2:2 available in this camera, Andrey?

Wayne Morellini
July 25th, 2006, 01:54 AM
Andrey,

Recently I have been looking up component-to-Gigabit-Ethernet converters. I thought you could make one with an ADC front end, an FPGA, and a GigE-conditioned back end (actually, I am trying to find some kind soul who might want to do it for us). Such a thing could be done DIY-style for perhaps $100 with a simple FPGA, streaming out to hard disk or laptop at 10+ bits. Some simple, mild compression might be possible, or proper lossless compression with a bigger FPGA and more expense (like $500). You could then use it on a lot of HD cameras.

It then occurred to me that your camera base could do this to hard disk (dropping the need for GigE) with only a mild adjustment to accept component input and to record control signals from a FireWire/LANC jack. All other camera and sensor control is done in the main body, so you have controls/battery/Elphel main board plus a hard disk and component recording solution. If it can reach at least a 100Mb/s compression rate, it is useful, cost-effective equipment for some people. The only real thing needed on top of this is to be able to input custom settings for the component readout sync and format configuration (even auto-sync and remember), as some cameras output a unique component format and resolution (up to 14 bits) that can be processed into a better image than normal component.

Does this sound doable?

Matteo Pozzi
July 26th, 2006, 04:34 PM
Hi to all ....
http://photos1.blogger.com/blogger/3768/1172/1600/imgp0736_std.jpg
This is what I'm planning to do with the Elphel 353 (this photo is from a Silicon Imaging 1920 camera). For now I'm working on a project for a Canon FD static adapter with a Nikon F3 type D screen, and if it works properly I'll go for a Mamiya 645 medium-format adapter, because I want to achieve a 2.35 aspect ratio.
The LCD screen will be a touchscreen Lilliput 7" 800x480 (I built a DIY projector some time ago, and after some internet research I think it is the best-quality and cheapest panel you can get for about 200 euro).
All the cables go to a notebook PC, so:
camera --> Ethernet to PC --> PC to LCD (VGA cable) --> and from the LCD (USB cable) back to the PC for touchscreen mouse control.
What do you think?

Wayne Morellini
July 26th, 2006, 10:49 PM
Nice. Did you say it was a rig for an SI? What did you take the picture with?

Matteo Pozzi
July 27th, 2006, 01:37 AM
The photo of the SI camera is from the blog of David Newman, who won the "Best Cinematography" award with it at the 48 Hour Film Festival. http://cineform.blogspot.com/

Wayne Morellini
July 28th, 2006, 02:14 AM
http://www.tacx-video.com/raw-demo/RothTri-333Raw-MJpeg.avi here is something I found on this laptop. Not ideal lighting (washed-out sky) and it's 4:3.


I have finally got the video downloaded. Do you have anything recorded at full quality settings? This is around 23.6Mb/s, and having taken snapshots in VLC and examined the stills, I can see lots of blocking and faults, as you can expect in highly compressed footage. Being 1.3MP and 27fps also puts further load on the compressor. Could you post something at the highest bitrate at 720p and 25fps? That would be a better indication.

I take it that the colour etc. looks lovely, much better than some footage I have seen. I can judge the true balance between this sort of man-made colour and some grass. The Micron sensor's rendition is giving it a boost over the compression appearance, but I think the camera has done a lovely job of compression at 23.6Mb/s.


Thanks

Wayne..

Phil Stone
July 28th, 2006, 06:33 AM
I have finally got the video downloaded. Do you have anything recorded at full quality settings? This is around 23.6Mb/s, and having taken snapshots in VLC and examined the stills, I can see lots of blocking and faults, as you can expect in highly compressed footage. Being 1.3MP and 27fps also puts further load on the compressor. Could you post something at the highest bitrate at 720p and 25fps? That would be a better indication.

I take it that the colour etc. looks lovely, much better than some footage I have seen. I can judge the true balance between this sort of man-made colour and some grass. The Micron sensor's rendition is giving it a boost over the compression appearance, but I think the camera has done a lovely job of compression at 23.6Mb/s.


Thanks

Wayne..

Not sure what you mean about artifacts; are you sure that's not the VLC player? If you zoom into a frame you can see a grid-like pattern, but that's not visible at full resolution. To me the video is not quite as clear as I have had with the 313 in good weather, but it did the job. The auto exposure was on steroids here too; I find it better to do this by hand until I figure out all its functions. I think the problems with the clip are my fault and not the camera's.

There is no MPEG-4-type compression visible because it's simply a stream of 70%-quality JPEGs taken at 1280x1024. The image quality of each frame will be the same as that from one of the early digital cameras set to 70%, but here there are 27 of them each second for about 3 hours!

I've never managed to film at 100%. I do have some nice 85%-quality footage from the Alps with a fisheye: http://www.tacx-video.com/images/HD2006/France/AlpineClassic-New/AlpineClassic-HD/DV-HD There are also some crap-looking Canon GL2 images in the same spots for comparison.

Wayne Morellini
July 28th, 2006, 10:49 PM
100% is probably the only mode we should be looking at. The percentages are not representative of the compression ratio; 100% is still far from 1:1 (but by how much?). 85% might be good, but I would have to see the compression ratio.

So you would have to do better than DVCPROHD's 6:1; 4:1 or better (like 2:1) is preferable.
If you frame for 1280x720 at 25/24fps, it would be interesting to see how high you can go in the quality modes.

It has to do with the maximum throughput of the compressor and of the data path/transmission, independently. By lowering the frame rate and frame size you reduce the capacity used, possibly allowing a higher quality mode.

I was viewing snapshots from VLC in the normal Windows picture viewer, and there were no grids, but there were inaccurate blocks of pixels. Snapshot a number of frames throughout the sequence; they should turn up in the Pictures folder. Zoom into the road surface and you will see a disjointed pattern of blocks with lines and shapes in them; look around the car lights and the riders' suits for coloured patterns as well. This video has a lot of plainish surfaces, which could have boosted the compressibility of the detailed areas quite a bit.

MJPEG gets artifacts as well, because it is not lossless, and maybe not even visually lossless, though the 100% mode might be. At 23.6Mb/s it is less than DV's 25Mb/s, and much less than DVCPROHD. DVCPRO gets true SD pro quality at 50Mb/s, which is somewhere around 3.1:1 (I think), and Digital Betacam got around 2.3:1 at 80+Mb/s I think (it's a long time since I read up on these).

You can understand why I dropped out of the current generation of cameras and am waiting for the future models; my heart is in close-to-visually-lossless territory or better. If it can't produce quality better than AVCHD, then I am not bothered; if it can produce DVCPROHD quality I am mildly interested (it takes substantial time and effort setting up and coping with a custom rig). If it can exceed that, then it is worthwhile, and the target is the coming 100Mb/s frame-based H.264 Intra codec, which you should be able to match by doing an inter-coded format at 100Mb/s. As far as MJPEG goes, the advantage is really if you can run 720p at 100Mb/s, which is pretty high in quality compared to what DVCPROHD will normally let you run at (due to being locked into lower resolutions/data rates for the smaller frame formats, from my understanding).

Wayne Morellini
July 28th, 2006, 11:39 PM
I had a look at those stills; did you shoot them at 640x512?

The 512 ones, which I guess must be the non-GL2 ones, look substantially better (though that has a lot to do with the way the cameras render the image), but there is still artifacting present. In web graphics I think the rule is to use 100% quality (or rather use GIFs etc.); I definitely found a substantial drop-off below 85% in testing.

I notice that one camera has a lower compression ratio than the other, which is reflected in the image.

But it is hard to get a comparison here, because they are shot at two different resolutions, and the Canon frame is not a native straight frame, so recompression comes into it. If you shot at high resolution and then downscaled the Elphel footage you would automatically also get some advantage. Have you got the native, unretouched frames?

Definitely makes the GL photos look second rate.

Can you shoot at 85%+ at a higher resolution than this, like 720/25p?

Wayne Morellini
July 29th, 2006, 01:29 AM
http://www.theinquirer.net/default.aspx?article=29927

Surprisingly cheap, the sort of thing that could go into the 353.

Andrey Filippov
July 29th, 2006, 09:38 PM
http://www.theinquirer.net/default.aspx?article=29927
These are CF and USB2; the 353 needs IDE. I meant http://www.seagate.com/docs/pdf/datasheet/disc/ds_st1_3.pdf - yes, they will fit into the small body. The other option is to use much larger capacity laptop drives in a bigger housing.

Andrey Filippov
July 29th, 2006, 09:40 PM
What do you think of this 3D wavelet that is being used in surveillance and other industries; is it any good?

I have not worked with any wavelet compressors so far. They need somewhat more computational power; I'm waiting for the 65nm Spartan 4.

Andrey Filippov
July 29th, 2006, 09:46 PM
Is 4:4:4/4:2:2 available in this camera, Andrey?

No, currently it is 4:2:0. If you need 4:4:4 (the sensors are Bayer anyway) you can compress as if monochrome and then do the colour post-processing on a PC. 4:4:4 requires up to 3 times the bandwidth to send interpolated data: a 3MPix sensor is 2048x1536, not 3x2048x1536.

Andrey Filippov
July 29th, 2006, 09:48 PM
Does this sound doable?
we'll see :-)

Wayne Morellini
July 29th, 2006, 10:17 PM
These are CF and USB2; the 353 needs IDE. I meant http://www.seagate.com/docs/pdf/datasheet/disc/ds_st1_3.pdf - yes, they will fit into the small body. The other option is to use much larger capacity laptop drives in a bigger housing.

It is about the cheap Microdrive pricing, with the CF version of the Seagate drive the cheapest. This article was linked from an article on the IDE Seagate.

Re-edit: I forgot to ask, what pixel formats does the Elphel do MJPEG in; just 4:2:0 and monochrome?

Wayne Morellini
July 29th, 2006, 11:28 PM
No, currently it is 4:2:0. If you need 4:4:4 (the sensors are Bayer anyway) you can compress as if monochrome and then do the colour post-processing on a PC. 4:4:4 requires up to 3 times the bandwidth to send interpolated data: a 3MPix sensor is 2048x1536, not 3x2048x1536.

Thanks for answering the questions, Andrey. The monochrome image conversion looks good; how does its compression compare between MJPEG and Ogg?

I mentioned a way to convert the Bayer data into a normalised colour image before (I think through pixel-by-pixel conversion to bring the colour values closer together for compression). But you can normalise the whole Bayer pattern to one colour, or to full monochrome, at once; you then have a much smoother, more compressible image, to which you can also apply simpler compression techniques on a bit-plane basis, plus difference compression etc., like JBIG (I think). To restore the Bayer data you already know the Bayer pattern, so you can reverse the process. You could do this simply by working out the normalised colour value from an estimated difference to the pixel value itself, or through normal interpolation (R 30%, B 10%, G 60%, etc.).
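
A minimal sketch of what I mean, assuming a fixed RGGB layout; the class name and the gain values are placeholders for illustration, not anything taken from the Elphel code:

    // Scale each Bayer photosite toward a common level so a greyscale compressor
    // sees smoother data. The gains would really be measured from a grey reference;
    // here they are just placeholders. Keep the working buffer wider than the sensor
    // bit depth so scaled values don't clip. The round trip can be off by one level
    // because of rounding; for strictly lossless use integer-exact gains or keep the
    // tiny rounding residuals as side data.
    public class BayerNormalize {
        static final double R_GAIN = 1.9;   // placeholder white-balance style gains
        static final double B_GAIN = 1.6;

        // RGGB layout assumed: (even row, even col)=R, (odd,odd)=B, the rest G
        static double gain(int row, int col) {
            if (row % 2 == 0 && col % 2 == 0) return R_GAIN;
            if (row % 2 == 1 && col % 2 == 1) return B_GAIN;
            return 1.0;
        }

        static void normalize(int[][] bayer) {       // forward: before compression
            for (int r = 0; r < bayer.length; r++)
                for (int c = 0; c < bayer[r].length; c++)
                    bayer[r][c] = (int) Math.round(bayer[r][c] * gain(r, c));
        }

        static void denormalize(int[][] bayer) {     // inverse: after decompression
            for (int r = 0; r < bayer.length; r++)
                for (int c = 0; c < bayer[r].length; c++)
                    bayer[r][c] = (int) Math.round(bayer[r][c] / gain(r, c));
        }
    }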

There were some other schemes mentioned in the Digital Cinema threads:
Split the Bayer pattern into 3-4 separate bitmapped images, one for each of the sub-pixel RGB colours, and compress them separately, or do differential compression between the images and compress only the base image. I think CineForm may eventually have done something like this (I have to read their white paper sometime). Good results were reported.

A second alternative: with a 3864-pixel-wide sensor you could, through interpolation (to stop fly-screen image problems), extract a 1280x720 Bayer image, then compress and send that?


Thanks

Wayne.

Wayne Morellini
July 29th, 2006, 11:31 PM
I forgot again:

I have been meaning to ask: is there any way to feed a compression routine or formula into the Elphel without having to learn FPGA code to do it?


Thanks

Again.

Andrey Filippov
July 30th, 2006, 12:22 AM
I forgot again:

I have been meaning to ask: is there any way to feed a compression routine or formula into the Elphel without having to learn FPGA code to do it?


Thanks

Again.

No, I don't see how you can do that - it is in the FPGA code.

Wayne Morellini
July 30th, 2006, 10:50 AM
What are the existing results from monochrome compression; what data rate, and how low a compression ratio, is possible?

I have been mining for gold in Wikipedia, and things are a bit confusing (as you can understand). According to it, JPEG is 24 bits per pixel; is this so, or is it 4:2:0 or 4:2:2 in MJPEG? They also mention 12-bit colour for JPEG; is that supported in the monochrome or colour modes?

This seems very interesting for us, because 100Mb/s is under 2:1 for 720p, or about 4:1 for 1080p, in monochrome Bayer, which is well and truly in quality territory; even at 10-12 bits it still looks good for 720p, provided the compression holds up. Colour accuracy under Bayer compression will suffer a fair bit unless very low compression ratios are used with JPEG (so 720p would be the sweet spot at 100Mb/s). Using the alternative techniques mentioned before would perhaps manage that at higher compression.

I've got to be such an idiot to miss this.


Is anybody willing to give this a burl on the existing 333, to see what the results are at its data rate (was that 70 or 30Mb/s)?


Ben Syverson had a debayer plugin I think, but no longer has it listed here:
http://web.archive.org/web/*/http://www.bensyverson.com/software/plugins/linbayer/

http://www.google.com/search?q=linbayer&hl=en&hs=cAt&lr=&client=opera&rls=en&filter=0


He might still be around to contact. Here are some interesting links on debayering:

http://www.insflug.org/raw/software/tools/dcraw.php3
http://www.path.unimelb.edu.au/~dersch/interpolator/interpolator.html
http://www-ise.stanford.edu/~tingchen/


Thanks Andrey


Wayne.

John Wyatt
July 30th, 2006, 01:20 PM
Wayne -- I tried to find a contact for Ben Syverson to buy a copy of his linbayer (After Effects plug-in). Google turned up nothing new, but Ben had a plug-in called dvMatte (a keyer not a debayer) advertised on a video website (www.dvgarage.com). I emailed them (info@dvgarage.com) to ask if they could give me a contact for Ben, or forward my request about linbayer to him, but no reply after about a month now. All leads to Ben seem dead...
John.

Jamie Varney
July 31st, 2006, 12:15 AM
Hello Andrey. I recently read through this entire thread and I must say I am very impressed; both by the quality of video that the 333 is capable of and your willingness to help out the community. But now that the 353 is on its way I have a few questions about it. So here goes:

1. With the IDE connector built in, I am assuming that we are going to see much higher bandwidth than on the 333?
2. If the image data can be stored on the hard drive, I assume we will still need a computer with a network interface to control the camera?
3. I know your priority is to use this as a security camera, so I was wondering how much code development you are going to do for the IDE interface? Are you going to fully implement it or are you going to leave it up to us who want to use it for other purposes?

Well these are all the questions I can think of for now. Thanks for your time.

Andrey Filippov
July 31st, 2006, 01:35 AM
1. With the IDE connector built in, I am assuming that we are going to see much higher bandwidth than on the 333?

Not "much higher". Currently compressor speed is limited to 125MHz/2=62.5MPix/sec - for 4:2:0 it corresponds to 2/3*62.5Mpix/sec~=42MPix/sec of the original sensor pixels. 353 will have slightly (20%) larger FPGA, but I'm planning to first add more features, not to increase the speed - that I plan for the new generation of Xilinx chips

What will increase is the CPU speed, network operation (Ethernet checksum calculation in dedicated hardware of the CPU chip) and the FPGA-to-system-memory transfer. Writing to disk is faster too; most of the difference compared to the 333 will be at low compression ratios / high quality.



2. If the image data is going to be able to be stored on the hard drive I assume that we are still going to need a computer with a network interface to control the camera?

Are there any computers w/o network interfaces :-)?

The other option will be to use the USB host (unfortunately the CPU has only 1.1, not 2.0), which might be connected to a USB WiFi adapter (one of those that has an open-source driver, so it can be compiled to run on the non-x86 camera CPU).


3. I know your priority is to use this as a security camera, so I was wondering how much code development you are going to do for the IDE interface? Are you going to fully implement it or are you going to leave it up to us who want to use it for other purposes?

No, it is not the only application I'm personally interested in. There are many others, but what is true is that there are too many projects I'm already involved in, so I cannot take on any additional custom development; I can just provide minor consultation to others who have the time and ability to work on related projects. And we can usually provide hardware (with discounts or free of charge) to those who are eager to get involved.

As for the IDE interface itself, I'll definitely make sure it actually works. After that it is just a hard drive connected to the GNU/Linux computer (in the camera); you can use it with any of the existing file systems and other software.

Jamie Varney
July 31st, 2006, 05:16 PM
Excellent, thanks for the reply. I have not used FPGAs since I was in school, so I may be a little rusty with them. But I have a friend who works with them regularly, so I may get him to give me a refresher course. Then I will start looking over the great information on your site to get a better idea of how your setup works. I know you have posted the image of the routed 353 board, so would you be willing to share the actual schematic at this stage?

Thanks again.

Andrey Filippov
July 31st, 2006, 06:34 PM
so would you be willing to share the actual schematic at this stage?

No, not yet. It will be available (as those of the previous models are) when it is actually manufactured and tested. Until then it can have errors, and I don't want misleading information set loose on the Internet; somebody might use it as a reference and be misled.

Phil Stone
August 1st, 2006, 11:06 AM
It is about the cheap Microdrive pricing, with the CF version of the Seagate drive the cheapest. This article was linked from an article on the IDE Seagate.

Re-edit: I forgot to ask, what pixel formats does the Elphel do MJPEG in; just 4:2:0 and monochrome?

For me I would prefer a 7200rpm laptop drive mounted outside the camera, perhaps via a cable?

Andrey Filippov
August 3rd, 2006, 08:16 PM
For me I would prefer a 7200rpm laptop drive mounted outside the camera, perhaps via a cable?

We'll make a larger body for that.

Zsolt Hegyi
August 10th, 2006, 05:09 AM
Hello all,


In the last few months I've been busy building my own camera based on a Micron sensor. It didn't go really well, as I'm mainly a software guy. Thanks to this thread I found out that Elphel is building nearly the same device that I wanted to build. So I decided to leave the hardware work to professionals and develop my own software.

I'm interested only in the 1280x720 resolution because: 1) the new sensor will be able to provide that with binning, and 2) this amount of data might be compressed losslessly to fit into the bandwidth of the camera (at 24fps).

What we know:

- current camera bandwidth is 8.75MB/s over Ethernet; the new camera will have a faster processor, which will raise this slightly.
- direct-to-disk recording alone is not useful for those who want to see and control the picture while recording; that can only be handled by a separate PC.
- current memory I/O is slow, and although they plan to increase it by a huge factor, there will still be memory I/O during compression when using Theora.
- a LUT can be used to drop the bit depth down to 10.
- 1280 x 720 x 10 bits x 24fps ~= 26MB/s, so using the 333 we would need about a 3:1 compression ratio, but with the 353 a smaller value might also be sufficient (quick check below).
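
A tiny sanity check of that last line, keeping the units as loose as in the post (the class name is made up for illustration, and the 8.75MB/s figure is the Ethernet number from above):

    // 1280x720 Bayer, 10 bits per photosite, 24fps -> raw MB/s, and the ratio
    // needed to squeeze that through the ~8.75MB/s Ethernet path.
    public class BandwidthCheck {
        public static void main(String[] args) {
            double rawMBps = 1280.0 * 720 * 10 * 24 / 8 / (1 << 20); // ~26.4 MB/s
            double ethernetMBps = 8.75;
            System.out.printf("raw %.1f MB/s, needed ratio %.1f:1%n",
                    rawMBps, rawMBps / ethernetMBps);                // roughly 3:1
        }
    }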

After I have an encoder I plan to write the decoder part. This will be written in C and realized as a plugin for a popular video editing application running on a PC. This plugin will never modify the actual raw data beneath; changes will be stored as metadata.

If all goes well I intend to release the stuff. Andrey told me he needs a few weeks to complete the camera, so I'm planning to be ready with my part at the same time. I'll release sample images once I have my camera and all my software working with it.

I don't have experience in image compression, so if you have suggestions feel free to post them here; and Andrey, if you feel I'm about to do something stupid please correct me :-) I don't want to run into dead ends.

And another thing: I'm only interested in simple algorithms; if the complexity goes beyond a certain level I'll stop development and just use the built-in codec of the 353...


Zsolt

Wayne Morellini
August 12th, 2006, 10:55 PM
Hello all,
In the last few months I've been busy building my own camera based on a Micron sensor. It didn't go really well, as I'm mainly a software guy. Thanks to this thread I found out that Elphel is building nearly the same device that I wanted to build. So I decided to leave the hardware work to professionals and develop my own software.

You will find this camera imposes its own unique set of constraints because of the Ethernet throughput, but it has good potential because of the built-in compressor and Bayer output.

Zsolt, are you doing this compressor in software or hardware? Most compression schemes are difficult, and I don't know if the camera has enough processing power to do it in software.

One of the easiest improvements is to buffer the frames (to smooth out the data rate) and to store the difference between frames. Using the internal compressors as well, this would give you an advantage in disk space and data rate much of the time. One of the guys in the Digital Cinema threads was doing a simple lossless Bayer codec with frame-difference compression and reported very good results. I don't know if it was one of the Juans, or Jason Rodriguez, who is now at Silicon Studios, but it would be best to contact them. I think it was mentioned in my Red codec suggestions thread in the Red camera sub-forum.

Read my previous suggestion posts here; I believe I posted links to Wikipedia pages listing many open codecs as well as lossless ones. It might be easier to drop one in from existing software, if software is what you plan. As you can see, I mentioned that the BBC has one coming along in FPGA (there is also more behind it), so it is worth looking at.

I'm interested only in the 1280x720 resolution because: 1) the new sensor will be able to provide that with binning, and 2) this amount of data might be compressed losslessly to fit into the bandwidth of the camera (at 24fps).

Read my previous post on binning, converting to Bayer and compressing Bayer. Please note, to get lossless with JPEG I think you have to increase the precision of some of its data paths (see the newsgroup lossless compression FAQ, via Google).

- direct-to-disk recording alone is not useful for those who want to see and control the picture while recording; that can only be handled by a separate PC.

If you can tap the image feed you can duplicate it off to an external display; an external controls port, and some buttons and switches through that, would give you full independent camera control.

You can record direct to disk on the camera and send a feed out through Ethernet to a portable computer (even a handheld) for viewing. In this way all the handheld has to do is decode, display, and send control signals back, so a lower-cost device is all that is needed. An uncompressed image could even be saved to disk. With a simple compression algorithm like the one Juan/Jason (whichever it was) was working on, you could save heaps of disk space while sending a JPEG/Theora version to the portable.

- current memory I/O is slow, and although they plan to increase it by a huge factor, there will still be memory I/O during compression when using Theora.

If you plan on using Theora, how much better do you think it will be for lossless work (normal JPEG is very poor at lossless compression; see the newsgroup FAQ)?

After I have an encoder I plan to write the decoder part. This will be written in C and will be realized in a form of a plugin of a popular video editing software running on pc. This plugin will never modify the actual raw data beneath, it'll store it as metadata.

Good.

If all goes well I intend to release the stuff. Andrey told me he needs a few weeks to complete the camera, so I'm planning to be ready with my part at the same time. I'll release sample images once I have my camera and all my software working with it.

You are getting a 353 camera? I thought that was next year; whatever happened to the 343? You must be getting a lot of information from Andrey.


I wish you success, Zsolt; congratulations.

Wayne Morellini
August 12th, 2006, 11:16 PM
I forgot to mention low-cost devices that can be used instead of a computer for viewing and control, if they have Ethernet. Most will have some form of official/unofficial Linux port, so Linux development can be moved between systems:

Intel's UMPC (Ultra Mobile PC) platform, MS Origami (cheaper version coming).

Some Playstation (I don't think the Portable has Ethernet??)
PDA
Nintendo Wii (I think maybe Ethernet)
Embedded Industrial Micro-controller, hundreds, if not thousands to search through.


Future machine:
Nintendo GameCube Portable (maybe called GBA2), which I expect will have a higher-res screen than the PSP, but its Ethernet status is unknown (maybe only wireless).

If there is cheap Ethernet to USB/SD/?? card adaptor, then most portable devices without Ethernet can be used.

Search for my technical thread for much information about previous digital cinema cameras here:

http://www.dvinfo.net/conf/showthread.php?t=28781&

Wayne Morellini
August 13th, 2006, 02:19 PM
Hello Zsolt,

I have spent some time posting a thread over at the comp.compression newsgroup requesting advice on lossless Bayer compression etc. You might like to go over there and see what people say; there should be a lot of expert professionals there. If you don't have a newsgroup reader set up, you can find it through Google's newsgroup reader. It starts "Lossless/Near lossless, Bayer".

There is a thread there with information on significant JPEG recompression; there were a number of techniques, but the best is covered by a patent. Unless Elphel has an arithmetic coding licence you probably will not be able to use most of them.

Don't be discouraged if nobody replies to your post; it happens around here. It is best to just keep looking into your ideas in the short term.

Andrey Filippov
August 14th, 2006, 02:26 AM
Whatever happened to the 343?
That number was reserved for a different type of camera (slower), like the 323.

Wayne Morellini
August 14th, 2006, 04:47 AM
Thanks. I just got a post from a guy over at the newsgroup with some good suggestions for lossless and Bayer compression, which he worked on and implemented with 1000 LUTs in an FPGA for his thesis. His suggestion sounds a bit like what was suggested in discussions with CineForm a while ago.

Otherwise I am not really getting the depth or breadth of answers I am looking for over there.

I wish I could post a link, but I don't know how to; maybe through Google, but it's a bit of a chore.


What do you think of the idea of changing the different-coloured Bayer pixels into one colour, which can be restored after decompression, and then compressing as greyscale? Would that be simple, and would it help much with compression?

Zsolt Hegyi
August 14th, 2006, 06:59 AM
Zsolt, are you doing this compressor in software, or hardware?

I don't know yet, but Verilog is the most likely, as the processor cannot handle all the raw data. Currently I'm in the process of understanding how the Elphel architecture works; we'll see.


One of the easiest improvements is to buffer the frames (to smooth out the data rate) and to store the difference between frames.

If I'm unable to reach the targeted compression ratio with lossless intraframe coding, the thing you mentioned will probably be the solution. However, this method would only be applied to get the data through the Ethernet; the recording software must convert the incoming stream to an intraframe-compressed one so that linear editing will be possible later on.

The algorithm I'm currently using for intraframe coding will also be sufficient for the interframe compression. Only the previous frame needs to be stored with this method, not a whole group of frames, so it's easier to implement. The first frame must be stored without compression, though, but that's not a problem as the average bandwidth usage will not increase in the long term.


You can direct to disk on the camera and send a feed out through Ethernet to a portable computer (even handheld) for viewing.

Good idea, but with different transmitting and recording formats I still cannot use the built-in disk recording in the Elphel. Plus, getting an SDK for handhelds usually costs money. But as I stated, I have no problem using a separate mini-PC with a touchscreen.


An uncompressed image could even be saved to disk.

I'm afraid that won't be possible with the Elphel. The processor still has to participate in the disk transfers to some extent, so we have limited data throughput there.


If you plan on using Theora, how much better do you think it will be for lossless work (normal JPEG is very poor at lossless compression; see the newsgroup FAQ)?

If I end up using Theora it won't be lossless. Tune it up to maximum quality (maybe a bit better than the 333), use the thing, and that's it.


You are getting a 353 camera? I thought that was next year

Andrey only told me that he needs a few weeks for the hardware to be finished. If I can get my hands on a prototype that would be nice, but no promises so far...


congratulations.

Don't congratulate me until I have something to show :-) There are so many people here with failed attempts; I just hope I won't be one of them.

Zsolt

Zsolt Hegyi
August 14th, 2006, 07:09 AM
What do you think of the idea of changing the different-coloured Bayer pixels into one colour, which can be restored after decompression, and then compressing as greyscale? Would that be simple, and would it help much with compression?

I remember you've already posted this somewhere... And I think we can achieve better ratios by compressing the channels separately. This is because the frequency content varies greatly between the three channels and one particular algorithm (whatever that might be) can always work better with data containing only similar frequencies.

Wayne Morellini
August 15th, 2006, 01:07 AM
I remember you've already posted this somewhere... And I think we can achieve better ratios by compressing the channels separately. This is because the frequency content varies greatly between the three channels and one particular algorithm (whatever that might be) can always work better with data containing only similar frequencies.

Hi Zsolt, have you looked at the newsgroup thread? It is starting to go well.

As I understand it, colour follows brightness, which is why Bayer interpolation works: the brightness contains the high-frequency data, but the colour tends to stay the same over large areas, so each colour channel contains mostly the same frequency information. Converting reversibly to one colour reduces most of the channel-to-channel variation down to the real underlying differences in frequency plus the odd colour difference. When you convert to one colour you bring the pixel values closer to one another and can apply more efficient greyscale compression. But I see what you mean: retaining reversibility means there will be some extra frequency variation. The interesting thing is that the variation reflects real data differences.

This was only a simple, stop-gap idea that could be very easily implemented on the existing compression architecture, to give an extra boost to the performance of compressing the Bayer pattern as greyscale (which Andrey recommended for the existing setup) by reducing the differences.

I had been thinking of a new method some time back. Here you split into channels as you suggest, convert to a normalised colour, compress one channel (this leads to compressing disjointed details, as the intervening pixels are missing), then record the difference of the other channels from it. But with my idea you base the difference on the interpolation between the pixels of the first channel, i.e. the interpolated value is the assumed value of the pixels of the other channels. This is not as simple as doing the conversion and using the existing greyscale compression.

Approaching the above system from another direction, you can leave all channels in their existing colours; for the assumed interpolated pixel used for comparison, you can also convert it into the colour of the channel of the pixel being addressed.

I should have stated this earlier: the reason I designed the above like that is that there are different response curves for each colour, which means exactly the same brightness on an object has a different value in each colour channel. This is surplus, predictable information, and my methods are designed to eliminate it and increase compression.
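
A rough sketch of the prediction part of that idea, assuming an RGGB layout and a per-frame red/green ratio estimated beforehand; the class name, method names and the 0.55 value are made up for illustration and are not existing Elphel code:

    // Predict each red photosite from the interpolated green around it, scaled by
    // an assumed channel ratio, and keep only the residual. Residuals cluster near
    // zero on smooth areas, so a simple entropy or run-length stage behind this
    // sees much smaller values. Exactly reversible: the decoder rebuilds the same
    // prediction from the already-decoded greens and adds the residual back.
    public class GreenPredictedRed {
        // estimated once per frame, e.g. meanRed / meanGreen; 0.55 is a placeholder
        static double redToGreenRatio = 0.55;

        // average the left/right green neighbours of a red site (RGGB: reds on even rows/cols)
        static int greenEstimate(int[][] bayer, int r, int c) {
            int left  = (c > 0) ? bayer[r][c - 1] : bayer[r][c + 1];
            int right = (c + 1 < bayer[r].length) ? bayer[r][c + 1] : left;
            return (left + right) / 2;
        }

        static int[][] encodeRedResiduals(int[][] bayer) {
            int[][] res = new int[bayer.length][bayer[0].length];
            for (int r = 0; r < bayer.length; r += 2)            // red rows
                for (int c = 0; c < bayer[r].length; c += 2) {   // red columns
                    int predicted = (int) Math.round(greenEstimate(bayer, r, c) * redToGreenRatio);
                    res[r][c] = bayer[r][c] - predicted;         // small signed residual
                }
            return res;
        }
    }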

Going one step further, into the realm of fax compression standards (see JBIG for the best): there is area compression, with a lot less processing I believe, where the difference across a line is recorded, then the difference between lines, which can then be easily run-length encoded.

To get over corruption, there are file and transmission techniques to fully recover data from corrupt files, for maybe a 25% increase in final compressed size, which can be more than compensated for by the compression savings from using difference compression on the original channel, or longer GOPs in other compression schemes, etc. JBIG tends to get good compression when there is less noise and detail, but I think that by combining it with the normalised-colour and assumed-value techniques I describe above you can get better compression than CineForm.

Forgive me if my reasoning here is not completely clear; I am having trouble recalling exactly what I had in mind when I designed this strategy some time ago.

Wayne Morellini
August 15th, 2006, 01:12 AM
I was going to reply to your other message, which I am most appreciative of, later, but I'll condense it in here, as I am in a rush, as you can see from my jumbled thoughts:


lossless techniques

The techniques I am describing are all truly lossless; the data is still reversible, because you still store the differences needed to reconstruct it exactly.


Advantages of using inter frame for storage and converting to intra frame for editing.

Storing intraframe lossless will quickly consume a lot of space and cost many hard disks. If done properly, all the techniques with interframe compression added might compress marvellously. The original intention with the interframe compression idea was to convert to an intraframe intermediate codec only when you were ready for editing, and save back to the interframe version for storage. So you get the best of both worlds, but know that lossless Bayer with 100 hours of filming is a lot of drives, before you even get to the extra footage in the editing process. Not completely nice, but a compromise between two problems.


First frame uncompressed

You do not need to leave the first frame without compression: you have the original image in the buffer to perform the difference comparison on, and if you compress the first frame losslessly then you get the original frame back on decompression. If you are talking lossy, that is a different matter, as there will be a quality difference between the first frame and the others, since the others are virtually lossless differences and the first frame is lossy. A compute-intensive way around this is to compress every frame and decompress it (now the quality is consistent), and then record the difference between each subsequent frame; there must be a smarter way of doing this without having to fully compress.

Maybe you could compress a heap of frames as one extended frame, thereby taking advantage of the repetition represented by the sub-frames. This takes extra time, but a long buffer could be used to smooth it out. As you can see, there are too many variations ;).


UMPC/Origami

As long as you are not developing a Microsoft Origami / Intel UMPC hardware device, I think free SDK/application development platforms should be available. I am sure there is a Microsoft cross-platform development environment. With Linux you are in the usual situation: somebody probably is trying to develop a version of Linux for them.

At any price under $799 you are getting too close to the cost of an ITX + monitor + battery system. I expect we will eventually see UMPCs below $500 (try VIA web-pads too).


Disk transfers uncompressed

If the processor is offloaded from the compression task, I think there might be enough processing power, as long as DMA is available and it is not restricted to 16MB/s. It is just a very simple/easy option, and the data can be compressed afterwards for storage. Of course I am only talking about 720p at 25/24fps here, not 1080, which I agree would be too much.


Not dual Ethernet/Disk formats

You can transmit via Ethernet for viewing and record the same format to the internal camera disk, then do the intraframe conversion later. But if the processor is free enough from the FPGA JPEG process, you could record to disk and use the FPGA to produce JPEG for Ethernet. If it passes through the processor then maybe not, unless there is DMA to perform the transfers.


Quality

Even if we can only get max-quality JPEG onto a disk, that is an improvement. Does anybody know what compression ratio max quality corresponds to? I think 3:1 or better is what we should be looking at; 2:1 is probably close to the visually lossless level of CineForm. But JPEG is sloppy and imprecise; there are ways to get it to compress a much sharper image. I do not know what quality of JPEG mechanism the Elphel uses; the images look nice, so maybe it already uses a better, more accurate JPEG mechanism. Does anybody know? For this difference refer to the threads in the newsgroups I mentioned, and the compression FAQ related to the newsgroup, under the section on lossless JPEG implementations, which explains why normal JPEG lacks the precision for true lossless.


Well the afternoon is gone again, looks like I didn't save any time anyway ;).

Zsolt Hegyi
August 15th, 2006, 05:42 AM
the brightness contains the high frequency data but the colour tends to stay the same over large areas, so each color contains mostly the same frequency information

The human eye is most sensitive to the green channel (that's why there are two green photosites). Because of this, the green channel carries most of the luminance info, so its frequency content will not match the other two colours. However, the other two colours might be joined together into one channel as you suggested.


compress one channel (this leads to compressing disjointed details, as the intervening pixels are missing)

Yes, but only one pixel is missing, and that pixel is usually intermediate between the two pixels on the channel. So basically we halve the original frequency of that channel, but the frequencies remain similar. If we introduce this middle pixel and do the greyscale conversion, then what we gain from the smaller difference between adjacent pixels we lose to the greater variance in the frequency domain, introducing more peaks and lows into the compressed stream. So the stream won't be as smooth, even if the average data rate remains the same. So what's the use; keep it simple.

However, all of the above is obsolete if we use an interframe method which I'm starting to like more and more.

I already have an algorithm which I implemented in Java, and I've done some tests with it. In lossless mode the intraframe results were about 1.5:1 on high-frequency data (a picture of a tree with sunlight coming through its leaves) and 2.8:1 on low-frequency data (asphalt as a background with some vehicles on it). If we consider that the variation of the same pixel between frames is usually much lower than the content of the mentioned low-frequency image, then we'll be able to achieve ratios even larger than that. And we need only 3:1 (or an even smaller ratio is enough if we write to disk, see below), so there'll be some bandwidth remaining for camera panning / object movement, which generates images that are harder to compress in the time domain. If the camera/object movement is really fast then the resulting motion blur will smooth the differences between pixels anyway, so we get the same frequency content as if the movement were slower.


The original intention with the interframe compression idea was to convert to an intraframe intermediate codec only when you were ready for editing, and save back to the interframe version for storage.

The usability depends on how time-consuming this conversion will be before and after editing. After all, we're talking about terabytes. But for long-term storage (archiving) the solution is definitely interframe.


You do not need to leave the first frame without compression: you have the original image in the buffer to perform the difference comparison on, and if you compress the first frame losslessly then you get the original frame back on decompression.

That's right, but then we introduce two kinds of compression only to compress one frame of information with the first method and all the other frames with the second. My current approach is to initialize an internal buffer to zeros, and the first frame will contain the differences from this zero-filled buffer, so one method is enough. This first frame will of course be larger than the following ones, even larger than when there's camera/object movement, but that doesn't matter as it's only one frame. The reason I wrote "uncompressed" is that the resulting size will be the same as an uncompressed image's.
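
A minimal sketch of that zero-initialised difference scheme, leaving the entropy-coding stage out; the class and method names are made up for illustration:

    // Every frame is coded as the per-pixel difference from the previous frame;
    // the very first frame differs from an all-zero buffer, so it comes out at
    // roughly raw size while later frames shrink wherever the scene is static.
    // Exactly reversible. Use separate encoder and decoder instances; each one
    // tracks its own reference frame.
    public class FrameDiffCoder {
        private int[] previous;                     // starts as all zeros

        public FrameDiffCoder(int pixelCount) {
            previous = new int[pixelCount];
        }

        public int[] encode(int[] frame) {
            int[] residual = new int[frame.length];
            for (int i = 0; i < frame.length; i++) {
                residual[i] = frame[i] - previous[i];  // small values on static areas
            }
            previous = frame.clone();                  // reference for the next frame
            return residual;
        }

        public int[] decode(int[] residual) {
            int[] frame = new int[residual.length];
            for (int i = 0; i < residual.length; i++) {
                frame[i] = residual[i] + previous[i];
            }
            previous = frame;                          // decoder tracks the same reference
            return frame;
        }
    }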


If the processor is offloaded from the compression task, I think there might be enough processing power, as long as DMA is available and it is not restricted to 16MB/s.

According to Andrey, the current bottleneck comes from the encoder's speed, which is around 40Mpix/s, two-thirds of the original pixel rate of 60Mpix/s. Because the clock frequency won't be raised in the 353, the maximum output rate we can reach is 1.5 times the current speed. So, the current speed is 8.75MB/s; with the new memory architecture and processor we might have, let's say, 10MB/s on Ethernet with Theora. Using one-pixel-per-clock-cycle compression we could have 15MB/s to disk, but to write totally uncompressed we'd need 26MB/s.


Even if we can get max quality Jpeg on a disk, that is an improvement.

See my answer above about speed bottlenecks.

Zsolt

Wayne Morellini
August 15th, 2006, 10:12 PM
The human eye is most sensitive to the green channel (that's why there are two of them). And because of this, this channel carries most of the luminance info therefore the frequency content will not match with the other two colors. However, the other two colors might be joined together into one channel as you suggested.

Hi Zsolt, thanks for replying. What I meant (I did not put it correctly in my rush) is that in Bayer, detail is revealed by every pixel regardless of colour. I know most of these facts. My ideas are just to pre-process or use prediction to smooth the frequency content down to the real detail, to get over these problems and get better compression. My first suggestion would be a simple pre-processing stage that smooths out the data for better compression using the existing JPEG compressor, just an add-on to the existing JPEG FPGA code. My later suggestions are more complex, but still relatively simple compared to, say, JPEG, and they reduce the difference between the real data and the predicted values, to increase the performance of difference compression.

Yes but only one pixel's missing and that pixel is usually an intermediate between the two pixels on the channel. So basically we halve the original frequency of that channel but they're remain similar frequencies. If we introduce this middle pixel and do the gray scale conversion then what we win on the smaller difference between adjacent pixels we lose on the greater variances in the frequency domain, introducing more peaks and lows into the compressed stream. So the stream won't be as smooth even if the average data rate will remain the same. So what's the use, keep it simple.

Yes, that's exactly what I meant (well, one form of it), except I am trying to think of ways to smooth out the frequency domain and leave just the real detail to compress. For even better difference prediction, in the comparison you can adjust either the green interpolation or the red or blue pixel value to better match the ratio of the other (even further adjusting the value by using blue/green interpolation prediction to bring the predicted value and reality closer together, then recording the now much smaller difference).

I think we might be speaking two different languages here. You seem to be talking about spatial-to-frequency considerations like those used in JPEG and wavelets (and most codecs); I am talking about whole values and simple integer-based compression and difference schemes (like those used in fax standards), with some mathematical prediction to reduce the difference. These integer-based difference schemes with some prediction are much simpler in software and FPGA than the normal schemes, and I think need less processing.

What is best is probably to test all the methods and decide which works best in which circumstances, and include the best one; or, for better compression, if they are small enough on the FPGA, swap between them as context dictates (more advanced).

I already have an algorithm which I implemented in Java and I've done some tests with it. In lossless mode the intra frame results were about 1.5:1 on high freq data (picture of a tree with sunlight coming through its leaves) and 2.8:1 with low freq data (an asphalt as background with some vehicles on it). If we consider that the frequency of the same pixel between frames is usually much lower than the content of the mentioned low freq image then we'll be able to achieve ratios even larger than that. And we need only 3:1 (or even smaller ratio is enough if we write to disk, see below) so there'll be some bandwidth remaining for camera panning/object movement which generates images harder to compress in the time domain. If the camera/object movement is really fast then the resulting motion blur will smoothen the differences between pixels anyway so we get the same frequency as if the movement was slower.

Good. If you look through Obin's thread and my technical thread, you might find additional ideas. I know that CineForm is doing 6:1 on average at the moment after looking at these ideas, so I imagine a 3:1-4:1 average might be a reality for us in lossless. I think the ideas presented here, and the ideas presented on that comp.compression newsgroup I told you about, would help. But by all means pick the simplest ideas from them all first and try them out. Most of the difference-coding stuff is simple enough, and you can pick up existing code; the pre-processing and predictive stuff is just a couple of registers for the different pixel values, a simple calculation, and an output result, much less complex than most compression stages (thousands of gates).


The usability depends on how time consuming will be this conversion before and after editing. After all, we're talking about terabytes. But for long-term storage (archiving) the solution is definitely interframe.

I think it will not be too much hassle, and if you do one scene file at a time, or groups of takes, then it is a lot quicker.

That's right but then we introduce two kinds of compression only to compress one frame of information with the first method and all the other frames with the second one. My current approach is to initialize an internal buffer to zeros and the first frame will contain the differences from this zero-filled buffer so one method will be enough. This first frame of course will be larger than the following ones, even larger than when there's camera/object movement, but that doesn't matter as it's only one frame. The reason I wrote "uncompressed" is because the resulting size will be the same as an uncompressed image's.

I see what you mean: not two compression schemes like MPEG-2 uses. Still, with two simple compression stages we can get results. Consider this: fax-like horizontal and vertical difference compression on one channel, then differences between the channels, and, using your zero-difference first frame, differences between frames (comparing to the previous uncompressed frame). You use similar circuits, and even the same one if speed permits, for all stages. Predictive/data-normalisation elements would be straightforward for this as well.


Thanks for the accurate specs of the current system's performance. It is a pity that it can't do uncompressed, but it still puts reasonable compression within reach.

I should say, Bayer compression is definitely the way to go; you instantly get a 3:1 improvement over 4:4:4, which is very hard to match with 4:4:4 compression. Did you know the Ogg Theora people were developing a lossless wavelet compressor that they put on hold to develop the Theora codec? That should be back up and running.

Keep it up, Zsolt; I am glad that you have ideas and are examining others'. Still, it would be interesting to get Juan's input, as he was doing difference-based post-compression for storage.

Once again, I am sorry about the length; I normally rewrite more to condense, but I did not get away yesterday and have to rush again today.

Zsolt Hegyi
August 16th, 2006, 12:38 AM
I am talking about whole values and simple integer based compression and difference schemes

My current algorithm is a hybrid somewhere between the two kinds you're talking about: it is integer difference-based compression but it is very sensitive to frequency content. That's why I'd like to use it now for interframe compression because of the usually very low frequencies in that domain.


Thanks for the accurate specs of the current systems performance.

The only accurate figure was the current speed. The other numbers I only extrapolated from Andrey's comments in this thread, because nobody had summarized that part yet.

I did some thinking on using wireless networks. The new 540Mbps devices haven't come out yet, or if they have they're probably very expensive. So we could only use normal 54Mbps WiFi, which is 6.75MB/s, way too thin. So forget wireless: record to disk and transfer a reduced-resolution lossy stream to the display of a handheld attached to the camera directly, through a short Ethernet cable.

The mentioned 15MB/s disk write is the maximum the new processor will handle. The current Ethernet data rate is 8.75MB/s. If we halve the horizontal and vertical resolution and reduce the quality, we could get the preview down to 1-2MB/s, so the disk transfer could still use 13-14MB/s. The question is, do we have time to encode to two different formats?

The problem with this approach is that setting the lens focus won't be easy if based on a poor quality image.

Zsolt

Andrey Filippov
August 16th, 2006, 12:07 PM
According to Andrey, the current bottleneck comes from the encoder's speed, which is around 40Mpix/s, two-thirds of the original pixel rate of 60Mpix/s.

This is not exactly true. In the 333 you could easily hit the CPU/Ethernet limit before reaching the full speed of the compressor (which is still faster than all the Micron sensors except the 5MPix one; I do not count their multi-output monsters).


Because the clock frequency won't be raised in the 353, the maximum output rate we can reach is 1.5 times the current speed.

Also, that is not so. First of all, the Spartan-3E (compared to the Spartan-3) has a better implementation of the DDR I/O registers, which frees some global clocks in the design. With that it is possible to use separate clocks for the memory and the compressor; in the 333 they had to be the same.

Also, the new chip is larger, so you may instantiate some modules twice and get twice the speed.

Next, the 353 will have 5x the speed of FPGA-to-system-memory transfers, possible with direct bus control (compared to only the pseudo-DMA of the ETRAX-100LX).

Zsolt Hegyi
August 16th, 2006, 11:24 PM
Thanks for the corrections, Andrey. As I wrote, my numbers were only estimates, made without exactly knowing the parameters of the new 353.

The increases in FPGA processing speed (memory/compressor clock separation, double instantiation, 5x FPGA-to-memory speed) are good news, but if we reach the limit of the processor before reaching the limits of the FPGA, then they are of no use to us.

If we record to disk we don't have an Ethernet limit, so the main question now is: what is the data throughput of the new processor?

Zsolt

Andrey Filippov
August 17th, 2006, 12:00 AM
If we record to disk we don't have an ethernet limit so the main question is now: what is the data throughput of the new processor?
Zsolt

Until we have the new hardware running and have tried "this and that", the only source of information for that is the Axis web site and the documentation posted there.

Wayne Morellini
August 17th, 2006, 11:59 AM
Zsolt

Mihai Cartoaje has just posted, over in comp.compression, that he has added Bayer support to the wavelet library Libima, though he mentions something about it being lousy (lossy?).

http://geocities.com/repstsb/libima.html

It is probably worth going over to the newsgroup and seeing their ideas. Have you had a look yet?

The question is, do we have time to encode to two different formats?

With simpler compression algorithms it would be possible to implement two in an FPGA, but we are talking about related algorithms (which could use the same circuit if you wanted to save space and design time, at the cost of speed). For a really good hardware solution the FPGA should have a pipeline that passes the completed information to the processor, so that processor performance is many times the compressed data rate. Of course, I assume a common memory for both FPGA and processor is used, which complicates things. But the solution there is to use SRAM on the FPGA (wherever it is already pre-manufactured, or designed into the FPGA) and keep the data in the pipe on the FPGA until it is ready to be written to storage. With very simple algorithms this is possible, as the data needed to establish a pixel comes from only a few pixels around it. The data rate can then hopefully still be kept low enough to compensate for memory-timing complications. Counteracting the problems of random access to memory by buffering data on the FPGA, in provided memory or designed memory cells, takes up a lot of space but will definitely smooth and free up memory and processor operation.


I would not mind testing out some algorithm variations myself. My previous thoughts on the Bayer prediction issues are becoming clearer now. It also has to do with establishing the ratio of the colours from analysis of the surrounding pixels, and using that in the predicted value for the difference operation, as well as the previous information I gave.



Thanks

Wayne.

Zsolt Hegyi
August 18th, 2006, 12:21 PM
Have you had a look yet?

No, not yet. First I want to get some results using my current method.


Of course, I assume a common memory for both FPGA and processor is used, which complicates things.

Yes. And we have to think in 20x20-pixel Bayer blocks (5x5 for each channel) if we don't want to rewrite a lot of the original design. Personally, I don't want to, so I'll stick with this input format.


But the solution there is to use SRAM registers on the FPGA (where ever already pre-manufactured or designed in FPGA) and keep the data in the pipe on the FPGA until it is ready to be written to storage.

The 353 will include a different bus-control scheme than the 333 has now; I don't know yet how it'll work. And you cannot buffer data in registers, those aren't addressable. We have block RAM in the FPGA, but that's too small for video buffering.


I would not mind testing out some algorithm variations myself.

Good news. If you have results please post them here. I prefer Java/JMF for this kind of testing; if you need a skeleton which reads frames from a video file and lets you modify them, I can send you one (but I don't do support :-)

Zsolt

Phil Stone
August 20th, 2006, 01:01 AM
http://www.tacx-video.com/images/HD2006/Italy/Rome A few reduced size pictures from the 333 in Rome last week.