High Definition with Elphel model 333 camera - Page 17 at DVinfo.net
Old August 15th, 2006, 01:12 AM   #241
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
I was going to reply to your other message, which I am most appreciative of, later, but I'll condense it into here, as I am in a rush, as you can see by my jumbled thoughts:


Lossless techniques

The techniques I am describing are all truly lossless: the data remains fully reversible, because you still store the differences that reconstruct the original exactly.


Advantages of using inter-frame for storage and converting to intra-frame for editing

Converting to intra-frame lossless for storage will quickly consume a lot of space and cost many hard disks. If done properly, all the techniques with inter-frame compression added might compress marvellously. The original intention with the inter-frame compression idea was to convert to an intra-frame intermediate compression codec only when you were ready for editing, and save back to the inter-frame version for storage. So you get the best of both worlds, but be aware that lossless Bayer with 100 hours of filming is a lot of drives, before you even get to the extra footage generated in the editing process. Not completely nice, but a compromise between two problems.
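
To put a rough number on "a lot of drives" (only a back-of-envelope sketch, assuming 1280x720 Bayer photosites at 25fps and 8 bits each; the real sensor mode may differ):

// Back-of-envelope storage estimate for lossless Bayer recording.
// Assumptions (mine, not from this thread): 1280x720 photosites, 25fps, 8 bits each.
public class StorageEstimate {
    public static void main(String[] args) {
        double bytesPerSecond = 1280 * 720 * 25;             // ~23 MB/s uncompressed
        double gbPerHour = bytesPerSecond * 3600 / 1e9;      // ~83 GB per hour
        System.out.printf("Uncompressed: %.0f GB/hour, %.1f TB for 100 hours%n",
                gbPerHour, gbPerHour * 100 / 1000);
        double losslessRatio = 2.0;                          // assumed typical lossless ratio
        System.out.printf("At %.0f:1 lossless: %.1f TB for 100 hours%n",
                losslessRatio, gbPerHour * 100 / 1000 / losslessRatio);
    }
}

So even at 2:1 lossless you are looking at roughly 4TB for 100 hours, which in today's drive sizes really is a stack of disks.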


First frame uncompressed

You do not need to leave the first frame without compression: you have the original image in the buffer to perform the difference comparison on, so if you compress the first frame losslessly you get the original frame back on decompression. If you are talking lossy, that is a different matter, as there will be a quality difference between the first frame and the others, since the others are virtually lossless differences while the first frame is lossy. An intensive way around this is to compress and decompress every frame, so the quality is consistent, and then record the difference between each subsequent frame; there must be a smarter way of doing this without having to fully compress.

Maybe you could compress a heap of frames as one extended frame, thereby taking advantage of the repetition represented by the sub-frames. This takes extra time, but a long buffer could be used to smooth it out. As you can see there are too many variations ;).
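
Just to make the basic lossless difference idea concrete, here is a minimal sketch (only an illustration, not the camera code): the first frame is stored losslessly as-is, and every later frame is stored as pixel-wise differences from the previous frame, so decoding simply adds the differences back.

// Minimal lossless inter-frame differencing (illustration only).
// Frames are 8-bit Bayer samples held in int[] for simplicity.
public class FrameDiff {
    // Encode: the first frame passes through, later frames become differences.
    static int[] encode(int[] current, int[] previous) {
        if (previous == null) return current.clone();    // first frame, stored losslessly
        int[] diff = new int[current.length];
        for (int i = 0; i < current.length; i++)
            diff[i] = current[i] - previous[i];           // small values on static scenes
        return diff;                                      // feed this to an entropy coder
    }

    // Decode: the exact reverse, so the whole chain stays lossless.
    static int[] decode(int[] coded, int[] previousDecoded) {
        if (previousDecoded == null) return coded.clone();
        int[] frame = new int[coded.length];
        for (int i = 0; i < coded.length; i++)
            frame[i] = coded[i] + previousDecoded[i];
        return frame;
    }
}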


UMPC/Origami

As long as you are not developing a Microsoft Origami/Intel UMPC hardware device, I think free SDK/application development platforms should be available. I am sure there is a Microsoft cross-platform development environment. With Linux, you are in the usual situation: somebody is probably trying to develop a version of Linux for them.

At any price under $799, you are getting too close to the cost of an ITX + monitor + battery system. I expect we will see UMPCs (try VIA web pads too) below $500 eventually.


Disk transfers uncompressed

If the processor is offloaded from the compression task, I think there might be enough processing power, as long as DMA is available and it is not restricted to 16MB/s. It is just a very simple/easy option, and the footage can be compressed in post for storage. Of course I am only talking about 720p25/24 here, not 1080, which I agree would be too much.
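
As a rough sanity check on the rate (my assumptions, not measured figures): 720p Bayer at 25fps needs on the order of 23-29MB/s depending on sample depth, so that 16MB/s restriction really would be the problem.

// Rough uncompressed 720p Bayer data rates (assumptions: 1280x720 photosites, 25fps).
public class DataRate {
    public static void main(String[] args) {
        double pixelsPerSecond = 1280 * 720 * 25;                       // ~23 Mpix/s
        System.out.printf("8-bit samples:  %.1f MB/s%n", pixelsPerSecond / 1e6);
        System.out.printf("10-bit packed:  %.1f MB/s%n", pixelsPerSecond * 10 / 8 / 1e6);
        // Both comfortably exceed a 16MB/s DMA ceiling, hence the caveat above.
    }
}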


Not dual Ethernet/Disk formats

You can transmit via Ethernet for viewing and record the same format to the internal camera disk, and do the intra-frame conversion later. But if the processor is free enough from the FPGA JPEG process, you could record to disk and use the FPGA to produce JPEG for Ethernet. If it passes through the processor then maybe not, unless there is DMA to perform the transfers.


Quality

Even if we can get max-quality JPEG onto a disk, that is an improvement. Does anybody know what compression ratio max quality gives? I think 3:1 or better is what we should be looking at; 2:1 is probably close to the visually lossless mode of Cineform. But JPEG is very sloppy and imprecise, and there are ways to get it to compress a much sharper image. I do not know what quality of JPEG mechanism the Elphel uses; the images look nice, so maybe it already uses a better, more accurate JPEG mechanism, does anybody know? For this difference refer to the threads in those newsgroups I mentioned, and the compression FAQ related to the newsgroup, under the section on lossless JPEG implementations, which explains why normal JPEG is not precise enough for true lossless.


Well the afternoon is gone again, looks like I didn't save any time anyway ;).
Wayne Morellini is offline  
Old August 15th, 2006, 05:42 AM   #242
Regular Crew
 
Join Date: Aug 2006
Location: Hungary
Posts: 59
Quote:
the brightness contains the high frequency data but the colour tends to stay the same over large areas, so each color contains mostly the same frequency information
The human eye is most sensitive to the green channel (that's why there are two of them). Because of this, this channel carries most of the luminance info, so its frequency content will not match that of the other two colors. However, the other two colors might be joined together into one channel as you suggested.

Quote:
compress one channel (this leads to compressing disjointed details, as the intervening pixels are missing)
Yes, but only one pixel is missing, and that pixel is usually an intermediate between the two pixels on the channel. So basically we halve the original frequency of that channel, but the frequencies remain similar. If we introduce this middle pixel and do the grayscale conversion, then what we win on the smaller difference between adjacent pixels we lose on the greater variance in the frequency domain, introducing more peaks and lows into the compressed stream. So the stream won't be as smooth even if the average data rate remains the same. So what's the use; keep it simple.

However, all of the above is obsolete if we use an interframe method, which I'm starting to like more and more.

I already have an algorithm which I implemented in Java, and I've done some tests with it. In lossless mode the intraframe results were about 1.5:1 on high-freq data (a picture of a tree with sunlight coming through its leaves) and 2.8:1 on low-freq data (an asphalt background with some vehicles on it). If we consider that the frequency of the same pixel between frames is usually much lower than the content of the mentioned low-freq image, then we'll be able to achieve ratios even larger than that. And we need only 3:1 (or an even smaller ratio is enough if we write to disk, see below), so there'll be some bandwidth remaining for camera panning/object movement, which generates images that are harder to compress in the time domain. If the camera/object movement is really fast, then the resulting motion blur will smooth the differences between pixels anyway, so we get the same frequency as if the movement were slower.

Quote:
The original intention with the inter-frame compression idea was to convert to an intra-frame intermediate compression codec only when you were ready for editing, and save back to the inter-frame version for storage.
The usability depends on how time-consuming this conversion will be before and after editing. After all, we're talking about terabytes. But for long-term storage (archiving) the solution is definitely interframe.

Quote:
You do not need to leave the first frame without compression: you have the original image in the buffer to perform the difference comparison on, so if you compress the first frame losslessly you get the original frame back on decompression.
That's right, but then we introduce two kinds of compression only to compress one frame of information with the first method and all the other frames with the second one. My current approach is to initialize an internal buffer to zeros, and the first frame will contain the differences from this zero-filled buffer, so one method will be enough. This first frame of course will be larger than the following ones, even larger than when there's camera/object movement, but that doesn't matter as it's only one frame. The reason I wrote "uncompressed" is that the resulting size will be the same as an uncompressed image's.
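
A minimal sketch of that single-path idea (an illustration of what I mean, not my actual implementation): because the reference buffer starts at zero, the very first "difference" frame is just the frame itself, and every frame after it goes through exactly the same code.

// Single-path interframe differencing: the reference starts as all zeros,
// so frame 0 is handled by the same code as every other frame.
// (Illustrative sketch only.)
public class ZeroReferenceDiff {
    private int[] reference;                     // previous frame, initially all zeros

    int[] encodeNext(int[] frame) {
        if (reference == null) reference = new int[frame.length];  // zero-filled buffer
        int[] diff = new int[frame.length];
        for (int i = 0; i < frame.length; i++)
            diff[i] = frame[i] - reference[i];
        reference = frame.clone();               // next frame diffs against this one
        return diff;                             // frame 0 comes out uncompressed-sized
    }
}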

Quote:
If the processor is offloaded from the compression task, I think there might be enough processing power, as long as DMA is available and it is not restricted to 16MB/s.
According to Andrey, the current bottleneck comes from the encoder's speed, which is around 40Mpix/s, two thirds of the original pixel speed of 60Mpix/s. Because the clock frequency won't be raised in the 353, the maximum output rate we can reach is 1.5 times the current speed. So, the current speed is 8.75MB/s; with the new memory architecture and processor we might have, let's say, 10MB/s on Ethernet with Theora. Using a one pixel/one clock cycle compression we could have 15MB/s to disk, but to write totally uncompressed we'd need 26MB/s.
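
Spelling out my arithmetic on those numbers (nothing new from Andrey here), this seems to be where the "we need only 3:1" figure above comes from:

// Compression ratios implied by the figures quoted above
// (~26MB/s uncompressed Bayer, 15MB/s to disk, 8.75MB/s on Ethernet).
public class RatioCheck {
    public static void main(String[] args) {
        double uncompressed = 26.0, disk = 15.0, ethernet = 8.75;   // MB/s
        System.out.printf("Disk path needs about %.1f:1%n", uncompressed / disk);         // ~1.7:1
        System.out.printf("Ethernet path needs about %.1f:1%n", uncompressed / ethernet); // ~3.0:1
    }
}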

Quote:
Even if we can get max-quality JPEG onto a disk, that is an improvement.
See my answer above about speed bottlenecks.

Zsolt
Zsolt Hegyi is offline  
Old August 15th, 2006, 10:12 PM   #243
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Quote:
Originally Posted by Zsolt Hegyi
The human eye is most sensitive to the green channel (that's why there are two of them). Because of this, this channel carries most of the luminance info, so its frequency content will not match that of the other two colors. However, the other two colors might be joined together into one channel as you suggested.
Hi Zsolt, thanks for replying. What I meant (I did not put it correctly in my rush) is that detail is revealed by every pixel in a Bayer image regardless of its colour. I know most of these facts. My ideas are just to use processing or prediction to smooth the frequency data down to the real detail, to get over these problems and get better compression. My first suggestion would be a simple pre-processing stage that smooths the data for better compression using the existing JPEG compressor, just an add-on to the existing JPEG FPGA code. My last suggestions are more complex, but still relatively simple compared to, say, JPEG, and they reduce the difference between the real data and the predicted results, to increase the performance of difference compression.

Quote:
Yes, but only one pixel is missing, and that pixel is usually an intermediate between the two pixels on the channel. So basically we halve the original frequency of that channel, but the frequencies remain similar. If we introduce this middle pixel and do the grayscale conversion, then what we win on the smaller difference between adjacent pixels we lose on the greater variance in the frequency domain, introducing more peaks and lows into the compressed stream. So the stream won't be as smooth even if the average data rate remains the same. So what's the use; keep it simple.
Yes, that's exactly what I meant (well, one form of it), except I am trying to think of ways to smooth out the frequency domain and leave only real detail to compress. For even better difference prediction, in the comparison you can adjust either the green interpolation, or the red or blue pixel value, to better match the ratio of the other one (even further adjusting the value by using blue/green interpolation prediction to bring the predicted value and reality closer together, then record the now much smaller difference).

I think we might be speaking two different languages here: you seem to be talking about spatial-to-frequency transforms like those used in JPEG and wavelets (and most codecs), whereas I am talking about whole values and simple integer-based compression and difference schemes (like those used in the fax standards), with some mathematical prediction to reduce the difference. These integer-based difference schemes with some prediction are much simpler in software and FPGA than the normal schemes, and I think need less processing.
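
To make the kind of thing I mean concrete, here is a rough sketch of predictive integer differencing on one Bayer colour plane (only an illustration, with a deliberately naive left/above predictor; the green-ratio steering I described above would replace that predictor):

// Predictive integer differencing on one Bayer colour plane (row-major array).
// Sketch of the idea only; the predictor here is deliberately simple.
public class PredictiveDiff {
    static int[] residuals(int[] plane, int width, int height) {
        int[] res = new int[plane.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int left = (x > 0) ? plane[y * width + (x - 1)] : 0;
                int up   = (y > 0) ? plane[(y - 1) * width + x] : 0;
                int predicted = (x > 0 && y > 0) ? (left + up) / 2   // average of neighbours
                              : (x > 0) ? left : up;                 // edges use one neighbour
                res[y * width + x] = plane[y * width + x] - predicted;
            }
        }
        return res;   // residuals cluster near zero, so they entropy-code well
    }
}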

What is best is probably to test all methods, decide which works best in which circumstances, and include the best; or, for better compression, if they are small enough on the FPGA, swap between them as context dictates (more advanced).

Quote:
I already have an algorithm which I implemented in Java, and I've done some tests with it. In lossless mode the intraframe results were about 1.5:1 on high-freq data (a picture of a tree with sunlight coming through its leaves) and 2.8:1 on low-freq data (an asphalt background with some vehicles on it). If we consider that the frequency of the same pixel between frames is usually much lower than the content of the mentioned low-freq image, then we'll be able to achieve ratios even larger than that. And we need only 3:1 (or an even smaller ratio is enough if we write to disk, see below), so there'll be some bandwidth remaining for camera panning/object movement, which generates images that are harder to compress in the time domain. If the camera/object movement is really fast, then the resulting motion blur will smooth the differences between pixels anyway, so we get the same frequency as if the movement were slower.
Good. If you look through Obin's thread and my technical thread, you might find additional ideas. I know that Cineform is doing 6:1 average at the moment; after looking at these ideas, I imagine a 3:1-4:1 average might be a reality for us in lossless. I think the ideas presented here, and the ideas presented on that comp.compression newsgroup I told you about, would help. But by all means pick the simplest ideas from them all first and try them out. Most of the difference suggestion stuff is simple enough, and you can pick up the code; the pre-processing and predictive stuff is just a couple of registers for the different pixel values, a simple calculation, and an output result, much less complex than most compression stages (thousands of gates).


Quote:
The usability depends on how time-consuming this conversion will be before and after editing. After all, we're talking about terabytes. But for long-term storage (archiving) the solution is definitely interframe.
I think it will not be too much hassle, and as you do one scene's files at a time, or groups of takes, it is a lot quicker.

Quote:
That's right, but then we introduce two kinds of compression only to compress one frame of information with the first method and all the other frames with the second one. My current approach is to initialize an internal buffer to zeros, and the first frame will contain the differences from this zero-filled buffer, so one method will be enough. This first frame of course will be larger than the following ones, even larger than when there's camera/object movement, but that doesn't matter as it's only one frame. The reason I wrote "uncompressed" is that the resulting size will be the same as an uncompressed image's.
I see what you mean: not two compression schemes like MPEG-2 uses. Still, with two simple compression stages we can get results. Consider this: fax-like horizontal and vertical difference compression on one channel, then differences between the channels, and, using your zero-difference first frame, differences between frames (comparing to the previous uncompressed frame). You could use similar circuits, or even the same one if speed permits, for all stages. Predictive/data normalisation elements would be straightforward for this as well.
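
The spatial and temporal stages are just the differencing already discussed; the extra piece is the between-channel stage, roughly like this (only a sketch; it assumes the green plane has been interpolated to the red/blue photosite positions so the two planes line up):

// Cross-channel differencing stage: code one colour plane against a
// reference plane (e.g. red against green interpolated at the red sites).
// Sketch only; lossless and reversible by adding the reference back.
public class ChannelDiff {
    static int[] encode(int[] channel, int[] referenceChannel) {
        int[] out = new int[channel.length];
        for (int i = 0; i < channel.length; i++)
            out[i] = channel[i] - referenceChannel[i];   // colour planes track each other
        return out;
    }
}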


Thanks for the accurate specs of the current system's performance. Pity that it can't do uncompressed, but it still puts reasonable compression within reach.

I should say, Bayer compression is definitely the way to go: you instantly get a 3:1 improvement over 4:4:4, which is very hard to match with 4:4:4 compression. Did you know the Ogg Theora people were developing a lossless wavelet compressor that they put on hold to develop the Theora codec? That should be brought back up and running.

Keep it up Zsolt, I am glad that you have ideas and are examining others' too. Still, it would be interesting to get Juan's input, as he was doing difference-based post compression for storage.

Once again, sorry about the length; I normally rewrite more to condense, but I did not get away yesterday and have to rush again today.
Wayne Morellini is offline  
Old August 16th, 2006, 12:38 AM   #244
Regular Crew
 
Join Date: Aug 2006
Location: Hungary
Posts: 59
Quote:
I am talking about whole values and simple integer-based compression and difference schemes
My current algorithm is a hybrid somewhere between the two kinds you're talking about: it is integer difference-based compression but it is very sensitive to frequency content. That's why I'd like to use it now for interframe compression because of the usually very low frequencies in that domain.

Quote:
Thanks for the accurate specs of the current system's performance.
The only thing accurate was the current speed. The other numbers I only predicted from Andrey's comments in this thread because nobody seemed to summarize that part yet.

I did some thinking on using wireless networks. The new 540Mbps devices haven't come out yet, or if they have then they're probably very expensive. So we could only use normal 54Mbps wifi, which is 6.75MB/s, way too thin. So forget wireless: record to disk and transfer a reduced-resolution lossy stream to the display of a handheld attached to the camera directly, through a short Ethernet cable.

The mentioned 15MB/s disk write is the maximum the new processor will handle. The current Ethernet data rate is 8.75MB/s. If we halve the horizontal and vertical resolution and reduce the quality, we could get the preview stream down to 1-2MB/s, so the disk transfer could still use 13-14MB/s. The question is, do we have the time to encode to two different formats?

The problem with this approach is that setting the lens focus won't be easy if based on a poor quality image.

Zsolt

Last edited by Zsolt Hegyi; August 17th, 2006 at 12:32 AM.
Zsolt Hegyi is offline  
Old August 16th, 2006, 12:07 PM   #245
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Zsolt Hegyi
According to Andrey, the current bottleneck comes from the encoder's speed, which is around 40Mpix/s, two thirds of the original pixel speed of 60Mpix/s.
This is not exactly true. In the 333 you could easily hit the CPU/Ethernet limit before reaching the full speed of the compressor (which is still faster than all the Micron sensors except the 5MPix - I do not count their multi-output monsters).


Quote:
Originally Posted by Zsolt Hegyi
Because the clock frequency won't be raised in the 353, the maximum output rate we can reach is 1.5 times the current speed.
Also - that is not so. First of all, the Spartan-3E (compared to the Spartan-3) has a better implementation of DDR I/O registers, which frees some global clocks in the design. With that it is possible to use separate clocks for memory and compressor - in the 333 they had to be the same.

Also - the new chip is larger, so you may instantiate some modules twice and have twice the speed.

Next - the 353 will have 5x the speed of FPGA->system memory transfers, possible with direct bus control (compared to only the pseudo-DMA of the ETRAX-100LX).
Andrey Filippov is offline  
Old August 16th, 2006, 11:24 PM   #246
Regular Crew
 
Join Date: Aug 2006
Location: Hungary
Posts: 59
Thanks for the corrections Andrey. As I wrote, my numbers were only predictions, without exactly knowing the parameters of the new 353.

The increases in FPGA processing speed (memory/compressor clock separation, double instantiation, 5x FPGA-to-memory speed) are good news, but if we reach the limit of the processor before reaching the limits of the FPGA then they're of no use to us.

If we record to disk we don't have an Ethernet limit, so the main question now is: what is the data throughput of the new processor?

Zsolt
Zsolt Hegyi is offline  
Old August 17th, 2006, 12:00 AM   #247
Major Player
 
Join Date: Apr 2006
Location: Magna, Utah
Posts: 215
Quote:
Originally Posted by Zsolt Hegyi
If we record to disk we don't have an Ethernet limit, so the main question now is: what is the data throughput of the new processor?
Zsolt
Until we have the new hardware running and have tried "this and that", the only source of information for that is the Axis web site, with the documentation posted there.
Andrey Filippov is offline  
Old August 17th, 2006, 11:59 AM   #248
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Zsolt

Mihai Cartoaje has just posted, over in comp.compression, that he has added Bayer support to the wavelet library Libima. Though he mentions something about lousy (lossy?).

http://geocities.com/repstsb/libima.html

Probably worth going over to the newsgroup, and seeing their ideas. Have you had a look yet?

Quote:
Originally Posted by Zsolt Hegyi
The question is, do we have the time to encode to two different formats?
With simpler compression algorithms it would be possible to implement two in an FPGA, but we are talking about related algorithms (which could use the same circuit if you wanted to save space and design time, but slower). For a really good hardware solution the FPGA should have a pipeline that passes the completed information to the processor, so that processor performance is tens of times more than the compressed data rate. Of course, I assume a common memory for both FPGA and processor is used, which complicates things. But the solution there is to use SRAM registers on the FPGA (whether already pre-manufactured or designed into the FPGA) and keep the data in the pipe on the FPGA until it is ready to be written to storage. Using very simple algorithms this is possible, as the data needed to establish a pixel is only the few pixels around it. Now the data rate can still be kept, hopefully, low enough to compensate for memory timing complications. Counteracting the problems of random access to memory by buffering data on the FPGA, in provided memory or designed memory cells, takes up a lot of space but will definitely smooth and free up memory and processor operation.
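
To show what I mean about only needing the few pixels around the current one, here is a rough software model of keeping just a couple of rows buffered on-chip while pixels stream through (a sketch of the idea only; real FPGA code would use block RAM or registers rather than a Java array):

// Software model of a small line buffer: only the previous row plus the
// partially-filled current row are kept, which is enough to hand a streaming
// compressor its causal neighbours (left, above) without touching external memory.
public class LineBuffer {
    private final int[][] rows;          // two rows: previous and current

    LineBuffer(int width) {
        this.rows = new int[2][width];
    }

    // Called once per incoming pixel, in raster order.
    void push(int x, int y, int pixel, PixelSink compressor) {
        int above = rows[(y + 1) % 2][x];                 // pixel one row up (0 on first row)
        int left  = (x > 0) ? rows[y % 2][x - 1] : 0;     // pixel to the left
        compressor.accept(pixel, left, above);            // neighbourhood for prediction
        rows[y % 2][x] = pixel;                           // keep for the next row
    }

    interface PixelSink { void accept(int pixel, int left, int above); }
}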


I would not mind testing out some algorithm variations myself. My previous thoughts on the Bayer prediction issues are becoming clearer now. It also has to do with establishing the ratio of the colours from analysis of the surrounding pixels, and using that in the predicted value for the difference operation, as well as the previous information I gave.



Thanks

Wayne.
Wayne Morellini is offline  
Old August 18th, 2006, 12:21 PM   #249
Regular Crew
 
Join Date: Aug 2006
Location: Hungary
Posts: 59
Quote:
Have you had a look yet?
No, not yet. First I want to get some results using my current method.

Quote:
Of course, I assume a common memory for both FPGA and processor is used, which complicates things.
Yes. And we have to think in 20x20 pixel-sized Bayer blocks (5x5 for each channel) if we don't want to rewrite a lot of the original design. Personally, I don't want to, so I'll stick with this input format.

Quote:
But the solution there is to use SRAM registers on the FPGA (whether already pre-manufactured or designed into the FPGA) and keep the data in the pipe on the FPGA until it is ready to be written to storage.
The 353 will include a different bus control scheme than the 333 has now. I don't know yet how it'll work. And you cannot buffer data in registers; those aren't addressable. We have block RAM in the FPGA but that's too small for video buffering.

Quote:
I would not mind testing out some algorithm variations myself.
Good news. If you have results please post them here. I prefer Java/JMF for this kind of testing; if you need a skeleton which reads frames from a video file and lets you modify them, I can send you one (but I don't do support :-)

Zsolt
Zsolt Hegyi is offline  
Old August 20th, 2006, 01:01 AM   #250
Regular Crew
 
Join Date: Apr 2004
Location: UK
Posts: 74
A few reduced-size pictures from the 333 in Rome last week:
http://www.tacx-video.com/images/HD2006/Italy/Rome
Phil Stone is offline  
Old August 21st, 2006, 05:26 AM   #251
New Boot
 
Join Date: Jun 2006
Location: Germany, near munich
Posts: 14
Hi,
one question, Phil: please tell me which lens you used in Rome.
I am looking for a wide-angle lens like this.

Robert
Robert Schiebel is offline  
Old August 21st, 2006, 05:54 AM   #252
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Thanks Zsolt.

It might take a lot of time before I am ready; some unexpected things have come up. I'll probably email you when I am freer.

The register suggestion (also implying on-chip RAM used as registers) was only on the basis of doing a pixel at a time, needing only a few memory words/registers for the surrounding pixels and intermediate results, not 20x20 blocks of pixels.
Wayne Morellini is offline  
Old August 21st, 2006, 08:06 AM   #253
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
Noise and compression artifact removal.

If anybody is interested: Michael Schoeberl, who has experience with noise removal in medical imaging, has put me onto some very good noise removal software and plugins over at the comp.compression thread, which also work on compression artifacts. There are both still and video versions.

http://www.neatimage.com/
http://www.neatvideo.com/index.html?snim

It would lead to cleaner, more compressible files in post. I am aiming to look for routines suitable for use on camera as well.
Wayne Morellini is offline  
Old August 21st, 2006, 10:08 AM   #254
Inner Circle
 
Join Date: May 2003
Location: Australia
Posts: 2,762
I have looked over the examples, and the results are pretty amazing. I have compared the before and after file sizes on their site, and there are mostly reductions down to less than half the original size, usually with less reduction for the stills. I must admit this does not entirely make sense; I think the re-compressor they are using is not doing such a good job, otherwise I would expect more reduction on average than this. There is some minor loss of detail at times, and gain in some other places, as it tries to predict what is what. But still very nice.

http://www.neatimage.com/examples.html
http://www.neatvideo.com/examples.html
http://www.neatimage.com/reviews.html

Reported to be very good too (see conclusions)
http://www.michaelalmond.com/Articles/noise.htm
Wayne Morellini is offline  
Old August 21st, 2006, 02:29 PM   #255
Regular Crew
 
Join Date: Aug 2006
Location: Hungary
Posts: 59
Quote:
Originally Posted by Wayne Morellini
I have looked over the examples, and the results are pretty amazing.
Yes, they're not bad, but consider that these are interpolated RGB images, not Bayer. Interpolation introduces noise and JPEG compression introduces more. This noise is algorithmic and can be removed later by software if correctly recognized.

You're probably aware that the CMOS chips we intend to use contain analog noise removal circuits, and they're really good (removing the noise digitally is not nearly as efficient). Well, unless you push up the analog gain in the chip - that seems to be the case with several of the images posted at the above links. And the other set of images is just poorly JPEG-compressed.

By using correctly exposed sensors with normal analog gain levels, we should have no significant noise in the raw Bayer images.

Last edited by Zsolt Hegyi; August 22nd, 2006 at 09:34 AM.
Zsolt Hegyi is offline  