uncompressed deck


Keith Wakeham
January 10th, 2006, 07:51 PM
For those who don't know, I was involved with the electronics-level design of a camera in the Alternative Imaging part of this forum.

Well, we backburnered that and went full speed on a deck.

http://dvinfo.net/conf/showthread.php?t=57992

Anyway, I figure Alternative Imaging needed the announcement since the HD camera projects seemed to die off, but this is where it will probably be most viewed.

So the link is in my signature, or follow the one above to the thread.

Comments?

Wayne Morellini
January 11th, 2006, 06:59 AM
Thanks Keith, good to see you going public with this.

Check his site ;)

Mike Marriage
January 11th, 2006, 07:16 AM
Looks great!

Do you have a rough price yet?

Keith Wakeham
January 11th, 2006, 07:23 AM
We're hoping for the 2.5K USD price range with an 800GB pack, which should give about 84 minutes at 1920 x 1080 @ 30p or 60i, or 105 minutes @ 24p.
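
As a rough sanity check of those numbers (my own back-of-the-envelope figures, assuming 10-bit 4:2:2 active video only, decimal gigabytes, and no audio, blanking, or filesystem overhead), the data rate works out close to the quoted runtimes:

```python
# Rough sanity check of the quoted capacity numbers, assuming 10-bit 4:2:2
# active video only (no blanking, audio, or filesystem overhead) and
# decimal gigabytes. These assumptions are mine, not the deck's spec.

PIXELS = 1920 * 1080
BITS_PER_PIXEL = 2 * 10                      # one luma + one (shared) chroma sample, 10 bits each
frame_bytes = PIXELS * BITS_PER_PIXEL / 8    # ~5.18 MB per frame
pack_bytes = 800e9                           # 800 GB pack

for fps in (30, 24):
    minutes = pack_bytes / (frame_bytes * fps) / 60
    print(f"{fps}p: ~{minutes:.0f} minutes")  # ~86 min @ 30p, ~107 min @ 24p
```

The couple of minutes' difference from the quoted 84/105 is presumably headroom for overhead.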

Petr Marusek
January 11th, 2006, 10:30 AM
Keith and Wayne, Thanks for the information. I am definitely interested.

Keith, I'm sure it records 10 bits, but the H1's HD-SDI has 2 bits set to zero. Maybe you could add an 8-bit mode with increased capacity.
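
A minimal sketch of the repacking being suggested here (purely illustrative; the sample handling and function names are my own assumptions, not the deck's actual data path):

```python
# Minimal sketch of an 8-bit repacking mode: if the two least significant
# bits of each 10-bit HD-SDI sample are always zero (as claimed for the H1),
# the payload can be stored as 8-bit bytes for a 20% capacity gain.

def pack_8bit(samples_10bit):
    """Drop the two (zero) LSBs of each 10-bit sample, returning bytes."""
    return bytes((s >> 2) & 0xFF for s in samples_10bit)

def unpack_10bit(packed):
    """Restore 10-bit samples for HD-SDI playback by re-appending zero LSBs."""
    return [b << 2 for b in packed]
```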

Wayne Morellini
January 11th, 2006, 11:18 AM
We're hoping for the 2.5K USD price range with an 800GB pack, which should give about 84 minutes at 1920 x 1080 @ 30p or 60i, or 105 minutes @ 24p.

Does the machine have 4:2:2 and 4:2:0 recording modes? It looks like it does pixel packing.

Keith Wakeham
January 11th, 2006, 11:29 AM
HD-SDI for SMPTE 274 is 10 bit. Theoretically, yes, we could have it drop the unnecessary bits, but are we going to? I don't think so. The main reason: say someone is using something like an HDL-40 or Varicam and accidentally leaves it in 8-bit mode from a previous shoot, then shoots something dark somewhere else with a 10-bit capable camera. If the device recognizes the feed as 8 bit at first but then sees 10 bits, it is going to get very confused.

Now, with the computer interface it can scale things down to 8 bit, but the HD-SDI output on playback still needs to be spitting out 10 bit, so if it doesn't, things again get a little crazy.

When we get to the stage where some people can test it out, then maybe we'll look at it.

As for 4:2:2 and 4:2:0: SMPTE 292 single link defines only 4:2:2, so that's what will be captured. I think 4:2:0 is a bad idea for interlaced material anyway, which is what some of this gear would be outputting.

I do have hopes that I can sync two of these together for 12-bit-per-channel 4:4:4 over dual link, uncompressed. Good if you happen to own an Origin, D20, Genesis, F950, or Viper (maybe Red?). I doubt many people here own them, though.

K. Forman
April 1st, 2006, 09:30 AM
Keith- Any chance of hot swappable HDDs?

Keith Wakeham
April 3rd, 2006, 05:49 PM
The drives will be swappable, just not hot swappable. You will need to shut it down, swap the drives, then start it up again. Considering it can "boot up" faster than the time it takes for the hard drives to spin up, I don't see it as an issue.

Sorry I kind of dropped off the planet. A lot of work and delays have changed a lot about the project, and I'm really bad at keeping the website anywhere near up to date.

The big thing that has come out of the problems we've had to face is that we have cut the expected drive count in half (now 5 x 7200 rpm laptop drives), which also cuts our power consumption in half. This has led to a fair amount of re-coding of the project, and was also the result of another issue.

This re-coding has pushed everything back, but it's still going to happen.

Steven Thomas
April 3rd, 2006, 09:53 PM
Excellent Keith!
Keep us posted.
Sounds good to me.

Glenn Chan
April 3rd, 2006, 10:07 PM
We're hoping for the 2.5K USD price range with an 800GB pack, which should give about 84 minutes at 1920 x 1080 @ 30p or 60i, or 105 minutes @ 24p.
That's barely more than the cost of some of the storage solutions for the HVX200...

Keith Wakeham
April 4th, 2006, 08:19 AM
Using fewer drives means recording time gets cut down, unfortunately, but there's less chance of drive failure, and since we haven't yet programmed our planned redundancy into the prototype, we really want fewer drives to reduce the risk of failure.

Wayne Morellini
April 5th, 2006, 03:07 AM
Prices depend on what people are willing/able to pay (supply and demand) as well as on cost. Unfortunately, it is up to the intelligence of the individual to purchase the most cost-effective solution in a capitalist/market-driven economy.

Wayne Morellini
April 5th, 2006, 04:05 AM
Using fewer drives means recording time gets cut down, unfortunately, but there's less chance of drive failure, and since we haven't yet programmed our planned redundancy into the prototype, we really want fewer drives to reduce the risk of failure.

Unfortunately, recording HD-SDI tends to consume a lot more space than machine-vision Bayer, but I have an idea for a very simple compression system that would fit very simply into your FPGA.

You know that you can store the difference between frames, and obviously you have dropped that because of the complexity. But chroma follows the luminance around, so the blue and red values are very predictable, and therefore you can store their differences instead. I have written about this sort of idea a number of times before. You should get close to Bayer sizes for 4:4:4, but you have to do a couple more things first:

This is written for RGB logic, so it has to be adjusted for luminance plus two chroma channels (which would make the data values flow differently).

1. Use the first chroma primary value in a sequence (perhaps restarting at points in the image where the chroma colour turns completely on or off) as the base value (proportion). Now you can record each subsequent chroma pixel value as the difference from the previous pixel's chroma value. This might also be applied to luminance.

2. An alternative, which should reduce the magnitude of the differences (fewer bits) that have to be recorded even more, is to use the change in luminance as an additional predictor. The new chroma value is predicted by applying the percentage change in luminance (new over previous) to the previous chroma value, and only the difference from that prediction is stored.

3. In some circumstances you might do better with the chroma on its own (testing would reveal whether this is a real concern). Then you might have a switch for each pixel, or a switch plus a run count of how many following pixels use it. But as I said, there might be no significant advantage to having a switch.

4. The next step to get more compression is to record the difference between lines (simple and efficient; the fax standards regularly employ this). With the above scheme in place this gains a bit less than it would on its own, and you should definitely find situations where pixel differences win and others where line differences win, so a switch/run-length mechanism would likely help.

5. Run-length encoding and other compression techniques might be applied, perhaps best on a bit-plane basis (a bit plane is the monochrome image formed by taking the same bit position from every pixel; here you would take bit planes of the difference data instead). Because all the data is organised into minimal differences, with likely long strings of 0s and 1s (especially with method 2 applied) everywhere except the luminance data stream, quite high compression can be achieved. Faxes could do 50:1 with the simpler, B&W, text-aware version of this, but you shouldn't expect anywhere near that; 3:1 would be sufficient, and if you're lucky 5:1 might be possible.

6. If you did what you originally intended, recording the difference between frames as well, you could easily end up with compression over 10:1. Which means one drive, or a few more for SHD.

Hopefully you will get up to 3:1 using methods 1 and 2 (and 3 if needed), which are simple calculations that just require running registers for the proportions and the actual values; see the sketch below. Method 4 is simple but requires more memory and logic; method 5 is more complex again, requiring memory; method 6 is simple, but needs a frame memory.
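
To make methods 1 and 2 concrete, here is a minimal software sketch of the arithmetic (Python, purely for illustration; a real version would be a small piece of streaming FPGA logic, and pairing one chroma value per luma sample is a simplification of 4:2:2):

```python
# Unoptimised sketch of methods 1 and 2: code each chroma sample as the
# residual from a prediction made by scaling the previous chroma value by
# the luminance ratio. Lossless round-trip, only two running registers of
# state per line. Sample layout is an illustrative assumption.

def encode_line(lumas, chromas):
    """Encode one line of (luma, chroma) samples as prediction residuals."""
    residuals = [chromas[0]]                  # method 1: base value stored whole
    prev_luma, prev_chroma = lumas[0], chromas[0]
    for luma, chroma in zip(lumas[1:], chromas[1:]):
        if prev_luma:                         # method 2: scale chroma by luma ratio
            predicted = prev_chroma * luma // prev_luma
        else:
            predicted = prev_chroma           # avoid divide-by-zero on black
        residuals.append(chroma - predicted)  # small number in smooth areas
        prev_luma, prev_chroma = luma, chroma
    return residuals

def decode_line(lumas, residuals):
    """Invert encode_line given the same luma samples."""
    chromas = [residuals[0]]
    prev_luma, prev_chroma = lumas[0], residuals[0]
    for luma, residual in zip(lumas[1:], residuals[1:]):
        predicted = prev_chroma * luma // prev_luma if prev_luma else prev_chroma
        chroma = predicted + residual
        chromas.append(chroma)
        prev_luma, prev_chroma = luma, chroma
    return chromas
```

The residuals coming out of this would then feed whatever entropy stage you can afford, such as the run-length or bit-plane coding of method 5.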


You are welcome to contact me about using this.

Keith Wakeham
April 5th, 2006, 11:15 AM
Wayne, as always some very interesting ideas. I wish I had a little more time to digest them all. Our limiting factor is programming and RAM. We don't have any dedicated RAM whatsoever, so buffering a single frame is impossible without using some of our precious I/O pins for memory, which pushes us up to another chip, with more routing and another circuit board design; that kind of change just caused a two-month delay already. The other issue is that most of these encoding techniques would take months to implement without a team of people.

Even with a prewritten core, if that core is not capable of doing exactly what I want, quickly, I can't use it. For example, opencores.net had an FPGA-to-ATA interface. The problem was that one side might give me the ATA outputs I want, but the interface to the core was a Wishbone interface, which was just as complicated as ATA itself. I've had to go over hundreds of pages of documents and build a PIO interface, and have now had to completely re-write it for uDMA. I started coding a year ago, and at the time was relatively inexperienced with FPGAs compared to a pro. Although it's taken less than a month to get uDMA working, I can't handle doing comparative encoding or lossless compression in the foreseeable future.

So although I understand that it would be beneficial to use compression, I'm thinking: shoot HQ first, then dump to a compressed format later when you want to back up your shots and reuse the hard drive pack.

Wayne Morellini
April 5th, 2006, 12:10 PM
Wayne, as always some very interesting ideas. I wish I had a little more time to digest them all. Our limiting factor is programming and RAM. We don't have any dedicated RAM whatsoever

That is why I listed the simple ones first. The first three require only a few registers/locations of scratch-pad RAM and no other buffering, and are designed as simple calculations that stream directly to disk. They follow very simple rules that can be put in hardware, unlike the more complex buffered techniques (item 4 onwards, though run-length encoding only requires a value and a counter for the number of times it is repeated).

I don't know exactly what it takes to put this into hardware, but it is about as simple as a filter and might require fewer than 50 lines of code. Apart from the register memory (in case you have to encode it as static memory registers), I think the simple functions might take a few hundred to a thousand gates (if the function design is changed to suit FPGA economics).

So although I understand that it would be beneficial to use compression, I'm thinking: shoot HQ first, then dump to a compressed format later when you want to back up your shots and reuse the hard drive pack.
Why not forget it for now, get the standard HD version on the market, and worry about compression afterwards, as a downloadable FPGA upgrade for customers?

Thanks

Wayne.

Ashley Cooper
April 5th, 2006, 02:56 PM
Changing gears a little bit, would this unit be able to deal with the variable frame rates of a Varicam or HVX200? Also, how would it handle HD100 material? I could be mistaken, but I thought I read that that camera's HD analog output was 60p only.

Keith Wakeham
April 5th, 2006, 05:37 PM
That's an interesting question, and one that unfortunately doesn't have a straightforward answer, so sorry in advance.

Short answer: yes, but not efficiently.

Long answer below
The Varicam (and, while I'm not 100% sure, the HVX200's analog outputs should be similar) uses a very simple method of variable frame rate. The way the Varicam outputs over HD-SDI is to use metadata encoded into the horizontal and vertical blanking periods that marks a frame as an active video frame (a "flagged" frame). What this means is that the output still looks like 60p, but with duplicate frames every now and again. This is done to ensure compatibility with the SMPTE 296, 292, and 274 specifications; without that guaranteed compatibility, some displays wouldn't be able to deal with the non-standard frame rates. The HVX200 will be similar, just without any flags because it's analog. SMPTE only defines a handful of frame rates for each format, so those are the only truly acceptable formats: 24, 25, 30, 50, 60 and their 1.001-divisor derivatives. That means whatever a camera outputs has to be in one of these formats, no matter what. So what my SMPTE decoder sees is only the active video; it doesn't understand metadata yet (I still need to deal with timecode, audio, and so on later), so it will capture the 60p stream. It's very wasteful when 24p is encoded as 60p, but this can all be changed with an update at a later time.
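
To illustrate what that capture would contain, here is a rough sketch (my own illustration, not the HDU-1's actual processing) of recovering the unique frames from a captured 60p stream that carries a 24p cadence as duplicated frames:

```python
# Hypothetical post-capture step: recover 24 unique frames from a 60p
# capture in which each source frame is repeated 2 or 3 times. The frame
# buffers and the byte-for-byte compare are assumptions for illustration;
# a real tool would trust the blanking-interval flags where they exist,
# since a pixel compare is fragile on noisy analog sources.

def drop_duplicates(frames):
    """Yield only frames that differ from the previous captured frame."""
    previous = None
    for frame in frames:                 # frames: iterable of bytes-like buffers
        if previous is None or frame != previous:
            yield frame                  # unique (active) frame -> 24p stream
        previous = frame                 # repeated frames are skipped
```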

Also, what I'm working on is still HD-SDI only, so converters must be used to change the analog output from the HVX and HD100 to HD-SDI before my HDU-1 will be able to capture it.