I have been talking with Steve from Silicon Imaging. They don't intend to make something smaller than the SI-2k mini in terms of resolution for filmmaking purposes.
All he says is that they would supply PCB cameras for the Micron or Cypress sensors if we gather a group together for another open source design. I wanted to ask another question: what exactly is wrong with the Ibis? I need to see something shot with the Marlin. Apart from that... 2K looks so good (even at 20 or 22fps) with the Micron board! 1600x666 doesn't look too bad either. But do you think 720p will be enough?
What I have been seriously considering is interfacing a Micron MT9T031P12STC with a Gumstix Verdex XL6P. Add an IDE hard drive and a small LCD and we should have a camera that can do 720P and possibly even 1600x680.
The biggest obstacle I can see is writing the Linux True IDE driver with DMA and making it fast enough to write 24MB/s to the hard drive. Assuming this can be done, my only other concern would be the rolling shutter; I have no idea how bad it is on this chip.
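For reference, the raw data rates involved are easy to sketch with a few lines of arithmetic. These are rough illustrative figures (assuming one byte per photosite for 8-bit data and two bytes for unpacked 10/12-bit samples), not measured numbers for the MT9T031:

```python
# Rough raw-Bayer data rates. One byte per photosite for 8-bit data,
# two bytes when 10/12-bit samples are stored unpacked -- both are
# illustrative assumptions, not MT9T031 specifics.

def bayer_rate_mb_s(width, height, fps, bytes_per_pixel):
    """Sustained write rate in MB/s for a raw Bayer stream."""
    return width * height * fps * bytes_per_pixel / 1e6

print(f"720p24, 8-bit:       {bayer_rate_mb_s(1280, 720, 24, 1):.1f} MB/s")
print(f"1600x680x24, 16-bit: {bayer_rate_mb_s(1600, 680, 24, 2):.1f} MB/s")
```

So the quoted 24MB/s budget roughly matches the 8-bit 720p case; anything above that needs bit-packing or compression before the drive.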
Hi Jose,
did you read about that hydra mod for the Panasonic HVX200? http://www.reel-stream.com/ http://www.freshdv.com/2007/03/reel-...he-hvx200.html They hook up some electronics to the CCD inside the HVX200, and the whole thing is connected by Gigabit Ethernet (over CAT 5e or 6 cable) to any Intel / Mac computer running their SculptorHD 2.0 software. The image is captured as raw data directly from the CCDs. Native frame size: 2100x1090. Record modes: full RGB (4:4:4) only. Best regards, Daniel
Hi Daniel,
It sounds quite good in fact, but I suppose it'll cost even more than an unaltered HVX. We're trying to keep things cheap here. Let's say $2000 or a little more for the whole package, at least for me.

Hi Jamie, and welcome to the thread :) A couple of things about your option: first, the T series from Micron offers the same as the P series (the one I'm testing) but with less resolution and also less speed. The MT9P031 offers 1920x800 at about 22fps using USB (the worst possible interface for our purposes). Your option would get those 22-24fps at 1600x666 (1:2.40), and larger resolutions would mean fewer fps. That was in fact the sensor used in the Elphel 333; the new 353 uses the P series. Besides that, they both cost the same: €250 for a camera head and €900 for the whole demo board.

I really think CMOS is the way to go. They offer a much more filmic image. Global shutter would be a good option too, but rolling shutter at 96MHz is almost unnoticeable, so we don't have problems with that. I haven't seen large full HD CMOS sensors for cheap yet. So the main options would be:

- Large 720p sensor, between 2/3" and 1". Bad point: we don't have 2k.
- Small 2k sensor + DOF adaptor, about 1/2.5". Bad point: pixel sizes are probably too small to collect enough light with the adaptor attached.

Maybe it's possible to find a 2/3" or 1" 1600x1200 sensor cheap enough to use in this project. There has to be something we're missing.

The interface: I think GigE is the perfect interface. It doesn't require a framegrabber like CameraLink, and GigE boards are quite cheap if your computer doesn't have one. USB can't do it: when I'm capturing with the demo board I can see the sensor runs at very high speed, while the board sends less than half of the images through USB. Firewire 800? Maybe... Computer-independent solution? Not yet.

We would need someone to program an FPGA with IDE interfaces, flash card slots and LCD outputs, apart from the fact that we'd also need a piece of software to control the sensor. Maybe in time... For now we need the camera and a computer. Unless of course we can plug a camera head into one of those Gumstix boards (this is the first time I've read about them) and then add a HDD and a LCD. We would still need the software to control the sensor. I need help on this.
I really wish that I had the money to pursue this little project of mine, but I have some family issues at the moment, so it may be a while before I can try. That said, Avnet sells the raw MT9T031 sensor for $39.00 US and the Gumstix is only $160.00 US. EarthLCD sells lots of LCDs that would work for under $100.00. From there we would just need a HD, case, power supply and optics, and we should have a working camera.
One advantage of having a degree in Electronics Engineering is that I am nearly positive I can interface the sensor to the Gumstix. But like I said before, writing to the HD at full speed is going to be a real issue. Most likely the raw bayer data will have to be streamed to the hard drive, then demosaiced and converted to a usable format later. Sigh, maybe in a few months I will be able to start buying parts and see what happens :-(
Wouldn't it be easier to program a FPGA to do all that? And believe me. Buy the P031 sensor. It costs the same and it gives 2k at more than 24fps. At least the sensor does it. The problem is finding a good interface to capture at that speed.
Well... the best part of this is that we've got an electronics engineer here! Why not start with something like the SI-2k Mini? A camera head you can plug into the computer and start capturing... We just need the sensor, a board with a GigE interface... And the difficult part: software to control and capture.
Hi,
Personally I am going to record in bayer format, then make a codec that can handle this format. It is a pretty CPU/GPU-expensive codec compared to, for example, an MPEG codec. I am hoping that Final Cut Pro sends hints to the codec when it wants RT-extreme (real-time, low quality playback); in that case I can easily switch between different debayer algorithms. I have one high quality debayer and one fast debayer algorithm. I will be releasing my digital negative file format, so that other people can implement it and share the tools built for it. I will probably make the QuickTime codec a free download. Cheers, Take
Hey... This is already sounding like a real team!
Take, if we develop a camera head based on the Micron chip with a GigE interface, do you think you could write a tool to control the camera and capture with your codec?
By the way, and now that we're talking about Apple: wouldn't it be possible to use a Mac Mini to capture? It has a SATA disk and a Core Duo chip. You can get one for about $800. The camera head, a Mac Mini and a LCD would make it really portable.
It would be like an OSX-based SI-2k, using Take's codec instead of CineForm. What do you think?
Jose,
The codec would be extremely simple; it is basically packed (12/14 bit) bayer pixels. The headers (per image) include things like:
- width
- height
- bit depth
- color model (bayer RGGB, bayer GRBG, RGB)
- LUT (integer to half)
- color conversion matrix
- orientation (horizontal and vertical flip)
- camera EUID (for loading calibration data)
- left and right average dark current (measured from dark frames (shutter = 0.0 sec) just before the recording starts, or from the left and right light-covered zones)
- number of frames since midnight
- frames per second

It needs both RGB and bayer modes, as Final Cut Pro only works fast when the codec for reading is the same as the codec on the timeline. So the result of internal Final Cut Pro renderings will be stored as lossless RGB with the same codec. It is a simple file format so that it would be easy to implement in a recorder, but as it is a proper QuickTime codec, you can also save it as a QuickTime file (which would happen inside Final Cut Pro).

My current capture software is Boom Recorder (with a Mirage Recorder license); it handles things like buffering and viewing. And of course it also does all the old stuff like timecode, high quality audio, metadata and auto file naming. Currently I am only implementing IIDC cameras, but it would not be extremely difficult to add GigE once it is finished.

The Mac Mini may be a little too light for doing this stuff, especially if you want to view the result. Mirage Recorder uses quite a lot of GPU power to draw the images on the screen. CPU usage is quite low, 30-40%, but with only one CPU this would be 60-80%. The internal disk is also not fast enough, and there is not a good interface for an external one either. I am writing it for the MacBook Pro, using Firewire 800 with the Pike and an eSATA ExpressCard connected to a disk.

One last thing: you can not configure the Pike to do exactly 24 fps, so I am planning to clock the Pike externally with a frequency divider (2000) on an audio word clock.
This will make the Pike also lock to audio. Cheers, Take
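To give a feel for how small such a per-frame header is, here is one hypothetical packing of (most of) the fields Take lists, omitting the LUT and color matrix for brevity. The field order, sizes and the 16-byte EUID width are my own guesses for the sketch; the actual file format was not published in the thread:

```python
import struct

# Hypothetical layout for the per-frame header described above.
# Field order and sizes are assumptions, not the real format.
FRAME_HEADER = struct.Struct(
    "<"      # little-endian, no padding
    "H H"    # width, height
    "B"      # bit depth (12 or 14)
    "B"      # color model: 0 = bayer RGGB, 1 = bayer GRBG, 2 = RGB
    "16s"    # camera EUID (for loading calibration data)
    "f f"    # left / right average dark current
    "I"      # frames since midnight
    "f"      # frames per second
)

hdr = FRAME_HEADER.pack(1920, 800, 12, 0,
                        b"CAM-0001".ljust(16, b"\0"),
                        64.0, 63.5, 12345, 24.0)
print(FRAME_HEADER.size, "bytes per frame header")
```

A few dozen bytes per frame is negligible next to the multi-megabyte bayer payload, which is what makes this kind of format easy to implement in an embedded recorder.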
Take.
I had a look at that picture; it seems like a form of column noise. The Sanyo HD1 had it worse. I can't remember the explanation, but there was one over in the Sanyo HD forum, in one of the threads I was in (a thread about the problem, a mis-named one too). See what the firmware does; it might be the only hope. What you describe is a different response at different levels, which really requires a non-linear fixed pattern correction to partly solve. That might be as simple as measuring the noise at a few levels and fitting a mathematical function to estimate it at any point, you know what I mean.

We rejected CCDs and firewire a long time ago because of various problems. I think the situation might have been that you can get CCDs to perform as well as the best CMOS, but at a price (cooling, power consumption, expense). There are some fill-factor advantages in CCDs, but the more complicated the design, the less the fill factor, so the more reliance on micro-lensing and the less usable the widest apertures. For us, in today's market, there is little reason to look past CMOS: Altasens, and some of the other, newer, sensors. Though my knowledge of present technology is too limited (in those days details of the Altasens technology were not even available).
Ok, so Mac Mini is a no go.
We stay with the camera head, GigE interface and a laptop.
Jose, I was not referring to you. The Epix suggestion is good. But we have to look at fitting into the cracks left in the market by SI, Red (coming out with a smaller camera), the shake-up from the Canon HV20, and the Sony XDCAM HD EX attacking the low end of the market.
As for format, it depends on taste. An extremely fast aperture on 2/3 inch, or 16mm, should give you decent depth of field (as the Drake did), but another format for lenses (non-cinema) is 4/3rds, which is a bigger-sensor compromise for digital sensors and DOF. Apart from those, we are back to 35mm lens adaptors again. What some people do is use a wider angle lens to get a decent field of view without a lens image adaptor, but that means using the lens with the best aperture to get the DOF (expensive). You have a 720p system; what is it, if you don't mind me asking?
Reading your next posts: 720p's benefit is 50fps (if available) and marketing; it doesn't challenge the 1080p market as much, so it can afford to cost less. The internal analogue-to-digital converter in the Ibis has been very poor and noisy; some additional external circuit design can also improve performance, as the Drake designer revealed to me. Actually, somewhere, I have the reference to the original camera that the Drake was based on. The problem is many camera companies go the easy way and just use the internal ADC. The other problem is: when are they going to replace the Ibis5a? They have additional technology from the buyout of FF and Smal by Cypress to replace it, but I think for the last 3 years not much has happened. Performance may well be improving in the background through technological refinements, though. The Ibis has less advantage now that other sensor companies have latitude extension technologies.

According to macosrumors there is a gaming Mac Mini coming; actually they were talking about a gaming iMac last year, but it makes some sense. The GPU in the standard Mac Mini is not the best, so a newer version would be best. Heroine Warrior does Cinelerra, an open source edit/capture solution that has been used on major productions to capture raw 4:4:4 1080p for years. http://www.heroinewarrior.com/cinelerra.php3

Jamie, good idea about the Gumstix. www.ambarella.com have a camera control / H.264 codec chip that can write to an IDE interface (and many others; it even has USB). Such a chip can be directly connected to the sensor, maybe even to a USB camera, I imagine. H.264 Intra-frame is the new professional work codec and would be quite desirable for your project. About the Ambarella chips: the more advanced versions are more powerful, but they all work around a large parallel array of SPARC RISC microprocessors. So I imagine it might be possible to reprogram it to do whatever compression/codec you like.

There are also now two rival software development tool companies offering tools that take normal code and convert it into parallel code effectively, even FPGA code (I forget the names).

Daniel, I tried to talk reel-stream into releasing the board for more general solutions in times past. I think we could get better results than the HVX200 mod. I wonder if they would be more interested now. Well, I wish you guys the best with this; I am just waiting for an affordable solution nowadays.
Hi Wayne,
About the Ambarella chip: it sounds great, but we can't use it with the USB Micron board. We'd get the same speed I get when capturing to the laptop. The main problem here is finding a camera head with a good 2k sensor and an interface fast enough to deliver 24fps at full res, or finding someone who can create a different board for the Micron sensor. Once we have that, we can do whatever we want, like attaching it to the Ambarella chip or connecting it directly to the computer. Jamie, the full datasheet for the MT9P031 chip is at www.framos.co.uk; just register and download it. I'd go for full 2k. I don't care if I have to build the adaptor; in fact I already have all the parts. 2/3" 720p sounds very good, but I prefer 2k.
I've gone through 30 or 40 google pages trying to get info about cmos sensors we don't know. No luck so far. It looks like all interesting options have already been discussed.
Jamie and Take, what do you think about this solution? Micron camera head - Ambarella chip - FPGA board or similar with USB interface to the laptop - simple recording software and camera control under OSX to go directly to Final Cut. I suggest USB because the laptop captures an already-encoded H.264 clip, so the needed bandwidth is much smaller. If the Ambarella chip has an interface to connect it to the computer, we wouldn't even need the FPGA. That way we could send 2k H.264 clips to the laptop... Do you think it's possible?
The biggest problem I see with this setup is this line from the Ambarella website: "The codec supports both interlaced 1080i format, as well as progressive 720p." found here: http://www.ambarella.com/technology/compression.htm
And personally I would like to avoid H.264, since I plan on doing extensive chroma keying. I am not saying this setup won't work; it's just that it may not give us the quality we are all looking for :-(
I just sent an email to Ambarella asking if it would be possible to have 1920x800 (near 2k, but 1:2.40 ratio) progressive encoding.
I'm also very interested in vfx, and if H.264 can't be used for chroma keying it won't be an interesting solution. We'll see. Maybe we can find a lossless encoding chip.
Why do you want to do lossless encoding?
A single 7200 rpm SATA hard disk is fast enough to sustain bayer 1920x800 @ 24fps, 14 bits, uncompressed.
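That claim checks out on paper, assuming the 14-bit samples are tightly packed (unpacked 16-bit words are shown for comparison):

```python
# Raw Bayer 1920x800 at 24 fps, 14 bits per photosite.
pixels_per_sec = 1920 * 800 * 24

packed_mb_s = pixels_per_sec * 14 / 8 / 1e6   # 14 bits tightly packed
unpacked_mb_s = pixels_per_sec * 2 / 1e6      # stored as 16-bit words

print(f"packed 14-bit:   {packed_mb_s:.1f} MB/s")
print(f"unpacked 16-bit: {unpacked_mb_s:.1f} MB/s")
```

Both figures sit near the sustained sequential rate of a period 7200rpm drive's outer tracks, so the packing choice (and avoiding seeks) decides whether a single disk keeps up.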
The Ambarella, from memory, has storage (I think some form of ATA), USB, and sensor interfaces. So: sensor -> Ambarella -> storage interface (card / hard disk / USB). Because it is not a PC, it should deliver close enough to full USB (2.0, I assume) speed to enable 50fps 720p, fully buffered. You could do USB camera -> Ambarella, but unless the camera is buffered you will lose some bandwidth because of the peaking during readout. Still, compared to a PC, it might be better.
H.264 Intra is the replacement for MPEG Intra in the professional workflow. It is available at lossless compression and down. What the Ambarella will handle is another matter, but they do offer professional versions for broadcast flows (from what I can tell). But if it is re-programmable, even just for H.264 bayer compression, it would be nice. This should be a very powerful chip compared to the fastest PCs, from memory. It is not something for most of us to hassle Ambarella about (they certainly don't give me answers to all my questions), but as we have an engineer here, it is worth looking at. Still, we already have Juan of reel-stream, and Elphel, with possible engines suitable for a camera recorder. There are a few more sensor chips; refer to the Elphel thread. I post them sometimes, hoping that Andrey will relent and see a better sensor for his cameras (but changing suppliers and designs is a good reason for not changing, as it's a major hassle). Only one chip I don't mention; it's not really a higher performer (probably better than the Micron) but cheap, something for private development.
Just in case anyone has missed it, there is a thread over in the AVCHD forum about the Aiptek GO-HD (http://www.dvinfo.net/conf/showthread.php?t=95227). This camera uses the Ambarella chip to do 720p. While the footage is pretty good you can tell the compression is quite high, but as Wayne mentioned, it depends on exactly how programmable the chip is whether it is usable or not.
Going for a chip design could be quite cool.
Say for example you take an IIDC or GigE camera like the Pike, which already does all the communication with the sensor. Both IIDC and GigE broadcast their data, which is interesting.

Now have a chip which has a Firewire 800 or GigE interface and an eSATA interface. It simply records all the frames received on the bus and sends them to the disk. You don't even need a filesystem if you like; just think of the disk as a single file that you write linearly. This chip does not need to be that fast either.

A second chip would have a Firewire 800 or GigE interface plus DVI/VGA and USB. Its job would be to do simple debayering and zooming, and show some things like numbers and icons. It would listen to a USB HID device, so that you can control the camera (like the shutter time) and send commands to the recording chip to start and stop recording.

The nice thing about this arrangement is that it would allow a fully distributed system. You could have a second recorder chip to record to multiple disks for safety, or a second preview chip for the director. Once the recording chip works it would not need more development afterwards; this will make the system stable and reliable, which is nice in an already expensive production. Maybe at some point you want to add playback capability. I do expect continued development on the preview chips, adding features and increasing preview quality.

It would be an interesting project, but I think right now I'll stick with the software route.
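The recorder chip's job is simple enough to sketch in a few lines. Here is the idea in Python, with `next_frame` standing in for whatever the Firewire/GigE bus delivers and an in-memory buffer standing in for the raw disk; both are placeholders, not a real camera API:

```python
import io

def record(next_frame, disk):
    """Append every frame that appears on the bus to `disk` as one
    linear stream -- no filesystem, the disk is treated as a single
    file written front to back."""
    written = 0
    for frame in iter(next_frame, None):   # None signals end of stream
        disk.write(frame)
        written += 1
    return written

# Simulated bus delivering three 16-byte dummy frames.
frames = [b"\x00" * 16, b"\x01" * 16, b"\x02" * 16]
disk = io.BytesIO()   # stand-in for the raw block device
n = record(lambda: frames.pop(0) if frames else None, disk)
print(n, "frames,", disk.tell(), "bytes written")
```

Because the recorder never interprets the frames, its firmware never needs to change when the preview side evolves, which is the stability argument made above.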
It sounds great! We need to know the price for the chip though, and also what's needed for the whole system to work. I mean, it wouldn't be as easy as connecting the sensor to the chip and the chip to a SATA disk and that's all, would it? We need to study the hardware/software for the controller/viewfinder part too.
I wanted to ask something else, not Ambarella related. Is there any way to overclock a USB interface? I mean, the bottleneck when capturing directly to the computer is the USB; the sensor actually runs about twice as fast at full HD. If there's a way to make the interface go just a little faster, we wouldn't have any problem capturing 1920x800. This is really getting better with each new post. I'm seriously thinking about keeping the demo board; I was going to return it because of the USB. We've got a week to find out if the board-chip-disk solution is actually possible. By the way, Wayne... you say the Ambarella chip has a sensor interface. Does that mean you just plug in the bare sensor and that's all, or do you have to connect the whole demo board?
I've been busy and haven't had much time to follow the threads lately: good to see this debate still going on with a feeling of real purpose. I feel excited again with some of the possibilities people are talking about here. Solving the low cost solution has taken longer than I (and most other people) have thought, but I still keep the faith in home made HD, if only because I still can't get the images I want from a commercial camcorder that I can afford to buy.
As usual the main problem is getting HD down onto the HDD, so perhaps GigE will finally solve it. Until then I am still a fan of Bayer RAM recording, though this seems to be going out of fashion at the moment! True, you have the restricted duration of single shots, so if you want to make a movie with lots of long dialogue shots, this recording method is hopeless. But what if you don't want to make a movie like that?

The sort of durations I am using are between 30-45 seconds, which is surprisingly long enough to get most shots (modern cinema grammar has sped up, perhaps thanks to MTV). It's longer than the single takes I got with my spring-driven Bolex H16 in the early 1980's! And it's good discipline to plan out a shot and shoot it within that time span if possible. Each take is automatically split as a single clip, so at the end of the session you convert them to colour AVI, assess the clips, and delete the takes you don't want (to save HDD space with these large clips).

The durations I use are slightly less than I can actually get, because the ultimate durations gave a slight "kick" towards the end of the shot which lost sync with the sound. I assume this is the Sumix software not being able to handle the larger clips from 2 GB of RAM. It seems from earlier posts here that Sumix may be updating their software to accommodate large RAM, so the safe maximum duration might get extended.

Rob Scott coded his own RAM-recording software for the M73 camera which extended the RAW recording duration using RAM buffering (writing data to HDD while still capturing to RAM). See Rob's thread here on Alt Imaging called "Interest in open-source capture software for Sumix M73". I'm not sure of the status of Rob's project right now -- Rob, if you're checking this thread, perhaps you could give us an update?

My own preference at the moment regarding bandwidth problems is to shoot a more modest frame size.
I'm currently shooting tests at around 1400 or 1600 width, which is my preferred method of "compression". Using the M72 camera, 1600 is full sensor width, so the field of view of the lens is not narrowed. The post workflow is to take the uncompressed clip, export an uncompressed still sequence (tif, tga, whatever), and then finally use Photoshop to enlarge all the frames in the sequence to 2k width. Because you start with an uncompressed frame and use Bicubic quality to make the enlargement, results may be more acceptable than onboard realtime jpeg of a larger frame. All the best, John.
Jose,
Maybe you will be able to overclock the USB chips, but you want to double the speed, and that will probably not be possible. Maybe you can run dual USB, or use DVI, or your own form of SDI, or simply GigE. Cheers, Take
Jose, it most likely will not be as simple as that; you would have to program the Ambarella interface to suit, and maybe add external circuits to connect the two. Overclocking USB would be interesting, but in reality, why not use GigE, and a GigE caddy? The problem with USB is the peaks and the processor drain, but with an embedded system you can have a section handling the USB and buffering the data peaks before and after the USB. This equals, nearly, 50MB/s max throughput (8-bit 1080p24/25, or 720p50, without compression).

Take, interesting solution. I did find an HDMI-to-wavelet chip solution that included a USB-like interface, and tried to talk them into making a USB capture solution, but nothing came through; somebody else could do it, though.
Steve -- thanks for reminding me about AviSynth. I remember looking at that a while ago but didn't go into the upsizing quality because I thought Photoshop would be superior. There are also Photoshop plug-ins like Genuine Fractals and S-Spline (called something else now, can't remember) which can also be used as an action in Photoshop -- I just walk away and do something more interesting while that's going on. I'll check out the latest version of AviSynth...
Thanks.
I think it would be quite easy to change the Adaptive Homogeneity ... Algorithm to do this. Cheers, Take
Take, I don't want to double the speed of the usb interface. I just want to speed it up a bit. I already have like 18-20 fps at 1920x800. It would be just a matter of 40% or 50% more to get to 25-26fps and then fix it to 24fps.
The question is how can we do all that? I mean, we're talking about different solutions but almost all of them require hardware/software engineering. E.g. I've got my demo board; how can we modify that board so it has a GigE interface instead of USB? If we do it, we also need new software to capture from the new interface. That's why I was talking about overclocking the USB: using parts that we already have is much easier than building new parts, putting them all together and developing new software.

If we want to stay with the camera-to-laptop solution, which IMHO is the best and easiest way, I see two different options.

I can capture 24fps bayer at 1600x666 over USB without any problem, and it's direct-to-HDD capture so we have no time issues. I still prefer capturing to RAM though, because with these things faster is always better. We'd have to develop a piece of software that can read the bayer results and convert them to RGB 2048x858 in just one step while debayering. That would be Take's job if he wants to do it. Upscaling is very common even in top pro cameras, so if we know this upscaling process would give us much better quality than any pro camera, it's OK with me. We could use any of the great interpolation algorithms out there.

The other option would be to overclock the USB interface to get 24fps at 2048x858 or at 1920x800. In the first case it would have to be around 75% faster; the other would be easier and more stable, as it's just 50% faster. This way we would have 2k or near-2k bayer, but full HD anyway, and we would just need to debayer in post. The good part of this one is that we have more resolution to start with.

There's in fact another option that's a little more difficult, but a good one nonetheless: if we connect the demo board to the Ambarella chip and then stream to the computer using USB, we could have lossless 2k directly to the laptop. We would need a way to connect the board to the chip, a USB interface from the chip to the computer, and software to control the camera and capture. If we can do this one, we could easily turn it into a portable solution using a mini PC and a small LCD.

With any option we have to work together. The best thing is that we can start with the simplest solution and then upgrade it till we have a standalone camera. So which do you think is the simplest option?
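The "debayer and upscale in one step" idea can be prototyped crudely. In this sketch, superpixel binning stands in for a real debayer and nearest-neighbour for a real interpolator; both are deliberately naive placeholders just to show the shape of the 1600x666 bayer -> 2048x858 RGB pipeline:

```python
import numpy as np

def superpixel_rgb(bayer):
    """Collapse each 2x2 RGGB cell into one RGB pixel (half resolution).
    The crudest possible debayer, used only to keep the sketch short."""
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
    b = bayer[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize; a real pipeline would use bicubic or better."""
    ys = np.arange(out_h) * img.shape[0] // out_h
    xs = np.arange(out_w) * img.shape[1] // out_w
    return img[ys][:, xs]

bayer = np.random.randint(0, 4096, (666, 1600)).astype(float)
rgb = resize_nearest(superpixel_rgb(bayer), 858, 2048)
print(rgb.shape)   # (858, 2048, 3)
```

A production version would fuse demosaicing and interpolation into one pass (as proposed above) rather than chaining two lossy steps.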
I did it.
I changed the USB clock from 125Hz to 500Hz. The camera works, and now it captures 2048x858 at about 25fps, already debayered, to uncompressed AVI. The question is... is it bad for the camera? I mean, I already spent €900 on the board. I used a patch used by pro gamers to increase mouse performance, and the guy who programmed the tool says it's completely safe. In fact the program has an option to set the clock to 1000Hz, and that's the only option the programmer says is not safe. I'll post a full res clip soon.
Is that related to the actual transfer speed, or to a switching speed for polling etc.?
What do you mean?
After talking to one of my friends I have given up on the Gumstix idea. We are now talking about going purely with a CMOS-FPGA-HDD solution. Neither of us has ever attempted anything like this before, but that is kinda what makes it fun. Once I have some money to spare I think I am going to buy a Spartan-3E Xilinx dev board, but once again that could be a while. Oh, and Jose, that website you gave me didn't have the full datasheet for the Micron MT9P031, only the product overview. It seems you have to sign an NDA with Micron to get the full datasheet :-(
It's faster than AHD because you don't have to convert data to CIE color space (which is a very slow operation involving a cube root) to calculate the homogeneity map, but the result should be quite similar. (And I've written it in a way that allows me to easily add MMX code where it matters.) Creating an AviSynth filter using the internal resizer shouldn't be too hard, but I wanted to see if I could write something from a scientific paper.
Hello Steven,
According to the AHD paper they also tried YUV with good results, so I am using YUV. I am doing camera_RGB -> rec709_RGB -> rec709_YUV using 4x4 matrices. Also, the conversion of RGB to YUV or RGB to CIELab is not the limiting factor; it is the number of convolutions that have to be performed, including running the image through a median filter three times. BTW, I am running the complete AHD on the graphics card. It runs at about 25% of real time (slower), on a MacBook Pro. Your directional filtering algorithm also looks very much like AHD, except that the decision seems to be made before interpolating red and blue. It took me quite a long time to figure out what the AHD paper tried to explain, while the actual algorithm is quite easy to explain in detail. I wish these math people would write papers with engineers in mind. Cheers, Take
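The core shared by AHD and the directional filter discussed here, picking the interpolation direction with the smaller gradient, fits in a few lines. A minimal sketch for the green channel of an RGGB mosaic (borders are skipped, and real AHD does far more; this only shows the directional decision):

```python
import numpy as np

def green_directional(bayer):
    """Fill in green at the R and B sites of an RGGB mosaic by averaging
    along whichever axis (horizontal or vertical) has the smaller
    gradient -- the basic decision AHD-style algorithms are built on."""
    g = bayer.astype(float).copy()
    h, w = bayer.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if y % 2 == x % 2:   # R sites (even, even) and B sites (odd, odd)
                dh = abs(g[y, x - 1] - g[y, x + 1])
                dv = abs(g[y - 1, x] - g[y + 1, x])
                if dh < dv:      # smoother horizontally: average left/right
                    g[y, x] = (g[y, x - 1] + g[y, x + 1]) / 2
                else:            # smoother vertically: average up/down
                    g[y, x] = (g[y - 1, x] + g[y + 1, x]) / 2
    return g

# A hard vertical edge survives: vertical gradients are zero there, so
# the vertical average is chosen and the edge is not smeared.
bayer = np.full((6, 6), 10.0)
bayer[:, 3:] = 90.0
out = green_directional(bayer)
print(out[2, 2], out[3, 3])   # 10.0 90.0
```

A plain bilinear debayer would average across that edge and produce zipper artifacts; the directional choice is what both algorithms above use to avoid it.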
I lost a lot of hairs before figuring it out, and then I was "whaat? is that all???"

Edit: BTW, if anybody has some raw bayer footage, I would gladly download it so I can test the algorithm on different video material.
Steve,
When I get the camera running correctly, I will be able to put some footage up. However, the reseller of the Pike doubts that the artifacts are abnormal. He basically says: "you shouldn't do gamma correction". So to prove that it is wrong I will have to make a light sensitivity graph (x: exposure time, y: AD output value) for a couple of pixels. It looks to me like 50% of the pixels are not linear in the mid-tones. Cheers, Take
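The graph described above amounts to fitting a straight line per pixel over an exposure sweep and flagging the pixels that deviate. A sketch of that test; the 5% tolerance and the synthetic two-pixel data are arbitrary choices for illustration, not Take's actual measurement setup:

```python
import numpy as np

def flag_nonlinear(exposures, stack, tol=0.05):
    """stack: (n_exposures, h, w) mean AD output per pixel. Fit a
    least-squares line per pixel over exposure time and flag pixels
    whose worst deviation exceeds `tol` of their peak value."""
    n, h, w = stack.shape
    x = np.asarray(exposures, dtype=float)
    flat = stack.reshape(n, -1)
    A = np.vstack([x, np.ones_like(x)]).T          # [exposure, 1] design matrix
    coef, *_ = np.linalg.lstsq(A, flat, rcond=None)
    fit = A @ coef
    rel_err = np.max(np.abs(fit - flat), axis=0) / np.maximum(flat.max(axis=0), 1e-9)
    return (rel_err > tol).reshape(h, w)

# Synthetic 1x2 sensor: one linear pixel, one that rolls off mid-range.
exposures = [1.0, 2.0, 3.0, 4.0]
good = [10.0, 20.0, 30.0, 40.0]
bad = [10.0, 20.0, 25.0, 26.0]
stack = np.array([[[g, b]] for g, b in zip(good, bad)])
print(flag_nonlinear(exposures, stack))   # [[False  True]]
```

Averaging many dark and flat frames per exposure step before fitting would separate genuine response non-linearity from shot noise.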
Any overclocking is safe provided you can cool the parts enough...it goes up quickly though, you'll be liquid cooling the thing before too long ;)
Luddite question: is there a reason Firewire (IEEE 1394) isn't being considered (or was it considered, then dismissed)?
Steve, I can provide you with raw bayer material from my cam. Do you want anything in particular? Resolution? Colors? Lighting?
Just tell me where I can upload it.
DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network