DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Silicon Imaging SI-2K (https://www.dvinfo.net/forum/silicon-imaging-si-2k/)
-   -   Cineform Encoding Quality differences? (https://www.dvinfo.net/forum/silicon-imaging-si-2k/104777-cineform-encoding-quality-diferences.html)

Sergio Sanchez October 1st, 2007 05:31 PM

Cineform Encoding Quality differences?
 
Hi:

I just wanted to know: what is the compression difference between the High and Very High CineForm encoding presets?

I ask because I've been having some trouble with the RAM buffer when using Very High quality encoding.

David Newman October 1st, 2007 06:36 PM

There is a difference; however, CineForm pushes quality to the extremes, so you will likely never see the difference even under extreme post processing. Silicon Imaging labels CineForm Filmscan2 as "Very High"; internally we call this "OverKill". Filmscan1 ("High" in the SI-2K) is designed to fully reproduce the shape of film grain, so that mode should be fine.

Jason Rodriguez October 1st, 2007 11:09 PM

Hi Sergio,

Are you running two monitors or just one?

If you're running two monitors and you need "Very High", use a VGA or DVI splitter so that you don't tax the GPU, which on the SI-2K uses a NUMA architecture with system memory as the frame-buffer. When the GPU has to draw a lot of resolution, it robs memory bandwidth from the host processor. Using an external VGA or DVI splitter solves that issue.

If you're going to run dual-monitors off the internal VGA, our suggestion is that you set the GUI control monitor to 800x600 and the external client monitor to 1280x720.
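To put rough numbers on why display resolution eats into a shared-memory system's bandwidth, here is a minimal back-of-the-envelope sketch. The 60 Hz refresh rate and 32-bit framebuffer are illustrative assumptions, not SI-2K specifications:

```python
# Rough estimate of the system-RAM bandwidth a shared-memory (UMA/NUMA)
# graphics chipset consumes just scanning out the display for refresh.
# Refresh rate and pixel depth below are assumptions for illustration.

def scanout_bandwidth_mb_s(width, height, refresh_hz=60, bytes_per_pixel=4):
    """MB/s the display controller reads from system RAM for refresh."""
    return width * height * bytes_per_pixel * refresh_hz / 1e6

single = scanout_bandwidth_mb_s(1280, 720)
dual = scanout_bandwidth_mb_s(800, 600) + scanout_bandwidth_mb_s(1280, 720)
full_hd = scanout_bandwidth_mb_s(1920, 1080)

print(f"1280x720 only:      {single:7.1f} MB/s")   # ~221 MB/s
print(f"800x600 + 1280x720: {dual:7.1f} MB/s")     # ~336 MB/s
print(f"1920x1080:          {full_hd:7.1f} MB/s")  # ~498 MB/s
```

Under these assumptions, a single 1920x1080 display pulls more than twice the refresh bandwidth of a 1280x720 one, and every byte of it comes out of the same memory bus the CPU needs for encoding.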

Sergio Sanchez October 2nd, 2007 02:24 PM

Is there any way to increase the RAM in the DVR? For me, an output at that resolution is just not optimal; I had issues with matte boxes appearing in the frame. There are many solutions out there in which I can record to data and have a full-res output. A solution in which what you see is what you get is one of the deciding points in choosing a digital workflow.

I believe that the decision to use integrated graphics instead of a dedicated video card was the wrong choice. You should remember one of the first rules in computer engineering (I know that, and I'm not an engineer, I'm a filmmaker): the computer will only be as fast as its slowest component. And in this case the solution would be to use more than the minimum needed to reach the specs and the goals you offer to your clients, not the opposite.

So what can I do to improve the performance of the camera? I can't compromise a day of shooting using it as is, without knowing whether I can shoot a sequence or not, or while fighting with the buffer.

John DeLuca October 2nd, 2007 07:49 PM

I thought this was an issue of CPU speed rather than RAM (if the CPU is fast enough it won't use the RAM buffer). Jason, is it possible to swap out chips and RAM in the SI-2K?

Jason Rodriguez October 2nd, 2007 08:53 PM

Quote:

I believe that the decision to use integrated graphics instead of a dedicated video card was the wrong choice. You should remember one of the first rules in computer engineering (I know that, and I'm not an engineer, I'm a filmmaker): the computer will only be as fast as its slowest component. And in this case the solution would be to use more than the minimum needed to reach the specs and the goals you offer to your clients, not the opposite.
Hi Sergio,

I appreciate the advice and recommendations that you feel need to be given in this area of electronic system design, and am glad to know you feel we made the wrong choice, but if you only knew the effort and the constant experiment and test platforms we built before settling on the current hardware, you would understand how this decision was reached. Now, if our design constraint were to carry around a grotesque 30 lb square fridge on your shoulder, we could start cherry-picking off-the-shelf computer components from CompUSA. But our goal was a solid integrated camera design that behaved and looked like a camera from an ergonomic, mechanical, and electrical perspective, and was a familiar and welcoming form-factor to cinematographers, while also functioning inside a heat and power budget that allowed for an untethered mobile form-factor (not a 15A block battery), and finally met a certain mechanical-shock tolerance.

The electrical components in the current camera are the fastest parts we could obtain that delivered on all these requirements, as well as conforming to driver compatibility specs and a number of other related aspects that come with working in a Windows software environment. If you ever decide to crack the case open, you will notice that the boards inside look nothing like any PC motherboard you've ever seen, and there are a host of customized components integrated into that platform that make it tick that would not be available with off-the-shelf PC motherboards. This is not a Newegg special. We have built our camera using the same performance-critical components that power the specialized computer systems operating in the Humvees of Iraq, railway systems, fighter jets, satellite systems, etc.
Running with an Nvidia graphics card just to jump from a 1280x720 display to a 1920x1080 display would have doubled the power budget (no more little 90 Wh batteries), required extensive cooling, increased the form-factor size, and, in the end, increased the price in order to create a bunch of customized components to compensate for the heat and power draw of a graphics card that was not designed to be running around on someone's shoulder anywhere in the world. And as far as notebook graphics cards go, please find me a notebook that allows you to upgrade the graphics card . . . there aren't many, and the reason is that the graphics, such as the GeForce Go series or Quadro FX mobile editions, are soldered onto a custom motherboard. We explored this solution early on and found that it would double the price of the camera. It's one thing when you're shipping 100,000 notebooks per quarter like the OEM Taiwanese manufacturers do, and it's another thing when your volumes are a lot lower and you are going to design a motherboard from scratch. And of course the big problem with designing from scratch is that the upgrade path becomes very limited if not impossible. The components we have chosen, on the other hand, use a modular form-factor called CompactPCI, so as Intel grows in capabilities, our camera can grow in capabilities.

Another thing you fail to realize is that there is an actual lead time on components, meaning that what you see right now was designed at least 6-8 months ago. I can't sit around, read something on a tech-rumors forum, and have a working prototype a month later. Parts have to go through revisions by the manufacturers themselves before they are ready for integration. In other words, not only do we have a prototyping path ourselves, integrating components into the camera design, but each and every one of the major components we choose has its own prototyping pathway, meaning the suppliers are trying to work out bugs and design flaws in their products before we get them, so that we're not troubleshooting their problems and can concentrate on our own. Dropping a prototype motherboard into a camera design like ours just leads to problems that are unrelated to our software/hardware architecture . . . so you have to wait around for production parts from the suppliers as well.

Quote:

I thought this was an issue of CPU speed rather than RAM (if the CPU is fast enough it won't use the RAM buffer).
Because the integrated graphics system is a NUMA architecture, what happens when you heavily load the graphics subsystem is that it starves the CPU of memory bandwidth, so you end up with empty or missed CPU cycles: the CPU is trying to grab something from memory, but it has to wait for the next available moment because the memory bandwidth it needed on demand was busy feeding the graphics. It's sort of like missing the train and having to wait around for the next one to arrive. You might be only 2 seconds late, but once you've missed the train, you've missed it. Now if the CPU is faster, what you typically see is the train coming more often, so a missed event is not as critical as far as latencies are concerned, which is the real issue here, not necessarily CPU speed per se. The latency constraint is that a frame has to be fully encoded within a given frame-time (i.e., 1/24th of a second for 24fps), and if it isn't, then you're a frame behind and that frame ends up in the RAM buffer.
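The deadline effect described above can be sketched as a toy model: the encoder gets one frame-time of work per capture tick, and any frame that isn't finished in time piles up in the buffer. The encode times below are made-up numbers for illustration, not SI-2K measurements:

```python
# Toy model of the frame-deadline latency: each capture tick delivers one
# frame and grants the encoder 1/fps seconds of work time; frames whose
# encoding doesn't finish within the budget accumulate in the RAM buffer.

def buffer_depth(encode_times_s, fps=24):
    """Return the number of buffered (not-yet-encoded) frames after each tick."""
    frame_time = 1.0 / fps
    pending = []            # encode seconds remaining for each buffered frame
    depths = []
    for t in encode_times_s:
        pending.append(t)   # new frame arrives this tick
        budget = frame_time
        while pending and budget >= pending[0]:
            budget -= pending.pop(0)      # finish oldest frame(s)
        if pending:
            pending[0] -= budget          # partial progress carries over
        depths.append(len(pending))
    return depths

# Frames that encode faster than 1/24 s (~41.7 ms) never touch the buffer;
# frames that take longer fall behind and the backlog slowly grows.
fast = buffer_depth([0.030] * 5)   # 30 ms per frame
slow = buffer_depth([0.050] * 5)   # 50 ms per frame
print(fast)  # [0, 0, 0, 0, 0]
print(slow)
```

In this model a marginally slow encode doesn't fail immediately; it leaks a few milliseconds into the buffer every frame, which matches the observed behavior of the buffer filling gradually during a long take.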

Quote:

Jason, is it possible to swap out chips and ram in the SI-2K?
Yes, part of our design criteria was that the components of the system needed to be upgradeable. This long-term goal of course meant that the short-term design components may not have fit Sergio's wish-list above, and I'm sure that those looking at our system right now could point out other shortcomings and, if they were to build a simple one-off system, could gather a better all-star component list from the parts available on the market right now. But that's simply not how electronic systems like this are designed, and it would mean no upgrade path, since it would be a fixed-target platform. Furthermore, I challenge anyone to assemble a component list better than ours that would meet the heat, power, and size requirements for a non-tethered mobile digital cinema camera while not running into exorbitant NRE charges from requiring custom motherboard layout designs (that would send the price through the roof . . . our price target has already been missed, but if we had tossed in the amount of NRE that a custom motherboard would have required, all bets are off on how much the camera would have cost).

Quote:

So what can I do to improve the performance of the camera? I can't compromise a day of shooting using it as is, without knowing whether I can shoot a sequence or not, or while fighting with the buffer.
Sergio, Steve has already been working with you guys one-on-one all day today, and I've also sent you lots of private emails on the same topic . . . airing your troubles here on this forum won't get you a faster or different response. Some of the issues you're seeing actually seem to be a bit of a one-off, meaning that you might be having a hardware component failure, or some software setting has gone very awry. We might need to get a fresh OS install on that system, which at this point in time might mean returning it for evaluation (note: while we don't have a way to do OS re-installs in the field at the moment, we are actively working on a solution, so that even a horribly corrupted XPe install that won't boot can be completely re-flashed using a USB stick with a fresh disk image on it).

I do apologize if 1280x720 is not enough resolution for you, but with the current graphics chipset in the SI-2K system, that is where we are for now. You can run the GUI at 800x600 and the external client monitor at 1280x720. If you need to analyze something, you have 2x and 4x zoom controls that let you pan around the entire screen, enabling you to microscopically analyze the entire scene at greater than 2:1 pixel resolution. We let you do a lot more for image inspection than the typical "2x center-only zoom" that you see with other cameras' focus aids.
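The "greater than 2:1" figure can be sanity-checked with quick arithmetic, assuming the SI-2K's published 2048-pixel-wide sensor shown on the 1280-pixel-wide monitor mentioned in this thread:

```python
# Quick check of the zoom figures: how many display pixels are drawn per
# source-image pixel at each zoom level. Assumes a 2048-wide image (the
# SI-2K's published sensor width) on a 1280-wide client monitor.

def zoom_ratio(zoom, display_w=1280, sensor_w=2048):
    """Display pixels drawn per source-image pixel at a given zoom level."""
    return zoom * display_w / sensor_w

for z in (1, 2, 4):
    print(f"{z}x zoom -> {zoom_ratio(z):.3f} display pixels per image pixel")
# 1x zoom -> 0.625 display pixels per image pixel
# 2x zoom -> 1.250 display pixels per image pixel
# 4x zoom -> 2.500 display pixels per image pixel
```

At 4x zoom, each image pixel spans 2.5 display pixels, consistent with the "greater than 2:1 pixel resolution" claim above.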

Sergio Sanchez October 2nd, 2007 11:22 PM

Jason:

As Dank said to Steve, I don't think that reinstalling the OS will solve the problem; it isn't a hardware component failing. There is one driver missing, though. Let me check the device name and I'll send it to you, because I couldn't recognize it. The SMBus driver was missing too; after I installed it, I could record for more than a minute.

I understand perfectly what you are saying to me; I'm not airing anything here on the forum. In fact, the real points I made were in an e-mail I sent you yesterday and in the CineForm support forum. Here I just wanted to know the difference between the encoding presets, so you or someone at CineForm could clear it up.

Jason Rodriguez October 2nd, 2007 11:37 PM

I understand the points you are trying to make about the monitor format. That is fine.

But I think the attacks about not knowing how to correctly design the camera are going a bit too far. While my reaction may have been a bit thorough and over-the-top, it stems from the fact that I'm a bit irritated: this was originally a private opinion solicited by one of your fellow co-workers in an email (which is of course justified . . . you can email me anything you want, and while I might not be happy hearing the criticism, I'll take it professionally), and now it's making its way to this public forum, despite the fact that I've already answered it once in a nice, professional manner. I hope you can see that having to repeat myself and defend our design cycle on a public forum is not making me too happy.

I like giving people help. I don't like hearing armchair quarterbacking about what we did right or wrong and how you would have fixed it, when your "fix" is one of the dead-end routes we actually investigated and even spent months building a dead prototype around. The path to market hasn't been painless, but I believe that in the end we've been making the best design decisions we could with the resources we have available.

I will be the first to admit that best-fit compromises were made relative to what would have been an "ideal" platform on paper (i.e., our pie-in-the-sky unlimited-platform pipe dream), but compromise is part of every engineered product: reach the best end-goal possible with the resources you have at hand, whether they are limited by the laws of physics, mechanical requirements, functionality, avoiding incompatibilities, or, of course, budget. The good news is that everything we've done has been forward-looking, and as far as performance goes, the road for Moore's law only goes up.

Bob Hart October 3rd, 2007 12:41 AM

It would be interesting to see if outfits like Sony, Panasonic, JVC, Canon etc., would as frankly and honestly discuss their trade secrets directly with their customers or on an open forum.

Even if it is way way out of my league, it remains refreshing to see the direct interaction between providers and clients that is occurring with entities like Cineform, SI, RED, where the culture of the enthusiast has not been suffocated within a larger corporate mass.

A little self-governance on the part of forum participants might be in order if this openness is to continue.

Jason Rodriguez October 3rd, 2007 01:08 AM

Quote:

A little self-governance on the part of forum participants might be in order if this openness is to continue.
Yeah, to be honest, my response was a bit over-the-top (apologies to Sergio), but I'm very passionate about letting people know that our goal is to deliver the best product possible. And when things don't function the way they should, we do the best we can to find out how to make things work right. One unfortunate side-effect of the "openness" of these development projects is that you've seen all the dirty laundry, especially how much the "ideal" specs can change from start to finish . . . sometimes the change is in the positive direction, and sometimes compromises need to be made in specific areas in order to make a better overall product that meets as much of the original goal-set as possible.

I can remember the anger and frustration last December after realizing that we had a DOA (dead-on-arrival) product on our hands, after building what was supposed to be a fully working prototype based on a GPU-powered system. On paper everything should have worked, and individually the components all worked fine in a test-bench situation. But once it was all assembled and we attempted to put it through its paces, it flat-out didn't work for what was required of it: a tetherless mobile system that worked like a "real" camera and could withstand the abuse of a "real" camera. As a result we had to work overtime on a complete re-design, from the electronics through the form-factor, between then and NAB in April. The good news is that we learned from our mistakes and dead-ends, and we've made a vastly better product as a result.

Jason Rodriguez October 3rd, 2007 01:23 AM

Quote:

Jason, is it possible to swap out chips and RAM in the SI-2K?
BTW, I should clarify this a little better.

In order to meet the criteria for ruggedness, we ended up having to move away from socketed designs (which can come loose over time or during vibration/shock and cause instability issues or crashes that are hard to diagnose), and instead go with a soldered design where the RAM and the processor are soldered onto the motherboard.

That being said, the cPCI form-factor allows you to swap out the motherboard for newer designs later. So with the current set of boards, we've gotten the fastest parts possible and put them on. When a new chipset/boardset is available, we can swap out the current board and put a new one in (and it's fairly straightforward, since it's the entire board in a socket).

The upgrades will of course cost more than buying a single CPU/RAM combo, but then Intel makes sure that you have to purchase a new motherboard every time a new chipset comes out, so this board swap would end up happening anyway. Going from a slower to a faster processor in a given family is one thing, but in our case we are already at the top tier for this chipset.

Our goal was to use a sub-system that gave us the easiest upgrade path while also maintaining the rugged form-factor required for the intended abuse the camera would go through. Using mission-critical and military-spec'd parts was the logical path, since those industries have similar requirements. While Intel embedded graphics are a pain in the butt to work with right now, the good news is that Intel is on a very aggressive path to increase embedded graphics performance by over 10x in the next 18 months, basically making embedded graphics equivalent to what would right now be considered a pretty decent gaming card. Of course, 18 months is a long way off for someone who is shooting right now, so rest assured that we have other near-term options on the table that we are currently developing that will help with any performance deficit in the current embedded platform. But I just wanted to let you know that we've tried, and are continually trying, to keep the platform as flexible as possible.

And another thing to remember is that every SI-2K is also a MINI . . . so if there's something that simply isn't working, you can always pull the MINI out and plug it into another box, laptop, etc. That of course is not an intended workaround, but I guess I'm just trying to say there are many ways to stay up with the latest-and-greatest that other camera systems don't offer you the flexibility of taking advantage of.

Bob Grant October 3rd, 2007 05:51 AM

All of this makes for compelling reading. It would have been better if we'd all been kept in the loop as things progressed (or didn't). Some of us have what are, for us, not small sums of money tied up in this project; it'd help no end if we knew the cause of the seemingly endless delays.
From the outside until today here's how it looked:

NAB 2006 you had a working prototype camera.
You decided to repackage the camera for cosmetic reasons.
NAB 2007 you have the working repackaged camera and 6 months later it's still not shipping.

I should mention, I suppose, that at various points along the way there were a few announcements of new features, but nothing about actually shipping a working camera.

As has been explained, that's so far from the actual situation as to be laughable, but until we're told otherwise, what else are we to think? So please, continue to keep us informed of both the good news and the bad.

John DeLuca October 3rd, 2007 10:04 AM

Quote:

Originally Posted by Sergio Sanchez (Post 753282)
I had issues with matte boxes appearing in the frame

What would cause this? I'm planning to use a set of S16 primes and accessories.

Jason Rodriguez October 3rd, 2007 12:13 PM

Quote:

What would cause this? I'm planning to use a set of S16 primes and accessories.
I'm actually not sure. I've been using Zeiss Superspeeds and even a converted S16mm Cooke 10-64mm zoom with an Arri MB-16 and have had no such issues with the matte-box in the frame.

In fact, none of the films we've been involved in shooting have had that problem either.

I don't know how that happened. Sergio, can you elaborate on your setup that caused this issue?

Jason Rodriguez October 3rd, 2007 12:20 PM

Quote:

NAB 2007 you have the working repackaged camera and 6 months later it's still not shipping.
It is shipping; that's why Sergio and a couple of other people on the list have one. The only issue right now is that we're trying to ramp production in Germany while weeding out any bugs in the current systems and waiting on volumes from other suppliers as well. This causes a bit of a stop-and-go situation, and it's compounded by the fact that we're now being compared to the power of the latest laptop solutions that people can run with their MINI, when the design cycle on the SI-2K necessitates that the electronics in it will be a bit older, and not as powerful as the latest laptop on the market. The good news, of course, is that we're upgradeable as well, so when something newer comes out, we can include it too; it's just that we can't hit the market with the latest solution as fast as Dell or Apple.

One thing to note is that SiliconDVR was designed to meet the requirements of the SI-2K electronics. With a more powerful system that has unlimited resources, like a gaming laptop or desktop, you can naturally do more, but the SI-2K is not performance-starved. Of course, if you drop from the gaming laptop to the SI-2K, it might seem like that's the case, but it's not. In other words, if you hadn't used the powerful laptop/desktop and you came to the SI-2K natively, you wouldn't be trying to do on the SI-2K what you can do with the gaming laptop and then calling it "performance starved".



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network