DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   What Happens in Vegas... (https://www.dvinfo.net/forum/what-happens-vegas/)
-   -   GPU versus CPU rendering (https://www.dvinfo.net/forum/what-happens-vegas/523615-gpu-versus-cpu-rendering.html)

Mervin Langley June 6th, 2014 09:00 PM

GPU versus CPU rendering
 
Is CPU render quality inherently better? I thought I read this somewhere.

Leslie Wand June 7th, 2014 01:54 AM

Re: GPU versus CPU rendering
 
Apparently GPU ISN'T as good with low bit rates, but don't take my word for it ;-)

One thing's for sure though - CPU is a lot more reliable ;-(

Gary Huff June 7th, 2014 09:22 AM

Re: GPU versus CPU rendering
 
Quote:

Originally Posted by Leslie Wand (Post 1848012)
Apparently GPU ISN'T as good with low bit rates, but don't take my word for it ;-) One thing's for sure though - CPU is a lot more reliable ;-(

Not sure where you are getting this information. That the GPU "isn't" as good with "low bit rates" doesn't make sense. Do you have a source for this?

Jeff Harper June 7th, 2014 09:40 AM

Re: GPU versus CPU rendering
 
I have read the same thing, Merv. I've seen it in a couple of forums: that GPU quality is less than CPU quality when it comes to rendering.

The way it was explained (which I don't recall) made sense to me, but I know nothing about processor design. The CPU on most desktops is a much more powerful processor than the GPU, and for some reason it was said the GPU sacrifices quality in order to get the job done.

Is it true? I have no idea.

Robin Davies-Rollinson June 7th, 2014 10:08 AM

Re: GPU versus CPU rendering
 
Surely, if it's digital, one can't be better or worse than the other? The only way you might get a quality difference is in the choice of rendering codec...

Jeff Harper June 7th, 2014 10:41 AM

Re: GPU versus CPU rendering
 
This article leads me to be open to the possibility that GPU encoding "could" be inferior to CPU encoding. Again, it's not enough to prove anything to me as a Vegas user.

For us to say it makes no sense is understandable, but we should be open to the idea.

H.264 encoding - CPU vs GPU: Nvidia CUDA, AMD Stream, Intel MediaSDK and x264 (page 27: Conclusion) - BeHardware

Jeff Harper June 7th, 2014 11:12 AM

Re: GPU versus CPU rendering
 
Quote:

Originally Posted by Robin Davies-Rollinson (Post 1848027)
Surely, if it's digital, one can't be better or worse than the other?

Robin, what you say makes sense, but by the same token, if all of the codecs are digital, they should all produce optimal results too.

Video encoding is extremely complex, and a knowledgeable answer to this question might be hard to come by.

Juan Rios June 7th, 2014 12:42 PM

Re: GPU versus CPU rendering
 
Empirical evidence: is CPU better at low bit rates? On my system, I think not.

https://drive.google.com/file/d/0B9o...it?usp=sharing

https://drive.google.com/file/d/0B9o...it?usp=sharing

Sony Vegas 13
nVidia GTX 550 Ti
Sony AVC.

Seth Bloombaum June 7th, 2014 02:07 PM

Re: GPU versus CPU rendering
 
I've been scratching my head over this thread: how could it be that an "acceleration" of a process would change the quality? After a little research, I'm coming to some tentative conclusions:

"GPU Acceleration" is a misnomer. I think it's a marketing term that is too dumbed down to describe the actual process. It's not descriptive. It leads us to think that it is taking the same process we do on the CPU and somehow just doing it faster, by having specialized hardware in the GPU take the load.

After a quick skim of the NVIDIA white papers on the subject, I think what's actually happening in a "GPU accelerated" render is that the render is offloaded to a hardware encoder on the GPU, which has its own embedded h.264 codec.

If that's true, it readily explains differences in quality, and potential variations between low-bitrate and high-bitrate qualities. Particular codecs are optimized for various tasks, and those of us who've been through numerous h.264 codecs have all realized that the x264 codec (not directly available from Vegas' render-as, or from an nVidia GPU-assist encode) is clearly superior to all others. This is also supported by a university study...

IMO, various manufacturers of encoding hardware and software (including MainConcept, Sony, and nVidia) don't spend enough effort optimizing low bitrate performance. To be fair, time is on their side, that is, all the internet pipes are getting bigger, including those connecting mobile devices.

Instead, the implied message from these manufacturers is "just give more bitrate to your encodes, then our quality is as good as everyone's..." As a certified Old Fart in streaming, that rubs me the wrong way, especially since the open-source x264 codec is so clearly capable of better quality.

But, if you have to get 50 encodes out before 5pm, a single step process where you can more quickly encode straight out of Vegas starts to look pretty good, so why not give a little more bitrate to maintain quality?

I'm doing most of my mastering to MXF out of Vegas, a fairly fast render, then to the wonderful, fast, and free Handbrake (a leading GUI for x264). It's a two-render process, not of value to those who need 50 by 5pm, but the quality is second to none.
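
If you ever need to push a whole batch of those MXF masters through Handbrake without babysitting the GUI, a short script can drive HandBrakeCLI for you. This is only a sketch of my own, not part of Seth's exact workflow: the folder names and the 2500 kbps figure are placeholders, and the flags (-i input, -o output, -e video encoder, -b average video bitrate in kbps) are taken from HandBrake's CLI help, so check them against your installed version.

    # Batch sketch: MXF masters out of Vegas -> x264 MP4s via HandBrakeCLI.
    # Assumes HandBrakeCLI is on the PATH; paths and bitrate are placeholders.
    import pathlib
    import subprocess

    MASTERS = pathlib.Path("masters")   # hypothetical folder of MXF renders
    OUT = pathlib.Path("web")           # hypothetical output folder
    OUT.mkdir(exist_ok=True)

    for mxf in sorted(MASTERS.glob("*.mxf")):
        mp4 = OUT / (mxf.stem + ".mp4")
        subprocess.run(
            ["HandBrakeCLI",
             "-i", str(mxf),
             "-o", str(mp4),
             "-e", "x264",    # software x264 encoder
             "-b", "2500"],   # average video bitrate, kbps
            check=True)       # stop the batch if an encode fails
        print("encoded", mp4)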

My conclusion: it isn't about the GPU or CPU, it's about the codec. If someone implements x264 on the GPU (or in Vegas), we'll really see some outstanding performance!

Christopher Young June 8th, 2014 12:07 AM

Re: GPU versus CPU rendering
 
Absolutely agree with Seth on his findings. x264 in Vegas would be a fantastic addition. Handbrake, free with x264, is only marginally beaten in quality by Telestream Episode Pro, which starts at $1,194.

Check out Jan Ozer's H.264/x264 comparisons in his PDF called 'Encoding H.264 Video for Streaming and Progressive Download.' It is a mine of information on the whole web MP4 delivery scenario. He covers what the mass online market, TV and corporate B2B, is doing in relation to suggested bit rates, pixel dimensions, pixel bit depth, coding methods, etc. Studying his PDF along with Telestream's 'Encoding Best Practice' rapidly pointed me in the direction of creating better quality encodes for the web.

I've just finished 13 ninety-minute-plus online learning seminar programs for a national medical body and ended up creating Handbrake-encoded x264 videos at 768 kbps that were way superior to anything I could put out of Vegas, MainConcept, TMPGEnc, Sorenson or Compressor. Compressor was so far behind Handbrake at these lower bitrates it wasn't funny.

To really get your head around all the settings in Handbrake, watch or download the ten Handbrake 'how-to' videos from YouTube. Well, well worth it if you want to extract the best out of Handbrake. I am totally convinced a good software encode with fully adjustable encoding attributes is way better than any of the GPU H.264 encodes at lower bitrates. Once you get upwards of 3000 kbps the difference becomes less noticeable, and from 5000~6000 kbps up I don't think I can really pick much difference. At the lower bitrates it's like chalk and cheese: the x264 software encodes totally leap ahead of any of the GPU H.264 encodes I have tried at similar bitrates and sizes.
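
For anyone who wants to put numbers on "chalk and cheese" rather than eyeballing it, a PSNR comparison against the source is a quick sanity check. The sketch below is mine, not Chris's method: it assumes you have already exported matching frames from the source and from each encode as 8-bit NumPy arrays (e.g. frame grabs), and it simply applies the standard PSNR formula, where a higher dB figure means the encode is closer to the source.

    # Minimal PSNR check between a source frame and the same frame from an
    # encode. Assumes both frames are 8-bit NumPy arrays of identical shape.
    import numpy as np

    def psnr(reference, encoded):
        ref = reference.astype(np.float64)
        enc = encoded.astype(np.float64)
        mse = np.mean((ref - enc) ** 2)
        if mse == 0:
            return float("inf")                    # identical frames
        return 10.0 * np.log10((255.0 ** 2) / mse)

    # Usage sketch: compare psnr(source_frame, x264_frame) against
    # psnr(source_frame, gpu_encode_frame) at the same bitrate.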

Jan Ozer link:

http://www.google.com/url?sa=t&rct=j...68445247,d.dGI


Telestream link:

http://www.google.com/url?sa=t&rct=j...68445247,d.dGI

Chris Young
CYV Productions
Sydney

Adam Stanislav June 8th, 2014 08:53 AM

Re: GPU versus CPU rendering
 
Quote:

Originally Posted by Robin Davies-Rollinson (Post 1848027)
Surely, if it's digital, one can't be better or worse than the other?

It most certainly can.

A CPU can do a lot of things that a GPU cannot do. A CPU puts a few cores on roughly the same amount of silicon onto which a GPU packs many cores. To cram so many cores onto a chip, the GPU can only support a limited number of instructions and, in the case of floating-point operations, it has to sacrifice precision.

Unlike integers, floating-point numbers are not represented exactly. They use what is known in the decimal world as scientific notation, which represents a number such as 123.456789 as 1.23E02, i.e. 1.23 times ten raised to the power of two. Such a notation sacrifices precision to achieve compactness.

In digital, floating point does the same, but it uses a binary point instead of a decimal point. Each digit can only be a zero or a one (whereas in decimal each digit can be anything from 0 to 9). You need many more binary places in floating point than decimal places in scientific notation. And a GPU uses considerably fewer binary places than a CPU.

Not only that, but the large number of cores in a GPU forces its designers to use less exact hardware algorithms to perform its math. And they skip many useful math functions altogether. For example, a CPU can natively calculate a sine, a cosine and many other functions (FWIW, a CPU can calculate a sine and a cosine at the same time). A GPU has no native support for this kind of math and has to approximate it by oversimplified software algorithms.
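
A quick way to see the precision gap Adam is describing, without any GPU at all, is to compare 32-bit and 64-bit floats on the CPU (32-bit being the working precision GPU hardware is typically built around). This is only an illustration of "fewer binary places = less precision", not a claim about how any particular GPU computes:

    # Same value, two precisions: float32 keeps roughly 7 significant decimal
    # digits, float64 roughly 15-16.
    import numpy as np

    x = 123.456789
    print(np.float32(x))   # 123.45679    (rounded after ~7 digits)
    print(np.float64(x))   # 123.456789   (full value survives)

    # The gap carries over to functions like sine:
    angle = 1.2345678901234567
    sin64 = np.sin(np.float64(angle))
    sin32 = np.sin(np.float32(angle))
    print(abs(sin64 - np.float64(sin32)))   # on the order of 1e-8, versus
                                            # ~1e-16 round-off for the double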

You can only cram so much functionality onto a given area of silicon (or whatever material you use, but silicon is what most, perhaps all, use these days). And since a GPU places considerably more cores on roughly the same amount of silicon as the four or so cores of a CPU, you have no choice but to reduce the number and precision of available instructions.

A GPU is optimized for calculating the vertices of a 3D model in order to achieve high-speed games, where precision is not as important as it is in video editing, which generally does not even work with vertices but rather manipulates the colors of individual pixels using sophisticated mathematics. A gamer is too busy concentrating on winning the game, so he does not really pay much attention to how precise the pixels are. As long as the overall image looks great at a casual glance, the gamer is happy.

A filmmaker/videomaker cares about every little detail in the image. But the GPU is not designed with a filmmaker/videomaker in mind. Its purpose is to produce impressive games, not impressive movies or videos.

So, yes, from a video editor’s point of view, one not only can be worse than the other, one usually is worse than the other.

Christopher Young June 8th, 2014 08:14 PM

Re: GPU versus CPU rendering
 
Thanks for the real tech explanation, Adam. It makes even more sense to me now that I have an idea of what is going on in there.

Chris Young
CYV Productions
Sydney

Adam Stanislav June 8th, 2014 09:10 PM

Re: GPU versus CPU rendering
 
You are quite welcome, Chris.

Mervin Langley June 8th, 2014 09:14 PM

Re: GPU versus CPU rendering
 
Thank you for these informative responses. I am amazed at the collective intelligence represented here.

Robin Davies-Rollinson June 8th, 2014 11:54 PM

Re: GPU versus CPU rendering
 
Thank you Adam too. Very enlightening!

Jim Andrada June 12th, 2014 12:14 AM

Re: GPU versus CPU rendering
 
It was always my understanding that the GPU is used for real-time preview and the CPU for final rendering/codec processing. The GPU cores are great for running the same small program simultaneously on a lot of data points, but applying a codec is a more complex process.

Jeff Harper June 12th, 2014 07:22 AM

Re: GPU versus CPU rendering
 
Vegas does offer GPU assistance for preview, as well as for rendering with the Sony AVC codec. Tests conducted by Kim O and a few others of us have demonstrated that GPU assistance for rendering can reduce times somewhat. In my case I've only used it a few times for rendering; I generally turn off all GPU "help" when using Vegas.

Danny Fye June 12th, 2014 10:12 PM

Re: GPU versus CPU rendering
 
I have used both CPU and GPU for rendering and preview. GPU on my system makes everything much faster especially with the 580 card that I have.

One big difference-maker is the driver. Driver 296.10 is not only the fastest but also provides the best quality. I can get some strange-looking previews with the later drivers, as well as not-so-great quality renders, and they are also slower. With the 296.10 driver, I get the same great quality as I do with CPU only.

The problem is, NVidia apparently does not want to play nice in the sandbox. They also want to focus on their money-making game support over video.

So what we end up with is a sort of dead-end support for video with NVidia unless we want to pay too much for their inferior premium cards (I cannot remember the names) and get inferior results.

Maybe I should take a chance and sell the NVidia cards I have and go to AMD? I do not have the time and/or money to mess with that.

Besides, what I have at the moment is mostly not broken, so I will not fix it. I say mostly because certain projects with NBTP3 and large JPG graphics can cause Vegas to crash with the 296.10 driver but not with the later drivers. I do not use those often enough to justify the later drivers, so I use a workaround for those projects. The later drivers give poor quality results, so I do not want them anyway.

Norman Black June 13th, 2014 03:20 PM

Re: GPU versus CPU rendering
 
Quote:

Originally Posted by Mervin Langley (Post 1848005)
Is CPU render quality inherently better? I thought I read this somewhere.

A better quality encoder will give better results, regardless of whether it runs on the CPU or GPU. None of the current GPU encoders are as good as the more mature CPU encoders, and they probably never will be, because people expect speed when they think of GPU use, and one must make compromises in the algorithm to increase speed.

Note that in Vegas, MainConcept AVC CPU, OpenCL and CUDA are all completely different encoders using different algorithms. Enabling OpenCL/CUDA does not just "accelerate" the CPU encoder in some way.

Sony AVC only seems to use the GPU for motion estimation, with the rest being the CPU encoder. It does not show a big speed difference between CPU and GPU.

Quick Sync is a single-function, hardware-based encoder available as an option in Sony AVC.

Encoding a video is not a very parallel process, and a GPU gets its performance only from massively parallel algorithms. GPU cores are slower than CPU cores by a good margin, but there are hundreds or thousands of these simple execution units.

In contrast, most graphics/video effects can be massively parallel and map well to the GPU. Compositing is massively parallel. Whether you are playing back or encoding, you always get GPU benefit there.
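
To make that contrast concrete, here is a toy sketch (mine, purely illustrative): an alpha blend is a pure per-pixel operation with no dependencies, so it could be spread across thousands of GPU threads, while anything that predicts each frame from the previous result forms a chain that has to run in order. The frame sizes and the running "reference" are made up for illustration only.

    # Toy contrast between a massively parallel effect and a serial,
    # encoder-like dependency chain.
    import numpy as np

    h, w = 1080, 1920
    fg = np.random.rand(h, w, 3)
    bg = np.random.rand(h, w, 3)
    alpha = 0.5

    # Massively parallel: every pixel is independent of every other pixel.
    composite = alpha * fg + (1.0 - alpha) * bg

    # Serial: each step needs the previous step's result before it can run,
    # much like inter-frame prediction in an encoder.
    frames = [np.random.rand(h, w) for _ in range(5)]
    reference = frames[0]
    residuals = []
    for frame in frames[1:]:
        residuals.append(frame - reference)
        reference = 0.5 * reference + 0.5 * frame   # depends on prior result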



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network