DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Open DV Discussion (https://www.dvinfo.net/forum/open-dv-discussion/)
-   -   upsampling / supersampling (https://www.dvinfo.net/forum/open-dv-discussion/13091-upsampling-supersampling.html)

Frank Ladner August 10th, 2003 09:07 PM

upsampling / supersampling
 
I have been thinking on this for a while and wanted to see if you guys had any ideas.

After watching some footage on the Discovery channel of high-detail landscapes and backgrounds, I thought about how that is basically 720x480, which is what my camera captures at, but the detail is a lot better looking than a standard DV camera. Of course, this has a lot to do with the optics and how many pixels are being sampled up front. (Or if it is shot on film and transferred to video, it's automatically better.)

So I looked into the theory of upsampling images. Everyone knows that you can shrink a digital image with no problem, but when you blow one up, even using the best algorithms such as Lanczos, B-spline, and bicubic resampling, you'll still introduce a lot of softness to the image.

One approach for you Photoshop guys is to take your 720x480 frame and upsize it in small increments, several times. This is better than doing it all in one step.

Then you do all your hue/saturation and curves adjustments and run the unsharp mask filter (which is easy to overdo if you are running it on the 720x480 image), since you would have more image data to manipulate.

Now when you take the final image and size it back down to a 720x480 frame, you should have a better-looking image.
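The workflow described above (upsize in increments, adjust while large, size back down) can be sketched in Python with Pillow. The filenames, step count, and enhancement amounts here are illustrative assumptions, not anything from the thread:

```python
from PIL import Image, ImageEnhance

def process_frame(path_in, path_out, factor=2, steps=4):
    """Upsize a 720x480 frame in small increments, adjust it, size it back down."""
    img = Image.open(path_in)
    w, h = img.size
    # Upsize in several small increments rather than one jump.
    per_step = factor ** (1.0 / steps)
    for _ in range(steps):
        w, h = round(w * per_step), round(h * per_step)
        img = img.resize((w, h), Image.BICUBIC)
    # Do the tonal adjustments while the image is large.
    img = ImageEnhance.Color(img).enhance(1.1)      # mild saturation boost
    img = ImageEnhance.Contrast(img).enhance(1.05)  # mild contrast boost
    # Size back down to the original 720x480 frame.
    img = img.resize((720, 480), Image.LANCZOS)
    img.save(path_out)
```

In Photoshop this would be recorded as an action and batch-applied to an exported frame sequence; the sketch above is the same idea scripted directly.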

Of course, this may be prohibitive since you would have to export your footage as a sequence of frames and run this one frame at a time using a Photoshop action script, but it may be worth it if you only use it on, say, landscapes.

This I have yet to try, but one thing I have done was to take a scanned 35mm picture, add grain, and render to a 720x480 avi, and it looked like footage. (Of course, you can't do this on waterfall scenes.)

Anyhow, I'm just throwing some ideas out there. Let me know what you think.

,Frank

Gints Klimanis August 10th, 2003 10:07 PM

> So I looked into the theory of upsampling images. Everyone knows that you can shrink a digital image with no problem, but when you blow one up, even using the best algorithms such as Lanczos, B-spline, and bicubic sampling,

Actually, image shrinking suffers from a similar problem: aliasing, which adds noise to the picture.

None of the upsampling algorithms listed above is particularly good. They are linear combinations of a small number of pixels, and they are popular because they are computationally simple for the job they do. Since few samples are required, the phase shift is also small. Larger filters will shift the image by two or more pixels, requiring extra processing to restore the result. Ideally, you would use a two-dimensional sinc function, sin(x)/x, for interpolation during resampling, although this computationally intensive algorithm takes more time. At least 8 or so points are needed before the polynomial interpolators are bested. With even more points, the polynomial interpolators fall behind quickly.
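A minimal one-dimensional sketch of the truncated-sinc interpolation idea described above (the tap count and the test signal are my own choices):

```python
import numpy as np

def sinc_upsample(x, factor, taps=8):
    """Upsample a 1-D signal by an integer factor with a truncated sinc kernel."""
    n = len(x)
    out = np.zeros(n * factor)
    for m in range(len(out)):
        t = m / factor  # output position, in input-sample units
        # Sum contributions from the nearest `taps` input samples only.
        k0 = max(0, int(t) - taps // 2)
        k1 = min(n, int(t) + taps // 2 + 1)
        for k in range(k0, k1):
            out[m] += x[k] * np.sinc(t - k)  # np.sinc(u) = sin(pi*u)/(pi*u)
    return out

# A pure sine wave passes through nearly unchanged; the original samples
# are reproduced exactly, because sinc is zero at every other integer.
t = np.arange(64)
sig = np.sin(2 * np.pi * t / 16.0)
up = sinc_upsample(sig, 4)
```

Widening `taps` moves this closer to ideal sinc interpolation at the cost of more multiplies per output pixel, which is exactly the trade-off the post describes.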

> you'll still introduce a lot of softness to the image.

If someone showed you the frequency response of these interpolators, you would gag. There will always be some softness as you zoom in, because you are trying to view absent information at a higher resolution. Worse, these algorithms reduce the original information.

Since I work in audio, I'll use an audio example. Say you have a telephone recording, which has a sampling rate of 8000 Hz and a theoretical bandwidth of 4000 Hz (the real bandwidth is somewhat smaller). Even if you record your telephone line on a 48000 Hz digital recorder and play it back on a fantastic stereo system, the recording will have no more high-frequency content than the original. If you want to make it brighter, you can use an equalizer to boost frequencies below 4000 Hz. The equalizer won't work above 4000 Hz because there are no frequencies present there. However, if one of the bicubic methods were used to convert the 8000 Hz telephone signal to a sampling rate of 48000 Hz, frequencies in the 2000-4000 Hz range would be tapered off, resulting in a recording that is significantly duller than the original. Although audio and video signals enter our bodies through different sensors, they are processed in a similar manner.
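A quick numerical check of this point, using ideal FFT-based resampling as a stand-in for a good resampler (the tone frequencies are arbitrary):

```python
import numpy as np

fs_in, fs_out = 8000, 48000
t = np.arange(fs_in) / fs_in                 # one second of "telephone" audio
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

# FFT-domain resampling: zero-pad the spectrum (equivalent to ideal
# periodic sinc interpolation), then transform back at the higher rate.
X = np.fft.rfft(x)
n_out = len(x) * fs_out // fs_in
Y = np.zeros(n_out // 2 + 1, dtype=complex)
Y[:len(X)] = X * (fs_out / fs_in)            # scale to preserve amplitude
y = np.fft.irfft(Y, n=n_out)

# The 48 kHz version contains nothing above the original 4 kHz limit:
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(n_out, d=1 / fs_out)
```

The upsampled signal plays back at 48 kHz, but everything above 4 kHz is empty, just as the post says: resampling cannot create high-frequency content that was never recorded.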

> One approach for you Photoshop guys, is to take your 720x480 frame and upsize it in small increments, several times. This is better than doing it all in one step.

This is disturbing. If Photoshop users are doing multi-pass resampling, then Adobe should offer higher-order interpolators so that only one step is needed. They are wasting your time.

Vladimir Koifman August 11th, 2003 05:20 AM

Could it be that the Discovery Channel cameras do not have optical anti-aliasing? Then what you see is many aliasing artifacts from details that cannot be completely resolved. Visually this can produce a detail-rich picture, even though there is no actual resolution there.
On a DV camera, optical anti-aliasing basically removes all the details that cannot be resolved by the sensor. Mathematically this approach is more correct, but it is perceived as a lack of detail. Discovery might employ mathematically incorrect, but visually more pleasing, effects.

Rob Lohman August 11th, 2003 05:34 AM

I can't imagine how upsampling an image and then downsampling it will INCREASE the quality. It may alter the look of the image to something that you find more interesting.

I've seen some good results upsampling images using fractal algorithms, although these are very time-consuming.

Frank Ladner August 11th, 2003 06:25 AM

Rob: I was thinking that not just upsampling, then downsampling, but upsampling, adjusting, then downsampling would be better, since curves adjustments, etc. would have more pixels to work with when the image is larger. True, it probably wouldn't increase the quality, but it might make the image appear to look better.

Vladimir: I haven't heard a lot about optical anti-aliasing. I will look into this more.

Gints: Very thorough answer there. Thanks for explaining that!

Jeff Donald August 11th, 2003 06:31 AM

I've seen the technique Frank is talking about. It works very effectively in still photography. But it really is just tricking the mind and eye into perceiving more detail than is actually in the image. I don't think it would be a viable tool for video production because of the lengthy rendering times.

Frank Ladner August 11th, 2003 06:46 AM

Jeff: Yes, it would be very cumbersome to run this process on each frame. It would maybe be something you would do with just a particular piece of footage, like a landscape. Even that would be time consuming, though.

I was wondering - when DV footage is transferred to film, what sort of process do they run on the footage to make it look better when it is blown up?

Jeff Donald August 11th, 2003 07:04 AM

I would suspect they use unsharp mask to make the images appear sharper to the eye. There are some new sharpening filters (software) coming soon that should improve the quality and speed of operation.
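For reference, unsharp masking is just original + amount × (original − blur). A bare-bones grayscale sketch in NumPy, with a box blur standing in for the usual Gaussian (the kernel size and amount are arbitrary):

```python
import numpy as np

def box_blur(img, k=3):
    """Separable box blur with edge padding (a stand-in for a Gaussian)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    # Horizontal running average, then vertical.
    h = np.mean([p[:, i:i + img.shape[1]] for i in range(k)], axis=0)
    return np.mean([h[i:i + img.shape[0], :] for i in range(k)], axis=0)

def unsharp_mask(img, amount=0.7, k=3):
    """Sharpen by adding back the difference between the image and its blur."""
    blur = box_blur(img.astype(float), k)
    return np.clip(img + amount * (img - blur), 0, 255)
```

On a step edge, this produces the characteristic overshoot on the bright side and undershoot on the dark side, which is exactly the "ringing" over-sharpening artifact discussed later in the thread.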

The Discovery Channel also does a great deal of computer animation and the realism is amazing on some of the scenes.

Ron Johnson August 11th, 2003 09:54 AM

There are also fractal-based image 'magnification' techniques used in digital photography. I don't recall any specific products, but a few years ago when CCD pixels were very expensive, these programs seemed to be popular.

Frank Ladner August 11th, 2003 11:18 AM

I have heard complaints about how DVDs are being over-sharpened, showing the white/black "ringing" effect on the bigger TVs.

Jeff Donald August 11th, 2003 02:14 PM

Most viewers set the sharpness control on their TVs to the highest setting. This creates many visual problems for DVDs and other low-resolution media. It should really be set to the lowest setting.

Gints Klimanis August 11th, 2003 02:58 PM

> I've seen the technique Frank is talking about. It works very effectively in still photography. But it really is just tricking the mind and eye into perceiving more detail than what is actually in the image. I don't think it would be a viable tool for video production because of the lengthy rendering times.

Although I'm new to videography, I used to experiment with image processing. I noticed that some aliasing during resizing, especially in black-and-white images, appeared preferable to none at all. The aliasing is most pronounced around edges (which are high frequencies), so this noise reads as an increase in image sharpness, or more information around the edges.

Gints Klimanis August 11th, 2003 03:00 PM

For video, I suspect this aliasing noise around the edges would animate (like the familiar line "jaggies") and thus be undesirable.

Gints Klimanis August 11th, 2003 03:16 PM

> Rob: I was thinking that not just upsampling, then downsampling, but upsampling, adjusting, then downsampling would be better, since curves adjustments, etc would have more pixels to work with when the image is larger. True, it probably wouldn't increase the quality, but maybe make it appear to look better.

Frank,

Provided the filters used to upsample and downsample are reasonably good, more image fidelity would be preserved by upsampling, processing, then downsampling. Operating at higher sampling rates allows more bandwidth for out-of-band or aliased components. Non-linear effects such as color curves often introduce out-of-band components, and most video processors accept these artifacts as a fact of life. However, much of this could be reduced by processing at a higher sampling rate, which spreads the aliased energy into the upper frequencies. These undesirable components would then be filtered out when the signal is downsampled. Without the extra bandwidth, the aliasing components spill into the baseband and can never be eliminated. Generally, these aliasing components are part of the random, animated grunge one sees at edges.
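This claim can be checked numerically in one dimension. In the sketch below (my own construction, using ideal FFT resampling and a squaring curve as the stand-in non-linearity), a 3 kHz tone at an 8 kHz rate is pushed through the curve directly and via a 4x-oversampled path:

```python
import numpy as np

fs, n = 8000, 8000
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 3000 * t)      # tone near the 4 kHz Nyquist limit

def curve(v):
    return v * v                       # non-linear "color curve" (squarer)

# (a) Apply the curve at the base rate: the 6 kHz product aliases to 2 kHz.
a = curve(x)

# (b) Oversample 4x (ideal FFT interpolation), apply the curve, filter out
#     everything above 4 kHz, then downsample back to 8 kHz.
X = np.fft.rfft(x)
Xp = np.zeros(4 * n // 2 + 1, dtype=complex)
Xp[:len(X)] = X * 4
xo = np.fft.irfft(Xp, n=4 * n)         # the same tone at a 32 kHz rate
B = np.fft.rfft(curve(xo))
B[np.fft.rfftfreq(4 * n, d=1 / (4 * fs)) > 4000] = 0
b = np.fft.irfft(B[:n // 2 + 1] / 4, n=n)

def level_at(sig, f):
    """Normalized spectral magnitude at frequency f (bins are 1 Hz here)."""
    return np.abs(np.fft.rfft(sig))[int(f)] / len(sig)
```

In path (a) the distortion product lands at 2 kHz, inside the band, where no later filter can separate it from real content; in path (b) it sat at 6 kHz, above the band, and was removed before downsampling.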

Audio signal processing has found significant quality improvements by operating at higher sampling frequencies. Basic guitar distortion, called waveshaping, is quite similar to video's color curves: a non-linear curve is applied to the audio signal to remap it. The effect is harmonic generation, and the results are usually easy to hear.
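A tiny audio sketch of waveshaping, using tanh soft-clipping as my stand-in for "a non-linear curve": a single 440 Hz sine goes in, and new harmonics come out.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                   # one second of audio
x = 0.8 * np.sin(2 * np.pi * 440 * t)    # clean 440 Hz sine
y = np.tanh(3.0 * x)                     # waveshaper: remap each sample through a curve

spec = np.abs(np.fft.rfft(y)) / len(y)   # 1 Hz bins
# The fundamental stays at 440 Hz; new energy appears at 1320 Hz, 2200 Hz, ...
```

Because tanh is an odd-symmetric curve, only odd harmonics are generated; an asymmetric curve (like a typical color curve) would produce even harmonics too.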

Frank Ladner August 12th, 2003 11:29 AM

Gints: That is very interesting!

In theory, if an image was converted to, say, 48 bits, manipulated, and then dropped to 24, would that work better than manipulating in 24 bit mode, or would the footage have had to originate in 48 bit mode?

I know a lot of the newer applications are taking advantage of 48 bit editing.

Gints Klimanis August 12th, 2003 01:25 PM

Frank,

Regarding bit depth, your image will probably have less noise if converted to a greater number of bits, provided that the bits are allocated properly. Most bit-depth increases are added to the bottom end, something I call "foot room." If the bits are added to allow for larger values, "head room" has been increased. Bit increases generally allow for greater signal resolution, but the conversion back to the shorter word width can be tricky when you have an image that exceeds the dynamic range of the shorter word. It's relatively straightforward to scan the image for peaks and clip anything beyond them. However, it can also be useful to compress the peaks (hot spots) using a non-linear algorithm.
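A toy numerical illustration of why the higher-precision intermediate helps (the 0.4 gain value is arbitrary): darken an 8-bit grey ramp and brighten it back, rounding to 8 bits either after every step or only once at the end.

```python
import numpy as np

gradient = np.linspace(0, 255, 256)           # an 8-bit grey ramp

def to8(v):
    """Quantize to 8-bit levels."""
    return np.clip(np.round(v), 0, 255)

gain = 0.4
# Rounding after every step (a pure 8-bit pipeline) collapses grey levels:
low_precision = to8(to8(gradient * gain) / gain)
# Carrying the intermediate at full precision and rounding once does not:
high_precision = to8((gradient * gain) / gain)

levels_low = len(np.unique(low_precision))     # banding: far fewer than 256
levels_high = len(np.unique(high_precision))   # all 256 levels survive
```

The 8-bit pipeline is left with roughly 100 distinct grey levels, which is exactly the banding that shows up after a stack of adjustments; the wide intermediate keeps all 256.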

Jeff Donald August 12th, 2003 01:35 PM

The problem is not really the conversion, but rather the availability of 16-bit (48-bit) software. Photoshop has limited tools in 16 bit (no layers, for example). However, PS 8 is rumored to have more 16-bit functionality (due to be released this fall). There is a program called Film Gimp (it's free) that has a wide variety of 16-bit tools. Film Gimp is used to manipulate files for film and video in 16-bit mode. Some video boards also offer proprietary 16-bit paint/color correction programs. The boards, however, are usually thousands of dollars.

Gints Klimanis August 12th, 2003 01:50 PM

Jeff, thanks for the tip.

FilmGimp seems to have been renamed CinePaint.
http://cinepaint.sourceforge.net/

Actually, I'm looking for an image processing program that can handle Motion JPEG (MJPEG) AVI files from my Canon S40 camera. Studio 8 can handle the files, but the software offers minimal processing. Photoshop doesn't do it. Would anyone know of a good program that can handle MJPEG?

Thanks !

Gints

Frank Ladner August 12th, 2003 02:36 PM

Once again, thanks for the extensive information, guys!

Before moving to DV, I was using an Iomega Buz capture device, which captured to an MJPEG compressed AVI. This came bundled with MGI Videowave, which I'd use for capturing only.

You should have no problems opening the MJPEG AVI into Adobe Premiere or After Effects.

Robert Knecht Schmidt August 12th, 2003 04:31 PM

I see that CinePaint claims to operate in "high dynamic range"--this is a bit misleading since it is actually only capable of 32 bit depth image manipulation, whereas what is normally referred to as HDR is typically a 96 bits/pixel floating point format (though there are actually several formats for describing HDR images--see, for example, Larson's descriptions of several such formats).

Rob Lohman August 14th, 2003 06:34 AM

Gints,

As long as you have an MJPEG codec on your system, any NLE (editor) should be able to handle your footage. From there you can export individual frames so that Photoshop can import them.

Gints Klimanis August 14th, 2003 03:07 PM

Thanks, Rob. Actually, I'm lazy. I would like to use the Photoshop auto color balance and levels functions and some other Photoshop processing like layers, apply them to an entire MJPEG file, and output to MPEG-1. Studio 8 doesn't have enough image processing options. Vegas only shows me the audio.

Hmmm. I checked the web for MJPEG codecs, but most (can you believe the gall of these software developers?) require a monetary exchange. Chuckle. If any of you know of some MJPEG codec freeware, please post! I'd pay $20 for a great codec, but I'd like to verify that it does the job in Photoshop and Vegas.

Rob Lohman August 15th, 2003 09:08 AM

Photoshop ISN'T a VIDEO editor. It cannot and will not read movies (which MJPEG is). The ONLY things you can do are:

1) load your MJPEG footage into an NLE, export individual frames, and load these into Photoshop one by one

2) some NLEs might load a Photoshop filter that can be used on your footage

I don't think there is any freeware MJPEG codec. I'm surprised your camera doesn't come with one, since it makes such movies. If you can play the movie in Windows Media Player, you already HAVE an MJPEG codec, so you don't need another.

Again, even with an MJPEG codec, Photoshop WILL NOT load your movie. If you want to see for yourself, make sure you have an MJPEG codec (the movie plays in WMP) and try to load it in Photoshop.

Gints Klimanis August 16th, 2003 01:55 PM

Thanks, Rob. My brother suggested Adobe ImageReady for splitting and combining images from movies. Each movie frame becomes a layer in a Photoshop file. He didn't know how to select all 450 layers at once for a painless application of the desired image processing, so applying the same settings to each frame is tedious using this method. I'll keep looking. I don't have Premiere ... yet.

Glenn Chan August 16th, 2003 06:17 PM

Quicktime Pro can convert from most video formats to an image sequence (lots of image files with a sequential naming system). You can use Photoshop's batch actions to process that image sequence, then use QT Pro to convert the image sequence to a video format (uncompressed AVI for example).
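The same batch idea can be sketched outside Photoshop, e.g. in Python with Pillow (the directory names and contrast amount below are placeholders):

```python
import glob
import os

from PIL import Image, ImageEnhance

def batch_process(src_dir, dst_dir, contrast=1.1):
    """Apply one adjustment to every frame of an exported image sequence."""
    os.makedirs(dst_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(src_dir, "*.png"))):
        img = Image.open(path)
        img = ImageEnhance.Contrast(img).enhance(contrast)
        img.save(os.path.join(dst_dir, os.path.basename(path)))
```

The processed frames keep their sequential names, so a tool like QuickTime Pro can reassemble them into a movie afterwards.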

You could up-rez your DV movie to HD by using Genuine Fractals in Photoshop, for example.

Increased bit depth (i.e. rendering in 32-bit floating point, which Vegas Video [I think] and FCP and higher-end solutions do) can reduce quantization error on renders involving stacks of filters. Even if your capture and output format is mini-DV (8-bit), you can still benefit from rendering at increased bit depth. This should reduce the banding that happens with certain color correctors (the not-so-good ones). I'm not too clear on this, though, as 32-bit floating point seems to be overkill when most capture formats do not come close (12-bit for some film transfers is the highest I've seen; DV is 8-bit and uncompressed standard-definition video is 10-bit).



DV Info Net -- Real Names, Real People, Real Info!
1998-2025 The Digital Video Information Network