October 7th, 2009, 03:38 PM | #1
Major Player
Join Date: Apr 2009
Location: ITALIA
Posts: 416
How can I create 50p footage from 50i footage?
Hi,
I'm trying to create 50p footage from 50i. Not 25p! I want to create 50p through the process that is used in 100 Hz LCD TV sets.

This is the idea: if you have 50i footage you can easily interpolate the fields and get 25p footage. Fine. But I would like to do something other than a simple interpolation. Let's say I have two 50i frames. I want to create a frame between the two, made from the lower field of the first one and the upper field of the second one. This is the process used by 100 Hz LCD monitors, which can actually invent non-existent frames by this method.

But Premiere doesn't work this way. If you export 50i footage to 50p, what you get is 50p footage with frames duplicated in pairs (1=2, 3=4, etc.).

You can try a slightly different approach: slow the 50i footage down by 50% (and then speed it up again). This forces Premiere to interpolate and create an extra frame, but here is the problem: Premiere "thinks in frames" and interpolates a totally new interlaced frame. This interpolated frame is a "fake" frame. It comes from nowhere, so it has much less definition than the originals (the one before and the one after). If you then render, speed it back up to 200% and watch the result, what you see is a good frame, a blurred frame, a good frame, a blurred frame, and so on.

Does anybody know a method of getting out 50 real interlaced or progressive frames with this composition?

INTERLACED (100i): 1U1L 1L2U 2U2L 2L3U 3U3L ...
PROGRESSIVE (50p): (1U1L) (1L2U) (2U2L) (2L3U) (3U3L) ...

That is exactly like interlaced, but with the two fields of each frame interpolated together.

Also note that this method, applied to an edited movie, creates smooth transitions at cuts (a frame that has one field from the scene before and one from the scene after).

Thanks,
Marcello
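For reference, the field re-pairing Marcello asks for (1U1L, 1L2U, 2U2L, ...) is what AviSynth's DoubleWeave() filter produces, as quoted later in this thread. A minimal sketch; the file path is a placeholder and the field order is an assumption:

AVISource("C:\example.avi") # placeholder path: a 720x576 50i PAL clip
AssumeTFF() # assumed field order; DV sources are usually bottom field first
SeparateFields() # 50 half-height fields per second (720x288)
DoubleWeave() # weave fields 1+2, 2+3, 3+4... into 50 full-height frames per second

Each output frame is a real weave of two consecutive fields; no pixel values are invented.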
October 8th, 2009, 11:19 AM | #2
Regular Crew
Join Date: Dec 2005
Location: Chicago, IL
Posts: 140
I use After Effects to get the type of output you want. The trick is to change the default composition setting to 50 fps and render to the desired format.
AE will also "interpolate" the motion of your images and produce 100p material from a 50i source; the "frame blending" mode just needs to be set to "pixel motion."
October 8th, 2009, 11:24 AM | #3
Major Player
Join Date: Apr 2009
Location: ITALIA
Posts: 416
Sorry... not 100p, but 100i or 50p, no?
Are you sure it doesn't interpolate with blurring, and is instead just rearranging fields the way I said?
October 8th, 2009, 12:02 PM | #4
Major Player
Join Date: Apr 2009
Location: ITALIA
Posts: 416
I imported 50i footage into a 50 fps composition (you can't choose i or p until you render).
I tried slowing the footage down by 50% and applying, as you said, frame blending -> pixel motion. The blending actually seems quite good, but it is tremendously slow, and checking at 200% or 400% magnification I think After Effects is interpolating. Also, "your" method works even if I make a 40 fps or a 60 fps composition, so I am quite sure that what AE is doing is interpolating. "My" method would only work with a frame rate that is exactly double (not more, not less), but on the other hand I can expect absolutely no degradation and almost instant speed.

Just to explain "my" idea better: if I shoot in 50i mode I am actually shooting 50 fps at half vertical resolution. Normally these 50 fps at half vertical resolution are combined two by two to create 25 full-resolution interlaced frames: 1+2, 3+4, 5+6. But there is no reason why combining them 1+2, 2+3, 3+4, 4+5 shouldn't work.

If we were talking about viewing this new 100i footage on a traditional analogue CRT TV set, this could have been a problem, because having the same lower field jump up a line on the next 1/50th of a second would look funny. But we are talking about LCD monitors now. These TV sets have no interlace at all: they have to create a single progressive frame for every 1/50th of a second. So we can say that all LCDs are progressive.

So why can't I use this same algorithm to create 50 fps footage? Sure, it will be interpolated footage, but the quality of 50p footage done this way will be exactly the same as that of 25p created by simple interpolation of 50i footage, so there is no reason not to do it. And you get better slow motion.

So, what I am saying is: if you shoot 50i rather than 25p (I often do, when we don't know exactly what we will be doing with the footage, because in an emergency I can always create 25p, even at lower quality, but I cannot do the reverse) and you then find out that you have to go progressive... well, I think doing it this way is much better.

If there is a way of doing it, I don't think it will be inside an editing application (Premiere, After Effects, Final Cut, etc.), because those programs assume you can ask for a 45% or 55% slow motion just as well as 50%. I think I will find (or have to make) an external tool that does this outside of any video environment. No video is actually going to be processed; there is just some data to move around. It's like inverting fields, or making a movie run backwards... I should expect almost no rendering time.
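As a quick check of the "just data to move around" idea, AviSynth (suggested in the next reply) can expose 50i as exactly those 50 half-resolution frames per second. A minimal sketch under the same placeholder assumptions as above:

AVISource("C:\example.avi") # placeholder path: a 720x576 50i PAL clip
AssumeTFF() # assumed field order
SeparateFields() # -> 720x288 at 50 fps, one frame per field, no pixels recomputed

Because it only reshuffles lines, it should run at essentially file-read speed.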
October 10th, 2009, 01:21 PM | #5
Inner Circle
Join Date: Aug 2005
Location: Atlanta/USA
Posts: 2,515
What you are trying to do is the kind of thing video processors like AviSynth can do. You have to be half video enthusiast and half computer programmer to do it, though, as it is a script-driven tool with no graphical interface.
On this forum the talk is mostly about professional video production; a specialized AviSynth forum is probably a better place for your inquiry.
October 10th, 2009, 04:16 PM | #6
Major Player
Join Date: Apr 2009
Location: ITALIA
Posts: 416
Thanks... at least I found somebody who understands what I want to do and is pointing me in the right direction. I'll see what comes out of this.
Thanks
October 11th, 2009, 01:46 PM | #7
Trustee
Join Date: Jan 2004
Location: Scottsdale, AZ 85260
Posts: 1,538
I don't want to be a downer, but if I understand your post correctly (and I may not), what you're trying to do is impossible.
The issue is temporal offset. A true progressive scan captures "now" in a single pass, hopefully fast enough that motion in the frame is properly frozen. When you scan progressively but too slowly, the upper part of, say, a car passing by ends up out of alignment with the bottom part, since the car moves along during the capture. So your goal is actually a high-resolution picture of the current scene with all the information in temporal (time) alignment.

When you record interlaced, there's a TIME difference between the starting point of the odd field scan and the starting point of the even field scan. If an object being taped is in motion, its POSITION relative to the camera will often change between those two fields. So the idea of somehow combining both fields into a single higher-resolution frame fails, because this approach actually generates LESS resolution: objects in motion, and in particular their details, become LESS distinct as the edges are merged.

Interpolation schemes fail as well. They can certainly calculate in-between frames, but they can ONLY do so accurately at the original raster resolution of the single interlaced field. So they can increase temporal resolution but NOT raster resolution. If you were to write an algorithm to fix the spatial offset, all you'd end up with is a copy of the lines from the first field in the place of the second field, which gives you no more actual raster resolution than you had to begin with, since there's no new data in the odd lines, just a repetition of the even ones.

Interlaced video actually has better temporal resolution than progressive, but less raster resolution. The temporal offset typically makes combining fields to increase actual resolution impossible. It would work for shooting "fixed" objects without motion, but if that's what you have, simply doubling the scan lines achieves the same thing.

The ONLY real way to increase both raster and temporal resolution is to shoot progressive in the first place. Sorry.
October 11th, 2009, 04:05 PM | #8
Major Player
Join Date: Apr 2009
Location: ITALIA
Posts: 416
No... sorry... I think you don't understand exactly.
I know that real 25p is better than 50i interpolated to 25p. Still, sometimes I shoot in 50i. The main reason is that sometimes I don't know exactly what I will be doing with the footage, and although 50i can be interpolated to 25p, it's not possible to go from 25p to 50i (because you lack the time information).

Let's say we are in PAL. If I shoot at 25p I have a full, good-quality 720x576. If I shoot at 50i I have two different 720x288 frames that happen to be interlaced. This means I have less resolution, but more time information, because I have information from two different moments in time, separated by 1/50th of a second.

So, taking for granted that real 25p is better than 50i interpolated to 25p: with the method I described you can have 50p at the same quality as 25p. Yes, real 25p would have had better frame resolution, but this way I can make much better slow motion. This method is not my invention; it is actually what 100 Hz LCDs do to simulate the difference between 50i footage (TV news look) and 25p footage (cinema look).

Let me explain it again, just to be clear.

Shooting 25p I have a full frame (720x576) every 1/25th of a second. Shooting 50i I have a half frame (full columns, half the lines) every 1/50th of a second. This means that what we call an interlaced frame is actually made of two half frames shot one after the other and then combined: 1+2, 3+4, 5+6, 7+8, etc.

But I am not forced to combine them this way! I could, for example, make a 720x288 50p feed! Or I could invent a new interlacing scheme made of three fields, combining them like this: 1+2+3, 4+5+6, 7+8+9, etc. That would give me frames representing 3/50ths of a second, and I would have only about 17 fps. And so on. Long ago, combining them two by two had a reason that came from the PAL and NTSC systems, the drawing of the electron beam on the cathode-ray tube, and so on. But if I choose to interpolate to progressive, then any choice I make is fine.

In my method I still merge two by two, but instead of jumping forward a whole 1/25th of a second, I jump forward only 1/50th of a second. So I make a new 100i stream with fields combined this way: 1+2, 2+3, 3+4, 4+5, 5+6, etc. After I have made my 100i stream, I interpolate the fields within each frame in the usual way: (1+2), (2+3), (3+4), (4+5), ...

Now, you can't deny that each new frame has the same quality (not more, not less) as a normal 25p interpolated frame (each frame coming from two fields)... BUT I can now create double the number of frames. And I am not duplicating: each frame contains information from the present 1/50th of a second.

I'm sorry, but on paper it all works for me. This is faster than actually interpolating frames with a slow-motion process. It is also a method that keeps good quality, instead of creating frames with halos and ghosts around moving objects. Still, it can only be applied when the slow motion is EXACTLY 50%.

This method is almost instantaneous. A software programmer with no video experience could do it, because there is nothing to know about video: the point is just to duplicate field data and move it around in the file, creating new "field couples".

Probably, more than a video solution, I should be looking for a software programmer!

M
October 11th, 2009, 07:00 PM | #9
Inner Circle
Join Date: Aug 2005
Location: Atlanta/USA
Posts: 2,515
Marcello, the more I read your posts, the more confusing they get. I think the starting point of your logic is flawed, and thus the whole process is senseless.
You state that the newer TVs double the frame rate by creating additional progressive frames. Well, most of them don't: all they do is blank the screen for a very short time and then display the same frame one more time. It's basically the same technology used by good old film projectors. Movies viewed at a true 24 fps look jumpy, so the projector blanks the screen with a mechanical shutter and shows each frame one more time, tricking your eye into "seeing" 48 fps when in reality it's still 24 fps; you just see each frame twice.

Only the most expensive displays do some sort of "motion compensated rate conversion" (not true intermediate progressive frames!), with more or less success; the newer models even boast 240 fps. Most manufacturers already know they do a poor job, so they leave it up to the user to turn the feature on or off. It's more of a marketing success than a real technical one. Even these high-end displays still suffer from the smearing inherent in LCDs due to refresh delay.

Besides, what would be the practical use? These LCD displays will still only accept the standard 25 fps or 30 fps signal and do the upconversion internally!

Last edited by Ervin Farkas; October 12th, 2009 at 05:18 AM. Reason: Typo corrected.
October 11th, 2009, 11:09 PM | #10
Regular Crew
Join Date: Jul 2009
Location: San Jose, CA, USA
Posts: 144
If I understand it right, what he wants to do is the following, using the example of a second-long 50i clip as the input.
1. Import the second-long 50i clip into a 25p (25 fps) workflow.
2. Change the workflow to 50p (50 fps), leaving you with duplicated frames.
3. For every even-numbered frame, "fill in the blanks" on the odd fields, and for every odd-numbered frame, "fill in the blanks" on the even fields, thus giving you an interpolated 50p product.
4. Export a second-long 50p (50 fps) clip.

Isn't that pretty much what happens when you take a 50i clip, run it at half speed, and then export it as 25p, except without the slow-mo?
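What Chris describes in steps 2 and 3, filling in the missing lines of each field to get one full frame per field, is essentially a bob deinterlace. A minimal AviSynth sketch (placeholder path, assumed field order):

AVISource("C:\example.avi") # placeholder path: a 50i PAL clip
AssumeTFF() # assumed; check your source's field order
Bob() # one full progressive frame per field: 25 fps 50i in, 50 fps progressive out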
October 17th, 2009, 08:53 PM | #11
Trustee
Join Date: Jan 2004
Location: Scottsdale, AZ 85260
Posts: 1,538
Chris, et al.
Yes, I understand what he's talking about. But there's no way you can accurately interpolate the extra temporal resolution if it's not there in the first place. Not if there's any inter-frame movement.
To avoid losing raster resolution over the time gap between the even and odd field captures, you'd have to take the odd-line information and MOVE the pixels back into alignment with their position during the original (even) scan. And that is precisely the equivalent of just line-doubling a duplicated EVEN field, which yields no more raster or temporal information than the original interlaced footage.

The computer guys can PREDICT where some motion is by in-betweening, but they can only actually reproduce something at the maximum resolution in which it was recorded. Anything else is interpolation. Which is all well and good, but it's temporal guesswork, NOT a pathway back to the actual true original resolution.
October 18th, 2009, 01:36 AM | #12
Major Player
Join Date: Apr 2009
Location: ITALIA
Posts: 416
Chris and Bill... it's not exactly as you say. You are describing more of a "normal" slow motion/doubling procedure.
Please remember that although interlaced footage is "packed" in frames representing 1/25th (two times 1/50th) of a second, what we actually have is just twice as many half-resolution frames at 1/50th each. So I could "pack them" in a different way, the way I described, which is not my invention but the algorithm used by 100 Hz LCD TV sets when fed 50i footage.

Imagine a normal 50i frame made of two interlaced fields. Delete the upper (1st) field. You are left with blank lines interlaced with the lower (2nd) field. Now move the lower field up one line (so it becomes the upper field) and you find yourself with blank lines in the lower field. Now fill those blank lines with the information coming from the upper field OF THE NEXT FRAME (the 3rd field) by moving it down (thus changing it into a lower field).

This new frame that you created really represents that moment in time, no more and no less than the previous and the next frame. It is in fact made of two fields, each shot in consecutive 1/50ths of a second, and it is in no way an "interpolation". It has EXACTLY the same quality as the previous and the next frame, if you see what I mean. Obviously interpolation will occur if you then make a progressive frame from this interlaced frame, but that is just the "normal" in-frame interpolation (the same you would do going from 50i to 25p).

What I am saying is that if you shoot 50i and you want to make 25p, you might as well make 50p with exactly the same quality!
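Taken together, the pipeline described above (re-pair the fields, then interpolate within each woven frame) might be sketched in AviSynth like this; the final vertical blur is only a crude stand-in for the in-frame interpolation step, and the path and field order are assumptions:

AVISource("C:\example.avi") # placeholder path: a 720x576 50i PAL clip
AssumeTFF() # assumed field order
SeparateFields() # 50 half-height fields per second
DoubleWeave() # new pairs 1+2, 2+3, 3+4... -> 50 woven frames per second
Blur(0, 0.5) # crude vertical blend of the two fields inside each frame

A real implementation would swap the last line for a proper field-aware interpolator, but the weave itself is exactly the field arrangement described above.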
October 18th, 2009, 05:53 PM | #13
Major Player
Join Date: Dec 2006
Location: Auckland, New Zealand
Posts: 444
Marcello, your explanation includes the problem with what you are trying to do, and it's an obvious one when you think of full-frame motion, which is probably why it hasn't been implemented in standard NLEs.
Original footage: FRAME 1 (field 1 then field 2), then FRAME 2 (field 3 then field 4).
You want to show: FRAME 1 (field 1 then field 2), then FRAME 2 (field 2 then field 3).

The thing is, if you did this with full-frame video just by swapping fields from upper to lower, field 2 would move "up" or "down" on the screen, so your video would constantly jitter up and down from frame to frame, as the scan lines register at different pixels on the screen.

Let's label our fields as "Field Frame_Field_Line", e.g. Frame 1, Field 1, recorded on line 1 of the camera = Field 1_1_01; Frame 1, Field 2, recorded on line 2 of the camera = Field 1_2_02; and so on. If we had 10 lines, fields would display as follows.

Frame 1 (normal):
Line 01 - Field 1_1_01
Line 02 - Field 1_2_02
Line 03 - Field 1_1_03
Line 04 - Field 1_2_04
Line 05 - Field 1_1_05
Line 06 - Field 1_2_06
Line 07 - Field 1_1_07
Line 08 - Field 1_2_08
Line 09 - Field 1_1_09
Line 10 - Field 1_2_10

Then in Frame 2, to keep Field 2 in the same spatial position you have to blank the first line; if you move the picture up or down, the entire image shifts one line and jitters. Each field has to display on the same line it was recorded on, AND the same line it just played back on in Frame 1.

Frame 2:
Line 01 - (New) Blank
Line 02 - Field 1_2_02
Line 03 - Field 2_1_03
Line 04 - Field 1_2_04
Line 05 - Field 2_1_05
Line 06 - Field 1_2_06
Line 07 - Field 2_1_07
Line 08 - Field 1_2_08
Line 09 - Field 2_1_09
Line 10 - Field 1_2_10

OK, so now the image hasn't jumped up and down between Frame 1 and Frame 2, the fields are in correct temporal order, and you've slowed your footage down. Frame 3 becomes what would have been Frame 2, i.e.:

Frame 3:
Line 01 - Field 2_1_01
Line 02 - Field 2_2_02
Line 03 - Field 2_1_03
Line 04 - Field 2_2_04
Line 05 - Field 2_1_05
and so on.

It's worth noting that the ONLY result this methodology can have is a 33.33% slowdown, not half speed (it increases duration by 50%: one extra frame for every two original frames). But I don't know how NLEs are programmed; it might be that a 33.33% slowdown in your NLE does exactly this. You could then take this slowed-down footage, interpolate it to 50p using whatever math your NLE normally uses, and slow the interpolated footage down further. Maybe AVISynth can do this, as others have suggested.
__________________
www.afterglow.co.nz
October 19th, 2009, 01:16 AM | #14
Major Player
Join Date: Apr 2009
Location: ITALIA
Posts: 416
I understand the jitter problem; that is why I don't want to use the 100i footage directly but the interpolated 50p. Merging the two fields makes when the fields were shot much more important than where (on which line) they sit. Also, I don't think the camera actually skips lines at the moment of shooting; there are many calculations the camera does to build frames with fields, and it's not literally capturing one line out of every two.
Also, no, it's not one extra frame for every two (33%) but one for every one (50%). It's true that I create an extra frame between every two frames, but every frame (except the first and last) can generate two extra ones. Let's say I have frames 1, 2, 3, 4. I can generate frame 1a using information from 1 and 2, and frame 2a using information from 2 and 3. So frame 2 actually helped generate TWO new frames, and in the end the frame count doubles.

Finally, I KNOW 100 Hz LCDs work this way. They actually display 50i footage as 50 frames and 25p footage as 25. But the 50 frames (50p frames!) of the LCD screens have no interlace (unlike old CRTs), and they are not half-resolution either (not just line-doubled fields): they actually use the algorithm I described.

The reason NLEs don't have a similar mode is, in my opinion, that they need an interpolation algorithm that works at whatever speed you choose for slow motion (or frame-rate doubling). The method I described is faster (and better) but works only at EXACTLY 50%.
October 19th, 2009, 08:52 AM | #15
Major Player
Join Date: Jun 2003
Location: Miller Place, NY
Posts: 820
I've held off on replying because I was so very confused, but I think I get it now, and as has been suggested, Avisynth is probably your answer. In fact, what you're attempting sounds like what the DoubleWeave() filter does. From the Avisynth documentation:
"If the input clip is field-based, the DoubleWeave filter operates like Weave, except that it produces double the number of frames: instead of combining fields 0 and 1 into frame 0, fields 2 and 3 into frame 1, and so on, it combines fields 0 and 1 into frame 0, fields 1 and 2 into frame 1, and so on. It does not change the frame rate or frame count." Even interlaced footage isn't truly "field based", the way they mean it, but the documentation goes on to say that when given frame-based video it will first separate the fields and then do its work, producing the proper results. My testing seems to confirm this, but I'm not sure if it's exactly what you're looking for, so you'd need to try it out yourself. On the other hand, at least this: I'm certain can be easily achieved in Avisynth with the SeparateFields() filter. It turns each field of the input clip into a frame of the output, producing a half height video as a result, so 720x576 50i PAL would become 720x288 50p. As I said, I only suspect that I understand the proposed methodology of what you're trying to do, but installing Avisynth would give you the ability to experiment with clips on your own terms. It's a scripting engine, and lets you combine operations in almost any way you can imagine. The Doom9 forum is the best place to go for advice about the software; if you want to ask a direct question, you can register an account, make sure you read the rules (no more stringent than what we have here), and head to the Avisynth Usage board, though of course I recommend browsing it first. There's also the Avisynth Development section for more about writing actual plugins (as opposed to just scripts) and, if you're so inclined, contributing code to the core software itself. For your situation I'd stick to the Usage area. Avisynth is incredibly versatile, and while intimidating at first should let you achieve whatever you're after. Start at the main wiki page to learn the basics, and if you're feeling adventurous take a look at the script syntax documentation to see exactly how powerful it is. To take it even further, the exceptionally complex, programmer-oriented stuff is in this reference section, though honestly I don't see your needs requiring such detailed information. Between the wiki and the documentation that comes with an installation you should be able to piece together a script to do what you want and see how well it works. Mind you, if you go as far as installing Avisynth, this discussion might not matter. The built-in bob deinterlacer does a serviceable job, and runs in real time on my older system (Q6600, 2GB RAM). To use it just save the following as a .avs file: AVISource("C:\example.avi") AssumeBFF() Bob(0.0,1.0) You can open .avs scripts in an application like VirtualDub, which supports them natively. There's a way to serve the output of these scripts to any application you want, actually, but it's rather complicated and this post is going to be long enough as is. For experimentation purposes, VirtualDub is the way to go. As for the details of what the script is doing: the source filter loads the clip, and obviously you'll need to change the directory path and filename to match your own file. There are different source filters to use if your input is not an AVI file, namely DirectShowSource, but that's all explained in the Avisynth documentation. The AssumeBFF() is there because it's always best to explicitly define the field order of your clips. 
Avisynth usually does a good job of detecting it automatically, but it's safest to define it yourself if you know what it is. Use AssumeTFF() if you have top field first material. Finally the bob operation itself; simple as that, and you don't even need to use any arguments, but entering the coefficients as I've done here will, according to the manual, strictly preserve the original field data and only fill in the missing lines. If you're unhappy with the quality of those results, as I would be, there are several other bob deinterlacers available (as well as standard deinterlacing plugins), listed at the External filters page on the Avisynth wiki. My personal favorite is TempGaussMC, the slowest of the bunch and most involved to set up, but the most stable (the least up and down line twitter), and produces the best results if you can stand the truly mind-numbing pace of single-digit frames per second. There's a multithreading plugin available that can help speed things up, but it would only complicate our discussion here, so I'm choosing not to go into detail right now. The next version of Avisynth (2.6, currently in testing) will have that multithreading code merged in, and won't require installing any plugins. For reference, here's a sample I deinterlaced with the TempGaussMC script from a clip I recorded while testing my MultiRig Pro: http://www.sendspace.com/file/zvssos The big orange "Download" button is underneath the advertisements. I left it at 720x480 to avoid letting any resizing affect the image (though that's really just paranoia, the resizers do very good work), so it may not display at the proper aspect ratio, but this shows the output of TGMC from a 60i NTSC clip input, and gives an idea of what you can expect to accomplish with the admittedly enormous amounts of time it will take to run the script. Of note are the relatively tiny details in the tree branches as we move around, and the shallow diagonal lines on both the van itself and the lengths of pipe up on the roof racks. They're still a little chunky at certain angles, but nonetheless smoother and more stable than any of the other bob deinterlacers I've seen. I know glacier-slow deinterlacing is not what you're after, but I thought maybe people following the discussion might benefit from seeing what can be done. If you have any questions about operating Avisynth, I'll be glad to try and help answer them, but as far as your original question this is about all the advice I can offer. |