Another FrameFlex-related topic popped up on the Avid Community forums: can the extra pixels available when linking to a higher-resolution source be used for stabilization? That thread can be read here. As noted in my previous blogs on FrameFlex, any effect applied to an image operates on the scaled output of FrameFlex, which matches your project format. So if you are linking to a 4K UHD source clip in a 1080 HD project, any effect is applied to the already scaled 1080 HD image.
The only time no scaling is applied is when the extraction target equals the project size. That is done by checking “Size matches project raster,” which makes the extraction pixel for pixel from the source. Any re-framing larger or smaller than that is scaled, or resized, to fit.
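To make the scaling arithmetic concrete, here is a small Python sketch (an illustration only, not Avid code; the function name is mine) of the factor FrameFlex applies for a given extraction size and project raster:

```python
# Illustration only - not Avid code. Assumes the extraction and the
# project share the same aspect ratio, as with UHD and HD.
def frameflex_scale(extraction_w, extraction_h, project_w, project_h):
    """Factor by which the extraction is resized to fit the project
    raster; 1.0 means a pixel-for-pixel extraction (no scaling)."""
    return project_w / extraction_w

# A full 3840x2160 UHD frame extracted into a 1080 HD project is
# scaled down to 50%:
print(frameflex_scale(3840, 2160, 1920, 1080))  # 0.5

# "Size matches project raster": a 1920x1080 window of the UHD source
# is extracted pixel for pixel - no scaling:
print(frameflex_scale(1920, 1080, 1920, 1080))  # 1.0
```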
So the desire is to take advantage of the larger-resolution file and something called “oversampling” to maintain the highest quality image possible. Here is an online article that covers some of those advantages: “Why Everyone Should be Shooting 4K - Even for HD Delivery”. But oversampling can work against you depending on the overall factor difference between the source and target sizes, as discussed here.
The proposed workflow for stabilization is to perform it at the file’s native source resolution. This means creating a project that matches the source resolution. If the aspect ratio is the same, as with HD and UHD, it is possible to just switch the project format. If not, you will need to create a pixel-for-pixel matching project size so as not to introduce any scaling from mismatched aspect ratios. So the steps are:
- Create project that matches source raster
- Add the clip to the timeline and apply the stabilize effect. Make sure to turn off “auto-resize” or similar functionality. You want to end up with a clip that shows black borders, as this indicates no resizing was done at this stage.
- Render to a high-quality DNxHR resolution. This render will become your new source. One can also do a mixdown as a new source, but a mixdown retains no reference to the original sources should they be needed downstream.
- Export the sequence to create a new source clip. The linked thread instead recommends making sure the render stays in place when going back to the HD project, but that is too much to keep track of: adding more effects there could force another render, and you may end up defeating the purpose of doing the effect in the high-resolution project to begin with. I export Same As Source or MXF OPAtom at my desired DNxHR resolution to create that new source clip.
- Go back to the HD project and link (AMA) to this new source clip. From here you can choose to re-frame out the black areas on the source or, if keyframes are needed, do it on the timeline.
As you can see, there are a lot of steps, and depending on the number of shots needing this, it can be quite tedious. You also end up with a new source clip, so maintaining file names and/or timecode (if needed) takes some extra management as well. So I decided to compare this workflow against simply doing the stabilize in the 1080 project to begin with, to see how much quality the roundtrip actually preserves for the time it takes.
For this test, I used Boris BCC Optical Stabilize. I applied the same effect to the same AMA-linked clip in both the 1080 HD and the 4K UHD projects. The only difference was that in the 1080 HD project I let it auto-resize to fill the screen, while for the 4K UHD shot I followed the steps above. Then, from the 1080 HD project, I exported an uncompressed 1920×1080 TIFF file of each to compare the image quality in Photoshop. I have to say that it was very difficult for me to see any difference between the two. I realize that other types of footage may produce different results, as may the amount of stabilization needed and how much offset has to be compensated for - but this test was done with RED R3D media with full debayer used in all cases. Here are screen shots of the two processes side by side at 100% and 200%.
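If you prefer a numeric check over eyeballing the exports in Photoshop, something like the following sketch can quantify how close two frames are. This is a hypothetical stdlib-only helper of my own (in practice you would load the pixel values from the two exported TIFFs with an image library); it computes the mean absolute difference and PSNR between two equal-size frames:

```python
import math

def frame_diff_stats(frame_a, frame_b, max_val=255):
    """Mean absolute difference and PSNR between two equal-size frames
    given as flat lists of pixel values (illustration only)."""
    assert len(frame_a) == len(frame_b)
    n = len(frame_a)
    mad = sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / n
    mse = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / n
    psnr = float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)
    return mad, psnr

# Toy data standing in for the two 1920x1080 TIFF exports:
a = [100, 120, 130, 140]
b = [101, 119, 130, 141]
mad, psnr = frame_diff_stats(a, b)
print(round(mad, 2))  # 0.75
```

A high PSNR (roughly 40 dB and up) between the two exports would back up the visual impression that the results are nearly identical.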
(Click to see full quality)
In both cases, the image on the right used the 4K UHD/FrameFlex workflow and the one on the left used the effect “as is” in the 1080 HD project. Because they are so close in quality, which method to use depends more on overall workflow needs and the turnaround time for delivering the program. As to why they look so close? It’s hard to tell without more testing, but my guess is that it comes down to the scaling/resize algorithm used by FrameFlex.
Another thing to be aware of when using stabilize in a greater-than-HD project is not to use the 1/4 or 1/16 proxy modes to perform the analysis, as the results will not be the same. Notice the differences in resulting frame composition using the same stabilization effect in all three modes:
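One likely reason for the differing results: the analysis sees far fewer pixels in proxy modes. Assuming the 1/4 and 1/16 labels refer to fractions of the total pixel count - that is, half and quarter linear resolution respectively, which is my assumption, not something from Avid's documentation - the effective analysis raster shrinks quickly:

```python
import math

# Assumption (mine): 1/4 and 1/16 proxy modes subsample the pixel
# count, i.e. 1/2 and 1/4 of the linear resolution respectively.
def analysis_raster(width, height, pixel_fraction):
    """Effective raster the analysis would see at a given fraction
    of the total pixel count."""
    linear = math.sqrt(pixel_fraction)
    return round(width * linear), round(height * linear)

for label, frac in [("full", 1), ("1/4", 1 / 4), ("1/16", 1 / 16)]:
    print(label, analysis_raster(3840, 2160, frac))
# full (3840, 2160), 1/4 (1920, 1080), 1/16 (960, 540)
```

At 1/16, a UHD source is analyzed at roughly standard-definition dimensions, so it is no surprise the tracked motion - and therefore the resulting frame composition - comes out differently.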
(click for full size image)
Update 10/14/2016: Media Composer 8.6 improved its scaling quality by adding Polyphase as an option (the default). The quality will be retested for an additional update in the future.
From the “What’s New Guide” for 8.6:
In previous releases of the editing application, FrameFlex effects were always rendered using bilinear image interpolation. With this release, the FrameFlex effect will be rendered according to the Image Interpolation option selected in the Render Settings dialog. This allows you to set FrameFlex as Polyphase for better interpolation.
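For context on why the interpolation choice matters, here is a textbook-style sketch of bilinear sampling - not Avid's implementation, just the general technique the guide refers to. Bilinear interpolation blends only the four nearest pixels, which tends to soften detail on large resizes; polyphase filtering uses larger, better-shaped filter kernels and generally preserves detail better:

```python
def bilinear_sample(img, x, y):
    """Sample a 2D grayscale image (list of rows) at fractional
    coordinates by blending the four nearest pixels."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 100],
       [100, 200]]
# Halfway between all four pixels: a plain average.
print(bilinear_sample(img, 0.5, 0.5))  # 100.0
```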
24p blogs are also available via a free iOS application, found here: