Archive for February, 2014

Camera Sync

Thursday, February 13th, 2014

sync.png


Since I have been blogging about syncing considerations, this post is a reminder that audio recorded directly to camera with on-board microphones can have a sync offset inherent in the recording. The offset grows as the subject, or source of the audio, moves away from the camera: the farther away, the larger the offset. At some point it becomes moot, as you will no longer be able to see what should be in sync. If you are using this audio track as a reference track to sync with tools like PluralEyes, you need to keep this offset in mind. This is where 1/4 frame syncing comes in handy, as offsets are not always one frame in duration, nor do they fall on frame boundaries. Related blogs on syncing are here and here.
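For a rough sense of the acoustic component of that offset, here is a back-of-envelope sketch in Python. The speed of sound figure is approximate, and any camera-internal latency, like the one measured in the test below, comes on top of it:

    # Rough acoustic estimate of the sync offset introduced by distance.
    # Assumes ~1,125 ft/s for the speed of sound; camera-internal latency
    # is additional and must be measured per camera.
    SPEED_OF_SOUND_FT_PER_S = 1125.0

    def sync_offset(distance_ft, fps=23.976):
        """Return the audio delay in milliseconds and in frames."""
        delay_s = distance_ft / SPEED_OF_SOUND_FT_PER_S
        return delay_s * 1000.0, delay_s * fps

    for d in (5, 10, 20):
        ms, frames = sync_offset(d)
        print(f"{d:>2} ft: {ms:5.1f} ms ({frames:.2f} frames)")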

The screenshot above shows the results from a quick test I did with the Blackmagic Cinema Camera recording ProRes at 1920×1080. The screen grab is from the timeline, with three distances as logged in the clip names: 5 FEET, 10 FEET, and 20 FEET. I used the iPad MovieSlate application, which flashes the screen orange with a beep, making it easy to see and hear.

I added frame boundaries in a graphics program to make them easier to see. The last clip in the timeline (20 FEET) shows two frames of orange. That is actually a blend frame, and based on the blend I put the sync location where you see the dotted green line. This camera has an inherent "audio ahead of picture" offset of 1/4 frame. Then, as the subject moves farther back, the sync offset gets larger: roughly 1/4 frame every 5 feet in this test. This is a case where you would want to resync on 1/4 frame boundaries, if possible, for the tightest sync. In a timeline you could slip clips before sending to PluralEyes to ensure even better sync when the results are returned.

(Auto)Sync Guide

Friday, February 7th, 2014

Many productions use double-system audio workflows, either because of the low-quality inputs on DSLR cameras or for the flexibility and higher bit-rate quality available from dedicated production audio recorders. In those scenarios, syncing is done either in a third-party dailies system or within Media Composer itself. There are multiple ways to sync picture and sound in Media Composer:

  1. Both picture and sound elements have common timecode. This is the easiest and fastest way to sync, as you can do the entire day's dailies in one batch process.
  2. The picture and sound elements have an easy-to-see-and-hear slate and clap but no timecode. This involves marking a sync point on the slate and clap and syncing one take at a time.
  3. No slate, no timecode, and all hell breaking loose. This type of syncing is usually done via the timeline, where both elements can be slipped as needed to be in sync and played back to confirm. This is also the process if using PluralEyes to sync based on waveforms (see the sketch after this list).
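For the curious, here is a toy Python sketch of the waveform-matching idea behind tools like PluralEyes. It illustrates the general cross-correlation technique, not PluralEyes' actual algorithm, and the signals are fabricated:

    import numpy as np

    def estimate_offset_samples(scratch, recorder):
        """Return the lag, in samples, that best aligns recorder to scratch."""
        corr = np.correlate(scratch, recorder, mode="full")
        return int(np.argmax(corr)) - (len(recorder) - 1)

    # Fabricated test: 1/6 s of "audio" at 48 kHz; the camera scratch track
    # lags the recorder track by 480 samples (10 ms).
    rng = np.random.default_rng(0)
    recorder = rng.standard_normal(8000)
    scratch = np.concatenate([np.zeros(480), recorder])[:8000]

    print(estimate_offset_samples(scratch, recorder))  # -> 480 (10 ms at 48 kHz)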

For further sync accuracy, it is common to work in a 35mm film project so that sync can be slipped in 1/4 frame increments. You can read why I use film projects for digital camera workflows here.
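The arithmetic behind that choice is simple; assuming 23.976 fps and 4-perf 35mm, where one perf equals a quarter frame:

    # Slip granularity at 23.976 fps: a whole frame vs. a 35mm 4-perf "perf".
    fps = 23.976
    frame_ms = 1000.0 / fps   # ~41.7 ms per frame
    perf_ms = frame_ms / 4.0  # ~10.4 ms per quarter frame (one perf)
    print(f"1 frame = {frame_ms:.1f} ms, 1/4 frame = {perf_ms:.1f} ms")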

Media Composer v7 introduced functionality that brings new workflows to the product, namely Color Transform (source-side LUT) and Image Transform (FrameFlex). But keeping the flexibility of linked AMA, LUTs, and FrameFlex with double-system audio workflows can be a bit of a minefield. It is important to understand what can be synced and 1/4 frame slipped with these functions before you start a workflow. In some cases, the production will have to decide what is more important: color management, quality image extraction, or accuracy of sync with double-system workflows.

The following two images show the results of double-system workflows with a total of 42 combinations built from 7 source clip types. The NAME in the bin starts with how the clip was created:

  • Picture AMA Linked
  • Picture Transcode from AMA link
  • Picture via Avid’s Dynamic Media Folder feature
  • Picture via a third party dailies system like Resolve, Colorfront, MTI Cortex, etc.  
  • Audio (BWF) via AMA link
  • Audio (BWF) via transcode from AMA link
  • Audio (BWF) import

For picture transcoding, there is an additional set of files with compatibility mode ON or OFF for either Color or Image transforms (or both). This covers most, if not all, of the methods by which picture and sound essence can be created. Then there are the two methods of syncing mentioned above, resulting in 21×2 sync clips. I have to admit this took a bit of time to create and keep track of, but the information is good to know before realizing, too late and after spending all the time on a transcode process, that a particular method won't work.
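As a sketch, here is one way the 42 combinations can be counted. The exact grouping of the transcode compatibility-mode variants is my inference from the lists above, so treat those labels as hypothetical:

    from itertools import product

    # Hypothetical grouping: 4 transcode variants bring the picture sources
    # to 7, times 3 audio sources, times 2 syncing methods = 42.
    picture_sources = [
        "AMA linked", "DMF", "third-party dailies",
        "transcode, Color transform, compat ON",
        "transcode, Color transform, compat OFF",
        "transcode, Image transform, compat ON",
        "transcode, Image transform, compat OFF",
    ]
    audio_sources = ["BWF AMA linked", "BWF transcoded", "BWF imported"]
    sync_methods = ["bin AutoSync", "timeline"]

    matrix = list(product(picture_sources, audio_sources, sync_methods))
    print(len(matrix))  # -> 42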

Each method is in its own bin: one for syncing clips in the bin and the other via the timeline. The clip I used was a typical DSLR-type clip that did not have common timecode, but did have a clean slate and clap. The clip was VA1A2 as recorded in camera (scratch audio), and the BWF files were 8-track polyphonic with the MIX track on A1. The resulting VA1 .sync clip reflects the options in the AutoSync dialog to remove the audio from the video and keep A1 from the BWF. The project type is 1080p/23.976 with 35mm/4 perf active.

In the first case, syncing the clips directly in the bin two at a time will always sync (as compared to the timeline method), but only the imported BWF audio could be 1/4 frame slipped for more accurate sync. No other combination will 1/4 frame slip. Green clips are 1/4 frame synced, red clips are not. The full bin view with comments is shown below. The columns indicate the ability to sync and to 1/4 frame sync as separate processes. For each case where sync or 1/4 frame slip could not be performed, the error message is listed.

syncing-from-bin.jpg

In the case of timeline syncing, the results are a bit more of a mish-mash. In some cases you can sync and 1/4 frame slip, in others just sync, and with some, nothing at all.

syncing-from-timeline.jpg

As these results show, in order to have 1/4 frame slip capability, the picture needs to be transcoded with no active Image or Color transforms applied. This means one needs to choose which is more important to the workflow at this stage of the process. Syncing clips from the bin handles this more or less, but for productions that need to use PluralEyes, or that have no slate/clap on their dailies and must sync via a timeline, the options are much more limited. So be sure to plan ahead!

FrameFlex Continued

Wednesday, February 5th, 2014

There is quite an interesting FrameFlex thread evolving on the Avid Community Forums. It seems there is still some confusion as to what FrameFlex is intended to do and its expected behavior in the current version (7.0.3). To me, the parameters available are no better than what you would find in the standard resize effect, as all you can affect is the XY pixel extraction and position. The only benefit of FrameFlex is its ability to access the full resolution of the camera's original files when working with larger-than-HD sizes, resulting in a better quality image than a scaling operation from an HD proxy. That's it.
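A quick pixel-count comparison shows why extraction wins; the frame sizes and crop factor here are just illustrative:

    # Cropping the center half of a UHD frame still yields a full
    # 1920x1080 of real pixels; the same framing from an HD proxy
    # means blowing up a 960x540 region by 2x.
    uhd, proxy = (3840, 2160), (1920, 1080)
    crop = 0.5  # extract the center 50% of the frame, width and height

    from_original = (int(uhd[0] * crop), int(uhd[1] * crop))
    from_proxy = (int(proxy[0] * crop), int(proxy[1] * crop))

    print(f"from UHD original: {from_original[0]}x{from_original[1]} real pixels")
    print(f"from HD proxy:     {from_proxy[0]}x{from_proxy[1]}, scaled up 2x")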

What is confusing, and what I was addressing in my previous blog entry on FrameFlex, is that the user needs to be aware of a quality difference between using FrameFlex on the source clip and using it on an event in the timeline when doing a transcode. As long as the clip is dynamically linked to the camera original via AMA, what you see in the timeline is the extraction from the source file. Any operations done on an event in the timeline are combined with any that were done on the source clip.

One might use source-side FrameFlex to correct for a boom mic in the shot, for example; it lends itself to "corrective" use on the entire clip, as it does not allow for keyframes. Using FrameFlex in the timeline is for more creative needs, as you are choosing the framing, moves, zooms, and such in the context of the story and the events before and after the event being affected. A different span from the same clip in another event can have different settings. What you cannot do is save off a FrameFlex effect and apply it to other clips as you can with every other effect. Nor is there "relational" FrameFlex, the way color correction has, for creative reframing of the same shots in the timeline. Maybe in a future release.

The important point is that all of this works great as long as the clips are dynamically linked. But since this assumes greater-than-HD sources, performance is often an issue with AMA-linked clips. So most users will highlight the sequence and perform a transcode to their finishing resolution, as documented in many AMA workflows.

This is where the quality issue comes into play: the transcode dialog box allows the transcode process to "bake in" either the Image or Color Transforms as part of the operation, but it does not include any of the FrameFlex parameters used in the timeline, only what has been applied to the source. What is needed is an additional option allowing the user to include timeline FrameFlex as part of the transcoding process from a sequence. This way, all .new sources for that timeline would be baked in with the expected quality an extraction offers from the larger-resolution images.

As it stands, the resulting source clips are transcoded to 1920×1080 from whatever the original resolution might have been. If FrameFlex was used on the source, it will be applied. But all other effects in the timeline, including timeline FrameFlex, will be scaled from the 1920×1080 image.

If you want to maintain the quality that FrameFlex offers when using it in the timeline, you must render the timeline or do a video mixdown, not transcode the sequence.

One of the comments in the Avid Forum suggested using AvidFX, but it too suffers from the fact that all effects can only use the output of the FrameFlex effect when dynamically linked, which is 1920×1080. So doing the same effect in AvidFX is no different in quality from using 3D Warp, as seen here. The images below are from an original 1920×1080 frame exported from a 4K UHD frame size via a Media Composer timeline:

AvidFX

avidfx.png

FrameFlex render from the timeline (not transcoded)

frameflex2.png

The thread on the Avid Community Forum raises other issues users have come across that you might want to be aware of when using FrameFlex. If you understand what it is currently capable of, you can create higher quality extractions, as long as you don't need to rotate the image. AAF roundtrips with FrameFlex are another area where users need to be careful, and I will document a Resolve AAF roundtrip in a future blog that allows FrameFlex parameters to remain adjustable for any last changes needed in the finishing process.

Update: Rotate has been added to FrameFlex with Media Composer 8.4.