Importing Audio from Test Screenings

May 24th, 2015


It is common practice to record audio from movie test screenings and align it to the matching version of the cut in Media Composer to see what is and isn't working in a particular version of a movie. Some films have 10 or more test screenings before they are released. Importing the audience reaction allows the editor and director to pinpoint exactly how scenes are playing, be it for horror, comedy, or whatever other emotion they are trying to achieve in the story.

One thing to keep in mind when bringing in audio recorded from a test screening is that DCP projection, which is what is used for these screenings, runs at 24.000fps. Most digital productions in NTSC-based countries shoot and record at 23.976. When a proper DCP is made from a Media Composer 23.976 sequence, the film gets “pulled up” to 24.000 to meet the DCP standard. See related blog. When importing the audio back into the original project, the audio file is in sync with 24.000fps playback, not 23.976. This causes the audio from the test screening to drift ahead of picture, roughly one frame every 1,000 frames (about every 42 seconds), ending up 0.1% out of sync overall.
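The drift math is simple to verify. This is an illustrative sketch in Python, not tied to any Avid tooling:

```python
# Sketch: how far ahead a recording made against 24.000 fps playback
# drifts when laid against the original 23.976 fps timeline.

def drift_frames(elapsed_frames: int) -> float:
    """Offset in frames after `elapsed_frames` of playback.

    The DCP plays the same frames at 24.000 instead of 23.976,
    i.e. 0.1% (about 1/1000) fast, so the offset grows by roughly
    one frame every 1000 frames.
    """
    return elapsed_frames * (24.000 / 23.976 - 1.0)

print(round(drift_frames(1000), 2))  # 1.0 frame after ~42 seconds
```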

There are two ways to deal with this:

#1. Similar to 24.000 productions posting at 23.976, the recording in the theater could be done at 48.048kHz. When importing this file into Media Composer, it will be pulled down to 48.000kHz via a sample-for-sample import/playback, slowing it down 0.1% and putting it in sync.


#2. The recorders used in these situations most likely do not have the same sample-rate control as professional audio field recorders, so the pulldown trick needs to be done in post before importing back into the 23.976fps project. If it is an actual 48.000kHz file, you need to change its declared sample rate via Sound Devices Wave Agent. Unlike field production, since the file is already 48.000kHz, you need to change it to 47.952kHz, not 48.048kHz.


Importing the file with the same pullup/pulldown settings in Media Composer will result in a file that is slowed down, now matching sync with the original timeline. If the file was recorded as an MP3 or at another sample rate, I suggest first importing it into a 24.000 project in Media Composer with settings set to not pull up or down, then exporting as a WAV at 48kHz, then doing what is described in step #2. For other frame rate/sample rate/pitch calculations, refer to this blog, which provides a handy spreadsheet to do the math for you. In PAL-based countries, where post is done at 25.000 and the DCP was made at 24.000 (via a slowdown), the same steps apply, but the user can manually enter 50,000 (50.000kHz) in Wave Agent. Be sure to click “save” in the bottom left corner when using Wave Agent to ensure the sample rate has been applied to the file(s). And of course, make these changes on a copy, not the original.
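The sample-rate arithmetic behind both cases can be sketched as follows (illustrative Python; Wave Agent itself only needs the final number):

```python
# Sketch of the Wave Agent re-stamp math: the file plays in sync
# with the DCP's 24.000 fps, but needs to sync to the project rate.

def restamp_rate(dcp_fps: float, project_fps: float,
                 file_rate: int = 48000) -> float:
    """Sample rate to declare on the 48.000 kHz file before import."""
    return file_rate * project_fps / dcp_fps

# NTSC case: 24.000 screening audio back into a 23.976 project.
print(round(restamp_rate(24.000, 23.976)))  # 47952
# PAL case: 24.000 DCP (made by slowing 25.000) back into a 25.000 project.
print(round(restamp_rate(24.000, 25.000)))  # 50000
```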

And as a final tip: if you are not getting the expected shorter/longer clip duration with the setting as seen above, turn off the second check box indicated by the arrow and try again. Despite the wording indicating that resulting files will be longer or shorter in duration after the import, I have found that turning this off will actually change the duration in some scenarios.

Syncing Dailies in Media Composer Using PluralEyes

February 11th, 2015


PluralEyes can be a very handy tool when syncing dailies that have no timecode or bad timecode. Because it works as an external app using an AAF roundtrip workflow, there are some tips and tricks that will make logging and organization easier if properly planned. Download the PDF: using-pluraleyes-to-sync-dailies-in-media-composer.pdf


As of PluralEyes v4, AAF is no longer a supported format, so all syncing will need to be done on original camera assets and not as a roundtrip via Media Composer and AAF. Refer to PluralEyes documentation. 

Media Composer 8.5 introduced audio waveform syncing for grouping. For AutoSync, refer to this blog.

Viewing FilmScribe XML Files

November 14th, 2014


XML export from FilmScribe offers far more metadata than any of the other list types. This export includes all metadata available on a source clip, making it quite handy for downstream processes. If you use the FilmScribe XML export and view the file in a text editor, some applications do not do a good job of laying out the XML in a readable fashion. I use the free TextWrangler (also available in the App Store) a lot on OS X for text editing and fix-it type operations. And while it can open an XML file, its layout was not easy on the eyes, as seen here:


I did a little searching for better XML display and came across a posting on how to add a “tidy XML” filter to TextWrangler that would make it more readable. I noticed that the folder/path mentioned in the steps did not exist on my system:

~/Library/Application Support/TextWrangler/Text Filters/

A little more research on a different site mentioned that the folder could simply be created, which I did, and it worked great. So here are the .sh file and the folder as a zip file that you can download, unzip, and add to:

~/Library/Application Support/
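For the curious, a TextWrangler text filter is just an executable that reads the frontmost document on stdin and writes the replacement to stdout. The downloadable filter is a shell script wrapping tidy, but a hypothetical equivalent can be written with nothing but the Python standard library:

```python
# Hypothetical stand-in for the "tidy XML" text filter, using only
# the Python standard library. Saved as an executable script in the
# Text Filters folder, TextWrangler would pipe the document through it.
from xml.dom import minidom

def tidy_xml(text: str) -> str:
    """Re-indent an XML string, one element per line."""
    return minidom.parseString(text).toprettyxml(indent="  ")

# As an actual filter, the script body would be:
#   sys.stdout.write(tidy_xml(sys.stdin.read()))
print(tidy_xml("<ClipData><Name>A001</Name></ClipData>"))
```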

Once that is done, you will see “tidy” appear in the Text/Apply Text Filter menu as seen here:


Once installed and selected, that same XML will now look like the following and is much easier to navigate when looking for specific metadata:


Motion Effect Types in Progressive Projects

August 30th, 2014


There was a discussion on the Avid Community forums as to what the different field based motion effect types do when working with progressive footage in a progressive project. I thought that was a great question and set out to do a quick test using Job te Burg’s excellent digital countdown leader available here.

I did two things with the original countdown: a DVE move left to right to add some movement, and then a 50% speed change (1/2 speed) rendered with each of the 7 motion effect types available in the Timewarp effect. In all cases, the input and output settings were set to progressive. Of the 7 different types offered, you only end up with 4 different-looking results, as the following pairs produce the same result:

  1. Blended VTR and Blended Interpolated
  2. Both Fields and Duplicated Fields
  3. Interpolated Field and VTR Style
  4. FluidMotion, the fourth result, which stands alone as its own unique look.

Below are links to JPG contact sheets using each method and exporting the first 6 frames of each sequence. Using A, B, C, D to uniquely identify frames, the following patterns for each are:

Blended VTR and Blended Interpolated: A|AB|B|BC|C|CD
Click link for full size contact sheet:

Both Fields and Duplicated Fields: A|A|B|B|C|C
Click link for full size contact sheet:

Interpolated Field and VTR Style: A|B|B|C|D|D
Click link for full size contact sheet:

FluidMotion: A|N*|A|N*|A|N*
Click link for full size contact sheet:

*Where N is a New frame.
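The two non-FluidMotion cadences can be modeled in a few lines. This is an illustrative sketch of the patterns above, not what Avid actually does internally:

```python
# Sketch of the two basic 50%-speed cadences (not Avid's code):
# "blend" mixes neighboring frames on half-frame positions,
# "duplicate" simply repeats the nearest earlier frame.

def half_speed_pattern(frames: str, mode: str, count: int = 6) -> str:
    out = []
    for i in range(count):
        pos = i / 2                    # source position at 50% speed
        lo = int(pos)
        if pos == lo:                  # lands exactly on a source frame
            out.append(frames[lo])
        elif mode == "blend":          # Blended VTR / Blended Interpolated
            out.append(frames[lo] + frames[lo + 1])
        else:                          # Both Fields / Duplicated Fields
            out.append(frames[lo])
    return "|".join(out)

print(half_speed_pattern("ABCD", "blend"))      # A|AB|B|BC|C|CD
print(half_speed_pattern("ABCD", "duplicate"))  # A|A|B|B|C|C
```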

It’s not really fair to use a countdown to show FluidMotion as it creates new frames based on pixels in the frame, but is shown here just for fun.

Having so many options is a bit redundant and confusing in a progressive project using progressive footage with input and output set to progressive. There was no real difference in render times between any of the options other than FluidMotion, which does a lot of pixel calculations and so is expected to take longer. But now we know the answer.

*Edit: When judging motion effects frame by frame, remember that the timeline video quality setting must be green/green or you will only be seeing one field, or 1/2 of a segmented frame, when stepping through. This has caught me several times. The above contact sheets were done as an export, so they are the full progressive frames.

The Many Uses of ALE

August 9th, 2014


As promised in the previous blog, I wrote up a high-level overview of what can be done with an ALE file and Media Composer if you are willing to get into it and do a little text editing. It opens up a lot of different workflows, including some batch automation processes. Download The-many-uses-of-ale.pdf to get a better idea of how to manipulate the files, along with the format's strict requirements and quirks.

No More AvidLogExchange Application

August 8th, 2014


Some users are just noticing that AvidLogExchange, the application (not the file format, .ale), is no longer a product or part of the installer starting with v8. Some will also notice that Avid MediaLog is no longer available; I wrote my thoughts about that here last November (2013).

AvidLogExchange has been around since the days when there were more than a half dozen common “log” types from different vendors such as Aaton, Evertz, and KeyScope; all formats that were part of film-to-tape logging solutions, as well as some common video logging applications in the 90’s. Those formats have not been used for almost 15 years, as the ALE format became a pseudo standard due to its dominance in the NLE market throughout the 90’s. So those formats will not be missed, but the application still did some interesting tricks that fit different workflow needs still in use. A few of them would be quite easy to implement directly in Media Composer. A Product Manager at Avid (no longer there) even called me at the time and asked what I thought about EOL’ing (End Of Life) AvidLogExchange in a future release. I said as long as the handful of useful features were not lost, it would not be a big deal. Unfortunately that did not happen, but they may still appear in a future release.

Those features are:

  • FCP log file to ALE (format) conversion. This helped move source metadata to Avid. There’s still a lot of FCP 7 and earlier in use.
  • ALE Clean function. This ensured that logs created outside the system did not have overlapping START and END timecodes, which would create confusion during list generation as one timecode could point to different sources. This is more common with tape-based sources, but can still occur with FileMaker-type databases exporting files to be used in Media Composer, which leads to:
  • TAB to ALE conversion. This is one of the bigger ones. A user could open a TAB file in ALE, and it would add the Global Header information required by the import. I would say the global header information is helpful for timing checks during import, but this could easily be handled by Avid allowing a TAB file without the global header or the “Column” and “Data” markers. The first line in the file can be assumed to be the column names, and lines 2+ the data. This would eliminate a lot of frustration in getting the header just right and copy/pasting. Also, seeing as Media Composer can export a TAB file, it just makes sense.


  • Record timecode as Source. While somewhat special, it does help those looking to bring in an EDL and notch an existing flattened program file. It was originally developed to support a post audio sync process on dailies, but now has uses as blogged about here (using DaVinci Resolve for Scene Detection).
  • The Windows version had a nice text editor with search/replace functionality, which is quite useful these days when dealing with Tape and Source File merging workflows. It also had a nice two-window view so you could compare the original file and the resulting .ALE.
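What the TAB-to-ALE conversion did can be sketched in a few lines. The header values here are illustrative; set the FPS and formats to match your project:

```python
# Sketch of the TAB-to-ALE conversion AvidLogExchange performed:
# prepend a Heading section, wrap the first line as Column names
# and the rest as Data. Header values are illustrative.

def tab_to_ale(tab_text: str, fps: str = "23.976") -> str:
    lines = tab_text.strip().splitlines()
    header = [
        "Heading",
        "FIELD_DELIM\tTABS",
        "VIDEO_FORMAT\t1080",
        "AUDIO_FORMAT\t48khz",
        "FPS\t" + fps,
        "",
        "Column",
        lines[0],        # first line assumed to be column names
        "",
        "Data",
    ]
    return "\n".join(header + lines[1:]) + "\n"

sample = "Name\tStart\tEnd\tTape\nA001\t01:00:00:00\t01:00:10:00\t001"
print(tab_to_ale(sample))
```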

The ALE format is still a popular shotlog exchange format, and the different ALE import/merge functions allow for some nice batch renaming/subclipping processes that will be part of a future blog. But it is getting a little long in the tooth and needs an update to fit more modern workflows; an XML schema would allow markers and the like to be imported as a batch process across multiple clips. And that too is a subject for another day.

FPS, Sample Rate and Pitch Correction

June 29th, 2014


A few weeks back, a question was posed on one of the industry forums about how to sync recorded audio to a shot that was intended for slow motion. From that, I created a spreadsheet that does the calculations, including the semitones needed to keep the original pitch. The percentage or semitone value can be used in a pitch-shift audio plug-in if desired.

The WAV file's declared sample rate makes the difference in how this works in Media Composer. For example, if recording 24.000fps on set with the intent to post at 23.976, the recorder would be set to 48.048kHz. But if the files have already been recorded at 48.000kHz, then you need to reset the declared sample rate to 47.952kHz using an application like Wave Agent from Sound Devices. The pullup or pulldown is then performed by the BWF import process. This previous blog shows sample rate to project for “normal” shooting rates. This will not work with the BWF/iXML AMA plug-in, as it does not support pullup/pulldown workflows.
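The math the spreadsheet automates can be sketched directly (illustrative Python, not tied to any Avid tooling):

```python
import math

# Sketch of the fps/sample-rate/pitch math: shooting at shoot_fps
# for playback at play_fps changes speed by their ratio; the audio's
# recorded (or re-stamped) sample rate scales by the same ratio, and
# pitch moves by 12*log2(playback speed) semitones unless corrected.

def speed_ratio(shoot_fps: float, play_fps: float) -> float:
    """How much faster the shoot rate is than the playback rate."""
    return shoot_fps / play_fps

def declared_rate(shoot_fps: float, play_fps: float,
                  rate: int = 48000) -> float:
    """Sample rate to record so playback lands back at 48 kHz."""
    return rate * speed_ratio(shoot_fps, play_fps)

def pitch_shift_semitones(shoot_fps: float, play_fps: float) -> float:
    """Pitch change heard on playback; negative = lower."""
    return 12 * math.log2(1 / speed_ratio(shoot_fps, play_fps))

print(round(declared_rate(24.0, 23.976)))   # 48048
print(pitch_shift_semitones(48.0, 24.0))    # -12.0 (an octave down at half speed)
```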

This spreadsheet calculates those values as well as the more offbeat rates that may be used when shooting for slow motion or speed-up with frame rates different than the intended playback rate. Download the Microsoft Excel spreadsheet here. For those who do not have Microsoft Excel, it should open fine in Google Docs or open source alternatives.

Don’t mess with the formulas!

Update 6/30/2014: You can now access the spreadsheet as a Google Sheet. You may need to sign in with a Google account to work with the spreadsheet online.

Update 10/26/2014: It has been brought to my attention that some fields do not display properly when opened in “Numbers”. Specifically, the sample rate fields. Make sure these fields are set to 5 digits.

Adobe CC 2014 DCP Creation

June 28th, 2014


Update 2/6/2015: The latest updates to Wraptor have brought this behavior in line with other applications as it relates to frame rates and audio sample rate.

With the latest release of Adobe CC 2014, Adobe added the ability to easily create a DCP directly from Premiere Pro CC or Adobe Media Encoder CC. The Wraptor plug-in is provided by Quvis. This is exciting news for indie filmmakers looking to create a DCP screening copy for festivals or to screen a work in progress in a theatre. Being mainly a Media Composer user, this was great news, as DCP output from Media Composer has been a long-time request from the Avid community. I would guess a fair number of Media Composer users also have an Adobe Cloud subscription for Photoshop and After Effects; they now have a solution for making a DCP from the Avid timeline.

The Wraptor plug-in is limited in the controls it exposes, which makes the DCP creation process very easy to do. This is both good and not so good. It is pretty much a drag-and-drop process after selecting the aspect ratio. The output will always be 24.000fps, but it can take in 23.976fps and 25.000fps programs. It always assumes Rec.709 video (and levels) as input and will properly apply XYZ and DCP gamma. For Media Composer users: do a mixdown or a render with either DNxHD or ProRes (OS dependent) and export a “same as source” .mov file. Open that with Adobe Media Encoder, select Wraptor, then the aspect ratio of the content, and you’re good to go. Adobe Media Encoder will transcode and create the DCP package with proper XML and MXF wrappers as defined by the DCI specification, but with no control over DCP file naming conventions.

The “not so good” side of the Wraptor encoder that comes with Adobe Media Encoder CC 2014 is that it does a frame rate conversion where program duration is maintained. So if working in 23.976p or 25p project types, it does what Avid Mix & Match does, which is add or remove frames to maintain duration, rather than a frame-for-frame conversion where the program duration changes: 0.1% faster for 23.976 sources, and about 4.1% slower for 25p sources. The advantage of frame-for-frame is much better overall quality, as the motion remains as originally shot and mastered, be it camera moves, moving objects, or both. Of course the audio would need to be sample rate converted to maintain 48kHz when changing playback rate, but it is far easier for the eye to see motion artifacts than it is for the ear to hear a one-pass sample rate conversion.
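The difference between the two strategies is easy to put into numbers. An illustrative sketch, using a one-minute 25p timeline:

```python
# Sketch of the two 25p -> 24.000 fps conversion strategies.

def maintain_duration(frame_count: int, src_fps: float, dst_fps: float):
    """Wraptor-style: duration is kept, frames are dropped/added."""
    duration = frame_count / src_fps
    return round(duration * dst_fps), duration

def frame_for_frame(frame_count: int, src_fps: float, dst_fps: float):
    """Every frame survives; duration (and audio speed) changes."""
    return frame_count, frame_count / dst_fps

frames = 25 * 60  # a 00:01:00:00 timeline at 25 fps

print(maintain_duration(frames, 25, 24))  # (1440, 60.0) - 60 frames dropped
print(frame_for_frame(frames, 25, 24))    # (1500, 62.5) - ~4.2% longer
```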

For my test, I created a 1080p/25 timeline that was 00:01:00:00 in duration and created a DCP with Adobe Media Encoder. As you can see in the screenshot, the frame count in EasyDCP Player (bottom left) does not match the burn-in frame counter of the program. At frame 356, there is already a 12-frame difference being compensated for. Also, if you look at the bottom right, the total program duration at 24fps is still listed as 00:01:00:00. Click frame to see actual size:


I then used FinalDCP as a comparison, as it supports frame-for-frame conversion as part of its feature set. As you can see in the following screenshot, for the very same frame in DCP Player, the frame count is the same, and the bottom-right program duration is longer, as I would expect when slowing down 4.1%. Since the audio plays in sync, a proper sample rate conversion was done to maintain sync at 48kHz. Click frame to see actual size.


I think it’s great to have a DCP encoder that can be used for quick screenings or festivals as part of a suite many of us may already have, but I would not recommend it for final delivery and distribution due to the motion artifacts that will occur. This can be overcome in Premiere Pro by taking your final program output and using “interpret as” 24.000fps, as that creates a frame-for-frame version. Then deal with converting the audio tracks in an audio application for sync and sample rate conversion. For Media Composer users, I wrote up a step by step in my first blog entry:
File Based Universal Mastering

I will be reaching out to Quvis to see if there is an upgrade to the free version that offers these types of controls within Adobe Media Encoder, and will update once I get a response.

Update 6/29/2014:

  • Job te Burg mentioned in another thread that other DCP applications do the frame-for-frame conversion as well. Thanks for the heads up; I was only using FinalDCP as an example of an alternative, and in my opinion better, method for higher quality DCP creation.
  • Oliver Peters emailed and asked about padding or scaling of the 1920×1080 sources. I admit my tests were focused on frame rate conversion and did not check to see about padding or scaling but will try to get to that in a future test.

Update 6/30/2014:

  • I just tried the free DCP-o-matic and it does the proper frame rate conversion, allows for padding or scaling of 1920×1080 to meet the 2K DCI spec, and has stereo 3D support. It comes in all flavors of OS: OS X, Windows 32-bit, Windows 64-bit, and Linux.


BWF/iXML AMA Update (v8)

June 13th, 2014


There has been a thread on the Avid-L discussing BWF import and AMA linking, and I was surprised to read there that the BWF/iXML AMA plug-in in Media Composer 7.0.4 now supports monophonic BWF files as a single clip. Before this release, that was only supported via import. I did not see this in any of the READ MEs, but it is a nice addition to the AMA functionality.

But… there are still trade-offs between AMA and import to consider when dealing with monophonic files. The following graphic shows the different results when importing or linking and how metadata support changes depending on the method chosen (click for full size).


The first thing is a bug that has cropped up in v8 where importing monophonic files and creating a single clip upon import does not produce the right duration. Basically, the imported duration (X) ends up being X = (BWF duration)/n, where n is the total number of tracks being imported. As seen in the import, what should have been 2:27:00 in duration ended up being 18:09. While we can expect this to be fixed in an updated version, hopefully the fix will also include the track metadata that gets lost on tracks 5 and higher when importing polyphonic files.
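Assuming those durations are minutes:seconds:frames at 24fps, the bug's arithmetic checks out. A quick illustrative sketch:

```python
# The v8 import bug's arithmetic: imported duration is the real
# duration divided by the total track count. Reproducing the
# figures above with 8 mono tracks at 2:27:00 (24 fps assumed).

def buggy_duration_frames(real_frames: int, num_tracks: int) -> int:
    return real_frames // num_tracks

def tc(frames: int, fps: int = 24) -> str:
    """Format a frame count as seconds:frames."""
    return f"{frames // fps}:{frames % fps:02d}"

real = (2 * 60 + 27) * 24  # 2:27:00 -> 3528 frames
print(tc(buggy_duration_frames(real, 8)))  # 18:09
```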

AMA linking to the monophonic files works as mentioned, but does not parse any of the track metadata on any of the tracks. There is no workaround for this, as an option to link and maintain individual tracks does not exist with AMA as it does with “Import”. There are also some inconsistencies with source metadata between Source File and Tape ID on the AMA-linked version that may affect EDLs and such downstream.


For now, the best method for monophonic files is to import them as individual tracks, then select all the tracks in the bin and select AutoSync as seen in the bottom bin of the above screenshot. There is no need to AutoSequence them first as mentioned in the Avid-L thread, and multiple takes can all be done in one pass. The one downside to this method is using matchframe to get back to alternate tracks from the sequence. For example, the 8-track clip in the example has a mix track on 1 and 2. AutoSync allows any consecutive span of tracks to be used in the resulting .sync clips, such as 4-6, or in this case 1&2. What is very helpful when editing is the ability to do a double matchframe back to the 8-track clip to use an ISO track instead of the mix track. This is done by match framing on the sequence, which loads the .sync clip. Then, from the sequence side, turn off the V track and matchframe again on the source side and it will load the multitrack audio. When synced from a polyphonic file, or a single clip created from multiple mono tracks, it gets loaded at the same position for easy track and position selection. In this scenario, it will only load the original single mono track and not the synced group, defeating the purpose of this feature.

Another workaround is to combine the monophonic files into a single multitrack polyphonic file. Several applications can do this, but it is easily done with Sound Devices’ free application Wave Agent, available for both Windows and OS X. Here is the result of combining them and then accessing the file via the BWF/iXML AMA plug-in (click to enlarge):


As always, plan your workflow accordingly. If there is no additional track metadata logged or needed, then AMA may be the best route to take; if there is, import is the better way to go. When it comes to audio, import is so fast that the instant access of linking is not as much of a benefit as it is with video formats. I would love to see a metadata view available in the AMA window when selecting the BWF files displayed there, with the choice to do an “import” versus a link at that time, rather than the potential additional steps of transcode or consolidate with more clip management needed. Then create a “container” type file that manages all the tracks with the ability to define them as mono or stereo (or more). Once edited in the timeline, it would be a simple right-click to activate the ISO track(s) needed for that event. This would offer the best of both worlds.

Update 8/9/14:  Version 8.1 has fixed the duration issue when AMA linking to monophonic BWF files, but track metadata is not supported at all.

Using Amira Color Tool with Media Composer

May 30th, 2014


If you shoot ARRIRAW or ProRes, the Amira Color Tool is a straightforward, easy-to-use tool for creating ASC CDLs and LUTs for the different looks you create. It is a free download after registration at the ARRI Amira webpage. There is also a video posted on YouTube that covers its functionality. It is a subset of what Assimilate Play offers due to limited codec support, but is easier to use for those dabbling in the ASC CDL/LUT world for the first time.

I did a quick test to see how it might fit into a Media Composer workflow (v7.x and later) with ASC CDL support. Unfortunately, as with Assimilate Play, the ASC CDL export will not work with Media Composer, as it is XML and Media Composer imports that metadata via ALE or AAF. Also, the XML is rather limited, as it does not even list the file to which the values should be applied.

Creating looks and exporting them as LUTs does work, as long as you export in the .cube LUT format. The following chart shows which exports worked and which did not (Green = Yes, Red = No).


All .cube LUTs except for FilmLight imported into Media Composer. I am impressed by the number of LUT types it supports. I was also able to import one LUT type and save it out as another, which would make this a nice LUT translator, but I need to do more testing to ensure it does not change anything looks-wise. It would be nice if it had a timecode display as well. But this is a nice little tool to have on hand for quick Look -> LUT generation, as well as for quickly applying LUTs to ensure they are correct when created elsewhere. It is available for OS X only: 10.7, 10.8, and 10.9.

XDCAM Proxy and 4K XAVC Conform

May 5th, 2014


The advantage of 4K XAVC shooting is that a proxy can be recorded at the same time on the same card. This facilitates the production workflow by keeping everything in one camera.

The recommendation when shooting 4K XAVC productions is to not use the XDCAM proxies, as the conform process will involve many file renaming steps in order for Media Composer to match the proxy with the 4K master file. Use AMA to link to the 4K files, or use DMF to create proxies to ensure proper source/reel tracking.

AMA uses the entire filename as the Source/Reel ID in the “Source File” column. The names need to match as closely as possible, as Media Composer does not offer much control over reel identification syntax, as you may find in the dedicated conform processes of DI color correction systems.

4K XAVC filename:         B001C001_1308213D.MXF
XDCAM Proxy filename:     B001C001_1308213DS02.MXF

Notice the addition of S02 at the end of the filename. This is currently used to identify the clip as the XDCAM proxy. Because the filenames are different, a straight relink conform process is not possible. Media Composer only offers the following modifications to the conform process:

  • Ignore characters after last occurrence of (enter text string)
  • Ignore extension (check box)

Using “ignore characters after last occurrence” can be problematic, as the filename may be something like B001C001_1308213SS02.MXF, where there are two S’s in a row. The workaround involves the following steps:

  1. Duplicate Source File column for all files into the Labroll column (or another column if Labroll is already being used)
  2. On the proxy clips that have the S02 at the end, enter an underscore to separate it from the filename, as in B001C001_1308213D_S02.MXF.
  3. Select all the 4K clips and the sequence. They do not need to be in the same bin. With everything highlighted, right-click on the sequence and select “Relink”
  4. Check “Selected items in ALL open bins”
  5. In the “Relink by:” section for Original:
    1. Timecode = Start
    2. Source Name = Labroll
    3. Ignore extension = active (checked)
    4. Ignore characters after last occurrence: = “_” (underscore)
  6. In the “Relink by:” section for Target:
    1. Timecode = Start
    2. Source Name = Labroll
    3. Ignore extension = active (checked)
    4. Ignore characters after last occurrence: = (blank)
  7. Click OK.

It is helpful to set a source clip color on the 4K clips to verify the conform process in the timeline when complete. See the Media Composer 7 online help for more detail on setting source color and viewing in the timeline.

This workaround can be process intensive if you have a lot of clips to manage. One can use a text editor to do a search and replace on an ALE file and merge that back into the clips. Be aware that the ALE file must contain all the columns of metadata you want to preserve, as the merge is not a true merge, but more of a replace function. If using 7.0.3, the merge is no longer “lossy” and will only update/replace the columns present in the ALE. See the Media Composer 7.x online help for ALE merge functions.
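The search-and-replace half of this can also be scripted. The sketch below is illustrative only; it assumes a tab-delimited ALE with a “Source File” column, copies each name into a new Labroll column, and inserts the underscore before a trailing S02 on proxy names:

```python
import re

# Illustrative sketch: populate a Labroll column from Source File,
# inserting "_" before a trailing "S02" on proxy filenames.
# Column handling is simplified; verify against your actual ALE.

def add_labroll(ale_text: str) -> str:
    out, col = [], None
    for line in ale_text.splitlines():
        fields = line.split("\t")
        if col is None and "Source File" in fields:
            col = fields.index("Source File")   # column-name line
            fields.append("Labroll")
        elif col is not None and len(fields) > col and fields[col].endswith(".MXF"):
            fields.append(re.sub(r"S02\.MXF$", "_S02.MXF", fields[col]))
        out.append("\t".join(fields))
    return "\n".join(out)

sample = ("Column\nName\tSource File\n\nData\n"
          "101\tB001C001_1308213DS02.MXF\n102\tB001C001_1308213D.MXF")
print(add_labroll(sample))
```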

Making LUTs Available to All Projects

May 2nd, 2014



In Media Composer 7.x, there is currently no way to differentiate a LUT import intended for all projects from LUTs that are project specific. An imported LUT is only available to the project in which it was imported. This is great when moving projects from system to system, but if there is a LUT you want to use all the time, regardless of project, the workaround is to manually copy the contents of the LUTs folder (found in the project folder) to:

  • OS X:  Library/Application Support/Avid/ColorManagement/LUTs folder
  • Windows: /ProgramData/Avid/ColorManagement/LUTs folder

For any single LUT you might want to move, you need to copy the LUT as well as the XML file of the same name. From then on your LUTs will show up in any new or existing project on that system.

Update 12/31/14: Media Composer 8.3 now allows for LUT import  to be per project, or system wide.


Song Metadata

April 12th, 2014


As you can tell from many of my blogs and comments on various forums, I am a big fan of metadata and how it can be used to enhance workflows both creatively and technically. One of the things I always found limiting with Avid’s QuickTime AMA plug-in was its lack of metadata support beyond the filename and a timecode track when available. Depending on the file format being accessed, there is related metadata that can be useful to the editorial process as well as downstream reporting workflows. This led me to develop a dedicated AMA plug-in for songs, as all the ID3 metadata in an iTunes or similar library is lost when importing or linking in Avid Media Composer. Working with Justin Kwan, we created the mus.iD AMA plug-in to solve that problem by bringing in that metadata when first linking to a single song or a full library.

Now, in addition to just the song name, the user can get composer, album, copyright, tempo (BPM), genre, lyrics, etc., all useful information for editors sorting or finding specific songs during the creative process. Another benefit we added was the ability to easily extract which songs were used in a sequence to start the cue sheet process for rights & clearances. All distribution deliverables ask for a list of songs used, their publishers, etc., as well as where and for how long. A mus.iD reporting application is also available, letting the user drag and drop an AAF file to create a file ready to be opened in Microsoft Excel or any program that supports CSV or TAB files.

The AMA plug-in and reporting application can be purchased separately or as a bundle depending on your needs. Visit the mus.iD website for more info, a video demo and store.

Translating DCI titles to Avid SubCap

March 22nd, 2014


I found an old archive on CD from 2009 when going through a box of stuff, and in it a little side project… This one is a bit of a niche need, but it was something that Glenn Lea and I put together using XML and XSLT transforms to translate a DCI subtitle file for use in Avid with the SubCap effect. This particular workflow was to repurpose the subtitles for foreign distribution within Media Composer, but some may still find use for it in different workflows. I also found the XSLT that takes a change list as FilmScribe XML and translates it to a standard EDL. This was done for the sound stage change list management on Green Lantern. I need to test whether it still works, as FilmScribe is no longer reliable when using AMA and any sources with a green dot, since everything is considered a VFX when it shouldn’t be. But it should still work for transcoded and traditional dailies workflows. If it still works, I will update the blog with a step by step.

The XSLT for DCI Subtitles can be downloaded here.

The Step by step for using it: dci-subtitle-xml-to-avid-subcap-format.pdf.

Camera Sync

February 13th, 2014


Click graphic to enlarge

Since I have been blogging about syncing considerations, this blog is just a reminder that recording audio directly to camera with on-board microphones can have a sync offset inherent in the recording. This goes further out of sync as the subject, or source of the audio, moves away from the camera: the farther away, the larger the sync offset. At some point it becomes moot, as you will no longer be able to see what should be in sync. If you are using this audio track as a reference track to sync with tools like PluralEyes, you need to keep this offset in mind. This is where 1/4-frame syncing comes in handy, as offsets are not always one frame in duration, nor do they fall on frame boundaries. Related blogs on syncing are here and here.

The screenshot above shows the results from a quick test I did with the Blackmagic Cinema Camera recording ProRes at 1920×1080. The screen grab is from the timeline with three distances as logged in the clip names: 5 FEET, 10 FEET, and 20 FEET. I used the iPad MovieSlate application, which flashes the screen orange with a beep, making it easy to see and hear. Clicking it will either prompt to download the full size image or open it in another window/tab.

I added frame boundaries in a graphics program to make it easier to see. The last clip in the timeline (20 FEET) shows two frames of orange. That is actually a blend frame, and based on the blend I put the sync location where you see the dotted green line. This camera has an inherent "audio ahead of picture" offset of 1/4 frame. Then, as the subject moves farther back, the sync offset gets larger: about 1/4 frame every 5 feet. This is a case where you would want to resync on 1/4 frame boundaries if possible for the tightest sync. In a timeline you could slip clips before sending to PluralEyes to ensure even better sync when the results are returned.
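For a rough sense of how much of this offset is simple acoustics, here is a minimal sketch that converts mic-to-subject distance into a delay in frames, assuming a speed of sound of about 1125 ft/s and a 23.976 fps project. Note this models only the travel time of the sound; the camera's inherent processing offset measured above is separate, and the measured drift per 5 feet may differ from the pure acoustic prediction.

```python
# Acoustic delay from mic-to-subject distance (illustrative only).
# Assumes ~1125 ft/s speed of sound at sea level and 23.976 fps.
SPEED_OF_SOUND_FT_S = 1125.0
FPS = 23.976

def delay_frames(distance_ft):
    """Return the acoustic audio delay, in frames, for a given distance."""
    delay_s = distance_ft / SPEED_OF_SOUND_FT_S
    return delay_s * FPS

for d in (5, 10, 20):
    print(f"{d:2d} ft -> {delay_frames(d):.3f} frames "
          f"({delay_frames(d) * 4:.2f} quarter-frames)")
```

At 5 feet this works out to roughly a tenth of a frame, which is why sub-frame slipping matters more than whole-frame offsets at typical shooting distances.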

(Auto)Sync Guide

February 7th, 2014

Many productions use double system audio workflows, either due to the low quality inputs on DSLR cameras or for the flexibility and higher bit rate quality available from dedicated production audio recorders. In those scenarios, syncing is done either in a third party dailies system or within Media Composer itself. There are multiple ways to sync picture and sound in Media Composer:

  1. Both picture and sound elements have common timecode. This is the easiest and fastest way to sync as you can do the entire day’s dailies in one batch process. 
  2. The picture and sound elements have easy to see and hear slate and clap but no timecode. This involves marking a Sync point on the slate and clap and syncing one take at a time. 
  3. No slate, no timecode, and all hell breaking loose. This type of syncing is usually done via the timeline, where both elements can be slipped as needed and played back to confirm sync. This is also the process if using PluralEyes to sync based on waveforms. 
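The first method above boils down to simple timecode arithmetic. Here is a minimal sketch of that idea; the `tc_to_frames` helper and the nominal 24-frame counting are my own simplified illustration, not Media Composer's actual implementation.

```python
# Sketch of syncing by common timecode: convert both start timecodes
# to absolute frame counts and take the difference.
FPS = 24  # 23.976 projects still count 24 frames per timecode second

def tc_to_frames(tc, fps=FPS):
    """Convert HH:MM:SS:FF to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def sync_offset(picture_tc, sound_tc, fps=FPS):
    """Frames the sound clip must be slipped to line up with picture."""
    return tc_to_frames(picture_tc, fps) - tc_to_frames(sound_tc, fps)

# Hypothetical example: the recorder rolled 1 second, 3 frames early.
print(sync_offset("10:22:15:07", "10:22:14:04"))  # 27
```

Because every clip pair reduces to this one subtraction, an entire day of dailies can be batch synced in one pass, which is why common timecode is the fastest method.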

For further sync accuracy, it is common to work in a 35mm film project so that sync can be slipped on 1/4 frame increments. You can read why I use film projects on digital camera workflows here.
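The reason a 35mm/4-perf project gives finer control can be sketched numerically: each film frame spans 4 perforations, so the smallest slip unit is one perf, a quarter of a frame.

```python
# Why a 35mm/4-perf film project allows finer sync: each frame has
# 4 perforations, so clips can be slipped in 1-perf (1/4 frame) steps.
FPS = 23.976
PERFS_PER_FRAME = 4

frame_ms = 1000 / FPS
perf_ms = frame_ms / PERFS_PER_FRAME
print(f"1 frame            = {frame_ms:.2f} ms")
print(f"1/4 frame (1 perf) = {perf_ms:.2f} ms")
```

That roughly 10 ms slip resolution is what lets you chase offsets that fall between frame boundaries.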

Media Composer v7 brought new functionality that brings new workflows to the product, namely Color Transform (source side LUT) and Image Transform (FrameFlex). But keeping the flexibility of linked AMA, LUTs, and FrameFlex with double system audio workflows can be a bit of a minefield. Understanding what can be synced and 1/4 frame slipped with these functions before you start a workflow is important. In some cases, the production will have to decide what is more important: color management, quality image extraction, or accuracy of sync with double system workflows.

The following two images show the results of double system workflows with a total of 42 combinations of 7 source clip types. The NAME in the bin starts with how the clip was created:

  • Picture AMA Linked
  • Picture Transcode from AMA link
  • Picture via Avid’s Dynamic Media Folder feature
  • Picture via a third party dailies system like Resolve, Colorfront, MTI Cortex, etc.  
  • Audio (BWF) via AMA link
  • Audio (BWF) via transcode from AMA link
  • Audio (BWF) import

For picture transcoding, there is an additional set of files that include compatibility mode ON or OFF for either Color or Image Transforms (or both). This covers most, if not all, of the methods by which picture and sound essence can be created. Then there are the two methods of syncing mentioned above, resulting in 21×2 sync clips. I have to admit this took a bit of time to create and keep track of, but the information is good to know before realizing too late that a particular method won't work after spending all that time on a transcode process.
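One way to read that count: 3 non-transcode picture paths plus 4 transcode compatibility-mode variants gives 7 picture variants, crossed with 3 audio variants and 2 sync methods. A minimal sketch of that matrix, with variant labels that are my own shorthand rather than the actual bin names:

```python
from itertools import product

# Enumerate one plausible reading of the 21x2 test matrix described
# above. The labels are illustrative, not the exact bin clip names.
picture = (
    ["AMA link", "DMF", "third-party dailies"]
    + [f"transcode (color={c}, image={i})"
       for c, i in product(["on", "off"], repeat=2)]
)  # 7 picture variants
audio = ["BWF AMA link", "BWF transcode", "BWF import"]  # 3 audio variants
sync_methods = ["bin AutoSync", "timeline"]              # 2 sync methods

combos = list(product(picture, audio, sync_methods))
print(len(combos))  # 42
```

Generating the matrix programmatically is also a handy way to build a checklist so no picture/audio/sync pairing gets skipped during testing.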

Each method is in its own bin; one for syncing clips in the bin and the other via the timeline. The clip I used was a typical DSLR type clip that did not have common timecode, but had a clean slate and clap. The clip was VA1A2 as recorded in camera (scratch audio), and the BWF files were 8 track polyphonic with the MIX track on A1. The resulting VA1 .sync clip was the result of options in the AutoSync dialog window to remove the audio from video and keep A1 from the BWF. The project type is 1080p/23.976 with 35mm/4 perf active.

In the first case, syncing the clips directly in the bin two at a time will always sync (as compared to the timeline method), but only the imported BWF audio was able to be 1/4 frame slipped for more accurate sync. All other combinations will not 1/4 frame slip. Green clips are 1/4 frame synced; red clips are not. Click on the thumbnail for the full bin view with comments. The columns indicate the ability to sync and to 1/4 frame sync as separate processes. For each case where sync or 1/4 frame slip could not be performed, the error message is listed.


In the case of timeline syncing, the results are a bit more of a mishmash. In some cases you can sync and 1/4 frame slip, in others just sync, and with some, nothing at all.


As seen in these results, in order to have 1/4 frame slip capabilities, the picture needs to be transcoded with no active Image or Color Transforms applied. This means one needs to choose which is more important to the workflow at this stage of the process. This is more or less handled by syncing clips from the bin, but for productions needing to use PluralEyes, or those with no slate/clap on their dailies that must sync via a timeline, the options are much more limited. So be sure to plan ahead!

FrameFlex Continued

February 5th, 2014

There is quite an interesting FrameFlex thread evolving on the Avid Community Forums. It seems that there is still some confusion as to what FrameFlex is intended to do and its expected behavior in the current version (7.0.3). To me, the parameters available are no better than what you would find in the standard resize effect, as all you can affect is the XY pixel extraction and position. The only benefit of FrameFlex is its ability to access the full resolution of the camera's original files when working with larger than HD sizes, resulting in a better quality image than a scaling operation from an HD proxy. That's it.

What is confusing, and what I was addressing in my previous blog entry on FrameFlex, is that the user needs to be aware there is a quality difference between using FrameFlex on the source clip and using it on an event in the timeline when doing a transcode. As long as the clip is dynamically linked to the camera original via AMA, what you see in the timeline is the extraction from the source file. Any operations done on an event in the timeline are combined with any that were done on the source clip.

One might use the source side FrameFlex to correct a boom mic in the shot, for example, leading to more "corrective" use on the entire clip, as it does not allow for keyframes. Using FrameFlex in the timeline is for more creative needs, as you are choosing the framing, moves, zooms and such in the context of the story and the events before and after the one being affected. A different span from the same clip in another event can have different settings. What you cannot do is save off a FrameFlex effect and apply it to other clips as you can with every other effect. Nor is there "relational" FrameFlex the way color correction has it, for creative reframing of the same shots in the timeline. Maybe in a future release.

The important issue is that all this works great as long as the clips are dynamically linked. But since this assumes greater than HD sources, performance is often an issue with AMA linked clips. So most users will highlight the sequence and perform a transcode to their finishing resolution, as documented in many AMA workflows.

This is where the quality issue comes into play: the transcode dialog box allows the transcode process to "bake in" either the Image or Color Transforms as part of the operation, but it does not include any of the FrameFlex parameters used in the timeline, only those applied to the source. What is needed is an additional option allowing the user to include timeline FrameFlex as part of the transcoding process from a sequence. This way, all .new sources for that timeline are baked in with the expected quality an extraction offers from the larger resolution images.

As it stands, the resulting source clips are transcoded to 1920×1080 from whatever the original resolution might have been. If FrameFlex was used on the source, it will be applied. But all other effects in the timeline, including timeline FrameFlex, will be scaled from the 1920×1080 image.

If you want to maintain the quality that FrameFlex offers when using it in the timeline, you must render the timeline or do a video mixdown, and not transcode the sequence.
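The pixel math behind this is easy to sketch. Assuming a hypothetical UHD (3840×2160) source and a timeline FrameFlex extraction of half the frame, the two paths start from very different amounts of real picture information:

```python
# Illustrative pixel budget for a timeline FrameFlex extraction,
# assuming a 3840x2160 source and a 50% region of interest.
UHD = (3840, 2160)
HD = (1920, 1080)
ZOOM = 0.5  # region of interest is half the source width and height

# Render/mixdown path: the ROI is pulled straight from the UHD frame.
roi_render = (int(UHD[0] * ZOOM), int(UHD[1] * ZOOM))     # 1920x1080 real pixels

# Transcode path: the clip is scaled to HD first, then the same ROI
# is cut from the HD image and upscaled back to full raster.
roi_transcode = (int(HD[0] * ZOOM), int(HD[1] * ZOOM))    # 960x540, then upscaled

print("rendered path source pixels: ", roi_render[0] * roi_render[1])
print("transcoded path source pixels:", roi_transcode[0] * roi_transcode[1])
```

In this example the transcode path begins with only a quarter of the pixels for the same output frame, which is exactly the softness visible in the comparison images.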

One of the comments in the Avid Forum suggested using AvidFX, but it too suffers from the fact that all effects can only use the output of the FrameFlex effect when dynamically linked, which is 1920×1080. So doing the same effect in AvidFX is no different in quality than using the 3DWarp as seen here. Click on the image for the original 1920×1080 exported frame from a 4K UHD frame size via a Media Composer timeline:



FrameFlex render from the timeline (not transcoded)


The thread on the Avid Community Forum raises other issues users have come across that you might want to be aware of when using FrameFlex. If you understand what it’s currently capable of, you can create higher quality extractions as long as you don’t need to rotate the image. AAF roundtrip with FrameFlex is another area where users need to be careful, and I will document a Resolve AAF roundtrip in a future blog that allows FrameFlex parameters to remain relevant for any last changes needed in the finishing process.

 Update: Rotate has been added to FrameFlex with Media Composer 8.4. 

Aspect Ratio Mattes for 16:9 Projects

January 26th, 2014


This link will download a bin of preset Matte Effects for commonly used aspect ratios that can be added to the top layer of an HD sequence. Ever since HD was introduced in Media Composer, the preset aspect ratio mattes have still assumed a 4:3 source and were never updated for 16:9. These mattes were created and adjusted for a 16:9 aspect ratio. The bin was created in v6.5.4.1 and should open in most versions of Media Composer. The mattes are as close as I can get based on the parameters available.
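For anyone who wants to adjust or verify a matte manually, the underlying geometry is simple: the letterbox bar height for a given ratio inside a 1920×1080 frame is the leftover height split between top and bottom. A quick sketch:

```python
# Letterbox bar height for common aspect ratios inside a 1920x1080
# (16:9) frame. Ratios narrower than 16:9 (e.g. 1.66:1) would need
# side mattes instead, so those return 0 here.
WIDTH, HEIGHT = 1920, 1080

def bar_height(aspect):
    """Height in pixels of each top/bottom matte bar."""
    active = WIDTH / aspect              # visible picture height
    return max(0.0, (HEIGHT - active) / 2)

for ar in (1.85, 2.20, 2.39):
    print(f"{ar}:1 -> {bar_height(ar):.1f} px per bar")
```

For 2.39:1, for instance, this gives bars of roughly 138 pixels each, which you can sanity-check against the downloaded presets.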

Update to FrameFlex vs. Resize

January 26th, 2014


It seems I fell victim to the different behaviors of FrameFlex on the source side versus in the timeline when I did my original test, described here. In my test, I compared FrameFlex in the timeline to a resize using the 3DWarp effect with the HQ setting active. Where I went wrong was assuming that transcoding the sequence with FrameFlex baked in used the pixels from the higher resolution clip it was linked to, and I was surprised that both results looked exactly the same. The actual behavior is that a transcoded sequence will apply the FrameFlex parameters set in the Source Settings as an extraction, but not the ones created with FrameFlex in the timeline, regardless of the active settings in the transcode dialog window. The result behaves no differently than a 3DWarp or other resize effect, with no difference in quality, defeating the purpose of using FrameFlex in the first place.

My thinking was that once the editing was done and a conform to the camera originals was completed, one could just do a transcode to a mastering resolution and continue on from there. In order to preserve the higher quality extraction offered via FrameFlex, you need to render the FrameFlex effects in the timeline as you would any other effect, as seen in these examples (courtesy of Grant Petty; 4K images from the Blackmagic camera):

Click on image for 1920 x 1080 version. Here is the transcoded version of the event in the timeline:


And here is the rendered version:


As you can see, the rendered image is sharper overall in comparison to the transcode, as it uses all the pixels of the FrameFlex region of interest. In transcoding, the image first gets scaled to 1920×1080, then a resize is applied. So be sure to plan accordingly when using FrameFlex in the timeline for your mastering needs. I suggest creating a clip color for the event in the timeline, as it is not possible to know which clips have a timeline FrameFlex applied; the green dot can now mean one or all of the following: frame rate mismatch, XY resolution that does not match the project, or an active color transform.

Media Composer and OS X Mission Control (spaces and desktop)

January 7th, 2014


I got an email from my friend Joseph Krings about using Mac OS Spaces with Media Composer, because I had mentioned it to him as being quite useful. Well, it wasn't me but our mutual friend Tim Squyres who had made the suggestion, and now it was something I just had to try. I am probably the last person to know about this function, but I started looking into how to set it up with Media Composer when editing on a MacBook Pro (or any single monitor configuration).

There is plenty of information on setting up multiple desktops, such as this one. On my MacBook Pro, I press the F3 button to access the UI for setting it up. Just move the cursor to the upper right corner and you will see a square with a + sign. Click that. In my test scenario, I added three desktops in this order from left to right:

  1. Bins
  2. Script in full screen mode (optional, if you're using ScriptSync) 
  3. Composer and Timeline windows

This is done by dragging the individual windows onto each of the desktop icon representations, then organizing their layout within each desktop. All in all, it works quite well. Double clicking a clip will load it in the Source monitor as expected, and that desktop view will become active. "Find Bin" and ScriptSync editing all work as expected. The main benefit for me is not having to deal with multiple open bins and trying to organize them in whatever real estate I have available on a single screen. While tabbed bins are nice for some workflows, there are times when I want to see multiple bins in frame view, where a single glance will tell me the coverage or information needed. And each bin can take on the size needed for the best display. Once set up, I "four-finger swipe" back and forth between desktop views, closely replicating the two monitor (or three screen in this case) Media Composer experience I am accustomed to when editing on desktop systems.

The only small inconvenience is that Media Composer does not remember the desktop layout it belongs to when launched a second time. There is a way to pin an application to a desktop view, but it only works for applications that have a single UI window. Media Composer has multiple windows assigned to different desktops and therefore cannot be pinned via the OS X UI. I am still testing workspace layouts within Media Composer to see if I can get a combination that works, and will provide an update if possible. Perhaps someone else has been successful in saving a multi-desktop configuration with Media Composer? If so, let me know! Because of this, I leave the Project window and bins on the original desktop view, as they will always open there, moving the Composer and Timeline windows to another desktop view. It's quick to set up and good for the whole session.

Have fun swiping!