Working with ProRes RAW in non-Apple Software Applications

January 8th, 2019


Update NAB 2019: ProRes RAW support announced by Assimilate, FilmLight, MTI Film, and Telestream.

At NAB 2018, Apple announced the availability of ProRes RAW. As of this writing, it is still only supported in Apple products when created in the field by certain camera-side recorders such as those from Atomos. As we all know, the codecs productions use in the field are sometimes not communicated to postproduction, and you end up with a file that either is not supported or needs to go through some steps before you can use it. So if you are using Adobe Premiere Pro, Blackmagic DaVinci Resolve, or Avid Media Composer, you will need to add some steps to your workflow if you receive ProRes RAW in your cutting room, and decide how you plan to do an offline/online if needed.

In all cases, you will need to do a “semi” dailies workflow using Apple Compressor. The reason for “semi” is that Compressor does not offer fully featured dailies requirements such as dealing with double system audio, additional metadata, reporting, ALE export, etc. It will mainly be the video processing step into a codec that meets your needs. In some cases, the output from the Compressor workflow outlined below will be all you need; you can select your preferred codec/data rate and color space and work with that directly in the NLE of your choice. With the assumption that you will start and finish in a non-Apple product, you will need to first create a new submaster in a high-quality compatible codec. The latest version of Apple Compressor added support for ProRes RAW, and that is where the process begins. The following is a suggested workflow, but you can change the color space used depending on your workflow and pipeline. It is just one example of creating a new finishing-quality submaster with as much information in it as possible when transcoding to a non-RAW format for final color correction.

  1.  Open files in Apple Compressor. Sample ProRes RAW files are provided by FilmPlus Gear for download.
  2. Because it is a new submaster, I use the highest quality ProRes available (ProRes 4444XQ) and the P3 color space as shown in the screenshot (click to enlarge):
  3. Set your output settings as needed for location, etc. Then click “Start Batch”.
  4. I tried DNxHR as a codec, but it does not offer the same color space options, nor any control over which DNxHR data rate to use. It does offer up a message indicating it is a legacy codec and will not be supported in the future. Avid has released a statement regarding this change in the Apple OS. It seems that it will be up to the individual application to support the different codecs, and it is unclear at this time whether Apple Compressor will support it moving forward. As of now, the encode fails, and thus the choice to go with Apple ProRes 4444XQ as my finishing submaster format.
  5. Once the files are completed, the process is the same as using a ProRes file. If an editing proxy is needed, I find that using the ArriLogC LUT works well enough for editorial dailies. This LUT can be applied in any of the applications or as part of a dailies process, leaving the P3 submaster for color correction as needed. As mentioned, you may just want to create a ready-to-edit file if there is no need for a proxy workflow, and apply a different codec/data rate and color space of your choosing - for example, ProRes HQ with Rec709 color space.

Before (P3):
(click to enlarge)

After (with ArriLogC709):
(click to enlarge)

Perhaps this year Apple will open ProRes RAW to all systems to the same level as is currently available for ProRes. We have seen them change their position regarding ProRes creation on Windows with Adobe products.

Using DaVinci Resolve’s Waveform Sync with Avid’s Media Composer AutoSync

November 11th, 2017




Media Composer introduced Waveform Syncing for grouping only in version 8.5 (January 2016). Using group by waveform for double system audio workflows is possible, but not a fun experience as described in this blog. In the almost two years since that release, syncing non-timecoded sources, or partially timecoded sources via waveform for double system audio workflows is still only available in other systems like Adobe Premiere Pro or DaVinci Resolve.

Update: AutoSync using waveforms was added to Media Composer v8.10 in December 2017. While it works well, the user must select the sources that belong together prior to the sync process, as noted in the blog linked in the previous paragraph.

Here is how to take advantage of Resolve’s waveform syncing: combined with some ALE text editing and ALE merging, you can leverage it for Avid AutoSync. By syncing double system in Media Composer, you retain the ability to 1/4 frame sync and keep all BWF metadata as needed for a smoother post process with Avid Pro Tools. The process steps below are all about creating the sync metadata in Resolve to be used in Media Composer.

In Resolve, add your video and audio sources into the Media Pool. Make sure all your settings are properly set for REEL NAME so the ALE will merge. “Clip filename” in the settings is the one to use for AMA linking workflows.


In this case, all the original video clips have two tracks of audio and the double system audio clips have eight. Something to keep track of later when editing the ALE.  Select all the clips and choose Audio Sync Based on Waveform and Append Tracks (right-click):

Appending tracks just makes it easier to see that the total number of audio tracks has changed. In this example, it will end up being 10 audio tracks.


Select all the sync clips (10 audio tracks) and create a new timeline (right click):


From the timeline, choose to export an ALE:


Next, the ALE needs to be edited to reflect the original number of audio tracks associated with the video clips. Notice the Tracks metadata shows V and audio tracks 1-5. Why it doesn’t show 1-10 appears to be a bug or limitation with Resolve, but since we need to edit it back to 2 audio tracks, it does not really matter in this process:
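The hand edit can also be scripted. The sketch below assumes a standard tab-delimited ALE with a “Tracks” column and the common “VA1A2…” track naming convention; verify both against your own export before relying on it:

```python
def reset_tracks(ale_text, new_tracks="VA1A2"):
    """Reset the Tracks column of a Resolve-exported ALE back to the
    original track layout (here: video plus two audio tracks)."""
    out, tracks_idx, in_data = [], None, False
    for line in ale_text.splitlines():
        if tracks_idx is None and "Tracks" in line.split("\t"):
            # Column header row: remember where "Tracks" sits
            tracks_idx = line.split("\t").index("Tracks")
        elif line.strip() == "Data":
            in_data = True
        elif in_data and tracks_idx is not None and "\t" in line:
            fields = line.split("\t")
            fields[tracks_idx] = new_tracks
            line = "\t".join(fields)
        out.append(line)
    return "\n".join(out)
```

Run it over the exported ALE before merging, and every event’s Tracks value is rewritten in one pass instead of edited row by row.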




In Media Composer, the edited ALE needs to be merged with the video. Media can be AMA linked or come through a dailies process; it does not matter. Resolve inserts the sync relationship of the audio timecode into the Auxiliary TC1 column, and this is the information needed for AutoSync. Of course, any other logging that may have been done in Resolve will come across as well. Make sure the import settings for Shot Log are set before importing the ALE.







Now the audio can be imported or linked. I still prefer importing audio files into a 35mm film project so I can easily slip by 1/4 frames. Recent versions of Media Composer do allow sample-based slipping via Source Settings, but that is treated as an effect, is not as efficient for syncing dailies, and most importantly, does not yet translate to Pro Tools.

Import Audio:


Duplicate the START column into the Auxiliary TC1 column via cmd/ctrl+D:


Resulting in:


The usual AutoSync process with its options is now available as a batch sync process. Put the audio clips into the same bin as the video clips (why can’t sync be done across bins with the results going into a separate bin, seeing as stereo 3D grouping has been allowing that since v6?), then choose AutoSync and select Auxiliary TC1 as the method to sync by:


In this example, only track 1 was chosen since it was the mix track, but the remaining 9 ISO tracks are available via the double match frame workflow. Now, based on the timecode from Resolve’s waveform sync, the clips are in sync and can be further slip-synced if needed:


It may seem like a lot of steps, but the process is pretty quick once you do it a few times, and can be faster than creating sync by eye and ear in Media Composer.

XAVC-S and Media Composer

August 12th, 2017


Update September 6, 2018: It seems that Media Composer 2018.7.1 and maybe back to 2018.5 cannot read the embedded timecode. This has been fixed in 2018.8. 

Update January 18, 2018: Media Composer release 2018.1 now supports timecode when available in the files.

Update August 24, 2017: Media Composer 8.9.1 now supports XAVC-S as part of the included Generic AMA plug-in. The below solutions may come in handy for other formats. Be sure to read the Read Me and What’s New guides. It also seems that if your XAVC-S file has timecode, it will be ignored, making conform with third-party systems that do support timecode more problematic. Some issues with long files failing to transcode have also been reported.

Update September 8, 2017: Nablet has released an XAVC-S AMA plug-in for $89. See below.  


The XAVC-S codec was announced April 8, 2013. Four years later, support for the codec still requires third-party tools or some user intervention to work with these file types in Media Composer. Avid’s Sony AMA page does not help users looking for this support, as it does not suggest the different solutions available.

Following is a list of solutions that range from free to paid. Each comes with its own advantages and disadvantages, and it will be up to the individual user to decide what’s best for them based on time and budget.


Change extension from .mp4 to .m4a: Use at your own risk. This can be done on a per file basis or with a batch renamer of your choice. While this is a quick trick, it will lose timecode if present in the original file and it will not work for 50fps and higher. OS X and Windows.
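If you go the extension-rename route, it is easy to script. This sketch batch-applies the trick described above while keeping copies of the originals; the same caveats apply, so use at your own risk:

```python
import os
import shutil

def rename_extensions(folder, old_ext=".mp4", new_ext=".m4a", backup=True):
    """Batch-apply the extension trick described above.
    Keeps a copy of each original in an 'originals' subfolder when backup=True."""
    for name in os.listdir(folder):
        base, ext = os.path.splitext(name)
        if ext.lower() != old_ext:
            continue
        src = os.path.join(folder, name)
        if backup:
            keep = os.path.join(folder, "originals")
            os.makedirs(keep, exist_ok=True)
            shutil.copy2(src, keep)  # preserve timestamps on the copy
        os.rename(src, os.path.join(folder, base + new_ext))
```

Remember that, as noted above, the renamed files will lose embedded timecode if present and the trick does not work for 50fps and higher material.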

Sony Catalyst Browse: Catalyst Browse transcodes to several formats but not DNxHD. That requires Catalyst Prepare (see the comparison and the subscription entry below). It offers LUT and color correction management. OS X and Windows.

DaVinci Resolve: Transcoding is always a solution. Resolve does come in a free version but is limited to 4K UHD (3840×2160) resolution on export. Of course, Resolve can do a whole lot more than just transcode, and with that comes some responsibility in properly converting based on REEL and file format. Transcode times and media management need to be considered. OS X and Windows.


MP4 to QT: Videotoolshed’s MP4 to QT might be the best balance of cost and time when dealing with XAVC-S. It is a rewrap of the existing codec with no transcoding involved, and allows resulting files to be directly accessed via AMA link in Media Composer. A user can then choose whether to edit native or use Media Composer to transcode. OS X and Windows.


Divergent Media EditReady 2: EditReady 2 is a very popular program that balances ease of use with rewrap, fast transcodes to several formats, and great controls for burn-ins, color management, frame rate adjustments, resizing, and audio track management. OS X only.


Nablet XAVC-S AMA Plug-in: Nablet has released support for XAVC-S as its own AMA plug-in, supporting timecode, entire volumes, 4K, and frame rates up to 240fps.


MediaReactor AMA: Drastic Technology’s MediaReactor offers a lot of different codec support in the full version, but they also offer any one of the codecs as a single AMA plug-in, of which XAVC-S is one. You need to reach out to the company to purchase. OS X and Windows.


 Sony Catalyst Prepare ($149.95/year): Provides all of the transcoding functionality of Catalyst Browse but includes DNxHD and a host of other management functions and a few additional export formats. See the comparison chart for details. OS X and Windows.


For those with Adobe CC, Adobe Media Encoder can provide a transcoding solution but is not available as a standalone subscription. 

Avid forums have also suggested there was going to be support for this wrapper and codec with Media Composer, but nothing of note has been formally announced. I thought it might have been available with the release of Media Composer|First, but perhaps will be part of a future update. See update above. 

Head Start for Disabling Media Composer Resolutions

January 21st, 2017


Update 11/28/18: Added the new DNxHD/HR Uncompressed Resolutions.

In many situations, it is desirable to eliminate resolution choices in Media Composer to prevent wrong ones from being used. This is especially helpful in workgroup environments where consistency of codec selection is very important. Avid does make it possible for Media Composer to not load certain resolutions at startup by creating a DisabledRes.txt file listing all the resolutions to disable. That process is explained here in the Avid Knowledgebase (2014).

As noted in the Knowledgebase article: “do not remove all resolutions”. I ran with all of them removed and it still launches; you just don’t have any resolutions at all.

The following download link contains a starter file that lists all the resolutions that I could find across all project types. You can add those back in (and remove others) by typing them into the list. For this scenario, it would be nice if Avid offered the alternative “EnabledRes” so that the user would only need to enter the few they do want to use, which would be an easier process. It would also have been nice to have a starter file on the Knowledgebase that stays updated as new resolutions are added, from which you remove the ones you want to use. If I did indeed get all of them, there are 128 of them.

To make it easier for others, download the complete DisabledRes file here and remove the ones you want to work with. Add it to the Media Composer folder as noted in the Knowledgebase article.
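Until an “EnabledRes” option exists, you can approximate one with a few lines of scripting: keep a list of the resolutions you want, and generate the DisabledRes.txt contents as everything else. The resolution names below are placeholders for illustration; use the names from the actual starter file, copied exactly as Media Composer lists them:

```python
def make_disabled_res(all_resolutions, keep):
    """Emulate the missing "EnabledRes" option: given the full resolution
    list (e.g. from the downloadable starter file) and the few you want
    to keep, return DisabledRes.txt contents disabling everything else."""
    keep_set = {r.strip() for r in keep}
    return "\n".join(r for r in all_resolutions if r.strip() not in keep_set)

# Hypothetical names for illustration only:
full_list = ["DNxHD 36", "DNxHD 145", "DNxHD 220", "1:1 10b"]
print(make_disabled_res(full_list, keep=["DNxHD 145"]))
```

The inverted approach means your short “enabled” list is the only thing you maintain as Avid adds resolutions to the full list.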

In my testing so far, this does not seem to affect any of the DNxHR resolutions in some releases of Media Composer (ex: 8.5.x) and they continue to be available despite being part of the list. But in 8.7.2, the DNxHR resolutions are not listed. Might be release specific.

Thanks to Dennis Bethke for getting this list started. If I missed any, please let me know.

OS X Automator to Shut Down Avid Application Manager

December 16th, 2016



For those using Media Composer with dongles, Avid’s Application Manager is only useful from time to time, and when you do need it, you just launch it, log in and do what you need to get done. But for the majority of the time, there is no need for it to be running in the background using up cycles (even small ones) and pinging for something new to report. We all get news about updates through several other channels already.

Just quitting out of Application Manager does not quit the Helper application. That has to be done manually via the Activity Monitor. It’s been a long-time request that there be a setting in the Application Manager to not launch on startup. While waiting for that feature, Dennis Bethke created a small Automator workflow that will shut down both the application and the helper with a simple double-click. Maybe someone has a Windows version they can share and that I can point to from this blog entry.
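A rough Python equivalent of that Automator workflow might look like the sketch below. The process names are assumptions on my part; check Activity Monitor on your own system and adjust before using it:

```python
import subprocess

# Process names below are assumptions -- verify in Activity Monitor
# and adjust to match what is actually running on your system.
PROCESSES = ["Application Manager", "AvidApplicationManagerHelper"]

def quit_app_manager(dry_run=False):
    """Kill both the Application Manager and its helper, like the
    Automator workflow linked above."""
    cmds = [["killall", name] for name in PROCESSES]
    if dry_run:
        return cmds  # inspect what would be run without running it
    for cmd in cmds:
        # killall exits non-zero when the process is not running; ignore that
        subprocess.run(cmd, check=False)
    return cmds
```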

Download here.

Here are steps to disable it from launching on startup.

Create ProRes on Windows

December 3rd, 2016


I was involved with a VR project that had a 4K clip that could not be opened or played in any professional app I tried. I tried transcoding it, but alas, the usual transcode tools decoded it as a green corrupt frame. So I started looking at any transcode application I could find with a Google search. I found “WinX HD Video Converter Deluxe” (for OS X and Windows). Lo and behold, it opened and played back the file with no issues. More interesting, though, was the fact that I was on a Windows system and the software exported several flavors of ProRes. All this for US$35.

There seems to be a rule that consumer applications have lots of words in their product names, and the UI has that nice consumer look. But all that aside, the ability to easily create ProRes at a bargain-basement price outweighed any of the consumer approach to the application.

Choose an Output Profile in its ProRes Final Cut Pro section:
(click all graphics to enlarge)


Once selected:


Further settings:


The exported file seen in Media Info:


It’s not the fastest encoder/transcoder, and I have not really done a whole lot of quality comparisons against the same file being transcoded on a Mac, but at the very least, creating a ProRes file of a work in progress to send to your Mac-based clients is easy to do at a great price.

Note that many of these types of applications may be creating ProRes without a proper Apple license. 

Media Composer Frame Rate Conversion

November 6th, 2016


I came across a posting on the Avid Editors of Facebook where someone was extolling the quality of Apple Compressor’s frame rate conversion. Others just advised using Media Composer to convert. Personally, I have not really used Compressor on a regular basis, so I thought I would put it to the test and compare it to Media Composer. For this test, I went with the following scenario: standard definition interlaced material in a 1080p/23.976 project. This is pretty common with clip shows that go back into their digital archive.

I found a source clip that had both camera motion (zoom) and content motion (truck driving by) that originated on an SD 29.97 interlace format. The timeline is 1080p/23.976 and represents three (not counting color space) conversions:

  • Interlace to progressive
  • Frame rate conversion
  • Motion

In Compressor, I selected “Best quality” for both “Resize Filter” and “Retiming Quality”.

In Media Composer, I used several methods and motion algorithms in order to find the best quality conversion.

  • Source Clip Transcode from AMA link to original file
  • Edited the AMA linked original file into 7 sequences and applied a different motion effect type to each, as seen in the Motion Effect Editor


Whenever possible, I used the motion “Adaptive Deinterlace Source” setting as seen here, as in all cases it gave a better quality conversion when dealing with this type of footage and conversion:


Keep in mind when judging image quality that the GUI viewers in Media Composer do not show the full frame, and if you are judging there, make sure you are in green/green mode and not in any proxy modes. You are better off judging quality on the client monitor in green/green mode, or exporting a series of frames and looking at those.

In all cases the Apple Compressor conversion delivered a better quality image. There are a few Media Composer Motion Effect settings that can be used to get close as seen in the example frames below.

It is unfortunate that source clip transcode to project rate is actually one of the poorer quality versions, as it defaults to “Blended Interpolated” and there is no way to change that for source transcodes. Setting a preferred render type in the Render settings does not affect the type of transcode being used.

So now the user has to manage the quality on the timeline and can set the default render to their preferred look before rendering the effects. The downside is that this is not ideal when making new source masters you might want to archive from the project itself. In most cases, the uprez also looked a bit better with Compressor, but that may be a side effect of the field interpolation. Another test will need to be done on matching formats to judge uprez only. In this particular test, FluidMotion did a good job and came closest to matching Compressor on motion, but that can be hit and miss depending on footage without going in and editing vectors.

For all Frames, click to see full resolution.

Apple Compressor:



Source Clip Transcode:


Motion Effect: Blended Interpolated:


Motion Effect: Blended-VTR:


Motion Effect: Interpolated Field:


Motion Effect: VTR Style:


Motion Effect: Both Fields:


Motion Effect: FluidMotion:


One thing I noticed is that the timecode conversion from 29.97DF to 24 was different from what Media Composer does with its timecode conversion. Media Composer converts at the start of the second in this type of conversion; I haven’t yet figured out what Compressor is using as a calculation:

Original File timecode:            01;02;33;25
Compressor Conversion:         01:02:31:21
Media Composer Conversion: 01:02:33:20

The timecode is something to keep in mind when doing conversions if ever needing to go back to camera/file master. If you are using Compressor, I would recommend a transcode to a new submaster and choose a finishing quality to start.
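The drop-frame arithmetic behind those numbers can be sketched in a few lines. The second function below reproduces the “start of the second” behavior Media Composer appears to use (keep HH:MM:SS, rescale only the frame field); Compressor’s calculation remains unknown, so it is not modeled here:

```python
def df_to_frames(tc):
    """True elapsed frame count of a 29.97 drop-frame timecode (HH;MM;SS;FF).
    Drop-frame skips 2 frame numbers each minute except every tenth minute."""
    hh, mm, ss, ff = (int(p) for p in tc.replace(";", ":").split(":"))
    total_minutes = hh * 60 + mm
    dropped = 2 * (total_minutes - total_minutes // 10)
    return (hh * 3600 + mm * 60 + ss) * 30 + ff - dropped

def mc_style_to_24(tc):
    """Media Composer appears to keep HH:MM:SS and rescale only the frame
    field at the start of each second: frame 25 of 30 -> frame 20 of 24."""
    hh, mm, ss, ff = (int(p) for p in tc.replace(";", ":").split(":"))
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff * 24 // 30:02d}"

print(df_to_frames("01;02;33;25"))    # true elapsed frames since 00;00;00;00
print(mc_style_to_24("01;02;33;25"))  # 01:02:33:20, matching Media Composer
```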

XYZ to Rec709

November 1st, 2016


An interesting workflow challenge was brought to my attention by my friend Job ter Burg ACE/NCE. He was looking for a LUT to re-edit a version of a trailer that was already in the XYZ color space used in DCP delivery. The desire was to edit while looking at a proper color space rather than the rather greenish look of XYZ, especially when working with producers and directors.


as XYZ


Well, come to find out, it was not easy to find a DCI-XYZ to Rec709 LUT, so I reached out to my very talented colorist friend, Bradley Greer of KyotoColor, and he provided me with one, as well as a Rec709 to DCI-XYZ LUT just in case I needed both. These LUTs are in .cube format, so they can work in a variety of software solutions that support the .cube LUT format, including NLEs like Media Composer and Premiere Pro.

Download both LUTs:

DNxHD LB. What’s in a name?

October 21st, 2016


You read that correctly. DNxHD LB. I came across this a few weeks back when working with R3D footage and Redcine X Pro and thought there was an error in the menu nomenclature for creating DNxHD 36 proxy media as seen here:


After generating a test file to ensure it was DNxHD 36 (and it was), I started checking into why RedCineX Pro was using this naming convention seeing as “DNxHD_data rate” has been the norm since DNxHD was first released in 2004 (DNxHD 36 was released in 2007). With a little digging I found out that Avid is changing how the codecs are referenced and eliminating the data rate from the name and asking third parties to make the change. So now you have:

  • DNxHD LB (36, 40, etc.)
  • DNxHD SQ (115, 120, 145, etc.)
  • DNxHD HQ (175, 180, 220, etc.)
  • DNxHD HQX (175x, 180x, 220x, etc.)

I can understand why DNxHR has these labels as the codec can scale from 256×120 to 8192×8192 when using custom project sizes:


And when combined with the different frame rates of 23.976, 24.000, 25.000, 29.970, 30.000, 47.952, 48.000, 50.000, 59.940, and 60.000, the data rate (and storage calculations) can be a matrix of hundreds, if not thousands, of combinations. DNxHD already had a little confusion associated with it, as the data rate changes with the frame rate within a family: 145 and 220, for example, whereas 36 was just named that, as it was only available in progressive projects.
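The storage side of that matrix is simple arithmetic once you know the data rate in megabits per second; the 36 and 220 figures below come from the family lists above:

```python
def storage_gb(mbps, duration_s):
    """Approximate video storage: megabits/second -> decimal gigabytes.
    Divide by 8 for bits->bytes, by 1000 for MB->GB."""
    return mbps * duration_s / 8 / 1000

# One hour of DNxHD 36 vs one hour of DNxHD 220:
print(round(storage_gb(36, 3600), 1))   # ~16.2 GB per hour
print(round(storage_gb(220, 3600), 1))  # ~99.0 GB per hour
```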

The challenge will be getting everyone on board with the new naming conventions in a timely manner and ensuring it does not lead to misunderstandings and wrong media being created when requesting a certain format, especially in a dailies situation. Since Avid did not make the naming change for DNxHD when DNxHR was released, and DNxHR was marketed as a 2K+ type codec, users have different understandings of what DNxHR is and what to ask for. For example, you can create a custom 1920×1080 project which only allows for DNxHR to be created, and it will be the exact same data rate and storage as DNxHD, but the media will not play in earlier versions of Media Composer (pre-8.3, introduced in December 2014). Also, Avid’s DNxHD/HR landing page does not really refer to the new nomenclature either (and does not mention a key benefit of DNxHR HQX supporting 12 bits):

(click for larger image)


The only other application that I have found that uses the new nomenclature to date (there may be others) is Premiere Pro, when selecting sequence presets for example:

(Click for larger image)


And in Adobe Media Encoder:


The above is also a good example of why mistakes might happen. By asking for, or referring to, the codec by the shorthand “DNX LB”, a user might create either DNxHD LB or DNxHR LB. In the above (very long) menu, I would recommend they at least call it out by the full name “DNxHD LB”.

Ironically enough, even the latest version of Media Composer does not refer to DNxHD (LB, SQ, HQ, HQX) in its own transcode UI, which is why users might mistakenly think they are different codecs:


DaVinci Resolve still uses the DNxHD (36, 40, etc.) names, and then other third parties just use their own naming conventions like EditReady:


So as Avid takes this phased approach with codec naming, be explicit about which DNx codec you are referring to; HD or HR and know what LB, SQ, HQ, and HQX map to when working with applications that have already changed over to the new naming convention.

48fps Editing

August 21st, 2016


One of the new features that came in Media Composer 8.3 (December 2014) was the ability to edit at higher frame rates (up to 60.000p). Combined with the resolution/frame rate independence of the DNxHR codec, this provides solutions for content being produced for specialized venues, experience, etc.

Recently I was approached with the question of how well 48fps editing would work for a feature-length production. I contacted my friends at RED to provide me with a clip shot at 48.000fps with the project rate at 48.000fps so it would not be tagged as a 2x slomo clip. All testing so far has been done with this one single clip.

The good news is that you can edit 48fps (and 47.952) at its native rate.
The bad news is that several of the common workflows expected with the process do not work.

Once you click “custom” in the Media Composer projects, you will see the additional frame rates not available with the preset project templates:


One thing you will notice here is that there are no film settings available once you click custom. Tracking KeyKode and film elements is not so much the issue (although it could be, since film itself has no defined 48fps rate); it is the fact that one cannot create a project able to use 1/4 frame sync with double system audio workflows.


Trying to open the project in a 24fps project as a workaround did not work either, and even if it did, you would only see every other frame, defeating the purpose of 1/4 frame sync. The issue is most likely related to the fact that audio being “addressed” for 1/4 frame sync is tagged as 96fps upon import into a 35mm/4-perf project (4 × 24) or 72fps in a 3-perf project (3 × 24). Something like 192fps might have to be done to support 48fps. This is the reason why you don’t see it in 30fps projects (4 × 30 = 120fps).
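The perf-addressing arithmetic above, as a tiny sketch (the 192 figure is this article's speculation about what 48fps would need, not a shipping feature):

```python
def audio_address_rate(perfs_per_frame, fps):
    """Audio "address" rate for perf-based sub-frame sync: one address
    per film perf, so 4-perf at 24fps gives 1/4 frame resolution (96)."""
    return perfs_per_frame * fps

print(audio_address_rate(4, 24))  # 96  (35mm 4-perf project)
print(audio_address_rate(3, 24))  # 72  (3-perf project)
print(audio_address_rate(4, 48))  # 192 (speculative 48fps equivalent)
```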

AMA linking to the material worked just fine. One thing to note here is that even when selecting a 48fps project, the project defaults to a 24fps editing timebase, what Avid calls “2 frame safety” to protect for tape outputs. Since this is a file-based world, and editors like to edit on every frame, be sure to change it to 48fps.

The START timecode of the clip displays as 24fps NDF; it uses the field indicators to designate 48fps counts, with either a : or . right before the frame portion of the timecode. The same applies to the RECORD-side timecode for sequence position. Unfortunately, there is no 48fps counter available as a guide, as seen in other applications, even when selecting 48fps as the timebase.
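A sketch of that display convention, assuming 0-based 48fps frame numbers and the ':' / '.' field indicators described above:

```python
def tc48(frame):
    """Display a 0-based 48fps frame count the way Media Composer does:
    a 24-base timecode whose frame separator is ':' for the first of each
    frame pair and '.' for the second."""
    pair, field = divmod(frame, 2)   # which 24fps frame, which half
    ss, ff = divmod(pair, 24)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    sep = ":" if field == 0 else "."
    return f"{hh:02d}:{mm:02d}:{ss:02d}{sep}{ff:02d}"

print(tc48(0))   # 00:00:00:00
print(tc48(1))   # 00:00:00.00
print(tc48(48))  # 00:00:01:00
```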


One issue that I have not gotten to the bottom of yet is that Media Composer’s TC does not match either the Red Player’s or DaVinci Resolve’s TC START for the same clip, while Red Player and Resolve match each other:

Red Player:




ALE importing and merging is an important part of any feature-based workflow from dailies onward, and the easiest way to test that is via a simple roundtrip of an ALE exported from Media Composer: add a custom column, or change NAME, and see if it merges back. In this case, the NAME was changed from the camera file name to scene_take.

As exported:


As modified for import:


No success. There is a mismatch error raised:


Perhaps it is due to the fact that exporting an ALE does not reflect the : or . field indicators. I edited the ALE timecode to reflect it as seen in the bin, from 22:39:25:05 to 22:39:25.05. Same error. Changing fps in the header to 24 instead of 48 does not work either; there is an fps mismatch error message. So no ALE import merge support is available. I was able to import an ALE file, but the 22:39:25.05 in the ALE was changed to 22:39:25:05, which is a +/-1 frame offset from the actual image. And while on the topic of ALE import, you need to select “Import Media”, which is a very non-intuitive menu name for non-media import.

I believe the whole :/. issue is also the basis for EDLs not being correct. Another test here is checking an EDL to make sure the counting is correct. For the test, I synced a BWF file to reflect typical double system workflows. The AuxTC and SoundTC columns all have the same behavior, and the EDLs are all the same. My test sequence is designed to reflect frame accuracy when editing 48fps, so the sequence starts with a 1 frame event, then 2 frames, then 3, 4, 5, etc. up to 24 (1/2 second), as seen here:


The EDL looks like (click for full size):


Just looking at the EDL, you can tell there will be issues:

  • No indication of it being a 48fps EDL - this can be done in the FCM command line at the top rather than having to name the sequence as such as a reminder.
  • The first line is a 0 duration despite it being a 1 frame event.
  • If you look down the record side out point, you would expect to see, 1, 2, 3, 4, but you do not.
  • According to this EDL, even if you thought of it as a 48fps EDL without the :/. indicators, the half second is event 009 rather than 024 as edited.
  • No support for the :/. indicators for frame precision. None of the available templates support it.

Using the soon-to-be-available AAF Reporting service, it sees the timecodes as 48fps, as seen in this screenshot.


So, as with the ALE roundtrip test, what does an EDL roundtrip look like? When importing the EDL, I do get prompted for frame rate, but not the project rate. Then a message about 24 events having to be modified, then an empty timeline.


The good news here is that an AAF did roundtrip correctly as long as you remember to set your project edit timebase to 48fps. If you leave it at 24, import the AAF and export an EDL, it is a very different looking one with additional events as seen here (click to see full size):


Conforming the 48fps AAF (picture only) in DaVinci Resolve was also correct, but as you can see, the EDL view of the timeline counts as 48fps, not 24 with field indicators. Click image to see full resolution:


A conform check was frame accurate which is good. It would seem that the difference in timecode noted above is not affecting the conform process and may be a display issue in Media Composer?

Another test was the audio post workflow and taking an AAF to Pro Tools. The import failed right away:


Part of any feature workflow with Pro Tools is the ability to manage changes. The change list tool does not work at all with 48fps projects. I do not know whether it is because of the frame rate or the fact that it is not a film project. Change management is needed regardless of acquisition format.

First this:




The one thing I could not test was 48fps monitoring. Avid’s own DNxIO box does not support 47.952/48fps monitoring. I am told that AJA does, but nothing on their website indicates that frame rate. Perhaps it will be part of an upcoming update. In any case, it will be relegated to certain project formats, so some form of letterboxing/pillarboxing and scaling will be included depending on project resolution/aspect ratio to match the monitor being used.

OP1a, DPX, and Same as Source QuickTime worked fine.

Answering what seemed to be a simple question, “does 48fps work with Media Composer for feature production?”, needs to be vetted against the expected workflow of a production. In some cases “yes”, in others “no”, but mostly “it depends”. It is a feature, but not a complete solution.

Update 9/10/2016:

Scott Freeman reminded me of the fps setting in “General Settings” and setting that to 48 rather than 24 allows the TC counts to be 48 as seen here:


Doing so also allowed an ALE file to be merged into the existing clip, which does not work when the project is set to 24. There is no need for this project type to default to a 24 frame count (two frame safety), and the selection for TC should be part of the Format window along with the fps edit rate.


If anything, the general setting should be “TC rate matches edit rate” as an option and the user should always have access to both counting types in the TC viewers as a choice.

With 48fps active, EDLs are now “frame accurate” as 48fps but the FCM command should still indicate frame rate along with “NON-DROP FRAME”. For the same sequence above, the EDL is now:


I then tried importing this EDL back into the project. I selected the EDL and was greeted with the parameters of the EDL but was not able to select any “project type”:


The EDL imports with no further messages until I tried loading it and got:


The rate was defined in the first window that popped up, but did not seem to apply to the sequence.  I clicked “Yes” and when it loaded, there were no events.

So selecting 48fps in the General settings allows more workflow operations to work, but some are still missing, such as 1/4 frame sync and Pro Tools post.
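To make the two counting schemes concrete, here is a small sketch (my own helper functions, not anything in Media Composer) showing how a true 48fps timecode count relates to the default 24 count with field indicators:

```python
def tc_48(frame):
    """Frame index -> HH:MM:SS:FF counted at a true 48fps (FF runs 0-47)."""
    ff = frame % 48
    s = frame // 48
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{ff:02d}"

def tc_24_field(frame):
    """Same frame index -> 24fps count with a field indicator (.1/.2),
    the default display when the General Setting is left at 24."""
    field = frame % 2 + 1          # .1 = first half, .2 = second half
    f24 = frame // 2
    s = f24 // 24
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f24 % 24:02d}.{field}"

print(tc_48(49))        # -> 00:00:01:01
print(tc_24_field(49))  # -> 00:00:01:00.2
```

Two 48fps frames map to the two halves of one 24fps count, which is why a 24-based count can only address every other frame.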

ALE Header Trick You Need To Know About

July 9th, 2016


The ALE format has not really kept up with the changes in Media Composer over the past several releases. While ALE still has limitations to be aware of when using custom columns and the like, as discussed here, the ALE header is the subject of this blog, and what you need to know when importing/exporting for different needs.

The ALE is basically a TAB delimited text file with three main sections:

  1. Header
  2. Columns
  3. Data

Additional information on the columns and basic ALE formatting can be seen in this Avid whitepaper, which unfortunately only covers Media Composer 7.0.2 and earlier and does not reflect any of the changes and new columns added since December 2013.

The Header defines global properties for all events listed. Any event can override a global value if that column is listed and its value differs from the global one. Since events can override it, the global value offers little benefit and can actually prevent an ALE from being imported, as seen in the image above. Media Composer now allows different frame rates to exist in the same bin, so even the FPS entry in the header holds no real value, since the rate is defined per event. But the global header entry with the least value is “VIDEO_FORMAT”. Before 2K+ projects were introduced, this field would display NTSC, PAL, 1080, or 720. With greater-than-HD projects, it now reads CUSTOM, which means nothing other than “not NTSC, PAL, 1080, or 720”.

Example of an HD ALE (click to enlarge):


Same clip exported from the project as UHD (click to enlarge):


Notice that the clip itself has Image Size and FPS defined; each clip could have a different value, and that is well supported. But the VIDEO_FORMAT will prevent an ALE from importing into the bin if CUSTOM is listed and you are trying to import into an HD project, and vice versa. The user has to open the ALE in a text editor and change that value, and since that simple edit lets it import, the check serves no real purpose. Why not at the very least introduce an “ignore” option when this is encountered? Better yet, when importing into a 2K+ project type, just ignore the VIDEO_FORMAT value altogether. One can even argue that the Global Header itself is no longer needed in later versions of Media Composer, and that a basic TAB file is much easier to deal with.

  • FIELD_DELIM: This could be automatically detected, or prompted for, if MC were to support different delimiter types
  • VIDEO_FORMAT: No longer needed and has been made pointless with CUSTOM
  • AUDIO_FORMAT: Already ignored as clips take on Audio/Project Settings
  • FPS: Each clip has an FPS value as part of the spec, and the user could be prompted if an issue arises during import

Without the header, the “Column” and “Data” markers can be removed as well, since TAB files are defined with the first line as the header and everything else as data. So the above would now look like (click to enlarge):


An ALE export would still need to exist for backwards compatibility, but the whole process of getting metadata into recent versions of Media Composer could be made a lot easier, without the user trying to figure out what to fix to make it work and needing an extra step in a text editor.

Then there is the need for a more robust format - TAB would be used for quick and easy interchange, but an XML schema would allow far more metadata to be described than a clip-based/TAB file can offer.
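Until something like that happens, the text-editor fix described above is easy to script. A minimal sketch (a hypothetical helper, assuming the standard tab-delimited Heading layout shown in the screenshots):

```python
def patch_video_format(ale_text, new_format):
    """Rewrite the VIDEO_FORMAT value in an ALE Heading section so the
    file will import into a project of a different raster type
    (e.g. CUSTOM -> 1080, or the reverse)."""
    lines = []
    for line in ale_text.splitlines():
        if line.startswith("VIDEO_FORMAT"):
            line = f"VIDEO_FORMAT\t{new_format}"
        lines.append(line)
    return "\n".join(lines)

ale = "Heading\nFIELD_DELIM\tTABS\nVIDEO_FORMAT\tCUSTOM\nFPS\t23.976\n"
print(patch_video_format(ale, "1080"))
```

Everything else in the file, including the per-event Image Size and FPS values, is left untouched.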

Using Resolve as Fusion Edit Connection for Media Composer

May 14th, 2016



One of the features of Avid’s Artist|DNxIO offering is the ability to use Blackmagic’s Fusion Edit Connection, an AVX plug-in connecting elements in a Media Composer timeline to Blackmagic’s Fusion. From Blackmagic Support:

The Fusion Edit Connection for Media Composer plugin is installed when you install Blackmagic Design Desktop Video (required to connect Artist | DNxIO). The Fusion Edit Connection for Media Composer plugin will work with both Fusion Free and the full Fusion Studio (requires purchase). The plugin is installed with the Desktop Video software but the Fusion Free and Fusion Studio need to be downloaded separately.

The AVX plug-in is currently only available for Windows-based systems, and only on systems with an Artist|DNxIO device connected. In this scenario, the Artist|DNxIO works as a dongle for the plug-in to operate. While that is great for those needing all the functionality of an Artist|DNxIO, it does not help those who work on OS X, in software-only mode, or in a collaborative environment where one system may be dedicated to I/O and others are VFX previz stations, or who use another Blackmagic I/O solution like the UltraStudio 4K, of which the Artist|DNxIO is the Avid-branded version.

But there is a way to recreate the same workflows, with some small advantages, by using the free version of DaVinci Resolve as the gateway to the free version of Fusion using AAF. With the most recent version of Resolve (12.5), Blackmagic has added the ability for Resolve and Fusion to work together by defining a Fusion Compound clip in the timeline on the Edit tab. There is a whole chapter in the very well written Resolve User Manual on using Fusion Connect. Always download the most recent versions of software and User Manuals from their support page. Also be aware that there are limitations to each of the offerings between the free and the Studio versions. Check the product compare pages for details.

The following is a simple example of the process. The timeline has 3 video layers where V3 has two stacked effects on it, shown in expanded view:


Activate the tracks and mark an IN-OUT over the range of the clips you want to send to Fusion.  When exporting AAF, make sure “Use Marks” and “Use Selected Tracks” are active:


How you name the AAF export will depend on how you want to manage and track the VFX workflows in your production. In this quick example, since it is the first VFX, it is named VFX_01. Eventually I could add VFX_01_v1 for versioning, etc. It will most likely be tied to the scene number in a scripted program. If you’re working as a team, make sure you set up a naming schema that everyone agrees on to make life easier.


Once in Resolve, you have the option (and flexibility) to use the MXF files from the Media Composer timeline or to conform back to the camera originals. This gives you control over the resolution and color management of the VFX process and is fairly straightforward at the time of conform.

Once imported, the Resolve timeline is just the elements as edited from the Media Composer:


Select all the elements, right click, and select “Fusion Compound Clip”. From here, follow the steps described in the User Manuals for both Resolve and Fusion. For the most part the user will want to work with the source clips - many of the more complex effects are not supported in the AAF conform, but the source elements are, and are properly aligned. This process will only get better as the AAF conform improves. Here is the layout once opened in Fusion:


Once the effect has been created in Fusion, the Resolve timeline automatically updates with the rendered effect. From here it is a simple process of exporting in whatever format you want to work with. I would suggest DPX via the AMA link, or an OP1a with an Avid DNxHD/HR codec, so that you have a new source clip that matches the VFX naming being used. This makes things easier to track for versioning and final conform later if finishing in Resolve or another third party system.

The Elusiveness of EDL Comments

March 21st, 2016


Back in Media Composer v8.2, I was testing some metadata workflows and discovered quite by accident that values in the “Comments” column appeared in EDLs when “Comments” was checked as an option. Since the original Avid/1 Media Composer, EDL comments had been restricted to comments added to a segment in the timeline. I don’t know if this was an intentional change or something that just came about as a result of something else, as there was no mention of it in the release documentation. I thought I would keep an eye on this to see if the feature evolved in follow-on releases.

Comments Column in the Bin

I should start by saying that the “Comments” column itself is quite elusive. It seems to be a standard column within Media Composer, but it is not exposed as a standard column when choosing columns. If one goes to the bin’s script view and adds commentary to the large text area on a clip, the column then becomes selectable as a custom field via “Choose Columns”, despite it already being part of the bin. Importing or merging an ALE with a “Comments” column will also find its way into that text area. Once there is active metadata, it can be saved to other bin views.

Adding Comments to EDL

Once text has been added to a “Comments” column and “Clip Comments” is selected from List Options/Include in List for both the Picture and Sound sections, those comments will be added to the EDL after each event, preceded by a *:

001  A001C007 V C  08:21:04:02 08:21:09:18 01:00:00:00 01:00:05:16

This can be quite useful when needing to add specific metadata to an EDL for a downstream process. A user can duplicate any column’s value into the “Comments” column then generate a list. A workaround using ASC CDL columns for Stock Footage tracking was blogged about here.

But there are limitations to be aware of when using “Comments”. I am not guaranteeing this is a complete list, but these are the ones I discovered when considering different use cases in typical workflows:

  • Comments will appear in an EDL when added to a master clip
  • Despite a subclip displaying the Comment from the master clip in the bin, the EDL will not have the comment when edited from a subclip
  • If the user overwrites the existing Comment on a subclip in the bin with a comment, then that comment will appear in an EDL.
  • The same goes for .sync clips. Comments can be added to the V and A master clips, and the resulting .sync clip will display the V comment, but the EDL will have no comment. User can enter a new comment on the .sync clip and it will appear in the EDL
  • A group clip has a different behavior – the resulting group clip does not display any of the originating clip “Comments” in the bin and the EDL does not display any of the “Comments” from any of the angles used. A user can add a “Comment” to the .grp clip in the bin but unlike sub and sync clips, EDLs will still not show that “Comment”
  • Unfortunately, EDL comment lines are still limited to the old linear tape bay specification of 80-character line lengths, despite every other aspect of the EDL having been changed to support newer digital workflows, such as allowing up to 129 characters for the Source/REEL. Comments wrap at 80 characters into multiple * comment lines, which makes parsing a bit more problematic downstream.
  • Comment lines are changed to all UPPERCASE instead of keeping the text as entered in the bin. This prevents some interesting parsing algorithms from being used.
  • If a user entered a “Comment” on a specific event in the timeline via the “Add Comment” function, the EDL will display that “Comment” and not the value from the bin. The EDL comment does not differentiate between the two, making it inconsistent and potentially misleading or problematic depending on use.

This behavior is still present in the most recent release, 8.5.1, so I suspect this is how it will work for a while. It’s an interesting feature to use if you are aware of its limitations, in order to get expected results.
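For anyone consuming these lists downstream, the 80-character wrap means comment text has to be re-joined from consecutive * lines. A sketch (assuming comment lines immediately follow their event and wraps fall on word boundaries, so fragments can be re-joined with a space):

```python
def comments_per_event(edl_lines):
    """Collect * comment lines under each EDL event, re-joining lines
    that were wrapped at the legacy 80-character limit."""
    events = {}       # event number -> comment text
    current = None
    for line in edl_lines:
        line = line.rstrip()
        if line[:3].isdigit():            # event line, e.g. "001  A001C007 ..."
            current = line[:3]
            events[current] = ""
        elif line.startswith("*") and current is not None:
            frag = line.lstrip("*").strip()
            events[current] = (events[current] + " " + frag).strip()
    return events

edl = [
    "001  A001C007 V     C        08:21:04:02 08:21:09:18 01:00:00:00 01:00:05:16",
    "* STOCK FOOTAGE HOUSE A, SHOT 42, LICENSED THROUGH",
    "* 2020 ONLY",
]
print(comments_per_event(edl))
```

The forced UPPERCASE cannot be undone after the fact, so any case-sensitive tagging scheme has to be encoded another way.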

Use Avid’s Group by Waveform Function for AutoSync

February 1st, 2016


Update 12/23/2017: Avid released Media Composer 8.10.0 which added waveform syncing to AutoSync:



It has the exact same functionality and limitations as waveform syncing for group clips, but now available within the AutoSync settings window. The syncing process itself works well but has the following limitations and behavior:

  • The user must select the clips that belong together first, before selecting sync by waveform. You cannot select a day’s worth of dailies and let it find the sync relationships for you as with similar solutions in other systems. 
  • Despite what the “What’s New in 8.10″ wording of this new functionality might suggest, what it can actually do is sync one VA clip with multiple A-only clips. The user does not have control over which track order the additional audio-only clips will take.
  • If you select a VA and an A clip that have nothing to do with each other, you will still end up with a .sync clip. No warning that a sync relationship could not be established.
  • On the other hand, if you select a V-only subclip created from a VA clip and an A clip, you will be warned that clips with no audio will not be synced, but you will still get a VA .sync clip with no sync relationship established. Is it trying to sync to audio from its parent master clip? Maybe, but the resulting clip is not in sync based on the master clip’s original audio.

Syncing via waveform as a batch process can be done in DaVinci Resolve and the sync relationship metadata can be used via ALE to AutoSync as detailed in the blog: Using DaVinci Resolve’s Waveform Sync with Avid’s Media Composer AutoSync

Original Blog:  

Media Composer 8.5 brings lots of new functionality, one of which has been a long-time request: using audio track analysis to sync clips. This functionality has been available in other NLEs for some time and was available to Media Composer users via PluralEyes. PluralEyes still has advantages over what is currently offered in Media Composer, namely the ability to sync in batches. How to use PluralEyes for syncing dailies can be read in this previous blog.

Waveform syncing in its first release is reserved for grouping clips in preparation for multicamera editing. It is not yet available for syncing dailies in AutoSync or with the multigroup function. It does work quite well, but has a few quirks to be aware of: it will always create a group clip whether the selected clips belong together or not, and it will even create a group clip from a batch of TIFF files that have no audio. The user is warned that clips with no audio will be ignored, but you get a group clip anyway. The function depends on the user properly selecting the clips that belong together before using this feature.

But there is a workaround to use this function to sync dailies when there is no other method to find proper sync. My example footage does have a slate and claps, but I am using it because the source clip had scratch audio to match against the double system BWF files. It is a multi-step workaround, but can come in handy when there is no common timecode, slate, or clap. My example uses a clip from a feature film that was shot on the Canon 5D (no timecode) and an 8 track BWF recorder with TOD. The BWF files were imported into a 35mm 4 perf project so that I can still slip sync by quarter frame when needed, as described here. The process starts by selecting the two clips you know belong together and selecting “Group Clips” from the Clip menu (no longer in the bin menu as of 8.5).


This will create a group clip. Load group clip into source monitor and Mark IN and OUT to active picture and sound (do not leave any Avid black).


The resulting timeline with waveforms active will look like:


For whatever reason, the better recorded tracks in this file were on tracks 5 & 6. Highlight the tracks you want to keep in the resulting .sync clip and constrain/drag them to tracks 1, 2, etc. (1&2 in this example):


Back in the bin, highlight the sequence that was created and select “Commit Multicam Edits” by right-clicking the sequence. This is an important step; otherwise the AutoSync function will not work.


Highlight the new sequence that was created from the previous step, and select “AutoSync” from the Clip Menu.


The result is a subclip that behaves like any other AutoSync clip with sync offset indicators, the ability to match back to original BWF files to grab ISO tracks, and track patching is indicated in the EDL. And since this was a 35mm 4 perf project, the ability to slip sync on quarter frame boundaries.

Keep in mind that scratch track audio in many cameras is not perfectly in sync to start with (see this blog), so any resulting waveform sync will be out of sync by the same amount. This can be handled as a post-sync process as described with the quarter frame sync slip. It would be nice to see, in any of these waveform syncing applications, the ability to add a +/- offset as part of the original clip metadata to be used during the sync process. For example, in Media Composer a column named “Sync Offset” (or other name) could have a value of +1 or -1 representing scratch track audio being ahead or behind by 1 frame. The user could enter whatever value is needed, and decimals would be a good thing too, even if only to match Avid’s quarter frame support: +/- 1, 1.25, 1.5, 1.75, 2, 2.125, etc.
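To illustrate the proposal, here is a sketch of what consuming such a column could look like (the “Sync Offset” column and these helpers are hypothetical, not an existing Avid feature; 23.976/48kHz assumed):

```python
from fractions import Fraction

def offset_to_samples(offset_frames, fps=Fraction(24000, 1001), rate=48000):
    """Signed sync offset in frames (quarter-frame steps like +1.25 are
    exact in binary, so Fraction(float) is safe) -> audio samples to slip."""
    return round(Fraction(offset_frames) / fps * rate)

def offset_to_perfs(offset_frames):
    """In a 35mm 4-perf project, one perf equals a quarter frame."""
    return round(offset_frames * 4)

print(offset_to_samples(1))     # -> 2002 samples at 23.976/48kHz
print(offset_to_perfs(-1.25))   # -> -5 perfs
```

A dailies or syncing tool could then apply that sample offset automatically after the waveform match, instead of the manual quarter-frame slip described above.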

We’ll see how this new waveform function gets improved in future releases.

Alternative to Video/Audio Mixdown

January 19th, 2016


There’s an interesting thread on the Avid Community discussing the speed of exports and when to render effects, etc. As of late, I have been using AMA File Export with OP1a as the export format, as it renders and transcodes all in one step while exporting the file to a destination of my choice. This may not work for everyone depending on their needs, but for me, I have found the following:


  • Single one step export and does render/transcode all in one pass
  • Many applications support MXF OP1a, such as Premiere Pro, Resolve, Assimilate, Adobe Media Encoder, FFmpeg, VLC, Sorenson, and Handbrake
  • Includes timecode of sequence/clip of exported file
    • Makes it easier to manage than a Mixdown in Media Composer, as it now also has a user-defined source that can be referenced. Use the AMA MXF plug-in to access the file back in the project.
  • Supports different audio configurations from mono, stereo, 5.1, and direct out
  • Not affected by MoviePlayer’s idiosyncrasies when it comes to video levels


  • Limited to DNxHD codecs
    • This would be considered the master, requiring the use of any of the third party compression offerings for other deliverables
    • Update 8/23/2016: Using Media Composer 8.6, I am able to export DNxHR with any project type other than SD or HD, including custom sizes.
  • Cannot set video levels during export as with other exports (video or data levels)
    • Workaround is to apply LUT as an effect to a top track in the timeline. For example, if sequence is Video levels and you want to have it full range (data levels), then apply “Levels scaling (video levels to full range)” from the LUT selection. Export using “selected tracks” in the AMA output as needed.

Update 1/23/2016:

I did a test of various methods of exporting a sequence to see which one would be the fastest. I created a 10 minute sequence (1080p/23.976) from DNxHD 115 source clips. The sequence has 2 V tracks and every event is a Picture In Picture for the entire 10 minutes. There is no audio; I am just testing video speeds for now. Some processes required one step and others required two or three, and the times only reflect export/render times, not the additional time it might take to set them up (as in the case of a mixdown to a new sequence).

From fastest to slowest:

  • 04:15: Un-rendered sequence directly exported as MXF OP1a DNxHD 115 (one step)
  • 06:24: Export QT Sames as Source from un-rendered timeline (one step)
  • 07:03: Render timeline and export MXF OP1a (05:38 for render, 01:25 for export) (two steps)
  • 08:04: Mixdown of un-rendered sequence and export QT Same as Source (05:31 for mixdown, 02:33 for export SaS) (two steps)
  • 08:05: Render timeline and export QT Same as Source (05:38 for render, 02:27 for export) (two steps)
  • 10:55: Render timeline, then do a mixdown, and export QT SaS (05:38 for render, 03:10 for mixdown, 02:27 for export) (3 steps)

In all cases, the fastest and easiest method is to create an AMA MXF OP1a export. Even with everything rendered, the MXF export takes only 58% of the time of a QT SaS export. A direct one-step export as MXF OP1a takes about 66% of the time of the next fastest method, which is a direct QT Same as Source from the same un-rendered timeline.
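Those percentages read as time ratios from the table above (how long the MXF path takes relative to the alternative); a quick check:

```python
def secs(mmss):
    """'MM:SS' -> total seconds."""
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

# rendered timeline: MXF OP1a export (01:25) vs QT Same as Source (02:27)
print(f"{secs('01:25') / secs('02:27'):.0%}")  # -> 58%

# un-rendered one-step exports: MXF OP1a (04:15) vs QT SaS (06:24)
print(f"{secs('04:15') / secs('06:24'):.0%}")  # -> 66%
```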

Thanks to Bill Busby for correcting my MXF terminology. I also got some great feedback from JC Bond regarding OP1a as the first step in creating a guaranteed single-codec DNxHD .mov (Same as Source). I will just copy/paste what he sent, as he said it better than I can:

Use OP1a export to create an MXF file, I then AMA link to that file and do a Same as Source QuickTime export. This is the FASTEST way to get a QuickTime that is guaranteed to be a Single Codec / Resolution. Regular Same As Source QuickTimes can be MIX RESOLUTION QuickTimes since they are truly a SAME AS SOURCE and the user can have different codecs and or resolutions in the timeline. In the past the only way to ensure a single codec / resolution QuickTime was to do a CUSTOM QuickTime at that codec.

JC and I also discussed the ability to use ClipToolz [see update below] to quickly rewrap from MXF to MOV with the DNxHD codec as an alternative.

Update 8/16/2016:  Additional benefit as noted by Jared Zammit on the Avid Editors of Facebook:  One great upside to this workflow - AMA file export will not let you export any sequence with offline media. No more scrubbing a timeline for red clips, only to miss a tiny offline subtitle in an hour long sequence. Rejoice!

Update 1/28/2017. ClipToolz is no longer a product but users can look to Convert4 as an option.

Need More Storage Than What DNxHD 36 Offers?

December 28th, 2015


When DNxHD 36 was introduced in version 2.6.4 in March of 2007, it offered a balance of image quality and low storage requirements that was a perfect fit for HD offline workflows. As seen in the above graphic, 15GB for 1 hour of footage. It is still a popular codec today, but with more test screenings coming straight out of Media Composer to DCP, and the cost of storage going down, many studio productions are shifting to DNxHD 115 as the offline codec.

But in other markets, I have come across productions needing a more compressed HD data rate than DNxHD 36. Perhaps it was to move dailies to a smaller drive to edit on set, or the drive wasn’t fast enough to play back the higher data rate, or budget, or a combination of all of them. When Avid introduced greater-than-HD project support in version 8.3 in December of 2014, it came with a new resolution-independent codec called DNxHR. This codec was needed to support the new project resolutions and aspect ratios now available. The older DNxHD is a 720p and 1080 16:9 only codec at the handful of data rates offered.

For productions still shooting 1080 formats, it is possible using DNxHR to create media that is smaller than DNxHD 36. But with compression, it is a trade-off between storage, performance and image quality and understanding what these are will let you decide what is best for your workflow.

When in a 1080 project using 1080 sources, the Consolidate/Transcode box offers the option of Proxy Encoding at 1/4 and 1/16. When selecting this, your only option is DNxHR LB, the lowest data rate of the DNxHR codecs. A good rule of thumb: at 4K UHD, which is 4x bigger than 1080 HD, DNxHR LB is the equivalent of DNxHD 36. So if you were in a 4K UHD project and transcoded to DNxHR LB 1/4 proxy, it is the same as 1080 DNxHD 36.

You can create these DNxHR proxies from AMA linked clips directly, or from 1080 sources already at DNxHD 36. Note that these 1/4 and 1/16 proxy formats can only be created in Media Composer. Unfortunately they are not available in third party dailies applications - perhaps that will change in future versions.



For comparison purposes, I have also included Avid’s H.264 800 Kbps. It offers the most storage savings and is mainly designed for Avid’s remote editing solution, but it can be considered as well depending on your needs.

For this explanation, I have used a 1 minute 1080 source clip (exactly 00:01:00:00) so that comparison between codecs can be easier to see. Here are the clips in the bin after each of the transcodes:


Comparing storage, and how much more storage each codec type gives us:


So as we saw in the first graphic, 1 hour of DNxHD 36 was 15GB. Using any of these other proxy formats gives you ~4x to ~41x more storage, plus better performance on slower drives and more layers of real-time playback for VFX or multicamera editing in Media Composer. The last factor to consider is what they look like. Depending on your needs and how you are monitoring, any one of them may work, but the biggest difference is whether you need to monitor at 1920 x 1080 on a client monitor or full screen on a 1920 x 1080 GUI monitor. The smaller data rates may or may not hold up, based on the complexity of the image and the detail required for editorial decisions. A 1080 DNxHR LB 1/16 transcode may work fine if all you use are the smaller Source/Record GUI monitors and don’t go full screen or have a client over your shoulder. Also keep in mind, as per a previous blog, that operations such as stabilize at proxy resolutions do not give you the same results when going back to full res or camera masters.
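The per-hour figures follow from the nominal data rates (DNxHD codec names denote megabits per second; the H.264 proxy runs at 0.8 Mb/s). A quick video-only sketch - real files add audio and container overhead, so measured sizes will differ a bit from nominal:

```python
def gib_per_hour(mbps):
    """Nominal video-only storage for one hour at a constant data rate."""
    return mbps * 1_000_000 / 8 * 3600 / 2**30

print(f"DNxHD 36:        {gib_per_hour(36):.1f} GiB/h")   # ~15, as in the graphic
print(f"H.264 800 Kbps:  {gib_per_hour(0.8):.2f} GiB/h")
print(f"nominal ratio:   {36 / 0.8:.0f}x")                # vs ~41x measured
```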

Here is what the four “smaller than DNxHD 36″ options look like when transcoded and viewed at 1920 x 1080. Click to see full resolution.


I find that in this situation, the DNxHR LB 1/4 Proxy offers a nice balance of image quality to storage savings. H.264 800 Kbps Proxy offers the absolute best storage rates, and looks slightly better than DNxHR LB 1/16 when editing in the GUI monitors alone (no 1920 x 1080 full screen). This is due to LongGOP’s ability to provide higher image quality at higher compression rates compared to I-frame only encoding as found in DNxHD or DNxHR.

Here is DNxHR LB 1/16 Proxy (left) and H.264 800 Kbps Proxy (right) when viewed in Source/Record Monitor on a 1920 x 1080 monitor. Click to see full resolution.


Converting MS Word to Text With Layout

December 10th, 2015



A question came up recently on the Avid Editors of Facebook on how to take a screenplay written in Microsoft Word and export it for use with Avid Script Based editing while preserving the script layout. In older versions of Microsoft Word, there used to be a “save as text with layout” option, but that is no longer available.

One solution is to use Fade In, an excellent, low cost, professional screenwriting application that, even in demo mode, allows importing and exporting different file types. In this scenario, the script is exported from Microsoft Word as .RTF and opened in Fade In. Once opened, go to the File menu and select export as “Formatted Text”. That will preserve the screenplay format and be ready to use in Avid’s Script Based editing interface.

If on OS X, my go-to tool for further text manipulation is TextWrangler. One example brought up in the Facebook thread was adding more left-side margin to the file. This can easily be done by opening the text file, selecting all the text, and using the “Shift Right” function from the Text menu (command-]), which inserts a Tab each time it is used.



After Shift:


The script used here for the demo is from the 2015 film “Straight Outta Compton” - available for download with other 2015 screenplays from IndieWire

Complete Your BWF Export

December 9th, 2015


When exporting audio as BWF and you want to ensure metadata integrity, you need to finalize the process with a BWF editor such as Sound Devices’ free Wave Agent, available for both Windows and OS X; it should be part of everyone’s toolset. You can find more info and the download here.

The example shown is a quick multitrack sequence simulating a 5.1 mix as mono tracks to be exported as a poly file for archive or other purposes. Substitute your own naming convention and timecode per your own needs. Note that this also applies to exporting BWF source clips directly from the bin, as poly or mono.

As you can see from the above image, I have a basic 6 track audio sequence in a 1080p/23.976 project/timeline starting at 01:00:00:00, and I have renamed my tracks. I export using direct out as seen in these settings:


Opening the exported file in Wave Agent shows the following (click to enlarge)


As indicated, the timecode now shows 01:00:03:18, which is not correct. The Project and Track info are blank; it is a long-time request to have that metadata taken from the project and track names automatically.

In order to correct and embed the proper timecode metadata back into the file, uncheck “Preserve Start TC”:


Then from the Frame Rate menu, select the frame rate that matches the project from which it was exported:


This will update the timecode by properly embedding the frame rate/samples since midnight value into the file.


At this point, I also add the track info back onto the tracks and optionally add a project name that is most likely the same as the file name, but is now embedded in case that gets changed. Be sure to click the “Save” button before leaving the application.


Now this information is part of the BWF file to be re-purposed as needed - even when re-importing back into Media Composer. It seems that the frame rate value is not being defined in the bEXT chunk of the BWF when exported. This is from the Sound Devices webpage on timecode:

TC frame rate:  This is the frames per second rate. It is also used to convert the HH:MM:SS:FF time code value to a ‘Samples Since Midnight’ value and vice versa. It is stored in the bEXT chunk as the ‘SPEED’ parameter and in iXML as the ‘TIMECODE_RATE’ parameter.

As seen here in this Sound Devices Tech Note.
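The conversion the Tech Note describes is easy to reproduce. A sketch for non-drop rates (a hypothetical helper, with 23.976 and 48kHz assumed as in this example):

```python
from fractions import Fraction

def samples_since_midnight(tc, fps=Fraction(24000, 1001), rate=48000):
    """HH:MM:SS:FF at a non-drop frame rate -> the 'samples since
    midnight' TimeReference value stored in the bEXT chunk."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    nominal = round(fps)  # frames per timecode second (24 for 23.976)
    frames = ((hh * 60 + mm) * 60 + ss) * nominal + ff
    return round(frames / fps * rate)  # real elapsed seconds * sample rate

print(samples_since_midnight("01:00:00:00"))  # -> 172972800 at 23.976/48kHz
```

Note the 23.976 value is larger than the 172800000 you would get at a true 24fps; getting the frame rate wrong is exactly why the re-imported file above showed 01:00:03:18 instead of 01:00:00:00.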

FrameFlex and Stabilize

November 23rd, 2015


Another FrameFlex related topic popped up on the Avid Community forums regarding the ability to use the extra pixels available when linked to a higher resolution for stabilization. That thread can be read here. As noted in my previous blogs on FrameFlex, any effect applied to an image is only getting the scaled output format from FrameFlex which is your project type. So if you are linking to a 4K UHD size source clip and are in a 1080 HD project, any effect applied is to the already scaled image and is 1080 HD.

The only time there is no scaling being applied is when the extraction target is equal to the project size. That is done by checking “Size matches project raster” and this is now a pixel for pixel extraction of the source. So any re-framing that is greater or less than that is being scaled, or resized to fit.

So the desire is to take advantage of the larger resolution file and something called “oversampling” to maintain the highest quality image possible. Here is an online article that refers to some of those advantages: “Why Everyone Should be Shooting 4K - Even for HD Delivery“. But oversampling can work against you depending on the overall factor difference between the source and target sizes, as discussed here.

The proposed workflow for stabilization is to do so in the file’s native source resolution. This means creating a project that matches the source resolution. If the aspect ratio is the same, as with HD and UHD, then it is possible to just flip the project format. If not, you will need to create a pixel for pixel matching project size so as not to introduce any scaling with mismatched aspect ratios. So the steps are:

  1.  Create project that matches source raster
  2. Add clip to timeline and apply stabilize effect. Make sure to turn off “auto-resize” or similar functionality. You want to end up with a clip that shows black borders as this indicates no re-sizing was done at this stage.
  3. Render to a high quality DNxHR resolution. This clip will now become your new source. One can also do a mixdown as a new source, but that has no reference at all to any sources should it be needed downstream.
  4. Export the sequence to create a new source clip. Unlike the recommendation from the linked thread, I don’t rely on the render staying in place when going back to the HD project; adding more effects there could force another render, which is too much to keep track of, and you may end up defeating the purpose of doing the effect in the high resolution project to begin with. I export Same As Source or MXF OPAtom at my desired DNxHR resolution to create that new source clip.
  5. Go back to the HD project and Link (AMA) to this new source clip. From here you can choose to re-frame out the black areas on the source, or if key frames are needed, do it on the timeline.
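For step 1, the aspect-ratio check and the oversampling factor involved can be sketched with a few lines of Python. This is just illustrative arithmetic using the UHD/HD and 4K DCI rasters mentioned above:

```python
from fractions import Fraction

def raster_match(src_w, src_h, proj_w, proj_h):
    """Report whether a source raster shares the project's aspect ratio
    (i.e. you can just flip the project format), and the scale factor a
    full-frame extraction down to the project raster would apply."""
    same_aspect = Fraction(src_w, src_h) == Fraction(proj_w, proj_h)
    scale = src_w / proj_w  # oversampling factor when downscaling to project
    return same_aspect, scale

# UHD source in an HD project: same 16:9 aspect, 2x oversampling
print(raster_match(3840, 2160, 1920, 1080))   # (True, 2.0)

# 4K DCI source in an HD project: aspect ratios differ, so a
# pixel for pixel matching 4096x2160 project is needed instead
print(raster_match(4096, 2160, 1920, 1080))
```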

As you can see, there are a lot of steps and depending on the number of shots needing this, it can be quite tedious. Not to mention you have a new source clip so maintaining file names and/or timecode (if needed) takes some extra management as well. Then I decided to compare this workflow with just doing the stabilize in the 1080 project to begin with and seeing how much quality I am preserving compared to the amount of time it takes to do the roundtrip workflow.

For this test, I used Boris BCC Optical Stabilize. I used the same effect on the same AMA linked clip in both the 1080 HD and the 4K UHD project. The only difference was that in the 1080 HD project I let it auto resize to fill the screen, and for the 4K UHD shot, I followed the steps above. Then from the 1080 HD project, I exported an uncompressed 1920×1080 TIFF file to compare the image quality in Photoshop. I have to say that it was very difficult for me to see the difference between the two, if at all. Now I realize that the type of footage may produce different results, as will how much stabilization is needed and how much offset needs to be compensated for - but this test was done with RED R3D with full debayer used in all cases. Here are screenshots of the two processes side by side at 100% and 200%.

(Click to see full quality)

100% Compare


200% Compare


In both cases, the image on the right used the 4K UHD/FrameFlex workflow and the one on the left was just using the effect “as is” in the 1080 HD project. Because they are so close in quality, which method to use is more dependent on overall workflow needs and turnaround time in getting the program delivered. As to why they look so close? It’s hard to tell without more testing, but my guess is that it comes down to the scaling/resize algorithm used in FrameFlex.

Another thing to be aware of when using stabilize in a greater-than-HD project: do not use any of the 1/4 or 1/16 proxy modes to perform the analysis, as the results will not be the same. Notice the differences in resulting frame composition using the same stabilization effect in all three modes:

(click for full size image)


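As a toy model of why reduced-resolution analysis diverges, consider a tracker that can only resolve whole pixels at the analysis raster. This Python sketch is purely illustrative - it is not Avid's or Boris's actual algorithm - but it shows how the same shake can yield three different corrections at full, 1/4, and 1/16 analysis:

```python
def estimated_correction(true_offset_px, analysis_scale):
    """Illustrative only: a tracker analyzing at a reduced raster can
    only measure motion to whole pixels at that raster, so the
    correction it scales back up is quantized by the proxy factor."""
    measured = round(true_offset_px / analysis_scale)  # whole pixels at proxy res
    return measured * analysis_scale                   # back to full-raster pixels

shake = 13.0  # hypothetical horizontal shake, in full-raster pixels
for scale in (1, 4, 16):  # full, 1/4, and 1/16 analysis
    print(f"1/{scale} analysis -> {estimated_correction(shake, scale)} px correction")
```

Under this model the corrections come out as 13, 12, and 16 pixels respectively, which is consistent with the differing frame compositions in the screenshots above, even if the real analysis is more sophisticated.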
Update 10/14/2016: Media Composer 8.6 improved its scaling quality by adding Polyphase as an option (the default). The quality will be retested for an additional update in the future.

From the “What’s New Guide” for 8.6:

In previous releases of the editing application, FrameFlex effects were always rendered using bilinear image interpolation. With this release, the FrameFlex effect will be rendered according to the Image Interpolation option selected in the Render Settings dialog. This allows you to set FrameFlex as Polyphase for better interpolation.

24p blogs are available via a free iOS application available here:

Why You Should Merge an ALE, not Relink

September 26th, 2015


I have gotten many calls and emails over the years from users everywhere asking why they are losing metadata on clips while editing or generating lists. I ask, “Did you import an ALE and relink?” The answer is always “yes.”

Because of this, my recommendation has always been to merge an ALE into an existing master clip, and this problem will not happen. And yes, productions have gotten away with no issues, but it is far better to have peace of mind than to one day run into not having it when you need it. I don’t have an explanation as to why, other than it has something to do with MOBs or whatever. Maybe one of these days I will get an actual explanation, but in the meantime, I can show you what actually happens between an ALE relink and an ALE merge.

The native MXF DNxHD file (OPAtom) was made in DaVinci Resolve. I copied the MXF files to the Avid MediaFiles/MXF folder structure and launched Media Composer. In a bin, I import the ALE and as expected, it does not automatically relink. Highlight the clip(s) and do relink. Make sure to select all drives and uncheck “Relink only to media from current project”. (click for full size image)


The clip comes online and you can play picture and sound, and all the columns have the metadata from the ALE file. All would seem well and fine.

But if we take a look at the media in the MediaTool for the same clip, you will notice that the same MediaFile is missing a lot of the metadata - especially audio related metadata, but AuxiliaryTC as well. That metadata exists only on the clip pointing to the essence, and is not part of the essence itself. (click for full size image)


The better method is to create a master clip via one of these three methods:

  1. Import an AAF
  2. Open the MediaTool and drag the clips to the bin
  3. Import the msmMMOB.mdb file from the Avid MediaFiles/MXF numbered folder in which the media exists

Once the master clips exist, highlight them and do a file/import of the ALE file. This process is described in the blog “The Many Uses of ALE.” Once the ALE file has been merged, you can see that the clip in the MediaTool now has this same information on the file itself. It is now part of the MXF file, ensuring it sticks with the clip. If this media was moved to another editing system with no bins or ALE, that metadata would still be available to use.  (click for full size image)


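For those curious what a merge is actually working with, the ALE file format itself is simple: tab-delimited Heading, Column, and Data sections. This Python sketch parses a tiny synthetic ALE and folds its rows into existing clip records by Name - a rough stand-in for the idea behind the in-app merge, not a replacement for what Media Composer does internally:

```python
def parse_ale(text):
    """Minimal ALE reader: returns (heading, list of row dicts).
    ALE files have tab-delimited 'Heading', 'Column', and 'Data' sections."""
    heading, columns, rows = {}, [], []
    section = None
    for line in text.splitlines():
        if line.strip() in ("Heading", "Column", "Data"):
            section = line.strip()
            continue
        if not line.strip():
            continue
        if section == "Heading":
            key, _, value = line.partition("\t")
            heading[key] = value
        elif section == "Column":
            columns = line.split("\t")
        elif section == "Data":
            rows.append(dict(zip(columns, line.split("\t"))))
    return heading, rows

def merge(clips, rows, key="Name"):
    """Fold ALE rows into existing master-clip metadata, matching on Name -
    the same idea as highlighting clips first and then merging the ALE."""
    by_key = {row[key]: row for row in rows}
    for clip in clips:
        clip.update(by_key.get(clip[key], {}))
    return clips

# Synthetic ALE text and clip record, just to exercise the functions.
ale = ("Heading\nFIELD_DELIM\tTABS\nFPS\t23.976\n\n"
       "Column\nName\tScene\tTake\n\n"
       "Data\nA001C001\t12\t3\n")
clips = [{"Name": "A001C001", "Start": "01:00:00:00"}]
heading, rows = parse_ale(ale)
print(merge(clips, rows))
# [{'Name': 'A001C001', 'Start': '01:00:00:00', 'Scene': '12', 'Take': '3'}]
```

The point of the sketch is the matching step: rows land on clips that already exist, rather than creating new clip records the way an ALE import-and-relink does.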
So while I don’t have an official engineering explanation as to why, I know that I feel much better when I see the clips in the MediaTool and the clips in the bin having the same metadata. So for all those productions transcoding in third party applications and getting media and ALE, create the master clip first and merge. You’ll sleep better for it.