Archive for the ‘Thoughts’ Category

Using DaVinci Resolve’s Waveform Sync with Avid’s Media Composer AutoSync

Saturday, November 11th, 2017

 

resolve.jpg

 

Media Composer introduced waveform syncing, for grouping only, in version 8.5 (January 2016). Using group by waveform for double system audio workflows is possible, but not a fun experience, as described in this blog. In the almost two years since that release, syncing non-timecoded or partially timecoded sources via waveform for double system audio workflows is still only available in other systems like Adobe Premiere Pro or DaVinci Resolve. Here is how to take advantage of Resolve’s waveform syncing: combined with some ALE text editing and ALE merging, you can leverage it for Avid AutoSync. By syncing double system in Media Composer, you retain the ability to sync by 1/4 frame and keep all the BWF metadata needed for a smoother post process with Avid Pro Tools. The steps below are all about creating the sync metadata in Resolve to be used in Media Composer.

In Resolve, add your video and audio sources to the Media Pool. Make sure your settings are properly set for REEL NAME so the ALE will merge; “clip filename” is the setting to use for AMA linking workflows.

1.png 

In this case, all the original video clips have two tracks of audio and the double system audio clips have eight; something to keep track of later when editing the ALE. Select all the clips and right-click to choose Audio Sync Based on Waveform and Append Tracks:

Appending tracks just makes it easier to see that the total number of audio tracks has changed. In this example, it ends up being 10 audio tracks.

3.png

Select all the sync clips (10 audio tracks) and create a new timeline (right click):

4.png 

From the timeline, choose to export an ALE:

5.png

Next, the ALE needs to be edited to reflect the original number of audio tracks associated with the video clips. Notice the Tracks metadata shows V and audio tracks 1-5. Why it doesn’t show 1-10 appears to be a bug or limitation in Resolve, but since we need to edit the value back to 2 audio tracks anyway, it does not really matter for this process:

6.png

7.png

8.png
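For those who have not edited an ALE by hand, the file is plain tab-delimited text. Here is a hypothetical excerpt (clip name and timecodes are illustrative only) showing the Tracks value being edited from what Resolve wrote back to the video original’s two audio tracks:

Column
Name    Tracks    Start    End    Auxiliary TC1

Data
A001C007    VA1A2A3A4A5    08:21:04:02    08:21:09:18    02:14:30:12

becomes, after editing:

A001C007    VA1A2    08:21:04:02    08:21:09:18    02:14:30:12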

In Media Composer, the edited ALE needs to be merged with the video. The media can be AMA linked or come through a dailies process; it does not matter. Resolve writes the audio timecode of the sync relationship into the Auxiliary TC1 column, and this is the information needed for AutoSync. Of course, any other logging that may have been done in Resolve will come across as well. Make sure the Shot Log import settings are set before importing the ALE.

Before:

9.png

Settings:

10.png

Result:

11.png

Now the audio can be imported or linked. I still prefer importing audio files into a 35mm film project so I can easily slip by 1/4 frames. Recent versions of Media Composer do allow sample based slipping via Source Settings, but that is treated as an effect, is not as efficient for syncing dailies, and most importantly, does not yet translate to Pro Tools.  

Import Audio:

12.png

Duplicate the START column into the Auxiliary TC1 column via Cmd/Ctrl+D:

13.png

Resulting in:

14.png

Now the usual AutoSync process, with all its options, is available as a batch sync process. Put the audio clips into the same bin as the video clips (why can’t sync be done across bins with the results going to a separate bin, seeing as stereo 3D grouping has allowed that since v6?), then choose AutoSync, selecting Auxiliary TC1 as the method to sync by:

15.png

In this example, track 1 only was chosen since it was the mix track, but the remaining 9 ISO tracks are available via the double match frame workflow. Now, based on the timecode from Resolve’s waveform sync, the clips are in sync and can be further slip synced if needed:

16.png

It may seem like a lot of steps, but the process is pretty quick once you do it a few times, and it can be faster than creating sync by eye and ear in Media Composer.

The Elusiveness of EDL Comments

Monday, March 21st, 2016

edl-comments-2.jpg

Back in Media Composer v8.2, I was testing some metadata workflows and discovered quite by accident that values in the “Comments” column appeared in EDLs when “Comments” was checked as an option. Since the original Avid/1 Media Composer, EDL comments had been restricted to the comments added to any one segment in the timeline. I don’t know if this was an intentional change or something that just came about as a side effect of something else, as there was no mention of it in the release documentation. I thought I would keep an eye on this to see if the feature evolved with follow-on releases.

Comments Column in the Bin

I should start by saying that the “Comments” column itself is quite elusive. It is a column that seems to be standard within Media Composer, but it is not exposed as a standard column when choosing columns. If one goes to the bin’s Script view and adds commentary to the large text area on a clip, the column becomes selectable as a custom field via “Choose Columns”, despite it already being part of the bin. The contents of a “Comments” column in an imported or merged ALE will find their way into that text area. Once there is active metadata, it can be saved to other bin views.

Adding Comments to EDL

Once text has been added to a “Comments” column, and “Clip Comments” is selected under List Options/Include in List for both the Picture and Sound sections, those comments will be added to the EDL after each event, preceded by a *:

001  A001C007 V C  08:21:04:02 08:21:09:18 01:00:00:00 01:00:05:16
*COMMENT LINE

This can be quite useful when needing to add specific metadata to an EDL for a downstream process. A user can duplicate any column’s value into the “Comments” column then generate a list. A workaround using ASC CDL columns for Stock Footage tracking was blogged about here.

But there are limitations to be aware of when using “Comments”. I am not guaranteeing this is a complete list, but these are the ones I discovered when considering different use cases in typical workflows:

  • Comments will appear in an EDL when added to a master clip
  • Despite a subclip displaying the Comment from the master clip in the bin, the EDL will not have the comment when edited from a subclip
  • If the user overwrites the existing Comment on a subclip in the bin with a comment, then that comment will appear in an EDL.
  • The same goes for .sync clips: comments can be added to the V and A master clips, and the resulting .sync clip will display the V comment, but the EDL will have no comment. A user can enter a new comment on the .sync clip and it will appear in the EDL
  • A group clip behaves differently: the resulting group clip does not display any of the originating clips’ “Comments” in the bin, and the EDL does not display the “Comments” from any of the angles used. A user can add a “Comment” to the .grp clip in the bin, but unlike sub and sync clips, EDLs will still not show that “Comment”
  • Unfortunately, EDL comment lines are still limited to the old linear tape bay specification of 80-character line lengths, despite other aspects of the EDL having been changed to support newer digital workflows, such as allowing up to 129 characters as the Source/REEL. Comments will wrap at 80 characters into multiple * comment lines, which makes parsing a bit more problematic downstream (a small parsing sketch follows this list).
  • Comment lines are changed to all UPPERCASE instead of keeping the text as entered in the bin. This prevents some interesting parsing algorithms from being used.
  • If a user did enter a “Comment” on a specific event in the timeline via the “Add Comment” function, the EDL will display this “Comment” and not the value from the bin. The EDL does not differentiate between the two, which is inconsistent and can be misleading or problematic depending on use.
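For anyone parsing these lists downstream, here is a minimal Python sketch (my own, not any official tool) that gathers the * lines under each event and rejoins the 80-character wraps. Note that it treats every * line as comment text, so real EDLs containing *FROM CLIP NAME and similar lines would need extra filtering, and cut.edl is a placeholder file name:

import re

# An EDL event line starts with its event number, e.g. "001  A001C007 V C ..."
EVENT_RE = re.compile(r'^\d{3,6}\s')

def comments_by_event(lines):
    """Collect * comment lines under each event, rejoining 80-character wraps."""
    events = {}
    current = None
    for line in lines:
        line = line.rstrip('\n')
        if EVENT_RE.match(line):
            current = line.split()[0]
            events[current] = []
        elif line.startswith('*') and current is not None:
            events[current].append(line.lstrip('*').strip())
    # Wrapped comments arrive as separate * lines; depending on where the
    # wrap falls, joining with '' instead of ' ' may be more appropriate.
    return {ev: ' '.join(parts) for ev, parts in events.items()}

with open('cut.edl') as f:
    for event, comment in comments_by_event(f).items():
        print(event, comment)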

This behavior is unchanged as of the most recent release, 8.5.1, so I suspect this is how it will work for a while. It’s an interesting feature to consider, as long as you are aware of its limitations in order to get the expected results.

Motion Effect Types in Progressive Projects

Saturday, August 30th, 2014

motion-effect-type.gif

There was a discussion on the Avid Community forums as to what the different field-based motion effect types do when working with progressive footage in a progressive project. I thought that was a great question and set out to do a quick test using Job ter Burg’s excellent digital countdown leader available here.

I did two things with the original countdown. I did a DVE move left to right to get some additional movement, then did a 50% speed change (1/2 speed) rendered with each of the 7 motion effect types available in the Timewarp effect. In all cases, the input and output settings were set to progressive. Of the 7 different types offered, you only end up with 4 different-looking results, as the following pairs produce the same result:

  1. Blended VTR and Blended Interpolated
  2. Both Fields and Duplicated Fields
  3. Interpolated Field and VTR Style
  4. FluidMotion, the fourth result, which stands alone as its own unique look.

Below are links to JPG contact sheets made by exporting the first 6 frames of each sequence. Using A, B, C, D to uniquely identify source frames, the patterns for each are:

Blended VTR and Blended Interpolated: A|AB|B|BC|C|CD
Click link for full size contact sheet:

Both Fields and Duplicated Fields: A|A|B|B|C|C
Click link for full size contact sheet:

Interpolated Field and VTR Style: A|B|B|C|D|D
Click link for full size contact sheet:

FluidMotion: A|N*|A|N*|A|N*
Click link for full size contact sheet:

*Where N is a New frame.
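Just to illustrate the mechanics, here is a small Python sketch (my own illustration, not Avid code) that reproduces the two duplication-style patterns for a 50% slowdown, using string concatenation to stand in for a frame blend. The Interpolated Field/VTR Style pattern is field-derived and does not reduce to a duplication rule this simple:

def duplicated(frames):
    # Both Fields / Duplicated Fields at 50%: A|A|B|B|C|C
    return [f for f in frames for _ in range(2)]

def blended(frames):
    # Blended VTR / Blended Interpolated at 50%: A|AB|B|BC|C|CD
    out = []
    for a, b in zip(frames, frames[1:] + frames[-1:]):
        out += [a, a + b]
    return out

print(duplicated(list('ABC')))      # ['A', 'A', 'B', 'B', 'C', 'C']
print(blended(list('ABCD'))[:6])    # ['A', 'AB', 'B', 'BC', 'C', 'CD']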

It’s not really fair to use a countdown to show FluidMotion, as it creates new frames based on the pixels in the frame, but it is shown here just for fun.

Having so many options is a bit redundant, and confusing, in a progressive project using progressive footage with input and output set to progressive. There was no real difference in render times between any of the options other than FluidMotion, which does a lot of pixel calculations and so is expected to take longer. But now we know the answer.

*Edit: When judging motion effects frame by frame, remember to set the timeline to green/green or you will only be seeing one field, or half a segmented frame, when stepping through. This has caught me several times. The above contact sheets were done as an export, so they are the full progressive frames.

No More AvidLogExchange Application

Friday, August 8th, 2014

mc8-image.jpg

Some users are just noticing that AvidLogExchange, the application (not the .ale file format), is no longer a product nor part of the installer starting with v8. Some will also notice that Avid MediaLog is no longer available; I wrote my thoughts about that here last November (2013).

AvidLogExchange dates back to when there were more than a half dozen common “log” types from different vendors such as Aaton, Evertz, and KeyScope; all formats that were part of film-to-tape logging solutions, as well as some common video logging applications in the 90’s. Those formats have not been used for almost 15 years, as the ALE format became a pseudo standard due to its dominance in the NLE market throughout the 90’s. So those formats will not be missed, but the application still did some interesting tricks that fit different workflow needs that are still in use. A few of them would be quite easy to implement directly in Media Composer. A Product Manager at Avid (no longer there) even called me at the time and asked what I thought about EOL’ing (End Of Life) Avid Log Exchange in a future release. I said as long as the handful of useful features were not lost, it would not be a big deal. Unfortunately that did not happen, but they may still appear in a future release.

Those features are:

  • FCP log file to ALE (format) conversion, which helped in moving source metadata to Avid. There’s still a lot of FCP 7 and earlier being used.
  • ALE Clean function. This ensured that logs created outside the system did not have overlapping START and END timecodes, which would create confusion during list generation, as a single timecode could point to different sources. This is more common with tape-based sources, but can still occur when exporting files from FileMaker-type databases for use in Media Composer, which leads to:
  • TAB to ALE conversion. This is one of the bigger ones. A user could open a TAB file in AvidLogExchange and it would add the global header information required by the import. The global header information is helpful for timing checks during import, but Avid could just as easily allow a TAB file without the global header or the “Column” and “Data” field markers: the first line in the file can be assumed to be the column names, and lines 2+ the data. This would eliminate a lot of the frustration of getting the header just right and copy/pasting. Also, seeing as Media Composer can export a TAB file, it just makes sense (see the conversion sketch after this list).

    ale-header.gif

  • Record timecode as Source. While somewhat special, it does help those looking to bring in an EDL and notch an existing flattened program file. It was originally developed to support a post audio sync process on dailies, but it now has other uses, as blogged about here (using DaVinci Resolve for Scene Detection).
  • The Windows version had a nice text editor with search/replace functionality, which is quite useful these days when dealing with Tape and Source File merging workflows. It also had a nice two-window view so you could compare the original file and the resulting .ALE.
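As a sketch of how simple that TAB-to-ALE trick would be to replicate outside the old application, here is a minimal Python version; the global header values shown are assumptions to be edited for your project:

import sys

# Assumed global header values; edit these to match your project settings.
HEADER = (
    "Heading\n"
    "FIELD_DELIM\tTABS\n"
    "VIDEO_FORMAT\t1080\n"
    "AUDIO_FORMAT\t48khz\n"
    "FPS\t23.976\n"
)

def tab_to_ale(tab_path, ale_path):
    """Assume line 1 of the TAB file is column names and lines 2+ are data."""
    with open(tab_path) as f:
        lines = f.read().splitlines()
    with open(ale_path, 'w') as out:
        out.write(HEADER + '\n')
        out.write('Column\n' + lines[0] + '\n\n')
        out.write('Data\n' + '\n'.join(lines[1:]) + '\n')

if __name__ == '__main__':
    tab_to_ale(sys.argv[1], sys.argv[2])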

The ALE format is still a popular shot log exchange format, and the different ALE import/merge functions allow for some nice batch renaming/subclipping processes that will be part of a future blog. But it is getting a little long in the tooth and needs an update to fit more modern workflows, perhaps with an XML schema that would allow markers and such to be imported as a batch process on multiple clips. And that too is a subject for another day.

FrameFlex Continued

Wednesday, February 5th, 2014

There is quite an interesting FrameFlex thread evolving on the Avid Community Forums. It seems that there is still some confusion as to what FrameFlex is intended to do and its expected behavior in the current version (7.0.3). To me, the parameters available are no better than what you would find in the standard resize effect, as all you can affect is the XY pixel extraction and position. The only benefit of FrameFlex is its ability to access the full resolution of the camera’s original files when working with larger than HD sizes, resulting in a better quality image than a scaling operation from an HD proxy. That’s it.

What is confusing, and what I was addressing in my previous blog entry on FrameFlex, is that the user needs to be aware of a quality difference between using FrameFlex on the source clip and using it on an event in the timeline when doing a transcode. As long as the clip is dynamically linked to the camera original via AMA, what you see in the timeline is the extraction from the source file. Any operations done on an event in the timeline are combined with any that were done on the source clip.

One might use the source side FrameFlex to correct a boom mike in the shot, for example, leading to more “corrective” use on the entire clip, as it does not allow for keyframes. Using FrameFlex in the timeline is for more creative needs, as you are choosing the framing, moves, zooms and such in the context of the story and the events before and after the event being affected. A different span from the same clip in another event can have different settings. What you cannot do is save off a FrameFlex effect and apply it to other clips as you can with every other effect, or have “relational” FrameFlex the way color correction does, for creative reframing of the same shots in the timeline. Maybe in a future release.

The important issue is that all this works great as long as the clips are dynamically linked.  But since this assumes greater than HD sources, performance is often an issue with AMA linked clips. So most users will highlight the sequence and perform a transcode to their finishing resolution as it is documented in many AMA workflows.

This is where the quality issue comes into play: the transcode dialog box allows the transcode process to “bake in” either the Image or Color Transforms as part of the operation, but it does not include any of the FrameFlex parameters used in the timeline, only what has been applied to the source. What is needed is an additional option allowing the user to include timeline FrameFlex as part of the transcoding process from a sequence. This way, all .new sources for that timeline are baked in with the expected quality an extraction offers from the larger resolution images.

As it stands, the resulting source clips are transcoded to 1920×1080 from whatever the original resolution might have been. If FrameFlex was used on the source, it will be applied. But all other effects on the timeline, including timeline FrameFlex, will be scaled from the 1920×1080 image.

If you want to maintain the quality that FrameFlex offers when using it in the timeline, you must render the timeline or do a video mixdown, and not transcode the sequence.

One of the comments in the Avid Forum suggested using AvidFX, but it too suffers from the fact that all effects can only use the output of the FrameFlex effect when dynamically linked, which is 1920×1080. So doing the same effect in AvidFX yields no difference in quality compared to using the 3D Warp, as seen here. Click an image for the original 1920×1080 exported frame from a 4K UHD frame via a Media Composer timeline:

AvidFX

avidfx.png

FrameFlex render from the timeline (not transcoded)

frameflex2.png

The thread on the Avid Community Forum raises other issues users have come across that you might want to be aware of when using FrameFlex. If you understand what it’s currently capable of, you can create higher quality extractions as long as you don’t need to rotate the image. AAF roundtrip with FrameFlex is another area where users need to be careful, and I will document a Resolve AAF roundtrip in a future blog that allows FrameFlex parameters to remain relevant for any last changes needed in the finishing process.

 Update: Rotate has been added to FrameFlex with Media Composer 8.4. 

Media Composer and OS X Mission Control (spaces and desktop)

Tuesday, January 7th, 2014

mission_control_add_space.jpg

I got an email from my friend Joseph Krings about using Mac OS Spaces with Media Composer, because I had supposedly mentioned it to him as being quite useful. Well, it wasn’t me but our mutual friend Tim Squyres who had made the suggestion, and now it was something I just had to try. I am probably the last person to know about this function, but I started looking into it and how to set it up with Media Composer when editing on a MacBook Pro (or any single monitor configuration).

There is plenty of information on setting up multiple desktops, such as this one. On my MacBook Pro, I press the F3 button to access the UI for setting it up. Just move the cursor to the upper right corner and you will see a square with a + sign. Click that. In my test scenario, I added three desktops in this order from left to right:

  1. Bins
  2. Script in full screen mode (optional, if you’re using ScriptSync)
  3. Composer and Timeline windows

This is done by dragging the individual windows into each of the different desktop icon representations, then organizing their layout from each window. All in all, it works quite well. Double clicking a clip will load it in the source monitor as expected, and that desktop view will become active. “Find Bin”, and ScriptSync editing all work as expected. The main benefit for me is to not have to deal with having multiple bins open and trying to organize them in whatever real estate I have available on a single screen. While Tab’d bins are nice for some workflows, there are times when I want to see multiple bins in a frame view where a single glance will tell me the coverage or information needed. And bins can each take on the size needed for the best display. Once set up,  I “four-finger swipe” back and forth between desktop views closely replicating the two monitor (or three screen in this case) Media Composer experience I am accustomed to when editing with desktop systems.

The only small inconvenience is that Media Composer does not remember the desktop layout it belongs to when launched a second time. There is a way to pin an application to a desktop view, but it only works for applications that have a single UI window. Media Composer has multiple windows assigned to different desktops and therefore cannot be pinned via the OS X UI. I am still testing workspace layouts within Media Composer to see if I can get a combination that works and will provide an update if possible. Perhaps someone else has been successful in saving a multidesktop configuration with Media Composer? If so, let me know! Because of this, I leave the Project window and bins on the original desktop view, as they will always open there, moving the Composer and Timeline windows to another desktop view. It’s quick to set up and good for the whole session.

Have fun swiping!

iXML AMA Plug-in Update

Sunday, January 5th, 2014

Media Composer 7.0.3, the third maintenance release of Media Composer 7.0.0, brings some fixes and small refinements to the iXML AMA Plug-in that was introduced with 7.0.0. The good news is that it is out of the “danger zone” that I blogged about at its initial release here.

At least, in a 1080p/23.976 project, timecode is now correctly interpolated and does not drift over the course of the clip by the .1% pulldown factor (1 frame every 00:00:42:16). The Read Me also lists other bug fixes, such as being warned when linking via AMA that a timecode mismatch exists between the file and project type. While this is a nice addition, the warning only appears in the console, so the user is not notified of a mismatch at the time of linking unless always checking the console becomes part of the process, which is clunky at best. With BWF import, the user is presented with the timecode of the file and can see right away what changes might occur at the time of import.

My biggest issue now is that it is still a “Sophie’s Choice” when choosing between the AMA and Import methods, as they are not the same. Actually, BWF import has gotten worse compared to previous versions. Here are the differences with the same file using AMA versus import (click for larger image):

compare1.png

As you can see, the iXML contains a few more fields of metadata, such as “Circled” and “Wild Track”, compared to BWF, but BWF import is missing Track metadata 4-8, which used to work. iXML still does not support combining monophonic tracks into a single clip, pullup or pulldown workflows, nor 1/4 frame resync in 35mm Film Projects, even if you link and do a transcode/consolidate. As far as BWF is concerned, the frame rate prompt on import does not appear for some project types. For example, I get the timecode prompt when importing a BWF file into a 720p/23.976 project, but not in 1080p/25. And in the case of 720p, it uses a 60 frame count, which is not a SMPTE timecode standard, instead of a converted 30fps:

If AMA is the way of the future, replacing “Import”, it really needs to provide all the functionality of the existing “import”. I guess as yet another workaround, one could export an ALE of the AMA linked files and merge it into the BWF imported ones to have parity. But why make users do that? What would make AMA a great tool is to give it “source settings” like other AMA linked formats; not only for consistency, but to give the user control over the metadata with a refresh and update, much like Wave Agent does for BWF, and to incorporate the existing BWF import functionality:

wave-agent.png

It should also be noted that with 7.0.3, you cannot create an audio EDL from any imported or AMA linked BWF file; it will come up 00:00:00:00. I am sure this will be fixed in a point release, but the “workaround” for now is to duplicate the START timecode column into an Aux TC column and use that to generate EDLs.

I have no real issue with the iXML AMA Plug-in being a work in progress for a period of time, but not at the expense of existing and functioning workflows such as BWF import. I hope the iXML and BWF import functions will be addressed in the near future so choosing one is not a matter of picking the lesser evil depending on your workflow needs.

Plotagon

Monday, December 30th, 2013

plotagon.gif

In the course of my “industry research” I came across a very cool little application called “Plotagon”. It is a simple integrated script and storyboard application with real-time playback and voicing of what is written, very similar to a game engine or a SIMS-type environment. It will be interesting to see where this application goes as they expand its toolset, as it can be used in marketing, social media, education and, to some extent, filmmaking. I was able to very quickly write a bad script and, using the preset list of actions, create a small scene. Once complete, it can be shared via the Plotagon site as well as YouTube if desired. You can see this masterpiece here. The script can be exported using the Fountain markup language supported in several writing applications. The entire process is very easy, and somewhat addicting. The filmmaking process would need more controls over actions, timing, angles, etc., which would make the UI more involved, but I can see them pulling this off in future versions.

This reminds me of technology I have seen in the past with PowerProduction’s software offerings for storyboarding, and recently its integration as a plug-in for NLE systems. Martini QuickShot can be used in Final Cut Pro and Media Composer as an AVX plug-in, allowing editors to add missing shots as needed rather than just a title reading “Missing Shot”, providing better previsualization when working with production and producers. I have often sent printed timelines or exports in frame view to production to give them a better idea of the shot size and angle to better support the story. In 2005, Media Composer exported interactive HTML storyboards from Avid FilmScribe, but unfortunately most of the web-based templates no longer work.

Editing, like any language, is in a constant state of change. The combination of scripts, game engines, and editing continues to shape how stories are told and shared across different distribution channels, and it will be fascinating to see how the tools used by storytellers evolve over time.

Giving Voice to Metadata

Friday, December 27th, 2013

patent.gif

I think anyone using PhraseFind, ScriptSync, and SoundBite appreciates what dialogue can bring in finding what you’re looking for as part of the editorial process. At times it is akin to finding a needle in a haystack. So it was interesting to see the Apple patent on voice-tagging photos and using Siri to retrieve them as part of the claims expanding the capabilities of voice-based interaction with Apple devices. It will be seen as new and innovative if and when it hits a future version of iOS.

It reminded me an awful lot of a pending patent and prototype I had designed and built at Avid over three years ago that used multiple descriptive tracks on any given piece of media. Currently, metadata tagging is either clip based, frame based, or span based, and can be a drawn-out process. The idea behind this solution was to add voice annotations and descriptions to the video. In its most simple form, a single track would describe what is going on in the scene. Because it is time based (the tagging happens during record/playback), all search results line up to that portion of the clip, and “context based range searches” can further refine results. Things get even more “descriptive” when creating multiple metadata tracks, where each track can be of a certain category, for example:

  1. Characters, placement, movement, position, etc.
  2. Camera, angle, framing, movement, zooming, etc.
  3. Location, objects, day, night, interior, exterior, colors, etc.

Any search can now use all tracks, or just a subset of tracks, to filter results as needed. Combining voice tagging metadata with pre-existing metadata such as camera name, shoot date, scene, and take can make for a very powerful media management system that is not only new and innovative, but extremely useful to productions dealing with not hundreds but thousands of hours of source material. Some customers I discussed this with had needs for forty or more descriptive tracks on any given source. One could even consider recording a tagged “descriptive” track directly to the camera during production, to be used anywhere downstream in the production cycle.
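As a sketch of the underlying data model (my own illustration in Python, not the actual prototype), each clip carries multiple time-based descriptive tracks, and a search can run across all tracks or be limited to a subset:

from dataclasses import dataclass

@dataclass
class Span:
    start: float   # seconds from clip start
    end: float
    text: str      # transcribed voice description

# A hypothetical clip with category-based descriptive tracks
tracks = {
    'characters': [Span(0.0, 8.5, 'Anna enters frame left, crosses to window')],
    'camera':     [Span(0.0, 8.5, 'slow push-in, low angle')],
    'location':   [Span(0.0, 20.0, 'interior, night, practical lamps')],
}

def search(tracks, term, categories=None):
    """Return (category, span) hits, optionally limited to a subset of tracks."""
    hits = []
    for cat, spans in tracks.items():
        if categories and cat not in categories:
            continue
        hits += [(cat, s) for s in spans if term.lower() in s.text.lower()]
    return hits

print(search(tracks, 'night'))                        # searches all tracks
print(search(tracks, 'push', categories={'camera'}))  # camera track only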

Voice, the new metadata.

FrameFlex vs. Resize

Saturday, December 14th, 2013

original.png

Avid FrameFlex is a new feature in Media Composer v7 that allows for image re-framing. FrameFlex goes back to the original high-resolution master, using more pixels to create the new frame, rather than resizing an HD frame to the new size. The former scales a larger number of pixels down, while the latter blows a smaller number of pixels up, and downscaling tends to result in a higher quality image than the reverse. With this in mind, and knowing that only FrameFlex uses the original source file resolution while any other scaling operation is restricted to the HD resolution of the project, I set out to compare the different methods of re-scaling versus extraction:

  1. FrameFlex
  2. 3D Warp Effect with HQ active
  3. Standard Resize effect 

The image above is a 4K (quad HD) R3D file. As you can see from the FrameFlex bounding box, it is a rather aggressive “punch-in” for the shot: in FrameFlex terms it is 50%; in resize terms it is 200%. The results were really surprising. In the end, I did not see 200% of “wow” difference. For the most part, it was very difficult to see the differences between the two operations. While there is some very slight softening, it was not as much as I thought it was going to be. And just to be sure, I did the same extraction in RedCine-X Pro to use as a reference. In that frame there is a difference in the gray area of the shirt, which could be attributed to the 12-bit to 10-bit transcode. In all tests, the R3D was done as a FULL debayer to an uncompressed HD MXF file.
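A quick worked example of the pixel math (assuming a 3840×2160 original in a 1080 project): a 50% FrameFlex extraction pulls a native 1920×1080 window out of the full-resolution frame, so every output pixel maps to a source pixel, while the equivalent 200% resize starts from the already-downscaled HD proxy and has to build four output pixels from each source pixel in the enlarged region.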

Here are the resulting frames exported as TIFF. Click links to download each file.

I also did a quick test with the standard Resize effect, which does not have an HQ button, and there is some very slight difference there compared to the 3D Warp resize with HQ active. If you want to download the zip file with all the TIFF files, click here. In the end, it’s different tools for different jobs. The 3D Warp does give you extra image control, such as rotation to level out a horizon when needed.

Quality overall is difficult to judge from stills alone. Codec, aspect ratio (other than a multiple of 16:9), motion, and other factors do come into play, but all things being relative, I was more surprised at how well the resize from HD stood up. Even the amount of detail and noise in a shot could affect the overall quality of the resize versus extraction operations. Here is a download of the same test with the XAVC 4K codec. In this case, the 3D Warp is less crisp at the same 200% push, but as expected, with a smaller push-in it becomes less noticeable. Also, there would be a distinct visible quality difference had the same re-frame been shot at Quad HD resolution to start with versus an extraction, but that is a test for another day.

Emulating Avid ScriptSync with Apple FCPx

Wednesday, November 20th, 2013

screen-shot-2013-11-20-at-23741-pm.gif

Moviola provides a lot of great instructional videos and webinars for the film and video industry. So it was with great interest that I signed up for “Emulating Script Sync with FCP X” which was streamed on November 19th. Those who missed it can watch it as a rebroadcast here.

All in all, I learned a lot about FCPx’s handling of metadata and search functions, as I have only dabbled with FCPx. The presentation was very clearly laid out and presented by someone who really knows the application. I am also a big fan of metadata and what can be done with it, and I was impressed with many of the functions available in FCPx. The standout ones for me were:

  • Multiple selection within a clip to apply a metadata tag
  • Filter by clip type (group, sync, etc.)
  • Saved searches
  • Hide/reject spans on clip
  • Creating string-outs based on metadata spans
  • Markers are searchable
  • Batch renaming based on concatenated metadata fields
  • The promise of merging ScriptE notes from the set. I know the team at ScriptE and they create great products.

As for the “Emulating ScriptSync” portion, it was not even close to the concept of ScriptSync. The solution shown was clearly based on script metadata provided by a script supervisor via their reports. I agree that no one knows anything and everything about what is being captured on set better than the script supervisor. Re-purposing any of this metadata is a no-brainer for NLE systems and has been a long-time request with Media Composer, but its ALE merge operation is too limiting to take advantage of it at this time.

But back to “ScriptSync”. ScriptSync is not about pulling up a single span of a single clip as seen in the presentation. FCPx certainly has powerful search functions for doing so, but Avid Media Composer’s ScriptSync implementation is all about context and choices based on a review of all the takes for a given line or lines, even reactions to lines from performers not speaking. It’s about seeing at a glance the coverage for a given scene and, with a single point and click, reviewing all relevant takes, choosing the best one based on where it sits in the context of the story. As Walter Murch once told me about scrolling versus clicking to a spot: “you find the shot you need on the way to finding the shot you thought you wanted.” It is about reverse search from the timeline as well. When the director asks, “what else we got for that line?”, having the ability with a single click to open the script, highlight the intersection of dialog and selected take, and immediately see all other coverage for that span of dialog is very powerful.

But that being said, ScriptSync could be so much more with additional development, should Avid choose to do so. The addition of Nexidia’s phonetic technology a few years ago reduced the tedious task of lining a script to mere minutes, but the fact that it depends on a flattened text file is but one of the limitations hampering its full potential in both scripted and non-scripted shows that create transcripts from the dailies. As for the other features shown in FCPx, a second pass at Media Composer’s FIND feature would go a long way toward taking advantage of the metadata in Media Composer.

Also note that ScriptSync should not be confused with PhraseFind, which is also based on Nexidia technology and offers a different benefit/value to the workflow, especially if no script or transcript is available. Each has its advantages and disadvantages, but if there is any written representation of the dialog, ScriptSync is the way to go.

The Need for Dedicated Frame Counts

Sunday, November 10th, 2013

frame-count.jpg

 

In addition to VFX workflows using DPX, sequential TIFF or otherwise, many digital cinema cameras also acquire frame-based sequential files. Two examples would be the line of Blackmagic Cinema Cameras using CinemaDNG and ARRI with ARRIRAW. GlueTools is in beta with an ARRIRAW AMA Plug-in for Avid Media Composer support, and Adobe Premiere Pro CC now supports CinemaDNG natively. But frame counts are used differently depending on which files are being used and where you are in the workflow: camera originals or VFX.

There is also the challenge of long file names. Versioning with VFX can get quite long, and the Blackmagic cameras in their initial state allowed nearly unlimited file naming. Tracking these files through a post workflow involves managing the file name, the frame count of the file, and the timecode. The advantage of frame counts is that they do not need to adhere to a frame rate; they take on whatever rate is imposed on the clip itself, which is useful in high frame rate workflows, as SMPTE timecode only recognizes 24, 25, and 29.97/30 (DF/NDF). But neither of these NLEs supports a dedicated frame counter that is managed according to the workflow.

Media Composer gets close with its DPX, VFX and Transfer columns, which support up to a 32-character prefix and a 7-digit count separated by a dash “-”. But those columns were added several years ago, before frame-based cameras, and are limited in file naming flexibility. It also has a frame count that can be displayed above the viewers, but there is no way to set its preference and it has no timecode-to-frame conversion. The MetaCheater application had this feature when creating ALE files from VFX .mov proxy files, which is quite useful but could so easily be integrated into the NLE itself.

Adobe also gets close, offering preferences for frame-based counts to start at 0 or 1, as well as timecode-to-frame conversion. But it is an “or” tracking system, not a separate field where one can track both timecode and frame counts visually.

What is needed is a mashup of the two NLE solutions: a dedicated Frame Count column allowing for:

  • Start count as 0
  • Start count as 1
  • Parse frame count from file name
  • Convert timecode (from any timecode source in the clip metadata) to frames; a minimal conversion sketch appears at the end of this post
  • Track folder name containing sequential files in its own field
  • Sequence side preference for 0 or 1 frame count

A minimum of 7 digits is needed to cover the full 24 hours of timecode, but there is no real need to limit the actual frame count. Once frame counting is properly managed, it can be concatenated with any column for asset management tracking or automated pulls as part of a reporting solution as simple as:

  • <start><filename><frame_count><sequence frame count>
  • <end><filename><frame_count> <sequence frame count>

By tracking frame counts and file names separately, the NLE can offer the most flexibility in metadata management of the sources as well as the sequences/compositions. Reporting can be human-readable printouts or XML for automating pipeline processes.
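A minimal sketch of the timecode-to-frame-count conversion such a column would need, assuming non-drop-frame timecode (drop-frame would need its own handling):

def tc_to_frames(tc, fps):
    """Convert a non-drop HH:MM:SS:FF timecode string to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(':'))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# 24 hours at 24 fps needs 7 digits, as noted above:
print(tc_to_frames('23:59:59:23', 24))            # 2073599
print(f"{tc_to_frames('01:00:00:00', 24):07d}")   # zero-padded: 0086400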

Image Seduction

Saturday, November 9th, 2013

Lately I have been spending a lot of time discovering Adobe Premiere Pro CC. Future blogs will address my adventures with the product itself for the workflows I have to deal with as I get deeper into it. But I have to say that, right off the bat, there is a lot to like, and my 20+ years as a Media Composer editor found it to be very approachable. The first thing that struck me was the quality of the images in the viewers. As editors, we stare at our GUI screens and video viewers for many hours a day, so it is an important factor to consider. When I load the same media in Premiere Pro CC that I was using with Media Composer (v7), it is like a whole other viewing experience; very much like finding out you need glasses to see fine detail. The footage in this case was 4K R3D files, and while performance does affect which debayer setting is chosen during editorial, the comparisons between similar debayer settings are pretty striking.

Adobe Premiere Pro CC’s approach to debayering is much more straightforward than Media Composer’s: right-click the image and select the debayer for either the pause or playback state. Media Composer, on the other hand, does a “behind the scenes” debayer when AMA linking to R3D files, so you need to think it through. For example, when linking to a 4K R3D file, green/green mode displays a “nearest fit” debayer automatically, then scales as needed to the project type. So for a 1080 project using 4K files, it is a 2K debayer (1/2) for green/green, and from there the timeline setting will reduce it further to 1/4 or 1/8. RedCine-X Pro has a similar “Nearest Fit” option in its debayer settings for transcoding based on the resolution of the output codec:

nearest-fit.png
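My understanding of that “nearest fit” logic, expressed as a small Python sketch; this is inferred behavior, not documented Avid code:

def nearest_fit_debayer(source_width, target_width):
    """Return the debayer fraction (1=full, 2=half, 4=quarter, 8=eighth)
    whose result is still at least as wide as the target frame."""
    best = 1
    for frac in (1, 2, 4, 8):
        if source_width / frac >= target_width:
            best = frac
    return best

# A 4K (4096-wide) R3D in a 1080 (1920-wide) project: 1/2 debayer, i.e. 2K
print(nearest_fit_debayer(4096, 1920))   # 2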

I have both Media Composer and Premiere Pro installed on the same system, so monitoring, CPU, GPU, drive subsystem, etc. is exactly the same. I chose a clip with plenty of detail between main characters, focus, foreground and background. I made sure the debayer and viewer size were exactly the same.

When both were set to 1/2 debayer, the images were the closest in quality, but Premiere Pro CC is still a bit sharper overall, with the biggest difference being that Premiere Pro CC could play, scrub and JKL the images while Media Composer barely played the clip at all. For all examples, Premiere Pro CC is on the left and Media Composer on the right. Click for the full screen image:

half-half.png

The difference was more obvious with the 1/4 debayer. In this case, this was green/yellow for Media Composer, which is most likely the setting most will use, as it offers a better balance between picture quality and performance. I found that overall performance was about the same with Premiere Pro CC at 1/2 debayer and Media Composer at 1/4 debayer. Click image for the full screen version:

quarter-quarter.png

The most striking difference was with the 1/8th debayer setting. The image quality in Media Composer is quite noticeably softer, while Premiere Pro CC is still quite sharp, closer to Media Composer’s green/yellow mode in quality. Click image for the full screen version:

eighth-eighth.png

Performance aside, Media Composer’s viewer images are additionally affected by the fact that the viewers only display half an image, even after the debayer process: one field of an interlaced frame, or half of a segmented progressive frame. That makes for a big difference when going full screen for review or out to an HD monitor when working with a client. It goes without saying that when staring at the GUI source/record monitors all day, the better images are much easier on the eyes, and in a way more seductive to the editing process itself.

UPDATE: 11/20/2013

I went back and did the test again with images whose resolution matched the project type, so that any 2K+ to HD scaling and/or debayer would not be part of the image representation seen on screen. In the following examples, the footage is 1920×1080 H.264 camera originals from a Canon 5D Mark II. Again, with performance being part of why one might run a timeline viewer at less than Full resolution, the following screen grabs show Adobe Premiere Pro CC on the left and Media Composer on the right. Click for the original full size screenshot.

Full - Green/Green 

Here the quality of the images is very close, but with Premiere Pro CC still taking the lead for image quality. It may be a result of gamma or contrast differences that make it appear sharper, or Media Composer’s viewer only being a 1/2 frame display:

full.png

Half - Green/Yellow

Here it is still pretty close but in addition to whatever might be going on with the “Full - Green/Green” version, the Media Composer softness starts to become more noticeable in comparison. 

half.png

Quarter - Yellow/Yellow

This mode clearly shows the softness differences between the two, especially while video is moving. One could argue that Yellow/Yellow is not used that often, but given that Premiere Pro’s settings are hard to tell apart from one another, one could edit at quarter image quality there and get 2x or 4x the performance compared to full when dealing with layers and VFX.

quarter.png

Update 1/9/15: Version 8.3 now allows the GUI and Full Screen monitors to have a color display setting, allowing for more accurate viewing during editorial. This is accessed via a right-click on either the source or record monitor. For Full Screen Play, it is accessed via the Full Screen Play settings. Pop-up monitors and the source settings viewer do not yet have this feature.

What To Do With Avid MediaLog?

Sunday, November 3rd, 2013

medialog.gif

Back in the days of tape-based workflows, MediaLog was a handy tool that let loggers, producers and other contributing collaborators participate in the dailies prep process. It still has deck control, timecode support, clip creation, logging, ALE import/export and the ability to open Avid projects and bins. It was a key solution in the editing process of Jerry Seinfeld’s “Comedian”, posted in 2002, with all 500 hours of DV material viewed and logged via MediaLog controlling a DV deck. This was before the price of Media Composer dropped to $995, when it was cost prohibitive to have at home for logging purposes.

MediaLog has also been used as a companion tool for metadata management when prepping lists such as cutlists and EDL’s used in the downstream conform process. Its ability to set bin display to see all elements of the sequence and make corrections as needed made it a useful tool anywhere in the post process.

With the demise of tape-based acquisition, MediaLog is quickly showing its age, yet it is still part of the Media Composer installer. I gave it a quick look to see if anything had changed and whether or not it had inherited any new functionality from Media Composer. Unfortunately, it has not.

medialog-bin.gif

None of the new UI or controls have been added, which is not that big of a deal, but in the example above, the new clip colors cannot be assigned and the clip icons do not indicate that these clips are AMA linked. So its use as a metadata handler for file-based workflows, or with newer versions of Media Composer, is diminished.

So what to do with MediaLog? It could become more of a file-based preparation tool for Media Composer the same way that Prelude is that tool for Premiere Pro.  Adobe Prelude has a great feature set, quick logging, and can transcode as part of the process. MediaLog could become that tool for Avid, even one that users may pay for.

What could it do?

  • Update UI and all bin logging capabilities currently available with the version of Media Composer with which it is being shipped.  
  • Enable MediaLog to have AMA and AMA Plug-in support for those working in the field preparing footage for Avid editorial. 
  • Add a single pop-up monitor for viewing, and the addition of single and spanned markers
  • Background transcode and GUI for DMF on its own CPU. DMF right now can be a timesaver, but only being available on the same system as Media Composer doesn’t make it as valuable as it could be when compared to background transcode. This could be the paid option.
  • Import ALE, but create bins only to be used by Media Composer
  • Source settings for color management and FrameFlex
  • Basic checksum copy and reporting
  • Make this a separate downloadable application from Avid.com and not part of the Media Composer installer.

This would become a tool used by shooters, producers and assistants where a full-on editing system is either overkill, too complex, or overpriced for the task at hand. Seeing as MediaLog is basically a recompile of a subset of existing Media Composer functionality, this isn’t a start-from-scratch type of effort.

Maybe other third party tools have taken the place of what a file-based MediaLog could do, but direct bin support of all logged material from set is a compelling solution to offer Media Composer based productions. Or… Kill it? Right now it bloats the installer download size for something very few people probably use in its current state.

Tags & Keywords

Saturday, October 26th, 2013

finder-tags-100065999-orig.png

Anyone who knows me knows that I am a big fan of metadata and what it can do for production and postproduction workflows all the way through distribution. Needless to say, one of the many features catching my eye this week when Apple Mavericks was introduced was “Tags and Keywords” for any file on the system. “Keywords” has been quite the buzz with FCPx users for how it can be used to manage lots of sources quickly and efficiently when needed. I have equated it, perhaps crudely, to basically a “FIND” function on invisible spanned Markers. I am not belittling the feature in any way, other than to describe an implementation using Media Composer terms. I think it provides a fantastic new way to group sources in any way one sees fit and can provide an alternate view of the sources. I also think this becomes even more powerful when used with other search-type metadata to further refine the results, as too many results can be just as bad as not enough in certain situations.

I can’t help but think that these tags and keywords will be supported in FCPx when the update is released (as well as being free, based on what we’ve seen so far). Then any tagging or “keywording” done by anyone, anywhere in the process, can be repurposed during the editorial process. It’s almost a form of “crowdsourced logging” as these files move from system to system with keywords added, and being able to inherently take advantage of this metadata is pretty compelling.

Could it be repurposed in Media Composer? I would think so, but there are several functional areas in Media Composer that would need to be updated to take advantage of this:

  • AMA: At the very least, the QuickTime AMA would need to be updated to see these tags and keywords in order to even have access to them within the project. I have mentioned this in previous blogs, but the QuickTime AMA only supports filename and timecode. All other available metadata is ignored and cannot be repurposed without going through additional steps using third party solutions. Clip color only seems to be an internal tag and is not available outside of Media Composer/Interplay.
  • FIND: The “Find” function in Media Composer was a good start, but in my opinion it is one area of missed opportunity in changing the way users interact with their media. This deserves its own blog, but for starters, adding the ability to search for Markers and Spanned Markers would go a long way. And a shortcut for a Tags and Keywords field would be better than having to enter the same search item twice in order to filter results.
  • SPANNED MARKERS: It is unfortunate that the first release of Spanned Markers was designed for one use case only, without more flexibility in the way most users want to use spanned markers. Aside from the inability to search them, the fact that they cannot overlap is very limiting, as spans can have different metadata needs, separated by a unique tag and then additional keywords.

One of the downsides of supporting these types of features is that they are not cross-platform. But I am one who likes the tools to be the best they can be on whatever platform they are used, and not limited to the least common denominator of either operating system.

The other feature that caught my eye, though it remains to be seen whether (and to what extent) FCPx will support it, is the ability to remotely collaborate on documents, as shown in the iWork for iCloud demo. For that we will have to wait and see.

 

Update 7/16/2014: As suspected, FCPx has added support for finder level logging as seen in this video.

Producer/Director Notes

Tuesday, October 22nd, 2013

notes.gif

Reporting is still very much needed in collaborative workflows. Producers and directors like to have something they can look at to show coverage and other notes from production, and this is required for dailies solutions. In post, I create PDFs from the “Script” view of the bin, which allows for a representative frame, a few columns of source metadata (depending on the length of the data) and a large empty area for anyone to enter notes. I print these up directly from a bin for those who need them.

It does take a little fiddling to get the right setup, but that is quickly done and, once set, can be saved as a form of template. From there, you can add other bins via the Tab’d bin view to quickly print them up when needed. On Mac OS X, PDF creation is native to the operating system, and I use the PDF preview to make sure it is set right. On Windows, there are printer add-ons to create PDFs. Having this printout view as a preset template would be a nice feature to have done automatically. But after one or two tweaks, you can create a PDF that exports like the example here:

scenes_31-32.pdf

I also subscribe to Adobe Creative Cloud, as Photoshop, After Effects and Illustrator are commonly needed in most workflows. With that, I have been playing with all the other Adobe solutions available to me (Adobe Acrobat), and in the PDF example provided, I also embedded a URL link on the image frame to the clip stored on a cloud streaming service. In this case, I used Aframe, but it could be any of the many available streaming services. Clicking the thumbnail for 30A/1 will take you to that clip on the Aframe site, ready to play. Having this be part of the reporting output would be a great way to tie editorial, execs, and creatives together. In doing so, the document ranges from a simple printout to a more interactive dailies review solution. I can choose the different solutions at the price point needed, when I need it, and keep the document small enough to send via email.

When MetaSync was supported, Media Composer supported hyperlinks within bin columns; “Option/Control-clicking” the link would take you wherever that link went. One could imagine, if that still existed, it being added to the printout, automating these types of workflows.

I have been playing with Adobe Prelude to better learn its place in the production-to-postproduction pipeline. It does allow hyperlinks to be associated with a Marker or the span of a clip, very much like MetaSync’s AEO (Avid Enhanced Objects) allowed the creation of “smart media”. Not only do features like that enable these simple review solutions, they become the basis for two-screen viewer participation in the broadcast environment, where “value” is tagged from the very beginning of the process and extracted via XMP when needed.

The DPX Story Update…

Thursday, October 3rd, 2013

It seems that my interpretation of the Glue Tools response about no longer developing a DPX AMA Plug-in may have been unwarranted. Perhaps a better word than “revoked” could have been used; that was the first word that came to mind based on the response I received from Glue Tools when I asked about its availability (having been invited to be part of the beta program several months prior). That, combined with my own experience in obtaining an AMA development license, led me to the bigger topic discussed in the original blog, which was more about creating a rich developer platform regardless of how many “DPX solutions” might be created. Glue Tools was just an example.

I have recently been informed by Avid that the DPX Plug-in decision was based on a mutual agreement: Avid was already working on one and it did not make business sense to have another in development, combined with the fact that AMA in its current state only supports clip-based media, not frame-based media as you will find with DPX, DNG, OpenEXR, or sequential graphics of any kind. So between a new area of development with obvious additional support, and Avid having one in development, Glue Tools has moved on to other AMA solutions, such as the one for the Phantom Cine camera. But from a business perspective, I run into the need for DPX far more than for Phantom camera support. In fact, the Phantom camera came up only once in the last two years. DPX, on the other hand, is an industry standard interchange format as well as a recording format with add-on recorders such as the Odyssey7Q from Convergent Design, so the need for it is far greater in my opinion.

On a related AMA note, I was also informed that the iXML AMA Plug-in has been updated and will be rolling out soon. I will review that one once it is publicly available. Maybe there will be an update to the QuickTime AMA plug-in to bring in more metadata than just timecode and file name. RED R3D could also use some TLC to take better advantage of Avid workflow solutions such as the new Dynamic Media Folders.

Deciphering the Strategy

Sunday, September 15th, 2013

formats.gif

Avid has recently released several press releases and whitepapers that give an indication as to the direction of the company. Visit the Avid Press Room to catch up on all the news from IBC 2013. And there has been a bit of chatter on several forums about Avid Everywhere (not to be confused with Adobe Anywhere) with the release of the Avid Everywhere whitepaper. While I am still trying to make my way through the collection of buzzwords and catchphrases, this particular posting has to do with a smaller, but still significant, insight into what the strategy is for Avid Media Composer and its place in a digital pipeline.

The whitepaper references a new platform with a new metadata schema, talent brokering, rights management, third-party integration, etc., with no real detail or examples of these forward-looking statements. But to me (as my mother used to say, “actions speak louder than words”), it is the everyday, smaller actions Avid takes that confuse me as to what the strategy might be moving forward. Here is just one example I ran into recently that made me ponder the steps leading to this digital nirvana:

I was beta testing Glue Tools’ AMA DPX plug-in a few months back, and when I later had a need for it, I went looking for the actual release. I was surprised not to see it on the Glue Tools website, as I was ready, credit card in hand. I emailed Glue Tools inquiring about its availability and was informed that Avid had revoked the AMA license for DPX as they were now going to do it themselves.

So now I’m confused, and still without a solution for DPX directly in Media Composer. Avid has been telling customers DPX support was going to be part of Media Composer v6 (as part of its Stereo 3D solution). So for two years, the promise of DPX, but still nothing. When will we see a DPX AMA Plug-in from Avid? What I don’t understand is why Avid won’t let a third party develop such a solution regardless of whether Avid creates one or not. If Avid makes a better one, so be it. Price and functionality dictate consumers’ choices, and at least there would be a solution. That, in turn, would make Media Composer a more flexible solution for all file-based formats instead of falling victim to Avid’s codec prioritization process. Wasn’t that the reason for the AMA open architecture and platform in the first place, according to the marketing message? So Avid didn’t have to keep up with all the different formats?

Or why not consider an exclusive sell-through for a period of time in Avid’s “oft-forgotten” Marketplace and garner more traffic with a revenue share? (My previous thoughts on Avid Marketplace.) It seems to me that offering customers more choice is better than trying to do everything yourself and offering less. Take advantage of Glue Tools, a company with years of customer feedback and development experience with DPX and other frame-based formats. Perhaps a “divide and conquer” method might be a better approach to getting solutions to customers sooner rather than later? Let these third parties deal with the open-standard formats while Avid concentrates on getting the iXML AMA plug-in to work properly and getting the QuickTime AMA plug-in to extract all the metadata from .mov-type formats such as Alexa ProRes and Blackmagic Cinema cameras without all the workarounds.

As stated in the “Avid Everywhere” whitepaper:

“When ingesting media, robust metadata tagging and management are critical to realizing full asset potential across the value chain. In close collaboration with our customer community, Avid intends to lead the creation of a new industry-standard metadata tracking system, where metadata will be generated algorithmically and provide a significantly greater level of detail, making it possible to take a flexible and adaptable view of assets at any stage of the lifecycle.”

I’m all for whatever this is and will be, but can we start with the basics for now?

An update to portions of this blog can be seen here.

Innovation in Everyday Things

Sunday, September 8th, 2013

 

screen-shot-2013-09-08-at-94648-am.gif

The pressure of competition and the rapid changes of technology create the need, or better yet, the “expectation” by the customer for “innovation” in the tools they use every day. I think companies see the definition as two different things: some as an evolutionary process, others as a completely new, never-before-seen, “knock your socks off” function or feature. I find that in chasing the latter, many miss the halo effect of innovation: seeing how one feature or function affects another, and the ability to provide a platform for that to happen. Most, if not all, “innovations” exist because something happened before that allowed them to be. You may be noticing the recurring term “platform” in my blogs and what that might mean to Media Composer as a strategy. Note the original 1550 definition: “plan of action, scheme, design,” from M.Fr. plate-forme, lit. “flat form,” from O.Fr. plate, “flat.”

Early on in Avid’s history, Eric Peters explained to me the “bowling pin” theory: how hitting one pin affects the others when the ball strikes, and how to use that as a basis for technology development. That has always stuck with me: look beyond the first pin to the interactions that happen once it is hit, and how that may create a breeding ground for innovation.

“Missed it by that much!”

A good example of this is where Avid might be going with color management and ASC CDL support. It is an interesting start, but what is currently available in the product is more of a catch-up feature checklist that many could argue is a few years behind the curve of “LOG”-based workflows (pun intended). Support for ASC CDL value tracking was introduced in 2007 with the clear goal of applying it as metadata to “LOG”-based images. It is good to see that the feature has been enhanced in Media Composer v7, but it is missing two basic concepts: one is an obvious “ticket to play” type of function, and the other could be “a platform for innovative change” in how programs go through the post process. The latter was part of the original design concept of ASC CDL. Just a quick side note: the ASC CDL technology is on the short list for the 2013 Science and Technical Achievement Award from the Academy of Motion Picture Arts & Sciences (AMPAS).

1.    The concept of a LUT is to create quick color space conversions for a given workflow. In the case of Avid Media Composer, with or without the Symphony option, the system is both an offline and an online system. A LUT can easily be applied to convert sources from whatever color space to Rec.709, as Media Composer and the Symphony option are Rec.709 systems. What is missing is the ability to remove LUTs with a simple, single operation on the selected sequence when doing the color correction pass. There is the ability to update or “refresh” a sequence, but no simple means to remove LUTs from the sequence (as opposed to the sources). Most colorists will want access to the original dynamic range of the sensor for color correction. Right now, it is a multi-step workaround that needs to be carefully managed if multiple sequences reference the same sources, as it involves removing LUTs from the sources and not just the sequence. Stay tuned for a PDF on that in the future. But this should have been part of the initial feature set to create a “solution,” rather than just a new feature.

Update 8/21/16: Since version 8.1, a user can remove LUTs associated with clips on a sequence. There is still no ability to do it on a per-event basis.

2.    ASC CDL is a much-used workflow, and one that is totally dependent on interaction with third-party systems before and after the editorial process. Seeing as there is no means to color correct ASC CDL values within Media Composer, it is all about collaboration. ASC CDL is, by design, a solution to transport color decisions between systems. The original 2007 design supported this via ALE from dailies systems, tracked it as metadata representing the baked-in values, then exported that as an EDL to be a starting point, to be used or not by colorists. But for the most part, it was to make good-looking dailies and pass along “intent.” Where innovation could have occurred is by enhancing another feature at the same time: ALE merge, since ALE is still the only way to pass this information into an existing clip. So while the first pass of dailies works well, what could have been a fundamental change to how productions interact with the colorist is lost. With a more robust merge, dailies could get into editorial sooner while keeping ASC CDL truly “live” throughout editorial, ending up much closer to final color correction and saving time (and money). Enhancing this one feature, combined with the new ASC CDL values, would have brought both a fundamental and innovative solution to how television programming is done today.
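
To make the transport concrete, here is a minimal sketch of ASC CDL values riding along in a CMX3600-style EDL as comment lines, following the ASC CDL transport conventions. The title, event, clip name, and values are all placeholders:

TITLE: DAY_01_DAILIES
FCM: NON-DROP FRAME

001  A001C002 V     C        01:00:10:00 01:00:25:00 01:00:00:00 01:00:15:00
* FROM CLIP NAME: A001_C002_0915XY
*ASC_SOP (1.0210 0.9980 1.0150)(0.0024 -0.0010 0.0015)(1.0500 1.0480 1.0520)
*ASC_SAT 0.9500

Each event carries its own *ASC_SOP (slope, offset, power) and *ASC_SAT comment, so a downstream color system can reconstruct the dailies look, or ignore it, on a per-event basis.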

Let’s hope we see these changes in a future release.

Media Composer Companion Applications

Thursday, September 5th, 2013

scratch-play.gif

Over the years, I have done many presentations on workflow and workarounds when implementing particular solutions with Avid Media Composer, whether it be text editing of ALE files, file renaming, or using MetaCheater to extract essential metadata from QuickTime files that is still not available from Avid’s QuickTime AMA plug-in. Manufacturers are moving quickly, and what used to be reserved for “high-end markets” is now everywhere. For example, Sony just announced a series of 4K cameras at less than US$6,500, joining the Blackmagic 4K Production Camera at US$3,995, not to mention 4K coming to a phone camera near you. Not only are there new formats and codecs, but also color management via 1D and 3D LUTs and ASC CDL workflows to make full use of the sensors involved.

Adding to the “essential postproduction toolkit” is Assimilate Inc.’s Scratch Play. It is free to download for OS X and Windows, and it even works on the Surface Pro for on-set use, allowing color decisions to be made as well as generating LUTs and ASC CDLs to be used in downstream processes. At the very least, it is the universal, resolution-independent codec player sorely needed in many pipelines. While applications exist from camera manufacturers such as Sony, Canon, RED, etc., they are of course dedicated to their own formats. Blackmagic’s DaVinci Resolve Lite has been the essential free tool for both OS X and Windows, and it is far more than a viewer: a full DI color workstation for dailies creation and color mastering. Assimilate offers a full DI workstation as well, but different tools are needed at different points in the pipeline. And while Resolve can be used to create LUTs and CDLs, the fact that it can do so much more can actually get in the way of taking a quick look at a file and creating a LUT or ASC CDL file expressing intent to a colorist. Both solutions have their place, but creating a nimble subset of the Scratch product’s functionality is a great idea for a tool, and of course it gets Scratch into as many hands as the free Resolve.

The bad news is that for Media Composer v7 users, none of the LUT or CDL values work: neither of the two 3D LUT formats (3dl and LUTher) will import into a project, and the ASC CDL values use the XML specification, which Media Composer cannot import. Here is an example for a single file:

xml.gif
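
For reference, a single-file ASC CDL written to the XML specification (the urn:ASC:CDL:v1.01 schema) looks roughly like the following; the ID and values are placeholders, and Scratch Play’s exact output may differ slightly:

<?xml version="1.0" encoding="UTF-8"?>
<ColorDecisionList xmlns="urn:ASC:CDL:v1.01">
  <ColorDecision>
    <ColorCorrection id="A001_C002_0915XY">
      <SOPNode>
        <Slope>1.0210 0.9980 1.0150</Slope>
        <Offset>0.0024 -0.0010 0.0015</Offset>
        <Power>1.0500 1.0480 1.0520</Power>
      </SOPNode>
      <SatNode>
        <Saturation>0.9500</Saturation>
      </SatNode>
    </ColorCorrection>
  </ColorDecision>
</ColorDecisionList>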

Media Composer only accepts an ALE file (tab-delimited) for metadata import. And even if the values were in ALE format, Assimilate would also need to include START and END timecodes, as well as the filename in a column called “Source File,” because Media Composer is very strict about how it merges. Too strict, even. Maybe Assimilate will add this to the product, or Avid will update its import functionality to either support the LUTs or ease up on ALE merge. But adding an XML import would be even better: not only for LUT + CDL workflows (as it could define order), but locators, spanned locators, etc. could also be supported from on-set logging applications like MovieSlate.
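
As a rough sketch, a merge-friendly, tab-delimited ALE from Scratch Play might look like the following; the columns are the minimum discussed above, and the clip name, timecodes, and CDL values are placeholders:

Heading
FIELD_DELIM	TABS
VIDEO_FORMAT	1080
FPS	23.976

Column
Name	Start	End	Source File	ASC_SOP	ASC_SAT

Data
A001_C002_0915XY	16:42:10:00	16:42:25:00	A001_C002_0915XY.mov	(1.0210 0.9980 1.0150)(0.0024 -0.0010 0.0015)(1.0500 1.0480 1.0520)	0.9500

With Start, End, and Source File present, Media Composer could match the incoming rows to existing master clips and merge the ASC_SOP and ASC_SAT columns onto them.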

I wonder if Avid reaches out proactively to these third parties and helps them enable workflows with their products. “Media Composer As A Platform” will be a future blog topic. As much as Avid touts “openness,” the day-to-day workflows can be quite challenging, and the binary format of AAF is not the easiest to deal with for many workflows. ALE, for better or for worse, is what is available now, but the workflow could be so much better.