MLB Network Revamps Postproduction System With Offsite Storage, Adobe Premiere

While the 2015 MLB season promises to have its share of exciting rookies, headlined by Chicago Cubs hard-hitting third baseman Kris Bryant, the rookie of the year behind the scenes for MLB Network is sure to be its wholly rebuilt postproduction system. Nearly three years in the making, the system uses offsite infrastructure located at Coresite’s NY2 data center to drive the network’s workflow at its Secaucus, NJ, facility.

“We continue to expand our programming slate, such as with the launch of our regular season morning show ‘MLB Central,’ and expand the reach of video sources year over year,” says Tab Butler, director, media management and postproduction, MLB Network. “As a result, there is more content, and more live programming time per day to fill. So we have to develop systems with scalable capacity and capabilities.”

In addition to a massive infrastructure build out, the new workflow integrates the network’s DIAMOND (Digitized Industry Assets Managed Optimally for Networked Distribution) asset-management system with Adobe Premiere Pro editing stations to streamline MLB Network’s postproduction operations.

“Aside from the massive ingest requirements of recording over seven hours of content for every hour of baseball played, the workflows we are using are not that unique in the broadcast business. We are automating and accelerating our workflows in a very sophisticated manner in order to keep up with the massive content flow,” says Butler. “That is all so that we can do a better job in telling the stories of baseball.”

Years in the Making
When MLB Network’s facility in Secaucus, NJ, launched in 2009, it marked a genuine greenfield opportunity, allowing the postproduction team to have its pick of file-based–workflow technologies. Since then, the system’s capacity has grown approximately 25% every year in terms of ingesting, processing, editing, and storing media.

“When we started to look at the next generation of our system in late 2011, we recognized that we had some significant physical challenges in our building,” says Butler. “We did not have enough backup power and air conditioning to support a brand-new infrastructure while keeping our original system in operation. Given the dramatic increases in equipment densities associated with storage and compute power over the past five years, it was clear that our power load and heat generation were going to far exceed those of our original system. When equipment rooms are already at their cooling capacity, it doesn’t matter if you can clear rack space, because you still don’t have the power or air conditioning necessary to support the expansion.”

The Promise of Offsite Infrastructure
With this challenge in mind, MLB Network began to explore the use of an offsite data center that could connect to its Secaucus broadcast center via dual lateral fiber routes.

“We began to look at the technology and ask ‘Is this feasible? Are the components that we would need available in the marketplace?’” says Butler. “By early 2013, we came to the realization that there was at least a possibility that the technology was either in beta or did, in fact, exist.”

Later that year, MLB Network built a 1/10th-scale lab at its home facility to test the offsite–data-center technology, the techniques for failover, and the distances of the architecture, and to determine what equipment would be located in the data center and what would stay in its own Equipment Room 1 (ER1).

In the end, MLB Network selected Coresite’s NY2 facility, also located in Secaucus, to house the bulk of the infrastructure driving its new postproduction ecosystem. NY2 (Coresite’s 18th data center) is connected to MLB Network’s ER1 via two divergent fiber routes, each comprising 288 dark-fiber strands. The two paths, which run more than 5,000 ft. and more than 10,000 ft., respectively, terminate in two Calient Technologies photonic optical-circuit-switching routers.

“The key was getting the necessary connectivity of dark fiber between racks in ER1 and racks in Coresite. Without that connectivity, this was an immediate non-starter,” says Butler. “The Calient light router allows us to photonically switch signals, which gave us the ability to run things on path A and, with the flip of a switch, in milliseconds, move the light over a second fiber path B. That was very critical for us.”

Routing It All Around Secaucus
In February 2014, MLB Network took control of the floor space at the Coresite facility and, that summer, began testing out its new postproduction and storage system.

“We very much utilized a crawl, walk, run approach,” says Butler, “and tried to get little bits and pieces up and running as we went along.”

MLB Network deploys Grass Valley CWDM (coarse wavelength-division multiplexing) gear to multiplex HD video, with multiple CWDM paths delivering hundreds of video feeds in both directions between NY2 and ER1. The network is able to transmit 16 feeds over a single fiber, supporting the 160 Grass Valley K2 Summit channels over 10 CWDM paths.
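
As a rough sanity check (illustrative arithmetic only, not anything MLB Network runs), the CWDM provisioning follows directly from the feeds-per-fiber figure:

```python
import math

# Provisioning arithmetic implied by the figures above:
# 16 HD feeds per CWDM fiber, 160 K2 Summit channels between ER1 and NY2.
FEEDS_PER_FIBER = 16
K2_CHANNELS = 160

cwdm_paths = math.ceil(K2_CHANNELS / FEEDS_PER_FIBER)
print(f"CWDM paths required: {cwdm_paths}")   # -> 10, matching the article
```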

MLB Network deploys two Cisco Nexus 7000 Series routers at its home facility and two Cisco Nexus 6000 Series routers at Coresite, with 320 Gbps of bandwidth across eight fibers.

“You are talking about an absolutely huge amount of bandwidth within the network architecture,” says Butler. “40 Gbps is the backbone in the plant racks at Coresite, with 10 Gbps from the back of each server to the backbone. Everything is multipath, dual redundant.”
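
Assuming the eight inter-site fibers each carry one of the 40 Gbps links Butler describes, the quoted numbers are internally consistent; the sketch below is purely illustrative arithmetic, not a network-design tool:

```python
# Illustrative arithmetic based on the bandwidth figures quoted above.
FIBERS = 8
GBPS_PER_FIBER = 40        # 40 Gbps backbone links in the Coresite plant racks
SERVER_UPLINK_GBPS = 10    # 10 Gbps from the back of each server to the backbone

aggregate = FIBERS * GBPS_PER_FIBER
print(f"Aggregate bandwidth: {aggregate} Gbps")          # 320 Gbps, as quoted
print(f"Full-rate server uplinks per backbone link: {GBPS_PER_FIBER // SERVER_UPLINK_GBPS}")  # 4
```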

Massive Storage Requirements
MLB Network relies on two primary NetApp SANs for online storage, dubbed American League and National League, with 1.3 PB of usable storage each (about 45,000 hours per SAN). These SANs can be doubled in size, should the need arise. Quantum StorNext 5 file systems are deployed on the NetApp storage in a multi-SAN configuration.
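
Those capacity figures are consistent with an XDCAM HD 50 mezzanine rate of roughly 50 Mbps video plus audio and wrapper overhead; the per-hour number below is derived for illustration, not quoted in the article:

```python
# Back-of-the-envelope check of the storage figures quoted above.
USABLE_PB_PER_SAN = 1.3
HOURS_PER_SAN = 45_000

gb_per_hour = USABLE_PB_PER_SAN * 1_000_000 / HOURS_PER_SAN   # ~28.9 GB per hour of content
implied_mbps = gb_per_hour * 8_000 / 3_600                    # ~64 Mbps average stored bitrate
print(f"{gb_per_hour:.1f} GB/hour -> ~{implied_mbps:.0f} Mbps average stored bitrate")
```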

In addition, the network’s Grass Valley K2 Summit media-server infrastructure, under the control of Grass Valley Stratus and relying on DIAMOND Scheduler to schedule and automate ingests, comprises 40 K2 Summit servers configured as 136 record channels and 24 playback channels. These Summits write directly to the StorNext file systems on the NetApp E-Series storage. Beyond that, Butler and company can add eight more K2 Summit servers (32 more channels) on the existing infrastructure without pulling another cable and can build out further from there in the future.

“We jumped from 80 channels of record to now 136, and I have to anticipate that they are going to use those 136 channels and continue to ask for more,” says Butler. “I am expecting that our consumption of data tape, which is currently 30-35 TB of LTO-4 per day in-season, will probably increase as well.”
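
For scale, an LTO-4 cartridge holds 800 GB native (a general LTO specification, not a figure from the article), so the quoted daily volume works out to roughly 38-44 tapes per day:

```python
# Daily LTO-4 consumption implied by the 30-35 TB/day figure quoted above.
LTO4_NATIVE_GB = 800          # standard LTO-4 native (uncompressed) capacity

for daily_tb in (30, 35):
    tapes = daily_tb * 1_000 / LTO4_NATIVE_GB
    print(f"{daily_tb} TB/day -> about {tapes:.0f} LTO-4 cartridges per day")
```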

Adobe Premiere’s Hardware Backbone at Coresite
Hardware for MLB Network’s 50 Adobe Premiere HD edit platforms is located at Coresite (and can be expanded to 64 using the current wiring) and is built on the Cisco UCS C240 hardware platform, allowing centralized UCS management of the servers alongside the UCS blade infrastructure. In addition, B-Series blade clusters were turned into pools of VMware, allowing the network to scale out virtual machines as necessary within that hardware platform. MLB Network also has dedicated blades reserved for applications that scale more effectively on a single operating system, such as the Grass Valley proxy encoder or transcode processes.

For HD editing, MLB Network deployed 2RU Cisco UCS C240 M3 rack servers at Coresite, outfitted with NVIDIA K20 graphics-accelerator cards and K2200 graphics display cards. The network also worked closely with MultiDyne to develop a long-range dual-link, dual-display KVM system that can function over a single fiber. That fiber, routed through the Calient optical router, connects to a receive unit in each edit room that feeds two displays and provides keyboard, mouse, USB, and audio connectivity.

“With this architecture, you have an edit platform where all you need is a fiber drop in any room anywhere, and you can create an HD editor,” says Butler. “We have drops everywhere: edit rooms, hallways, cubicles, media-management locations, etc.”

For its proxy system, MLB Network uses the Quantum StorNext AEL500 tape library with a 500 TB SAN of usable spinning disk for all proxy needs, along with Quantum’s HSM software enabling access to requested content that lives on LTO-6 tape.

For archiving Adobe projects and valuable engineering information, such as machine images, database backups, and configuration files, MLB Network uses the Crossroads StrongBox T10 system to manage unstructured data across online, nearline, and archive tiers, backed by Linear Tape File System (LTFS) technology.

Elemental transcode servers are used heavily to create proxy video for both onsite use and cloud-based content. The Elemental encoders provide multiple video resolutions and audio configurations, depending on the end user’s needs. Onsite proxies carry 16 audio channels; for DIAMOND-in-the-cloud use, three streaming resolutions are created, each with two audio channels.
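
As a sketch of how those two proxy profiles might be expressed as configuration: the structure and names below, and everything beyond the audio-channel counts and the three cloud renditions, are assumptions for illustration, not Elemental’s or MLB Network’s actual settings.

```python
# Hypothetical proxy-profile definitions; only the audio-channel counts and
# the number of cloud renditions come from the article.
PROXY_PROFILES = {
    "onsite_proxy": {
        "audio_channels": 16,   # full audio complement for in-house editing
        "renditions": 1,
    },
    "diamond_cloud": {
        "audio_channels": 2,    # stereo is sufficient for remote viewing
        "renditions": 3,        # three streaming resolutions per asset
    },
}

def renditions_for(use_case: str) -> int:
    """Return how many outputs a transcode job should produce for a use case."""
    return PROXY_PROFILES[use_case]["renditions"]
```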

The Cisco UCS environment uses Tegile’s high-performance SSD arrays for boot-from-SAN storage.

“Between hardware and VMware, MLB Network has more than 400 machines at Coresite. They take just minutes to boot up simultaneously off the Tegile storage,” says Butler. “Tegile is a phenomenal tool for cloning images, driving the boot environment and shared user areas for all the edit machines. That is a real success story.”

The Beauty of DIAMOND
MLB Network’s DIAMOND system also has a large presence at Coresite, running on a mixture of SimpliVity hyperconverged infrastructure and Cisco UCS hardware using VMware technology. SimpliVity makes efficient use of resources by deduplicating, compressing, and optimizing data inline in real time while delivering high IOPS under a centralized management environment.

By using the DIAMOND panel within Premiere Pro, editors are able to search, select, and edit across the entire 525,000-plus hours of MLB Network’s proxy/XDCAM HD 50 archive pool. The archive grew by more than 100,000 hours last season and is expected to grow by more than 110,000 hours during the 2015 season. DIAMOND also has a robust stats database behind it, performing more than 30 million stats calculations on an average day. The real-time data coming in from the ballparks interfaces directly with DIAMOND Logger tools and integrates tightly into MLB Network’s Vizrt graphics environment.

“You almost have to look at this and think of it as an Election Night system,” says Butler, “except here, Election Night is every night and there are 15 races going on across the country each day.”

DIAMOND and Adobe Premiere Pro: A Match Made in Heaven
MLB Network’s Adobe Premiere Pro NLE system with the DIAMOND panel provides many advanced functions for working simultaneously with both hi-res and proxy content on a single timeline. Searching from within the editing interface, an editor can use DIAMOND to find a piece of media, click it to load it automatically into the source window in Premiere, and place it seamlessly onto the timeline. If the content does not live on the SANs as HD content, the media will automatically be pulled up as a proxy file (new in-house proxies generated this year carry 16 channels of audio). With hi-res and lo-res media mixed on the same timeline, the editor can view the full timeline with effects in real time. The editor will need to restore the hi-res content from the Oracle StorageTek SL8500 library before conforming the final hi-res product, and DIAMOND provides automated tools to request and monitor those restores within a project or timeline.
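
That proxy-first, restore-before-conform logic can be sketched roughly as follows; the types and function names are hypothetical, not part of DIAMOND or Premiere Pro.

```python
from dataclasses import dataclass

# Illustrative sketch of the workflow described above: proxy clips may be
# cut freely, but hi-res copies must be back on the edit SANs before the
# final hi-res conform.
@dataclass
class Clip:
    asset_id: str
    has_hires_online: bool   # hi-res copy already on the edit SANs?

def clips_needing_restore(timeline: list[Clip]) -> list[str]:
    """Return asset IDs that must be restored from the library before conform."""
    return [c.asset_id for c in timeline if not c.has_hires_online]
```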

“The Premiere-DIAMOND integration continues to grow and advance in the capabilities of its automated workflows,” says Butler. “The content is searchable, viewable, and mineable across many, many data points. With all of this metadata, we enable the editor or producer to mine and find those shots that give them the best tools visually to be a compelling storyteller.”

Staff can also easily retrieve content from MLB Network’s hi-res archive using the DIAMOND Asset Sequence Handler (DASH) tool. It tracks every asset within an editor’s sequence and indicates where each is located and at what resolution. If an asset is lo-res, DASH identifies where the hi-res file exists in the library, displays that in the DASH panel, and gives the editor the opportunity to restore the asset on the timeline automatically. When a clip restore is requested and the asset is not in the library robot, DASH alerts the media-management team to put that LTO-4 tape into the robot.

MLB Network has also instituted an automated queue function within DASH. Editors input basic metadata when they create a new sequence (date, show, etc.), and DASH prioritizes hi-res restores according to how soon the piece will go on air.
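
A minimal sketch of that kind of air-time prioritization, assuming a simple (air time, asset) queue; the data shapes and example asset names are invented for illustration, not DASH’s actual implementation.

```python
from datetime import datetime

def order_restores(requests: list[tuple[datetime, str]]) -> list[str]:
    """Return asset IDs sorted so the soonest-airing sequences restore first."""
    return [asset_id for air_time, asset_id in sorted(requests)]

# Example: a piece airing this morning jumps ahead of one airing later in the week.
queue = [
    (datetime(2015, 4, 10, 19, 0), "highlight_pkg_042"),
    (datetime(2015, 4, 6, 9, 0), "mlb_central_open"),
]
print(order_restores(queue))   # ['mlb_central_open', 'highlight_pkg_042']
```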

“This very tight integration between DIAMOND and Adobe Premiere Pro is a marriage that strengthens both parties,” says Butler. “Without Adobe’s willingness to work with us over almost three years of development, none of this would have been possible. Their willingness to help support our needs has been phenomenal. We continue to push for tighter integration and more capabilities, and Adobe continues to share our vision and enhance our partnership.”

A Streamlined Publishing Process
In terms of publishing finished content to air, MLB Network built a plugin within DIAMOND to work specifically with AP’s ENPS newsroom system and the Grass Valley STRATUS Rundown application so that editors can publish directly to active show rundowns.

MLB Network is currently running the new Grass Valley Stratus and the legacy Grass Valley Aurora simultaneously to allow the operators to migrate seamlessly to the new workflow.

“When you’re dealing with a huge MAM system, you really don’t want to have two masters; you want a single master database,” says Butler. “We ran the Stratus-DIAMOND system as the secondary system until just before spring training started, when we did the database migration from Aurora into Stratus. That means what had been the master database for the last seven years is now an incomplete secondary database, and the Stratus-DIAMOND database is now the new master.”

All Aurora and Final Cut Pro HD edit rooms have been outfitted with the new Premiere Pro edit platform. The editors switch from input A to input B on the monitors and switch keyboards as they move between systems. Butler says the network will continue to operate both systems in the coming months, but he expects complete migration to the new Stratus system before the All-Star Game.

Getting Production on Board: Lessons Learned
A key step in the launch of any new postproduction system, especially one of this scale, is getting the creative side to embrace the wealth of new technology now at their fingertips.

“Production feedback has been very positive, and it has been a great rollout,” says Butler. “We did a tremendous amount of training with trainers recommended by Adobe. Richard Harrington and his team of trainers were wonderful. It has been very customized and crafted for the DIAMOND-Adobe environment.”

With the rebuilt MLB Network postproduction system online and continuing to evolve, Butler is able to take a look back and offer advice for those considering a sprawling postproduction system of their own.

“Be prepared to be flexible in your technology selections, because technology is moving so fast that, by the time you think you have found the ‘next best thing,’ it’s nearly obsolete or there is a better solution just coming to market,” says Butler. “You cannot rush a project on this scale; with the amount of minute detail that you must monitor and track, the way you deal with those details will determine whether the project comes off smoothly.”
