In the last blog, we discussed the current consumption patterns for media and how a media organization has no choice but to automate processes in order to continue to grow revenues. This is largely because they must produce media suitable for consumption on an ever-growing list of display platforms. In addition, they must ensure that the content delivered to a specific geography is culturally appropriate for that geography and fulfils all statutory and regulatory requirements. It is quite evident that these requirements lead to the generation of multiple copies of the same basic programming, all of which must be managed individually by the overarching orchestration system. The system must be able to identify each of these variants individually, ensure that the correct version is delivered to the correct endpoint, and provide metrics that prove delivery in order to be able to collect the appropriate revenue for that deliverable. When correctly utilized, metadata also simplifies the tasks of media retrieval and repurposing – after all, you can’t monetize a piece of media if you can’t find it!
Metadata is clearly fundamental to the desired business model, but to date, many customers have treated metadata as something of an afterthought – a kind of “second-class citizen”, if you will. This is a fundamental error: for the modern media enterprise, metadata is every bit as important as the essence itself and should be treated as such.
Silos Are Bad For Business
While most media enterprises understand the value of metadata, and how it can enhance their profitability through workflow automation and re-monetization, many fall into the trap of collecting and storing that metadata in a haphazard way. Their content management systems may understand and maintain the business-oriented metadata such as show title, episode number, run time, available ad breaks etc., but they generally don’t understand the structural metadata describing the physical essence itself – which would be of great value in deciding which version of an archived program should be reprocessed for delivery to non-traditional endpoints, for example. Metadata ends up being stored across multiple management “silos” of varying capabilities. This leads to complications when a workflow encompasses multiple silos, as they must all be controlled by their own (sometimes out-of-date) mechanisms. Metadata (and sometimes even the media itself) may need to be duplicated in these silos, which is clearly inefficient and can lead to errors if the copies get out of sync with each other. There must be a better way!
Centralized Metadata Management and Processing Are Good For Business
The clear solution to this problem is to consolidate all metadata storage into a single, reliable, high-speed management platform. In addition to the obvious value of data de-duplication, this approach ensures that all users have access to the same information – with the appropriate level of access control, of course – regardless of where the media physically resides. Updates to metadata are propagated immediately to all users, and the user interface can be kept consistent across departments. Improved asset visibility, including advanced search options, greatly streamlines the repurposing of archived assets, as different departments can now search the entire contents of the archive when trying to match a customer's specific requirements.
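As a minimal sketch of the idea, the following Python fragment models a single shared metadata store with immediate updates and cross-department search. The class and field names are purely illustrative assumptions, not any particular product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    metadata: dict = field(default_factory=dict)

class MetadataStore:
    """Single source of truth: one record per asset, shared by every department."""
    def __init__(self):
        self._assets = {}

    def register(self, asset_id, **metadata):
        self._assets[asset_id] = Asset(asset_id, metadata)

    def update(self, asset_id, **metadata):
        # One update, immediately visible to every consumer of the store.
        self._assets[asset_id].metadata.update(metadata)

    def search(self, **criteria):
        # Match assets whose metadata contains every requested key/value pair.
        return [a for a in self._assets.values()
                if all(a.metadata.get(k) == v for k, v in criteria.items())]

store = MetadataStore()
store.register("ep-101", title="Example Show", episode=1,
               language="en-GB", runtime_s=1320)
store.update("ep-101", ad_breaks=3)          # sales adds commercial metadata
hits = store.search(language="en-GB", ad_breaks=3)
```

Because every department reads and writes the same record, there is nothing to fall out of sync – the de-duplication benefit described above falls out of the design for free.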
A less obvious benefit of such an approach is that it becomes far easier to mine the metadata to generate reports documenting the fulfilment of downstream customers' delivery requirements. QC reports can be added directly to the metadata for a specific clip, for example, and then exported along with the media when it is delivered to the end user. All of this results in decreased costs, and therefore increased revenue for the enterprise. In fact, it can be argued that the more searchable metadata you have on a particular piece of media, the greater the value of that piece of media. Emphasizing that point, a great deal of work is going on in AI and machine learning to recognize speech and events in audio, and even objects in video, and annotate the metadata appropriately. Imagine being able to search for all scenes in which a specific actor's face is seen, for example, and present only those scenes to the customer.
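To make the QC and AI-tagging examples concrete, here is a small sketch of a clip record that carries its QC results and recognition tags alongside the business metadata, and exports the lot as a JSON sidecar at delivery time. The field names are hypothetical, not a standard schema:

```python
import json

# Hypothetical clip record; every field name here is illustrative only.
clip = {
    "asset_id": "promo-2024-07",
    "title": "Summer Promo",
    "qc_reports": [],
    "ai_tags": [],
}

def attach_qc_report(clip, tool, verdict, issues):
    # QC results live directly in the clip's metadata record...
    clip["qc_reports"].append({"tool": tool, "verdict": verdict, "issues": issues})

def export_sidecar(clip):
    # ...and travel with the media as a sidecar file on delivery.
    return json.dumps(clip, indent=2)

attach_qc_report(clip, tool="loudness-check", verdict="pass", issues=[])
# An AI face-recognition pass might annotate which scenes feature an actor,
# making those scenes directly searchable later.
clip["ai_tags"].append({"type": "face", "name": "Actor A", "scenes": [3, 7, 12]})
sidecar = export_sidecar(clip)
```

A downstream customer receiving the sidecar gets the proof-of-QC and the scene-level tags with no separate reporting step.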
All of this assumes, of course, that the database and management platform in question has the capability to interact with all of the processing systems in the workflow in order to “mine” the metadata from those processes (and even the essence) to maximize management and revenue potential. While entirely possible, this is not as simple as it seems – different products (sometimes even from the same manufacturer) will provide information through different APIs and SDKs, and the metadata system must understand all of those channels if it is to do its job appropriately.
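One common way to cope with those divergent APIs and SDKs is an adapter layer: a thin wrapper per product that translates its native output into one normalized vocabulary before it reaches the central platform. The sketch below assumes two invented vendors with deliberately mismatched field names and units; nothing here refers to a real product's interface:

```python
# Hypothetical adapters: each wraps one vendor's API/SDK and normalizes its
# metadata into a common shape the central platform understands.

class TranscoderAdapter:
    """Invented vendor A reports codec info as 'vcodec' / 'acodec'."""
    def fetch(self):
        raw = {"vcodec": "h264", "acodec": "aac", "dur": 1320}  # stand-in for an API call
        return {"video_codec": raw["vcodec"],
                "audio_codec": raw["acodec"],
                "duration_s": raw["dur"]}

class QcToolAdapter:
    """Invented vendor B reports duration in milliseconds via a different SDK."""
    def fetch(self):
        raw = {"duration_ms": 1320000, "loudness_lufs": -23.0}
        return {"duration_s": raw["duration_ms"] // 1000,
                "loudness_lufs": raw["loudness_lufs"]}

def harvest(adapters):
    # The platform only ever sees the normalized keys, whatever the source.
    merged = {}
    for adapter in adapters:
        merged.update(adapter.fetch())
    return merged

record = harvest([TranscoderAdapter(), QcToolAdapter()])
```

Adding support for a new product then means writing one new adapter, rather than teaching every downstream consumer a new dialect.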
For any media enterprise to continue to thrive in the new distribution paradigm, a modern workflow orchestration system is an absolute requirement. From the business perspective, at the very minimum, it improves efficiency, with a consequent decrease in operational costs, but there are numerous other tangible benefits: this increased efficiency can speed up the entire workflow with a consequent reduction in latency (meaning more jobs/hour for a given workflow). It can improve a company’s agility, which makes the enterprise more attractive to new customers (or existing customers who wish to add/experiment with a new channel property). Finally, and possibly most importantly, it enables a true, digital end-to-end supply chain for the delivery of media, which ultimately results in increased revenues.
At Piksel, we’ve developed a range of products to specifically address these content and metadata challenges. For more information on our products, please visit https://piksel.com/video-products/
To learn, or better yet see, how it works, meet us at IBC