Until now, the typical approach to helping users find content has been the collaborative filtering recommendation engine: "You watched X, so you must like Y", or "Because others who watched X also watched Y, we'll show you Y". After nearly a decade of investment and development, the maximum viewing lift from collaborative filtering is well understood and appears to have plateaued. The primary reason is not the quality of the recommendation algorithms, but the quality and consistency of the metadata that powers them.
Almost all broadcasters and OTT providers rely on third-party metadata, which is often incomplete and inconsistent, and usually differs by distribution platform. To address this challenge and the end-user problems it creates, providers need an advanced content metadata system, not only to improve the user experience, but also to drive loyalty, differentiation and increased revenue.
Ultimately, these businesses need content catalogues that are better managed, more accurate and consistent in their descriptions across all assets. They also need a clearer understanding of the similarities between content assets, and the ability to create next-generation search and discovery experiences that are far more granular, reflect the way users actually think about content, and deliver more useful recommendations at the level of the individual viewer.
One key is keeping metadata consistent and accurate across platforms. A common issue in metadata management is that content is ingested across multiple systems, each with its own criteria for ingesting, managing and presenting metadata. Inconsistencies across these systems can leave viewers unable to discover the content they want to watch, or confused by seeing the same piece of content with completely different descriptions.
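To make the cross-platform consistency problem concrete, here is a minimal sketch of how such mismatches can be detected. The catalogue records, field names and titles below are hypothetical; a real system would match on standard identifiers (such as EIDR) rather than normalized title strings.

```python
import re
from collections import defaultdict

def normalize_title(title: str) -> str:
    """Lowercase and strip punctuation so the same asset matches across
    catalogues that format titles slightly differently."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def find_conflicts(records):
    """Group records by normalized title and return the titles that carry
    different descriptions on different platforms."""
    by_title = defaultdict(set)
    for rec in records:
        by_title[normalize_title(rec["title"])].add(rec["description"])
    return {t: descs for t, descs in by_title.items() if len(descs) > 1}

# Hypothetical catalogue exports from three distribution platforms.
catalogue = [
    {"platform": "web",    "title": "The Heist!",   "description": "A daring bank robbery."},
    {"platform": "mobile", "title": "The Heist",    "description": "Crime thriller, 2016."},
    {"platform": "stb",    "title": "Quiet Shores", "description": "A coastal drama."},
]

conflicts = find_conflicts(catalogue)
```

Here "The Heist" is flagged because two platforms describe it differently, while "Quiet Shores" passes the check.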
But metadata management is only one pillar of the necessary solution. Broadcasters and distributors need to break their reliance on multiple third-party metadata sources and instead start to create new, unique metadata to improve content discovery on their services. Scene-level analysis, using visual identification and natural language processing of closed captions, can deliver greater insight into content assets and power next-generation discovery experiences by understanding which characters are present, what they are doing and feeling, and what they are talking about.
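As a simplified illustration of the caption side of scene-level analysis, the sketch below parses speaker-tagged caption lines to record which characters speak in a scene and attaches a rough emotion tag from a keyword lexicon. The caption format, character names and lexicon are hypothetical; a production system would use a trained NLP model rather than keyword matching.

```python
import re

# Hypothetical keyword lexicon; stands in for a real sentiment/emotion model.
EMOTION_WORDS = {
    "angry": {"furious", "angry", "hate"},
    "happy": {"great", "wonderful", "love"},
}

# Matches caption cues of the form "SPEAKER NAME: dialogue line".
CUE = re.compile(r"^(?P<speaker>[A-Z][A-Z ]+):\s*(?P<line>.+)$")

def analyse_scene(caption_lines):
    """Return which characters speak in a scene, with rough emotion tags
    inferred from keyword matches in their dialogue."""
    result = {}
    for raw in caption_lines:
        m = CUE.match(raw.strip())
        if not m:
            continue  # skip sound effects, music cues, untagged lines
        speaker = m.group("speaker").title().strip()
        words = set(re.findall(r"[a-z']+", m.group("line").lower()))
        tags = result.setdefault(speaker, set())
        for emotion, lexicon in EMOTION_WORDS.items():
            if words & lexicon:
                tags.add(emotion)
    return result

scene = [
    "MARLA: I hate what you did to us.",
    "JACK: It was a wonderful plan.",
]
scene_metadata = analyse_scene(scene)
```

Even this crude pass yields scene-level facts (who is present, roughly how they feel) that a synopsis-level third-party feed cannot provide.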
“You are only as good as your metadata” is a refrain that is starting to be heard. In an era characterized by consumer choice, the pressure is on to fully utilize the engagement opportunities that the right use of metadata can deliver.
Want to learn more? Meet with us at CES in January!
Scott Williams is the Executive Vice President, Americas at Piksel. His role is focused on driving commercial growth in the digital, broadcast, cable, and telecommunications space in the Americas region, building on his 15 years of leadership experience in the digital media industry.