In the previous entry in this series, we explored the problems that messy, inconsistent metadata creates for the end user: metadata that varies from asset to asset, sometimes even between duplicate versions of the same asset. Not only does this confuse the user (“Why am I seeing three versions of the same episode of Vikings?”), it also makes for a cluttered user experience, especially when metadata conventions differ across assets (e.g. episode number vs. episode name). These issues reflect poorly on the service presenting them, and are exactly the kind of quality-of-experience problem that erodes customer loyalty.

So how can this be resolved? Piksel’s Fuse Suite, and in particular our Fuse Metadata Manager product, can tackle these quality-of-experience issues. While the suite is largely focused on delivering a digital-first approach on the back end, bringing together broadcast and online silos, creating a consistent metadata catalogue and a universally usable metadata format, these benefits translate to the front end as well.
For example, Fuse Metadata Manager utilises ID normalisation features, which enable the linking and merging of content metadata. Combined with deduplication, this means that when a content item is found to have duplicate versions (from different providers, with different metadata, etc.), those copies can be merged, their metadata combined, and a single item presented to the user. What does that mean in practice? When a user searches for that episode of Vikings, even though there may be four versions of it from different providers, only one will be presented to them, with a complete set of metadata pulled together from all versions of the asset (and enriched from third-party sources to ensure the metadata is as complete as possible).
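To make the idea concrete, here is a minimal sketch of how ID normalisation and metadata merging might work. This is purely illustrative: the record fields, the normalisation rule, and the `merge_records` helper are all hypothetical and are not Piksel's actual API or data model.

```python
from collections import defaultdict

# Hypothetical records: each provider supplies partial metadata
# for the same episode, under slightly different IDs.
records = [
    {"provider": "A", "id": "vikings-s01e01", "title": "Rites of Passage"},
    {"provider": "B", "id": "VIKINGS_S01E01", "episode_number": 1},
    {"provider": "C", "id": "vikings-s01e01", "synopsis": "Ragnar sets sail west."},
]

def normalise_id(raw_id):
    """Reduce provider-specific IDs to one canonical form (illustrative rule only)."""
    return raw_id.lower().replace("_", "-")

def merge_records(records):
    """Group records by normalised ID and combine their metadata fields."""
    groups = defaultdict(list)
    for rec in records:
        groups[normalise_id(rec["id"])].append(rec)
    merged = []
    for canonical_id, group in groups.items():
        combined = {"id": canonical_id}
        for rec in group:
            for field, value in rec.items():
                if field in ("provider", "id"):
                    continue
                # Keep the first non-empty value seen for each field.
                if value is not None and combined.get(field) is None:
                    combined[field] = value
        merged.append(combined)
    return merged
```

Running `merge_records(records)` here collapses the three provider copies into a single item carrying the title, episode number and synopsis together, which is the user-facing effect described above.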
The Fuse Suite also utilises machine learning techniques, enabling rapid identification of duplicates and reducing the need for manual flagging and fixing. This is done with an accuracy of around 95%, with the remaining cases flagged for human review. So when content is first ingested, it is highly likely to be correctly identified, labelled and matched, meaning the user experience remains consistent and properly organised from the moment new content arrives on the platform.
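A common pattern behind this kind of workflow is confidence-gated routing: matches the model is confident about are handled automatically, and the rest are queued for a person. The sketch below is an assumption about how such routing could look, not Piksel's implementation; the threshold value is illustrative and is a separate notion from the ~95% accuracy figure quoted above.

```python
# Illustrative threshold: matches scored at or above this are auto-merged,
# anything below is sent to a human reviewer.
CONFIDENCE_THRESHOLD = 0.9

def route_matches(candidates):
    """Split candidate duplicate matches into auto-merge and review queues.

    `candidates` is a list of (item_id, confidence) pairs, where confidence
    is the model's score that the item duplicates existing content.
    """
    auto_merge, human_review = [], []
    for item_id, confidence in candidates:
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_merge.append(item_id)
        else:
            human_review.append(item_id)
    return auto_merge, human_review
```

With this split, the bulk of incoming content is matched without intervention, while editors only ever see the ambiguous minority.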
To learn more,