After 15 years, Connected TV (CTV) has emerged as one of the most prominent advertising mediums, disrupting traditional TV models and offering a vast array of inventory across platforms. Yet while consumer viewing habits have changed, advertisers’ needs have largely stayed the same: knowing what content your ads run against remains top of mind, and it is still the missing piece of today’s CTV puzzle.
While other advertising channels, such as YouTube and linear TV, provide insight and reporting on where your ads ran, CTV is the odd one out. Wouldn’t your media dollars work harder on content that resonates with your brand? Is there any point in running on Automotive or Real Estate content if you represent a Food & Beverage advertiser?
Linear TV buyers have long been able to control where ads run by buying spots on specific television shows, and measurement panels like Nielsen’s can tell advertisers exactly which shows are relevant for their audience, in both reach and demographics. The digital world is vastly different, though, and OpenRTB (the IAB’s standard protocol for real-time bidding auctions) isn’t going to help much with content-level transparency. If publishers were to enrich a bid request with content information, that information would, by design, be available for both reporting and targeting.
Translation: publishers would lose control over selling popular content like “Hell’s Kitchen,” and advertisers would no longer need to go publisher-direct to access it. That sets up an unfair fight against programmatic partners, so we don’t expect publishers to take that route, at least not at scale.
The most promising alternative path to content-level transparency is to access CTV publishers’ libraries directly, so that the actual video file is available for processing before the bid request. In fact, some very large CTV publishers have already decided to integrate with content data platforms, like IRIS.TV, and securely share access to their libraries via content identifiers like the IRIS_ID, bringing several benefits to CTV advertisers.
For starters, categorizing content at the video level is a superior solution: show-level reporting only tells you the show’s name, while video-level classification tells you what each video is actually about. To catch up with the old world, buyers who know the content they run against can better reach their target audience.
Additionally, it’s now possible to cater to the right context at the episode level, which makes CTV buying both more granular (i.e., only the episodes of a show that really matter to you) and broader in reach, as Pixability’s models find similar types of content in other publishers’ catalogs.
And lastly, it enables brand safety and suitability by going below the surface to identify and remove videos with unsafe or unsuitable content on screen or in the dialogue.
Pixability’s unique approach to CTV contextualization
Now, let’s dive into the tech side. CTV contextual technology requires processing video files and metadata, then boiling them down into concepts that marketers can act on. The sheer volume of data involved makes it easy to miss the balance between accuracy and actionability. For example, advertisers do not currently purchase ad spots at the scene level, so flagging each scene of each video is not meaningful when classifying a long-form video such as a movie; scene-level signals have to be rolled up to the video level.
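To make the accuracy-versus-actionability trade-off concrete, here is a minimal sketch (not Pixability’s actual pipeline; the labels, threshold, and function are illustrative assumptions) of rolling per-scene labels up into a single video-level classification, so one stray scene doesn’t mislabel an entire movie:

```python
from collections import Counter

def video_level_labels(scene_labels, min_share=0.2):
    """Roll per-scene labels up to the video level.

    scene_labels: one label per scene, in order.
    min_share: minimum fraction of scenes a label must cover
               to be reported at the video level (assumed cutoff).
    """
    counts = Counter(scene_labels)
    total = len(scene_labels)
    # Keep only labels that describe a meaningful share of the runtime,
    # so a single car chase does not turn a drama into "Automotive".
    return {label: round(n / total, 2)
            for label, n in counts.most_common()
            if n / total >= min_share}

# A 10-scene movie: mostly drama, with one brief car chase.
scenes = ["Drama"] * 7 + ["Automotive"] + ["Drama"] * 2
print(video_level_labels(scenes))  # {'Drama': 0.9}
```

The cutoff value is the knob that trades accuracy for actionability: lower it and every fleeting scene surfaces; raise it and only the dominant themes remain.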
After multiple iterations, we have found the combination of Generative AI, Natural Language Processing, and Computer Vision to be the best way to classify long content on CTV, each serving its own purpose. This is how we do it.
Computer Vision: Analyzing Visual Elements
Computer vision enables machines to comprehend and interpret visual data, making it essential for categorizing CTV inventory. By leveraging deep learning algorithms, computer vision systems can automatically identify scenes and visual patterns within video content. This capability is crucial for inventory categorization, as it allows for extracting valuable metadata related to the content.
At Pixability, we use computer vision algorithms to identify specific objects, products, weapons, explicit content or actions, and more within CTV assets. Our technology then uses this information to classify inventory against industry standards such as the IAB Content Taxonomy and the GARM Brand Safety and Suitability framework.
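The mapping step can be sketched as follows. This is a simplified illustration under stated assumptions: the detections would come from a trained vision model, and the concept table here is a tiny hypothetical stand-in for the real IAB and GARM taxonomies:

```python
# Hypothetical mapping from detected visual concepts to IAB-style
# categories and GARM-style suitability flags; real taxonomies are
# far larger than this illustrative table.
CONCEPT_MAP = {
    "car":        {"category": "Automotive", "garm_flag": None},
    "handgun":    {"category": None, "garm_flag": "Arms & Ammunition"},
    "wine glass": {"category": "Food & Drink", "garm_flag": "Alcohol"},
}

def classify_detections(detections, min_confidence=0.6):
    """Turn (label, confidence) detections into categories and flags."""
    categories, flags = set(), set()
    for label, confidence in detections:
        entry = CONCEPT_MAP.get(label)
        # Skip unknown concepts and low-confidence detections.
        if entry is None or confidence < min_confidence:
            continue
        if entry["category"]:
            categories.add(entry["category"])
        if entry["garm_flag"]:
            flags.add(entry["garm_flag"])
    return sorted(categories), sorted(flags)

# Output of an (assumed) object detector on one video's keyframes:
detections = [("car", 0.92), ("handgun", 0.81), ("tree", 0.95)]
print(classify_detections(detections))
# (['Automotive'], ['Arms & Ammunition'])
```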
But what if the video just showcases humans sitting and chatting like many talk shows? That’s where Natural Language Processing comes in.
Natural Language Processing: Extracting Meaning from Text
While computer vision focuses on the visual aspects of CTV content, natural language processing (NLP) plays a crucial role in analyzing the textual components, such as speech-to-text transcripts, closed captions, subtitles, video titles and descriptions, and other metadata. NLP techniques enable machines to understand and extract meaning from human language, facilitating effective categorization of CTV inventory.
By applying NLP algorithms, it becomes possible to detect topics, and even sentiment, within the textual data associated with CTV content. For example, analyzing the dialogue of a TV show or the description of a movie provides valuable insight into an asset’s genre, storyline, target audience, IAB category, and brand suitability.
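In spirit, topic detection on caption text works like the sketch below. Note the heavy simplification: a production system would use trained topic models or embeddings, while this illustration substitutes hypothetical keyword lists just to show the input and output shape:

```python
import re

# Illustrative keyword lists per topic; a real NLP pipeline would rely
# on trained models rather than keyword matching.
TOPIC_KEYWORDS = {
    "Food & Drink": {"recipe", "chef", "kitchen", "ingredients"},
    "Automotive":   {"engine", "horsepower", "car", "racing"},
}

def detect_topics(caption_text, min_hits=2):
    """Score caption text against each topic's keyword list."""
    words = set(re.findall(r"[a-z']+", caption_text.lower()))
    # Report topics with enough distinct keyword hits to avoid
    # flagging a passing mention as the subject of the video.
    return sorted(topic for topic, keywords in TOPIC_KEYWORDS.items()
                  if len(words & keywords) >= min_hits)

captions = "Tonight the chef shows a new recipe with five simple ingredients."
print(detect_topics(captions))  # ['Food & Drink']
```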
But what if the video is ambiguous? Is “Fast and Furious” a video about Automotive, or should it be in the Movies IAB category? Or both? Enter: artificial intelligence.
Generative AI: Uncovering Patterns and Context
Computer vision paired with NLP allows for deeper CTV analysis across both visual and textual information to enhance the accuracy and granularity of categorization. However, generative AI empowers Pixability to uncover how the video content is perceived by humans.
It leverages a very large training dataset to synthesize, and ultimately go beyond, the categorization provided by computer vision and NLP. In a world where IAB and GARM categories can collide, overlap, or even require multiple classifications for the same video, generative AI becomes a fantastic tool for understanding what the user is trying to watch and, by proxy, what advertising is relevant for that video.
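The reconciliation step can be pictured as prompting a generative model with all the signals and parsing its answer. This is a hedged sketch, not Pixability’s actual prompt or model: the prompt wording is invented, and the model reply is stubbed so the example runs without an LLM API:

```python
import json

def build_reconciliation_prompt(title, cv_labels, nlp_topics):
    """Ask a generative model to reconcile overlapping signals into
    the categories a viewer would actually use to describe the video.
    (Hypothetical prompt for illustration only.)"""
    return (
        "You classify streaming video for advertising.\n"
        f"Title: {title}\n"
        f"Visual signals: {', '.join(cv_labels)}\n"
        f"Textual topics: {', '.join(nlp_topics)}\n"
        'Return JSON like {"categories": [...]} listing every IAB '
        "category a typical viewer would assign, most relevant first."
    )

def parse_categories(model_response):
    """Parse the model's JSON reply, tolerating surrounding text."""
    start, end = model_response.find("{"), model_response.rfind("}") + 1
    return json.loads(model_response[start:end])["categories"]

prompt = build_reconciliation_prompt(
    "Fast and Furious", ["car", "explosion"], ["Automotive", "Action"])
# In production the prompt would go to an LLM API; here we stub the reply.
stub_reply = 'Sure: {"categories": ["Movies", "Automotive"]}'
print(parse_categories(stub_reply))  # ['Movies', 'Automotive']
```

This resolves the “Fast and Furious” ambiguity above: rather than forcing a single label, the model can return both the viewer-facing category (Movies) and the contextual one (Automotive), ordered by relevance.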
Want to learn more about how Pixability can help drive better brand suitability and contextual targeting on CTV? Reach out today for more information!
IRIS.TV is the only data platform built for video and CTV. We structure, connect, and activate the world’s video-level data to create better viewing experiences and advertising outcomes. Our content identifier, the IRIS_ID, enables our partners to build scalable advertising solutions for contextual and brand-suitability planning, targeting, and measurement. Learn more about the IRIS_ID and the IRIS-enabled™ ecosystem of premium publishers, data partners, and ad platforms at www.iris.tv.