What does the future of AI ads look like at Google -- and how will the company rank the advertisements in AI Mode and AI Overviews?
Navah Hopkins, brand evangelist at Optmyzr, put those questions to Ginny Marvin, Google Ads liaison, in an interview posted to YouTube. The topics ranged from truths about ad rank and cost per click to landing pages and changes coming to Performance Max.
Google is not alone: Microsoft has also invested swiftly in AI-powered advertising solutions across its platforms, particularly Copilot. The company points to gains in relevance, personalization, and engagement for both users and advertisers.
Marvin did not provide specific details about AI-based ads, but said this is a “massive” focus for Google.
The topic of AI in Google Ads led into a discussion of agentic AI and whether it makes landing pages obsolete.
Marvin told advertisers and marketers to focus on "today," and what's in front of them. Landing pages and assets, or content, are fundamental to making a brand unique and differentiating a product. "Now," however, is a word she kept emphasizing.
AI allows Google to infer far more about what consumers want by using advancements in natural language processing (NLP), providing a deeper understanding of user queries and their relevance to the ad being served.
But how much do these companies really know about how their AI-based ads work?
Marvin may not have all the answers, but neither does Anthropic CEO Dario Amodei. In a post published to his personal website, he writes that the goal is not only to determine how AI works, but to head off the unforeseen dangers of "ignorance."
Amodei writes about developing “MRI for AI” within the next decade. This technology would determine what makes AI work and why these platforms make certain decisions.
It also should prevent any unforeseen dangers like “jailbreaks” — a method to bypass an AI system's safety restrictions and ethical guidelines.
“Modern generative AI systems are opaque in a way that fundamentally differs from traditional software,” Amodei writes. “If an ordinary software program does something — for example, a character in a video game says a line of dialogue, or my food delivery app allows me to tip my driver — it does those things because a human specifically programmed them in.”
When a generative AI system summarizes a financial document, technologists still do not understand precisely why it makes the choices it does — for example, why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate.
The lack of transparency, he wrote, makes it difficult to find definitive evidence to support the existence of these risks and rally support to address them.
It is also difficult to “know for sure how dangerous they are,” he adds.
Although it is possible to put filters on the models, Amodei says, there are "many possible ways to 'jailbreak' or trick the model, and the only way to discover the existence of a jailbreak is to find it empirically."
Amodei wants to build an “MRI” for AI that would “fully reveal the inner workings of an AI model.” He said Anthropic and others have been trying to solve this problem for years.
“The MRI of interpretability can help us develop and refine interventions — almost like zapping a precise part of someone’s brain,” he writes. “We used this method to create Golden Gate Claude, a version of one of Anthropic’s models where the ‘Golden Gate Bridge’ feature was artificially amplified.”
It caused the model to become obsessed with the bridge, bringing it up even in unrelated conversations.
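Amodei's post does not spell out the exact mechanism, but the "artificially amplified feature" idea is commonly described in interpretability work as activation steering: take a direction in a model's hidden-state space that corresponds to a concept (found, for example, by a sparse autoencoder) and add a scaled copy of it to the activations at some layer. The sketch below is a toy illustration of that arithmetic only — the vector sizes, the `amplify_feature` name, and the pretend "concept dimension" are all hypothetical, not Anthropic's implementation.

```python
import numpy as np

def amplify_feature(hidden, direction, alpha=8.0):
    """Add a scaled feature direction to a hidden-state vector.

    hidden:    a model's activation vector at some layer (toy stand-in)
    direction: vector for the learned concept feature (assumed given,
               e.g. from a sparse autoencoder); normalized here
    alpha:     amplification strength
    """
    direction = direction / np.linalg.norm(direction)
    return hidden + alpha * direction

# Toy demonstration: amplification moves the activation strongly
# along the chosen feature axis and leaves other axes untouched.
rng = np.random.default_rng(0)
hidden = rng.normal(size=16)
feature = np.zeros(16)
feature[3] = 1.0  # pretend dimension 3 encodes the concept

steered = amplify_feature(hidden, feature, alpha=8.0)
print(steered[3] - hidden[3])  # ≈ 8.0; every other dimension is unchanged
```

Steering at inference time like this changes behavior without retraining, which is why a strongly amplified concept can surface even in unrelated conversations — the model is pushed toward that feature at every step.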
"This lack of understanding” of AI, Amodei wrote, "is essentially unprecedented in the history of technology” and needs to be fixed.