October 1st, 2025. I sent a LinkedIn message to Michael Ostrovsky, an economist at Stanford who helped design Google's ad auction system. I had already emailed him my resume weeks earlier. No reply.
So I followed up on LinkedIn. Directly.
"Sir I have already mailed you my application with my resume but did not receive any reply. I would love to pursue research under your guidance either remotely semester long or in the summer of 2026. Looking forward to a reply."
His response: "Love the determination. Can't remember why I followed you, but I clearly did :) I'll see if I have anything where I need help, and will let you know."
Sixteen days passed. I followed up again.
The next day, he came back. He had some side coding projects. Was I interested? What was my hourly rate?
Six months later, we talk every day for about an hour. We published a paper together last month.
The Problem
The question at the center of this is genuinely new. Search advertising, the industry Google built, assumes a specific interaction model. Person types a query, gets a list of results, maybe clicks something. Ads slot naturally between those results. Everyone understands the mechanism.
That model doesn't map to LLM conversations. At all.
When you're ten messages deep in a conversation about planning a trip to Australia, you're not performing discrete searches. You're in a dialogue. The chatbot knows your budget, your travel dates, your anxiety about long layovers, and that you mentioned your elderly mother is joining you. Google sees "flights Sydney." We see the full picture.
The research question Ostrovsky wanted to explore: how do you actually design an ad-ranking mechanism for this kind of context? What's economically viable? What's deceptive versus genuinely useful? This sits at the intersection of mechanism design and AI systems, exactly where his decades of work have lived.
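For context, the baseline mechanism in search advertising is the generalized second-price (GSP) auction with quality scores, the class of mechanisms Ostrovsky's earlier work analyzed. Here is a minimal sketch of that baseline; the ad names, bids, and relevance numbers are made up, and a conversational system would presumably replace the static relevance score with something inferred from the whole dialogue:

```python
# Toy GSP auction with quality scores: the classic search-ads baseline.
# Bids, ad names, and relevance values are hypothetical examples.

def gsp_rank_and_price(bids: dict[str, float],
                       relevance: dict[str, float],
                       slots: int) -> list[tuple[str, float]]:
    """Rank ads by bid * relevance; each winner pays the minimum
    per-click price needed to keep its slot (the next ad's score
    divided by the winner's own relevance)."""
    ranked = sorted(bids, key=lambda a: bids[a] * relevance[a], reverse=True)
    results = []
    for i, ad in enumerate(ranked[:slots]):
        if i + 1 < len(ranked):
            nxt = ranked[i + 1]
            price = bids[nxt] * relevance[nxt] / relevance[ad]
        else:
            price = 0.0  # no competitor below this slot; reserve price of 0
        results.append((ad, round(price, 2)))
    return results

winners = gsp_rank_and_price(
    bids={"hotel_ad": 2.0, "flight_ad": 1.5, "tour_ad": 1.0},
    relevance={"hotel_ad": 0.5, "flight_ad": 0.9, "tour_ad": 0.4},
    slots=2,
)
# flight_ad wins the top slot despite the lower bid, because its
# relevance score is higher; it pays less than its own bid.
```

The open question the project explores is what replaces `relevance` when the "query" is ten messages of accumulated context rather than three keywords.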
What We Built and What Surprised Us
To generate real behavioral data rather than synthetic nonsense, we built opengpt.chat, an LLM chat interface, and bought the "ChatGPT" keyword on Google Ads. We wanted actual users with actual queries.
Over several weeks we collected data across 85+ countries. The findings were not what we expected.
The United States, the target of the entire ad spend, had shockingly low engagement. Users clicked the ad, realized it wasn't ChatGPT, and left. Meanwhile, users from Pakistan, Ireland, Cyprus, and Australia were actually using it. Deeply. Pakistani users in particular converted at an extraordinary rate relative to the US.
We had accidentally found genuine product-market fit in a market we weren't targeting at all. Pakistani students were running marathon tutoring sessions. Users in Ireland were drafting asylum appeal letters. People in Pakistan were writing in Romanized Urdu and asking for formal English output, using the chatbot as an English proficiency bridge that would cost real money from a human tutor.
These weren't the users we built for. They were the users who actually needed it.
The Insight That Matters
The paper we wrote has a finding I keep thinking about.
On Google, a user searches "calories brown bread," then "calories hummus," then "calories tortilla": three separate ad impressions with almost no shared context between them. Each search is a fragment. The advertiser sees fragments.
On a chatbot, the same user writes: "Calculate calories for 40g brown bread, 40g hummus, two mini tomatoes, 100g apple, tortilla 60g, vegan veggie burger..." and then follows up with protein alternatives, recipe tweaks, dietary constraints. By message 4 or 5, the system knows the full intent arc.
The chatbot sees the full picture. Google sees fragments.
That's the advertising moat here, if this becomes a platform. Not keywords. Not demographic targeting. Conversation-level intent, inferred from the complete arc of what a person is actually trying to figure out. The user's 10th message is commercially far more valuable than their 1st, because by then the system understands the whole situation.
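The fragment-versus-arc contrast can be made concrete with a toy sketch. This is not the paper's actual mechanism; the keyword-to-signal table and scoring are invented for illustration. The point is only that a search engine scores each query from an empty context, while a chatbot's context is a running union over the whole conversation:

```python
# Toy illustration of conversation-level intent vs. keyword fragments.
# The signal table and messages are hypothetical, not the paper's data.

FRAGMENTS = ["calories brown bread", "calories hummus", "calories tortilla"]

CONVERSATION = [
    "Calculate calories for 40g brown bread, 40g hummus, two mini tomatoes",
    "What are some high-protein vegan alternatives?",
    "Adjust the recipe so it stays under 600 calories",
    "I'm gluten-intolerant, swap the tortilla for something else",
]

# Hypothetical intent signals an ad system might extract from text.
SIGNALS = {
    "calories": "goal:calorie_tracking",
    "protein": "goal:nutrition",
    "vegan": "constraint:vegan",
    "gluten": "constraint:gluten_free",
    "recipe": "activity:cooking",
}

def extract_signals(text: str) -> set[str]:
    """Return the intent signals present in a single message or query."""
    return {label for kw, label in SIGNALS.items() if kw in text.lower()}

def accumulated_intent(messages: list[str]) -> list[set[str]]:
    """After each message, the union of every signal seen so far.
    A chatbot carries this running context forward; a search engine
    restarts from an empty set on every query."""
    seen: set[str] = set()
    history = []
    for msg in messages:
        seen |= extract_signals(msg)
        history.append(set(seen))
    return history

# Each search fragment yields one isolated signal.
fragment_scores = [len(extract_signals(q)) for q in FRAGMENTS]

# The conversation accumulates: the final message carries the full arc.
conv_history = accumulated_intent(CONVERSATION)
```

Under this toy scoring, every search fragment surfaces a single signal, while the fourth chat message sits on top of five: the user's goals and dietary constraints together, which is exactly the "10th message is worth more than the 1st" claim in miniature.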
What This Actually Feels Like
Imposter syndrome is constant; I won't pretend otherwise. Here's a man whose work shaped how the internet monetizes itself. He's been doing this longer than I've been alive. There's a specific dread in sharing a data analysis with someone who has spent decades thinking more rigorously about the same underlying questions.
But what I didn't anticipate is how much it actually feels like collaboration. We run tests, argue about what the data means, correct each other's reads on the numbers. He knows mechanism design at a depth I genuinely can't match. I know the product and the implementation. Those turn out to be genuinely complementary, not in a polite way, but in a "we actually need each other to make progress" way.
The imposter syndrome doesn't disappear. It just stops being the loudest thing in the room when the work is actually moving.