
Cloudflare’s CEO wants to save the web from AI’s oligarchs. Here’s why his plan isn’t crazy.

Sixteen years ago, Matthew Prince and his Harvard Business School classmate Michelle Zatlyn decided there was a better way to help companies handle hacker attacks on their websites. Prince and a friend had already built an open source system to help anyone with a website more easily track spammers. What if the three of them could leverage that into a company that not only tracked all internet threats but stopped them too?

Within months they had a business plan, had won a prestigious Harvard Business School competition with it, and had seed funding. They unveiled the company, Cloudflare, a year later at the 2010 TechCrunch Disrupt competition, taking second place. And today, riding the explosion of cloud computing and armed with better technology and marketing, they’ve leapfrogged competitors to become one of the dominant cybersecurity and content delivery networks in the world.

It’s one of the great startup success stories out of Silicon Valley in the past decade. Cloudflare went public in 2019 and is now worth roughly $70 billion. That puts it around number 400 on Yahoo’s list of companies by market cap, roughly the size of Marriott, SoftBank, and UPS. And it’s turned Prince, 50, into a certified tech oligarch worth $6 billion.

But today, in the middle of August, Prince isn’t on a video feed in front of me because he wants to talk about any of that. He wants to talk about saving something old, not building something new. He wants to talk about saving the World Wide Web and all the online journalism it has spawned. 

I’ve never had a conversation with a big tech CEO like this, and I’ve interviewed a lot of them. The best are super high energy, inspiring, out-of-the-box thinkers. But “save,” “old,” “journalism,” and “liberal arts” are dirty words to many of them, especially when they make it big. Some I know would put big screen TVs where the paintings are in the Louvre.

But Prince isn’t like most entrepreneurs I’ve met, either. He’s more of a Renaissance man in geek clothing. Sure, he studied computer science at Trinity College in Hartford. But he was also a ski bum who edited the school newspaper. And he only minored in computer science. His actual major was English.

He wrote his college thesis in 1996, two years before Google was founded, on the potential for political biases in search engines. And while he had offers to work for companies like Netscape, Yahoo and Microsoft after graduating, at that point in his life the idea of being a programmer actually sounded boring to him.  

Instead, he went to law school at the University of Chicago, where he also started a legal magazine. Cloudflare grew out of Project Honey Pot, an open source spam-tracking project he started with Lee Holloway through his anti-spam company Unspam Technologies after law school, while he was also teaching cybersecurity law at the University of Illinois.

Prince wants to talk about the future of the web and journalism with me because he thinks the AI chatbot revolution is killing both of them. And he thinks he can help fix that with something he calls pay-per-crawl, a gambit he and Cloudflare launched on July 1. He cares, he says, because “I love the smell of printer ink and a big wet press. So I kind of have a soft spot for the media industry and how important it is.” This isn’t spin. Two years ago he and his wife bought The Park Record, the local paper in his hometown of Park City.

The damage to the web from AI chatbots is not obvious to most of us yet. But publishers and advertisers say the chatbots are blowing up the 30-year-old economic relationship that websites, especially news and information websites, have had with search engines. Yes, social media and the general disaggregation of news content have been slowly eroding media business models for two decades. But AI chatbots have thrown gasoline on that fire.

It’s a big enough problem that Prince and most publishers now believe a lot of journalism, and anything else that’s ad-supported online, will die or be subsumed into AI companies in the next few years without intervention. And because 20 percent of websites use Cloudflare for security and traffic management, he can block enough AI chatbot crawlers – with his customers’ permission, of course – to at least force some big AI companies to the negotiating table.

To explain what Cloudflare does in detail would make your eyes glaze over. But it’s a critical and powerful part of the web’s plumbing. If Cloudflare turned off its servers tomorrow, big chunks of the web would stop working for days. Broadly, companies and governments hire Cloudflare to help them with cybersecurity and to speed up the performance of their webpages. Cloudflare does this by putting duplicate copies of its clients’ webpages on servers closer to their customers. Web pages for a bank in Paris load faster for users in San Francisco if copies are stored on Cloudflare’s servers near San Francisco.

This also improves security. To take a site down, hackers no longer have to immobilize a couple of company-owned server sites; they have to knock out dozens of Cloudflare’s locations worldwide. All this gives Cloudflare so much visibility into the latest attack techniques that it’s increasingly difficult for hackers to come up with an assault Cloudflare has not seen before.

The problem Prince is trying to solve is perhaps the biggest attack on how the web functions since Tim Berners-Lee created it in 1990. Until the AI revolution took hold two and a half years ago, the economic foundation worked like this: Search engines like Google and Microsoft freely and regularly indexed every site with web crawlers. In return, the search results powered by those indices generated referral traffic to those crawled websites. That has supported hundreds of billions of dollars in ad spending and search engine optimization.

But AI chatbots don’t work that way. Instead of ten blue links to choose from after a search – with advertising displayed at the top and right of the results – AI chatbots just supply you with the answer. It’s a much better experience for users. Twenty years ago, Google’s founders themselves, in interviews with me and elsewhere, described search as only an intermediate step toward an answer engine like this.

The problem this creates, however, is that there are no ads when AI chatbots give you the answer. If there are links to sources, users almost never click on them. AI chatbots also drive up publishers’ bandwidth costs because they crawl thousands of times a day. Wikipedia said back in April that these bots had raised its bandwidth costs by 50 percent.

And now Google itself is accelerating the problem. As more and more users ditch Google search for their favorite AI chatbot, Google is also referring less of the traffic from the searches that remain. Instead, to compete with the chatbots, Google is using its own Gemini AI to answer those questions directly in the results.

Some publishers say they are blocking these crawlers. In reality most aren’t, because they can’t. The crawlers are smart enough to get around paywalls. And even when publishers put a “do not crawl” sign on their sites, typically a robots.txt file, they’ve discovered these aren’t giant barricades, just stop signs that the crawlers can easily ignore. TollBit’s Toshit Panigrahi told me that 3.3 percent of crawl-block requests were ignored at the end of 2024. By the end of March 2025, it was up to 13 percent.
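For readers who haven’t looked at one in a while, a robots.txt file is just a plain-text request that sits at the root of a website; honoring it is entirely voluntary. Here is a minimal sketch of what a publisher’s block might look like (the user-agent names are tokens some AI crawlers have published, shown purely as an illustration):

```
# robots.txt – a request, not a barricade; nothing technically enforces it
User-agent: GPTBot          # OpenAI's crawler
Disallow: /

User-agent: PerplexityBot   # Perplexity's crawler
Disallow: /

User-agent: *               # every other crawler
Allow: /
```

A crawler that chooses to ignore these lines faces no technical obstacle at all, which is why Prince argues that real scarcity has to be enforced at the network level, the way Cloudflare already blocks hostile bots.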

Prince doesn’t think he and Cloudflare have the power to solve this problem themselves. Google is the only company that can really fix it, he says. Even though Google is behind in the AI chatbot race, its search monopoly gives it powerful leverage against competitors like OpenAI’s ChatGPT. That means OpenAI will be reluctant to pay publishers broadly to crawl their websites until Google starts doing it.

But Prince also knows that Cloudflare has the ability to block enough AI crawlers to eventually force every AI company to the negotiating table. And he’s betting that Google’s need to eventually settle its antitrust cases in the US and abroad will force the company to make concessions.

“Google historically has believed they have a God-given right to have access to all this stuff,” Prince said. But the future is coming, he argues: search is dying, and AI answer engines are coming.

“And so if it’s true that the future of being an answer engine is driven by who has access to the most unique content, if I’m Google, I’m like, ‘Hey, a new sheriff’s in town. We will insist you have to pay for content. And we’re gonna be the first ones out there doing the exclusive deals so that our answer engine is better than everybody else’s,’” he said.

“The reason for my optimism here is that there are no less than 16 regulatory agencies or courts that are currently evaluating Google’s practices worldwide. And one of the things they are evaluating is the way that Google is bundling search and AI together and whether that is anti-competitive. And so if Google doesn’t do it voluntarily, they will be forced to do it.”

And if that doesn’t work, Prince said, it’s entirely possible that publishers will just start using Cloudflare to block Google’s crawler, something that would have been unheard of just two years ago. “Right before I was on with you, I was on with a publisher who said, ‘The level of revenue that we’re getting from Google at this point is small enough that we might just block them.’”

For anyone who has been writing and thinking about search and the technology world for a while, the fact that anyone would even consider blocking Google’s crawlers is astonishing. Fear of not being indexed correctly by Google’s search crawlers has been such a big deal for businesses for so long that the search engine optimization industry now generates more than $100 billion a year in revenue.

“That’s what Google was terrified we were going to do when we announced pay-per-crawl,” Prince said. “They really thought that we were going to block the Googlebot across 20% of the Internet overnight.”

He said he has no plans to do that … yet. “But that’s got them (Google) because they’re like, ‘You will break search.’”

“Isn’t search already broken?” I asked Prince.

“It is,” he said.

If you really want to drill into this issue, here’s our conversation, edited for narrative flow and clarity.

Fred Vogelstein: Tell me the origin story of this project. It seems orthogonal to what you typically do. 

Matthew Prince:  A year and a half ago, a handful of publishers who were using us started coming to us and saying, “We have a new security threat. We have a new threat to our business model. It’s the AI companies.” At first I rolled my eyes. Publishers are always complaining about the next technology. “Oh, woe is us,” you know. 

I think it was finally Neil (Vogel, CEO of People, Inc.) who was like, “Just go pull the data.” He was right. We have solid data about internet traffic going back a decade.

And what we found was that (a year and a half ago) it was about 125 times harder (to get referral traffic from an AI chatbot compared to Google). Now it’s up to 750 times harder. Anthropic’s (referral ratio) is even worse. Anthropic today is up to like 37,000 times harder than the Google of old. 

I get it. It’s a better user interface. So we shouldn’t fight the future. But the business model of the web is inherently not going to work under these answer engines because all the referral traffic goes away. 

To me that means you have three possible outcomes. 

The super nihilistic outcome is that you, and all the other journalists, and all the other researchers, and all the other academics starve to death and die because there’s no business model anymore. The AI companies are going to take your stuff and create derivatives. Ninety-nine percent of the world is going to read the derivative. No one’s going to go back to the original source. And the whole industry is going to collapse.

I actually think that’s pretty unlikely. I think we as a species are pretty creative. There’s value in human creation. There’s still value in journalists. Robots aren’t going to replace the world for a long, long time. 

So then there’s the Black Mirror version. Journalists will still eat. So will academics. But we’re not going to go back to the media of the 1900s. We’re going to go back to the media of the 1400s – the Medicis. Five big families. In that outcome, the five big AI companies are going to vertically employ all of the journalists, researchers, and academics in their own world.

There will be a liberal one. There’ll be a conservative one. There’ll be a Chinese one. One out of India. The Europeans will try to create one and fail. Eventually they’ll use the liberal US one.

That’s a pretty scary outcome though, because now we’ve got knowledge going back into silos. That’s the opposite of what the Internet did.

Then there’s the third option, which I think is the most attractive one. That’s where we accept the fact that the AI companies are effectively in a business that’s most similar to Netflix – where most of the technology is commoditized. 

And so what’s left? What do you compete on? Where do you find a moat? The answer is content – like whoever has Squid Game, whoever has the exclusive Taylor Swift interview. So you can imagine content will then get sliced up almost like the way they do for theatrical releases.

FV: So the big tech companies and AI companies become the next generation of media conglomerates?  

MP: Yes. So you (a content owner) might, for a limited period of time, say, “Hey, OpenAI, you get this for the next week exclusively. Nobody else gets it.” And OpenAI will advertise it. You’ll go from that to the limited edition DVD release, to the streaming release, to HBO, and then eventually it will make its way to the Hallmark Channel.

If the AI companies agreed to take some percentage in the same way that Netflix or Spotify takes some percentage of every subscriber dollar they get, they could then dedicate that money to making new original content or helping fund content from others.

It’s worked for the music industry. It’s worked for Hollywood. Why can’t we do the same thing for journalists?

I don’t think it’s just about replacing ad dollars. I think it’s about actually expanding the pie. There will still be ad dollars, there will still be subscription dollars. But now there’s this new pie of dollars that could be as expansive for content creation as Spotify was for the music industry.

FV: Isn’t part of the problem that it’s still so easy to copy, paste and steal words? How can we make it harder to do that, the way the music industry and the movie/video industry have?

MP:  You can still pirate as much music as you want. It’s just that it’s now easier to spend 10 bucks a month and use Spotify.

FV: Right. But we haven’t created that dynamic yet for text content.

MP: That’s what we’re doing with pay-per-crawl. 

FV: Can you go into more detail? That just seems super hard. I’ve occasionally wondered if blockchain might offer a solution by giving every story its own traceable DNA fingerprint. How do we fix the fact that it’s still so easy to find text anywhere online, hit copy, paste, publish, and immediately have content that generates revenue with zero work?

MP: I don’t think you need to get that far (to using blockchain). The only question is how do you create scarcity? We know how to do that. 

We go to war every single day with the Chinese, Russian, North Korean, and Iranian hackers. We built a $70 billion company stopping them. I can stop some nerd in Palo Alto.

And if they get around my blocks, now all of a sudden they’re not just copying something, now they’re actually breaking cybersecurity protocols. That means they’re subject to the Computer Fraud and Abuse Act. If they are US-based entities we can sue them all day long and win.

You just have to make it harder and riskier to copy than to purchase. We did it with the music industry a generation ago and there were 300 million kids pirating music. Here it’s only thousands of AI companies. If you make it easy enough for AI companies to pay for content they’re going to get in line and have conversations. And 80% of the AI universe is a Cloudflare customer. 

What I’m encouraged by is that as I talk to all the AI companies, all of them say, “We get it, we know we’re going to have to pay eventually. But it needs to be a level playing field, it needs to be fair.” 

FV: What do you mean, fair? 

MP: Google believes they have a God-given right to have access to all this stuff for free because they always have. But in this new world they can’t. And because they are trying to transition from being a search company to an AI company, the other AI companies won’t pay to crawl until Google does.

Google should say, “We’re changing our policy on this date for anything that touches any of our AI products. If we crawl it, you have the ability to either block it or let us in. And if you have content which is ad supported or subscription supported, we are going to start paying you in order to be able to get through those blocks.

“And we want to cut exclusive deals where we get the theatrical release every time you publish something new, because we want our answer engine to be better than everyone else’s.”

And that’s how you create a market. And if we have that, then over time, say you’re about to release a new article on some new tech trend, or Taylor Swift does an exclusive interview about the inspiration behind her newest album, or a new study comes out on the effects of acetaminophen. Whoever has that content will win more subscribers.

And that content is going to be what actually causes the AI companies to win in the future. It’s not going to be algorithms. It’s not going to be silicon. It’s not going to be smart researchers.

It’s the same transition YouTube went through. YouTube at first won because they could deliver video seamlessly and at a lower cost than anyone else. Then the rest of the world caught up. And now they’re winning because they are doing exclusive deals and paying content creators more than any of the other platforms. 

So all of these things eventually transition to whoever has the best content wins. And so I think Google’s just got to realize that as well (for their search business).

FV: What’s making the transition so hard? 

MP: The transition is hard because everyone still makes the mistake of thinking the Internet’s been free. It’s never been free. Someone has always paid for it.

That someone to a large extent has been Google. Google invented the whole system of “We will generate traffic. And then we’ll help you monetize that traffic.”

Google is responsible for everything that’s wrong with the world. They don’t like it when I say that. And I don’t really mean it that strongly. I think they have been a net massive good for the world. 

But they taught content creators to worship the deity of traffic. That begat Facebook, which begat TikTok.

So now we’re spiraling down this sort of attention-economy rabbit hole, and we’re left asking, “Why is the world so divided? Why is there so much hate?” The answer is obvious.

And so if you could instead get to a world where you reward unique, interesting, quality content like the Internet that we remember from its early days, that would be good. 

FV: How does any of this get enforced because we’re talking about content traveling electronically across zillions of borders. Whose laws do we use? 

MP: Well, the magic of Cloudflare is that, depending on where we put the server farms that host content, we can make you and everybody else a UK publisher, or a Michigan publisher, or an Australian publisher for our good friends at News Corp. Wherever the laws are the most favorable, we can say, “Content now lives there.” Someone is eventually going to decide to make themselves the Delaware of content (the place that’s the most tax and legal friendly to US corporations).

FV: How would that work?

MP: Let’s imagine it’s Michigan, right? Michigan passes a law that says, “If your content is crawled from Michigan, here are the rules that apply. We’re going to develop courts that understand these things, all sorts of stuff.”

If they did that, we would invest a ton in data center space in Michigan, and then we would announce that all of our customers’ content is served from Michigan. Eventually someone’s going to pass that law. And Google knows this, and they’re terrified of what we are doing because of it.

And so I am hopeful that even before that happens, because the problem with legislative solutions is they’re always kind of messy, Google just says, “We’re changing our policy on this date for anything that touches any of our AI products, all the way down to the answer box. If we crawl it, you have the ability to block it, and you can let us in if you want.

“But if you have content which is ad supported or subscription supported, we are going to start paying you in order to be able to get through those blocks. And we want to cut exclusive deals where we get the theatrical release every time you publish something new, because we want our answer engine to be better than everyone else’s.”

FV: Why do you need Google? Can’t you and Akamai tackle this problem on your own?

MP: I actually feel very alone right now, because the Akamai guys are like, “I don’t understand this.” They’re not actually out there having the conversations with publishers. If we can get Google to move, the rest of the industry moves along with it.

Can it happen without them? Maybe, but it will be limited because, again, the company the rest of the AI industry is worried about competing with is Google. And it can’t be that Google gets to crawl for free while everyone else has to pay.

OpenAI is paying and doing content licensing deals. And since July 1, since we put the blocks in place, the conversations with major publishers about those content licensing deals have gotten easier and better, not harder and worse. So that’s good. That’s progress. I do think that all of our focus right now is on Google.

FV: Would publishers themselves ever use you to block Google? 

MP: We may be approaching that tipping point. Right before I was on with you, I was on with a publisher who said he actually might. He said, “The level of revenue that we’re getting from Google at this point is small enough that we might just block them.”

So I am hopeful that Google understands this and does the right thing. If not, I think there’s a real chance the regulatory pressure is going to be very, very significant (and force them), and come down from multiple directions incredibly quickly … like this fall.

And that will force something. If not, in the worst case, the revenue Google supplies to publishers will erode enough that the publishers at some point just say, “Fine, we’ll block.”

That’s what Google told me they were so terrified we were going to do when we announced pay-per-crawl. They really thought that we were going to block the Googlebot across 20% of the Internet overnight. I haven’t promised I won’t do that (block the Google crawler) at some point, but not yet. But that’s got them, because they’re like, ‘You will break search.’

FV: How many publishers do you have working with you at this point?

MP: We have the who’s who. It’s harder to name a publisher that is not either using us now or in the process of transitioning to us.

FV:  Are all the AI companies actually agreeing to pay?

MP: Not yet. We started out saying we’re going to just block training, because we can actually block Google from training. We can’t do it for AI Overviews. We can’t do it for the stuff that’s associated with search. Again, we wanted to create a level playing field, so we couldn’t block everybody else’s training and not block Google.

We’re not blocking RAG and search yet, but that’s coming soon.

And we’re identifying who the bad actors are. And so Perplexity is the first bad actor that we caught doing some really bad stuff. We tried to work with them. We share a board member. We talked with them extensively. And they finally were just like, “We are confident we can get around whatever you put in place.” They’re not the only one. 

With the Perplexity thing, what’s crazy is that because we’re blocking them from being able to get access to the content, they’re now going to Trade Desk and pulling down a summary of the headline and a summary of the content, and writing it as if People magazine, or the New York Times, or the Wall Street Journal actually wrote it.

That’s pretty good evidence not of copyright violations or defamation but of fraud, because you are now taking somebody else’s name and mark and saying they said X when they didn’t actually say X. And unlike the current set of copyright cases against the AI companies, that’s a good legal case.

It’s hard to stop the North Korean hackers. It is easier to stop AI companies because they have billions of dollars in the bank. They’re registered C corporations. And it’s easy to serve their executives with subpoenas because they all live in Palo Alto, not some far away land that’s hard to reach. We’re also just really good technically at stopping them.

The good news is that there are some really good actors too. OpenAI is really a good actor in terms of everything we can see. I think they really believe in the ecosystem. I think they have the best shot to be kind of the next Google in terms of the true patron to the web that exists out there. And so you don’t have to be a bad actor to be a good AI company.

FV: If you can convince Google to make concessions, what happens next? 

MP: The next hard technical challenge will be making the system actually work at scale.

Let’s imagine we get Google on board. Let’s imagine we get the AI companies to agree to pay. How do you send money back and forth with so many different payment infrastructures? 

You can do it with US folks pretty easily. But how do you pay someone in sub-Saharan Africa without enormous fees? This is where maybe blockchain and cryptocurrencies get interesting. It’s going to be tricky. But we’ve got lots of smart people working on it.

The other thing is, how do you establish a pricing mechanism for the articles? It can’t be that every article is worth the same amount. Imagine you’re in Indianapolis, and you’re obsessed with traffic patterns. So you buy a fleet of drones, and you start creating content on what’s going on on Interstate 70, and you’re the best reporter in the world on what’s happening on I-70.

How much is that data worth to an AI company if they have a lot of customers in Indiana? A lot. What’s the population of Indianapolis? Close to a million people. What if 10 percent use this service and pay an AI company $20 a month? That’s $2 million a month just for that content. Would an AI company pay a journalist $500,000 a year to supply that data exclusively? I think the answer is, “Yes.”

And so you could imagine a world in which each LLM company gets kind of a preview of content, has an algorithm score how valuable it is, and then tells the writer how much he’ll get for it. And the payment isn’t based on how many words you write, but on how much you are actually adding to the knowledge base.

FV: How does all this help Cloudflare?

MP: Well, if the Internet ceases to exist, then Cloudflare ceases to exist. And by the way, we’re winning customers because of this. So it’s been good for business. We were actually kind of under-penetrated in the media space before we did this. A lot of the media companies were using competitors. We’d go pitch a big media company saying, “We can make you faster. We’ll save you money.” And they’d be like, “Whatever.”

At some point, if we negotiate a big payment from OpenAI on your behalf, we’ll take some percentage of that, the same way that your agent would take a percentage.

Also, what else would I be doing? What else can I possibly do that’s more important than this? And who’s gonna do it if it’s not us? 

I also think it might be critical to saving the world because we can’t just keep getting more polarized. It’s not healthy. We’ve got to find ways to come back together and maybe this is a way of doing it. 

I know this is where I lose people. But if we end up in a Black Mirror world, that will rip us apart. It is better for us to be sharing knowledge, not hoarding it in silos.

FV: Thanks for spending so much time with me.




