AI: The race for LLM AI Developer use via APIs. RTZ #482
I’ve long discussed how the race for the best models among the leading LLM AI companies is also a race for the most Developers using those models. That use typically happens via APIs (Application Programming Interfaces), which let developers tap the underlying frontier AI models for applications and services further up the AI Tech Stack.
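To make the ‘via APIs’ point concrete, here is a minimal sketch of what that developer usage typically looks like, assuming the OpenAI Python SDK (v1.x); the model name and prompt are illustrative placeholders, not a recommendation:

```python
# Minimal sketch: calling a frontier LLM via its API (OpenAI Python SDK assumed).
# The model name and prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; swap in whatever frontier model the app targets
    messages=[{"role": "user", "content": "Summarize this support ticket in one line."}],
)

print(response.choices[0].message.content)
```

Most applications and services ‘further up the stack’ are, at bottom, variations on calls like this, which is why the race for developer adoption matters so much to the model providers.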
The leading companies, OpenAI and its partner Microsoft, Anthropic, Google, and Meta with its open source Llama 3 models, along with their AI cloud partners like Amazon et al, are all engaged in a fierce fight to get developers to try their latest AI models and features. And that race of course continues at a ferocious pace.
OpenAI continues to lead on that front, with its AI Reasoning model OpenAI o1 last week the latest example. Anthropic is also making headway with its Claude family of LLM AI models, adding prompt caching and an ‘Artifacts’ UI upgrade in recent iterations.
Google arguably has the biggest opportunity here, given its competitive Gemini AI models and its large developer base across the Google ecosystem of applications and services.
But the latest reports on Developer support for the top LLM AI companies suggest Google has more work to do on this front. The Information explains in “Why AI Developers Are Skipping Google’s Gemini”:
“One of Google’s efforts to catch up to OpenAI in the artificial intelligence race is struggling.”
“Google’s conversational AI, Gemini, is too difficult for app developers and businesses to use compared to rivals’ technology, according to interviews with developers and several Google employees that help companies use the AI.”
“Gemini’s lack of popularity among developers, relative to OpenAI’s models, appears to be an open secret at Google—and in the real world.”
“For instance, a June survey of more than 750 tech workers by enterprise software startup Retool found just 2.6% of respondents said they used Gemini most frequently to build AI apps, compared to 76% who said they used OpenAI. Gemini narrowly beat out Anthropic’s Claude, at 2.3%, but Claude’s usage had more than quadrupled since Retool’s November 2023 report, the company said. (Gemini wasn’t available to developers until a month after that.)”
“Similarweb, which tracks website traffic, said OpenAI’s page for app developers got 82.8 million visits from June through August, while Google’s had 8.4 million views in the same period.”
“Smaller anecdotal surveys provide similar evidence. Late last month, Julian Saks, founder of Finetune, a young startup working on AI agents, asked 50 AI startup developers in his San Francisco co-working space what conversational AI models they were using the most.”
“Developers’ struggle to use Gemini has implications for Google’s cloud business, which hopes to use Gemini to attract more customers for its server rental business.”
“The issues also pose a potential problem for Google’s development of the AI itself, some employees say.”
Higher Developer (and end user) usage of the models also helps further improve the models themselves:
“By attracting millions of paying customers to buy its AI models or use ChatGPT, OpenAI gets a wealth of implicit and explicit feedback on how well its AI is performing so it can make improvements. If Google’s Gemini doesn’t get the same level of usage, the company may have a hazier road map to doing the same with it, according to two people who have worked on Gemini.”
“That could also affect the development of Gemini models for Google’s consumer and ad products to provide conversational answers in search and for its voice assistant.”
Part of the problem for Google seems to be the multiple alternative applications and APIs it offers for accessing its Gemini models, versus the single, simpler path to competitors’ models (a brief illustrative sketch follows the excerpts below):
“Google makes it harder to use the models compared to its rivals because of the confusing variety of options it offers, the number of steps some of those options require, and other differences between its system for developers and OpenAI’s.”
“Sometimes the different options Google offers for using Gemini compete for the same eyeballs in Google’s own search results: For a while this spring, Vertex AI Studio, a service Google promotes to bigger businesses that may want to use Gemini, was a sponsored result for Google AI Studio, the name of a simpler tool enabling developers to use Gemini, according to a person who saw the ad.”
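To illustrate the kind of fragmentation being described, here is a minimal sketch of the two main Python routes to Gemini at the time of writing: one via a Google AI Studio API key, one via a Vertex AI cloud project. This assumes the `google-generativeai` and `vertexai` (google-cloud-aiplatform) SDKs; the key, project ID, and model names below are placeholders:

```python
# Route 1: Google AI Studio — a simple API key, similar in spirit to OpenAI's flow.
# (Assumes the google-generativeai SDK; key and model name are placeholders.)
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")
studio_model = genai.GenerativeModel("gemini-1.5-pro")
print(studio_model.generate_content("Hello, Gemini").text)

# Route 2: Vertex AI — the enterprise path, which requires a Google Cloud project,
# a region, and cloud authentication before any call can be made.
# (Assumes the vertexai module from google-cloud-aiplatform; values are placeholders.)
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project-id", location="us-central1")
vertex_model = GenerativeModel("gemini-1.5-pro")
print(vertex_model.generate_content("Hello, Gemini").text)
```

Neither route is hard on its own; the friction The Information describes is in figuring out which of these (and the related Studio products) a given developer is supposed to use in the first place.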
Google of course is aware of these issues and is in the process of addressing them:
“Google is trying to change that perception, including by responding to critics of Gemini on X. It’s also hosting events for developers where it can promote Gemini. It’s trying other incentives, such as a competition among developers to build the best Gemini-powered app, with the first-place winner getting a custom electric DeLorean with the license plate G3M1N1. (Google recently delayed the final results until later this year.)”
“Google also is considering merging some features of the overlapping app-building products it sells to reduce confusion among developers, said a person who works on its developer products.”
“Meanwhile, Google is developing the next version of Gemini, 2.0. The AI race could be a long one, giving Google time to find breakthroughs that allow it to keep up with or surpass OpenAI’s technology.”
The whole piece has useful charts and data to supplement these comparisons and issues, and is worth reading in full.
But the broader point is that the leading LLM AI companies are not just competing on the ‘AI Table Stakes’ of infrastructure and model features and capabilities that I’ve discussed in recent posts. Getting Developer support is a make-or-break activity as well, and has to be attacked with concurrent focus and execution.
It’s a lesson that Microsoft adopted early over two decades ago, and is no less relevant today in these early days of this AI Tech Wave. OpenAI has taken it to heart. Google needs to do the same. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)