AI: The other AI 'open to close' battle. RTZ #483

There’s a shift afoot as AI moves from research to commercial applications: an accelerating trend to ‘prevent the copying of homework’. It’s spurring the other ‘open to close’ movement in this AI Tech Wave. Let me explain.

As I’ve written before, there’s been an ongoing debate between ‘open source’ and closed LLM AI systems. But as the commercial stakes of this AI Tech Wave get ever larger, there’s an accelerating wave of closing off AI innovations to other AI companies, developers and researchers.

The Information notes this change with OpenAI’s latest ‘o1’ AI Reasoning model, which I discussed a few days ago.

In “Why OpenAI is Hiding its Reasoning Model’s ‘Thoughts’”:

“In the artificial intelligence race, competition is fierce. Every big AI developer is watching rivals like a hawk and trying to reverse-engineer or copy their best work, as we’ve reported.”

“What can a leader like OpenAI do to keep its edge? The company’s newly released reasoning model, o1-preview, aka Strawberry, shows one way: by hiding the ball on how the model actually solves problems.”

“The blog post announcing o1-preview last week said the model uses an “internal chain of thought” to break down problems into simpler steps before solving them. As we have previously shown, developers have used “chain-of-thought prompting” to get existing large language models, including OpenAI’s GPT-4, to perform better on complex or multistep queries.”

“The new reasoning model does this on its own, but without showing its work to the customer. Instead, the o1 model shows a “model-generated summary of the chain of thought,” which implies that its thoughts are rewritten by a different model altogether before the customer sees them.”
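For context, the ‘manual’ chain-of-thought prompting the article refers to is purely prompt-level: the developer asks a standard model to spell out its intermediate steps before answering. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording and question are my own illustrative assumptions, not examples from OpenAI or The Information.

```python
# Minimal sketch of "manual" chain-of-thought prompting against a standard
# chat model. Model name, prompt wording and question are illustrative
# assumptions, not an example from the article.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "A train leaves at 3:40pm and the trip takes 2h 35m. When does it arrive?"

response = client.chat.completions.create(
    model="gpt-4",  # assumption: any standard chat model works here
    messages=[
        {"role": "system", "content": "Reason step by step, then state the final answer."},
        {"role": "user", "content": question},
    ],
)

# With manual chain-of-thought prompting, the intermediate steps come back
# in the visible completion text -- unlike o1, where the raw chain of thought
# is kept internal and only a model-generated summary is surfaced.
print(response.choices[0].message.content)
```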

But HOW ‘o1’ does its ‘chain-of-thought’ algorithmic loops is not available for external analysis:

“OpenAI said it decided to keep the raw chain of thought hidden primarily because that would allow its employees—and only its employees—to “read the mind” of the model to understand how it operates. OpenAI said it doesn’t want the model’s unfiltered thinking to be shown because it might contain unsafe thoughts and that the company wants to “monitor” the model to make sure it’s not being treacherous, such as by “manipulating” the customer.”

“But OpenAI didn’t hide the fact that another factor in its decision was a “competitive advantage.” That’s understandable.”

And that, of course, is leading to industry speculation about OpenAI’s motives for this lack of transparency:

“Some developers have said they are annoyed by the hidden chain of thought because they could get billed for something they can’t see. OpenAI charges developers based on how many tokens—words or parts of words—its models process and spit out in the form of answers.”
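To make the billing point concrete, here is a rough sketch of how a developer might check how many tokens were consumed by the hidden reasoning. The usage field names (completion_tokens_details, reasoning_tokens) reflect my reading of OpenAI’s o1 launch documentation and should be treated as assumptions rather than anything confirmed in the article.

```python
# Rough sketch: inspecting how many tokens were billed for reasoning the
# developer never sees. The nested usage fields (completion_tokens_details,
# reasoning_tokens) are assumptions based on OpenAI's o1 launch docs.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
)

usage = response.usage
details = getattr(usage, "completion_tokens_details", None)
reasoning_tokens = getattr(details, "reasoning_tokens", 0) if details else 0

print(f"prompt tokens:     {usage.prompt_tokens}")
print(f"completion tokens: {usage.completion_tokens}")
# Hidden reasoning is counted inside completion_tokens and billed as output,
# even though the raw chain of thought is never returned to the caller.
print(f"  of which hidden reasoning: {reasoning_tokens}")
```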

“Still, reviews of o1-preview continue to be largely positive among developers who post about it on X. “

The industry has already been moving towards more proprietary approaches to its AI technologies, shifting away from the open, research-driven sharing of innovations via an accelerating flow of papers, and towards putting things behind corporate proprietary IP walls.

That openness had also made it easier for smaller AI companies to take advantage of the bigger companies’ work, building competing improvements by ‘drafting in their wake’, as it were.

As The Information noted earlier this year in “Generative AI’s Open Secret: Everyone Is Copying Everyone Else”:

“It’s the worst-kept secret in artificial intelligence.”

“Many of the AI chatbots developed by startups were likely made using data from OpenAI and other firms, even though these startups are trying to undercut OpenAI, according to developers and founders. This practice has resulted in a startling competitive dynamic: Developers are charging their customers a fraction of what GPT-4 costs, and yet these low-cost services can mimic GPT-4 on some tasks.”

On the other hand, more conservative sharing of AI product and innovation improvements also helps against big tech competitors, as The Information notes in the earlier piece above:

“The good reception to o1-preview ups the ante for rivals such as Google. The search company has already had a tough time winning over customers for its Gemini LLMs because of how alarmingly difficult Google makes it to use them, as my colleague Erin reported on Monday.”

“Thanks to OpenAI, the mountain Google must climb to attract business customers just got taller.”

And Google, as I noted yesterday, is already facing headwinds in making its Gemini AI models as easy for developers to use as OpenAI’s and its peers’.

So the move by AI companies large and small towards shielding their AI innovations in this AI Tech Wave is likely here to stay. It’s the other ‘open to close’ development as AI moves towards broader commercialization.

Less copying of others’ AI homework. And generally to be expected. Stay tuned.

(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)




