By Robert Cyran
NEW YORK, March 16 (Reuters Breakingviews) - Imagine a wildly successful partnership that dissolves once something happens. Except it’s not clear what this event is, whether it’s occurred, or even if it’s possible. That’s the challenge facing OpenAI and Microsoft MSFT.O as they contemplate the possible arrival of artificial general intelligence. The opacity of their agreement, and the huge sums of money at stake, make it almost inevitable that a theoretical debate will turn into a contractual dispute.
The concept known as AGI is the point where autonomous systems can match or surpass humans at intellectual tasks. In Silicon Valley this is both an aspirational goal and an existential threat. Creating a self-learning system could, theoretically, result in outcomes ranging from a golden age for humanity to extinction.
OpenAI was founded in 2015 with this explicit objective. Sam Altman, Ilya Sutskever, Elon Musk and others set out to create AGI, which they defined as “highly autonomous systems that outperform humans at most economically valuable work, for the benefit of all humanity”. In line with the idea that the love of money is the root of all evil, OpenAI was founded as a non-profit.
Ego and goodwill were not sufficient, though: the AI race also required huge financial resources. So the company established a for-profit arm in 2019, and a partnership with Microsoft. The software giant invested $1 billion and provided computing infrastructure in exchange for access to OpenAI’s technology. The startup insisted that, once AGI was achieved, exclusive rights to the technology would revert to it.
This was probably an easy concession to make at the time. Yet as AI systems have rapidly grown more capable, defining AGI has become a less abstract question. On measures such as visual reasoning, English-language understanding, or competition-level math, advanced systems have surpassed average human benchmarks, according to the Stanford Institute for Human-Centered AI.
Here’s where the conceptual debate turns into a contractual one. Like Supreme Court Justice Potter Stewart’s subjective explanation of obscenity — “I know it when I see it” — AGI was poorly defined. OpenAI’s clause was neither concrete nor easily observable, as researchers at Google pointed out in a 2023 paper which tried to lay out a framework for such declarations.
Take the provision that the milestone would be reached when systems could outperform humans at most economically valuable work. What counts as “most”? Can you measure this without deployment in the real world, or if use is slowed for legal or ethical reasons? Finally, the economic value of many jobs is hard to define.
Meanwhile, training and deploying AI models devours vast amounts of cash. Microsoft, Alphabet GOOGL.O, Meta Platforms META.O and Oracle ORCL.N have sharply increased capital expenditure in response to these needs and plan to spend over $700 billion this year. As a startup without existing revenue sources, OpenAI needed support.
Microsoft ended up pumping in a total of $13 billion. That wasn’t enough for OpenAI, though, and the increasingly complex agreement between the two companies, several parts of which depended on the AGI clause, left both sides chafing.
Other industries have learned that vague agreements and large sums of money don’t mix. Take catastrophe bonds, which help companies lay off the financial risk from hurricanes, floods or pandemics. These events are rare, but the $1.4 trillion bill from a storm hitting a big U.S. city like Miami might make the whole insurance industry teeter. So companies bring in outsiders willing to take on gigantic but low-probability risks in exchange for payments. Initially opaque agreements led to bitter disputes, such as the five-year legal fight between a reinsurer and bondholders over what exactly was covered following a 2008 hurricane.
Catastrophe bonds have since moved towards more concrete and easily observable triggers, lowering uncertainty and avoiding disputes. Think of measuring wind speeds at a specific location, say, rather than overall insurance industry losses or whether a government declares a state of emergency.
In biotech mergers, companies use contingent payments to bridge the gap between buyer and seller when valuing experimental drugs. These usually incorporate non-debatable triggers, like whether the U.S. Food and Drug Administration has approved the treatment by a certain date.
OpenAI and Microsoft have since reached a form of détente, probably because extended negotiations might end up with both sides losing out. A new agreement, signed late last year, says an independent panel must verify any OpenAI claim that AGI has been achieved.
Microsoft keeps a 27% stake in the company, most recently valued at $840 billion, and has the right to some technology until 2032, even if AGI is achieved. Sam Altman’s company can now pursue partnerships or an initial public offering. There is still reason for tension, though. OpenAI sends about 20% of its revenue to Microsoft, and that essentially goes away if AGI occurs. Losing that obligation would make a big difference to OpenAI’s finances as it prepares to sell stock to the public.
The argument over how to define AGI hasn’t really been settled. Sam Altman said in a podcast late last year that “we built AGI”, even though people didn’t really notice and it didn’t change the world much. Microsoft CEO Satya Nadella appears to focus more on large effects, claiming last year that declaring AGI would be “senseless benchmark hacking”, and that the real benchmark of success would be the world economy growing at an improbable 10%.
The debate has also moved on. Even AI evangelists like Altman admit AI still does some things poorly, such as complex reasoning and the ability to learn. He now touts the idea of superintelligence and AI systems doing things humans cannot do, or systems doing jobs, like being U.S. president, better than a human. That’s still a woolly definition, though.
Altman is still chasing the milestone. OpenAI’s recent $110 billion fundraising included a big contribution from Amazon AMZN.O which could hinge on whether the company goes public or achieves AGI, The Information reported.
In an interview with CNBC following the agreement, Altman said AI progress remains rapid, and since AGI is a “near-term thing”, the company isn’t doing new deals that stop when it’s reached. That timeline may be optimistic, but if so, the dispute still has time, and incentive, to flare up again.
Follow Robert Cyran on Bluesky.