By Krystal Hu
Oct 8 (Reuters) - (Artificial Intelligencer is published as a newsletter every Wednesday. You can subscribe here.)
On a blue-sky day in San Francisco this week, OpenAI's annual Dev Day showed off a swag room stacked with branded socks, sweatshirts, and hats; pineapple froyo served in croissant cones; and a full house buzzing for what would come next.
The message from the stage was clear: the ChatGPT maker is full-speed ahead on enterprise deals. OpenAI rolled out partnerships with software makers, plus new tools that let third-party apps plug directly into ChatGPT so users can ask questions or trigger tasks without leaving the app.
One of the most anticipated moments—Jony Ive and Sam Altman discussing a new AI device—landed softly, constrained by secrecy. Beyond the broad promise to build something that "makes you happy," details were sparse.
Copyright holders are happier than they were last week, when Sora launched: in a reversal of its initial "opt out" posture, OpenAI said it will add controls over how characters are used in its video model and share revenue with rights holders who opt in.
And yet, the day's news was overshadowed by OpenAI's own headlines hours earlier: a new $100 billion chip order that could see it take a 10% stake in AMD. We unpack the creativity and risk in those financing deals below. In Chart of the Week, we explain why Reddit may be showing up less in AI chatbot answers. Scroll on.
OUR LATEST REPORTING IN TECH & AI
AI venture funding continued to surge in third quarter, data shows
SoftBank to buy ABB's robot business for $5.4 billion in push to merge AI and robotics
Ex-Google scientist won Nobel prize for revealing quantum physics in action
Anduril-Palantir battlefield communication system 'very high risk,' army memo says
Exclusive: Yahoo nears deal to sell AOL to Italy's Bending Spoons for $1.4 billion
AI startup valuations raise bubble fears as funding surges
AI’S OPTION BETS
In the AI gold rush, the biggest companies aren't just buying chips—they're buying options on the future. The recent wave of complex financing deals between chipmakers and AI developers showed us how.
Nvidia's up-to-$100 billion investment in OpenAI locks in a marquee customer: OpenAI can buy chips directly from Nvidia to build its own cloud infrastructure, with the invested funds flowing back into purchases of Nvidia hardware, while Nvidia takes a non-controlling stake in return. AMD, for its part, will supply six gigawatts of AI chips to OpenAI over the next few years and give OpenAI discounted options to buy up to 10% of AMD's stock by 2030.
The fine print? The money and options come in phases, pegged to milestones. Nvidia's commitment is tied to the timing of each gigawatt of capacity that comes online. The option exercise rights offered by AMD are contingent on OpenAI's chip orders, and on AMD's own stock price continuing to climb.
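The milestone-gated structure described above can be sketched in a few lines of Python. Everything here is illustrative: the tranche sizes, gigawatt thresholds, and price targets are invented, and `Tranche` and `vested_shares` are hypothetical names, not anything from the actual agreement. The sketch only shows the general mechanism: option tranches that vest when a deployment milestone is hit and the supplier's stock clears a price threshold.

```python
from dataclasses import dataclass

@dataclass
class Tranche:
    shares: int                 # hypothetical option shares in this tranche
    gigawatts_required: float   # cumulative capacity that must be deployed
    price_target: float         # stock price the supplier must reach

def vested_shares(tranches, gigawatts_deployed, stock_price):
    """Total option shares vested given deployment progress and stock price."""
    return sum(
        t.shares
        for t in tranches
        if gigawatts_deployed >= t.gigawatts_required
        and stock_price >= t.price_target
    )

# Hypothetical schedule: four tranches tied to cumulative gigawatts and
# ascending price targets (all numbers invented for illustration).
schedule = [
    Tranche(shares=40_000_000, gigawatts_required=1.0, price_target=200.0),
    Tranche(shares=40_000_000, gigawatts_required=2.5, price_target=300.0),
    Tranche(shares=40_000_000, gigawatts_required=4.0, price_target=450.0),
    Tranche(shares=40_000_000, gigawatts_required=6.0, price_target=600.0),
]

# With 2.5 GW deployed and the stock at $310, only the first two tranches
# vest; the buyer holds options on 80 million shares, not the full 160 million.
print(vested_shares(schedule, gigawatts_deployed=2.5, stock_price=310.0))
```

The point of the structure is visible in the conjunction: neither deployment progress alone nor a rising stock price alone unlocks a tranche, which is what ties the supplier's upside to the customer's actual orders.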
The earliest these billion-dollar deals will materialize is next year. Both companies said the first phases of their projects are slated to begin in the second half of 2026, and chipmakers recognize revenue only once shipments begin.
These financial instruments, designed to secure capacity and align incentives at scale, are emerging at a moment when betting on the future (not just the direction of AI technology, but the precise timing of demand and supply) has become a delicate dance.
For OpenAI, the deals address a seemingly insatiable demand for compute. AI infrastructure requires huge upfront commitments, and by embedding equity options and performance milestones, OpenAI can lock in future supply without paying the full cost up front. Nvidia, meanwhile, locks in one of its biggest customers, and AMD benefits from an anchor client's validation.
For suppliers, the deals provide headline demand to justify staggering capex. Each company also enjoyed a sharp market pop after announcing OpenAI deals—AMD's stock jumped roughly 30%, adding more than $70 billion in market cap.
For investors, these agreements are riskier. They offer the appearance of certainty: pre-sold capacity, strategic partnerships, aligned incentives. Beneath that, the economics rest on an assumption that has undone many booms: that demand will materialize exactly in step with supply.
The AI buildout requires synchronized investment across data centers, power, cooling, and software readiness. If any link slips (permitting delays, power shortfalls, slower enterprise adoption), carefully pre-financed growth can turn into stranded capacity. These structures are unforgiving when timing goes off-script. Moreover, OpenAI must finance additional buildout with capital not yet on its balance sheet. Executives have discussed adding more debt and crafting more creative financial instruments.
That's why option-heavy financing often proliferates late in investment cycles. Early on, companies scale with standard equity. As competition intensifies and antitrust pressure eases, firms reach for tools that "reserve the future": supply reservations, call options on critical assets, and milestone-based triggers that create the perception of control.
Sound familiar? In the late 1990s, telecoms overbuilt fiber using capacity swaps and vendor financing, convinced traffic growth would arrive on schedule. Eventually, it did, just not fast enough or in the right places, leaving billions of dollars in write-downs.
Locking in massive future supply raises the stakes. If timing aligns, these will look like masterstrokes: capacity pre-positioned ahead of the curve, incentives aligned across the ecosystem. If it doesn't, they'll look like classic overbuild: huge commitments made against demand that arrived too slowly.
That's the signal in this burst of option-based financing. The biggest players are trying to buy safety while making precise bets on timing. And timing is the hardest thing to control in late-cycle markets. The last mile of any boom often looks the most sophisticated, and the most fragile.
CHART OF THE WEEK
Did you notice your AI chatbot's answers changed a bit?
Reddit certainly did. Similarweb data shows Reddit's share of AI chatbot citations falling off a cliff in mid-September. On September 12, it plunged about 97%, as the share of chatbot answers citing Reddit dropped from an average near 7% to roughly 0.3%.
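A quick sanity check on the cited figures: using the rounded averages above, the relative decline works out to about 96%, consistent with the reported ~97% drop once you account for rounding in both inputs.

```python
# Figures as cited in the Similarweb data above (both are rounded).
before = 7.0   # % of chatbot answers citing Reddit, average before mid-September
after = 0.3    # % after the change

# Relative decline: (7.0 - 0.3) / 7.0
relative_drop = (before - after) / before
print(f"{relative_drop:.1%}")  # prints "95.7%"
```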
Analysts attribute the drastic change to Google curtailing a setting that used to let bots pull the top 100 search results, now capped at 10. Chatbots like ChatGPT, Perplexity, and Claude had been using commercial feeds built on Google's broader results to find answers beyond page one and surface Reddit threads, which often sit around positions 15–40. When Google shut that off in early September, those deeper Reddit links largely disappeared from what the bots could see. It's a reminder of how much power Google's indexing and policies wield over what third-party AI systems can access, and, by extension, what we see in their answers.
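The mechanics are simple enough to show in a few lines. The ranking positions below are invented for illustration, and `visible_reddit_hits` is a hypothetical helper, not any real search API; the sketch only shows why results sitting at positions 15–40 vanish entirely when the retrievable window shrinks from 100 results to 10.

```python
def visible_reddit_hits(reddit_positions, max_results):
    """Reddit result positions still visible within the first max_results."""
    return [p for p in reddit_positions if p <= max_results]

# Hypothetical ranking positions of Reddit threads for some query,
# all sitting beyond page one (positions 11+).
reddit_positions = [16, 22, 31, 38]

print(visible_reddit_hits(reddit_positions, max_results=100))  # all four visible
print(visible_reddit_hits(reddit_positions, max_results=10))   # none visible
```

With a 100-result window the feed sees every thread; with a 10-result window it sees none of them, which is the cliff in the chart.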