Can AI hype fuel an IPO resurgence?

Reddit CEO Steve Huffman and New York Stock Exchange (NYSE) president Lynn Martin ring the opening bell. (TIMOTHY A. CLARY/AFP via Getty Images)

Hello and welcome to Eye on AI.

Reddit today made its explosive debut on the New York Stock Exchange, an IPO largely driven by the surge of interest in data for training AI models. As of this writing, the stock had popped by as much as 60% on its first day of trading. 

The highly anticipated listing comes just a day after AI chipmaker Astera Labs saw its shares jump 72% on its first day of trading (and another 20% in midday trading on Thursday), making it the fourth-largest IPO in the U.S. this year and a sign that the appetite for AI stocks is real.

“Most semiconductor startups face a valley of death when capital investments haven’t yet paid off in commercial revenue, but AI adoption has allowed Astera Labs to accelerate out of this phase to reach mainstream usage and high revenue growth,” Brendan Burke, senior analyst of emerging technology at PitchBook, told Eye on AI, adding that the company’s recent AI momentum has “pushed it to go public ahead of a typical schedule.” 

It’s a lot of AI IPO buzz, but it’s also becoming clear that AI isn’t purely a boon for the IPO landscape, at least not yet. The wave of enthusiasm for AI is driving IPO momentum in some cases, but it’s also delaying expected IPOs in others. And for the companies hurtling toward IPOs on the back of AI, trading might not be easy while lingering legal questions around AI remain unresolved.

For example, Reddit’s big selling point to investors hinges almost entirely on AI providers paying big bucks to train their models on its user content. Reddit last month said it had already secured $203 million in licensing deals with AI companies, including a deal with Google worth about $60 million per year.

“Reddit’s game plan for AI is one good reason why it’s pricing its shares closer to Meta than Snap,” Alex Wilhelm wrote recently in TechCrunch.

This is a murky basket to be putting your eggs in, however. Issues surrounding AI training data and copyright remain under intense legal scrutiny and are the subject of a growing number of lawsuits. Earlier this week, the golden opportunity that is Reddit’s AI content deals potentially landed the company in hot water with regulators: just days before the IPO, the FTC sent Reddit a letter with questions about its sale of user-generated content to train AI models. Meanwhile, Google just became the first company to be fined over AI training data (more on that below).

Then there’s a case like Databricks. The company, a major platform for data analytics and machine learning, has helped prop up the AI boom and was expected to soon make its public debut in another highly anticipated IPO. Instead, it’s leaning on a recent influx of cash from record AI-driven sales to stay private longer.

“AI momentum enabled the company to increase its valuation by 12% in its recent Series I round. That valuation uplift will enable the company to remain private and may enable further private funding as the company continues its high growth period,” said Burke.

In an interview about why the company is now holding off on going public, Databricks cofounder and CEO Ali Ghodsi told the Wall Street Journal “the markets seem pretty shut.” 

“We’re certainly ready as a company: The way we’re operating, the way we’re doing our audits, the way our financials are, the CFO, the board structure,” Ghodsi said. “So we’ll make a strategic decision whenever that time comes.” 

That was before Astera and Reddit began trading, of course.

With that, here’s more AI news. 

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

AI IN THE NEWS

OpenAI plans to release GPT-5 in mid-2024. That’s according to Business Insider. The company is still training the model, though some enterprise customers have already received demos. "It's really good, like materially better," one CEO who recently saw a version of GPT-5 told Business Insider. 

France fines Google €250 million for training Gemini on news articles. That’s according to Reuters. The country’s competition regulator said Google failed to respect commitments to negotiate deals with press publishers in good faith, including by training Gemini on news publishers’ content without notifying them. As Fortune’s David Meyer noted yesterday in Data Sheet, this makes Google the first AI company to be fined over training data. The penalty also marks an escalation in the French government’s push to get Big Tech to deal with publishers more fairly, and yet another fine for Google, which was fined €500 million in 2021 for abuses in its negotiations with news publishers.

The U.S. grants Intel $8.5 billion to bolster chip manufacturing. That’s according to the Financial Times. The preliminary agreement also includes an $11 billion loan to help the company build new facilities in Arizona, Ohio, New Mexico, and Oregon. The funding comes from the CHIPS Act, which Congress passed in 2022, and is part of the U.S.’s bid to manufacture 20% of the world’s most advanced semiconductors by the end of the decade. The effort is driven both by increasing tensions with China over Taiwan—where all of the most advanced semiconductors are built—and skyrocketing demand for more sophisticated chips for use in AI.

Stability AI releases Stable Video 3D, loses key researchers behind Stable Diffusion. Unlike other recent AI video generators that rely on text prompts, SV3D is built to render 3D videos from 2D images. In a blog post announcing the model, the company explained that it can generate multiview videos of an object and deliver coherent views from any given angle. Overshadowing the release, however, is news that key members of Stability’s AI research team have resigned. According to Forbes, the departures were announced internally at a recent all-hands meeting and include team lead Robin Rombach and others who created Stable Diffusion, the company’s text-to-image model that helped spark the generative AI boom.

OpenAI’s GPT store is filled with bizarre, potentially copyright-infringing GPTs. That’s according to TechCrunch, which conducted a review of the user-built custom models available in the store. The publication found many that seemingly violate OpenAI’s own terms, including GPTs ripped from popular movie, TV, and video game franchises that funnel users to third-party services. Others advertise the ability to bypass AI detection tools. “Perhaps because of the low barrier to entry, the GPT Store has grown rapidly—OpenAI in January said that it had roughly 3 million GPTs. But this growth appears to have come at the expense of quality—as well as adherence to OpenAI’s own terms,” reads the article.

FORTUNE ON AI

Intel CEO: ‘Our goal is to have at least 50% of the world’s advanced semiconductors produced in the U.S. and Europe by the end of the decade’ —Pat Gelsinger (Commentary)

Why Microsoft’s surprise deal with $4 billion startup Inflection is the most important non-acquisition in AI —Kylie Robison

The foundation behind Ozempic maker Novo Nordisk is funding an Nvidia-backed AI supercomputer project —Chris Morris and Prarthana Prakash

AI CALENDAR

April 15-16: Fortune Brainstorm AI London (Register here.)

May 7-11: International Conference on Learning Representations (ICLR) in Vienna

June 5: FedScoop’s FedTalks 2024 in Washington, D.C.

June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore

Aug. 12-14: Ai4 2024 in Las Vegas

EYE ON AI RESEARCH

Scams and spams. In a new preprint paper out of Stanford, researchers set aside issues of disinformation and election lies to home in on a different side of AI-generated images and their presence on Facebook. They found that scammers and spammers are getting high engagement by posting unlabeled AI-generated images on Facebook, and that the platform’s algorithms are recommending this content widely to users who don’t follow the pages doing the posting. Additionally, many users do not seem to recognize that the images are synthetic.

These scammers and spammers are driving audiences to content farms, selling products that don’t seem to actually exist, and apparently manipulating their audiences in various ways, the researchers concluded. These aren’t brand-new practices, but AI image generators appear to be an exceptionally useful tool for such scams, thanks to how cheaply and instantly they can create attention-grabbing images. The researchers analyzed AI-generated images posted by 120 accounts sharing such content and found that users have interacted with them hundreds of millions of times. A post including an AI-generated image was even one of the 20 most-viewed pieces of content on Facebook in Q3 2023, garnering 40 million views. You can read the paper here.

EYE ON AI NUMBERS

72%

That’s the percentage of senior AI decision-makers who said they believe not investing in AI today will put future business viability at risk, according to a new survey report from independent research firm Vanson Bourne and database company Exasol.

The survey of 800 senior decision-makers, data scientists, and analysts across the U.S., U.K., and Germany also found that stakeholder pressure is a significant factor in greater AI adoption: 45% said they are experiencing increased pressure from stakeholders to embrace the technology. The top-cited reasons for the pressure to adopt AI quickly were the belief that it will generate new businesses or sources of revenue (50%), the need to keep up with changing workforce roles and responsibilities (47%), accelerating competitiveness in the market (46%), and the desire to automate processes (43%).

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.