OpenAI, the company behind ChatGPT, has had a lot of drama in the past year. There was the sudden firing and almost immediate rehiring of its CEO. Sam Altman is back as the chief executive of OpenAI. After that, a bunch of OpenAI's top scientists quit.
These are just the latest in a string of departures leading many to wonder just what is going on over there.
The company also very publicly found itself in hot water for training its chatbot on copyrighted material. Hollywood megastar Scarlett Johansson is taking on one of the biggest names in tech: OpenAI co-founder Sam Altman. Then last week, a couple of other bombs dropped. The company's chief technology officer announced she was stepping down, just as news broke that OpenAI was remaking itself from a nonprofit into a for-profit corporation. It's a seismic shift for a company that, when it was founded, pledged to develop artificial intelligence for the public interest. The Wall Street Journal's owner, News Corp, has a content licensing partnership with OpenAI. Our colleague Deepa Seetharaman has reported on OpenAI's entire saga. What has it been like covering OpenAI?
Dynamic. It's a great word. Yes, it's been really fast-moving. There'll be lulls, and then all of a sudden, a bunch of stuff will happen all on top of each other, which basically underscores the fact that this is a time of chaos in this company's life. I mean, you have this company that is going through a tremendous change, and so you're seeing, in real time, a company tearing itself apart.
Now, as OpenAI tries to piece itself back together, its decisions could change the business of artificial intelligence forever. Welcome to The Journal, our show about money, business, and power. I'm Jessica Mendoza. It's Tuesday, October first. Coming up on the show, what exactly is happening at OpenAI?
Things at OpenAI have been coming to a head since last November, when the nonprofit board that governs the company abruptly fired CEO and co-founder Sam Altman.
Sam, at that point, at least from the external perspective, really seemed to be on top of the world: incredibly central to the broader AI movement, somebody who just went around the world talking to world leaders and policymakers and visiting the White House, and just generally looking like the face of this revolution. But then, he's in Vegas. He is asked to join a call. He sees his co-founder, Ilya Sutskever, and Ilya tells him, "Hey, Sam, you're fired," basically.
The move sent shockwaves through the company. Investors and a huge share of the employees rallied around Altman. Just five days later, Altman was rehired as CEO. The dramatic reversal highlighted tensions that had been brewing at the company almost from its founding. When OpenAI was started in 2015, one of its main priorities was safety: making sure that the potentially powerful technology its scientists were developing would never spin out of control.
The point of the company was to be a research lab that would be uniquely motivated, because it was a nonprofit, to create powerful and valuable AI without the motivation or the corrupting influence of having to turn a profit. Companies have a set of incentives where they have to make money and generate a profit and continuously grow. This wasn't supposed to be that. It was supposed to be an initiative to really understand and develop the world's most powerful AI systems without those incentives.
Being a nonprofit was part of its DNA. That was the point, really.
That was the point. But here's the thing. AI, developing AI, costs a lot of money. Surprise.
A lot of money.
It's not chump change. I mean, you need billions of dollars in some cases to develop these AI models, and you need to hire a lot of really incredible people, who work at places like Google and Facebook and elsewhere, who are some of the brightest scientific minds.
Trying to bankroll its lead in the AI race while also staying true to its founding mission became a constant challenge for OpenAI. A few years in, the organization's lead funder left after a power struggle over how OpenAI was being run. And all his money left with him. To try to solve its funding crunch, OpenAI decided to court new investors, so it created a for-profit arm within its nonprofit org structure. I feel like I hear about for-profit companies that have nonprofit arms, but the opposite is a little bit less common?
Yeah, definitely. I mean, this is the inversion of that. Then you have this company that is inside of a nonprofit. Initially, it, too, is telling investors, "Our goal isn't to maximize profits. Our goal is to do this research, and you are at risk of potentially losing your entire investment." That is the message. Then you start to take more investments from other investors, a lot of venture capital firms. Over time, they start to take on what winds up being billions of dollars from Microsoft. As this is all happening, there are questions being asked: should we be taking this money? Does it make sense?
Then in 2022, OpenAI released ChatGPT. It was an instant sensation. But that meteoric rise didn't ease the tensions at OpenAI. It made them worse. While the company was publicly hailed for its groundbreaking product, some inside OpenAI worried that the chatbot's success would encourage the organization to release new products even more quickly.
There are some employees that have some misgivings about... There are a lot of different bad things that can happen when you have these kinds of systems that speak like humans and can be very convincing, if they don't have sufficient guardrails. The product is really compelling, and the company just keeps improving it and stress-testing it and trying to get better and better. But for some of the employees and some of the researchers, there's a little bit of a feeling of, "Okay, well, maybe we should have done this before." But at the time, it's really explained away as, "Well, no one knew it would get this big. It just went unpredictably viral."
The concerns over OpenAI's guardrails are part of what led the nonprofit board to fire Altman. But the investors who'd been putting money into the company played a big part in bringing him back.
This was a very short-lived firing, I think, end to end. We're talking about five days, so much so that the company took to calling this period of the company's life "the blip," as in, "Oh, it's just something that happened. Now it's back to normal." But nothing went back to normal after that.
That's next. Sam Altman's return to the CEO's office was a tipping point for OpenAI. Once he was back, investors started claiming more power. One of their first moves was to kick off the board members who had voted to fire Altman.
The big lesson among the investors is that OpenAI needs to start looking more predictable, to have a board made up not of nonprofit people, but of people who have run companies before or are big parts of companies, who understand how business works and how tech works. I mean, that's where a lot of OpenAI's investors are: saying very publicly that this needs to be a more predictable entity, because then they'll feel more in control, more like they understand this company, and that a group of a few people can't just overthrow the CEO without warning again.
Soon, OpenAI started hiring new board members to replace the ones who'd been removed.
They're adding a ton of board members this year to try to make it look like a more serious board. They're all very corporate, very tech-background, just what you'd expect of a Silicon Valley-style board member. They've got people from government, former government officials; a board that people can really count on not to make rash, sudden decisions, like firing the CEO. The idea being mainly that if they ever wanted to fire Sam again, there would be a process in place and notice, and there would be a lot of predictability with any decision. This also starts a series of conversations internally about what it would look like if OpenAI shed the nonprofit altogether.
How did employees react to that idea?
What you see is that, over time, some people that are part of the old culture, the culture that thinks about AI safety, about the unintended consequences of AI, about things like existential risk, increasingly feel squeezed out of the company. Then you have more people coming in and getting hired who are involved in product, who know how to sell things and understand what people want. In the spring and through the first half of the year, OpenAI hires its first CFO, or Chief Financial Officer. It hires its first Chief Product Officer. Both are signs that it's trying to be a company that is building products, trying to be more appealing to the general consumer and make money that way. All the while, though, you've got concerns that this push into products, into building what one executive would later call "shiny products," is the thing that's going to distract the company from its original mission of, one, building artificial intelligence, and two, building it in a way that is safe and secure and can provide prosperity broadly for the world. That now it's really distracted by this desire to make money and a desire to be relevant.
Among the people who felt like they were being squeezed out were some of OpenAI's cofounders, whose vision shaped the company's original mission.
One of the first big departures is Ilya Sutskever, this highly respected chief scientist whom OpenAI researchers really admired inside the company, and he leaves in May. His departure really marks the end of an era where the company felt like it was part of this great scientific vanguard, right? Now this guy who was so at the forefront of that is leaving. I mean, there are plenty of scientists left at the company. I don't want to make it sound like they've all gone, that they've all left. But this was a really big one. He was the initial draw for so many of those scientists early on.
Sutskever's exit set off a ripple effect. Immediately after he left, another key researcher who also worked on safety, a man named Jan Leike, resigned. A few months later, John Schulman, another co-founder and a widely respected researcher, left as well.
You've got, really within a few months, just this boom, boom, boom: three co-founders step away.
These are key safety people at the company. What did that mean for safety as a priority at OpenAI?
I think there is a feeling among some factions of the company that OpenAI was increasingly taking its eye off of safety and putting it more on developing products that people would want to use, which is exactly what the company was trying to avoid when it was founded. There's a feeling of, "Oh, we've actually done a full U-turn, and we are no longer what we were." Then there are other factions at the company that say, "It's not about that... We're not doing less on safety. We're just doing more on product."
Last week, OpenAI took another big hit when its Chief Technology Officer, Mira Murati, resigned. Altman found out just a few hours before the public did. She was an important leader and an integral part of OpenAI's operations and strategy.
She's the CTO of one of the most important technical companies in the world. She was in charge of creating processes and building products and building out teams. She's a central cog in the system. She really made that company function. The fact that she's leaving, it takes this institutional knowledge out of the company, and it's not an insignificant amount of institutional knowledge. This is a person that helped resolve conflicts and helped teams get their products out the door.
In a statement announcing her resignation, Murati said she wanted to, quote, "create the time and space to do my own exploration." Her departure, coming after so many key founders and executives left, is a major blow to OpenAI. When Murati resigned, Sam Altman was away, speaking at an Italian tech conference. On stage, he was asked about the upheaval back in Silicon Valley, and he said he hoped that this will, quote, "be a great transition for everyone involved." A spokesperson for OpenAI said, quote, "We remain focused on building AI that benefits everyone," adding that, quote, "The nonprofit is core to our mission and will continue to exist." This week, Altman is expected to close a fundraising round of about $6.5 billion, and he's getting some marquee names: NVIDIA, SoftBank, and a new round of funds from Microsoft. If Altman is successful, OpenAI is expected to hit a valuation of $150 billion.
He's been the pitchman. He's been talking to people about what that could look like. In the process of those discussions, he's also committed to saying that OpenAI is going to be for-profit. It commits to doing that within a two-year time frame, at which point investors who aren't satisfied can ask for their money back. It's a real risk, and it puts a lot of pressure on the company to make this shift rather quickly. Then there's another really big shift that happens, which is, up until now, Sam Altman hasn't had any equity in OpenAI, and that was a selling point. He's been telling politicians and lawmakers all over the world that the fact that he doesn't have a stake in OpenAI means that he's more neutral, and that he can slow down development if it seems like it's going too quickly. It's a sign that he's taken a step back so that he isn't corrupted by money. Now, he is likely to take some equity stake in OpenAI.
So is OpenAI's core identity fundamentally different now? Is it a different company?
I mean, I don't think you can argue that it hasn't fundamentally changed. It's just an argument of whether or not those changes add up to a good thing. And there's a lot of disagreement about that, but everyone can see that it's just so far from where it started originally. It's just changed so fundamentally. It's just not at all what it used to be.
On a higher level, what does this shift at OpenAI mean? Will it affect the development of artificial intelligence as a sector?
It's about incentives. Now, there are concerns that OpenAI has a different set of incentives, ones that may inspire the company to move even more quickly, even more aggressively, even more ambitiously in deploying its technology. The concern is that less and less emphasis lands on the safety part that was so critical in the early stages.
That could have a domino effect on other companies as well.
Right. Exactly. Because OpenAI is in the center of the spotlight, and it is very influential among all the tech companies, including companies like Google. Every time it makes a move, it does send a signal to the rest of the industry about what the norms might be.
That's all for today, Tuesday, October first. The Journal is a co-production of Spotify and The Wall Street Journal. Additional reporting in this episode by Tom Dotan and Berber Jin. Thanks for listening. See you tomorrow.
In less than two years, OpenAI—the company behind ChatGPT—has gone from a little-known nonprofit lab to a world-famous organization at the forefront of an artificial intelligence revolution. But the company has faced a series of challenges, culminating last week in another high-profile departure and the decision to become a for-profit corporation. WSJ’s Deepa Seetharaman discusses the permanent change to OpenAI's ethos and what it could mean for the AI industry.
Further Listening:
- Artificial: The OpenAI Story
- Artificial: Episode 1, The Dream
Further Reading:
- Turning OpenAI Into a Real Business Is Tearing It Apart
- OpenAI’s Complex Path to Becoming a For-Profit Company