Sam Altman Wants AGI as Fast as Possible, and He Has Powerful Opposition


A lot of people are asking for my thoughts on what happened at OpenAI this weekend.

As I’ll explain below, I believe what happened ultimately came down to two opposing philosophies on AI, and specifically on AGI (an AI capable of fully replacing a reasonably smart human).

On one side you have what people like Balaji call the Accelerators, and on the other side you have what he calls the Decelerators. I have my own problems with Balaji, but the analysis below looks pretty good.

  • Basically, the EA (Effective Altruism) community is trying to do the most good for the most people in the future

  • And the XRisk (existential risk) community is trying to articulate and prevent events that could end humanity or our civilization

Specifically for the AGI conversation, these two groups are aligned on one thing: not destroying humanity by building an AGI so quickly that it outright kills us.

Eliezer Yudkowsky is something of a leader in the AI XRisk community, and here’s what he had to say on Thursday of last week, just to give a taste.

And no, I’m not saying that tweet is what started this. But the connection is strong enough that Eliezer had to come out and tell people that no, he did not in fact order them to fire Sam. That he even had to clear that up tells us a lot.

He went on to say this as things started going down.

What (very likely) happened this weekend

So, what actually happened?

Details are murky, and it’s hard to speak specifically unless you have Hamilton-level knowledge from “the room where it happened”. But after speaking with people close to the issue (yeah, I’m doing that), and having had conversations about this dynamic for months beforehand, this seems to be the situation.

I’m deliberately being broad here so this can hopefully stay accurate even while it’s impossible to know all the details yet. And most of it is pretty easy to check.

  1. There are large and/or powerful EA and XRisk factions at OpenAI

  2. They have been very concerned about how quickly we’re moving towards AGI for months now

  3. They’ve been getting increasingly concerned/vocal over the last 2-3 months

  4. The DevDay announcements, with the release of GPTs and Assistants, crossed a line for them, and they basically said, “We need to do something.”

  5. The OpenAI board used to have more people on it, and those people were on Team Sam. They had to leave the board for unrelated reasons

  6. This left a remaining board that leaned significantly toward the Deceleration camp (I’m being careful here because the details of exactly who, and how much, aren’t clear)

  7. Ilya has always been very cautious about AGI, and adamant that any AGI we build be aligned with humans

  8. He also just recently became the co-leader of the new Superalignment group within OpenAI to help ensure that happens.

  9. The board would eventually, and likely sooner rather than later, be filled out with more people who were Team Sam

  10. Based on all of this, it seems that the current board (as of Friday) decided that they simply had to take drastic action to prevent unaligned AGI from being created

There have been rumors that AGI has already been created, and that Ilya pulled the fire alarm because he knew about it. But based on what I know, this is not true.

Anyway, that is the gist of it.

Basically, there are powerful people at OpenAI who believe that we’re very close to opening Pandora’s box and killing everyone.

They believe this to their core, so they’re willing to do anything to stop it. Hence—Friday.

This is my current working theory—which could still be wrong, mind you.

I’ll be watching Season 4 of Sam Altman along with you all, and I’ll add notes to this if I am wrong or need to make adjustments. But I won’t be changing the text above. I’ll just be appending below.
