Hacker News

Man, the board already looked reckless and incompetent, but this solidifies the appearance. You can do crazy ill-advised things, but if you unwaveringly commit, we’ll always wonder if you’re secretly a genius. But when you immediately backtrack, we’ll know you were a fool all along.


Dude, everyone already thinks the board did a crazy ill-advised thing. They're about to be the board of like a 5-person company if they double down and commit.

To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.


Bad take. Not "everyone" feels that what they did was wrong. We don't have insight into what's going on internally. Optics matter; the division over their decision means that it's definitionally non-obvious what the correct path forward is, or that there isn't one correct path but multiple reasonable ones. To admit a mistake of this magnitude is to admit that you're either so unprincipled that your mind can be changed on a whim, or that you didn't think through the decision enough beforehand. These are absolutely signs of weakness in leadership.


Whether or not you agree with the decision, they obviously screwed up the execution something awful. This is humiliating for them, and honestly, setting Altman free like they did was probably the permanent end of AI safety. Take someone with all the connections and the ability to raise billions of dollars overnight, and set him free without any of the shackles of the AI ethics people, in a way that makes all the people with money want to support him? That's how you get Skynet.


I tend to think: we, the armchair commentators, do not know what happened internally. I don't know enough to know that the board's execution wasn't the best-case scenario for achieving their goal of aligning the entire organization with the non-profit's mission. All I feel comfortable saying with certainty is that it's messy. Anything like this would inevitably be messy.


Right, and that's what I'm saying. It's messy. They screwed up. Messy is bad. If they needed to get rid of him at the last minute and make a statement 30 minutes before market close, then the failure happened earlier.


> These are absolutely signs of weakness in leadership.

The signs of "weakness in leadership" by the board already happened. There is no turning back from that. The only decision is how much continued fuck-uppery they want to continue with.

Like others have said, regardless of what the "right" direction for OpenAI is, the board executed this so spectacularly poorly that even if you believe everything that has been reported about their intentions (i.e. that Altman was more concerned with commercializing and productizing AI, while Sutskever was worried about developing AI responsibly with more safeguards), all they've done is fuck over OpenAI.

I mean, given the reports about who has already resigned (not just Altman and Brockman but also many other folks in top engineering leadership), it's pretty clear that plenty of other people would follow Altman to whatever AI venture he wants to build. If another competitor leapfrogs OpenAI, their concerns about "moving too fast" will be irrelevant.


> Bad take. Not "everyone" feels that what they did was wrong.

But everyone important does, so who cares about the rest?


You mean the “the rest” as in the people who execute on the company vision?

It’s really dismissive toward the rank and file to think that they don’t matter at all.


> It’s really dismissive toward the rank and file to think that they don’t matter at all.

I had the exact opposite take. If I were rank and file, I'd be totally pissed about how this all went down, and about the fact that there are really only 2 possible outcomes:

1. Altman and Brockman announce another company (which has kind of already happened), so basically every "rank and file" person is going to have to decide which "War of the Roses" team they want to be on.

2. Altman comes back to OpenAI, which in any case will result in tons of turmoil and distraction (obviously already has), when most rank-and-file people just want to do their jobs.


a) The company vision up until this point included commercial products.

b) Altman personally hired many of the rank and file.

c) OpenAI doesn't exist without customers, investors, or partners. And in this one move the board has alienated all three.


I seriously doubt customers or (most) partners care about this. I have yet to hear of a single customer or partner leaving the service, and I do not believe it to be likely. Simply put, unless they shut down their offerings on Monday, they will keep their customers.

Investors care, but if new management can keep the gravy track, they ultimately won’t care either.

Companies pivot all the time. Who is to say the new vision isn’t favored by the majority of the company?


The fact that this happened so soon after Developer Day is a clear signal that the board wasn't happy with that direction.

Which is why every developer/partner including Microsoft is going to be watching this situation unfold with trepidation.

And I don't know how you can "keep the gravy track" when you want the company to move away from commercialisation.


> I have yet to hear of a single customer or partner leave the service

Which doesn't mean a lot. Of course they'd wait for this to play out before committing to anything.

> but if new management can keep the gravy track

I got the vague impression that this whole thing was partially about stopping the gravy train? In any case Microsoft won't be too happy about being entirely blindsided (if that was the case) and probably won't really trust the new management.


The new management has declared that their primary goal in all this was to stop the gravy track.


I don’t think there has been a formal announcement on the new direction yet


Satya is “furious.” What’s reasonable about pissing off a guy who can pull the plug? I don’t think it’s definitionally non-obvious whether to take that risk.


Last I checked he only had 49% of the company.

I also feel that they can patch relationships. Satya may be upset now, but will he continue to be upset on Monday?

It needs to play out more before we know, I think. They need to pitch their plan to outside stakeholders now.


Which other company will give them the infra/compute they need when 49% of the profitable part has been eaten up?


And how will they survive if Microsoft/SamAi ends up building a competitor?

Microsoft could run the entire business at a loss just to attract developers to Azure.


That assumes Altman's competitor can outpace and outclass OpenAI, and maybe it can. I know Anthropic came about from earlier disagreements, and that certainly didn't slow OpenAI's innovation pace.

Everything just assumes that without Sam they’re worse off.

But what if, my gosh, they aren’t? What if innovation accelerates?

My point is that it's useless to speculate that a new Altman business competing with OpenAI will inherently be successful. There's more to it than that.


> Everything just assumes that without Sam they’re worse off.
>
> But what if, my gosh, they aren’t? What if innovation accelerates?

It reads like they ousted him because they wanted to slow the pace down, so by design and intent it would seem unlikely innovation would accelerate. Which seems doubly bad if they effectively spawned a competitor made up of all the other people who wanted to move faster.


> Everything just assumes that without Sam they’re worse off.

But it's not just him is it?


Sure, I suppose not, but they aren't losing everyone en masse. Only Altman supporters so far.

I think a wait-and-see approach is better. If I were speculating, I'd say we had some inner politics spill into public view because Altman needs the public pressure to get his job back.


The thing I really want to know is how many of the people who have already quit or have threatened to quit are actual researchers working on the base model, like Sutskever.


First, it remains to be seen whether Microsoft is going to do something drastic.

I also suspect they could very well secure this kind of agreement from another company that would be happy to play ball for access to OpenAI's tech. Perhaps Amazon, for instance, whose AI attempts since Alexa have been lackluster.


Yeah, he can be furious all he wants but he is not getting the OpenAI he used to have back. It’s either Sam + Greg now or Ilya. All 3 are irreplaceable.


I’m not advocating people double down on stupid, or that correcting your mistakes is bad optics. I’m simply saying they’re “increasingly revealing” pre-existing unfitness at each ham-fisted step. I think our increase in knowledge of their foolishness is a good thing. And often correcting a situation isn’t the same as undoing it, because undoing is often not possible or has its own consequences. I do appreciate your willingness to let them grow into their responsibilities despite it all — that’s a rare charity extended to an incompetent board.


Yeah, I agree with that. I think the board has to have been genuinely surprised by the sheer blowback they're getting, i.e. not just Brockman quitting but lots of their other top engineering leaders.

Regarding your last sentence, it's pretty obvious that if Altman comes back, the current board will effectively be neutered (it says as much in the article). So my guess is that they're more in "what do we do to save OpenAI as an organization" than saving their own roles.


> Dude, everyone already thinks the board did a crazy ill-advised thing.

I've honestly never had more hope for this industry than when it was apparent that Altman had been pushed out by engineering for forgoing the mission to create world-changing products in favor of the usual mindless cash grab.

The idea that people with a passion for technical excellence and true innovation might be able to steer OpenAI to do something amazing was almost unbelievable.

That's why I'm not too surprised to see that it probably won't play out that way, and OpenAI will likely end up turning even faster into yet another tech company concerned exclusively with next quarter's revenue.


You're not wrong, but in this case not enough time has passed for the situation to change or for new facts to emerge. It's been a bit over a day. All that a flip-flop in that short a timeframe does is indicate that the board did not fully think through their actions. And taking a step like this without careful consideration is a sign of incompetence.


> To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness.

The weakness was the first decision; it’s already past the point of deciding if the board is a good steward of OpenAI or not. Sometimes backtracking can be a point of strength, yes, but in this case waffling just makes them look even dumber.


Depends entirely on how you do it. You can do something and backtrack in a shitty way too.

If they wanted to show they’re committed to backtracking they could resign themselves.

Now it sounds more like they want to have their cake and eat it.


> To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.

Lmfao you're joking if you think they "realized their mistake" and are now atoning.

This is 99% from Microsoft & OpenAI's other investors.


> This is 99% from Microsoft & OpenAI's other investors.

Exactly. You can bet there have been some very pointed exchanges about this.


Yeah, Satya likely just hired a thousand new lawyers just to sue OpenAI for being idiots.


I so wish I could be a fly on the wall in all this. There's got to be some very interesting moves and countermoves. This isn't over yet.


"When faced with multiple options, the most important thing is to just pick one and stick with it."

"Disagree and commit."

- says every CEO these days


Acknowledging a mistake so early seems like a sign of weakness to me. Hold the hot rod for at least a minute, see if the initial pain goes away. After that acknowledgement may begin to look like part of learning and get more acceptance, rather than: oopsie doodl, revert now!!!


This isn’t just a shitty idea. The board fired its CEO and the next day is apparently asking him to come back.

At this point, I don’t care how it resolves—the people who made that decision should be removed for sheer incompetence.


> is a sign of weakness

It's often a sign of incompetence though. Or rather a confirmation of it.


They are already the dumbest board in history (even dumber than Apple's board firing Steve Jobs), so this isn't out of keeping with anything. Besides, those 2 independent board members (who couldn't do FizzBuzz if their lives depended on it) won't be staying long if Sam returns, nor are they likely to serve on any board again after their shenanigans.


Some of the board member choices are baffling. Like why is Joseph Gordon Levitt’s wife on the board? Her startup has under 10 employees and has a personal email address as the contact address on the homepage.


Non-profits always have those spouses of wealthy people whose whole career is being a professional non-profit board member, with some vague academic or skin-deep work background to justify it. I'm just surprised OpenAI is one of those.


I hope there is an investigative report out there detailing why the 3 outsiders, 2 of them complete unknowns, are on the board, and how it truly benefits proper corporate governance.

That's way too much power for people who seemingly have no qualifications to make decisions about a company this impactful to society.


Unless "proper corporate governance" is exactly what makes the company dangerous to society, in which case you will need to have some external people in charge. You might want to set things up as a non-profit, though you'll need some structure where the non-profit wholly owns the for-profit wing given the amount of money flowing around...

Oh wait, that's what OpenAI is.

(To be clear, I don't know enough to have an opinion as to whether the board members are blindingly stupid, or principled geniuses. I just bristled at the phrase "proper corporate governance". Look around and see where all of this proper corporate governance is leading us.)


Well, with this extremely baffling level of incompetence, the suspect backgrounds of the outside members fit the bill: EA; SingularityU and shell companies (Logan Roy would call them "not serious people"); Quora (why, for data mining?!).

The time to do this was before ChatGPT was unleashed on the world, before the MS investment, before this odd governance structure was set up.

Yes, having outsiders on the board is essential. But come on, we need folks that have recognized industry experience in this field, leaders, people with deep backgrounds and recognized for their contributions. Hinton, Ng, Karpathy, etc.


> Quora - why, for data mining?

What shocked me most was that Quora IMHO _sucks_ for what it is.

I couldn't think of a _worse_ model to guide the development and productization of AI technologies. I mean, StackOverflow is actually useful, and it's threatened by the existence of Copilot et al.

If the CEO of Quora was on my board, I'd be embarrassed to tell my friends.


Isn't that like saying that the Manhattan Project should have only been overseen by people with a solid physics background? Because they're the best judges of whether it's a good idea to build something that could wipe out all life on Earth? (And whether that's an exaggeration in hindsight is irrelevant; that was exactly the sort of question that the overseers needed to be considering at that time. Yes, physicists' advice would be necessary to judge those questions, but you couldn't do it with only physicists' perspectives.)


Not sure I follow. The Manhattan Project was thoroughly staffed by many of the best in the field, in service to their country, to build a weapon before Germany did. There was no mission statement they abided by saying they were building a mere deterrent that wouldn't be used. There was no nuance to what the outcome could be, and no pretension to agency over its use.

In the case of AI ethics, the people who are deeply invested in this are also some of the pioneers of the field, who made it their life's work. This isn't a government agency. If the mission statement of guiding it to be a non-profit AGI, built as soon as possible and as safely as possible, were to be adhered to, and where it is today is wildly off course, then having a competent board would have been key.


Does Joseph Gordon Levitt’s wife have a name?


Mrs. Joseph Gordon Levitt :)


Why would anyone care, since she's not on the board because of it?


Any proof that she's incompetent or ill-informed, or are you simply speculating?


Yeah, I too would like to understand how the wife of a Hollywood actor got on this board. Did sama or Greg recruit her? Someone must have.

I have seen these types of people pop up in Silicon Valley over the years. Often, it is the sibling of a movie star, but it's the same idea. They typically do not know anything about technology and also are amusingly out of touch with the culture of the tech industry. They get hired because they are related to a famous person. They do not contribute much. I think they should just stay in LA.

EDIT: I just want to add that I don't know anything about this woman in particular (I'd never heard of her before yesterday), and it's entirely possible that she is the lone exception to the generalization I'm describing above. All I can say is that when I have seen these Hollywood people turn up in SF tech circles in the past (which has been several times, actually), it's always been the same story.


[flagged]


I mean, the reasoning is more like: to become a member of the board at OpenAI you must be extraordinary at something. At first sight, the only candidates for this something are "start-up founder" and "spouse of famous person". The famous-spouse thing is so much more extraordinary than being a startup founder that the first "explains away" the latter. Even if being related to an actor makes it more probable to be selected for such a job, there may be other hidden factors at play.


Don't take it in that direction. In your opinion he may be making a baseless accusation, but just because that accusation is against a woman doesn't make it sexist.


It's not because the accusation is against a female, it's because referring to someone solely as the spouse of someone else is a frequent tactic used to dismiss women.

That might not have been the intent, but when you accidentally use a dogwhistle, the dogs still perk up their ears.


It's common and acceptable to refer to someone with no independent claim to fame in terms of a famous, impactful person who happens to be their spouse, sibling, etc.


Except Tasha McCauley has far more claim to expertise in this space, however tenuous you may believe it to be, than her husband does. JGL is not relevant in the discussion, either. We're not talking about her in context of him. We are talking about her in context of her position.

If you don't understand how referring to someone solely based on their relationship with another person is denigrating, particularly when trying to highlight your perception of them being incompetent, I'm not sure what to say to you.


You sound like you want to have an argument about gender bias (esp. according to your other comment). I'm not interested in that. You're free to live in your own version of the world and assume that talking about someone by mentioning their spouse is "denigrating". Jesus.


I followed this comment trail hoping to find out more about Tasha McCauley before I google her, but you ended up doing exactly what you are bashing. Defining her in contrast to her husband's expertise on the topic.

After reading the thread, I am still unsure what makes her a proper candidate for the board seat, but I now know that she has more claim to it than her husband does.


There are lots of comments in these threads that go over her different qualifications and experiences.

I am in a discussion about referring to people as 'spouse of x'. They're not the same conversations and I am not sure why you would expect the contents to be the same.


This might just be the worst example of taking a metaphor too far


This is a good point. Saying something is sexist is what makes it so, plus why would it be sexist to dismiss her as just a wife in the same post that acknowledges that she runs a startup?

GP knows the headcount at her company so they probably know that it’s a robotics company, but it was simply of dire importance that we know that she is a wife.


[flagged]


It's sexist to refer to her solely based on her relationship with someone else when we're talking about her in the context of her expertise. The fact that she's JGL's wife has nothing to do with her merit, and so it comes off as dismissive, especially when the point being made is about her lack of ability.

Why can't you just criticize her "joke of a resume" directly instead of bringing up her spouse?

Generalizations and statements like this reflect bias in subtle ways that minimize women, and I'm glad it's being called out in some capacity.


I don't know that it would be a resume that would inspire confidence in a for-profit business's board that is primarily concerned with shareholder value.

I also don't know that it is a particularly problematic resume for someone sitting on the board of a non-profit that is expressly not about that. Someone that is too much of a business insider is far less likely to be going to bat for a charter that is explicitly in tension with following the best commercial path.


I guess you missed the bit about "Amal Clooney's husband" at the Golden Globes. It's 2023; why are we still referring to people like that?


The insinuation is that her most notable accolade is the man she married and there are cases where that's an accurate insinuation.

I have no idea who she is or what her accolades are, but I do know who JGL is and therefore referring to her like that is in fact useful to me, where using any other name is not.


Could you please elaborate on how this fact is useful to you? Could it be that you just make certain stereotypical assumptions from it?


It was funny because with the Clooneys both of them have actually accomplished things in significant situations and it was clearly wrong.

In this case this person seems to have primarily tried and failed to spin a robotics company out of Singularity “university” in 2012.

This only sounds adjacent to AI if you work in Hollywood.


It wasn't wrong because they both achieved something; it is generally wrong, and the joke just used their achievements to make that easier to understand.


Suggesting that we should be on a first name basis with the romantic partner of every famous person we know of simply because they are the romantic partner of a famous person is pretty naive. “Spouse of Y” works just fine generally to save space and effort for (locally) real people.


Option A: try to look good by hiding that you know you messed up

Option B: try to fix mistakes as quickly as possible

.

This is the thing that somehow got the label "psychological safety" attached to it. Hiding mistakes is bad because it means they don't get fixed (and so systems that, actually or apparently, align personal interest with hiding mistakes are also bad).


It's funny, but option A is almost always best if you care about yourself, but option B is best if you care about the company or mission. Large organizations are chock-full of people who always choose option A. Small startups are better because option B is the only option as nothing can be easily hidden.


How do you know they backtracked? This reporting, as far as I can see, doesn't have a source from the board directly.


If the board brings him back, they are done, including the chief scientist. You can't stage a coup just to bring the person back the next week.


If you strike at the king, you must kill him.

I am always curious how these conversations go in corporate America. I've seen them in the street and with blue collar jobs.

Loads of feelings get hurt and people generally don't heal or forgive.


You don’t know the actual reasons for them firing Sam and I don’t either. Everyone has an opinion on something they don’t understand. For all you know, he covered up a massive security breach or lied about some skunkworks projects


If your “for all you know” supposition that he’s a criminal were correct, then it would be criminal to try to bring him back. In that unlikely case, I can assure you my opinion of the board is unlikely to improve. It may be a black box to us, but it does have outputs we can see and reason about.


> You can do crazy ill-advised things, but if you unwaveringly commit, we’ll always wonder if you’re secretly a genius.

This. Some people even take it to the extreme and choose not to apologize for anything to look tough and smart.


That seems a lot better than doubling down on a bad mistake to save face, but we do care quite a bit about looking strong, don't we.


At longer timescales it is important to be able to recognize mistakes and reverse course, but this happened so fast I'm not sure that's the right characterization. There's no way they could already decide that firing Sam was a mistake based on the outcomes they claim to prioritize. Reversing course this quickly actually seems to me more like a reaction based directly on people's negative opinions, though it may be a specific pressure from Microsoft as well.


Based on reports of Microsoft's CEO being "furious", and the size of its legal team, I'd bet the people's reaction wasn't exactly the most relevant factor there...


They got told they are getting every piece of hardware not on-prem pulled, and they can burn in legal hell trying to get it back if they don't fix it.


> That seems a lot better than doubling down on a bad mistake to save face, but we do care quite a bit about looking strong, don't we.

IMO it's not about looking strong, it's about looking competent. Competent people don't fire the CEO of a multi-billion-dollar unicorn without thinking it through first. Walking back the firing so soon suggests no thinking was involved.


Not really. By reaching out to Sam this quickly, they're giving him significant leverage. I really like Sam, but everyone needs a counterbalance (especially given what's at stake).

And if they were right to fire Sam, they're now reacting too quickly to negative backlash. 24 hours ago they thought this was the right move; the only change has been perception.


I’m sure Satya and his 10,000 Harvard lawyers with filed down shark teeth were just the first of many furious calls they took.


Obviously it’s better to own up to a mistake right away. But the point is if they are willing to backtrack this quickly, it removes all doubt that it WAS a mistake, rather than us just not understanding their grand vision yet.


24 hrs isn't enough time to get signals on whether this was a mistake


How and why do you know it was a mistake without knowing the facts and reasoning? Hunch?



