We could rightly criticize the OpenAI board for how they handled the situation, but we shouldn’t fault them for their reasons for acting. If the majority of members truly felt that Altman was steering the organization too far from its mission, then they were right to intervene. Court judges do not rule based on personal feelings; they rule based on the law. If you don’t like their decisions, then challenge the laws, not those applying them. The same logic applies to OpenAI and its board.
It seems likely that the board had become increasingly uneasy with the direction that OpenAI was taking as it steadily diverged from its stated mission. Last Friday, perhaps due to evidence of a new AGI-like breakthrough, these tensions came to a head.
It is not yet clear who the winners are in this saga. You could point to Microsoft for gaining influence among OpenAI executives and engineers. Or Google, for seeing its main AI competitor wobble uncontrollably before its eyes. Or Altman and Brockman, for cementing their status as AI legends.
While the winners can be debated, there is no question about the loser – AI safety.
By bungling the governance so badly, the OpenAI board scored a howler of an own goal. While their objective was apparently to slow AI development down to ensure a safe deployment, the result is likely to be the opposite.
Altman’s camp of AI accelerators and optimists, supported by Microsoft, has emerged squarely on top. In a post-OpenAI Silicon Valley, anyone who has serious concerns about AI safety will now have a hard time being taken seriously. The new OpenAI board members, parachuted in to steady the ship, are corporate and government elites. The adults have taken over, and they are likely to support the expansionist and competitive goals championed by Altman.
Time for government to step in
OpenAI’s convoluted structure – a non-profit focused on the benefit of humanity overseeing a for-profit intent on leading in an industry Bloomberg estimates will grow to $1.3 trillion over the next decade – was an earnest but futile attempt at corporate self-regulation. In many ways, it was the most audacious, grandiose, and perhaps most foolish version of Silicon Valley’s go-to response to any kind of concern about the societal impact of technology and innovation.
A generation ago, tech entrepreneurs convinced the Clinton Administration that businesses could self-regulate the internet. An eBay score is surely more effective than liability law. If consumers don’t want to share their personal data, they can just opt out. It isn’t music piracy if you are merely sharing your favorite tunes with your peers – even when you have 100,000 “peers”. Why should it be up to tech companies to protect minors from harmful online content when that’s clearly what parents are for? And let’s not get hung up on the law if Meta’s own Oversight Board can “answer some of the most difficult questions around freedom of expression online: what to take down, what to leave up, and why.”