What Can Beat Artificial Intelligence? Natural Stupidity

With due apologies to Amos Tversky

The events of the weekend of November 17, 2023, will take time for history to digest. The dramatic firing of OpenAI’s CEO Sam Altman and President Greg Brockman, followed by their hiring by Microsoft over the weekend and their subsequent return to OpenAI, has had many pundits declaring winners and losers. To understand what happened and its implications, we believe we need to ask more fundamental questions about the decision making at OpenAI and Microsoft that led to this sequence of events. First, we think this whiplash could have been avoided. Second, for all the talk of responsible AI, this episode makes it abundantly clear that the future of AI cannot be left to organizations with opaque and unusual governance structures. While the theory behind OpenAI’s structure was admirable, it has become clear that it did not work in practice. Indeed, that should have been obvious from the start.

Structure induces behavior 

A quick glance at www.openai.com/our-structure reveals that OpenAI has an unusual governance structure, with a non-profit controlling the profit-capped for-profit entity. OpenAI believed that no other structure could balance the asymmetric power that would accrue to a purely commercial entity if left unchecked. On its website, OpenAI makes it clear that the board members of the non-profit are expected to think about all of humanity rather than the commercial interests of OpenAI or its investors. When OpenAI was just a research and technology development organization with nebulous commercial prospects, this unusual structure was fine. However, with the phenomenal success of ChatGPT, the obvious commercial potential of Gen AI technology, and Microsoft’s substantial investment, OpenAI’s structure, idealistic in conception yet convoluted in practice, had to change.

As of November 17, 2023, the non-profit’s board consisted of three insiders and three outsiders. Three other members, including Reid Hoffman, left the board in 2023, *after* ChatGPT became all the rage. Because every board member had to be recruited and approved by the other board members, Sam and Greg recruited and approved the very people who fired them. Greg was the chairman of the board. This basic fact is lost in all the sensational talk of “coup attempts”. The board members were not strangers to OpenAI; Sam and Greg believed these people could represent all of humanity’s concerns about AI. They were told their responsibility was to be independent, including having the power to fire the CEO, and they acted accordingly. That should come as no surprise. We highly doubt that they woke up one morning and decided to fire Sam and Greg. This had to have been simmering for a while.

The first mistake made by OpenAI’s senior management (Sam, Greg, and Ilya) was failing to maintain the appropriate board strength as the scale and scope of OpenAI changed rapidly. This is the CEO’s primary responsibility. The more important question, in our opinion, is therefore: why did the management not evolve the board to match its mission, or at the very least quickly replace the board members who had left? Did they think they would succeed with no adjustments, or did they believe the task was neither urgent nor important? We doubt they would have had any trouble finding board members to join them on this mission; after all, OpenAI’s ChatGPT is the greatest thing since sliced bread. Did they not realize that their board and governance structure was a ticking time bomb, untenable for OpenAI’s long-term future? Or that a board with so few members carries a heightened risk of governance failures? It is worth asking these questions before asking why the board did what it did. In our opinion, the situation was entirely avoidable, and the responsibility for it falls squarely on Sam, Greg, and Ilya as the principals of OpenAI. The charitable explanation is naivete: they had the best of intentions but lacked the experience and judgment needed to navigate complex governance structures. The uncharitable explanations are incompetence, cluelessness, and arrogance, or some combination thereof.

Microsoft’s motivations 

As the commercial potential of Gen AI became evident, Microsoft did the unthinkable: it leaped ahead of Google in an area Google owned, capitalizing on a technology Google invented and posing the first serious challenge to Google’s dominance of search in quite some time. On closer scrutiny, however, a few things stand out. Microsoft secured a 49% stake in OpenAI’s profit-capped for-profit entity. Microsoft must have been aware of OpenAI’s unusual governance structure, and of the fact that a small number of board members could completely redirect OpenAI if they believed that AI posed a threat to humanity or that AI’s benefits were accruing to a select few. What would motivate Microsoft to make such a risky bet? How did Satya Nadella convince the Microsoft board to invest up to $13 billion in a profit-capped subsidiary of a non-profit, with no representation on OpenAI’s board or other means of ensuring accountability?

These questions are critical to understanding why Microsoft offered to hire everyone from OpenAI. Satya hired Sam and Greg immediately after they were fired because, had he not, the decision to invest billions in an unusual entity that was fast imploding would have caused collateral damage at Microsoft as well. There was an element of desperation in the attempt to create a perception of continuity and save face with their backs against the wall. And it worked.

One reason Microsoft invested is that it needs to be seen as first among the beneficiaries of the OpenAI/ChatGPT brand, ahead of the competition. If this association shifts its search market share by even a few percentage points, it would pay back the investment in OpenAI many times over, even without any formal way to influence OpenAI.

If Microsoft had been forced to make good on the offer to hire everyone from OpenAI, the difficulty of integrating hundreds of OpenAI researchers with Microsoft’s own substantial research organization would likely have destroyed a great deal of value. That, however, was nothing compared to the potential loss of value indicated by the drop in Microsoft’s share price when the news of Sam Altman’s firing became public. The market evidently believed that losing access to the output of OpenAI’s employees would cause Microsoft great harm. This is ironic, because Microsoft probably does not need all seven hundred-odd OpenAI researchers to create state-of-the-art LLMs. Microsoft’s own teams have already produced LLMs that are comparable to, and in some ways better than, OpenAI’s, but the ChatGPT brand belongs to OpenAI.

Overall, it also appears Microsoft invested in OpenAI from a position of weakness. Satya was quoted as saying he wished OpenAI’s board had consulted him before firing Sam, but he should have known that the board was under no obligation to tell him anything. He should have insisted that Sam strengthen OpenAI’s board as a condition of Microsoft’s investment, but he did not. Or he did, and Sam ignored him, because at the time of the investment Microsoft needed OpenAI more than OpenAI needed Microsoft.

The most benign explanation is that OpenAI’s mixed structure gave the leadership plausible deniability. Sam could say he could not give Microsoft any real control because the non-profit controls everything at OpenAI, and Satya could say the same to Microsoft’s board. All Microsoft needs is access to the technology and the ability to bring products to market faster than the competition.

One can even ask: what has changed now compared to when Microsoft made the investment? The answer is likely OpenAI’s future as a “force-for-humanity”. OpenAI is now a Microsoft-controlled organization, no ifs or buts about it. That, along with the proliferation of LLMs of all shapes and sizes, means there is no moat around the core LLM technology that OpenAI produces (there likely never was, but that is a topic for another day). So it is unclear who wins from all of this. Of course, people will say that Microsoft’s market valuation increased by quite a few billion dollars, and so it has won. We hope this note provides enough context to show that it is too early to call a winner.

So, what next? 

Sam and Greg are back at OpenAI with a new board, a promise to give Microsoft a voice on the board, and a commitment to expand the board to nine members. Greg and Ilya are no longer on the board. These are positive developments. The question is why it took this near-death experience to move towards sanity. If OpenAI is as important to the world as the frenzied reaction suggests, we need much more robust governance to ensure long-term progress. Yes, the new structure ensures that investors, especially Microsoft, have a say, but OpenAI’s other lofty goals will likely take a back seat, which is probably fine.

The events remind us of the African proverb, “If you want to go fast, go alone. If you want to go far, go together.” It offers one explanation for what happened. Sam may have wanted to go fast, and others on OpenAI’s board may have thought Sam was going alone. But who really knows? A non-profit board is under no obligation to explain its actions to anyone else. If there was a winner in this round, it was those who wanted to slow the development of AI. However, given the explosion of open-source Gen AI models, and the sharp decrease in the cost of training and deploying these models, innovation will accelerate with or without OpenAI and Microsoft.  

The other takeaway is that robust governance is required not only for safe AI but also for more mundane organizational matters. OpenAI does not have to be this unusual entity in which a handful of board members ostensibly weigh all of humanity’s concerns. When the controlling entity is an opaque non-profit, it is impossible to have faith that any of OpenAI’s former, current, or future board members can represent all of humanity’s interests in meaningful decisions. It is, in some ways, the weakest form of governance. OpenAI, for whatever reason, decided that this was the structure best suited to help it succeed. We are seeing the result of everyone doing their jobs to the best of their ability within that governance structure. We are seeing the natural stupidity of weird governance structures. Moving forward, OpenAI can and should pick a lane to succeed in. Given that it played a pivotal, even pioneering, role in demonstrating with ChatGPT what is possible with AI for the masses, we hope it succeeds and does not go the way of Netscape. Because we now know that the natural stupidity of bizarre governance structures designed by ostensibly intelligent people can beat AI.

 

