OpenAI's unusual nonprofit structure led to the surprise ouster of its sought-after CEO


SAN FRANCISCO (AP) — Unlike Google, Facebook and other tech giants, the company behind ChatGPT was not created to be a business. It was set up as a nonprofit by founders who hoped that it wouldn’t be beholden to commercial interests.

But the organization grew complicated.

Even after adding a for-profit arm, OpenAI is still controlled by the nonprofit OpenAI Inc. and its board of directors. That setup allowed four board members (the chief scientist, two tech entrepreneurs and an academic) to oust CEO Sam Altman on Friday.

The abrupt ouster of one of the most sought-after leaders in AI sparked an employee revolt that has put the organization's future in doubt, and it has drawn attention to the unusual structure that sets OpenAI apart from other tech companies.

Such a structure is extremely rare among prominent technology companies.

Facebook parent Meta, Google and others are essentially set up the opposite way: founders hold ultimate control over the company and the board of directors through a special class of voting shares not available to the public. The idea echoes Berkshire Hathaway, which has two classes of stock so the company and its leaders would not be beholden to investors seeking short-term profit.

OpenAI’s stated mission is to safely build artificial intelligence that is “generally smarter than humans.” Debates have swirled around that goal and whether it conflicts with the company’s increasing commercial success.

The board structure suggested its members assumed they were all on the same page and shared the same goals, and that staying aligned would keep problems from arising, said Sarah Kreps, director of Cornell University's Tech Policy Institute.

A surge of investment over the past year has driven major progress in AI technology, she added, and it has also brought some of those tensions to the surface.

The board has declined to give specific reasons for firing Altman, who was quickly hired by Microsoft Corp. on Monday. Microsoft, a major OpenAI investor, also hired OpenAI President Greg Brockman, who resigned in protest of Altman's ouster, along with at least three other employees.

Microsoft has also offered jobs to all of OpenAI's 770 employees. If enough of them accept, or leave for rival companies that are openly courting them, OpenAI could effectively be left without a workforce. Microsoft would still retain most of OpenAI's existing technology, which it holds an exclusive license to use.

OpenAI said Altman was removed after a review found he was not consistently candid in his communications with the board, which had lost confidence in his ability to lead the company.

The statement gave no specific examples of Altman's alleged lack of candor, saying only that his conduct hindered the board's ability to carry out its responsibilities.

The move backfired, Kreps said. By ousting Altman in favor of a more cautious approach to AI, the board put itself at odds with most of the company's employees and undermined its own ability to champion a pro-safety approach.

After a tumultuous weekend in which one interim CEO was swapped for another, OpenAI board member Ilya Sutskever, who was instrumental in Altman's ouster, expressed regret for his part in it.

In a Monday post on X, formerly Twitter, he said he never intended to harm OpenAI, that he loves everything the company has built and that he will do everything he can to reunite it.

OpenAI's board had six members as of Friday. It is now down to four: Sutskever, the company's co-founder and chief scientist; Adam D'Angelo, CEO of the Q&A platform Quora; tech entrepreneur Tasha McCauley; and Helen Toner of the Georgetown Center for Security and Emerging Technology.

Earlier this year, the board was larger still.

Departures from the board this year included LinkedIn co-founder and investor Reid Hoffman, who co-founded another AI company last year; former Republican U.S. Rep. Will Hurd of Texas, who was briefly a 2024 presidential candidate; Neuralink executive Shivon Zilis; and Brockman, who left in the wake of Altman's firing.

Altman and Tesla CEO Elon Musk were OpenAI's original co-chairs.

The board might never have had to weigh the tension between its nonprofit mission and the company's for-profit arm if not for a falling-out between Altman and Musk in 2018.

Musk abruptly left OpenAI, reportedly over a potential conflict of interest with Tesla, the electric car maker that is the source of much of his personal wealth, now estimated at more than $240 billion.

Over the past year, Musk has voiced concern that Microsoft's influence could push OpenAI to put profits first. He recently started his own AI company, xAI, to compete with OpenAI, Microsoft and Google, among others.

OpenAI's board members have not responded to requests for comment. Of the remaining four, one of the better-known is D'Angelo, a former Facebook executive who co-founded Quora in 2009 and still serves as its CEO.

D'Angelo joined the OpenAI board in 2018, saying at the time that he saw the safe development of general AI as an important and underappreciated goal and that he wanted to support the work.

As recently as Nov. 6, he was publicly weighing in on the prospect of AI surpassing human capabilities, responding to a Google research paper that presented evidence that current AI systems cannot generalize beyond their training data, casting doubt on earlier assumptions about their abilities.

In an earlier post, D'Angelo said the development of artificial general intelligence would likely be a significant event in world history, and one he expected to occur within our lifetimes.

___

Associated Press technology writers Matt O'Brien in Providence, Rhode Island, and Michael Liedtke in San Francisco contributed to this report.
