Deep divide in Silicon Valley exposed over AI risks and potential.
In a surprising turn of events, Sam Altman, co-founder and CEO of OpenAI, was dismissed by the company’s board. The reasons for his removal remain unclear but may involve concerns about his side projects and the expansion of OpenAI’s commercial offerings without due regard for safety. The episode highlights a divide in Silicon Valley between those who fear the risks of AI (“doomers”) and those who play down such concerns and focus on its potential (“boomers”). The future of AI regulation, and the question of who will reap the technology’s benefits, hangs in the balance.
The events at OpenAI over the weekend of 17 November have highlighted a deep divide in Silicon Valley regarding the risks and potential of AI. Sam Altman, the co-founder and CEO of OpenAI, was suddenly sacked by the company’s board, with rumours suggesting concerns about his side projects and the rapid expansion of OpenAI’s commercial offerings without considering safety implications.
Despite efforts from investors and employees to bring Altman back, he was not reinstated, and Emmett Shear, former head of Twitch, was appointed as interim CEO. Satya Nadella, CEO of Microsoft, also announced that Altman and a group of OpenAI employees would join Microsoft to lead a new advanced AI research team.
This split in Silicon Valley reflects philosophical differences between the “doomers” and “boomers.” Doomers believe AI poses an existential threat to humanity and advocate for stricter regulations, while boomers downplay concerns and emphasize AI’s potential for progress. The influence of either camp could determine the course of AI regulation and the future beneficiaries of AI advancements.
OpenAI’s corporate structure, in which a non-profit foundation controls a for-profit subsidiary, also reflects the divide: it was designed to balance the competing aims of doomers and boomers. Altman’s position seemed to straddle both camps, as he called for “guardrails” to ensure AI safety while pushing for more powerful models and launching new tools. However, tensions arose with Microsoft, OpenAI’s largest investor, which reportedly learned of Altman’s dismissal just minutes before it happened. Microsoft’s subsequent offer to Altman and his colleagues may indicate dissatisfaction with OpenAI’s actions.
Commercial considerations also contribute to the divide, as doomers tend to be early movers with deeper pockets and proprietary models. OpenAI, maker of ChatGPT, and Anthropic are early leaders in the AI race. Microsoft’s position rests largely on its investment in OpenAI, while Amazon plans to invest in Anthropic. Being an early mover does not guarantee lasting success, however, as new entrants can still disrupt incumbents.
The doomer camp’s push for stricter regulation has gained political traction. The Biden administration has encouraged leading AI model-makers to make voluntary commitments to have their AI products inspected. Leading model-makers have also signed an agreement with the British government allowing regulators to test their products for trustworthiness and harmful capabilities. President Biden’s executive order imposes stricter provisions still, compelling AI companies building models above a certain size to notify the government and share safety-testing results.
The future of open-source AI is another point of contention. Supporters argue that open-source models are safer due to their transparency, while detractors fear potential misuse by bad actors. Venture capitalists generally support open-source models, while incumbents may view them as a competitive threat. Regulators also seem responsive to the concerns of the doomer camp, as indicated by President Biden’s executive order potentially affecting open-source AI.
Not all tech firms fall neatly into either camp. Meta’s decision to open-source its AI models benefits startups by giving them a powerful foundation for innovative products. Apple, meanwhile, remains silent about AI, preferring to speak of “machine learning” at its product launches.
Source: The Economist