The future of AI may hinge on whether it is built as an “open-source” or a closed system, a divide that has tech giants lobbying regulators for their preferred approach.


Many technology industry leaders say they support regulating artificial intelligence, but they are also lobbying for rules that serve their own interests.

That does not mean they all want the same thing.

Facebook parent Meta and IBM recently launched a new group called the AI Alliance, which advocates an “open science” approach to the development of artificial intelligence, setting its members apart from rivals such as Google, Microsoft and ChatGPT maker OpenAI.

The two opposing camps, open and closed, disagree about the best way to develop AI. Safety is at the heart of the debate, but so is the question of who gets to profit from AI’s advances.

Darío Gil, an IBM senior vice president who directs its research division, said open advocates favor a non-proprietary, inclusive approach: the technology is not hidden away, locked in a barrel where no one knows what it is.

The term “open-source” comes from a decades-old practice of building software in which the code is freely available for anyone to examine, modify and build upon.

Open-source AI involves more than just code, and computer scientists disagree about how to define it, depending on which components of the technology are publicly available and whether there are restrictions on its use. Some use the term open science to describe the broader philosophy.

The AI Alliance, led by IBM and Meta and including Dell, Sony, AMD, Intel, universities and AI startups, is coming together to argue that the future of AI will be built on the open exchange of ideas and on open innovation, including open source and open technologies. In an interview with The Associated Press ahead of the group’s unveiling, Gil said that kind of collaboration is essential to building a solid foundation for AI.

Part of the confusion around open-source AI is that, despite its name, OpenAI, the company behind ChatGPT and the image generator DALL-E, builds AI systems that are decidedly closed.

In an April video interview with Stanford University, Ilya Sutskever, OpenAI’s co-founder and chief scientist, said there are obvious short-term commercial incentives against open sourcing. But he also pointed to a longer-term worry: an AI system with incredibly powerful capabilities would be too dangerous to make publicly available.

To make the case for open-source risks, Sutskever offered the example of an AI system that had learned how to set up its own biological laboratory.

Even today’s AI models pose risks, said David Evan Harris, a scholar at the University of California, Berkeley; they could be used, for instance, to ramp up disinformation campaigns aimed at disrupting democratic elections.

Open source is beneficial in many dimensions of technology, Harris said, but AI is a different case.

Anyone who has seen the film “Oppenheimer” knows this, he said: when big scientific discoveries are made, you have to be careful about how much of the detail is shared openly, lest it fall into the wrong hands.

The Center for Humane Technology, a longtime critic of Meta’s social media practices, is among the groups drawing attention to the risks of openly available or leaked AI models.

Deploying these models to the public without proper guardrails is highly irresponsible, said Camille Carlton of the group.

Open-source advocates, for their part, argue that the restrictive approach serves the interests of the companies pushing it.

Yann LeCun, Meta’s chief AI scientist, recently accused OpenAI, Google and the startup Anthropic of “massive corporate lobbying” to shape the rules in a way that favors their advanced AI models and concentrates their power over the technology’s development. The three companies, along with Microsoft, have formed their own industry coalition, the Frontier Model Forum.

On X, formerly known as Twitter, LeCun said fearmongering by fellow scientists about AI “doomsday scenarios” was giving ammunition to those who want to ban open-source research and development.

“In the coming years, as AI systems become the primary storage of human knowledge and culture, it is crucial for these platforms to be open source and accessible to all, allowing for contributions from everyone,” LeCun wrote. “This openness is necessary in order for AI platforms to accurately represent the vast scope of human knowledge and culture.”

IBM, an early champion of the open-source Linux operating system in the 1990s, sees the fight as the latest round of a much older rivalry, one that predates the AI boom.

Chris Padilla, who leads IBM’s global government affairs team, said the closed camp’s strategy amounts to a classic regulatory-capture approach: raising fears about open-source innovation. It is a tactic Microsoft has used for decades, he said, opposing open-source programs that could compete with Windows or Office, and the company is taking a similar approach here.

The “open-source” debate was easy to miss in the discussion around U.S. President Joe Biden’s sweeping executive order on AI.

That’s because Biden’s order described open models with the technical term “dual-use foundation models with widely available weights” and said they needed further study. Weights are the numerical parameters that determine how an AI model performs.
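To make the idea of “weights” concrete, here is a purely illustrative sketch, not drawn from the article or from any real model: a toy linear model in plain Python whose behavior is fully determined by two numbers. The names and values are hypothetical, but the point carries over to large AI models: publishing the weights lets anyone reproduce the model, and also lets anyone modify it.

```python
# A toy "model" is just a function plus its learned numbers (its weights).
# For a real AI model there are billions of such numbers, but the principle
# is the same: whoever has the weights can run and alter the model.

weights = {"w": 2.0, "b": 0.5}  # hypothetical published weights

def predict(x, weights):
    """A tiny linear model: output = w * x + b."""
    return weights["w"] * x + weights["b"]

print(predict(3.0, weights))  # -> 6.5

# Anyone holding the weights can also change them; in a large model,
# further training can analogously strip out built-in safeguards.
weights["b"] = -1.0
print(predict(3.0, weights))  # -> 5.0
```

The security concern in the executive order follows from exactly this property: once weights are posted publicly, there is no technical way to prevent downstream modification.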

When the weights of a dual-use foundation model are widely available, such as being posted publicly on the Internet, the order said, there can be substantial benefits to innovation but also substantial security risks, including the removal of safeguards within the model. Biden gave U.S. Commerce Secretary Gina Raimondo until July to consult experts and deliver recommendations on how to manage those potential benefits and risks.

The European Union has less time to sort it out. In negotiations coming to a head Wednesday, officials working to finalize the bloc’s landmark AI regulations were still debating a number of provisions, including one that could exempt certain “free and open-source AI components” from rules governing commercial models.

Source: wral.com