The White House is getting involved in the discussion surrounding the use of open versus closed systems in artificial intelligence.

The Biden administration is wading into a heated debate over whether the most powerful artificial intelligence systems should be open for public use or kept proprietary.

The White House said Wednesday that it is seeking public comment on the risks and benefits of making an AI system's key components widely available for anyone to use and modify. The inquiry is part of the broader executive order President Joe Biden signed in October to manage the rapidly developing technology.

Tech companies are divided over how open to make their AI models. Some emphasize the dangers of widely releasing a model's components, while others stress that open science is essential for researchers and startups. Meta Platforms, the parent company of Facebook, and IBM are among the strongest advocates of an open approach.

Biden's order referred to open models as "dual-use foundation models with widely available weights" and said they required further study. Weights are the numerical parameters that determine how an AI model performs.

Posting those weights on the internet can bring substantial benefits to innovation, but also substantial safety risks, such as the removal of safeguards within the model, according to the order. Biden gave Commerce Secretary Gina Raimondo until July to consult with experts and deliver recommendations on how to manage the potential benefits and risks.

The Commerce Department's National Telecommunications and Information Administration has opened a 30-day comment period to gather input for a report to the president.

According to Alan Davidson, an assistant secretary of Commerce and administrator of the NTIA, experts do not see this as a black-and-white issue; there are gradations of openness. Davidson told reporters Tuesday that it is possible to find solutions that promote both innovation and safety.

Meta plans to share with the Biden administration what it has learned from a decade of building AI technologies in an open way, so that the benefits of AI can be accessible to everyone, according to a statement from Nick Clegg, the company's president of global affairs.

Google has largely favored a more closed approach, but on Wednesday it released a set of open models called Gemma, built on the same technology as its recently launched Gemini chatbot app and paid service. Google describes the open models as a lighter-weight version of its larger, more powerful Gemini, which remains closed.

In a recent technical paper, Google said that because releasing an open model like Gemma is irreversible, safety was a central consideration. It also urged the AI community to move beyond the oversimplified "open vs. closed" debate and instead take a nuanced, collaborative approach to weighing risks and benefits, neither exaggerating nor understating potential harms.

Even if an AI system's components are made public, they may not be easy for individuals to examine or use, because running an open model still depends on the computing resources of a few major companies, according to Cornell University researcher David Gray Widder.

Widder said companies' motivations for choosing an open or closed approach are complicated. Those championing open source may hope to benefit from outside contributions, while those citing safety concerns may also be trying to entrench their dominant position in the AI industry.

Source: wral.com