Brazil data regulator bans Meta from mining data to train AI models

RIO DE JANEIRO (AP) — Brazil’s national data protection authority determined on Tuesday that Meta, the parent company of Instagram and Facebook, cannot use data originating in the country to train its artificial intelligence.

Meta’s updated privacy policy enables the company to feed people’s public posts into its AI systems. That practice will not be permitted in Brazil, however.

The decision stems from “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects,” the agency said in the nation’s official gazette.

Brazil is one of Meta’s biggest markets. Facebook alone has around 102 million active users in the country, the agency said in a statement. The nation has a population of 203 million, according to the country’s 2022 census.

A spokesperson for Meta said in a statement that the company is “disappointed” and insisted its approach “complies with privacy laws and regulations in Brazil.”

“This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil,” the spokesperson added.

The social media company has also encountered resistance to its privacy policy update in Europe, where it recently put on hold plans to begin feeding people’s public posts into its AI training systems, a rollout that was supposed to start last week.

In the U.S., where there’s no national law protecting online privacy, such training is already happening.

Meta said on its Brazilian blog in May that it could “use information that people have shared publicly about Meta’s products and services for some of our generative AI features,” which could include “public posts or photos and their captions.”

Opting out is possible, Meta said in that statement. Even so, the agency said in a statement, there are “excessive and unjustified obstacles to accessing the information and exercising” the right to opt out.

Meta did not provide enough information for people to understand the possible consequences of their personal data being used to develop generative AI, the agency added.

Meta isn’t the only company that has sought to train its AI systems on data from Brazilians.

Human Rights Watch released a report last month that found that personal photos of identifiable Brazilian children, sourced from a large database of online images pulled from parent blogs, the websites of professional event photographers and video-sharing sites such as YouTube, were being used to create AI image-generator tools without families’ knowledge. In some cases, those tools have been used to create AI-generated nude imagery.

Hye Jung Han, a Brazil-based researcher for the rights group, said in an email Tuesday that the regulator’s action “helps to protect children from worrying that their personal data, shared with friends and family on Meta’s platforms, might be used to inflict harm back on them in ways that are impossible to anticipate or guard against.”

But the decision regarding Meta will “very likely” encourage other companies to refrain from being transparent in the use of data in the future, said Ronaldo Lemos, of the Institute of Technology and Society of Rio de Janeiro, a think-tank.

“Meta was severely punished for being the only one among the Big Tech companies to clearly and in advance notify in its privacy policy that it would use data from its platforms to train artificial intelligence,” he said.

The company must demonstrate compliance within five working days of being notified of the decision, and the agency set a daily fine of 50,000 reais ($8,820) for failure to do so.
