AI is present in everyday life with minimal supervision. Governments are rushing to catch up.

ChatGPT has drawn plenty of attention for its use of artificial intelligence, but AI has also been woven into daily life with far less fanfare: screening job resumes, evaluating rental apartment applications and, in some instances, influencing medical care.

Yet while several of these AI technologies have been found to be biased, favoring certain races, genders or incomes, they operate under minimal government regulation.

With Congress failing to act, legislators in seven states are making major moves to address bias in artificial intelligence. Their measures mark the beginning of an ongoing conversation about how to balance the potential advantages of this complex technology against its documented dangers.

AI affects nearly every aspect of our lives, whether we are aware of it or not, said Suresh Venkatasubramanian, a Brown University professor and co-author of the White House’s Blueprint for an AI Bill of Rights.

If these systems all functioned properly, no one would be bothered, he said. Unfortunately, that is not the case.

The outcome will be determined by legislators tackling intricate issues while negotiating with a rapidly expanding industry valued in the billions of dollars and growing at a rate seemingly best measured in astronomical units.

Of almost 200 AI-related bills introduced in state legislatures last year, only around a dozen were approved, according to BSA The Software Alliance, a trade group that advocates for software companies.

More than 400 bills pertaining to artificial intelligence are under discussion this year. The majority target specific aspects of AI, including roughly 200 bills on deepfakes, some of which would prohibit pornographic deepfakes like those featuring Taylor Swift that went viral on social media. Other lawmakers are attempting to restrict chatbots such as ChatGPT, for instance, to keep them from providing instructions on how to make a bomb.

The seven state bills, under discussion in states from California to Connecticut, are different: they would apply across industries to address discrimination in artificial intelligence, one of the technology’s most pervasive and intricate problems.

Experts who study AI’s discriminatory tendencies say governments are falling behind in implementing protective measures. The use of AI to make significant choices, through what the bills call “automated decision tools,” is widespread but often concealed from the people it affects.

As many as 83% of employers use algorithms in hiring, including 99% of Fortune 500 companies, according to the Equal Employment Opportunity Commission.

Yet, according to Pew Research, a large share of Americans are unaware these tools are being used at all, let alone whether they are biased.

Artificial intelligence can acquire biases from the data it is trained on, which often consists of historical data that may carry the hidden marks of past discrimination.

Amazon, for example, abandoned its hiring-algorithm project after discovering that the system favored male candidates. The AI was built to evaluate new resumes by learning from previous ones, which came predominantly from men. Although the algorithm never knew an applicant’s gender, it devalued resumes that mentioned the word “women’s” or that listed women’s colleges, because those were underrepresented in the data it was trained on.
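To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. The data, feature names and weights are all invented for illustration; the point is simply to show how a model trained on biased historical decisions can learn to penalize a proxy feature even when it never sees gender.

```python
# Hypothetical sketch: a screening model absorbing historical bias.
# All data here is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "resumes": underlying skill is equal across groups.
skill = rng.normal(size=n)
# Proxy feature, e.g. 1 if the resume mentions a women's college.
womens_college = rng.integers(0, 2, size=n)

# Biased historical labels: past hiring rewarded skill but
# penalized the proxy feature, mimicking past discrimination.
past_hired = (skill - 1.0 * womens_college
              + rng.normal(scale=0.5, size=n)) > 0

# Train on those historical decisions; gender is never an input.
X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, past_hired)

# The model learns a negative weight on the proxy feature,
# reproducing the historical bias in its own predictions.
print("learned weights [skill, womens_college]:", model.coef_[0])
```

In this toy setup the learned weight on the proxy feature comes out negative, mirroring the behavior reported about Amazon’s system: the bias in the historical labels becomes bias in the model, with no explicit mention of gender anywhere.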

If an AI is trained on decisions made by past managers who favored certain individuals, the technology will also learn to discriminate against others, according to Christine Webber, a lawyer handling a class-action lawsuit that claims an AI system used to score rental applicants discriminated against those who were Black or Hispanic.

According to court records, Mary Louis, an African American woman and a plaintiff in the case, applied to lease an apartment in Massachusetts and received only a terse reply: “Your tenancy has been rejected by the third-party screening service we use for all potential tenants.”

Louis submitted two references from previous landlords showing she had paid her rent consistently and on time for 16 years, according to court documents. She was nonetheless informed that appeals are not accepted and that the tenant screening decision could not be reversed.

One of the bills’ main objectives is to address this lack of transparency and accountability around AI bias. The effort builds on a failed California proposal from last year, the first comprehensive attempt to regulate AI bias in the private sector.

Under the proposed legislation, businesses using automated decision systems would be required to complete impact assessments. These would describe how AI figures into the decision-making process, what data is collected and what the risks of discrimination are, along with an explanation of the company’s safeguards. Depending on the bill, the assessments would either be submitted to the state or could be requested by regulatory agencies.

Some of the proposals would also mandate that companies inform customers when AI will be used to make a decision and, with certain caveats, allow them to opt out.

Craig Albright, senior vice president of U.S. government relations at BSA, the industry lobbying group, said its members generally support some of the proposed measures, such as impact assessments.

Technology moves faster than the law, Albright said, but updating the rules has its advantages: companies gain a clearer understanding of their duties, and consumers gain greater trust in the technology.

So far, however, the legislative push has had a rocky start. A bill in Washington state has already stalled in committee, and the California proposal from last year, the model for many of the current bills, also died.

California Assembly member Rebecca Bauer-Kahan has revived her failed bill with the backing of some tech companies, such as Workday and Microsoft, after dropping a requirement that companies routinely submit their impact assessments. Similar bills have been introduced or are anticipated in Colorado, Rhode Island, Illinois, Connecticut, Virginia and Vermont.

Venkatasubramanian of Brown University called the bills a positive step but said they lack concrete measures for identifying and addressing bias in AI. He added that limited access to the impact reports could make it harder for individuals to determine whether an AI had discriminated against them.

A more intensive but accurate way to detect discrimination would be to require bias audits, tests that assess whether an AI is behaving in a discriminatory way, and to publish the results. The industry resists that idea, citing concerns over revealing trade secrets.
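As a rough illustration of what one such audit might check, here is a minimal Python sketch applying the four-fifths rule, a long-standing EEOC guideline for flagging disparate impact in selection rates. The groups and outcomes below are invented, and real audits are considerably more involved.

```python
# A minimal sketch of one common bias-audit check: the EEOC's
# "four-fifths rule," which flags a selection process when one
# group's selection rate falls below 80% of the highest group's.
# The groups and outcomes below are invented for illustration.

def selection_rates(decisions):
    """decisions: dict mapping group -> list of 0/1 outcomes."""
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def four_fifths_check(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    top = max(rates.values())
    # Ratio of each group's rate to the most-selected group's rate.
    return {g: (r / top, r / top >= threshold) for g, r in rates.items()}

# Hypothetical screening outcomes (1 = approved, 0 = rejected).
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% approved
}

for group, (ratio, passes) in four_fifths_check(outcomes).items():
    print(f"{group}: impact ratio {ratio:.2f} -> {'ok' if passes else 'flagged'}")
```

Selection-rate ratios like this are only a first pass; fuller audits also examine error rates and confounding factors. But even simple checks of this kind are the sort of result that most of the current bills would not require companies to publish.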

The majority of the legislative proposals do not require regular testing of AI systems, and most still have a long way to go before becoming law. Even so, they mark the beginning of legislators and citizens grappling with a technology whose presence is growing and here to stay.

It covers every aspect of your life, Venkatasubramanian said, and for that reason alone it is worth paying attention to.

———

Associated Press reporter Trân Nguyễn in Sacramento, California, contributed to this report.

Source: wral.com