Artificial intelligence (AI) specialists are divided into two camps regarding its potential impact: some believe it will greatly enhance our lives, while others fear it could lead to our destruction. This is precisely why the recent European Parliament debate on AI regulation holds significant importance. However, ensuring the safety of AI poses several challenges. Here are five key obstacles to address.
Defining the concept of artificial intelligence
After a two-year process, the European Parliament has finally formulated a definition for an AI system. It refers to software that can generate outputs, such as content, predictions, recommendations, or decisions, based on human-defined objectives and can influence the environments it interacts with. That definition underpins the groundbreaking Artificial Intelligence Act, now put to a vote, which sets out mandatory rules rather than mere voluntary codes and requires companies to comply.
Finding common ground on AI regulation globally
Sana Kharaghani, the former head of the UK Office for Artificial Intelligence, highlights the borderless nature of technology.
“It is crucial to have international collaboration on this, even though it may be challenging,” she emphasizes in an interview with BBC News. “The regulation of these technologies cannot be limited to one country’s jurisdiction as they transcend national boundaries.”
However, a global regulatory body for AI, akin to the United Nations, has not yet been established, despite some suggestions. Different regions have put forth their own approaches:
- The European Union’s proposals are the most stringent, involving a grading system for AI products based on their impact. For instance, an email spam filter would face lighter regulation compared to a cancer-detection tool.
- In the United Kingdom, AI regulation is being integrated into existing regulatory frameworks. Individuals who believe they have been subject to AI discrimination, for example, would approach the Equalities Commission.
- The United States currently relies on voluntary codes, though concerns have been expressed during recent AI committee hearings regarding their efficacy.
- China aims to require companies to notify users whenever an AI algorithm is utilized.
Building public trust in artificial intelligence
Jean-Marc Leclerc, Head of EU Government and Regulatory Affairs at IBM Corporation, emphasizes the importance of trust in artificial intelligence. He states:
“The usage of AI depends on the trust people have in it.”
The potential of AI to positively impact people’s lives is immense. It is already contributing to significant advancements, such as:
- Discovering antibiotics.
- Enabling paralyzed individuals to regain mobility.
- Addressing critical challenges like climate change and pandemics.
However, there are also concerns about more contentious uses of AI, such as screening job applicants or predicting criminal behavior. In response, the European Parliament is pushing for transparency, so that the public is informed about the risks associated with each AI product. Strict penalties are proposed for companies that violate the regulations: fines of up to €30 million or 6% of global annual turnover. Nonetheless, the question remains: can developers accurately predict or control how their AI products will be used?
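As a rough illustration of the penalty cap arithmetic: a minimal sketch, assuming (as later drafts of the Act specify, though the figures above do not say so explicitly) that the applicable maximum is whichever of the two amounts is higher.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative sketch of the proposed AI Act penalty cap.

    Assumption (not stated in the article): the cap is the higher of
    a fixed EUR 30 million or 6% of global annual turnover.
    """
    fixed_cap = 30_000_000                              # EUR 30 million
    turnover_cap = 0.06 * global_annual_turnover_eur    # 6% of turnover
    return max(fixed_cap, turnover_cap)

# For a company with EUR 1 billion turnover, 6% (EUR 60m) exceeds the fixed cap.
print(max_fine_eur(1_000_000_000))
```

For smaller firms whose 6% figure falls below €30 million, the fixed cap would be the binding maximum under this reading.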
Determining the rule-makers
Until now, the AI industry has largely policed itself. Major corporations say they support government regulation, which Sam Altman, CEO of OpenAI, the organization behind ChatGPT, has called “critical” for mitigating potential risks. However, there is concern that these companies could prioritize profits over people if they are heavily involved in shaping the regulations, and they are likely to seek a close relationship with the lawmakers responsible for defining the rules.
Baroness Lane-Fox, the founder of Lastminute.com, emphasizes the significance of not solely relying on corporations for decision-making. She stresses the importance of involving civil society, academia, and individuals who are directly impacted by the various models and transformations brought about by AI. Their perspectives and insights are crucial in shaping a comprehensive and inclusive approach.
Taking swift action
Microsoft, a major investor in OpenAI, the maker of ChatGPT, envisions the technology as a means to alleviate mundane tasks in the workplace. While ChatGPT is capable of generating human-like text and responses, Mr. Altman emphasizes that it is fundamentally a tool rather than an autonomous entity.
Chatbots are designed to enhance workers’ productivity, and in certain sectors AI has proven to be a valuable aide and even a job creator. In other cases, however, AI implementation has resulted in job losses: BT, for instance, recently announced that 10,000 jobs would be replaced by AI. And ChatGPT has been in public use for only a little over six months.
Presently, large language models like ChatGPT can compose essays, assist in trip planning, and even pass professional exams, and their capabilities are advancing at an astonishing pace. Notably, Geoffrey Hinton and Prof Yoshua Bengio, two prominent figures considered AI “godfathers,” have cautioned about the immense potential for harm associated with this technology.
The Artificial Intelligence Act is not slated to take effect until after 2025, which EU technology chief Margrethe Vestager deems “way too late.” In the meantime, she is working with the US on a provisional voluntary code for the AI sector, aiming to have it ready within a matter of weeks.