Is the EU’s AI Act a force for good or bad?

Generative AI apps like ChatGPT are raising concerns about the impact of artificial intelligence on a range of issues, including disinformation as well as copyright over images, sound and text. – Copyright AFP / Julio Cesar AGUILAR

The AI Act (Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI), recently unveiled by EU lawmakers, has caused a stir in some circles. Sceptics, seeking a less regulated playing field, are voicing concerns over potential hurdles and innovation stifled by harsh financial penalties for non-compliance. Those wishing to see the more dangerous potentials of AI reined in are more welcoming of the proposals.

The Act proposes:

  • Safeguards agreed on general purpose artificial intelligence. 
  • Limitations on the use of biometric identification systems by law enforcement.
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities.
  • Right of consumers to launch complaints and receive meaningful explanations.
  • Fines ranging from 7.5 million euros or 1.5 percent of global turnover up to 35 million euros or 7 percent of turnover, depending on the infringement.

At its centre, the EU AI Act will adopt a risk-based approach, classifying AI systems into four different risk categories depending on their use cases:

  1. Unacceptable-risk,
  2. High-risk,
  3. Limited-risk,
  4. Minimal/no-risk.

The Act’s focus is on unacceptable-risk and high-risk AI systems, both of which received considerable attention in the EU Parliament’s and Council’s amendments.

On the side of the fence holding that the AI Act is a force for good is Rob Consalvo, Senior Director of Strategic Commercial Engagement at healthcare data company H1.

Consalvo takes the view that the proposed legislation will not stifle innovation: “Contrary to popular opinion, I believe the new AI Act will actually be a catalyst for progress, especially within healthcare and pharmaceuticals. I don’t view these regulations as obstacles; rather, they are the guardrails steering us toward responsible innovation and a future where humanity and AI will coexist harmoniously.”

Focusing on the need to maintain a strong ethical framework when implementing AI to scale, Consalvo adds: “Yes, there may be some growing pains involved with the Act, but we can’t just treat something that poses an existential threat to humans flippantly. It would be ethically reprehensible for any organization to wield the power of AI without due diligence.”

Drawing on medical practice, Consalvo offers an example: “For example, in the healthcare world, it would be appalling to learn that a pharma company is pushing a drug that it knows to be potentially dangerous. Regulations can help ensure AI output is accurate so we can avoid clinical trial and drug development missteps, wasted investments, and biased or misinterpreted data—all things that ultimately put patient safety at risk.”

Summing up, Consalvo presents his vision as one of regulation balancing business needs with the maintenance of an ethical framework: “To be clear, I believe that AI is the future and am an advocate for its responsible use. I actually think the regulations could go even further. At the very least, let’s commend an attempt to safeguard a future for humans who will be living and working alongside AI.”
