Dear TechAways readers,
On Wednesday, the EU presented its first AI legislation – a gamble to harness the many opportunities and address the challenges of AI in a future-proof manner that protects our fundamental rights. As Industry Commissioner Thierry Breton quipped during the press conference: “The little anthropomorphic robot Wall-E has not succeeded in making us forget the T-800 from Terminator…” I would add: nor Upgrade… (if you have not seen this sci-fi thriller yet, hang on to your seat).
The European Commission proposed a legal framework following a risk-based approach: the higher the risk, the stricter the rule. This will have far-reaching implications for tech firms that have already started to develop the technology, as well as for future innovation. For high-risk AI systems, strict conditions would have to be met before the product hits the market. What is a high-risk AI system? The Commission listed eight categories covering uses such as AI systems in recruitment procedures or in bank lending. If the risk threatens people’s safety, livelihoods or rights – code red. The AI system would be banned.
Thankfully, “social scoring” is verboten. Playing your music too loud in a public space or jaywalking will not disqualify you from basic rights such as buying a plane ticket or taking out a loan, according to the Commission’s new AI proposals. While it seems like an episode of Black Mirror, it is actually a global trend: social-style scores, rating one another on certain services, are already part of our daily lives, and China has taken it to another level with its social credit system. Also verboten is live facial identification in publicly accessible spaces for law enforcement purposes – in principle. As with all principles, there are exceptions in the fine print.
The question now is how much the proposed AI legislation will be amended or watered down by the European Parliament and Council as they try to reach an agreement on a final text. Heated discussions are expected on the definition of high-risk, exceptions to the list of banned AI applications and the degree of self-regulation for conformity assessments. It should be an interesting ride.
Wish to know more? Check out our quick recap of the AI package proposal with our AI in Your Pocket interactive timeline.
This is Sarah Cumming, Cambre’s senior tech policy and competition lead consultant, reporting. If you have any questions or simply wish to reach out, please contact me here.
Have a great read!
Sarah
“Wait for me!” – US Federal Trade Commission [MIT Technology Review]
Not wanting the EU to have all the AI-regulating fun, the US Federal Trade Commission announced this week that it would start going after companies that use and sell biased algorithms. As the FTC blog post warns, if an AI developer “tells clients that its product will provide ‘100% unbiased hiring decisions,’ but the algorithm was built with data that lacked racial or gender diversity”, then “the result may be deception, discrimination—and an FTC law enforcement action”. While the US and the EU have (or haven’t) gone after Big Tech in different ways over the last decade, this further exemplifies the shift in how governments are looking at future AI regulation.
The role of ethics in AI [Twitter]
The factors shaping how AI works in practice are vast and carry important ethical considerations. To ensure that societal biases are not perpetuated by technology, it is imperative that creators reflect on how algorithms are designed and on the consequences that may emerge once they are applied. Twitter’s newly announced Responsible ML (machine learning) initiative aims to tackle these ethical issues through analysis and learning. The 280-character platform also wants to actively engage with the public to improve its automated systems. While many are hesitant about AI, such biases often boil down to flawed coding (as discussed in the last edition of TechAways) and missing context. Twitter may not have all the answers, but at least it is taking clear steps.
The missing link 🔗 between humans and AI [Financial Times]
AI has been outpacing humans for many years now, but behind the stunning algorithms there is something that sets them apart from us – an inability to think about their own thinking. AI cannot know what it does not know, and studies run in numerous research labs show it. Researchers at University College London are studying the brain mechanisms that support human self-awareness and metacognition to see if they can make algorithms less ‘over-confident’. But do we want introspective robots? The good news is that this research could increase trust in AI systems and their accountability, and create a world in which we better understand our own and our robots’ minds.
AI-based solution for doctors’ bad handwriting! 🩺 [Wired]
Microsoft is planning to grow its influence in healthcare by acquiring Nuance, a leader in healthcare artificial intelligence. Nuance created software that listens to doctor-patient conversations and transcribes them into digitised medical notes in real time. The voice transcription technology, used by more than 300,000 clinicians and 10,000 healthcare organisations worldwide, understands the specialised language of medicine, which sets it apart from existing assistants like Siri and Alexa. The company also developed a platform for building chatbots, voice-based authentication technology, and tools for monitoring call centre conversations. With this acquisition, and its own “Cloud for Healthcare”, Microsoft could make a big push into the healthcare software business, and hopefully make us less reliant on doctors’ notoriously bad handwriting!
Sarah’s bio:
I have been at Cambre for two years, focusing on tech, competition and trade issues. A qualified lawyer, I practised competition law and international litigation before moving to public affairs. A kiwi-frog (🇳🇿/🇫🇷), I always look forward to the next Rugby World Cup. Right now, I am counting the days to the reopening of cafés in Brussels: only two weeks to go!
In case you haven’t had enough:
The new lawsuit that shows facial recognition is officially a civil rights issue [MIT Technology Review]
Groundbreaking effort launched to decode whale language [National Geographic]
Five ways AI can democratise African healthcare [Financial Times]
Intel’s Dystopian Anti-Harassment AI Lets Users Opt In for ‘Some’ Racism [VICE]
Artificial intelligence bias can be countered, if not erased [Financial Times]
Home Office algorithm to detect sham marriages may contain built-in discrimination [Bureau of Investigative Journalism]
It’s creepy that AI is teaching workers to be more human [Financial Times]
Google Is Poisoning Its Reputation with AI Researchers [The Verge]