Hey everyone, it's Natalia. Europe is already the world's tech privacy cop. Now it might become the AI cop too.

Companies using artificial intelligence in the EU could soon be required to get audited first, under new rules set to be proposed by the European Union as soon as next week. The regulations were partly sketched out in an EU white paper last year and aim to ensure the responsible application of AI in high-stakes situations like autonomous driving, remote surgery or predictive policing. Officials want to ensure that such systems are trained on privacy-protecting and diverse data sets.

The proposal comes as AI capabilities are woven into everything from online shopping to music recommendations, and even fever detection and other Covid-screening technologies rolled out during the pandemic. The technology is advancing innovation, but it's also increasingly setting off alarms around discrimination, misuse, privacy and other issues.

Facial recognition is particularly controversial. China has been accused of human rights abuses in Xinjiang, where it's targeted the Uighur population using scanning systems. Meanwhile, some civil liberties groups warn of the dangers of discrimination or mistaken identities when law enforcement uses the technology, which sometimes misidentifies women and people with darker skin tones.

Dozens of digital rights groups are urging the EU to ban certain uses of facial recognition tools in Europe, pointing to increased use of the technology by public and private actors despite the bloc's strict privacy rules. "We are deeply concerned about the dramatic increase in the deployment of biometric technologies all across Europe that pave the way for indiscriminate mass surveillance," said Friederike Reinhold, a senior policy advisor at AlgorithmWatch. "We see an urgent need to draw red lines here."

The EU could be leaning toward answering those pleas.
In a March letter to members of the European Parliament obtained by Bloomberg, European Commission President Ursula von der Leyen spelled out plans for mandatory rules for high-risk AI, adding that "in the case of applications that would be simply incompatible with fundamental rights, we may need to go further."

But any new rules could clash with another major EU goal: boosting its prowess in advanced technologies like AI to better compete with the U.S. and China. The EU is already planning direct equity investments in early-stage AI and other technology startups via its 3 billion-euro venture capital fund. If the EU's rules get passed, some systems could face delays as they undergo the checks. It's also unclear whether Europe's member states have enough AI talent on hand to routinely carry out sophisticated inspections of the technology involved.

EU officials seem to be making the bet that slowing down AI development will ultimately yield a safer, more sustainable technology. The risk, of course, is that someone else will make it first. —Natalia Drozdiak