Hi folks, it's Shelly. Providers of internet infrastructure are not usually as controversial as other corners of the tech industry like social networks or facial recognition companies. But here we are in 2019, where Cloudflare Inc., a U.S. internet firm that helps websites load faster and protects them from cyber attacks, vaulted into the headlines this week after cutting off service to a message board allegedly used by the man suspected of killing 22 people in El Paso, Texas.
"The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths," Cloudflare CEO Matthew Prince wrote in a
blog post explaining his decision. But he went on to explain the decision was still a difficult one. "We continue to feel incredibly uncomfortable about playing the role of content arbiter," he wrote.
If that language sounds familiar, it's because Facebook founder Mark Zuckerberg used the exact same word in his
2016 opus on the ubiquitous social media site's role in promulgating fake news and influencing elections. He said that Facebook didn't "want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties."
Facebook's thinking has surely evolved since then, as it takes a more active role in the content shared across its platforms. But I can't tell you how often I still hear from tech executives that they're just making the product, and that it's up to users how to use it. Various factions argue that the products weren't designed to be misused and that it's up to society, governments, or other groups to police any misuse.
Take facial recognition technology startup Megvii, a rising AI star in China, which told me earlier this year that it
requires clients not to "weaponize its technology and solutions and not to use them for illegal purposes, including the infringement of human rights." At the same time, Megvii sells its image recognition software through third-party distributors and can't be 100% sure how it is put to use, as evidenced by the discovery of inoperable Megvii code in a mobile app used by police in China to track minority citizens. The company said it doesn't maintain any relationship with the database and has no knowledge of why its code appeared in the police app in the first place.
The uncomfortable reality is that tech companies can no longer just build a platform and claim ignorance as to how it is used. They have to take ownership and responsibility for
how people are using the technology they build and be brave enough to do something about illegal or dangerous use. That goes for cloud providers hosting potentially hateful content and AI companies whose algorithms could fall into the wrong hands. It also applies to YouTube and other video sites that have
ignored the need to protect children using their services rather than working to make their sites safer places to hang out.
The tech industry could learn a thing or two from other industries that have grappled with similar controversies. For example, electronics and clothing makers in recent years have tried to get more control over the treatment of workers who make their wares, in response to illegal child labor, fatal accidents and suicides deep in their supply chains. Likewise, financial services companies now have to comply with so-called know-your-customer, or KYC, rules that aim to prevent money laundering, bribery and financing for other illegal acts. Tech companies shouldn't be expected to root out all the problems themselves or stop mass shootings, but the responsibility has to start somewhere.