PLUS: The latest AI and tech news.
By Jennifer Conrad | 06.21.21

The tech industry is scrambling to constrain the dark side of text-based artificial intelligence, a technology with enormous potential that can also spread disinformation and perpetuate biases, writes Khari Johnson. There's a lot riding on the outcome: Big tech companies are moving rapidly to offer services based on these large language models, which can interpret or generate text, and companies like Google and Microsoft are increasingly incorporating them into their products. In a potentially more ominous development, groups are working on open source versions of these language models that could exhibit the same weaknesses and spread them more widely. Text generated by large language models is coming ever closer to language that looks or sounds like it came from a human, yet it still fails at reasoning tasks that almost any person can handle.

How to Fix It
Some researchers are injecting positive text about marginalized groups into large language models. Others hire contractors to say awful things in conversations with language models, provoking them to generate hate speech, profanity, and insults; humans then label that output as safe or unsafe, and those labels help train AI to identify toxic speech. Read more about their efforts here.

After years of stalled attempts to curb surveillance technologies, Baltimore is set to enact one of the nation's most stringent bans on facial recognition technologies, writes Sidney Fussell. But it comes with complicated caveats: it would last for only one year, certain private uses of the tech would be illegal, and the city's police department would be exempt. A permanent ban failed to win city council approval last year. The new effort, which awaits the mayor's signature, would establish a task force to produce regular reports on the purchase of surveillance tools, describing both their cost and effectiveness. "It was important to begin to have this conversation now over the next year to basically hash out what a regulatory framework could look like," says city councilmember Kristerfer Burnett, who introduced the ban.

Why Now?
Baltimore's political landscape may look very different in a year. Since 1860, the police department has been largely controlled by the state, but residents could vote as soon as next year on whether to return control of the force to the city. Read more about the ban here.

Mobile apps typically send user data to corporate computers known as the cloud for tasks like transcribing speech or suggesting message replies. Now Apple and Google say smartphones are smart enough to do some crucial and sensitive machine learning tasks on their own, Tom Simonite reports. Earlier this month, Apple said its virtual assistant Siri will transcribe speech without tapping the cloud in some languages on recent and future iPhones and iPads. Last month, Google said the latest version of its Android operating system has a feature dedicated to secure, on-device processing of sensitive data; initial uses include powering the Smart Reply feature built into the company's mobile keyboard, which suggests responses to incoming messages.

Why You Should Pay Attention
Apple and Google both say on-device machine learning offers more privacy and faster apps. But keeping data on devices also aligns with the tech giants' long-term interest in binding consumers to their ecosystems, and when people hear their data can be processed more privately, they may become more willing to share it.
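For developers, the shift is visible in public APIs. Here's a minimal sketch, not Apple's actual Siri pipeline, using the Speech framework's on-device option available since iOS 13; the file path and locale are placeholder assumptions, and a real app would first need to request speech-recognition authorization.

```swift
import Speech

// Minimal sketch: transcribe a prerecorded audio file without sending it
// to Apple's servers. The locale and file path below are placeholders,
// not values from the story.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
let request = SFSpeechURLRecognitionRequest(url: URL(fileURLWithPath: "/path/to/audio.m4a"))

// Refuse the cloud fallback: if the device lacks a local model for this
// language, the task fails rather than uploading the audio.
if recognizer.supportsOnDeviceRecognition {
    request.requiresOnDeviceRecognition = true
}

recognizer.recognitionTask(with: request) { result, error in
    if let result = result, result.isFinal {
        print(result.bestTranscription.formattedString)
    } else if let error = error {
        print("Transcription failed: \(error.localizedDescription)")
    }
}
```

Setting requiresOnDeviceRecognition is the developer-facing version of the tradeoff described above: transcription stays private and works offline, but only for the languages and devices that ship a local model.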
Read more about the tradeoffs here.