By Jennifer Conrad | 09.20.21

Some software developers are now letting artificial intelligence help write their code. But as Will Knight reports, they're finding that AI can be just as flawed as humans.

In June, Microsoft subsidiary GitHub released a beta version of a program called Copilot that uses AI to assist programmers. Start typing a command, a database query, or a request to an API, and the program will attempt to guess your intent and write the rest. But early users noticed errors creeping into the code in different ways. Researchers at NYU recently analyzed code generated by Copilot and found that, for certain tasks where security is crucial, the code contains security flaws 39.9 percent of the time.

Humans Still Needed

Despite such flaws, Copilot and similar AI-powered tools, such as OpenAI's Codex, may herald a sea change in the way software developers write code. AI could automate more mundane work while developers spend their time vetting and tweaking the program's suggestions. Read about the potential for AI-enabled coding.

Amid increasing internet control, surveillance, and censorship in Iran, a new Android app aims to give Iranians a way to speak freely, writes Lily Hay Newman. Nahoft, which means "hidden" in Farsi, is an encryption tool that turns up to 1,000 characters of Farsi text into a jumble of random words. You can send this mélange to a friend over any communication platform—Telegram, WhatsApp, Google Chat, etc.—and they can then run it through Nahoft on their device to decipher what you've said. In addition to generating coded messages, the app can also encrypt communications and embed them imperceptibly in image files, a technique known as steganography (a toy sketch of the general idea appears at the end of this digest). Recipients then use Nahoft to inspect the image file and extract the hidden message.

Secret Code

There are even ways to use Nahoft for secure communication without an internet connection. This is significant because the Iranian regime has repeatedly imposed near-total internet blackouts in particular regions or across the entire country. Read about Nahoft—and why the stakes were so high for the designers to get its security features right.

Last week, The Wall Street Journal ran a series of blockbuster articles about Facebook based on leaked internal reports and presentations. Among other things, Facebook whitelisted prominent users, shielding them from its usual content moderation system. The company also reportedly played down findings that Facebook-owned Instagram was making some teenage girls feel worse about their looks.

As Steven Levy writes in his subscriber-only newsletter, Plaintext, Facebook employs some of the smartest, most dedicated data scientists in the country. "The stalwarts among them believe their work helps unearth truths about social media that allows the company to address the problems, improving the lives of millions of users," Levy writes. "Others get satisfaction in just making Facebook work better." Although these data analysts, statisticians, and social scientists work to make the platform a healthier place, when the changes they suggest could hurt Facebook's business, business considerations usually win out.

A Faustian Bargain?

The series of leaks reported by the Journal could be a sign of growing dissatisfaction with that tradeoff. Jeff Hammerbacher, who was hired in 2006 to build Facebook's data-mining infrastructure, later told Bloomberg, "The best minds of my generation are thinking about how to make people click ads."
Read the rest of Levy's newsletter, including a look back at what Facebook brass said users wanted to see in 2015.
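For readers curious about the steganography technique mentioned in the Nahoft item, here is a minimal Python sketch of the classic least-significant-bit approach. It is not Nahoft's actual algorithm (the article does not describe one); the function names `embed` and `extract` and the toy "pixel" data are illustrative assumptions. It only shows the core idea: a message can ride in the lowest bit of each byte of image data, where the change is visually imperceptible.

# Toy LSB steganography sketch; NOT Nahoft's real scheme.
def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the low bits of `pixels`, one bit per byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract(pixels: bytes, length: int) -> bytes:
    """Recover `length` bytes previously hidden with `embed`."""
    msg = bytearray(length)
    for i in range(length * 8):
        msg[i // 8] |= (pixels[i] & 1) << (i % 8)
    return bytes(msg)

# Round trip on stand-in "pixel" bytes:
pixels = bytearray(range(256)) * 4
secret = "پیام".encode("utf-8")  # a short Farsi message, "message"
stego = embed(pixels, secret)
assert extract(stego, len(secret)) == secret

A real tool like Nahoft would layer encryption on top (so an extracted payload is still unreadable) and spread bits less predictably; this sketch omits both for brevity.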