PLUS: The latest AI and tech news.
By Jennifer Conrad | 09.16.21

Good morning! I've always been fascinated by the idea that monks typed up many early websites and filled in the databases that brought centuries of knowledge online. Today, the unseen laborers behind the latest world-changing technology, artificial intelligence, are low-paid gig workers who tag the information in large data sets. Their work sometimes introduces errors that can lead to flawed models, and that's just one reason to use caution when labeling some AI models as "foundations."

As artificial intelligence plays an ever-larger role in our everyday lives, researchers at Stanford are considering what counts as foundational AI technology. As Will Knight reports, the Center for Research on Foundation Models would designate certain models as foundational: models trained on huge amounts of data and then used for many applications, including search and content moderation. The center will research how to improve those models, including by rooting out biases built into the systems. For example, researchers might scrutinize large language models such as GPT-3, which can answer questions or generate text from a prompt.

Not everyone is convinced. "These models are really castles in the air; they have no foundation whatsoever," Jitendra Malik, a professor at UC Berkeley who studies AI, told workshop attendees in a video discussion to celebrate the center's launch. "The language we have in these models is not grounded; there is this fakeness, there is no real understanding." Read why Stanford's proposal to label models as foundational is dividing the research community.

WIRED's reporting on the origins of some of today's most controversial technologies:

Research at the University of Texas at Austin in the 1960s prefigured technological breakthroughs in facial recognition. Yet this early, foundational work on the subject is almost entirely unknown.