A human composed these words—honest!—but artificial intelligence software can write pretty well too. Last year the AI company OpenAI launched a text-generation technology called GPT-3 that acquired impressive fluency by digesting billions of words from the internet. Poems, jokes, and Harry Potter parodies created with its help delighted Twitter and convinced some investors that automated writing will soon be big business.

A text-based adventure game called AI Dungeon seemed to demonstrate that exciting potential by offering a new spin on the choose-your-own-adventure format: Type whatever you want your character to do, then algorithms fill in the next part of the story, often with surprising results.

Last week, we learned that AI Dungeon also shows the dark side of text generation. From its early days, some fans liked the game because it was fluent at creating stories with adult themes. But OpenAI recently discovered that some players were prompting the writing algorithms to generate sexually explicit text depicting children. Latitude, the game's publisher, swiftly added a new moderation system that combined automatic filters with human reviews. But many players said the filter was glitchy, and some disliked the idea of people reviewing AI-enhanced fictional adventures they had created in private. Latitude now faces a revolt from its own user base and tricky questions about how to police AI-assisted creativity.

Read more on the mess Latitude now finds itself in, and what it tells us about the limits of today's artificial intelligence technology, here.

Tom Simonite | Senior Writer, WIRED