When people search for information online today, they are presented with an array of options, and they can judge for themselves which results are reliable. A chat AI like ChatGPT removes that “human assessment” layer and forces people to take results at face value, says Chirag Shah, a computer science professor at the University of Washington who specializes in search engines. People might not even notice when these AI systems generate biased content or misinformation, and then end up spreading it further, he adds.

When asked, OpenAI was cryptic about how it trains its models to be more accurate. A spokesperson said that ChatGPT is a research demo and that it’s updated on the basis of real-world feedback. But it’s not clear how that will work in practice, and accurate results will be crucial if Microsoft wants people to stop “googling” things.

In the meantime, it’s more likely that we are going to see apps such as Outlook and Office get an AI injection, says Shah. ChatGPT’s potential to help people write more fluently and more quickly could be Microsoft’s killer application. Language models could be integrated into Word to make it easier for people to summarize reports, write proposals, or generate ideas, Shah says. They could also give email programs and Word better autocomplete tools, he adds. Microsoft has already said it will use OpenAI’s text-to-image generator DALL-E to create images for PowerPoint presentations too.

Roomba testers feel misled after intimate images ended up on Facebook

Late last year, we published a bombshell story about how sensitive images of people collected by Roomba vacuum cleaners ended up leaking online. These people had volunteered to test the products, but it had never remotely occurred to them that their data could end up leaking in this way.