We’ve all been there: thumbs hovering over the keyboard, staring at a suggested word that somehow sums up exactly what we wanted to say. So we tap it. But a new study suggests those little taps may be doing more than just saving us a few seconds.
Research from Cornell Tech published this week in Science Advances found that AI-powered autocomplete suggestions change not only how you write, but how you think. And you may not even notice it happening.
What did the researchers actually find?
The researchers conducted two large-scale experiments with over 2,500 participants, asking them to write short essays on controversial social issues such as the death penalty, fracking, GMOs, and voting rights for felons.
Some participants received autocomplete suggestions that were covertly biased toward a specific position, generated by large language models from the GPT-3 and GPT-4 families. Others received no suggestions at all.
The result? People who wrote with the biased AI gradually shifted toward the AI’s positions. Not because they were persuaded by arguments. Not because they read something compelling. Simply because their phones kept finishing their thoughts for them.
Even knowing about the trick didn’t break the magic
Now here’s the part that should make you put your phone down for a second. The researchers told some participants up front that the AI was biased, a kind of “don’t say we didn’t warn you” disclaimer, and questioned others about it afterward. In most misinformation studies, approaches like these act as mental vaccines. This time, they did nothing.
“Their attitudes toward the issues still shifted,” said lead author Mor Naaman, who also noted that the reach of autocomplete has exploded: Gmail now offers to write entire emails on your behalf.
So the next time your phone suggests that you “fully support” something, maybe take a second look at that little blue word. Your opinion could be just a tap away from becoming someone else’s.