The team worked with 150 Republicans and 149 Democrats. Each person used three versions of ChatGPT: a base model, one configured with a liberal bias, and one with a conservative bias. Tasks included taking positions on policy issues such as covenant marriage and multifamily zoning, and allocating hypothetical city funds across categories such as education, public safety, and veterans' services.
Before using ChatGPT, each participant rated how strongly they felt about each issue. After exchanging between three and twenty messages with the bot, they rated the issues again. The team found that even a handful of replies, typically around five, was enough to start shifting participants' views: those who spoke with the liberal bot moved left, and those who spoke with the conservative bot moved right.
The knowledge that people can be persuaded this way will only increase the incentive for national leaders, political operatives, and others with a vested interest in public opinion to steer people toward politically biased chatbots. (I warned back in January about the coming rise of politically biased AI.)