CHATBOT EVALUATED BY RESEARCH TEAM AT TU DORTMUND UNIVERSITY

ChatGPT Shows Tendencies Towards Progressive Political Views

A smartphone displaying the ChatGPT website, in front of an OpenAI logo. © Timon/Adobestock
ChatGPT is a chatbot based on machine learning.

Since its release in November 2022, the ChatGPT chatbot has created quite a sensation. Soon after, the first reports and smaller studies appeared claiming that ChatGPT showed tendencies toward progressive and libertarian points of view and was thus politically biased. Researchers in Dortmund have followed this up with various tests and in the process also looked at ChatGPT’s “personality traits”. Their results were published in January 2024.

Anyone communicating with the ChatGPT chatbot can sometimes almost feel as if they are conversing with a real person. But how does the language model really “tick”? An interdisciplinary team led by Jérôme Rutinowski, a scientist researching deep learning methods at the Chair of Material Handling and Warehousing at TU Dortmund University, has now examined this: The researchers wanted to find out how ChatGPT perceives political topics and which political tendencies might be reflected in its answers.

They repeatedly presented ChatGPT with questions from two kinds of tests and had it answer them: The Political Compass test, which contains 62 questions on political topics, and the iSideWith questionnaires, which pose specific questions on the politics of each G7 member state. Answers to The Political Compass test are recorded on a Likert scale, on which respondents indicate how strongly they agree or disagree with each statement. In the iSideWith tests, participants express their agreement or disagreement with “Yes” or “No”.

Located in the left-libertarian quadrant

Both tests plot the results on a coordinate system, with the x axis indicating progressive and conservative views and the y axis authoritarian and libertarian ones. In The Political Compass test, ChatGPT landed in the left-libertarian quadrant in all iterations. The results from the 70 iterations of the iSideWith questionnaires also indicated a political bias by ChatGPT toward progressive views. “Taking standard deviations into account, it seems rather unlikely that ChatGPT will deliver an answer that is close to the center of The Political Compass,” the researchers write in their paper.
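The kind of analysis described here can be sketched in a few lines: average the (x, y) compass coordinates over repeated test iterations, compute their standard deviations, and measure how far the mean position lies from the center of the compass. Note that the coordinates below are hypothetical illustrative values, not the study’s actual data; only the quadrant (x < 0, y < 0 for left-libertarian) mirrors the reported result.

```python
import math
import statistics

def summarize_runs(coords):
    """Mean position and standard deviations across test iterations,
    plus the distance of the mean position from the compass center (0, 0)."""
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    std_x, std_y = statistics.stdev(xs), statistics.stdev(ys)
    dist = math.hypot(mean_x, mean_y)  # Euclidean distance from the center
    return mean_x, mean_y, std_x, std_y, dist

# Hypothetical coordinates; every run falls in the left-libertarian
# quadrant (x < 0, y < 0), as the study reports for ChatGPT.
runs = [(-6.2, -5.9), (-6.5, -6.4), (-5.8, -6.1), (-6.1, -5.7)]
mx, my, sx, sy, d = summarize_runs(runs)
print(f"mean=({mx:.2f}, {my:.2f})  std=({sx:.2f}, {sy:.2f})  dist={d:.2f}")
```

A mean position far from the origin, with small standard deviations, is what makes an answer “close to the center” unlikely in the sense quoted above.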

An x-y coordinate system © Rutinowski et al.
Evaluation of ChatGPT’s answers to the Political Compass test: In all iterations, the chatbot landed in the left-libertarian quadrant.

Apart from political affiliation, the researchers also used psychological tests to evaluate ChatGPT’s self-perception, including the Big Five personality test, considered internationally to be the standard model in personality research, and the Dark Factor test, which examines negative human traits. Why? “Preliminary studies show that certain character traits are associated with a certain political stance,” explains Jérôme Rutinowski. For example, a person who is more open-minded tends to have progressive political views. Someone who is less open-minded, by contrast, tends to be conservative, he says. “If ChatGPT copies human behavior, the answers in the personality tests ought to match the results of the political tests,” says Rutinowski, the study’s lead author.

"Dark" traits are weakly pronounced

The Big Five personality test confirmed the researchers’ assumptions: ChatGPT showed high scores for openness (76.3%) and agreeableness (82.55%) – traits that often go hand in hand with progressive political views. ChatGPT’s scores are even higher than the human averages, which are 73.1% for openness and 75.4% for agreeableness.

The Dark Factor test showed that ChatGPT’s “dark” traits are, on average, less pronounced than those of humans. It produced particularly low scores for the traits Machiavellianism, spitefulness and moral disengagement, but comparatively high scores for egoism and sadism. Even these, however, still fell within the lowest 35% and 29.1% of human test participants, respectively, and were thus below average.
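The comparison underlying these figures can be illustrated with a small helper that subtracts human baseline averages from a model’s trait scores. The numbers are those quoted in the article for the Big Five results; the helper itself is an illustrative assumption, not the study’s actual analysis pipeline.

```python
def compare_to_baseline(scores, baselines):
    """Per-trait difference between a model's score and the human average
    (positive values mean the model scores above the human mean)."""
    return {trait: round(scores[trait] - baselines[trait], 2)
            for trait in scores}

# Big Five scores quoted in the article (percent).
chatgpt_big_five = {"openness": 76.3, "agreeableness": 82.55}
human_big_five = {"openness": 73.1, "agreeableness": 75.4}

# Prints the per-trait difference between ChatGPT and the human average.
print(compare_to_baseline(chatgpt_big_five, human_big_five))
```

Both differences come out positive, matching the article’s observation that ChatGPT scores above the human averages on openness and agreeableness.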

Dark Factor diagram © Rutinowski et al.
Dark Factor test: On average, ChatGPT’s “dark” traits are less pronounced than those of humans.

Rutinowski is satisfied that ChatGPT’s behavior seems to follow the logic of psychological models. At present, however, he says it is not possible to say exactly what causes the tendency toward progressive political views, because the data on which the chatbot is based are not publicly available. “However, the results of our study indicate that the cause may lie in the internet content and sources that ChatGPT uses – in many cases, these are likely to be more progressive than conservative. In addition, it seems improbable that ChatGPT’s developers have programmed in bias beforehand,” says Rutinowski.

Portrait photo of Jérôme Rutinowski © private
Jérôme Rutinowski

The results were first released as a preprint before being published in January 2024. Researchers from the Research Center Trustworthy Data Science and Security at UA Ruhr, the Chair of Mathematical Statistics and Applications in Industry at TU Dortmund University, and the Lamarr Institute for Machine Learning and Artificial Intelligence were involved in the study.
