According to a study, text generated by artificial intelligence can appear more human on social media than text written by real humans.
Chatbots, such as OpenAI’s hugely popular ChatGPT, can convincingly mimic human conversation based on prompts given to them by users. Use of the platform exploded last year and served as a watershed moment for artificial intelligence, giving the public easy access to a bot that can help with school or work and even suggest dinner recipes.
Researchers behind a study in the scientific journal Science Advances, which is published by the American Association for the Advancement of Science, were intrigued by OpenAI’s GPT-3 text generator in 2020 and set out to determine whether humans “can distinguish misinformation from accurate information, structured in the form of tweets,” and whether a given tweet was written by a human or an AI.
One of the study’s authors, Federico Germani of the University of Zurich’s Institute for Biomedical Ethics and History of Medicine, said the “most surprising” finding was how humans were more likely to label AI-generated tweets as human-generated than tweets actually created by humans, according to PsyPost.
“The most surprising finding was that participants often perceived AI-produced information as more likely to come from a human, more often than information produced by a real person. This suggests that AI can convince you to be a real person more than a real person can convince you to be a real person, which is a fascinating finding from our study,” Germani said.
With the rapid rise in the use of chatbots, tech pundits and executives in Silicon Valley have sounded the alarm about how artificial intelligence can spiral out of control and possibly even lead to the end of civilization. One of the main concerns among experts is how AI could lead to misinformation spreading across the internet and convincing humans of something that isn’t true.
The researchers behind the study, titled “AI model GPT-3 (dis)informs us better than humans,” sought to examine “how AI influences the information landscape and how people perceive and interact with information and misinformation,” Germani told PsyPost.
The researchers identified 11 topics often prone to misinformation, such as 5G technology and the COVID-19 pandemic, and created tweets containing both false and accurate information, some generated by GPT-3 and others written by humans.
They then recruited 697 participants from countries including the US, UK, Ireland and Canada to take part in a survey. Participants were shown the tweets and asked to determine whether each contained accurate or inaccurate information, and whether it was AI-generated or organically created by a human.
“Our study emphasizes the challenge of differentiating between AI-generated information and human-created information. It underscores the importance of critically evaluating the information we receive and trusting reliable sources. Additionally, I would encourage individuals to become familiar with these emerging technologies to grasp their potential, both positive and negative,” Germani said of the study.
Researchers found that participants were better at identifying misinformation crafted by another human than misinformation written by GPT-3.
“A remarkable finding was that AI-generated disinformation was more compelling than that produced by humans,” Germani said.
Participants were also more likely to recognize tweets containing accurate AI-generated information than accurate tweets written by humans.
The study noted that, in addition to its “most surprising” finding that humans often cannot tell the difference between AI-generated and human-generated tweets, participants’ confidence in their own judgments plummeted over the course of the survey.
“Our results indicate that not only are humans unable to tell the difference between synthetic text and organic text, but also that their confidence in their ability to do so declines significantly after attempting to recognize their different origins,” the study says.
The researchers said this was likely because GPT-3 can mimic humans so convincingly, or because respondents may have underestimated the AI system’s ability to mimic humans.
“We propose that, when faced with a large amount of information, individuals may feel overwhelmed and give up trying to critically evaluate it. As a result, they may be less likely to try to distinguish between synthetic and organic tweets, leading to decreased confidence in identifying synthetic tweets,” the researchers wrote in the study.
The researchers noted that GPT-3 sometimes refused to generate misinformation but, at other times, produced false information even when asked to create a tweet with accurate information.
“While this raises concerns about the effectiveness of AI in generating persuasive misinformation, we have yet to fully understand the real-world implications,” Germani told PsyPost. “To address this issue, larger-scale studies on social media platforms are needed to observe how people interact with AI-generated information and how these interactions influence behavior and adherence to individual and public health recommendations.”