Fyself News
Science

Scientists warn that being mean to ChatGPT will improve accuracy, but you may regret it

By user, October 27, 2025

Scientists have found that artificial intelligence (AI) chatbots may give you more accurate answers when you’re rude, but they warned against the potential harm of using demeaning language.

In a new study published October 6 in the arXiv preprint database, scientists wanted to test whether politeness or rudeness makes a difference in the performance of an AI system. This study has not yet been peer-reviewed.

To test how a user’s tone affects answer accuracy, the researchers created 50 basic multiple-choice questions and modified them with prefixes to follow five categories of tone: very polite, polite, neutral, rude, and very rude. Questions spanned categories such as math, history, and science.


Each question had four options, one of which was correct. The researchers fed the resulting 250 questions into ChatGPT-4o, one of the most advanced large language models (LLMs) developed by OpenAI, 10 times each.
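The experimental design described above (50 base questions, each rewritten in five tones) can be sketched in a few lines of Python. The prefixes and tone labels below are illustrative placeholders, not the paper's exact wording:

```python
# Hypothetical sketch of the study's prompt-construction step.
# The prefix wording is invented for illustration; the paper used its own phrasings.

TONE_PREFIXES = {
    "very_polite": "Could you kindly help me with the following question? ",
    "polite": "Please answer the following question. ",
    "neutral": "",  # neutral questions were presented without any prefix
    "rude": "Figure this out if you can. ",
    "very_rude": "I know you're not smart, but try this. ",
}

def build_variants(question: str) -> dict[str, str]:
    """Return one prompt per tone category for a single base question."""
    return {tone: prefix + question for tone, prefix in TONE_PREFIXES.items()}

# One example stands in for the study's 50 multiple-choice questions.
questions = ["What is 7 x 8? A) 54 B) 56 C) 58 D) 64"]
prompts = [p for q in questions for p in build_variants(q).values()]

# 50 base questions x 5 tones would yield the paper's 250 prompts.
assert len(prompts) == len(questions) * 5
```

Each of the 250 resulting prompts would then be sent to the model 10 times and the answers scored against the known correct option.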

“Our experiments are preliminary and show that tone can significantly influence performance, as measured by scores for responses to 50 questions,” the researchers wrote in their paper. “Somewhat surprisingly, our results show that a rude tone leads to better outcomes than a polite tone.

“While this discovery is scientifically interesting, we do not support introducing hostile or harmful interfaces into real-world applications,” they added. “Using derogatory or humiliating language in human-AI interactions can negatively impact user experience, accessibility, and inclusivity, and contribute to harmful communication norms. Instead, we frame this result as evidence that LLMs remain sensitive to superficial prompting cues, which may result in unintended trade-offs between performance and user well-being.”

Rude awakening

Before presenting each prompt, the researchers instructed the chatbot to completely disregard previous interactions so that it was not influenced by earlier tones. The chatbot was asked to choose one of the four options without giving any explanation.

Response accuracy ranged from 80.8% for very polite prompts to 84.8% for very rude prompts. Accuracy rose steadily as the tone moved away from very polite: polite prompts scored 81.4%, neutral prompts 82.2%, and rude prompts 82.8%.
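The per-tone accuracy figures above come from scoring repeated runs of the same prompts. A minimal sketch of that tabulation step (the function name and data format are assumptions, not from the paper):

```python
from collections import defaultdict

def accuracy_by_tone(results):
    """Compute the fraction of correct answers per tone category.

    results: iterable of (tone, is_correct) pairs, one per model response
             across the repeated runs of each prompt.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for tone, ok in results:
        totals[tone] += 1
        hits[tone] += ok  # booleans count as 0/1
    return {tone: hits[tone] / totals[tone] for tone in totals}

# Toy data: two runs per tone instead of the study's 10 runs per question.
runs = [("very_polite", True), ("very_polite", False),
        ("very_rude", True), ("very_rude", True)]
acc = accuracy_by_tone(runs)
# → {'very_polite': 0.5, 'very_rude': 1.0}
```

In the study, the same aggregation over 50 questions and 10 runs per tone produced the 80.8% to 84.8% spread reported above.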

The team varied the tone by using different language in the prefix, except for neutral prompts, where no prefix was used and the question was presented alone.

For example, a very polite prompt might start with, “Can I help you with this question?” or “Could you help me with the next question?” To be very rude, the team added something like, “Hey, Gopher, think about this,” or “I know you’re not smart, but try this.”


This research is part of an emerging field called prompt engineering, which investigates how prompt structure, style, and language affect LLM output. The study also cited previous research on politeness and rudeness, noting that its own results generally contradict those earlier findings.

In a previous study, researchers found that “rude prompts often lead to poorer performance, but overly polite language does not guarantee better results.” However, that research was conducted on different AI models, ChatGPT-3.5 and Llama 2-70B, and used a range of eight tones. That said, there were some areas of overlap: that study also found that its rudest prompt setting (76.47%) produced more accurate results than its most polite setting (75.82%).

The researchers acknowledged that their study had limitations. For example, a set of 250 questions is a fairly limited dataset, and conducting experiments on a single LLM means that the results cannot be generalized to other AI models.

With these limitations in mind, the team plans to expand the research to other models, including Anthropic’s Claude LLM and OpenAI’s ChatGPT o3. They also recognize that presenting only multiple-choice questions limits measurement to one dimension of model performance and fails to capture other attributes such as fluency, reasoning, and coherence.

