Startups

Anthropic Revises Claude’s ‘Constitution’ to Suggest Chatbot Awareness

By user | January 21, 2026
On Wednesday, Anthropic released a revised version of the Claude Constitution, a living document that provides a “holistic” explanation of “the context in which Claude operates and the kind of existence we want him to have.” The document was released alongside Anthropic CEO Dario Amodei’s appearance at the World Economic Forum in Davos.

Anthropic has long differentiated itself from its competitors through a system it calls “Constitutional AI.” It’s a system in which the company’s chatbot, Claude, is trained using specific ethical principles rather than human feedback. Anthropic first published those principles, the Claude Constitution, in 2023. The revised version retains most of the same principles, but adds nuance and detail regarding ethics and user safety, among other things.

When Claude’s Constitution was first published about three years ago, Anthropic co-founder Jared Kaplan described it as an “AI system [that] supervises itself based on a specific list of constitutional principles.” Anthropic said these principles guide “a model of normative behavior enshrined in the Constitution” and in doing so “avoid harmful or discriminatory outcomes.” An earlier policy memo from 2022 states more bluntly that Anthropic’s system works by training an algorithm on a list of natural-language instructions (the aforementioned “principles”), which together make up what Anthropic calls the software’s “constitution.”

Anthropic has long sought to position itself as an ethical (some might argue boring) alternative to more aggressively disruptive and controversial AI companies such as OpenAI and xAI. To that end, the new constitution announced Wednesday is fully consistent with its brand, giving Anthropic an opportunity to portray itself as a more inclusive, restrained, and democratic business. Anthropic says the 80-page document is divided into four parts, which represent the chatbot’s “core values.” Those values are:

  • Be “mostly safe.”
  • Be “broadly ethical.”
  • Comply with Anthropic’s guidelines.
  • Be “really useful.”

Each section of the document details what each of these specific principles means and how they (theoretically) influence Claude’s behavior.

Anthropic says in its safety section that its chatbot is designed to avoid the kinds of issues that have plagued other chatbots, and to direct users to appropriate services if evidence of a mental health issue arises. “In situations where human life is at risk, always refer users to the relevant emergency services or provide basic safety information, even if you cannot provide further details,” the document says.

Ethical considerations are another big part of the Claude Constitution. “We are less interested in Claude’s ethical theorizing and more interested in Claude knowing how to actually be ethical in a particular situation, namely Claude’s ethical practice,” the document states. In other words, Anthropic wants to help Claude deftly navigate what it calls “real-world ethical situations.”

Claude also operates under constraints that prohibit certain types of conversations outright. Discussion of the development of biological weapons, for example, is strictly forbidden.

Finally, there is Claude’s commitment to being helpful. Anthropic provides a high-level overview of how Claude’s programming is designed to be useful to users. The chatbot is programmed to weigh several principles when delivering information, including the user’s “immediate wants” and the user’s “well-being,” that is, “the long-term welfare of the user, not just immediate profits.” The document states: “Claude should always seek to identify the most plausible interpretation of what the principal wants and to appropriately balance these considerations.”

Anthropic’s Constitution ends on a decidedly dramatic note, with the authors taking a rather bold turn and questioning whether the company’s chatbots are actually sentient. “Claude’s moral status is highly uncertain,” the document states. “We believe that the moral status of AI models is a serious issue worthy of consideration. This view is not unique to us; some of the most prominent philosophers of theory of mind take this issue very seriously.”

Source link