OpenAI made waves with the launch of ChatGPT, introducing the world to generative AI that can hold natural, human-like conversations. Since then, the field has shifted from simple chatbots to more complex AI agents: systems designed to perform tasks, make decisions, and even act on behalf of humans with minimal supervision.
These agents go beyond answering questions. They manage schedules, handle customer service, and even trade stocks. As AI agents become more autonomous and integrated into our daily lives, an important question arises: how do we know who is behind the screen, a human or a machine?
As AI agents approach human-level intelligence, an obvious problem remains: there is no clear way to tell whether an online actor is a real person, an AI acting on someone’s behalf, or who that person actually is.
AI alignment: The issue of trust between humans and AI
AI alignment (AIA) is one of the defining techno-social issues of our era, and Human.org is stepping in with a solution: infrastructure that keeps AI systems aligned with human values. Its protocol runs on a Layer 1 blockchain and creates an open, verifiable identity system for both humans and AI agents, ensuring transparency, accountability and, most importantly, that humans remain in control.
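The protocol’s data model has not been published in detail, but the core idea, an on-chain registry in which every AI agent identity points back to a verified human identity, can be sketched in a few lines. The Python below is a hypothetical illustration only; the names (HumanIdentity, AgentIdentity, IdentityRegistry) and fields are assumptions made for this example, not Human.org’s actual schema.

```python
# Illustrative sketch (not Human.org's actual schema) of a Layer 1 identity
# registry that distinguishes verified humans from AI agents and keeps every
# agent traceable to a human creator.
from dataclasses import dataclass, field
from hashlib import sha256
import time


@dataclass(frozen=True)
class HumanIdentity:
    human_id: str          # hypothetical identifier, e.g. derived from a public key
    verified_at: float     # timestamp of proof-of-personhood verification


@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    creator_human_id: str  # every agent must point back to a verified human
    registered_at: float


@dataclass
class IdentityRegistry:
    """In-memory stand-in for an on-chain registry of humans and agents."""
    humans: dict[str, HumanIdentity] = field(default_factory=dict)
    agents: dict[str, AgentIdentity] = field(default_factory=dict)

    def register_human(self, public_key: bytes) -> HumanIdentity:
        human = HumanIdentity(sha256(public_key).hexdigest(), time.time())
        self.humans[human.human_id] = human
        return human

    def register_agent(self, agent_name: str, creator: HumanIdentity) -> AgentIdentity:
        # Refuse to register an agent whose creator is not a verified human.
        if creator.human_id not in self.humans:
            raise ValueError("agent creators must be verified humans")
        agent = AgentIdentity(sha256(agent_name.encode()).hexdigest(),
                              creator.human_id, time.time())
        self.agents[agent.agent_id] = agent
        return agent
```

The invariant worth noting is that register_agent refuses to create an agent identity unless its creator is already a verified human, which mirrors the accountability guarantee the protocol describes.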
Investors see the urgency: Human.org has raised a $7.3 million seed round from backers including HF0, SOMA Capital, Spearhead, Pioneer Fund, Hummingbird VC, Val Vavilov (Bitfury), James Tamplin (HF0), and Sheridan Clayborne (Lendtable).
Founded in 2023 by 23-year-old serial entrepreneur Kirill Avery, Human.org is the first AI safety lab focused on solving the AI alignment problem through decentralized trust infrastructure. The company plans to roll out its protocol by the second quarter of 2025.
“We started the world’s first product-based AI safety lab with one goal: to solve AIA, resolve the crisis of trust, and enable AI and humans to coexist,” Human.org says on its website.
Kirill’s entrepreneurial journey began early. He started building apps at age 11, moved to the US at 17, joined Y Combinator as one of its youngest solo founders, and later, while working on AI at HF0, San Francisco’s top AI startup residency, came to recognize the critical need for AI accountability.
For the past two years, Human.org has been developing a blockchain and identity protocol that ensures transparent and secure interactions between verified humans and AI agents, all without government or corporate oversight. With support from the Pioneer Fund and leading AI and crypto investors, Human.org is laying the foundation for a future in which AI remains under human control. For more information, visit human.org.
Growing Threats: AI-driven misinformation and mistrust
Even before generative AI took off, the internet was flooded with misinformation, bots, and anonymous accounts. Now, as AI agents approach human-like communication, the risk is escalating. Bad actors can deploy AI at scale for market manipulation, financial fraud, and sweeping misinformation campaigns that can undermine democracies and destabilize governments.
Today, there is no universal way to verify whether AI agents represent real people, or to hold these systems accountable. As AI-generated content continues to flood the internet, the stakes for the economy, democracy, and personal interaction keep rising.
“I grew up in a society where you couldn’t trust what you saw online, so I wanted to create Human and ensure that human identity and expression remain protected in the age of AI,” says founder Kirill Avery. “If we don’t work on these issues now, we risk losing control over ourselves and our society.”
Human.org’s Solution: The Internet’s Trust Layer
Human.org is building the internet’s trust infrastructure to keep AI systems accountable. Unlike government- or corporate-managed solutions, Human’s blockchain protocol lets individuals control their own digital identity while preserving privacy.
“AI agents need identity and accountability, and Human solves this with technology that keeps everything transparent and secure,” says Eric Norman, general partner at Pioneer Fund. “Kirill has one of the biggest visions of any founder I know, and he really wants to create something that will help people work together.”
The protocol is built around five core technologies. First is Human Networks, a blockchain designed to support secure interactions between verified humans and AI agents. Complementing it is Human ID, a secure, encrypted system that lets individuals prove they are real humans. To bring accountability to AI systems, Human.org introduces Agent ID, which ties every AI agent back to its human creator. The ecosystem also includes Humancoin, a digital currency distributed to verified users to encourage trustworthy interactions. Finally, the Human App acts as a user-friendly interface where individuals can manage their identity, carry out transactions, log in securely, and interact with AI agents in a controlled environment.
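To make the Agent ID idea concrete, here is a small, hypothetical sketch of how an agent’s actions could be cryptographically traced back to a verified human creator. It uses the open-source Python `cryptography` package and Ed25519 signatures as a stand-in; the attestation flow, key handling, and action format are assumptions for illustration, not the actual Human.org protocol.

```python
# Hypothetical accountability chain: a human creator attests to an agent's key,
# and the agent signs its actions, so any action can be traced back to a human.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def raw_public_bytes(private_key: Ed25519PrivateKey) -> bytes:
    """Return the raw 32-byte public key, e.g. for on-chain registration."""
    return private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )


def main() -> None:
    # Human ID (assumed): the creator's key pair stands in for a verified human identity.
    creator_key = Ed25519PrivateKey.generate()

    # Agent ID (assumed): the agent gets its own key pair, and the creator signs the
    # agent's public key as an attestation that this agent acts on their behalf.
    agent_key = Ed25519PrivateKey.generate()
    agent_pub_bytes = raw_public_bytes(agent_key)
    attestation = creator_key.sign(agent_pub_bytes)

    # The agent later signs an action it performs.
    action = b"example-action:schedule-meeting"
    action_sig = agent_key.sign(action)

    # A verifier checks both links in the chain of accountability:
    # 1) the action really came from this agent,
    # 2) the agent really was authorized by the verified human creator.
    try:
        agent_key.public_key().verify(action_sig, action)
        creator_key.public_key().verify(attestation, agent_pub_bytes)
        print("action verified and traceable to the agent's human creator")
    except InvalidSignature:
        print("verification failed: untrusted agent or tampered action")


if __name__ == "__main__":
    main()
```

The two-step check is the point: anyone can confirm both that the action came from the agent and that the agent was explicitly authorized by a verified human, which is the accountability property the protocol aims to provide.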