OpenAI CEO Sam Altman says that humanity is only a few years away from developing artificial general intelligence that could automate most human labor. If that's true, humanity also deserves to understand, and have a say in, the people and mechanics behind such an incredibly destabilizing force.
That is the guiding ethos behind "The OpenAI Files," an archival project from the Midas Project and the Tech Oversight Project, two nonprofit technology watchdog organizations. The Files are "a collection of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI." Beyond raising awareness, the goal of the Files is to propose a path forward for OpenAI and other AI leaders that focuses on responsible governance, ethical leadership, and shared benefits.
"The governance structures and leadership integrity guiding a project as important as this must reflect the magnitude and severity of the mission," reads the site's vision for change. "The companies leading the race to AGI must be held to, and must hold themselves to, exceptionally high standards."
So far, the race for dominance in AI has rewarded raw scaling. That has led companies like OpenAI to hoover up content without consent for training purposes, and to build massive data centers that are causing power outages and driving up electricity costs for local consumers. The rush to commercialize has also pushed companies to ship products before necessary safeguards are in place, as pressure from investors to turn a profit mounts.
That investor pressure has reshaped OpenAI's core structure. The Files detail how, in its early nonprofit days, OpenAI capped investor returns at a maximum of 100x, so that any proceeds from achieving AGI would go to humanity. The company has since announced plans to remove that cap, acknowledging it made the change to appease investors who had conditioned their funding on structural reform.
The Files highlight issues such as OpenAI's rushed safety evaluation processes and "culture of recklessness," as well as potential conflicts of interest involving OpenAI's board members and Altman himself, including a list of startups that may sit in Altman's own investment portfolio.
The Files also raise questions about Altman's honesty, which has been a topic of speculation since senior employees attempted to oust him in 2023 over "deceptive and chaotic behavior."
"I don't think Sam is the guy who should have the finger on the button for AGI," Ilya Sutskever, OpenAI's former chief scientist, reportedly said at the time.
The questions raised, and solutions proposed, by the OpenAI Files are a reminder that enormous power rests in the hands of a few, with little transparency and limited oversight. The Files offer a glimpse into that black box and aim to shift the conversation from inevitability to accountability.