The Midas Project has released a detailed 50-page report compiling publicly available internal information about OpenAI. It is the most extensive investigation to date of the organization's shareholders, management, and safety practices, covering internal restructuring, new false statements by Altman, and weaknesses in its safety processes. In short:
Let's start with the story of a researcher whom OpenAI stripped of two million dollars in stock options for refusing to sign a lifetime non-disclosure agreement upon leaving the company. We have written about this before. At the time, Altman claimed he was unaware of any such policy, but it turns out his signature appears on the relevant documents. And this is not an isolated case; there have been many similar stories.
As for the company's decision to convert to a Public Benefit Corporation (PBC) rather than become a fully profit-driven enterprise, this is mostly a formality. The startup's main goals appear to have been to eliminate the profit caps on investors' returns, so that they would invest more money, and to strip the non-profit board of real control. Under the new structure the board will remain, but its influence will be reduced to the symbolic: kept for appearances, so that the public does not protest.
Another key point: OpenAI is accelerating product releases while cutting back its own safety-testing procedures. Final model versions often skip testing entirely, with only intermediate iterations evaluated. A process that previously took months has been compressed into just a few days, with nearly all testing automated. The result is an almost complete absence of thorough testing.
And finally, a quote from 2023: one of the board members who voted for Altman's dismissal explicitly stated that he did so because he believed Altman was not the right person to hold the launch button for AGI.
These are the key highlights; the full investigation can be found here.
