More than a dozen current and former employees of OpenAI, Google DeepMind, and Anthropic posted an open letter on Tuesday calling attention to the “serious risks” posed by continuing to rapidly develop the technology without an effective oversight framework in place.
The group argues that the technology could be misused to exacerbate existing inequalities, manipulate information, and spread disinformation, and warns that it could even lead to “the loss of control of autonomous AI systems potentially resulting in human extinction.”
The signatories believe that these risks can be “adequately mitigated” through the combined efforts of the scientific community, legislators, and the public, but worry that “AI companies have strong financial incentives to avoid effective oversight” and cannot be counted upon to impartially steward the technology’s development.
Since the release of ChatGPT in November 2022, generative AI technology has taken the computing world by storm, with hyperscalers like Google Cloud, Amazon Web Services, Oracle, and Microsoft Azure leading what is expected to be a trillion-dollar industry by 2032. A recent McKinsey study found that, as of March 2024, nearly 75% of organizations surveyed had adopted AI in at least one capacity. Meanwhile, in its annual Work Trend Index survey, Microsoft found that 75% of office workers already use AI at work.
However, as Daniel Kokotajlo, a former employee at OpenAI, told The Washington Post, “They and others have bought into the ‘move fast and break things’ approach, and that is the opposite of what is needed for technology this powerful and this poorly understood.” AI companies including OpenAI and Stability AI, maker of Stable Diffusion, have repeatedly run afoul of U.S. copyright law, for example, while publicly available chatbots are routinely goaded into repeating hate speech and conspiracy theories and into spreading misinformation.
The objecting AI employees argue that these companies possess “substantial non-public information” about their products’ capabilities and limitations, including the models’ potential to cause harm and how effective their protective guardrails actually are. They point out that only some of this information is available to government agencies, through “weak obligations to share,” and that none of it is available to the general public.
“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the group stated, arguing that the industry’s broad use of confidentiality agreements and weak implementation of existing whistleblower protections hamper their ability to do so.
The group called on AI companies to stop entering into and enforcing non-disparagement agreements, to establish an anonymous process for employees to raise their concerns with the company’s board of directors and with government regulators, and to refrain from retaliating against public whistleblowers should those internal processes prove insufficient.