A former OpenAI safety researcher has raised alarm over the rapid advancement of artificial intelligence. He believes the industry is taking a dangerous gamble with the technology, The Guardian reports.
Steven Adler, who left OpenAI in November, shared his concerns in a series of posts on X. He described his time at the company as a “wild ride” and said he would miss many aspects of it. However, he expressed deep concern about the pace of AI development and its potential consequences for humanity.
“I’m pretty terrified by the pace of AI development these days,” Adler wrote. “When I think about where I’ll raise a future family or how much to save for retirement, I can’t help but wonder: will humanity even make it to that point?”
His warning adds to growing fears within the AI research community. Fortune highlighted similar concerns from leading experts, including Stuart Russell, a professor of computer science at the University of California, Berkeley.
Russell compared the AI race to running toward a cliff. Speaking to the Financial Times, he said, “Even the CEOs participating in this race acknowledge that whoever wins has a significant chance of causing human extinction. We have no idea how to control systems more intelligent than ourselves.”
The debate over AI safety comes amid intensifying competition between the United States and China in artificial intelligence. On Monday, reports surfaced that Chinese firm DeepSeek had developed an AI model potentially matching or surpassing leading U.S. systems at a much lower cost. This revelation unsettled American investors and sparked reactions from top tech figures, including OpenAI CEO Sam Altman.
The rapid progress in AI has fueled both excitement and fear. Companies are investing billions into AI development, aiming to create systems with human-like reasoning and decision-making. However, experts warn that advancing AI without proper safeguards could lead to unintended consequences.
Governments and regulators are also taking notice. In recent months, lawmakers worldwide have debated new rules to manage AI risks. The European Union has adopted the AI Act, which imposes strict requirements on powerful AI models. The United States is considering similar measures but faces resistance from major tech firms eager to maintain a competitive edge.
Despite these efforts, concerns persist. Some experts argue that AI companies are moving too fast and not prioritizing safety. They worry that increasingly capable models could behave in ways their developers cannot predict or control, with consequences that could threaten human society.
Adler’s comments reflect a broader unease in the AI research community. While AI has the potential to revolutionize industries, many experts urge caution. They argue that companies should slow down development and ensure strong safety measures are in place before pushing AI to new limits.
For now, the race continues. Companies and nations are vying for dominance in AI, hoping to unlock its benefits while avoiding its risks. However, as warnings from experts grow louder, the debate over AI’s future is far from over.