OpenAI Prioritizes 'Shiny Products' Over AI Safety, Ex-Researcher Says

From PC Mag: A researcher who just resigned from ChatGPT developer OpenAI is accusing the company of failing to devote enough resources to ensuring that artificial intelligence can be safely controlled.

"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," ex-OpenAI researcher Jan Leike claimed in a tweet on Friday.

A year ago, OpenAI appointed Leike and his colleague, renowned AI researcher Ilya Sutskever, to co-lead a team focused on reining in future superintelligent AI systems to prevent long-term harm. The resulting “superalignment” team was supposed to have access to 20% of OpenAI’s computing resources to research and prepare for such threats.

But earlier this week, both Leike and Sutskever abruptly resigned from the company. Although Sutskever said he believes the company is on track to develop a “safe and beneficial” artificial general intelligence, Leike took to Twitter/X on Friday to express some serious doubts.
