OpenAI has hired an AI safety expert from Anthropic to oversee risk and governance, reflecting rising focus on responsible development of advanced AI systems.
Security researchers and developers are raising concerns over major flaws in autonomous AI agents, highlighting risks from security vulnerabilities and unpredictable behaviour in early deployments.
AI experts warn that safety and governance measures are falling behind rapid AI advancements, narrowing the window to manage long-term risks responsibly.
AI pioneer Yoshua Bengio cautions against granting rights to artificial intelligence, citing early signs of self-preserving behaviour and governance risks.
OpenAI CEO Sam Altman acknowledges growing concerns around AI agents as models display increasing autonomy and unexpected behaviour.
OpenAI is hiring a Head of Preparedness with a compensation package reaching $555,000 as it deepens focus on AI safety and risk management.
An experimental AI-run vending machine linked to Anthropic shut down after unexpected purchases, underscoring challenges in autonomous AI decision-making.
OpenAI has cautioned that prompt injection remains a persistent security risk as agentic AI systems expand across the open web and gain wider autonomy.
New research shows poetic prompts can bypass safety systems in several AI models, revealing vulnerabilities in current guardrail approaches.
OpenAI is to introduce parental controls and new safety features in ChatGPT after a lawsuit over a teenager's suicide, fueling global debate on AI ethics and child protection.
AI leaders Geoffrey Hinton and Yann LeCun urge embedding empathy and deference to human intent as fundamental guardrails to keep AI systems safe and aligned with human values.