OpenAI has hired an AI safety expert from Anthropic to oversee risk and governance, reflecting rising focus on responsible development of advanced AI systems.
Indian technology stocks fell after Anthropic launched a new AI risk tool, prompting investor concerns around compliance, transparency and future AI deployment costs.
IIM Lucknow has proposed an ethical framework to guide responsible use of AI in marketing, focusing on transparency, data privacy, fairness and accountability.
The European Union has launched a safety probe into Grok AI, examining whether X has met its obligations under the bloc’s digital platform risk compliance framework.
Indonesia and Malaysia have restricted access to xAI’s Grok chatbot, citing concerns around content moderation, governance, and compliance with local regulations.
Italy has closed its probe into DeepSeek after reviewing concerns around AI hallucinations, reflecting Europe’s evolving approach to AI oversight.
Meta’s reported AI partnership with Manus AI faces Chinese regulatory scrutiny over potential technology export control concerns.
India has directed X to rein in Grok after AI-generated obscene content triggered regulatory scrutiny, highlighting growing concerns over AI governance.
Italian regulators have ordered Meta to suspend its WhatsApp AI chatbot, citing concerns over data processing, transparency and user consent under EU law.
Lyricist Javed Akhtar flags concerns over AI-generated deepfake content, highlighting rising risks of digital impersonation and misuse of public figures' likenesses.
OpenAI is hiring a Head of Preparedness with a compensation package reaching $555,000 as it deepens focus on AI safety and risk management.
South Korea will implement a comprehensive AI law from January 2026, setting new rules for safety, transparency and governance of artificial intelligence systems.
Disney accuses Google of AI copyright infringement shortly after OpenAI’s $1 billion deal, highlighting rising tensions over training data use.
India proposes mandatory royalties for AI training data, urging tech firms to compensate creators and ensure transparency in model development.
Uber faces collective legal action in the UK over allegations that its AI-powered pay and management systems lack transparency and violate worker data rights.
India is witnessing a surge of AI-generated content—from robot anchors to chatbot-written articles—raising urgent questions about misinformation, ethics, and public trust in digital media.
India announces stricter AI regulations to combat deepfakes, requiring platforms to label synthetic and manipulated content.