

Prime Minister Ulf Kristersson defends limited use of generative AI tools for administrative aid, sparking ethics and transparency debate
Swedish Prime Minister Ulf Kristersson is under public and political scrutiny after revealing that he uses ChatGPT, a popular generative AI chatbot developed by OpenAI, to assist with certain aspects of government decision-making. The statement, made during a recent interview with Swedish media, has drawn criticism from lawmakers, ethics experts, and citizens who question the appropriateness and security implications of using AI tools in state affairs.
Kristersson clarified that the AI chatbot was not used for making critical policy decisions, but to “test ideas” and “gain perspectives” on routine administrative matters. “ChatGPT is like bouncing ideas off an assistant. It doesn’t replace our democratic processes or expert advice,” he told the outlet Mothership.
However, the disclosure has ignited a broader conversation in Sweden and beyond about the use of large language models (LLMs) in public office, transparency in government operations, and the ethical boundaries of AI-assisted leadership.
A Surprising Admission in a Famously Transparent Political Landscape
Sweden, known for its emphasis on transparency and democratic accountability, is one of the first European countries in which a sitting head of government has openly acknowledged using AI tools like ChatGPT while in office.
While Kristersson described his use of ChatGPT as “informal” and “non-binding,” critics argue that even limited AI involvement in governance warrants a public framework for accountability. “This is not about whether the Prime Minister trusts ChatGPT or not. It’s about whether the public should trust decisions influenced by opaque algorithms,” said political analyst Lena Holmberg on Sveriges Radio.
The Swedish Green Party and members of the Left Party have called for more clarity on the types of questions posed to the AI tool, what responses were generated, and whether any external data privacy guidelines were followed.
No Formal AI Policy in Swedish Governance Yet
The controversy has exposed a notable policy gap: Sweden currently has no formal regulations governing the use of generative AI in public office. While the EU is in the process of implementing the AI Act, a broad legal framework for regulating AI systems across the bloc, its provisions have yet to take full effect and may not cover informal use by individual officials.
Cybersecurity experts have raised concerns about the risk of sensitive government information being entered into publicly accessible tools like ChatGPT, which may retain prompts and use them for model training unless users opt out.
“There is no guarantee that the data entered by the Prime Minister, even if seemingly trivial, is completely protected,” said AI policy researcher Daniel Nystrom. “ChatGPT is not an internal government system; it’s a commercial product hosted on foreign servers.”
A Divided Public Response
Public opinion in Sweden appears divided. A survey conducted by Dagens Nyheter found that 48% of respondents believed Kristersson’s use of ChatGPT was inappropriate, while 39% saw it as pragmatic and modern.
Younger Swedes, particularly digital natives, were more likely to view the Prime Minister’s move as innovative. Some compared it to using spell-check tools or drafting assistants. Others said it demonstrated a worrying level of technological dependency in leadership.
Opposition parties have yet to formally call for an investigation, but several members of parliament are urging the government to outline clear policies on acceptable AI use in official capacities.
Kristersson Defends His Position
In a follow-up statement, Kristersson reiterated that “no classified information or state decisions were shared with the AI system”, and that all work involving national security remains strictly human-led.
“I use ChatGPT the way others might consult a newspaper, a colleague, or a book,” he said, adding that the tool was useful for gathering alternate viewpoints and summaries. “I understand the concerns and welcome a broader discussion on responsible AI use in public life.”
Broader Implications for Global Governance
The Swedish Prime Minister’s disclosure may be a precursor to similar revelations elsewhere. Around the world, governments are increasingly experimenting with AI-powered assistants to draft reports, analyze public sentiment, and simulate economic scenarios.
However, Kristersson’s experience underscores the fine line between innovation and overreach when it comes to generative AI in politics. With no universally accepted norms in place, each country, and each leader, must navigate a rapidly evolving technological landscape with limited legal precedent.
The incident has added fresh urgency to calls for national and international guidelines around AI in government use, especially regarding privacy, misinformation, and human oversight.