Agency and Accountability in the Age of Agentic AI
Published: 20 March 2026
As AI systems move from answering questions to taking action, who is responsible when things go wrong? Charles Burke, Head of International Trade & Business Policy at the Scotland Office and CPP 2025 Practice Fellow, explores what agentic AI means for governance and public policy.
Blog by Charles Burke, Head of International Trade & Business Policy, Scotland Office, UK Government and CPP 2025 Practice Fellow
In 1979, an IBM training manual stated that ‘a computer can never be held accountable, therefore a computer must never make a management decision.’ The principle was simple: systems execute instructions; people remain responsible for decisions. For decades, that boundary held.
I joined the civil service as a policy generalist, and my career has borne that out - working across justice, transport, regulation, energy and net zero, infrastructure, and most recently trade, business and investment. Through the Centre for Public Policy Practice Fellowship I took the opportunity to speak with researchers across a similarly broad range of fields - including energy, AI governance, and accounting and finance - about the challenges they were exploring. The question of how to connect those conversations into a single piece of writing was answered, unexpectedly, by the explosion of agentic AI earlier this year. It struck me that agentic AI - systems that can be given an abstract goal and pursue it autonomously - is in some ways the ultimate policy generalist: deployable across energy, public finance, service delivery and beyond. That versatility is precisely what makes the question of accountability so important.
Since the beginning of this year, debate around "agentic" AI has moved from niche research circles into the mainstream media. Coverage in outlets including the Guardian and the Atlantic described artificial intelligence (AI) systems that could be given a goal - run a procurement process end-to-end, coordinate a response across multiple organisations, manage a complex project without human sign-off at each stage - and left to pursue it autonomously.
The coverage was not entirely celebratory. Alongside the excitement, regulators and researchers acknowledged that existing frameworks were not designed for these systems. The Digital Regulation Cooperation Forum - a body bringing together UK regulators including the Competition and Markets Authority, the Financial Conduct Authority, the Information Commissioner's Office and Ofcom - recently issued a call for views on agentic AI, recognising that current oversight frameworks were built for a different technological landscape.
What could this mean for public policy?
Public services already rely heavily on automation, from routing enquiries to processing routine transactions. But agentic AI is qualitatively different. Rather than following fixed rules, it can be given an abstract objective and work out how to pursue it: reduce delayed discharges in a hospital system, cut energy waste across a building portfolio, identify gaps in service uptake within a region.
Tools of this kind also lower the technical barrier to entry, allowing non-developers to build bespoke systems for highly specific problems - calibrated to a single local authority or community - and redeploy them in new contexts without starting from scratch. The potential is transformative. But the greater the capacity to act at scale, the more consequential the question of who is responsible when things go wrong.
Technologies like this tend to develop quickly in commercial settings. Public services operate at a different pace and scale, within complex organisational and legal environments. Yet if autonomous systems become embedded elsewhere in the economy first, public expectations about speed and responsiveness may shift before governance in public services has caught up.
Questions of agency and accountability
Democratic systems rest on the assumption that consequential decisions can ultimately be traced back to someone responsible. Agentic AI does not remove that chain, but it stretches it.
Consider systems being developed to handle planning applications or assess eligibility for public services - decisions affecting people's lives at a speed and scale that makes individual case review practically impossible. Unlike a caseworker who can explain their reasoning, an agentic system's outcomes would emerge from how its goals were set, calibrated and monitored. When patterns of disadvantage appear, there may be no single decision to challenge and no obvious individual to challenge it to. Accountability becomes structural rather than personal.
Take smart energy management as an example of how automation currently works. Systems that decide when to store or dispatch power can support the transition to renewable energy and reduce overall costs. However, those benefits tend to accrue first to households able to afford the relevant technologies. That is not simply a technical by-product; it reflects policy choices embedded in the system's design by identifiable human decisions - which means there is a clear path back to identify, challenge and change them if necessary.
What distinguishes agentic systems from the examples described above is not simply that they act - automated systems have always done that. The difference is that they can be given an abstract goal and pursue it through a chain of decisions and actions that nobody explicitly designed or approved in advance. The system is not following a script; it is generating one. By the time a systemic problem becomes visible, it may already have acted at scale, embedding outcomes that are difficult to reverse and that nobody explicitly chose.
Keeping responsibility in view
The task for policy is not to halt technological development. The potential gains in efficiency, responsiveness, and problem-solving capacity are real, and so are the pressures on public services. The challenge is to ensure that as these systems take on greater autonomy and discretion, accountability remains clearly in view.
In 1979, IBM drew a clean boundary: if a computer cannot be held responsible, it should not decide. That boundary made sense in a world where computers were tools. Today, it needs to be redrawn - not abandoned, but updated for systems that no longer simply execute instructions but instead pursue goals.
The responsibility of policymakers and those who advise them is to ensure that, as these systems take on that role, the lines of oversight and accountability remain visible.
Author
Charles Burke is Head of International Trade, Business and Investment Policy at the Scotland Office, UK Government, where he has worked for five years across a range of policy areas including energy and net zero, infrastructure, and EU exit. He is a 2025 Practice Fellow at the University of Glasgow's Centre for Public Policy and holds an MA (History) from the University of Glasgow and an MSc in International Public Management and Public Policy from Erasmus University, Rotterdam. You can find him on LinkedIn.
About the Practice Fellows Scheme
The Centre for Public Policy Practice Fellows programme connects policy professionals with University of Glasgow researchers to explore evidence-led solutions to real-world policy challenges. Fellows engage in short-term collaboration to investigate pressing questions from their roles and contribute to knowledge exchange between academia and practice.
This ESRC-funded initiative supports up to six Fellows annually from governments, the voluntary sector and beyond.
Applications will reopen in Spring 2026. Sign up to the Centre for Public Policy mailing list to hear when the call opens.