Artificial intelligence has rapidly become the latest object of institutional anxiety. From universities to courts and newsrooms, fears abound that AI will erode expertise, bypass deliberation, and hollow out accountability. Some critics have gone so far as to describe AI as a death sentence for civic institutions.
This anxiety, while understandable, misidentifies the real problem, particularly in Pakistan.
AI does not destroy institutions. Weak institutions destroy their own capacity to govern AI.
Pakistan’s institutional challenges did not begin with algorithms. Long before AI entered classrooms, courtrooms, and bureaucratic offices, our institutions were already struggling with politicization, capacity constraints, delayed decision-making, and fragile public trust. Framing AI as the primary threat confuses a catalyst with a cause.
Take the judiciary. Judicial backlog, inconsistent reasoning, and procedural delays have undermined public confidence for decades. AI-assisted legal research tools or case management systems do not inherently weaken judicial legitimacy. Used responsibly, they can help manage overwhelming caseloads. Legitimacy is lost not when technology assists decision-making, but when human responsibility is displaced or obscured.
The same logic applies to higher education. Pakistani universities face chronic problems: weak research culture, uneven faculty training, rote-based assessment, and widespread credential inflation. These problems existed long before generative AI became accessible to students. If AI is used as a shortcut to avoid improving pedagogy, assessment design, and academic mentoring, damage will follow, but the fault lies with institutional neglect, not the tool itself.
Blaming AI for institutional decay is tempting because it absolves institutions of responsibility. It suggests collapse is inevitable and technologically determined. History tells a different story. The printing press disrupted religious and legal authority; bureaucracy constrained professional discretion; computers automated clerical work; and the internet weakened editorial gatekeeping. Yet institutions adapted by developing new norms, standards, and accountability mechanisms.
AI is not historically exceptional. What is exceptional is how unprepared many institutions are to govern it.
A common concern is that AI erodes expertise. Evidence from medicine, law, and education suggests otherwise. Expertise declines when institutions replace professional judgment with automation, not when AI supports human decision-making. Human-in-the-loop systems often outperform both humans working alone and fully automated systems. The real danger arises when AI is treated as a cost-cutting substitute for training, supervision, and responsibility.
Another anxiety centres on speed. AI accelerates processes, raising fears that speed undermines deliberation. But in Pakistan, excessive delay itself is a major source of institutional illegitimacy. Files stagnate for years; cases linger for decades. Efficiency is not the enemy of legitimacy. Opacity is. Decisions, fast or slow, lose legitimacy when reasons are not recorded, responsibility is unclear, and avenues for appeal are absent.
Much of the harm attributed to AI is better explained by managerialism and austerity. Underfunded institutions, donor-driven efficiency mandates, and chronic understaffing encourage the misuse of technology as a replacement for judgment rather than a support for it. In such environments, any powerful tool, not just AI, can deepen inequality and arbitrariness.
In fact, AI often functions as a diagnostic instrument. It exposes inconsistencies in grading, bias in decision-making, weaknesses in examination systems, and patterns of misinformation in the media. These failures are not created by AI; they are revealed by it. Institutions that were already brittle feel the pressure first.
This distinction matters because it shapes the policy response. If AI is inherently destructive, the solution is prohibition or retreat. If the problem is institutional weakness, the solution is governance reform.
Pakistan does not need an AI panic. It needs clear rules for AI use, investment in professional capacity, transparent accountability mechanisms, and a firm insistence that human beings, not systems, remain responsible for institutional decisions.
AI is not a death sentence for our institutions. It is a stress test. Institutions that maintain procedural integrity, professional judgment, and public accountability will adapt and endure. Those that outsource responsibility without oversight will fail, with or without AI.
Before blaming AI, we should fix our institutions.
