Can AI systems replace human judges and lawyers?

Can AI make fundamentally ethical decisions? If it delivers an unlawful verdict, who will be punished? In the pursuit of efficiency and speed, can we trust people's fates to artificial intelligence?
Imagine a courtroom where artificial intelligence (AI) replaces jurors, and a perfectly designed AI agent replaces the lawyer.
This is exactly the type of scenario that was pondered at the International MaxUp Legathon, which took place in Kazakhstan's capital, Astana.
Here, at the Maqsut Narikbayev University, students from 13 countries explored the impact of new technologies such as AI on legal systems, legal principles, ethics, and human rights, and shared experiences from their own countries.
Can AI replace judges?
Judicial decisions require factual analysis, moral reasoning, and human empathy to reach a fair resolution.
Firstly, artificial intelligence has no emotions. It cannot weigh mitigating circumstances, show empathy for a person's situation, or feel compassion. AI systems are built on repetition: they learn from data, identify patterns, and make decisions based on past examples.
If past rulings in its training data were flawed, the AI will reproduce the same errors in similar cases.
Moreover, modern AI models generate their outputs from statistical patterns in data, and they often cannot explain the chain of reasoning by which they arrived at a particular conclusion. In courts, however, such justification is crucial.
Sergey Pen, Deputy Chairman of the Board for Science, Innovation, and Artificial Intelligence at Maqsut Narikbayev University (MNU) in Astana, believes that, for these reasons, AI currently has no chance of replacing judges.
According to him, the language model provides answers based only on its knowledge base and statistical contextual matches, leaving out the reasoning provided by a human judge.
"There's a huge problem, as language models cannot reproduce the legal chain of reasoning, or so-called legal reasoning,” said Pen.
Nowadays, he said, AI should be viewed only as a tool alongside human decision-making.
In Kazakhstan, such tools are used to review judicial practice and analyse legislation, processing large volumes of information quickly. The judicial system already uses AI officially as an internal tool, helping judges follow consistent judicial practice and see how similar types of disputes are being decided across Kazakhstan.
But this does not replace or supplant the legitimacy of the decision itself, which rests solely with humans.
"Only a judge, as a human being, can legitimise and render a judicial decision," explained Pen.
How does AI work in legal matters in other countries?
In China, AI has already entered the courtroom, with a student from the China University of Political Science and Law noting that AI is used in basic and simple tasks.
"In my country,it is now used to fill in some blanks, and maybe help the jury find some cases, [analyse whether] the case is similar to the previous ones, but the AI can't just decide the result," said Chinese law student Hongyi Chen.
Students from Georgia conducted immersive research into how AI could function within the legal system. Analysing international practice, they highlighted the gap between technological feasibility and legal legitimacy.
The main risk, in their view, is that an algorithm has no soul and cannot make ethical choices.
"For the time being, the human judge, as an arbitrator, still has to weigh the merits in the case and only then does the legal solution become binding. But if we have already reached this level of AI application, then I think the possibility exists for AI to pass judgement in the future," said Tbilisi University student Keti Khaliashvili.
Students from Canada's McGill University noted that the integration of AI into legal matters has to be explored and regulated.
"Personally, at the moment, I feel that AI is not developed yet to the point where it could completely replace human judgment," student Elisa Xue noted.
The responsibility dilemma
The most fundamental reason why AI can't replace humans is responsibility. A court decision is an act of authority, for which judges are accountable. If a judge makes a mistake, an appeal can be filed, disciplinary action can be taken, or the error can be corrected through legal process.
But in the case of artificial intelligence, who is to blame? The code developer or the cloud service provider?
MNU students believe it's necessary to determine who will bear responsibility.
"If AI-generated content causes harm, is it responsible, and is "AI" labelling necessary? There are no categories in law that differentiate between harmless content, harmful content, or potentially harmful content. Law must be adaptable and apply to different levels of legal relations," argues MNU student, Islam Shagatayev.
The winners of the legathon agree with him, and they propose the introduction of absolute responsibility for manufacturers and system developers.
"We think that an individual or user does not always have any protection. That's why the developer should bear most of the responsibility," said Alissa Doktorovich, a student of Al-Farabi Kazakh National University.
The fact that discussions about AI are taking place right now in Kazakhstan is symbolic, as 2026 here marks the Year of Digitalisation and Artificial Intelligence.
The country's law "On Artificial Intelligence", passed in November 2025, enshrined the principle of anthropocentricity, meaning AI is merely a tool that imitates human cognitive functions and does not replace human responsibility.
The Legathon will allow us to rethink and reassess the relationship between AI and the law. In an era when law is becoming increasingly algorithmic, we need to remember the human element.