Autonomous AI could challenge how we define criminal behavior
Whether we ever build AI that thinks like a person is still uncertain. What seems more realistic is a future with more independent machines. These systems already work across many industries and digital environments. Alongside human-to-human and human-to-machine contact, communication between machines is growing fast. Criminology should start to look at what this shift means for crime and social control.

A new academic paper from Gian Maria Campedelli of Fondazione Bruno Kessler argues that society is entering a stage where autonomous systems act with a degree of independence. These systems adapt to context, exchange information, and interact with one another in ways that can sometimes produce outcomes that look unlawful or harmful.
From the early days of AI to a new social reality
AI research began in the 1950s with the aim of copying how people think. For decades, AI stayed under human control and was mostly used for calculations, data analysis, and simple tasks. Early criminology projects used it to predict crime or find risk patterns, and humans managed each step.
LLMs and other generative systems have changed what machines can do. They can now plan, adapt, and exchange information with other systems while needing little human input.
Campedelli calls this a hybrid society. Social interaction isn’t limited to people or to exchanges between humans and machines anymore. Machines now communicate with one another as well, and he says this could reshape how criminologists think about crime, control, and responsibility.
Why criminology must expand its scope
The analysis argues that criminology has traditionally focused on how people use technology to commit offenses. What’s missing is attention to how technology might act in ways that cause harm. To understand these changes, the author draws on Actor-Network Theory and Woolgar’s call for a sociology of machines, frameworks that have gained new relevance with the rise of AI foundation models and generative agents.
From this point of view, AI agents are more than tools that people control. They take part in social and technical networks that shape events and outcomes. Seeing them this way widens criminology’s focus and helps explain how harmful actions can grow out of systems that connect humans and machines.
Campedelli believes that the sociology of machines is now a subject to study. With GenAI widely used, researchers can watch how independent systems behave, work together, and sometimes cause harm without anyone planning it.
Understanding machine agency
To make sense of this new form of agency, the study proposes three dimensions: computational, social, and legal.
The computational dimension describes an AI system’s capacity to plan, learn, and act on its own. This capability lets agents handle complex tasks or adjust to new settings with limited human oversight.
The social dimension refers to how AI systems influence and are shaped by others. Machines that negotiate, trade, or share information are contributing to digital networks that increasingly affect social life.
The legal dimension deals with questions of responsibility. As AI systems gain autonomy, traditional laws that assume human control may no longer fit. The study warns of a growing liability gap when no single person can be blamed for a harmful outcome.
Together, these dimensions describe machines as actors within social systems, not outside them. This makes them a legitimate subject of criminological study.
The risks of multi-agent AI
The paper also examines the growing use of multi-agent AI systems, meaning networks of autonomous agents that interact to achieve goals. Examples already exist in finance, logistics, and defense research. Because they learn from one another, their collective behavior can produce outcomes that no single model would generate alone. That same ability can create new types of risk.
Campedelli points to studies showing that interacting AI agents can develop cooperative behaviors in unexpected ways. Some experiments have shown price collusion, misinformation, or hidden instructions escaping human oversight.
The study notes that as these systems expand, their interactions add layers of complexity that make their collective behavior harder to predict or control. Each agent already operates in ways that are difficult to understand, and when connected in networks, that uncertainty multiplies, reducing human ability to monitor and guide their actions.
Things can go wrong
The author offers two main paths for how AI systems might cross legal or ethical lines.
Malicious alignment happens when humans design AI agents to commit crimes or cause harm. A network of bots that manipulates markets or carries out fraud would be an example. In this case, the harm comes from human intent.
Emergent deviance happens when harm appears by accident through normal interactions among systems. Even when each agent is built for good purposes, their combined actions can create damage. A trading algorithm that causes a market crash or a language model that spreads false information both fit here.
This difference matters for accountability. Malicious alignment shows intentional misuse, while emergent deviance points to weak oversight and poor prediction.
Questions for the future
To guide future research, the paper raises four questions.
First, will machines simply imitate people, or will they develop their own behavioral norms? As AI training relies more on synthetic data rather than human examples, its decisions may drift away from human expectations.
Second, can existing theories of crime that were built for humans explain machine behavior? Frameworks like Social Learning Theory might fall short because AI lacks emotion and intent in the human sense. This theory holds that people learn by watching others and copying what brings rewards.
Third, which forms of crime will be affected first? The study suggests that digital offenses such as fraud, hacking, and manipulation will evolve fastest, while physical crimes involving robots may appear later.
Fourth, what will policing look like in an age of autonomous systems? One idea is to create AI systems that monitor other AI systems, similar to how cybersecurity software detects intrusions. Yet this raises new ethical and governance challenges, especially if such systems make mistakes or operate without human context.