6G network design puts AI at the center of spectrum, routing, and fault management

Wireless network operators are preparing for a generation of infrastructure where AI is built into the architecture from the start. Sixth-generation networks, expected to reach commercial deployment over the coming decade, are being designed with AI at the center of how spectrum is allocated, traffic is routed, and failures are detected.


A paper by researchers at Harokopio University of Athens examines how different AI techniques map to specific layers of 6G network design, from the physical radio layer up through network management and service delivery. The paper covers publications from 2018 through 2025 and draws on standardization work from 3GPP, the ITU-T Focus Group on 6G, and the O-RAN ALLIANCE.

What 6G is expected to carry

The performance targets for 6G are substantially higher than those for 5G. Peak data rates could exceed 10 terabits per second, compared with roughly 10 gigabits per second in current 5G deployments. End-to-end latency is targeted at around 0.1 milliseconds, a tenfold improvement over 5G's 1-millisecond requirement. Reliability targets for ultra-critical applications reach 99.9999 percent, a specification that covers use cases including autonomous vehicle control, remote surgery, and industrial automation.
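A quick back-of-envelope check makes the scale of the jump concrete. The figures below are the projections cited above, not measured values:

```python
# Compare the 5G baseline against the projected 6G targets.
g5_peak_bps = 10e9      # ~10 Gbps peak in current 5G deployments
g6_peak_bps = 10e12     # >10 Tbps projected for 6G
g5_latency_s = 1e-3     # 1 ms end-to-end requirement in 5G
g6_latency_s = 0.1e-3   # ~0.1 ms target for 6G

throughput_gain = g6_peak_bps / g5_peak_bps
latency_gain = g5_latency_s / g6_latency_s
print(f"Throughput gain: {throughput_gain:.0f}x")   # 1000x
print(f"Latency improvement: {latency_gain:.0f}x")  # 10x
```

In other words, the throughput target is a thousandfold increase while latency tightens by a factor of ten.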

Coverage is also expected to extend into deep-sea, underground, and space environments, supporting connectivity in locations that current networks do not reach.

How AI splits across the network stack

The researchers organize AI techniques by where they operate in the network. Traditional machine learning methods are applied at the physical layer, handling tasks such as channel estimation and beam optimization, including work on reconfigurable intelligent surfaces. Deep learning and reinforcement learning operate at the network and management layer, where they support spectrum allocation, network slicing, and real-time orchestration.
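The reinforcement-learning side of spectrum allocation can be illustrated with a toy sketch. Here an epsilon-greedy bandit learns which of several channels has the best (hidden) transmission success rate; the channel qualities are invented for illustration and are not drawn from the paper:

```python
import random

random.seed(0)

CHANNEL_QUALITY = [0.3, 0.8, 0.5]  # hidden probability a transmission succeeds
EPSILON = 0.1                      # fraction of steps spent exploring

counts = [0] * len(CHANNEL_QUALITY)
values = [0.0] * len(CHANNEL_QUALITY)  # running mean reward per channel

def select_channel():
    if random.random() < EPSILON:                       # explore a random channel
        return random.randrange(len(CHANNEL_QUALITY))
    return max(range(len(values)), key=values.__getitem__)  # exploit the best so far

for _ in range(2000):
    ch = select_channel()
    reward = 1.0 if random.random() < CHANNEL_QUALITY[ch] else 0.0
    counts[ch] += 1
    values[ch] += (reward - values[ch]) / counts[ch]    # incremental mean update

best = max(range(len(values)), key=values.__getitem__)
print("learned best channel:", best)  # typically converges to the 0.8 channel
```

Production schedulers use far richer state (interference maps, queue depths, slice priorities), but the explore/exploit loop is the same basic mechanism.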

Federated learning is assigned primarily to the service layer, where it enables devices to train shared models without transmitting raw data to a central server. This approach is relevant for IoT deployments, healthcare applications, and extended reality services, where data sensitivity or bandwidth constraints make centralized training impractical.
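A minimal sketch shows the core idea of federated averaging: each client trains on its private data and only the resulting model parameters are shared with the server. The scalar "model" and toy datasets below are assumptions for illustration:

```python
def local_train(data):
    # Local "training": fit the mean of the client's private samples.
    return sum(data) / len(data)

def federated_average(client_updates, client_sizes):
    # Server aggregates parameters weighted by local dataset size (FedAvg).
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_updates, client_sizes)) / total

clients = [[1.0, 2.0, 3.0], [10.0], [4.0, 6.0]]  # raw data never leaves a client
updates = [local_train(d) for d in clients]      # only these scalars are transmitted
sizes = [len(d) for d in clients]
global_model = federated_average(updates, sizes)
print(global_model)  # 26/6, identical to training on the pooled data
```

The size-weighted average reproduces exactly what centralized training on the pooled data would give here, which is why the approach appeals when bandwidth or data sensitivity rules out shipping raw samples.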

Explainable AI operates across all layers, addressing the need for transparency in automated decisions, a requirement that aligns with regulations including the EU’s GDPR.

Security concerns tied to AI adoption

Integrating AI into 6G also introduces security risks that do not exist in conventional network architectures. AI systems trained on large datasets can be targeted through data poisoning attacks, where malicious inputs degrade model performance. Federated learning, despite its privacy benefits, remains vulnerable to model inversion attacks that can extract information from shared model updates.

Generative adversarial networks can produce synthetic network traffic or fake credentials that bypass conventional intrusion detection systems.

Countermeasures under examination include adversarial training, Byzantine fault-tolerant aggregation in federated systems, and AI-driven anomaly detection that monitors traffic patterns in real time.
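One way to see why Byzantine-tolerant aggregation helps: a plain average can be skewed arbitrarily by a single poisoned client, while a coordinate-wise median stays anchored to the honest majority. The update vectors below are invented for illustration:

```python
import statistics

def mean_aggregate(updates):
    # Naive averaging: every client moves the result, including attackers.
    return [sum(col) / len(col) for col in zip(*updates)]

def median_aggregate(updates):
    # Coordinate-wise median: robust while honest clients form a majority.
    return [statistics.median(col) for col in zip(*updates)]

honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
poisoned = honest + [[100.0, -100.0]]  # one malicious client's update

print(mean_aggregate(poisoned))    # dragged far from the honest consensus
print(median_aggregate(poisoned))  # stays near [1.0, 1.0]
```

Deployed defenses (trimmed means, Krum, norm clipping) are more elaborate, but they share this principle of limiting any single participant's influence.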

Blockchain infrastructure is being evaluated as a supporting layer for audit trails and identity management in distributed AI deployments, with lighter consensus mechanisms such as Proof of Stake proposed to keep energy costs manageable.

Energy and hardware constraints

Running AI at scale inside a network introduces energy costs. The computational load of training and running large models across dense device deployments conflicts with sustainability goals that 6G operators are expected to meet. Research directions include model compression, quantization, and pruning, which reduce the computational load of AI inference without significant loss in accuracy.
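Quantization, one of the compression techniques named above, can be sketched in a few lines: float weights are mapped to small integer codes and back, trading a bounded rounding error for roughly 4x smaller storage than 32-bit floats. The weight values are arbitrary examples:

```python
def quantize_int8(weights):
    # Symmetric post-training quantization to the int8 range [-127, 127].
    scale = max(abs(w) for w in weights) / 127 or 1.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

weights = [0.12, -0.5, 0.33, 0.0, 0.25]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(codes, f"max error {max_err:.4f}")  # rounding error bounded by scale/2
```

Real deployments quantize per tensor or per channel and often fine-tune afterward, but the storage-versus-precision trade-off is the same.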

Hardware development is another constraint. Terahertz communication, which operates in the 0.1 to 10 THz range and is central to 6G’s high-speed targets, requires new transceiver designs and faces significant path loss challenges. Edge computing microchips capable of running AI inference locally are needed for latency-sensitive applications, and current chip designs have not yet resolved the tension between processing power and power consumption.
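The path-loss problem can be made concrete with the standard free-space path loss formula, FSPL(dB) = 20·log10(4·pi·d·f/c); the distances and carrier frequencies below are example values, not figures from the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    # Free-space path loss in dB for an isotropic link.
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

for f in (3.5e9, 0.3e12, 1e12):  # 5G mid-band vs. two terahertz carriers
    print(f"{f / 1e9:7.1f} GHz @ 100 m: {fspl_db(100, f):6.1f} dB")
```

Because loss grows with 20·log10(f), moving from 3.5 GHz to 1 THz adds roughly 49 dB at the same distance, before atmospheric absorption, which is why THz links lean on dense cells and highly directional beamforming.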

Quantum computing appears as a longer-range possibility. Quantum algorithms such as the Quantum Approximate Optimization Algorithm could address resource allocation problems at scales that exceed the capacity of classical optimization. Integration with existing infrastructure remains an engineering challenge, given current requirements for near-absolute-zero operating temperatures and error correction overhead.

The researchers note that interoperability across vendors and regions remains an open problem. Standardized APIs and data exchange formats are necessary for AI components from different manufacturers to work together within a single network deployment.
