
Deep reinforcement learning based energy management in full-duplex ultra dense networks with cell switching and radio resource allocation

Journal: Computer Communications (ISSN 1873-703X)
Date: 15 February 2026
Volume: 248
Authors: Rahmati T., Shahgholi B.
DOI: 10.1016/j.comcom.2026.108430
Language: English

Abstract

The exponential growth in traffic load and the increasing number of connected devices have driven cellular networks to offer high capacity and to support massive access. Full-Duplex Ultra-Dense Networks (FD-UDNs) are a promising technology for meeting this demand. However, these networks face serious challenges in energy consumption and high levels of interference, which, if not properly managed, can degrade overall network performance. This paper presents a deep reinforcement learning-based solution to the joint problem of small base station (SBS) on/off switching and resource allocation, with the objective of maximizing energy efficiency while meeting quality of service (QoS) requirements. To reduce complexity, we decompose the problem into two sub-problems: (1) BS sleep management and (2) power and radio resource allocation. For BS sleep management, two approaches are proposed: centralized and distributed. In the centralized approach, the network decides the sleep states of the SBSs; in the distributed approach, each SBS independently decides its own sleep state. After users are assigned to the active stations, each BS allocates transmit power and radio resources to its users. Simulation results show that the proposed methods outperform a previous method in terms of both energy efficiency and user satisfaction rate. Additionally, the results show that the distributed sleep management method outperforms the centralized one. © 2026
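The distributed sleep-management idea, where each SBS independently learns when to sleep, can be illustrated with a toy sketch. The code below uses tabular Q-learning with a discretized load state and a simple energy-efficiency-style reward (served traffic minus a power cost). All state definitions, reward terms, and parameter values here are illustrative assumptions, not the paper's actual formulation, which uses deep reinforcement learning over a far richer network model.

```python
import random

# Toy sketch of a single SBS agent choosing SLEEP (0) or ACTIVE (1)
# from a discretized traffic-load state. Hypothetical reward model:
# traffic served while active, minus a power cost (active >> sleep).
ACTIONS = (0, 1)          # 0 = sleep, 1 = active
STATES = (0, 1, 2)        # discretized load: low / medium / high

def reward(state, action):
    served = state if action == 1 else 0   # traffic served only when active
    power = 1.0 if action == 1 else 0.1    # assumed active vs. sleep power cost
    return served - power

def train_agent(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """One-step (bandit-style) epsilon-greedy Q-learning over (state, action)."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)             # load observed by the SBS this slot
        if rng.random() < eps:             # explore
            a = rng.choice(ACTIONS)
        else:                              # exploit current estimate
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = reward(s, a)
        q[(s, a)] += alpha * (r - q[(s, a)])   # incremental Q update
    return q

q = train_agent()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # a low-load SBS learns to sleep; a loaded SBS stays active
```

Under these assumed costs, the learned policy sleeps at low load and stays active otherwise, mirroring the intuition that per-SBS agents can trade served traffic against energy without central coordination.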