The communication and networking field is hungry for machine-learning-based decision-making solutions to replace the traditional model-driven approaches, which have proved insufficient to capture the ever-growing complexity and heterogeneity of modern systems in the field. Traditional machine learning solutions assume the existence of (cloud-based) central entities in charge of processing the data. Nonetheless, the difficulty of accessing private data, together with the high cost of transmitting raw data to the central entity, gave rise to a decentralized machine learning approach called Federated Learning. The main idea of federated learning is to perform on-device collaborative training of a single machine learning model without sharing the raw training data with any third-party entity. Although a few survey articles on federated learning already exist in the literature, the motivation of this survey stems from three essential observations. The first is the lack of a fine-grained multi-level classification of the federated learning literature, as the existing surveys base their classification on only one criterion or aspect. The second is that the existing surveys focus only on some common challenges and disregard other essential aspects such as reliable client selection, resource management, and training service pricing. The third is the lack of explicit and straightforward directives to help researchers design future federated learning solutions that overcome the state-of-the-art research gaps. To address these points, we first provide a comprehensive tutorial on federated learning and its associated concepts, technologies, and learning approaches. We then survey and highlight the applications and future directions of federated learning in the domain of communication and networking.
Thereafter, we design a three-level classification scheme that first categorizes the federated learning literature according to the high-level challenge it tackles. We then divide each high-level challenge into a set of specific low-level challenges to foster a better understanding of the topic. Finally, within each low-level challenge, we provide a fine-grained classification based on the technique used to address it. For each category of high-level challenges, we provide a set of desirable criteria and future research directions that aim to help the research community design innovative and efficient future solutions. To the best of our knowledge, our survey is the most comprehensive in terms of the challenges and techniques it covers and the most fine-grained in terms of the multi-level classification scheme it presents.
Fifth Generation (5G) phase 2 rollouts are around the corner to make mobile ultra-reliable and low-latency services a reality. However, to realize that scenario, besides the new 5G built-in Ultra-Reliable Low-Latency Communication (URLLC) capabilities, a substrate network with deterministic Quality-of-Service support is required for interconnecting the different 5G network functions and services. Time-Sensitive Networking (TSN) appears as an appealing network technology to meet the 5G connectivity needs in many scenarios involving critical services and their coexistence with Mobile Broadband traffic. In this article, we delve into the adoption of asynchronous TSN for 5G backhauling and some relevant related aspects. We start by motivating TSN and introducing its mainstays. Then, we provide a comprehensive overview of the architecture and operation of the Asynchronous Traffic Shaper (ATS), the building block of asynchronous TSN. Next, a management framework based on the ETSI Zero-touch network and Service Management (ZSM) and Abstraction and Control of Traffic Engineered Networks (ACTN) reference models is presented for enabling TSN transport network slicing and its interworking with 5G for backhauling. Then we cover the flow allocation problem in asynchronous TSNs and the importance of Machine Learning techniques for assisting it. Last, we present a simulation-based proof-of-concept (PoC) to assess the capacity of ATS-based forwarding planes to accommodate 5G data flows.
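At its core, the ATS (standardized in IEEE 802.1Qcr) can be viewed as a per-flow token-bucket shaper that assigns each frame an eligibility time before it may be forwarded. As a rough illustration of that idea only (not the article's ATS implementation; the rate, burst, and frame sizes below are invented for the example), one iteration of the eligibility-time computation could be sketched as:

```python
class ATSShaper:
    """Minimal token-bucket sketch of ATS eligibility-time assignment.
    Parameters are illustrative, not taken from the article."""

    def __init__(self, committed_rate_bps, burst_bytes):
        self.rate = committed_rate_bps / 8.0   # committed rate in bytes/s
        self.burst = burst_bytes               # bucket capacity in bytes
        self.bucket = burst_bytes              # tokens currently available
        self.last_time = 0.0                   # time of last update (s)

    def eligibility_time(self, arrival_time, frame_bytes):
        # Refill tokens for the elapsed time, capped at the burst size.
        elapsed = arrival_time - self.last_time
        self.bucket = min(self.burst, self.bucket + elapsed * self.rate)
        self.last_time = arrival_time
        if frame_bytes <= self.bucket:
            self.bucket -= frame_bytes
            return arrival_time                # frame is immediately eligible
        # Otherwise the frame must wait until enough tokens accumulate.
        wait = (frame_bytes - self.bucket) / self.rate
        self.bucket = 0.0
        self.last_time = arrival_time + wait
        return arrival_time + wait


# Example: a 1 Mb/s flow with a one-frame (1500 B) burst allowance.
shaper = ATSShaper(committed_rate_bps=1e6, burst_bytes=1500)
t1 = shaper.eligibility_time(0.0, 1500)  # within the burst: eligible at once
t2 = shaper.eligibility_time(0.0, 1500)  # bucket empty: delayed by 12 ms
```

The second frame is held back until tokens accumulate, which is exactly the mechanism that lets ATS bound per-flow burstiness without requiring network-wide time synchronization.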
5G systems and beyond target a large number of emerging applications and services that will create extra overhead on network traffic. These industrial verticals have aggressive, contentious, and conflicting requirements that give the network an arduous mission in achieving the desired objectives. Requirements such as close-to-zero latency, high data rates, and high network reliability are expected. Fortunately, a ray of hope shines on telecom providers with the new progress and achievements in machine learning, cloud computing, micro-services, and the ETSI ZSM era. For this reason, there is a colossal impetus from industry and academia toward applying these techniques by creating a new concept, called the CCN environment, that can cohabit and adapt according to the network and resource state and the perceived KPIs. In this article, we pursue the aforementioned concept by providing a unified hierarchical closed-loop network and service management framework that can meet the desired objectives. We propose a cloud-native simulator that accurately mimics cloud-native environments and enables us to quickly evaluate new frameworks and ideas. The simulation results demonstrate the efficiency of our simulator in reproducing the behavior of real testbeds across various metrics.
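The closed-loop idea behind such frameworks follows a monitor-analyze-plan-execute pattern: observe a KPI, compare it against targets, and reconfigure resources. As a toy sketch of that pattern only (the thresholds, workload trace, and replica-scaling policy below are invented for illustration and are not the article's framework), one such loop could look like:

```python
# Toy MAPE-style closed loop that auto-scales micro-service replicas.
# Thresholds and the workload trace are illustrative assumptions.

def plan(cpu_util, replicas, high=0.8, low=0.3):
    """Decide the next replica count from the per-replica utilization."""
    if cpu_util > high:
        return replicas + 1            # scale out under pressure
    if cpu_util < low and replicas > 1:
        return replicas - 1            # scale in when over-provisioned
    return replicas


def run_loop(trace, replicas=1):
    """Run one closed-loop iteration per observed load sample."""
    history = []
    for load in trace:                 # Monitor: total CPU demand observed
        util = load / replicas         # Analyze: per-replica utilization
        replicas = plan(util, replicas)  # Plan: pick a scaling action
        history.append(replicas)       # Execute: record the applied decision
    return history


history = run_loop([0.5, 1.8, 2.4, 2.4, 0.4, 0.4])
# Replicas follow the load up and back down: [1, 2, 3, 3, 2, 1]
```

A real ZSM-style loop would replace the threshold policy with learned models and act on an actual orchestrator, but the monitor-analyze-plan-execute cycle is the same.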
The emerging industrial verticals set new challenges for 5G and beyond systems. Indeed, the heterogeneity of the underlying technologies and the challenging and conflicting requirements of the verticals make the orchestration and management of networks complicated. Recent advances in network automation and artificial intelligence (AI) have created enthusiasm in industry and academia toward applying these concepts and techniques to tackle these challenges. With these techniques, the network can be autonomously optimized and configured. This article suggests a collaborative cross-system AI that leverages diverse data from the different segments involved in the end-to-end communication of a service, diverse AI techniques, and diverse network automation tools to create a self-optimized and self-orchestrated network that can adapt according to the network state. We align the proposed framework with ongoing network standardization.
AI-based network-aware Service Function Chain migration in 5G and beyond networks
Rami Akrem Addad; Diego Leonel Cadette Dutra; Tarik Taleb; Hannu Flinck
IEEE Transactions on Network and Service Management
While the 5G network technology is maturing and the number of commercial deployments is growing, the focus of the networking community is shifting to services and service delivery. 5G networks are designed to be a common platform for very distinct services with different characteristics. Network Slicing has been developed to offer service isolation between the different network offerings. Cloud-native services that are composed of a set of inter-dependent micro-services are assigned to their respective slices, which usually span multiple service areas, network domains, and data centers. Due to mobility events caused by moving end-users, slices with their assigned resources and services need to be re-scoped and re-provisioned. This leads to slice mobility, whereby a slice moves between service areas and the inter-dependent services and resources must be migrated to reduce system overhead and to ensure low communication latency by following end-user mobility patterns. Recent advances in computational hardware, Artificial Intelligence, and Machine Learning have attracted interest within the communication community to study and experiment with self-managed network slices. However, migrating a service instance of a slice remains an open and challenging process, given the needed coordination between inter-cloud resources and the dynamics and constraints of inter-data center networks. For this purpose, we introduce a Deep Reinforcement Learning based agent that uses two different algorithms to optimize bandwidth allocations and to adjust network usage to minimize slice migration overhead. We show that this approach results in significantly improved Quality of Experience. To validate our approach, we evaluate the agent under different configurations and in real-world settings and present the results.
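The abstract does not detail the two DRL algorithms, so as a generic illustration of the underlying idea only, learning a bandwidth-allocation policy from a reward signal, the following tabular Q-learning sketch uses invented states, actions, and a toy reward; it is not the paper's agent:

```python
import random

# Toy tabular Q-learning for picking a bandwidth allocation per load state.
# States, actions, and the reward model are illustrative assumptions.
ACTIONS = [10, 50, 100]                # candidate allocations (Mbps)
STATES = ["low_load", "high_load"]


def reward(state, bw):
    """Toy reward: penalize the gap between allocated and needed bandwidth."""
    need = 100 if state == "high_load" else 10
    return -abs(need - bw) / 100.0


def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        # Epsilon-greedy action selection over the Q-table.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        # One-step (bandit-style) update toward the observed reward.
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q


q = train()
best = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
# The learned policy allocates more bandwidth only under high load.
```

A DRL agent replaces the Q-table with a neural network and the toy reward with measured migration overhead and QoE, but the learning loop has the same shape.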
Yago Gonzalez explained the intrinsic relationship between edge computing, one of the hottest topics in computer science right now, and the XR environment, particularly regarding both holography and VR architectures, supported by the developments made in the CHARITY and Accordion H2020 consortia.