1 State of the Art
Many orchestrators assume that applications execute in an environment with reliable nodes and fast network connections, e.g., a cloud. However, a fog typically consists of heterogeneous devices that have varying hardware capabilities and may move or fail without notice [1], [2]. Devices may range from drones with strict battery constraints to smartphones and small servers. Furthermore, the quality and speed of network connections can differ greatly between nodes and geographic locations [3].
Recently, various Kubernetes distributions that support edge or fog environments have appeared or been extended to do so, such as MicroK8s [4], K3s [5], or OpenShift [6]. They often focus on supporting constrained resources and multiple hardware architectures. KubeEdge [7] also supports analytics pipelines and an offline mode. However, these distributions often lack support for Service Level Objectives (SLOs) or fog-specific quality constraints. Likewise, the fact that Internet of Things (IoT) devices, fog devices, and the cloud form a continuum is often neglected [8], [9], [10].
RAINBOW aims to tackle these challenges by providing a Service Graph abstraction to accurately model the topology and constraints of fog applications, such that the orchestrator is aware of the application’s needs and can optimize the placement and execution of its components. The RAINBOW Orchestrator, which is built on top of Kubernetes, is a complex subsystem that consists of multiple loosely coupled components. Figure 1 provides an overview of the Orchestrator’s components (highlighted in orange) and their high-level interactions.
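Conceptually, a Service Graph is a set of services (nodes) connected by links that carry quality constraints. The following minimal Python sketch illustrates this idea; all names and fields are hypothetical (the actual Service Graph is a far richer Kubernetes resource):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServiceNode:
    """One service of the fog application."""
    name: str
    image: str
    replicas: int = 1

@dataclass
class ServiceLink:
    """A directed link between two services, carrying a QoS constraint
    (here a hypothetical maximum network latency)."""
    source: str
    target: str
    max_latency_ms: Optional[float] = None

@dataclass
class ServiceGraph:
    """The application's topology: services plus constrained links."""
    nodes: list = field(default_factory=list)
    links: list = field(default_factory=list)

# A two-service application: a sensor gateway feeding an analytics service.
graph = ServiceGraph(
    nodes=[ServiceNode("gateway", "example/gateway:1.0"),
           ServiceNode("analytics", "example/analytics:1.0", replicas=2)],
    links=[ServiceLink("gateway", "analytics", max_latency_ms=50)],
)
```

Modeling the links explicitly, rather than only the services, is what lets the orchestrator reason about communication constraints between components.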
The Orchestrator’s components may be hosted on different nodes. Each node that is part of RAINBOW hosts an API server component that delegates requests to the appropriate Orchestrator components.
The Orchestrator Repository acts as a database for storing the state of the RAINBOW Orchestrator. Collaboration between the Orchestrator components is achieved mostly through subscriptions to changes of the objects in the Orchestrator Repository, similar to a publish/subscribe mechanism.
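The subscription mechanism can be pictured as a small in-memory store that notifies registered callbacks whenever an object changes. This is a toy sketch of the publish/subscribe-style collaboration, not the Repository’s actual API:

```python
from collections import defaultdict

class OrchestratorRepository:
    """Toy in-memory object store: components subscribe to an object key
    and are notified whenever that object is written."""

    def __init__(self):
        self._objects = {}
        self._subscribers = defaultdict(list)

    def subscribe(self, key, callback):
        """Register a callback to be invoked on changes to `key`."""
        self._subscribers[key].append(callback)

    def put(self, key, value):
        """Store an object and notify all subscribers of its key."""
        self._objects[key] = value
        for callback in self._subscribers[key]:
            callback(key, value)

# A component (e.g., an SLO manager) reacting to a Service Graph update:
repo = OrchestratorRepository()
seen = []
repo.subscribe("servicegraph/demo", lambda key, value: seen.append(value))
repo.put("servicegraph/demo", {"status": "deployed"})
```

Decoupling components through such change notifications means that each component only needs to know the Repository, not its peers.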
The Resource Manager is responsible for obtaining and caching information about the nodes that are currently part of the fog region and their available and used resources. This information is critical for the Scheduler when making placement decisions. The term resource has a very broad scope in RAINBOW. It refers to any physical or virtual component of a node (e.g., CPU, memory, or GPS chip) that can be used by an application service.
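A per-node record of capacities and usage, including arbitrary device resources, might look as follows; the schema is a hypothetical illustration of what the Resource Manager caches:

```python
from dataclasses import dataclass, field

@dataclass
class NodeResources:
    """Capacity and current usage of one fog node. `devices` can track any
    further physical or virtual component a service might use."""
    cpu_capacity_m: int   # CPU capacity in millicores
    cpu_used_m: int
    mem_capacity_mib: int
    mem_used_mib: int
    devices: dict = field(default_factory=dict)  # e.g., {"gps": 1}

    def free_cpu_m(self) -> int:
        """Millicores still available for new services."""
        return self.cpu_capacity_m - self.cpu_used_m

# The Resource Manager's cache could simply map node names to such records:
cache = {"drone-1": NodeResources(2000, 1200, 1024, 512, {"gps": 1})}
```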
To deploy a new application, a RAINBOW Developer submits a Service Graph, which is subsequently handled by the Deployment Manager. The Deployment Manager iterates through the Service Graph and creates a Kubernetes-native deployment object for each of its nodes. Each native deployment is handled by its respective Application Lifecycle Manager. The SLOs and monitoring configuration attached to the Service Graph are used to set up the respective SLO managers and monitoring facilities.
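The translation of a Service Graph node into a Kubernetes-native deployment object can be sketched as follows. The manifest fields shown are standard Kubernetes `apps/v1` Deployment fields; real deployments created by the Deployment Manager would carry many more:

```python
def to_deployment_manifest(name: str, image: str, replicas: int) -> dict:
    """Build a minimal Kubernetes Deployment manifest (as a plain dict)
    for one Service Graph node."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Iterating over the graph's nodes yields one native deployment per service:
manifests = [to_deployment_manifest(n, img, r)
             for n, img, r in [("gateway", "example/gateway:1.0", 1),
                               ("analytics", "example/analytics:1.0", 2)]]
```

Building on native deployment objects lets the rest of the Kubernetes machinery (controllers, kubelets) handle the actual container lifecycle.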
Once all deployment objects have been created, the Scheduler determines on which node each service should be placed. The relationships between the application’s services and the constraints on them aid in finding nodes that will deliver suitable performance in the challenging fog environment.
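Placement typically proceeds in two phases: filter out nodes that cannot host the service, then score the remainder. The sketch below is a deliberately simple stand-in for RAINBOW’s fog-aware placement logic, using resource fit and required devices as filters and free CPU as the score:

```python
def schedule(service_cpu_m: int, required_devices: set, nodes: dict):
    """Return the name of the best node for a service, or None if no node
    fits. Filter: enough free CPU and all required devices present.
    Score: most free CPU wins (a toy heuristic)."""
    candidates = {
        name: info for name, info in nodes.items()
        if info["free_cpu_m"] >= service_cpu_m
        and required_devices <= info["devices"]
    }
    if not candidates:
        return None
    return max(candidates, key=lambda n: candidates[n]["free_cpu_m"])

nodes = {
    "drone-1":  {"free_cpu_m": 300,  "devices": {"gps"}},
    "server-1": {"free_cpu_m": 4000, "devices": set()},
}
# A GPS-dependent service must go to the drone despite its lower free CPU:
placement = schedule(service_cpu_m=200, required_devices={"gps"}, nodes=nodes)
```

This is where the Resource Manager’s cached node information and the Service Graph’s constraints come together.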
The SLO Managers monitor SLO compliance (e.g., response time) of the application’s components throughout the application’s lifetime and trigger elasticity strategies (e.g., horizontal or vertical scaling) upon violations.
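An SLO check with a horizontal-scaling reaction can be sketched as below; the response-time SLO and the scale-out-by-one strategy are illustrative assumptions, not RAINBOW’s actual elasticity policy:

```python
def check_slo(samples_ms, slo_ms, current_replicas, max_replicas):
    """Toy SLO manager step: if the mean response time exceeds the
    objective, trigger a horizontal elasticity strategy (add one replica,
    up to a maximum); otherwise keep the current size."""
    mean_ms = sum(samples_ms) / len(samples_ms)
    if mean_ms > slo_ms and current_replicas < max_replicas:
        return current_replicas + 1
    return current_replicas

# Mean of 150 ms violates a 100 ms SLO, so the service is scaled out:
new_replicas = check_slo([120.0, 180.0, 150.0], slo_ms=100.0,
                         current_replicas=2, max_replicas=5)
```

In practice such checks run continuously over monitoring data for the whole lifetime of the application.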
The RAINBOW Orchestrator enables fog-optimized management of applications, based on application metadata supplied by the Service Graph. RAINBOW is thus capable of delivering a feature set that is hard to find elsewhere.
[1] F. Bonomi, R. Milito, J. Zhu and S. Addepalli, “Fog Computing and Its Role in the Internet of Things,” New York, NY, USA: Association for Computing Machinery, 2012, pp. 13–16.
[2] S. Yi, Z. Hao, Z. Qin and Q. Li, “Fog Computing: Platform and Applications,” 2015, pp. 73–78.
[3] A. Brogi, S. Forti and A. Ibrahim, “Predictive analysis to support fog application deployment,” in Fog and Edge Computing: Principles and Paradigms, 2019, pp. 191–222.
[4] “MicroK8s,” [Online]. Available: https://microk8s.io.
[5] “K3s: Lightweight Kubernetes,” [Online]. Available: https://k3s.io.
[6] Red Hat, “Red Hat Opens Up the Edge with Enterprise-Grade Kubernetes and Automation Technologies,” [Online]. Available: https://www.redhat.com/en/about/press-releases/red-hat-opens-edge-enterprise-grade-kubernetes-and-automation-technologies.
[7] “KubeEdge,” [Online]. Available: https://kubeedge.io.
[8] A. Brogi, S. Forti, C. Guerrero and I. Lera, “How to place your apps in the fog: State of the art and open challenges,” Software: Practice and Experience, vol. 50, no. 5, pp. 719–740, 2020.
[9] V. Cardellini, F. Lo Presti, M. Nardelli and F. Rossi, “Self-adaptive Container Deployment in the Fog: A Survey,” 2019.
[10] M. Villari, M. Fazio, S. Dustdar, O. Rana, D. N. Jha and R. Ranjan, “Osmosis: The Osmotic Computing Platform for Microelements in the Cloud, Edge, and Internet of Things,” Computer, vol. 52, no. 8, pp. 14–26, 2019.