A Fog Computing ecosystem consists of actors ranging from IoT developers and academic researchers to Fog operators. All of these users would like to explore Fog capabilities, but they face new challenges in such a complex environment. For instance, IoT developers and researchers wish to evaluate their applications in Fog environments and analyze their performance metrics. Furthermore, Fog operators wish to evaluate different devices, but this can be costly due to the wide range of options and time-consuming due to device configuration.
Acknowledging and facing these issues during the development of the Rainbow Project, we implemented Fogify, a Fog Computing emulation framework. Fogify facilitates the modeling, deployment, and experimentation of fog testbeds. Specifically, it provides a toolset to: model complex fog topologies comprised of heterogeneous resources, network capabilities, and QoS criteria; deploy the modeled configuration and services to a cloud or local environment using popular containerized infrastructure-as-code descriptions; and experiment with, measure, and evaluate the deployment by injecting faults and adapting the configuration at runtime to test different “what-if” scenarios that reveal the limitations of a service before it is released to the public.
What features does Fogify provide?
Topology design begins with users extending their IoT application’s docker-compose file with Fogify’s model. During the modeling phase, users describe IoT and Fog Services, Compute Nodes, Networks, and, generally, their interconnections through the Fog Topology. The Topology primitive consists of a set of Blueprints, each of which denotes an emulated device and is a union of a Node, a Service, Networks, replicas, and a label. Fogify inherits docker services directly from the docker-compose file, enabling users to develop their applications using familiar docker constructs. Furthermore, a “Fogified” docker-compose YAML does not affect the portability of an application, since users can deploy it as-is on any docker runtime environment (e.g., Docker Swarm).
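To make the modeling step concrete, the sketch below shows what a “Fogified” docker-compose file might look like. The structure follows the public Fogify documentation (an `x-fogify` extension section with node profiles, network profiles, and topology Blueprints), but the exact field names and the `my-iot-app` image are illustrative assumptions, not a definitive schema:

```yaml
version: "3.7"
services:
  sensor-app:
    image: my-iot-app:latest      # hypothetical application image

x-fogify:
  nodes:                          # emulated device profiles
    - name: edge-device
      capabilities:
        processor:
          cores: 2
          clock_speed: 1400       # MHz
        memory: 2G
  networks:                       # emulated network profiles
    - name: edge-net
      bidirectional:
        bandwidth: 10Mbps
        latency:
          delay: 50ms
  topology:                       # Blueprints: Node + Service + Networks + replicas + label
    - node: edge-device
      service: sensor-app
      label: sensor
      replicas: 3
      networks:
        - edge-net
```

Because the Fogify model lives entirely under the `x-fogify` extension field, plain docker tooling ignores it, which is what keeps the file deployable as-is on any docker runtime.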
Users can deploy the Fogify model through the FogifySDK, a Python library that abstracts the interaction with Fogify’s Controller API behind a set of functions. When the Fogify Controller receives the YAML description, it validates the model and, if no error is detected, starts the bootstrapping process. Specifically, the system decodes the described model primitives into the underlying Cluster Orchestrator’s configuration files and deploys them, ensuring the instantiation of the containerized services in the emulated environment. At the same time, the Controller creates the overlay mesh networks between the emulated devices and broadcasts any network constraints to the Fogify Agents. Fogify Agents are located on every cluster node; they accept instructions through their API, apply them to the emulated devices, and, in general, monitor every emulated instance.
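The Controller’s bootstrapping flow described above can be sketched in a few lines of Python. This is a conceptual illustration (not Fogify’s actual implementation): it validates that every Blueprint references defined nodes and networks, then decodes the Blueprints into per-service deployment entries of the kind an orchestrator would consume.

```python
# Conceptual sketch (not Fogify's actual code) of the Controller's bootstrapping:
# 1) validate the submitted model, 2) decode Blueprints into orchestrator entries.

def validate(model):
    """Reject models whose Blueprints reference undefined nodes or networks."""
    node_names = {n["name"] for n in model.get("nodes", [])}
    net_names = {n["name"] for n in model.get("networks", [])}
    for bp in model.get("topology", []):
        if bp["node"] not in node_names:
            raise ValueError(f"unknown node: {bp['node']}")
        for net in bp.get("networks", []):
            if net not in net_names:
                raise ValueError(f"unknown network: {net}")

def to_orchestrator_config(model):
    """Decode each Blueprint into a per-service deployment entry."""
    nodes = {n["name"]: n for n in model["nodes"]}
    return {
        bp["label"]: {
            "service": bp["service"],
            "replicas": bp.get("replicas", 1),
            "resources": nodes[bp["node"]]["capabilities"],
            "networks": bp.get("networks", []),
        }
        for bp in model["topology"]
    }

model = {
    "nodes": [{"name": "edge-device",
               "capabilities": {"processor": {"cores": 2}, "memory": "2G"}}],
    "networks": [{"name": "edge-net"}],
    "topology": [{"node": "edge-device", "service": "sensor-app",
                  "label": "sensor", "replicas": 3, "networks": ["edge-net"]}],
}
validate(model)
config = to_orchestrator_config(model)
print(config["sensor"]["replicas"])  # -> 3
```

In the real system, the decoded entries become Cluster Orchestrator configuration files, while the network profiles are broadcast separately to the Fogify Agents on each cluster node.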
While the emulation is running, Fogify gives users the opportunity to perform Actions and “what-if” Scenarios (sequences of timestamped actions) on the emulated Fog Nodes. Currently, there are three types of actions: scaling actions (both horizontal and vertical), network alterations that change the network characteristics of a fog node, and stress actions that inject CPU- or I/O-intensive workloads and can alter the IoT data generation rate. With actions and scenarios, users can emulate more complex situations such as ad-hoc network saturation, mobility patterns, workload variations, physical failures, and many more. When an action or a scenario is submitted, the Fogify Controller coordinates its execution with the Cluster Orchestrator and the respective Fogify Agents.
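The notion of a Scenario as a sequence of timestamped actions can be illustrated with a small self-contained sketch. This is not the FogifySDK API; the `Action`/`Scenario` classes and the action names below are hypothetical stand-ins that mirror the three action types described above:

```python
# Illustrative sketch (not the FogifySDK API): a "what-if" Scenario is a
# sequence of timestamped actions replayed in time order against emulated nodes.

from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str             # e.g. "horizontal_scaling", "network", "stress"
    target: str           # label of the emulated node(s) it applies to
    params: dict = field(default_factory=dict)

@dataclass
class Scenario:
    steps: list           # (seconds since scenario start, Action) pairs

    def run(self, apply):
        """Replay actions in time order via the supplied apply() callback."""
        log = []
        for t, action in sorted(self.steps, key=lambda s: s[0]):
            log.append((t, apply(action)))
        return log

scenario = Scenario(steps=[
    (30, Action("network", "sensor", {"latency": "200ms"})),   # ad-hoc saturation
    (10, Action("horizontal_scaling", "sensor", {"to": 5})),   # workload variation
    (60, Action("stress", "sensor", {"cpu": "80%"})),          # CPU-intensive load
])

log = scenario.run(lambda a: f"{a.kind}->{a.target}")
print([t for t, _ in log])  # -> [10, 30, 60]
```

In Fogify itself, the replay is coordinated by the Controller, which forwards each action to the Cluster Orchestrator and the relevant Fogify Agents rather than applying it locally.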
By default, Fogify monitors a set of performance metrics from every emulated node, and users may extend this set by exposing app-level metrics. The agents seamlessly gather and store these metrics in their local storage. To create an end-to-end interactive analytic tool for emulated deployments, we exploit the FogifySDK capabilities within the Jupyter Notebook stack. Besides remote management of the emulation (through topology (un-)deployment and action submission), FogifySDK offers functions for retrieving the monitored metrics. Furthermore, FogifySDK stores the retrieved metrics in an in-memory data structure, namely a pandas DataFrame, which offers a wealth of exploratory analysis methods, plots, and summary statistics.
The combination of monitored metrics with the analytical capabilities of the Jupyter ecosystem allows users to uncover hidden insights about fog-oriented concepts such as Quality of Service, deployment cost, and system reliability.
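A typical post-experiment analysis might look like the sketch below. It uses synthetic data in place of FogifySDK’s metric-retrieval functions; the column names and the 50 ms QoS target are illustrative assumptions, but the workflow (a pandas DataFrame explored with groupby, summary statistics, and filters) is exactly what the DataFrame representation enables:

```python
# Sketch of post-experiment analysis on monitored metrics, using synthetic data
# in place of the DataFrame that FogifySDK's retrieval functions would return.

import pandas as pd

metrics = pd.DataFrame({
    "timestamp": pd.date_range("2021-01-01", periods=6, freq="10s"),
    "node": ["sensor-1", "sensor-2"] * 3,
    "cpu_util": [0.20, 0.35, 0.55, 0.40, 0.90, 0.45],  # fraction of allocated CPU
    "latency_ms": [12, 15, 30, 18, 95, 20],
})

# Summary statistics per emulated node, e.g. for QoS or cost evaluation
summary = metrics.groupby("node")[["cpu_util", "latency_ms"]].mean()

# Flag samples violating a hypothetical 50 ms latency QoS target
violations = metrics[metrics["latency_ms"] > 50]

print(summary.loc["sensor-1", "cpu_util"])  # mean CPU of sensor-1 (≈ 0.55)
print(len(violations))                      # -> 1
```

Running such cells interactively in a notebook, alongside the pandas plotting helpers, is what turns the emulation into the end-to-end analytic tool described above.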
Fogify is open source [https://github.com/UCY-LINC-LAB/fogify]. For more information, you can visit our documentation page [https://ucy-linc-lab.github.io/fogify/] and watch the presentation of the system at SEC 2020 [https://www.youtube.com/watch?v=d_6bJzJidas].