At a high level, the human-robot collaboration system is a collision prediction and avoidance system for Personnel and Robots in an indoor environment. The following information is required continuously in a time-deterministic manner:
- Personnel’s current 3D Coordinates and motion dynamics
- Robot’s current 3D Coordinates and motion dynamics
Using the above information, collisions are predicted a priori. Based on the probability of collision, the collision prediction and avoidance system sends control messages to slow or stop the Robot, thus avoiding a collision between Personnel and Robot.
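The prediction-and-react loop above can be sketched as follows. This is a minimal illustration, assuming a constant-velocity motion model and illustrative distance thresholds; the actual system uses probabilistic algorithms, and the function names and numbers here are not from the project.

```python
import math

def predict_position(pos, vel, dt):
    """Constant-velocity extrapolation of a 3D coordinate over dt seconds."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def min_predicted_distance(person_pos, person_vel, robot_pos, robot_vel,
                           horizon=1.0, steps=20):
    """Minimum person-robot distance over the prediction horizon."""
    d_min = float("inf")
    for i in range(steps + 1):
        t = horizon * i / steps
        p = predict_position(person_pos, person_vel, t)
        r = predict_position(robot_pos, robot_vel, t)
        d_min = min(d_min, math.dist(p, r))
    return d_min

def control_command(d_min, stop_dist=0.5, slow_dist=1.5):
    """Map the predicted clearance to a control message for the Robot."""
    if d_min < stop_dist:
        return "STOP"
    if d_min < slow_dist:
        return "SLOW"
    return "CONTINUE"
```

For example, a person walking at 1 m/s toward a stationary robot 2 m away closes to a 1 m minimum distance within a 1 s horizon, which would trigger a "SLOW" command under these thresholds.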
An exemplary demonstrator setup for RAINBOW is shown in Figure 1 and consists of two workplace areas, “Area-1” and “Area-2”. Each workplace area contains a robotic arm controlled by an Industrial PC and a PLC. An IoT Gateway collects the Robot’s telemetry data (joint angles, velocity, etc.) from the Industrial PC over PROFINET and forwards it to the RMT service running on Fog devices over OPC-UA.
Figure 1: Demonstrator overview with RAINBOW
The Fog device is high-performance 64-bit multi-core processor hardware running a Linux OS, capable of running multiple instances of each of the services below, as shown in Figure 2.
- Personnel Localization and Motion Capturing service (PLMC): One instance per Personnel in a workplace area
- Robot Motion Tracking service (RMT): One instance per Robot in a workplace area
- Collision Prediction and Avoidance service (CPA): One instance per group of Personnel and Robots in a workplace area
Figure 2: Services in Fog
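The one-instance-per-entity rule above implies that the number of service instances on a Fog device grows with the personnel, robots, and areas it serves. A small bookkeeping sketch (the area and entity names are illustrative, not from the demonstrator):

```python
def plan_instances(areas):
    """Return the service instances needed for the given workplace areas.

    `areas` maps an area id to the Personnel and Robots present in it.
    """
    instances = []
    for area, contents in areas.items():
        for person in contents["personnel"]:
            instances.append(("PLMC", person))  # one per Personnel
        for robot in contents["robots"]:
            instances.append(("RMT", robot))    # one per Robot
        instances.append(("CPA", area))         # one per group in the area
    return instances

demo = {
    "Area-1": {"personnel": ["P1", "P2"], "robots": ["R1"]},
    "Area-2": {"personnel": ["P3"], "robots": ["R2"]},
}
```

With two people and one robot in Area-1 and one person and one robot in Area-2, this yields three PLMC, two RMT, and two CPA instances.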
The above services run probabilistic algorithms for estimation and prediction. They must run on the Fog device rather than as cloud services because of the requirement for short, deterministic latency from data acquisition to stopping the robot. These services run under strict real-time constraints in processing the data and are computationally intensive. Beyond latency, scalability, mobility, reliability, resource sharing, secure deployment of applications, application monitoring, distributed data management and analytics, and security and data privacy are also important requirements in these applications.
Below is a high-level description that relates the use case requirements to RAINBOW platform components in the proposed reference scenario.
- Scalability of cloud-native services and effective utilization of resources on the Fog device: To achieve this in the RAINBOW framework, the Service Graph Editor describes the dependencies, service topology, etc. of the cloud-native applications, and the Policy Editor describes both run-time and pre-deployment constraints.
- Reducing system latency and jitter: Reducing system latency reduces the required safety distance and hence allows closer collaboration with fewer unintended production halts caused by stopping the robots. Additionally, reducing network jitter makes the system more deterministic and can improve the prediction results of the algorithm.
- Run-time application monitoring, constraint evaluation and dynamic resource provisioning: Performance and health-indicator metrics from the deployed services can be collected continuously using Resource Application-level Monitoring in the Centralized Orchestration Backend and the Multi-domain sidecar proxy in the Rainbow Mesh stack. These metrics can be evaluated against Service Level Objectives (SLOs) by the Orchestration Lifecycle Manager.
- Reliable service and data migration between Fog devices: In the scenario of personnel mobility, service and data migration from one Fog device to another is expected. To meet this requirement, RAINBOW provides components such as the Orchestration Lifecycle Manager, Resource Manager, Analytical Engine, Resource Application-level Monitoring in the Centralized Orchestration Backend, and the Multi-domain sidecar proxy in the Rainbow Mesh stack.
- Data management and high-performance queries across distributed databases for data analytics: Each Fog device hosts a database instance to preserve application data and state. Continuous analytics needs to fetch data from databases hosted on different Fog devices distributed across the infrastructure. Also, since Fog devices have limited memory, their databases need to be synced with a central database; hence the need for an analytics engine that queries the distributed databases optimally and syncs data with a central database periodically.
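The latency/safety-distance relationship in the requirement on reducing system latency can be made concrete with a simplified protective-separation calculation in the spirit of speed-and-separation monitoring. All speeds, latencies, and braking figures below are illustrative assumptions, not project measurements:

```python
def safety_distance(v_human, v_robot, latency, t_brake):
    """Required separation: the distance both parties can cover before the
    robot has fully stopped (linear braking assumed for the robot)."""
    human_travel = v_human * (latency + t_brake)
    robot_travel = v_robot * latency + 0.5 * v_robot * t_brake
    return human_travel + robot_travel

# Halving the end-to-end latency directly shrinks the required separation:
d_100ms = safety_distance(v_human=1.6, v_robot=1.0, latency=0.100, t_brake=0.3)
d_50ms = safety_distance(v_human=1.6, v_robot=1.0, latency=0.050, t_brake=0.3)
```

A smaller required separation means the robot can keep working closer to personnel before a SLOW or STOP command is warranted, which is exactly the "greater collaboration with fewer unintended halts" benefit described above.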
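The constraint-evaluation step in the monitoring requirement amounts to comparing collected metrics against SLO thresholds. A minimal sketch, with hypothetical metric names and thresholds (RAINBOW's actual SLO format is not shown here):

```python
def evaluate_slos(metrics, slos):
    """Return the names of SLOs violated by the current metric snapshot."""
    violated = []
    for name, (op, threshold) in slos.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not yet reported; skip rather than flag
        ok = value <= threshold if op == "<=" else value >= threshold
        if not ok:
            violated.append(name)
    return violated

slos = {
    "latency_ms": ("<=", 10.0),   # end-to-end processing latency
    "cpu_util":   ("<=", 0.80),   # Fog device CPU utilization
    "uptime":     (">=", 0.999),  # service availability
}
metrics = {"latency_ms": 12.4, "cpu_util": 0.55, "uptime": 0.9995}
```

An orchestrator evaluating this snapshot would see the latency SLO violated and could react, e.g. by re-provisioning resources or migrating the service.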
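The distributed-query and periodic-sync pattern in the data-management requirement can be sketched with in-memory SQLite databases standing in for the per-Fog-device stores. The schema and data are invented for illustration:

```python
import sqlite3

SCHEMA = "CREATE TABLE telemetry (robot_id TEXT, ts REAL, joint_speed REAL)"

def make_fog_db(rows):
    """In-memory stand-in for one Fog device's local database."""
    db = sqlite3.connect(":memory:")
    db.execute(SCHEMA)
    db.executemany("INSERT INTO telemetry VALUES (?, ?, ?)", rows)
    return db

def query_all(fog_dbs, sql):
    """Fan a query out to every Fog database and merge the results."""
    rows = []
    for db in fog_dbs:
        rows.extend(db.execute(sql).fetchall())
    return rows

def sync_to_central(fog_dbs, central):
    """Periodic sync of each Fog database into the central store."""
    central.executemany("INSERT INTO telemetry VALUES (?, ?, ?)",
                        query_all(fog_dbs, "SELECT * FROM telemetry"))

fogs = [make_fog_db([("R1", 1.0, 0.3)]), make_fog_db([("R2", 1.0, 0.5)])]
central = sqlite3.connect(":memory:")
central.execute(SCHEMA)
sync_to_central(fogs, central)
```

A real analytics engine would additionally plan which Fog device to query for which data and track what has already been synced, so that memory-constrained Fog devices can discard rows once they are safely in the central database.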