Original Paper
Abstract
Background: Applications of robotics in daily life are becoming essential, creating new possibilities in different fields, especially in collaborative environments. The potential of collaborative robots is tremendous, as they can work in the same workspace as humans. A framework employing state-of-the-art technology for collaborative robots would therefore be worthwhile for further research.
Objective: This study aims to present the development of a novel framework for the collaborative robot using mixed reality.
Methods: The framework uses Unity and Unity Hub as a cross-platform gaming engine and project management tool to design the mixed reality interface and digital twin. It also uses the Windows Mixed Reality platform to show digital materials on a holographic display and the Azure mixed reality services to capture and expose digital information. Finally, it uses a holographic device (HoloLens 2) to execute the mixed reality–based collaborative system.
Results: A thorough experiment was conducted to validate the novel framework for mixed reality–based control of a collaborative robot. The framework was successfully applied to implement a collaborative system using a 5–degree of freedom robot (xArm-5) in a mixed reality environment. The framework was stable and worked smoothly throughout the collaborative session. Due to the distributed nature of cloud applications, there is negligible latency between issuing a command and its execution by the physical collaborative robot.
Conclusions: Opportunities for collaborative robots in telerehabilitation and teleoperation are as vital as in any other field. The proposed framework was successfully applied in a collaborative session, and it can also be applied to other similar applications for robust and more promising performance.
doi:10.2196/36734
Introduction
Background
Robots are becoming companions to humans, impacting and contributing to our daily life in many ways. They can carry out complex or repetitive activities for us, from household to industrial work, medicine, security, and so on [ ]. Not only can robots assist with heavy work, but they can also help in education, especially during the COVID-19 pandemic. With the help of artificial intelligence, robots can now act as companions by monitoring behaviors and understanding our likes and dislikes. They offer various services that are effectively used in different industries. Robots can boost the productivity of an industry by performing accurate, precise, fast, consistent, and high-quality work [ ]. They can also ensure safety by taking over dangerous tasks in hazardous environments.

Collaborative robots, also known as cobots, are making work easier and more productive, as they are designed to operate in the same workspace as humans [ ]. Robots and humans work closely together, with robots assisting human coworkers in completing different tasks [ ]. This collaboration between humans and robots is changing industrial production strategy. More industries are shifting to this manufacturing style every day because of its flexibility, productivity, safety, reduced risk of injuries, and quality of performance in production [ ]. Because of recent advancements in collaborative robots' applications in various fields, the global market in this field is growing daily. In 2018, the market value of this industry was around USD 649.1 million, and it was anticipated to expand at a rate of 44.5% by 2025 [ ]. Therefore, the potential of collaborative robots is, without any doubt, huge, and more research is needed to refine the different approaches and applications in this field.

To support the needs of collaborative robots in different potential fields, researchers are working to propose frameworks for different collaborative robot-based applications. Some researchers followed an agent-based system [ ], whereas others explored compliant control [ - ] and ergonomics-based approaches [ ]. Still, researchers should consider other state-of-the-art technology-based approaches to build a strong foundation for the huge potential of collaborative robots. One of the cutting-edge technologies of this era is mixed reality (MR), which blends the digital and physical worlds to offer a promising solution for various applications [ ]. MR merges virtual and augmented reality, letting us incorporate digital data into the real world. Almost 150 firms in different fields have already adopted MR-based solutions, and it is estimated that by 2025 more than 14.4 million US employees will use smart glasses [ ]. The possibilities of mixed reality are vast, especially where human interaction is required. Therefore, the application will be promising if mixed reality can successfully be applied in designing a framework for collaborative robots.

In this study, a novel framework is proposed for the collaborative robot using mixed reality. Unity, Unity Hub, the Windows Mixed Reality (WMR) platform, and Azure mixed reality services are adopted to design the framework, and a holographic device (HoloLens 2) is used to execute it. The framework can be used in various collaborative applications such as telerehabilitation or teleoperation.
The rest of the manuscript is organized as follows: first, recent advancements in this research area are briefly discussed; then, the development of the framework, along with the system architecture and the control of a collaborative robot with mixed reality, is presented; subsequently, an experiment using the framework and its results are illustrated; and finally, the conclusions of the study are presented.
Related Work
The collaborative robot is one of the promising fields in robotics, and researchers are currently working on its control systems. A few researchers followed a unified approach by merging an impedance model with a dynamic model with friction and optimizing the assembly [ ]. They used proportional derivative control for the inner loop and impedance control for the outer loop. To evaluate the system's efficiency, a 6-DOF (degrees of freedom) serial collaborative robot was used to perform peg-in-hole assembly tasks, and the performance of the system was accurate and flexible. One study proposed that the performance of a robot system can be estimated by assessing the mobility of a collaborative robot [ ]. Most of the available solutions are for a single robot, yet the model was evaluated using 3 automobiles, and it performed more competently than most of the available strategies. Another study represented humans and robots as coworkers using geometric primitives, attraction vectors, and hypothetical repulsion by computing the distance and relative motion between them [ ]. By applying this idea along with the robot's kinematic representation, the system achieved collision avoidance control by generating a nominal path to cautiously avoid collision with the human while performing industrial operations. Repulsion-vector reshaping was also introduced to ensure motion persistence, and the robot performed smoothly and successfully by avoiding collisions.

Rehabilitation robots and assistive robots are two potential applications of collaborative robots. Researchers are working on different approaches to rehabilitation robots using skin surface electromyogram–based signals [ ], nonlinear sliding mode control [ ], geometric solutions [ ], and variable transformation exploiting the flatness geometric property [ ]. Researchers have also followed approaches based on learning latent actions from task demonstrations [ ], reinforcement learning [ ], digital image processing [ ], and eye tracking–based assistive robot control [ ] for collaborative robots focused on assistive applications.

One study suggested controlling the momentum of a robot by considering the maximum acceptable speed to secure the safety of human coworkers [ ]. The system estimated the allowable top velocity by using a collision model to predict the maximum force during a collision, and it enhanced the functionality and productivity of collaborative tasks without risking human safety. A few researchers presented a new robotic system for collaborative robots blending mobile manipulators and supernumerary limbs [ ]. Their robot could operate autonomously and be connected to humans as additional body parts. Other researchers presented a collaborative system consisting of the hardware, software, and operational architecture of a humanoid robot endowed with cognitive abilities [ ]. The robot could identify the help a human coworker might need, recognize their activities, grasp objects, navigate, and so on. The experimental evaluation demonstrated that the robot performed safely and robustly while conducting collaborative tasks.

Researchers are also exploring framework-based approaches to construct generalized systems for different applications. In one study, researchers proposed an open-source, cross-architecture framework for a humanoid robot that was computationally inexpensive [ ]. The framework was validated via both a simulator and a telemetry interface, and the results showed that it could be used to design new algorithms. In another study, researchers presented a framework for a collaborative human-robot environment using a commercial manipulator and their unique control method [ ]. The framework included a trajectory planning and safety strategy exploiting the human worker's experience and was evaluated in a factory. In a similar study, a few researchers presented a framework for robot-assisted control in human-robot cooperation for a 7-DoF surgical robot [ ]. The framework used manual motion to drive the tooltip, a 3D camera–based method to adjust the workspace, calculation of the optimal instrument orientation, and Cartesian interpolation to assure safety. In other studies, researchers proposed frameworks for different human-robot collaboration–based applications such as industrial cyber-physical systems [ ], interaction in games (ie, Rock-Paper-Scissors) [ ], and cooperative assembly tasks [ ].

Mixed reality–based approaches are becoming increasingly popular among researchers for different applications, as they hold potential in many ways. Researchers have used mixed reality–based methods to design an interface for human-robot interaction to teleoperate a robot [ ] and a user interface to control teleoperated robotic platforms [ ]. Researchers have also used these approaches to design various robotic control systems. In one study, researchers developed an interface for human-robot communication using mixed reality for interactive robot manipulation control on mobile platforms [ ]. The interface offered tools for robot path planning and visualized them so workers could comprehend robot behavior to ensure safety. The interface was successfully implemented and evaluated on Microsoft HoloLens. In another similar study, researchers used both mixed and virtual reality to design workspaces for collaborative robots in industry [ ]. The Robot Operating System and Unity were used to design the system, which was tested in diverse settings. In another work, researchers presented an interactive control framework for both single and multirobot systems using mixed reality for various applications [ ]. The system allowed interaction with robots by focusing on the visualization of their objective, and it could be applied to any robot and mixed reality interface. The presented framework was evaluated experimentally, and the results verified the framework's capabilities in the mixed reality system.

Methods
Development of a Mixed Reality Framework for Robot Control
To develop the mixed reality–based system from scratch for controlling an assistive robot, some prerequisites and prior knowledge are required. A Windows-based computer and the Windows SDK (software development kit) with Visual Studio are needed to build the structure. The simplest approach to making mixed reality apps is to install either the Unity or Unreal game engine [ ]. However, the same programs may be created for a custom engine using DirectX (Microsoft Corp). DirectX is a high-level interface that allows direct access to low-level functionality; it connects to Windows' hardware abstraction layer [ ]. Unity is one of the most popular real-time development platforms on the market, with C++-based runtime code and C#-based development scripting [ ]. Unity and Unity Hub are used as the cross-platform gaming engine and project management tool. The Mixed Reality Toolkit, if Unity is used, may be used for input simulation to test various input interactions, including hand-tracking and eye-tracking input. WMR is a Microsoft platform that debuted with Windows 10; it enables developers to create apps that show digital material on holographic and virtual reality displays [ ]. The Mixed Reality Feature Tool is needed to configure Unity while developing the framework. Interfacing, generating scenes, importing packages, and adding game objects to a scene all require a basic understanding of Unity. As Unity scripts are written in C#, some fundamental C# expertise is also required. Any previous knowledge of NoSQL database systems and serverless functionality will help with the system design. Finally, to implement the application, a holographic device (HoloLens 2) is required. By collecting and revealing digital information within the work setting and surroundings, Azure mixed reality services let people create, learn, and collaborate more efficiently [ ]. Azure also helps secure and protect stored data using the 256-bit Advanced Encryption Standard, and Transport Layer Security when data are in transit [ ]. Azure mixed reality services bring 3D to mobile phones, headsets, and other devices, even when they are not connected to the internet. Azure Remote Rendering and Azure Spatial Anchors are two mixed reality cloud technologies that let developers create captivating, immersive experiences across several platforms. These services enable incorporating spatial awareness into projects when developing 3D training, predictive equipment maintenance, and design review applications, all within the context of the users' environments. The kinematics and dynamics of the assistive robot must be taken into consideration when developing the framework, as explained below.

Assistive Robot
The xArm-5 robot is an end effector robot with 5 DoF, developed by UFactory [
]. To attain a high payload capacity (3 kg) and a repeatability of 0.1 mm, this robot is equipped with high-performance harmonic reducers and brushless motors. The xArm-5 has a workspace radius of 0.7 m. The Modbus Remote Terminal Unit protocol is used by the xArm-5, and RS-485 communication is required to interact with position, speed, or force control. These characteristics combine to make the xArm-5 one of the most versatile, high-precision, multifunction robotic arms on the market. However, the xArm-5 is comparatively heavy (11.2 kg) due to the large size of its motors, and because of its design features, this robot cannot be folded back during idle time (kinematic constraints). The graphical user interface for the xArm-5 is provided by UFactory as xArm Studio, along with SDKs for Python, the Robot Operating System, and C++. The Python SDK is used to control the xArm-5 for advanced capabilities in this research.
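As an illustration of this setup, the snippet below opens a connection to the arm with the Python SDK and queries its basic state. It is a minimal sketch; the controller IP address is a placeholder, not a value from the study.

```python
# Minimal sketch: connect to the xArm-5 controller over the local network
# using UFactory's xArm-Python-SDK. The IP address is a placeholder.
from xarm.wrapper import XArmAPI

arm = XArmAPI("192.168.1.208")   # hypothetical controller address

arm.motion_enable(enable=True)   # energize the servos
arm.set_mode(0)                  # 0 = position control mode
arm.set_state(state=0)           # 0 = ready to move

print("firmware version:", arm.get_version())
print("joint angles (deg):", arm.angles)

arm.disconnect()
```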
xArm-5 Robot's Kinematics and Dynamics

During the kinematic analysis, the modified Denavit-Hartenberg (DH) parameters are adopted to specify the xArm configuration of links and joints [ ]. On the other hand, the iterative Newton-Euler method was applied for dynamic estimation to assess the joint torques corresponding to each activity of daily living.

Forward Kinematics
The link frame allocation (according to the modified DH convention) of the xArm-5 robot is shown in the figure, where the yellow dots indicate directions pointing into the viewing surface, the cyan dots represent directions pointing out of the viewing surface, and the z-axis defines the axis of rotation of each joint. To calculate the forward kinematics, the modified DH parameters corresponding to the link-frame allocation are given in the table below, and the subsequent table outlines the robot's link parameters.

Link i | ai (mm) | αi (rad) | di (mm) | θi offset (rad)
1 | 0 | –π/2 | 267 | 0
2 | 289.4886 | 0 | 0 | –1.3849
3 | 351.1587 | 0 | 0 | 2.7331
4 | 76 | –π/2 | 0 | –1.3482
5 | 0 | 0 | 97 | 0
Here, ai is the length of the common normal (the link length), αi is the angle about the common normal (the link twist), di is the offset along the previous z-axis, and θi represents the joint angle; the θi column above lists the constant joint-angle offsets. The values of the link parameters are as follows: d1 = 267 mm; the physical link dimensions are 284.5 mm, 77.5 mm, 342.5 mm, and 76 mm; and the joint-angle offsets are θ2 = –1.3849 rad, θ3 = 2.7331 rad, and θ4 = –1.3482 rad.
The general form of the homogeneous transformation matrix that relates two successive coordinate frames is represented by Equation (1):

$${}^{i-1}_{i}T = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & a_{i-1} \\ \sin\theta_i\cos\alpha_{i-1} & \cos\theta_i\cos\alpha_{i-1} & -\sin\alpha_{i-1} & -d_i\sin\alpha_{i-1} \\ \sin\theta_i\sin\alpha_{i-1} & \cos\theta_i\sin\alpha_{i-1} & \cos\alpha_{i-1} & d_i\cos\alpha_{i-1} \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (1)$$

where the upper-left 3×3 block ${}^{i-1}_{i}R$ is the rotation matrix that represents frame {i} in relation to frame {i–1}, and the upper-right 3×1 block ${}^{i-1}P_{i}$ is the vector that indicates the location of the origin of frame {i} with respect to frame {i–1}.

Moreover, $\alpha_{i-1}$ is the link twist, $a_{i-1}$ corresponds to the link length, $d_i$ stands for the link offset, and $\theta_i$ is the joint angle (radian) of the xArm-5 robot. The individual homogeneous transformation matrices that relate two successive frames of the xArm robot (see the figure) are derived using Equation (1) and are given in Multimedia Appendix 1. The homogeneous transformation matrix that relates frame {5} to frame {0} can be obtained by multiplying the individual transformation matrices, as expressed in Equation (2):

$${}^{0}_{5}T = {}^{0}_{1}T\,{}^{1}_{2}T\,{}^{2}_{3}T\,{}^{3}_{4}T\,{}^{4}_{5}T \quad (2)$$

The single transformation matrix found from Equation (2) represents the position and orientation of the reference frame attached to the end effector with respect to the base reference frame {0}.
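To make the kinematic chain concrete, the following is a minimal numerical sketch of Equations (1) and (2) using the modified DH parameters from the table above. Note the index shift: in the modified convention, the transform from frame {i–1} to frame {i} uses a_{i–1} and α_{i–1}, so the link values from the table are shifted down by one row. This re-indexing and the home-pose check are our reading of the table, not code published with the paper.

```python
# Sketch of the xArm-5 forward kinematics under the modified DH convention
# (Equations 1 and 2), using the parameters from the table above.
import numpy as np

# a_{i-1} (mm), alpha_{i-1} (rad), d_i (mm), theta offset (rad), for i = 1..5
A_PREV = [0.0, 0.0, 289.4886, 351.1587, 76.0]
ALPHA_PREV = [0.0, -np.pi / 2, 0.0, 0.0, -np.pi / 2]
D = [267.0, 0.0, 0.0, 0.0, 97.0]
OFFSET = [0.0, -1.3849, 2.7331, -1.3482, 0.0]

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform between successive frames (Equation 1)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -d * sa],
        [st * sa,  ct * sa,  ca,  d * ca],
        [0.0,      0.0,     0.0,  1.0],
    ])

def forward_kinematics(joint_angles_rad):
    """Chain the five link transforms (Equation 2)."""
    T = np.eye(4)
    for i, q in enumerate(joint_angles_rad):
        T = T @ dh_transform(A_PREV[i], ALPHA_PREV[i], D[i], q + OFFSET[i])
    return T  # end effector pose relative to base frame {0}

# Example: end effector pose at the zero (home) joint configuration.
print(np.round(forward_kinematics([0.0] * 5), 3))
```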
Dynamics of the xArm-5 Robot
The dynamic equation of the xArm-5 robot derived from the Newton-Euler formulation can be written in the following form (Equation 3):

$$\tau = M(\theta)\ddot{\theta} + V(\theta,\dot{\theta}) + G(\theta) \quad (3)$$

where $\tau$ is the 5×1 vector of joint torques, $M(\theta)$ is the 5×5 mass matrix of the manipulator, $\ddot{\theta}$ is the 5×1 acceleration vector, $V(\theta,\dot{\theta})$ is the 5×1 vector of centrifugal and Coriolis terms, and $G(\theta)$ is the 5×1 vector of gravity terms.
The table below summarizes the mass/inertia parameters of the xArm-5 robot; the joint torques for the xArm-5 were calculated using Equation (3). The subsequent table presents the range of motion of each joint.

Link | Mass (kg) | Center of mass (mm)
Link 1 | 2.177 | [0.15, 27.24, –13.57] |
Link 2 | 2.011 | [36.7, –220.9, 33.56] |
Link 3 | 2.01 | [68.34, 223.66, 1.1] |
Link 4 | 1.206 | [63.87, 29.3, 3.5] |
Link 5 | 0.17 | [0, –6.77, –10.98] |
Joint | Working range (deg)
Joint 1 | ±360
Joint 2 | –118 to 120
Joint 3 | –225 to 11
Joint 4 | ±360
Joint 5 | –97 to 180

The maximum joint speed is 180 deg/s.
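Before a joint command reaches the hardware, it is sensible to validate it against these working ranges. The guard below is a small illustrative sketch based on the table above, not part of the original framework.

```python
# Minimal guard that validates commanded joint angles (degrees) against
# the xArm-5 working ranges listed in the table above.
JOINT_LIMITS_DEG = {
    1: (-360.0, 360.0),
    2: (-118.0, 120.0),
    3: (-225.0, 11.0),
    4: (-360.0, 360.0),
    5: (-97.0, 180.0),
}

def check_joint_command(angles_deg):
    """Raise ValueError if any commanded angle falls outside its range."""
    for joint, angle in enumerate(angles_deg, start=1):
        lo, hi = JOINT_LIMITS_DEG[joint]
        if not lo <= angle <= hi:
            raise ValueError(
                f"joint {joint}: {angle:.1f} deg outside [{lo}, {hi}]"
            )
    return angles_deg

check_joint_command([0.0, -30.0, -45.0, 10.0, 90.0])  # passes silently
```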
xArm-5’s Control Architecture
The figure illustrates the control architecture for the xArm-5 robot. The xArm controller generates two types of commands, joint torque commands and Cartesian commands, and updates the torque commands every 4 ms for execution on the xArm-5 controller. The torque commands are transformed into motor currents and finally into reference voltages for the motor drivers. A proportional-integral controller is employed to achieve real-time control of the system. It guarantees that suitable control torque commands are transmitted to the joints along with the reference voltage commands for the drivers, and it minimizes the difference between the desired and measured currents.
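To make the inner current loop concrete, the sketch below implements a discrete-time proportional-integral controller at the 4 ms update period described above, closed around a toy first-order motor-winding model. The gains and motor parameters are illustrative assumptions; the actual xArm drive firmware is proprietary.

```python
# Discrete-time PI current controller sketch at the 4 ms update period.
# Gains (KP, KI) and the first-order winding stub are illustrative
# assumptions, not values from the xArm firmware.
DT = 0.004            # 4 ms control period (s)
KP, KI = 2.5, 400.0   # hypothetical PI gains

def make_pi_controller(kp, ki, dt):
    integral = 0.0
    def step(desired_current, measured_current):
        nonlocal integral
        error = desired_current - measured_current
        integral += error * dt
        return kp * error + ki * integral  # reference voltage for the driver
    return step

# Toy closed loop: drive a first-order winding model toward 1.2 A.
pi = make_pi_controller(KP, KI, DT)
current, resistance, inductance = 0.0, 1.1, 0.012  # hypothetical motor params
for _ in range(250):  # 1 s of simulated time
    voltage = pi(1.2, current)
    # di/dt = (v - R*i) / L, integrated with forward Euler
    current += DT * (voltage - resistance * current) / inductance
print(f"steady-state current ≈ {current:.3f} A")
```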
System Architecture
This section discusses the life cycle of a collaborative session involving a collaborative robot in a mixed reality environment. The user launches the application from their mixed reality headset (HoloLens 2), which is the application's access point. With the suitable digital twin and user, the provider can construct a collaborative session room. The user, as well as any authorized onlookers, can then join the collaboration session. Even though they are in separate places, all users in the collaboration session, including the host user (provider), are then in a shared digital collaboration session interacting with the identical digital twin. Depending on different needs, the provider can regulate the digital twin in several ways.
The figure shows the data flow of the connected system. Due to the distributed nature of cloud applications, the system can send a command to the digital twin, which the collaborative robot physically executes. All commands require a controller that runs on the robot's local network. WebSocket, a computer communications protocol that provides full-duplex communication channels over a single Transmission Control Protocol connection, manages all device connections and interactions [ ]. Users of the mixed reality collaborative session can then use high-level commands to control the collaborative robot. If the provider steers the robot along a desired path, the robot is confined to moving along that fixed course and at varying distances along the trajectory. The user can provide input to the provider during this procedure, and key data such as the joint parameters and torques are collected and sent to the cloud. For example, using the mixed reality headset, the user can play interactive games that target specific muscles or motions to create a sense of confidence and accomplishment in their physical development; thus, the overall quality of the therapy is improved. Internet of Things data can be stored and retrieved using the Azure cloud platform. Every collaborative robot sends telemetry data to a cloud platform, where it is stored indefinitely. In the case of teletherapy, relevant parameters such as the patient's range of motion and resistance to motion during various exercises are pushed to the cloud during each rehabilitation session. The Azure cloud platform uses these parameters to support machine learning approaches that can adaptively develop appropriate rehabilitation strategies for each user. The figure shows the details of the system architecture.

In the event sequence of the mixed reality environment, the user first launches the app on the client device and connects with the Azure App Service via WebSocket. The user is then authenticated via Azure Active Directory. Subsequently, the user can select a digital twin model they wish to interact with. The App Service retrieves the assets corresponding to the selected digital twin, including 3D models and data, and provides the user with the requested data. While the system is running, the digital twin pushes incoming data to the Event Hub, which fires a serverless function on the Azure cloud server; the serverless function updates database values. In the final step of the data flow, machine learning models will be deployed that use historical user data and real-time collaboration metrics for automated result analysis.
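As a concrete illustration of this command path, the sketch below relays high-level commands received over WebSocket to the robot controller on the local network, using Python's websockets package together with the xArm SDK. The endpoint URL and the JSON message schema are assumptions made for illustration; the paper does not publish its wire format.

```python
# Sketch: relay high-level session commands to the local robot controller.
# Requires `pip install websockets` and UFactory's xArm-Python-SDK.
# The URL and JSON schema below are illustrative assumptions.
import asyncio
import json

import websockets
from xarm.wrapper import XArmAPI

CLOUD_SESSION_URL = "wss://example-collab-session.azurewebsites.net/robot"  # hypothetical

async def relay_commands(robot: XArmAPI) -> None:
    async with websockets.connect(CLOUD_SESSION_URL) as ws:
        async for message in ws:
            cmd = json.loads(message)
            if cmd["type"] == "joint":
                # e.g. {"type": "joint", "angles_deg": [0, -30, -45, 10, 90]}
                robot.set_servo_angle(angle=cmd["angles_deg"], wait=False)
            elif cmd["type"] == "cartesian":
                # e.g. {"type": "cartesian", "pose": [x, y, z, roll, pitch, yaw]}
                robot.set_position(*cmd["pose"], wait=False)
            # Echo telemetry so the digital twin can mirror the physical arm.
            await ws.send(json.dumps({"angles_deg": robot.angles}))

if __name__ == "__main__":
    arm = XArmAPI("192.168.1.208")  # hypothetical controller address
    asyncio.run(relay_commands(arm))
```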
Control of a Collaborative Robot Through the Mixed Reality Environment
The Azure cloud platform enables multiple users to join a shared collaboration session. In this collaboration session, users can visualize and interact with one digital twin. The digital twin will move in tandem with the physical robot and mirror its movements. Users in the collaboration session can additionally send high-level commands to control the digital twin.
The figure shows the mixed reality user interface of the proposed framework. During the collaboration session, a user can control a collaborative robot in several ways, such as joint angle control, Cartesian control, and preplanned trajectory control. Virtual sliders are used in joint-based control. In Cartesian mode, on the other hand, the end effector is controlled by moving a virtual end effector in the mixed reality environment. Furthermore, the provider can set a desired path for the collaborative robot to follow in the preplanned trajectory control mode.
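These three control modes map naturally onto calls in UFactory's Python SDK. The sketch below shows one such mapping with placeholder slider values, poses, and waypoints; the actual interface logic lives in the Unity app and is not reproduced here.

```python
# Illustrative mapping of the three control modes onto xArm-Python-SDK calls.
# Slider values, poses, and waypoints are placeholders.
from xarm.wrapper import XArmAPI

arm = XArmAPI("192.168.1.208")  # hypothetical controller address
arm.motion_enable(enable=True)
arm.set_mode(0)
arm.set_state(state=0)

# 1) Joint angle control: one virtual slider per joint (degrees).
slider_angles_deg = [0.0, -20.0, -30.0, 10.0, 45.0]
arm.set_servo_angle(angle=slider_angles_deg, speed=20, wait=True)

# 2) Cartesian control: target pose from the dragged virtual end effector
#    (x, y, z in mm; roll, pitch, yaw in degrees).
arm.set_position(x=300, y=0, z=250, roll=180, pitch=0, yaw=0, speed=50, wait=True)

# 3) Preplanned trajectory control: replay a fixed sequence of waypoints.
trajectory_deg = [
    [0.0, -20.0, -30.0, 10.0, 45.0],
    [10.0, -15.0, -35.0, 5.0, 45.0],
    [20.0, -10.0, -40.0, 0.0, 45.0],
]
for waypoint in trajectory_deg:
    arm.set_servo_angle(angle=waypoint, speed=20, wait=True)

arm.disconnect()
```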
Ethical Considerations
The researchers involved in the project took part in the experiments to demonstrate the proof of concept of the teleoperation using the proposed framework. Therefore, ethical approval is not required for this study. The project did not focus on any intervention development or intervention efficacy testing; hence, we did not recruit any participants and did not seek ethics approval for this project.
Results
To validate the proposed framework, an end effector type 5-DoF assistive robot was mounted on a table (figure part a). The purpose of this assistive robot is to deliver therapy either along a pretrained trajectory or under the control of a practitioner. The designed app, which contains the mixed reality interface, is deployed to the HoloLens 2 to control the assistive robot. A practitioner can wear the HoloLens 2 and control the therapeutic sessions remotely in a mixed reality environment (figure part b). Part c of the figure depicts a collaboration session between a practitioner and a patient in which the robot can be controlled and monitored remotely in a mixed reality environment using the proposed framework.

The system provides an overall cross-platform rehabilitation experience for its users. A clinician can remotely assist a patient in both active and passive rehabilitation therapy via a shared mixed reality–based collaboration session. The rehabilitation robot that exists locally with the patient has a digital twin that lives on the Azure cloud platform. A therapist or clinician can interact with this virtual digital twin and use it to assist with rehabilitation therapy.

The figure shows a mixed reality interface that visualizes robot data such as torque and temperature for each joint, as well as force sensor data. In this manner, the therapist can visualize the key metrics that estimate the patient's improvement, such as range of motion and resistance, and can use these data to recommend optimal rehabilitation plans for the patient. Users in the collaboration session can summon data visualizations of both real-time and historical data.
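A telemetry payload of the kind visualized in this interface might be assembled as follows. The field names and reader functions are hypothetical stand-ins, as the paper does not specify its telemetry schema.

```python
# Hypothetical telemetry payload for the per-joint torque/temperature and
# force-sensor visualizations described above. Field names and the reader
# functions are illustrative stand-ins, not the framework's actual schema.
import json
import time

def read_joint_torques():       # placeholder for the robot-side query
    return [0.12, 4.83, 2.97, 0.64, 0.08]   # N·m, example values

def read_joint_temperatures():  # placeholder for the robot-side query
    return [31, 36, 35, 33, 30]             # deg C, example values

payload = {
    "timestamp": time.time(),
    "joint_torques_nm": read_joint_torques(),
    "joint_temperatures_c": read_joint_temperatures(),
    "force_sensor_n": [0.0, 0.0, 9.6],      # example Cartesian force reading
}
print(json.dumps(payload, indent=2))        # ready to push to the Event Hub
```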
Discussion

A framework for collaborative robots has many potential applications in industry and telehealth, such as teleoperation and telerehabilitation. In particular, a pandemic such as COVID-19 affects all aspects of health care and has made clear the need for telehealth, which can help health care workers and patients protect themselves from the risk of disease transmission. Telehealth can also offer personalized rehabilitation programs, real-time control, continuous interaction with doctors, negligible waiting time, and so on. Using the proposed framework, it will be straightforward to implement different systems for teleoperation. While the proposed mixed reality–based framework promises a stable control system for collaborative robots, it has some limitations [ - ]. To use the framework, there must be a continuous and stable connection; the system becomes unstable and inoperable if the connection is lost. Security of personal health data is also a concern. Furthermore, the system is expensive compared with other regular rehabilitation setups, as a holographic device and a collaborative robot are needed [ ]. It is worth mentioning that a holographic device such as HoloLens 2 should not be worn for extended periods of time; possible side effects include headache, dizziness, or loss of balance [ , ]. It is important to use these tools responsibly to achieve the maximum benefit from the services they provide. To improve the framework, future work will focus on stable communication, personalized rehabilitation programs, and real-time control and monitoring by the expert.

Conclusion
Collaborative robots and their applications have a significant impact on the rehabilitation and teleoperation fields, and a framework for collaborative robots is needed to meet the growing demand. The proposed mixed reality–based framework can be used for different telehealth applications such as teleoperation and telerehabilitation, and it can guide other researchers in conducting further studies. Several state-of-the-art technologies were used while developing the framework, including Unity, the WMR platform, Azure mixed reality services, and HoloLens 2. The framework was validated by conducting a comprehensive collaborative experiment using a 5-DoF collaborative robot (xArm-5) in a mixed reality environment. In the future, this work will continue with mixed reality–based applications for collaborative robots, and further research will be conducted on telerehabilitation and teleoperation to design a more robust and stable framework.
Acknowledgments
This research is partially funded by the Connected System Institute, University of Wisconsin-Milwaukee (Grant AAH9481).
Conflicts of Interest
None declared.
Multimedia Appendix 1
Individual homogeneous transfer matrix.
DOCX File, 14 KB
Multimedia Appendix 2
Experiment of collaborative session via mixed reality–based framework.
MOV File, 73149 KB

References
- Muthoni J. Use of Robotics in Our Daily Lives. Jonas Muthoni. 2021 Jun 16. URL: https://jonasmuthoni.com/blog/use-of-robotics-in-daily-lives/ [accessed 2022-01-02]
- Benefits of Robots. RobotWorx. 2021. URL: https://www.robots.com/articles/benefits-of-robots [accessed 2022-01-02]
- Why Cobots? Universal Robots. 2021. URL: https://www.universal-robots.com/products/collaborative-robots-cobots-benefits [accessed 2022-01-02]
- Khasis D. How Humans and Robots Can Work Together for Better Warehouse Management. RIS News. 2019. URL: https://risnews.com/how-humans-and-robots-can-work-together-better-warehouse-management [accessed 2022-01-02]
- Human-Robot Collaboration: 3 Case Studies. Wevolver. 2020. URL: https://www.wevolver.com/article/humanrobot.collaboration.3.case.studies [accessed 2022-01-02]
- Grand View Research 2019. URL: https://www.grandviewresearch.com/industry-analysis/collaborative-robots-market [accessed 2022-01-02]
- Schou C, Madsen O. A plug and produce framework for industrial collaborative robots. International Journal of Advanced Robotic Systems 2017 Jul 17;14(4):172988141771747. [CrossRef]
- Brahmi B, Laraki MH, Saad M, Rahman M, Ochoa-Luna C, Brahmi A. Compliant adaptive control of human upper-limb exoskeleton robot with unknown dynamics based on a Modified Function Approximation Technique (MFAT). Robotics and Autonomous Systems 2019 Jul;117:92-102. [CrossRef]
- Brahmi B, Saad M, Brahmi A, Luna CO, Rahman MH. Compliant control for wearable exoskeleton robot based on human inverse kinematics. International Journal of Advanced Robotic Systems 2018 Nov 22;15(6):172988141881213. [CrossRef]
- Maric B, Mutka A, Orsag M. Collaborative Human-Robot Framework for Delicate Sanding of Complex Shape Surfaces. IEEE Robot. Autom. Lett 2020 Apr;5(2):2848-2855. [CrossRef]
- Kim W, Peternel L, Lorenzini M, Babič J, Ajoudani A. A Human-Robot Collaboration Framework for Improving Ergonomics During Dexterous Operation of Power Tools. Robotics and Computer-Integrated Manufacturing 2021 Apr;68:102084. [CrossRef]
- What is mixed reality? Microsoft. 2022 Apr 28. URL: https://docs.microsoft.com/en-us/windows/mixed-reality/discover/mixed-reality [accessed 2022-01-02]
- Importance of Mixed Reality in Real World. AREA. 2020. URL: https://thearea.org/ar-news/importance-of-mixed-reality-in-real-world/ [accessed 2022-01-02]
- Zeng F, Xiao J, Liu H. Force/Torque Sensorless Compliant Control Strategy for Assembly Tasks Using a 6-DOF Collaborative Robot. IEEE Access 2019;7:108795-108805. [CrossRef]
- Grigore LS, Priescu I, Joita D, Oncioiu I. The Integration of Collaborative Robot Systems and Their Environmental Impacts. Processes 2020 Apr 23;8(4):494. [CrossRef]
- Safeea M, Neto P, Bearee R. On-line collision avoidance for collaborative robot manipulators by adjusting off-line generated paths: An industrial use case. Robotics and Autonomous Systems 2019 Sep;119:278-288. [CrossRef]
- Kiguchi K, Rahman MH, Sasaki M, Teramoto K. Development of a 3DOF mobile exoskeleton robot for human upper-limb motion assist. Robotics and Autonomous Systems 2008 Aug;56(8):678-691. [CrossRef]
- Rahman MH, Kittel-Ouimet T, Saad M, Kenné J, Archambault PS. Development and Control of a Robotic Exoskeleton for Shoulder, Elbow and Forearm Movement Assistance. Applied Bionics and Biomechanics 2012;9(3):275-292. [CrossRef]
- Assad-Uz-Zaman M, Islam M, Rahman M, Wang Y, McGonigle E. Kinect Controlled NAO Robot for Telerehabilitation. Journal of Intelligent Systems 2020 Jul 28;30(1):224-239. [CrossRef]
- Brahmi B, Ahmed T, Bojairami IE, Swapnil AAZ, Assad-Uz-Zaman M, Schultz K, et al. Flatness Based Control of a Novel Smart Exoskeleton Robot. IEEE/ASME Trans. Mechatron 2022 Apr;27(2):974-984. [CrossRef]
- Losey DP, Jeon HJ, Li M, Srinivasan K, Mandlekar A, Garg A, et al. Learning latent actions to control assistive robots. Auton Robots 2022 Aug 04;46(1):115-147 [FREE Full text] [CrossRef] [Medline]
- Erickson Z, Gangaram V, Kapusta A, Liu C, Kemp C. Assistive Gym: A Physics Simulation Framework for Assistive Robotics. 2020 Presented at: IEEE International Conference on Robotics and Automation (ICRA); May 31-August 31, 2020; Paris, France. [CrossRef]
- Sunny MSH, De Caro JS, Rulik I, Zarif MII, Rahman M, Wang I, et al. Nose Tracking Assistive Robot Control for the People with Motor Dysfunctions. Archives of Physical Medicine and Rehabilitation 2021 Oct;102(10):e82-e83. [CrossRef]
- Sunny MSH, Zarif MII, Rulik I, Sanjuan J, Rahman MH, Ahamed SI, et al. Eye-gaze control of a wheelchair mounted 6DOF assistive robot for activities of daily living. J Neuroeng Rehabil 2021 Dec 18;18(1):173 [FREE Full text] [CrossRef] [Medline]
- Shin H, Seo K, Rhim S. Allowable maximum safe velocity control based on human-robot distance for collaborative robot. 2018 Presented at: IEEE International Conference on Ubiquitous Robots (UR); 26-30 June 2018; Honolulu, HI, USA p. 401-405. [CrossRef]
- Kim W, Balatti P, Lamon E, Ajoudani A. MOCA-MAN: A MObile and reconfigurable Collaborative Robot Assistant for conjoined huMAN-robot actions. 2020 Presented at: IEEE International Conference on Robotics and Automation (ICRA); May 31-August 31, 2020; Paris, France. [CrossRef]
- Asfour T, Kaul L, Wächter M, Ottenhaus S, Weiner P, Rader S, et al. Armar-6: A collaborative humanoid robot for industrial environments. 2018 Presented at: IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids); November 6-9, 2018; Beijing, China p. 447-454. [CrossRef]
- Perico DH, Homem TPD, Almeida AC, Silva IJ, Vilão CO, Ferreira VN, et al. Humanoid Robot Framework for Research on Cognitive Robotics. J Control Autom Electr Syst 2018 May 31;29(4):470-479. [CrossRef]
- Sandoval J, Su H, Vieyres P, Poisson G, Ferrigno G, De Momi E. Collaborative framework for robot-assisted minimally invasive surgery using a 7-DoF anthropomorphic robot. Robotics and Autonomous Systems 2018 Aug;106:95-106. [CrossRef]
- Khalid A, Kirisci P, Khan ZH, Ghrairi Z, Thoben K, Pannek J. Security framework for industrial collaborative robotic cyber-physical systems. Computers in Industry 2018 May;97:132-145. [CrossRef]
- Brock H, Ponce Chulani J, Merino L, Szapiro D, Gomez R. Developing a Lightweight Rock-Paper-Scissors Framework for Human-Robot Collaborative Gaming. IEEE Access 2020;8:202958-202968. [CrossRef]
- Sadrfaridpour B, Wang Y. Collaborative Assembly in Hybrid Manufacturing Cells: An Integrated Framework for Human–Robot Interaction. IEEE Trans. Automat. Sci. Eng 2018 Jul;15(3):1178-1192. [CrossRef]
- Cousins M, Yang C, Chen J, He W, Ju Z. Development of a mixed reality based interface for human robot interaction. 2017 Presented at: IEEE International Conference on Machine Learning and Cybernetics (ICMLC); July 9-12, 2017; Ningbo, China p. 27-34. [CrossRef]
- Cancedda L, Cannavò A, Garofalo G, Lamberti F, Montuschi P, Paravati G. Mixed reality-based user interaction feedback for a hand-controlled interface targeted to robot teleoperation. 2017 Presented at: International Conference on Augmented Reality, Virtual Reality and Computer Graphics; June 12-15, 2017; Ugento, Italy p. 447-463. [CrossRef]
- Ostanin M, Yagfarov R, Klimchik A. Interactive Robots Control Using Mixed Reality. IFAC-PapersOnLine 2019;52(13):695-700. [CrossRef]
- Siegele D, Steiner D, Giusti A, Riedl M, Matt D. Optimizing Collaborative Robotic Workspaces in Industry by Applying Mixed Reality. 2021 Presented at: International Conference on Augmented Reality, Virtual Reality and Computer Graphics; September 7-10, 2021; Virtual Event p. 544-559. [CrossRef]
- Ostanin M, Yagfarov R, Devitt D, Akhmetzyanov A, Klimchik A. Multi robots interactive control using mixed reality. International Journal of Production Research 2020 Nov 07;59(23):7126-7138. [CrossRef]
- Choosing your engine. Microsoft. URL: https://docs.microsoft.com/en-us/windows/mixed-reality/develop/choosing-an-engine?tabs=unity [accessed 2022-01-02]
- Download DirectX End-User Runtime Web Installer from Official Microsoft Download Center. Microsoft. URL: https://www.microsoft.com/en-us/download/details.aspx?id=35 [accessed 2022-01-02]
- Unity Real-Time Development Platform. Unity. URL: https://unity.com/ [accessed 2022-01-02]
- Configure your project without MRTK. Microsoft. 2021. URL: https://docs.microsoft.com/en-us/windows/mixed-reality/develop/unity/configure-unity-project [accessed 2022-01-02]
- Azure mixed reality cloud services overview. Microsoft. URL: https://docs.microsoft.com/en-us/windows/mixed-reality/develop/mixed-reality-cloud-services [accessed 2022-01-02]
- Data Privacy in the Trusted Cloud. Microsoft Azure. URL: https://azure.microsoft.com/en-us/overview/trusted-cloud/privacy/ [accessed 2022-01-12]
- UFACTORY xArm 5 Lite. UFACTORY. URL: https://www.ufactory.cc/products/xarm-5-lite-2020 [accessed 2022-01-02]
- Kovalchuk A. Modified Denavit-Hartenberg Coordinate System for Robot Actuating Mechanisms with Tree-like Kinematic Structure. S&E BMSTU 2014 Dec 03;15(11):244. [CrossRef]
- What is web socket and how it is different from the HTTP? GeeksforGeeks. 2022 Feb 21. URL: https://www.geeksforgeeks.org/what-is-web-socket-and-how-it-is-different-from-the-http/ [accessed 2022-01-02]
- Van Krevelen D, Poelman R. A Survey of Augmented Reality Technologies, Applications and Limitations. IJVR 2010 Jan 01;9(2):1-20. [CrossRef]
- Filipenko M, Angerer A, Reif W. Opportunities and limitations of mixed reality holograms in industrial robotics. arXiv preprint 2020 Jan 22. [CrossRef]
- Takata T, Nakabayashi S, Kondo H, Yamamoto M, Furui S, Shiraishi K, et al. Mixed Reality Visualization of Radiation Dose for Health Professionals and Patients in Interventional Radiology. J Med Syst 2021 Feb 17;45(4):38 [FREE Full text] [CrossRef] [Medline]
- Morimoto T, Kobayashi T, Hirata H, Otani K, Sugimoto M, Tsukamoto M, et al. XR (Extended Reality: Virtual Reality, Augmented Reality, Mixed Reality) Technology in Spine Medicine: Status Quo and Quo Vadis. J Clin Med 2022 Jan 17;11(2):470 [FREE Full text] [CrossRef] [Medline]
- Condino S, Turini G, Parchi PD, Viglialoro RM, Piolanti N, Gesi M, et al. How to Build a Patient-Specific Hybrid Simulator for Orthopaedic Open Surgery: Benefits and Limits of Mixed-Reality Using the Microsoft HoloLens. J Healthc Eng 2018 Nov 01;2018:5435097-5435012 [FREE Full text] [CrossRef] [Medline]
Abbreviations
DH: Denavit-Hartenberg
DOF: Degrees of Freedom
MR: Mixed Reality
SDK: software development kit
WMR: Windows Mixed Reality
Edited by G Eysenbach; submitted 23.01.22; peer-reviewed by MR Islam, N Silva; comments to author 30.03.22; revised version received 13.04.22; accepted 26.04.22; published 17.05.22
Copyright©Md Tanzil Shahria, Md Samiul Haque Sunny, Md Ishrak Islam Zarif, Md Mahafuzur Rahaman Khan, Preet Parag Modi, Sheikh Iqbal Ahamed, Mohammad H Rahman. Originally published in JMIR Biomedical Engineering (https://biomedeng.jmir.org), 17.05.2022.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Biomedical Engineering, is properly cited. The complete bibliographic information, a link to the original publication on https://biomedeng.jmir.org/, as well as this copyright and license information must be included.