Robotics has shown significant potential in assisting people with disabilities to enhance their independence and involvement in daily activities. Indeed, a long-term societal impact is expected in home-care assistance with the deployment of intelligent robotic interfaces. This work presents a human-robot interface developed to help people with upper limb impairments, such as those affected by stroke injuries, in activities of everyday life. The proposed interface leverages a visual servoing guidance component, which utilizes an inexpensive but effective laser emitter device. By projecting the laser on a surface within the workspace of the robot, the user is able to guide the robotic manipulator to desired locations to reach, grasp, and manipulate objects. Considering the targeted users, the laser emitter is worn on the head, allowing the user to intuitively control the robot motions with head movements that point the laser in the environment; the laser projection is detected by a neural network-based perception module. The interface implements two control modalities: the first allows the user to select specific locations directly, commanding the robot to reach those points; the second employs a paper keyboard with buttons that can be virtually pressed by pointing the laser at them. These buttons enable a more direct control of the Cartesian velocity of the end-effector and provide additional functionalities, such as commanding the action of the gripper. The proposed interface is evaluated in a series of manipulation tasks involving a 6-DOF assistive robot manipulator equipped with a 1-DOF beak-like gripper. The two interface modalities are combined to successfully accomplish tasks requiring bimanual capacity, which is usually compromised in people with upper limb impairments.
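A minimal sketch of how the two control modalities described above could be realized, assuming a hypothetical paper-keyboard layout and illustrative velocity values; the detector interface and function names are not from the paper.

```python
# Sketch (not the authors' implementation) of mapping a detected laser dot to
# either a reach command or a virtual-button Cartesian velocity command.
import numpy as np

# Hypothetical paper-keyboard layout: button name -> (pixel region, Cartesian velocity)
BUTTONS = {
    "+x": ((0, 0, 100, 100), np.array([0.05, 0.0, 0.0])),    # region (u0, v0, u1, v1), m/s
    "-x": ((100, 0, 200, 100), np.array([-0.05, 0.0, 0.0])),
    "+z": ((0, 100, 100, 200), np.array([0.0, 0.0, 0.05])),
    "grip": ((100, 100, 200, 200), None),                     # None -> toggle the gripper
}

def laser_to_command(dot_uv, mode, pixel_to_world):
    """Map the laser dot detected by the perception module to a robot command.

    dot_uv: (u, v) pixel coordinates of the laser projection.
    mode: "reach" for direct location selection, "keyboard" for the paper keyboard.
    pixel_to_world: callable projecting pixels to a 3D point on the surface.
    """
    if mode == "reach":
        # First modality: command the end-effector to the pointed location.
        return {"type": "reach", "target": pixel_to_world(dot_uv)}
    # Second modality: check which virtual button the dot falls inside.
    u, v = dot_uv
    for name, (region, vel) in BUTTONS.items():
        u0, v0, u1, v1 = region
        if u0 <= u < u1 and v0 <= v < v1:
            if vel is None:
                return {"type": "gripper_toggle"}
            return {"type": "twist", "linear_velocity": vel}
    return {"type": "idle"}  # laser outside any button: no motion

# Example: the dot lands on the "+x" button, so a small forward velocity is issued.
print(laser_to_command((50, 40), "keyboard", lambda uv: np.zeros(3)))
```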
In recent years, several robotic end-effectors have been developed and made available on the market. Nevertheless, their adoption in industrial contexts is still limited due to a burdensome integration, which strongly relies on customized software modules specific to each end-effector. Indeed, to enable the functionalities of these end-effectors, dedicated interfaces must be developed to account for the different end-effector characteristics, such as finger kinematics, actuation systems, and communication protocols. To address the challenges described above, we present ROS End-Effector, an open-source framework capable of accommodating a wide range of robotic end-effectors with different grasping capabilities (grasping, pinching, or independent finger dexterity) and hardware characteristics. Rather than controlling each end-effector in a different and customized way, the ROS End-Effector framework masks the physical hardware differences and permits controlling the end-effector using a set of automatically extracted high-level grasping primitives. By leveraging hardware-agnostic software modules, including a hardware abstraction layer (HAL), application programming interfaces (APIs), simulation tools, and graphical user interfaces (GUIs), ROS End-Effector effectively facilitates the integration of diverse end-effector devices. The capabilities of the proposed framework in supporting different robotic end-effectors are demonstrated in both simulated and real hardware experiments using a variety of end-effectors with diverse characteristics, ranging from under-actuated grippers to anthropomorphic robotic hands. Finally, from the user perspective, the manuscript provides a set of examples on the use of the framework, showing its flexibility in integrating a new end-effector module.
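The sketch below illustrates the idea of hardware-agnostic grasping primitives from the user side; the class and field names are hypothetical and do not reproduce the actual ROS End-Effector API.

```python
# Illustrative sketch: the same high-level primitive call drives a two-finger
# gripper or a multi-fingered hand, masking the device differences.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Primitive:
    name: str                      # e.g. "grasp", "pinch"
    joints: List[str]              # actuated joints involved in the primitive
    closed_pose: Dict[str, float]  # joint values at 100% activation

class EndEffectorInterface:
    """Generic front-end: the user commands primitives, not device joints."""
    def __init__(self, primitives: Dict[str, Primitive], send_joint_refs):
        self._primitives = primitives            # extracted per device
        self._send_joint_refs = send_joint_refs  # HAL callback towards the hardware

    def actuate(self, primitive_name: str, percentage: float):
        p = self._primitives[primitive_name]
        refs = {j: percentage * p.closed_pose[j] for j in p.joints}
        self._send_joint_refs(refs)

# A simple 1-DOF gripper exposes the same "grasp" call an anthropomorphic hand would.
gripper = EndEffectorInterface(
    {"grasp": Primitive("grasp", ["finger_joint"], {"finger_joint": 0.8})},
    send_joint_refs=print,
)
gripper.actuate("grasp", 0.5)   # half-closed, whatever the underlying hardware is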
Teleoperation permits controlling robots from a safe distance while performing tasks in a remote environment. Kinematic differences between the input device and the remotely controlled manipulator, or the existence of redundancy in the remote robot, may make it challenging for the human operator to move the remote robot intuitively as desired. Motivated by the above challenges, this work introduces TelePhysicalOperation, a novel teleoperation concept that relies on a virtual physical interaction interface between the human operator and the remote robot, in a manner equivalent to a “Marionette”-based interaction interface. With the proposed approach, the user can virtually “interact” with the remote robot through the application of virtual forces, which are generated by the operator tracking system and can then be selectively applied to any body part of the remote robot along its kinematic chain. Thanks to the underlying control architecture, the remote robot generates motions that comply with the applied virtual forces. The proposed method permits commanding the robot from a distance by exploiting the intuitiveness of the “Marionette”-based physical interaction with the robot in a virtual/remote manner. The details of the proposed approach are introduced and its effectiveness is demonstrated through a number of experimental trials executed on the CENTAURO, a hybrid leg-wheel platform with an anthropomorphic upper body.
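A minimal numerical sketch of how a virtual force applied to a body part along the chain can drive compliant joint motion; the Jacobian-transpose admittance law below is a standard scheme used here for illustration, and the actual TelePhysicalOperation control architecture may differ.

```python
import numpy as np

def compliant_joint_velocity(jacobian_at_link, virtual_force, admittance_gain=0.1):
    """Joint velocities complying with a virtual force applied at a given link.

    jacobian_at_link: 3xN positional Jacobian of the link the operator "touches".
    virtual_force: 3D virtual force generated from the operator tracking system.
    """
    return admittance_gain * jacobian_at_link.T @ virtual_force

# Toy 2-link planar arm, force applied at the elbow (only joint 1 can react to it).
J_elbow = np.array([[0.0, 0.0],
                    [0.5, 0.0],
                    [0.0, 0.0]])          # 3x2 Jacobian of the elbow point
F_virtual = np.array([0.0, 2.0, 0.0])     # virtual "pull" from the operator's hand
print(compliant_joint_velocity(J_elbow, F_virtual))  # motion of joint 1 only
```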
The teleoperation of complex, kinematically redundant robots with loco-manipulation capabilities represents a challenge for human operators, who have to learn how to operate the many degrees of freedom of the robot to accomplish a desired task. In this context, developing an easy-to-learn and easy-to-use human-robot interface is paramount. Recent works introduced a novel teleoperation concept, which relies on a virtual physical interaction interface between the human operator and the remote robot equivalent to a "Marionette" control, but whose feedback on the human side was limited to visual feedback. In this paper, we propose extending the "Marionette" interface by adding a wearable haptic interface to cope with the limitations of the previous work. Leveraging the additional haptic feedback modality, the human operator gains full sensorimotor control over the robot, and the awareness of the robot’s response and of its interactions with the environment is greatly improved. We evaluated the proposed interface and the related teleoperation framework with naive users, assessing the teleoperation performance and the user experience with and without haptic feedback. The conducted experiments consisted of a loco-manipulation mission with the CENTAURO robot, a hybrid leg-wheel quadruped with a humanoid dual-arm upper body.
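A hedged sketch of one way the added haptic channel could render the robot's interaction with the environment on a wearable device; the thresholds and scaling are illustrative assumptions, not values from the paper.

```python
import numpy as np

def force_to_vibration(contact_force, f_min=1.0, f_max=30.0):
    """Map an estimated 3D contact force [N] to a vibration intensity in [0, 1]."""
    magnitude = float(np.linalg.norm(contact_force))
    if magnitude < f_min:
        return 0.0  # ignore sensor noise / free-space motion
    return min((magnitude - f_min) / (f_max - f_min), 1.0)

print(force_to_vibration(np.array([0.0, 12.0, 5.0])))  # moderate contact -> ~0.4
```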
The teleoperation of mobile manipulators may pose significant challenges, demanding complex interfaces and placing a substantial burden on the human operator due to the need to switch continuously between the manipulation of the arm and the control of the mobile platform. Hence, several works have considered exploiting shared control techniques to overcome this issue and, in general, to facilitate task execution. This work proposes a manipulability-aware shared loco-manipulation motion generation method to facilitate the execution of telemanipulation tasks with mobile manipulators. The method uses the manipulability level of the end-effector to control the generation of the mobile base and manipulator motions, facilitating their simultaneous control by the operator while executing telemanipulation tasks. Therefore, the operator can exclusively control the end-effector, while the underlying architecture generates the mobile platform commands depending on the end-effector manipulability level. The effectiveness of this approach is demonstrated with a number of experiments in which the CENTAURO robot, a hybrid leg-wheel platform with an anthropomorphic upper body, is teleoperated to execute a set of telemanipulation tasks.
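An illustrative sketch of manipulability-aware command sharing, with assumed gains and thresholds: the Yoshikawa manipulability measure of the arm decides how much of the operator's end-effector velocity command is routed to the mobile base instead of the manipulator; the exact blending law of the paper may differ.

```python
import numpy as np

def manipulability(jacobian):
    """Yoshikawa manipulability measure w = sqrt(det(J J^T))."""
    return float(np.sqrt(np.linalg.det(jacobian @ jacobian.T)))

def share_command(ee_velocity_cmd, jacobian, w_low=0.02, w_high=0.08):
    """Split the operator's Cartesian command between arm and mobile base.

    Near w_high the arm tracks the whole command; near w_low (close to the
    workspace boundary / a singularity) the base takes over the motion.
    """
    w = manipulability(jacobian)
    alpha = np.clip((w - w_low) / (w_high - w_low), 0.0, 1.0)
    arm_cmd = alpha * ee_velocity_cmd
    base_cmd = (1.0 - alpha) * ee_velocity_cmd
    return arm_cmd, base_cmd

J = np.array([[0.1, 0.0, 0.0],
              [0.0, 0.1, 0.0],
              [0.0, 0.0, 0.1]])  # toy 3x3 arm Jacobian with low manipulability
print(share_command(np.array([0.1, 0.0, 0.0]), J))  # most of the command goes to the base
```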
The teleoperation of robots with human-like capabilities may pose significant challenges to the human operator due to the kinematic complexity and redundancy of these robots. Bimanual telemanipulation represents such a challenging task, requiring precise coordination of the two arms to perform a stable bimanual grasp on an object and eventually transport it while maintaining the grasp. In this work, we present a shared control telemanipulation interface to facilitate the bimanual grasping and transportation of objects of unknown mass. With the proposed method, the robot is able to transport the object while autonomously maintaining a sufficient amount of grasping force and accepting commands from the operator to reach the desired location. As with humans, it is not necessary to know the weight of the object in advance; instead, the robot estimates it during the lifting phase. On the basis of the estimated weight, the required amount of grasping force is computed. During object transportation, the robot autonomously regulates the grasping forces in a shared control fashion, allowing the operator to seamlessly command only the trajectories of the object. The proposed method has been implemented and validated on the CENTAURO robot, a quadrupedal platform with a humanoid dual-arm upper body, performing experiments in which objects of different weights and dimensions must be picked up and transported.
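A minimal sketch of the reasoning described above, with assumed friction and margin values: the object mass is estimated during the lifting phase (e.g. from the measured vertical load) and the bimanual grasping force is then set so that friction at the two contacts can support the weight.

```python
G = 9.81  # gravity [m/s^2]

def estimate_mass(measured_vertical_load_n):
    """Mass estimated while lifting, from the extra vertical load sensed at the arms."""
    return measured_vertical_load_n / G

def required_grasp_force(mass_kg, friction_coeff=0.5, safety_margin=1.5):
    """Normal force per arm so that two frictional contacts hold the object:
    2 * mu * F_n >= m * g  ->  F_n >= m * g / (2 * mu), scaled by a safety margin."""
    return safety_margin * mass_kg * G / (2.0 * friction_coeff)

mass = estimate_mass(19.6)         # ~2 kg object sensed during lifting
print(required_grasp_force(mass))  # ~29.4 N squeezing force per arm
```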
Nowadays, a wide range of industrial grippers is available on the market, and their integration into robotic automation systems usually relies on dedicated software modules and interfaces specific to each gripper. During the past two decades, more sophisticated end-effector modules have been developed that aim to provide additional functionality, including dexterous manipulation skills as well as sensing capabilities. The integration of these new devices is usually not trivial, requiring the development of brand new, tailor-made software modules and interfaces, which is a time-consuming and inefficient activity. To address the above issue and facilitate the quick integration and validation of new end-effectors, we developed the ROS End-Effector open-source framework, which provides a software infrastructure capable of accommodating a range of robotic end-effectors with different hardware characteristics (number of fingers, actuators, sensing modules, and communication protocols) and capabilities (different manipulation skills, such as grasping, pinching, or independent finger dexterity), effectively facilitating their integration through the development of hardware-agnostic software modules, simulation tools, and application programming interfaces (APIs). A key feature of the ROS End-Effector framework is that, rather than controlling each end-effector in a different and customized way, following specific protocols and instruction data fields, it masks the physical hardware differences and limitations (e.g., kinematic and dynamic model, actuators, sensors, update frequency) and permits commanding the end-effector using a set of high-level grasping primitives. The framework's capabilities and flexibility in supporting different robotic end-effectors are demonstrated both in kinematic/dynamic simulation and in real hardware experiments.
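A sketch of the integration idea from the device side; the class and method names are illustrative, not the actual ROS End-Effector HAL: a new gripper is added by implementing a small hardware abstraction layer, while the primitives, simulation tools, and APIs above it remain unchanged.

```python
from abc import ABC, abstractmethod
from typing import Dict

class EndEffectorHAL(ABC):
    """Hardware abstraction layer every supported end-effector implements."""

    @abstractmethod
    def init(self) -> bool: ...                        # open the device-specific channel

    @abstractmethod
    def move(self, joint_refs: Dict[str, float]): ...  # send joint position references

    @abstractmethod
    def sense(self) -> Dict[str, float]: ...           # read back joint states

class TwoFingerGripperHAL(EndEffectorHAL):
    """Example integration of a simple 1-DOF gripper."""
    def init(self) -> bool:
        self._state = {"finger_joint": 0.0}            # stand-in for a real connection
        return True
    def move(self, joint_refs):
        self._state.update(joint_refs)                 # a real HAL would write to the bus
    def sense(self):
        return dict(self._state)

hal = TwoFingerGripperHAL()
hal.init()
hal.move({"finger_joint": 0.8})
print(hal.sense())
```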
Unpublished (Yet)
2023
2023
Intuitive Laser-Based Teleoperation of Complex Robots Using Neural Networks and Behavior Trees