Publications
Scientific publication list
Journal articles
2025
- An intuitive tele-collaboration interface exploring laser-based interaction and behavior trees
  Davide Torielli, Luca Muratore, and Nikos Tsagarakis
  Robotics and Autonomous Systems, 2025
The recent advancements in the development of robotic systems that offer advanced loco-manipulation capabilities have opened new opportunities for the employment of such platforms in various domains. However, despite the increased range of offered capabilities, the collaboration with these robotic platforms to execute tasks through common human–robot interaction interfaces is still an open challenge. In this article, we present a novel human–robot interaction interface that permits the user to intuitively command and control the manipulation and locomotion abilities of the robot by exploring a visual servoing guidance method realized with a laser emitter device. By pointing the laser at locations and objects in the environment where the robot is operating, the operator is able to command even highly articulated robots intuitively and efficiently. The detection of the laser projection is performed by a neural network that provides robust, real-time tracking of the laser spot. Combined with the responsiveness of the laser detection, a Behavior Tree-based motion planner is employed to reactively select and generate the autonomous robot motions to reach the indicated target. This combination allows the operator to communicate goal locations and paths to follow without requiring prior knowledge of the system, and without worrying about the generation of the potentially complex loco-manipulation robot actions. The effectiveness of the proposed interface is demonstrated with the CENTAURO robot, a hybrid leg-wheel platform with an anthropomorphic upper body, exploiting its abilities to accomplish a number of locomotion and manipulation tasks.
@article{LaserJournal,
  title = {An intuitive tele-collaboration interface exploring laser-based interaction and behavior trees},
  author = {Torielli, Davide and Muratore, Luca and Tsagarakis, Nikos},
  journal = {Robotics and Autonomous Systems},
  volume = {193},
  pages = {105054},
  year = {2025},
  issn = {0921-8890},
  doi = {10.1016/j.robot.2025.105054},
  url = {https://www.sciencedirect.com/science/article/pii/S092188902500140X},
  keywords = {Human-robot interface, Human-centered robotics, Visual servoing, Motion planning},
  dimensions = {true},
}
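Since the planner in this paper is built on Behavior Trees, a minimal self-contained sketch may help illustrate the reactive pattern the abstract describes: every tick re-evaluates the tree from the root, so a moving laser goal immediately re-routes between locomotion and manipulation. Node names, the blackboard layout, and the 0.5 m reach threshold are illustrative assumptions, not the paper's implementation.

```python
# Minimal reactive Behavior Tree sketch: each tick re-evaluates from the root,
# so a moving laser goal immediately re-routes execution. Node names and the
# 0.5 m reach threshold are illustrative, not from the paper.
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for c in self.children:
            s = c.tick(bb)
            if s != Status.SUCCESS:
                return s
        return Status.SUCCESS

class Fallback:
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for c in self.children:
            s = c.tick(bb)
            if s != Status.FAILURE:
                return s
        return Status.FAILURE

class GoalWithinArmReach:
    def tick(self, bb):
        return Status.SUCCESS if bb["goal_dist"] < 0.5 else Status.FAILURE

class MoveArmToLaserSpot:
    def tick(self, bb):
        print("arm reaching laser spot"); return Status.RUNNING

class DriveBaseTowardsGoal:
    def tick(self, bb):
        print("base driving towards laser spot"); return Status.RUNNING

# Prefer pure manipulation when the target is close; otherwise locomote first.
root = Fallback(Sequence(GoalWithinArmReach(), MoveArmToLaserSpot()),
                DriveBaseTowardsGoal())

blackboard = {"goal_dist": 1.8}   # updated online by the laser tracker
root.tick(blackboard)             # -> base driving towards laser spot
blackboard["goal_dist"] = 0.3
root.tick(blackboard)             # -> arm reaching laser spot
```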
2024
- A Laser-Guided Interaction Interface for Providing Effective Robot Assistance to People With Upper Limbs Impairments
  Davide Torielli, Liana Bertoni, Luca Muratore, and Nikos Tsagarakis
  IEEE Robotics and Automation Letters, 2024
Robotics has shown significant potential in assisting people with disabilities to enhance their independence and involvement in daily activities. Indeed, a societal long-term impact is expected in home-care assistance with the deployment of intelligent robotic interfaces. This work presents a human-robot interface developed to help people with upper limb impairments, such as those affected by stroke injuries, in activities of everyday life. The proposed interface leverages a visual servoing guidance component, which utilizes an inexpensive but effective laser emitter device. By projecting the laser on a surface within the workspace of the robot, the user is able to guide the robotic manipulator to desired locations, to reach, grasp and manipulate objects. Considering the targeted users, the laser emitter is worn on the head, enabling the user to intuitively control the robot motions with head movements that point the laser in the environment, whose projection is detected with a neural network-based perception module. The interface implements two control modalities: the first allows the user to select specific locations directly, commanding the robot to reach those points; the second employs a paper keyboard with buttons that can be virtually pressed by pointing the laser at them. These buttons enable a more direct control of the Cartesian velocity of the end-effector and provide additional functionalities such as commanding the action of the gripper. The proposed interface is evaluated in a series of manipulation tasks involving a 6-DOF assistive robot manipulator equipped with a 1-DOF beak-like gripper. The two interface modalities are combined to successfully accomplish tasks requiring bimanual capacity, which is usually affected in people with upper limb impairments.
@article{LaserRAL,
  author = {Torielli, Davide and Bertoni, Liana and Muratore, Luca and Tsagarakis, Nikos},
  title = {A Laser-Guided Interaction Interface for Providing Effective Robot Assistance to People With Upper Limbs Impairments},
  year = {2024},
  journal = {{IEEE} Robotics and Automation Letters},
  volume = {9},
  number = {9},
  pages = {7653-7660},
  doi = {10.1109/LRA.2024.3430709},
  keywords = {Robots;Lasers;Task analysis;Keyboards;Magnetic heads;Surface emitting lasers;Grippers;Human-robot collaboration;physically assistive devices;visual servoing},
  url = {https://ieeexplore.ieee.org/document/10602529},
  dimensions = {true},
}
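The two control modalities described above boil down to a dispatch on the tracked laser pixel: either deproject it to a reach goal, or match it against the printed keyboard buttons to emit a velocity or gripper command. A hedged sketch of that logic follows; the button layout, speeds, and the deproject callback are invented for illustration.

```python
# Dispatch a tracked laser pixel to a robot command: direct-reach mode vs.
# paper-keyboard mode. Button regions and the 5 cm/s speed are hypothetical.
BUTTONS = {                      # pixel-space bounding boxes of printed keys
    "x+": (0, 0, 100, 100), "x-": (100, 0, 200, 100),
    "grip": (200, 0, 300, 100),
}
VELOCITY = {"x+": (+0.05, 0.0, 0.0), "x-": (-0.05, 0.0, 0.0)}

def hit_button(u, v):
    for name, (u0, v0, u1, v1) in BUTTONS.items():
        if u0 <= u < u1 and v0 <= v < v1:
            return name
    return None

def dispatch(u, v, keyboard_mode, deproject):
    """u, v: laser pixel from the neural detector; deproject: pixel -> 3D point."""
    if keyboard_mode:
        key = hit_button(u, v)
        if key == "grip":
            return ("gripper_toggle", None)
        if key in VELOCITY:
            return ("ee_velocity", VELOCITY[key])
        return ("idle", None)
    return ("reach_goal", deproject(u, v))   # direct mode: go to pointed spot

print(dispatch(150, 50, True, lambda u, v: None))  # ('ee_velocity', (-0.05, 0.0, 0.0))
```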
2023
- ROS End-Effector: A Hardware-Agnostic Software and Control Framework for Robotic End-Effectors
  Davide Torielli, Liana Bertoni, Fabio Fusaro, Nikos Tsagarakis, and Luca Muratore
  Journal of Intelligent & Robotic Systems, 2023
In recent years, several robotic end-effectors have been developed and made available on the market. Nevertheless, their adoption in industrial contexts is still limited due to a burdensome integration, which strongly relies on customized software modules specific to each end-effector. Indeed, to enable the functionalities of these end-effectors, dedicated interfaces must be developed to account for the different end-effector characteristics, like finger kinematics, actuation systems, and communication protocols. To face the challenges described above, we present ROS End-Effector, an open-source framework capable of accommodating a wide range of robotic end-effectors of different grasping capabilities (grasping, pinching, or independent finger dexterity) and hardware characteristics. The ROS End-Effector framework, rather than controlling each end-effector in a different and customized way, masks the physical hardware differences and permits controlling the end-effector using a set of automatically extracted high-level grasping primitives. By leveraging hardware-agnostic software modules including a hardware abstraction layer (HAL), application programming interfaces (APIs), simulation tools and graphical user interfaces (GUIs), ROS End-Effector effectively facilitates the integration of diverse end-effector devices. The proposed framework's capabilities in supporting different robotic end-effectors are demonstrated in both simulated and real hardware experiments using a variety of end-effectors with diverse characteristics, ranging from under-actuated grippers to anthropomorphic robotic hands. Finally, from the user perspective, the manuscript provides a set of examples on the use of the framework, showing its flexibility in integrating a new end-effector module.
@article{ROSEE,
  title = {{ROS} End-Effector: A Hardware-Agnostic Software and Control Framework for Robotic End-Effectors},
  author = {Torielli, Davide and Bertoni, Liana and Fusaro, Fabio and Tsagarakis, Nikos and Muratore, Luca},
  journal = {Journal of Intelligent \& Robotic Systems},
  publisher = {Springer Science and Business Media {LLC}},
  volume = {108},
  number = {4},
  year = {2023},
  doi = {10.1007/s10846-023-01911-5},
  url = {https://doi.org/10.1007/s10846-023-01911-5},
  dimensions = {true}
}
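A minimal sketch of the hardware-abstraction pattern the framework is built around: callers speak in high-level grasping primitives, and each device driver translates them to its own actuation. Class and method names here are illustrative stand-ins, not the actual ROS End-Effector API.

```python
# Hardware-agnostic end-effector HAL sketch: callers use high-level primitives;
# each device driver translates them to its own actuation. Names are illustrative.
from abc import ABC, abstractmethod

class EndEffectorHAL(ABC):
    @abstractmethod
    def primitives(self) -> set: ...

    @abstractmethod
    def execute(self, primitive: str, closure: float) -> None:
        """closure in [0, 1]: 0 = fully open, 1 = fully closed."""

class ParallelGripper(EndEffectorHAL):
    def primitives(self): return {"grasp"}
    def execute(self, primitive, closure):
        print(f"gripper: finger gap -> {(1 - closure) * 0.08:.3f} m")

class AnthropomorphicHand(EndEffectorHAL):
    def primitives(self): return {"grasp", "pinch"}
    def execute(self, primitive, closure):
        fingers = ["thumb", "index"] if primitive == "pinch" else "all"
        print(f"hand: closing {fingers} to {closure:.0%}")

def grasp_object(ee: EndEffectorHAL):
    prim = "pinch" if "pinch" in ee.primitives() else "grasp"
    ee.execute(prim, 0.7)          # same caller code for any hardware

for device in (ParallelGripper(), AnthropomorphicHand()):
    grasp_object(device)
```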
2022
- TelePhysicalOperation: Remote Robot Control Based on a Virtual “Marionette” Type Interaction Interface
  Davide Torielli, Luca Muratore, Arturo Laurenzi, and Nikos Tsagarakis
  IEEE Robotics and Automation Letters, 2022
Teleoperation permits to control robots from a safe distance while performing tasks in a remote environment. Kinematic differences between the input device and the remotely controlled manipulator, or the existence of redundancy in the remote robot, may pose challenges in intuitively moving the remote robot as desired by the human operator. Motivated by the above challenges, this work introduces TelePhysicalOperation, a novel teleoperation concept, which relies on a virtual physical interaction interface between the human operator and the remote robot in a manner that is equivalent to a “Marionette” based interaction interface. With the proposed approach, the user can virtually “interact” with the remote robot through the application of virtual forces, which are generated by the operator tracking system and can then be selectively applied to any body part of the remote robot along its kinematic chain. This leads to the remote robot generating motions that comply with the applied virtual forces, thanks to the underlying control architecture. The proposed method permits to command the robot from a distance by exploring the intuitiveness of the “Marionette” based physical interaction with the robot in a virtual/remote manner. The details of the proposed approach are introduced and its effectiveness is demonstrated through a number of experimental trials executed on the CENTAURO, a hybrid leg-wheel platform with an anthropomorphic upper body.
@article{TPO0,
  title = {TelePhysicalOperation: Remote Robot Control Based on a Virtual “Marionette” Type Interaction Interface},
  author = {Torielli, Davide and Muratore, Luca and Laurenzi, Arturo and Tsagarakis, Nikos},
  year = {2022},
  journal = {{IEEE} Robotics and Automation Letters},
  volume = {7},
  number = {2},
  pages = {2479-2486},
  doi = {10.1109/LRA.2022.3144792},
  url = {https://ieeexplore.ieee.org/document/9696192},
  dimensions = {true},
}
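A classical way to realize the "virtual force on a selected body part" idea is to map the force through the transpose of that body point's Jacobian, so the joints comply with the virtual pull. The numpy sketch below does this for a planar 2-link arm; the paper's robots and control architecture are of course far richer.

```python
# Virtual-force "marionette" sketch: a force applied to a selected body point
# is mapped to joint velocities via the Jacobian transpose of that point.
# Planar 2-link arm for illustration only.
import numpy as np

L1, L2 = 0.5, 0.4                       # link lengths [m]

def point_jacobian(q, link):
    """Jacobian of the elbow (link 1) or wrist (link 2): the selectable body parts."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    if link == 1:                        # virtual grab on the elbow
        return np.array([[-L1 * s1, 0.0], [L1 * c1, 0.0]])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def marionette_step(q, f_virtual, link, gain=1.0, dt=0.01):
    qdot = gain * point_jacobian(q, link).T @ f_virtual   # comply with the pull
    return q + qdot * dt

q = np.array([0.3, 0.5])
q = marionette_step(q, f_virtual=np.array([0.0, 5.0]), link=2)  # pull wrist up
print(q)   # joints move so the grabbed point follows the virtual force
```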
Conference Proceedings
2024
- Wearable Haptics for a Marionette-inspired Teleoperation of Highly Redundant Robotic Systems
  Davide Torielli, Leonardo Franco, Maria Pozzi, Luca Muratore, Monica Malvezzi, Nikos Tsagarakis, and Domenico Prattichizzo
  In IEEE International Conference on Robotics and Automation, 2024
The teleoperation of complex, kinematically redundant robots with loco-manipulation capabilities represents a challenge for human operators, who have to learn how to operate the many degrees of freedom of the robot to accomplish a desired task. In this context, developing an easy-to-learn and easy-to-use human-robot interface is paramount. Recent works introduced a novel teleoperation concept, which relies on a virtual physical interaction interface between the human operator and the remote robot equivalent to a "Marionette" control, but which was limited to visual-only feedback on the human side. In this paper, we propose extending the "Marionette" interface by adding a wearable haptic interface to cope with the limitations of the previous work. Leveraging the additional haptic feedback modality, the human operator gains full sensorimotor control over the robot, and the awareness about the robot's response and interactions with the environment is greatly improved. We evaluated the proposed interface and the related teleoperation framework with naive users, assessing the teleoperation performance and the user experience with and without haptic feedback. The conducted experiments consisted of a loco-manipulation mission with the CENTAURO robot, a hybrid leg-wheel quadruped with a humanoid dual-arm upper body.
@inproceedings{TPO4,
  title = {Wearable Haptics for a Marionette-inspired Teleoperation of Highly Redundant Robotic Systems},
  author = {Torielli, Davide and Franco, Leonardo and Pozzi, Maria and Muratore, Luca and Malvezzi, Monica and Tsagarakis, Nikos and Prattichizzo, Domenico},
  year = {2024},
  booktitle = {{IEEE} International Conference on Robotics and Automation},
  pages = {15670-15676},
  url = {https://ieeexplore.ieee.org/abstract/document/10610788},
  doi = {10.1109/ICRA57147.2024.10610788},
  dimensions = {true}
}
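On the feedback side, the wearable devices must turn the robot's measured interaction forces into a cue the operator can feel. A common rendering recipe, sketched below with invented dead-zone and saturation thresholds, is to clip and normalize the force magnitude into a vibrotactile intensity; the paper's actual rendering law may differ.

```python
# Map a measured contact force to a vibrotactile intensity in [0, 1].
# Dead-zone and saturation values are illustrative, not from the paper.
import numpy as np

F_MIN, F_MAX = 2.0, 30.0     # N: below -> no cue, above -> full vibration

def vibro_intensity(force_xyz):
    magnitude = np.linalg.norm(force_xyz)
    return float(np.clip((magnitude - F_MIN) / (F_MAX - F_MIN), 0.0, 1.0))

print(vibro_intensity([0.5, 1.0, 0.0]))   # 0.0  (inside dead-zone)
print(vibro_intensity([12.0, 5.0, 3.0]))  # ~0.41 (proportional cue)
```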
2022
- Manipulability-Aware Shared Locomanipulation Motion Generation for Teleoperation of Mobile Manipulators
  Davide Torielli, Luca Muratore, and Nikos Tsagarakis
  In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022
The teleoperation of mobile manipulators may pose significant challenges, demanding complex interfaces and causing a substantial burden to the human operator due to the need to switch continuously from the manipulation of the arm to the control of the mobile platform. Hence, several works have considered to exploit shared control techniques to overcome this issue and, in general, to facilitate the task execution. This work proposes a manipulability-aware shared locomanipulation motion generation method to facilitate the execution of telemanipulation tasks with mobile manipulators. The method uses the manipulability level of the end-effector to control the generation of the mobile base and manipulator motions, facilitating their simultaneous control by the operator while executing telemanipulation tasks. Therefore, the operator can exclusively control the end-effector, while the underlying architecture generates the mobile platform commands depending on the end-effector manipulability level. The effectiveness of this approach is demonstrated with a number of experiments in which the CENTAURO robot, a hybrid leg-wheel platform with an anthropomorphic upper body, is teleoperated to execute a set of telemanipulation tasks.
@inproceedings{TPO2,
  title = {Manipulability-Aware Shared Locomanipulation Motion Generation for Teleoperation of Mobile Manipulators},
  author = {Torielli, Davide and Muratore, Luca and Tsagarakis, Nikos},
  year = {2022},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  pages = {6205-6212},
  doi = {10.1109/IROS47612.2022.9982220},
  dimensions = {true}
}
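The quantity driving the shared control here is Yoshikawa's manipulability index, w(q) = sqrt(det(J J^T)): when it drops, the arm is approaching its workspace limits and the base should absorb more of the commanded motion. The sketch below illustrates one plausible blending law; the thresholds and the linear interpolation are assumptions, not the paper's exact scheme.

```python
# Manipulability-aware blending sketch: split a commanded end-effector velocity
# between arm and mobile base according to Yoshikawa's index w = sqrt(det(J J^T)).
# Thresholds and the linear law are illustrative, not the paper's exact scheme.
import numpy as np

W_LOW, W_HIGH = 0.02, 0.10    # below W_LOW: base only; above W_HIGH: arm only

def manipulability(J):
    return np.sqrt(np.linalg.det(J @ J.T))

def split_command(v_ee, J):
    w = manipulability(J)
    alpha = np.clip((w - W_LOW) / (W_HIGH - W_LOW), 0.0, 1.0)
    return alpha * v_ee, (1.0 - alpha) * v_ee   # (arm share, base share)

J = np.array([[0.1, 0.05, 0.0],               # near-singular arm Jacobian
              [0.0, 0.02, 0.01]])
v_arm, v_base = split_command(np.array([0.1, 0.0]), J)
print(manipulability(J), v_arm, v_base)       # low w -> base does most of the work
```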
- A Shared Telemanipulation Interface to Facilitate Bimanual Grasping and Transportation of Objects of Unknown Mass
  Davide Torielli, Luca Muratore, Alessio De Luca, and Nikos Tsagarakis
  In IEEE-RAS International Conference on Humanoid Robots, 2022
The teleoperation of robots with human-like capabilities may pose significant challenges to the human operator due to the kinematic complexity and redundancy of these robots. Bimanual telemanipulation represents such a challenging task that requires precise coordination of the two arms to perform a stable bimanual grasp on an object and eventually transport the object while maintaining the grasp. In this work, we present a shared control telemanipulation interface to facilitate the bimanual grasping and transportation of objects of unknown mass. With the proposed method, the robot is able to transport the object while autonomously maintaining a sufficient amount of grasping force, and accepting commands from the operator to reach the desired location. As humans do, it is not necessary to know the weight of the object in advance; instead, the robot estimates it during the lifting phase. On the basis of the estimated weight, the required amount of grasping force is computed. During object transportation, the robot autonomously regulates the grasping forces in a shared control fashion, allowing the operator to seamlessly command only the trajectories of the object. The proposed method has been implemented and validated on the CENTAURO robot, a quadrupedal platform with a humanoid dual-arm upper body, performing experiments where objects of different weights and dimensions must be picked up and transported.
@inproceedings{TPO3,
  title = {A Shared Telemanipulation Interface to Facilitate Bimanual Grasping and Transportation of Objects of Unknown Mass},
  author = {Torielli, Davide and Muratore, Luca and De Luca, Alessio and Tsagarakis, Nikos},
  year = {2022},
  booktitle = {{IEEE-RAS} International Conference on Humanoid Robots},
  pages = {738-745},
  doi = {10.1109/Humanoids53995.2022.10000094},
  dimensions = {true}
}
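Two simple mechanics facts underpin the approach: during the lift the object's weight shows up as extra vertical load on the wrist force sensors, and a Coulomb-friction bimanual grasp must squeeze with at least m·g/(2µ) per contact to hold it. A sketch under those assumptions (µ and the safety factor are placeholders):

```python
# Estimate an object's mass from wrist force sensing during the lift, then set
# the bimanual squeeze force from a Coulomb friction model. MU and SAFETY are
# illustrative placeholders, not the paper's values.
G = 9.81
MU = 0.6          # assumed hand-object friction coefficient
SAFETY = 1.5      # margin over the theoretical minimum squeeze

def estimate_mass(fz_left_lift, fz_right_lift):
    """Extra vertical load (N) on each wrist once the object leaves its support."""
    return (fz_left_lift + fz_right_lift) / G

def required_squeeze(mass):
    # Two opposing contacts, each supporting up to mu * f_n tangentially:
    # 2 * mu * f_n >= m * g  ->  f_n >= m * g / (2 * mu)
    return SAFETY * mass * G / (2.0 * MU)

m = estimate_mass(fz_left_lift=14.7, fz_right_lift=14.7)   # -> ~3.0 kg
print(m, required_squeeze(m))                               # ~3.0 kg, ~36.8 N
```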
- TelePhysicalOperation: a Shared Control Architecture for Intuitive and Smart Teleoperation of Complex Mobile Manipulators
  Davide Torielli, Luca Muratore, and Nikos Tsagarakis
  In Italian Conference in Robotics and Intelligent Machines, 2022
@inproceedings{TPOIRIM,
  title = {TelePhysicalOperation: a Shared Control Architecture for Intuitive and Smart Teleoperation of Complex Mobile Manipulators},
  author = {Torielli, Davide and Muratore, Luca and Tsagarakis, Nikos},
  year = {2022},
  booktitle = {Italian Conference in Robotics and Intelligent Machines},
  pages = {255--259},
  url = {https://doi.org/10.5281/zenodo.7797398},
  doi = {10.5281/zenodo.7797398},
  dimensions = {true}
}
2021
- Towards an Open-Source Hardware Agnostic Framework for Robotic End-Effectors Control
  Davide Torielli, Liana Bertoni, Nikos G. Tsagarakis, and Luca Muratore
  In IEEE International Conference on Advanced Robotics, 2021
Nowadays a wide range of industrial grippers are available on the market, and usually their integration into robotic automation systems relies on dedicated software modules and interfaces specific to each gripper. During the past two decades, more sophisticated end-effector modules that aim to provide additional functionality, including dexterous manipulation skills as well as sensing capabilities, have been developed. The integration of these new devices is usually not trivial, requiring the development of brand new, tailor-made software modules and interfaces, which is a time-consuming and certainly not efficient activity. To address the above issue and facilitate the quick integration and validation of new end-effectors, we developed the ROS End-Effector open-source framework, which provides a software infrastructure capable of accommodating a range of robotic end-effectors of different hardware characteristics (number of fingers, actuators, sensing modules and communication protocols) and capabilities (with different manipulation skills, such as grasping, pinching, or independent finger dexterity), effectively facilitating their integration through the development of hardware-agnostic software modules, simulation tools and application programming interfaces (APIs). A key feature of the ROS End-Effector framework is that rather than controlling each end-effector in a different and customized way, following specific protocols and instruction data fields, it masks the physical hardware differences and limitations (e.g., kinematics and dynamic model, actuator, sensor, update frequency, etc.) and permits to command the end-effector using a set of high-level grasping primitives. The framework's capabilities and flexibility in supporting different robotic end-effectors are demonstrated both in a kinematic/dynamic simulation and in real hardware experiments.
@inproceedings{RoseePaper,
  title = {Towards an Open-Source Hardware Agnostic Framework for Robotic End-Effectors Control},
  author = {Torielli, Davide and Bertoni, Liana and Tsagarakis, Nikos G. and Muratore, Luca},
  year = {2021},
  booktitle = {{IEEE} International Conference on Advanced Robotics},
  doi = {10.1109/ICAR53236.2021.9659331},
  dimensions = {true}
}
Theses
2024
- PhD thesis: Intuitive Human-Robot Interfaces Leveraging on Autonomy Features for the Control of Highly-redundant Robots
  Davide Torielli
  University of Genova, Feb 2024
The advancements in robotics have revealed the potential of complex robotic platforms, promising a wide spread of robotics technologies to help people in various scenarios, from industry to households. To harness the capabilities of modern robots, it is of paramount importance to develop human-robot interaction interfaces that allow people to seamlessly operate them. To address this challenge, traditional interface methods, such as remote controllers and keyboards, are going to be replaced by more intuitive communication means that permit, for example, commanding the robot through body gestures, and receiving feedback that extends the visual domain, such as tactile cues. At the same time, the robot must be equipped with autonomous capabilities that relieve the operators from considering all the aspects of the task and of the robot motions, thus reducing their workload, decreasing the execution time of the task, and minimizing the possibility of failures. This PhD thesis takes on these challenges by exploring and developing innovative human-robot interaction paradigms that focus on the key aspects of enabling intuitive human-robot communication, enhancing the user's situation awareness, and incorporating different levels of robot autonomy. With the TelePhysicalOperation interface, the user can teleoperate the different capabilities of a robot (e.g., single/double arm manipulation, wheel/leg locomotion) by applying virtual forces on selected robot body parts. This approach emulates the intuitiveness of physical human-robot interaction, but at the same time it permits to teleoperate the robot from a safe distance, in a way that resembles a "Marionette" interface. The system is further enhanced with wearable haptic feedback functions to align better with the "Marionette" metaphor, and a user study has been conducted to validate its efficacy with and without the haptic channel enabled. Considering the importance of robot independence, the TelePhysicalOperation interface incorporates autonomy modules to address, for example, the teleoperation of dual-arm mobile base robots for bimanual object grasping and transportation tasks. With the laser-guided interface, the user can indicate points of interest to the robot through the utilization of a simple but effective laser emitter device. With a neural network-based vision system, the robot tracks the laser projection in real time, allowing the user to indicate not only fixed goals, like objects, but also paths to follow. With the implemented autonomous behavior, a mobile manipulator employs its locomanipulation abilities to follow the indicated goals. The behavior is modeled using Behavior Trees, exploiting their reactivity to promptly respond to changes in goal positions, and their modularity to adapt the motion planning to the task needs. The proposed laser interface has also been employed in an assistive scenario. In this case, users with upper limb impairments can control an assistive manipulator by directing a head-worn laser emitter to the points of interest, to collaboratively address activities of everyday life. In summary, this research contributes to effectively exploiting the extensive capabilities of modern robotic systems through user-friendly human-robot interfaces. With the developed interfaces, the gap that still prevents a large adoption of robotic systems is further reduced.
@phdthesis{PHDThesis,
  title = {Intuitive Human-Robot Interfaces Leveraging on Autonomy Features for the Control of Highly-redundant Robots},
  author = {Torielli, Davide},
  year = {2024},
  month = feb,
  school = {University of Genova},
  url = {https://hdl.handle.net/11567/1160113},
  dimensions = {true}
}
2019
- Master thesis: Cooperative Assembly with Autonomous Mobile Manipulators in an Underwater Scenario
  Davide Torielli
  University of Genova, Sep 2019
Robotics is spreading in all the relevant sectors of human life. The importance of studying this field is confirmed by the various applications where robots are used: exploration of space and sea, industry, healthcare, transportation and so on. This thesis aims to improve the current state of the art in a particular field: Underwater Robotics. Currently, the research in this area focuses on improving robot capabilities to make them more and more efficient in performing missions autonomously. A particular advancement is towards the cooperation between multiple agents. With cooperation, robotic systems can perform more and more difficult tasks, such as carrying a long and heavy object in an unstructured environment. Specifically, the problem addressed is an assembly one known as the peg-in-hole task. In this case, two autonomous manipulators must cooperatively carry (at the kinematic level) a peg and insert it into a hole fixed in the environment. Even if the peg-in-hole is a well-known problem, there are no specific studies related to the use of two different autonomous manipulators, especially in underwater scenarios. Among all the possible investigations of the problem, this work focuses mainly on the kinematic control of the robots. The methods used are part of the Task Priority Inverse Kinematics (TPIK) approach, with a cooperation scheme that permits exchanging as little information as possible between the agents (which is really important, since water is a big impediment to communication). A force-torque sensor is exploited at the kinematic level to help the insertion phase. The results show how the TPIK and the chosen cooperation scheme can be used for the stated problem. The simulated experiments consider small errors in the hole's pose that still permit inserting the peg, but with a lot of friction and possible jams. It is shown how it is possible to improve (thanks to the data provided by the force-torque sensor) the insertion phase performed by the two manipulators in the presence of these errors. Another part of the thesis deals with computer vision algorithms: a third robot exploits particular methods to estimate the hole's pose. Different techniques are compared to detect and to track the hole, considering the errors they provide in the pose estimation. Even if the problem is simplified (due to its complexity), this thesis could help further works. The focus is on the particular problem stated, but the methods and tools exploited can be useful also for other applications, not only underwater-related.
@mastersthesis{MasterThesis,
  title = {Cooperative Assembly with Autonomous Mobile Manipulators in an Underwater Scenario},
  author = {Torielli, Davide},
  year = {2019},
  month = sep,
  school = {University of Genova},
  url = {https://arxiv.org/abs/2505.07441},
  dimensions = {true}
}
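The TPIK machinery the thesis relies on can be condensed into one formula for the two-task case: q̇ = J₁⁺ẋ₁ + (J₂N₁)⁺(ẋ₂ − J₂J₁⁺ẋ₁), with null-space projector N₁ = I − J₁⁺J₁, so the secondary task can never disturb the primary one. A compact numpy sketch (dimensions chosen arbitrarily for illustration):

```python
# Two-level Task Priority Inverse Kinematics (TPIK) sketch: the secondary task
# acts only in the null space of the primary, so it can never disturb it.
import numpy as np

def tpik(J1, x1_dot, J2, x2_dot):
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # null-space projector of task 1
    q_dot = J1_pinv @ x1_dot + np.linalg.pinv(J2 @ N1) @ (
        x2_dot - J2 @ J1_pinv @ x1_dot)            # secondary task, projected
    return q_dot

rng = np.random.default_rng(0)
J1, J2 = rng.standard_normal((3, 7)), rng.standard_normal((2, 7))  # 7-DOF arm
q_dot = tpik(J1, np.array([0.1, 0.0, 0.0]), J2, np.array([0.0, 0.2]))
print(np.allclose(J1 @ q_dot, [0.1, 0.0, 0.0]))    # True: priority respected
```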
Contributed works
2024
- A High-Force Gripper with Embedded Multimodal Sensing for Powerful and Perception Driven Grasping
  Edoardo Del Bianco, Davide Torielli, Federico Rollo, Damiano Gasperini, Arturo Laurenzi, Lorenzo Baccelliere, Luca Muratore, Marco Roveri, and Nikos G. Tsagarakis
  In IEEE-RAS International Conference on Humanoid Robots, 2024
Modern humanoid robots have shown their promising potential for executing various tasks involving the grasping and manipulation of objects using their end-effectors. Nevertheless, in most cases, the grasping and manipulation actions involve low to moderate payload and interaction forces. This is due to limitations often presented by the end-effectors, which cannot match their arm-reachable payload, and hence limit the payload that can be grasped and manipulated. In addition, grippers usually do not embed adequate perception in their hardware, and grasping actions are mainly driven by perception sensors installed in the rest of the robot body, frequently affected by occlusions due to the arm motions during the execution of the grasping and manipulation tasks. To address the above, we developed a modular high-grasping-force gripper equipped with embedded multi-modal perception functionalities. The proposed gripper can generate a grasping force of 110 N in a compact implementation. The high grasping force capability is combined with embedded multi-modal sensing, which includes an eye-in-hand camera, a Time-of-Flight (ToF) distance sensor, an Inertial Measurement Unit (IMU) and an omnidirectional microphone, permitting the implementation of perception-driven grasping functionalities. We extensively evaluated the grasping force capacity of the gripper by introducing novel payload evaluation metrics that are a function of the robot arm's dynamic motion and gripper thermal states. We also evaluated the embedded multi-modal sensing by performing perception-guided enhanced grasping operations.
@inproceedings{Dagana,
  author = {Del Bianco, Edoardo and Torielli, Davide and Rollo, Federico and Gasperini, Damiano and Laurenzi, Arturo and Baccelliere, Lorenzo and Muratore, Luca and Roveri, Marco and Tsagarakis, Nikos G.},
  booktitle = {{IEEE-RAS} International Conference on Humanoid Robots},
  title = {A High-Force Gripper with Embedded Multimodal Sensing for Powerful and Perception Driven Grasping},
  year = {2024},
  pages = {149-156},
  keywords = {Multimodal sensors;Force;Robot vision systems;Pose estimation;Humanoid robots;Grasping;Thermal force;Grippers;Robots;Payloads},
  doi = {10.1109/Humanoids58906.2024.10769951},
  dimensions = {true}
}
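With a ToF sensor embedded in the fingers, the grasp trigger can run locally in the hand instead of relying on occlusion-prone external cameras. A hedged sketch of such a perception-driven closing policy; the thresholds and the sensor/force inputs are hypothetical stand-ins for the real drivers:

```python
# Perception-driven grasp sketch: close only when the in-hand ToF sensor sees
# the object within range, then squeeze until a force target is met. Both
# thresholds and the input values are hypothetical, not the paper's.
CLOSE_DIST = 0.03      # m: ToF reading that triggers closing
TARGET_FORCE = 80.0    # N: stop squeezing here (gripper max is 110 N)

def grasp_step(tof_distance, grip_force, closing):
    if not closing:
        return "close" if tof_distance < CLOSE_DIST else "wait"
    return "hold" if grip_force >= TARGET_FORCE else "squeeze"

print(grasp_step(0.10, 0.0, False))   # wait  (object not yet between fingers)
print(grasp_step(0.02, 0.0, False))   # close
print(grasp_step(0.02, 85.0, True))   # hold
```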
2023
- A Unified Multimodal Interface for the RELAX High-Payload Collaborative Robot
  Luca Muratore, Arturo Laurenzi, Alessio De Luca, Liana Bertoni, Davide Torielli, Lorenzo Baccelliere, Edoardo Del Bianco, and Nikos G. Tsagarakis
  Sensors, 2023
This manuscript introduces a mobile cobot equipped with a custom-designed high payload arm called RELAX combined with a novel unified multimodal interface that facilitates Human–Robot Collaboration (HRC) tasks requiring high-level interaction forces on a real-world scale. The proposed multimodal framework is capable of combining physical interaction, Ultra Wide-Band (UWB) radio sensing, a Graphical User Interface (GUI), verbal control, and gesture interfaces, combining the benefits of all these different modalities and allowing humans to accurately and efficiently command the RELAX mobile cobot and collaborate with it. The effectiveness of the multimodal interface is evaluated in scenarios where the operator guides RELAX to reach designated locations in the environment while avoiding obstacles and performing high-payload transportation tasks, again in a collaborative fashion. The results demonstrate that a human co-worker can productively complete complex missions and command the RELAX mobile cobot using the proposed multimodal interaction framework.
@article{Muratore2023,
  author = {Muratore, Luca and Laurenzi, Arturo and De Luca, Alessio and Bertoni, Liana and Torielli, Davide and Baccelliere, Lorenzo and Del Bianco, Edoardo and Tsagarakis, Nikos G.},
  title = {A Unified Multimodal Interface for the RELAX High-Payload Collaborative Robot},
  journal = {Sensors},
  volume = {23},
  number = {18},
  article-number = {7735},
  year = {2023},
  url = {https://www.mdpi.com/1424-8220/23/18/7735},
  issn = {1424-8220},
  doi = {10.3390/s23187735},
  dimensions = {true}
}
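Fusing five input channels requires an arbitration rule so that, for instance, a physical interaction overrides a previously issued GUI goal. A minimal priority-based arbiter sketch; the channel names and their ordering are illustrative assumptions, not the paper's scheme:

```python
# Priority-based arbitration sketch for a multimodal command interface: the
# highest-priority channel with a fresh command wins. Ordering is illustrative.
PRIORITY = ["physical", "gesture", "verbal", "uwb_follow", "gui"]

def arbitrate(commands):
    """commands: dict channel -> latest command, or None if stale/absent."""
    for channel in PRIORITY:
        cmd = commands.get(channel)
        if cmd is not None:
            return channel, cmd
    return None, None

cmds = {"gui": "goto waypoint 3", "verbal": "stop", "physical": None}
print(arbitrate(cmds))   # ('verbal', 'stop') -- overrides the GUI goal
```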
2021
- Towards a Generic Grasp Planning Pipeline using End-Effector Specific Primitive Grasping Actions
  Liana Bertoni, Davide Torielli, Yifang Zhang, Nikos G. Tsagarakis, and Luca Muratore
  In IEEE International Conference on Advanced Robotics, 2021
@inproceedings{RoseePaperLiana,
  title = {Towards a Generic Grasp Planning Pipeline using End-Effector Specific Primitive Grasping Actions},
  author = {Bertoni, Liana and Torielli, Davide and Zhang, Yifang and Tsagarakis, Nikos G. and Muratore, Luca},
  year = {2021},
  booktitle = {{IEEE} International Conference on Advanced Robotics},
  doi = {10.1109/ICAR53236.2021.9659402},
  dimensions = {true}
}