Benchmarking

Benchmarks are crucial to the progress of a research field: they allow performance to be quantified and give insight into the effectiveness of an approach relative to alternative methods. In robotic manipulation, benchmarking and performance metrics are particularly challenging, largely because of the enormous breadth of the application and task space researchers are working toward. Most research groups have therefore selected their own sets of objects and/or tasks that they believe represent the functionality they would like to achieve or assess. These tasks are often not specified precisely or generally enough for others to repeat them, and the objects used may be insufficiently documented or simply unavailable to other researchers (e.g., custom-fabricated or sold only in certain countries).

The YCB Object and Model Set is specifically designed to allow widespread dissemination of the physical objects and manipulation scenarios. The objects were selected based on a survey of the objects most commonly used in robotics research and in the prosthetics and rehabilitation literature (where standardized procedures assess patients' manipulation capabilities), subject to a number of additional practical constraints. Along with the physical objects, the set provides textured mesh models, high-quality images, and point-cloud data for each object, together with their physical properties (major dimensions and mass) to enable realistic simulations.
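As a concrete illustration of how these assets might be pulled into a simulation, the sketch below loads a textured mesh and the published mass and dimensions for one object, then derives the quantities a rigid-body simulator typically needs. The folder layout, the file names (`textured.obj`, `properties.json`), and the `003_cracker_box` object name are assumptions made for illustration rather than the set's official distribution format, and `trimesh` is just one of several libraries that could be used.

```python
# Minimal sketch of using YCB-style assets in a simulation pipeline.
# The directory layout and file names below are assumptions, not the
# official distribution format; adapt the paths to your local copy.
import json
from pathlib import Path

import trimesh  # third-party mesh-processing library

ASSET_DIR = Path("ycb/003_cracker_box")  # hypothetical object folder

# Load the textured mesh (OBJ/PLY are typical formats for such sets).
mesh = trimesh.load(ASSET_DIR / "textured.obj", force="mesh")

# Hypothetical metadata file holding the published physical properties,
# e.g. {"mass_kg": 0.411, "dimensions_m": [0.06, 0.158, 0.21]}.
with open(ASSET_DIR / "properties.json") as f:
    props = json.load(f)

# Cross-check the listed major dimensions against the mesh bounding box.
print("mesh extents (m):", mesh.bounding_box.extents)
print("listed dims  (m):", props["dimensions_m"])

# Assemble the pieces a rigid-body simulator needs: geometry, mass, and
# an inertia tensor estimated from the mesh under a uniform-density
# assumption scaled so the total mass matches the listed value.
mesh.density = props["mass_kg"] / mesh.volume
body = {
    "visual_mesh": ASSET_DIR / "textured.obj",
    "mass": props["mass_kg"],
    "inertia": mesh.moment_inertia,   # 3x3 tensor about the center of mass
    "center_of_mass": mesh.center_mass,
}
print("center of mass:", body["center_of_mass"])
```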
Sample Publications:
Berk Calli, Aaron Walsman, Arjun Singh, Siddhartha Srinivasa, Pieter Abbeel, and Aaron M. Dollar, "Benchmarking in Manipulation Research: The YCB Object and Model Set and Benchmarking Protocols," IEEE Robotics and Automation Magazine, vol. 22, no. 3, pp. 36-52, 2015.