Osaro, which specializes in machine-learning-enabled robotics for high-volume fulfillment centers, has secured a patent for Computer-Automated Robot Grasp Depth Estimation, a technology designed to enhance robotic grasping without using specialized sensors. The patent (USPTO #12,236,340, Ben Goodrich et al., February 25, 2025) describes techniques whereby pick-and-place robots collect data through physical interaction with objects and use that data to train machine-learning models that can accurately estimate the depth of objects in relation to their surroundings.
Osaro’s robotics innovation is particularly applicable in fulfillment warehouses, where robots must adapt to constantly changing inventory, diverse product types and challenging grasping scenarios. When equipped with Osaro’s depth-estimating software, which is a component of the Osaro SightWorks platform, robots should be able to handle new SKUs without reprogramming, estimate depth across diverse objects and grasp irregular or deformable items.
“Our kitting and bagging customers require robots that can accurately grasp a wide variety of challenging items. These can range from plush toys to reflective objects to bagged apparel,” said Gemma Ross, vice president of operations at Osaro. “This latest innovation from Osaro’s labs enables robots to grasp these challenging items more accurately and efficiently, even in cluttered and unstructured environments. It also means we can reduce the cost of the deployed solution since it is less dependent on special sensors and lighting.”
Osaro’s patented depth-perception technology uses self-supervised learning: the robot estimates depth by analyzing images and arm movements from both successful and failed grasping attempts. Through repeated grasp attempts, the robot accumulates paired image and arm-position data, which serves as its own training signal for estimating object depth.
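The patent itself does not publish implementation details, but the self-supervision idea described above can be illustrated with a minimal sketch. The key assumption in this toy example is that each grasp attempt yields an image-derived feature (here, a crude depth guess from the object's apparent size in pixels) together with the arm's recorded height at the moment of contact, which acts as a free depth label with no human annotation. All function names and numeric constants below are hypothetical.

```python
import random

def simulate_grasp_attempt(true_depth_cm):
    """One physical interaction: the camera reports the object's apparent
    size (inversely related to depth, plus sensor noise), and the arm's
    proprioception logs the height at which it touched the object."""
    apparent_size_px = 1000.0 / true_depth_cm + random.gauss(0, 0.2)
    contact_depth_cm = true_depth_cm + random.gauss(0, 0.5)  # self-supervised label
    return apparent_size_px, contact_depth_cm

def train_depth_model(samples, lr=1e-4, epochs=2000):
    """Fit depth ~= w * (1000 / size) + b by stochastic gradient descent
    on the self-collected (feature, label) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for size_px, depth in samples:
            x = 1000.0 / size_px          # image-derived depth feature
            err = (w * x + b) - depth     # residual against the arm's label
            w -= lr * err * x
            b -= lr * err
    return w, b

random.seed(0)
# Collect experience: 200 grasp attempts at random depths in a tote
data = [simulate_grasp_attempt(random.uniform(20, 60)) for _ in range(200)]
w, b = train_depth_model(data)

# Estimate depth for a fresh observation using only the image feature
size_px, actual_depth = simulate_grasp_attempt(40.0)
predicted_depth = w * (1000.0 / size_px) + b
print(f"predicted {predicted_depth:.1f} cm, arm measured {actual_depth:.1f} cm")
```

A production system would replace the single hand-crafted feature with a deep network over raw images, but the training loop follows the same pattern: physical interaction generates labels, and the model improves with every attempt.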
“Imagine a robot trying to grab a small toy in a classic warehouse tote,” explained Ross. “Each time it tries to grasp, it learns a little more about how far away the toy is. Eventually, it will be able to successfully determine the distance of the toy every time. Most importantly, it does this automatically, without a human in the loop to annotate the information required to train a machine-learning model.”