Scientists from the United States have presented an AI-based robot that can rearrange items on a shelf. In simulation its success rate was high, and the researchers expect it to reach about 80% in the real world.

Researchers from the University of California, Berkeley presented the Lateral Access maXimal Reduction of occupancY support Area (LAX-RAY) system. The system can predict the location of an object even when only part of it is visible. It also uses Google’s COCOI (Contact-aware Online Context Inference), a technique that encodes the physical properties of objects the robot touches into its decision-making.
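The core idea of predicting a hidden object’s location can be illustrated with a toy sketch. This is not the authors’ code: the cell grid, function name, and rules below are simplified assumptions. The shelf is discretized into cells, and the system keeps a probability map over where a hidden target may be; cells seen to be empty or occupied by other visible objects are ruled out, so probability mass concentrates behind occluders.

```python
import numpy as np

def occupancy_distribution(visible_empty, occluded):
    """Return a normalized probability map over shelf cells.

    visible_empty: boolean array, True where the camera sees free space.
    occluded: boolean array, True where the view is blocked by an object.
    (Illustrative only; the real LAX-RAY system predicts occupancy
    distributions from depth images with a learned model.)
    """
    prior = np.ones(visible_empty.shape, dtype=float)
    prior[visible_empty] = 0.0                # target cannot be in seen-empty cells
    prior[~visible_empty & ~occluded] = 0.0   # cells filled by other visible objects
    total = prior.sum()
    return prior / total if total > 0 else prior

# A one-row shelf: cells 0-1 seen empty, cells 2-3 occluded, cell 4 a visible object.
visible = np.array([True, True, False, False, False])
occluded = np.array([False, False, True, True, False])
dist = occupancy_distribution(visible, occluded)
# The remaining probability mass is split evenly over the two occluded cells.
```

Even this crude version captures why partial visibility is enough: every observation only removes possibilities, so the distribution over the target’s position keeps narrowing.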

The scientists explained that humans spend years learning to manipulate everyday objects in the physical world. For robots, the task is even harder: it requires perceiving the true sizes of objects and reasoning about the laws of physics.

The LAX-RAY system includes three modes of mechanical search for objects. To test the mechanism’s effectiveness, the researchers used a shelf simulator and generated 800 random environments. They then deployed LAX-RAY on a physical shelf using a Fetch robot with a built-in depth-sensing camera, measuring whether the robot could infer objects’ locations and reposition them correctly.
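A mechanical-search mode of the kind described above can be sketched as a greedy policy. The function and object names below are hypothetical, not from the actual system: among candidate objects to push aside, pick the one whose displacement would reveal the largest share of the target’s occupancy probability.

```python
import numpy as np

def best_object_to_move(target_dist, occlusion_masks):
    """Greedy search step (illustrative sketch, not the authors' policy).

    target_dist: probability map over shelf cells for the hidden target.
    occlusion_masks: dict mapping object name -> boolean array of the
        cells that object hides from the camera.
    Returns the object whose displacement uncovers the most probability mass.
    """
    return max(occlusion_masks,
               key=lambda name: target_dist[occlusion_masks[name]].sum())

# Hypothetical example: probability mass sits in cells 2 and 3.
dist = np.array([0.0, 0.0, 0.5, 0.5, 0.0])
masks = {
    "cereal_box": np.array([False, False, True, False, False]),  # hides cell 2
    "soup_can":   np.array([False, False, True, True, False]),   # hides cells 2-3
}
# The soup can occludes all remaining probability mass, so it is moved first.
```

Repeating this step after each push shrinks the region where the target can hide, which is the intuition behind reducing the "occupancy support area" in the system’s name.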

In the simulator, the researchers achieved a success rate of 87.3%, and they expect the real-world figure to be about 80%. In future work, the scientists plan to develop more complex models and pushing mechanisms for the robot, and to design new pulling actions using pneumatically activated suction cups.

They note that these tasks are not as easy as they seem: they require an additional source of vision, which yields even more raw data to analyze. Moreover, the device may be less effective with objects of complex shape or large mass.