An AI has learned to suture wounds by watching videos of surgical operations. In the future, models like this could learn everyday tasks simply by watching video.

Researchers at the University of California, Berkeley, Intel, and Google Brain have taught an AI model to operate by imitating videos of eight surgeons at work. The algorithm, called Motion2Vec, was trained on video frames of doctors operating surgical robots to suture and tie knots. Normally such a robot is controlled by a doctor from a computer console, but with Motion2Vec it performs the task on its own.

The system has already demonstrated its skills by stitching pieces of fabric. In tests, it reproduced the surgeons' movements with 85.5% accuracy. Reaching that level was not easy: the eight surgeons in the videos used a variety of techniques, so the AI had to pick the best option.

To solve this problem, the team used semi-supervised algorithms, which learn from partially labeled data sets. This allowed the AI to learn the surgeons' basic movements from a small amount of annotated data. However, the researchers acknowledge that the system needs further refinement before it can perform operations on its own.
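To illustrate the general idea of learning from partially labeled data, here is a toy self-training (pseudo-labeling) loop in Python. This is only a sketch of the semi-supervised principle, not the researchers' actual method, and all data and function names here are invented for the example: a simple nearest-centroid classifier is fit on a few labeled "gesture" frames, assigns pseudo-labels to the unlabeled pool, and is then refit on everything.

```python
import numpy as np

def nearest_centroid_predict(X, centroids):
    # Assign each sample to the class with the closest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def self_train(X_labeled, y_labeled, X_unlabeled, rounds=3):
    """Toy self-training: fit centroids on the labeled data,
    pseudo-label the unlabeled pool, refit, and repeat."""
    X, y = X_labeled, y_labeled
    for _ in range(rounds):
        classes = np.unique(y)
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        pseudo = nearest_centroid_predict(X_unlabeled, centroids)
        # Grow the training set with the pseudo-labeled samples.
        X = np.vstack([X_labeled, X_unlabeled])
        y = np.concatenate([y_labeled, pseudo])
    return centroids

rng = np.random.default_rng(0)
# Only two frames are labeled; the other 100 are unlabeled.
X_l = np.array([[0.0, 0.0], [5.0, 5.0]])
y_l = np.array([0, 1])
X_u = np.vstack([rng.normal(0, 0.5, (50, 2)),
                 rng.normal(5, 0.5, (50, 2))])

centroids = self_train(X_l, y_l, X_u)
preds = nearest_centroid_predict(X_u, centroids)
print("cluster sizes:", np.bincount(preds))
```

A real system like Motion2Vec works on high-dimensional video embeddings rather than 2-D points, but the pattern is the same: a little labeled data anchors the classes, and the unlabeled bulk refines them.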

The scientists now plan to run tests with various types of tissue so that the system can adapt to different situations, such as unexpected bleeding. The next stage of development will be semi-automatic remote surgery, in which the robot assists the doctor.

The scientists also want to build more AI systems using the same approach. The web holds vast amounts of unstructured information in the form of video, images, and text; robots could extract useful content from it to make sense of this data and help us solve everyday problems.