If there’s one thing we’ve learned from some of our favorite shitty robots on YouTube, it’s that human-robot interaction can be a tricky business. Getting rigid robotic arms to perform delicate tasks around soft human bodies is easier said than done.
This week, a team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is showcasing its work using robotic arms to help people get dressed. The promise of such technology is clear: helping people with mobility issues perform tasks that many of us take for granted.
Among the biggest hurdles is creating algorithms that can navigate around the human form efficiently without hurting the person they’re trying to help. Preprogrammed modes can run into all sorts of variables, including differing body shapes and unpredictable human reactions. Overreacting to those variables, on the other hand, can effectively freeze the robot, leaving it unsure of the best route to take.
So the team set out to develop a system that adapts to different scenarios and learns as it goes.
“To provide a theoretical guarantee of human safety, the team’s algorithm reasons about the uncertainty in the human model. Instead of having a single, default model where the robot only understands one potential reaction, the team gave the machine an understanding of many possible models, to more closely mimic how a human can understand other humans,” MIT writes in a blog post. “As the robot gathers more data, it will reduce uncertainty and refine those models.”
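The core idea lends itself to a simple illustration. Below is a minimal Python sketch, not the team’s actual code: the robot keeps a probability (a “belief”) over a handful of hypothetical human-reaction models, reweights that belief each time it observes the person move, and only commits to a motion that keeps a safety margin under every model it still considers plausible. All of the models, positions and parameters here are invented for illustration.

```python
import numpy as np

# Hypothetical 1-D setup: x is the person's arm position, g is the robot
# gripper's position, both in meters along a single axis.
MODELS = {
    "stay":   lambda x: x,          # person holds still
    "flinch": lambda x: x + 0.10,   # person jerks away
    "drift":  lambda x: x + 0.02,   # person drifts slowly
}
SIGMA = 0.03  # assumed std dev of observation noise, in meters

def update_belief(belief, x_prev, x_obs):
    """Bayesian update: reweight each model by how well it predicted x_obs."""
    posterior = {}
    for name, model in MODELS.items():
        err = x_obs - model(x_prev)
        posterior[name] = belief[name] * np.exp(-0.5 * (err / SIGMA) ** 2)
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

def safe_step(belief, g, x, candidates, margin=0.05, risk=0.05):
    """Return the largest candidate step that keeps `margin` clearance from
    the person's predicted position under every still-plausible model."""
    plausible = [m for name, m in MODELS.items() if belief[name] > risk]
    for step in sorted(candidates, reverse=True):
        if all(abs((g + step) - m(x)) > margin for m in plausible):
            return step
    return 0.0  # freeze only when no move is safe under all plausible models

# Toy run: the person actually follows the "drift" model.
rng = np.random.default_rng(0)
belief = {name: 1.0 / len(MODELS) for name in MODELS}  # uniform prior
g, x = 0.0, 0.5
for t in range(5):
    step = safe_step(belief, g, x, candidates=[0.0, 0.02, 0.05, 0.10])
    g += step
    x_new = MODELS["drift"](x) + rng.normal(0.0, SIGMA)  # observe the person
    belief = update_belief(belief, x, x_new)
    x = x_new
    probs = {name: round(p, 2) for name, p in belief.items()}
    print(f"t={t} step={step:+.2f} belief={probs}")
```

Note the fallback in `safe_step`: the robot only freezes when no move clears the margin under every plausible model, which is exactly the kind of over-caution the belief update is meant to shrink as evidence about the person accumulates.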
The team says it will also be researching how human subjects react to this sort of robotic assistance.