Artificial intelligence researchers have created a tool that translates language into physical movements
Artificial intelligence researchers at Carnegie Mellon University have created an AI tool that translates words into physical movements. The tool, known as Joint Language-to-Pose (JL2P), links natural language to 3D pose models.
JL2P's pose-prediction model was trained end to end using a curriculum-learning approach: the AI first completed shorter, simpler task sequences before moving on to more difficult goals.
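The easy-to-hard schedule described above can be illustrated with a minimal sketch. This is not CMU's actual training code; the function name `curriculum_batches` and the toy data are assumptions made purely for illustration. The idea is simply to order sentence-to-pose training pairs by sequence length so the model sees short motions before long ones.

```python
# Illustrative sketch (not CMU's actual implementation) of a
# curriculum schedule: train on short pose sequences first,
# then progressively longer, harder ones.

def curriculum_batches(sequences, stages=3):
    """Yield training groups ordered from short to long sequences.

    `sequences` is a list of (sentence, pose_frames) pairs; the
    curriculum sorts by pose-sequence length and splits the sorted
    data into `stages` progressively harder groups.
    """
    ordered = sorted(sequences, key=lambda pair: len(pair[1]))
    stage_size = max(1, len(ordered) // stages)
    for start in range(0, len(ordered), stage_size):
        yield ordered[start:start + stage_size]

# Toy data: a sentence paired with a dummy pose sequence (one entry per frame).
data = [
    ("a person runs forward", [0] * 8),
    ("a person waves", [0] * 2),
    ("a person plays guitar", [0] * 5),
]

for stage, batch in enumerate(curriculum_batches(data)):
    print(stage, [sentence for sentence, _ in batch])
# → 0 ['a person waves']
#   1 ['a person plays guitar']
#   2 ['a person runs forward']
```

In a real pipeline, each yielded group would feed a training loop before advancing to the next stage, so the model's early updates come from the simplest motions.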
JL2P's animations are currently limited to stick figures, but the ability to translate words into human movement could, in the long run, help humanoid robots perform physical tasks. The technology could also be used to create virtual characters for games or movies.
That said, JL2P is not the first AI to transform words into images. ObjGAN, introduced by Microsoft in June, produces sketches and storyboards from text descriptions, while an AI algorithm developed by Disney uses the text of a script to generate storyboards. Nvidia's GauGAN lets users paint landscape pictures with brushes labeled with words such as "tree," "mountain," and "sky."
JL2P's capabilities include walking, running, playing instruments such as the guitar or violin, following directional instructions such as left and right, and controlling speed, slow or fast.
Finally, it is worth noting that JL2P achieved a 9 percent improvement in human-motion modeling over the previous state-of-the-art AI, proposed by SRI International researchers in 2018.