
Robots could become much better learners thanks to a ground-breaking method devised by Dyson-backed researchers: removing traditional complexities from teaching robots how to perform tasks could make them even more human-like


One of the biggest hurdles in teaching robots new skills is converting complex, high-dimensional data, such as images from onboard RGB cameras, into actions that accomplish specific goals. Existing methods typically rely on 3D representations that require accurate depth information, or on hierarchical predictions that work with motion planners or separate policies.

Researchers at Imperial College London and the Dyson Robot Learning Lab have unveiled a novel approach that could address this problem. The “Render and Diffuse” (R&D) method aims to bridge the gap between high-dimensional observations and low-level robotic actions, especially when training data is scarce.

R&D, detailed in a paper published on the arXiv preprint server, tackles the problem using virtual renders of a 3D model of the robot. By representing low-level actions within the observation space, the researchers were able to simplify the learning process, as the sketch below illustrates.
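Conceptually, the approach renders where the robot would end up under a candidate action into the same image the camera sees, then uses a diffusion-style process to iteratively refine that action. The Python sketch below is a minimal illustration of that loop under stated assumptions: `render_robot` and `denoise_step` are hypothetical placeholders standing in for a real renderer of the robot's 3D model and for the paper's learned denoising network; neither is the authors' implementation.

```python
import numpy as np

def render_robot(pose, camera, image_shape):
    # Hypothetical stand-in for a renderer: project the 3D gripper
    # position into the camera image and mark the pixel where the
    # robot model would appear. A real renderer rasterises the mesh.
    fx, fy, cx, cy = camera
    canvas = np.zeros(image_shape, dtype=np.float32)
    u = int(fx * pose[0] / pose[2] + cx)
    v = int(fy * pose[1] / pose[2] + cy)
    if 0 <= v < image_shape[0] and 0 <= u < image_shape[1]:
        canvas[v, u] = 1.0
    return canvas

def denoise_step(observation, rendered_action, noisy_pose):
    # Hypothetical stand-in for the learned model: in R&D a network
    # sees the observation with the rendered action overlaid and
    # predicts a less-noisy action. A fixed target mimics that here.
    predicted_pose = np.array([0.0, 0.1, 0.8])  # placeholder prediction
    return noisy_pose + 0.1 * (predicted_pose - noisy_pose)

STEPS = 50
camera = (500.0, 500.0, 64.0, 64.0)  # fx, fy, cx, cy (assumed values)
observation = np.random.rand(128, 128).astype(np.float32)  # fake camera frame

# Start from a noisy action and iteratively refine it, re-rendering
# the robot at the current estimate before every denoising step.
pose = np.array([0.0, 0.0, 1.0]) + 0.1 * np.random.randn(3)
for _ in range(STEPS):
    rendered = render_robot(pose, camera, observation.shape)
    pose = denoise_step(observation, rendered, pose)

print("refined gripper pose:", pose)
```

The design choice this illustrates is that actions and observations share a single representation, the image itself, so the policy does not have to learn a separate mapping between pixel space and action space.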

Robot putting a toilet seat down (Image credit: Vosylius et al)

    Imagining their actions within an image




