
    Microsoft’s new AI tool wants to find and fix AI-generated text that’s factually wrong



    Microsoft has unveiled a new tool that aims to stop AI models from generating content that is not factually correct, more commonly known as hallucinations.

    The new Correction feature builds on Microsoft’s existing ‘groundedness detection’, which essentially cross-references AI-generated text against a supporting document supplied by the user. The tool will be available as part of Microsoft’s Azure AI Content Safety API and can be used with any text-generating AI model, such as OpenAI’s GPT-4o and Meta’s Llama.
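    To make the workflow concrete, here is a minimal sketch of how a groundedness-detection request might be assembled: the model’s output is sent alongside the user-supplied grounding documents it should be checked (and corrected) against. The endpoint path, API version, and field names below are assumptions based on the service’s preview REST shape, not confirmed details from the article.

    ```python
    # Hypothetical sketch of a groundedness-detection request for the
    # Azure AI Content Safety service. Endpoint path, api-version, and
    # field names are assumptions and may differ from the shipped API.
    import json

    ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
    API_VERSION = "2024-09-15-preview"  # assumed preview version

    def build_groundedness_request(generated_text: str,
                                   grounding_sources: list[str],
                                   correction: bool = True) -> dict:
        """Assemble the JSON body: the AI output to verify, plus the
        reference documents it should be cross-checked against."""
        return {
            "domain": "Generic",
            "task": "Summarization",
            "text": generated_text,                 # the AI output to verify
            "groundingSources": grounding_sources,  # user-supplied documents
            "correction": correction,               # ask for a corrected rewrite
        }

    body = build_groundedness_request(
        "The report was published in 2021.",
        ["The annual report was published in March 2022."],
    )
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version={API_VERSION}"
    print(json.dumps(body, indent=2))
    ```

    In this sketch the service would flag the ungrounded year and, with correction enabled, return a rewritten sentence consistent with the grounding source.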



