    Ethical AI: Considerations ahead of regulations



    The AI leviathan continues to tower over every datacenter, with organizations racing to deploy AI-based solutions for immediate benefits today, or putting the infrastructure and models in place to reap an aspirational return from research projects in the long run. Regardless of where an organization is on its AI journey, the breakneck speed at which this technology is advancing has left regulators scrambling to catch up on how AI should be governed to ensure it is used ethically. There is a pressing need to clarify accountability in cases of errors or unintended consequences, and for legal frameworks that provide guidelines for determining responsibility when AI systems cause harm or fail to meet expected standards.

    Alex McMullan

    CTO International at Pure Storage.

    What is ethical AI?

    Ethical AI means supporting the responsible design and development of AI systems and applications that are not harmful to people and society at large. While this is a noble goal, it’s not always easy to achieve and requires in-depth planning and constant vigilance. For developers and designers, key ethical considerations should at a minimum include the protection of sensitive training data and model parameters from manipulation. They should also provide real transparency into how AI models work and how they are affected by new data, which is essential to ensuring proper oversight. Whether ethical AI is being approached by the C-suite of a private business, a government, or a regulatory body, it can be difficult to know where to start.

    Transparency as the foundation
