More

    Accenture, SAP Leaders on Diversity Problems and Solutions


    Generative AI bias, driven by model training data, remains a large problem for organisations, according to leading experts in data and AI. These experts recommend APAC organisations take proactive measures to engineer around or eliminate bias as they bring generative AI use cases into production.

    Teresa Tung, senior managing director at Accenture, told TechRepublic generative AI models have been trained primarily on internet data in English, with a strong North American perspective, and were likely to perpetuate viewpoints prevalent on the internet. This creates problems for tech leaders in APAC.

    “Just from a language perspective, as soon as you’re not English based — if you’re in China or Thailand and other places — you are not seeing your language and perspectives represented in the model,” she said.

    Technology and business talent located in non-English speaking countries are also being put at a disadvantage, Tung said. The disadvantage emerges because the experimentation in generative AI is largely being done by “English speakers and people who are native or can work with English.”

    While many home grown models are developing, particularly in China, some languages in the region are not covered. “That accessibility gap is going to get big, in a way that is also biased, in addition to propagating some of the perspectives that are predominant in that corpus of [internet] data,” she said.

    AI bias could produce organisational risks

    Kim Oosthuizen, head of AI at SAP Australia and New Zealand, noted that bias extends to gender. In one Bloomberg study of Stable Diffusion-generated images, women were vastly underrepresented in images for higher paid professions like doctors, despite higher actual participation rates in these professions.

    “These exaggerated biases that AI systems create are known as representational harms, “ she told an audience at the recent SXSW Festival in Sydney, Australia. “These are harms which degrade certain social groups by reinforcing the status quo or by amplifying stereotypes,” she said.

    AI is only as good as the data it is trained on; if we’re giving these systems the wrong data, it’s just going to amplify those results, and it’s going to just keep on doing it continuously. That’s what happens when the data and the people developing the technology don’t have a representative view of the world.”

    SEE: Why Generative AI projects risk failure without business exec understanding

    If nothing is done to improve the data, the problem could get worse. Oosthuizen cited expert predictions that large proportions of the internet’s images could be artificially generated within just a few years. She explained that “when we exclude groups of people into the future, it’s going to continue doing that.”

    In another example of gender bias, Oosthuizen cited one AI prediction engine that analyzed blood samples for liver cancer. The AI ended up being twice as likely to pick up the affliction in men than women because the model did not have enough women in the data set it was using to produce its results.

    Tung said health settings represent a particular risk for organisations, as it could be dangerous when treatments are being recommended based on biased results. Conversely, AI use in job applications and hiring could be problematic if not complemented by a human in the loop and a responsible AI lens.

    AI model developers and users must engineer around AI bias

    Enterprises should adapt the way they either design generative AI models or integrate third-party models into their businesses to overcome biased data or protect their organisations from it.

    For example, model producers are working on fine-tuning the data used to train their models by injecting new, relevant data sources or by creating synthetic data to introduce balance, Tung said. One example for gender would be using synthetic data so a model is representative and produces “she” as much as “he.”

    Organisational users of AI models will need to test for AI bias in the same way they conduct quality assurance for software code or when using APIs from third-party vendors, Tung said.

    “Just like you run the software test, this is getting your data right,” she explained. “As a model user, I’m going to have all these validation tests that are looking for gender bias, diversity bias; it could just be purely around accuracy, making sure we have a lot of that to test for the things we care about.”

    SEE: AI training and guidance a problem for employees

    In addition to testing, organisations should implement guardrails outside of their AI models that can correct for bias or accuracy before passing outputs to an end user. Tung gave the example of a company using generative AI to generate code that identified a new Python vulnerability.

    “I will need to take that vulnerability, and I’m going to have an expert who knows Python generate some tests — these question-answer pairs that show what good looks like, and possibly wrong answers — and then I’m going to test the model to see if it does it or not,” Tung said.

    “If it doesn’t perform with the right output, then I need to engineer around that,” she added.

    Diversity in the AI technology industry will help reduce bias

    Oosthuizen said to improve gender bias in AI, it is important for women to “have a seat at the table.” This means including their perspectives in every aspect of the AI journey — from data collection, to decision making, to leadership. This would require improving the perception of AI careers among women, she said.

    SEE: Salesforce offers 5 guidelines to reduce AI bias

    Tung agreed improving representation is very important, whether that is gender, racial, age, or other demographics. She said having multi-disciplinary teams “is really key,” and noted that an advantage of AI is that “not everybody has to be a data scientist nowadays or to be able to apply these models.”

    “A lot of it is in the application,” Tung explained. “So it’s actually somebody who knows marketing or finance or customer service very well, and is not just limited to a talent pool that is, frankly, not as diverse as it needs to be. So when we think about today’s AI, it’s a really great opportunity to be able to expand that diversity.”

    https://assets.techrepublic.com/uploads/2024/10/tr_20241023-ai-bias-diversity-sap-accenture-leaders-insights.jpg



    Source link
    Ben Abbott

    Latest articles

    spot_imgspot_img

    Related articles

    Leave a reply

    Please enter your comment!
    Please enter your name here

    spot_imgspot_img