I’ve never wanted to skydive or do a bungee jump, but when I read some industry commentary it feels like this is what CFOs are being asked to do when it comes to investing in AI tools. For example, Ashu Garg and Jaya Gupta at Foundation Capital claim: “This isn’t just a new category of software; it’s the dismantling of enterprise software as we know it.”
CFOs, in my experience, are a pragmatic bunch who are not easily swayed by marketing speak and will only invest when tangible value can be demonstrated. They are also unlikely to invest in AI if there is any sense they might lose control of critical decision-making processes. So, if vendors want CFOs to embrace AI, they must give finance leaders confidence that they can trust the technology to deliver accurate results.
Putting aside the hype, it is unlikely that existing large language models (LLMs) and conversational AI tools will dismantle every element of finance workflows any time soon. However, change is coming, and CFOs need to get ready. Right now, they should be thinking about getting the correct foundations in place so that when the time comes to adopt AI tools, they have the flexibility to do so in a pragmatic way, rather than with the unnerving sensation of jumping off a cliff into the unknown attached to a bungee cord.
Chief Product and Technology Officer, Unit4.
AI and the randomness of life
Once the organizational and IT foundations are established, CFOs will have more confidence that AI tools will base their decisions on accurate information. They will also be better placed to oversee the AI tool to avoid incorrect decisions. For example, unpredictability in forecasting and planning is a primary challenge.
Black swan events can have dramatic and unforeseen effects on performance, and these are not simple for LLMs to address. Traditionally, models require training on every eventuality to make decisions, but with the right building blocks in place, finance teams can decide how best to approach such unique scenarios with AI tools.
One way AI agents will be able to address these more complex situations is by collaborating with one another to complete tasks autonomously, as the analyst and industry commentator Phil Wainewright has highlighted. Potentially, this approach will see these tools find new solutions and create opportunities to drive productivity, as well as business performance.
Three priorities to build trust in AI
In such a scenario, CFOs must be prepared to allow critical finance systems to operate autonomously without supervision. This will require huge trust in AI, but finance leaders can be more confident in ceding control to AI tools if they have addressed three priorities:
1. Integrity of input data: it may be obvious, but data must be accurate, and its integrity protected, if AI tools are to make trustworthy decisions. AI agents must be able to share data if they are to collaborate, so organizations must have a single source of truth for all the information within their systems, as well as be able to integrate information easily from external sources. This also means being able to read all data, in all formats – structured and unstructured. On top of that sits data security and knowing that the data comes from trusted sources – if AI agents are talking to one another unhindered, how do you guarantee they are all trustworthy?
2. Problem complexity: the AI tool you adopt needs to fit the problem. Generalist AI models, like conversational AI tools, may not be suited to making decisions for niche challenges. How you train the AI is critical – does it have the right data sources relevant to the problem you’re looking to solve? But the even bigger question is how you deal with randomness. Phil Wainewright talks about the “ingenuity of humans”, which today’s AI systems cannot replicate. In the world of finance, if you are looking at forecasting, there is a multiplicity of known factors affecting business performance, but there are also black swans, which are very difficult to train an AI to adapt to. How will your AI model cope with randomness?
3. Transparency of decision-making: if we are going to let go and trust AI agents to make more decisions in finance environments, then we must be able to trust the answers they provide. Unsupervised learning is a key step on the path to “letting go”, but this requires confidence both in the model being used and in the training data. With LLMs this process can also become inefficient: the more data required to train the AI, the bigger the black box becomes, and the more unwieldy and harder it is to understand the decision-making. It also poses the risk of unreliable data sources being introduced into the model. Businesses cannot afford to rely on technologies and data that cannot be decoded, so it is critical to find more elegant, streamlined ways to demonstrate what data is being used and how the model uses that data to reach decisions.
Addressing these priorities from the outset will give CFOs the confidence that AI is being embraced as part of a structured approach, surrounded by defined policies and guidelines. Having such checks and balances in place will ensure adopting AI is not a leap of faith. Certainly, there is an element of stepping into the unknown, because we don’t yet know the full extent of what mature AI technologies will be capable of, but if you approach it right, it will not feel like you’re attached to a bungee, curling your toes over the edge of the cliff while you psyche yourself up to leap.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro