Bold predictions claim AI will boost productivity, generate game-changing insights, and streamline operations – from risk management and compliance to the front office and beyond. Yet behind the headlines, a more uncomfortable question lingers: can AI truly meet these lofty expectations, or is it at risk of collapsing under the weight of its own hype?
The heart of the challenge isn’t AI itself; it’s data. In many institutions, outdated infrastructure, fragmented architectures, and siloed legacy systems continue to dominate. These setups were never intended to handle the scale and urgency that modern AI demands. Without an overhaul of data foundations, even the most powerful AI models will produce inaccurate results, create false confidence, and leave firms ill-prepared to respond to market developments.
The missing link in data readiness
AI algorithms crave completeness, accuracy, and consistency. Yet firms often rely on data systems that are anything but unified. It’s no surprise many financial institutions struggle to reconcile different feeds or update data in real time. As a result, data quality quickly degrades. AI then ends up basing its decisions on stale or inconsistent information, leading to questionable outputs at best – and costly mistakes at worst.
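To make that concrete, the short Python sketch below illustrates the kinds of automated checks such a data foundation implies – completeness, cross-feed consistency, and freshness. The feed layout (a hypothetical trade_id, quantity, price, and updated_at schema) and the 15-minute staleness threshold are illustrative assumptions, not a prescribed approach.

```python
# A rough illustration of completeness, cross-feed consistency, and freshness
# checks on two hypothetical position feeds. Column names and the staleness
# threshold are illustrative assumptions only.
import pandas as pd

def basic_quality_report(feed_a: pd.DataFrame, feed_b: pd.DataFrame,
                         key: str = "trade_id",
                         max_age: pd.Timedelta = pd.Timedelta(minutes=15)) -> dict:
    """Return simple completeness, consistency, and freshness metrics."""
    # Completeness: share of missing values in fields a model would rely on
    missing_rates = feed_a[["quantity", "price", "updated_at"]].isna().mean().to_dict()

    # Consistency: records whose quantities disagree between the two feeds
    merged = feed_a.merge(feed_b, on=key, suffixes=("_a", "_b"))
    quantity_breaks = int((merged["quantity_a"] != merged["quantity_b"]).sum())

    # Freshness: records older than the tolerated latency
    age = pd.Timestamp.now(tz="UTC") - pd.to_datetime(feed_a["updated_at"], utc=True)
    stale_rows = int((age > max_age).sum())

    return {"missing_rates": missing_rates,
            "quantity_breaks": quantity_breaks,
            "stale_rows": stale_rows}
```

A report like this is only a starting point, but it shows how quickly stale or inconsistent records can be surfaced before a model ever sees them.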
Recent research by Gresham, polling firms across the US, the UK, and the DACH region, found that while nearly all firms are starting to use AI to some degree, just 41% said they had extensively deployed AI across business operations. This discrepancy is telling: where data remains siloed or inconsistent, AI efforts stay small-scale or purely exploratory. Where data strategies are robust, AI initiatives advance more rapidly.
Investments in data management are also rising. In the same survey, 63% of senior decision-makers at financial services firms globally said AI would increase the cost of data within their organization, reflecting the need for new tools and expertise to make data ready for advanced analytics. Notably, 40% of the sample predicted a rise in operational headcount, contradicting the common assumption that AI automatically reduces staff numbers.
Scoping the barriers
These challenges come into sharp focus when we consider the underlying systems and structures found across financial institutions. Many companies have spent years building proprietary back-office platforms and custom integrations, and layering AI into these older environments is far from simple. Communication breakdowns between AI models and legacy platforms can lead to inconsistent data flow and unreliable outputs.
The question of talent is another barrier. AI in finance demands more than coding prowess; it requires a deep understanding of regulatory frameworks, specialized financial products, and institutional processes. This mix is hard to find, leaving the field short of people who can develop, implement, and maintain AI solutions at scale.
Legal and licensing matters add further complication. As generative AI begins to analyse text, images, and market data, firms must handle intellectual property constraints and commercial usage rights with great care.
Licensing agreements for market and reference data, for instance, were not designed with AI’s evolving capabilities in mind. There are also concerns about inadvertently exposing confidential information through AI-driven tools. The legal and operational frameworks that govern these new technologies continue to lag behind AI’s rapid adoption.
Shifting concepts of data quality
Data quality is no longer just about whether a field is correct or incorrect; it’s about understanding whether you’re allowed to use certain data in the first place, and whether it makes sense to do so. Permissions, provenance, and the potential for biased or ambiguous outputs all come into play. Large language models, for instance, can provide confident but factually inaccurate answers, making rigorous testing and validation essential.
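One way to make that testing concrete is to gate model outputs behind checks against trusted reference data. The sketch below assumes a hypothetical workflow in which any figure an LLM cites must match a value from an authoritative system before the answer is accepted; the extract_figures and validate_answer helpers, the regex, and the tolerance are all illustrative, not a description of any particular product.

```python
# Minimal sketch of output validation for an LLM-assisted workflow: before a
# generated answer is surfaced, any figures it cites are checked against a
# trusted reference source. Helper names and tolerances are assumptions.
import re

def extract_figures(answer: str) -> list[float]:
    """Pull numeric figures out of a model-generated answer."""
    return [float(m.replace(",", "")) for m in re.findall(r"\d[\d,]*\.?\d*", answer)]

def validate_answer(answer: str, reference_values: list[float],
                    tolerance: float = 0.01) -> bool:
    """Accept the answer only if every cited figure matches a reference value."""
    for figure in extract_figures(answer):
        if not any(abs(figure - ref) <= tolerance * max(abs(ref), 1.0)
                   for ref in reference_values):
            return False  # confident-sounding but unverifiable figure: reject
    return True

# Example: the model claims a notional of 1,250,000 while the booking system
# records 1,200,000 -- the answer is flagged rather than trusted.
print(validate_answer("Total notional exposure is 1,250,000 USD.", [1_200_000.0]))
```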
Older data cataloguing and governance tools can’t keep pace with these evolving requirements. A static dictionary that flags ownership or acceptable values doesn’t capture the contextual nuances that AI demands. Instead, modern cataloguing must account for data lifecycles, usage restrictions, and the need to red-team or pressure-test models to reveal potential flaws.
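As a sketch of what that more contextual cataloguing could look like, the dataclass below records provenance, licence scope, permitted uses, and review dates alongside the dataset itself. The field names and the can_use_for check are illustrative assumptions rather than a reference to any specific governance tool.

```python
# A rough sketch of a "contextual" catalogue entry: provenance, licensing
# scope, permitted uses, lifecycle limits, and a red-team review date.
# Field names are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CatalogueEntry:
    dataset: str
    source: str                      # provenance: where the data originates
    licence_scope: str               # e.g. "internal analytics only"
    permitted_uses: set[str] = field(default_factory=set)
    ai_training_allowed: bool = False          # explicit permission, not implied
    retention_until: date | None = None        # lifecycle boundary
    last_red_team_review: date | None = None   # when the model was pressure-tested

    def can_use_for(self, purpose: str) -> bool:
        """Check a proposed use against the recorded permissions."""
        if purpose == "model_training":
            return self.ai_training_allowed
        return purpose in self.permitted_uses

# Example: market data licensed for reporting cannot silently feed model training.
entry = CatalogueEntry("eod_prices", "vendor_feed", "internal analytics only",
                       permitted_uses={"reporting", "reconciliation"})
print(entry.can_use_for("model_training"))  # False
```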
Moving forward – why bother?
It’s easy to see why some remain skeptical. Retooling data infrastructure, refining governance, and hiring specialized staff can be expensive and time-consuming.
However, the benefits of AI are too significant to dismiss. Smart algorithms excel at spotting patterns in massive data sets – exactly what financial institutions need for fraud detection, real-time market insights, and predictive risk analysis. Institutions that manage to integrate AI properly often find themselves making decisions faster and catching opportunities that competitors overlook.
Efficiency gains can be especially compelling. Routine tasks, like reconciliation and document processing, can be automated, freeing human teams to focus on high-value, analytical work. Moreover, AI-driven personalization can deepen customer relationships. Tailored product offerings and proactive service tweaks help retain clients in an industry where loyalty can’t be taken for granted.
Making AI work
If the industry wants AI to live up to its potential, it must address data issues head-on. This means modernizing legacy architecture, ensuring data quality is consistently maintained, and creating clear guidelines around privacy and licensing. It also requires investing in people who can bridge the gap between software engineering, finance, and regulation. Meanwhile, legal teams must keep pace with changing models, usage restrictions, and licensing obligations.
So, can AI truly deliver in financial services? The short answer is yes, provided institutions lay the necessary groundwork. These tools can overhaul how data is processed, accelerate complex decisions, and uncover meaningful insights in a realm where timing and accuracy matter. But unless firms commit to cleansing and modernizing their data, AI may never advance beyond flashy proofs of concept. Genuine success demands reliable foundations.