
    The CEO of Capgemini has a warning. You might be thinking about AI all wrong 




    FOMO—the fear of missing out—used to be a shorthand favorite of young people worried about not being at the right party on a Saturday night. Now, chief executives are increasingly having FOMO about applied AI. The financial bets are large enough for boards to wince at the capex implications. The outcomes are shrouded in mystery, a particular irritant for leadership teams obsessed with data and clarity. 

    Step forward, Aiman Ezzat, the chief executive of technology and consultancy business Capgemini. The French Fortune 500 Europe giant has been in the news after it agreed to sell its U.S. subsidiary, Capgemini Government Solutions, which had been providing tracing and removal data for Immigration and Customs Enforcement (ICE) in America. In line with the great tech sell-off over AI spending fears, Capgemini’s share price has been laboring. 

    I spoke with Ezzat before the controversy over ICE blew up (Ezzat explained on LinkedIn that the American business acted autonomously to protect U.S. classified information). He told me that business leaders were treading a fine line with AI. There is a sweet spot somewhere between too far, too fast and stuck on the starting blocks. 

    “You don’t want to be too ahead of the learning curve,” he said. “If [you are], you’re investing and building capabilities that nobody wants.” 


    AI is not a big-bang moment; changes will happen in increments. Most leaders can remember the hype around the metaverse—a virtual reality world where we could trade and do business via our dancing avatars (Capgemini itself experimented with a metaverse lab). Mark Zuckerberg was so keen on the idea that he renamed his company after it. Like air-fryers, its time may now have passed. 

    Agility is the new approach: small tests and pilots before you scale. Capgemini now has labs for 6G mobile technology, quantum computing and robotics. No one knows which parts of these technologies may be the metaverses of the future. 

    “Is everything ready to mature? No,” says Ezzat. “But we want to be there to be able to see when things start to mature, when we can really start scaling up, not waiting to see, okay, oh, now it’s moving.” 

    “We have to do something, right? So, you have to be investing—but not too much—to be able to be aware of the technology, following at the speed to make sure that we are ready to scale when the adoption starts to accelerate.” 

    Capgemini ranks 181 on the Fortune 500 Europe.

    As I have written before, many large firms are viewing AI primarily as a way to make separate business divisions more efficient. That’s a start, but it is not a ‘whole enterprise’ approach which brings together data and operations from, say, finance and human resources or procurement and supply chains, and then connects them in innovative ways. 

    “AI is a business. It is not a technology,” Ezzat says, warning that leaders often fall into seeing AI as a “black box that’s being managed separately”. “There are technologies behind it, but it’s really about transforming the business. It cannot just be used to keep the house running.”

    “The question you [the CEO] have to focus on is: ‘how can your business be significantly disrupted by AI’, not ‘how is your finance team going to become more efficient?’ I’m sure your CFO will deal with that at the end of the day.” 

    A well-worn phrase with AI is ‘human in the loop’—a phrase challenged by one senior technology executive I spoke to recently as being “way off beam”. What we should really be talking about is ‘human in the lead’. Welcome back ‘human-centricity’, a centuries-old social philosophy, formalized as an engineering approach by the 1950s ergonomics movement.  

    “How do you deal with what we call AI-human-centricity?” Ezzat says. “Basically, the need to integrate AI with humans. How do you get humans to trust the agent? The agent can trust the human, but the human doesn’t really trust the agent.” 

    Ergonomics was about chairs that were built for people, rather than chairs designed to fit efficiently into an office or be simple to stack and move. How to mold AI to work with people is a similar challenge. Bad chairs lead to bad backs. Bad AI is likely to be far more consequential. 



    Kamal Ahmed
