I despise smartphone AI. It is arguably the most regressive development in consumer electronics, a spectacular misstep for mobile technology. The deeper I dive into so-called mobile AI, the more I’m left slack-jawed, wondering how features so fundamentally flawed could ever hit the shelf.
While AI’s potential is undeniable, today’s obsession with this shiny bauble is eroding the reputation of the world’s most formidable technology companies like Apple, Lenovo and Google, and there seems to be no alternative course.
The reality is more insidious than simple inaccuracy. What if that Casio calculator offered undeserved praise for your algebra mistakes? Imagine Microsoft Word not as a copy editor, but as a plagiarist ghostwriter that lifts its best prose from other writers. Picture a newspaper that uses doctored images to accuse you, the reader, of armed robbery, complete with fabricated stills of you removing a ski mask and counting illicit cash.
This is a more accurate depiction of the crisis of consumer AI. It is not merely wrong. It is not simply prone to error. It is actively harmful. The generative AI features in particular – image generators, text synthesizers, summary tools – are worse than incorrect. They are vectors for harm.
Tech companies will tolerate anything on the way to true AI
I’ve seen smartphone AI tools report baseless falsehoods, deploy harmful racial or misogynistic stereotypes, and facilitate fraud and deception. The consumer benefit of AI today is non-existent. AI has not made today’s phones superior to yesterday’s. Nobody purchases a device because its AI suite is a marvel of utility.
No one shops for the best AI phone.
Why are we tolerating this catastrophe? The answer lies in the magnetic pull of what is promised. Technology firms are treating these egregious missteps as necessary growing pains of a mythical entity: Artificial General Intelligence (AGI) – a machine capable of independent, human-level thought.
The thinking among today’s tech titans is that the failure to achieve AGI is not a failure of innovation but a shortage of data. They suggest that thinking machines are within reach, contingent only on collating enough user data to complete their training. To me this seems naïve, but this belief system is the engine driving the entire mobile industry today.
The next generation of mobile chipsets is poised to be astonishingly powerful, yet their true innovation will be their capacity to capture and funnel data from the edge of computing – the devices in our hands – back to the central cloud. The Snapdragon 8 Elite Gen 5, currently the apex of mobile processing, is lauded by Qualcomm not primarily for its speed, but for its unprecedented ability to harvest user data to refine future agentic AI models.
I won’t give up on AI; smartphones were always problematic
I believe in this future, and I eagerly anticipate its arrival. The current smartphone user interface paradigm is contemptible. Who decided that my device should be a monolithic touchscreen? It is a user experience defined by a million potential inputs, 99% of which are incorrect.
The reliance on a purely capacitive touchscreen with a dearth of physical controls feels less like a product designer’s rational conclusion and more like a fever dream from a sci-fi movie. It photographs beautifully in advertisements, but modern phones are objectively harder to navigate; by comparison, a full-QWERTY BlackBerry was child’s play.
We are not returning to physical buttons, which makes a true AI interface feel inevitable. If we want to move past the ineffectiveness of Siri and Gemini, we must train superior AI models.
The only path to improvement is to use the technology while it is still flawed and diligently correct its errors. Even that process demands the participation of thousands – perhaps millions – of users before the mistakes are meaningfully fixed.
This does not mean I must blindly accept every new AI feature. I can accept a measure of imperfection to train forthcoming agentic models, but I’m under no obligation to accept features that resort to prejudice and deception, merely to improve my smartphone.
If a smartphone feature – generative wallpaper, for instance – resorts to producing racial stereotypes or misogynistic tropes, it is a bad concept. It has no place on a consumer device. It is a demonstrable failure and must be scrapped and sent back to the lab.
Think I’m exaggerating? The bigotry isn’t a bug, it’s a feature. See what happened earlier this year when I asked the Google Pixel 9a to make me a wallpaper with an image of a successful person. And this problem isn’t new. It’s been happening since the first smartphone shipped with fully generative AI wallpaper: the 2024 Motorola Razr Plus.
If my smartphone’s ability to summarize the day’s headlines is predicated on the occasional invention of facts or the distortion of truth, then it must be stripped of that capability. This should be self-evident, yet for companies like Apple, this basic ethical line appears not to have been drawn.
I can embrace a future entirely controlled by agentic AI, but I am establishing my own guardrails now. I refuse to pave the way with hatred, bigotry, or deceit. I will wait. I will be patient. And I will advise everyone I know to avoid the AI products that choose shortcuts over ethical diligence. Tomorrow may indeed hold the dawn of artificial intelligence, but that does not mean I must endure today’s nightmare.




