
Few could have predicted that a warped, glitchy video of Will Smith attempting to eat spaghetti would become one of the most important before-and-after markers in modern AI history.
The original 2023 clip, generated with ModelScope, was memorably, operatically bad. Smith's face warped between mismatched expressions, his hands morphed into rubbery appendages, and the noodles floated as if obeying their own strange gravitational rules. 'Will Smith eating spaghetti' became a kind of shorthand for the unhinged early stage of AI video generation.
The 2023 clip feels like a relic now, the sort of thing people show in documentaries about the dawn of a technology to illustrate its awkward adolescence. The AI couldn’t keep Smith’s identity stable from frame to frame, and the initial video revealed the real limits of early text-to-video systems. By early 2024, the meme had grown enough legs that Smith himself joined in on the joke, posting a TikTok in which he exaggerated every motion as he ate spaghetti in real life.
The most recent version, made with Kling 3.0, renders an entire scene of Smith eating spaghetti with a kid and even holding a conversation, all from a single prompt.
AI cinema
The improvements in AI video generation play out rapidly across the compilation: the eyes stay aligned, the facial structure stabilizes, and the bowl stops teleporting between frames. By the time the compilation reaches the most recent models, the spaghetti actually behaves like a physical object. Even the lighting becomes coherent.
Early models were capable of producing frames that looked good in isolation, but they couldn’t sustain a character, a motion pattern, or even a scene across time. Kling 3.0 maintains continuity throughout. The short piece of video feels like it belongs to the same physical reality from start to finish.
It’s a time-compressed demonstration of how entire research priorities shifted. First came anatomical consistency, then motion coherence, then higher resolutions, then realistic physics, then the ability for models to follow the emotional or narrative intent of a prompt.
Spaghetti testing
Personality is what makes the spaghetti meme endure. And personality, of a sort, is what the newest models have begun to capture. In the early clips, nothing on screen behaves with intention. By the end, the AI-generated Smith really seems to be performing an action, as if guided by an internal logic rather than random frame-to-frame improvisation.
That shift signals something important for the broader field of AI video. Once a model can maintain a character through movement, it opens the door to rendering human action in a way that fits inside our expectations.
The internet has spent years archiving its own absurdity, but this meme has matured into a kind of yardstick. If a model can do this convincingly, it’s operating at a level that the earliest systems couldn’t have imagined.
Eric Hal Schwartz