18 June 2025

The Verge: “Apple’s new design language is Liquid Glass”

Liquid Glass is inspired by Apple’s visionOS software and can adapt to light and dark environments. When you swipe up on the iOS 26 lockscreen there’s a glass edge, and elements throughout the OS have glass edges to them. Even the camera app has the glass feel, with menus that are transparent and features that are overlaid on top of the camera feed.

Liquid Glass uses real-time rendering and will dynamically react to movement. Apple is using it on buttons, switches, sliders, text, media controls, and even larger surfaces like tab bars and sidebars. Apple has redesigned its controls, toolbars, and navigation within apps to fit this new Liquid Glass design.

Tom Warren

Speaking of Apple, the big announcement of the 2025 WWDC was… a new design language. The reactions have not been particularly favorable, since the heavy doses of transparency in every corner of the user interface can lead to low contrast and poor readability, even for people with normal vision. I have no access to a live example, but some of the screenshots I’ve seen online are borderline impossible to read. This is a long-standing argument dating back to the slick holo-screens from the movie Minority Report; while everyone loves the novelty and the cool factor on screen, the lack of anything similar in real life might serve as a clue that these effects are impractical for regular use.

17 June 2025

Marcus on AI: “A knockout blow for LLMs?”

Apple has a new paper; it’s pretty devastating to LLMs, a powerful followup to one from many of the same authors last year.

The new Apple paper adds to the force of Rao’s critique (and my own) by showing that even the latest of these new-fangled “reasoning models” still — even having scaled beyond o1 — fail to reason beyond the distribution reliably, on a whole bunch of classic problems, like the Tower of Hanoi. For anyone hoping that “reasoning” or “inference time compute” would get LLMs back on track, and take away the pain of multiple failures at getting pure scaling to yield something worthy of the name GPT-5, this is bad news.

If you can’t use a billion dollar AI system to solve a problem that Herb Simon (one of the actual “godfathers of AI”, current hype aside) solved with AI in 1957, and that first semester AI students solve routinely, the chances that models like Claude or o3 are going to reach AGI seem truly remote.

Gary Marcus

Nothing terribly surprising about this conclusion. As the author notes in the newsletter, this is a known limitation of the LLM architecture going back decades: neural networks perform well enough within the bounds of their training data, but can break down in unpredictable ways when applied to tasks outside their training distribution. And so the relentless drive to replace good, old-fashioned deterministic algorithms, which are also more power- and compute-efficient, with LLMs is a recipe for ballooning costs and uncomfortable failures.
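To make the contrast concrete: the Tower of Hanoi puzzle that trips up billion-dollar “reasoning models” yields to a textbook recursion in a handful of lines. This is a minimal sketch of the standard algorithm (not code from the Apple paper); it solves any size instance exactly, in time proportional to the 2^n − 1 moves required, with no training data at all.

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Solve Tower of Hanoi for n disks.

    Returns the full move list as (disk, from_peg, to_peg) tuples.
    Classic recursion: move n-1 disks out of the way, move the
    largest disk, then move the n-1 disks on top of it.
    """
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)   # clear the top n-1 disks onto aux
        moves.append((n, src, dst))          # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 disks onto dst
    return moves

# Optimal solution length is always 2^n - 1 moves.
print(len(hanoi(3)))   # 7
print(len(hanoi(10)))  # 1023
```

The point is not that recursion is clever; it is that a deterministic program is provably correct for every input size, while an LLM’s reliability on the same puzzle degrades as instances move away from its training distribution.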