
Episode 177 — February 13th, 2025 — Available at read.fluxcollective.org/p/177
Contributors to this issue: Justin Quimby, Jon Lebensold, Scott Schaffter, Ade Oshineye, MK, Neel Mehta, Boris Smus, Erika Rice Scherpelz
Additional insights from: Alex Komoroske, Ben Mathes, Chris Butler, Dart Lindsley, Dimitri Glazkov, Jasen Robillard, Julka Almquist, Kamran Hakiman, Lisie Lillianfeld, Melanie Kahl, Robinson Eaton, Samuel Arbesman, Spencer Pitman, Wesley Beary
We’re a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we’ve noticed in recent weeks.
“That men do not learn very much from the lessons of history is the most important of all the lessons that history has to teach.”
— Aldous Huxley
🤹‍♂️🕸️ Playing with patterns
In the early 1990s, the object-oriented software engineering community picked up Christopher Alexander’s breakthrough ideas in architecture and design. His ideas were at the periphery of contemporary architecture, but in software design communities, they became central. Design patterns have become a touchstone for software engineers, culminating in mature software libraries, frameworks, and design conventions. Practitioners often take time to learn these patterns: first by following guides and tutorials, then by instantiating them inside a framework, and perhaps eventually by contributing patterns of their own.
One could argue this is about to change. Patterns, by definition, are instantiated repeatedly. It stands to reason that this repetition burns their latent representation into generative AI systems. Perhaps, with AI coding assistants, humans no longer need to think about software design patterns. Let the AI handle that.
But what if we flip this around? Given their ability to operate at the intersection of many pattern languages, what if LLMs can provide novel ways to identify patterns that fit local conditions and requirements? The ability to mass-customize pattern languages to a particular context could help us explore new ways of applying and combining them. Just as we may ask a text-to-image model to generate an image of an astronaut riding a horse on the moon, we can ask a copilot to define a persistence strategy pattern over CQRS with support for dependency-injected logging.
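To make that mouthful concrete, here is a minimal sketch of the kind of code a copilot might produce for such a prompt. Everything in it is illustrative: the interfaces, class names, and in-memory stores are our own assumptions, not any particular framework’s API.

```typescript
// Hypothetical sketch: a persistence strategy in a CQRS split, with a
// logger supplied via constructor (dependency) injection. All names here
// are illustrative, not from any real framework.

interface Logger {
  info(message: string): void;
}

interface WriteStore {
  save(id: string, event: object): Promise<void>;
}

interface ReadStore {
  query(id: string): Promise<object | undefined>;
}

// Command side: writes go through a pluggable persistence strategy.
class CommandHandler {
  constructor(private store: WriteStore, private log: Logger) {}

  async handle(id: string, event: object): Promise<void> {
    this.log.info(`persisting event for ${id}`);
    await this.store.save(id, event);
  }
}

// Query side: reads come from a separately optimized store.
class QueryHandler {
  constructor(private store: ReadStore, private log: Logger) {}

  async handle(id: string): Promise<object | undefined> {
    this.log.info(`querying ${id}`);
    return this.store.query(id);
  }
}

// Swapping the persistence strategy (in-memory, SQL, event log...) means
// injecting a different WriteStore/ReadStore pair; the handlers never change.
const memory = new Map<string, object>();
const commands = new CommandHandler(
  { save: async (id, event) => { memory.set(id, event); } },
  { info: (m) => console.log(m) }
);
const queries = new QueryHandler(
  { query: async (id) => memory.get(id) },
  { info: (m) => console.log(m) }
);
```

The point isn’t this particular code; it’s that a copilot can assemble several pattern languages at once (strategy, CQRS, dependency injection) into one bespoke combination on demand.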
Seeing patterns as the beginning of a more significant journey echoes Alexander’s thinking. Just as software design patterns were growing in popularity, Alexander himself had moved beyond them to a search for what makes systems live. He found pattern languages somewhat restrictive—and explicitly warned against over-indexing on them.
Being able to play with patterns may provoke novel ways of building software. Instead of adopting a design language like Material, imagine generating one that perfectly matches your needs. Instead of using a framework like React, imagine taking inspiration from its ideas to generate a framework that’s only as complicated as your application needs, as in the sketch below. This may start out chaotic, but it could yield genuinely new ways of building.
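As a toy illustration of “only as complicated as your application needs,” here is a hypothetical React-inspired “framework” stripped to a single idea: components as pure functions from state to markup, re-rendered on every change. This is our own sketch (it assumes a browser DOM), not React’s actual API.

```typescript
// A toy, React-inspired "framework" reduced to the one idea this imagined
// app needs: components are pure functions from state to markup.
// Illustrative sketch only; not React's actual API.

type Component<S> = (state: S) => string;

function mount<S>(root: HTMLElement, component: Component<S>, initial: S) {
  let state = initial;
  const render = () => { root.innerHTML = component(state); };
  render();
  // Return a setter that triggers a full re-render. No diffing, no hooks,
  // because this hypothetical application doesn't need them.
  return (next: S) => { state = next; render(); };
}

// Usage: a counter component and a state update.
const Counter: Component<number> = (n) => `<button>Clicked ${n} times</button>`;
const setCount = mount(document.body, Counter, 0);
setCount(1);
```

No virtual DOM, no reconciliation, no lifecycle machinery: the generated framework stops exactly where the application’s requirements stop.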
Returning to Christopher Alexander, we can more easily see what it means to play with contextualized patterns. We can use generative AI to imagine structures that interpolate between architectural patterns. For example, we could interpolate between three patterns: (1) courtyards that live, (2) entrance transitions, and (3) light on two sides of every room. The result might be a generated rendering that shows how these three patterns could combine into a single structure.
The key to all of these explorations is realizing that the patterns themselves do not contain the truth about what makes a structure come to life or a software system effective. We need to draw on our experience of these systems to understand whether they are alive, whole, and centered. Playing with patterns can help by breaking us out of the rigid confines of any defined pattern language. However, if we do not ensure those new patterns have their own sense of life, we can end up with patterns that are more like a plastic palm tree than a stately redwood.
Yet overall, we are optimistic. When tools enable new ways of seeing and of generating concepts, surprising things can happen: the microscope, the telescope, and X-rays all changed how we think about the world around us. What if LLMs can help us see across patterns and connections we don’t yet fully fathom? Who knows, maybe they’ll help us along the journey toward finding Christopher Alexander’s elusive “quality without a name.”
🛣️🚩 Signposts
Clues that point to where our changing world might lead us.
🚏🇨🇫 The Central African Republic’s president launched a memecoin
The president of the impoverished Central African Republic launched a meme cryptocurrency, $CAR, which he said was an “experiment” to “support national development” and raise the country’s international profile. The coin’s value (predictably) crashed soon after the announcement, falling from a high of 56¢ to roughly 3¢ as of this writing. (The CAR made Bitcoin legal tender in 2022 and briefly allowed foreign investors to buy citizenship for $60,000 in crypto.)
🚏📽️ A Chinese movie became the first non-Hollywood film to hit $1 billion
“Ne Zha 2,” a Chinese animated film, became the first non-Hollywood movie to rake in $1 billion in a single market, making it the highest-grossing film ever within one market. Within just days of release, it also became China’s highest-grossing and most-watched film of all time.
🚏⛽ BMW will keep investing in gas-powered cars, citing the “rollercoaster” of EVs in the US
While most automakers (including BMW) have been moving aggressively toward electric vehicles, BMW has been keeping its options open, developing flexible platforms that can support electric, internal combustion, and hybrid powertrains. One BMW board member, referring to potential changes in American EV policy, said that “it would be naive to believe that the move toward electrification is a one-way road; it will be a rollercoaster ride” — and as such, BMW would continue investing in internal combustion tech. (While many warn that EV sales have slowed down, 2024 was the best year in history for electric car sales, and 2025 is expected to be even better.)
🚏🚙 Jeeps started showing full-screen pop-up ads on the infotainment screen
One Jeep driver posted on Reddit that their car had started showing pop-up ads for an extended warranty on the central screen. The ad takes over the full screen (blocking out navigation and other settings) and apparently reappears every time the vehicle stops, even if it’s been dismissed before. Jeep’s parent company blamed a software glitch that caused opt-out settings not to be honored.
📖⏳ Worth your time
Some especially insightful pieces we’ve read, watched, and listened to recently.
The Resilience of Alien Chess (Mark Rabkin) — Argues that working in a big organization is like playing chess when an alien swoops in to change the rules of the game every few minutes. Success in this world is less about making long-term plans on the current chessboard and more about adapting quickly to new boards and making small improvements to any board put in front of you.
Robust Yet Fragile (Maxim Raginsky) — Argues that control in complex systems (such as air traffic control or public policy) reduces the “externally perceived complexity” of the system, making it stable and often self-reinforcing thanks to feedback loops. But this reduction in complexity often leads people to irrationally believe that the controls are no longer necessary — which makes the system fragile, since people could strip away its controls at any moment.
Paradoxical Commandments (Kent M. Keith) — Observes that many idealistic young people set out to change the world, only to find their efforts stymied and return with their tails between their legs. The remedy, the author argues, is to “love people, because change takes time, and love is one of the only motivations strong enough to keep you with the people and the processes until change is achieved.”
Elections Without Democracy: The Rise of Competitive Authoritarianism (Steven Levitsky & Lucan Way) — Written in 2002, when political scientists were still giddy about the wave of post-Cold War democratization, this classic paper argues that many countries categorized as hybrid regimes (neither fully democratic nor fully authoritarian) are not simply halfway down the road to inevitable democratization. One stable-ish landing spot for backsliding democracies, autocracies that haven’t fully consolidated power, or fractured regimes that have replaced fallen autocracies is “competitive authoritarianism,” where the arenas of democratic competition (like elections and an independent press) still exist, but the regime has varying levels of success rigging the game so it wins anyway.
🔮📬 Postcard from the future
A ‘what if’ piece of speculative fiction about a possible future that could result from the systemic forces changing our world.
// How might the broad-scale deployment of AI impact career paths?
// 2032. The Presidio park in San Francisco.
A couple enjoys vanilla soft serve and a leisurely walk in the park on a sunny Saturday afternoon. They are established tech industry veterans enjoying their day off: a moment not connected to work and its constant stream of notifications, pings, and chats. Phones are silenced, glasses are deactivated, and there isn’t a single active network connection between them. A steady stream of twenty-somethings flows around them, busy hitting their personalized exercise goals or striding purposefully toward a networking event or an algorithm-determined date.
One looks at the other and says, “Don’t look right. Don’t mention work. Start talking about airships or something.”
Without breaking stride, the other talks about the recent launch of the Zeppelin model NT-5 in Friedrichshafen and its implications for cargo transport and executive leisure excursions. Five minutes later, the first holds up a hand.
“Thanks. It was just one of the Associate Prompt Engineers from work. I’ve been mentoring for the ‘APE’ program and some of the recent grads are really into the role. They’re nice, but… I want to keep enjoying my day off.”
“Wait, wait… your company has a recent grad program specifically for AI Prompt Engineers?”
“Yeah, it's in its fourth or fifth year now. The same way that product management used to be a wacky new role, the company is building a career path around the best practices and lessons from the ad hoc AI prompt work of the past decade.”
“What’s the problem? Aside from dealing with an enthusiastic coworker on your day off, of course…”
“It’s that, well, so much of this group is so focused on their own careers. They saw older friends whose careers rocketed to success as pseudo-prompt engineers. As a result, they’ve wanted this since they were teens, not because they enjoy AI or the possibilities it unlocks but because it is a pathway to making big tech money. They’re hyper-focused on growing their net worth, not how they can help people or advance the technology.”
“Or make the world a better place?”
“Or make the world a better place.”
“The soft serve is good, though.”
They share a sorrowful smile and continue on their stroll.
© 2025 The FLUX Collective. All rights reserved. Questions? Contact flux-collective@googlegroups.com.
As someone who studied with Alexander and used patterns in many real-life projects, I'm not so sure about this.
Design patterns are specifically not derived from observation; they are ‘meta-design’.
When we write patterns, we’re not looking at the world to find recognisable patterns (which I agree is what an LLM would do).
We’re looking at the world for recognisable conditions, then devising patterns which (hopefully) describe how to work toward a beneficial resolution of those conditions (while encouraging us to be aware of the wider and narrower contexts engaged).
An AI can find patterns in the training set (I’m pretty sure that is in fact what the LLM training approach does, with wide-scope patterns like ‘story’, ‘research paper’, and ‘letter’, through narrower scopes like ‘paragraph’ and ‘sequence’, on down to tokens).
But those patterns will be ‘as observed’, not ‘designed for’.
The business of ‘refining’ that goes on seems to be about weeding out the ‘anti-patterns’ present in the training data.
As to AI helping, we’re back at the ‘alignment issue’, because Alexander structured the complex system map that ‘A Pattern Language’ sets out in a specific manner: the ‘emergent desirables’ first (Towns), then the ‘ambitious but achievables’ (Buildings), then the ‘doables’ (Construction).
To write good patterns, then (at least if we pay attention to Alexander), we should first describe the emergent conditions we wish to support, then enquire into the conditions which might support that emergence, then look for the forces at play and how to resolve them in support of the wider whole.
So far, we don’t know how to tell AI systems about the emergent outcomes we want, and those outcomes are certainly not adequately represented in any training set…