Episode 148 — May 30th, 2024 — Available at read.fluxcollective.org/p/148
Contributors to this issue: Neel Mehta, Boris Smus, Erika Rice Scherpelz, MK, Gordon Brander, Dimitri Glazkov, Ben Mathes
Additional insights from: Ade Oshineye, Justin Quimby, Alex Komoroske, Robinson Eaton, Spencer Pitman, Julka Almquist, Scott Schaffter, Lisie Lillianfeld, Samuel Arbesman, Dart Lindsley, Jon Lebensold, Melanie Kahl, Kamran Hakiman, Chris Butler
We’re a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we’ve noticed in recent weeks.
“Then why do you want to know?”
“Because learning does not consist only of knowing what we must or we can do, but also of knowing what we could do and perhaps should not do.”
― Umberto Eco
🎛️🔄 Catalyzing variety
We often revisit the idea of requisite variety, which states that an effective control system must have at least as much variety — as many available states — as the system it aims to control. Without this, the controller cannot represent all the states of the system it needs to regulate and will be limited in the states it can attempt to produce.
Although variety and requisite variety seem similar, there is an important difference: variety can emerge without external input, but requisite variety must exist within a larger system. Requisite variety requires at least three elements: the system being controlled, the controller, and the environment. The tension between the three defines the boundaries of the challenge: do we constrain or catalyze the practical variety of the system under control in the environment we’re using it in?
Consider a car (the system being controlled) with a steering wheel and pedals (the control system) on a two-dimensional road (the environment). You can control the car well enough on that 2D surface without knowing the full details of the car’s engine. The control system is far less complex than the car itself, but it is complex enough to control the car on a 2D surface.
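The car example can be sketched in code. This is our own toy illustration of the law of requisite variety, not anything from the original formulation: a regulator can only cancel as many distinct disturbances as it has distinct responses, so a controller that is one state short lets some disturbance leak through.

```python
# Toy model: a regulator picks the response that best cancels a disturbance.
# With as many responses as disturbances, every disturbance is cancelled;
# with fewer, some disturbance always gets through (Ashby's requisite variety).

def regulate(disturbance, responses):
    """Pick the response that best cancels the disturbance (closest match)."""
    return min(responses, key=lambda r: abs(disturbance + r))

disturbances = [-1, 0, 1]      # the environment has 3 states of variety
rich_controller = [-1, 0, 1]   # matches the requisite variety
poor_controller = [-1, 1]      # one state short

def outcomes(responses):
    """The set of post-regulation states the system can end up in."""
    return {d + regulate(d, responses) for d in disturbances}

print(outcomes(rich_controller))  # every disturbance cancelled: {0}
print(outcomes(poor_controller))  # more than one outcome: variety leaks through
```

The richer controller holds the system at a single state; the poorer one cannot, no matter how cleverly it chooses among the responses it has.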
Now try to make a car that flies. The controls need to handle pitch, yaw, and roll; the environment has changed, so the control system has to change, too. The controls in a plane are more complex than those in a car.
To restate the challenge: how do we match the complexity of the system under control, the controls themselves, and the environment we want to operate in?
One common method is to regulate inputs to the system. Although the theoretical variety of the system under control remains unchanged, a larger range of inputs can increase the practical variety the system demonstrates. For instance, one cannot say whether they like a food they haven’t tried, though they can guess based on its similarity to foods they’ve eaten before. Exposure to new foods, like trying tteokbokki for the first time, increases the practical variety of their culinary preferences.
In a dynamic system, where the controller and the system under control are not static, input can — and typically does — evolve the controller itself: systems within systems. To continue our food example, the primary controller for most households’ cooking might be the grocery store. As a household’s inputs change — such as experiencing tteokbokki — they may add new dishes to their menu. At first, their cooking options will be limited by what the grocery store stocks. However, the same inputs that change their tastes may also influence what the grocery store carries over time. Inputs provide a catalyst for variety, changing the effective variety of both the controlled and controlling systems.
To turn to the real world, this principle highlights one mechanism through which totalitarian states limit their polities. Totalitarian states intentionally limit variety by restricting access to information — controlling the inputs. While limiting inputs can help focus an organization with clear, specific goals, à la Steve Jobs and Apple, such intense focus is advantageous only in limited circumstances. In most systems, particularly societies that encompass a mix of desires and choices, the input limitations of autocracy become deeply problematic: systems become brittle and highly vulnerable to unexpected inputs. Conversely, broadening inputs increases resilience by providing for adaptability in the face of complex, multifaceted challenges.
Although it’s not how the term ‘requisite variety’ is commonly used, thinking about input variety through the lens of requisite variety underscores the importance of maintaining diverse inputs within any system. By embracing a wide range of inputs, we can enhance our ability to manage and adapt to various situations, fostering resilience and innovation. Whether in organizational management, household decisions, or societal governance, diversity in input and control mechanisms leads to more robust and adaptable systems.
🛣️🚩 Signposts
Clues that point to where our changing world might lead us.
🚏🐍 New “AI-first” programming languages are launching to challenge Python
Python has long been a dominant language for machine learning, vector computations, and training models, but it’s not exactly designed for such heavy number-crunching; consider the infamous “Global Interpreter Lock” that blocks parallelization. Thus, developers are launching new “AI-first” languages that feel like Python but can take full advantage of GPUs and parallelization. Bend looks like Python but automatically runs code in parallel when possible, promising up to 50x performance improvements on GPUs with thousands of threads. Meanwhile, Mojo is a superset of Python that can run speedily on GPUs and borrows the memory-safety features of Rust, while still being compatible with existing Python libraries.
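The GIL limitation the article alludes to is easy to demonstrate in plain Python. The sketch below is our own illustration, not from Bend or Mojo: splitting CPU-bound work across threads yields correct answers, but because the Global Interpreter Lock lets only one thread execute Python bytecode at a time, it yields no speedup over running the calls sequentially.

```python
# CPU-bound work split across threads: correct, but not actually parallel,
# because CPython's GIL serializes bytecode execution across threads.
# (multiprocessing sidesteps the GIL by using separate interpreter processes.)
import threading

def busy_sum(n, out, idx):
    """CPU-bound work: sum of squares below n."""
    total = 0
    for i in range(n):
        total += i * i
    out[idx] = total

results = [0, 0]
threads = [threading.Thread(target=busy_sum, args=(200_000, results, i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both threads finish with the right answer, but wall-clock time is no
# better than two sequential calls -- which is exactly the gap that
# "AI-first" languages like Bend and Mojo aim to close on GPUs.
print(results)
```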
🚏⛽ 63% of Americans live within 2 miles of a public EV charger
A new survey has found that about six in ten Americans live within two miles of an electric vehicle charger, and 95% of people live in a county with at least one public charger. Overall, the number of public chargers in the US has more than doubled since 2020, rising from 29,000 to 61,000. The other key finding was that people who live closer to EV chargers are more likely to consider buying an electric car and more supportive of phasing out gas car sales.
🚏🌐 A “free” VPN was actually a rent-a-proxy that cybercriminals loved
A company called 911 S5 offered free VPNs to Americans, but when unsuspecting consumers downloaded the software, their computers became internet relays sold in bulk to cybercriminals. A common use case was fraudsters routing their internet connections through computers located near the home addresses of stolen credit cards so they could make purchases without drawing attention. The US Treasury Department recently announced sanctions against Chinese nationals who allegedly ran the service, and the DOJ arrested one of the men.
🚏🪀 Anti-waste experts are encouraging retailers to avoid brightly-colored plastic
New studies have found that red, blue, and green plastics break down more quickly when exposed to UV light than black, white, or silver items. Given how prevalent microplastics have become (including within the human body!), these researchers urge retailers to avoid bright plastic packaging, often used to make packaged goods more eye-catching.
📖⏳ Worth your time
Some especially insightful pieces we’ve read, watched, and listened to recently.
Composability: Designing a Visual Programming Language (John Austin) — Examines the difficulties of making drag-and-drop programming languages, such as those for Unity. The most composable architecture for such a language is a graph that links inputs, computations, and outputs. To achieve maximal composability, you need your graphs to have plenty of “seams,” where any subset (or “cut”) of the graph is itself a valid program.
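The graph idea from the article can be made concrete with a toy of our own (not John Austin’s implementation): if each node is a pure function of its upstream nodes, then any connected subset — a “cut” — of the graph is itself a runnable program.

```python
# A minimal dataflow graph: nodes link inputs, computations, and outputs.
# Because every node only depends on its upstream nodes, any cut of the
# graph (here, the `add` subgraph) is itself a valid program.

class Node:
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def run(self):
        """Evaluate upstream nodes, then apply this node's computation."""
        return self.fn(*(n.run() for n in self.inputs))

# Full graph: (2 + 3) * 10
two   = Node(lambda: 2)
three = Node(lambda: 3)
add   = Node(lambda a, b: a + b, two, three)
scale = Node(lambda x: x * 10, add)

print(scale.run())  # 50
print(add.run())    # 5 -- a "cut" of the graph still runs on its own
```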
The Terrifying Real Science of Avalanches (Veritasium) — A look into the many fields of science that come together to help predict and avoid avalanches. There’s stratigraphy from archeology (studying the different layers of the snowpack), the thermodynamics of melting and refreezing snow, and the physics of slipping and friction. This field is also a literal analogy for common systems-thinking concepts, like shearing layers, phase transitions, and metastable systems.
How to Write Email With Military Precision (Harvard Business Review) — Describes some communication techniques that militaries use to convey the maximum amount of information in the least space: putting the bottom line up front (“BLUF”) to summarize the who/what/when/where/why, tagging email subjects with keywords for easier scanning (“ACTION” for action required, “INFO” for no response needed, “DECISION” for requesting permission, etc.), and fitting emails onto one screen so the reader doesn’t have to scroll.
What Monks Know About Focus (Joel J. Miller) — It has never been easy to battle distraction, even for medieval monks over a millennium ago, as described by a recent translation of monk John Cassian's writings (circa 400 CE). Cassian struggled against his meandering mind during his prayers, and suggested that immersive long-term engagement with a text deepens our understanding of what we read by the changes wrought in ourselves through the very process of reading.
🔍📆 Lens of the week
Introducing new ways to see the world and new tools to add to your mental arsenal.
This week’s lens: theory of change.
Suppose your team has a great idea for increasing product usage. It’s cool, users like it, and the implementation is slick and easy to use… and yet, nothing changes. The metrics stay stubbornly where they are. What happened? How did this great idea with excellent execution fail so utterly?
We may have failed to think through the theory of change associated with the idea. A theory of change connects an idea with the impact that it’s meant to have, breaking it down step by step: if this, then that. If that, then the goal. It forces us to outline our assumptions about how our idea will impact the system it’s a part of.
Articulating a theory of change is especially valuable when there are many good ideas without a clear focus. We often confuse the attractiveness of an idea with its effectiveness. Going along with an idea that feels like a positive change can be tempting. However, even if it’s positive, it can be useless if it doesn’t impact the right goals. Improving your hair care routine might be a good idea, but if your goal is to improve your dental health, it’s not an effective one.
Although any given theory of change is likely wrong when confronted with reality, articulating one surfaces assumptions and provides opportunities to mitigate risks or, if needed, pivot entirely. If we continually update our theories of change in the face of new facts, that value compounds.
© 2024 The FLUX Collective. All rights reserved. Questions? Contact flux-collective@googlegroups.com.