Episode 152 — June 27th, 2024 — Available at read.fluxcollective.org/p/152
Contributors to this issue: Neel Mehta, Boris Smus, Dimitri Glazkov, Erika Rice Scherpelz, MK
Additional insights from: Ade Oshineye, Ben Mathes, Justin Quimby, Alex Komoroske, Robinson Eaton, Spencer Pitman, Julka Almquist, Scott Schaffter, Lisie Lillianfeld, Samuel Arbesman, Dart Lindsley, Jon Lebensold, Melanie Kahl, Kamran Hakiman, Chris Butler
We’re a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we’ve noticed in recent weeks.
“Genius — the emergence of a truly remarkable and memorable work — seems to appear when a thing is perfectly suited to its context. When something works, it strikes us as not just being a clever adaptation, but as emotionally resonant as well. When the right thing is in the right place, we are moved.”
— David Byrne
📝 Editor’s note: We’ll be off next week for the USA's Independence Day. We’ll see you again the week after!
🧼➰ Epistemic hygiene tripwires
Epistemic hygiene, like many complex concepts, builds on a fairly simple idea. Our mind is a massive tangle of mental models. When faced with new information, we may or may not choose to add it to our mental model museum. Epistemic hygiene is the discipline of making those choices in a way that leads to seeing the world more clearly. Good epistemic hygiene reduces self-delusion.
In the age of frictionless information consumption and production, epistemic hygiene has become as important as physical hygiene. What we put inside our minds makes up our Selves.
Good epistemic hygiene is rooted in the recognition that what feels good in the moment is not necessarily good in the long term. That which satisfies our present anxieties about uncertainty may become a self-reinforcing mental trap. Epistemic hygiene anticipates that mental habits form quickly, and in the deluge flung at us by the information superhighway, we are constantly bombarded by the seeds of bad mental habits.
A practitioner of epistemic hygiene may feel alone in their pursuit: why is everyone else making such dubious choices? In a not-so-weird twist, it turns out that we don’t see our own choices as well as we see those of others. Our fellow aspiring practitioners of epistemic hygiene see us making dubious choices and throw up their hands: “Do you even epistemic hygiene, bro?” Self-delusion is real, and one big part of good epistemic hygiene is accepting that we are likely subject to it. Any feedback from the outside is best seen as a gift (even if it feels awful to receive).
Without such feedback, we can develop tripwires: known moments of unproductive mental habits that, when spotted, lead us to recognize that we’re consuming something that is working against us in the long term.
For example, schadenfreude is a very useful tripwire. Once we recognize that feeling of joy at someone else’s sorrow, let the alarms go off in the epistemic hygiene control room. Schadenfreude typically stems from agency-taking beliefs. When we experience it, we should examine how we reduced complex, nuanced individuals to two-dimensional cartoons.
Another epistemic tripwire is its near opposite: presuming that people have the same thoughts and beliefs as us. In a bizarre (but oh-so-common) rendition of theory of mind, we can imagine other individuals as exact copies of ourselves. The inevitable discord between our predictions and their actions is a good place for a tripwire. We might have deluded ourselves into believing that we know what others are thinking.
Developing a reliable set of epistemic hygiene tripwires can ground habits that work for us. And hopefully, sharing our tripwires and helping others spot their own tripwire moments can make this practice of epistemic hygiene a bit less lonely.
🛣️🚩 Signposts
Clues that point to where our changing world might lead us.
🚏🪷 India wants to merge its river systems, but that could make droughts worse
India often suffers from simultaneous droughts and floods in different parts of the country, so the government is undertaking a massive scheme to build dozens of canals to connect separate river systems, thus letting the country move water around as needed and merge the country’s watersheds into “a mega water grid.” But, on top of the giant $168 billion price tag and estimated half-million people that this project will displace, experts warn of potential unintended consequences: moving so much water could disrupt the seasonal monsoon, making some dry regions even drier and thus increasing water stress.
🚏🔮 An open-source ChatGPT clone lets you plug in any LLM you want
A popular open-source library called LibreChat promises to let you enjoy all the features of ChatGPT — including “streaming” text responses, speech-to-text, multimodal chats, and image generation plugins — without paying for ChatGPT’s premium tier or being locked into OpenAI’s GPT model. The free tool can be self-hosted or run on cloud hosting providers, and it lets you plug in a wide range of cloud (e.g. Anthropic’s Claude) or local (e.g. Llama) LLMs.
🚏🦜 Hawaii is releasing millions of mosquitoes to protect rare birds from malaria
Hawaii’s endemic honeycreeper birds are both rare and endangered, and they have no natural immunity to malaria. This has become a problem as the climate has warmed, pushing mosquitoes up toward the higher elevations where honeycreepers live. The state is thus employing a classic strategy to protect the birds: controlling mosquito reproduction by releasing millions of male mosquitoes from helicopters. These males carry a naturally occurring strain of bacteria that prevents the females they mate with from laying viable eggs.
🚏🥓 McDonald’s is shutting down its AI drive-through ordering system
McDonald’s has been experimenting with automated drive-through ordering systems, where customers can chat with a bot to place fast food orders. But the AI has been lampooned for mishearing customer orders; videos of flubbed orders have gone viral, featuring such oddities as “a handful of butter, hundreds of chicken nuggets, and ice cream loaded with bacon.” Perhaps in response to this, McDonald’s has announced it will be ending its trial of the AI technology in July.
📖⏳ Worth your time
Some especially insightful pieces we’ve read, watched, and listened to recently.
“What Have We Already Tried?” Is the Most Powerful Product Question You Can Ask (UX Collective) — Observes that, for product designers, most of your ideas (dashboards, short videos, etc.) have probably been tried before, either inside or outside your company. There’s plenty of data out there to learn from (“outputs are never scarce”), so pause and study it all first before you try to build a new prototype to test out.
The HMEC Principle: Finding the Sweet Spot for Generative AI (Chris Gorgolewski) — Argues that generative AI is most helpful when assisting humans by solving problems whose solutions are “Hard to Make, but Easy to Check” (HMEC). For instance, AI is useful for designing websites since you can easily judge if the output looks nice, but it’s bad at giving medical advice because there’s no way for a lay person to know if it’s telling the truth.
As We May Code (NSHipster) — Sketches out the concept of a “semantic web” for code: what if all libraries and functions, in any programming language, had a shared language for inputs, outputs, and dependencies? This could greatly improve code search, interactive documentation, dependency management, and even AI coding assistants.
Cape to Cairo — By Trains (Neil Shaw) — A delightful travelog of one man’s solo trip from South Africa to Egypt on the rails. The best part is his ability to talk to and learn from the many kinds of people (from Portuguese-speaking retired mineworkers to Chinese exchange students) he meets along the way.
🔍📆 Lens of the week
Introducing new ways to see the world and new tools to add to your mental arsenal.
This week’s lens: the adjacent possible mirage.
In the realm of social change, a better tomorrow always seems to be just around the corner. This belief fuels enthusiasm and a willingness to tackle the unknown. However, as we start implementing our vision, unexpected barriers inevitably arise. What once seemed attainable now appears elusive, leading to disillusionment and cynicism. Welcome to the adjacent possible mirage.
The adjacent possible refers to the idea that there is a limited set of moves we can make from where we currently stand. The adjacent possible mirage occurs when our beliefs about what is possible deviate from reality. Unlike outright self-deception, the mirage appears legitimately within reach but contains hidden complexities and constraints that only become visible during implementation. The adjacent possible mirage often involves asymptotic dynamics: the goal seems within reach, but no matter how much effort we put into reaching it, it remains about the same distance away (Zeno’s goal, perhaps?).
Adjacent possible mirages are very common in technological progress. The “I want my jetpack” TV trope captures the sentiment well: how many times have we pinned our hopes on some technological breakthrough, only to realize that further work is needed to turn that breakthrough into a tangible product in our hands — and when it finally arrives, it’s rarely what we initially imagined?
To navigate spaces rich with adjacent possible mirages, we need to critically evaluate our goals. Do we truly understand the problem? Do we have the resources, knowledge, and capability to achieve our goals?
Recognizing the mirage shouldn’t deter us from pushing for change. Instead, it helps us plan more effectively and avoid the frustration of getting stuck in the messy middle between dream and realization.
© 2024 The FLUX Collective. All rights reserved. Questions? Contact flux-collective@googlegroups.com.