Episode 165 – October 10th, 2024 – Available at read.fluxcollective.org/p/165
Contributors to this issue: Dart Lindsley, Erika Rice Scherpelz, Spencer Pitman, Justin Quimby, Ade Oshineye, Wesley Beary, Boris Smus, Jasen Robillard, Neel Mehta, MK
Additional insights from: Ben Mathes, Dimitri Glazkov, Alex Komoroske, Robinson Eaton, Julka Almquist, Scott Schaffter, Lisie Lillianfeld, Samuel Arbesman, Jon Lebensold, Melanie Kahl, Kamran Hakiman, Chris Butler
We're a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we've noticed in recent weeks.
"Wanderer, there is no road. The road is made by walking."
– Antonio Machado
Lost in confusion
In the early stages of the COVID-19 pandemic, many leaders believed the problem could be solved through expert analysis and well-defined processes. They treated the situation as complicated, to use the term from the Cynefin framework. However, much of reality was more complex, with evolving variables, unpredictable human behavior, tribal identification, and uncertain outcomes. This misclassification led to delayed, ineffectual responses and inadequate measures as leaders struggled to adapt to a situation that defied easy categorization.
We were in the realm of confusion, which lies in the center of the Cynefin framework. In this space, clarity is absent, and it's unclear which of the other Cynefin domains applies (complex, complicated, chaotic, or clear). This domain embodies uncertainty about uncertainty itself.
One of the most significant risks from the confusion domain is misclassification. Treating the complex phenomenon of the COVID-19 pandemic as complicated is a prime example. When we misclassify a complex problem as complicated, we treat it as one that can be solved with data and engineering rather than as a situation requiring high adaptability, experimentation, and learning in the face of rapid change.
Misclassification can occur in various scenarios. You might believe you're dealing with a clear situation where cause and effect are well understood, but the reality could be far more complicated, complex, or even chaotic. Conversely, treating something as complex when it's clear can lead to massively overwrought solutions for simple problems. Getting the classification wrong can result in missed opportunities to find more effective approaches to the issues in front of us.
The confusion domain challenges us to confront the unsettling reality that our assumptions and cognitive biases can (and more often than not, will) lead us astray. It challenges us to consider not just where we are, but also how we might be wrong about where we are. This layer of self-reflection, this meta-classification, adds a new dimension to the Cynefin framework, encouraging careful examination of our initial conclusions.
The confusion domain reminds us that sometimes the first step to clarity is acknowledging that we might not yet (or ever) fully understand the situation, and that's okay. By staying open to the possibility of being wrong, we can better adapt, learn, and ultimately find our way through.
Signposts
Clues that point to where our changing world might lead us.
Almost 3 million Americans have now voted early
Early voting (both mail-in voting and early in-person voting) is in full swing for the US's November 2024 elections, and the excellent University of Florida Election Lab has found that over 2.9 million voters have already cast an early ballot. That's fewer than the 9 million people who had voted early by this point in the 2020 election, but much of that was COVID-induced mail-in voting. The long-term trend appears strong, though: since the 1970s, more and more people have been voting early.
AI bots will let you auto-apply to thousands of jobs on LinkedIn
A popular new Python script will automatically crawl LinkedIn's job board for you and use an LLM to submit personalized job applications, complete with auto-written resumes and cover letters. Users have reported applying to hundreds of jobs an hour, with one user sending over 2,800 submissions, and they report far more success landing interviews than when applying manually. The script's creator says it's a way to fight back against companies that use AI to screen applications (indeed, this would result in AIs talking to other AIs, with no humans in the loop).
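The mechanics are easy to sketch, which is part of why this arms race escalates so quickly. Below is a deliberately minimal Python outline of the loop such a bot runs; every function name and stub body here is a hypothetical placeholder for the real script's browser automation and LLM calls, not its actual code.

```python
# A minimal, hypothetical sketch of the auto-apply loop (not the actual script).
# Each helper is a stub standing in for browser automation or an LLM call.

def fetch_matching_jobs(keywords):
    # Stub: the real script crawls a job board; here we fabricate a few postings.
    return [{"id": i, "title": f"{kw} engineer"} for i, kw in enumerate(keywords)]

def draft_application(job, profile):
    # Stub: the real script prompts an LLM to tailor a resume and cover letter.
    return {"resume": f"{profile['name']}, applying for {job['title']}",
            "cover_letter": f"Dear hiring team, I'd love to work on {job['title']}..."}

def submit_application(job, application):
    # Stub: the real script fills in and submits the posting's application form.
    print(f"Submitted application for job {job['id']}: {job['title']}")

def auto_apply(keywords, profile, limit=100):
    """Crawl postings, tailor an application to each with an LLM, and submit it."""
    submitted = 0
    for job in fetch_matching_jobs(keywords):
        if submitted >= limit:
            break
        submit_application(job, draft_application(job, profile))
        submitted += 1
    return submitted

if __name__ == "__main__":
    auto_apply(["backend", "machine learning"], {"name": "Jane Doe"}, limit=5)
```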
Chinese hackers used a backdoor to hack major American ISPs
A team of hackers backed by the Chinese government recently breached three large US-based broadband providers, and experts think they exploited "backdoors" that were originally built into the encryption software to enable (legal) wiretapping requests by law enforcement. As the outlet 9to5Mac put it, "The moment you build in a backdoor for use by governments, it will only be a matter of time before hackers figure it out… You cannot have an encryption system which is only a little bit insecure any more than you can be a little bit pregnant."
The White House made a Reddit account to talk about hurricanes amidst misinformation
The President's office has started posting on Reddit under its official "whitehouse" account; it started threads on the r/NorthCarolina and r/Georgia forums to discuss the federal response to hurricanes Helene and Milton. It's an unconventional way for the government to spread its message, but it's an understandable reaction to the misinformation and conspiracy theories that've been swirling online about FEMA, the agency that coordinates disaster response.
Worth your time
Some especially insightful pieces we've read, watched, and listened to recently.
Why A.I. Isn't Going to Make Art (New Yorker) – Science fiction author Ted Chiang (also known for his essay "ChatGPT Is a Blurry JPEG of the Web") poses the question: can you have creativity without an inner life? His implicit answer is an emphatic no, but that does not mean that new AI-based tools won't empower artists to reach new heights.
Magicians Wouldn't Be Engineers (Bret Devereaux) – The historian behind the popular blog ACOUP disagrees with the common idea that, if magic were real, its rules would be fully determined and systematized, and its practitioners would effectively be scientists. For most of human history, "physics itself was a 'soft' magic system": people knew what worked but had no idea how it worked. Consider the medieval blacksmith who had no idea how metals' atomic structures worked, but could forge a sword anyway with "craft knowledge." This pattern is reminiscent of James C. Scott's techne and metis.
How to Get Rich in Tech, Guaranteed (Startup L. Jackson) – Observes that Big Tech is a reliable way to get rich, but "startups are the only way to get 20 years of experience in five." The author's advice is simple: find a startup run by high-integrity, smart, and hard-working people with a compatible culture, and sprint toward the milestone the company needs to get to the next round!
Shitposting, Shit-Mining and Shit-Farming (Programmable Mutter) – Argues that a little bit of shitposting (silly snarky posts) helps social media platforms, but if platforms like Twitter/X let such low-quality posts take over, they quickly decay into "shit-mining" (trying to monetize data, like when Twitter shut off its public API) and "shit-farming" (finding people who like being fed junk content and selling their attention to grifters).
Book for your shelf
A book that will help you dip your toes into systems thinking or explore its broader applications.
This week, we recommend Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth O. Stanley and Joel Lehman (2015, 104 pages).
In this instant classic, the authors challenge the belief that ambitious objectives are the key to innovation. Instead, they argue that true breakthroughs come from exploring indirect paths, what they call "stepping stones." The book pushes against conventional wisdom, suggesting that the pursuit of clear objectives and plotting a direct path forward often blinds us to unexpected opportunities along the way.
The key insight here is the idea of "novelty search," which values curiosity and exploration over rigid goal-setting. Innovation, they argue, thrives on detours and local progress, and using objectives as benchmarks for success can cause us to miss the necessary deviations that lead to real breakthroughs. By focusing too narrowly on reaching an end goal, we risk overlooking the value of wandering, where the most transformative discoveries often occur.
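For readers who want to see the idea concretely, here is a toy Python sketch of a novelty-search loop, assuming a made-up two-dimensional "behavior" space; it is our own illustration under those assumptions, not the authors' implementation. Candidates are selected for how different their behavior is from an archive of behaviors already seen, rather than for progress toward any objective.

```python
# A toy sketch of novelty search (our illustration, not the authors' code).
# Candidates are ranked by how novel their behavior is relative to an archive
# of behaviors already seen, not by how close they get to any goal.

import math
import random

def behavior(candidate):
    # Hypothetical behavior descriptor: where the candidate lands in a 2D space.
    return candidate

def novelty(candidate, archive, k=5):
    # Novelty = mean distance to the k nearest behaviors already in the archive.
    if not archive:
        return float("inf")
    dists = sorted(math.dist(behavior(candidate), b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=50, pop_size=20):
    archive = []  # behaviors we have already explored
    population = [(random.uniform(-1, 1), random.uniform(-1, 1))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select for novelty, not for fitness against an objective.
        population.sort(key=lambda c: novelty(c, archive), reverse=True)
        survivors = population[: pop_size // 2]
        archive.extend(behavior(c) for c in survivors)
        # Mutate survivors to produce the next generation of explorers.
        population = [(x + random.gauss(0, 0.1), y + random.gauss(0, 0.1))
                      for x, y in survivors for _ in range(2)]
    return archive

if __name__ == "__main__":
    explored = novelty_search()
    print(f"Archived {len(explored)} behaviors spread across the space")
```

Swap the novelty ranking for a fixed objective and the same loop collapses into ordinary hill-climbing, which is exactly the trade-off the book asks us to notice.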
Although generative AI hadn't yet gone mainstream when this book was published in 2015, it now illustrates exactly the tension the authors describe. AI accelerates the process of exploring ideas and surfacing novel connections, enabling faster, deeper exploration. But as the book warns, optimizing for clear objectives can sometimes skip the meandering, trial-and-error processes that lead to serendipitous discoveries. The challenge today is not whether AI is creative, but how we balance its efficiency with the kind of open-ended exploration that drives human innovation: the very wandering that Stanley and Lehman highlight as crucial for uncovering hidden stepping stones.
This book is a worthwhile read because it invites us to rethink how we approach innovation, encouraging openness to exploration over rigid goals, a lesson that feels especially relevant as AI reshapes how we navigate creativity and discovery.
© 2024 The FLUX Collective. All rights reserved. Questions? Contact flux-collective@googlegroups.com.