🌀🗞 The FLUX Review, Ep. 74
November 3rd, 2022
Available at read.fluxcollective.org/p/74
Contributors to this issue: Neel Mehta, Ben Mathes, Boris Smus, Erika Rice Scherpelz, Dimitri Glazkov
Additional insights from: Ade Oshineye, Gordon Brander, a.r. Routh, Stefano Mazzocchi, Justin Quimby, Alex Komoroske, Robinson Eaton, Spencer Pitman, Julka Almquist, Scott Schaffter, Lisie Lillianfeld, Samuel Arbesman, Dart Lindsley, Jon Lebensold
We’re a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we’ve noticed in recent weeks.
“For every complex problem there is an answer that is clear, simple and wrong.”
― H. L. Mencken
🛠️🤯 In search of mental model tooling breakthrough
Whether we want it or not, models permeate our thinking. Our ability to make predictions depends on the existence of a model in our mind. We model the world so that our thoughts can die instead of us. From simple sentence completion (“two plus two is …”) to elaborate ecosystem dynamics (“given these factors, the most likely future of this industry is …”) to those “automatic response” moments when we swerve our bicycle wildly just in time to avoid an obstacle, our mental models help us make sense of our world. They help us navigate… and they also constrain us.
The quality of our mental models is paramount to how we show up in the world. Our mental models of how events will play out help us avoid the futures we don’t want to be in. High-quality mental models let us navigate futures that lead to positive outcomes. Low-quality ones leave us in despair.
Teaching and learning allow us to exchange mental models, and individual development helps us evolve them. Mental models adapt and change as they are exchanged. Good ones help people thrive; bad ones lead to pain and hardship. In theory, at least, the good ones should gain popularity. These mechanisms are how mental models spread and evolve: variation, heritability, and selection.
Each individual’s story enriches and refines the mental models they come into contact with. Mental model improvement is a collective process: the breadth and depth of individual experiences is key to their quality.
One of today’s challenges stems from the fact that high-quality mental models are complex. They are subtle and nuanced, making them difficult to convey quickly and clearly. They hide in the folds of tacit knowledge, acquired through lifetimes of experiences. As the high-quality models grow even more complex, it becomes more difficult for them to spread; how many of us read the original Hegel compared to hearing a short clip of Slavoj Žižek talking about Hegel (or even just reading a tweet about it)?
As information flows readily in the frictionless interconnectedness of the Internet, the effortful process of high-quality mental model acquisition does not seem to be following suit; if anything, it gets harder to spot a robust mental model and easier to become distracted by the cacophony of memes. Our attention perennially exhausted, what chance do we have of grasping the complexity that surrounds us? This feeling is starting to show up in our art.
It might just be the case that humanity needs a breakthrough to overcome this challenge. What new technology — social or physical — will help us increase our collective capacity to share and enrich high-quality mental models? What tools might we need to do this? Will it be at the language layer, where a new lingua franca will emerge to help us communicate our sense-making more precisely? Will it be at the epistemological layer, where a collection of universal models will uplevel our thinking? Will it be at the technological layer, where a new form of medium selects for rich understanding?
We do not yet know. But understanding the nature of the challenge is a good first step. Perhaps you, dear reader, will take the next one.
🛣️🚩 Signals
Clues that point to where our changing world might lead us.
🚏🇨🇦 Canada is setting a target of 500,000 immigrants per year
In response to a labor shortage that is seeing a million jobs sit unfilled, the Canadian government has unveiled a plan to welcome 500,000 new immigrants each year starting in 2025. That’s a significant increase from the 405,000 permanent residents admitted last year, and a large figure for a modestly sized country: 500,000 people represent 1.3% of Canada’s entire population.
🚏🥤 Scientists can use fluid dynamics to identify deepfaked audio
AI-generated audio clips can sound almost identical to real human speech, but one group of scientists used tools from fluid dynamics to devise a method for telling “deep fakes” apart from authentic speech clips. Their algorithm listens to speech and sketches out the approximate shape of the speaker’s vocal tract. Real human speech reflects the unusual geometry of the human mouth, but deepfaked audio sounds like it was produced by an unrealistic vocal tract shaped like a drinking straw.
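The researchers’ actual pipeline isn’t reproduced here, but a standard building block for estimating vocal-tract resonances from audio is linear predictive coding (LPC): fit an all-pole filter to the speech and read the resonance (“formant”) frequencies off the filter’s poles. Below is a minimal numpy-only sketch under that assumption, using a synthetic two-resonance signal in place of real speech; all parameters are illustrative, not taken from the paper.

```python
import numpy as np

def lpc(x, order):
    """LPC coefficients via the autocorrelation method (Levinson-Durbin)."""
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / err                      # reflection coefficient
        a_next = a.copy()
        a_next[1:i] = a[1:i] + k * a[i - 1 : 0 : -1]
        a_next[i] = k
        a = a_next
        err *= 1.0 - k * k
    return a

def formants(a, fs):
    """Resonance frequencies in Hz: angles of the upper-half-plane roots of A(z)."""
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 1e-6]
    return np.sort(np.angle(roots) * fs / (2.0 * np.pi))

# Synthesize a toy "vowel": white noise through two known resonances.
fs = 8000
true_formants = (700.0, 1800.0)
poles = []
for f in true_formants:
    z = 0.97 * np.exp(2j * np.pi * f / fs)
    poles += [z, np.conj(z)]
a_true = np.real(np.poly(poles))  # denominator of the all-pole filter

rng = np.random.default_rng(0)
x = rng.standard_normal(16000)
y = np.zeros_like(x)
for n in range(len(x)):  # direct-form IIR filtering, numpy only
    y[n] = x[n] - sum(a_true[k] * y[n - k] for k in range(1, len(a_true)) if n >= k)

est = formants(lpc(y, 4), fs)  # model order matches the two resonances
```

A detector along the lines the scientists describe would then check whether the resonance pattern implied by a clip is consistent with a plausible human vocal tract, rather than the degenerate, straw-like geometry that deepfakes tend to imply.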
🚏🛢 Some universities are shutting down their petroleum engineering programs
The increasingly poor reputation of oil and gas is jeopardizing the future of that industry as students steer clear of petroleum-related career tracks. Over the last five years, the number of students getting petroleum engineering degrees has crashed 83% (from 2,300 per year to just 400), which has led several colleges — including the University of Calgary and Imperial College London — to suspend their petroleum engineering majors.
🚏🗽 New York employers are offering comical salary ranges in response to a new law
A new law in New York City requires businesses with at least four employees to post minimum and maximum salaries as part of any job listing. This is an admirable attempt to increase salary transparency, but some employers are offering unhelpfully wide salary bands, possibly in an attempt to game the law. One reporter job would reportedly pay anywhere from $50,000 to $180,000, and one job at the Wall Street Journal ranged from $140,000 to $450,000. Citi even posted a range of $0 to $2 million for one of its banking jobs, though the company later said this was a clerical error.
🚏🛑 The seller of .coin crypto-domains is disabling their functionality
Domain names (like the once-omnipresent “.eth” in Twitter handles) are popular in the crypto community; rather than pointing to a website, they’re more like mnemonics for long-winded crypto wallet addresses. The company Unstoppable Domains sold “.coin” domains as NFTs, but after realizing that another company had been selling .coin domains for years, they decided to stop selling the NFTs and shut down their libraries that converted domain names to addresses. Unstoppable Domains says that their .coin domains are “self-custodied,” so users will be able to keep their NFTs, though the tokens won’t have much utility without the resolving tools. One prominent crypto skeptic pointed out that this is a major problem with NFTs: even though nobody can take your token away from you, they can easily withdraw support for it, leaving it as good as useless.
📖⏳ Worth your time
Some especially insightful pieces we’ve read, watched, and listened to recently.
The Upside-Down Painting (Jason Zweig) — Shares a story from a modern art class in college, where the professor developed a sweeping interpretation of a painting that was actually projected upside-down. When the author (then a student) pointed this out, the professor sneered, “you are correct, but that is irrelevant to my analysis.” Looking back, the author sees this as his first brush with “cognitive dissonance, the sunk-cost fallacy, escalation of commitment, herding behavior and the blatant inconsistencies of human cognition.”
One Reason Mergers Fail: The Two Cultures Aren’t Compatible (Harvard Business Review) — Explores a common failure case for corporate mergers: when a company with a “tight” culture (which values rules, order, process, efficiency, and predictability) merges with a company with a “loose” culture (which values flexibility, innovation, decentralization, and creativity).
An Early Advantage (Higher Ground) — Examines the peculiar Montessori-school technique of having young children learn to put on a coat by placing it upside-down on the floor, inserting their arms, and flipping it over their head. Argues that it’s more than just a handy technique for putting on a jacket: it builds children’s confidence and teaches them that cleverness can help them solve difficult problems.
Other People’s Problems (Camille Fournier) — Emphasizes the importance of picking your battles in a corporate setting and underscores just how difficult it is to solve business problems, especially if they have a cultural element. Shares a five-step method for approaching problems and cautions us that “there’s always going to be something you can’t fix.”
The Street Type That Breaks the Hierarchy (City Beautiful) — In the spirit of “a city is not a tree,” a city planner argues that the classic but strict road hierarchy (local street, collector, arterial, highway) leads to suburban sprawl and “stroads.” Boulevards, which mix attributes of several road types, break this system in a good way, creating a backbone for an urban neighborhood that’s still comfortable to walk or bike along.
🕵️‍♀️📆 Lens of the week
Introducing new ways to see the world and new tools to add to your mental arsenal.
This week’s lens: Pace Layers.
If we’re building a mobile application, we might be willing to follow the hottest design trends. On the other hand, we probably don’t want to invest in building an application for a new mobile OS that doesn’t have a lot of users yet.
How does one make these judgments? One useful lens is to think about the domain in which our problem resides in terms of pace layering. The idea of pace layers is that, in any sufficiently robust system, there is a spectrum of structures ranging from fast and adaptable to slow and stable. Think of these as a series of layers, each of which moves at a different pace. The layers arise because it is challenging for any single structure to be both adaptable and stable.
By thinking of these functions as residing in separate layers, we can start to see how they influence and depend on each other. Fast-adapting layers act as buffers to help absorb change, only letting changes filter down to lower layers if they stand the test of time and scale — imagine if governments had to react to every new trend in clothing. Slow, stable layers act as a reliable foundation upon which fast layers can build — imagine if fashion designers had to figure out how to weave fabric from scratch each time they came up with a new design.
Though the graphic below applies this lens to society, the pattern works in many domains, including technology and biological systems. One thing this example shows is that “fast” and “slow” are relative attributes, not absolutes. Related to this is the observation that these layers are not crisply delineated in practice. Pace layers are better understood as a description of how things relate to each other rather than a generally applicable ontology.
We have found that once we learned about pace layers, we started seeing them everywhere. The lens provides helpful framing in situations where the tension between stability and adaptability is particularly strong.
🔮📬 Postcard from the future
A ‘what if’ piece of speculative fiction about a possible future that could result from the systemic forces changing our world.
// Content tag: generative AI, government regulations
// November 2050. Twenty years have passed since the Requiring AI Consent of Humans to Expand Legibility (RAICHEL) Act.
Farhad had been working on his startup for years. His app auto-generated financial reports for his clients: junior analysts on Wall Street. They liked the idea, since the app would generate charts and text analyses for them, making them look good to their bosses. It did their job for them.
There was one big problem Farhad couldn’t solve, though. The text was never good enough. The charts were never good enough. None of his junior analyst clients stayed customers for long.
The worst part was that Farhad knew this was possible. It had already been done by Bloomberg in 2025! Many didn’t know, but Bloomberg started auto-generating all their financial industry charts, news, and analysis behind the scenes using GPT-3 and a variant of Stable Diffusion that only outputted data charts.
But there was a really big catch: in 2030, the RAICHEL Act made it illegal to use an AI model without getting explicit consent from everyone who made content that the model trained on. The Act was named after a woman whose private adult photos kept showing up in AI-generated images.
So all the generative AI models that had been huge in the 2020s? All those ones that were trained on, basically, the whole internet? They were illegal. Well, unless someone wanted to go back and email every website creator, every photographer, and so on, to get their consent retroactively… and since there was no record of who to even contact, let alone getting them all to respond, all those models were black market now. You had to pay in crypto to get access to them, and adding $200 in ETH gas fees on top of every single invocation of the model made it stupidly expensive to run. Hence the finance clients Farhad was targeting.
Sure, you could try to build a prototype illegally with the old black-market models, but if you ever got to $1M in revenue you had to register any use of machine learning with the FTC and get audited. There had been some nasty lawsuits. Companies had gotten shut down.
Farhad was hoping he could figure out a new way to do it by getting all these junior analysts to give him their old charts and data, but it was a big risk. Farhad was beginning to fear that you couldn’t do it without all that data from the web.
© 2022 The FLUX Collective. All rights reserved. Questions? Contact email@example.com.