Episode 82 — January 12th, 2023 — Available at read.fluxcollective.org/p/82
Contributors to this issue: Neel Mehta, Boris Smus, Samuel Arbesman, Dimitri Glazkov, Scott Schaffter, Erika Rice Scherpelz, Ben Mathes
Additional insights from: Ade Oshineye, Gordon Brander, a.r. Routh, Stefano Mazzocchi, Justin Quimby, Alex Komoroske, Robinson Eaton, Spencer Pitman, Julka Almquist, Lisie Lillianfeld, Dart Lindsley, Jon Lebensold
We’re a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we’ve noticed in recent weeks.
“Death closes all: but something ere the end, / Some work of noble note, may yet be done”
— Alfred, Lord Tennyson
🤝🔁 A model of trust
Trust and trustworthiness: like massive tree crowns, these words cover a myriad of shifting meanings. One model that captures a large swath of these meanings was developed by Mayer et al. in the 1990s. It’s not perfect, but it can serve as a useful tool in our trust-related adventures, offering a proverbial stake in the ground.
The model is situated around the concept of a risk-taking relationship. In such a relationship, trust is a process, rather than a state. This process consists of a series of interactions between the participants. Interactions involve taking risks: with full understanding that the other party might not follow through, people still commit. The participants’ sense of mutual trustworthiness develops over a series of these interactions.
One insightful observation the authors make is that trust requires perceived risk. There must be something valuable at stake, something that could be lost if trust is breached. This observation can be rather clarifying. Consider a company that gives products away for free and finds itself struggling with low trust from its customers. It turns out that for trust to develop, the customer must be clear on the risk the company is taking. It might seem counterintuitive, but if we apply Mayer et al.’s reasoning, then charging for products might lead to more trusting customers.
Another useful insight comes from the three key factors that participants use to evaluate trustworthiness: ability, benevolence, and integrity.
Ability asks “can they do what they said they would do?” Usually, it’s a question of competence. No matter our eagerness, if we are simply incapable of delivering on our promises, trust will not develop.
Benevolence asks “are they in this relationship for mutual benefit?” Do both parties want to do good to the other participants? Do they lack ulterior, egocentric motives? Goodwill is a good word for this factor. As we engage in the risk-taking relationship, much as in Nicky Case’s now-famous game of trust, we gradually form our sense of the other’s benevolence.
Integrity asks “are there shared and acceptable principles that they adhere to?” Even if implicit and unspoken, a relationship has a social contract. Do both parties feel like they adhere to this contract? Do we understand its underlying principles? With each interaction in our risk-taking relationship, we form a better understanding of the rules of engagement – and our commitment to them.
Perhaps the most important insight of the model is that these factors are perceptions: they do not reflect what we think of ourselves in the relationship. They are opinions formed by the other participants based on their experiences with us. These perceptions can’t be changed quickly; we can only patiently engage in the risk-taking relationship with others and build trust over time.
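The dynamics above can be sketched in code. This is a toy model, not anything from Mayer et al.’s paper: the factor names (ability, benevolence, integrity) come from the model, but the update rule, the equal weighting, and the learning rate are illustrative assumptions we chose to show how perceptions shift slowly over repeated interactions.

```python
from dataclasses import dataclass


@dataclass
class PerceivedTrustworthiness:
    # Each factor is the trustor's perception, on a 0-to-1 scale --
    # not the trustee's self-assessment.
    ability: float = 0.5
    benevolence: float = 0.5
    integrity: float = 0.5

    def update(self, factor: str, outcome: float, rate: float = 0.1) -> None:
        """Nudge one perceived factor toward an observed outcome.

        The small learning rate reflects the claim that perceptions
        can't be changed quickly, only shifted over many interactions.
        """
        current = getattr(self, factor)
        setattr(self, factor, current + rate * (outcome - current))

    def overall(self) -> float:
        # Equal weighting is an assumption; in practice the mix of
        # factors is context-dependent.
        return (self.ability + self.benevolence + self.integrity) / 3


# Ten interactions in which the other party fully delivers on a promise
# (outcome = 1.0): perceived ability rises, but only gradually.
perception = PerceivedTrustworthiness()
for _ in range(10):
    perception.update("ability", outcome=1.0)
print(round(perception.ability, 3), round(perception.overall(), 3))
```

Note that even after ten flawless interactions, perceived ability has not reached 1.0, and the overall score has moved only modestly: trust accrues slowly, one risk-taking interaction at a time.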
Clues that point to where our changing world might lead us.
🚏🍁 Toronto may soon have more tech workers than Silicon Valley
The Toronto-Waterloo corridor in Ontario currently has around 314,000 tech jobs, compared to about 379,000 in the San Francisco Bay Area. But thanks to Canada’s speedier and more welcoming immigration policies, tech employment there is booming: by one estimate, Toronto-Waterloo’s tech sector is growing 350% faster than Silicon Valley’s. At that rate, it’ll overtake the Bay Area by “sometime in early 2023.”
🚏🧹 Robotic vacuum cleaners are helping fill hotels’ staffing shortfalls
The hotel industry is facing “unprecedented staffing challenges” post-pandemic, with an estimated 350,000 fewer people currently working in hotels than in February 2020. To ease the burden, some hotel owners are investing in robot vacuum cleaners; at $30,000 a pop, they’re pricey, but hotel owners are grateful. “If we vacuum every floor with a robot, that saves one whole shift” of a human employee, gushed one hotel’s managing director.
🚏☎️ A new AI chatbot can negotiate your bills and snag discounts for you
The CEO of a “robot lawyer” startup unveiled an AI chatbot that can talk to customer service for you, and in a demo, he had it get on a live chat with a Comcast rep to negotiate a discount on an internet bill. The bot complained about the internet service and threatened legal action, which prompted the representative to offer a $10 discount. The CEO said the bot is programmed to “be aggressive, citing laws and having an emotional appeal,” and he added that the company was working on making it less polite, since it apparently said “thank you” too often.
🚏🔋 An experimental battery lasts longer than lithium-ion, with 4x the capacity
Room-temperature sodium-sulfur (RT Na-S) batteries have been hailed as a promising new type of battery, given their cheap and easy-to-find materials. RT Na-S batteries have historically lacked the lifespan and storage capacity needed to be useful, but a team of researchers has announced a new type of RT Na-S battery that has four times the storage capacity of a standard lithium-ion battery, plus an “unprecedented” lifespan. Still, these new batteries have only been made on small scales so far.
🚏🎈 A geoengineering startup seems to be launching sulfur-filled balloons into the atmosphere
It’s theorized that you could cool the planet by releasing sulfur dioxide into the atmosphere, but the scientific community doesn’t know the side effects or long-term impacts of such a move, so researchers have almost entirely steered clear of even small-scale testing. But one startup — seemingly without approval or consultation with the public — says it’s started launching sulfur-bearing balloons into the atmosphere to force the issue and jump-start research into geoengineering. One political scientist warned, “a self-appointed protector of the planet… could force a lot of geoengineering on his own” with sulfur balloons, given how easy they are to deploy.
📖⏳ Worth your time
Some especially insightful pieces we’ve read, watched, and listened to recently.
AI Is Plundering the Imagination and Replacing It With a Slot Machine (The Bulletin) — Argues that art that uses computational methods (like art generated by code) is still a fundamentally human endeavor because the human had to grapple with the problem; the process is “full of uncertainty and trial and error, as the artist gropes around for a method that will produce the desired results.” Not so for AI-generated art, where the process is reduced to throwing words at a black box and seeing what is generated (quite like pulling a slot machine’s lever), which constrains the outputs to “the dulling sameness of a world of infinite but meaningless variety.”
When SimCity Got Serious: The Story of Maxis Business Simulations and SimRefinery (The Obscuritory) — Chronicles the misadventures of Maxis’s “professional version” of SimCity, which aimed to be an accurate modeling tool for companies that managed complex technical systems like refineries, military bases, hospitals, and power grids. The problem was that creating accurate mental models was often at odds with having fun. Case in point: SimRefinery, an oil refinery sim whose most fun feature was blowing up the plant.
Single-Threaded Leaders at Amazon (Pedro Del Gallego) — Describes the top-down, hyper-focused corporate structure used at Amazon, where the single leader of a business unit acts as the decision maker for all aspects of the product and works to keep everyone fanatically devoted to their plan. The problem with this strategy is that horizontal coordination (i.e. working across product areas) becomes extremely difficult as the company grows.
The Carrier Bag Theory of Fiction (Ursula K. Le Guin) — Referencing the “carrier bag” theory of evolution (which argues that a humble carrying bag was more important for cultural development than weapons), Le Guin questions the idea that the proper narrative is linear and involves conflict. Instead, Le Guin argues that the “reduction of narrative to conflict is absurd;” rather, the natural shape of a novel is as a bag of words carrying many meanings and describing many people. It’s far less prescriptive than the hero’s journey.
The Bronze Age Collapse — Mediterranean Apocalypse (Fall of Civilizations Podcast) — Argues that the simultaneous collapse of many civilizations in the eastern Mediterranean basin around the 1100s BCE, often attributed to the mysterious “Sea Peoples,” was likely due to systemic factors. Those include severe cooling due to a volcanic eruption, which spawned refugees who took to the sea; the rise of disruptive new military technology in the Iron Age; and the deeply-interconnected nature of these societies, which made them all crash in unison.
📚🌲 Book for your shelf
An evergreen book that will help you dip your toes into systems thinking.
This week, we recommend Finding Meaning in an Imperfect World by Iddo Landau (2017, 312 pages).
We live in a world obsessed with meaning. Is my work meaningful? Is my life full of meaning? Will what I do matter to anyone in the future? Or—to go full nihilist—is everything meaningless? We see these obsessions particularly clearly in the world of tech, whether it's a focus on putting a dent in the universe, how to best donate one's earnings, or the eschatological musings around artificial intelligence.
While all of these questions are good and important, Iddo Landau, a philosopher at the University of Haifa, wants us to know that we are thinking about meaning all wrong. In this book, Landau argues that we are using unreasonable—and often impossible—standards for meaning in one's life, standards that we will never live up to. Whether it's ensuring that our actions will positively influence all of humanity or that our achievements will echo down the generations, we will simply not be able to measure up to these goals. Landau counters this "perfectionist" tendency when it comes to meaning and shows that we need to take a much more humble and healthy approach. In particular, he equates meaning with value (what is valuable to us is that which provides meaning), showing that meaning in life is therefore not an all-or-nothing proposition but rather a spectrum, and that we should simply aim to live a life that contains that which is valuable to us.
Ranging across topics such as "Life in the Context of the Whole Universe" and "The Goal of Life," Landau's approach is a core entry into what might be termed modern wisdom literature. These are frameworks for grappling with our modern era, where we are embedded within numerous complex systems where the implications of our actions are sometimes murky, at best. Grounded in philosophy but very readable, this is definitely a book worth checking out.
© 2023 The FLUX Collective. All rights reserved. Questions? Contact email@example.com.