🌀🗞 The FLUX Review, Ep. 108
July 13th, 2023
Available at read.fluxcollective.org/p/108
Contributors to this issue: Neel Mehta, Boris Smus, Jon Lebensold, Erika Rice Scherpelz, Ben Mathes
Additional insights from: Ade Oshineye, Gordon Brander, Stefano Mazzocchi, Justin Quimby, Dimitri Glazkov, Alex Komoroske, Robinson Eaton, Spencer Pitman, Julka Almquist, Scott Schaffter, Lisie Lillianfeld, Samuel Arbesman, Dart Lindsley
We’re a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we’ve noticed in recent weeks.
“People often think that the best way to predict the future is by collecting as much data as possible before making a decision. But this… is like driving a car looking only at the rearview mirror — because data is only available about the past.”
— Clayton Christensen
📉💥 Stumbling towards manageable uncertainty
What are the risks and benefits of generative AI? We don’t know yet. We have predictions ranging from the tolerable to the extreme. Generative AI might require moderate adaptations as old systems adjust to new ways of creating content. It may make copyright meaningless. Or it may bring the end of humanity as we know it.
These sorts of uncertainties don’t fall neatly into the ways we naturally think about risk. They are not well-understood games of dice. When we don’t understand the problem itself, we can’t precisely know how to address its risks. Managing bad outcomes necessarily requires studying past failures, but emergent risks change on us faster than we can gather data and model them. An exceptional circumstance from a few months ago may already be too stale to provide insight.
For emergent risks, every approach to risk management seems rather ham-fisted. For example, we could apply blunt rules like waiting for a provably safe AI before allowing it to be used by the public. But maybe that’s too restrictive: we could cull potentially good futures. Alternatively, we could try a hands-off approach like open-sourcing powerful models. But maybe that’s too rash: we could see auto-generated adult content of people without their consent, or massive amounts of generated spam gumming up public discourse even more.
To understand emergent risks, it helps to think about how risks evolve over time. There’s a three-step pattern that many risks tend to follow: from emergent to controlled to resolved.
An emergent risk is one we don’t yet understand well enough to manage. If we understand a problem but cannot fully solve it, we might call it a controlled risk. Driving and food safety are controlled risks: we tolerate some amount of bad outcomes and rely on a combination of regulations and normative behaviors to create an acceptable environment.
As we learn about an emergent risk, we can start to control it. In the early days of COVID-19, we tried to slow the rate of change through regulations such as social distancing. This kind of “curve flattening” provides time for other guardrails, such as vaccinations, to be instituted. In these ways and others, we try to bring emergent risks under control.
One characteristic of a controlled risk is that its risk profile tends to follow a smooth distribution. Controlled risks are tolerable because failures span the range from mild to catastrophic. A serious car accident is a tragedy no matter the circumstances, but because such accidents are outliers among the failures, society has learned to accept them.
If we understand the problem and how to fully mitigate it, we might call it a resolved risk. For example, we know how to perform medical procedures without spreading infections. We know that building safe high-rises near fault lines requires stringent engineering codes and enforcement. Resolved risks are still not easy — engineering is a discipline that requires good planning and expertise — but at least there is a known way to address them. For some risks, especially those of the complex variety, control is the best we can aim for. Resolved risks are often the realm of complicated problems that, once solved, are amenable to checklists and divide-and-conquer solutions.
Just as emergent risks evolve into controlled risks, some controlled risks can evolve into resolved risks. As we learn more about the problem, we come up with better solutions, including ones that change the nature of the problem itself. When many independent human agents are driving, road safety is a complex, unsolvable problem. If we move to a world with coordinated autonomous agents, then what becomes attainable in the risk landscape might change.
We can test the risk landscape by looking at the bad events that arise. Unlike failures of controlled risks, failures of resolved risks tend to have a spiky outcome distribution. Because we have engineered most of the risk away, outcomes are usually safe. However, when failures do occur, they tend to be catastrophic and are considered intolerable. Correctly or not, we look for something — or someone — to blame. The upside of this impulse is that it pushes us to add further protections that increase safety. The downside is that it leads us to collectively overweight rare but intolerable failures relative to common but tolerable ones.
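The contrast between smooth and spiky failure profiles can be sketched with a toy simulation. This is purely illustrative: the incident rates, distributions, and severity scale below are assumptions of ours, not drawn from any real risk data.

```python
import random

random.seed(42)

def controlled_risk_outcome():
    # Controlled risk (e.g., driving): incidents are fairly common,
    # with severities spread smoothly from mild to catastrophic.
    if random.random() < 0.05:
        return random.lognormvariate(0, 1)  # smooth, continuous severities
    return 0.0  # no incident

def resolved_risk_outcome():
    # Resolved risk (e.g., surgical infection control): incidents are
    # rare, but when they do happen they are catastrophic ("spiky").
    if random.random() < 0.001:
        return 100.0  # every failure is maximally severe
    return 0.0  # no incident

N = 100_000
controlled = [controlled_risk_outcome() for _ in range(N)]
resolved = [resolved_risk_outcome() for _ in range(N)]

def summarize(name, outcomes):
    failures = sorted(x for x in outcomes if x > 0)
    print(f"{name}: {len(failures)} failures, "
          f"median severity {failures[len(failures) // 2]:.2f}, "
          f"max severity {failures[-1]:.2f}")

summarize("controlled", controlled)
summarize("resolved", resolved)
```

Running the sketch shows the controlled risk producing thousands of incidents across a continuum of severities, while the resolved risk produces only a handful of incidents, each one maximally severe — which is why the rare failures of a resolved risk feel intolerable in a way that everyday failures of a controlled risk do not.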
How does this evolution occur? Part of the answer is time. Vehicles were around for decades before seat belt use was legislated. COVID-19 vaccines took time to develop and roll out. Time reveals the scope of a risk and gives us the chance to discover mitigations. It is critical because we learn about emergently risky systems through experience, and especially through failure. However, naively trusting time pushes the burden of safety onto individuals. To counter this, we must use that time to proactively find the technical, regulatory, and normative tools that can help mitigate the problem.
Whatever we do — or do not do — we will look woefully naive to future generations. That’s okay. We need to remind ourselves that most emergent risks eventually become understood risks that we can control or resolve. We must also recognize that emergent risks are different from controlled or resolved risks. We can know those risks exist, but not-know how to handle them yet. We must not mistakenly treat them like we know them already. Our job is neither to ignore the risks nor perfectly combat them from the beginning. Instead, we lumber ahead, doing our best as we stumble hopefully toward the future.
Clues that point to where our changing world might lead us.
🚏👍 A Canadian court ruled that a thumbs-up emoji is a valid “signature” on a contract
One grain buyer in Saskatchewan asked a farmer to sell him some flax; the buyer texted a picture of the contract to the farmer with the message “please confirm flax contract.” The farmer replied with a thumbs-up emoji. But when the farmer refused to deliver the flax, the buyer took him to court. While the farmer argued that the emoji just meant that he’d seen the message, the judge disagreed, writing that “a 👍 emoji is a non-traditional means to ‘sign’ a document” but nevertheless constituted a valid signature in this context.
🚏🖋️ An AI can now translate 5000-year-old cuneiform tablets
Many thousands of ancient cuneiform tablets — texts pressed into clay thousands of years ago — remain untranslated, and only a few hundred experts worldwide are qualified to translate them. So, a team of archaeologists and computer scientists developed an AI that can automatically translate tablets written in the Akkadian language. The AI was trained on a corpus of past translations and taught how to handle different genres, such as literary works and administrative documents. The resulting translations have been scored as “high-quality,” and even when the translations were wrong, “the genre [was] recognizable.”
🚏💬 ChatGPT traffic dropped 10% from May to June
According to one estimate, worldwide traffic to ChatGPT’s website fell 9.7% from May to June, the number of unique visitors dropped 5.7%, and time on site went down 8.5%. One article theorized that ChatGPT’s “novelty” may have worn off, but other writers joked that summer vacation might be leading students to set ChatGPT aside for a few months.
🚏🍺 Insects fed beer waste could be used to produce beef
Livestock consume massive amounts of plants that could otherwise be used to feed humans, so an eco-friendly alternative is to feed livestock insects instead. But, of course, the bugs themselves need to eat. In a new study, researchers found that black soldier flies happily drank the protein-rich wastewater from beer production. Sugar beet processing, bioethanol production, and other industries also create protein-rich byproducts that could be used in this way. The flies could thus be a versatile tool for nutrient recycling, helping turn waste products into human food.
📖⏳ Worth your time
Some especially insightful pieces we’ve read, watched, and listened to recently.
How to Blow Up a Timeline (Eugene Wei) — Argues that the hands-off management that Twitter enjoyed for many years helped the network naturally evolve into a place largely built by its users, reminiscent of how cities grow organically into “complex and functional” communities. In contrast, Elon Musk’s “heavy-handed top-down management approach… resembles one of James Scott’s authoritarian high modernist failures.”
The Algorithmic Anti-Culture of Scale (Garbage Day) — Examining the apparent overnight success of Instagram Threads, argues that social media platforms born from Meta-style “safe algorithmic walled gardens” will usually end up filled with banal marketing posts from brands. Though such products can scale massively, they can’t generate cultural value the same way that more ‘anarchic’ spaces like Twitter could.
What Happens When the Real Young Lady’s Illustrated Primer Lands in China? (Pete Warden) — Thinking of a plot point from Neal Stephenson’s novel The Diamond Age, the author asks: what are the second-order effects of large language models becoming available on-device, and how will this affect authoritarian states’ ability to control their citizens’ access to information?
Dialog in UI (Szymon Kaliski) — Observes that, whether you’re talking to a human or a computer, back-and-forth conversation is a great user interface: the system helps the user figure out which available operations can help them achieve their goals. What’s more, the machine can now “improvise” alongside you, instead of requiring you to “play every chord.”
Rats, Mosquitos, and the Fall of Rome (Told in Stone) — Explores how Rome’s rise and expansion directly led to the spread of malaria and plague, which beset the Mediterranean basin for centuries. The Roman Warm Period that helped Rome flourish also led to increased rainfall, which, combined with deforestation as the empire grew, created plenty of standing water for mosquitoes. What’s more, the shipment of grain across the Mediterranean created a “superhighway” for wheat-loving, plague-carrying rats.
🍯🧠 From the hive mind
Showcasing useful tools, resources, and essays from the FLUX community.
The extended FLUX community has built at least two projects focused on how to navigate what we don’t yet understand well:
The Uncertainty Project [theuncertaintyproject.org]
The Uncertainty Project is a constantly evolving, community-driven collection of research-backed models and tools that helps companies architect processes for effective decision making.
Vaughn Tan, whose book “The Uncertainty Mindset” we’ve recommended before, is currently working on what he calls “Not Knowing: An Investigation,” which is preliminary work for an upcoming book on what to do when we don’t know.
© 2023 The FLUX Collective. All rights reserved. Questions? Contact firstname.lastname@example.org.