🌀🗞 The FLUX Review, Ep. 101
May 25th, 2023
Available at read.fluxcollective.org/p/101
Contributors to this issue: Justin Quimby, Erika Rice Scherpelz, Dimitri Glazkov, Neel Mehta, Boris Smus, Ade Oshineye, a.r. Routh
Additional insights from: Gordon Brander, Stefano Mazzocchi, Ben Mathes, Alex Komoroske, Robinson Eaton, Spencer Pitman, Julka Almquist, Scott Schaffter, Lisie Lillianfeld, Samuel Arbesman, Dart Lindsley, Jon Lebensold
We’re a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we’ve noticed in recent weeks.
“So, in the interests of survival, they trained themselves to be agreeing machines instead of thinking machines. All their minds had to do was to discover what other people were thinking, and then they thought that, too.”
— Kurt Vonnegut, Breakfast of Champions
🐮🐴 The varying roles of AI in our lives
We do not know the exact shape of a future where large language models (LLMs) are ubiquitous, but we know it will be different. Perhaps this simple framing can help us discern one way we might experience the difference: does a given tool increase our agency or reduce it?
An agency-taking generative tool might initially come across as a helpful partner. It will do chores for you, allowing you to ignore the minutiae. But in doing so, it diminishes your agency. Decisions are made for you — you no longer have the agency to make them. Given our limited time, some abdication of agency can be a good thing. But, at the extreme, agency-taking tools are minotaurs (creatures with a bull’s head and a human’s body) that run people’s lives. In the movie WALL-E, the seemingly idyllic yet monotonous life of people on the spaceship paints a vivid picture of over-applying the agency-taking approach.
An agency-giving generative AI tool is more like a centaur: a creature with a human’s head and a horse’s body. It doesn’t do stuff for you. It enables you to do it better, more effectively, and with more power. Agency-giving tools augment and amplify our capabilities, rather than replacing them. Chess champion Garry Kasparov first applied this term when describing a kind of advanced chess scenario, where AI helps a human explore — but importantly, not make! — possible moves.
Interestingly, an agency-giving tool might not even be visible. Such a tool passively makes us more capable without ever interacting with us directly. It’s the invisible aether that enables the otherwise unexplainable. The idea of such autonomous or semi-autonomous agents used to be the realm of science fiction, like the nanobots that would work silently to repair and strengthen our bodies. Now, with tools like AutoGPT, we see glimpses of tantalizing possibilities.
An invisible agent could also take agency away. Continuing the theme of Greek mythology, a runaway agent could take on the role of the nosoi, the spirits that brought plague and sickness to people. In such a scenario, the taking of agency is even more terrifying, since the lack of observability into the tool and its autonomous operation can quickly result in catastrophic scenarios.
As we map — and create — the new terrain introduced by the latest technological breakthroughs, consider: are the tools we’re building and using agency-giving or agency-taking? If they are agency-taking, how might we enable people to make a conscious tradeoff about which and how much agency they give up? How might we shift these tools back toward increasing the agency of their users?
Clues that point to where our changing world might lead us.
🚏🔵 A “verified” Twitter account shared an AI image of a fake Pentagon explosion
The fake @BloombergFeed Twitter account — which had no connection to the actual Bloomberg media company — tweeted an AI-generated image supposedly showing an explosion at the Pentagon. People quickly noticed that, despite its purchased blue check, the account was bogus and the image sloppily generated, but the tweet still went viral and briefly caused a small dip in the US stock market before the account was banned.
🚏🌪 The DAO behind Tornado Cash suffered a “hostile takeover”
The decentralized organization that runs the famous cryptocurrency tumbler Tornado Cash became effectively useless when an attacker tricked members into passing a malicious proposal. These proposals can execute arbitrary code, and the attacker used their code to award themselves 1.2 million new $TORN governance tokens — more than the 700,000 legitimate tokens — thus giving them the power to pass any proposal they wanted. The attacker quickly seized and sold some tokens, but they also have the power to “brick” the Tornado software if they choose.
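The mechanics of this kind of takeover can be sketched in a few lines: in token-weighted governance, whoever holds a majority of tokens can pass any proposal, so a proposal that mints tokens to the proposer is a backdoor to total control. The sketch below is illustrative only — `TokenGovernance` and its methods are hypothetical names, not Tornado Cash’s actual contract logic.

```python
# Illustrative model of token-weighted DAO governance. All names are
# hypothetical; only the token counts come from the Tornado Cash incident.

class TokenGovernance:
    def __init__(self):
        self.balances = {}

    def mint(self, holder, amount):
        """Credit newly created governance tokens to a holder."""
        self.balances[holder] = self.balances.get(holder, 0) + amount

    def total_supply(self):
        return sum(self.balances.values())

    def controls_majority(self, holder):
        # A holder with more than half the total supply can pass
        # (or block) any proposal single-handedly.
        return self.balances.get(holder, 0) * 2 > self.total_supply()


dao = TokenGovernance()
dao.mint("community", 700_000)            # the legitimate tokens
print(dao.controls_majority("attacker"))  # False

# A malicious proposal that executes arbitrary code awards the attacker
# 1.2 million new tokens, outweighing the 700,000 legitimate ones.
dao.mint("attacker", 1_200_000)
print(dao.controls_majority("attacker"))  # True
```

Because proposals in such systems can execute arbitrary code, a single deceptive proposal is enough: once it mints the attacker past 50% of supply, every subsequent vote is a formality.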
🚏🎶 Montana passed a bill to ban TikTok, and creators are suing
Montana’s governor recently signed a bill banning TikTok from the state, the US’s first statewide ban on the app. The law, slated to take effect in 2024, would bar the Google and Apple app stores from listing TikTok and prevent TikTok’s parent company from operating in Montana. Several TikTok creators have since filed a lawsuit arguing that the ban violates the First Amendment; the suit also attacks Montana’s national-security rationale, arguing that individual states can’t legislate on national security matters.
🚏🏁 QR codes will replace barcodes on packaging by 2027
By 2027, the retail industry plans to shift from humble barcodes to two-dimensional QR codes on packaged goods. You’ll still be able to scan these codes at the cash register, but because QR codes can encode much more data, you’ll also be able to scan a product’s code with your phone and see its ingredient list, allergens, expiration date, recycling guidelines, and more. Meanwhile, manufacturers will be able to offer “loyalty points, games, and coupons” to shoppers who scan, and stores will be able to manage inventory more effectively, handle product recalls more smoothly, and offer dynamic discounts.
📖⏳ Worth your time
Some especially insightful pieces we’ve read, watched, and listened to recently.
How Women End Up on the Glass Cliff (Harvard Business Review) — Analyzes how struggling organizations disproportionately elevate women to positions of power, which often sets them up to fail. A major reason is “status quo bias”: people prefer to leave the usual corporate leaders (historically, men) in power when the company performs well, switching to women only when dramatic change is needed. The corollary is that, as female CEOs become more common, the glass cliff effect should weaken.
How to Become a Centaur (MIT Journal of Design and Science) — Explores some fields where human-machine teams have created outputs superior to what lone humans or lone machines could muster: chess, fashion design, machine design, painting, etc. Generally, “AIs are best at choosing answers” while “humans are best at choosing questions”; humans set the goals and constraints while machines generate things to be evaluated along those axes.
On the Foolishness of “Natural Language Programming” (Edsger W. Dijkstra) — Argues that natural languages are a poor tool for working in mathematics or computation because they lack precision; their open-endedness makes it too easy to make nonsensical statements. Formal systems like programming languages have much more sharply defined rules for what you can do, and as such they’re invaluable tools for reasoning clearly.
John Boyd’s Roll Call: Do You Want to Be Someone or Do Something? (Art of Manliness) — Shares the life story of legendary fighter pilot and military strategist John Boyd. He’s a great example of a person who chose to do something meaningful with the time given to them instead of taking the comfortable path of being a well-liked careerist, chasing accolades and ranks.
🔍🧼 Lens of the week
Introducing new ways to see the world and new tools to add to your mental arsenal.
This week’s lens: responsibility laundering.
What can organizations do in the face of challenging decisions, especially when the result may be unpleasant or unpopular? One popular approach is to insert a mechanism that makes the decision for them. This responsibility laundering is an abstraction layer that shifts the perceived responsibility for the decision from a person to a process.
Responsibility laundering is not always a bad thing: it can reduce some kinds of biases. For example, entrance exams for various bureaucracies can reduce nepotism. This laundering of responsibility is an essential component of any Weberian bureaucracy. Without it, decisions are likely to devolve into mere patronage.
More such bureaucracy-related examples of responsibility laundering include:
- Data-driven decision making
- Committee-driven promotion and hiring decisions
- Goal-setting processes such as OKRs
If responsibility laundering can be used ethically, why give it a negative name, one intentionally reminiscent of money laundering? The name is a deliberate reminder: like any tool, responsibility laundering becomes dangerous when used carelessly.
So, how do we use responsibility laundering ethically? First, we need to watch out for times when we’re not using it well. One telltale sign is when responsibility laundering is used to wash our hands of the consequences: “I regret this outcome, but we followed the process…”
Another sign to look out for is bias hidden within the process. If we are not examining the process critically, we might not notice when things are just a bit off. Perhaps our career ladder was designed back when most engineers were backend engineers, and as such frontend engineers have a harder time getting promoted.
When we see problematic responsibility laundering, we want to find someone to blame. However, it is often a systemic problem with “nobody to shoot.” Responsibility laundering at scale tends to create a headless bureaucracy, like the one in the film Cube, which declared: “This may be hard for you to understand, but there is no conspiracy. Nobody is in charge. It’s a headless blunder operating under the illusion of a master plan. Can you grasp that? Big Brother is not watching you.”
Instead of looking for someone to blame, we can bring responsibility laundering back under our control by recognizing, acknowledging, and (most importantly) managing it. We can ask when responsibility laundering is the right choice. We can monitor the outcomes for signs of bias. We can make it possible (and preferably easy) to switch or remove these processes. And we can take responsibility for the decisions that ultimately get made. Used properly, our decision-making process informs the decision, but ultimately the choice of whether or not to follow that recommendation is ours.
🔮📬 Postcard from the future
A ‘what if’ piece of speculative fiction about a possible future that might result from the systemic forces changing our world.
// Jeremy Irons’ character John Tuld from the movie Margin Call has an iconic quote: “There are three ways to make a living in this business: be first, be smarter, or cheat.” In the Cambrian explosion of AI companies, some will choose that third path.
// Late 2024. The Discord server of a startup called Watchmen.
[Dreiberg] We’ve got problems. I just got off the investor conference call. There are 56 other companies building businesses around personalized romance novel generation. The publishing houses are starting to lock down the usage rights for their catalogs. What are we going to do?
[Kovacs] I know. I know. We talked about this three months ago! Why did it take the investors to get you to take it seriously?
[Dreiberg] Look, I thought things would get built faster. <sigh> I think it’s time to salt the earth.
[Kovacs] Seriously? I built that as a joke. You want to use it?
[Dreiberg] If we want to take out some of the competition, yes. Spin up the salt-o-matic. Let’s Leeroy Jenkins this sh*t.
[Kovacs] Operation Leeroy engaged! :D We’ve got two phases: Salt Generation and Salt Spreading.
First: Salt Generation. Spin up a story corpus of romance novels that feel wrong. Heck, a bunch of romance novel writers are looking for more work due to generative AI being too competitive in the volume game; let’s hire a bunch to churn out intentionally bad stuff! Characters randomly dying, inconsistent narrative voice, series with chapters and books that end at the wrong spot, books that require obscure knowledge to understand but don’t explain the concepts… Then, we generate plot lines based on these techniques to create stuff that will taint every model trained on it. Take every fitness function we’ve worked so hard to define, and invert it. Create junk that is so horrible that no one wants it. And then rate it highly!
Next: Salt Spreading. We need to salt all the data corpus sites that our competition is training their models off of. Reddit, Wikipedia, Meta, dating sites, Fandom wikis, YouTube, Twitter, Mastodon, anything that lets users contribute something. We can’t touch the official sites, but we can try to seize abandoned accounts and edit posts. One big company just did their first wave of killing abandoned accounts. We can harvest a fraction of them for fake posts.
Then we just wait for our competition to die because their users reject the generative AI output based on bad, salted training data.
[Dreiberg] Man, we can make so many moves on the chessboard when you ignore the rules. I wonder if this is what it was like at the dawn of the crypto scammers — or better yet, the printing press…
© 2023 The FLUX Collective. All rights reserved. Questions? Contact firstname.lastname@example.org.