Episode 103 — June 8th, 2023 — Available at read.fluxcollective.org/p/103
Contributors to this issue: Neel Mehta, Boris Smus, Erika Rice Scherpelz, Ade Oshineye, a.r. Routh, Dimitri Glazkov
Additional insights from: Gordon Brander, Stefano Mazzocchi, Ben Mathes, Justin Quimby, Alex Komoroske, Robinson Eaton, Spencer Pitman, Julka Almquist, Scott Schaffter, Lisie Lillianfeld, Samuel Arbesman, Dart Lindsley, Jon Lebensold
We’re a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we’ve noticed in recent weeks.
“You can’t justify a bridge by the number of people swimming across a river.”
💎🔬 The treasure is in the details
Imagine a team building a product on top of a large language model (LLM). As development advances, the team makes tradeoffs. Do they optimize for repeatable testability even if it means constraining variety? Do they hide the LLM completely or let it peek through? Do they use a chat interface? Something more traditional? Something completely different?
In any design or solution space, there are tradeoffs. In a well-established space, there are typically heuristics around these tradeoffs. These heuristics are not always right. If they were, they would be laws. However, heuristics are generally better than starting with nothing.
In a new design space, such as LLM-based products, heuristics are only just starting to emerge. We are not only discovering which heuristics address which tradeoffs — we are also in the process of discovering the tradeoffs themselves!
This snapshot of the current moment illustrates a more general pattern: tradeoffs are an excellent way to discover the underlying structure of a space. Tradeoffs are where insights cluster. Tradeoffs coalesce around challenges. Novel tradeoffs, lacking reliable heuristics, point to net new challenges.
As we make choices about tradeoffs, we build up a pile of condensed insight into the nature of the system (no, not that pile). Going back to the example of building products around LLMs, perhaps we learn that it’s reasonable to test our prompts using a fixed random seed, because improvements on our test dataset still correlate with improvements in overall performance. Or maybe we find that tests that are too static are useless. Or we learn that our users prefer minotaurs to centaurs. Or that they prefer muses to oracles. In each case, we’ve learned something new about how our problem space interacts with the LLM.
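As a toy illustration of the fixed-seed idea, consider the sketch below. The `mock_llm` function is a hypothetical stand-in for a real sampling endpoint, not any actual API; the point is only that pinning the randomness makes an evaluation repeatable, so score changes reflect prompt changes rather than sampling noise.

```python
import random

def mock_llm(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for a real LLM call; seeding the RNG on
    # (seed, prompt) makes the "sampled" output fully repeatable.
    rng = random.Random(f"{seed}:{prompt}")
    words = ["alpha", "beta", "gamma", "delta"]
    return " ".join(rng.choice(words) for _ in range(5))

def eval_prompts(prompts, seed=42):
    # With a fixed seed, the same test dataset yields the same outputs,
    # so any change in scores is attributable to the prompts themselves.
    return [mock_llm(p, seed) for p in prompts]

prompts = ["Summarize:", "Translate:"]
assert eval_prompts(prompts) == eval_prompts(prompts)  # repeatable run
```

Whether this repeatability is a virtue or a blind spot is exactly the kind of tradeoff the essay describes: a frozen seed can hide failure modes that only show up under real sampling variety.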
It’s easy to treat tradeoffs as nothing more than practical concerns, things that can be engineered away if we just work hard enough. However, if we look a little more deeply, we might find that tradeoffs represent fundamental truths about the underlying system. The CAP theorem says that a distributed data store can provide at most two of three guarantees: consistency, availability, and partition tolerance. The theorem started as a conjecture, an idea built from the insights condensed by making tradeoffs among these three. Formulating it as a more general principle led to it eventually being (partially) proven.
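A toy sketch can make that tradeoff concrete. The `Replica` class below is our own illustration, not any real data store’s API: when a simulated network partition cuts it off from its peer, it must either refuse writes (choosing consistency over availability) or accept them locally (staying available at the risk of diverging from its peer).

```python
class Replica:
    # Toy replica illustrating the CAP tradeoff during a partition:
    # refuse writes (consistent but unavailable) or accept them
    # locally (available but possibly inconsistent with the peer).
    def __init__(self, prefer_consistency: bool):
        self.data = {}
        self.partitioned = False
        self.prefer_consistency = prefer_consistency

    def write(self, key, value):
        if self.partitioned and self.prefer_consistency:
            raise RuntimeError("unavailable: cannot reach peer replica")
        self.data[key] = value  # may diverge from the peer while partitioned

cp = Replica(prefer_consistency=True)   # "CP" system
ap = Replica(prefer_consistency=False)  # "AP" system
cp.partitioned = ap.partitioned = True

ap.write("x", 1)        # AP system stays available during the partition
try:
    cp.write("x", 1)    # CP system sacrifices availability instead
except RuntimeError:
    pass
```

No amount of engineering effort makes this fork in the road disappear; the sketch just makes visible the choice every real distributed store has to encode somewhere.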
Yet at the same time, not all tradeoffs represent fundamental truths. Our generalizations are as likely to be wrong as they are to be right. However, the treasure is in the details. If we allow ourselves to dig into the tradeoffs we are making, we can develop a mechanical sympathy for the underlying system that allows us to work with it more effectively and capture heuristics that will guide those who come after us. As Alan Kay puts it, “point of view is worth 80 IQ points,” but sometimes a better mental model is worth infinite IQ points.
🛣️🚩 Signposts
Clues that point to where our changing world might lead us.
🚏🖼 You can generate AI art that’s also a QR code
One Reddit user shared eight images, all generated with Stable Diffusion’s ControlNet model, that are also functional QR codes. You can scan the artworks — which include pictures of flowers, landscapes, and an anime character — with a QR scanner app, and it’ll usually work. (The cover photo for this episode was generated with a different technique, because the poster didn’t explain their exact process for creating this art.)
🚏😶🌫️ Thousands of subreddits are “going dark” to protest Reddit’s new API pricing
In May, Reddit announced that its previously free API would now cost money to use, which was seen as a death knell for many popular third-party Reddit apps. The Apollo app announced that it would now cost $20 million a year to run; the creator of the Reddit Is Fun app said the API costs and ban on ads in third-party apps (a key revenue stream) would “likely kill RIF.” In protest, at least 3,000 subreddits will “go dark” next week, shutting off activity for 48 hours. Other subreddits may “go away permanently unless the issue is adequately addressed.”
🚏🗽 New York City had the worst air quality in the world
The smoke from wildfires raging in northern Quebec blew down into the northeastern US this week, choking the region in a sooty orange haze. New York City’s Air Quality Index (AQI) hit a record high of 392 (anything above 150 is unhealthy, and 300+ is “hazardous”), briefly giving the city the worst air quality of any major city on the planet — beating out mainstays like Delhi, Jakarta, Dubai, and Dhaka.
🚏🎓 A professor gave his students “incomplete” grades, saying ChatGPT wrote their papers
An agriculture professor at Texas A&M gave all his students “incomplete” marks for the semester, blocking them from getting their diplomas; he said he had evidence that they’d used ChatGPT to write their papers. The prof’s (mistaken) rationale was that he’d pasted students’ papers into ChatGPT and asked the chatbot if it had written the papers, to which ChatGPT responded affirmatively. The problem was that, no matter what text you give ChatGPT, it’s highly likely to say that “it is possible that this passage was generated” by an LLM. (As an experiment, one Redditor fed the professor’s doctoral dissertation into ChatGPT and asked if it had written the paper, to which the bot said, “The text contains several characteristics that are consistent with AI-generated content.”)
🚏🚊 Switzerland is putting solar panels between its train tracks
Solar panels have drawn criticism for taking up valuable land, so some projects now aim to install panels in more marginal spaces. One example is a startup that’s installing solar panels between the train tracks on a small stretch of railroad in western Switzerland. Eventually, the company’s goal is to have a special train “unroll” a long ribbon of panels in its wake as it moves down the rails, thus enabling automatic installation.
📖⏳ Worth your time
Some especially insightful pieces we’ve read, watched, and listened to recently.
What if They Gave an Industrial Revolution and Nobody Came? (The Roots of Progress) — Posits that the supply of innovation (i.e. the number of inventions being created) wasn’t the only factor in spurring the Industrial Revolution; there also needed to be sufficient demand for innovation. In industrial Britain’s case, a combination of high wages, cheap energy, and low cost of capital led to high demand for labor-saving, steam-powered machines.
The Counter-Intuitive Truth About Trust in Teamwork (Shane Snow) — Argues that the most effective way to build trust is not to project competence and integrity, but instead to convince others of your benevolent intentions.
How to Send a Self-Correcting Message (Hamming Codes) (3Blue1Brown) — Grant Sanderson explains the remarkably elegant math behind parity bits and how they’re used for error detection and correction. This is useful anytime we’re sending information over a noisy channel, such as with QR codes, data storage, or internet communications.
How Ukraine's ‘Iron People’ Keep the Country on Track (CNN) — A photo-essay exploring the vital role railways have played in Ukraine’s defense against Russia. Trains have been valuable for evacuating refugees to nearby countries, transporting military equipment to the front lines, and helping soldiers go home and see their families. Along the way, Ukraine’s railroad workers have become a symbol of courage and resilience.
Preternatural Machines (Aeon) — Examines how Islamic Enlightenment scholars of the early Middle Ages pioneered complex automata that later arrived to the then-backward Western world shrouded in mystery, skepticism, and religiously driven fear. These often elaborate and artful masterpieces helped transform Western thought and craft.
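For a taste of the parity-bit math in Sanderson’s video, here is a minimal Hamming(7,4) sketch in Python. The elegant trick the video builds up to: XOR together the 1-based positions of all set bits, and the result pinpoints the position of a single flipped bit (or 0 if the codeword is clean).

```python
def hamming74_encode(d):
    # d: four data bits; returns a seven-bit codeword with parity
    # bits at positions 1, 2, and 4 (1-based) and data elsewhere.
    d3, d5, d6, d7 = d
    p1 = d3 ^ d5 ^ d7
    p2 = d3 ^ d6 ^ d7
    p4 = d5 ^ d6 ^ d7
    return [p1, p2, d3, p4, d5, d6, d7]

def hamming74_correct(c):
    # XOR of the 1-based indices of all set bits gives the position
    # of a single flipped bit, or 0 if the codeword is error-free.
    syndrome = 0
    for i, bit in enumerate(c, start=1):
        if bit:
            syndrome ^= i
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return c

code = hamming74_encode([1, 0, 1, 1])
corrupted = list(code)
corrupted[2] ^= 1                      # flip one bit "in transit"
assert hamming74_correct(corrupted) == code
```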
🔍📆 Lens of the week
Introducing new ways to see the world and new tools to add to your mental arsenal.
This week’s lens: reality distortion fields.
Originally used to describe the effect of Steve Jobs’ charisma on Apple developers, customers, and even executives, the reality distortion field can be a handy lens when exploring what happens when we are tricked into believing in impossible (or seemingly impossible) things.
A form of reality distortion field surrounds every charismatic leader, and most organizations have reality distortion fields embedded into their cultures as lore. Reality distortion fields are places where the world-as-we-know-it is simpler and more straightforward than we normally experience, and as such, they hold significant appeal.
In themselves, reality distortion fields are neither good nor bad. They just are. Their effects can be positive or negative.
A positive outcome of experiencing a reality distortion field might be loosening our grip on ideas we held too firmly to see other options, noticing alternatives that were invisible to us without the field’s effect. For example, a team convinced of its destiny to seize the moment might gain focus and clarity of execution and overcome overwhelming odds.
Similarly, the reality distortion field generated by a hot but vague new term (spatial computing! AI! mixed reality!) can open up new areas for exploration by reframing what’s possible. People’s preconceptions might rule out certain promising ideas, such as manipulating 3D digital objects with your hands, but the buzzy new term can give people enough benefit of the doubt to explore these new spaces.
The downsides stem from getting trapped in the thick of the distortion field, entrenched in magical thinking, and forfeiting our own reasoning agency. These cases rapidly devolve into cults and extremism. Sadly, there are so many examples of this happening that we don’t even have to mention them.
The crux of the reality distortion field lies in how we choose to respond to its influence. Can we discern the reality distortion fields we’re in? Do we lean into their strength to empower our actions and explore new domains? Or do we abdicate our agency and the clarity of our own thought?
© 2023 The FLUX Collective. All rights reserved. Questions? Contact flux-collective@googlegroups.com.