Episode 123 — October 26th, 2023 — Available at read.fluxcollective.org/p/123
Contributors to this issue: Ben Mathes, Boris Smus, Erika Rice Scherpelz, Dimitri Glazkov, Neel Mehta, MK
Additional insights from: Ade Oshineye, Gordon Brander, Stefano Mazzocchi, Justin Quimby, Alex Komoroske, Robinson Eaton, Spencer Pitman, Julka Almquist, Scott Schaffter, Lisie Lillianfeld, Samuel Arbesman, Dart Lindsley, Jon Lebensold
We’re a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we’ve noticed in recent weeks.
“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.”
— E. F. Schumacher
Simple problems vs. easy solutions
In the programming world, laziness is often regarded as a virtue. A “hack” can be a person who is faking it, a security breach, or a cool shortcut to get something done (e.g. a lifehack). There’s a tension here: sometimes we see taking the easy path as a positive, and sometimes as a negative. What distinguishes the two? Is it just a matter of taste? Do we need experience to tell whether a solution is an elegant fix or a sloppy workaround? While the taste developed from experience may be part of it, we believe that certain systematic lenses can also be applied.
One feature that helps us divide the space of “good easy” from “bad easy” is whether the nature of the problem matches the nature of the solution. A derogatorily termed “hacky” solution frequently applies an oversimplified approach to a complex issue. Such a solution might temporarily mask symptoms or alleviate the issue while the root cause remains latent, ready to resurface when the fragile fix crumbles. In programming, this looks like modifying code to get a test to pass without understanding why the error occurred in the first place. In governance, it might look like slum clearance, either metaphorically or literally.
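To make the programming case concrete, here is a toy Python sketch (the function names and the bug are our own invention) contrasting a symptom-masking fix with a root-cause fix:

```python
def sum_first_n_buggy(n):
    # Bug: range(n) stops before n, so this sums 0..n-1 instead of 0..n.
    return sum(range(n))

def sum_first_n_hacky(n):
    # "Hacky" fix: special-case the one failing test input without
    # understanding why the error occurred. The bug remains latent.
    if n == 3:
        return 6
    return sum(range(n))

def sum_first_n_fixed(n):
    # Root-cause fix: include n itself in the range.
    return sum(range(n + 1))

assert sum_first_n_hacky(3) == 6       # the failing test now passes...
assert sum_first_n_hacky(4) != 10      # ...but other inputs are still wrong
assert sum_first_n_fixed(4) == 10      # the real fix works everywhere
```

The hacky version satisfies the test suite as written, which is exactly what makes it dangerous: the fragile fix crumbles as soon as any untested input arrives.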
Conversely, when framed positively, the notion of laziness or a “cool hack” entails streamlining the problem to uncover a more effective solution. Simplifying the problem often reveals elegant solutions that might not have been visible in the original framing. This is difficult to do, though: executing it well demands a deep comprehension of the problem’s core and a concentrated focus on it. The idea of an SLC — the simple, lovable, and complete alternative to an MVP (minimum viable product) — is one example of intentionally making the mindset shift from simplifying the solution to simplifying the problem. (Although we’d like to observe that the original definition of MVP did more closely match the simple, lovable, complete framing.) Another example is the idea of paving the cow paths: instead of solving a problem from scratch, sometimes the best approach is to see what solutions are already there and consider formalizing one of them.
By directing our attention toward resolving the most straightforward aspect of the problem, as opposed to seeking the easiest fix, we will be better positioned to discover an elegant solution rather than settling for a subpar hack. However, simplest doesn’t always mean simple. Sometimes a problem’s inherent complexity defies simple solutions. One example of this is in our discussion of polities — a unit of governance can be too large or too small to solve a problem effectively. Effectiveness is about simplifying the problem in ways that get rid of incidental complexity while retaining the core.
In the end, there’s still an element of experience, discernment, and taste. Knowing how much you can simplify the problem requires a deep understanding of the problem itself. Nevertheless, even in unfamiliar domains, refocusing from seeking easy solutions to redefining the problem in simpler terms remains a constructive first step.
Signposts
Clues that point to where our changing world might lead us.
A “data poisoning” tool can make artworks ruin AI models trained on them
A new tool called Nightshade lets artists make undetectable changes to their artwork’s pixels — and if that adulterated art is used to train AI image generation models, the models will start producing erratic outputs. In one study, researchers used the poisoning tactic to make a model turn pictures of dogs into cats, cars into cows, handbags into toasters, and hats into cakes. The researchers argued that AI models’ habit of “hoovering up” data from the public internet is a security hole, because attackers could freely inject damaging data into the training set.
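Nightshade’s actual technique involves subtle pixel-level perturbations, but the underlying idea can be illustrated with a far cruder toy: a few mislabeled training points flipping the predictions of a nearest-neighbor classifier. This sketch (entirely our own, not from the paper) shows how little poison it can take:

```python
def nearest_label(point, labeled_points):
    """Classify a 2D point by the label of its nearest training point."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(labeled_points, key=lambda pl: dist2(point, pl[0]))[1]

# Clean training data: two well-separated clusters.
clean = [((0, 0), "dog"), ((1, 0), "dog"), ((10, 10), "cat"), ((11, 10), "cat")]
assert nearest_label((0.4, 0.1), clean) == "dog"

# Poison: a single mislabeled point planted inside the dog cluster
# flips predictions in that whole neighborhood.
poisoned = clean + [((0.5, 0.0), "cat")]
assert nearest_label((0.4, 0.1), poisoned) == "cat"
```

Real image models are vastly harder to poison than a 1-nearest-neighbor toy, which is why Nightshade’s contribution is making the injected data undetectable to humans while still steering training.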
A new electric car conversion kit could be 10x cheaper than alternatives
One Australian design student has created a prototype kit called REVR (Rapid Electric Vehicle Retrofits) that can affordably convert traditional gas or diesel vehicles into hybrid electric cars. Unlike traditional electric conversion services, which can cost around A$50,000, REVR leaves the internal combustion engine components intact and adds a compact motor between the car's rear wheels, with a battery system in the spare wheel compartment, all for an estimated cost of just A$5,000.
Solar is forecast to become the world’s dominant energy source by 2050
According to a new study, the world may have already crossed a “tipping point” in energy production: even if no new policies are enacted, solar power is slated to produce more than half the world’s energy by 2050, largely at coal and natural gas’s expense. However, the paper warns of four potential “barriers” to the trend: power grid instability, the difficulty of financing solar in developing economies, supply chain capacity crunches, and “political resistance from regions that lose jobs.”
Glass-powered storage could preserve data for thousands of years
Microsoft’s Project Silica is working on a new long-term data storage solution using high-purity glass blocks; these blocks have “voxels” (think 3D pixels) etched into them with ultra-high-precision lasers. While reading and writing files to the glass blocks currently takes multiple days, the technology aims to provide storage that’ll stay good for thousands of years. Glass is immune to the physical decay of tapes, CDs, DVDs, paper, and other physical media, and the glass blocks contain “ground truth tracks” that could help future civilizations re-learn how to interpret the rest of the data found in the glass.
Worth your time
Some especially insightful pieces we’ve read, watched, and listened to recently.
OpenAI Is Too Cheap to Beat (Vikram Sreekanti & Joseph E. Gonzalez) — Argues that major LLM providers like OpenAI end up being cheaper to use than open-source models because the hosted players enjoy economies of scale and can optimize their platforms thanks to the vast amounts of data they gather. Concludes that organizations shouldn’t sink “needless time, talent, and money” into self-hosting models while their competitors “move faster and likely achieve better quality by layering on top of OpenAI.”
The Wolf (Rands in Repose) — Describes an archetype of software engineer who works outside well-defined processes and is unburdened by the “encumbering necessities of a group of people building at scale.” As a result, the engineer is incredibly effective and appears to suffer no consequences for not following the rules.
The Lost Thread (Robin Sloan) — Written in April 2022, on the eve of Elon Musk’s purchase of Twitter, this essay argues that Twitter is doomed to eventually become irrelevant, just like most previous social media platforms. Twitter’s linear timeline and niche audience can’t capture the vast range of human experiences and modes of interaction, and the platform landed on a local maximum but (in the author’s mind) was unable to move beyond it.
They Got the Lead Out of Turmeric! (Marginal Revolution) — Describes one PhD student’s discovery that yellow lead-based pigments, used to make turmeric roots appear more colorful, were poisoning pregnant women and children in Bangladesh, then details how Bangladesh’s Food Safety Authority swiftly cracked down on lead in turmeric throughout the country. It’s an impressive case of “academic research quickly being translated into political action that improves lives.”
Lens of the week
Introducing new ways to see the world and new tools to add to your mental arsenal.
This week’s lens: sour spots.
Many of us know about a “sweet spot”: the ideal zone where multiple variables are in balance, thus achieving a happy medium and letting us enjoy the best of both worlds. For example, a business might find its sweet spot by striking an optimal balance between skilled labor and automation. A sufficient number of tasks are automated to conserve costs and boost efficiency, enabling skilled labor to concentrate on activities that demand creativity and emotional intelligence. This balance minimizes operational costs and maximizes output, all while maintaining or increasing product or service quality. The sweet spot can also be reached by reframing a problem, allowing two opposing perspectives to perceive their objectives as complementary rather than conflicting. Maybe the best solution lies not with 100% A or 100% B, but rather 50% of each.
In contrast, a sour spot emerges when balancing multiple variables has a detrimental impact, achieving the worst of both worlds and leaving all parties worse off. The classic humorous example of a sour spot is a futon, which — in trying to be halfway between a sofa and a bed — is no good for either sitting or sleeping. More broadly, sour spots may emerge when decision-makers collectively make excessive concessions and compromises, reaching a solution that gains everyone’s approval but fails to address the core issue. A well-known example here is “design by committee.”
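The difference between a sweet spot and a sour spot can be sketched numerically. In this toy Python example (the scoring functions are entirely made up), one pair of qualities reinforces itself in the middle, while the futon-like pair collapses there:

```python
# Score a sofa/bed hybrid at mix m, where m=0 is a pure sofa and m=1 a pure bed.

def complementary(m):
    # Sweet spot: the two qualities reinforce each other,
    # so a middle mix beats either extreme (interior maximum).
    return m * (1 - m)

def futon(m):
    # Sour spot: each quality collapses quickly as you move away from
    # its own extreme, so the middle is the worst of both worlds.
    sit_comfort = (1 - m) ** 2
    sleep_comfort = m ** 2
    return sit_comfort + sleep_comfort

mixes = [i / 10 for i in range(11)]
best_complementary = max(mixes, key=complementary)  # interior maximum
worst_futon = min(mixes, key=futon)                 # interior minimum
assert best_complementary == 0.5
assert worst_futon == 0.5
```

Both curves turn at the 50/50 mix; whether that midpoint is the best or the worst outcome depends entirely on how the two qualities interact, which is the whole point of the lens.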
Sour spots show us that meeting in the middle can sometimes be the worst outcome, but, of course, going too far in either direction can be a problem too. These “bitter extremes,” to coin a phrase, may arise from excessive deference to a singular concern, triggering a cascade of adverse consequences. For example, a company that over-invests in automation can render a significant portion of its skilled workforce redundant, leading to low morale and decreased innovation.
Regardless of whether the issue stems from excessive or insufficient compromise, the underlying problem is the same: the parties have lost sight of the bigger picture. When decision-makers enter a discussion with an adversarial mindset, it is highly likely that the resolution will be a sour spot or bitter extreme. Motivated by a reluctance to concede victory to others, the involved parties become blind to the realm of possibilities where win-win solutions exist. To echo this week’s main piece, a more beneficial approach is to move our focus away from the potential solutions and onto the problem to be solved. This more expansive understanding can steer us away from lose-lose situations and might even guide us toward the sought-after sweet spot.
© 2023 The FLUX Collective. All rights reserved. Questions? Contact flux-collective@googlegroups.com.
Another thought that’s been rattling around my head is that the real benefit of AI is that understanding it helps us understand human brains better. Maybe the counterpoint is that understanding how to data-poison AIs helps us understand how to data-poison humans.
Data poisoning is fascinating. One can imagine a world where the great injustice of AI is not the AIs ruling over the humans but rather nasty humans torturing the AIs with poison data.