Episode 131 – February 1st, 2024 – Available at read.fluxcollective.org/p/131
Contributors to this issue: Dimitri Glazkov, Jon Lebensold, Erika Rice Scherpelz, Robinson Eaton, Ade Oshineye, Justin Quimby, Neel Mehta, Boris Smus
Additional insights from: Ben Mathes, Alex Komoroske, Spencer Pitman, Julka Almquist, Scott Schaffter, Lisie Lillianfeld, Samuel Arbesman, Dart Lindsley, MK, Melanie Kahl
We're a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we've noticed in recent weeks.
"The common trait of people who supposedly have vision is that they spend a lot of time reading and gathering information, and then they synthesize it until they come up with an idea."
– Fred Smith, Overnight Success
The Babbage machine phenomenon
Computational devices have a long history, but credit for the first computer often goes to Charles Babbage for his Difference Engine and Analytical Engine. However, Babbage's designs, while viable, were never actually built. It would be another hundred years before vacuum tubes kicked off practical machines for generalized computation.
Protein folding, electric cars, and the many ideas of Leonardo da Vinci show that ideas often show up as concepts or novelty items well before they achieve scale. Sometimes, it takes time for ideas to diffuse. Sometimes, society needs to get used to them. Maybe ideas need technology that hasn't been invented yet or can't yet be scaled. Maybe they just need some tweaks before they're ready for prime time. Sometimes, people were simply more interested in investing elsewhere at the time.
However, even when these early concepts didn't yield practical inventions at the time, they are often seen in retrospect as the precursors to later technological development.
We might call such early glimpses of possible societal-scale innovation breakthroughs "Babbage machine phenomena." What differentiates a Babbage machine phenomenon from innovation kayfabe (seemingly innovative work that is ultimately just for show)? To some degree, only time can tell. However, by looking at the stages an innovation goes through, we can start to understand some of the differences.
We can imagine a pipeline of innovation from "possible" to "feasible" to "scalable" to "real":
At the "possible" stage, an innovator takes something previously unimagined (or, perhaps, only abstractly imagined) and shows that it's possible to work out enough details to build it. Solar sails are an example of a possible technology.
The "feasible" stage shows that the possible can be done in real life with individual care and attention. Many Kickstarters start with a piece of feasible technology: they have a real working demo, but they haven't necessarily made it through the process of validating that they can manufacture at scale.
In the "scalable" stage, innovators focus on figuring out how to do their work without having to do everything by hand. Those Kickstarters figuring out the manufacturing process are at the scalability stage, as are many startups when they roll out their products to be generally available (often greasing the wheels with incentives).
The difference between scalable and "real" is often more a matter of degree than of kind: does society recognize this innovation as generally useful? Can it sustain itself without the incentives propping it up? In a post-ZIRP world, we're seeing how many startups that thought they were real were still scaling, leading to failures as the end of cheap money and new demands for profitability revealed that they didn't have a sustainable business model.
Babbage machine phenomena and innovation kayfabe happen in the early stages: possibility and feasibility. This is partially because the cost goes up further along the pipeline. However, it's also somewhat definitional; by the time something is scaling or beyond, it's no longer a concept, real or fake.
But back to the original question: how do we spot the difference between Babbage machine phenomena and innovation kayfabe? First, we can start peeling back the layers. Are there detailed plans or just shiny concept videos? The more people are willing to describe how something could work, the more likely we're looking at a true Babbage machine phenomenon. The shallower the presentation, the more likely it's innovation kayfabe. (That said, concept videos can be valuable for inspiration; they only become innovation kayfabe when they are presented as something that's coming.) Is there a real prototype or demo? Are the creators willing to let the public play with it? The more people can poke and prod at the innovation, the more likely it is to be a precursor. In resource-rich environments – such as tech companies during the ZIRP era – the difference might be less about the technology itself and more about the willingness to continue investing (or to set the idea free). Is a good-faith effort made at scaling, or is the effort dropped after a publication, a good press release, or a successful promotion?
Of course, not all innovative paths lead somewhere. We are not driving steam automobiles or (usually) watching 3D televisions at home. However, when it comes to Babbage machine phenomena, the more important question is whether or not a particular idea expands our collective adjacent possible. Ideas, even if ultimately infeasible, help us add to the net wisdom of humanity by better understanding what we can and cannot do. That alone makes them worthwhile.
Signposts
Clues that point to where our changing world might lead us.
The creators of an "AI" comedy show revealed it was actually human-written
Earlier this month, a podcast called Dudesy released an hour-long comedy special that it claimed was an AI-generated impression of American comedian George Carlin; they said the AI had been trained on "decades" of Carlin's stand-up material. But when Carlin's estate sued the podcast for making "unauthorized copies" of Carlin's "copyrighted routines," the podcasters said that the comedy special hadn't used AI at all; instead, one of the co-hosts had written the whole thing. Commentators said this could eliminate some copyright claims, but even an entirely human-written special would still be on the hook for "unauthorized use of Carlin's name and likeness for promotional purposes."
Investment in quantum computing dropped by nearly 50% last year
Quantum computing attracted $2.2 billion in worldwide venture funding in 2022, but that figure fell to just $1.2 billion in 2023, in part because the AI boom drew away investor interest and in part because investors got more cautious in general. Interestingly, the decline wasn't evenly distributed: quantum funding fell 80% in the US but just 17% in APAC, and it actually grew 3% in EMEA.
A study found that coding copilots increased code churn and reduced code reuse
A recent whitepaper that looked at 153 million lines of changed code concluded that the rise of AI-powered coding assistants, such as GitHub's Copilot, has put "downward pressure on code quality." In particular, the paper estimates that the amount of code churn (the percentage of lines "reverted or updated" within two weeks of being written) will be twice as high in 2024 as it was in 2021. It also finds that AI-assisted programmers tend to use more copy-pasted and AI-generated code while reusing and refactoring existing code less, which hurts maintainability in the long run.
FEMA will pay states to install solar panels and heat pumps after disasters
The US's Federal Emergency Management Agency helps communities recover from natural disasters by funding things like debris removal and the rebuilding of public infrastructure. The agency recently announced that it'll also start paying for recovering communities to set up solar panels (since microgrids can make communities more resilient to blackouts) and install heat pumps and energy-efficient appliances (to reduce the chance of power outages to begin with).
Worth your time
Some especially insightful pieces we've read, watched, and listened to recently.
How Technology Interacts With Status Signaling (Culture: An Owner's Manual) – Examines how new technologies can both "serve as an antidote to status marking" (for instance, new technologies eventually diffuse and become omnipresent, thus no longer being exclusive) and create new ways for people to signal status (for instance, new technologies often come with "alibis," or non-status reasons for owning the item, thus making the owner seem less status-focused while still signaling their status).
Contextualizing Elagabalus (The Historian's Craft) – Uses the case of an oft-maligned Roman emperor to argue that, in the ancient world, history was largely a form of literature, so historians often skewed the truth to fit the themes, motifs, archetypes, and morals that ancient readers expected. Historians also purposely bad-mouthed past emperors to make the current regime look better, often leaning on regional or ethnic stereotypes.
Why Bad Strategy Is a "Social Contagion" (McKinsey) – A business professor critiques executives who conflate goals with strategies; in truth, strategy is all about making a plan to achieve those goals, focusing on the company's strengths, and mitigating the obstacles that make the plan difficult.
How Language Nerds Solve Crimes (PBS Otherwords) – Examines how forensic linguists use syntax, word choice, sociolinguistics, and statistics to help identify serial killers, thwart ransomers, and even figure out the true authors of books.
Lens of the week
Introducing new ways to see the world and new tools to add to your mental arsenal.
This week's lens: Tamagotchi quality.
Back in the 1990s, Tamagotchis were all the rage. If you were of the right age, you may have worried about the well-being of your digital pet and heard the chirps coming from other people's pockets reminding them to do the same. This was the era of the Nintendo 64, the Sony PlayStation, and many other classic consoles. In an environment of such rich digital entertainment, why was a toy with a low-resolution LCD screen so popular?
Tamagotchis were at the sweet spot of doing something truly satisfying really well. They perfectly matched technology with entertainment at just the right point on the curve of technological progress. They were innovative, but they weren't cutting-edge. We can remember our small digital friends and call this Tamagotchi quality: the idea of solving a problem really well, even if that means doing something that might seem like it's not pushing us to the edge of our capabilities.
The opposite of Tamagotchi quality is when a technology is pushed beyond its capabilities to create something that presumably solves a problem but is unreliable. Dollar store toys that break on contact with a real child are one example. However, things can fail to meet the Tamagotchi quality bar without being cheap, as technological flops like 3D home televisions and the Juicero gadget show. These devices trade performance for a perception of innovation and fundamentally disappoint their users after a few moments.
Tamagotchi quality is related to lateral thinking with seasoned technology, which finds new ways to use existing technology. It is also related to the MAYA and LAYA principles, using the most or least advanced yet acceptable technology to build products that meet your users where they are. Ultimately, this lens highlights the intersection of these principles: that it's better to do a useful thing solidly than an exciting thing badly.
As we think about current AI technologies, we can apply the Tamagotchi quality lens. Are we trying to use AI to do something that is sustainably satisfying (even if it feels somewhat boring), or are we chasing after innovative ideas that can only disappoint when they fail to live up to our expectations?
© 2024 The FLUX Collective. All rights reserved. Questions? Contact flux-collective@googlegroups.com.