Episode 143 – April 25th, 2024 – Available at read.fluxcollective.org/p/143
Contributors to this issue: MK, Stefano Mazzocchi, Erika Rice Scherpelz, Jasen Robillard, Neel Mehta, Boris Smus
Additional insights from: Ade Oshineye, Ben Mathes, Justin Quimby, Dimitri Glazkov, Alex Komoroske, Robinson Eaton, Spencer Pitman, Julka Almquist, Scott Schaffter, Lisie Lillianfeld, Samuel Arbesman, Dart Lindsley, Jon Lebensold, Melanie Kahl, Kamran Hakiman
We're a ragtag band of systems thinkers who have been dedicating our early mornings to finding new lenses to help you make sense of the complex world we live in. This newsletter is a collection of patterns we've noticed in recent weeks.
"If you only deal with stuff you know, then the information you have to draw from is finite. But if you draw from the unknown, the possibilities are infinite."
– Sun Ra to Avreeayl Ra, as recounted by Jeff Parker
LLM vibes
LLMs have raised many philosophical questions. Do they demonstrate intelligence or consciousness? Are they stochastic parrots that inevitably end in mediocrity? It may be that the answers to these questions matter less than the mental models we bring to our use of LLMs. While pondering them, we've each developed our own set of mental models that help us use LLMs more effectively.
LLMs are trained on massive amounts of data, and while they can sometimes reproduce it, they can also create information that is distinct from that training data while still feeling thematically aligned. We also see that large, general-purpose LLMs often feel rather middle-of-the-road (some might even say mediocre). Fine-tuning can better align an LLM to a specific task, but it also tends to change the feel of that LLM's output.
One way to think about LLMs is as upscalers of vibes. In image processing, upscaling is the process of taking digital imagery and magnifying it without making it all pixelated. Upscaling adds detail that may well never have existed in the original and yet seems coherent with the source material.
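The upscaling analogy can be made concrete with a toy sketch (illustrative only, not how production upscalers work): nearest-neighbor repetition keeps an image blocky, while even the simplest linear interpolation invents plausible in-between values that never existed in the source.

```python
# Toy illustration of upscaling: repetition stays "pixelated", while
# interpolation invents smooth detail that was never in the original --
# the same move an LLM makes when it fills in a vibe.

def upscale_nearest(img, factor):
    """Magnify a 2D grid by repeating each pixel `factor` times."""
    return [
        [img[r // factor][c // factor]
         for c in range(len(img[0]) * factor)]
        for r in range(len(img) * factor)
    ]

def upscale_linear(row, factor):
    """Linearly interpolate a 1D row of values, inventing new in-between detail."""
    out = []
    for i in range(len(row) - 1):
        for step in range(factor):
            t = step / factor
            out.append(row[i] * (1 - t) + row[i + 1] * t)
    out.append(row[-1])
    return out

pixels = [[0, 100], [100, 0]]
print(upscale_nearest(pixels, 2))   # blocky: each value just repeated
print(upscale_linear([0, 100], 4))  # smooth: new values 25.0, 50.0, 75.0 appear
```

The interpolated values feel coherent with the source even though they were never measured; that is the sense in which an upscaler (or an LLM) adds detail.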
Similarly, LLMs act as vibe amplifiers. The training of an LLM takes a lot of detailed information and captures the common essence of it: the vibe. If the information set is very large, the LLM will have a broad repertoire to vibe on, but it will necessarily flatten this vibe to the lowest common denominator. An LLM trained or fine-tuned on a more specialized set of data will conform more to the vibes of that data set. Prompting can also push the vibes, although not as far.
We can employ this idea when deciding how to use LLMs most effectively. When it comes to style, asking an LLM to create something that sounds like it's in our unique style is quite a big lift. Even if our own material was in the training set, it's a tiny drop in the ocean of vibes that went into the LLM. We'll probably need to distill our style into specific descriptors… and even then the results will likely feel a bit off.
However, if we are trying to get it to create something that sounds like a mainstream news article, that's very aligned with the common vibe. A prompt like "Write a news article about X" may be sufficient to get the job done.
We can also apply this idea of vibe flattening to the sort of reasoning LLMs tend to be good at. We've found that even with a sophisticated prompt, single-prompt responses tend to be somewhat shallow. They look more like System 1 thinking: quick, low effort, and leaning toward the average. They capture the vibe of the answer space but don't go deeper. Pushing an LLM toward more deliberate, complex responses generally requires a little human judgment in the form of multiple rounds of prompting that steer the conversation away from the flat, easy answer.
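The multi-round steering pattern can be sketched as a conversation loop. The `model` function below is a stand-in stub, not a real LLM API: it only exists to show the structure of ask, critique the flat first answer, ask again.

```python
# Sketch of multi-round steering. `model` is a placeholder stub standing in
# for an LLM call; it returns canned replies keyed on conversation length.

def model(messages):
    """Placeholder for an LLM call -- NOT a real API."""
    canned = {
        1: "A generic, middle-of-the-road answer.",
        3: "A more specific answer that addresses the critique.",
    }
    return canned[len(messages)]

messages = [{"role": "user", "content": "Explain X."}]
first = model(messages)                      # shallow, System 1-style reply
messages.append({"role": "assistant", "content": first})
messages.append({"role": "user",
                 "content": "That's too generic. What about the edge cases?"})
second = model(messages)                     # steered away from the flat answer
print(first)
print(second)
```

The human judgment lives in the critique turn: deciding what was too flat about the first reply and naming it is what moves the model off the average.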
What makes the vibe upscaler lens useful is that vibes are both frustratingly vague and immediately identifiable: "I'll know it when I see it." By thinking about LLMs as vibe upscalers, we can build an intuition for how to integrate LLMs into our tools for thought.
Signposts
Clues that point to where our changing world might lead us.
An author was granted a limited copyright for a book that ChatGPT helped write
One American author relied heavily on ChatGPT to help write a novel based on her life. When she applied for a copyright, she was initially denied due to a US Copyright Office policy that bans "machine-generated elements." But she appealed, arguing that ChatGPT was a necessary accommodation given her disabilities; she also provided evidence that she wrote intricate prompts and heavily edited almost all the lines the LLM wrote. The government agreed and granted her a copyright, but with a catch: she wasn't recognized as the author of the text but rather as the creator of the "selection, coordination, and arrangement of text generated by artificial intelligence." That means nobody can copy the work without her permission, but the sentences themselves aren't copyrighted and could be individually copied or rearranged.
Builders broke ground on the US's first high-speed rail project
This week marked the groundbreaking ceremony for the Brightline West project, a privately run high-speed rail route between Los Angeles and Las Vegas. This marks the United States' first high-speed rail project; trains will run at an impressive 186 mph (299 km/h), on par with Japanese bullet trains. When finally completed, the project will shuttle passengers between LA and Vegas in just over two hours, a huge time savings compared to the four-hour drive (and that's without traffic).
An auto-replying AI bot is promoting products on Reddit posts to game SEO
These days, search engines and humans alike have come to trust Reddit as a rare bastion of truly human-generated content on the web. At the same time, a new AI product will automatically watch subreddits and write AI-generated replies that promote customers' products under relevant threads. Examples shown on their site include advertising travel insurance in response to a question about going to Ireland and plugging an AI coding assistant on a thread about mobile app development. (While the text is generated with AI, the posts are made by purchased Reddit accounts that are well "aged" and have plenty of "karma," so they look real.)
Canada is giving PhDs and postdocs their biggest raise in 20 years
Canada's latest federal budget will allocate over C$5 billion in academic stipends over the coming years, which advocates call "the largest investment in graduate students and postdocs in over 21 years." Master's students who are eligible for federal funding will see their stipends increase from C$17,500 per year to C$27,000; PhD candidates' stipends will rise to a flat C$40,000 per year from their previous range of C$20,000 to C$35,000; and postdoctoral fellows' salaries will jump from C$45,000 to C$70,000. The budget will also add over 1,700 new scholarships and fellowships while allocating billions for basic research and AI research.
Worth your time
Some especially insightful pieces we've read, watched, and listened to recently.
To Accelerate Biosphere Science, Reconnect Three Scientific Cultures (Santa Fe Institute) – Argues that researchers who study Earth's biosphere, and scientists in general, need to unite ideas from three distinct "scientific cultures": variance (naming things and observing details), exactitude (refining models by gathering data), and coarse-graining (finding generalities and underlying principles). For instance, evolution started with variance (e.g., Darwin observing birds), progressed to finding coarse-grained principles, and evolved into an exacting field of science.
Did the Makers of Devin AI Lie About Their Capabilities? (Machine Learning Made Simple) – A critical analysis of a viral demo that showed an "AI software engineer" supposedly solving a freelance coding assignment autonomously. Argues that the demo cherry-picked situations where an AI bot would be good at coding (such as competitive programming exercises, which tend to have clear problem statements and well-defined inputs and outputs), plus what the author calls bait-and-switch tactics: Devin was credited with solving impressive-sounding problems, like auto-finding and fixing bugs, when it really solved more mundane ones, like writing a test case given a detailed specification.
The Bubble Sort Curve (Lines That Connect) – An interesting illustration of how to use algebra, logic, and first-principles analysis to get a handle on a seemingly intractable problem: what's the formula for the strange curve you see when you're halfway through a bubble sort?
Digital Detritus: Unintended Consequences of Open Source Sustainability Platforms (Phylum) – Tells the story of how Tea, a crypto protocol that sought to reward coders for their open-source contributions, introduced financial incentives that led to fairly predictable bad behavior: package registries were flooded with spammy, low-effort libraries as people tried to "farm" Tea tokens. Earlier, people had been submitting pull requests to legitimate OSS projects that would make the Tea protocol think they were the developers and thus start granting them rewards.
Board game for your shelf
A game that will help you dip your toes into systems thinking or explore its broader applications.
This week, we recommend Golem, published by Cranio and designed by Simone Luciani, Virginio Gigli, and Flaminia Brasini (2021).
In Golem, Luciani, Gigli, and Brasini take the nuanced historical context of the legend of Rabbi Loew's creation of a golem in Prague and create a game that both respects its history and provides some interesting systems-thinking insights.
In many games, a valid strategy is to invest in one highly powerful mechanism and push it as far as it can go. Golem, on the other hand, is all about balance. You don't want to let your golems get too far ahead of you (in power, speed, capacity, etc.) or you risk losing control. You are limited in the number of golems you can have, so you want to build in a kill switch so you can deprecate golems in a timely manner and then rebuild better ones based on the lessons you've learned. Many small, slow golems working in tandem (possibly in an antagonistic fashion so that they balance each other) are likely to be better than a single powerful golem… especially if it escapes your control.
In a world where bigger, better, and faster often seems like the best way to win, we appreciate how Golem encourages an approach where balance, care, and the ability to change one's mind are key to winning.
© 2024 The FLUX Collective. All rights reserved. Questions? Contact flux-collective@googlegroups.com.