3 Comments

As someone who studied with Alexander, and used patterns in many real-life projects, I'm not so sure about this.

Design patterns are specifically not derived from observation - they are 'meta-design'.

When we write patterns, we're not looking at the world to find recognisable patterns (which I agree is what an LLM would do).

We're looking at the world for recognisable conditions - then devising patterns which (hopefully) describe how to work for beneficial resolution of those conditions (while encouraging us to be aware of the wider and narrower contexts engaged).

An AI can find patterns in the training set (I'm pretty sure that is in fact what the LLM training approach does - with wide-scope patterns like 'story', 'research paper', 'letter', through narrower scopes like 'paragraph', 'sequence', etc., on down to tokens).

But those patterns will be 'as observed' - not 'designed for'.

The business of 'refining' which goes on seems to be about weeding out the 'anti-patterns' that are there in the training data.

As to AI helping, well, we're at the 'alignment issue', because Alexander structured the complex system map which 'A Pattern Language' sets out in a specific manner: the 'Emergent desirables' first (Towns), then the 'Ambitious but achievables' (Buildings), then the 'Doables' (Construction).

To write good patterns, then (at least if we pay attention to Alexander), we should first describe the emergent conditions we wish to support, then enquire into the conditions that might support that emergence, then look for the forces at play and how to resolve them in support of the wider whole.
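
To make that three-step shape concrete, here is a minimal sketch of how a pattern's structure might be written down - the field names are mine, not Alexander's, and 'Light on Two Sides of Every Room' is just borrowed as an example:

```python
from dataclasses import dataclass, field

# Minimal sketch of the three-step structure described above.
# Field names are illustrative, not Alexander's own terminology.

@dataclass
class Pattern:
    name: str                          # e.g. "Light on Two Sides of Every Room"
    emergent_condition: str            # the wider quality we wish to support
    supporting_conditions: list[str]   # conditions that might support that emergence
    forces: list[str]                  # the forces at play in the context
    resolution: str                    # how the forces are resolved for the wider whole
    wider: list["Pattern"] = field(default_factory=list)     # links up the scale, towards 'Towns'
    narrower: list["Pattern"] = field(default_factory=list)  # links down, towards 'Construction'
```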

So far, we don't know how to tell AI systems about the emergent outcomes we want - and they are certainly not adequately represented in any training set....

You make lots of good points here. The slightly different angle I would put on it is that using AI isn't a way to replace the need to define things like the emergent outcomes we want. It is a way to accelerate the exploration process. Alexander observed, designed, and built thousands of places to develop his patterns. But those patterns are still a snapshot in time. By using AI to aid our exploration, we can supplement our real data sets with synthetic data and explore more broadly, resituating and expanding patterns as our contexts and goals change. (Which is especially useful when you're in a part of the world that makes Alexander's iterative, incremental way of building literally illegal.)

Here's a concrete example. When I was building my home, it became clear that a lot of what allows for good room shape and good light is at odds with what makes most sense in a climate-change-impacted world where every efficiency is paramount. Multistory buildings with a small and compact footprint are energy efficient and support higher population density... but they sure make it hard to build wings and courtyards and lots of windows. Being able to use AI to explore ways to combine patterns derived from centuries of best practices with the reality of a tall rectangular prism would have been a real boon.
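
To be clear about the kind of exploration I mean, here's only a toy sketch: the pattern names echo Alexander's, but the scores and the footprint budget are numbers I made up, standing in for anything a real data set or an AI model would provide. It just enumerates combinations of patterns and keeps those that fit a compact footprint:

```python
from itertools import combinations

# Toy sketch of exploring pattern combinations under a footprint constraint.
# Pattern names echo Alexander's; scores and budget are invented for illustration.
PATTERNS = {
    "wings_of_light":      {"daylight": 3, "footprint_cost": 3},
    "courtyards":          {"daylight": 3, "footprint_cost": 3},
    "light_on_two_sides":  {"daylight": 2, "footprint_cost": 1},
    "compact_multistory":  {"daylight": -1, "footprint_cost": -2},
}

def score(combo, footprint_budget=4):
    """Return the total daylight benefit, or None if the combo busts the footprint budget."""
    daylight = sum(PATTERNS[p]["daylight"] for p in combo)
    cost = sum(PATTERNS[p]["footprint_cost"] for p in combo)
    return daylight if cost <= footprint_budget else None

candidates = []
for r in range(1, len(PATTERNS) + 1):
    for combo in combinations(PATTERNS, r):
        s = score(combo)
        if s is not None:
            candidates.append((s, combo))

# Print the best few trade-offs found.
for s, combo in sorted(candidates, reverse=True)[:5]:
    print(s, combo)
```

An AI-assisted version would replace those hand-invented scores with something learned from real and synthetic examples - which is the acceleration I was describing above.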
