As someone who studied with Alexander, and used patterns in many real-life projects, I'm not so sure about this.
Design patterns are specifically not derived from observation - they are "meta-design".
When we write patterns, we're not looking at the world to find recognisable patterns (which I agree is what an LLM would do).
We're looking at the world for recognisable conditions - then devising patterns which (hopefully) describe how to work for beneficial resolution of those conditions (while encouraging us to be aware of the wider and narrower contexts engaged).
An AI can find patterns in the training set (I'm pretty sure that is in fact what the LLM training approach does - with wide-scope patterns like "story", "research paper", "letter", through narrower scopes like "paragraph", "sequence" etc., on down to tokens).
But those patterns will be "as observed" - not "designed for".
The "refining" that goes on seems to be about weeding out the "anti-patterns" present in the training data.
As to AI helping, well we're at the 'alignment issue' - because Alexander structured the complex system map which 'A Pattern Language' sets out in a specific manner - with the 'Emergent desirables' first (Towns), then the 'Ambitious but achievables' (Buildings), then the 'Doables' (Construction).
To write good patterns, then - at least if we pay attention to Alexander, we should first describe the emergent conditions we wish to support, then enquire as to the conditions which might support that emergence, then look for the forces at play and how to resolve them to support the wider whole.
So far, we don't know how to tell AI systems about the emergent outcomes we want - and they are certainly not adequately represented in any training set....
You make lots of good points here. The slightly different angle I would put on it is that using AI isn't a way to replace the need to define things like the emergent outcomes we want. It is a way to accelerate the exploration process. Alexander observed, designed, and built thousands of places to develop his patterns. But those patterns are still a snapshot in time. By using AI to aid our exploration we can supplement our real data sets with synthetic data and explore more broadly, resituating and expanding patterns as our contexts and goals change. (Which is especially useful when you're in a part of the world that makes Alexander's iterative, incremental way of building literally illegal.)
Here's a concrete example. When I was building my home, it became clear that a lot of what allows for good room shape and good light is at odds with what makes most sense in a climate-change-impacted world where every efficiency is paramount. Multistory buildings with a small and compact footprint are energy efficient and support higher population density... but they sure make it hard to build wings and courtyards and lots of windows. Being able to use AI to explore ways to combine patterns derived from centuries of best practices with the reality of a tall rectangular prism would have been a real boon.