Discussion about this post

Dil Green:

As someone who studied with Alexander, and used patterns in many real-life projects, I'm not so sure about this.

Design patterns are specifically not derived from observation - they are ‘meta-design’.

When we write patterns, we’re not looking at the world to find recognisable patterns (which I agree is what an LLM would do).

We’re looking at the world for recognisable conditions - then devising patterns which (hopefully) describe how to work towards a beneficial resolution of those conditions (while encouraging us to be aware of the wider and narrower contexts engaged).

An AI can find patterns in the training set (I’m pretty sure that is in fact what the LLM training approach does - with wide-scope patterns like ‘story’, ‘research paper’, ‘letter’, through narrower scopes like ‘paragraph’ and ‘sequence’, on down to tokens).

But those patterns will be ‘as observed’ - not ‘designed for’.

The business of ‘refining’ that goes on seems to be about weeding out the ‘anti-patterns’ which are there in the training data.

As to AI helping, well, here we run into the 'alignment issue' - because Alexander structured the complex-system map which 'A Pattern Language' sets out in a specific manner: the 'Emergent desirables' first (Towns), then the 'Ambitious but achievables' (Buildings), then the 'Doables' (Construction).

To write good patterns, then - at least if we pay attention to Alexander - we should first describe the emergent conditions we wish to support, then enquire into the conditions which might support that emergence, then look for the forces at play and how to resolve them in support of the wider whole.

So far, we don't know how to tell AI systems about the emergent outcomes we want - and they are certainly not adequately represented in any training set....
