[Me & X] Why practices take so long to co-evolve with AI

X: Why do you think that practices will take so long to co-evolve with AI?

Me: We're a couple of years into an 8-15 year window. That's not a long time, you're talking about social systems. The practices have to emerge and be accepted. We haven't even decided what the flag will be.

X: Flag?

Me: Think "DevOps". Practices have to emerge and coalesce. This is all normal and it's fairly quick compared to management practices.

X: How long does that take?

Me: 30-50 years. Take Explorers, Villagers and Town Planners (EVTP) and its previous namings. That has taken 18 years so far. Back in 2015 I suggested it would be another decade before the model started becoming more noticeable, and since then we've had the timely publication of Susanne Kaiser's "Architecture for Flow". It'll take another decade to become occasionally used.

X: AI isn't helping to accelerate this?

Me: Not in the way you probably think. These changes don't move linearly but happen in fits and starts. So, right now you have a bunch of CIOs getting rid of engineers because of a fundamental misunderstanding of what the impact of AI will be. Yes, it is being driven by personal incentives (think bonuses, share options) based on reducing operational costs and improving profits. The smart ones know this, and so they'll take their ill-gotten booty and LinkedIn claims of "successfully reduced the operating cost by $X million" and bugger off into the next organisation before the trouble hits.

X: And that helps how?

Me: It allows for a new set of CIOs to come and talk about how they "Spearheaded the strategic revitalisation of a failing engineering department" aka introduced new practices.

X: Does it have to be like this? That seems like a lot of pointless pain?

Me: That's the whole point of EVTP but alas it needs high degrees of situational awareness to work. Most organisations run on stories instead; they lack the practices needed.

X: I was using Claude Opus 4.6 to write a planning document and it spontaneously included Wardley mapping in the plan design without being asked!

Me: Well, that's interesting. If these AI systems start demonstrating situational awareness whilst human execs fail then ... well, for whom the bell tolls.

X: ?

Me: Most execs and organisations work on little to no situational awareness. They are either "chancers" (i.e. success is fairly random) or "believers" (i.e. focused on belief). This is fine if you're competing against others like yourself (Red Queen Effect). In 2012, I ran a study on awareness and action in a niche hotbed of competition: open source tech in Silicon Valley. I examined the impact on market cap over a 7-year period. Awareness made a huge difference, but this niche is not representative of the wider industry. If you're talking about AIs demonstrating situational awareness, then many execs are toast. Consider replacing execs with AI; you should get better results.

[Image: Awareness vs Action study of 160 hi-tech companies in Silicon Valley, 2012 - showing Thinkers, Players, Chancers and Believers]

X: Culture?

Me: Lol. I'm sure plenty of excuses will be found.

Originally published on LinkedIn.