A moment of reflection

On vibe coding, context and open source

In 2023, I wrote a short series of posts about the AI / vibe coding space. These were subjects I have covered in various talks over the last decade, but I'd like to take a moment to reflect.

The first post (see background) was on the importance of conversational programming. Whilst this had been a hot topic since 2016, I would constantly repeat "It's not widely talked about yet, but it will be". In 2023, we were "still waiting for those conversational programming environments to fully form but we're getting close" i.e. we were right on the edge, the sub-components were there, and the first examples had been in circulation, from Aleksandar Simovic's demo at AWS re:Invent to GitHub's Copilot.

However, as I often remind people, these changes require CAST: concept, attitude, suitability and technology. The concept of conversational programming had been there for a long time, since the days of Graphical Conversation Theory and Architecture-by-Yourself by Nicholas Negroponte. The attitude was there: lots of people felt frustrated at how difficult it was to get something developed. The technology was increasingly there: LLMs/GPTs were rapidly becoming commodity-like. We just needed some further improvements (suitability) and the spark.

That spark came when Andrej Karpathy coined the term "vibe coding" in 2025. The entire field exploded onto the scene ... along with the inevitable and tiresome myths (listed in that post). It has been glorious to watch though, a re-run of previous cycles such as serverless and cloud ... all the way back to the industrial revolution. Yes, I had the tedious trolls saying it was different this time, but they always say that and then quietly disappear later. The only real difference is the blast radius of the change, i.e. how many value chains it impacts. Vibe coding is big, maybe not as big as SpimeScript will be (or what you might call cyber digital), but that's a future topic, not for now.

There is, however, still a long way to go. For example, practices need to co-evolve and are still emerging. Which brings me onto the second post, on the importance of context, or what I called "Maps as code". There are a lot of thorny issues here, including human reasoning, the chain of comprehension, the embedding of values in a system (which has implications for sovereignty) and the general lack of situational awareness.

As I said in that post:

"Github CoPilot are admirable but probably in the wrong space. I think the breathless horde of consultants prognosticating the replacement of programmers with LLMs have fallen into a trap, not dissimilar to their claims in 2011 of cloud saving you money"

The problem for me is the medium. The issue is code as text, and how we visualise the space or, more importantly, how we fail to visualise the problem space. That is not a problem with AI but instead:

"It doesn't matter if it's written or spoken, the medium is still the word, it is text. The power of conversational programming will only be truly unleashed if we can escape from the confines of text (where syntax, styles and rules dominate) and into a world of maps (where things, relationships and context matters)."

This is one of the reasons why I'm writing Rewilding Software Engineering with Tudor Girba. I cannot emphasise enough the importance of examples (in development) and the visualisation of context. Our current toolsets are flawed in my opinion; most lack any contextual view because they are not contextual to the problem you are trying to solve. In terms of context, I've seen modest improvements over the years; at least some are talking about it as a big thing, but most get bogged down in knowledge graphs and the idea of modifying standardised tools with them. They are missing the forest for the trees.

The third and final post was on the importance of open source in this space. I won't hide that I was severely disappointed with where we were in 2023, especially with the focus on guard rails and the AI safety summit. My anger spilt over in that post:

"Laughably, the UK led an AI safety summit last week. I say laugh, but if you're in the UK then you might want to cry, especially given so many voices were ignored. You would have thought that handing over national sovereignty in the landscape of technology to a few would be THE major safety issue. Apparently not. The 'great and good', and lobbyists talking up 'responsible AI' seemed focused on saving us from some future mythical frontier AI. I'm guessing they were all traumatised as younger children by James Cameron."

To be blunt, not much seemed to be improving. We've had the weak Open Source AI Definition from the OSI (where data is not considered part of the set of symbolic instructions and caveats are given to sufficiently detailed information), but set against this we have fortunately seen France's and, more importantly, China's moves into this space. I am, however, encouraged by the recent announcements from Kanishka Narayan MP.

Finally, the UK might be getting into the game.

Overall, the progress has not been too bad. In terms of context, it's still far slower than I would have hoped by now, but when it comes to practices, these normally take 5 to 8 years to emerge and stabilise.

Background

Jan, 2023 — What is conversational programming? — https://medium.com/@swardley/why-the-fuss-about-conversational-programming-60c8d1908237

May, 2023 — Maps as Code — https://medium.com/mapai/why-the-fuss-about-conversational-programming-70a8b7ca0d2b

Nov, 2023 — Why open source AI matters — https://medium.com/mapai/why-open-source-ai-matters-a46a7d23ad0e

Rewilding Software Engineering — https://moldabledevelopment.com/

UK AI Minister Kanishka Narayan MP supports OpenUK and open source — https://www.linkedin.com/posts/openuktechnology_indiaaiimpactsummit2026-indiaai-opensourceai-activity-7430266584134377473-Yk0F

Originally published on Medium.