[Opinion] The circular logic of metrics

I did enjoy this post by Pavel Samsonov on "The circular logic of our metrics" - https://lnkd.in/eg9RZpdH - particularly as it led me to the line "our tools shape how we think" by Frank Elavsky - https://lnkd.in/ezpVAmt9

The one adjustment I would make is that the way we think is shaped by our tools, medium, and language. If any one of these is captured, escape is still possible. For example, I can reason my way out of a system in which language is captured. This is a flaw in Harari's argument at the WEF, which treats linguistic capture as sufficient.

What I cannot do is reason my way out of a system in which all three, and hence my reasoning itself, have been captured. This was brilliantly exposed in Brave New World by Aldous Huxley. In this light, LLMs / GPTs can function as a pill-less soma: control is ambient, not enforced, and any salvation must come from outside the system itself. This is the real danger of LLM/GPTs, and their effects will be seen in the atrophy of the skills we need to reason and comprehend.

Alas, productivity is seductive, especially when combined with the argument that no-one understands the "system". Whilst superficially true, the difference lies in the chain of comprehension - https://lnkd.in/e3tRQHr7

I would caution all to think of LLM/GPTs as non-kinetic forms of warfare. That is not to say we shouldn't use them, but we should be mindful that we have not yet developed the practices to use them safely. Without these, we are likely to create new theocracies of AI - https://lnkd.in/exJVmNvD

Back in 2023, when I discussed "Why open source AI matters" - https://lnkd.in/eJ9kQ-Dc - in the context of conversational programming (or what most people would call vibe coding today), the UK did have a chance to tackle this problem but alas, we were treated to a laughable theatre of safety. I'm glad however to see that this tide might be turning, finally - https://lnkd.in/eqYf_ACi and Kanishka Narayan MP should be praised for this.

I know I can sound like a stuck record, but these concerns have been a constant bugbear of mine since we first identified them in a mapping effort at the DVLA in 2015. Back then we realised the importance of embedded values in the simulation models (what you would call training data) in future intelligent agents - https://lnkd.in/eku2X_Ea

Of course, it's more than just values; it's reasoning that is at risk. The sovereignty issues are profound (and no, it has nothing to do with where data centres are located). These concerns are also behind my writings with Tudor Girba on Rewilding Software Engineering and the importance of tool-driven development and understanding - https://lnkd.in/ek9MF7P3

The growth of LLM/GPTs is unavoidable (Red Queen effect), but if we reclaim our tools, we might still gain the benefits of LLM/GPTs whilst reclaiming our freedom and our reasoning. In my opinion, these are the practices we need.

Originally published on LinkedIn.