[Me & X] LLMs as non-kinetic warfare

On the question of LLMs being non-kinetic forms of warfare ...

X : It seems to me you are implicitly adopting a power-only view i.e.:
- Ideas are instruments of domination
- Culture is weaponized influence
- Reason is an effect of power, not an independent norm

For me, that aligns closely with certain post-modern thinkers ... who often treat knowledge, language, and reason primarily as effects of power relations ... and for whom exposure to ideas is primarily coercive and reason itself is just an instrument of domination.

Charles Sanders Peirce's position is explicitly a rejection of that premise.

He insists that power can impose belief, but it cannot justify it. And, most importantly, that inquiry requires exposure to disturbing ideas.

Peirce admits that closed systems can be comforting and that self-insulation can be psychologically effective. But the moment a belief is protected from challenge, it ceases to be inquiry.

Inquiry requires the risk of being changed.

Me : Not quite. The problem with "reason is an effect of power, not an independent norm" is that the supposed independent norm is shaped by the tools, medium, and language through which we reason.

If any one of these is captured, escape is still possible. I can reason my way out of a system in which language is captured. This is a flaw in Harari's argument at the WEF, which treats linguistic capture as sufficient.

What I cannot do is reason my way out of a system in which my reasoning has been captured.

Peirce is right that power cannot justify belief and that inquiry requires exposure to destabilising ideas. Where I depart from him is on whether inquiry can remain outside capture. By changing the tools, the language, and the medium, we move from influencing beliefs to enclosing cognition.

These systems do not merely persuade; they become the substrate through which reasoning occurs. In such a world, belief is embedded not through power, but through necessity. You have no other way to reason.

This was anticipated in Brave New World. Large language models function as a pill-less SOMA: control is ambient, not enforced, and any salvation must come from outside the system itself. This is the real danger of LLMs/GPTs, and its effects can already be seen in the atrophy of the skills we need to reason.

Diagram on LLMs as non-kinetic warfare

=== Addendum

I'm encouraged by the UK Gov service standard requirements to "understand how the technology they use works" and to "avoid situations where your technology choices might reduce the reliability of information given to users or decisions made about them", which force comprehension - https://lnkd.in/eQqY_eeq

As mentioned in my post on theocracy (see comments), one partial solution to capture is diversity, i.e. adopt US, Chinese and other national AIs and force them to debate solutions, with the human remaining as judge.
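The debate-with-human-judge idea can be sketched as a simple protocol. A minimal sketch in Python, under stated assumptions: each "model" is just a callable from prompt to reply (the stub models and their names here are hypothetical stand-ins, not real national AI endpoints), and the output is a transcript handed to a human judge rather than a single machine verdict.

```python
from typing import Callable, Dict, List

# Hypothetical: any callable mapping a prompt string to a reply string.
Model = Callable[[str], str]

def debate(models: Dict[str, Model], question: str, rounds: int = 2) -> List[str]:
    """Run a multi-model debate and return the transcript for a human judge.

    Each round, every model sees the question plus all prior statements,
    so no single system's framing goes unchallenged.
    """
    transcript: List[str] = []
    for r in range(rounds):
        for name, model in models.items():
            context = question + "\n" + "\n".join(transcript)
            reply = model(context)
            transcript.append(f"[round {r + 1}] {name}: {reply}")
    return transcript

# Illustrative stubs in place of real national AI systems.
stubs: Dict[str, Model] = {
    "ai_a": lambda prompt: "Position A",
    "ai_b": lambda prompt: "Position B",
}

transcript = debate(stubs, "How should X be regulated?", rounds=1)
for line in transcript:
    print(line)  # the human judge reads the whole transcript, not one answer
```

The design point is that judgment stays outside the systems being compared: the code never selects a winner, it only exposes disagreement for a person to adjudicate.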

Also, a UK - China AI collaboration would be welcome news ... https://lnkd.in/e6k6hwJ5

Originally published on LinkedIn.