[Me & X] AI for Prediction
X : Do you use AI for prediction?
Me : No. I do use various AI-based tools (not LLMs/GPTs) for statistical analysis of population studies, but not for deciding what should be examined.
X : But if you ask ChatGPT ...
Me : ... you get a lot of stuff that is fairly obvious to any expert in the field, and occasionally a hallucination or an unexpected correlation that's worth exploring. The issue is the training data. The really interesting ideas are not found in blog posts or analyst reports but mostly in the poorly formed ideas in the minds of many people, most of whom don't even realise they might be important. The signal is just too weak. I use a combination of maps (which help people formulate the poorly formed, in which case you listen for the 'ah-ha' moment between people), research groups and oodles of interviews to find that stuff.
X : An AI won't help?
Me : With statistical analysis it can, but I suspect you're talking about LLMs/GPTs, in which case the hallucinations and unexpected correlations can turn up a hypothesis to explore. But you have to do the exploration.
X : But what if you get the AI to produce a map?
Me : You get a possible first draft that misses lots of stuff that will really matter but tells you a lot that should be obvious to any mapper in that industry.
X : But ...
Me : Look, until we've wired up everybody's head and found a way to train these systems on poorly formed concepts, to simulate 'ah-ha' moments between many people and then use population studies to find the statistically relevant signals ... all you're going to get is cohesive arguments built around signals in text.
X : I don't see why humans matter so much.
Me : They matter because those poorly formed concepts arise through interaction with the real world rather than from past training data. Exploration is the pursuit of truth and coherence. We can never find truth, so instead we look to falsify the models we have through observation ... that's what we mean by the pursuit of truth. So we build a coherent model, use it until reality disagrees, and then we go 'ah-ha'. After this, we look for a new coherent model. When we find it, we go 'ah-ha' again. Stumbling from one 'ah-ha' to another, the cycle repeats, over and over. Interaction with reality is not optional.
X : Truth is coherence!
Me : ... and that's how society falls.
X : But if we could model the universe ...
Me : ... then the model must contain itself, and that submodel contains itself, and so on to infinity. This creates an infinite set, which is only possible if information is infinite. If that were possible, then the likelihood of this being reality becomes zero.
X : What does that mean?
Me : It leads you to the only truth you could possibly prove, which is that reality is an illusion. Fortunately, uncertainty principles stop us from ever finding this out. Hence we are left with the best we can do ... 'ah-ha' moments.
X : What about AGI?
Me : Wiring up people's heads will happen sooner.
Originally published on LinkedIn.
