The Science in the Fiction

The Perils of False Reasoning is a different take on the concepts that appear in The Science in the Fiction project.  What makes the concepts interesting is that they were featured in scientific articles some 40 years after I created the series.

To illustrate, wording from the articles is compared with passages from books published years earlier.

Unlike the other concepts, which came from a knowing, this one came about as a result of my approach to solving a creative challenge.

I used logic and mapped it to the creative need.

The Challenge

Light Beings.

Playing a central role in the MA series, this unique energy species required me to consider a variety of factors when writing them into the story.

I needed to simultaneously draw parallels to humans, making them relatable to readers, while also highlighting differences so they weren’t boring.

Though I draw from education, experience, and imagination when creating my work, for this concept I drew from education and experience.

Imagination plays a critical role in problem solving, including how to approach writing software code, something I have done along my life’s path and tapped into here.

Seriously?

During a scene of some importance, I used an analogy to explain a rather complex point.

I drew from my days in tech to do so.

Imagine my surprise when, upon reading a tech article about AI, I found my plot device staring me in the face.

Six years after I wrote the story.

At this point in the game I probably shouldn’t be surprised.

But I am.

Validation

I published the Metatron’s Army series beginning in 2016.  Eight years and 13 books later, I came across a tech article that grabbed my attention because it mirrored a plot device I had designed drawing from my years working in tech.

To better understand the context of the parallel, I’ve included multiple passages from the article and from my book Promotion below.

From “Apple Engineers Show How Flimsy AI ‘Reasoning’ Can Be,” published by Wired on October 15, 2024 (link as of October 24, 2024):

The fact that such small changes lead to such variable results suggests to the researchers that these models are not doing any “formal” reasoning but are instead “attempt[ing] to perform a kind of in-distribution pattern-matching ….”

These massive drops in accuracy highlight the inherent limits in using simple “pattern matching” to “convert statements to operations without truly understanding their meaning,” the researchers write.

Other recent papers have similarly suggested that LLMs don’t actually perform formal reasoning and instead mimic it with probabilistic pattern-matching of the closest similar data seen in their vast training sets.

We’re likely seeing a similar “illusion of understanding” with AI’s latest “reasoning” models, and seeing how that illusion can break when the model runs into unexpected situations.
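
Before turning to the book passages, here is the distinction in toy code form, for readers who would rather see it than read about it.  The names and the one-entry lookup table are my own invention for illustration; nothing below comes from the article.

    def formal_reasoning(total: int, eaten: int) -> int:
        """Actually performs the operation the words describe."""
        return total - eaten

    # "In-distribution pattern-matching": answer by recalling the most
    # similar question seen before, without understanding the new one.
    memorized = {
        "Sam has 10 apples and eats 3. How many are left?": 7,
    }

    def pattern_match(question: str) -> int:
        # Toy similarity: count words shared with each memorized question.
        def overlap(a: str, b: str) -> int:
            return len(set(a.lower().split()) & set(b.lower().split()))
        closest = max(memorized, key=lambda q: overlap(q, question))
        return memorized[closest]

    new_question = "Sam has 12 apples and eats 3. How many are left?"
    print(formal_reasoning(12, 3))      # 9 -- the numbers are actually used
    print(pattern_match(new_question))  # 7 -- the surface form was matched instead

The matcher echoes the answer to the closest question it has memorized; the reasoner actually uses the numbers.  Change one small detail and only one of them notices.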

From Promotion:

“You aren’t human.  Picking up on what is inside of me doesn’t translate to understanding.”

“It’s knowledge but it’s a data point.  … that doesn’t mean he could ever understand  … if you remember there were plenty of misunderstandings.”

“As to what it means to reroute your thoughts and emotional responses,” he continued, “it works similar to computer code.  I filter words, phrases, and emotional frequency responses.”

“I used the time we’ve been talking to map your responses to a variety of stimuli.”

“We do not read thoughts.  We detect electrical synapses associated with emotional response.  All thoughts fit into categories.  Most of them elicit emotional responses of one type or another.  We map those to a likely source.”

“Sounds like there’s a lot of room for error.”

“Which is why we do not assume, though it can be difficult not to.”

I have a feeling that last will prove challenging.

For some more than others.

Never assume.
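
Since the plot device came from my years writing code, here is the analogy in rough code form: detect a response, map it to a likely source, and decline to guess when the signal is ambiguous.  The categories and numbers are invented for illustration; they appear nowhere in the books.

    # Map a detected response to candidate emotional sources with scores.
    RESPONSE_MAP = {
        "elevated-frequency": {"fear": 0.45, "excitement": 0.40, "anger": 0.15},
        "flat-frequency":     {"calm": 0.85, "suppression": 0.15},
    }

    def likely_source(signal: str, threshold: float = 0.6):
        """Return the most likely source, or None: never assume."""
        candidates = RESPONSE_MAP.get(signal, {})
        if not candidates:
            return None
        source, score = max(candidates.items(), key=lambda kv: kv[1])
        # "A lot of room for error": only answer when one source is clearly ahead.
        return source if score >= threshold else None

    print(likely_source("flat-frequency"))      # 'calm' -- a confident mapping
    print(likely_source("elevated-frequency"))  # None -- ambiguous, so do not assume

The threshold is the “never assume” rule: when no source is clearly ahead, the only honest answer is none at all.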