“Preface to the First English Edition, 1959,” from “The Logic of Scientific Discovery,” by Karl Popper. On artificial model languages:

Karl Popper is regarded by some as one of the 20th century’s most significant philosophers of science. This book was originally written in German in 1934, long before any of this recent artificial intelligence development, so it was never intended to address AI directly. Still, I found the opening section of the book, linked above, to contain some interesting assertions. It offers several compelling arguments against the efficacy of language analysis alone as a means of problem solving, and I think that, framed in the context of AI, this amounts to something of a rejection of the more optimistic claims about GPT’s potential scope.

Interestingly, I have noticed that GPT genuinely appears capable of solving simple math operations, even offering abstract proofs and such for its answers. In this context, perhaps language analysis includes things like pattern recognition over simple arithmetic, which is often formatted as plain text in online discussion (“4/2=2,” “15%12=3,” “a2+b2=c2,” “2+2=5”). The structure and logic of these sorts of expressions is not far removed from linguistic structure. But when you really throw the tough stuff at GPT, it stumbles all over the place.
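
To make the point about textual structure concrete, here is a minimal sketch of my own (the regex and operator table are illustrative assumptions, not anything GPT actually does internally) showing how far plain pattern matching already gets you with equations typed as text:

```python
# Illustrative sketch: simple "a OP b = c" strings have a regular,
# almost linguistic structure that a plain pattern matcher can check.
import re

PATTERN = re.compile(r"^\s*(\d+)\s*([+\-*/%])\s*(\d+)\s*=\s*(\d+)\s*$")
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
    "%": lambda a, b: a % b,
}

def check(expression: str) -> bool:
    """Return True if an 'a OP b = c' string is arithmetically correct."""
    match = PATTERN.match(expression)
    if not match:
        return False
    a, op, b, c = match.groups()
    return OPS[op](int(a), int(b)) == int(c)

for text in ["4/2=2", "15%12=3", "2+2=5"]:
    print(text, check(text))
# 4/2=2 True
# 15%12=3 True
# 2+2=5 False
```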

With physics, it can sometimes walk you through a roughly correct methodology for solving a given problem, but it will often not “understand” what it’s saying, providing answers that are incorrect (sometimes contradicting its own answers within the same sentence). What I find interesting is how context impacts its accuracy. For example, if you ask it a simple math problem on its own, you can often expect a correct answer; but if that same problem shows up as a step in solving a physics problem, it often makes mistakes. I reckon it may be something akin to what happens when you see a Julia set: you perceive a recognizable form, and afterwards you could probably identify it again, even with a substantial portion of it blocked from view. Not once did zₙ₊₁ = zₙ² + c cross your mind.
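
For anyone curious, here is a minimal escape-time sketch of that iteration (the constant c, the grid, and the iteration limits are arbitrary choices on my part):

```python
# Minimal escape-time sketch of the iteration z_{n+1} = z_n^2 + c behind a
# Julia set. The constant, grid, and limits below are illustrative choices.
def in_filled_julia(z0: complex, c: complex = -0.123 + 0.745j,
                    max_iter: int = 100, escape_radius: float = 2.0) -> bool:
    """Return True if the orbit of z0 under z -> z*z + c stays bounded."""
    z = z0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > escape_radius:
            return False
    return True

# Crude ASCII rendering: a recognizable shape emerges, even though nobody
# "sees" the formula when looking at the picture.
for y in range(10, -11, -1):
    print("".join("#" if in_filled_julia(complex(x / 20, y / 8)) else " "
                  for x in range(-30, 31)))
```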

To some extent, you can argue that GPT is not really a language model but a simplified neuronal model, which adds some intriguing depth to the subject. Even under this lens, however, it is still informed only by language. It cannot directly observe anything; for the most part, language is its only window into the concepts we bring to light.
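
To make that “only window” point concrete, here is a toy next-token predictor I sketched (a deliberate oversimplification; it says nothing about GPT’s actual architecture) whose entire experience of the world is the token statistics of a text:

```python
# Toy stand-in for the "text is the only window" point: whatever the network
# looks like inside, everything it "knows" arrives as sequences of tokens.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish ."
tokens = corpus.split()

# The model's entire "experience" of the world: pairs of adjacent tokens.
bigrams = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often; nothing outside the text exists here."""
    return bigrams[word].most_common(1)[0][0] if word in bigrams else "?"

print(predict_next("the"))  # "cat" -- learned purely from token statistics
```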

I don’t know, I’m interested in how others feel about this, so I figured I would share! AI is a very cool thing, which absolutely has the capacity (hell, the likelihood) to produce real-world outcomes, but I find these outcomes to be primarily related to things like complications for the job market, implications for human connection, and a reduction in the human hunger for and pursuit of knowledge. To some extent, being able to ask a bot about all of the basics saves people quite a bit of time, letting them focus on the cutting edge. However, sometimes you find something that leads to a paradigm shift, which calls for a reassessment of fundamentals, so it’s important that those fundamentals are understood. We mustn’t lose our edge when it comes to ensuring that people still have the understanding and capacity to solve scientific problems, and that they do not resort to AI for every conceptual need in lieu of personal intellectual development.

What are your thoughts?

submitted by /u/zizn