Maybe it’s time to stop optimizing everything and remember why we build things.

Please pay little attention to the current discussion of AI and its usage. Pay attention, but do not go there.

If you start looking at what is happening at scale — in the markets, for example — people are now way too hyped about the talking machine that can paint.

"But we do not know what this thing is capable of yet; there are two narratives!"

There are no narratives.

This is a tool.

Tools do not mean anything. A knife is defined by its wielder. The problem arises when people do not understand what the tool is, and I think this is what is now being forgotten.

First of all, when we say "artificial intelligence," it really means "the next cool technology." It does not mean anything else. The current "artificial intelligence" points to LLMs, diffusion models, etc. The coolest of them actually take a very, very long jump into the physical world: the vision-language-action models.

They are magic — as in "almost indistinguishable from an incredibly advanced form of technology."

People tend to say that we need proof before we can say anything. But you don’t really need to prove anything when you criticize a formal system. See, for example, LLMs. They are, at the end of the day, an incredible combination of certain mathematical concepts (linear algebra, information theory, differential equations, and so on) with a lot of language data thrown in. In the end, this produces a "symbol shuffler" that is very, very good at predicting what word comes next when you say something.
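To make the "symbol shuffler" point concrete, here is a toy sketch of my own (not from the original post): a bigram model that counts which word follows which in a tiny corpus and then predicts the most frequent successor. Real LLMs do this with billions of learned parameters instead of a lookup table, but the shape of the task is the same.

```python
from collections import Counter, defaultdict

# A toy "symbol shuffler": count which word follows which word
# in a tiny corpus, then predict the most likely next word.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" ("cat" follows "the" 2 of 4 times)
```

No understanding anywhere in there, just counting and shuffling symbols; scale it up enormously and you get something that looks uncannily fluent.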

But that’s it.

This is like in The Hitchhiker’s Guide to the Galaxy, when they build the machine to compute "the Answer to Life, the Universe, and Everything" and it blurts out "42." What goes in, comes out.

It is an incredible piece of technology. Here is roughly how it fares:

  • Very good at tasks with a high language component, such as routine chat work, machine translation, some coding tasks, searching, etc.
  • Pretty good at tasks that involve some language use, such as reporting, documenting, deducing, or evaluating tasks in a limited context.
  • Terrible at tasks with very little language component, such as anything involving human interaction (which, actually, is almost every single job that exists).

And it will actually be, at best, mediocre (at worst, dangerous) even in those tasks with a high language component if you don’t really pay attention to what you’re doing, because there is a big, big, big temptation to get lazy and stop paying attention.

Because LLMs are nothing but the biggest language game ever created. Their actual limits are still, of course, not known, and please keep on pushing them — but the hard limits are already there. The limits of LLMs are the limits of natural language. (Very, very many people, by the way, have thought about this in the last two thousand years, so you don’t have to do it yourself.)

The limits exist within all of these new models. The limits of these systems are the limits of formalization. Mathematics is a form of language game. But math is not physics. Musical notes are not music.

Because in the end, all they are is this: extremely sophisticated printing presses that reproduce and reshape “language” mathematically. The models are not “neural networks” or “agents” in the biological sense, but computational machines of relationships between symbols.

Some smart people will now start arguing, "but this is what humans do." And here I must tell those people — really? This is what you do? This is a good time then to go for a walk in the forest, drink a cup of coffee, play with your dog/kid, call a friend — you choose.

The problem is that we have a very strong tendency to identify ourselves with conscious reasoning — and yes, it does happen on some symbolic level — but this is how you end up thinking that you are the tool. But you are not the tool. You are the thing that can use those tools in a very, very smart and wise way.

You are the thing that MAKES SENSE out of these things. We are the ones who create the MEANING. Those machines can never, ever, ever do that for us.

But also, I am not saying that you shouldn’t use these tools. Of course you should use them and learn to use them well. They can be very, very handy. Use LLMs where you have a hard time formalizing something into words.

Use Claude to do that refactoring, but make sure you really know what you’re doing, because you have to be very formal about what you’re doing. And I hope you know what you did afterwards, because I do not want to debug that.

Use diffusion models to create that background for the presentation about your sales case. But you still need to pitch it. You still need to sell it. You still need to emotionally connect with that customer. Understand their problem — and hopefully sell them something meaningful.

See: the problem isn’t the technology. It is us. It is what we are doing with the technology. The problem is that almost all current technology in the modern world is robbing you of introspection, self-searching, critical thinking, and actually paying attention. Everything is demanding your attention, and you are being slowly turned into a short-term reward association bundle machine.

Somehow, somewhere, for some reason, we collectively forgot what technology was supposed to be about — humans, and how to make our lives collectively better and more meaningful.

Maybe now is a good time to take a little break from your phone. Because that is one of the biggest things that is harming you currently. Almost all of the "trendy" software technology currently at large is doing a lot of harm to us.

It, too, started beautifully, and then somehow, somewhere, for some reason, we decided to turn the internet into a short-term attention span–hooking algorithm feeding into our minds 24/7. Let’s monetize it. Put ads there. Because that is really what we need: a billion people running after the next new thing.

This is horrible. Why did we build the Matrix? Seriously.

I would like to think that Silicon Valley is waking up from this horror, because everybody at some point really needs to prove their point in technology by making money out of it: showing where it is used, where it is helpful, and where it enables the creation of meaning, so that somebody wants to pay for it.

But it seems like their newest invention is to now put chatbots into web shops to make shopping even easier! Incredible!

Is this really the problem we now need to solve?

This whole thing is, unfortunately, a caricature of 1999. AI sucks. It has always sucked, and it will always suck.

But I really love you, human being, who read this text.

submitted by /u/s0lari