A while back I explained why I was very skeptical of “Crypto” hype on a Marxist basis. [1] I want to try something similar against “Artificial Intelligence” hype.
Of course, we ought to begin by making clear what we mean by “intelligence.” It’s a controversial subject, and definitions abound: advocates of I.Q. tests argue that intelligence basically boils down to pattern recognition, and endeavor to represent it with a single score; advocates of “Multiple Intelligences” theories speak more qualitatively, observing for example that people who are good at puzzles are often very bad at noticing subtleties in communication or managing relationships. In my experience the second camp aligns better with how I use the word, but there is something to be said for “fast thinking,” and so the first camp may still have something to contribute to our overall understanding. At any rate, I define the concept as follows:
“Intelligence measures how organisms manage cognitive pressure.” [2]
What is “cognitive pressure”? We’re organisms capable of observation and reflection. We’re always absorbing information through our senses, but at the same time we are comparing whatever inputs we receive to our expectations. Recently, for example, many people reported that one side-effect of COVID-19 was that their sense of taste went awry, so that a sip of milk tasted like gasoline. This is obviously disturbing. Another example: if our eyes see a flat horizon, but the fluid in our inner ears says otherwise, we experience vertigo. Generally speaking, whenever we take in an observation that jars with our expectations, we feel stress. It’s a good survival skill. “Cognitive” simply refers to the fact that this pressure is in our minds, rather than in our muscles or in our bones.
Consider a sci-fi organism, which you somehow adopt as a pet. It’s a strange organism in that it seems to understand what you ask of it, and it’s also very truthful, but it’s only capable of communicating back with you in one way: any input you present to it yields either yes or no. Despite this limitation, you are able to communicate quite well: it tells you whether it wants to take in or reject an apple, whether it appreciates you moving it next to the window or not, whether it likes the music you’re playing, etc. You bond, and become close friends. Now imagine one day you find another such organism and bring it back to where you keep yours. Excitedly, you ask yours this simple question: “Are you two the same?” Suddenly the organism begins to display visible signs of stress and agitation. You should have thought better before asking it that! Your empathy kicks in: from a certain point of view it obviously wants to respond “No! (We are obviously not the same individual!)”, but from another it understands that, in contrast to you, the newcomer is one of its brethren, and so it ought to reply “Yes! (Compared to you, we are the same!)”. The anxiety seems to be killing it. After straining and panicking for several minutes, it appears to pass out from the exertion. Finally, after a moment of silence, it comes to its senses again. Somehow it appears altogether more mature, more serene. It waits another moment, and then it simply says: “No No, No Yes. Set Members.” You smile, relieved, as a tear rolls down your cheek. Your friend simply couldn’t extricate itself from the painful bind you had placed it in with its limited repertoire of tools. But in the end it overcame, against all odds, irreversibly developing two new concepts already familiar to you: that of the Set and that of Membership.
Now, this may seem silly, but the point of the story is to capture something important about the evolution of language and knowledge and cognition in really-existing human societies. Before we could invent categories, we had to recognize patterns: patterns of objects or phenomena that were somehow similar to each other and somehow different from the rest of the world around them. Only after this process of identification could we begin to speak of the categories to which this or that thing (real or abstract) belongs.
Or, to put it slightly differently: in order to come up with a solution, we first had to identify a problem. The invention of categories is an example of a solution to a certain kind of theoretical problem.
Now, consider the flipside of this coin: there are many problems that we overcome daily without even so much as thinking about them. We use the internet to do things that would have seemed like astonishing supernatural magic to past generations. But here a self-flattering myth creeps in, and it has to be corrected for: the fact that one has access to tap water is in no way proof that one is more intelligent than someone who still has to use a well. Intelligence is neither the moment of carrying buckets, nor the moment of spinning a tap. Intelligence is most useful as a concept when it describes the moment of overcoming.
This is where my skepticism of so-called “Artificial Intelligence” (henceforth “AI,” with enclosing quotation marks indicating skepticism) is rooted. Every day now we are bombarded with endless snapshots of the incredible things that “AI” is accomplishing: here it makes it sound like Obama is reading YouTube comments, there it is painting Donkey Kong in the style of Van Gogh, and yonder it gives better career advice than your friends. This is very entertaining. Is this not incredible? Is this not impressive? Sure! However, I want to go back to my definition, where intelligence is a moment with stages, and insist: the program at no point feels any cognitive pressure. Even when it delivers impressive answers, it delivers them in a savant-like manner, completely unaware of what it actually did. The machine experiences no needs; it’s just a black box through which we express our needs.
It’s meaningless to speak of what “AI” does when it encounters two “conflicting” pieces of information. From the point of view of “AI” there is no conflict! When someone points out that DALL-E still ruins increasingly impressive paintings by giving humans deformed hands and teeth, or that ChatGPT will explain step by step what a perfect square is and then give a wrong example, enthusiasts quickly try to play this down as a “fluke at the edges” that can easily be fixed with “more data” and “more training.” This is partly true: with more data and drilling, these models will stop making these errors. However, what is presented as polishing minor details atop a fundamentally sound core is in reality quite the opposite: the core is fundamentally busted, but with near-unlimited computation and some fancy tricks we can indefinitely layer dresses on it to hide this shame. [3] No matter how much “AI” enthusiasts insist that “Humans make mistakes too,” no matter how much they denigrate the intelligence of the average person to hype up the robot, it doesn’t make these errors any less revealing. All the computer is doing is memorizing and extending patterns. It doesn’t notice its own errors; we do. The machine doesn’t actually experience any cognitive pressure.
What I am trying to get at is that the edifice upon which “AI” is built is entirely derivative and second-hand. It works as a giant, multi-dimensional auto-complete service, based on extant inputs, flaws and all. You can tell the machine to paint a firetruck in a queer style, but you cannot tell it to invent firetrucks or queerness, each of which came about as a matter of organic need. Even if it did accidentally invent a new category, in the course of filling out a missing quadrant in a plane, it wouldn’t know it.
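To make the “auto-complete” characterization concrete, here is a minimal sketch in Python of the statistical principle at its crudest: a toy bigram model that memorizes which word follows which in its training text, and then extends those patterns at random. The corpus and function names are my own illustration; modern systems replace this lookup table with billions of learned weights and are incomparably more fluent, but the point at issue is the same in kind.

```python
import random
from collections import defaultdict

# Toy bigram "auto-complete": memorize which word follows which
# in the training text, then extend those patterns blindly.
def train(corpus: str) -> dict:
    table = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)  # pure memorization of adjacency
    return table

def extend(table: dict, seed: str, length: int = 8) -> str:
    out = [seed]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break  # no pattern to extend: the model has nothing to say
        out.append(random.choice(followers))  # no notion of truth or error
    return " ".join(out)

table = train("the cat sat on the mat the dog sat on the rug")
print(extend(table, "the"))  # e.g. "the dog sat on the mat the cat sat"
```

Notice that the program cannot be “wrong” by its own lights: whatever it emits is, by construction, a valid extension of patterns it has already seen, which is precisely why it experiences no cognitive pressure.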
Living organisms objectively do more than derive, and they distinguish themselves most when they integrate. This is why a child born in a deeply conservative community, with no notion of anything outside of it, may still instinctively rebel: not out of some ex machina force, but as a consequence of simply integrating and reflecting on all the existing discipline.
I’m not foreclosing the possibility that machines may one day achieve such feats, but the fact remains that the current technology, as well as the majority of research in the field today, despite all the hype, doesn’t integrate. Its composite images are just that: composites.
We can also throw away this definition of intelligence and declare that, since we as a society have invented enough, since we’ve seen enough patterns, since people are content with derivative works of entertainment, humanity no longer has any need for invention; that, since all we need to do from now on is extend and recombine existing patterns, “AI” is as good as the real thing that got us this far. I think this is both conceptually and sociologically wrong, but it’s a popular opinion.
Some readers may now be asking: but what about dialectics? It’s in the title, and yet it hasn’t made an appearance in this whole essay! Well, those who have read my past work on the subject [4] will probably already have noticed: I basically use the words intelligence and dialectics interchangeably. Hegelian Dialectics is, from the get-go, the logic of logic: it attempts to account not for any particular scientific innovation, but for the process by which scientific innovations themselves come about. From there we can simply drop the adjective “scientific” and speak instead of innovation in general. It’s very intelligent to conform as much and as consciously as possible to the “struggle-and-overcome” principle of dialectics! Perhaps we all ought to do a little more of that.
[1] Roderic Day, “On Crypto” (2021). [web]
[2] I think I came up with this definition through reading Lenin [5], Hegel (through McQueen) [6], and Iain M. Banks [7]. I later found it wasn’t too far off from mainstream research! [web]
[3] Artem Khurshudov, “Suddenly, a leopard print sofa appears…” (2015). [web]
[4] Roderic Day, “What is Dialectics?” (2021). [web]
[5] “Thus, in any proposition we can (and must) identify the germs of all the elements of dialectics — as in the nucleus of a cell — and thereby show that dialectics is a property of all human cognition in general.” — V. I. Lenin, 1915. [web]
[6] “A knowledge of the facts in geometry and philosophy is one thing, and the mathematical or philosophical talent which procreates and discovers is another: my province is to discover that scientific form, or to aid in the formation of it.” — G. W. F. Hegel. [web]
[7] “Intelligence, which is capable of looking farther ahead than the next aggressive mutation, can set up long-term aims and work towards them; the same amount of raw invention that bursts in all directions from the market can be — to some degree — channelled and directed, so that while the market merely shines (and the feudal gutters), the planned lases, reaching out coherently and efficiently towards agreed-on goals.” — Iain M. Banks, 1994. [web]