AI Won't Give You What Thinking and Doing Will

Thinking isn't the accumulation and regurgitation of information. It's a mental system of muscles you need to work through struggles, failures, repetition, and success. AI isn't this.


Today I stumbled across this tweet:

Don't get me started

I thought it was a joke. Then I realized it isn't.

I think it's alright that this person wants to program this way; I'm not criticizing anyone in particular. Prioritizing output above other objectives isn't uncommon, and it can be useful. It does come with risks, though, and I think those are worth exploring.

🤖
Disclaimer: I do use AI models to write software, though not while cooking steaks. I think they're useful within certain scopes, but those scopes are very narrow.

Back in my day

When I worked in joinery, there was a clear divide between people who were curious, meticulous, and rigorous about their work and those who were more focused on production. The divide ran along clear lines: the careful, craft-oriented workers were in the cabinet shop working on high-value, custom work, refining their skills daily. The people focused on production were almost exclusively working on manufacturing lines, rarely reviewing their skills or learning anything new. They weren't doing the custom work, working on varied tasks or tools, drawing from a wide repertoire of skills, working with CAD and CNC machines, or anything like it. They graded wood, milled it, pushed it through shaping machines, and either moved it to the joinery or bundled it up for delivery trucks. They were millworkers rather than craftspeople.

It's not wrong or bad to be a millworker. You do need to ensure that it's a choice, though. When you enter an industry like this, you can ask yourself: am I someone who develops skills and a craft over a lifetime, or do I want to push and stack boards? Perhaps you focus on refining your craft simply because you're passionate about it, or perhaps you want to produce something, get paid, and go home. Both are valid. You can be somewhere in the middle, and that's fine too. It should be an informed choice, though.

You really can be automated

On this topic I think it's also crucial that you consider what happens when (not if) your tasks are automated. The mill was once much larger and employed far more people than it does today. Where did they go? Why is the mill smaller? Well, these people's skills weren't sophisticated enough to outpace automation, and they became obsolete. They had to find other work. That can absolutely happen to you.

That's akin to what I see this person doing to themselves. They may be outputting a product they want right now, but they've relegated the skill and craft to a machine. They might feel like they're forward-looking, but ironically it seems as though they won't be fully prepared to move forward into this new world if they're relying on these tools.

The analogy of joinery being replaced by CNC machines isn't lost on me here, but I think a lot of valid points still carry over. Consider too that you can transfer skills from joinery to a shop with CNC machines and enable yourself to do even more and potentially better work. On the other hand, without that foundation, you'll have nothing to bring to this type of work. In this case, the parallel in software would be ensuring that you're always able to do more than AI can do. Once you're below that bar, your value proposition becomes very precarious.

Sometimes being output-oriented makes sense. It's fine if that's what you intended, or it's well-suited to the task at hand. It's important to realize, however, that when you use AI to accomplish this, you've learned very little, refined your skills far less than it appears, and potentially diverted a lot of care, attention, and discipline from what you're creating. You haven't exercised your mind in a way that will make you much better at your work tomorrow.

The work described in that thread will be accomplished entirely by AI very soon, and the author will need to find other ways to be valuable. Will they be practiced enough to take on more challenging tasks? Are they training their mind to be adaptable, critical, versatile, creative, and curious enough to add value where a machine can't? Or... Are they training themselves to be a button pusher in an industry that's driven by constant forward movement, rapidly growing complexity, and frankly, machines that are getting good at pushing their own buttons?

Your brain needs a workout, too

The most insidious aspect of this is the lack of mental exercise. I think the process of having an idea and seeing output tricks people into thinking they actually did something. They didn't. This isn't unlike asking someone to do something for you and then claiming you did it yourself. Requesting changes and adding your name to the copyright doesn't qualify as doing the work. When gluing AI outputs together, the most we can honestly say is "I had an idea". If we're honest about it, that's probably fine. If not, we're doing ourselves a disservice.

The author of the tweet later urges people to learn to prompt better, and I have to wonder if that's really where our energy and attention should be placed. Do I want to become an exceptional prompter so I can get code I can already write, only slightly faster? Shouldn't the AI be able to prompt itself well enough to replace me eventually? And is the act of writing the code purely labour, with no other intrinsic purpose or benefits? In a reality where I only have so much mental energy and attention to spend... Do I want to spend it on learning to prompt, or do I want to invest it in learning to actually do the job better?

Not to mention, every AI model responds to prompts slightly differently. By honing your prompting skills, you're tying your skill to one company's implementation of a tool that can change at any time, or disappear completely.

I'd also argue that programming absolutely isn't strictly labour with a singular output. Apart from the soft aspects which interface human needs with technological constraints, pure programming and software architecture are very broad, deep, and sophisticated practices. Much like writing, they're things we only learn by doing. And redoing. Not only do we learn to write better, but we learn to think better. We get better at understanding problems by engaging them, sometimes for hours on end. We make mistakes, we struggle, we succeed, and we build up the mental muscle that allows us to persevere and overcome these challenges. AI doesn't give you that.

Incidental experiences are food for growth

Our minds aren't rigid machines with gears meshing and transmissions shifting and indexing as tasks demand it (as much as we might wish they were at times). Instead they're much fuzzier things than that. We never truly have the same experience twice, for example. Revisiting writing familiar code is an opportunity to discover new ideas, new patterns, and better ways to do things. Something we've learned elsewhere might suddenly become applicable to a familiar problem. We synthesize and generate all the time, remodelling and reconstructing what we thought we knew.

As we do this we iterate towards better practices and refine our ability to not only write code, but to think about what, why and how we're doing our work. The practice of writing the code ourselves is the best way to get better at doing it, both in the editor and in our minds.

There are so many incidental experiences along the way which inform our eventual, final destinations far more than we tend to realize. Consider taking a trip to the store as opposed to ordering something on Amazon. On the trip to the store, maybe you'll see an old friend or have seemingly innocuous experiences which lead you down paths of thought, curiosity, and intrigue. Maybe you'll go to find a sauce you've had many times before, but see something new that piques your interest and you'll give that a shot instead. Maybe another shopper even suggests it to you.

On Amazon, though? You'll see what the algorithm wanted you to, and you'll get funnelled through a drab checkout process which only ever deviates to encourage you to delay shipping to save Amazon money. There's no opportunity for incident. It's a path fully carved out: uninteresting, unrewarding, sparking no potential whatsoever.

It's very similar with programming. Doing the work, going through the rituals, hitting walls, breaking through them, learning, relearning: this all leads to experiences you didn't expect. As you work through problems you'll discover other problems. As you work through those you'll find people who've worked through similar problems. You'll gradually enrich and deepen your awareness of everything involved in your process. You'll discover aspects of your work you prefer, those you dislike, and so on. AI is not going to give this to you.

People are your best bet

A key component of that is other people. AI is not people. When you refer to the AI for advice, you're specifically choosing not to interact with a human. Software's growth was never predicated on people being isolated from each other, but rather the opposite. The industry truly exploded as the internet enabled us to network, collaborate, and share with each other. I don't expect the inverse to accelerate the industry. That's a separate topic, but arguably as important as how you exercise your mind, if not more so. Software without the people sounds like an awful industry to work in, period.

Only you can make yourself better at anything. Even in the limited cases where an AI can bring you to an answer faster, you will be robbing yourself of the valuable experience of trying to find the answer yourself. Sometimes this is a worthwhile tradeoff, but it can't be all the time, every day. Eventually you need to be good at creating the very same information that the AIs are trained on: the hard-won discoveries and revelations of all the human software developers who came before, and whose shoulders the AI and we all now stand on.

Software is all people, all the way down. It's easy to forget, but software is meaningless and useless without people. I'm extremely leery of any process which aims to reduce the number of people involved.

Don't get displaced

The philosopher Byung-Chul Han makes an interesting observation in his book The Disappearance of Rituals: A Topology of the Present, directly pertaining to this topic:

Today, a further paradigm shift is silently taking place. The Copernican anthropological turn which made man an autonomous producer of knowledge is being superseded by the dataistic turn. The human being now has to comply with data. No longer the producer of knowledge, the human being cedes its sovereignty to data. Dataism puts an end to the idealism and humanism of the Enlightenment. The human being is no longer the sovereign subject of knowledge, the originator of knowledge. Knowledge is now produced mechanically. The data-driven production of knowledge takes place without the involvement of the human subject or consciousness. Enormous volumes of data displace the human being from its central position as producer of knowledge, and the human being itself is reduced to a data set, a variable that can be calculated and manipulated.

My point is: be on the other side of this machine. Don't get displaced by this. Be an origin of knowledge. Don't allow the data to determine your outputs; be the source of its inputs. Always stay ahead of it. You have many things the machine doesn't have, and as far as I can tell, won't have any time soon.

At the end of the day, shouldn't we be doing real things and learning from real experiences and people? Shouldn't we embrace our work rather than shy away from it? Are there legitimate shortcuts to wisdom, or is AI going to curb our mental growth while providing a dangerous facade of knowledge and capability where there's in fact a deficit of it?

In summary

That's about it