Thoughts on AI
August 8, 2025
I think AI is a tool you should earn the right to use. I don’t say this out of a sense of worthiness or elitism, but to ensure you garner knowledge at the most important stage of learning: the beginning.
The beginning stage of learning is where we all start, and we ought to give it the attention it deserves. It influences what, when, and how we continue to learn information in a given subject. Here, the choices you make in your learning have consequences going forward.
Remember a time when un-learning something was vastly more difficult than learning it? Take poor typing form as an example. It takes far longer to fix than it took to learn, not to mention the discouragement that comes along with it. You remind yourself that it would’ve been better to learn it properly in the first place rather than waste your time fixing it now. With this in mind, I’d argue that establishing a solid foundation in your learning will save you time and energy in the long term, even if it means sacrificing both of those things at the start.
This is where AI concerns me. I fear that it tricks you into thinking that you’re learning when you aren’t. Eventually, you’ll realize this and have to go back and re-learn on your own what you didn’t even learn in the first place.
To put this into perspective, I’ll use my own approach to learning as a point of comparison for how AI impacts it. My approach centers on three key principles:
- Learn by asking questions.
- Learn by doing.
- Learn by teaching.
For any given topic, I follow these principles in the same order.
1. Learn by asking questions
To start, I ask a question. Any question. What matters isn’t the question itself. It’s the two questions I’ll have in response to that question’s answer. Each of those answers spawns another two questions, and the cycle repeats. I like to think of it as a binary tree of questions and answers that delves into a topic from its high-level concepts down to its low-level semantics.
Asking questions helps you avoid what’s called the Dunning-Kruger effect, where people learning a new topic overestimate their competence because they don’t yet comprehend the extensiveness of said topic. When we start learning, we can’t identify gaps in our knowledge because, quite frankly, we don’t possess any knowledge to have gaps in. And because we can’t see any knowledge gaps, we’re deluded into believing we are more competent than we really are.
We can extend this concept to the conscious competence model or, as it’s more aptly named, the Four Stages of Learning:
- Unconscious incompetence: you don’t know what you don’t know.
- Conscious incompetence: you know what you don’t know.
- Conscious competence: you know what you know, but it takes effort.
- Unconscious competence: you know what you know, and it takes no effort at all.
Our goal with asking questions is to skip to the second stage as fast as possible. Each question we ask spurs more questions, allowing us to see firsthand how deep the rabbit hole goes and how little we really know about a topic. By knowing what we don’t know, we can identify which path we ought to take to progress to the third stage.
Now, how does AI fit into all of this? Well, on the surface, AI seems great for asking questions. It’s easy to do, and you can tailor the output to your exact specifications. You may want a really concise answer just to spark your curiosity, or you may prefer a long excerpt to deepen your understanding. For any question you may have, you can ask the LLM to follow your preferred output format and it will abide by it.
I’ve also found that LLMs are great at predicting what my follow-up questions will be. In fact, many LLMs automatically generate responses to said questions. Claude in particular is scarily accurate in this regard, and at times it feels as if it’s reading my mind. Unfortunately, that somewhat defeats the point of me asking questions in the first place. It’s not the repetition of asking questions that fosters learning. It’s identifying which questions to ask given an answer. Question formulation requires intuition, creativity, and curiosity, all things LLMs can mimic but not actually possess. Your ability to formulate questions is a key aspect of your ability to learn, and it’s better to practice it yourself than to rely on an LLM to do it for you. Luckily, you can prevent this by adding “Do not respond with follow-up questions I may be interested in asking.” to your prompt.
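If you talk to a model through an API instead of a chat window, you can bake that instruction into the system prompt once instead of repeating it in every message. Here’s a minimal sketch assuming the Anthropic Python SDK; the model name, instruction wording, and example question are placeholders, not a recommendation:

```python
# A minimal sketch, assuming the Anthropic Python SDK and an API key in
# ANTHROPIC_API_KEY. The model name and instruction wording are illustrative.
import anthropic

client = anthropic.Anthropic()

NO_FOLLOW_UPS = (
    "Answer only the question asked. "
    "Do not respond with follow-up questions I may be interested in asking."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever model you prefer
    max_tokens=500,
    system=NO_FOLLOW_UPS,  # applies to every question in the conversation
    messages=[{"role": "user", "content": "What is a binary search tree?"}],
)

print(response.content[0].text)
```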
A brief aside: many argue that the skill of finding information is as important as knowing the information itself. From this, it’s argued that using LLMs slowly deteriorates your ability to find information because they’re doing the finding for you. I agree with this point to an extent. Yes, I don’t doubt that if I were to exclusively use LLMs for research, after a year’s time I would be much worse at researching on my own. That being said, this has all happened before. How capable is the average person at going to a library and using books to find information? Quite poor, I’d think, myself included, because for the past 15 years we’ve all used Google. I believe AI is to Google what Google was to libraries. Each method dramatically improved the breadth and speed of information retrieval compared to its predecessor. However, that’s not to say we should ditch all other methods and stick solely to AI. In many cases, such as programming, I still prefer books. Most are written by experts in their fields and are edited under a high level of scrutiny. That is to say, once a book reaches a store’s shelf, I have more faith in the accuracy of its information than in what an LLM produces after “thinking” for 5 seconds.
2. Learn by doing
You learn by doing. Emphasis on the you. Not you and a trusty LLM sidekick.
There’s no way around it. AI doing the work for you doesn’t help you learn. The problem is that its perceived “productivity gains” convince us that it does. Sure, an LLM can generate more code in a minute than a person could in who knows how long. I bet it’ll even compile and run. It’s a productivity miracle, right? Unfortunately, productivity and learning are two separate concerns. Remember, you’re trying to learn. There’s a time to be productive, but it’s not at the very start. First you learn; from there, you can use what you’ve learned to be productive. One comes after the other, and if you choose the most productive option first, you won’t really be learning. You have to do it alone. An LLM holding your hand, or in many cases today, carrying you on its back, won’t get you anywhere.
I think the best learning comes after struggling. The struggle causes confusion and frustration, which makes the ‘aha’ moment in learning even more enjoyable. Without some struggle, the ‘aha’ moment wouldn’t hold any weight. It wouldn’t mean anything, because it’s beating the struggle that makes the moment feel like a reward. It’s that struggle-to-aha pipeline that, when repeated over and over again, slowly develops your ‘learning muscle’. This muscle represents how well you can withstand the resistance of the struggle and learn alongside it, not in spite of it. With AI, there is no struggle. It’s effortless, and in turn the ‘aha’ moment is hard to come by and your learning muscle atrophies.
3. Learn by teaching
At this point, it may seem as if I’m entirely against AI. I can assure you I’m not, and I appreciate how powerful LLMs can be. Fortunately, I think that power makes them great for the third and final principle of learning: learn by teaching.
Take a topic you’re well-versed in and tell an LLM to ask you questions about said topic. Then, ask it to critique your answers. You’ll either be pleasantly surprised with your competence or quickly develop a case of imposter syndrome. Either way, it’s worth trying because you’ll identify gaps in your knowledge you may be ignorant of. This harks back to learning by asking questions, but in reverse: this time, you’re the one giving the answers, not the questions. It tests the fourth and final stage of learning: unconscious competence. You are not only testing whether “you know what you know” but also whether “you know what you don’t know”. That being said, it’s important to see not only ‘if’ you can teach, but ‘how’ you can teach.
Einstein supposedly said, “If you can’t explain it to a six-year-old, you don’t understand it yourself.” It’s true. It’s not difficult to explain a concept to someone who’s an expert in it. Any errors or oversights in your explanation are smoothed out by their expertise. They can fill in the blanks. But a six-year-old can’t. When explaining something to a six-year-old, you can’t hide behind the curtain of buzzwords or assumptions. It requires creating metaphors, telling stories, and connecting ideas to the real world. It forces you to go beyond the use of definitions (exercises in memory) and instead use illustration (exercises in understanding). So when an LLM is asking you questions to test your understanding, tell it to pretend it’s a complete beginner in the topic you’re explaining. (I say “complete beginner” and not “six-year-old” because I’ve found many LLMs simply mimic a six-year-old’s tone rather than their comprehension.)
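To make that concrete, here’s a minimal sketch of the “teach it back” exercise, again assuming the Anthropic Python SDK; the topic, persona wording, and model name are all placeholders:

```python
# A minimal sketch of the "teach it back" exercise, assuming the Anthropic
# Python SDK. The topic, persona wording, and model name are illustrative.
import anthropic

client = anthropic.Anthropic()

TOPIC = "hash tables"  # placeholder topic you consider yourself well-versed in

BEGINNER_PERSONA = (
    f"You are a complete beginner in {TOPIC}: reason like a novice, not like "
    f"an expert imitating one. Ask me one question at a time about {TOPIC}. "
    "After each of my answers, point out anything vague, wrong, or hidden "
    "behind jargon, then ask your next question."
)

history = [{"role": "user", "content": f"Quiz me on {TOPIC}."}]

while True:
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=400,
        system=BEGINNER_PERSONA,
        messages=history,
    )
    question = reply.content[0].text
    print(question)
    answer = input("> ")  # your explanation, in your own words; type "quit" to stop
    if answer.strip().lower() == "quit":
        break
    history += [
        {"role": "assistant", "content": question},
        {"role": "user", "content": answer},
    ]
```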
I think the extent of your knowledge goes only as far as your ability to communicate it. What good is a song in your head that you can’t sing aloud? A song unspoken is a song unheard.
So, what are we to do?
Suffice to say, I don’t find LLMs to be an awfully productive tool for my learning. I tend to avoid them whenever possible, so as not to undermine my three-principle approach and not to form any dependency on them.
That being said, I’d be ignorant not to use them at all. There’s no doubt about it: LLMs are here to stay. Will they always present themselves in the familiar chatbot interface they have now? I don’t believe so. I’d posit they’ll be incorporated into our tooling on a much less conscious level: things like LLM-powered writing feedback in your document editor or summarized results in a Google search. LLMs will evolve to power software behind the scenes, sometimes without us even knowing. In these scenarios, I’d have no choice but to comply or find an alternative. That I can handle. What concerns me is the forced adoption of LLMs, not as a software consumer but as a software developer.
Google and Microsoft have each said that over 25% of new code written by their developers comes from AI. Now, do I trust these numbers to a tee? Not quite, considering they’re incentivized to promote the adoption of AI. They are two of the biggest investors in, and providers of, AI technology, and investors love hearing these obscure, meaningless measurements in quarterly meetings. However, it is reflective of an emerging norm. LLMs can write a lot of code, and fast. When put in the hands of a capable developer, the potential code throughput is impressive. Companies recognize this and are not only permitting the use of AI but, in cases like these, mandating it.
Software development is a constant practice in learning. Many developers with decades upon decades of experience claim to be lifelong learners. If that’s the case, and LLMs are starting to be mandated in a developer’s work environment (which in turn is their learning environment), I’m quite concerned. It seems ironic to me that, feasibly, you could lose some of your competence while gaining experience at the same time. So, can I be stubborn, keep my head down, and avoid AI entirely? I fear not, because this isn’t the first time a situation like this has occurred.
It’s happened before
The first electronic calculators came out in the early 1960s, and by the 1970s they were small, accessible, and relatively affordable. People began to fear accountants would be rendered useless because the calculator meant anybody could do their own accounting. The truth is, that didn’t happen at all. In fact, you could argue the exact opposite happened. Accountants who could use these calculators properly saved time and energy on arithmetic and could instead focus on analysis. This made them better at their jobs, and thus more valuable to clients, who in turn sought their services more often. What did happen is that the accountants who were willing to leverage calculators properly replaced the few who weren’t.
I think the same thing is going to happen to software developers. AI tooling like coding agents isn’t going to entirely replace software developers. Instead, the developers who know how to use these agents will replace the ones who don’t. Companies view them as more productive and therefore more desirable. That could mean a more knowledgeable, anti-AI developer is overlooked in favour of someone less informed but AI-inclined. This may seem completely unfair at first, but it’s important to put it into perspective. Would it be unfair to favour, say, a researcher who can use Google over someone relying solely on reading books at a library?
With all of this in mind, I’m faced with a glaring question: how should I use AI? I fear that if I don’t embrace it now, I may pay for it later by playing catch-up on how to use LLM tooling. This catch-up game is the exact thing my approach to learning was made to mitigate. On the other hand, if I do embrace it now, I won’t develop a fundamental understanding of the concepts I’m learning. So, what do I do?
I’m afraid I’ve got no choice but to oblige and use AI, albeit with some asterisks. I won’t bite the bullet and start using LLMs to write all my code for me. That eliminates the potential to learn entirely and defeats the reason I chose to study software engineering in the first place. Like most people in the field, I enjoy programming, so things like vibe-coding and agentic coding have no appeal to me. Instead, I plan on incorporating LLMs into my research workflow. They’re great for researching industry conventions and best practices. I can give an LLM a topic I want to learn about and ask it to generate a report on the best resources to learn from and why; the “Deep Research” mode from each LLM provider is great at this. They’re also great at playing devil’s advocate for your design decisions. When programming, I’ll implement my first approach to a problem and take note of any pain points I encounter. If the friction of development exceeds my ability to implement a solution, I take a step back and consult an LLM to see if I’m approaching the problem correctly. I ask it to critique my current approach, but instead of having it directly tell me which other approach to use, I ask it to respond with questions that prompt my own problem solving. All of this keeps me in the driver’s seat of my own learning.
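To make that last technique concrete, here’s a minimal sketch of the critique-with-questions prompt, once more assuming the Anthropic Python SDK; the wording, model name, and the approach being critiqued are placeholder examples:

```python
# A minimal sketch of the "critique with questions, not answers" prompt,
# assuming the Anthropic Python SDK. Wording and model name are illustrative.
import anthropic

client = anthropic.Anthropic()

SOCRATIC_CRITIC = (
    "Critique the approach I describe, but do not tell me which approach to "
    "use instead. Respond only with pointed questions that expose weaknesses, "
    "missing edge cases, or unstated assumptions, so I can rework it myself."
)

my_approach = (
    "I'm caching parsed config files in a module-level dict keyed by file "
    "path, and I only invalidate the cache when the process restarts."
)  # placeholder description of a first attempt

reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=400,
    system=SOCRATIC_CRITIC,
    messages=[{"role": "user", "content": my_approach}],
)
print(reply.content[0].text)
```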
I like learning. It’s a humbling, often frustrating task that constantly reminds me of how little I really know. But that’s the best part of it. Without that part, the understanding on the other end wouldn’t mean anything. So I’ll keep following my same three principles and every so often use an LLM to enhance my learning, not take away from it.
I must admit that it may put me behind other developers. It may even leave me in the dust. But I’ll take my chances.
“The more that you learn, the more places you’ll go.”
— Dr. Seuss