A few months ago, I made a video saying that using ChatGPT is slowly destroying your brain. I said there's a risk of it making you lazier, reducing your problem-solving ability, and damaging your memory and depth of understanding, all while making it feel like it's actually helping you. And this was, as it turns out, quite controversial when I uploaded it.
But over the last few months, as more and more research has started to come out on how AI affects learning, we're starting to see that AI is not the solution to all of the learning problems we thought it might be. Having said that, AI is revolutionizing learning. That's a fact.
At this point, there is no going back. And so, for me as a learning coach, it has been a huge focus over the past year to really understand what is the best way to use AI for learning. I've been using AI for my own learning.
I've been testing different models and different versions, literally running thousands of tests on this. I've been talking with students and professionals about how they use AI for learning, what's working for them, and what isn't.
And in this video, I want to share with you my findings and insights so far. This is basically my current status update on the best way I think you can use AI for learning right now: getting all the benefits of AI while mitigating the key risks.
So, the way I'm going to structure this video is that I'll go through the key issues, the major problems with AI that I've either identified myself or drawn from my data, my conversations, and my surveys. Then I'll explain the implication of each one, why you actually need to care about it, and what you can do about it: either my recommendations on how to use AI in a way that mitigates the risk, or whether you should just avoid it.
So to start off, we're going to begin with the biggest issue, which is concerns around information accuracy. That's issue number one. Now, before I jump into the actual point of information accuracy, some context to explore this topic a little more: over the past four or five months I've been having dozens of conversations with my students about the way they use AI and the problems they're facing.
I also ran a survey on my YouTube and my LinkedIn, collecting information from people who are either using AI or not using it, and getting their perspectives. I had 923 responses, which, if I had actually published this as a study, would have made it a pretty large one.
But the findings were very interesting, and I'll share some of the key insights with you throughout the video. One of the key findings was that the number one biggest concern people have around using AI for learning is information accuracy. For people using AI for learning, this was the thing they were most worried about.
For the people not using AI for learning, it was the biggest reason why they're not. And it's also, I think, one of the most interesting points. That's because of the problem where an LLM like Claude, Gemini, ChatGPT, or DeepSeek will just tell you something as if it were true when it's actually completely made up, a phenomenon called hallucination.
This is an issue with the technology itself. LLMs, large language models, use something called the transformer architecture. And the transformer architecture, even though it is this amazing new milestone in AI development, is fundamentally a probability-based word generator. It looks at your query, it looks at the massive amount of training data it has access to, and then it builds a network of what it thinks you're looking for, matching which words, based on the training data, have the highest probability of coming next. And so it doesn't have any sense of truth. Not only does it have no sense of truth, it doesn't even really have a concept of full sentences. Each word is simply the highest-probability word to come next in the sequence. And so I see these posts sometimes that say ChatGPT is lying to you. But in a way, I feel this is a little unfair to ChatGPT, because you can't lie if you don't even have a concept of the truth.
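To make that concrete, here's a minimal sketch of what "probability-based word generator" means, in Python. The vocabulary and the probabilities are invented purely for illustration; a real transformer computes these scores with learned weights over a vocabulary of tens of thousands of tokens. The point is that nothing in the loop ever asks whether the output is true:

```python
import random

# Toy next-token distributions. A real model derives these from learned
# weights; the words and numbers below are made up for illustration.
NEXT_TOKEN_PROBS = {
    "the capital of france is": {"paris": 0.92, "lyon": 0.05, "rome": 0.03},
    "the capital of france is paris": {".": 1.0},
}

def generate(prompt: str, max_tokens: int = 5) -> str:
    text = prompt
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(text)
        if dist is None:
            break  # no continuation known for this context
        tokens, probs = zip(*dist.items())
        # Sample the next token in proportion to its probability.
        # Nothing here ever checks "is this sentence true?"
        text += " " + random.choices(tokens, weights=probs)[0]
    return text

print(generate("the capital of france is"))
```

Fluency falls out of that sampling mechanism; truth does not.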
These large language models are doing exactly what they're designed to do, which is create fluent, cohesive-sounding sentences that are contextually appropriate for the interaction you're having. Now, one of the low-hanging fruits for trying to increase the accuracy of this information is to give it access to the internet: allow it to search and get more information so it can make a more informed probability estimate.
But this doesn't really change the problem. By giving it access to more information beyond its initial training data, yes, you allow it to build networks of probability based on newer, more up-to-date information sources, but it's still missing a lot of the pieces needed to actually have a concept of truth or accuracy. For example, it needs to validate and prioritize the different sources. It needs to think about whether the information is reliable, and how reliable it is. And how do the reliability and context of one information source compare with a potentially contrasting source? Is the opinion of a hundred people on Reddit stronger than the opinion of one expert who wrote a blog article?
And how do we even know that any of that information is true or not to begin with? And even if that information is true, how does that information fit with the existing training data, aka the existing body of knowledge? Often, if you're an expert reviewing new information on a complex topic, the way you interpret that new information is really important.
You have to be careful not to paraphrase or extrapolate things in a certain way that is going to lead someone to form a different conclusion. It's a very fine intellectual process. An LLM doesn't have a concept of that.
Even if you tell it to be careful, all it does is use words that are probabilistically closer to what a person being careful is likely to say. There is no change in the way it fundamentally reasons through the information. And this is especially problematic because the standard an LLM's text is produced to is different from the standard a human applies when interpreting that text.
Here's what I mean. When an LLM is trained, its gold standard is text that is fluent and coherent. Now, for a human, when we read something that is fluent and coherent, we are led to believe that this is true.
We have confidence in what we're reading because of its fluency and its coherence. And often what can happen is that an LLM will generate some text that sounds very convincing because it's built to be fluent. And we as humans are drawn to believe it.
And so, fundamentally, this issue with information accuracy is a problem that is not really going to be resolved, because there is no underlying mechanism for accuracy in these models. So what can we do about that? The solution is to reframe this not as an issue of information accuracy, but as an issue of risk versus complexity.
And this applies to every task you're going to use an AI like a large language model for. Anytime you work with an LLM like Claude or ChatGPT, you always want to have an understanding of the complexity of the topic. Mentally, this is the relationship to keep in mind: on the x-axis we have the complexity of the topic, represented here by the green line. So what happens as the complexity of a topic goes up? What I mean by complexity is that there are a lot of moving pieces. There's potentially lots of new information that's evolving. There may be a lot of competing opinions and lots of different schools of thought, a general lack of consensus, or you're trying to apply the knowledge in a very, very specific context where there are lots of different factors at play, and again, there isn't a clear, well-understood way of exactly how to use those factors or what the relationships are.
So, for example, if I'm trying to update myself on the latest learning science research, I know there's a huge range of opinions and conflicts, and even among the researchers and experts there are differing views. So when the latest research comes out, you have to interpret it with huge grains of salt, see it from different perspectives, and see how it relates to your own existing knowledge. Now, if I were to try to get an LLM to tell me the most important things about all the latest research that's come out, that's not going to be a very accurate way of doing it.
It's going to look accurate, because it sounds fluent and coherent. It looks like it's considered all those things. But, and I've tested this multiple times, when I actually go through and read the original articles myself, the conclusions I come to are slightly different from the ones the LLM generated for me. They may be 90% the same, but that 10% difference is important for me as someone trying to build top-level expertise. And if I have a 10% error in my understanding of a topic, over time that is actually going to compound.
So that would be one example of complexity. The topic in the field itself is really complex. The second example is applying it in those different contexts.
So you may be trying to apply a very well understood marketing principle. Someone published about this decades ago. Marketers have been using this for years and years and years.
It's a universal truth, a law of marketing. However, you're trying to apply it in your own particular business, for this group of people, with these problems, these challenges, and these preferences. And so even though the knowledge is well understood, the way you are trying to apply and connect it together is not.
And so that would also be an example where the complexity is high. So as that complexity goes up, the risk of inaccuracy in what the LLM gives you also goes up, and as a result the overall usefulness does something like this. So this is the LLM usefulness curve.
So if you're dealing with really simple topics, well understood fields, simple applications, then the training data and the data it's able to get is probably going to be accurate. There are hundreds of thousands of people that have all come to the same conclusion saying this is the way that you need to think about it. There is no real argument about this case.
So naturally, that is what the LLM is likely to generate. Or you're dealing with an issue that's so simple it doesn't really require rigor in understanding the conceptual truth. As long as you get a conclusion that's roughly in the ballpark of the right answer, and you're happy enough with that, then it's going to be very useful for you.
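If it helps to see the shape of that graph written down, here's a toy sketch of the two curves. Everything in it, the 0-to-10 complexity scale and the linear curve shapes, is an invented stand-in for what is really a qualitative judgment, not a measured model:

```python
def inaccuracy_risk(complexity: float) -> float:
    """Toy curve: risk of inaccurate output climbs with topic complexity.
    Scale: 0 (settled, well-documented) to 10 (contested, novel, highly
    context-specific). The shape is illustrative only."""
    return min(complexity / 10.0, 1.0)

def llm_usefulness(complexity: float) -> float:
    """Toy curve: usefulness is the time saved, discounted by risk."""
    time_saved = 1.0  # assume the LLM saves some fixed amount of time
    return time_saved * (1.0 - inaccuracy_risk(complexity))

for c in (1, 5, 9):
    print(f"complexity {c}/10 -> risk {inaccuracy_risk(c):.1f}, "
          f"usefulness {llm_usefulness(c):.1f}")
```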
But the main implication of understanding this relationship is to make this decision proactively and upfront. This is the part that actually saves you time. The point of using AI for learning is to save you time.
You have too many things to do. You've got competing priorities. You have stuff to learn.
You're potentially getting overwhelmed. AI is meant to solve those problems, not just give you more problems to worry about. If it didn't solve those problems and now all you're worrying about is information accuracy, this would obviously be a losing game.
But the problem I've observed from talking to dozens of professionals and students who use AI for learning is that they go ahead without really proactively considering the complexity of what they're trying to use the LLM for. They'll spend 30 minutes, an hour, two days, three weeks trying to learn something with AI that it's not well geared for, and only afterwards realize that it's not very effective, or that it's leading them astray, or, in the worst-case scenario, that they've actually wasted time building an understanding and a body of knowledge on inaccurate information. So they've actually made it harder for themselves to learn it. And ironically, relying too much on AI to begin with has actually made them waste time. So for me, right now, with the existing technology (this will of course evolve), if I assess that the thing I'm trying to get out of the LLM is nuanced, complicated, multifaceted,
and I'm going to have to put things together in a way that's not well understood, I wouldn't even bother trying to get the LLM to do it. And I have spent probably over 200 hours just trying to create custom GPTs, RAG setups, and combinations of different models and versions, all sorts of different things, to try to beat this information accuracy problem head-on.
And my conclusion so far is that that investment of time is just not worth it if all you need to do is learn the new knowledge. Now, if you're trying to create a knowledge bank, your own personal answer machine, because there's a body of knowledge, like a huge amount of documentation, that you'll have to come back to again and again as part of your work over the next months, then it can be worth it. But if all you need to do is learn about the thing so you can make decisions, solve problems, be in a good place to work with that information, and just be good at your job or do well in an exam, I don't believe the time investment in trying to set this up to be perfectly accurate is really worthwhile. You can spend five minutes just loading your resources into NotebookLM, studying off that, and accepting that there is going to be a level of information inaccuracy you may come across, and that the risk gets higher as you go into the deeper, more nuanced aspects of the topic. Now, the good news is that most people probably don't need to go that deep on most topics.
80 to 90% of people, 80 to 90% of the time, only need to learn the superficial top 50% of a topic. They don't need that final level of expertise. That's where the complexity goes up. That's where the error and the risk go up. But if all you're learning is the well-established part of the topic, the stuff that's unambiguous, the simple stuff, then your risk of information inaccuracy is in reality going to be very low. And so, coming back to this graph: if you always stay on the low-complexity side of the line, you're going to find it's generally pretty useful for you, and your risk is going to stay relatively low.
Now, the second biggest concern, based on this survey, was over-reliance on AI. I posted this survey on my YouTube and my LinkedIn, so the people likely to take it are probably people who have already seen some of my content, where I already talk about this kind of thing. But regardless, I'm very proud of you guys. I'm proud of you for being worried about becoming over-reliant on AI. And this is probably one of the most consistent themes that came through in the interviews and in the consultations I've been doing.
There is this general sentiment that problem solving ability is going down. Critical thinking is going down. People are getting lazy.
They're losing basic knowledge that they used to have. And they're starting to feel like if the AI can't solve the problem and do it for them, they can't either. One of the questions I asked in the survey was: "What are the issues you still have with your learning despite using AI? Yes, it's helpful for some things, but it's not solving all the problems. What problems are still left for you?" And the overwhelmingly common theme across several hundred responses was that using AI doesn't fix the core of the issue.
Difficult stuff is still difficult to understand. It's still hard to remember and retain information. And professionals trying to use that knowledge for their work find that, because the depth isn't there, it's hard to apply it to their own context and their own domains.
And these are exactly the same issues that existed before the AI hype. When I first started teaching people how to learn 14 years ago, this was the same list of major challenges people had. Whether you're getting information from AI, from a Google search, from a teacher, from a book, or from a hand-drawn tapestry written by a monk 400 years ago, wherever the information comes from, the bottleneck is still in our brain.
And the major issue, this is the biggest problem of all of this, is that we're often completely unaware of the fact that we are becoming dependent. And there are two questions that I asked in that survey that for me were not surprising at all, but very concerning. The first question was just, how helpful do you think AI is for your learning?
It was rated on a five-point scale. The median answer across several hundred responses was four, and 63% of people rated it either four or five: helpful or very helpful. Compare that with only 8% voting one or two out of five, not helpful or not at all helpful. In the question after that, I pointed out that something can sometimes feel helpful even if it isn't actually helping us. So when we think about using AI for learning, the outcomes we want are better retention, being able to understand something, and being able to apply it the way we need to apply it.
These are the outcomes we are learning for. And there are a lot of pseudo-outcomes that feel good but don't actually matter. When you're studying conventionally with a textbook, the number of pages of content you cover in a day feels very productive, feels important. But it doesn't matter unless it translates into an equivalent amount of information that you retain, understand at the right level, and can apply the way you need to. And it's the same thing with AI. You might cover a lot of content, ask a lot of questions, understand a lot of the explanations it gave, but at the end of the day, do you remember it?
Is your knowledge actually deep and can you use that knowledge in the way you need to? So the second question was asking about the outcomes. How helpful is AI when you actually think about the outcomes that are meaningful?
And just by asking that question, the survey results changed dramatically. On that same five-point scale, the number of people rating it five out of five, very helpful, halved for both students and professionals. The number of people rating it neutral went up. For professionals, the number rating it neutral went up by 100%; it doubled. And among the student cohort, the number rating it one or two out of five tripled, going from 39 people up to 120, and it more than doubled for professionals, going from 8 to 18. Now, one thing I will quickly note here is that professionals generally did find AI more helpful for learning than students did. The reason is probably that professionals have a very high amount of task-reactive learning to do.
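As a quick arithmetic check on those multipliers, using only the headline counts quoted above (the full per-response data is in the linked article):

```python
# People rating AI's helpfulness 1-2 out of 5, before and after the
# question was reframed around meaningful outcomes (counts quoted above).
students_before, students_after = 39, 120
pros_before, pros_after = 8, 18

print(f"students: x{students_after / students_before:.2f}")  # x3.08, roughly tripled
print(f"professionals: x{pros_after / pros_before:.2f}")     # x2.25, more than doubled
```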
What this means is that someone gives them a project, a task that they need to do. They need to learn just enough to complete the task. It doesn't really matter that they build great expertise.
They just need to deliver on an outcome. And so this type of task-reactive learning is really well suited to LLMs, because again, you're not usually working with information at a high level of complexity. You often just need to learn enough to get the job done. You're operating at low risk, and it's saving you a huge amount of time. And so task-reactive learning is something AI is really well suited for. Students, on the other hand, don't really have a lot of task-reactive learning. A lot of their learning is about having knowledge in their heads, ready to use, apply, and remember. But ultimately, when we think about how useful AI actually is for achieving the outcomes we actually need, these results suggest it is only a third to half as helpful as it feels. And that is the issue.
That is the thing that creates over-reliance. And there are two ways I want you to think about over-reliance, because over-reliance isn't always a bad thing. There is productive over-reliance, and then there is non-productive over-reliance.
Productive over reliance is when I'm relying on something that I don't technically need but it's saving me time or giving me some other benefit. I rely on my phone to communicate with people. I could send them a letter.
I could go to my computer and write them an email. I could bike or drive over to their house and just shout through their window. I rely on my calculator for doing arithmetic. I don't hold on to random facts and bits of information that I don't need to, because I can look them up. These are all examples of reliance. And you could say it's over-reliance, because if I don't have access to my phone, my ability to communicate with other people goes down a lot. I certainly can't do as much arithmetic without my calculator, and the body of knowledge I have access to, if I can't search the internet, goes down by something like 99.99999%. But is that an over-reliance that bothers us?
Probably not, because in achieving the outcomes we need, it is actually a benefit. This contrasts with non-productive over-reliance, which is the category AI use starts to fall into. Non-productive over-reliance is when we are relying on something to give us a certain outcome, but it doesn't. In the learning space, a classic example of non-productive over-reliance is relying on other people's notes, or on writing notes in a certain way or through a certain piece of software. We then feel like if we don't have our software, or this person's notes, or whatever the tool is, we're robbed of our ability to learn effectively. This is non-productive over-reliance because relying on those tools probably doesn't produce any real benefit in the first place.
But what it does do is provide a benefit on metrics that don't matter. If your metric of success in learning is how neat your notes look, or how many notes you can write down in an hour, then your favorite software, the one that lets you type faster, auto-summarize, and highlight things in different colors, could be the solution to that. But it's non-productive, because it doesn't translate into the outcomes we actually needed: retention, depth, and application. And so usually, the situations where we find ourselves sliding into non-productive over-reliance are when the metrics of success are not clear, which is the case with learning. The metrics of success are hard to measure. It's hard to measure your retention.
You actually have to wait and then test yourself. It's hard to measure the depth of understanding. To measure your depth of understanding, you have to try to apply the information at the level of depth and the level of interconnectivity that you need.
A lot of the time it's hard to apply your information in the first place unless you're being examined on it. If you need to learn something and just use it at work, solving problems and making decisions, it's hard to simulate those types of challenges very accurately. And because these metrics are hard to measure and not very clear, and we're also not used to thinking about them, it is more effortful to track our progress on them. So instead, we use other metrics that are easier to track and feel better, like content covered, or how long it takes to get through X pages. And so the trick here is that we have to protect ourselves against sliding into non-productive over-reliance. Part of that is just recognizing the difference between a productive and a non-productive metric.
And this allows us to make a more accurate judgment about how useful something actually is. By the way, if you're interested in having a deeper look through the results of that survey, there are some other findings in there that aren't in this video. I've also written a full article going through the findings and the key insights, which you might be interested in. And I've summarized some of those in my newsletter as well. If you haven't joined, it's a free weekly newsletter, and this article will be in there too.
So, you can sign up to the newsletter in the link below. I'll also leave a link to the full article if you want to check out the data for yourself. So, just being aware of the difference between these different types of metrics is going to help you to avoid falling into this trap.
Now, there is another way for you to avoid becoming over reliant on AI. And this is probably the most valuable. This is the strategy that if you lean into this, not only will you avoid over reliance on AI, you will be able to use AI in a way that other people can't.
It allows you to have a competitive advantage through using AI, as opposed to just keeping up with everyone else who's using AI. And to show you the strategy, I need to teach you just a little bit more about how learning works and how thinking works. But I'll keep it brief.
When we think about productive versus non-productive over reliance, one of the big questions here is how do we know what is going to be useful and relevant 5 years from now, 10 years from now? When the calculator first came out, you know, people said that you shouldn't rely on that too much because it stops you from being able to do arithmetic in your brain. And that's actually true.
But as it has played out, there just aren't a lot of situations where I'm far from a calculator and also need to be doing advanced arithmetic mentally. And so, how do we know that the way we're using AI is actually going to be harmful? Maybe the secret is just to be able to use the AI better and faster than everyone else. Then you can bypass the whole need to remember things, understand things, and apply that knowledge. Why would you even need to do that if the AI can do it for you, and you can figure out how to make the AI do it for you? That's actually a very valid line of reasoning.
And to understand the answer to that, we have to understand a little about how the human brain works versus how AI works. The first thing, as I mentioned, concerns large language models specifically. AI is actually a huge field with many different strands, but what we're talking about right now is LLMs, the technology that has taken the world by storm; most of the hype around AI is really about large language models. As I mentioned, they mostly work off probability. What an LLM doesn't have is a conceptual understanding of information, and it doesn't really have a sense of reasoning per se. Its reasoning is very, very basic. It doesn't have a great deal of problem-solving and reasoning capability, and this is a limit of the technology at the moment. The current architecture is actually far from being able to do this really well. There's some interesting work coming out on incorporating knowledge graphs and similar ideas, but it's still really, really far from matching a human.
What we're talking about would be a major new breakthrough, on the same level that LLMs and ChatGPT were to begin with, maybe a couple of levels beyond that, for AI to reach a conceptual reasoning ability equivalent to a human's. And this is a technology that, and I might be wrong on this, but from my conversations with AI experts and my understanding of the industry, is not going to arrive for years. What will happen is that LLMs will get better at what they're already good at.
They're already great at working with a very high volume of input, high amounts of information. They're really good at finding trends in that information, and really good at recalling from that high volume of input in a way that's contextual. They will get faster and cheaper, and hopefully more environmentally friendly, at doing this stuff over time. And what that means is that the LLM contributes a certain value: its ability to work with high volumes of information, to recall from it, and to do that very basic level of knowledge application.
And what that therefore means is that that value is already occupied by AI. As a human, it's not valuable for you to be able to do that. You know those videos of people who can do mental math really, really fast, with someone next to them on a calculator, doing four-digit multiplication faster than the calculator can keep up?
Extremely impressive. Superhuman ability to do that. Really impressive.
But not practically or commercially that useful. And it's the same thing if you graduate from university and all you have is lots of facts you can recall and some basic knowledge application: that's not going to be very useful, because the AI can do it a million times faster and cheaper than you. And so where is the gap? The gap is in the things the AI is going to struggle with, which is this region here.
So this is where human value is going to concentrate. If you can't do this, it's going to be hard to get a job. It's going to be hard to progress through your job.
And more and more, your employers are going to expect that you can do this, because why would they want you to just do the same thing the AI is doing? They don't need that from you. And so then we think: well, what is it that a human does that allows them to reason, to think conceptually, and to put together complex things into a big picture? What is a human doing that allows them to do that better? And how can you get better at doing it? And this is how we avoid over-reliance: we become aware of the processes of thinking that are actually productive, that help us do this stuff, and we get better at them.
We do this ourselves. We deliberately don't rely on AI to do this type of thing for us: number one, because it sucks at it, and number two, because we should be getting better at it. We need to be getting better at it. And a simple way to think about what those processes are, so you can use this as your mental checklist (it's the same mental checklist I use), is the top three levels of something called Bloom's taxonomy.
And I've got other videos going into this in more detail if you want to check them out. But really quickly: there are six levels of thinking that were identified by Bloom and later revised by other researchers. The bottom level here is called memorize. Memorizing is just trying to read something again and again to stick it in your head. Very low level, very passive, not very effective, pretty much a waste of time. This is not a process you should really be using.
The next level here is called understand. Understand is literally just trying to understand what you're reading or listening to. Probably most of you right now, as you're listening to me, are just trying to understand what I'm saying. This is also not actually a very effective process. And the reason is that retaining information and understanding information are outcomes. Right? If these are the two outcomes, remembering something and understanding something, they do not come about as a result of using the processes called memorize and understand. That's confusing, and this is the part that really trips people up. Trying to memorize something and trying to understand something are not processes that are effective at generating memory and understanding. It's probably better to use different words to completely dissociate them. So let's keep calling this process memorizing, and instead of understanding, let's call that process comprehending. Sorry, Bloom, rest in peace; I'm changing your pyramid. If we call this process comprehending, and we say that the outcomes are retention and understanding, then we can see that memorizing does not lead to retention, and comprehending does not lead to understanding. What does lead to understanding and retention are actually the higher levels above.
So in the middle here, we have the level called apply. Apply is when you're using your information to solve problems and execute on things. But there are actually many levels of application. You can apply things in a very simple way, a one-to-one relationship: I learned this thing, and I apply it specifically, exactly for that singular purpose. Or you can apply things in a very complex way, bringing this piece in and combining it with ten other things to solve a very intricate problem. And so even though the word just says apply, this level is technically only for simple application, that one-to-one, very direct application, and it's also of limited usefulness. You don't really want to spend a lot of time doing this kind of thinking. These three levels are fair game for AI. If all you're trying to do is wrap your head around something, paraphrase it so you can comprehend it a little more quickly, or apply one fact to something else without having to think about it too much, use an LLM for that.
It's going to be good at it. It's going to be faster at it. And these are skills that you don't need to develop.
You don't have to be good at this. We're going to enter into a world where AI is just going to do that for you. And your ability to do that is not going to be important.
But the levels above this, this is the stuff that, if you try to get an LLM to do it, it's not going to give you a good answer. It's actually going to be worse than a human trying to do it. And for that reason, this is what's valuable for you to get good at.
So the level above this we call analyze. Analyzing, at the end of the day, is about looking for similarities and differences. It's about comparing. It's about taking two things and asking: in what ways are they similar, and in what ways are they different? And you're finding all the different types of similarities and differences. So take this mug, which is shaped like a camera lens (which I really like), and this Apple Pencil, and ask, "How are these two things similar and different?" There are lots of similarities and differences. The shapes are different, but the temperature is the same, and those are not the only ones. Being good at analyzing means you're able to find many different types, many different categories, of similarities and differences. What this does mentally is allow your brain to create relationships between different items and different pieces of information.
So when you're reading a sentence on a page, don't just try to comprehend it. Try to think about, well, how is that information different or similar to this other paragraph or the same concept that was explained over here instead? Or how is that similar or different to what I already know?
And by doing that, it takes a little bit longer, but the outcomes that actually matter, your retention, your depth of understanding, and your ability to apply it, will grow. That's where the learning actually happens. It doesn't happen from understanding what you read.
It happens from thinking about how it relates. That is the source of the learning. The next level above this is evaluate.
Evaluate is when you not only recognize that there are similarities and differences, but then you prioritize them. So yes, these two things are similar and they're related together in this way, but how important is that relationship? How important is this similarity or this difference?
And that is context-dependent. Maybe for one application it's really important, but for another application it's totally meaningless. It's this process of actually critiquing value, prioritizing, and making judgments about how important different things are and how they fit together.
This is really where we start getting into a special level of thinking and learning. This is where you can start solving the really complex problems. This is where you can engage in those deep discussions.
When you're used to thinking in this way and someone says something to you, you're thinking: how is that different from what I already know? You recognize a difference and say, "Okay, the implication of that difference means it can influence this, and this, and this, so that is a very important difference, whereas this one is not." That leads you to ask a really good question. That leads you to understand how things connect together, deeper than the people around you.
This leads to better retention, better problem-solving, and deeper understanding, and, as a result, again, more time spent thinking. This is why, when you look at a top learner, they spend a lot of time just thinking, asking questions, maybe going back and forth exploring thoughts, and less time just mindlessly consuming information. And so that's evaluate.
And the final level, level six, is create. This is where you take your knowledge and hypothesize something new. You synthesize something new. You're creating a new, novel, original plan, strategy, solution, or design for this particular problem or project. It's not about learning anything else anymore. It's about using what you know to bring it together and synthesize it.
And maybe to do that well, you need to learn something. But that's a natural part of your primary purpose, which is to bring it all together. So when you do this, when you operate in these top three levels, these are the things that AI is really bad at.
So bad that I don't even bother trying to get it to do this. Yes, it will output something, but it's just not very high quality. I have never, and granted, AI hasn't been around for that long, but over the last few years I've never seen a single example where an AI has been able to output something at this higher order at a better quality than a skilled human. You want to be that skilled human. And those processes I just explained to you, I didn't just explain them for fun.
The reason I explained them is that they become your mental checklist. Whenever you're doing any kind of learning or problem-solving and you're taking in new information, ask yourself: what part of this is hard? Is it hard just because there's a lot of information, a lot of inputs, and I want it summarized? If the answer is yes, feel free to use AI; that doesn't require any of these top processes. But if what you're struggling with is bringing it together, comparing the differences, figuring out what is more or less important, synthesizing a map, get used to doing that yourself.
Don't offload that onto AI. Save time with everything else. Feel free.
Save time with all the other things. If it's just tedious, monotonous work that doesn't take a lot of mental effort, offload it and save that time.
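If it helps, here's that mental checklist written out as a tiny function. The level names come from the Bloom's taxonomy discussion above; the function itself is just an illustrative sketch of the decision, not a real tool:

```python
# Bottom three levels of (the modified) Bloom's taxonomy: fair game to
# offload to an LLM. Top three: do them yourself; that's where human
# value is concentrating.
OFFLOAD_TO_AI = {"memorize", "comprehend", "apply"}  # simple, one-to-one apply
DO_IT_YOURSELF = {"analyze", "evaluate", "create"}

def who_should_do_it(level: str) -> str:
    """Given the kind of thinking a task needs, say whether offloading
    it to an LLM is productive reliance or self-sabotage."""
    if level in OFFLOAD_TO_AI:
        return "offload to AI: it's faster, and this skill won't differentiate you"
    if level in DO_IT_YOURSELF:
        return "do it yourself: AI is bad at this, and you need the practice"
    raise ValueError(f"unknown level: {level!r}")

print(who_should_do_it("comprehend"))  # offload
print(who_should_do_it("evaluate"))    # practice it yourself
```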
But when it comes time to use your brain and think about things deeply, don't shy away from it, because every time you decide to offload that to AI, you are robbing yourself of an opportunity to get better at that skill, and that is career self-sabotage. So that is my guide on how you can use AI to learn effectively. If you want to check out the full report, with some other insights from the survey I've done, the link to the article is in the description.
If you want to sign up to the newsletter, feel free to do that as well. If you want to learn a little bit more about doing these higher levels of thinking here and getting really good at doing this, then you may want to check out this video where I go into this topic of higher order learning in much more detail. Thank you so much for watching.
I hope this helped and I'll see you in the next one.