Curiosity, critical thinking, and self-regulation matter more than ever in the AI age
Economists have long recognized the importance of human capital—the skills and knowledge embedded in people. More than a century ago, Alfred Marshall wrote that “the most valuable of all capital is that invested in human beings.” Thinkers outside economics have made the same point. The philosopher Michel Foucault, reflecting on the economic rise of the West in the 16th and 17th centuries, asked, “Was it not due precisely to the existence of an accelerated accumulation of human capital?” It’s no exaggeration to say that human capital explains the rise in living standards generation after generation in modern societies.
Recent advances in artificial intelligence have raised concerns about the displacement of human capital. Will AI and human capital operate as complements, making each other more productive? Or will they be substitutes? Three critical but sometimes overlooked components of human capital—curiosity, critical thinking, and self-regulation—can help answer these difficult questions.
Curiosity
Imagine taking all the data ever recorded by humans up to the year 1939 and feeding it into a large language model (LLM). The year 1939 is significant because it falls shortly before Paul McCartney and John Lennon were born, and indeed before they were even conceived. Suppose we then asked that LLM to create songs matching the adjectives music critics would later use to describe The Beatles. Would it produce “Yesterday”?
Here are two reasons it wouldn’t. First, there wouldn’t be enough information to predict the creative output of the two yet-unborn Liverpool lads. The Lennon-McCartney songs were inspired by their life experiences, yet prior to the musicians’ existence there would be few clues about what those experiences would be. Moreover, we couldn’t confidently predict that John and Paul would even exist—we wouldn’t know which of their fathers’ millions of sperm cells would fertilize their mothers’ eggs.
Second, without giving specific details of the songs, our prompt would be far too vague. “Yesterday” has been described as melancholic, timeless, elegant, lyrical, and intimate. Those words may sound right, but they don’t narrow down the possibilities by much. So, before The Beatles existed, AI could not have created their music by prediction: We would have missed out on what some consider one of the best rock and roll songs. The same could be said of the work of your favorite painter, writer, or sculptor—anyone born after 1939.
Now think of today rather than 1939. For the same reasons, an LLM fed with all the information available up to this moment would be no substitute for the talent, creativity, and curiosity of future creators. Although AI may do a decent job of recombining old data (past books, records, and images), it cannot anticipate human creations yet unseen.
This notion extends beyond art. As an example, consider the policy question “What can be done to reduce gun violence in Chicago?” An LLM would answer with a summary of previous studies and perhaps highlight those more applicable to that city, but it wouldn’t empirically test new ideas to give a previously unknown answer. On its own, AI is not going to design a policy intervention, get funding for it, prepare survey enumerators, visit households, persuade participants to respond, and so on. Humans do that—and they are driven to do it by their intellectual curiosity. It is our curiosity that increases the stock of knowledge AI depends on.
We are bound to reach a point when all the information available has been fed into LLMs—a situation called “peak data.” After that, without new information (for example, studies on new strategies to prevent gun violence), the output of LLMs won’t improve much. If everyone decided to rely on what LLMs say rather than financing and conducting new research, we would soon be stuck with outdated studies—clearly an undesirable situation. Peak data implies that for AI to get better and better at answering questions, we humans must keep pushing the knowledge frontier, continuing to ask and answer new questions. We must remain creative and curious.
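A back-of-the-envelope sketch makes the point concrete. Suppose, purely for illustration, that a model’s error falls with training data according to a power law with an irreducible floor (the functional form and constants below are assumptions, not measurements):

```python
# Toy data-scaling curve (an illustrative assumption, not a measured law):
# error(D) = floor + B * D**(-alpha), where D is the stock of human-produced data.
floor, B, alpha = 0.05, 2.0, 0.3  # hypothetical constants

def error(data_volume: float) -> float:
    """Model error as a function of training-data volume."""
    return floor + B * data_volume ** (-alpha)

for D in (1e3, 1e6, 1e9, 1e12):
    print(f"data = {D:.0e}  ->  model error = {error(D):.4f}")
# Each thousandfold jump in data buys less improvement; once D stops
# growing ("peak data"), the error curve stops moving with it.
```

Under these toy assumptions, each new tranche of data helps less than the last, and when the supply of fresh human knowledge stalls, so does the model.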
A financial market analogy underscores this point. Consider the efficient market hypothesis, famously postulated by Eugene Fama. The idea is that prices incorporate all the information available; therefore (privileged information aside) you cannot beat the market. This notion was later refined by Sanford Grossman and Joseph Stiglitz, who posed an information paradox: If prices already reflect all available information, investors have no incentive to gather and analyze information. But if nobody gathers such information, how can it be reflected in prices? Market participants produce and process information because there are benefits from doing so, and prices reflect such information—though not perfectly or instantaneously.
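The incentive logic can be sketched numerically. The snippet below is a minimal toy, not the original Grossman-Stiglitz model: assume informed traders pay a research cost, their edge shrinks as more traders become informed, and the market settles where the edge just covers the cost.

```python
# Toy version of the Grossman-Stiglitz information paradox.
# Assumptions (illustrative, not the 1980 model): a fraction `share` of
# traders pays cost C to research; the payoff to being informed shrinks
# as prices become more informative: edge(share) = A / (1 + K * share).
A, K, C = 0.10, 9.0, 0.04  # hypothetical parameters

def edge(share: float) -> float:
    """Excess return to an informed trader when `share` of traders are informed."""
    return A / (1 + K * share)

# Bisection: find the share at which the edge exactly covers the research cost.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if edge(mid) > C:   # research still pays -> more traders become informed
        lo = mid
    else:
        hi = mid

print(f"Equilibrium informed share: {lo:.3f}")            # ~0.167
print(f"Edge at that share:         {edge(lo):.4f} (cost = {C})")
```

Some traders, but never all, stay informed: prices end up informative but not perfectly so, and that residual is precisely what keeps research worthwhile.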
Similarly, AI may incorporate all available information at a given time, but to remain relevant and improve, it needs people to keep producing new knowledge. From this perspective, curiosity and AI are complements, not substitutes. In the long run, AI will improve only if humans develop more and better ideas.
Critical thinking
In his 1845 Economic Sophisms, Frédéric Bastiat describes an interesting dichotomy between hard sciences and social sciences. Hard sciences, he argues, can be known only by scholars, and “despite his ignorance, the common man benefits from them.” The practical application of the social sciences, however, concerns everyone and “no one admits ignorance of them.” While people tend to accept the words of experts in the hard sciences without hesitation, they seldom do so when it comes to the social sciences. Regular folks don’t claim to know a better way to build computer chips or airplane engines, but they often claim they could improve the tax system or fight poverty more effectively. Bastiat’s dichotomy extends to our interaction with AI.
If you ask an LLM to solve a mathematical problem, you get a simple and direct answer. Your judgment isn’t required. Your preconceptions don’t affect your interpretation of the information you get. In the social sciences and humanities, that is often not the case. Consider asking an LLM the following questions: How do I know someone is in love with me? Is there a God? Should I have children? Who should I vote for in the presidential election? LLMs will provide answers, but they will remix what others have said throughout history—nowhere near a definitive answer. It’s up to us to weigh the arguments and make a judgment. In this sense, critical thinking becomes essential.
There is another reason critical thinking matters. Psychologist Donald Campbell warned that “the more any quantitative social indicator is used for decision-making, the more subject it will be to corruption pressures.” Campbell’s law also applies to AI. Because so many people rely on LLMs, bad actors are motivated to contaminate their training data with disinformation—a process called “data poisoning.” So, even at the most basic level, the information provided by LLMs may be misleading. Knowing this, we must remain vigilant. Critical thinking is key in this process.
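A stylized example shows how little it can take. In the hypothetical sketch below, a “model” answers a question by echoing the most common claim in its training corpus; flooding that corpus with a false claim flips the answer.

```python
# Toy demonstration of data poisoning (illustrative; real attacks on LLM
# training pipelines are subtler, but the incentive Campbell's law
# describes is the same).
from collections import Counter

def model_answer(corpus: list[str]) -> str:
    """A 'model' that simply repeats the most common claim it was trained on."""
    return Counter(corpus).most_common(1)[0][0]

clean_corpus = ["the policy reduced violence"] * 90 + ["the policy failed"] * 10
print(model_answer(clean_corpus))   # -> "the policy reduced violence"

# A motivated actor floods the training data with the opposite claim.
poisoned = clean_corpus + ["the policy failed"] * 120
print(model_answer(poisoned))       # -> "the policy failed"
```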
Self-regulation
AI can summarize vast amounts of information to guide our decisions, but it doesn’t control what we actually do. We are fallible and often give in to our emotions. An LLM can generate the perfect personalized workout plan for you, but its success depends on your discipline: Can you stick to the plan even when you don’t feel like exercising? AI can tell your colleague how much money she should save every month for retirement or tell your neighbor how much alcohol to drink at parties, but they may fail to follow its advice, even if they know it’s right.
Economists since Adam Smith have acknowledged human fallibility. In his 1790 book, The Theory of Moral Sentiments, Smith explains: “The qualities most useful to ourselves are, first of all, superior reason and understanding…and secondly, self-command, by which we are enabled to abstain from present pleasure or to endure present pain, in order to obtain a greater pleasure or to avoid a greater pain in some future time.” So it’s not only about knowing what’s good for us. It’s also about having sufficient self-regulation to do what it takes to achieve it.
Smith’s point is crucial when we think about the wide variety of human activities economists call “household production.” This term means that we usually don’t consume what we buy “as is.” We transform it with time, effort, and skill. We may buy a stationary bicycle, but we need to ride it. The same goes for a book, meal ingredients, and even relationships. We must devote time, effort, and skill to get from them what we actually want. This process is subject to the weakest-link problem modeled by Michael Kremer in his O-ring theory, named after the faulty seal whose failure caused the space shuttle Challenger disaster in 1986. In this context, other inputs cannot substitute for the effort, time, and skill people contribute. It doesn’t matter how fancy your gym is if you never show up. We can apply this principle to AI: As it gets better, the weakest link will be our ability to follow through on what we know is best for us. Thus, the benefits of self-regulation will increase as AI becomes better at giving us information.
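The weakest-link logic is easy to sketch. In the toy calculation below (the numbers are made up, in the spirit of Kremer’s multiplicative production function rather than its full form), output is the product of the quality of every input, so one failing link drags everything down:

```python
# O-ring-style production: output is multiplicative in task quality,
# so the weakest link dominates. All numbers are hypothetical.
from math import prod

def output(qualities: list[float], scale: float = 100.0) -> float:
    """Value produced when each task i succeeds with quality q_i in [0, 1]."""
    return scale * prod(qualities)

ai_advice      = 0.99   # the plan, the information, the gym, the app...
follow_through = 0.99
print(output([ai_advice, follow_through]))   # ~98.0: everything clicks

slacking = 0.10         # perfect advice, but you rarely show up
print(output([ai_advice, slacking]))         # ~9.9: the weak link wins
```

No improvement in the advice term can rescue output when the follow-through term collapses, which is exactly why self-regulation grows more valuable as AI improves.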
Human capital
Curiosity, critical thinking, and self-regulation are forms of human capital that grow when we are encouraged—repeatedly and deliberately—to be curious, think critically, and self-regulate. If you doubt they can be developed, consider the opposite: School systems or workplaces that discourage questioning, reflection, and autonomy clearly erode those skills.
To readers worried about the singularity—the moment when AI surpasses human intelligence and becomes capable of improving itself—talking about LLMs may seem naive. After that moment, AI could become like a new species on Earth. We can speculate about two future scenarios of human-AI interaction. In one, humans and machines are adversaries, as in the Wachowskis’ film The Matrix. Each generation’s human capital would be the only way to fight back, making its accumulation a priority. In the other scenario, AI and humans peacefully coexist. What would our interactions with superintelligent beings look like?
In some sense, humans have already experienced such interactions when working for large organizations. These “superior beings” are self-interested and command brainpower far greater than any single human’s. Still, they compensate us for using our knowledge and skills to serve their goals. If our relationship with post-singularity AI resembled our relationship with such organizations, then investing in human capital would still yield benefits. In this coexistence scenario, some people might choose to establish AI-free communities. Those low-tech places would rely on their members’ human capital. Thus, whether one thinks apocalyptically or not, the case for investing in human capital is strong.
Returning to the present: Meta’s newsworthy efforts to recruit human talent to develop more powerful AI technologies—offering exorbitant compensation packages—show how crucial human capital is today. The age of human capital hasn’t ended; it continues to evolve. Think of the mechanization of agriculture, the automation of manufacturing, and now the “algorithmization” of services. Each stage has freed human capital in some areas and demanded more in others.
But these stages shouldn’t be seen as independent processes. The human capital displaced by tractors, irrigation, and fertilizers made the manufacturing boom possible. Production lines with automated processes made the services boom possible. AI will make the next boom possible. Just because we cannot imagine it from where we stand today doesn’t mean it won’t happen. Picture our great-great-great-grandparents trying to imagine what Google or Nvidia does today. As before, human capital will remain relevant—just in new and perhaps hard-to-foresee ways. There will be new sectors in the future and plenty of value created in them by the skills and knowledge embedded in people.
Opinions expressed in articles and other materials are those of the authors; they do not necessarily reflect IMF policy.