Almost all students use AI [source]. By the time many of you read this post, it may be safe to remove the word ‘almost.’ AI in higher education is a thing.
In the past few years, generative artificial intelligence has rapidly transformed the landscape of higher education, and it has stirred profound anxiety among academics and institutional leaders. A global survey of more than 1,600 university faculty found that 82% of instructors worry that students are becoming overly dependent on AI tools [source], with many seeing this reliance as a threat to core learning outcomes such as critical evaluation and independent reasoning. Over half of those surveyed ranked this concern as significant, signaling deep unease about the technology’s impact on educational quality.
Concerns are echoed in institutional studies: in one report, 78% of higher education administrators feared that AI could negatively impact academic integrity, and over half believed it could undermine critical thinking skills, even as AI adoption continues to grow [source]. Students are not indifferent to the shift, either. While a majority now use AI regularly in their academic work, many express confusion and worry about its ethical use and the implications for academic honesty, with roughly half of students reporting concerns about cheating related to AI [source].
These data paint a clear picture: AI isn’t a fringe concern in academia. It’s a widespread disruptor that’s challenging long-standing assumptions about learning, assessment, and the value of advanced degrees.
I know that the default reaction from many of my colleagues in academia would be something along the lines of “AI is bad, and it should be banned from classrooms!” In fact, AI is often treated as a threat, a form of cheating, or an academic shortcut.
However, I think that perspective misses the point entirely.
AI doesn’t make students dumber; it just reveals problems in education.
That said, I think this is a real opportunity for us, the new generation of professors, to rethink what learning in higher education should measure in this new era.

The Real Problem Is Not AI. It’s What We Reward
The fear that AI will “devalue” education assumes that education’s value lies in tasks such as memorization, summarization, and routine problem solving. Unfortunately, that assumption is largely untrue today. (To be fair, it has not really held since the Industrial Revolution.)
Most grading systems still reward students for:
- Memorizing fragmented knowledge for quizzes and exams
- Writing essays that primarily summarize existing sources
- Solving homework problems that closely resemble worked examples
These were reasonable proxies for learning when human cognition was the limiting factor. They are no longer meaningful when a $20/month tool can outperform most students at all three.
AI does not weaken education. It exposes that we have been measuring the wrong things.
Policing AI Is Educationally Backward
Many institutions have responded by tightening rules: banning AI tools, redesigning exams to be “AI-proof” (e.g., reviving blue book exams), or classifying AI usage as misconduct.
This approach is understandable, but perhaps fundamentally misguided.
The modern workforce already relies heavily on AI. Researchers use it to explore literature, engineers to test designs, lawyers to draft arguments, and analysts to interpret data. Professors themselves increasingly use AI for coding, writing, and research.
Teaching students that AI is something to hide from is equivalent to teaching accounting students not to use spreadsheets; literature students not to use word processors (or typewriters); engineering students not to use calculators or MATLAB; or data science students not to use Pandas.
In fact, it is abundantly clear at this point that the world of work today’s students will enter will rely heavily on AI, if not be centered around it. Just as working without computers or smartphones is unthinkable to us, a world without AI is no longer a possibility for the next generation.
Hence, we are not protecting learning by banning AI. We are training students for a world that no longer exists.
A Better Question: What Should Humans Still Be Responsible For?
If AI can generate text, write code, and summarize entire libraries, what remains uniquely human? Typing speed? Memorization? Minor variations on familiar solutions? Hardly.
Instead, the irreplaceable skills are going to be:
- Judgment under uncertainty
- Evaluating reliability and bias
- Synthesizing conflicting or incomplete information
- Making principled tradeoffs
- Solving complex problems with messy objectives, parameters, and constraints
- Orchestrating the components of large, interconnected systems
These are the abilities that determine real-world impact and leadership. And they are precisely what traditional assignments struggle to measure (or even acknowledge).

From “What You Produce” to “What You Can Do With It”
In the past, assignments were about production. Can you write an essay? Solve a problem? Summarize a paper? These questions made sense when humans were the bottleneck. Now, AI can generate text, analyze datasets, and summarize articles—in a manner that is generally more accurate (and definitely faster and at a larger scale). By traditional measures of education, we are just producing a workforce that is destined to be replaced by AI.
Instead, the skill we should be testing is human judgment, creativity, and leadership:
- What is a big opportunity, given that your ability will be augmented by AI?
- What are the strengths and weaknesses of AI, and how do you leverage them for your big idea?
- How do you break down a problem into smaller chunks and generate a prompt accordingly?
- Are you able to evaluate the quality of AI outputs? What can you do to improve the quality of AI outputs?
- How do you synthesize massive information into meaningful conclusions?
- Can you spot errors, contradictions, or bias?
The goal shifts from doing the work to orchestrating and reasoning about the work.
Education at Unprecedented Scale
One productive way forward is to redesign assignments so that they operate at a scale no human could realistically manage alone.
For example, instead of asking:
“Read five papers and summarize them.”
We may ask:
“Analyze five hundred articles from different parts of the world that are written in all kinds of different languages and identify how an international policy evolved over the past twenty years.”
Or maybe, instead of:
“Memorize Newton’s second law of motion (F=ma) and apply it to practice problems.”
We may ask:
“Here is a massive collection of videos showing objects of different masses pushed with varying forces. Using AI, extract the kinematics (position, velocity, and acceleration) from the footage, and infer the equation that best explains the relationship among force, mass, and the kinematics parameters.”
AI handles the mechanical labor. Students are evaluated on:
- The patterns they identify
- What they choose to emphasize
- What they discard as irrelevant or misleading
- How they resolve contradictions
- How coherent, rigorous, and defensible their conclusions are
The assignment becomes impossible to complete without AI and meaningless to complete without thinking.
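To make the physics example concrete, here is a minimal sketch of the inference step a student might perform after the AI has extracted the kinematics. The data here is synthetic (a hypothetical stand-in for what video analysis would yield), and the approach shown, fitting a power law F = k·mᵅ·aᵝ by least squares on log-transformed values, is just one reasonable way to let students "discover" that α ≈ β ≈ 1:

```python
# Sketch of the inference step for the F = ma assignment.
# The (mass, acceleration, force) triples are simulated here; in the
# real assignment they would come from AI-extracted video kinematics.
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(0.5, 5.0, 200)           # masses (kg), hypothetical
a = rng.uniform(0.2, 4.0, 200)           # accelerations (m/s^2), hypothetical
F = m * a * rng.normal(1.0, 0.02, 200)   # "measured" forces with 2% noise

# Hypothesize a power law F = k * m^alpha * a^beta and fit
# log F = log k + alpha*log m + beta*log a by least squares.
X = np.column_stack([np.ones_like(m), np.log(m), np.log(a)])
coef, *_ = np.linalg.lstsq(X, np.log(F), rcond=None)
log_k, alpha, beta = coef

# Recovered exponents should both be close to 1, i.e. F ≈ m * a.
print(f"alpha ≈ {alpha:.2f}, beta ≈ {beta:.2f}")
```

The point of such a task is not the regression itself (which AI can also write) but the judgment around it: choosing the model family, deciding what counts as noise, and defending why the fitted exponents support Newton's second law.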
Wait a second, that sounds like an awful lot of work for instructors?!
Not necessarily. In fact, the same technology that enables students to operate at unprecedented scale can dramatically reduce the burden on faculty as well. AI can help generate datasets, simulate scenarios, translate materials, cluster student outputs, flag anomalies, summarize common patterns, and even surface representative examples for review. Instead of grading hundreds of near-identical essays line by line, instructors can focus on evaluating higher-level reasoning: the quality of questions students ask, the assumptions they make, the evidence they prioritize, and the logic of their conclusions.
AI doesn’t just expand what students can do. It also makes these new forms of assessment practical for instructors to run.
Instant Feedback and Iterative Learning
Here’s another radical idea (perhaps not so radical, since Khan Academy’s AI tutor Khanmigo has already pioneered this concept). Traditional education runs on delayed feedback. Students submit homework. Days or weeks later, they receive a grade. By then, the moment of confusion has passed, the mental context is gone, and the opportunity for correction has largely evaporated.
AI collapses this loop.
Students can:
- Test an idea immediately
- See counterexamples
- Ask for alternative explanations
- Revise their reasoning in real time
- Explore “what if” scenarios
Learning becomes closer to scientific experimentation than to performance on scheduled checkpoints.
This matters because expertise is built through iteration, not one-shot correctness. AI allows students to fail cheaply, repeatedly, and productively.
Here’s a really inspiring video from Sal Khan, the founder and CEO of Khan Academy, that you may want to watch:
Personalization Without the Cost of Private Tutors
For centuries, education has faced an unavoidable tradeoff—either teach a small number of students well, or teach many students approximately.
AI weakens that constraint.
Students can receive:
- Explanations tailored to their background
- Multiple representations of the same concept
- Adaptive practice at appropriate difficulty
- Clarification in the moment confusion arises
This does not replace human instructors. It amplifies them.
Instead of spending time repeating explanations, faculty can focus on:
- Designing better problems
- Interpreting student misconceptions
- Guiding projects
- Mentoring judgment and research thinking
Mass education no longer has to mean uniform education.
Thinking With Tools: A New Form of Literacy
We often frame AI as “doing the thinking for students.”
A more accurate framing is that AI changes how thinking is distributed.
Students increasingly reason in partnership with tools that:
- Store memory
- Generate hypotheses
- Simulate outcomes
- Surface alternatives
- Expose contradictions
This is not new in spirit. Calculators, search engines, spreadsheets, and programming languages already changed cognition. AI simply extends this trend dramatically.
The new literacy is not memorization. It is:
- Asking precise questions
- Structuring problems
- Evaluating reliability
- Debugging flawed reasoning
- Knowing when not to trust the output
Education should explicitly teach this form of tool-mediated reasoning, because it is how real intellectual work is now done.
Access to Real Complexity
Classroom problems are often simplified not because reality is simple, but because students lack the tools to handle complexity.
AI changes that.
Students can now engage with:
- Messy historical corpora
- High-dimensional policy tradeoffs
- Noisy real-world datasets
- Interacting causal systems
- Conflicting interpretations across cultures and time
This allows coursework to resemble research, policy analysis, or professional practice instead of artificial exercises.
The educational question shifts from:
“Can you follow the procedure?”
to:
“Can you make sense of a system that resists clean answers?”
Why This Matters
Together, these changes redefine what “learning” means. Not absorbing information, not reproducing procedures, not avoiding tools.
But:
- Iterating on ideas
- Navigating uncertainty
- Integrating evidence
- Exercising judgment and creativity
- Collaborating with machines
AI is the doorway.
But these deeper transformations are the destination.
A Different Incentive System
If students are rewarded for memorization, they will memorize. If they are rewarded for test scores, they will optimize for test-taking (hence the familiar question: “Is this going to be on the exam?”). As long as the incentives remain unchanged, students will always find a way to game the system. Students are not dumb; when it comes to gaming a system, they are remarkably good at it.
Change the incentives, and behavior changes with them.
If students are rewarded for judgment, synthesis, and robustness, they will optimize for those instead. In this sense, AI is not an obstacle to learning—it can be an enabler. It allows students to spend less time on mechanical tasks and more time developing the higher-order skills we actually care about.
AI gives us a chance to realign incentives with what education has always claimed to value:
- Thinking, not copying
- Understanding, not reproducing
- Insight, not volume
Conclusion: Education After the Bottleneck Is Gone
For centuries, education has been constrained by what a single human mind could reasonably process.
Today, that bottleneck is disappearing. We have been handed a rare opportunity to rethink learning at its foundations—if we, as professors, are willing to move past discomfort and defensiveness and proactively redesign how we teach and what we reward.
We can pretend the old limits still exist.
Or we can design education around what humans do best when those limits are lifted.
AI will not make students less capable.
But refusing to change how we teach might.