Roshan Singh • 7 February 2026 • 8 min read
STOP Asking ChatGPT to Explain: Make It Interrogate You (Or You’ll Stay Average)

Most students use AI like a kinder version of a coaching sir.
They paste a question, ask “explain”, read a neat solution, and feel smart.
That feeling is the problem.
Because JEE is not a reading exam. It is a performance exam. Under time pressure, with anxiety, your brain has to pick the right tool, set up the right equation, and execute without hand-holding.
If AI is doing the picking and the setup for you, you are not learning. You are watching.
You are training recognition. The exam demands retrieval.
This post is a hard pivot: stop using AI as a teacher. Use it as an interrogator.
Not for motivation. Not for “clarity”. For pressure. For recall. For decision points.
The ugly truth: explanations create the illusion of competence
When you read an explanation, a lot of the work is already done:
- The right concept is selected.
- The right representation is chosen.
- The path is laid out in clean steps.
Your brain nods along and mistakes fluency for mastery.
Cognitive science has been screaming about this for decades. People routinely overestimate their learning when the material feels easy while studying.
The fix is not “work harder”. The fix is to force your brain to produce, not consume.
Two findings matter here.
- Retrieval practice beats re-reading. When you try to pull information out of memory, you strengthen access routes, and you also expose what you do not know.
- "Desirable difficulties" work. Learning improves when practice is effortful in the right way. Not random suffering, but challenges that make you retrieve and discriminate.
This is why practice tests, spaced redos, and mixed problem sets work. They are not "extra". They are the mechanism.
If AI is removing difficulty by giving you a smooth explanation too early, it is sabotaging the mechanism.
Your real bottleneck is not knowledge. It is selection.
Coaching sells knowledge: more theory, more lectures, more notes.
JEE punishes you somewhere else: selection.
Selection is the moment you look at a problem and decide:
- Which principle applies?
- Which variable should be eliminated?
- Which approximation is allowed?
- Which method is fastest, and which is a trap?
A student can “know” the chapter and still fail because they cannot select under pressure.
Explanations hide selection. They start after the selection is already made.
If you want rank movement, you must train the selection step.
And this is where AI can be a weapon, if you use it correctly.
The rule: AI is not allowed to solve until you have committed
Here is the protocol. It looks simple. It is brutal if you do it honestly.
Step 0: Lock the prompt
When you open an AI chat, you are opening a candy store. Your default behavior will be to ask for “the solution”.
So you need a fixed prompt that forces the AI to behave like an examiner.
Use this:
You are my strict JEE examiner. Do not solve the problem. Your job is to interrogate my understanding. Ask me one question at a time. Force me to commit to the next step. If I answer incorrectly or vaguely, ask a sharper question or give a minimal hint. Only reveal the full solution after I have attempted a complete solution and written my final answer.
That is your contract.
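If you use an API instead of the chat window, you can pin the contract as a system message so it survives the whole session instead of getting lost after a few turns. A minimal sketch, assuming the OpenAI Python SDK (`pip install openai`) and an `OPENAI_API_KEY` in your environment; the model name and the `ask_examiner` helper are illustrative, not a prescribed setup:

```python
# Minimal sketch: pin the examiner contract as a system message so it
# cannot be "forgotten" mid-chat. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

EXAMINER_CONTRACT = (
    "You are my strict JEE examiner. Do not solve the problem. "
    "Interrogate my understanding: ask one question at a time, force me "
    "to commit to the next step, and give only minimal hints. Reveal the "
    "full solution only after I have written a complete attempt and a "
    "final answer."
)

client = OpenAI()
history = [{"role": "system", "content": EXAMINER_CONTRACT}]

def ask_examiner(student_message: str) -> str:
    """Send one turn; the system message keeps the contract in force."""
    history.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, swap in your own
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask_examiner("Here is my first move: I chose energy conservation "
                   "because the track is frictionless. Attack it."))
```

The point of the system message is structural: in most chat APIs it is re-sent with every turn, so the examiner role cannot drift the way a one-off instruction in a long chat does.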
Step 1: Before anything, make a prediction
Before you touch the pen, answer two questions:
- What chapter and which idea do I think this is testing?
- What is the likely final form of the answer (dimension, sign, scaling, rough magnitude)?
This prediction is not optional. It is a metacognitive anchor.
When you are wrong, you learn what your brain confuses.
Step 2: Write the first move, not the full solution
Most students try to write a complete solution in one go and crash.
Instead, write just the first move:
- define variables
- draw the diagram
- write the governing equation
- choose the coordinate system
Then stop.
Now go to the AI.
Step 3: Ask the AI to attack your first move
Paste your first move and ask:
- Is this the right tool?
- What assumption am I making?
- What is the most common mistake at this step?
The AI should respond with questions, not answers.
This forces you to justify the tool choice. You are training selection.
Step 4: Only accept “minimal hints”, never a full solution
A hint is acceptable if it does one of these:
- points out a missing constraint
- asks you to compute an intermediate quantity
- asks you to check units
- asks you to compare two candidate approaches
A hint is not acceptable if it does the hard thinking for you, like “Use conservation of energy and then…”
If the AI starts solving, stop it and restate the contract.
Step 5: After you finish, do an “examiner recap”
When you have a full attempt, ask the AI:
- List the decision points in my solution.
- Identify where I could have chosen a wrong method.
- Create one near-miss variant that breaks my method.
Near-miss variants are gold. JEE loves them.
You are no longer practicing a problem. You are practicing discrimination.
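If you want this recap to be one keystroke instead of three typed questions, keep it as a reusable template. A tiny sketch in plain Python; the wording is just the checklist above, and `{attempt}` is a hypothetical placeholder for your written solution:

```python
# Hypothetical reusable template for the examiner recap.
# {attempt} is a placeholder for your full written solution.
RECAP_PROMPT = """Examiner recap. Here is my complete attempt:

{attempt}

1) List the decision points in my solution.
2) Identify where I could have chosen a wrong method.
3) Create one near-miss variant that breaks my method."""

# Paste the filled-in prompt into your AI chat (or send it via the
# ask_examiner sketch from Step 0).
print(RECAP_PROMPT.format(attempt="<your full written attempt>"))
```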
Why this works: active generation beats passive consumption
There is a simple framework from learning science called ICAP (Chi and Wylie, 2014) that classifies engagement:
- Passive: you read.
- Active: you underline, copy.
- Constructive: you generate explanations, predictions.
- Interactive: you argue, respond, refine.
Most AI usage is passive. You read a polished answer.
The interrogator protocol forces constructive and interactive behavior. You generate, the system pushes back, you refine.
That is where durable learning happens.
The anti-coaching angle: “personalized explanations” are a product, not a method
The edtech pitch is seductive: “AI will personalize the lecture to your level.”
Sounds great. The hidden cost is that personalization often means removing struggle.
Struggle is not a virtue. It is a signal.
If you never feel stuck, you are probably not retrieving. You are being carried.
And coaching culture loves carry.
Because carry feels like progress. Students feel “clear”. Parents feel “value”. Nobody measures performance until the mock.
Then the mock arrives and exposes the truth: the student cannot generate under time.
So they buy more explanation.
That loop is profitable and stupid.
A 7-day AI drill that actually moves your score
If you want to test whether your AI usage is helping or harming, run this for one week.
Daily (about 60 minutes total)
- Pick 6 problems from a mixed set (2 Physics, 2 Chem, 2 Maths).
- For each problem, do:
  - 2 minutes prediction
  - 6 minutes attempt
  - 2 minutes AI interrogation (questions only)
- Mark each problem as one of:
  - Concept gap
  - Trigger gap (did not recognize the right tool)
  - Execution error
  - Algebra or arithmetic
  - Time panic
Your score improves when the trigger gaps shrink.
That is selection training.
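If you want the week's audit to be numbers instead of vibes, a tiny tally script is enough. A minimal sketch in plain Python; the rows and field names are illustrative, not a prescribed schema:

```python
from collections import Counter

# One row per attempted problem. Tags match the drill's categories.
# These rows are example data; append 6 real rows per day for the week.
LOG = [
    {"day": 1, "subject": "Physics", "tag": "trigger_gap"},
    {"day": 1, "subject": "Maths",   "tag": "execution_error"},
    {"day": 2, "subject": "Chem",    "tag": "concept_gap"},
    {"day": 2, "subject": "Physics", "tag": "trigger_gap"},
]

def tag_share(rows, tag):
    """Fraction of attempts carrying a given error tag."""
    if not rows:
        return 0.0
    counts = Counter(r["tag"] for r in rows)
    return counts[tag] / len(rows)

# The metric this post cares about: is the trigger-gap share shrinking?
for day in sorted({r["day"] for r in LOG}):
    rows = [r for r in LOG if r["day"] == day]
    print(f"Day {day}: trigger-gap share = {tag_share(rows, 'trigger_gap'):.0%}")
```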
Day 7 (the audit)
Take 20 problems you “understood” this week.
Do them again without AI.
If your performance collapses, your AI is acting like a crutch.
If performance holds, your AI is acting like a coach you deserve.
The only time “explain” is allowed
Explanations are not banned. They are delayed.
Use “explain” only after one of these conditions is true:
- You have produced a full wrong solution and you can name the exact step where it broke.
- You have produced two competing approaches and you need help choosing.
- You have done the problem on two different days and failed both times.
Explanation is for diagnosis after failure, not for comfort before effort.
References you should respect (even if you hate reading papers)
- Roediger and Karpicke (2006) showed the testing effect: retrieving improves long-term retention more than additional studying.
- Bjork and Bjork (1992, 2011) popularized “desirable difficulties”: conditions that make practice harder can improve long-term learning and transfer.
- Dunlosky et al. (2013) reviewed common study techniques and found strong evidence for practice testing and distributed practice, and weak evidence for highlighting and re-reading.
- Sweller (1988) introduced cognitive load theory: working memory is limited, and instructional design matters. Bad AI use overloads you by dumping steps on you faster than you can build schemas.
You do not need to memorize citations.
You need to stop behaving like learning is a feeling.
The punchline
If you use AI to reduce effort, you will feel smarter and get worse.
If you use AI to increase effort in a controlled way, you will feel dumber and get better.
Pick your poison.