Before You Learn the Tools
Before AI tools can be used well, there needs to be a foundation of language, thinking, basic knowledge, and experience.
Lately, I keep seeing people talk about how to use AI tools.
How to use Claude Code.
How to use Codex.
How to write prompts.
How to set up skills and everything else.
There is nothing wrong with that in itself.
If something is useful, we should use it.
And if we can use it well, all the better.
When I see people at the front edge of this, exploring how to use Claude or Codex and turning that into actual practice, I honestly think it is impressive.
I am still nowhere near that level.
That is exactly why copying the skills or AGENTS.md files that other people carved out for themselves, and feeling satisfied with that, seems a little off to me.
What kind of intention shaped that structure.
What kind of workflow it assumes.
What kinds of stumbling points it was built around.
Understanding that background and pulling out the parts that actually fit your own work.
I think that kind of understanding matters far more.
Still, there is something that bothers me a little.
There seem to be more people than I expected who learn only how to use the tools,
and then feel as if they have already become capable of something.
There is probably an order to making use of AI.
At the very bottom, as the foundation, are language and the ability to think.
How do you train your ability to think.
I suspect it begins when you get stuck on something.
When your materials will not come together.
When a technical problem refuses to be solved.
When you cannot see why something is happening the way it is.
That is when people stop.
To keep thinking while still not knowing.
To accept, on purpose, that the answer will not arrive immediately.
Not knowing is uncomfortable.
So people reach for an answer right away.
But that is often the moment thinking stops.
When we do not understand something, we tend to think this.
"Maybe I just do not have the ability."
But that too can be a kind of escape.
If you blame your own limits,
you no longer have to keep thinking.
The problem is not always that you lack ability.
Sometimes it is simply that you cannot see it yet.
Thinking may be the ability
to keep holding on to not-knowing.
And the ability to think is not something you can attach afterward like a tool.
It is something that slowly remains inside you
through the time things did not work
and through the time you kept thinking anyway.
So maybe,
instead of looking for a method to train your thinking,
what matters more is not letting go of the time
when you keep thinking without understanding yet.
What exactly you do not understand.
Where you are getting caught.
How you are supposed to read what comes back.
If you lose sight of even that,
then no matter how plausible the answer sounds,
you end up receiving nothing more than something that feels convincing.
Above that foundation of language and thought are basic knowledge and experience.
An understanding of the work you are actually involved in.
At least the minimum vocabulary.
What has worked before, and what has failed.
Without those things,
you cannot judge what is sound and what is risky.
And without language,
no matter how smart AI becomes,
you still cannot convey your intention.
Why something looks correct.
Where it feels dangerous.
Which option can actually survive contact with reality.
That judgment is not something the tool can do for you.
If anything, the more difficult part is that the tool often returns things in a clean shape.
AI makes mistakes.
It just makes them in a clean shape.
That is why, if there is no foundation underneath,
something can feel correct simply because it looks well-formed.
And only then, on top of all that,
comes knowing how to use the tools.
How to write prompts.
How to divide work across agents.
How to structure a workflow.
All of that belongs up here.
At the top.
You can polish the top of the pyramid on its own,
but if there is nothing underneath, it will not stand.
It may even look stable for a while,
but the moment a slightly more complex piece of work arrives, it collapses.
You get stuck when the wording of the question changes a little.
You fail to notice how rough the answer really is.
You carry forward language that only sounds right.
That happens more quickly than people think.
AI is powerful, certainly.
But it is not a tool that lets you stop thinking.
It is a tool that amplifies the person who has thought.
That is why even when people use the same tool,
the results can be very different.
People with real questions can use the tool deeply.
People without them get pulled around by it.
The former use it to move their own thinking forward.
The latter use it to quickly obtain the feeling that they have thought.
The surface may look similar,
but the inside is very different.
I do not think the real difference going forward will be who knows the most tool names.
It will be who can ask what.
Who can stop at the right moment.
Who can keep doubting a little longer.
It will be those parts that rarely appear on the screen.
Every time a new tool appears,
there will be even more information about how to use it.
That will probably continue for a while.
But every time, I find myself thinking the same thing.
Do not try to stand on top of the tools.
It may take time, but build the foundation first,
and place the tools on top of that.
That order, at least,
does not seem to have changed very much.