I Was Using Cursor Wrong
For a while, I thought I was being productive.
Open Cursor, write a prompt, get some code, tweak it, write another prompt. Repeat. It felt fast. It felt modern. But somewhere around week three of building this way, I caught myself doing the same back-and-forth I’d always done — except now I had an AI involved. I wasn’t moving faster. I was just failing with extra steps.
That’s when it hit me: there has to be a better way to use this thing.
I started digging. Not into tutorials or YouTube videos, but into what Cursor was actually capable of under the hood. That’s when I found Skills.
At the time, I had no idea what they were. But when it clicked, I was genuinely impressed — the kind of impressed that makes you stop and sit back for a second.
A Skill is essentially a strict, exact set of instructions for how to accomplish one specific task. Not a vague prompt. Not a “hey do this for me.” A tightly scoped, repeatable instruction set that removes ambiguity and tells the AI exactly what it needs to do and how to do it.
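To make that concrete, a Skill is typically just a small markdown file the agent loads when a task matches its description. The exact location and frontmatter fields below are my assumption of the common layout, not something the official docs guarantee for every version:

```markdown
<!-- Hypothetical example: .cursor/skills/changelog-entry/SKILL.md -->
---
name: changelog-entry
description: Add an entry to CHANGELOG.md for the current change
---

# Changelog entry

When asked to update the changelog:

1. Open CHANGELOG.md at the repo root. Do not create a new file.
2. Add exactly one bullet under the "Unreleased" heading.
3. Format: `- <type>: <one-line summary>` where type is one of
   feat, fix, docs, chore.
4. Do not reword, reorder, or delete existing entries.
5. Do not touch any other file.
```

Notice how much of it is negative space: what the AI must *not* do. That's the part a loose prompt never pins down, and it's where most of the wandering gets eliminated.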
The concept really landed for me while I was building my first automation bot on another platform. I was running into the same problem there. The AI would go wide. It would interpret, improvise, wander. And I realized that’s not a flaw, that’s just how these models work. They think across a massive spectrum of possibility. Left to their own devices, they’ll explore it.
Skills are how you fix that. You’re not dumbing the AI down. You’re giving it a lane. Tunnel vision on purpose. You take something that naturally thinks in all directions and you say: not right now, just do this one thing, exactly like this. That’s when the output stops being impressive and starts being reliable.
There’s a version of AI-assisted work that looks productive but isn’t. It’s prompt, accept, prompt, accept. Riding the momentum of fast output without asking whether the output is actually good or whether the process is actually repeatable. It’s easy to fall into. The tools are slick, the results feel magical, and “fast” gets mistaken for “right.”
I fell into it. Most people do.
The smoke and mirrors are convincing enough that you can spend weeks thinking you’re building something when you’re really just consuming a service. That’s fine for exploration. It’s a problem if you’re an engineer, a product manager, or a founder trying to build something that has to work consistently.
The shift isn’t about using AI less. It’s about using it more deliberately. Giving it constraints. Defining what done looks like before you start. Treating your instructions like code that needs to be maintained, not a wish you’re tossing into a chatbox.
I’m still figuring this out. That’s kind of the point of writing this. But the thing I keep coming back to is that the developers who are going to get the most out of these tools aren’t the ones who are most impressed by them. They’re the ones who look past the impressiveness and ask: how do I make this repeatable?
That’s the question I’m chasing. Skills were my first real answer.