I asked Claude Code to stop wasting my tokens. It told me to stop using it.

I was converting Confluence pages to markdown by passing them through Claude Code. Page after page. Every conversion cost tokens, for something that's just a format change.

So I asked Claude Code: you're spending too many tokens on this. What's the token-optimized way?

Its answer: use Pandoc. This doesn't need me.

I built a Python script. It calls the Confluence REST API, downloads the page, and runs Pandoc for the conversion. Zero tokens.

But then 3 things broke:

1. Images weren't being downloaded into the .md file
2. Table formatting was off
3. Status labels and @mentions weren't converting to markdown

Multiple iterations later, the script handles all of it. Pages convert to markdown without missing anything. Proper formatting. No token cost.

The thing that stuck with me: I wasn't wasting tokens on hard problems. I was wasting them on mechanical tasks that a Python script handles better.

If you're using Claude Code, Antigravity, Cursor, or any other AI tool, try this before your next conversion. Ask yourself: does this need intelligence, or is it just a format change?

If it's just format, write a script. Save the tokens for the work that actually needs thinking.

Feel free to connect if you want to know more. Happy to help.

#ClaudeCode #Python #Confluence #BuildInPublic #AITools
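For anyone curious, here's a minimal sketch of the core idea (not the full script): fetch the page's rendered HTML from the Confluence REST API, then pipe it through Pandoc. The instance URL is a placeholder, auth and the image/status-label fixes are omitted, and it assumes `pandoc` is on your PATH.

```python
# Sketch only: fetch a Confluence page as HTML, convert to markdown via Pandoc.
# BASE_URL is a placeholder -- point it at your own Confluence instance.
import subprocess

BASE_URL = "https://your-domain.atlassian.net/wiki"  # hypothetical instance

def page_export_url(page_id: str) -> str:
    # Confluence REST API: "export_view" returns the page body as rendered HTML
    return f"{BASE_URL}/rest/api/content/{page_id}?expand=body.export_view"

def html_to_markdown(html: str) -> str:
    # Shell out to Pandoc: HTML on stdin, GitHub-flavored Markdown on stdout
    result = subprocess.run(
        ["pandoc", "-f", "html", "-t", "gfm"],
        input=html, capture_output=True, text=True, check=True,
    )
    return result.stdout
```

In practice you'd add authentication to the GET request, download each attachment referenced by an img tag, and post-process Pandoc's output for the status labels and @mentions mentioned above.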
This is such an important distinction. AI is powerful, but not every problem needs intelligence. Format conversion, parsing, transformation: these are deterministic tasks. Tools like Pandoc + API scripting are often the smarter solution. Use AI for ambiguity. Use code for structure. Well said.
I write about practical AI workflows at Ship with AI - https://amanparmar3.substack.com/