VS Code's new default theme caught some developers off guard last week. I'm not a fan. The interesting part isn't the colour change itself: a quirk in how Code handles GUI config meant some users got the update unexpectedly, making it feel like a forced change. Wrote a short post about it: https://lnkd.in/eXGMPydh #vscode #coding #code #devops
VS Code's Default Theme Update Causes Confusion
You've nodded along when someone said "that lives on the heap," but never quite understood why memory is split between Stack and Heap? I just published a deep dive that breaks down:
✴️ Why StackOverflowError actually happens
✴️ How Stack frames get pushed and popped
✴️ When value types escape to the Heap
✴️ Why closures force data out of the Stack
This isn't theory; it's the foundation behind every memory leak you've debugged and every recursion limit you've hit. #SoftwareEngineering #MemoryManagement #Programming #DeveloperCommunity
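The recursion-limit and closure points can be sketched in JavaScript. This is a minimal illustration I'm adding, not code from the linked deep dive; note that in JS the stack-exhaustion error is a RangeError, the analogue of Java's StackOverflowError:

```javascript
// Recursion depth is bounded by the call stack: every call pushes a frame,
// and when no more frames fit, the engine throws a RangeError
// ("Maximum call stack size exceeded"), JS's analogue of StackOverflowError.
function depth(n) {
  try {
    return depth(n + 1); // no base case: recurse until the stack is exhausted
  } catch (e) {
    return n; // the RangeError is caught at the deepest frame
  }
}

// A closure forces captured data off the stack: `count` must outlive the
// call to makeCounter, so the engine keeps it alive on the heap instead of
// discarding it with the popped stack frame.
function makeCounter() {
  let count = 0;        // looks stack-local, but...
  return () => ++count; // ...the closure captures it, so it escapes to the heap
}

const counter = makeCounter();
console.log(counter(), counter()); // 1 2
console.log('overflowed at depth', depth(0));
```

The exact overflow depth varies by engine and frame size, which is itself a hint that the limit is a physical stack budget, not a language rule.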
I've spent the last couple of months using Claude Code exclusively, no hand-written code, to get a real read on where things are at.

TL;DR: Vast net gain if you're already a solid engineer. Complete bug factory if you're not.

When you first use Claude Code, you'll be ecstatic. The code works, and it gets done fast. But working code isn't necessarily shippable code. And as your codebase grows toward 50 KLOC, Claude gets noticeably dumber, slower, and costlier. You'll burn more tokens on refactors than on features, especially as your CLAUDE.md and memory files grow; they're all context, every time.

Problem #1: code duplication. Multiple code paths for state, validation, and DB updates are a nightmare, and duplication is Claude's default behavior. Yes, even with Opus. So you add rules to CLAUDE.md. That file is still just context, not enforced config. Ask Claude if it's about to duplicate code before plan execution: 90% of the time, it is. Ask again after: 90% of the time, it did anyway. Defense in layers: narrow sub-agents, daily refactor sessions, one biggest issue at a time. You still need everything you always needed: tests, static analysis, hooks, CI gates, PR reviews. We may no longer care about *writing code*, but we absolutely still need to care *about code*.

Problem #2: file bloat. Claude piles on lines by default, which makes duplication (problem #1) harder to detect. Watch file sizes. Break them down aggressively.

Problem #3: magic numbers. Claude avoids constants unless pushed. Well-named constants help the model understand intent and cut duplication. Same reason they helped humans.

Project memory is largely useless, for the same reason CLAUDE.md alone isn't followed. There is no enforcement model. It's context turtles all the way down. Enforcement happens outside the model.

Cost and time won't be what early enthusiasm suggests. A quality codebase requires slowing down. Nothing new.

One human note: if you scrutinize plans, read changed code, and actually engage with the model, you'll understand the codebase as well as when you wrote everything by hand. A huge change is that you're context-switching across many features at once now, not just in and out of code. Development moves fast. It *feels* even faster, until the big refactors hit. Fast is slow, slow is fast. Even in 2026.
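The magic-numbers point can be shown with a minimal before/after sketch. The constant names and thresholds here are hypothetical, purely for illustration, not from the post:

```javascript
// Before: intent is opaque, and the same literals tend to get re-typed
// (slightly differently) elsewhere: exactly the duplication failure mode.
function canRetryBefore(attempts, waitedMs) {
  return attempts < 3 && waitedMs < 30000;
}

// After: one named source of truth that both humans and a code model can
// reference, which cuts duplication and makes intent explicit.
const MAX_RETRY_ATTEMPTS = 3;
const RETRY_WINDOW_MS = 30_000;

function canRetry(attempts, waitedMs) {
  return attempts < MAX_RETRY_ATTEMPTS && waitedMs < RETRY_WINDOW_MS;
}
```

A grep for `MAX_RETRY_ATTEMPTS` now finds every code path that depends on the policy, which is the property that makes duplicated logic detectable at all.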
"We may no longer care about *writing code*, but we absolutely still need to care *about code*."

This is a great post. One thing that's dropped in there (an excellent distillation of the problem) is that "there is no enforcement model. ... Enforcement happens outside the model."

After getting fed up with misalignment a while back and then spending the last week banging around in my Claude Code traces, I want to flag this as something that I believe is true: the solution to these problems will not be in hooks, better documentation, or bigger thinking budgets. Those are the band-aids we're going to use for now to stumble our way through this, the mitigations we will put in place to reduce the incidence of utterly stupid decisions. The solution will have to be fixing the models. And that's obviously hard, because nobody has been able to do it yet. A million-token context window is not going to solve the problem, any more than going from 16k to 100k to 200k did. I can fit entire codebases inside a million tokens and it STILL forgets to follow the instructions in my 100-line CLAUDE.md.
Technology leader, author, composer and musician. I convert coffee to code, future ancient texts and nerdy hacker music. #ActuallyAutistic #NeurodiversityAdvocate
"Complete bug factory if you're not." So true. I let Codex do everything without any guidance (just to see what happens), and the tech stack is a mess. You need a detailed spec doc, TDDs, pseudocode, and hand-holding to make an actual production-ready app.
Juan Irming (Âû) is putting his finger on something most people are still avoiding. What he's describing isn't just a tooling issue with Claude Code. It's a shift in what "good engineering" actually means when code generation is no longer the bottleneck.

A few things that stand out:
- The constraint has moved from producing code to maintaining coherence. Duplication and drift are now the default.
- Context is not control. Files like CLAUDE.md give the illusion of governance, but without enforcement, they don't hold.
- The real skill is designing systems that can withstand constant machine-generated entropy.

This dynamic isn't unique to engineering. We're seeing the same pattern in education with AI. Giving teachers access to powerful tools without structured guidance doesn't elevate practice by default. It often creates inconsistency, fragmentation, and uneven outcomes.

That's exactly the problem we're trying to solve at Mindset CoPilot. Not just providing answers, but shaping how decisions are made. Not just adding context, but embedding guidance that holds under real classroom conditions. The difference between a suggestion and support you can rely on is everything.

The line that sticks with me from Juan's post: "we may no longer care about writing code, but we absolutely still need to care about code." The same applies to our solutions. AI doesn't remove the need for expertise. It raises the bar for how that expertise gets applied and sustained.

#AI #Quality #Outcomes #Software #Engineering
Every time Claude Code makes a new request, it sends your entire codebase context from scratch. For complex projects, Graphify's knowledge graph reduces this overhead by up to 71%. #claude #claudecode
Lesson 04 of Claude Code Lessons just dropped. This one's about the most expensive mistake I see people making with Claude Code: picking one model at install and never switching.

A friend texted me last week: "Claude Code is too expensive, I'm turning it off." I asked what model she was running. "Opus. It's the best one, right?" She was running Opus to generate 50 first-draft subject lines. That's like hiring a principal engineer to label inbox folders.

Claude Code has three models and one slash command that almost nobody uses: /model.
1️⃣ Haiku is the fast intern. Throw bulk work at it — 15 subject lines, a bug triage table, a first-pass draft. It's cheap and it's fast, and you're not asking it to think.
2️⃣ Sonnet is the senior teammate. It handles ~80% of your real work — judgment, writing, everyday code. This is the default for a reason.
3️⃣ Opus is the principal you call in when being wrong costs a quarter. Architecture decisions. Positioning strategy. The one call that actually matters. Pair it with /effort high and give it a hard problem.

The habit that changes everything: escalate for the decision, drop back for the execution. Switch to Opus for the hard call. Then, the moment the hard thinking is done, /model sonnet and keep moving. You're not burning Opus tokens on follow-ups that Sonnet handles fine.

Lesson 04 walks you through it with two tracks:
1️⃣ Builder track — ship a launch campaign. Haiku for 15 subject lines. Sonnet to pick the winners and write the body. Opus to resolve a real positioning tension.
2️⃣ Developer track — triage 8 bugs. Haiku for the severity table. Sonnet to fix a P1. Opus to review an architecture decision where the team is split.

You use all three in 15 minutes. Once you feel the difference, you won't go back.

Detail writeup → https://lnkd.in/en4bKUFn

#ClaudeCode #AI #Automation #DeveloperProductivity #AgenticAI
Shipped concord v0.1.0 this week — a CLI that syncs AI harness assets (skills, subagents, hooks, MCP servers, plugins, instructions) across Claude Code, Codex, and OpenCode. Built 100% with the Agora + Superpowers combo, and I want to write down why that pairing worked.

Quick context. Superpowers is Jesse Vincent's agentic skills framework that enforces *how* you build with a coding agent: brainstorm before code, plans broken into 2–5 minute tasks, TDD, subagent review between steps. Discipline for execution. Agora v2.2.0 is what I've been building on the other side — a skill-first overlay for supervised AI work. Clarification, doubt, dissent, synthesis, and governance workflows on top of whatever host agent you already use. Small commands like /clarify, /doubt, /decide, /steelman, /assumption-audit. Philosophy as methodology, not decoration.

Where the combo really earned its keep for concord was brainstorming. Before any plan gets written, /clarify and /doubt forced me to answer things I would have otherwise skipped: what's the real decision, what assumptions am I hard-coding into the spec, where would this fail. The payoff was concrete — the early design shifted once /doubt surfaced how fragile "just symlink everything" was across Windows, macOS, and WSL. That became a 6-fetcher / 4-writer / 2-installer architecture with atomic rollback and drift detection, instead of a naive sync that would have broken in week two.

Once the spec held up, Superpowers carried implementation: plans, TDD cycles, a POC log, a three-platform CI matrix, 162 commits to v0.1.0. The repo itself shows the pattern — docs/superpowers/specs/, docs/superpowers/plans/, docs/superpowers/poc/. That directory structure is the workflow.

The framing I keep coming back to: agents generate options; humans supervise judgment. Superpowers makes the generation reliable. Agora makes the supervision structured. Together they cover the loop from "is this even the right problem" to "it's shipped and passing CI on three platforms."

concord v0.1.0: https://lnkd.in/g48tFuw6
Agora v2.2.0: github.com/malleus35/agora

Curious to hear from anyone else pairing skills frameworks this way — or using Superpowers with another judgment-layer overlay.
Mastering Linear Logic: Day 243

Today is day 243 of my coding journey, and I am continuing to refine my expertise in the Two Pointer technique. This strategy is a game-changer for linear data structures because it allows for efficient searching and comparison without nested loops, keeping the time complexity at O(n).

Two Pointer Core Logic
The core idea relies on using two indices to traverse an array or string from different positions. Only some patterns require a sorted array; the technique's power lies in three main variants:
- Opposite Direction: Pointers start at each end and move toward the center (e.g., palindrome checks).
- Same Direction: Fast and slow pointers help detect cycles or find middle elements.
- Sliding Window: Pointers track a range or subarray that expands and contracts based on constraints.

Today's Solved Problems
I focused on problems that involve comparing and searching for pairs within sorted structures:
- LeetCode 125 Valid Palindrome: Used pointers at both ends to compare characters while ignoring non-alphanumeric symbols.
- LeetCode 344 Reverse String: Implemented an in-place swap using two pointers to achieve O(1) space complexity.
- LeetCode 977 Squares of a Sorted Array: Leveraged the fact that the largest squares are at the ends of a sorted array containing negative numbers.
- LeetCode 167 Two Sum II: Since the array is already sorted, I used two pointers to narrow down the target sum in a single pass.
- LeetCode 408 Valid Word Abbreviation: A complex case where I synchronized pointers between a word and its compressed version, handling multi-digit jumps.

Logic Tip: In the "3 Sum" problem, the strategy is to fix one pointer and then use the two-pointer technique on the remaining part of the array to find the missing pair.
#DSAinJavaScript #365daysOfCoding #JavaScriptLogic #TwoPointers #LeetCodeDaily #ProblemSolving #Algorithms #DataStructures #CodingLife #LogicBuilding #CleanCode #JSDeveloper #WebDevelopment #ProgrammingJourney #SoftwareEngineering #TechLearning #MERNStack #DailyCoding #BackendLogic #CodingCommunity
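A minimal sketch of the opposite-direction pattern, using Two Sum II (LeetCode 167) from the list above; this is my own illustration of the idea, not the author's solution:

```javascript
// Two Sum II (LeetCode 167): the array is sorted, so two pointers starting
// at the ends can find the pair in one O(n) pass with O(1) extra space.
function twoSumSorted(numbers, target) {
  let left = 0;
  let right = numbers.length - 1;
  while (left < right) {
    const sum = numbers[left] + numbers[right];
    if (sum === target) return [left + 1, right + 1]; // 1-indexed per the problem
    if (sum < target) left++; // sum too small: advance the left pointer
    else right--;             // sum too large: retreat the right pointer
  }
  return []; // no pair found
}

console.log(twoSumSorted([2, 7, 11, 15], 9)); // [1, 2]
```

Sortedness is what makes each pointer move safe: shrinking from the correct end can never skip a valid pair, which is why no nested loop is needed.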
Encapsulation isn’t optional. If you’re not validating data coming into your types, you’re breaking OOP at the first pillar. Protect your code. Rock your standards. 🎸🔥 #CodeQuality #MVPBuzz #dotNetDave https://lnkd.in/gu6vuk_q
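A minimal sketch of validate-at-the-boundary encapsulation; this is my own illustration (in JavaScript rather than .NET), not code from the linked video, and the class and field names are hypothetical:

```javascript
// Validating data at the type boundary: the setter is the only way in,
// so invalid state can never be stored on the object.
class Person {
  #email; // private field: callers cannot bypass the setter

  set email(value) {
    if (typeof value !== 'string' || !value.includes('@')) {
      throw new TypeError(`Invalid email: ${value}`);
    }
    this.#email = value.trim();
  }

  get email() {
    return this.#email;
  }
}

const p = new Person();
p.email = 'dave@example.com'; // accepted
// p.email = 'not-an-email';  // would throw TypeError
```

Because `#email` is a true private field, every write funnels through the validation, which is the "first pillar" point: the type itself guarantees its own invariants.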