From the course: Windsurf for Privacy-Conscious Development
Claude Code - Windsurf Tutorial
- [Instructor] Here, we're going to be diving into Claude Code, a powerful AI assistant that runs in your terminal. Its strength lies in its local-first design and robust security configuration. We're going to walk through the entire setup process, from installation to the critical security step of configuring project-specific settings, so that Claude only has access to the files you permit, keeping your secrets and sensitive data safe.

Before we begin, you're going to need a Claude AI or Claude Console account. The installation itself is very straightforward: if you have Node.js version 18 or newer, we just use npm to install it globally with the npm install -g command shown here. This command makes the claude tool available system-wide. The first time you run it, you'll be prompted to log in. I've already used this and I'm already logged in, so I'm not being reprompted. Once we're authenticated, we can start Claude Code in our project directory. Here, we'll just make a temporary directory for testing things, and open Claude in that new directory.

Now, for the most important part: configuring Claude Code to protect sensitive files. We can create project-specific settings that can be shared with your entire team through version control. To do that, we create a special directory in the project root; running mkdir .claude in your project directory creates it for you. Inside this directory, you can create a settings.json file, and this is where we define our security rules. Opening this file, our goal is to explicitly deny Claude the ability to read our .env files and our secrets directory, and we do this using a permissions object with a deny rule.
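A minimal .claude/settings.json with a deny rule might look like the sketch below; the `Read(...)` rule strings follow Claude Code's documented permissions syntax, but the exact paths are illustrative:

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./secrets/**)"
    ]
  }
}
```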
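The transcript elides the full install command shown on screen; at the time of writing, Claude Code is published on npm under the package name `@anthropic-ai/claude-code`, so the install step looks something like this (check the current docs, as package names can change):

```shell
# Install Claude Code globally (requires Node.js version 18 or newer).
npm install -g @anthropic-ai/claude-code

# Confirm the `claude` tool is now available system-wide.
claude --version
```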
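The temporary-directory and .claude setup described above can be sketched as a few shell commands (the directory name `claude-test` is just an example):

```shell
# Make a throwaway directory just for testing things.
mkdir claude-test
cd claude-test

# Create the special project-level settings directory in the project root...
mkdir .claude

# ...and the settings.json file where the security rules will live.
touch .claude/settings.json

# Running `claude` here would start Claude Code in this directory.
```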
What this does is prevent Claude from reading .env files, block access to variants like .env.local or .env.production, and deny access to any files inside a directory named secrets. This file should be checked into Git if you're working on a team, so the same security policy applies to everyone. Just as we can deny access, we can also explicitly grant it with an allow rule, and you can edit this file however you like, denying access to some paths and allowing access to others. And that's simply it. By investing just a few minutes creating a project-level settings.json file, you can establish a robust security boundary for Claude Code, ensure it never accesses your most sensitive files, and simultaneously pre-approve safe, routine commands. This configuration-driven approach is key to using AI assistants responsibly and securely.
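Putting the allow and deny rules together, a combined settings.json might look like this sketch; the specific allowed commands are hypothetical examples of "safe routine commands," not part of the original lesson:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Read(./.env)",
      "Read(./secrets/**)"
    ]
  }
}
```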