gVisor provides a strong layer of isolation between running applications and the host operating system. It is an application kernel that implements a Linux-like interface. Unlike Linux, it is written in a memory-safe language (Go) and runs in userspace. gVisor includes an Open Container Initiative (OCI) runtime called runsc that makes it easy to work with existing container tooling. The runsc runtime integrates with Docker and Kubernetes, making it simple to run sandboxed containers. https://lnkd.in/dvS8PBEa (by Google) #gVisor #Linux #Containers #OCI #Docker #Kubernetes
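Per gVisor's upstream docs, wiring runsc into Docker is a matter of registering it as an alternate OCI runtime; the install path and runtime name below follow the upstream defaults and may differ on your host.

```shell
# /etc/docker/daemon.json -- register runsc alongside the default runc
# (assumes runsc is already installed at /usr/local/bin/runsc):
#
#   {
#     "runtimes": {
#       "runsc": { "path": "/usr/local/bin/runsc" }
#     }
#   }
#
# After restarting the Docker daemon, opt individual containers into
# the sandbox with the --runtime flag:
docker run --rm --runtime=runsc hello-world
```

Everything else (images, volumes, networks) works as before; only the layer answering the container's syscalls changes.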
Ever wonder how Docker or Kubernetes enforce strict memory restrictions on your apps? Or how Linux prevents a single rogue process from crashing your entire server? While researching a specific issue with the os.totalmem() method in Node.js, I went down a fascinating rabbit hole and discovered the magic behind containerization: control groups, or cgroups.

If an operating system allowed any single program unchecked access to bare-metal resources, one memory-hungry process could easily starve everything else, leading to resource exhaustion and system crashes. This is exactly where cgroups come in. Built directly into the Linux kernel, cgroups let you partition, restrict, and manage system resources for specific processes. Think of it like a strict limit on a credit card: no matter how much a program wants to spend, it simply cannot exceed the cap you've set.

At a high level, cgroups provide four essential features that keep modern infrastructure stable:
1. Resource limits: cap the maximum amount of hardware (CPU, memory, disk I/O) a process or group of processes can consume.
2. Prioritization: ensure mission-critical workloads get CPU and disk time before less important background tasks.
3. Accounting: measure and monitor exact resource usage, which is essential for billing, capacity planning, and debugging.
4. Control: freeze, resume, or restart an entire group of processes as a single, manageable unit.

Without cgroups, the predictable, isolated containers we rely on every day wouldn't exist! If you want to dive deeper into how to set up cgroups and inspect running processes, I found an excellent Medium article by Dagang Wei that helped me understand this concept clearly: https://lnkd.in/d4Snacf6 #Linux #Docker #Kubernetes #NodeJS #DevOps #SoftwareEngineering #Cgroups #Learning
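The "credit card limit" above can be sketched against the cgroup v2 file interface: a group is a directory, and a limit is a file you write. A real group lives under /sys/fs/cgroup and needs root, so this sketch writes the same files into a scratch directory purely to show the shape of the interface; the group name `demo` is made up.

```shell
# cgroup v2 sketch: real root is /sys/fs/cgroup; we use a temp dir
# so this runs unprivileged and only demonstrates the file layout.
CG="$(mktemp -d)/demo"
mkdir -p "$CG"

# Hard memory cap: 100 MiB. Under a real cgroup the kernel enforces this;
# a process in the group that exceeds it is OOM-killed, not the whole box.
echo "104857600" > "$CG/memory.max"

# Enroll a process by writing its PID into cgroup.procs:
echo "$$" > "$CG/cgroup.procs"

limit=$(cat "$CG/memory.max")
echo "memory.max = $limit bytes"
```

Swap the temp dir for /sys/fs/cgroup (as root) and the same two writes become an enforced limit; that is essentially what container runtimes do on your behalf.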
New guide on getnix.io — "What is Nix?" A beginner-friendly intro to what Nix actually is, how it differs from Docker/Homebrew/Apt/Ansible, and the core concepts you need before diving in. If you've ever wondered why people keep talking about Nix, this one's for you. https://lnkd.in/ewR-S3Q8 #Nix #NixOS #Linux #macOS #DevOps #ReproducibleBuilds #PackageManagement
I recently spent some time investigating Linux beyond just commands, trying to understand how it actually works under the hood through its file system. Instead of merely using Linux, I explored it. Here are a few things I found really interesting:
• /proc isn't a real directory; it's a live view of the kernel
• /sys shows structured relationships between hardware and the system
• /dev treats devices like files (which is surprisingly powerful)
• /etc quietly controls most of the system's behavior
• And /proc/kcore… a file that represents live kernel memory
That last one honestly changed how I think about Linux. It made me realize the file system is not just storage; it's an interface to the entire operating system. I've written a short blog sharing these findings and what I learned from them. I'd really appreciate it if you checked it out 👇 🖇️ https://lnkd.in/g5NymD_P #Linux #OperatingSystems #SystemDesign #DevOps #BackendDevelopment
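One concrete way to feel the "/proc is a live view" point: its files are plain text generated by the kernel on each read, and you can parse them with ordinary tools. The here-doc below stands in for real /proc/meminfo output so the snippet runs anywhere; on a Linux box you would pipe `cat /proc/meminfo` in instead.

```shell
# /proc files look like files but contain kernel state, rendered as text.
# Parse a MemTotal line the same way you would parse any text file:
mem_kb=$(awk '/^MemTotal:/ {print $2}' <<'EOF'
MemTotal:       16314240 kB
MemFree:         1203944 kB
EOF
)
echo "RAM reported by the kernel: ${mem_kb} kB"
```

This is exactly what tools like `free` and Node's os.totalmem() ultimately read, just dressed up in a nicer API.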
Linux From Scratch has been called impractical for years. Fine. Of course it is. It takes at least forty hours, and that is assuming things go reasonably well. When you finish, there is no package manager waiting to rescue you. No clean update path. No layer of polish smoothing over the hard parts. Every binary on that machine exists because you compiled it. Every configuration file exists because you wrote it. That much is obvious. The real question is whether that matters. It does. More than most people are willing to say out loud. Not because LFS belongs on a production server. It does not. Not because it is the smartest way to run a modern environment. It is not. That misses the point completely. The point is what it does to your understanding. Because there is a kind of knowledge that only comes from doing something the hard way, from first principles, with your own hands. You do not get that from browsing a wiki. You do not get it from skimming documentation and nodding along. You get it by getting stuck. By getting it wrong. By sitting in a chroot at two in the morning, staring at a kernel that refuses to compile, and staying there until the system finally makes sense. That experience does something documentation alone cannot do. It burns the lesson in. It forces the abstractions to fall away. It turns Linux from a product you use into a system you actually understand. And once you have that, you keep it. Read more here: https://lnkd.in/gtiUeRhb #Linux #LinuxFromScratch #OpenSource #SystemsEngineering #DevOps #Infrastructure #SoftwareEngineering #OperatingSystems #LearnByDoing #LFS
I think this is the future of recruiting and junior-employee training in the age of AI. It takes a long time and hard work to learn and gain experience, but now AI means anyone can take the shortcut. The problem is that the real value (experience) is lost, and that is something we will need to replace. Programs like this will become an important part of any new-employee training program: a way to quickly, but genuinely, gain deep and practical experience that advances a person years ahead. The output is not the goal; it will be discarded. The goal is the experience and knowledge the human gains. This is a very different way to think, and a different way to value human growth. We need to invest in training people and growing their ability to do the things AI can't do. That takes time, but the results are necessary.
Containers from the Ground Up ➔ Part 2: Linux Namespaces

Docker, Inc. and Podman Desktop didn't secure your containers. The Linux kernel did. Most engineers I talk to know containers provide isolation. Far fewer can tell you what enforces it when something actually goes wrong. I spent time going one layer deeper, past the runtime, past the daemon, down to the kernel feature that makes all of it work: namespaces.

A few things that surprised me along the way:

chroot was the best isolation Linux had before namespaces existed. A process inside it could still see every PID on the host and bind any network port, and a root process could escape it entirely with two syscalls. We called it a jail. It wasn't.

Every process on Linux is always inside a namespace, even on a bare machine with no containers running. The kernel creates the initial ones at boot, and every process inherits its parent's. A container is just a process that got new ones via clone().

When I ran a simple loop comparing /proc/1/ns/ against my container process, the picture became concrete: separate inodes for pid, net, mnt, uts, and ipc, and the same inode for user, which is expected for a rootless setup. Each differing inode is a hard kernel boundary, not a runtime abstraction.

The failure modes map cleanly once you know the model:
• Container leaking network traffic? net namespace.
• Seeing host PIDs from inside a container? Missing CLONE_NEWPID.
• Rootless container can't bind port 80? Not a firewall rule. UID mapping in the user namespace.

At its core, podman run is a sequence of clone() calls with the right flags, followed by execve(). The runtime configures; the kernel enforces. Debugging changed for me after internalising this: I stopped reading container logs first and started reading /proc and lsns. Wrote this up properly with verified commands if you want to run through it yourself. Link in the comments.
#Linux #Containers #SoftwareEngineering #DistributedSystems #Kubernetes #DevOps #SystemsProgramming #BackendEngineering
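The inode comparison described in the post is easy to reproduce: each /proc/<pid>/ns/* entry readlinks to a string like `pid:[4026531836]`, and two processes share a namespace exactly when the bracketed inode matches. The inode numbers below are sample values, not taken from a live system; on Linux, substitute real `readlink` output.

```shell
# Extract the namespace inode from a readlink result such as:
#   readlink /proc/1/ns/pid     -> pid:[4026531836]   (host init)
#   readlink /proc/<ctr>/ns/pid -> pid:[4026532799]   (container)
inode() { echo "$1" | sed -n 's/.*\[\([0-9]*\)\]/\1/p'; }

host_ns='pid:[4026531836]'   # sample value for PID 1 on the host
ctr_ns='pid:[4026532799]'    # sample value for a containerised process

if [ "$(inode "$host_ns")" = "$(inode "$ctr_ns")" ]; then
  echo "same pid namespace"
else
  echo "separate pid namespaces: a hard kernel boundary"
fi
```

Loop this over pid, net, mnt, uts, ipc, and user and you get the exact picture the post describes, including the shared user namespace in a rootless setup.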
After a long gap, I finally sat down and completed something I had been meaning to write for a while. It started with a simple question I couldn't ignore anymore: 👉 What actually happens after you run `docker ps`? I've used Docker for a long time, almost 8 years now, but at some point I realized I was relying on it without really understanding how it works under the hood. That didn't sit right. (Honestly, I was just bored.) So I started digging. What began as curiosity turned into tracing real system calls with `strace`, following how the Docker CLI connects to `/run/docker.sock`, how `dockerd` accepts that connection, and how the request flows through `containerd` and `runc` before finally reaching the Linux kernel. Along the way, it also helped me connect a lot of dots around local IPC (Unix sockets), system design, and how these concepts translate directly into real-world systems and production setups. This wasn't just about Docker anymore; it gave me a much clearer mental model for building and debugging systems, especially when you're trying to design something larger and more reliable. This isn't the easiest topic if you're new to Linux, networking, or container internals, but if you spend time with it, it changes how you think about containers and system design. Honestly, this stuff doesn't fit well into short posts, so I put everything together in a detailed write-up. I'll drop the link in the comments 👇 #Docker #Linux #DevOps
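The flow traced above bottoms out in something you can type by hand: `docker ps` is an HTTP GET over a Unix socket. Here is that request spelled out raw; the API version path (v1.43) is an assumption, so check `docker version` on your machine, and the commented one-liners naturally need a running dockerd.

```shell
# What the Docker CLI sends down /run/docker.sock for `docker ps`,
# written out as plain HTTP/1.1 (the blank line ends the headers):
request() {
  printf 'GET /v1.43/containers/json HTTP/1.1\r\n'
  printf 'Host: docker\r\n'
  printf 'Connection: close\r\n\r\n'
}
request | head -n 1

# Equivalent against a live daemon (not run here):
#   curl --unix-socket /run/docker.sock http://localhost/v1.43/containers/json
#   request | nc -U /run/docker.sock
```

dockerd answers with a JSON array of containers, which the CLI formats into the familiar table; everything below that (containerd, runc, clone) only comes into play for state-changing calls like create and start.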
Docker networking is where a lot of beginner container setups quietly break. You can have two containers running on the same machine, both healthy, both reachable in isolation, and still hit connection errors when one service tries to talk to the other. The usual mistake: relying on the default bridge network and hoping container names work like hostnames. They do not. In the latest Levelling Docker video, I walk through the networking basics that make multi-container apps click: • Port mapping with -p, so your browser can reach a container • The default bridge network, and why it is limited • Custom bridge networks, where containers can resolve each other by name • Docker's built-in DNS on custom networks • A hands-on PostgreSQL + pgAdmin exercise wired together properly The key idea is simple: Create a custom network, put related containers on it, and connect by container name instead of chasing IP addresses. That one habit makes local Docker setups much less fragile. Link in the comments. #Docker #DevOps #Linux #ContainerNetworking #Tutorial
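The habit described above, as a command sequence. This is a setup sketch that needs a running Docker daemon, so it is not executed here; the network and container names (appnet, db, admin) and image tags are examples, and pgAdmin's required environment variables are set to throwaway values.

```shell
# Custom bridge network: containers on it resolve each other by name.
docker network create appnet

docker run -d --name db --network appnet \
  -e POSTGRES_PASSWORD=secret postgres:16

docker run -d --name admin --network appnet -p 8080:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=secret \
  dpage/pgadmin4

# Docker's embedded DNS on the custom network resolves container names,
# so pgAdmin reaches PostgreSQL at host `db`, port 5432 -- no IPs involved:
docker run --rm --network appnet busybox nslookup db
```

On the default bridge, that last lookup fails: name resolution between containers is exactly the feature custom networks add.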
Wrote a new blog on the Linux file system: Understanding the System Behind the Structure. Instead of focusing on commands, I explored how Linux actually works under the hood through its file system. Covering: • How /etc controls system-wide behavior • Why /proc is a live interface to the kernel • What /var/log reveals about system failures and activity • How /dev abstracts hardware into files • The role of /sys in device and kernel interaction • Why Linux is fundamentally file-driven, not command-driven https://lnkd.in/gqyCcwSs Thanks to Hitesh Choudhary, Chai Aur Code, Piyush Garg, Akash Kadlag, Jay Kadlag, and Nikhil Rathore for guidance! #Linux #WebDevelopment #BackendDevelopment #DevOps #SystemDesign #Programming #LearnInPublic #SoftwareEngineering
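A tiny, runnable illustration of the "/dev abstracts hardware into files" bullet: /dev/null and /dev/zero behave like ordinary files to every tool, but reads and writes are serviced by kernel drivers rather than a disk. Works on any Unix-like system, no root needed.

```shell
# Writing to /dev/null: the "file" is a driver that discards everything.
printf 'this goes nowhere' > /dev/null

# Reading from /dev/zero: an endless stream of zero bytes, on demand.
bytes=$(head -c 10 /dev/zero | wc -c | tr -d ' ')
echo "read $bytes zero bytes from a device node"
```

The same file-shaped interface covers real hardware too (disks under /dev/sd*, terminals under /dev/tty*), which is why ordinary tools like dd and cat can operate on devices at all.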