Rejection is a funny thing. It tells you what other people believe about you, not what you're actually capable of.

I've heard "you don't know this" more times than I can count recently. And each time, I had the same thought: I may not know it yet, but give me time and I'll figure it out. So I gave myself 24 hours. And I built Linux Sync.

Linux Sync is a full peer-to-peer Linux system sync application written in Python, with a native GTK GUI, a background daemon, mDNS network auto-discovery, QR code pairing, and SSH-based transport. It syncs packages across any Linux distro (DNF, APT, Pacman, Zypper, and more), Flatpak apps, your entire home directory, /etc system config, and GNOME desktop settings. A true 1-to-1 mirror between two machines.

I had never built a GTK application before starting this project. I learned the framework, built the entire GUI, wired it to a sync engine, and shipped a working application in under 12 hours.

People sometimes hear that and ask how. The honest answer is that I've spent years building what I think of as a spider web of knowledge: Linux internals, networking protocols, SSH, Python, system architecture, UI patterns, package management. None of it learned in isolation; all of it connected. When I encounter something unfamiliar, I don't start from zero. I find the thread that connects it to something I already understand, and pull.

That's what it means to be a fast learner. Not that you know everything, but that you know how to learn anything.

Linux Sync is open source, fully functional, and built in a day. Not to impress anyone specifically, but to remind myself, and maybe someone else who needs to hear it, that being told you can't do something is just the starting line.

#Linux #Python #GTK #OpenSource #SoftwareEngineering #CareerDevelopment #NeverStopLearning https://lnkd.in/e7k-Nf6g
Building Linux Sync in 12 hours: Overcoming Rejection
More Relevant Posts
The official Swift extension is now live on the Open VSX Registry. Great for Cursor and other VS Code forks — this brings full syntax highlighting, debugging, refactoring, and SPM support to popular IDEs! https://lnkd.in/geXJg7bf
🚀 Every developer should know this: as developers, we often focus on frameworks, languages, and tools, but mastering the terminal is what truly boosts productivity ⚡ Here are some must-know shell commands that can make your life easier 👇

📁 File Management: ls, cd, pwd, mkdir, rm, cp, mv
📄 File Handling: cat, less, head, tail -f (perfect for logs 👀)
🔍 Search & Filter: grep, find, wc
⚙️ Permissions: chmod, chown
⚡ Process Management: ps, kill, top
🌐 Networking: curl, ping, wget
📦 Compression: tar, zip, unzip
🔁 Power Moves: | (pipes), > (redirect), >> (append)

💡 Pro Tip: The real power of shell scripting comes from combining commands. For example: grep "error" logs.txt | wc -l

Small commands. Massive impact. Start using them daily — your future self will thank you 🙌

#Developers #Linux #ShellScripting #DevTips #Productivity #Programming #DevOps
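The tip about combining commands can be made concrete with a short, self-contained sketch; the log file and its contents below are invented for illustration:

```shell
#!/bin/sh
# Create a throwaway log file so the pipeline has something to work on.
cat > /tmp/demo_logs.txt <<'EOF'
2024-01-01 INFO  service started
2024-01-01 ERROR connection refused
2024-01-02 ERROR timeout after 30s
2024-01-02 INFO  retry succeeded
EOF

# Small commands combined with a pipe: grep filters the ERROR
# lines, wc -l counts them (2 in this file).
grep "ERROR" /tmp/demo_logs.txt | wc -l

# head and tail slice a file; tail -f follows a growing log live.
head -n 2 /tmp/demo_logs.txt
```

Appending another stage like `| sort | uniq -c` is the same idea taken one step further: each command stays tiny, and the pipe does the composition.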
Bash is the fifth most-used programming language in the world. More than 49% of developers use it actively, ahead of TypeScript. And 80% of the Bash code on GitHub is absolute garbage.

Not my words: ACM, 2022, 1.35 million scripts analysed. Quoting failures, word-splitting errors, missing error handling. Schoolboy errors across millions of repositories.

We are talking about the language that runs on 96.3% of the world's top million web servers. The language in every CI/CD pipeline, every container entrypoint, every deployment workflow. Treated like a weekend hack.

See my full rant in the first comment, with the data, the bad tutorials, and some bad language.

#Bash #DevOps #ShellScripting #SoftwareEngineering #Linux
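The quoting and word-splitting failures the study counts are easy to reproduce; the filename below is invented for illustration:

```shell
#!/bin/sh
# A filename containing a space: the classic word-splitting trap.
mkdir -p /tmp/quote_demo
cd /tmp/quote_demo || exit 1
touch "my file.txt"

f="my file.txt"

# Unquoted, $f splits into two words, "my" and "file.txt",
# so ls is asked for two files that don't exist and fails.
ls $f 2>/dev/null || echo "unquoted: not found"

# Quoted, the variable expands as a single word and the file is found.
ls "$f" >/dev/null && echo "quoted: found"
```

shellcheck flags exactly this class of bug (unquoted variable expansion); running it in CI is the cheapest defense against the failure modes the analysis describes.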
Cutlet uses Claude Code. The LLM emits every line. Source, build steps, and examples live on GitHub. It runs on macOS and Linux and ships a REPL. It supports arrays, strings, double-precision numbers, a vectorizing meta-operator, zip/filter indexing, prototypal inheritance, and a mark-and-sweep GC.

Development ran through an agentic pipeline: tests and example programs as integration checks, linters, ASan/UBSan builds, runtime dumps, pipeline tracers, and Docker with full agent permissions. Yes, full agent permissions. Workflows favor agentic pipelines that frontload specs and expose tests, tracers, and runtime tools. LLMs run with broad runtime access. https://lnkd.in/eh_kRGms

---

Want similar stories? Join 👉 https://faun.dev/join
Containers didn't happen because Docker was a good idea. They happened because the alternative was genuinely awful.

Here's what "before containers" looked like in practice: you have one application server, and you need to run three services on it. Each has a different Python version, different system library requirements, different assumptions about what /tmp contains. The solutions were: separate VMs (expensive), configuration management (fragile), or very careful manual isolation (not actually isolated).

Containers solved this not by inventing something new, but by surfacing Linux primitives that already existed. Work on cgroups began at Google in 2006, and the feature was merged into the mainline kernel in early 2008; namespaces existed even earlier, with the mount namespace dating to 2002. Docker in 2013 was a well-designed interface on top of things Linux already knew how to do.

Understanding this history matters for one reason: if you know why containers were invented, you know what they're actually good at, and what they're not. They're excellent at process isolation and dependency management. They're not a security boundary by themselves.

The tool is the solution to a specific problem. Know the problem.

#Docker #Linux #DevOps #Containers #Infrastructure #CloudNative #SoftwareEngineering #History
📜 Learning bash scripting taught me something I didn't expect: the gap between understanding something and using something is massive.

You can read about variables and functions all day. But the real progress happens when you:

→ Apply the theory in isolation (write the function, test it, break it)
→ Connect the concepts (pass parameters, capture input, handle logic)
→ Build something end-to-end (a real script that solves a real problem)

👇 Below is a script I put together that sorts all .txt files in a directory by size, smallest to largest, with input validation built in. Is it the most efficient script? Probably not. But that's the point: efficiency comes with practice, and you can't refine what you haven't built yet!

Most people stop at step one and wonder why it's not sticking. Build something, even something small. That's where it lands.

What's a skill you learned by just building with it? 💻

#Bash #Scripting #Linux #Tech #DevOps
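The script itself wasn't captured in this share, so here is a minimal sketch matching the description (sort all .txt files in a directory by size, smallest to largest, with input validation). The script name and structure are my assumptions, and -printf is GNU find specific:

```shell
#!/bin/bash
# sort_txt_by_size.sh (hypothetical name): list .txt files in a
# directory from smallest to largest, validating the input first.

dir="${1:-.}"    # default to the current directory

# Input validation: the argument must be an existing directory.
if [ ! -d "$dir" ]; then
    echo "Error: '$dir' is not a directory" >&2
    exit 1
fi

# GNU find prints "size<TAB>path" for each .txt file;
# sort -n orders the lines numerically by the leading size.
find "$dir" -maxdepth 1 -type f -name '*.txt' -printf '%s\t%p\n' | sort -n
```

On systems without GNU find, a loop over the files with `wc -c < "$file"` gets the sizes portably, at the cost of a few more lines.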
If you’re an open-source developer or maintainer who’s been on the fence about, or dipping your toes into, vibe coding, this policy is probably going to trickle down into, or inform, many other FOSS projects soon, so it’s worth a read.
A moment I'll never forget. Two emails from Greg Kroah-Hartman (Linux kernel maintainer):

"This is a note to let you know that I've just added the patch titled 'staging: greybus: audio: Use sysfs_emit in show functions' to my staging git tree..."

"This is a note to let you know that I've just added the patch titled 'staging: greybus: arche-platform: Use sysfs_emit instead of sprintf' to my staging git tree..."

My patches are now in the Linux kernel!

For context: I have a B.Sc. in Agriculture. I'm self-taught in C and systems programming. Six months ago, the idea of contributing to the kernel felt impossible.

What changed? I stopped waiting to feel "ready enough" and just started:
→ Read kernel documentation
→ Found small issues I could fix
→ Submitted patches following LKML guidelines
→ Learned from code review feedback

The patches themselves? Converting sprintf to sysfs_emit in the Greybus subsystem. Small changes, but they improve kernel safety and follow best practices.

Here's what I learned:
- Start small (these were ~10 line changes)
- Documentation matters (I also contributed watchdog driver docs)
- Code review is a gift (Guenter Roeck's feedback taught me more than any tutorial)
- Agriculture background ≠ barrier to kernel development

To anyone thinking "I'm not experienced enough for open source": You are. Pick a project. Read the contribution guide. Submit something small. The kernel doesn't care about your degree. It cares about your code.

#Linux #OpenSource #KernelDevelopment #SelfTaught #TechCareer #FromAgricultureToCode

P.S. - If you're interested in contributing to the Linux kernel, the staging tree (where Greybus lives) has excellent beginner-friendly issues. Start there.
Got a new laptop last week. Usually this means: reinstall Python, recreate the conda environment, pip install everything again, install dependencies, lose half a day.

This time I didn't do any of that. I just copied my entire WSL environment from the old machine to the new one: libraries, virtual environments, configs. Opened the terminal on the new laptop and it worked like nothing changed.

Here's how:

1. On the old laptop, export the WSL distro:
   wsl --export Ubuntu D:\Ubuntu_backup.tar
2. Transfer the .tar file to the new laptop with a pen drive.
3. On the new laptop, install WSL, then import:
   wsl --import Ubuntu C:\WSL\Ubuntu D:\Ubuntu_backup.tar
4. Imported distros log in as root by default, so set your default user:
   ubuntu config --default-user <your_username>
   (If that command isn't available for your imported distro, setting default=<your_username> under [user] in /etc/wsl.conf does the same thing.)

Done. All my Python environments, pip packages, and project files remained intact. Zero reinstallation.

The .tar file was heavy (7+ GB), but it saved hours of setup. If you work on WSL and switch machines often, this is the way.

#WSL #Python #DevTools #Productivity #Linux
The latest ffmpeg 8.1 includes a new filter I developed that exposes OpenColorIO. It is currently an optional build, so I've written some more detailed documentation on building and using OCIO with ffmpeg 8.1: https://lnkd.in/ehqifJ5a along with documentation on using conan to build ffmpeg with OCIO (with some custom recipes): https://lnkd.in/eP_2zCJc (this should build on Linux, macOS, and Windows).

This now allows you to create the slate and the burn-in version of media in one launch of ffmpeg. I have also made a proof of concept of this: https://lnkd.in/exCn5NhB which allows you to take an image sequence or existing movie file and create a review set of media. There is an API version of this to make it easy to incorporate into a pipeline.

I want to thank the #ocio and #ffmpeg teams for their help in this development.