As a person in tech, I don't want to relearn commands every time my tech stack changes. So I built dashb, a CLI tool that auto-detects your project and gives you the right commands instantly. Once it's installed, just run db in any directory. No config, no setup.

A few features I'm proud of:
1) db doc: checks your project's health.
2) db add: adds custom shortcuts for your workflow.
3) db stats: shows which commands you use the most.

Works across Python, Node.js, Rust, Go, Docker, and many more. Almost 300 downloads so far.

Install it: npm install -g dashb

PS: If you have any suggestions or want new features, feel free to reach out. #CLI #DeveloperTools
I recently posted about "Agentic Workflows" and, separately, about "How to train your program verifier" (a3-python, based on the Z3 theorem prover). You can use a3-python from Agentic Workflows. Here's how: from your repository, install AW (see gh.io/gh-aw for instructions). Then add the a3-python workflow with: gh aw add https://github.com/Z3Prover/z3/blob/master/a3/a3-python.md Run the action (e.g. from the GitHub portal). It will scan your Python files, post-process them (with your Copilot tokens), and create a GitHub issue if it finds problems with them.
I just built and published an open-source Python package, chrono-temporal. It adds time-travel queries to any database entity: query what your data looked like at any point in history, track full change histories, and diff any two points in time. pip install chrono-temporal https://lnkd.in/ebm-cfRX
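To make "time-travel queries" concrete, here is a minimal in-memory sketch of the idea: every write is stored as a timestamped version, so reads can target any instant and any two instants can be diffed. All names here are illustrative; this is not chrono-temporal's actual API.

```python
# Conceptual sketch of time-travel queries (hypothetical names, not
# chrono-temporal's real interface): keep every version, never overwrite.
from bisect import bisect_right

class TemporalEntity:
    def __init__(self):
        self._versions = []  # (timestamp, snapshot dict), appended in time order

    def save(self, timestamp, **fields):
        """Record a new version of the entity at the given timestamp."""
        self._versions.append((timestamp, dict(fields)))

    def as_of(self, timestamp):
        """Return the entity state as it existed at `timestamp` (or None)."""
        idx = bisect_right([t for t, _ in self._versions], timestamp)
        return self._versions[idx - 1][1] if idx else None

    def diff(self, t1, t2):
        """Return the fields that changed between two points in time."""
        a, b = self.as_of(t1) or {}, self.as_of(t2) or {}
        return {k: (a.get(k), b.get(k))
                for k in set(a) | set(b) if a.get(k) != b.get(k)}

user = TemporalEntity()
user.save(100, name="Ada", role="dev")
user.save(200, name="Ada", role="lead")

print(user.as_of(150))      # state at t=150: the t=100 version
print(user.diff(150, 250))  # only the changed field, with before/after values
```

A real implementation would persist these versions in the database (e.g. a history table) rather than a list, but the query semantics are the same.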
💻 Docker Practice: Using Environment Variables

Today I practiced making my Docker containers more flexible by using environment variables to control application behavior.

💠 Dynamic Configuration: used the ENV instruction in the Dockerfile to set a variable (APP_MODE=Production).
💠 Code Integration: updated the Python script to read the variable with os.environ.get(), allowing the app to adapt to its environment.
💠 Build & Verification: built the productionapp image and confirmed that the container correctly identified its mode during execution.
💠 Execution Success: verified the output "application mode: Production" without changing a single line of Python code.

#Docker #DevOps #Backend #PythonDevelopment #Automation #Configuration #SoftwareEngineering
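The Python side of this pattern is small. A minimal sketch (the fallback value "Development" is my assumption, not from the post) paired with `ENV APP_MODE=Production` in the Dockerfile:

```python
# app.py: read the mode set by the Dockerfile's ENV instruction.
# os.environ.get returns the second argument when the variable is unset,
# so the same code runs unchanged inside and outside the container.
import os

def get_app_mode():
    return os.environ.get("APP_MODE", "Development")

print(f"application mode: {get_app_mode()}")
```

Overriding at run time also works without rebuilding: `docker run -e APP_MODE=Staging productionapp`.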
𝗗𝗲𝗽𝗹𝗼𝘆𝗶𝗻𝗴 𝗔 𝗙𝗮𝘀𝘁𝗔𝗣𝗜 𝗔𝗽𝗽 𝗧𝗼 𝗥𝗲𝗻𝗱𝗲𝗿

You can take your API live with Render. Here's how:
- Run your API on localhost for development
- Use Render's free tier
- Connect directly to GitHub
- Auto-deploy on every push to main
- Use environment variables
- No server configuration needed

You need 3 files for deployment:
- requirements.txt with your dependencies
- .python-version to pin your Python version
- A script to run your app

Your requirements.txt might look like this:
• fastapi==
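The third file, the run script, might look like the sketch below. This is an assumption about a typical setup, not Render's required form: `main:app` presumes your FastAPI instance is named `app` in `main.py`, and Render injects the PORT environment variable.

```shell
# start.sh: the command Render runs to serve the app.
# Binds to all interfaces and to whatever port the platform assigns.
uvicorn main:app --host 0.0.0.0 --port "${PORT:-8000}"
```

The `${PORT:-8000}` fallback keeps the same script usable for local development.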
Flight&Balance version 11 is live. Not vibe-coded, but built with AI support. 51 UI pages. 67 API endpoints across 41 controllers. It completely replaces a 5-year-old Python/Django system. The code quality is good; fixes are easy. Docker- and Playwright-based end-to-end tests. Unit tests, of course. 95% fully generated code. I handled the tickets (user stories), code reviews, and pull requests.
🚀 𝗦𝗵𝗶𝗽𝗽𝗲𝗱 𝗠𝘆 𝗙𝗶𝗿𝘀𝘁 𝗣𝘆𝘁𝗵𝗼𝗻 𝗣𝗮𝗰𝗸𝗮𝗴𝗲 𝗼𝗻 𝗣𝘆𝗣𝗜… 𝗮𝗻𝗱 𝗦𝗼𝗺𝗲𝘁𝗵𝗶𝗻𝗴 𝗪𝗶𝗹𝗱 𝗛𝗮𝗽𝗽𝗲𝗻𝗲𝗱 📝✨

I thought this would be a simple "publish & move on" project. Turns out… it became one of the biggest learning curves of my dev journey. 👀🔥

💥 Broken builds
💥 Weird packaging errors
💥 Unexpected CLI bugs

And a lot of "why is this even failing?" moments. But today, after all the chaos:

𝗧𝗼𝗱𝗮𝘆, 𝗜 𝗼𝗳𝗳𝗶𝗰𝗶𝗮𝗹𝗹𝘆 𝘀𝗵𝗶𝗽𝗽𝗲𝗱 𝗧𝗲𝘅𝘁𝗻𝗼𝘁𝗲𝘀 🐍

A tiny CLI tool. One command. Surprisingly useful. 💡

The biggest win? Not just shipping. Learning how packaging works, debugging real-world errors, and shipping something people can actually use.

👉 Read the full story: 🌐 https://lnkd.in/db7a5neY

𝗧𝗿𝘆 𝗜𝘁:
🐍 PyPI → https://lnkd.in/dpzhWn_A
💻 GitHub → https://lnkd.in/dS9W57bE

🔁 Share to help others ❤️ Like if you found it useful 📥 Save for your next project 👥 Tag someone who needs this

#python #pythonprogramming #KnowledgeSharing
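For anyone facing the same packaging learning curve: the piece that turns a Python package into a one-command CLI tool is a few lines of pyproject.toml. The values below are illustrative, not Textnotes' actual metadata:

```toml
# pyproject.toml: minimal packaging metadata for a CLI tool.
[project]
name = "textnotes"        # the name users pip install (illustrative)
version = "0.1.0"         # hypothetical version

[project.scripts]
# Installing the package creates a `textnotes` command that
# calls main() in textnotes/cli.py.
textnotes = "textnotes.cli:main"
```

Running `python -m build` then `twine upload dist/*` is the usual path from here to PyPI.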
Your Dockerfile is probably rebuilding everything on every code change. Here are 4 optimizations that made our test builds 5x faster and images 55% smaller:

1. Reorder your instructions. Copy dependency files first, install, then copy source code. Docker invalidates the cache at the first changed layer, so put the things that change least at the top. This alone cut rebuilds from 37s to 7s.

2. Add a .dockerignore. Without one, Docker was sending 325 MB of build context (node_modules, .git, logs). With one: 1.2 kB.

3. Use multi-stage builds. Build in a full image, run in a slim one. The final image went from 2.49 GB to 1.11 GB with no impact on rebuild speed.

4. Add cache mounts for your package manager. When dependencies change, cache mounts let npm/pip/go reuse previously downloaded packages instead of starting from scratch.

Full walkthrough with examples for Node, Python, Go, and Ruby: https://lnkd.in/dhTbZbJX
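Optimizations 1, 3, and 4 can all live in one Dockerfile. A sketch using Node as the example (image tags, paths, and the build output directory `dist/` are assumptions, not the post's exact setup):

```dockerfile
# Tip 1: copy only the dependency manifests first, so the expensive
# install layer stays cached when source code changes.
FROM node:20 AS build
WORKDIR /app
COPY package.json package-lock.json ./
# Tip 4: a cache mount lets npm reuse downloaded packages across builds.
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build

# Tip 3: multi-stage build; ship only what the app needs at runtime.
FROM node:20-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```

Tip 2 lives in a separate .dockerignore file next to the Dockerfile, typically listing node_modules, .git, and *.log.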
We're building something. 👀 This week we shipped a small but meaningful update to Flux API and our Python and TypeScript SDKs — added introspection: the API can now describe itself at runtime. What routes are available, what they do, what fields you can search by. Sounds like a technical detail. But there's a reason we're doing this now. We're working on an MCP server built directly into Flux API. The idea is simple: an LLM agent connects to your FoxNose environment via MCP and immediately understands what's there and how to work with it — no hardcoded configs, no middleware, no manual wiring. The agent just asks the API what it can do, and gets to work. But this is still under wraps. 🤫 Stay tuned. 🦊 #FoxNose #MCP #AIAgents #LLM #RAG
🚀 New Python Project: To-Do List Application

I built a simple To-Do List application using Python to practice working with user input, lists, and basic program logic. This project allows users to manage their daily tasks directly from the terminal.

📌 Key Features
• Add new tasks
• View all tasks
• Mark tasks as completed
• Delete tasks from the list

🛠 Technologies Used
Python | CLI | Data Structures

🔗 GitHub Repository
https://lnkd.in/dsa3cQHS

I'm continuously building small projects to strengthen my Python and problem-solving skills. Feedback is always welcome!

#Python #Programming #GitHub #Coding #100DaysOfCode
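The four features map cleanly onto four small functions over a list of dicts. This is an illustrative sketch of that structure, not the repository's actual code:

```python
# Core task operations behind a terminal to-do app (illustrative sketch).
tasks = []  # each task: {"title": str, "done": bool}

def add_task(title):
    tasks.append({"title": title, "done": False})

def view_tasks():
    # Render each task as "[x] title" or "[ ] title".
    return [f"[{'x' if t['done'] else ' '}] {t['title']}" for t in tasks]

def complete_task(index):
    tasks[index]["done"] = True

def delete_task(index):
    tasks.pop(index)

add_task("write post")
add_task("push to GitHub")
complete_task(0)
delete_task(1)
print(view_tasks())  # ['[x] write post']
```

A real CLI would wrap these in an input loop (or argparse subcommands) and persist the list to a file between runs.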
What Edwin Manual described below is exactly where development is heading: increasingly hands‑off, increasingly autonomous, and increasingly shaped by agents that don’t just follow instructions but evaluate the task, choose the better path, and execute end‑to‑end. What’s becoming clear in agentic engineering is that humans are shifting into the judgment/verification layer. A fleet of agents handles the feature lifecycle (planning, implementing, testing, deploying) and our job becomes ensuring the system design, constraints, and intent are preserved. One interesting tension I’ve been thinking about: when you model a multi‑agent system through RAG, with markdown files defining each agent’s values, capabilities, and constraints, the latest research shows that more RAG sources not only restrict the model’s ability to explore alternative, more optimal routes, but can also inadvertently degrade LLM output quality. Guardrails keep the system safe, but they also narrow the search space. Fewer guardrails give the model more freedom to discover unconventional but better solutions (as depicted in the post below with the Vercel API call). It raises a real design question for the next wave of autonomous development: How much structure is enough to keep agents aligned, and how much freedom is necessary for them to outperform us? Would love to hear some thoughts on that balance below. #MultiAgentSystems #RAGArchitecture #ContextEngineering #AIOrchestration #AIAlignment #AIInfrastructure #FutureOfDevelopment
Today I was working with Claude Code. I specifically asked it to deploy a project using the Vercel CLI and set up a custom domain. I regularly use the Vercel CLI with Claude Code, so that's what I told it to use. But when I watched it work, I noticed something different. It wrote a Python script that calls the Vercel REST API directly. It even figured out where the Vercel CLI stores its auth token, pulled it, and attached it to the request. I was curious, so I asked it: "Why did you use the REST API instead of the CLI?" Its answer was genuinely impressive. It explained that the Vercel CLI doesn't support creating projects with full settings in one step. There's no command to set the root directory, framework preset, or build commands; the CLI is primarily a deployment tool. So instead of running 4 separate commands (with one step that's impossible via the CLI anyway), it made a single API call that configured everything at once. It didn't just follow my instructions. It evaluated the task, found a better approach, and executed it. The whole thing took 2 minutes.