General Motors: Driving a Software-Defined Future with GitHub Enterprise

General Motors is evolving from an automaker into a platform innovator. To support this transformation, the company needed to unify a massive, fragmented developer environment and accelerate the delivery of software-driven vehicles. By migrating nearly 20,000 developers and 150,000 repositories to GitHub Enterprise Cloud, GM has streamlined its toolchain and modernized its developer experience.

Key outcomes:
- Unified ecosystem: consolidated disparate tools into a single platform for better collaboration.
- Faster iteration: build times reduced from hours to minutes, enabling more frequent deployments.
- Integrated security: GitHub Advanced Security is embedded directly into the developer workflow.
- Zero downtime: executed a large-scale migration with no impact on production.

"Transitioning to GitHub was a straightforward decision," says Mario Parisi, Software Development Manager at GM. "Today, we operate within a unified ecosystem that supports all of our developers."

See how GM is leveraging GitHub to redefine the driving experience: https://lnkd.in/gnUQHXHi
GM Unifies Developer Environment with GitHub Enterprise Cloud
Platform teams often spend 6-12 months building a portal, only to find themselves trapped in perpetual maintenance mode. They end up wrestling with breaking changes, plugin updates, and Kubernetes configs instead of shipping the features developers actually want.

This creates a dangerous trap: impressive storefronts with empty shelves. You have a portal, but no capacity left to build the "Golden Paths" that actually drive value.

The most successful teams in 2026 are making a different choice. They recognize that maintaining the interface layer is not a competitive advantage. They are buying the storefront (SaaS/managed Backstage) so they can focus 100% of their energy on stocking the shelves.

Stop building the tool. Start building the platform. https://lnkd.in/d9usMrhz
If you’re looking to advance your career and get hands-on with Kubernetes, one of the easiest ways to start is with Portainer Community Edition. It’s open source, lightweight, and removes a lot of the friction that usually slows people down when they’re learning containers, orchestration, and real-world platform workflows.

A few reasons it’s a great starting point:
- You learn Kubernetes without drowning in YAML. Portainer gives you a visual layer so you can understand what’s happening before you dive deeper into manifests and automation.
- You get real hands-on experience. Deploy apps, manage workloads, explore networking, storage, RBAC, and day-2 operations, all from a clean UI.
- You can run it anywhere. Local laptop, lab cluster, cloud, home server… whatever you have, Portainer runs on it.
- You build confidence faster. Instead of spending hours troubleshooting setup issues, you can focus on actually learning how Kubernetes behaves.
- It mirrors what real teams deal with. Multi-cluster views, access control, container lifecycle management; the same concepts used in production environments.

If you’re serious about growing your skills in containers or platform engineering, start small, get hands-on, and build from there. Portainer CE makes that path a lot smoother. https://lnkd.in/gBTyQQGx
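For the "run it anywhere" point: on a machine that already has Docker, a local Portainer CE instance is two commands. This is the quick-start install from Portainer's own docs (the image tag tracks the latest CE release; pin a specific version in anything long-lived):

```shell
# Persistent volume for Portainer's own data (users, settings, endpoints)
docker volume create portainer_data

# Run the CE server; 9443 serves the HTTPS web UI.
# Mounting the Docker socket lets Portainer manage the local Docker engine.
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

Then open https://localhost:9443 and create the admin account. Connecting a Kubernetes cluster works the same way, via the Portainer agent deployed into the cluster.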
Heroku is moving into maintenance mode under a sustaining engineering model. Here's what it means for production teams and how to evaluate your options. https://lnkd.in/dhMTmX4V
🚀 Copilot Collections is expanding into DevOps & Cloud

Last week we shared how Copilot Collections supports the flow from product idea → backlog → architecture → development → QA. Now we’re extending it even further.

We’ve introduced a new DevOps Engineer agent designed to support multi-cloud environments (AWS, Azure, GCP), with built-in FinOps guardrails, cloud governance, and automation support. It helps teams:
☁️ design and manage cloud infrastructure
⚙️ implement CI/CD and platform automation
📊 monitor observability and optimize costs
🛡️ follow Golden Paths and best practices

With this update, Copilot Collections now supports the full path from product idea to running cloud infrastructure! ☁️ Check it out: https://lnkd.in/dzXhBT9x
Cursor has launched Automations, enabling always-on coding agents that run on schedules or external triggers like Slack, GitHub, and PagerDuty. Built on cloud sandboxes, it targets review, incident response, and maintenance, expanding Cursor into a workflow layer for engineering teams. #cursor
Most teams are using GitHub like a filing cabinet. GitHub Enterprise is an entirely different thing.

If your developers are on GitHub Free or a basic plan, you're getting version control. That's it. GitHub Enterprise is what happens when you treat software delivery as a system — not just a collection of repositories.

Here's what that actually looks like in practice:
⚙️ GitHub Actions — automate your entire CI/CD pipeline. Build, test, and deploy without leaving GitHub. Enterprises using Actions see up to 32% reduction in deployment time.
☁️ GitHub Codespaces — fully configured cloud development environments, spun up in seconds. No more "works on my machine."
🔒 Branch protection & org-wide policies — enforce code review requirements, approval workflows, and compliance rules across every repository, automatically.
📋 GitHub Projects — planning and tracking built directly into your delivery workflow, not bolted on as a separate tool.
🌍 Enterprise scale — 92% of Fortune 100 companies run on GitHub Enterprise. It's not a developer preference. It's an enterprise standard.

The question we hear most from organisations: "We already have Azure DevOps — do we need GitHub Enterprise too?" The answer isn't either/or. As a Microsoft Solutions Partner, we help organisations understand how GitHub Enterprise and Azure DevOps complement each other — and configure both so your teams aren't duplicating effort or paying for overlap.

What does your current CI/CD setup look like? Drop it in the comments — happy to share what we've seen work.
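To make the Actions point concrete: a CI pipeline is just a YAML file in the repository. A minimal sketch (the build and test commands are placeholders for whatever your project uses) committed as `.github/workflows/ci.yml`:

```yaml
name: ci
on: [push, pull_request]   # run on every push and pull request

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the repository
      - run: ./build.sh             # placeholder: your build command
      - run: ./run-tests.sh         # placeholder: your test command
```

Combine this with branch protection requiring the `build` check to pass, and nothing unreviewed or failing reaches main.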
How RGT saved $200k on software costs over the course of 2 months.

RGT used a software stack very similar to most orgs:
- ClickUp for project and task management
- AWS for cloud deployment
- Slack for team and client communication
- Harvest for hour reporting

The total monthly spend across these four was roughly $5k-$7k, and over three years of use we paid these companies ~$200k. If I handed your company $200k right now, wouldn't that be significant?

How we went about it:
- Built quick internal alternatives to Harvest and ClickUp (took 2-3 months, 2-3 devs, and LLMs for fast code generation), rough spend: $20k
- Switched from Slack to Discord (no need to reinvent the wheel when one exists and is cheaper/free)
- Switched most of AWS to DigitalOcean (same mentality)

Our next cost-optimization moves:
- Moving from DigitalOcean to Hetzner (should drop the cloud bill by 50%-70%)
- A sophisticated LLM stack for our devs and DevOps using a hybrid approach of open source models + Claude + openclawd variations

The above should save us another $100k or so.

After that, we're going to build agents for specific dev purposes that work hand-in-hand with our team (frontend agent, DevOps agent, etc.). I believe the future is a mesh of humans-in-the-loop doing high-end, sophisticated work for the coming 1-3 years, and positioning yourself accordingly is the way to go.
🚀 Day 5 – #40daysofkubernetes

Today I learned how Kubernetes maintains application availability using controllers. Understanding the difference between ReplicationController, ReplicaSet, and Deployment gave me real clarity on how Kubernetes ensures high availability and smooth updates ☸️

📌 1️⃣ ReplicationController (older approach)
ReplicationController ensures:
👉 A specified number of Pods are always running
👉 If a Pod crashes, Kubernetes creates a new one
But it has limitations:
👉 Doesn't support advanced label selectors
👉 Not recommended for modern production use
👉 It's the older method

📌 2️⃣ ReplicaSet (improved version)
ReplicaSet is the next evolution. It:
✔ Supports set-based label selectors
✔ Maintains the desired replica count
✔ Automatically replaces failed Pods
In production, we usually don't create ReplicaSets directly.

📌 3️⃣ Deployment (production standard)
Deployment manages ReplicaSets. This is what we use in real-world scenarios. It provides:
✅ Rolling updates (zero downtime)
✅ Rollback to previous versions
✅ Easy scaling
✅ Declarative updates
Instead of managing Pods manually, we define the desired state — and Kubernetes maintains it automatically.

🔁 Quick comparison
ReplicationController → older method
ReplicaSet → improved version
Deployment → production-ready controller
A Deployment internally creates a ReplicaSet, and the ReplicaSet manages the Pods.
Clear hierarchy: Deployment → ReplicaSet → Pods

💡 My key learnings
✔ Kubernetes doesn't just run containers — it manages the lifecycle.
✔ ReplicaSets ensure stability.
✔ Deployments ensure smooth application updates.
✔ Controllers are the brain behind self-healing.

Now I understand how companies deploy new versions without downtime 🚀

See you on Day 6 🙌 Piyush sachdeva The CloudOps Community #Kubernetes #DevOps #Containers #CloudComputing #LearningJourney #40daysofkubernetes
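The Deployment → ReplicaSet → Pod hierarchy is easy to watch on any test cluster (minikube, kind, etc.). A small sketch — the deployment name `web` and the nginx image tags are illustrative:

```shell
# Create a Deployment; Kubernetes creates a ReplicaSet, which creates 3 Pods
kubectl create deployment web --image=nginx:1.27 --replicas=3

# Observe all three levels of the hierarchy at once
kubectl get deployments,replicasets,pods -l app=web

# Rolling update: a NEW ReplicaSet is created and scaled up
# while the old one is scaled down — zero downtime
kubectl set image deployment/web nginx=nginx:1.28
kubectl rollout status deployment/web

# Rollback: reactivate the previous ReplicaSet
kubectl rollout undo deployment/web
```

Run `kubectl get rs -l app=web` after the update and you'll see two ReplicaSets, the old one scaled to 0 and kept around precisely so rollback can reuse it.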
Kubernetes didn’t fail you. You probably asked it to do the wrong job.

For the last few years, I’ve led a platform team responsible for Kubernetes as the foundation of our product. Along the way, we hit plenty of roadblocks. We made it through, but not without learning some hard lessons. And I keep seeing the same pattern.

Teams adopt Kubernetes expecting:
• Speed
• Standardization
• Efficiency

What they get instead:
• Tool sprawl
• YAML fatigue
• Surprise cloud bills
• Slower onboarding
• Unexpected complexity

Here’s what I’ve learned from all of it. Kubernetes is not a productivity tool. It’s a programmable substrate. It gives you primitives: scheduling, networking, scaling, policy, quotas. It does not give you:
• Golden paths
• Opinionated defaults
• A magical developer experience

That’s platform engineering’s job. The teams who succeed with Kubernetes stop treating it like a product and start treating it like infrastructure Lego.

Over the next few weeks, I’ll share some of my learnings from designing Kubernetes at scale:
• Where we over-engineer
• Where we under-invest

Let’s start here: what’s the biggest gap between what you expected from Kubernetes… and what you actually got? Whether you’re a user or a platform engineer, I’d love to hear your K8s story.

#kubernetes #platformengineering #devex
🚀 Improving Deployment Speed & Quality Using GitHub Copilot

Modern cloud deployments demand speed — but without compromising reliability. Recently, I started leveraging GitHub Copilot to enhance our deployment workflows across Terraform, PowerShell, and CI/CD pipelines — and the productivity boost was real.

Here's how Copilot helped improve deployment efficiency 👇

🔹 Faster Infrastructure as Code development
While working with Terraform, Copilot suggested optimized resource blocks, variables, and output structures — reducing boilerplate effort significantly.

🔹 Smarter CI/CD pipeline writing
In GitHub Actions, Copilot auto-suggested workflow syntax, conditional jobs, reusable workflows, and environment configurations — minimizing YAML errors.

🔹 Improved script quality
For PowerShell and Bash scripts, Copilot helped:
✔️ Generate parameter validation
✔️ Add proper error handling
✔️ Suggest logging improvements

🔹 Security & best practices
Copilot often suggests:
• Secure variable usage
• Proper naming conventions
• Structured module design
• Input validation

🔹 Reduced debugging time
Instead of searching documentation repeatedly, contextual code suggestions accelerated troubleshooting.

📊 Impact on deployment:
✔️ Faster PR creation
✔️ Cleaner IaC modules
✔️ Reduced syntax errors
✔️ Improved review cycles
✔️ Increased deployment confidence

AI won't replace engineers — but engineers using AI will outperform those who don't. Are you integrating AI into your DevOps workflow yet?

#GitHubCopilot #DevOps #Terraform #Azure #CloudEngineering #CICD #Automation #AIinTech
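As a rough illustration of the "parameter validation + error handling + logging" pattern described above — the `deploy` function and environment names here are hypothetical, a sketch of the scaffolding an assistant typically suggests for a bash deployment script:

```shell
#!/usr/bin/env bash
# Hypothetical deploy helper: validation, logging, and fail-fast behavior.
set -euo pipefail   # abort on errors, unset variables, and pipe failures

# Timestamped, leveled logging instead of bare echo
log() { printf '%s [%s] %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2"; }

deploy() {
  local env="${1:-}"
  # Parameter validation: reject a missing or unknown environment up front
  case "$env" in
    dev|staging|prod) ;;
    *) log ERROR "usage: deploy <dev|staging|prod>"; return 2 ;;
  esac
  log INFO "deploying to $env"
  # ... real deployment steps (terraform apply, az deployment, etc.) go here ...
}
```

The point isn't the three lines of logic — it's that typing `deploy() {` and a comment is usually enough for Copilot to propose the validation and logging boilerplate, which is exactly the kind of repetitive hardening humans tend to skip.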