When working with multiple LLM providers, managing prompts, and handling complex data flows, structure isn't a luxury, it's a necessity. A well-organized architecture enables:
→ Collaboration between ML engineers and developers
→ Rapid experimentation with reproducibility
→ Consistent error handling, rate limiting, and logging
→ Clear separation of configuration (YAML) and logic (code)
Key Components That Drive Success
It's not just about folder layout, it's how components interact and scale together:
→ Centralized configuration using YAML files
→ A dedicated prompt engineering module with templates and few-shot examples
→ Properly sandboxed model clients with standardized interfaces
→ Utilities for caching, observability, and structured logging
→ Modular handlers for managing API calls and workflows
This setup can save teams countless hours in debugging, onboarding, and scaling real-world GenAI systems, whether you're building RAG pipelines, fine-tuning models, or developing agent-based architectures.
→ What's your go-to project structure when working with LLMs or Generative AI systems? Let's share ideas and learn from each other.
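A minimal sketch of what the dedicated prompt-engineering module might look like, assuming a plain-Python design (the `PromptTemplate` name and example fields are illustrative, not a specific library):

```python
from dataclasses import dataclass, field
from string import Template

@dataclass
class PromptTemplate:
    """A template plus optional few-shot examples, kept separate from model-client code."""
    template: str
    few_shot: list[dict] = field(default_factory=list)

    def render(self, **vars) -> str:
        # Prepend few-shot Q/A pairs, then fill the template placeholders.
        shots = "\n".join(f"Q: {ex['q']}\nA: {ex['a']}" for ex in self.few_shot)
        body = Template(self.template).substitute(**vars)
        return f"{shots}\n{body}" if shots else body

# Templates live in one module; handlers and model clients only call render().
summarize = PromptTemplate(
    template="Summarize the following text:\n$text",
    few_shot=[{"q": "Long article...", "a": "Short summary."}],
)
print(summarize.render(text="LLM project structure matters."))
```

In a fuller setup, the template strings and few-shot examples would be loaded from the YAML config layer rather than hard-coded, keeping prompt iteration out of the code path.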
Best Practices In Technology
Explore top LinkedIn content from expert professionals.
-
Maybe you can WRITE SQL, but are you writing ✨GOOD SQL✨? SQL is more than just writing a query without errors… Here are 10 query optimization tips:
1. Avoid SELECT * and instead list desired columns
2. Use INNER JOINs over LEFT JOINs when applicable
3. Use WHERE and LIMIT to filter rows
4. Filter as much as possible as early as possible (consider the order of execution)
5. Avoid ORDER BY (especially in subqueries and CTEs)
6. Avoid using DISTINCT unless necessary (especially when it's already implied, as in GROUP BY & UNION)
7. Use CTEs when you'll have to refer to a table/output multiple times
8. Avoid using wildcards at the beginning of a string ('%jess%' vs. 'jess%')
9. Use EXISTS instead of COUNT and IN
10. Avoid complex logic
Obviously you can't ALWAYS avoid these, and they each have their use cases, but these are good things to think about when optimizing your queries.
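Tip 9 in practice, as a small sqlite3 sketch (the table names are made up for illustration): both queries return the same rows, but the EXISTS form lets the engine stop scanning at the first matching order per customer instead of materializing the whole subquery:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ana'), (2, 'Ben'), (3, 'Kim');
    INSERT INTO orders VALUES (10, 1), (11, 1), (12, 3);
""")

# IN evaluates the full subquery result; EXISTS can short-circuit per row.
with_in = conn.execute(
    "SELECT name FROM customers WHERE id IN (SELECT customer_id FROM orders)"
).fetchall()
with_exists = conn.execute(
    """SELECT name FROM customers c
       WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id)"""
).fetchall()

assert with_in == with_exists  # same result set, often a cheaper plan
print(with_exists)  # [('Ana',), ('Kim',)]
```

Whether EXISTS actually wins depends on the engine and indexes, so check the query plan (e.g. EXPLAIN) rather than assuming.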
-
I've had a lot of people reach out recently asking for tips and guidance on AI governance 🤝
That tells me one thing: many organisations are still at a very early stage, trying to work out what AI governance actually looks like in practice.
Over the past year, I've shared a number of playbooks on AI, law, and governance. The feedback has been overwhelmingly positive, and I'm truly humbled by it 💬 It confirmed that practical, grounded guidance is what people are looking for.
So I put together a Corporate AI Governance Playbook 📘 This one goes deeper and focuses on the questions I'm being asked most often. It covers:
• Why AI governance fails in real organisations
• What AI governance actually means at corporate and board level
• A practical operating model across board, executive, legal, business, and IT 🧭
• The AI risk landscape boards actually care about, beyond bias alone
• Accountability, approval, and escalation, where most governance breaks down ⚠️
• Third-party and vendor AI risk, and why accountability cannot be outsourced
• What regulators and boards will expect next
• Practical next steps that work inside existing governance structures
It's not meant to be technical or theoretical; in my opinion, that would not be helpful. It is meant to be feasible and useful in reality. 💡
It's written for GCs, risk and compliance leaders, and senior management responsible for governing AI in real organisations. And I know many are struggling in this area.
If this is relevant to you:
👉 Make sure we're connected on LinkedIn
👉 Comment AI GOVERNANCE below and I'll send it to you 📩
Once you've read it, let me know what stage your organisation is at.
-
We've just released the definitive guide to the AI-driven prior auth space.
Prior auth is one of those areas that's both ripe for AI and deeply frustrating for everyone involved:
• The hardest parts of the process are exactly what AI is good at
• If done well, it can drive real ROI for health systems and providers
• And yet, it remains painful, slow, and opaque
The reality is that prior auth is incredibly complex and fragmented. Requirements vary dramatically by payer, plan, procedure, state, and site of service. Submission rules are often difficult to even find, let alone interpret. And once a request is submitted, turnaround times are long while visibility into status is limited at best.
It's no surprise then that dozens of vendors have jumped in to try to fix it. The problem? It's become almost impossible to make sense of the landscape. Who does what? Which pain points do they really address? How do they actually solve the issues?
That's what this guide is for. We explain how the prior auth process works across both the medical and pharmacy benefit. We highlight where AI is being applied today. And we map the vendor landscape so you can see how the pieces fit together.
This is the resource I've wanted for a long time to understand the space. Link is in the comments, and we'd love to hear your thoughts.
Special shoutout to Colin DuRant for his incredible work putting this together 🙌
-
💎 Accessibility For Designers Checklist (PDF: https://lnkd.in/e9Z2G2kF), a practical set of cards on WCAG accessibility guidelines, covering accessible color, typography, animations, media, layout and development, to kick off accessibility conversations early on. Kindly put together by Geri Reid.
WCAG for Designers Checklist, by Geri Reid
Article: https://lnkd.in/ef8-Yy9E
PDF: https://lnkd.in/e9Z2G2kF
WCAG 2.2 Guidelines: https://lnkd.in/eYmzrNh7
Accessibility isn't about compliance. It's not about ticking off checkboxes. And it's not about plugging in accessibility overlays or AI engines either. It's about *designing* with a wide range of people in mind, from the very start, independent of their skills and preferences.
In my experience, the most impactful way to embed accessibility in your work is to bring a handful of people with different needs into the design process and usability testing early. It's making these test sessions accessible to the entire team, and showing the real impact of design and code on real people using a real product.
Teams usually don't get time to work on features that don't have a clear business case. But no manager really wants to be seen publicly ignoring their prospective customers. Visualize accessibility to everyone on the team and try to make an argument about potential reach and potential income.
Don't ask for big commitments: embed accessibility in your work by default. Account for accessibility needs in your estimates. Create accessibility tickets and flag accessibility issues. Don't mistake smiling and nodding for support; establish timelines, roles, specifics, objectives.
And most importantly: measure the impact of your work by repeatedly conducting accessibility testing with real people. Build a strong before/after case to show the change that the team has enabled and contributed to, and celebrate small and big accessibility wins.
It might not sound like much, but it can start changing the culture faster than you think.
Useful resources:
Giving A Damn About Accessibility, by Sheri Byrne-Haber (disabled): https://lnkd.in/eCeFutuJ
Accessibility For Designers: Where Do I Start?, by Stéphanie Walter: https://lnkd.in/ecG5qASY
Web Accessibility In Plain Language (Free Book), by Charlie Triplett: https://lnkd.in/e2AMAwyt
Building Accessibility Research Practices, by Maya Alvarado: https://lnkd.in/eq_3zSPJ
How To Build A Strong Case For Accessibility:
↳ https://lnkd.in/ehGivAdY, by 🦞 Todd Libby
↳ https://lnkd.in/eC4jehMX, by Yichan Wang
#ux #accessibility
-
10 Golden Rules for Writing Production-Ready SQL
1. Never use SELECT * in production. You're pulling unnecessary columns, increasing I/O, and risking silent breakage when schemas change.
2. Always qualify columns with table aliases. When queries grow, ambiguity becomes bugs. Be explicit.
3. Prefer CTEs over deeply nested subqueries. Readable queries are maintainable queries. Future you will thank you.
4. Validate row counts after major transformations. If your input has 10M rows and output has 3M, do you know why?
5. Be intentional with JOIN types. INNER, LEFT, RIGHT: each one changes business logic. Don't guess.
6. Avoid unnecessary DISTINCT. DISTINCT often hides duplication problems instead of solving them.
7. Think about data skew before joining large tables. One hot key can destroy performance.
8. Use EXISTS instead of IN when filtering large datasets. Short-circuit evaluation can dramatically improve performance.
9. Comment business logic, not syntax. Don't explain what SUM() does. Explain why revenue excludes refunds.
10. Test with edge cases. Nulls. Duplicates. Missing keys. Empty tables. Production data is never "clean."
One more truth: if your SQL cannot be understood in 2–3 minutes by another engineer, it's not production-ready. Data bugs are business bugs. Treat your queries like code, because they are.
What rule would you add to this list?
Join the group: https://lnkd.in/giE3e9yH
Repost to help others in your network ♻️ Follow for more 👋
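Rules 4 and 10 in miniature: a sqlite3 sketch (toy tables with illustrative names) showing how a single NULL key silently drops a row from an INNER JOIN, which a row-count check catches immediately:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE payments (id INTEGER, user_id INTEGER);
    CREATE TABLE users (id INTEGER, email TEXT);
    -- One payment has a NULL user_id: a classic "dirty data" edge case.
    INSERT INTO payments VALUES (1, 10), (2, NULL), (3, 10);
    INSERT INTO users VALUES (10, 'a@example.com');
""")

input_rows = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
joined_rows = conn.execute(
    "SELECT COUNT(*) FROM payments p JOIN users u ON p.user_id = u.id"
).fetchone()[0]

# Validate row counts after the transformation: do you know why they differ?
print(input_rows, joined_rows)  # 3 2 -- the NULL user_id row vanished
assert joined_rows < input_rows
```

Whether dropping that row is correct is a business question, not a syntax one; the point is that the count check forces you to answer it.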
-
With 30 years of experience in the technology sector, including in engineering & operations, I've developed my own best practices that help organizations build trust with the communities who will use their technology. In this week's special TIME Magazine Davos issue, I outlined a framework based on those hard-won lessons to help ensure AI development is responsible, thoughtful, and benefits humanity, including:
- Embrace Early Collaboration: Bringing outside voices into the development process early helps to create technology that better reflects the breadth and depth of the human experience. Ensuring you partner with, and listen to, experts & local communities can help mitigate potential risks.
- Operationalize Care: The success of AI projects often hinges on how well organizations implement systems that operationalize their commitment to care. For example, at Google DeepMind, we have developed frameworks that embed ethical considerations and safety measures into the fabric of any research and development process, as fundamental building blocks, not bolted-on afterthoughts.
- Build Trust Through Real-World Impact: The antidote to apprehension around AI is to build products that solve real problems, and then highlight those solutions. When people understand how AI is adding clear value to their lives, the conversation can focus both on positive opportunities and managing risk.
I very much appreciated the opportunity to share my thoughts, and you can read more here:
-
Nobody mentions this when they tell you to install Claude Code.
Last month I was advising one of America's well-known banks on how to roll out Claude Code as the new standard for code production across their entire engineering organisation. Not a pilot or a proof of concept: Claude Code Enterprise, org-wide.
The security team had one question before anything else moved forward: what can this agent actually access and do on a developer's machine?
It sounds simple, but it stopped the room. Because most developers and most engineering teams have never asked it. They install the tool, they run it, they ship faster. Nobody reads the permissions. Nobody maps what the agent can reach. Nobody thinks about what happens when an AI agent with write access runs inside the same environment as production credentials, internal configs, and sensitive codebases. A bank does not have that luxury.
So here is what the security team required before a single install was approved:
1. Separate local user account. Claude Code runs in its own sandboxed environment. Not your main user. Not your admin account. Its own contained space with no access to the rest of your machine.
2. Codebase set to read-only. The agent reads your code; it does not write to your file system. Read access only, enforced at the permission level, not assumed.
3. Dedicated GitHub account for the agent. Not your personal GitHub. Not a shared team account. A standalone agent account so every push, every commit, every action is logged, traceable, and attributable to the agent specifically.
These are not advanced security controls. They are basic hygiene that most developers skip entirely because nobody told them to do it.
If one of the largest financial institutions in America made all three mandatory before approving a single developer install, you should be asking the same questions about your own setup. It does not matter whether you are running Claude Code at a bank, a startup, or on your personal machine at home. The agent does not know the difference. It can only access what your permissions allow it to access. Lock that down before you run it again.
#ai #aisecurity #claudecode
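One way to sanity-check point 1 before letting any agent loose: a quick Python sweep of credential files the agent's user account would be able to read. The path list here is illustrative and far from exhaustive; extend it for your own stack:

```python
import os
from pathlib import Path

# Illustrative examples only -- add cloud creds, npm/pypi tokens, kubeconfigs, etc.
SENSITIVE = [".env", ".aws/credentials", ".ssh/id_rsa", ".netrc"]

def readable_secrets(home: Path) -> list[str]:
    """Return the sensitive paths under `home` that the current user can read."""
    return [
        rel for rel in SENSITIVE
        if (home / rel).exists() and os.access(home / rel, os.R_OK)
    ]

if __name__ == "__main__":
    found = readable_secrets(Path.home())
    if found:
        print("This account can read:", found)
```

If running this as the agent's dedicated account prints anything, the sandbox boundary is not doing its job yet.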
-
📁 Terraform Directory Structure – The Right Way! 💡
Are you managing your Terraform projects correctly? A well-structured Terraform directory ensures scalability, reusability, and efficient infrastructure management. Let's dive deep into best practices! 🏗️
1️⃣ Environments – Separate Configs for Dev, Staging & Prod
Managing multiple environments? Here's how to structure them:
📂 Development/ 📂 Staging/ 📂 Production/
Each contains:
✅ main.tf – Defines cloud resources.
✅ variables.tf – Declares variables without values.
✅ outputs.tf – Stores Terraform outputs for dependencies.
✅ terraform.tfvars – Provides values for variables.
🔹 Why? Isolates Dev, Staging, and Production setups. Avoids accidental production changes. Makes configurations modular & reusable.
2️⃣ Modules – Reusable Infrastructure Components
Instead of repeating code, Terraform modules help reuse configurations.
📌 VPC Module – Handles Virtual Private Cloud creation.
📌 EC2 Module – Manages EC2 instances efficiently.
🔹 Why? 🚀 Eliminates duplicate code – define once, use everywhere! 🔄 Ensures consistency across environments. ⚙️ Faster deployment – just call the module!
3️⃣ Scripts – Automate Terraform Workflows
Automation is key in DevOps & IaC. These scripts help:
⚙️ init.sh – Initializes Terraform.
🛑 teardown.sh – Destroys infrastructure to save costs.
🔹 Why? Saves time by automating Terraform operations. Reduces manual errors while setting up infrastructure.
4️⃣ Core Terraform Files – The Brains of Your Infrastructure
These files are the foundation of your Terraform project:
✅ provider.tf – Specifies the cloud provider (AWS, Azure, GCP).
✅ backend.tf – Defines state management (e.g., AWS S3, Terraform Cloud).
🔹 Why? Keeps Terraform state secure instead of in local files. Prevents conflicts in team environments.
🔍 Why Does This Directory Structure Matter?
✅ Organized, modular, and scalable Terraform projects.
✅ Prevents accidental changes in production.
✅ Reusable infrastructure with Terraform modules.
✅ Automated setup & cleanup with scripts.
💬 How do you structure your Terraform projects? Let's discuss in the comments! 👇
📌 Follow for more DevOps insights! 🚀
#Terraform #DevOps #CloudComputing #InfrastructureAsCode #AWS #Azure #GCP #HashiCorp #Automation #CloudEngineering #TerraformBestPractices
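The whole skeleton above can be scaffolded in a few lines. A hedged Python sketch, with directory and file names taken from the post (rename to match your own conventions):

```python
from pathlib import Path

ENVIRONMENTS = ["development", "staging", "production"]
ENV_FILES = ["main.tf", "variables.tf", "outputs.tf", "terraform.tfvars"]
MODULES = {"vpc": ["main.tf", "variables.tf", "outputs.tf"],
           "ec2": ["main.tf", "variables.tf", "outputs.tf"]}

def scaffold(root: Path) -> None:
    """Create the environments/modules/scripts skeleton described above."""
    for env in ENVIRONMENTS:
        env_dir = root / "environments" / env
        env_dir.mkdir(parents=True, exist_ok=True)
        for f in ENV_FILES:
            (env_dir / f).touch()
    for mod, files in MODULES.items():
        mod_dir = root / "modules" / mod
        mod_dir.mkdir(parents=True, exist_ok=True)
        for f in files:
            (mod_dir / f).touch()
    (root / "scripts").mkdir(parents=True, exist_ok=True)
    for f in ["init.sh", "teardown.sh"]:
        (root / "scripts" / f).touch()
    for f in ["provider.tf", "backend.tf"]:  # core files at the root
        (root / f).touch()

scaffold(Path("terraform-demo"))
```

The empty .tf files are placeholders; the value is that every environment and module starts from the same predictable shape.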
-
Are you using Claude to autocomplete or to think in parallel with you? Many developers treat it like a faster tab key. The real power shows up when you use it as a second brain running alongside yours. Here's what that looks like in practice.
1. Run Work in Parallel. Spin up multiple sessions and worktrees so planning, refactoring, reviewing, and debugging happen simultaneously instead of sequentially.
2. Start Complex Tasks in Plan Mode. Outline architecture and approach before writing code, so execution becomes clean and intentional instead of reactive.
3. Maintain a Living CLAUDE.md. Document mistakes, patterns, and guardrails so Claude improves with your workflow and reduces repeated errors over time.
4. Turn Repetition into Skills. Automate recurring tasks with reusable commands and structured prompts so you build once and reuse everywhere.
5. Delegate Debugging. Provide logs, failing tests, or CI output and let Claude iterate toward solutions while you focus on higher-level thinking.
6. Challenge the Output. Ask for edge cases, diff comparisons, cleaner abstractions, and alternative designs to push beyond "good enough."
7. Optimize Your Environment. Set up your terminal, tabs, and context structure so you reduce friction and maximize visibility while working.
8. Use Subagents for Heavy Lifting. Offload complex or exploratory tasks to parallel agents so your main context stays clean and focused.
9. Query Data Directly. Use Claude to interact with databases, metrics, and analytics tools so you reason about data instead of manually extracting it.
10. Turn It Into a Learning Engine. Ask for diagrams, system explanations, and critique so every project improves your mental models.
The difference is simple: autocomplete makes you faster; parallel thinking makes you better. The question is how you're choosing to use it.
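A living CLAUDE.md from point 3 might look something like this. The contents are illustrative, invented for this sketch; the pattern is what matters: conventions, learned pitfalls, and reusable tasks, updated whenever the agent repeats a mistake:

```markdown
# CLAUDE.md

## Project conventions
- Python 3.12, ruff for linting, pytest for tests
- Run the test suite before proposing any commit

## Known pitfalls (learned the hard way)
- Do not edit generated files under `src/gen/`
- The staging database is read-only; never suggest migrations against it

## Repeated tasks
- "release-notes": summarize merged PRs since the last tag
```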