Technology Strategy Consulting

Explore top LinkedIn content from expert professionals.

  • Raj Goodman Anand

    Helping organizations build AI operating systems | Founder, AI-First Mindset®

    23,722 followers

    Too many AI strategies are built around the technology instead of the business challenges they should solve. The real value of AI comes when it is directly tied to your goals. I have arrived at seven lessons on how to align your AI strategy directly with your business goals:

    1. Start with the "why," not the "what." Before discussing models or tools, ask what business problem you need to solve. It could be speeding up product development or cutting operational costs. Let that answer be your guide.
    2. Think in terms of business outcomes. Measure AI success by its impact on metrics like revenue growth or employee productivity, not by technical accuracy.
    3. Build a cross-functional team. AI can't live solely in the IT department. Include leaders from all relevant departments from day one to ensure the strategy serves the entire business.
    4. Prioritize quick wins to build momentum. Identify a few small, high-impact projects that can deliver results quickly. This builds organizational confidence and makes people ready to take on larger initiatives.
    5. Invest in data foundations. The best AI strategy will fail without clean, well-governed data. A disciplined approach to data quality is non-negotiable.
    6. Focus on change management. Technology is the easy part. Prepare your people for new workflows and equip them with the skills to work alongside AI effectively.
    7. Create a feedback loop. An AI strategy is not a one-time plan. Continuously gather feedback from users and analyze performance data to adapt and refine your approach.

    The goal is to make AI a part of how you achieve your objectives, not a separate project. #AIStrategy #BusinessGoals #DigitalTransformation #Leadership #ArtificialIntelligence

  • Abhishek Mishra

    ERP & AI Strategist for Healthcare & Retail | Co-Founder @ 3E MindCurve | Microsoft D365 Specialist | Strategic Advisor @ ProactAI

    11,481 followers

    A few years ago, I met an ERP consultant on a hospital project. He proudly told me: 👉 "I'm the Finance module guy. I know every configuration inside out."

    And he was right. Technically brilliant. But in every steering committee meeting, the CFO looked past him.

    Why? Because he only spoke in system language, not in the CFO's language.

    Fast forward to another project. This time, the lead consultant didn't just talk about ERP screens.
    ✅ He asked the CFO about revenue leakage.
    ✅ He discussed procurement controls with the supply chain head.
    ✅ He framed every feature in terms of compliance, efficiency, and profitability.

    Same ERP. Same modules. But one consultant was seen as a resource, the other as a strategic partner.

    💡 The lesson: Clients don't value ERP expertise in isolation. They value how it connects to business outcomes.

    If you want to stand out in ERP consulting, stop thinking like a module operator. Start thinking like a business owner. #ERP #Consulting #DigitalTransformation #HospitalERP #Dynamics365 #MindCurve

  • Are you proactively managing your Source-to-Pay technology costs? If not, why let savings from smart Procurement slip away due to outdated technology or suboptimal use? S2P technology plays a central role in cost management, yet many companies lack a strategic approach to continuously assess and optimise their tech stack. Companies can adopt Bain & Co's "Reduce, Replace, and Rethink" model to continuously evaluate their technology infrastructure and costs, ensuring a more optimised and sustainable cost profile. Here is the model in action for Source-to-Pay technology cost optimisation (a quick numeric sketch of the Reduce lever follows after this post):

    ▪️ Reduce to recover 10-20% of costs through short-term actions such as:
    - adjusting licenses to match actual usage and adoption patterns
    - discontinuing features or functionalities that add little value
    - switching off modules where business capabilities have not yet caught up
    Avoid over-licensing by matching user access to actual needs, ensuring modules align with Procurement's readiness.

    ▪️ Replace to yield 20-30% savings by:
    - transitioning to cost-optimal, flexible solutions and getting out of lock-ins
    - switching subscription models when premium offerings are unnecessary
    - consolidating overlapping tools that offer similar features
    For example, merge multiple eSourcing tools into a primary platform and adopt tender-based pricing for niche auction needs. This aligns the cost profile of your Source-to-Pay technology with actual needs.

    ▪️ Rethink to realise up to 40% cost optimisation by:
    - reimagining the architecture with a modular, composable design
    - automating and orchestrating processes and integrating new digital tools
    - re-evaluating the mix of best-of-breed solutions vs integrated suites
    A new Procurement strategy requires a fresh look at the S2P tech stack to ensure it adapts and supports growth cost-effectively, while offering flexibility through additional digital levers like AI and automation.

    Optimising S2P technology is a continuous journey, not a one-time effort, especially with contractual commitments, sunk costs, and change management challenges. Rather than following IT preferences and standards, it's about keeping technology fresh and aligned with business needs as they evolve.

    ❓ How do you manage your S2P technology to adapt to changing business needs while maintaining cost efficiency?
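
    To make the Reduce lever concrete, here is a back-of-the-envelope sketch in Python of right-sizing licenses against observed usage. All module names, seat counts, and prices below are invented for illustration; they are not figures from the post.

    ```python
    # Illustrative only: compare licensed seats to seats actually active recently.
    licenses = {
        # module: (seats_licensed, seats_active_last_90d, cost_per_seat_per_year)
        "sourcing":  (400, 310, 900),
        "contracts": (250, 120, 700),
        "invoicing": (600, 585, 500),
    }

    total_cost = sum(seats * cost for seats, _, cost in licenses.values())
    savings = sum((seats - active) * cost for seats, active, cost in licenses.values())

    print(f"Annual spend: ${total_cost:,}")
    print(f"Recoverable by matching seats to usage: ${savings:,} "
          f"({savings / total_cost:.0%})")
    ```

    On these made-up numbers, the recoverable spend lands around 21%, in the range the post cites for short-term actions.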

  • Gurumoorthy Raghupathy

    Expert in Solutions and Services Delivery | SME in Architecture, DevOps, SRE, Service Engineering | 5X AWS, GCP Certs | Mentor

    14,141 followers

    🚀🚀 Why Load Testing & APM Should Be Non-Negotiable in Your SDLC 🚀🚀

    In today's digital landscape, delivering high-performing applications isn't just nice to have; it's mission-critical. Yet many teams still treat performance as an afterthought. Here's why integrating Load Testing and Application Performance Management (APM) throughout your SDLC is essential:

    1. The Performance Reality Check
    Studies show that 53% of users abandon a mobile site if it takes longer than 3 seconds to load. Even a 100ms delay can hurt conversion rates by 7%. The cost of poor performance? Amazon calculated that every 100ms of latency costs them 1% in sales.

    2. Why Early Integration Matters
    2.1 Load Testing in SDLC:
    ✅ Identifies bottlenecks before production deployment
    ✅ Validates system capacity under expected user loads
    ✅ Prevents costly post-release performance fixes
    ✅ Ensures scalability requirements are met
    2.2 APM Throughout Development:
    ✅ Real-time visibility into application behavior
    ✅ Proactive issue detection and resolution
    ✅ Performance baseline establishment
    ✅ Continuous optimization opportunities

    3. Grafana: The Game Changer for Performance Monitoring
    Grafana has revolutionized how we visualize and monitor application performance with its:
    ✅ Unified Dashboards – correlate metrics from multiple data sources
    ✅ Real-time Alerting – get notified before users experience issues
    ✅ Historical Analysis – track performance trends over time
    ✅ Custom Visualizations – tailor views for different stakeholders
    ✅ Cost-Effectiveness – open source with powerful enterprise features

    4. Key Metrics to Track:
    ✅ Response times and throughput
    ✅ Error rates and success ratios
    ✅ Resource utilization (CPU, memory, disk)
    ✅ Database query performance
    ✅ User experience metrics

    5. The Bottom Line
    Performance isn't just a technical concern; it's a business imperative. Teams that embed load testing and APM into their SDLC deliver more reliable, scalable applications that drive better user experiences and business outcomes. Your SDLC needs to include APM and load testing for an optimal customer-satisfaction-to-cost ratio.

    What's your experience with performance testing in your SDLC? Share your wins and lessons learned below! 👇 #SoftwareDevelopment #LoadTesting #APM #Grafana #DevOps #PerformanceTesting #SDLC #Monitoring #TechLeadership
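
    On the load-testing side, a minimal sketch with Locust (an open-source Python load-testing tool) shows how little it takes to get started; the host, endpoints, task weights, and payload are placeholders for your own service:

    ```python
    # Minimal Locust scenario; run with: locust -f loadtest.py
    from locust import HttpUser, between, task

    class ApiUser(HttpUser):
        host = "http://localhost:8000"  # placeholder target service
        wait_time = between(1, 3)       # each simulated user pauses 1-3 s between tasks

        @task(3)
        def browse(self):
            # Weighted 3x relative to checkout: the common read path.
            self.client.get("/products")

        @task(1)
        def checkout(self):
            self.client.post("/checkout", json={"sku": "demo", "qty": 1})
    ```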

  • Matt Wood

    CTIO at PwC

    79,751 followers

    EVAL field note (2 of 3): Finding the benchmarks that matter for your own use cases is one of the biggest contributors to AI success. Let's dive in.

    AI adoption hinges on two foundational pillars: quality and trust. Like the dual nature of a superhero, quality and trust play distinct but interconnected roles in ensuring the success of AI systems. This duality underscores the importance of rigorous evaluation. Benchmarks, whether automated or human-centric, are the tools that allow us to measure and enhance quality while systematically building trust. By identifying the benchmarks that matter for your specific use case, you can ensure your AI system not only performs at its peak but also inspires confidence in its users.

    🦸‍♂️ Quality is the superpower, think Superman: able to deliver remarkable feats like reasoning and understanding across modalities to deliver innovative capabilities. Evaluating quality involves tools like controllability frameworks to ensure predictable behavior, performance metrics to set clear expectations, and methods like automated benchmarks and human evaluations to measure capabilities. Techniques such as red-teaming further stress-test the system to identify blind spots.

    👓 But trust is the alter ego, Clark Kent: the steady, dependable force that puts the superpower into the right place at the right time, and ensures these powers are used wisely and responsibly. Building trust requires measures that ensure systems are helpful (meeting user needs), harmless (avoiding unintended harm), and fair (mitigating bias). Transparency through explainability and robust verification processes further solidifies user confidence by revealing where a system excels, and where it isn't ready yet.

    For AI systems, one cannot thrive without the other. A system with exceptional quality but no trust risks indifference or rejection: a collective "shrug" from your users. Conversely, all the trust in the world without quality reduces the potential to deliver real value.

    To ensure success, prioritize benchmarks that align with your use case, continuously measure both quality and trust, and adapt your evaluation as your system evolves. You can get started today: map use case requirements to benchmark types, identify critical metrics (accuracy, latency, bias), set minimum performance thresholds (aka exit criteria), and choose complementary benchmarks (for better coverage of failure modes, and to avoid over-fitting to a single number). By doing so, you can build AI systems that not only perform but also earn the trust of their users, unlocking long-term value.
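
    The "minimum performance thresholds (aka exit criteria)" step is easy to make mechanical. A minimal sketch, with hypothetical metric names, bounds, and results standing in for whatever your own benchmark runs produce:

    ```python
    # Exit-criteria check over benchmark results; all names and numbers are illustrative.
    thresholds = {
        "accuracy":      ("min", 0.90),  # task accuracy on a domain benchmark
        "latency_p95_s": ("max", 2.00),  # 95th-percentile latency, seconds
        "bias_gap":      ("max", 0.05),  # allowed performance gap across user groups
    }
    results = {"accuracy": 0.93, "latency_p95_s": 1.4, "bias_gap": 0.07}

    def exit_criteria(results, thresholds):
        """Return a list of violated criteria; an empty list means the gate passes."""
        failures = []
        for metric, (kind, bound) in thresholds.items():
            value = results[metric]
            ok = value >= bound if kind == "min" else value <= bound
            if not ok:
                failures.append(f"{metric}={value} violates {kind} bound {bound}")
        return failures

    for failure in exit_criteria(results, thresholds):
        print("BLOCKED:", failure)  # here: bias_gap=0.07 violates max bound 0.05
    ```

    Pairing several complementary thresholds like this is one way to avoid over-fitting a release decision to a single number.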

  • Brij kishore Pandey

    AI Architect & Engineer | AI Strategist

    720,803 followers

    As hybrid-cloud adoption accelerates, mastering the right skills and tools is essential for building resilient, scalable, and efficient applications. Here's a Cloud-Native Roadmap that breaks down the critical domains and technologies you need to know:

    🔴 1. Linux Fundamentals
    Linux is the backbone of cloud-native systems. Master terminal commands, bash scripting, and distributions like Ubuntu, Red Hat, and Alpine to navigate the cloud world with confidence.

    🟢 2. Networking Essentials
    Cloud connectivity depends on protocols like HTTP, SSL, TCP/IP, and DNS. Tools like Wireshark help monitor and secure network traffic. Learn SSH, VPNs, and firewalls to strengthen cloud security.

    🔵 3. Cloud Services
    Cloud is non-negotiable! Whether AWS, Azure, or Google Cloud, understand key models like SaaS, PaaS, and IaaS and how to deploy, scale, and manage workloads effectively.

    🟣 4. Security
    Security is a must-have in cloud-native environments. Master IAM (Identity & Access Management), Open Policy Agent, Prisma, and secrets management (Vault, AWS KMS) to protect your applications.

    🟡 5. Containers & Orchestration
    Containers revolutionized application deployment! Get hands-on with:
    ⚙️ Docker – build lightweight, portable applications
    ⚙️ Kubernetes – automate deployment & scaling
    ⚙️ Istio & service mesh – secure and manage microservices

    🟠 6. Infrastructure as Code (IaC)
    Automate infrastructure with Terraform, Pulumi, CloudFormation, and configuration management tools like Ansible, Chef, and Puppet. This ensures scalability and consistency across environments (see the Pulumi sketch after this post).

    🟢 7. Observability
    Monitor, troubleshoot, and optimize cloud applications with:
    📌 Prometheus & Grafana – metrics & visualization
    📌 Elastic Stack (ELK) – log aggregation
    📌 OpenTelemetry – distributed tracing

    🔵 8. CI/CD – Continuous Integration & Delivery
    Modern DevOps is all about automation! Learn:
    ✅ GitHub Actions, GitLab CI/CD, Jenkins – automate testing & deployment
    ✅ ArgoCD & Flux (GitOps) – declarative Kubernetes deployments

    Cloud-Native is Evolving – Stay Ahead! This roadmap lays the foundation for cloud-native success, but the landscape is constantly evolving. What's your go-to tool or must-know cloud-native concept? Share in the comments! 👇
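
    As a small taste of the IaC layer, here is a minimal Pulumi program in Python (one of the tools the post names). It assumes the pulumi and pulumi-aws packages plus configured AWS credentials, and it runs under `pulumi up` rather than as a plain script; the resource name is illustrative:

    ```python
    # Declarative infrastructure: Pulumi diffs desired state against real state.
    import pulumi
    from pulumi_aws import s3

    # Declare an S3 bucket; Pulumi creates, updates, or deletes it to match the code.
    bucket = s3.Bucket("app-artifacts")

    # Export the bucket name so other stacks or scripts can look it up.
    pulumi.export("bucket_name", bucket.id)
    ```

    The same declare-and-reconcile idea carries over to Terraform and CloudFormation; only the syntax and state handling differ.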

  • Dirk Hartmann

    Head of Simcenter Technology Innovation | Full Professor TU Darmstadt | Siemens Distinguished Key Expert | Siemens Top Innovator and Inventor of the Year

    9,880 followers

    🚀 Benchmarking ML-Based PDE Solvers: Best Practices

    Last week, I shared insights from the study "Weak Baselines and Reporting Biases Lead to Overoptimism in Machine Learning for Fluid-Related Partial Differential Equations" by Nick McGreivy and Ammar Hakim (link in comments). The authors highlighted a crucial issue: many #ML-based #solvers aren't benchmarked against appropriate baselines, leading to misleading conclusions. ⚠️

    So, what's the right approach? 🤔
    The key lies in comparing the #Cost vs. #Accuracy of #algorithms, reflecting the inherent trade-off between efficiency and precision in numerical methods. While quick, low-accuracy approximations are common, highly accurate results typically require more computational time. ⏱️

    📊 Accuracy-Speed Pareto Curves: A Benchmarking Standard
    From my experience, the most effective way to benchmark is by using Pareto curves of accuracy versus computational time (see the figure below). These curves offer a clear, visual comparison, showing how different methods perform under the same hardware conditions. They also mirror real-world engineering decisions, where finding a balance between speed and accuracy is critical. ⚖️ An example of this can be seen in Aditya Phopale's master's thesis, where the performance of a #NeuralNetwork-based solver was compared against the state-of-the-art general-purpose #Fenics solver.

    🔍 Choosing the Right Baseline Solver
    Nick McGreivy and Ammar Hakim also emphasize the importance of selecting an appropriate baseline. While Fenics might not be the top choice for computational efficiency on a specific problem (e.g., vs. spectral solvers), it is still highly relevant from an #engineering perspective. Both the investigated solver and Fenics share a similar philosophy: they are general-purpose, Python-based solvers built on equation formulations. 🧩 Additionally, unlike #FiniteElement solvers like Fenics, the investigated neural network solvers don't require complex discretization. Thus, Fenics serves as a suitable baseline for practical engineering applications, despite its "limitations" in a more theoretical context.

    💡 What Are Your Best Practices?
    I'm curious to hear from others: what best practices do you follow when benchmarking ML-based PDE solvers? Let's discuss! 👇
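
    Computing the accuracy-speed Pareto front from benchmark runs is straightforward. A minimal sketch, with made-up solver configurations and numbers (not results from the thesis or the paper):

    ```python
    # (wall-clock seconds, relative error) per solver configuration; lower is better on both.
    results = {
        "fenics_coarse": (1.2, 1e-2),
        "fenics_fine":   (30.0, 1e-4),
        "nn_solver":     (0.4, 5e-3),
    }

    def pareto_front(points):
        """Keep configurations not dominated by any other on (cost, error)."""
        front = []
        for name, (cost, err) in points.items():
            dominated = any(
                c <= cost and e <= err and (c < cost or e < err)
                for other, (c, e) in points.items() if other != name
            )
            if not dominated:
                front.append(name)
        return front

    print(sorted(pareto_front(results)))  # -> ['fenics_fine', 'nn_solver']
    ```

    Plotting each method's front on shared hardware yields exactly the cost-vs-accuracy comparison the post argues for.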

  • Vin Vashishta

    AI Strategist | Monetizing Data & AI For The Global 2K Since 2012 | 3X Founder | Best-Selling Author

    209,664 followers

    I built the data and AI strategies for some of the world's most successful businesses. One word helped V Squared beat our Big Consulting competitors to land those clients. Can you guess what it is? Actionable.

    Strategy must clear the lane for execution and empower decisions. It must serve people who get the job done and deliver results. Most strategies, especially data and AI strategies, create bureaucracy and barriers that slow execution. They paralyze the business, waiting for the perfect conditions and easy opportunities to materialize.

    CEOs don't want another slide deck and a confident-sounding presentation about "The AI Opportunity." They want a pragmatic action plan detailing strategy implementation, execution, delivery, and ROI. They need a framework for budgeting based on multiple versions of the AI product roadmap that quantifies returns at different spending levels. They need frameworks to decide which risks to take.

    Business units don't want another lecture about AI literacy. They need a transformation roadmap, a structured learning path, and training resources. They need to know who to bring opportunities to, how to make buying decisions, and when to kick off AI initiatives.

    Most of all, data and AI strategy must address the messy reality of markets, customers, technical debt, resource constraints, imperfect conditions, and business necessity. Technical strategy is only valuable if it informs decision-making and optimizes actions to achieve the business's goals.

  • Priyanka Vergadia

    #1 Visual Storyteller in Tech | VP Level Product & GTM | TED Speaker | Enterprise AI Adoption at Scale

    117,298 followers

    If you're leading AI initiatives, here is a strategic cheat sheet for moving from "cool demo" to enterprise value. Think risk, ROI, and scalability. This strategy moves you from "we have a model" to "we have a business asset."

    1. The "Why" Gate (Pre-PoC)
    • Don't build just because you can. Define the business problem first.
    • Success: Is the potential value > 10x the estimated cost?
    • Decision: If the problem can be solved with regex or SQL, kill the AI project now.

    2. The Proof of Concept (PoC)
    • Goal: Prove feasibility, not scalability.
    • Timebox: 4-6 weeks max.
    • Team: 1-2 AI engineers + 1 domain expert (a data scientist alone is not enough).
    • Metric: Technical feasibility (e.g., "Can the model actually predict X with >80% accuracy on historical data?")

    3. The MVP Transition (The Valley of Death)
    • Shift from "notebook" to "system."
    • Infrastructure: Move off local GPUs to a dev cloud environment. Containerize.
    • Data pipeline: Replace manual CSV dumps with automated data ingestion.
    • Decision: Does the model work on new, unseen data? If accuracy drops >10%, halt and investigate data drift.

    4. Risk & Governance (The "Lawyer" Phase)
    • Compliance is not an afterthought.
    • Guardrails: Implement checks to prevent hallucination or toxic output (e.g., NeMo Guardrails, Guidance).
    • Risk decision: What is the cost of a wrong answer? If high (e.g., medical advice), keep a human in the loop.

    5. Production Architecture
    • Scalability & latency: Users won't wait 10 seconds for a token.
    • Serving: Use optimized inference engines (vLLM, TGI, Triton).
    • Cost control: Implement token limits and caching. "Pay-as-you-go" can bankrupt you overnight if an API loop goes rogue.

    6. Evaluation
    • Automated eval: Use "LLM-as-a-Judge" to score outputs against a golden dataset (a sketch follows below).
    • Feedback loops: Build a mechanism for users to thumbs-up/down outcomes. Gold for fine-tuning later.

    7. Operations (LLMOps)
    • Day 2 is harder than Day 1.
    • Observability: Trace chains and monitor latency/cost per request (LangSmith, Arize).
    • Retraining: Models rot. Define when to retrain (e.g., "when accuracy drops below 85%" or "monthly").

    Team evolution:
    • PoC phase: AI engineer + subject matter expert.
    • MVP phase: + data engineer + backend engineer.
    • Production phase: + MLOps engineer + product manager + legal/compliance.

    How to manage AI projects (my advice):
    → Treat AI as a product, not a research project.
    → Fail fast: A failed PoC costs $10k; a failed production rollout costs $1M+.
    → Cost modeling: Estimate inference costs at peak scale before you write a line of production code.

    What decision gates do you use in your AI roadmap? Follow Priyanka for more cloud and AI tips and tools. #ai #aiforbusiness #aileadership
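
    For step 6, here is a minimal "LLM-as-a-Judge" sketch. Everything in it is a placeholder: call_llm stands in for whichever model client you use, generate is the system under test, and the golden set is whatever (question, reference) pairs your domain experts curate.

    ```python
    # Score candidate answers against a golden dataset with a judge model.
    JUDGE_TEMPLATE = (
        "Score the CANDIDATE against the REFERENCE for factual agreement, "
        "1 (contradicts) to 5 (equivalent). Reply with the digit only.\n"
        "QUESTION: {q}\nREFERENCE: {ref}\nCANDIDATE: {cand}\n"
    )

    def call_llm(prompt: str) -> str:
        # Placeholder: wire up your model provider's chat-completion call here.
        raise NotImplementedError

    def judge(golden_set, generate):
        """Mean judge score (1-5) of `generate` over (question, reference) pairs."""
        scores = []
        for q, ref in golden_set:
            cand = generate(q)
            reply = call_llm(JUDGE_TEMPLATE.format(q=q, ref=ref, cand=cand))
            scores.append(int(reply.strip()[0]))
        return sum(scores) / len(scores)

    # Promotion gate, in the spirit of the retraining trigger in section 7:
    # if judge(golden_set, generate) < 4.0: raise SystemExit("eval gate failed")
    ```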

  • Shristi Katyayani

    Senior Software Engineer | Avalara | Prev. VMware

    9,253 followers

    In today's always-on world, downtime isn't just an inconvenience; it's a liability. One missed alert, one overlooked spike, and suddenly your users are staring at error pages and your credibility is on the line. System reliability is the foundation of trust and business continuity, and it starts with proactive monitoring and smart alerting.

    📊 Key Monitoring Metrics:

    💻 Infrastructure:
    📌 CPU, memory, disk usage: Think of these as your system's vital signs. If they're maxing out, trouble is likely around the corner.
    📌 Network traffic and errors: Sudden spikes or drops could mean a misbehaving service or something more malicious.

    🌐 Application:
    📌 Request/response counts: Gauge system load and user engagement.
    📌 Latency (P50, P95, P99): These help you understand not just the average experience, but the worst ones too.
    📌 Error rates: Your first hint that something in the code, config, or connection just broke.
    📌 Queue length and lag: Delayed processing? Might be a jam in the pipeline.

    📦 Service (Microservices or APIs):
    📌 Inter-service call latency: Detect bottlenecks between services.
    📌 Retry/failure counts: Spot instability in downstream service interactions.
    📌 Circuit breaker state: Watch for degraded service states due to repeated failures.

    📂 Database:
    📌 Query latency: Identify slow queries that impact performance.
    📌 Connection pool usage: Monitor database connection limits and contention.
    📌 Cache hit/miss ratio: Ensure caching is reducing DB load effectively.
    📌 Slow queries: Flag expensive operations for optimization.

    🔄 Background Job/Queue:
    📌 Job success/failure rates: Failed jobs are often silent killers of user experience.
    📌 Processing latency: Measure how long jobs take to complete.
    📌 Queue length: Watch for backlogs that could impact system performance.

    🔒 Security:
    📌 Unauthorized access attempts: Don't wait until a breach to care about this.
    📌 Unusual login activity: Catch compromised credentials early.
    📌 TLS cert expiry: Avoid outages and insecure connections due to expired certificates.

    ✅ Best Practices for Alerts:
    📌 Alert on symptoms, not causes.
    📌 Trigger alerts on significant deviations or trends, not only fixed metric limits.
    📌 Avoid alert flapping with buffers and stability checks to reduce noise.
    📌 Classify alerts by severity levels: not everything is a page. Reserve pages for critical issues; Slack or email can handle the rest.
    📌 Alerts should tell a story: what's broken, where, and what to check next. Include links to dashboards, logs, and deploy history.

    🛠 Tools Used:
    📌 Metrics collection: Prometheus, Datadog, CloudWatch, etc.
    📌 Alerting: PagerDuty, Opsgenie, etc.
    📌 Visualization: Grafana, Kibana, etc.
    📌 Log monitoring: Splunk, Loki, etc.

    #tech #blog #devops #observability #monitoring #alerts
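
    As a concrete starting point for the application metrics above, here is a minimal sketch using prometheus_client, the official Python client for Prometheus; the metric names, buckets, and port are illustrative. P95/P99 then come from PromQL's histogram_quantile over the exported buckets, graphed in Grafana:

    ```python
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
    LATENCY = Histogram(
        "app_request_latency_seconds",
        "Request latency in seconds",
        buckets=(0.05, 0.1, 0.25, 0.5, 1.0, 2.5),  # bucket bounds drive P95/P99 queries
    )

    def handle_request():
        with LATENCY.time():                        # records elapsed time into the histogram
            time.sleep(random.uniform(0.01, 0.2))   # stand-in for real work
        REQUESTS.labels(status="200").inc()

    if __name__ == "__main__":
        start_http_server(8000)                     # serves /metrics for Prometheus to scrape
        while True:
            handle_request()
    ```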
