Software Development

Explore top LinkedIn content from expert professionals.

  • View profile for Molly Johnson-Jones
    Molly Johnson-Jones is an Influencer

    CEO & Co-Founder @ Flexa | Future of Work Speaker & Creator | Employer Brand | DEI | Talent Intelligence

    94,622 followers

    If you're losing brilliant women at the final stages of hiring - this might be why...

    Let me talk you through a recent example where a company had a disproportionately high number of women dropping out at late interview and offer stage for their tech roles. They were offering great salaries. Flexible working. A decent benefits package. So what was going wrong?

    We took a look at the data. Out of 2 billion data points, a few things stood out:

    → Diversity is non-negotiable. Women in tech rank it 31% higher than the average candidate. If they don’t see representation in leadership, they won’t apply
    → Flexible hybrid work wins, because structure matters. Demand for remote-only roles is 11% below average, while core hours and in-office collaboration rank higher
    → Family-friendly policies trump flashy perks. Fertility leave (+41%), job sharing (+33%), and parental leave (+19%) are the real differentiators

    But then we dug deeper, and that's where it got really interesting:

    → Women in data roles showed a higher demand for in-office work; mentorship and access to resources mattered
    → Women in engineering & development wanted mission-driven work and career progression above all else
    → Women in product roles prioritised culture and flexibility more than any other group

    The company checked their employer brand. Their careers page talked about “great culture” and “exciting opportunities.” But it said nothing about what actually mattered to the people they were trying to hire. They weren’t losing candidates because of the salary or the benefits. They were losing them because they didn’t know what their target talent groups actually wanted.

    The companies getting this right aren’t guessing. They’re using data to shape their employer brand, so they attract the right people, with the right message.

    Download our women in tech report to access more of these insights: https://lnkd.in/enYcGpeW

    And tell me: have you ever turned down a job offer for similar reasons?
#WomenInTech #Hiring #EmployerBranding #FutureOfWork #DiversityMatters

  • View profile for Rajya Vardhan Mishra

    Engineering Leader @ Google | Mentored 300+ Software Engineers | Building High-Performance Teams | Tech Speaker | Led $1B+ programs | Cornell University | Lifelong Learner | My Views != Employer’s Views

    114,151 followers

    Dear Software Engineers,

    If your app serves 10 users → a single server and a REST API will do
    If you’re handling 10M requests a day → start thinking load balancers, autoscaling, and rate limits

    If one developer is building features → skip the ceremony, ship and test manually
    If 10 devs are pushing daily → invest in CI/CD, testing layers, and feature flags

    If your downtime just breaks one page → add a banner and move on
    If your downtime kills a business flow → redundancy, health checks, and graceful fallbacks are non-negotiable

    If you're just consuming APIs → learn how to handle 400s and 500s
    If you're building APIs for others → version them, document them, test them, and monitor them

    If your product can tolerate 3s of lag → pick clarity over performance
    If users are waiting on each click → profiling, caching, and edge delivery are part of your job

    If your data fits in RAM → store it in memory, use simple maps
    If your data spans terabytes → indexing, partitioning, and disk I/O patterns start to matter

    If you're solo coding → naming things poorly is just annoying
    If you're on a growing team → naming things poorly is a ticking time bomb

    If you're fixing bugs once a week → logs and console prints might do
    If you're running production → you need structured logs, tracing, alerts, and dashboards

    If your deadlines are tight → write the simplest code that works
    If your code is expected to last → design for readability, testability, and change

    If you work alone → "it works on my machine" might be fine
    If you're in a real team → reproducible builds and shared dev setups are your baseline

    If your app is new → move fast, clean up later
    If your app is in maintenance hell → you now pay interest on every rushed decision

    People think software engineering is just about building things. It’s really about:
    – Knowing when not to build
    – Being okay with deleting good code
    – Balancing tradeoffs without always having all the data

    The best engineers don’t just ship fast. They build systems that are safe to move fast on top of.
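The "graceful fallbacks are non-negotiable" point above can be sketched in a few lines. This is a minimal, illustrative take on a health-checked fallback (the names `flaky_primary` and `cached_fallback`, and all thresholds, are hypothetical), not a production circuit breaker:

```python
import time

class FallbackClient:
    """Route calls to a primary backend; degrade to a fallback when it is unhealthy."""
    def __init__(self, primary, fallback, failure_threshold=3, cooldown=30.0):
        self.primary = primary              # callable that may raise
        self.fallback = fallback            # cheap, always-available degraded path
        self.failure_threshold = failure_threshold
        self.cooldown = cooldown            # seconds to keep routing around the primary
        self.failures = 0
        self.tripped_until = 0.0

    def call(self, *args):
        if time.monotonic() < self.tripped_until:
            return self.fallback(*args)     # primary marked unhealthy: degrade immediately
        try:
            result = self.primary(*args)
            self.failures = 0               # healthy again: reset the counter
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.tripped_until = time.monotonic() + self.cooldown
            return self.fallback(*args)     # serve degraded result instead of an error

def flaky_primary(key):
    raise ConnectionError("backend down")   # simulated outage

def cached_fallback(key):
    return f"cached:{key}"                  # stale-but-available answer

client = FallbackClient(flaky_primary, cached_fallback, failure_threshold=2)
print(client.call("price"))   # cached:price
```

The design choice worth noting: the caller always gets *something* back, and the cooldown stops a dead backend from being hammered on every request.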

  • View profile for David Heinemeier Hansson

    Co-owner & CTO of 37signals (Makers of Basecamp + HEY)

    146,623 followers

    Since the dawn of computing, humans have sought to estimate how long it takes to build software, and for just as long, they've consistently failed. Estimating even medium-sized projects is devilishly difficult, and estimating large projects is virtually impossible. Yet the industry keeps insisting that the method that hasn't worked for sixty years will definitely work on this next project, if we all just try a little harder. It's the definition of delusional.

    The fundamental problem is that as soon as a type of software development becomes so routine that it would be possible to estimate, it turns into a product or a service you can just buy rather than build. Very few people need to build vanilla content management systems or e-commerce stores today; they just use WordPress or Shopify or one of the alternatives. Thus, the bulk of software development is focused on novel work.

    But the thing about novel work is that nobody knows exactly what it should look like until they start building. For just as long as the software industry has been failing to estimate the work, it's also been deluding itself into thinking that you can specify novel work upfront and produce something people actually want. Yet we've also tried that many times before! And nobody cared for the outcome. Because it invariably didn't end up solving the real problems. The ones you could only articulate after building half of a wrong solution, changing direction, and then coming up with something better.

    It's time to accept this. Smart programmers have tried for decades, and they have repeatedly failed, just as folks fail today, when we try to cut against the grain of human ingenuity and insist that software needs estimation. The solution is not to try harder nor to hope that this time is somehow different. It's to change tactics. Give up on estimates, and embrace the alternative method for making software by using budgets, or appetites, as we call them in our Shape Up methodology.

    It turns out that programmers are actually surprisingly good at delivering great software on time, if you leave the scope open to negotiation during development. You're not going to get exactly what you asked for, but you wouldn't want that anyway. Because what you asked for before you began building was based on the absolute worst understanding of the problem. Great software is the product of trade-offs and concessions made while making progress. That's how you cut with the grain of human nature.

    It's the core realization that's been driving us for decades at 37signals, and which has resulted in some wonderful products built by small teams punching way above their weight. We've incorporated it into Shape Up, but whether you use a specific methodology or not, giving up on estimates can help you ship better and sooner. https://lnkd.in/g_pyM67V

  • View profile for Laurie Kirk

    researcher @google; serial complexity unpacker

    82,021 followers

    A squadron of F-22s was once taken out by an imaginary line.

    On a mission to Japan, an unforeseen software bug occurred crossing the international date line. Longitude swaps from W179.99 to E180 degrees. Navigation, comms, and even fuel management went down!

    —

    This wasn’t a simple “turn it off and on again” fix; something was seriously wrong. Reboots weren't helping. According to Maj. Gen. Sheppard: “…all systems dumped and when I say all systems, I mean all systems...they could have been in real trouble."

    —

    Thankfully, the squadron of 12 F-22s was accompanied by a KC-10 tanker, which they followed visually back to Hawaii safely. Details are sparse, but these issues aren't uncommon in aviation! It's extremely difficult to notice subtle bugs across millions of lines of code.

    —

    In the 80s, F-16s would invert when crossing the equator (in simulation). Similarly, the Soviet Su-24's computers would freeze if the altitude went below zero.

    Commercial software has a defect density of roughly 1 to 10 bugs per thousand lines of code. NASA's benchmark is ~0.1. Even with space-shuttle levels of quality, you can expect ~100 unforeseen bugs for every million lines of code.

    Formal verification of software is a really interesting field; if you’d like to learn more, JPL’s “Power of 10” rules for developing safety-critical code are an interesting read.
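The date-line hazard is easy to reproduce in miniature. A toy sketch (not the actual avionics logic, which is unpublished): a naive eastward longitude update drifts past +180°, while a wrap-aware update stays in the valid range.

```python
def step_longitude_naive(lon_deg, delta_deg):
    """Add a heading change with no wraparound handling."""
    return lon_deg + delta_deg               # 179.99 + 0.02 drifts past +180 (invalid)

def step_longitude_wrapped(lon_deg, delta_deg):
    """Add a heading change, then wrap the result into [-180, 180)."""
    lon = lon_deg + delta_deg
    return (lon + 180.0) % 360.0 - 180.0     # modular arithmetic handles the date line

print(step_longitude_naive(179.99, 0.02))    # drifts past +180: out of range
print(step_longitude_wrapped(179.99, 0.02))  # ≈ -179.99: correctly across the date line
```

The bug class is subtle precisely because both functions agree everywhere except on the handful of inputs that cross the boundary, so ordinary testing rarely exercises it.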

  • View profile for Rocky Bhatia

    400K+ Engineers | Architect @ Adobe | GenAI & Systems at Scale

    214,753 followers

    Demystifying CI/CD Pipelines: A Simple Guide for Easy Understanding

    1. Code Changes: Developers make changes to the codebase to introduce new features, bug fixes, or improvements.
    2. Code Repository: The modified code is pushed to a version control system (e.g., Git). This triggers the CI/CD pipeline to start.
    3. Build: The CI server pulls the latest code from the repository and initiates the build process. Compilation, dependency resolution, and other build tasks are performed to create executable artifacts.
    4. Pre-deployment Testing: Automated tests (unit tests, integration tests, etc.) are executed to ensure that the changes haven't introduced errors. This phase also includes static code analysis to check for coding standards and potential issues.
    5. Staging Environment: If the pre-deployment tests pass, the artifacts are deployed to a staging environment that closely resembles the production environment.
    6. Staging Tests: Additional tests, specific to the staging environment, are conducted to validate the behavior of the application in an environment that mirrors production.
    7. Approval/Gate: In some cases, a manual approval step or a set of gates may be included, requiring human intervention or meeting specific criteria before proceeding to the next stage.
    8. Deployment to Production: If all tests pass and any necessary approvals are obtained, the artifacts are deployed to the production environment.
    9. Post-deployment Testing: After deployment to production, additional tests may be performed to ensure the application's stability and performance in the live environment.
    10. Monitoring: Continuous monitoring tools are employed to track the application's performance, detect potential issues, and gather insights into user behaviour.
    11. Rollback (If Necessary): If issues are detected post-deployment, the CI/CD pipeline may support an automatic or manual rollback to a previous version.
    12. Notification: The CI/CD pipeline notifies relevant stakeholders about the success or failure of the deployment, providing transparency and accountability.

    This iterative and automated process ensures that changes to the codebase can be quickly and reliably delivered to production, promoting a more efficient and consistent software delivery lifecycle. It also helps in catching potential issues early in the development process, reducing the risk associated with deploying changes to production.
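The gating logic that threads through the steps above can be sketched as a tiny stage runner. The stage names and always-passing/failing checks are placeholders; real pipelines delegate each check to a CI tool, but the control flow is the same: run in order, stop at the first failed gate.

```python
def run_pipeline(stages):
    """Run (name, check) stages in order; halt at the first failing gate."""
    for name, check in stages:
        if not check():
            return (False, name)           # gate closed: nothing downstream runs
    return (True, None)                    # every gate passed: safe to ship

stages = [
    ("build",             lambda: True),
    ("unit-tests",        lambda: True),
    ("staging-deploy",    lambda: True),
    ("staging-tests",     lambda: False),  # a failing staging test...
    ("production-deploy", lambda: True),   # ...means this stage never executes
]

ok, failed_at = run_pipeline(stages)
print(ok, failed_at)   # False staging-tests
```

The key property, mirroring step 7 (Approval/Gate), is that production deployment is unreachable unless every earlier stage has passed.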

  • View profile for Armand Ruiz
    Armand Ruiz is an Influencer

    building AI systems @meta

    206,800 followers

    How To Handle Sensitive Information in your next AI Project

    It's crucial to handle sensitive user information with care. Whether it's personal data, financial details, or health information, understanding how to protect and manage it is essential to maintain trust and comply with privacy regulations. Here are 5 best practices to follow:

    1. Identify and Classify Sensitive Data
    Start by identifying the types of sensitive data your application handles, such as personally identifiable information (PII), sensitive personal information (SPI), and confidential data. Understand the specific legal requirements and privacy regulations that apply, such as GDPR or the California Consumer Privacy Act.

    2. Minimize Data Exposure
    Only share the necessary information with AI endpoints. For PII, such as names, addresses, or social security numbers, consider redacting this information before making API calls, especially if the data could be linked to sensitive applications, like healthcare or financial services.

    3. Avoid Sharing Highly Sensitive Information
    Never pass sensitive personal information, such as credit card numbers, passwords, or bank account details, through AI endpoints. Instead, use secure, dedicated channels for handling and processing such data to avoid unintended exposure or misuse.

    4. Implement Data Anonymization
    When dealing with confidential information, like health conditions or legal matters, ensure that the data cannot be traced back to an individual. Anonymize the data before using it with AI services to maintain user privacy and comply with legal standards.

    5. Regularly Review and Update Privacy Practices
    Data privacy is a dynamic field with evolving laws and best practices. To ensure continued compliance and protection of user data, regularly review your data handling processes, stay updated on relevant regulations, and adjust your practices as needed.

    Remember, safeguarding sensitive information is not just about compliance — it's about earning and keeping the trust of your users.
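Point 2 (redacting PII before an API call) can be sketched with a few regular expressions. This is only an illustration: the patterns below catch a handful of common shapes (US SSNs, emails, card-like digit runs) and real PII detection needs much more than regexes, typically a dedicated detection service.

```python
import re

# Illustrative patterns only; each maps a PII shape to a placeholder token.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),    # crude card-number shape
]

def redact(text):
    """Replace recognized PII shapes with placeholders before sharing the text."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

msg = "Contact jane.doe@example.com, SSN 123-45-6789."
print(redact(msg))   # Contact [EMAIL], SSN [SSN].
```

Only the redacted string would then be sent to the AI endpoint; the mapping from placeholder back to the real value, if needed, stays on your side.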

  • View profile for Brij kishore Pandey
    Brij kishore Pandey is an Influencer

    AI Architect & Engineer | AI Strategist

    720,630 followers

    Starting Your CI/CD Journey

    1. 𝗦𝘁𝗮𝗿𝘁 𝗦𝗺𝗮𝗹𝗹, 𝗧𝗵𝗶𝗻𝗸 𝗕𝗶𝗴
       - Don't try to overhaul your entire codebase at once
       - Begin with a small project as your pilot
       - Gradually expand your CI/CD pipeline as you gain experience and confidence
    2. 𝗚𝗲𝘁 𝗧𝗲𝗮𝗺 𝗕𝘂𝘆-𝗜𝗻
       - CI/CD is a significant shift in workflow - ensure your team is on board
       - Educate your team on the benefits of CI/CD: faster time to market, improved code quality, reduced manual errors
       - Address concerns and foster a culture of continuous improvement
    3. 𝗘𝗺𝗯𝗿𝗮𝗰𝗲 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻
       - The heart of CI/CD is automation - the more, the better
       - Look for opportunities to automate manual tasks in your development lifecycle

    Key Automation Milestones

    Strive to reach these crucial automation checkpoints in your CI/CD journey:

    1. 𝗨𝗻𝗶𝘁 𝗧𝗲𝘀𝘁 𝗘𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻 - Ensure all unit tests run automatically with each code change
    2. 𝗕𝘂𝗶𝗹𝗱 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻 - Automate your build process to create consistent, reproducible builds
    3. 𝗖𝗼𝗱𝗲 𝗖𝗼𝘃𝗲𝗿𝗮𝗴𝗲 𝗖𝗵𝗲𝗰𝗸 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻 - Automatically measure and report on code coverage for each build
    4. 𝗖𝗼𝗱𝗲 𝗤𝘂𝗮𝗹𝗶𝘁𝘆 𝗖𝗵𝗲𝗰𝗸 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻 - Implement automated code quality checks to maintain high standards
    5. 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗦𝗰𝗮𝗻𝗻𝗶𝗻𝗴 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻 - Integrate automated security scans to catch vulnerabilities early
    6. 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁𝘀 𝘄𝗶𝘁𝗵 𝗚𝗮𝘁𝗶𝗻𝗴 - Set up automated deployments with quality gates to ensure only validated code reaches production
    7. 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻 𝘁𝗼 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻 𝗧𝗲𝗮𝗺𝘀 - Establish automated feedback loops to keep production teams informed
    8. 𝗕𝗶𝗻𝗮𝗿𝘆 𝗦𝘁𝗼𝗿𝗮𝗴𝗲 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻 𝗶𝗻𝘁𝗼 𝗥𝗲𝗽𝗼 𝗠𝗮𝗻𝗮𝗴𝗲𝗿 - Automate the storage of build artifacts in a repository manager
    9. 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗦𝗲𝘁𝘂𝗽 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻 - Implement Infrastructure as Code (IaC) to automate environment setups

    Pro Tips for CI/CD Success

    - 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴: Stay updated with the latest CI/CD tools and best practices
    - 𝗠𝗲𝘁𝗿𝗶𝗰𝘀 𝗠𝗮𝘁𝘁𝗲𝗿: Track key performance indicators (KPIs) to measure the impact of your CI/CD implementation
    - 𝗜𝘁𝗲𝗿𝗮𝘁𝗲 𝗮𝗻𝗱 𝗜𝗺𝗽𝗿𝗼𝘃𝗲: Regularly review and refine your CI/CD pipeline based on team feedback and changing project needs

    How has implementing CI/CD transformed your development process? What challenges did you face, and how did you overcome them?
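Milestone 3 (code coverage check automation) reduces to a tiny gate: compute a ratio from the coverage report and fail the build below a threshold. A hedged sketch, with the 80% threshold and the line counts as illustrative numbers only:

```python
def coverage_gate(covered_lines, total_lines, threshold=0.80):
    """Return (passed, ratio); a CI job would exit non-zero when passed is False."""
    ratio = covered_lines / total_lines if total_lines else 1.0
    return ratio >= threshold, round(ratio, 4)

# Numbers here are made up; a real job would parse them from a coverage report.
ok, ratio = coverage_gate(covered_lines=812, total_lines=1000)
print(ok, ratio)   # True 0.812
```

Wired into a pipeline, this is exactly the "gating" idea from milestone 6: the deployment stage only runs when the gate returns True.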

  • View profile for Jyothish Nair

    Doctoral Researcher in AI Strategy & Human-Centred AI | Technical Delivery Manager at Openreach

    19,655 followers

    Reliability, evaluation, and “hallucination anxiety” are where most AI programmes quietly stall. Not because the model is weak. Because the system around it is not built to scale trust.

    When companies move beyond demos, three hard questions appear:
    → Can we rely on this output?
    → Do we know what “good” actually looks like?
    → How much human oversight is enough?

    The fix is not better prompting. It is a strategy and operating discipline.

    𝐅𝐢𝐫𝐬𝐭: Define reliability like a product, not a vibe. Every serious AI use case should have a one-page SLO sheet with measurable targets across:
    → Task success ↳ Right-first-time rate and rubric-based acceptance
    → Factual grounding ↳ Evidence coverage and unsupported-claim tracking
    → Safety and compliance ↳ Policy violations and PII leakage
    → Operational quality ↳ Latency, cost per task, escalation to humans
    Now “good” is no longer opinion. It is observable.

    𝐒𝐞𝐜𝐨𝐧𝐝: Evaluation must be continuous, not a one-off demo test. Use a simple loop:
    𝐏lan: Define rubrics, datasets, and risk tiers
    𝐃o: Run offline evaluations and limited pilots
    𝐂heck: Monitor drift and regressions weekly
    𝐀ct: Update prompts, data, guardrails, and workflows
    Support this with an AI test pyramid:
    → Unit checks for prompts and tool behaviour
    → Scenario tests for real edge failures
    → Regression benchmarks to prevent backsliding
    → Live monitoring in production
    Add statistical control charts, and you can detect silent degradation before users do.

    𝐓𝐡𝐢𝐫𝐝: Reduce hallucinations by design. Run a short failure-mode workshop and engineer controls:
    → Require retrieval or evidence before answering
    → Allow safe abstention instead of confident guessing
    → Add claim checking and tool validation
    → Use structured intake and clarifying flows
    You are not asking the model to behave. You are designing a system that expects failure and contains it.

    𝐅𝐨𝐮𝐫𝐭𝐡: Make human-in-the-loop affordable. Tier risk:
    → Low risk: Light sampling
    → Medium risk: Triggered review
    → High risk: Mandatory approval
    Escalate only when signals demand it: low confidence, missing evidence, policy flags, or novelty spikes. Review becomes targeted, fast, and a source of improvement data.

    𝐅𝐢𝐧𝐚𝐥𝐥𝐲: Operate it like a capability. Track outcomes, risk, delivery speed, and cost on a single dashboard. Hold a short weekly reliability stand-up focused on regressions, failure modes, and ownership.

    What you end up with is simple:
    ↳ Use case catalogue with risk tiers
    ↳ Clear SLOs and error budgets
    ↳ Continuous evaluation harness
    ↳ Built-in controls
    ↳ Targeted human review
    ↳ Reliability cadence

    AI does not scale on intelligence alone. It scales on measurable trust.

    ♻️ Share if you found this useful.
    ➕ Follow (Jyothish Nair) for reflections on AI, change, and human-centred AI

    #AI #AIReliability #TrustAtScale #OperationalExcellence
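The tiered escalation idea above (escalate only when signals demand it) can be sketched as a small routing function. The thresholds, tier names, and signal names are assumptions for illustration; any real deployment would calibrate them against its own data:

```python
def review_decision(risk_tier, confidence, has_evidence, policy_flag):
    """Route an AI output to the cheapest review path its risk signals allow."""
    if risk_tier == "high":
        return "mandatory_review"            # high-risk tier: always a human
    if policy_flag or not has_evidence or confidence < 0.7:
        return "triggered_review"            # a signal fired: escalate this one
    if risk_tier == "low":
        return "light_sampling"              # spot-check a small random sample
    return "auto_approve"                    # medium risk, all signals clean

print(review_decision("medium", 0.95, True, False))  # auto_approve
print(review_decision("medium", 0.55, True, False))  # triggered_review
print(review_decision("high",   0.99, True, False))  # mandatory_review
```

The point of the structure is the cost profile: human attention is spent only where the tier or the signals justify it, which is what makes human-in-the-loop affordable at volume.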

  • View profile for Dion Wiggins

    CTO at Omniscien Technologies | Board Member | Strategic Advisor | Consultant | Author

    12,931 followers

    Trust Betrayed. Again.

    Anthropic—the company that branded itself as “privacy-first” and “safety-driven”—just torched its own moat. Starting now, Claude will train on your chat transcripts and coding sessions unless you manually opt out by September 28. Five years of storage replaces the old 30-day deletion rule. Free, Pro, Max, Claude Code—no exceptions. This is not an update. It is a betrayal.

    → Hypocrisy laid bare: The self-proclaimed “responsible” AI company now runs the same playbook as the rest—harvest first, ask forgiveness later.
    → Compliance nightmare: Sensitive conversations, contracts, legal docs, and code can now sit in Anthropic’s servers for half a decade. Opt-out ≠ consent.
    → Structural exposure: For governments and enterprises that bought Claude for its privacy promises, the foundation just cracked.
    → Pattern confirmed: In the end, every closed model company caves to the same growth imperative: extract more data, hold it longer, and lock users in.

    The last fig leaf of “privacy-first AI” has fallen. The message is simple: sovereignty and control cannot be outsourced. The question for every policymaker, CIO, and enterprise is now clear: how many more times will you let “responsible AI” vendors betray your trust before you build systems you truly control? https://lnkd.in/gm2J-T6h

  • View profile for Anurag(Anu) Karuparti

    Agentic AI Strategist @Microsoft (30k+) | Author - Generative AI for Cloud Solutions | LinkedIn Learning Instructor | Responsible AI Advisor | Ex-PwC, EY | Marathon Runner

    31,501 followers

    𝐀𝐈 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞 & 𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐭𝐞𝐜𝐭𝐢𝐨𝐧 𝐋𝐚𝐰𝐬 𝐟𝐨𝐫 𝐆𝐞𝐧𝐀𝐈 𝐀𝐩𝐩𝐬

    Building GenAI Apps for a Global Audience? Understanding regional data protection and AI laws is not optional, it is foundational. Here is what you need to know:

    1. UNDERSTANDING GLOBAL REGULATORY VARIANCE
    Building GenAI for a global audience requires understanding regional data protection and AI laws. Key regulations by region:
    • EU AI Act: Risk-based obligations for AI systems, plus transparency requirements for certain use cases
    • GDPR (EU): Transparency & Consent
    • DPDP (India): Digital Personal Data Protection
    • PIPL (China): Strict Data Localization
    • CCPA (California): Data Access & Opt-Out
    • LGPD (Brazil): Local Compliance Rules

    2. IMPACT OF THESE REGULATIONS ON YOUR AI TRAINING DATA
    To build compliant GenAI apps, ensure that data used for training AI models follows the regional rules at every stage: Data Collection → Processing → Model Training → Deployment. Three core requirements:
    a. User Consent: Obtain explicit consent for data collection and use
    b. Data Minimization: Collect only necessary data for the intended purpose
    c. Anonymization: Remove personally identifiable information from training data

    3. MITIGATING AI ETHICS AND BIAS RISKS
    AI systems must be fair and ethical, particularly in high-risk areas:
    a. Fairness: Ensure your AI models don't discriminate, especially in areas like recruitment or finance.
    b. Bias Mitigation: Regularly test and adjust your models to reduce bias in the outputs.

    4. ENSURING TRANSPARENCY IN AI MODEL DEVELOPMENT
    Transparency is a cornerstone of compliance, especially when your AI impacts users directly:
    a. Explainability: Be able to explain and document how your models reach their outputs.
    b. Consent Management: Collect, track, and manage user consent.
    c. Privacy by Design: Embed privacy into every system layer.

    5. MANAGING CROSS-BORDER DATA FLOW
    GenAI apps often rely on data from various regions, so it's critical to understand data sovereignty laws:
    a. Data Sovereignty: Follow local laws on where data is stored and processed.
    b. Data Transfer Agreements: Use SCCs or BCRs for compliant cross-border transfers.

    THE COMPLIANCE CHECKLIST
    Before launching GenAI globally, verify:
    1. Regional Compliance:
    • GDPR for EU? (Transparency & Consent)
    • DPDP for India? (Data Protection)
    • PIPL for China? (Data Localization)
    • CCPA for California? (Access & Opt-Out)
    • LGPD for Brazil? (Local Rules)
    2. Training Data:
    • User consent obtained?
    • Data minimized?
    • PII anonymized?
    3. Ethics & Bias:
    • Fairness tested?
    • Bias mitigation in place?
    4. Transparency:
    • Explainability documented?
    • Consent management system?
    • Privacy by design?
    5. Cross-Border:
    • Data sovereignty compliance?
    • Transfer agreements (SCCs/BCRs)?

    Each region has different requirements. Build for the strictest, adapt for the rest. Which regulation applies to your GenAI app?
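The anonymization requirement in section 2 is often approximated in practice by pseudonymizing identifiers before training. A minimal sketch using salted hashing; note the important caveat that this is pseudonymization, not true anonymization (under GDPR, salted hashes of identifiers are generally still personal data), and the field names and salt handling here are illustrative:

```python
import hashlib

def pseudonymize(value, salt):
    """Replace an identifier with a stable, salted, irreversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

record = {"user_id": "alice@example.com", "purchase": "book"}
salt = "per-dataset-secret"   # assumed: stored separately, never shipped with the data

# Same salt + same value -> same token, so joins across the dataset still work.
safe = {**record, "user_id": pseudonymize(record["user_id"], salt)}
print(safe["purchase"], len(safe["user_id"]))   # book 16
```

Keeping the salt out of the dataset means the tokens cannot be trivially reversed by dictionary attack, while analyses that only need to group rows by the same (unknown) user remain possible.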
