Experimental Research Models


Summary

Experimental research models are structured systems—ranging from living cells and organoids to computer simulations—used by scientists to study biological processes, test new medicines, or develop biomedical devices. These models help researchers ask precise questions, simulate real-life conditions, and predict how treatments or interventions might work in humans.

  • Clarify your needs: Choose the research model that matches your specific goals, whether you want to study disease, test drugs, or evaluate biomedical devices, making sure the model’s complexity and relevance align with your questions.
  • Consider ethical and practical factors: Weigh factors like cost, accessibility, and ethical considerations—such as reducing animal use—when selecting or building a model for your experiments.
  • Embrace new tools: Explore innovative approaches like machine learning–assisted platforms or virtual cell models to speed up data gathering, improve experimental planning, and make modeling more accessible and transparent.
Summarized by AI based on LinkedIn member posts
  • View profile for Jack (Jie) Huang MD, PhD

    Chief Scientist | Founder and CEO | President at AASE | Vice President at ABDA | Visiting Professor | Editors

    35,120 followers

    In this newsletter, I explore how three important biomedical research platforms compare: cell models, organoid models, and mouse models. Each model has unique advantages for studying human disease, drug development, and biological mechanisms. I also analyze their structural complexity, relevance to humans, cost, ethical considerations, and scope of application to help researchers choose the right model for their needs. Understanding these differences is critical to advancing translational research and personalized medicine. #CellModels #OrganoidModels #MouseModels #BiomedicalResearch #DrugDevelopment #PrecisionMedicine #LifeSciences #3DCellCulture #TranslationalResearch #ResearchTools #CSTEAMBiotech

  • View profile for Sergiu P. Pașca

    Professor at Stanford University

    15,395 followers

    Sharing today the latest from our lab, just published in 𝙉𝙖𝙩𝙪𝙧𝙚 𝘽𝙞𝙤𝙢𝙚𝙙𝙞𝙘𝙖𝙡 𝙀𝙣𝙜𝙞𝙣𝙚𝙚𝙧𝙞𝙣𝙜. As stem cell–based neural models gain traction for disease modeling and drug testing, one of the major bottlenecks has been scaling up production. In work led by Yuki Miura and Genta Narazaki, we present a simple and cost-effective way to prevent neural organoid fusion that allows scalable generation of cortical organoids without compromising quality. In brief, this is done by simply adding the cheap food additive xanthan gum! This enabled a single experimenter to screen all FDA-approved drugs for neuropsychiatric disorders across 2,400 organoids, identifying compounds that impair human cortical development. Hoping this will be one more step toward scalable human models for brain development and drug discovery. Link to the article here: https://lnkd.in/gBUqNHpb and a short video made by Yuki to show how to dissolve the xanthan gum: https://lnkd.in/gX5wipSe #Organoids #StemCells #DrugScreening #Neurodevelopment #TranslationalResearch

  • View profile for Ismail Lazoglu

    Director of Manufacturing and Automation Research Center at Koc University, Professor of Mechanical Engineering

    5,444 followers

    Our new article titled “Real-time physiological environment emulation for the Istanbul heart ventricular assist device via acausal cardiovascular modeling” was just published in Artificial Organs.

    The cost and complexity associated with animal testing are significantly reduced by using mock circulatory loops beforehand. Novel mock circulatory loops allow us to test biomedical devices preclinically thanks to their flexibility, scalability, and cost-effectiveness.

    The presented work describes the development of a hardware-in-the-loop platform to emulate human physiology for the Istanbul Heart (iHeart-II) LVAD. A closed-loop system is developed whereby the effect of the LVAD on the heart, and vice versa, can be studied. An acausal model of the cardiovascular system is calibrated to emulate advanced-stage heart failure. A new prototype of the iHeart-II LVAD is connected between two air-actuated chambers emulating the left ventricle and the aorta, with PID controllers tracking numerically modeled pressures from the in-silico model. A lead–lag compensator is used to maintain fluid level. Controllers are tuned using nonlinear Hammerstein-Wiener models identified from open-loop data.

    The iHeart-II LVAD is operated at various speeds across its operational range, and the resulting hemodynamics are visualized in real time. Hemodynamic variables, such as LVAD flow rate and aortic, left ventricular, and pulse pressures, demonstrate trends similar to clinical observations. The iHeart-II LVAD achieves hemodynamic normalization at ~3500 rpm for the emulated condition. A novel evaluation methodology is adopted to study the performance of the iHeart LVAD under advanced-stage heart failure emulation. The models and controllers used in the platform are readily replicable to facilitate VAD research, pedagogy, design, and development.

    I would like to thank my doctoral research assistants, Hammad Ur Rahman, Dr. Khunsha Mahmood, and MS assistant Farouk Abdulhamid from Koç University Manufacturing and Automation Research Center, and our medical supervisors Prof. Süha Küçükaksu and Prof. Vedat BAKUY from the Cardiovascular Surgery Department in the School of Medicine at Başkent University for their contributions to this research. We would like to thank the Scientific and Technological Research Council of Turkey (TÜBİTAK project 318S143) for funding this research.

    The article is available at the following link: https://lnkd.in/dH_dZQma

    #artificialorgan #heartpump #ventricularassistdevice #VAD #LVAD #cardiovascularmodel #biomedical #modeling #control #hemodynamics #heartfailure #IstanbulHeartVAD #iHeartVAD
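
    A minimal sketch of the control pattern described above, not the authors' implementation: a discrete PID controller drives a measured chamber pressure toward a reference trajectory that stands in for the in-silico cardiovascular model's output. The toy plant, gains, and waveform are all illustrative assumptions.

    ```python
    import math

    # Sketch only: a discrete PID loop tracking model-generated pressure
    # references, as in a hardware-in-the-loop rig. Plant dynamics, gains,
    # and the reference waveform are assumptions, not the published design.

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    def reference_pressure(t):
        """Stand-in for the numerical cardiovascular model's left-ventricular
        pressure command (mmHg); a real rig would query the in-silico model."""
        return 10 + 70 * max(0.0, math.sin(2 * math.pi * 1.2 * t))  # ~72 bpm beat

    dt = 0.001                                    # 1 kHz control loop
    pid = PID(kp=50.0, ki=100.0, kd=0.1, dt=dt)
    pressure = 0.0                                # measured chamber pressure (toy state)

    for step in range(5000):                      # 5 s of emulated beating
        t = step * dt
        u = pid.update(reference_pressure(t), pressure)
        pressure += (u - 0.5 * pressure) * dt     # first-order toy actuator/chamber

    print(f"final tracking error: {reference_pressure(5.0) - pressure:+.2f} mmHg")
    ```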

  • View profile for Kristin Gleitsman

    CSO at Eigen Bio | AI x Bio Advisor | Scaling Systems for Diagnostics & Discovery | Fellow, Fellows Fund VC | ex VCYT, GH, PACB

    8,166 followers

    Virtual Cell Models are all the rage right now, and this week’s AI ∩ Bio paper covers a recent paper that aims to democratize this approach by encoding complex multicellular dynamics in plain language.

    Summary: This paper introduces a plain-language “cell behavior hypothesis grammar” that turns rules like “oxygen decreases necrosis” into executable agent-based models. The aim is to let researchers build virtual experiments directly from human-readable statements, initialize them with single-cell or spatial data, and test how cell–cell and tissue dynamics unfold. Why it matters: it makes modeling more accessible, assumptions more transparent, and experiments easier to prioritize.

    Scientific Insight: At its core, the grammar provides dictionaries of signals (what cells sense) and behaviors (what cells do), plus simple response forms, so a one-line rule becomes math the simulator can execute (a toy sketch of this rule-to-math mapping follows at the end of this post). The paper shows this through diverse examples: hypoxic tumor growth, PDAC invasion seeded from Visium data, tumor–immune dynamics, an EGF “go vs. grow” test validated with organoids and cell tracking, and cortical layer formation modeled from asymmetric division rules. If you’re new to this, the big idea is: start with rules you can read, tie parameters to data where possible, and test which parameters truly drive outcomes.

    Leadership Angle: For diagnostics and translational leaders, this work is a pragmatic step toward virtual cell laboratories: models initialized from tissue data can be used to explore therapy combinations and microenvironmental dynamics before committing wet-lab time. The scope is still local-tissue, not clinical, but when used carefully these grammars act as prioritization engines, or tools to sharpen questions and rank hypotheses.

    Mentorship Angle: If you’re early in your career, I would consider two lessons here.
    -> First, write the biological models down explicitly, framing for yourself how outcomes shift when parameters are perturbed.
    -> Second, design experiments that rigorously interrogate these models: not to confirm them, but to expose where they fail.
    Transparent rulebooks, sensitivity analyses, and reproducible code will accelerate your science regardless of your toolkit.

    Link in comments.

    Thank you to the authors: Jeanette Johnson Daniel R. Bergman Heber Lima da Rocha David L. Zhou Eric Cramer Ian Mclean Yoseph Dance Max Booth Zachary Nicholas Tamara Lopez-Vidal Atul Deshpande Randy Heiland Elmar Bucher Fatemeh Shojaeian, MD, MPH Matthew Dunworth André Forjaz Michael Getz Inês Godet Furkan Kurtoglu Melissa Lyman John Metzcar Jacob Mitchell, PhD Andy Raddatz Jacobo Solorzano Gomez Aneequa Sundus Yafei Wang David DeNardo Andrew Ewald Daniele Gilkes Luciane Tsukamoto Kagohara Ashley Kiemen Elizabeth Thompson Denis Wirtz Laura Wood Pei-Hsun Wu Neeha Zaidi Lei Zheng Jacquelyn Zimmerman Jude Phillip Elizabeth M. Jaffee Joe Gray Lisa Coussens Young Hwan Chang Laura Heiser Genevieve Stein-O'Brien Elana Fertig Paul Macklin
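
    To make the “one-line rule becomes math” idea concrete, here is a toy sketch, not the paper's code: a Hill-type response curve moves a behavior's rate from a base value toward a saturation value as a sensed signal grows, so “oxygen decreases necrosis” is simply a rule whose saturation rate sits below its base rate. All parameter names and values are invented for illustration.

    ```python
    # Toy sketch (not the paper's implementation) of a plain-language rule made
    # executable: "oxygen decreases necrosis" becomes a Hill response that pulls
    # the necrosis rate from its hypoxic base value toward ~0 as oxygen rises.
    # Every name and number below is an illustrative assumption.

    def rule_response(signal, base_rate, saturation_rate, half_max, hill_power):
        """Hill-type response form: the behavior's rate moves from base_rate
        toward saturation_rate as the sensed signal grows; a 'decreases' rule
        just uses a saturation_rate below the base rate."""
        h = (signal / half_max) ** hill_power
        h = h / (1.0 + h)
        return base_rate + (saturation_rate - base_rate) * h

    # Rule: "oxygen decreases necrosis"
    for oxygen in (1.0, 5.0, 20.0, 60.0):        # sensed O2, arbitrary units
        rate = rule_response(oxygen, base_rate=0.01, saturation_rate=0.0,
                             half_max=5.0, hill_power=4)
        print(f"O2 = {oxygen:5.1f} -> necrosis rate = {rate:.5f} per hour")
    ```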

  • View profile for Sergei Kalinin

    Weston Fulton chair professor, University of Tennessee, Knoxville

    24,863 followers

    🔬 Building ML-Assisted Experimental Ecosystems: A Bottom-up Approach

    Creating machine learning (ML)-enabled experimental ecosystems is incredibly complex. With countless possible connections and decision-making processes, foundational models alone can’t simply “give us answers.” So, where do we begin?

    Step 1: Accelerate Data Acquisition. Start by developing ML workflows that enhance data collection efficiency on a single instrument. Here, sample selection and data interpretation remain as in traditional setups, but by identifying internal efficiencies, we’re able to operate faster without changing core processes.

    Step 2: Build Upstream Feedback. Next, integrate characterization results to inform upstream sample selection. This creates a feedback loop, refining experiment planning by better aligning initial sample choices with desired outcomes, an early step toward smarter, data-driven experiment planning. (A minimal sketch of such a loop follows after this post.)

    Step 3: Enhance Downstream Data Analytics. Finally, improve downstream analysis by updating theoretical models based on new data, ultimately generating knowledge. This strengthens our ability to interpret results in ways that update and refine our scientific understanding.

    But is this enough? Not quite. In reality, we deal with multiple instruments, researchers, and planning decisions that can be connected in workflows in a vast number of combinations. Designing these connections is a challenge in itself and could benefit from approaches in auction theory, game theory, or other complex decision-making frameworks. However, the first step is to build connections that allow all these elements to exist within a shared knowledge space.

    #MachineLearning #ExperimentAutomation #ScientificWorkflows #AIforScience #MaterialsScience #ResearchInnovation
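
    As a concrete illustration of Steps 1–2, here is a minimal active-learning loop, my own sketch rather than anything from the post: a Gaussian-process surrogate picks the next measurement by predictive uncertainty, so characterization results feed back into upstream sample selection. The instrument stub and all settings are assumptions.

    ```python
    # Minimal active-learning sketch (illustrative, not from the post): a
    # surrogate model selects the next measurement by predictive uncertainty,
    # closing the feedback loop between characterization and sample selection.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)

    def measure(x):
        """Stub for a single-instrument measurement (assumption)."""
        return np.sin(3.0 * x) + 0.05 * rng.standard_normal()

    candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)  # possible settings
    X = [candidates[0], candidates[-1]]                     # two seed samples
    y = [measure(x[0]) for x in X]

    gp = GaussianProcessRegressor()
    for _ in range(10):
        gp.fit(np.array(X), np.array(y))
        _, std = gp.predict(candidates, return_std=True)
        next_x = candidates[int(np.argmax(std))]   # most informative next sample
        X.append(next_x)
        y.append(measure(next_x[0]))

    print(f"measured {len(X)} samples; last pick at x = {X[-1][0]:.3f}")
    ```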

  • View profile for Chi-Ping Day

    Hybrid mouse model: biological mouse x computer mouse!

    3,419 followers

    It's official: all new NIH funding opportunities moving forward should incorporate language on consideration of novel alternative methods (NAMs), which include computational modeling and predictive technologies, cell-free methods and assays, and cell-based culture models.

    Any method or model needs to be tested for validation, so that the scope of its predictive power can be identified. For example, zebrafish are very different from humans, but the developmental processes of melanocytes are well conserved between the two species. Using zebrafish to study inherited diseases that result in defective melanocytes can therefore be translated to humans, and studies on melanocytic development validate this model. Many novel or alternative models are still waiting for such validation studies.

    Let's take organoid culture of tumors as an example. The consensus opinion is that 3D culture can better represent cell-cell and cell-stroma interactions in the tumor, so the signaling pathways in 3D culture better reflect their counterparts in a tumor. We can actually validate this hypothesis with scRNA-seq analysis of 2D culture, 3D culture, and real tumors from patients. Comparing the distribution of cell lineages, ligand-receptor interactions, and responses to targeted therapies across these models and real tumors can give us a very clear idea of what each model can actually be translated to (a toy sketch of such a comparison follows below). I believe that such validation studies would fit the mission of NIH perfectly.
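
    One hedged sketch of what such a comparison could look like in practice; the lineage fractions are invented and the metric is my choice, not the post's: score each culture model by how far its scRNA-derived cell-lineage composition sits from the patient tumor's.

    ```python
    # Illustrative sketch of the validation idea above: compare cell-lineage
    # fractions (e.g., from scRNA-seq annotation) of 2D culture, 3D organoid
    # culture, and a patient tumor using Jensen-Shannon distance (0 = identical).
    # All fractions below are made up for illustration.
    from scipy.spatial.distance import jensenshannon

    lineages      = ["tumor", "fibroblast", "T cell", "macrophage", "endothelial"]
    patient_tumor = [0.55, 0.20, 0.10, 0.10, 0.05]
    culture_2d    = [0.97, 0.03, 0.00, 0.00, 0.00]
    organoid_3d   = [0.60, 0.25, 0.05, 0.07, 0.03]

    for name, model in [("2D culture", culture_2d), ("3D organoid", organoid_3d)]:
        d = jensenshannon(patient_tumor, model)
        print(f"{name}: JS distance to patient tumor = {d:.3f}")
    ```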

  • View profile for Michael F. Chiang

    Director, National Eye Institute, National Institutes of Health

    17,214 followers

    Dry eye disease is very common and can be extremely frustrating for people, yet there are surprisingly few FDA-approved drug options. This is partly because of a lack of model systems for the pathophysiology of the human eye & eyelids. Here's a video about an eye-on-a-chip model that creates a potential experimental drug-testing platform (supported by the National Eye Institute (NEI)!): https://lnkd.in/eFd_sUxw Work led by Dan Huh (Penn Bioengineering) and Jeongyun Seo, with many collaborators including Vivian Lee, Vatinee Bunya, Mina Massaro-Giordano MD (Scheie Eye Institute), Vivek Shenoy (Penn Materials Science and Engineering), Woo Byun, Andrei Georgescu, Yoon-Suk Yi, Farid Alisafei.

    A New Device Helps Treat Dry Eye


  • View profile for Israel Agaku

    Founder & CEO at Chisquares (chisquares.com)

    9,786 followers

    Here are tips to understand the nuances of cohort studies, randomized trials, and quasi-experiments.

    1️⃣ Cohort Studies
    Observational designs where researchers track groups of individuals over time to observe outcomes based on natural exposures.
    Key Features:
    ✅ No Intervention: Researchers don’t assign people to interventions; they observe what naturally occurs.
    ✅ Grouping by Exposure: Participants are grouped based on characteristics (e.g., smokers vs. non-smokers).
    ✅ Time Frame: Can be prospective (forward in time) or retrospective (analyzing past data).
    🎯 Example: Imagine tracking college students with a ChatGPT subscription at enrollment vs. those without a subscription. By following these groups over time, we can assess academic performance (e.g., test scores).
    💪 Strengths:
    ✔ Captures real-world conditions.
    ✔ Effective for studying rare exposures or long-term outcomes.
    👎 Limitations:
    ✘ Prone to confounding (other factors may influence observed relationships).
    ✘ Cannot establish causation.

    2️⃣ Randomized Trials
    RCTs are experimental designs where participants are randomly assigned to intervention or control groups.
    Key Features:
    ✅ Random Assignment: Reduces bias by ensuring comparable groups.
    ✅ Controlled Conditions: The researcher controls the intervention.
    🎯 Example: To evaluate ChatGPT's impact on academic performance, half the students in a school could be randomized to use ChatGPT for homework, while the other half uses traditional methods (e.g., textbooks). Outcomes like scores can then be compared.
    💪 Strengths:
    ✔ Establishes causality.
    ✔ Minimizes bias through randomization.
    👎 Limitations:
    ✘ Costly and time-intensive.
    ✘ Ethical concerns when withholding interventions from certain groups.

    3️⃣ Quasi-Experiments: Bridging the Gap
    These designs involve interventions but lack randomization, making them practical for real-world evaluations.
    Key Features:
    ✅ Non-Randomized Assignment: Groups are assigned based on existing conditions or convenience.
    ✅ Intervention: Researchers introduce an intervention to evaluate its effects.
    ✅ Comparison Groups: Pre-existing groups or natural events are often used.
    🎯 Example: If one state in the country adopts ChatGPT in classrooms while a neighboring state does not, researchers can compare outcomes between the two to evaluate the policy's impact.
    💪 Strengths:
    ✔ Practical for evaluating large-scale policies or programs.
    ✔ Reflects real-world settings.
    👎 Limitations:
    ✘ Greater risk of bias and confounding.
    ✘ Weaker causal inferences compared to RCTs.

    🛠️ When to Use Each Design
    Cohort Studies: Ideal for understanding associations, especially when interventions are impractical or unethical.
    Randomized Trials: Best for testing interventions when randomization is possible and ethical.
    Quasi-Experiments: Useful for real-world evaluations when randomization isn’t feasible, but causal insights are still needed.
    (A toy simulation contrasting the cohort and randomized designs follows below.)

    Please, reshare ♻️ #Chisquares #VillageSchool #StudyDesign
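
    To make the cohort-vs-RCT contrast tangible, here is a toy simulation of the ChatGPT example above, entirely made up: an unobserved "motivation" confounder drives both subscription and scores, so the naive cohort comparison overstates the effect that randomization recovers.

    ```python
    # Toy simulation (illustrative only) of why the cohort design can be
    # confounded while randomization is not: motivation drives both ChatGPT
    # use and test scores, inflating the naive cohort comparison.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    motivation = rng.normal(size=n)        # unobserved confounder
    true_effect = 2.0                      # points added by ChatGPT use

    # Cohort: students self-select into ChatGPT use (motivated students more so)
    uses = rng.random(n) < 1.0 / (1.0 + np.exp(-motivation))
    scores = 70 + 5 * motivation + true_effect * uses + rng.normal(size=n)
    cohort_est = scores[uses].mean() - scores[~uses].mean()

    # RCT: usage assigned by coin flip, independent of motivation
    assigned = rng.random(n) < 0.5
    scores_rct = 70 + 5 * motivation + true_effect * assigned + rng.normal(size=n)
    rct_est = scores_rct[assigned].mean() - scores_rct[~assigned].mean()

    print(f"true effect {true_effect}, "
          f"cohort estimate {cohort_est:.2f}, RCT estimate {rct_est:.2f}")
    ```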

  • View profile for Jeff Bissen

    Modeling human biology in vitro to avoid clinical failures by confidently predicting efficacy & toxicity in drug discovery | Follow my profile & subscribe to my newsletter for the latest in organ-on-a-chip technology 🔔

    30,553 followers

    Today is World Cancer Day 💜 Here are 3 ways that organ-on-a-chip models are helping in cancer research 👇

    Cancer is the 2nd leading cause of death in the US, surpassed only by heart disease. Rather than a single disease, it's a collection of illnesses, including lung cancer, brain cancer, skin cancer, breast cancer, prostate cancer, and many others. The lavender cancer ribbon represents all cancers collectively, serving as a universal symbol of support for patients, survivors, and caregivers. To overcome cancer, the research community needs cutting-edge tools to develop next-generation therapies 🧬 Organ-on-a-chip (OOC) models are contributing in 3 major ways:

    1️⃣ Predicting toxicity
    Phase I clinical trials assess the safety of drugs, but many are not successful. Simply put, it's not surprising when drugs appear safe in animal models but end up being toxic in human patients. What's needed is preclinical models that better recapitulate the complexity of human biology. That's exactly what OOC models do. Xellar Biosystems has developed a model for drug-induced liver injury (DILI), and we have the data to show we can detect toxicity that animal models often miss.

    2️⃣ Ensuring efficacy
    Of course, safety alone isn't enough because drugs must also show promising efficacy in clinical trials. This is another area where traditional preclinical models often fail to predict what will actually happen in human patients. For example, 2D cell culture systems struggle to accurately model the tumor microenvironment (TME). OOC models fill this gap and give you confidence in go/no-go decisions, nominating the best compound for clinical trials.

    3️⃣ Enabling precision
    Different patients have different responses, even though they receive the same drug. This highlights the heterogeneity of cancer and shows the need for patient-specific models. Luckily, primary cells can be used in OOC systems, enabling functional precision oncology. In other words, it's possible to stratify patient populations and personalize medical treatments.

    - - -

    Animal models and 2D culture systems have been the traditional choices in drug discovery and preclinical research, but the field is shifting. 90% of clinical trials fail, and OOC models hold great promise in drastically reducing that failure rate. By predicting toxicity, ensuring efficacy, and enabling precision, we can usher in a new era of cancer research, bringing better therapies to patients who desperately need them 💜
