🧬 Reimagining Genetic Algorithms with LLMs in OpenEvolve

I wanted to share some technical insights about how we've implemented evolutionary algorithms in OpenEvolve, our open-source implementation of Google DeepMind's AlphaEvolve system. Unlike traditional genetic algorithms with explicit mutation and crossover operators, OpenEvolve leverages Large Language Models to create a more sophisticated evolutionary process:

🔄 Mutation via LLMs
Instead of random bit flips or simple code transformations, OpenEvolve uses an ensemble of LLMs (Gemini-Flash-2.0-lite + Gemini-Flash-2.0) to generate sophisticated code modifications. These models understand programming concepts and can make targeted changes or complete rewrites based on the problem context.

🌿 Selection with MAP-Elites
Our selection mechanism combines the MAP-Elites algorithm with an island-based population model. This maintains diversity across multiple feature dimensions while balancing exploration and exploitation - crucial for breaking through optimization plateaus.

🧩 Implicit Crossover
Traditional crossover explicitly combines segments from two parent solutions. In OpenEvolve, "crossover" happens implicitly - we feed multiple high-performing programs as inspiration to the LLM, which then combines concepts in ways far more sophisticated than traditional bit-swapping could achieve.

📊 Cascade Evaluation
Our fitness evaluation uses a multi-stage approach in which promising solutions gradually undergo more intensive evaluation - similar to tournament selection but with progressive complexity.

This architecture allowed us to successfully replicate AlphaEvolve's results, evolving from simple geometric patterns to sophisticated mathematical optimization in the circle packing problem (matching DeepMind's reported sum of radii of 2.635).

The most exciting finding? Traditional mutation operators would never discover scipy.minimize on their own, but our LLM-driven evolution did - showing how this approach can navigate complex solution spaces in ways classical genetic algorithms simply cannot.

For those interested in the technical details, our implementation is available at https://lnkd.in/gbBgJRau - with database.py (selection) and controller.py (mutation) being particularly illustrative of how we've reimagined genetic algorithms for the LLM era. I'm excited to see how others might apply and extend this approach to their own domains. What other applications do you see for LLM-driven evolutionary algorithms?

#MachineLearning #EvolutionaryAlgorithms #LLM #OpenSource #AIResearch #AlgorithmDiscovery
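For readers unfamiliar with MAP-Elites, here is a minimal, self-contained sketch of the core idea in Python. This is not OpenEvolve's actual database.py; the feature descriptor (a complexity/diversity bucket pair) and the dict-based archive are illustrative assumptions:

```python
import random

def features(program):
    # Hypothetical feature descriptor: map a candidate to a grid cell.
    # Real systems might bucket code length, runtime, or style metrics.
    return (int(program["complexity"] * 4), int(program["diversity"] * 4))

def map_elites_insert(archive, program):
    """Keep only the fittest program per feature cell (the 'elite')."""
    cell = features(program)
    incumbent = archive.get(cell)
    if incumbent is None or program["fitness"] > incumbent["fitness"]:
        archive[cell] = program

def sample_parent(archive):
    """Uniform sampling over occupied cells preserves diversity across niches."""
    return random.choice(list(archive.values()))

archive = {}
for _ in range(100):
    candidate = {"complexity": random.random(),
                 "diversity": random.random(),
                 "fitness": random.random()}
    map_elites_insert(archive, candidate)
# Each occupied cell now holds exactly one elite; parents are drawn
# across cells, not just from the globally fittest programs.
```

The key property: competition happens only within a feature cell, so a mediocre-but-different program survives alongside the global best, which is what keeps exploration alive on fitness plateaus.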
Evolutionary Algorithms
Summary
Evolutionary algorithms are computer programs inspired by natural selection, where solutions to problems evolve over time through processes like mutation, selection, and crossover. Recent advances are making these algorithms more creative and adaptive by combining them with large language models and hybrid optimization techniques for tasks ranging from prompt engineering to complex engineering design.
- Explore new possibilities: Try evolutionary algorithms for creative problem-solving where traditional methods fall short, especially in areas like artificial intelligence and engineering.
- Combine approaches: Integrate evolutionary algorithms with other optimization techniques, such as Bayesian optimization or large language models, to balance exploration and efficiency.
- Promote diversity: Use population-based search and novelty-aware ranking to encourage a wider range of solutions and avoid getting stuck in narrow solution spaces.
AI: The Future of Prompt Engineering? How PromptBreeder Uses Co-Evolution for Continual Improvement

Prompt engineering is key to unlocking the capabilities of large language models (LLMs). However, designing effective prompts remains more an art than a science. What if we could take a rigorous, automated approach to evolving better prompts? That's the promise of PromptBreeder, an intriguing new technique from DeepMind. PromptBreeder represents a major advance in prompt optimization. It uses an evolutionary algorithm to breed more effective prompts over successive generations.

👉 Here's how it works:
1. Initialize a population of "prompts" (instructions to the LLM) and "mutation prompts" (instructions for modifying prompts). This seeds genetic diversity.
2. Evaluate each prompt on a batch of training examples, scoring its fitness. Better prompts produce more accurate LLM responses.
3. Evolve the population over generations using a genetic algorithm. High-fitness prompts mutate and cross over to yield new prompt variations.
4. Crucially, mutation prompts also evolve, becoming better at generating useful prompt mutations over time.
5. The cycle repeats, with prompts and mutation prompts co-evolving in a self-referential loop.

This technique outperformed state-of-the-art prompting methods like Chain-of-Thought on benchmarks in arithmetic, commonsense reasoning, and hate speech detection.

👉 For example, on a grade school math dataset (GSM8K), PromptBreeder evolved prompts like: "Show all your working. You should use the correct mathematical notation and vocabulary, where appropriate. You should write your answer in full sentences and in words. You should use examples to illustrate your points and prove your answers. Your workings out should be neat and legible."

This prompt improved accuracy from 56% to 84% compared to the baseline prompt of simply "Solution:".
Unlike other prompt optimization techniques, PromptBreeder improves continuously over generations rather than hitting diminishing returns. And it requires no parameter updates, making it scalable. The authors suggest it could become even more powerful when paired with larger foundation models. The paper provides a fascinating glimpse into the future where systems not only learn, but learn how to learn better. PromptBreeder shows language itself can be the substrate for open-ended self-improvement, no parameters required. 👉 I highly recommend reading the full paper for an in-depth look at this new technique: PromptBreeder: Self-Referential Self-Improvement via Prompt Evolution https://lnkd.in/g2QDDJZn This research opens exciting possibilities for automating prompt engineering through co-evolution. The self-referential approach is powerful yet simple and scalable. PromptBreeder demonstrates how recursive self-improvement grounded in language alone can unlock more of LLMs' latent capabilities. I'm eager to see future work building on these ideas!
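The co-evolutionary loop described above can be sketched in a few lines of Python. The LLM call and the fitness function are stubbed out (in the actual paper, fitness is task accuracy on a training batch); everything else, including the self-referential mutation of the mutation prompts themselves, follows the five steps:

```python
import random

def llm(instruction, text):
    # Stand-in for an LLM call: PromptBreeder uses the model itself to
    # rewrite prompts. Here we fake a mutation by appending a marker.
    return text + " [mutated by: " + instruction[:20] + "]"

def fitness(prompt, examples):
    # Stub scorer: real fitness is LLM accuracy on the examples.
    # Here longer prompts score higher, purely for illustration.
    return len(prompt) + random.random()

def step(prompts, mutation_prompts, examples):
    """One generation of prompt/mutation-prompt co-evolution."""
    scored = sorted(prompts, key=lambda p: fitness(p, examples), reverse=True)
    survivors = scored[: len(scored) // 2]          # selection
    children = []
    for parent in survivors:
        mp = random.choice(mutation_prompts)
        children.append(llm(mp, parent))            # prompt mutation
    # Self-referential twist: mutation prompts are themselves mutated.
    mutation_prompts = [llm("Improve this mutation prompt:", mp)
                        if random.random() < 0.2 else mp
                        for mp in mutation_prompts]
    return survivors + children, mutation_prompts

prompts = ["Solution:", "Show all your working."]
mutation_prompts = ["Rephrase to be more specific.",
                    "Add step-by-step guidance."]
for _ in range(3):
    prompts, mutation_prompts = step(prompts, mutation_prompts, examples=[])
```

Note that the population size stays constant (half survive, half are replaced by mutated children), so the loop can run indefinitely, which is what allows the continual improvement the paper reports.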
-
Exciting News to Kick Off 2025! I'm happy to announce that our latest paper, titled 'Large Language Model-Based Evolutionary Optimizer: Reasoning with Elitism', has been published in Neurocomputing, Elsevier!

This work explores the potential of Large Language Models (LLMs) as black-box optimizers, leveraging their remarkable reasoning capabilities for zero-shot optimization across a variety of scenarios, including multi-objective and high-dimensional problems. We introduce the Language-Model-Based Evolutionary Optimizer (LEO), a novel, population-based method for numerical optimization. Applications include benchmark challenges and real-world engineering problems such as supersonic nozzle shape optimization, heat transfer optimization, and windfarm layout optimization.

Key Highlights:
1. Comparable performance to state-of-the-art optimization methods
2. Insights into leveraging LLMs' creative potential while addressing challenges like hallucinations
3. Practical guidelines for reliable optimization using LLMs
4. Limitations and exciting directions for future research

A huge thanks to all the collaborators Shuvayan Brahmachary, Subodh Joshi, Kaushic K, Kaushik Koneripalli, Aniruddha Panda, Harshil Patel, PhD, et al., and the reviewers for their support and feedback! If you're interested in cutting-edge intersections of AI, optimization, and engineering, I invite you to check out the paper: https://lnkd.in/e5hzJwhh Wishing everyone a joyful and prosperous New Year!
-
Bayesian Optimization (BO) is sample-efficient but exploration-limited. Evolutionary algorithms are exploration-rich but sample-hungry. What happens when you combine them?

Multi-objective BO (MOBO) is well suited for autonomous discovery workflows, but its acquisition functions can be greedy, concentrating candidates in narrow regions of the Pareto front because they prioritize expected improvement over coverage. Evolutionary algorithms handle diversity well with population-based search, but require far more evaluations to converge because they rely on iterative selection pressure rather than a surrogate model.

Evolutionary Guided BO (EGBO), introduced by Low et al. (https://lnkd.in/evanXmBx), combines both in a single loop, letting evolutionary and acquisition-driven proposals compete for selection within a MOBO framework. This gives the optimizer access to diverse candidates without sacrificing the sample efficiency of model-guided search. A new preprint systematically evaluates hybrid strategies across diverse problem types and introduces a novelty-aware ranking step that penalizes redundancy among selected candidates. The gains are consistent across benchmarks, and particularly strong on constrained problems where narrow feasible regions make diversity critical.

Here is how the modified EGBO workflow operates:
🔹 The BO acquisition function proposes a small set of candidates it considers most promising based on the surrogate model.
🔹 An evolutionary algorithm independently proposes a larger set of candidates drawn from diverse regions of the design space.
🔹 Both sets are pooled into a single merged candidate list and scored by the acquisition function, so exploitation-driven and exploration-driven proposals compete on equal footing. In standard EGBO, the top-scored candidates are selected directly.
🔹 The novelty-aware modification adds one step: as candidates are selected sequentially, a novelty term penalizes those too close to already-selected batch members, reducing redundancy and promoting broader coverage within each batch.

Across 10 synthetic benchmarks and 4 real-world datasets (reaction optimization, pharmaceutical formulations, industrial coatings, drug screening), novelty-aware EGBO consistently outperforms both standard EGBO and acquisition-only MOBO. The largest gains appear in many-objective and constrained settings, though in very high-dimensional feature spaces, standard acquisition-only MOBO performs better. A practical framework worth considering for campaigns with complex trade-offs or constrained feasible regions.

📄 Novelty-Aware Evolutionary Bayesian Optimisation for Multi-Objective Discovery Science, ChemRxiv, April 6, 2026
🔗 https://lnkd.in/eSEbHMp3
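A minimal sketch of that novelty-aware selection step, assuming a simple distance-based penalty (the preprint's exact novelty term and acquisition function may differ):

```python
import math

def novelty_aware_select(candidates, acq, batch_size, penalty=1.0):
    """Greedy sequential batch selection: acquisition score minus a
    penalty for closeness to already-selected batch members."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < batch_size:
        def score(x):
            s = acq(x)
            if selected:
                # Penalize candidates near anything already in the batch.
                d = min(math.dist(x, y) for y in selected)
                s -= penalty * math.exp(-d)
            return s
        best = max(pool, key=score)
        pool.remove(best)
        selected.append(best)
    return selected

# Merged pool: BO-proposed and EA-proposed candidates compete on equal
# footing; acq is a toy acquisition favoring small first coordinates.
pool = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0), (0.9, 0.1)]
batch = novelty_aware_select(pool, acq=lambda x: -x[0],
                             batch_size=2, penalty=2.0)
# Picks (0.0, 0.0) first, then the distant (1.0, 1.0) rather than the
# nearly identical (0.05, 0.0) the acquisition alone would prefer.
```

Without the penalty term, the second pick would be the redundant near-duplicate; the novelty term is what spreads each batch across the design space.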
-
📈 𝗘𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻 𝗶𝘀 𝗔𝗰𝗰𝗲𝗹𝗲𝗿𝗮𝘁𝗶𝗻𝗴

As a 30-year groupie of evolutionary computation, I am particularly sensitive to this trend: 𝘦𝘷𝘰𝘭𝘶𝘵𝘪𝘰𝘯𝘢𝘳𝘺 𝘈𝘐 𝘪𝘴 𝘰𝘯 𝘵𝘩𝘦 𝘳𝘪𝘴𝘦. The latest case in point, of course, is Google DeepMind's AlphaEvolve, which uses LLMs to automate the construction of evolutionary operators (in this case, mutations, which do not have to be defined ahead of time) to create new programs that solve mathematical problems.

👴 The application of evolutionary algorithms (EAs) to neural networks goes back to at least Geoffrey Miller, Peter Todd and Shailesh Hegde's 1989 paper on designing NNs with GAs, Dave Chalmers' (yes, THAT David Chalmers!) 1990 paper on "genetic connectionism", Hiroaki Kitano's GA with graph generation, Dario Floreano's and Francesco Mondada's neuroevolution, Riccardo Poli's 1998 topology and weights evolution, and Kenneth Stanley and Risto Miikkulainen's 2002 NeuroEvolution of Augmenting Topologies (NEAT).

❄️ Then deep learning burst onto the scene and made it look like EAs had become obsolete. The fact that Generative Adversarial Networks (GANs), a super popular deep learning technique, looked very similar in their principles to co-evolutionary (mini-max) games did not change that perception, even though multiple groups have been using co-evolution techniques to improve and stabilize GANs. OpenAI's 2017 paper, Evolution Strategies as a Scalable Alternative to Reinforcement Learning (https://lnkd.in/gmVmue_5), was a weirdly interesting article, but it did not move the needle, as it simply stated that you could do as well with EAs as with another technique. With the resurgence of Reinforcement Learning, this odd paper may have found new meaning.

📈 But in recent years we have seen an explosion of new ideas beyond evolving the weights (a mostly inefficient alternative to SGD) and architectures (though here is a good review of Evolutionary Neural Architecture Search: https://lnkd.in/g9TREhCH).

The figure attached to this post is from an excellent early 2025 review that categorizes the different approaches that have emerged, although it does not include the more recent AlphaEvolve (https://lnkd.in/giZJRkwX). In addition to AlphaEvolve, which offers an original use of LLMs and EAs, I would mention, with the arbitrary discretionary power writing a post gives me, the work of Sakana AI (2024, open access! https://lnkd.in/g_FKQbnm): merging models in "data flow space" by combining inference paths from different models, very cool 💡! I also find the AI Scientist work of Sakana AI particularly intriguing, as it follows the same path as the one I have been pursuing for two decades. Lots of creative uses of EAs by this group. Evolution is back!
-
A few weeks ago, I shared an article from DeepMind: AlphaEvolve, an approach that uses Evolutionary AI and LLMs to automatically discover and optimize algorithms. Inspired by this idea, we've been exploring how a similar method can evolve dynamic business logic for segmentation. The result is a reusable framework we call the Evolution Rules Program, which:
- Uses LLMs to generate interpretable business logic candidates
- Applies MAP-Elites to explore a diverse range of logic
- Leverages Q-Learning to optimize logic against measurable goals

In the blog, we share an example of how we evolved trip mission logic. This approach delivers logic that is transparent, adaptive, and performance-driven. https://lnkd.in/gqyTPE35

This approach can be used to evolve and optimize solutions in many business domains where measurable goals exist, such as refining heuristic rules, improving processes, optimizing inventory, enhancing content design, or accelerating code efficiency.
-
𝐖𝐡𝐚𝐭 𝐢𝐟 𝐲𝐨𝐮𝐫 𝐚𝐥𝐠𝐨𝐫𝐢𝐭𝐡𝐦 𝐜𝐨𝐮𝐥𝐝 𝐞𝐯𝐨𝐥𝐯𝐞 𝐢𝐭𝐬𝐞𝐥𝐟 — 𝐟𝐚𝐬𝐭𝐞𝐫 𝐭𝐡𝐚𝐧 𝐚𝐧𝐲 𝐜𝐥𝐚𝐬𝐬𝐢𝐜𝐚𝐥 𝐜𝐨𝐦𝐩𝐮𝐭𝐞𝐫?

Welcome to the 🌌 Quantum Andhra Series, where we explore how quantum principles are transforming computation, learning, and intelligence itself. Today's concept: 𝐐𝐮𝐚𝐧𝐭𝐮𝐦 𝐄𝐯𝐨𝐥𝐮𝐭𝐢𝐨𝐧𝐚𝐫𝐲 𝐀𝐥𝐠𝐨𝐫𝐢𝐭𝐡𝐦 (𝐐𝐄𝐀)

𝐖𝐡𝐚𝐭 𝐢𝐬 𝐐𝐄𝐀? It’s a hybrid approach that merges Quantum Computing 🧩 with Evolutionary Intelligence 🧬 — allowing systems to learn, adapt, and evolve across multiple states at once. Let’s decode the flowchart step by step 👇

🔹 𝐒𝐭𝐞𝐩 𝟏 — 𝐈𝐧𝐢𝐭𝐢𝐚𝐥𝐢𝐳𝐞 𝐂𝐨𝐧𝐭𝐫𝐨𝐥 𝐏𝐚𝐫𝐚𝐦𝐞𝐭𝐞𝐫𝐬: Define population size, iteration limits, and convergence thresholds — the foundation of evolution.
🔹 𝐒𝐭𝐞𝐩 𝟐 — 𝐈𝐧𝐢𝐭𝐢𝐚𝐥𝐢𝐳𝐞 𝐏𝐨𝐩𝐮𝐥𝐚𝐭𝐢𝐨𝐧: Create potential solutions represented by qubits, enabling superposition — where each individual can exist in multiple states simultaneously.
🔹 𝐒𝐭𝐞𝐩 𝟑 — 𝐂𝐨𝐦𝐩𝐮𝐭𝐞 𝐅𝐢𝐭𝐧𝐞𝐬𝐬: Evaluate how well each quantum individual performs using a fitness function.
🔹 𝐒𝐭𝐞𝐩 𝟒 — 𝐐𝐮𝐚𝐧𝐭𝐮𝐦 𝐄𝐯𝐨𝐥𝐮𝐭𝐢𝐨𝐧𝐚𝐫𝐲 𝐎𝐩𝐞𝐫𝐚𝐭𝐢𝐨𝐧: Apply quantum rotation gates to update qubit states — ensuring diversity and preventing early stagnation (a key issue in classical algorithms).
🔹 𝐒𝐭𝐞𝐩 𝟓 — 𝐭 = 𝐭 + 𝟏 (𝐈𝐭𝐞𝐫𝐚𝐭𝐢𝐨𝐧 𝐔𝐩𝐝𝐚𝐭𝐞): Advance to the next generation, refining candidate solutions through quantum-inspired evolution.
🔹 𝐒𝐭𝐞𝐩 𝟔 — 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐞 𝐍𝐞𝐰 𝐏𝐨𝐩𝐮𝐥𝐚𝐭𝐢𝐨𝐧: Produce a new set of evolved individuals — smarter, faster, and closer to the global optimum.
🔹 𝐒𝐭𝐞𝐩 𝟕 — 𝐂𝐚𝐥𝐜𝐮𝐥𝐚𝐭𝐞 𝐂𝐥𝐮𝐬𝐭𝐞𝐫 𝐂𝐞𝐧𝐭𝐞𝐫𝐬 𝐚𝐧𝐝 𝐅𝐢𝐭𝐧𝐞𝐬𝐬 𝐕𝐚𝐥𝐮𝐞𝐬: For clustering problems, determine cluster centers, membership degrees, and their fitness scores, making solutions more precise and stable.
🔹 𝐒𝐭𝐞𝐩 𝟖 — 𝐂𝐡𝐞𝐜𝐤 𝐂𝐨𝐧𝐝𝐢𝐭𝐢𝐨𝐧 (𝐭 < 𝐓): If the maximum iterations aren’t reached, evolution continues. Otherwise, it’s time to stop.
🔹 𝐒𝐭𝐞𝐩 𝟗 — 𝐄𝐍𝐃: Extract the final, optimized quantum solution — the fittest among all evolved states.

𝐖𝐡𝐲 𝐢𝐭 𝐌𝐚𝐭𝐭𝐞𝐫𝐬: Quantum Evolutionary Algorithms combine the parallelism of qubits ⚙️ with the adaptability of evolutionary systems. They open new frontiers in:
🔸 AI Optimization
🔸 Clustering & Data Mining
🔸 Quantum-Inspired Machine Learning
🔸 Complex System Modeling

𝐐𝐮𝐚𝐧𝐭𝐮𝐦 𝐀𝐧𝐝𝐡𝐫𝐚 isn’t just a series — it’s a vision to make India a global leader in Quantum Intelligence, empowering every learner and innovator with knowledge that shapes the future. 🇮🇳

#QuantumAndhra #QuantumComputing #ArtificialIntelligence #EvolutionaryAlgorithm #QuantumEvolution #Optimization #Innovation #DataScience #QuantumFuture #TechRevolution
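The flowchart steps above can be condensed into a short quantum-inspired sketch: each "qubit" is an (alpha, beta) amplitude pair, observation collapses it to a classical bit with probability beta², and a rotation gate nudges amplitudes toward the best solution found so far. The rotation angle and the toy one-max fitness are illustrative choices, not values from any specific QEA paper:

```python
import math
import random

def observe(qubits):
    """Collapse each qubit to a classical bit: 1 with probability beta^2."""
    return [1 if random.random() < b * b else 0 for _, b in qubits]

def rotate(qubits, individual, best, dtheta=0.05 * math.pi):
    """Quantum rotation gate: nudge amplitudes toward the best-known bits."""
    new = []
    for (a, b), x, bstar in zip(qubits, individual, best):
        theta = dtheta if bstar != x else 0.0   # only rotate disagreeing bits
        t = theta if bstar == 1 else -theta     # toward 1 or toward 0
        na = a * math.cos(t) - b * math.sin(t)
        nb = a * math.sin(t) + b * math.cos(t)  # rotation preserves a^2+b^2=1
        new.append((na, nb))
    return new

def fitness(bits):
    return sum(bits)  # toy objective: maximize the number of ones

n = 8
qubits = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * n  # uniform superposition
best = observe(qubits)
for _ in range(50):                                  # the t = t + 1 loop
    x = observe(qubits)                              # Step 3: compute fitness
    if fitness(x) > fitness(best):
        best = x
    qubits = rotate(qubits, x, best)                 # Step 4: rotation gates
```

The rotation gate plays the role of mutation and selection pressure at once: amplitudes drift toward the best observed solution while the probabilistic collapse keeps generating diverse individuals, which is the stagnation-avoidance property the post highlights.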
-
GEPA [1] reminds me of Ansor [2], the work we did at Berkeley led by Lianmin Zheng (author of "Judging LLM-as-a-Judge" and creator of Chatbot Arena). We used evolutionary algorithms to generate high-performance tensor programs for deep neural networks and found they consistently outperformed reinforcement learning (RL). Evolutionary algorithms discovered solutions in far fewer steps, especially when RL rewards were too sparse or overly incremental.

The main argument for reinforcement learning is its potential for zero-shot adaptation post-deployment. But for many tasks that require one-time search followed by using the optimal result (like LLM evaluations), I struggle to understand the appeal of RL over evolutionary approaches. When you need to find the best solution once and then deploy it, why not use the method that gets there faster and more reliably? Thoughts on when you'd choose RL over evolutionary algorithms for optimization problems?

#MachineLearning #AI #Optimization #ReinforcementLearning #EvolutionaryAlgorithms

[1] https://lnkd.in/g2rwMjjb
[2] https://lnkd.in/ghE9hFMw
-
Evolutionary algorithm generates tailored "molecular fingerprints." Method, published in the journal Chem, is also suitable for predicting quantum chemical properties and toxicity of molecules. University of Münster, Germany. 10 May 2024

Excerpt: A team led by Prof Frank Glorius from the Institute of Organic Chemistry at the University of Münster has now developed an evolutionary algorithm that searches for optimal molecular representations based on the principles of evolution, using mechanisms such as reproduction, mutation and selection. It identifies molecular structures particularly relevant to the respective question and uses them to encode molecules for various machine-learning models. Depending on the model and the given question, customized "molecular fingerprints" are created, which the chemists used in their study to predict chemical reactions with surprising accuracy. The method, published in the journal Chem, is also suitable for predicting quantum chemical properties and the toxicity of molecules.

In order to use machine learning, researchers must first convert the molecules into a computer-readable form. Many research groups have already tackled this problem, and consequently, there are various ways of performing this task. However, it is difficult to predict which of the available methods is best suited to answer a specific question - for example, to determine whether a chemical compound is harmful to humans. The new algorithm is designed to help find the optimal molecular fingerprint in each case. To do this, the algorithm gradually selects the molecular fingerprints that achieve the best results in the prediction from many randomly generated molecular fingerprints. "Following the example of nature, we use mutations, i.e. random changes to individual components of the fingerprints, or recombine components of two fingerprints," explains doctoral student Felix Katzenburg.

Note: One of the study's primary goals was to develop a method for encoding molecules that can be applied to any molecular data set and does not require expert knowledge of the underlying relationships. Direct link to publication available in the enclosed announcement.

Publication: Chem | Online February 29, 2024
An evolutionary algorithm for interpretable molecular representations.
https://lnkd.in/e2NsRhZ4
https://lnkd.in/e-xvrsj9
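The mutate/recombine/select cycle the excerpt describes can be illustrated with fingerprints as bit vectors, where each bit marks whether a substructure feature is encoded. The fitness stub below (agreement with a hypothetical ideal feature set) merely stands in for the study's real objective, the predictive accuracy of a model trained on the fingerprint:

```python
import random

def mutate(fp, rate=0.1):
    """Random flips of individual fingerprint bits (which substructures
    the representation encodes)."""
    return [1 - b if random.random() < rate else b for b in fp]

def recombine(fp_a, fp_b):
    """One-point crossover: combine components of two fingerprints."""
    cut = random.randrange(1, len(fp_a))
    return fp_a[:cut] + fp_b[cut:]

def fitness(fp, target):
    # Stub for "how well a model using this fingerprint predicts":
    # here, agreement with a hypothetical ideal feature set.
    return sum(1 for a, b in zip(fp, target) if a == b)

def evolve(pop, target, generations=30):
    for _ in range(generations):
        pop.sort(key=lambda fp: fitness(fp, target), reverse=True)
        parents = pop[: len(pop) // 2]                  # selection
        children = [mutate(recombine(random.choice(parents),
                                     random.choice(parents)))
                    for _ in parents]                   # reproduction
        pop = parents + children
    return max(pop, key=lambda fp: fitness(fp, target))

target = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical ideal encoding
pop = [[random.randint(0, 1) for _ in target] for _ in range(20)]
best = evolve(pop, target)
```

Because the top half of each generation survives unchanged, the best fingerprint never regresses, while mutation and recombination keep proposing new feature combinations to test.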
-
🚀 AlphaEvolve: Redefining Algorithm Discovery with Gemini

Google DeepMind just released AlphaEvolve, a Gemini-powered evolutionary coding agent that could redefine how we discover and optimize algorithms. AlphaEvolve blends the creative problem-solving power of Gemini's large language models with automated evaluators, driving remarkable advances across Google's computing ecosystem—from optimizing data center scheduling (recovering 0.7% of Google's worldwide computing resources) to accelerating TPU hardware design.

An evolutionary coding agent like AlphaEvolve generates algorithms as code, systematically evaluates their performance, then iteratively refines and selects top-performing solutions—a powerful "survival-of-the-fittest" approach that rapidly evolves toward optimal outcomes. I'm old enough to have been exposed to (and have coded) classic genetic algorithms—systems that mimic natural evolution to iteratively select and refine solutions. AlphaEvolve follows a similar evolutionary philosophy but takes it a significant step further. Instead of abstract "genetic" encodings, AlphaEvolve directly generates human-readable, executable code using large language models, enabling richer exploration and faster evolution toward practical, impactful solutions.

What stood out for me:
🧩 Algorithmic creativity: AlphaEvolve discovered new matrix multiplication algorithms surpassing Strassen's landmark 1969 method—highlighting AI's potential for foundational computational breakthroughs.
🌍 Real-world impact: AlphaEvolve already improved Gemini's AI training speed by 23% and optimized FlashAttention kernels by up to 32.5%, directly translating to significant energy and cost savings.
🤝 Collaborative synergy: Combining Gemini Flash's breadth and Gemini Pro's depth, AlphaEvolve harnesses collective intelligence to tackle problems across mathematics, computing, and potentially drug discovery and materials science.
Although still early, AlphaEvolve illustrates a crucial step toward AGI: AI agents autonomously improving AI algorithms, creating a self-reinforcing loop toward increasingly capable systems. More here: https://lnkd.in/e6KBWvrX
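Stripped to its skeleton, the generate-evaluate-select loop of an evolutionary coding agent looks like the toy sketch below. The llm_propose stub just perturbs a constant inside a code string where a real agent would have an LLM rewrite actual code; the evaluator executes each candidate and scores it, exactly the role of AlphaEvolve's automated evaluators:

```python
import random

def llm_propose(parent_code):
    # Stand-in for the LLM call that rewrites a candidate program.
    # Here we "mutate" a numeric constant embedded in the code string.
    value = float(parent_code.split("= ")[1])
    return "x = " + str(value + random.uniform(-1, 1))

def evaluate(code):
    # Automated evaluator: execute the candidate and score it.
    # Toy objective: maximize -(x - 3)^2, with the optimum at x = 3.
    env = {}
    exec(code, env)
    return -(env["x"] - 3.0) ** 2

population = ["x = 0.0"]
for _ in range(200):
    parent = max(population, key=evaluate)             # select the elite
    child = llm_propose(parent)                        # "mutate" via generator
    population.append(child)
    population = sorted(population, key=evaluate,      # keep top performers
                        reverse=True)[:5]

best = max(population, key=evaluate)
```

Everything interesting in a real system lives inside the two stubs: the richer the generator's edits and the more faithful the evaluator, the more powerful the loop, but the surrounding evolutionary scaffolding stays this simple.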