Lately, I’ve had many conversations about 𝘇𝗲𝗿𝗼 𝘄𝗮𝘀𝘁𝗲 𝗽𝗮𝘁𝘁𝗲𝗿𝗻 design, so I thought I’d lay out the math behind it. Let’s begin with the first part of the problem: 𝘰𝘱𝘵𝘪𝘮𝘪𝘻𝘪𝘯𝘨 𝘧𝘢𝘣𝘳𝘪𝘤 𝘶𝘴𝘢𝘨𝘦 given a fixed set of pieces.

As suggested, this is a classic mathematical optimization problem. To formulate it, we need a few core components:

• 𝗢𝗯𝗷𝗲𝗰𝘁𝗶𝘃𝗲 𝗳𝘂𝗻𝗰𝘁𝗶𝗼𝗻: In our case, this is the total area of fabric consumed. The goal is to minimize this area.
• 𝗩𝗮𝗿𝗶𝗮𝗯𝗹𝗲𝘀 (𝗱𝗲𝗴𝗿𝗲𝗲𝘀 𝗼𝗳 𝗳𝗿𝗲𝗲𝗱𝗼𝗺): These include the position, rotation, and flipping of each piece. Each piece contributes a 6-dimensional vector: 2 for displacement, 2 for rotation, and 2 for flipping.
• 𝗖𝗼𝗻𝘀𝘁𝗿𝗮𝗶𝗻𝘁𝘀: The most obvious one is no overlap between pieces. This can be handled using collision detection, often made more efficient through techniques like hierarchical trees to reduce the number of checks. Other constraints include grainline alignment (some pieces must follow the fabric's grainline, others run diagonally; note that some cheaper productions drop this constraint entirely to save fabric) and specific rules about whether certain pieces can be flipped.

The naïve approach would be to try every valid arrangement, calculate the fabric usage, and select the layout with the minimum area. But this would take forever to compute, and I don’t mean that metaphorically. Optimization tasks suffer from a problem called the curse of dimensionality: even though each evaluation is easy (it’s just an area calculation), finding the optimal solution becomes exponentially harder with every piece added to the system.

Instead, we resort to approximations. Approximation algorithms do not guarantee the absolute best solution, but they give a good enough solution in a reasonable amount of time. The trade-off is usually between solution quality and runtime. Some popular approaches:

• 𝗦𝗶𝗺𝘂𝗹𝗮𝘁𝗲𝗱 𝗔𝗻𝗻𝗲𝗮𝗹𝗶𝗻𝗴: Inspired by physics, this technique treats each layout as an energy state. It perturbs the system in search of lower energy (better solutions) and eventually settles into a stable, near-optimal configuration (see the sketch after this post).
• 𝗚𝗲𝗻𝗲𝘁𝗶𝗰 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝘀: Mimic natural selection, evolving better solutions over generations based on fitness criteria. They literally simulate the survival-of-the-fittest concept and return the solution with the best “genes”.
• 𝗔𝗜-𝗯𝗮𝘀𝗲𝗱 𝗠𝗲𝘁𝗵𝗼𝗱𝘀: While the algorithms above incur a similar computational cost every time you run them, AI can shift that cost to the training phase. Once trained, a model can instantly generate optimal or near-optimal layouts. Inference is fast, and the heavy lifting is done only once during training.

Now, when we move from optimizing fixed patterns to generating patterns that are inherently zero-waste by design, AI becomes the only scalable solution. Let's talk about that in a different post!
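To make the simulated annealing idea concrete, here is a minimal, hypothetical Python sketch: it nests axis-aligned rectangles (stand-ins for real pattern pieces) on a fixed-width strip and minimizes the strip length used, discouraging overlaps with a penalty term. The pieces, the neighborhood move, and the cooling schedule are all illustrative assumptions, not a production nesting engine.

```python
import math
import random

# Each piece is (width, height); the fabric is a strip of fixed width.
PIECES = [(30, 20), (25, 15), (20, 20), (15, 10), (10, 30)]
STRIP_WIDTH = 50

def overlap_area(a, b):
    """Overlapping area of two placed rectangles (x, y, w, h)."""
    dx = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    dy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(dx, 0) * max(dy, 0)

def energy(layout):
    """Used strip length plus a heavy penalty for overlaps (the 'energy')."""
    length = max(y + h for (_, y, _, h) in layout)
    penalty = sum(overlap_area(layout[i], layout[j])
                  for i in range(len(layout)) for j in range(i + 1, len(layout)))
    return length + 100.0 * penalty

def random_layout():
    layout = []
    for w, h in PIECES:
        if random.random() < 0.5:  # random 90-degree rotation
            w, h = h, w
        layout.append((random.uniform(0, STRIP_WIDTH - w), random.uniform(0, 100), w, h))
    return layout

def neighbor(layout):
    """Perturb one piece: a small translation, sometimes a 90-degree rotation."""
    new = list(layout)
    i = random.randrange(len(new))
    x, y, w, h = new[i]
    if random.random() < 0.2:
        w, h = h, w
    x = min(max(x + random.gauss(0, 3), 0), STRIP_WIDTH - w)
    y = max(y + random.gauss(0, 3), 0)
    new[i] = (x, y, w, h)
    return new

def anneal(steps=20000, t_start=50.0, t_end=0.01):
    current = random_layout()
    e = energy(current)
    best, best_e = current, e
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        cand = neighbor(current)
        ce = energy(cand)
        # Always accept improvements; accept worse layouts with Boltzmann probability.
        if ce < e or random.random() < math.exp((e - ce) / t):
            current, e = cand, ce
            if e < best_e:
                best, best_e = current, e
    return best, best_e

layout, best_e = anneal()
print(f"best energy (strip length + overlap penalty): {best_e:.1f}")
```

The Boltzmann acceptance rule is the whole trick: occasionally accepting a worse layout lets the search climb out of local minima early on, while the falling temperature locks it into a near-optimal configuration at the end.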
Using Math to Minimize Resource Waste
Summary
Using math to minimize resource waste means applying mathematical techniques to reduce unnecessary use of materials, energy, or space in everyday processes, manufacturing, data management, and design. This approach helps businesses and individuals save costs, protect the environment, and boost efficiency by making smarter decisions based on calculations rather than guesswork.
- Analyze current processes: Review how resources like materials, energy, or space are currently used and look for areas where calculations can reveal waste or inefficiencies.
- Apply mathematical tools: Use simple equations, geometry, or advanced algorithms to redesign layouts, packaging, or workflows so that every unit of resource is used wisely.
- Encourage collaboration: Involve both creative and technical team members early on to combine practical design ideas with mathematical thinking, leading to solutions that cut waste without sacrificing quality.
Algebraic Geometry Offers Fresh Solution to Data Center Energy Inefficiency

Key Insights:
• Mathematicians from Virginia Tech, led by Professor Gretchen Matthews and Assistant Professor Hiram Lopez, are applying algebraic geometry to address energy inefficiency in data centers.
• Data replication, a common method for ensuring reliability and backup, significantly increases energy consumption by duplicating information across servers.
• The researchers propose using algebraic structures to distribute data efficiently across servers, reducing redundancy while maintaining reliability.

The Problem with Traditional Data Replication:
• Data centers currently rely on redundant data replication to safeguard against data loss, often replicating data two or three times across servers.
• This approach consumes substantial energy and storage resources, creating environmental and economic inefficiencies.
• With the exponential growth of global data generation, smarter storage and recovery methods are urgently needed.

The Algebraic Geometry Approach:
• The researchers utilize polynomial-based mathematical structures to break data into smaller pieces and distribute them across neighboring servers.
• In case of server failure, instead of relying on multiple full copies of data, neighboring servers can collectively recover the missing information using these algebraic structures.
• While the use of polynomials for data storage dates back to the 1960s, recent advancements allow researchers to build specialized polynomial systems optimized for localized data recovery.

Benefits of the Algebraic Geometry Method:
• Reduced Energy Consumption: Less reliance on redundant copies means lower energy demands for storage and replication.
• Efficient Data Recovery: Localized recovery algorithms minimize the need for long-range data transfers, saving both time and power.
• Scalability: The approach is well-suited for growing data infrastructures, where efficient distribution is increasingly critical.

Implications for the Future of Data Centers:
• Greener Data Centers: Algebraic geometry could help reduce the carbon footprint of large-scale data storage operations.
• Cost Efficiency: Lower energy requirements translate to significant cost savings for data center operators.
• Resilience and Reliability: Localized recovery ensures data remains accessible and secure even during server failures.

Future Outlook:
• Further research will focus on refining the polynomial algorithms to handle larger datasets and integrate seamlessly with existing data center architectures.
• As global data demands continue to grow, these algebraic approaches could play a key role in making data centers more sustainable and cost-efficient.

This innovative application of algebraic geometry to real-world data infrastructure challenges highlights the power of mathematical research in driving technological and environmental advancements.
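The post doesn't give the underlying construction, but the classic idea it builds on (encoding data as evaluations of a polynomial, as in Reed-Solomon codes from the 1960s) is easy to sketch. Below is a toy Python example over the prime field GF(257): three data symbols define a degree-2 polynomial, five servers each store one evaluation, and any three surviving evaluations reconstruct the data by Lagrange interpolation. The field size and share counts are arbitrary illustrative choices, not the Virginia Tech researchers' construction.

```python
P = 257  # small prime field GF(257); real systems use far larger fields

def eval_poly(coeffs, x):
    """Evaluate a polynomial (lowest-degree coefficient first) at x, mod P."""
    result = 0
    for c in reversed(coeffs):
        result = (result * x + c) % P
    return result

def make_shares(data, n_servers):
    """Each 'server' stores one evaluation of the data polynomial."""
    return {x: eval_poly(data, x) for x in range(1, n_servers + 1)}

def recover(shares):
    """Lagrange-interpolate k surviving shares back to the k data symbols."""
    xs = list(shares)
    k = len(xs)
    data = [0] * k
    for xi in xs:
        # Build the Lagrange basis polynomial for xi, coefficient by coefficient.
        basis = [1]
        denom = 1
        for xj in xs:
            if xj == xi:
                continue
            # Multiply basis by (x - xj); track the denominator (xi - xj).
            basis = [(-xj * basis[0]) % P] + [
                (basis[t - 1] - xj * basis[t]) % P for t in range(1, len(basis))
            ] + [basis[-1]]
            denom = denom * (xi - xj) % P
        inv = pow(denom, P - 2, P)  # modular inverse via Fermat's little theorem
        for t in range(k):
            data[t] = (data[t] + shares[xi] * basis[t] * inv) % P
    return data

data = [42, 17, 99]               # three symbols of "data"
shares = make_shares(data, 5)     # spread across five servers
shares.pop(2); shares.pop(4)      # two servers fail
print(recover(shares))            # -> [42, 17, 99]
```

The storage win is the point: five shares protect against two failures here, whereas plain triplication of all three symbols would store nine values for the same resilience.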
-
Last month, we saved 5 lakhs in just 10 minutes by doing one thing. Let me tell you about the small adjustment that made a huge impact at Go Zero.

Here's how our packaging works:
→ Ice cream goes into plastic cups
→ 12 cups go into cartons
→ Cartons go into crates for storage and transport

The cartons we were buying were the standard size in the market, so each crate held 5 cartons = 60 cups total.

One day, someone walked out of our cold room carrying these crates, and I noticed something: there was empty space in each crate. It got me thinking: could we fit one more carton in there? Tried it. Didn't fit. It was just 10mm short.

Instead of accepting that, I did the math. We already had 5 cartons in the crate. If I reduced each carton's height by just 2mm, I'd free up exactly the 10mm needed for the 6th carton.

The impact was immediate: 5 cartons per crate became 6 cartons per crate. Scale that up: every 100 crates now carry 600 cartons instead of 500. Same truck. Same storage space. 20% more product. All because of 2mm.

Sometimes the biggest breakthroughs come from the smallest observations. You just have to be willing to question what everyone else accepts as "standard."
-
Ohhh younger me… how naive I was. “I’ll never use these equations in the real world. This is pointless.”

Fast forward to today, and I’m using machining formulas daily to dial in optimal performance, maximize tool life, and squeeze every ounce of efficiency out of a cut. Funny how that works. Those “boring” equations (SFM, RPM, IPM, chip thinning adjustments, MRR) are not just academic exercises. They’re leverage.

Here’s what really changes when you understand the formulas instead of guessing:
• You control the process – Instead of copying speeds and feeds from a chart, you can calculate exactly what your application needs.
• You optimize tool life – Proper chip thickness and surface speed prevent premature wear, heat buildup, and catastrophic failures.
• You increase MRR intelligently – Not just “turn it up and hope.” Strategic increases based on math.
• You solve problems faster – Chatter? Premature edge breakdown? Poor finish? The equations usually tell you why.
• You speak the language of performance – Whether you’re working with suppliers, programmers, or tooling reps, the math gives you credibility.

In high-mix, real-world manufacturing, you can’t just “send it.” The difference between a good cut and a great one is often hidden inside a formula. I’ve found that the more I understand the math behind the cut, the more confident I become in pushing boundaries safely.

Shoutout to the fundamentals we once rolled our eyes at. Turns out… they were the cheat codes all along.

#Machining #Manufacturing #CNC #ContinuousImprovement #ARCHCuttingTools
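For readers who haven't met these formulas, here is a minimal Python sketch of the standard relationships the post name-drops: RPM from surface speed, feed from chip load, the radial chip thinning correction, and MRR. The tool and cutting parameters in the example are illustrative, not a recommendation for any particular tool or material.

```python
import math

def rpm(sfm, dia_in):
    """Spindle speed from surface speed: RPM = (SFM x 12) / (pi x D)."""
    return sfm * 12 / (math.pi * dia_in)

def feed_ipm(rpm_val, chip_load, flutes):
    """Table feed: IPM = RPM x chip load per tooth x number of flutes."""
    return rpm_val * chip_load * flutes

def chip_thinning_factor(woc, dia_in):
    """Radial chip thinning: at a radial width of cut below 50% of diameter,
    the actual chip is thinner than the programmed feed per tooth, so feed
    can be scaled by 1 / sqrt(1 - (1 - 2*WOC/D)^2) to restore the target
    chip thickness."""
    ratio = 1 - 2 * woc / dia_in
    return 1 / math.sqrt(1 - ratio * ratio)

def mrr(woc, doc, ipm):
    """Material removal rate for milling: MRR = WOC x DOC x IPM (in^3/min)."""
    return woc * doc * ipm

# Illustrative example: 1/2" 4-flute end mill, 350 SFM, 0.002" chip load,
# 0.100" radial width of cut, 0.500" axial depth of cut.
d, cl, flutes = 0.5, 0.002, 4
n = rpm(350, d)
f = feed_ipm(n, cl * chip_thinning_factor(0.100, d), flutes)
print(f"RPM {n:.0f}, feed {f:.1f} IPM, MRR {mrr(0.100, 0.500, f):.2f} in^3/min")
```

Note how the chip thinning factor (1.25 here) raises the feed by a quarter for free: the math recovers productivity that chart-copying leaves on the table.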
-
A pattern maker accepting 15% waste is bad at math. But any designer CREATING 15% waste is worse. The real problem? We're ALL bad at math when we work in silos.

5 simple steps to get out of silos and start designing for profit and sustainability:

1. Zero waste is a team sport, not a solo act
Traditional: Design → Pattern → Production → "Why so much waste?"
Smart: Design + Pattern together → Test on actual fabric width → Adjust → Zero waste
Time investment: 2 extra hours. Fabric saved: 10-15% forever.

2. The geometry both sides need to know
Designers: Your 150cm fabric doesn't care about your asymmetric vision.
Pattern makers: Their creativity needs your mathematical guidance, not your judgment.
Solution: 30-minute weekly sessions translating ideas into tessellating reality.

3. Simple fixes that require both brains
That gorgeous curved hem?
Designer alternative: Geometric angles achieving the same movement.
Pattern maker input: "If we shift the angle 5°, we save 12cm per garment."
Result: Design integrity maintained, waste eliminated.

4. The tools nobody teaches in school
For designers: Basic pattern shapes, fabric width constraints.
For pattern makers: Design thinking, aesthetic problem-solving.
For both: How €1 of waste × 10,000 units = €10,000 lost (see the quick cost sketch after this post).

5. Real collaboration looks like this
"This curve creates 20% waste" ❌
"This curve creates 20% waste. What if we tried this instead?" ✅
One kills creativity. One redirects it.

The competitive advantage hiding in plain sight. Brands where designers and pattern makers collaborate from day 1:
→ 10-15% lower material costs
→ 50% faster sampling
→ Premium pricing for "innovative design"
→ Teams that actually enjoy Monday meetings

Start here:
Next project: Designer + pattern maker in the room for the initial sketch.
Rule: No design moves forward without a waste calculation.
Result: Same creativity, better margins.

The future belongs to teams that see constraints as catalysts, not obstacles.

P.S.: Zero-waste patterns don't have to be boring, and garments can be pretty complicated! See example below!
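To put a number on point 4, here is a tiny Python sketch of the waste math a design/pattern team might run before sign-off. The fabric price, marker efficiencies, and volumes are made-up illustrative inputs.

```python
def waste_cost(fabric_laid_m, price_per_m, marker_efficiency, units):
    """Value of fabric cut away as scrap across a production run.
    marker_efficiency is the fraction of laid fabric covered by pattern
    pieces; the rest (1 - efficiency) falls on the cutting-room floor."""
    waste_m = fabric_laid_m * (1 - marker_efficiency)
    return waste_m * price_per_m * units

# Illustrative numbers: 1.8 m laid per garment at 5 EUR/m, 10,000 units.
baseline = waste_cost(1.8, 5.0, 0.85, 10_000)   # 15% waste
improved = waste_cost(1.8, 5.0, 0.95, 10_000)   # after collaboration: 5% waste
print(f"waste cost at 85% efficiency: {baseline:,.0f} EUR")
print(f"waste cost at 95% efficiency: {improved:,.0f} EUR")
print(f"saved per run: {baseline - improved:,.0f} EUR")
```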
-
@AirONAr Blowdown loss in a cooling tower is the process of discharging a portion of the circulating water to control the concentration of dissolved solids, which accumulate due to evaporation. As water evaporates in the cooling process, the remaining water becomes increasingly concentrated with impurities, such as salts and minerals. Blowdown helps prevent scaling, fouling, and corrosion by removing these impurities and maintaining the required water quality for effective heat exchange.

1. Blowdown Loss Calculation
Blowdown is essential for controlling the concentration of total dissolved solids (TDS) in the cooling water. The amount of blowdown required depends on the concentration of dissolved solids and the number of cycles of concentration (CoC). The formula for calculating blowdown loss is:

Blowdown Rate = Evaporation Rate / (Cycles of Concentration - 1)

Where:
• Evaporation Rate = rate at which water is lost to evaporation (m³/h or GPM). Note that makeup water must replace both evaporation and blowdown, so the makeup rate ≈ evaporation rate + blowdown rate.
• Cycles of Concentration (CoC) = the ratio of dissolved solids in the circulating water to that in the makeup water.

2. Cycles of Concentration (CoC)
CoC is a key factor in blowdown determination. It is calculated as:

CoC = (Concentration of Solids in Circulating Water) / (Concentration of Solids in Makeup Water)

Maintaining optimal CoC minimizes blowdown and water wastage. Higher CoC reduces blowdown, but excessively high values can lead to scaling and fouling, while low CoC requires more blowdown to control TDS.

3. Example Calculation
For example, if the evaporation rate is 100 m³/h and the CoC is 4, the blowdown rate would be:

Blowdown Rate = 100 m³/h / (4 - 1) = 33.33 m³/h

This means 33.33 m³/h of water is discharged as blowdown to maintain water quality (see the short calculation sketch after this post).

4. Factors Influencing Blowdown
• Water Quality: Higher impurity levels in makeup water lead to higher blowdown requirements.
• Cooling Load: Increased cooling demand raises evaporation, and more blowdown is required to manage TDS.
• Water Treatment: Effective water treatment can help reduce blowdown by controlling scaling and fouling.

5. Mitigation of Blowdown Loss
To minimize blowdown loss:
• Optimize CoC: Proper management of CoC balances blowdown with water quality.
• Water Treatment: Using chemicals to control scaling and fouling reduces blowdown frequency.
• Filtration: Filtration systems help remove suspended solids, extending time between blowdowns.

In summary, blowdown loss is essential for maintaining cooling tower efficiency by controlling dissolved solids. Proper management of CoC and water treatment can help reduce blowdown and improve water conservation.
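A minimal Python sketch of the water balance above, assuming the standard relations (drift losses ignored); the example numbers mirror the post's.

```python
def blowdown_rate(evaporation_m3h, coc):
    """Blowdown needed to hold TDS steady: B = E / (CoC - 1)."""
    if coc <= 1:
        raise ValueError("CoC must exceed 1: at CoC = 1 nothing concentrates.")
    return evaporation_m3h / (coc - 1)

def makeup_rate(evaporation_m3h, coc):
    """Makeup must replace both evaporation and blowdown: M = E + B."""
    return evaporation_m3h + blowdown_rate(evaporation_m3h, coc)

e = 100.0  # m3/h lost to evaporation
for coc in (2, 3, 4, 6):
    b = blowdown_rate(e, coc)
    print(f"CoC {coc}: blowdown {b:.1f} m3/h, makeup {makeup_rate(e, coc):.1f} m3/h")
# CoC 4 reproduces the post's example: blowdown = 100 / 3 = 33.3 m3/h.
```

Running the loop also shows the trade-off from section 2 numerically: pushing CoC from 2 to 6 cuts blowdown from 100 to 20 m³/h, which is exactly why operators chase higher cycles until scaling risk says stop.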
-
Calculating Factory Capacity: A Guide for Factory Owners

As we continue to strive for operational excellence, it's essential that we accurately calculate our factory capacity to optimize production, reduce waste, and increase efficiency. This guide outlines the steps to calculate your factory capacity using existing machines and manpower.

Why Calculate Factory Capacity?
Calculating factory capacity helps:
1. *Optimize production planning*: Ensure you're producing at optimal levels to meet demand.
2. *Identify bottlenecks*: Pinpoint areas for improvement to increase efficiency.
3. *Reduce waste*: Minimize excess production, reduce scrap, and lower costs.
4. *Improve resource allocation*: Make informed decisions about machine and manpower utilization.

Step-by-Step Guide to Calculating Factory Capacity:
1. *Gather data*: Collect information on:
- Machine specifications (capacity, speed, uptime)
- Manpower availability (shifts, hours, skills)
- Production data (historical output, demand forecasts)
2. *Calculate machine capacity*:
- Determine the maximum output of each machine per hour (or shift)
- Consider factors like machine downtime, maintenance, and changeovers
3. *Calculate manpower capacity*:
- Determine the total available labor-hours per shift (or day) and the labor-hours each unit requires
- Consider factors like employee skills, training, and absenteeism
4. *Calculate total factory capacity*:
- Convert both machine capacity and manpower capacity into units per shift
- Overall factory capacity is the lesser of the two: the bottleneck sets the pace, and adding the totals together would overstate what the factory can produce
5. *Adjust for efficiency and utilization*:
- Apply an efficiency factor (e.g., 80% to account for downtime, waste)
- Consider utilization rates (e.g., 90% to account for production variability)

Example Calculation:
Suppose your factory has:
- 5 machines with a maximum output of 100 units/hour each
- 20 employees working across 2 shifts/day (10 per shift), giving 80 labor-hours per shift

Machine capacity: 5 machines × 100 units/hour × 8 hours/shift = 4,000 units/shift
Manpower capacity: assuming each unit needs about 0.015 labor-hours (roughly a minute) of operator time, 80 labor-hours/shift ÷ 0.015 ≈ 5,333 units/shift
Total factory capacity: min(4,000, 5,333) = 4,000 units/shift, with the machines as the bottleneck

Adjusted for efficiency and utilization:
- Efficiency factor: 80% (to account for downtime, waste)
- Utilization rate: 90% (to account for production variability)
Adjusted factory capacity: 4,000 units/shift × 0.8 × 0.9 = 2,880 units/shift

A short capacity-calculator sketch follows this post.

Conclusion:
Accurately calculating factory capacity is crucial for optimizing production, reducing waste, and increasing efficiency. By following these steps and adjusting for efficiency and utilization, you'll be able to make informed decisions about production planning, resource allocation, and process improvements. If you have any questions or need further guidance, please don't hesitate to reach out.
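Here is a minimal Python sketch of the bottleneck calculation above. The labor-hours-per-unit figure is an illustrative assumption, as in the example.

```python
def factory_capacity(machines, units_per_machine_hour, shift_hours,
                     operators_per_shift, labor_hours_per_unit,
                     efficiency=0.8, utilization=0.9):
    """Units per shift, limited by whichever resource runs out first."""
    machine_cap = machines * units_per_machine_hour * shift_hours
    labor_cap = operators_per_shift * shift_hours / labor_hours_per_unit
    bottleneck = min(machine_cap, labor_cap)  # capacity is NOT the sum
    return bottleneck * efficiency * utilization

# The example above: 5 machines x 100 units/h, 8 h shifts, 10 operators,
# ~0.015 labor-hours per unit (an assumed handling time).
print(f"{factory_capacity(5, 100, 8, 10, 0.015):,.0f} units/shift")  # ~2,880
```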
-
The Fibonacci sequence, a mathematical pattern rooted in nature’s growth dynamics, has transcended theoretical confines to become a cornerstone of sustainable industrial innovation. This study explores its pervasive presence in natural systems, from phyllotaxis in plants to spiral galaxies, and its subsequent adaptation into modern engineering and manufacturing processes. By harnessing Fibonacci-derived structures, industries achieve material efficiency, energy optimization, and waste reduction, aligning with circular economy principles. Civil engineers utilize Fibonacci spirals for load-bearing designs that minimize steel usage, while data scientists employ sequence-based algorithms to model resource distribution networks. In product design, biomimetic Fibonacci patterns reduce material waste without compromising structural integrity. The sequence’s role in bridging natural efficiency with industrial applications is exemplified in photovoltaic array configurations, where Fibonacci spacing enhances light absorption. This paper demonstrates how nature-inspired mathematics fosters cleaner production paradigms, offering scalable solutions for global sustainability challenges.
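The abstract stays high-level, but one concrete and well-known instance of Fibonacci-style spacing is Vogel's phyllotaxis model, in which successive elements are placed at the golden angle (about 137.5°). The Python sketch below generates such a layout; applying it to photovoltaic or heliostat placement, as the abstract describes, is the study's domain, and the point count here is an arbitrary illustration.

```python
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # ~2.39996 rad, ~137.5 degrees

def vogel_layout(n_points, scale=1.0):
    """Vogel's phyllotaxis model: point k sits at radius scale*sqrt(k) and
    angle k * golden angle, mimicking seed packing in a sunflower head.
    The sqrt(k) radius keeps point density roughly uniform over the disc."""
    points = []
    for k in range(1, n_points + 1):
        r = scale * math.sqrt(k)
        theta = k * GOLDEN_ANGLE
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# 200 positions; because the golden angle is irrational in turns, points
# never line up into straight rows, a property explored in heliostat and
# panel layouts to reduce mutual shading and blocking.
for x, y in vogel_layout(200)[:5]:
    print(f"({x:6.2f}, {y:6.2f})")
```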
-
Before you map the workflow, identify bottlenecks, or implement any automation or optimisation, there's one number you absolutely need to know. It's not the number of steps in your process, your error rate, or your processing time. It's how many times the process runs.

I've seen teams jump straight into optimising a process without first asking: "How often does this actually happen?" Imagine you spend three weeks automating a manual process that takes two hours to complete. But if that process only runs once a year, you've just invested 120 hours to save 2 hours annually. At that rate, it'll take 60 years to break even.

Now imagine a different scenario: a task that takes just five minutes but happens 500 times per day. That's 2,500 minutes daily, or over 41 hours. If you can shave even one minute off that task, you're saving 8+ hours every single day. And that's really where the real impact lives.

Here's how to approach any process improvement opportunity (a short calculator sketch follows this post):

⏩ Step 1: Count the frequency
How many times does this process run per day? Per week? Per month? Get the actual number, not an estimate.

⏩ Step 2: Calculate the current cost
Frequency × Time per instance × Cost per hour = Total current cost
A 30-minute process that runs 20 times per day with a $50/hour resource cost = 10 hours daily = $500/day = $130,000/year

⏩ Step 3: Determine realistic improvement
Be honest about how much time you can actually save. Cutting a 30-minute process to 20 minutes is realistic. Cutting it to 2 minutes might not be.

⏩ Step 4: Calculate ROI
Time saved × Frequency × Cost per hour = Annual savings
Compare that to your estimated implementation time and cost.

Once you know the frequency, you can properly prioritise:

⏩ High Frequency + High Time = Urgent Priority
Optimise these first. Even small improvements yield massive returns.

⏩ High Frequency + Low Time = Watch Carefully
These can sneak up on you. Don't ignore these.

⏩ Low Frequency + High Time = Evaluate Case by Case
These feel important because each instance is painful, but the aggregate impact might be low. Consider whether simplification (not automation) might be enough.

⏩ Low Frequency + Low Time = Deprioritise
Unless there's a compliance or quality issue, leave these alone. Your time is better spent elsewhere.

On a final note, process improvement shouldn't be about making every process perfect but about making the right processes better. And you can't know which processes are "right" without knowing the frequency. So, before you start your next improvement project, ask yourself: "How many times does this actually run?" That number might just save you weeks of wasted effort.

#processefficiency #processanalysis
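Steps 2 and 4 fold into a few lines of Python. This is a hedged sketch: the workday count and the savings scenario are illustrative assumptions (260 workdays/year reproduces the post's $130,000 figure).

```python
def annual_cost(runs_per_day, minutes_per_run, cost_per_hour, workdays=260):
    """Step 2: frequency x time per instance x cost per hour, annualised."""
    hours_per_day = runs_per_day * minutes_per_run / 60
    return hours_per_day * cost_per_hour * workdays

def payback_years(minutes_saved_per_run, runs_per_day, cost_per_hour,
                  implementation_hours, workdays=260):
    """Step 4: implementation cost divided by annual savings."""
    annual_savings = annual_cost(runs_per_day, minutes_saved_per_run,
                                 cost_per_hour, workdays)
    return implementation_hours * cost_per_hour / annual_savings

# The post's example: a 30-minute process, 20 runs/day, $50/hour resource.
print(f"current cost: ${annual_cost(20, 30, 50):,.0f}/year")      # $130,000
# Hypothetical improvement: save 10 minutes/run for 80 hours of build effort.
print(f"payback: {payback_years(10, 20, 50, 80):.2f} years")      # ~0.09
```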
-
🟠 [Story 11] "𝙏𝙤 𝙛𝙞𝙣𝙙 𝙤𝙪𝙩 𝙬𝙝𝙖𝙩 𝙝𝙖𝙥𝙥𝙚𝙣𝙨 𝙬𝙝𝙚𝙣 𝙮𝙤𝙪 𝙘𝙝𝙖𝙣𝙜𝙚 𝙨𝙤𝙢𝙚𝙩𝙝𝙞𝙣𝙜, 𝙞𝙩 𝙞𝙨 𝙣𝙚𝙘𝙚𝙨𝙨𝙖𝙧𝙮 𝙩𝙤 𝙘𝙝𝙖𝙣𝙜𝙚 𝙞𝙩 - 𝙗𝙪𝙩 𝙣𝙤𝙩 𝙣𝙚𝙘𝙚𝙨𝙨𝙖𝙧𝙞𝙡𝙮 𝙤𝙣𝙚 𝙩𝙝𝙞𝙣𝙜 𝙖𝙩 𝙖 𝙩𝙞𝙢𝙚!" -George Box

🚨🚨 Wait..... What? In A/B testing, we recommend only one change at a time. So what is Box even saying here? (CROs, stick till the end...)

Have you ever stared at 7 different ML model parameters, each with 10 possible values, realizing you would need to run 10^7 = 10 million experiments to try every combination? 🤯 Most data scientists today simply use random search or Bayesian optimization. But the fascinating solution to this "many parameters, limited time" problem emerged from a food crisis during World War 2.

Today, we struggle with tuning machine learning models by juggling learning rates, numbers of layers, dropout rates, batch sizes, etc. In the 1980s, manufacturing engineers faced similar challenges optimizing their production lines. But it all started in the 1940s, when Britain was desperately trying to grow enough food during wartime blockades.

George Box, a young chemist in 1940s Britain, faced what seemed like an impossible task: optimize fertilizer production when each experiment took days, resources were scarce, and they needed answers fast. The traditional approach (still considered best practice today in A/B testing) is to change one variable at a time and observe. It is painfully slow: test one temperature value, then try different pressures, then adjust concentrations, then modify timing. Twenty separate experiments that completely missed how these factors might interact with each other.

Box's breakthrough came by asking: what if we changed multiple factors at once, but in a mathematically clever way? He developed 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗲 𝗦𝘂𝗿𝗳𝗮𝗰𝗲 𝗠𝗲𝘁𝗵𝗼𝗱𝗼𝗹𝗼𝗴𝘆 (𝗥𝗦𝗠), a mathematical framework that could map out how multiple factors interacted while requiring exponentially fewer experiments. What once took months could now be done in days with far fewer resources, and with highly reliable results.

This breakthrough transforms how modern companies run experiments today. Let's walk through a simple CRO example: determining the number of form fields (say 3 to 7) and the submit button size (40px to 80px wide). RSM can be used to design just 5 experiments (or variants) for conversion rates:
- High (7 fields) | High (80 px)
- High (7 fields) | Low (40 px)
- Low (3 fields) | High (80 px)
- Low (3 fields) | Low (40 px)
- Center (5 fields) | Center (60 px)

This measures interaction effects while keeping the search efficient; a short sketch of fitting such a design follows this post. The method underpins modern hyperparameter tuning, manufacturing optimization, and drug development processes. It works best when you have continuous variables and large enough samples.

PS: I post a story like this every Sunday. Check out the previous ten linked in the comments 👇
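To show what "mathematically clever" buys you, here is a minimal NumPy sketch: the post's five-run design fitted with a first-order-plus-interaction model, y = b0 + b1·x1 + b2·x2 + b12·x1·x2, by least squares. The conversion rates are made-up numbers purely for illustration.

```python
import numpy as np

# The post's 5-run design in coded units: -1 = low, 0 = center, +1 = high.
# x1 = number of form fields (3..7), x2 = submit button width (40..80 px).
design = np.array([
    [ 1,  1],   # 7 fields, 80 px
    [ 1, -1],   # 7 fields, 40 px
    [-1,  1],   # 3 fields, 80 px
    [-1, -1],   # 3 fields, 40 px
    [ 0,  0],   # 5 fields, 60 px (center point)
], dtype=float)

# Hypothetical measured conversion rates for each variant.
y = np.array([0.052, 0.048, 0.071, 0.060, 0.062])

# Model matrix for y = b0 + b1*x1 + b2*x2 + b12*x1*x2.
X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12 = coef
print(f"intercept {b0:.4f}, fields effect {b1:.4f}, "
      f"width effect {b2:.4f}, interaction {b12:.4f}")
# A nonzero b12 is exactly the interaction that a one-factor-at-a-time
# test structurally cannot see.
```

Five runs estimate four coefficients, including the interaction; testing one factor at a time would need more runs and still leave b12 unidentified, which is Box's point in the opening quote.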