New Math Data


Information Technology & Services

Houston, Texas · 1,861 followers

AWS Advanced Tier Partner

About us

New Math Data was founded in 2018 to help clients with their cloud needs and has since evolved into analyzing and forecasting massive data sets for the power industry. Along the way, New Math Data has built a reliable, dedicated team of solutions architects and engineers that has proudly partnered with numerous customers to help them build a strong AWS footprint. NMD is an AWS Advanced Tier Partner. Right decisions. Lean architecture. Faster to production.

Website
http://newmathdata.com
Industry
Information Technology & Services
Company size
51-200 employees
Headquarters
Houston, Texas
Type
Privately Held
Founded
2018
Specialties
Data Analytics, Machine Learning, Data Engineering, AI, Artificial Intelligence, Agentic AI, GenAI, and AWS


Updates

  • 60% faster oil well scenario creation. ⚙️ That’s what happens when AI is embedded into the workflow instead of bolted on as a surface-level chatbot. Xecta needed a better way for engineers to build and modify complex well scenarios inside its Production Advisor platform, so a conversational AI interface was implemented that translates natural language into structured, backend-ready scenarios. The result:
    • 60% reduction in scenario build time
    • 25% increase in platform usage
    • 50% faster onboarding for new users
    Real AI value comes from removing friction inside complex workflows, not from adding another chat box to the screen. 🚀 #ArtificialIntelligence #DataEngineering #CloudComputing #MachineLearning #DigitalTransformation

  • Adding a new table to a Debezium CDC pipeline sounds simple... until you realize Debezium won’t backfill historical rows by default. That means your downstream systems only receive future changes, while all existing data in the new table gets missed. ⚠️ For teams syncing data into warehouses like Redshift, that creates an immediate consistency problem. We recently tackled this exact challenge while working with AWS MSK Connect + Debezium. Here’s the approach:
    1️⃣ Add the new table to table.include.list
    2️⃣ Use Debezium incremental snapshots
    3️⃣ Trigger the snapshot through a database-based signaling table
    4️⃣ Backfill historical records while live CDC continues uninterrupted
    Why this matters:
    ✅ No connector rebuilds
    ✅ No downtime for existing streams
    ✅ No table locking
    ✅ No duplicate connector overhead
    It’s a much cleaner way to evolve CDC pipelines as source schemas grow over time. If your team is running Debezium in production, this is one of those implementation details worth understanding before it becomes a painful edge case. If you want the full breakdown, we’ve got you covered. Read the article right here on LinkedIn. #DataEngineering #CloudArchitecture #AWS #Kafka #Debezium
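    For reference, the trigger in step 3️⃣ is just a row inserted into the signaling table that Debezium watches for commands. A minimal sketch, assuming a relational source with a schema named inventory and a signaling table inventory.debezium_signal (the schema and table names here are illustrative — yours come from your own connector config):

    ```sql
    -- Connector config (excerpt) that must already be in place:
    --   table.include.list     = inventory.orders,inventory.new_table
    --   signal.data.collection = inventory.debezium_signal
    -- Inserting this row asks Debezium to run an incremental snapshot
    -- of the newly added table while live streaming continues.
    INSERT INTO inventory.debezium_signal (id, type, data)
    VALUES (
      'ad-hoc-backfill-1',   -- arbitrary unique id for the signal
      'execute-snapshot',    -- signal type understood by Debezium
      '{"data-collections": ["inventory.new_table"], "type": "incremental"}'
    );
    ```

    Debezium processes the snapshot in chunks interleaved with live change events, which is why no table locks or connector restarts are needed.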

  • They turned hours of supply chain risk analysis into seconds. ⚡ A global research company was sitting on massive volumes of risk data across spreadsheets, PDFs, reports, and images. The problem wasn’t a lack of data. It was that none of it connected, so every investigation meant manually digging, cross-referencing, and hoping nothing got missed. The answer wasn’t "just implement AI". It was building a system that could actually make the data usable. Together, we helped implement:
    • A multimodal pipeline to ingest everything, structured and unstructured
    • A unified knowledge layer using vector search to connect it all
    • RAG powered by Claude to generate answers in real time
    • A conversational interface so analysts could query the system directly instead of hunting through files
    The impact was immediate. 💯 Research cycles that used to take hours dropped to seconds. Data that lived in silos became connected intelligence. Analysts moved from manual digging to asking better questions. And the biggest shift wasn’t just speed: they started uncovering relationships across suppliers, entities, and risks that were previously buried in disconnected data. That’s the difference between having data and actually being able to use it. If you're curious about the full story, read our case study or contact us at any time. https://lnkd.in/gaA_BN2A #ArtificialIntelligence #CloudComputing #DataEngineering #MachineLearning #SupplyChain 🚀
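    As a rough illustration of the retrieval step in a RAG pipeline like the one described above (this is a toy sketch, not the production system — the 3-dimensional vectors stand in for a real embedding model and vector store):

    ```python
    # Minimal RAG retrieval sketch: documents and the query are embedded as
    # vectors, the top-k most similar documents are pulled from the knowledge
    # layer, and only those are handed to the LLM as context.
    import math

    def cosine(a, b):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def retrieve(query_vec, corpus, k=2):
        """Return the k documents whose embeddings are closest to the query."""
        ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
        return ranked[:k]

    corpus = [
        {"text": "Supplier A flagged for sanctions risk in 2023 report", "vec": [0.9, 0.1, 0.0]},
        {"text": "Warehouse B lease renewal schedule",                   "vec": [0.0, 0.2, 0.9]},
        {"text": "Supplier A shares ownership with entity C",            "vec": [0.8, 0.3, 0.1]},
    ]

    query_vec = [0.9, 0.2, 0.0]  # pretend embedding of "what risks involve Supplier A?"
    context = retrieve(query_vec, corpus)
    prompt = "Answer using only this context:\n" + "\n".join(d["text"] for d in context)
    # `prompt` would then go to the LLM (e.g. Claude) to generate the answer.
    print(prompt)
    ```

    The point of the pattern: the model never sees the whole corpus, only the few passages the vector search says are relevant, which is what keeps answers fast and grounded.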

  • What is Claude Code? 👀 It’s one of the biggest buzzwords in AI engineering right now… but most people are barely scratching the surface. A lot of teams are still using it like a smarter autocomplete. That’s not where things get interesting. The real shift is this: You’re no longer working with one assistant. You’re orchestrating a team ⚙️ Multiple agents. Each with a role. Running in parallel. One explores your codebase. Another writes. Another debugs. Another reviews. All at the same time. 🤯 That changes how work gets done. Faster iteration. Less waiting. More structured workflows that actually scale as things get more complex. And here’s the part most people will miss… The gap won’t come from who has access to AI. It’ll come from who knows how to use it like a system. The teams that figure this out early will move faster, ship faster, and solve harder problems. Everyone else will still be prompting one model… one task at a time. We broke this down step-by-step, including how to set up agent “swarms” and where things can go wrong. And it’s all right here on LinkedIn. No need to leave the platform. If you’ve been hearing the term “Claude Code” and wondering what the hype is about, this is a good place to start. 🙌 #ArtificialIntelligence #CloudComputing #DataEngineering #MachineLearning #SoftwareEngineering 🚀
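    To make the orchestration idea concrete, here is a toy Python sketch of the "team of agents" pattern (this is not Claude Code's actual API — the agent functions are stubs standing in for real model calls):

    ```python
    # Each agent has a role; a coordinator fans one task out to all of them
    # in parallel and gathers their reports, instead of prompting one model
    # one step at a time.
    from concurrent.futures import ThreadPoolExecutor

    def explore(task):  return f"explorer: mapped code paths for '{task}'"
    def write(task):    return f"writer: drafted implementation for '{task}'"
    def debug(task):    return f"debugger: checked failure modes for '{task}'"
    def review(task):   return f"reviewer: flagged style issues in '{task}'"

    AGENTS = {"explore": explore, "write": write, "debug": debug, "review": review}

    def run_swarm(task):
        """Run every agent on the task concurrently and collect their reports."""
        with ThreadPoolExecutor(max_workers=len(AGENTS)) as pool:
            futures = {role: pool.submit(fn, task) for role, fn in AGENTS.items()}
            return {role: f.result() for role, f in futures.items()}

    reports = run_swarm("add retry logic to the upload client")
    for role, report in reports.items():
        print(role, "->", report)
    ```

    In a real setup each stub would be a model call with its own system prompt and tool access, but the coordination shape — roles, parallel fan-out, gathered results — is the part that changes how work gets done.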

  • Not everything we build lives in the cloud. Sometimes it shows up at a cycling event in Texas. 🚴 Did you spot the New Math Data gear at Gran Fondo Texas? Behind every architecture, pipeline, or system is a team that does more than just talk code. It’s always interesting to see where the people behind the work go when they step away from the keyboard. Tell us, what do your days off look like? #CloudComputing #DataEngineering #MachineLearning #TechCulture #EngineeringTeams

    • New Math Data swag at the Gran Fondo Texas
  • Your grid is producing more data than ever. So why are outages still catching teams off guard? ⚡ It’s not a data problem. It’s a visibility problem. Modern power systems are flooded with inputs. Smart meters, sensors, distributed energy sources… the volume isn’t slowing down. But more data doesn’t automatically lead to better decisions. What are successful teams actually doing? They don’t just collect data. They visualize it across time, location, and system behavior. That shift changes everything. Because once patterns become visible, teams can:
    • Catch anomalies before they escalate
    • Understand load changes as they happen
    • Pinpoint weak spots across the grid
    • Respond faster with real confidence
    But here’s where it gets challenging… As grids grow more complex, traditional dashboards start to fall short. Now you’re dealing with real-time data streams, spatial dependencies, and multiple variables interacting at once. And that’s where most systems start to break down. There’s a deeper layer to how leading teams are approaching this, especially when combining visualization with advanced analytics and predictive models. Curious? Read our latest article to find out how: https://lnkd.in/gNDaBKZW #DataEngineering #CloudComputing #EnergyTech #DataVisualization #PowerSystems
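    As a toy illustration of the anomaly-catching idea (real grid telemetry is streaming and multivariate; the readings and threshold here are made up), a rolling z-score on a single meter looks like this:

    ```python
    # Flag load readings that deviate sharply from a trailing baseline window.
    from statistics import mean, stdev

    def flag_anomalies(readings, window=5, threshold=3.0):
        """Return indices whose value is > threshold std-devs from the trailing window."""
        anomalies = []
        for i in range(window, len(readings)):
            base = readings[i - window:i]
            mu, sigma = mean(base), stdev(base)
            if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
                anomalies.append(i)
        return anomalies

    load_mw = [100, 101, 99, 100, 102, 101, 100, 160, 101, 100]  # spike at index 7
    print(flag_anomalies(load_mw))  # -> [7]
    ```

    The dashboards the post describes layer this kind of signal across thousands of meters, plus location and time, which is exactly where single-metric views start to fall short.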

  • Can’t stop. Won’t stop. 💪 Nathan Warren just earned the Amazon Web Services (AWS) Certified Advanced Networking - Specialty. This one goes deep. Hybrid connectivity, network architecture, security, performance optimization across complex AWS environments. Not an easy certification to pass, and definitely not a common one. Another strong signal of the depth we’re building across the New Math Data team. 👏 #AWS #CloudNetworking #CloudComputing #AWSCertified #DataEngineering

    • Nathan Warren achieves AWS Certified Advanced Networking - Specialty
  • What happens when engineers rely on AI agents… but never learn the fundamentals? Agentic AI is quickly changing how software gets built. Instead of tools that generate code or answers, we now have systems that can plan tasks, call APIs, and execute multi-step workflows across services. ⚙️ That’s powerful. But it also raises an important question for the next generation of engineers. If AI agents can design architectures, write code, and orchestrate systems, what happens when the people using them never develop the underlying engineering instincts? Understanding system boundaries, permissions, observability, and risk becomes even more important as AI systems gain autonomy. Because when an agent makes a mistake, it doesn’t just produce a bad output.🚨 It can trigger real system actions. We explored this idea in a short piece on why responsibility and fundamentals still matter in the age of agentic AI. 📖 It’s a quick 5-minute read if you’re thinking about how AI will shape the next generation of engineers. https://lnkd.in/gwexDUiq #ArtificialIntelligence #AgenticAI #AIEngineering #SoftwareArchitecture #ResponsibleAI

  • Python developers have quietly been dealing with the same pain for years. 🩼 Dependency installs that feel slow. Multiple tools just to manage environments. CI pipelines that spend way too long installing packages. pip. virtualenv. poetry. pipenv. Each solves a piece of the problem… but the ecosystem ends up fragmented. A newer tool called uv is starting to change that. It replaces multiple parts of the Python packaging workflow with a single, extremely fast tool built in Rust. We tested it and broke down:
    • how it works
    • why it’s so fast
    • how teams can start using it immediately
    The best part? You can read the entire article right here on LinkedIn. If you work with Python environments, dependency management, or CI pipelines, this will be the best 10 minutes you spend today. #Python #SoftwareEngineering #DeveloperTools #CloudEngineering #DataEngineering
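    For a taste of the workflow, a minimal command sketch (assuming uv is already installed — each command mirrors a familiar tool from the fragmented stack it replaces):

    ```shell
    # Create a virtual environment (replaces `python -m venv .venv`)
    uv venv

    # Install packages into it (replaces `pip install`); resolution and
    # downloads run in parallel via uv's Rust resolver
    uv pip install requests pandas

    # Pin exact versions for CI (replaces pip-tools' pip-compile)
    uv pip compile requirements.in -o requirements.txt

    # Make the environment match the pinned file exactly (replaces pip-sync)
    uv pip sync requirements.txt
    ```

    The draw for CI is that one static binary covers environment creation, installation, and locking, with no bootstrap Python tooling to install first.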

  • When search infrastructure slows down, the entire product experience suffers. 📉 That’s exactly the challenge UpContent, a content discovery and curation platform, was facing. As their platform grew, their search layer needed to handle more data, more queries, and higher reliability requirements. The existing setup was starting to introduce operational complexity and performance constraints. So the team set out to modernize. ⚙️ Working together, we helped UpContent transition their search architecture to Amazon OpenSearch Service on AWS, focusing on:
    • scalable search infrastructure
    • improved operational automation
    • stronger cloud-native security
    • optimized search performance 🚀
    The result? A faster, more reliable search experience for users and a platform architecture built to support future growth. Search is often one of the most critical components of modern SaaS platforms. When it works well, users barely notice. When it doesn’t, everything feels slower. This case study walks through how the UpContent team modernized their search platform and what the migration looked like. It’s a quick read if you’re thinking about modernizing search or improving platform scalability. https://lnkd.in/g2985_EQ #AWS #OpenSearch #CloudArchitecture #DataEngineering #SaaS
