Agentic AI Is Not the Future: Why Classical Machine Learning Is the Sustainable Choice
Credit: cover image generated locally with Stable Diffusion, with final edits in Photoshop.


As I scroll through my LinkedIn feed, I'm bombarded with posts about "Agentic AI" being the future of everything. Companies are rushing to implement LLM-based agents for tasks that could be solved with a simple logistic regression. It's time we had an honest conversation about this madness.

Given my entry-level hands-on experience with AI and machine learning, I believe the hype around Agentic AI is not only overstated but potentially detrimental. Implementing solutions via Agentic AI often results in inefficient resource use, skyrocketing computational demands, and significant environmental costs. Instead, we should revisit the tried-and-true foundations of classical machine learning (ML), which offer customized, efficient, and sustainable alternatives. In this post, I'll unpack why Agentic AI is causing more harm than good and make a case for why classical ML should be our go-to approach for most real-world problems.

The Disarray Surrounding Agentic AI Adoption

Let's start with the elephant in the room: environmental impact. Every time you invoke an LLM agent for a task, you're:

  • Consuming 10-100x more electricity than necessary
  • Generating significant heat that requires additional cooling
  • Contributing to data center expansion that's already straining our power grids

A widely cited estimate puts GPT-3's training run at roughly 1,300 MWh, enough electricity to power an average American home for about 120 years. Now imagine using such models for simple classification tasks that a 10 KB decision tree could handle.
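That 120-year figure is easy to sanity-check. Assuming the commonly cited ~1,287 MWh estimate for GPT-3's training run and roughly 10.7 MWh/year for an average US household (both external estimates, not measurements from this article):

```python
# Back-of-envelope check on the training-energy claim.
# Assumed figures: ~1,287 MWh for GPT-3 training (a commonly cited estimate)
# and ~10.7 MWh/year for an average US household (EIA ballpark).
GPT3_TRAINING_MWH = 1287
AVG_US_HOME_MWH_PER_YEAR = 10.7

years_of_household_power = GPT3_TRAINING_MWH / AVG_US_HOME_MWH_PER_YEAR
print(f"~{years_of_household_power:.0f} years of household electricity")  # ~120 years
```

The exact inputs are debatable, but the order of magnitude is what matters here.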

The chaos stems from a "one-size-fits-all" mindset. Businesses are implementing Agentic AI for tasks where simpler solutions suffice, ignoring trade-offs like scalability challenges and high implementation barriers. In contrast, classical ML offers tailored, efficient alternatives that minimize these issues.

Why Classical ML Wins: Efficiency, Customization, and Sustainability

Classical ML focuses on data-driven models like regression or clustering, which are inherently more resource-efficient and customizable. These methods require less computational overhead because they don't involve the ongoing, autonomous reasoning of Agentic AI. For instance, traditional ML can optimize workflows with predefined rules, reducing energy use compared to the adaptive, power-hungry nature of agentic systems.
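To make "less computational overhead" concrete, here is a minimal sketch of a classical model training on commodity hardware; the synthetic dataset and its size are illustrative assumptions, not figures from any benchmark:

```python
# Minimal sketch: a classical model trains in well under a second on a laptop.
# The synthetic dataset below is an illustrative assumption.
import time
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 20))              # 10k samples, 20 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # linearly separable labels

start = time.time()
model = LogisticRegression(max_iter=1000).fit(X, y)
elapsed = time.time() - start

print(f"Trained on 10,000 rows in {elapsed:.3f}s")
print(f"Training accuracy: {model.score(X, y):.2f}")
```

No GPU, no API calls, no per-prediction cost: this is the efficiency gap the rest of the article quantifies.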

Key advantages include:

  • Customization: Classical ML allows fine-tuning for specific problems, avoiding the bloat of general-purpose agents.
  • Efficiency: It uses fewer resources, with models that train quickly and run on modest hardware, cutting electricity needs.
  • Environmental Benefits: By sidestepping LLM-heavy processing, it generates less server heat and lowers carbon footprints.
  • Trade-off Management: Issues like accuracy vs. speed are better handled, as classical approaches can be optimized without the unpredictability of autonomous agents.

In essence, classical ML addresses most trade-offs—cost, speed, and sustainability—without the chaos of over-engineering.

Real-World Example: Anomaly Detection

Let me demonstrate with a practical example. Here's how most companies are approaching anomaly detection today:

The Agentic AI Approach (The Overkill)

# Agentic AI Approach - Using LLM for Anomaly Detection
import openai
import pandas as pd
import json
import time

class LLMAnomalyDetector:
    def __init__(self, api_key):
        self.client = openai.OpenAI(api_key=api_key)
        self.token_count = 0
        self.cost = 0
        
    def detect_anomalies(self, data):
        """
        Uses LLM to detect anomalies - inefficient and expensive
        """
        anomalies = []
        
        for idx, row in data.iterrows():
            # Converting each row to text for LLM processing
            prompt = f"""
            Analyze this sensor data and determine if it's anomalous:
            Temperature: {row['temperature']}°C
            Pressure: {row['pressure']} bar
            Vibration: {row['vibration']} Hz
            Previous 5 readings average:
            Temperature: {row['temp_avg']}°C
            Pressure: {row['pressure_avg']} bar
            Vibration: {row['vibration_avg']} Hz
            
            Respond with JSON: {{"is_anomaly": true/false, "confidence": 0-1}}
            """
            
            # API call - costs money and time
            response = self.client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
                temperature=0
            )
            
            # Track usage
            self.token_count += response.usage.total_tokens
            self.cost += (response.usage.total_tokens / 1000) * 0.03
            
            result = json.loads(response.choices[0].message.content)
            if result['is_anomaly']:
                anomalies.append(idx)
            
            time.sleep(0.1)  # Rate limiting
            
        return anomalies

# Usage
detector = LLMAnomalyDetector(api_key="your-key")
anomalies = detector.detect_anomalies(sensor_data)
print(f"Cost: ${detector.cost:.2f}")
print(f"Time: ~{len(sensor_data) * 1.5} seconds")
        

The Classical ML Approach (The Smart Way)

# Classical ML Approach - Isolation Forest for Anomaly Detection
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
import pandas as pd
import time

class EfficientAnomalyDetector:
    def __init__(self):
        self.scaler = StandardScaler()
        self.model = IsolationForest(
            contamination=0.1,  # Expected proportion of anomalies
            random_state=42,
            n_estimators=100
        )
        self.training_time = 0
        self.inference_time = 0
        
    def prepare_features(self, data):
        """
        Feature engineering for better anomaly detection
        """
        features = data.copy()
        
        # Add derived features
        features['temp_deviation'] = features['temperature'] - features['temp_avg']
        features['pressure_deviation'] = features['pressure'] - features['pressure_avg']
        features['vibration_deviation'] = features['vibration'] - features['vibration_avg']
        
        # Rolling statistics
        features['temp_rolling_std'] = features['temperature'].rolling(5).std()
        features['pressure_rolling_std'] = features['pressure'].rolling(5).std()
        
        # Rate of change
        features['temp_rate_change'] = features['temperature'].diff()
        features['pressure_rate_change'] = features['pressure'].diff()
        
        return features.fillna(0)
    
    def train(self, training_data):
        """
        Train the model - happens once, runs anywhere
        """
        start_time = time.time()
        
        features = self.prepare_features(training_data)
        feature_cols = ['temperature', 'pressure', 'vibration',
                       'temp_deviation', 'pressure_deviation', 'vibration_deviation',
                       'temp_rolling_std', 'pressure_rolling_std',
                       'temp_rate_change', 'pressure_rate_change']
        
        X = features[feature_cols]
        X_scaled = self.scaler.fit_transform(X)
        self.model.fit(X_scaled)
        
        self.training_time = time.time() - start_time
        
    def detect_anomalies(self, data):
        """
        Detect anomalies - milliseconds per prediction
        """
        start_time = time.time()
        
        features = self.prepare_features(data)
        feature_cols = ['temperature', 'pressure', 'vibration',
                       'temp_deviation', 'pressure_deviation', 'vibration_deviation',
                       'temp_rolling_std', 'pressure_rolling_std',
                       'temp_rate_change', 'pressure_rate_change']
        
        X = features[feature_cols]
        X_scaled = self.scaler.transform(X)
        
        # -1 for anomalies, 1 for normal
        predictions = self.model.predict(X_scaled)
        anomaly_scores = self.model.score_samples(X_scaled)
        
        anomalies = data.index[predictions == -1].tolist()
        
        self.inference_time = time.time() - start_time
        
        return anomalies, anomaly_scores

# Usage
detector = EfficientAnomalyDetector()
detector.train(training_data)
anomalies, scores = detector.detect_anomalies(sensor_data)
print(f"Training time: {detector.training_time:.3f} seconds")
print(f"Inference time: {detector.inference_time:.3f} seconds for {len(sensor_data)} samples")
print(f"Cost: $0.00 (after initial training)")
        
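If you want to try the classical approach end to end without real sensor logs, a standalone synthetic run like the following works; the data generator, column names, and contamination rate are illustrative assumptions:

```python
# Standalone sketch: IsolationForest on synthetic sensor data with injected spikes.
# All data below is synthetic; column names mirror the example above for continuity.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "temperature": rng.normal(70, 2, n),
    "pressure": rng.normal(5, 0.2, n),
    "vibration": rng.normal(50, 5, n),
})

# Inject obvious anomalies at known positions so we can verify detection
spikes = [100, 400, 800]
df.loc[spikes, "temperature"] += 25

X = StandardScaler().fit_transform(df)
model = IsolationForest(contamination=0.01, random_state=42).fit(X)
flagged = df.index[model.predict(X) == -1].tolist()

print(f"Flagged {len(flagged)} rows; injected spikes recovered: "
      f"{set(spikes).issubset(flagged)}")
```

The whole run, including training, finishes in well under a second on a laptop, which is the point.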

The Shocking Comparison

Let's compare these approaches with real numbers:

[Image: Comparison (Agentic vs Classical)]

The hard truth: for 90% of business problems, you're using a sledgehammer to crack a nut.

The Path Forward

Before jumping on the Agentic AI bandwagon, ask yourself:

  1. Can this be solved with classical ML? (Usually yes)
  2. What's the environmental cost?
  3. Is the added complexity worth it?
  4. What's the TCO over 5 years?
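On question 4, a hedged back-of-envelope comparison makes the point; every price and volume below is an illustrative assumption, not a vendor quote:

```python
# Rough 5-year TCO sketch. All figures are illustrative assumptions.
calls_per_day = 10_000
tokens_per_call = 500
llm_price_per_1k_tokens = 0.03      # assumed per-1k-token API price
llm_5yr = (calls_per_day * tokens_per_call / 1000
           * llm_price_per_1k_tokens * 365 * 5)

classical_server_per_month = 50     # assumed cost of a small VM running the model
classical_5yr = classical_server_per_month * 12 * 5

print(f"LLM agent, 5 years:    ${llm_5yr:,.0f}")
print(f"Classical ML, 5 years: ${classical_5yr:,.0f}")
```

Swap in your own volumes and prices; under most realistic inputs the gap stays one to two orders of magnitude.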

Call to Action

It's time we moved past the hype and built sustainable, efficient solutions. The next time a vendor tries to sell you an "AI Agent" for your anomaly detection, forecasting, or classification needs, show them these numbers.

Let's build a future where AI is efficient, not just impressive. Where solutions are sustainable, not just cutting-edge. Where we solve real problems, not create new ones.

What's your take? Have you encountered Agentic AI pitfalls in your work, or seen classical ML shine? Let's discuss in the comments.

Note: This article reflects my perspective based on current trends in AI. Always evaluate technologies in the context of your specific needs.
