Forecasting is an important tool in the data science toolkit, and its value is especially clear for ride-hailing platforms like Lyft, where understanding granular supply and demand dynamics is critical. In a recent tech blog, Lyft’s engineering team shared how they tackle this challenge through real-time spatial-temporal forecasting—predicting demand and supply every 5 minutes across millions of small geographic cells.

The team evaluated two major model families for this task: classical time-series models and deep neural networks. Deep learning performs well in offline settings because it captures richer spatial and temporal patterns. But in real-time environments—where models must be retrained frequently and run with ultra-low latency—classical models often outperform. Their ability to refit every few minutes makes them better at handling sudden spikes, especially for short-term predictions within 5 to 30 minutes.

Engineering cost is also a major factor. Deep learning requires heavy GPU compute and more operational overhead, while classical models are lightweight, inexpensive at scale, and easier to maintain.

This study is a great reminder that practical ML is all about balance. The most accurate model on paper isn’t always the best model in production. Understanding the nature of the data, the latency constraints, and the operational cost often matters just as much as the algorithm itself.

#DataScience #MachineLearning #Algorithm #Forecasting #TimeSeries #Tradeoff #SnacksWeeklyonDataScience

– – –

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gFYvfB8V
-- Youtube: https://lnkd.in/gcwPeBmR
https://lnkd.in/gGK4E9qj
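To make the "refit every few minutes" idea from the Lyft post concrete, here is a minimal sketch, not Lyft's actual pipeline: a lightweight exponential-smoothing model is refit on the most recent 5-minute demand counts for a single geographic cell and used to forecast the next 30 minutes. The window length, model choice, and sample data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def forecast_cell_demand(recent_counts: pd.Series, horizon_steps: int = 6) -> np.ndarray:
    """Refit a small classical model on recent 5-minute demand counts for one
    geo cell and forecast the next `horizon_steps` intervals (6 x 5 min = 30 min)."""
    # Additive trend, no seasonality: cheap enough to refit every few minutes.
    model = ExponentialSmoothing(recent_counts, trend="add", seasonal=None)
    fitted = model.fit(optimized=True)
    return fitted.forecast(horizon_steps).to_numpy()

# Hypothetical example: the last 3 hours of 5-minute ride requests for one cell.
history = pd.Series(
    np.random.poisson(lam=20, size=36),
    index=pd.date_range("2024-01-01 08:00", periods=36, freq="5min"),
)
print(forecast_cell_demand(history))
```

Because the fit takes milliseconds, the same loop can run per cell on every 5-minute tick, which is exactly the property that lets a classical model react to sudden spikes faster than a periodically retrained deep model.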
Tech Market Analysis Tools
-
5 Ways Semiconductor Companies Forecast Demand Despite Long Lead Times and Highly Cyclical Markets

1. Customer Collaboration & Long-Term Supply Agreements (LTSAs)
Companies secure 12–36 month forecasts from major customers and use NCNR (non-cancellable, non-returnable) contracts to lock in demand.
Example: TSMC receives long-range demand plans from Apple for iPhone SoCs, enabling early wafer allocation. Infineon gets multi-year volume commitments from automotive OEMs for power MOSFETs and MCUs.

2. Multi-Quarter Order Backlog & Pipeline Analysis
Continuous analysis of book-to-bill ratios, backlog ageing, and order cancellations. Sharp reductions in bookings often signal a market downcycle.
Example: During the 2021 chip shortage, NXP and STMicroelectronics used 6–9 month backlogs to justify increasing wafer starts at foundries. When PC demand crashed in 2022, Intel’s falling book-to-bill warned of overcapacity.

3. Market Intelligence & Macro Indicators
Track global semiconductor reports, sector growth, and end-market signals (EVs, cloud, consumer electronics).
Example: ON Semiconductor monitors EV adoption forecasts to model future SiC MOSFET needs. Smartphone shipment trends from IDC/Gartner help Qualcomm and MediaTek predict next-year modem and SoC demand.

4. Statistical & Scenario-Based Forecast Models
Use historical patterns (seasonality of consumer devices), inventory ratios, and regression models, and run best-case, base-case, and worst-case scenarios (a toy sketch follows after this post).
Example: NVIDIA forecasts GPU demand by modeling cloud capex cycles from Amazon, Google, and Microsoft. Memory makers (Samsung, Micron) use scenario models when DRAM/NAND prices swing due to oversupply.

5. Channel Monitoring & Inventory Tracking
Track distributor inventory, sell-in vs. sell-through, and sudden stock build-up. A spike in distributor stock often indicates demand softening.
Example: Texas Instruments (TI) closely monitors distributor inventory days; rising inventory signals that the industrial market is slowing. Analog Devices (ADI) checks whether sensor ICs are stuck in channels instead of reaching OEMs.

~~~~~~

If you are looking to invest in semiconductors and need expert insights, drop us a DM.
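As a toy illustration of point 4 above, here is a hedged sketch of a scenario-based regression forecast: fit a simple linear model of chip shipments against an end-market driver (say, EV production), then project best-, base-, and worst-case demand by varying the driver. The figures and variable names are invented purely for illustration.

```python
import numpy as np

# Hypothetical history: EV units produced (millions) vs. SiC MOSFET shipments (millions).
ev_units = np.array([3.1, 4.4, 6.7, 10.2, 13.9])
chip_shipments = np.array([55, 78, 118, 176, 240])

# Fit a simple linear demand model: shipments ~ slope * ev_units + intercept.
slope, intercept = np.polyfit(ev_units, chip_shipments, deg=1)

# Scenario assumptions for next year's EV production (millions of units).
scenarios = {"worst": 15.0, "base": 18.0, "best": 22.0}
for name, ev_forecast in scenarios.items():
    print(f"{name:>5} case: ~{slope * ev_forecast + intercept:.0f}M units")
```

In practice the driver would come from third-party adoption forecasts and the model would include seasonality and inventory ratios, but the shape of the exercise is the same: one fitted relationship, several demand scenarios.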
-
Startups don’t fail because they lack data. They fail because they don’t use it to predict.

At the growth stage, everything feels urgent:
Scaling teams
Managing burn
Driving revenue
Making fast decisions

But most of those decisions are based on gut feel or past trends. What if you could actually see what’s coming next? That’s where predictive analytics comes in. Here’s how it helps:

1. Revenue Forecasting
→ Which customer segments will drive growth next quarter?
→ What’s your likely MRR based on current momentum?

2. Churn Prediction (a toy sketch follows after this post)
→ Who’s about to leave your platform or unsubscribe?
→ What action can you take to retain them?

3. Inventory & Demand Planning
→ What should you produce or stock more of?
→ Where are you overinvesting?

4. Hiring & Resource Allocation
→ Which roles will bottleneck growth if not filled?
→ Where is your team overstaffed?

5. Marketing ROI Forecasts
→ Which campaigns will likely convert highest based on behavior patterns?
→ Where should you double down?

Most growing startups operate reactively. Predictive analytics flips that, giving you a forward-looking lens to make smarter, faster, and more scalable decisions.

Curious how we help startups scale using predictive analytics? DM me. I’ll show you what’s working.

#PredictiveAnalytics #Startups #Growthstrategy #business
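The churn-prediction item above can start as something as simple as a logistic regression on usage signals. A minimal sketch with made-up features and data, purely to show the shape of the approach:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per customer: [logins_last_30d, support_tickets, months_since_signup]
X = np.array([
    [25, 0, 14],
    [ 2, 3,  3],
    [18, 1, 22],
    [ 1, 5,  2],
    [30, 0,  8],
    [ 3, 2,  6],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = churned

model = LogisticRegression().fit(X, y)

# Score current customers and flag the riskiest ones for a retention play.
current = np.array([[4, 2, 5], [27, 0, 12]])
churn_prob = model.predict_proba(current)[:, 1]
print(churn_prob)  # higher probability -> reach out first
```

The value is less in the model than in wiring its output to an action: who gets a call, an offer, or an onboarding nudge this week.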
-
𝗝𝗣𝗠𝗼𝗿𝗴𝗮𝗻 𝗧𝗮𝘂𝗴𝗵𝘁 𝗔𝗜 𝘁𝗵𝗲 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗼𝗳 𝗠𝗮𝗿𝗸𝗲𝘁𝘀

JPMorgan researchers built a foundation model that predicts the next trade event the way an LLM predicts the next word — and it transferred to foreign markets it had never seen.

TradeFM is a 524-million-parameter model trained on 10.7 billion tokens drawn from more than 9,000 U.S. equities across 368 trading days. Instead of language, its vocabulary is market microstructure: timing, size, price depth, and direction, compressed into 16,384 composite trade event tokens.

𝗪𝗵𝗮𝘁 𝘁𝗵𝗲𝘆 𝗱𝗶𝗱:
• Trained on U.S. equity trade-flow data from February 2024 to September 2025
• Tested inside a simulated exchange where the model predicts trades in a continuous loop
• Evaluated across 9 stocks, 3 liquidity tiers, and 9 months of held-out data — then applied, without any adjustments, to China and Japan

𝗪𝗵𝗮𝘁 𝘁𝗵𝗲𝘆 𝗳𝗼𝘂𝗻𝗱:
• TradeFM matched real market patterns 2 to 3 times more closely than a standard baseline
• Japan uses batch auctions at the open; China imposes 10% daily price limits. Performance degraded only moderately on both — without retraining

Arman Khaledian, a former quant at Millennium and now CEO of Zanista AI, said: "That's not a toy result. 𝗜𝘁 𝗺𝗲𝗮𝗻𝘀 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹 𝗶𝘀 𝗽𝗶𝗰𝗸𝗶𝗻𝗴 𝘂𝗽 𝘀𝗼𝗺𝗲𝘁𝗵𝗶𝗻𝗴 𝗿𝗲𝗮𝗹 𝗮𝗯𝗼𝘂𝘁 𝗵𝗼𝘄 𝗺𝗮𝗿𝗸𝗲𝘁𝘀 𝘄𝗼𝗿𝗸 𝗮𝘁 𝗮 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗮𝗹 𝗹𝗲𝘃𝗲𝗹." He called it "the most interesting market simulation paper I've seen in a while. But it's a long way from a trading desk."

Paper: 𝘛𝘳𝘢𝘥𝘦𝘍𝘔: 𝘈 𝘎𝘦𝘯𝘦𝘳𝘢𝘵𝘪𝘷𝘦 𝘍𝘰𝘶𝘯𝘥𝘢𝘵𝘪𝘰𝘯 𝘔𝘰𝘥𝘦𝘭 𝘧𝘰𝘳 𝘛𝘳𝘢𝘥𝘦-𝘧𝘭𝘰𝘸 𝘢𝘯𝘥 𝘔𝘢𝘳𝘬𝘦𝘵 𝘔𝘪𝘤𝘳𝘰𝘴𝘵𝘳𝘶𝘤𝘵𝘶𝘳𝘦
Authors: Maxime Kawawa-Beaudan, Srijan Sood, Kassiani Papasotiriou, Daniel Borrajo, Manuela Veloso

If you want to hear how investors, quants, and analysts are using AI on Wall Street, check out my newsletter AI Street. Full write-up in the first comment.
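To give a feel for what "composite trade event tokens" might look like, here is a hedged sketch — not the paper's actual scheme: each trade is bucketed by inter-arrival time, size, price depth, and direction, and the bucket indices are packed into a single token id. The bucket counts are chosen only so that the product matches the 16,384-token vocabulary mentioned above; the boundaries are arbitrary.

```python
import numpy as np

# Illustrative bucket counts: 16 * 32 * 16 * 2 = 16,384 possible composite tokens.
# The actual binning used by TradeFM is not described here; this is a guess for intuition.
TIME_BINS, SIZE_BINS, DEPTH_BINS, DIR_BINS = 16, 32, 16, 2

def bucketize(value: float, edges: np.ndarray) -> int:
    """Map a continuous value to a bucket index using fixed edges."""
    return int(np.searchsorted(edges, value))

time_edges = np.logspace(-3, 2, TIME_BINS - 1)      # seconds between trades
size_edges = np.logspace(0, 5, SIZE_BINS - 1)       # shares
depth_edges = np.linspace(-10, 10, DEPTH_BINS - 1)  # ticks away from the mid price

def trade_to_token(dt_s: float, size: float, depth_ticks: float, is_buy: bool) -> int:
    """Pack the four bucket indices into one composite trade-event token id."""
    t = bucketize(dt_s, time_edges)
    s = bucketize(size, size_edges)
    d = bucketize(depth_ticks, depth_edges)
    b = int(is_buy)
    return ((t * SIZE_BINS + s) * DEPTH_BINS + d) * DIR_BINS + b

print(trade_to_token(dt_s=0.12, size=300, depth_ticks=1.0, is_buy=True))
```

Once every trade is a token id, standard next-token training machinery applies, which is the core of the "LLM for markets" framing.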
-
I’ve just published a new article exploring strategies to unify data sharing across Snowflake, Databricks, and Microsoft Fabric. While consolidating onto a single platform is often ideal, the reality for many large enterprises is more complex. Team autonomy, legacy investments, and strategic diversification often lead to multi-cloud and multi-product environments. Can your cross-platform integration architecture become a strategic advantage?

The article focuses on options for sharing Delta (Parquet) and Iceberg format storage amongst the three platforms: https://lnkd.in/gs4nS8Tt

In the real world, very few large organizations are unified on a single data and analytics platform. Snowflake, Databricks, and Microsoft Fabric are all very popular products with widespread adoption. All three offer lakehouse architecture tools, but what are your options if you have data in more than one of these products? How do you share data amongst the platforms in a way that minimizes replication, is cost-efficient, and has low latency?

This post is the first in a three-part series focusing on interoperability amongst Snowflake, Databricks, and Microsoft Fabric.

#Snowflake #Databricks #AzureDatabricks #MicrosoftFabric
-
Forecasting is no longer a spreadsheet exercise. It’s an intelligence engine. If I were building a forecasting system from scratch in 2025, here’s what it would look like.

1️⃣ Phase 1: Ditch the backward-looking model.
Traditional forecasts rely too heavily on rep inputs and lagging indicators. Instead:
Feed the model real behavior data: emails, calls, meetings, time in stage, intent signals.
Let AI surface deal velocity, risk factors, ghosted accounts, and false positives.

2️⃣ Phase 2: Build the autonomous pipeline.
AI isn’t just for scoring. It’s also for triggering.
Create auto-alerts for stalled deals and agent-driven nudges: “Reach out now, buying signals just spiked.”
Build auto-prioritization of deals based on historical conversion patterns and AI sentiment analysis.

3️⃣ Phase 3: Deploy next-best-action agents.
This is where it gets fun. SDRs and AEs don’t log in to CRMs; they work out of an AI inbox. Every morning: “Here are your top 5 accounts. Here’s what to say. Here’s the play.” The GTM motion becomes reactive → proactive → predictive.

4️⃣ Phase 4: Make forecasting a team sport.
Sales leaders aren’t spending hours cleaning rollups; they’re challenging the model: “Why did we lose that deal?” “What changed in this region’s pipeline this week?” And AI answers with data, not guesses.

Ok, this wasn’t meant to be a product pitch, but you can do all of this with ZoomInfo’s AI Copilot. If your forecast still starts with a spreadsheet and ends with hope, it’s time to rethink the system.

What’s the most useful AI signal you’ve seen in a pipeline? #RevOps
-
Working with multiple APIs almost always means combining them, including their individual requests and responses. These tasks range from straightforward to highly intricate, and with organizations now managing tens, hundreds, or even thousands of APIs, a solid grasp of API composition, aggregation, and orchestration has become critical for efficient operations and strategic advantage. Whatever naming convention an organization uses, the fundamental task is the same: combining APIs and endpoints to achieve a functional outcome.

API aggregation merges several individual requests into a unified request for streamlined processing and efficiency (a minimal sketch follows after this post).

𝗧𝗵𝗲 𝗳𝗼𝗹𝗹𝗼𝘄𝗶𝗻𝗴 𝗲𝘅𝗮𝗺𝗽𝗹𝗲𝘀 𝘀𝗵𝗼𝘄 𝘁𝗵𝗲𝗶𝗿 𝘂𝘀𝗲 𝗮𝗰𝗿𝗼𝘀𝘀 𝘃𝗮𝗿𝗶𝗼𝘂𝘀 𝗱𝗼𝗺𝗮𝗶𝗻𝘀 𝗮𝗻𝗱 𝗶𝗻𝗱𝘂𝘀𝘁𝗿𝗶𝗲𝘀.

1. E-commerce – Product Details Aggregation: Combine product descriptions, pricing, reviews, and availability from multiple suppliers.
2. Travel and Hospitality – Flight and Hotel Search: Consolidate data from multiple booking platforms for flights, hotels, and car rentals.
3. Financial Services – Account Aggregation: Display data from multiple bank accounts or credit cards in one view.
4. Social Media – Unified Feed: Aggregate posts, tweets, and videos from different platforms.
5. Health and Fitness – Patient Health Records: Combine data from wearable devices, medical tests, and healthcare provider systems.
6. Real Estate – Property Listings: Merge property data from various real estate platforms.
7. Entertainment – Streaming Guide: Aggregate movie or show listings from various streaming platforms.
8. Logistics – Shipment Tracking: Combine tracking details from multiple courier services.
9. Education and Learning – Course Aggregation: Combine online courses from multiple providers.
10. General Utility – Weather Forecasting: Combine data from multiple weather APIs for a more accurate forecast.

𝗛𝗼𝘄 𝗧𝗵𝗲𝘀𝗲 𝗨𝘀𝗲 𝗖𝗮𝘀𝗲𝘀 𝗛𝗲𝗹𝗽 𝗖𝘂𝘀𝘁𝗼𝗺𝗲𝗿𝘀
1. Unified Experience: Customers get consolidated information without multiple API calls.
2. Improved Performance: Fewer client-side API requests result in faster load times.
3. Customization: Easier to apply business logic or transformations on aggregated data.

𝗪𝗵𝗲𝗿𝗲 𝘆𝗼𝘂 𝘄𝗼𝘂𝗹𝗱 𝗻𝗲𝗲𝗱 𝘁𝗵𝗶𝘀?
1. Request Aggregation
2. Response Aggregation
3. Combine API
4. Real-Time Data Aggregation
5. Multi-Layer Aggregation
6. Analytics and Monitoring
7. Orchestrated Workflows

#api #technology #engineering
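As the sketch promised above: a thin aggregator can fan one client request out to several backend APIs in parallel and merge the results into a single payload. The endpoints and field names below are hypothetical, chosen to mirror the e-commerce example.

```python
from concurrent.futures import ThreadPoolExecutor

import requests

# Hypothetical backend endpoints for an e-commerce product page.
BACKENDS = {
    "details":      "https://catalog.example.com/products/{id}",
    "pricing":      "https://pricing.example.com/products/{id}",
    "reviews":      "https://reviews.example.com/products/{id}/summary",
    "availability": "https://inventory.example.com/products/{id}/stock",
}

def fetch(name_url: tuple) -> tuple:
    """Call one backend and return (name, parsed JSON body)."""
    name, url = name_url
    resp = requests.get(url, timeout=2)
    resp.raise_for_status()
    return name, resp.json()

def aggregate_product(product_id: str) -> dict:
    """Fan out to each backend in parallel and merge into one response."""
    urls = [(name, url.format(id=product_id)) for name, url in BACKENDS.items()]
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        return {name: body for name, body in pool.map(fetch, urls)}

# The client now makes one call to the aggregator instead of four to the backends.
# print(aggregate_product("sku-123"))
```

The same shape covers response aggregation, multi-layer aggregation, and orchestration; what changes is where the merge logic lives (gateway, backend-for-frontend, or a dedicated service) and how failures of individual backends are handled.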
-
What is Compatibility Mode in Databricks Unity Catalog, and how does it enable cross-platform reads?

Compatibility Mode lets us read Databricks Unity Catalog tables from external systems. It creates a read-only, synced version of the table in a location we choose. This version supports Delta Lake v1 and Iceberg metadata formats.

We can read the data from: Athena, Snowflake, Microsoft Fabric, Apache Spark / Trino, and the Unity REST API.

How does it enable cross-platform reads?
- Works only for Unity Catalog managed tables, streaming tables, and materialized views
- Creates a synced table copy in external storage
- Auto-refresh happens hourly by default for managed tables (configurable to near real-time with the 0 MINUTES setting); streaming tables and materialized views refresh after every commit by default
- Fully automated using Predictive Optimization

Steps to enable:
- Ensure an external location exists and you have the CREATE EXTERNAL TABLE permission
- Enable Compatibility Mode using table properties
- Confirm status using DESC EXTENDED

Real-life example: imagine we maintain customer data in Unity Catalog. Our analytics team uses Databricks, but the finance team analyzes data in Athena or Snowflake. Instead of exporting data again and again:
- We just enable Compatibility Mode
- Finance continues using Athena/Snowflake
- Everyone reads the same fresh data
- No duplicate pipelines or data drift

This reduces engineering effort, storage duplication, and, most importantly, cross-platform conflicts!
-
Analyzing ads across channels shouldn't be chaotic, but sometimes it is… Manual exports, fragmented views, metric inconsistency, and other things we "love" make cross-platform analysis unbearable.

Solution? A live report that combines key indicators from the ad platforms you use.

Example? A PPC multi-channel dashboard in Looker Studio, providing marketers with:

✅ Clarity across platforms: check the performance of Google Ads, Microsoft Ads, and even social ads on one screen.
✅ Real-time analytics: data syncs automatically with Coupler.io no-code connectors to reflect the current state of things. Unlike a standard Looker Studio connector, these have no limits on the number of ad sources to blend, and you can reuse your data with other BI tools.
✅ Customizability: it's a white-label template that you can quickly set up and share with clients or your team.

Ad analysis shouldn't take hours. It must (and can) be simple and clear. For those tired of cross-channel reporting patchwork, give this template a try:
🔗 Link in the first comment.

#PPC #GoogleAds #MicrosoftAdvertising #Ad
-
PostgreSQL's Foreign Data Wrappers (FDWs) are awesome, and I really wish they were more widely known and used.

Many companies distribute their production data across multiple database instances or types—whether for single-tenant architecture, sharding, or optimizing the DB choice for specific workloads. Usually, though, these separations are not logically "clean." Sooner or later, use cases emerge that require aggregating and correlating data across databases—like "super tenants" needing access to multiple tenant DBs or internal dashboards pulling from different sources.

How can PostgreSQL help in these scenarios? FDWs to the rescue!

PostgreSQL's Foreign Data Wrappers (FDWs) allow users to abstract away a complex database architecture and expose to the backend a single, simple PostgreSQL instance that represents many databases "behind it." With FDWs, users can create "virtual" tables that act as seamless proxies for real tables in remote databases. When you query a "virtual" FDW table, PostgreSQL automatically retrieves the data from the remote database and returns it, making it function just like a native table in your own instance.

What’s really awesome is just how well PostgreSQL implemented them. PostgreSQL can push down filters and aggregations to remote servers to minimize data transfer, fetch data in parallel, support both reads and writes, and just works as expected.

Long live PostgreSQL and all the awesome (and sometimes unknown) features it offers!
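For anyone who hasn't tried FDWs, here is a minimal sketch of wiring up the standard postgres_fdw extension from Python. Connection details, server names, schemas, and table names are made up; the same DDL can be run directly in psql.

```python
import psycopg2

# Hypothetical connection to the "local" database that will expose the foreign tables.
conn = psycopg2.connect("dbname=analytics user=admin host=localhost")
conn.autocommit = True
cur = conn.cursor()

# 1. Enable the extension and register a remote tenant database as a foreign server.
cur.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw")
cur.execute("""
    CREATE SERVER IF NOT EXISTS tenant_a_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'tenant-a.internal', dbname 'tenant_a', port '5432')
""")
cur.execute("""
    CREATE USER MAPPING IF NOT EXISTS FOR CURRENT_USER
    SERVER tenant_a_srv OPTIONS (user 'readonly', password 'secret')
""")

# 2. Import remote tables as local "virtual" tables; filters and aggregates on them
#    are pushed down to the remote server where possible.
cur.execute("CREATE SCHEMA IF NOT EXISTS tenant_a")
cur.execute("""
    IMPORT FOREIGN SCHEMA public LIMIT TO (orders, customers)
    FROM SERVER tenant_a_srv INTO tenant_a
""")

# 3. Query the foreign table exactly like a native one.
cur.execute(
    "SELECT count(*) FROM tenant_a.orders WHERE created_at > now() - interval '1 day'"
)
print(cur.fetchone())
```

Repeat the server and import steps per tenant database, and a single "super tenant" dashboard can join and aggregate across all of them from one PostgreSQL instance.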