Matthew J. Glickman
New York, New York, United States
5K followers
500+ connections
About
Financial industry veteran turned passionate cloud evangelist leveraging the public cloud…
Explore more posts
Bharath Komaravolu, FRM
Invexor Labs • 2K followers
Snowflake × OpenAI: why this deal looks like a power move at the data layer.

Yesterday, Snowflake and OpenAI announced a multi-year, $200M partnership to bring frontier OpenAI models natively into Snowflake's Cortex AI (Snowflake's intelligence layer for enterprise agents), available to Snowflake's 12,600 customers across all three major clouds.

If you're thinking in systems terms: Cortex AI is Snowflake trying to make "AI on enterprise data" feel like a native database capability, not a separate product you bolt on later. The story is that agentic AI only becomes valuable in production when it can reason over governed, proprietary data (without your org turning into a compliance crime scene).

A practical lens to read this deal
---
We can think of modern AI as a supply chain:

raw material = enterprise data
factory = compute + connectors
foreman = evals + guardrails
machines = models
output = decisions + actions

Most orgs are trying to buy "better machines" while the foreman is missing and the factory floor is chaotic. This partnership is Snowflake trying to own the factory floor, while OpenAI provides the best machines.

Enterprises already park their most valuable asset (data + permissions + governance) inside Snowflake. If the AI runs inside that boundary, Snowflake becomes the default runtime for "AI on enterprise data" instead of being just storage/warehouse.
Singri Goutham
HSBC • 842 followers
Single-method retrieval often fails in #financial and #regulatory domains: exact-term search misses semantic relationships, while dense models overlook structured identifiers. A multi-path approach resolves this gap.

The Multi-Path Retrieval (#MPR) pipeline implements four complementary retrieval strategies with a domain-specialized re-ranking mechanism to balance lexical precision and semantic understanding:

#BM25 Sparse Retriever – captures exact matches for regulatory codes, ticker symbols, and key phrases.
#Dense Semantic Retriever – identifies paraphrases and conceptually similar expressions through high-dimensional embeddings.
#Metadata-Aware Retriever – filters at section-level granularity, narrowing retrieval to relevant document parts before detailed chunk search.
#HyDE Retriever – generates a hypothetical answer, embeds it, and retrieves real text segments with similar structure and terminology, bridging the question–answer linguistic gap.

Final results can be aggregated through weighted rank fusion (e.g., semantic > BM25 > metadata > HyDE), producing top-ranked chunks enriched through context bundling when adjacent similarities exceed 0.85. This ensures coherent, context-complete passages are provided to downstream models.

Such an architecture is especially effective in financial compliance and regulatory analysis, where both formal precision and contextual interpretation are essential. The hybrid fusion enhances #recall, #contextualaccuracy, and #interpretability in retrieval-augmented generation (#RAG) systems.

#GenAI #Banking #AIEngineer #LLM #Fraud
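The weighted rank fusion step described above can be sketched in a few lines. This is a minimal illustration, not the MPR implementation: the retriever weights, the RRF-style constant k=60, and the toy chunk ids are all assumptions chosen to match the post's weighting order (semantic > BM25 > metadata > HyDE).

```python
from collections import defaultdict

def weighted_rank_fusion(ranked_lists, weights, k=60):
    """Fuse ranked result lists with per-retriever weights.

    ranked_lists: dict of retriever name -> list of chunk ids, best first.
    weights: dict of retriever name -> float weight.
    Each chunk scores w / (k + rank), a weighted reciprocal-rank score;
    scores are summed across retrievers, then chunks are sorted.
    """
    scores = defaultdict(float)
    for name, chunks in ranked_lists.items():
        w = weights[name]
        for rank, chunk in enumerate(chunks, start=1):
            scores[chunk] += w / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative runs from four hypothetical retrievers over chunk ids c1..c5.
runs = {
    "semantic": ["c2", "c1", "c5"],
    "bm25":     ["c1", "c3", "c2"],
    "metadata": ["c1", "c4"],
    "hyde":     ["c5", "c2"],
}
weights = {"semantic": 1.0, "bm25": 0.8, "metadata": 0.6, "hyde": 0.4}
fused = weighted_rank_fusion(runs, weights)
```

A chunk ranked moderately well by several paths (c1 here) can outscore one ranked first by a single path, which is the point of fusing lexical and semantic evidence.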
Anjli Jain
ElevenX Capital • 35K followers
**Databricks Secures a Staggering $100B Valuation on $4B ARR**

Databricks' impressive rise to a $100 billion valuation, following a recent $1 billion funding round, demonstrates the robust demand for cloud data platforms. As we observe this growth, it prompts a reflection on the scalability of data solutions in the current market.

At ElevenX Capital, we believe that investments in companies like Databricks highlight the importance of harnessing data for innovation. How are you strategizing your portfolio in the evolving landscape of data technology?

#investing #innovation #venturecapital #entrepreneurship
Sajith Pai
Blume Ventures • 86K followers
This section on how dbt Labs transitioned from a purely PLG motion to layering on enterprise is a fascinating one. Two instructive passages that I have bolded below (from First Round Review's path-to-PMF series featuring dbt Labs).

TLDR: GTM comprises your ICP, channel, and message. When you transition from a PLG / bottom-up to an enterprise / top-down motion, your ICP and channel naturally change, but your messaging / proposition needs to change too: the enterprise may involve multiple personas, and they are buying assurance as much as a solution.

---

"Handy found product-market fit organically for dbt as an open-source tool mostly used by data practitioners and developers. But a few years into running a commercial business, he realized he had to build a growth curve all over again with C-suite data leaders.

“Even though we had an unbelievable amount of market pull, as we initially commercialized, it wasn’t easy for us to transform this open-source command line tool into a product that enterprises would pay a million dollars for,” says Handy. “When you have enough product-market fit, sometimes it allows you to get away with not being super tight on product marketing or sales motions. So around 2022, we went from this gigantic acceleration curve and overnight we realized, we have to sit at the adults’ table and figure things out real fast,” says Handy.

After the PLG flame started to fizzle, Handy turned his attention to layering on a sales-led motion for the cloud platform. “We had to focus our efforts on telling cohesive stories to senior data leaders. We had to have a very clear, explainable answer to the question, ‘Why should I use the commercial product and not the open-source product?’ And it had to be digestible by someone with a C in their title,” he says.

Handy’s answer: dbt Cloud can handle complex data for companies of every size. “The longer people used dbt, the more complex their code became,” he says. “It was a problem for the most sophisticated dbt users, who were often at the largest companies. So there was a real opportunity for us to step in and solve that for them with dbt Cloud.”

To tell that story to enterprise customers, Handy relied on data, naturally. “At a user conference we presented a chart that showed the number of dbt projects that had a certain number of models in them — over 100, over 1,000, et cetera,” he says. “We watched that number climb and we knew as users ourselves, ‘Oh my God, trying to work in a dbt project with 5,000 models in it is challenging.’ So we started with that quantitative data point and asked folks in our community about their experiences with these very large, complex dbt projects, and validated that this was a pain in the ass without a cloud platform.”
Dr. Thomas R. Glück
cybernetics, system design… • 4K followers
Relational databases are officially dead at relational scale. They collapse under real-world complexity: joins explode, ETL bleeds billions, governance is just expensive duct tape.

Graphs aren’t a feature anymore. They’re the physics of modern data. Graph architectures dominate because relationships are native, not simulated. Traversal is O(1)-ish instead of O(n²), changes propagate instantly, lineage is built-in, not bolted-on. Everything that matters – processes, decisions, AI models – is a node or edge. No translation layer, no drift.

The next enterprise intelligence paradigm won’t be “AI on top of data.” It will be data, logic, and context fused into one living graph that thinks. Tools stay tools. Architectures become intelligent systems.

cCortex is the first implementation that turns GraphDBs from query engines into the actual operating system of the enterprise. Direct, lossless, neuroplastic – governance and integration simply dissolve. Why settle for dashboards when your stack can be conscious?

Read the inversion: https://lnkd.in/edA5xrfF

#EnterpriseAI #GraphDatabase #DeepTech #DataArchitecture #FutureO
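Rhetoric aside, the complexity claim in the post has a concrete core that a toy sketch can show: with native adjacency, finding a node's neighbors costs O(degree) (the "O(1)-ish" above), whereas scanning an unindexed edge table costs O(E). The node names and edges here are invented for illustration, and a real graph database stores adjacency very differently (index-free adjacency on disk), so this is only a sketch of the asymptotic argument.

```python
from collections import defaultdict

# Relational-style: edges kept as a flat table of rows.
edges = [("alice", "bob"), ("bob", "carol"), ("alice", "carol"), ("carol", "dan")]

def neighbors_scan(node):
    # O(E): touches every edge row, like an unindexed self-join.
    return [dst for src, dst in edges if src == node]

# Graph-style: adjacency list, so relationships live with the node.
adj = defaultdict(list)
for src, dst in edges:
    adj[src].append(dst)

def neighbors_native(node):
    # O(degree): one hop, no scan -- the "relationships are native" claim.
    return adj[node]
```

Both functions return the same answer; the difference is that the scan repeats its full pass for every hop of a traversal, which is where multi-join queries blow up.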
Chandra Pendyala
I build and run technology… • 2K followers
Solving Valuable Problems with ML

Most excitement today centers on automation powered by fuzzy intelligence, a major driver of productivity at scale. But are there classes of high-value problems that truly benefit from ML’s compute-intensive capabilities?

Having spent much of my career modeling complex systems with mathematics and stochastic methods—often inventing or refining algorithms to reduce dimensionality and combinatorial complexity—I chose to explore a particularly compelling challenge.

Since 2008, quantitative easing and currency manipulation across major economies have added new complexities in the obfuscation of real economic interest rates. Global energy market interventions and sovereign wealth fund activity in bond markets have further complicated the picture. For quantitative finance professionals, hedging and capital allocation have become a game of speed, making this problem all the more engaging.

I experimented with the Mixture of Experts (MoE) approach in ML to tackle this challenge. Here are my first results: https://lnkd.in/gfsXTBDK

Summary: this path seems very compelling.

#QuantFinance #ML #MOE #OptionsPricing #MacroHedging #FIIPricing #MLinFinance
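The post links to results rather than code, so here is only a generic sketch of the Mixture-of-Experts idea it names, not the author's model: a gating network produces softmax weights per input, and the prediction is the gate-weighted combination of the experts' outputs. All shapes, weights, and the linear experts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy MoE: three linear experts over a 4-dim input, plus a gating network.
n_experts, d_in = 3, 4
W_experts = rng.normal(size=(n_experts, d_in))  # one linear expert per row
W_gate = rng.normal(size=(d_in, n_experts))     # gating network weights

def moe_predict(x):
    expert_out = W_experts @ x      # (n_experts,) one scalar per expert
    gate = softmax(x @ W_gate)      # (n_experts,) mixture weights, sum to 1
    return float(gate @ expert_out) # gated combination
```

In practice the experts would specialize (e.g., on different market regimes) and the gate would learn which expert to trust for a given input; this skeleton only shows the routing mechanism.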