Pinal Dave is back. On May 6, he's digging into how AI fits real SQL Server tuning work. ▸ Reviewing execution plans from a different angle. ▸ Spotting tuning patterns you might not catch right away. ▸ Troubleshooting queries that don't behave the way you expect. He'll also cover where AI falls short: where it helps, where it fails, and why your judgment is still what makes the difference. If you've been curious how AI fits into what you already do, this one's practical. Register → https://lnkd.in/eb_DH2nE #SQLServer #PinalDave #DBAs #SQLServerTuning
Pinal Dave on AI in SQL Server Tuning
The Godfather of SQL is back, and he's ready to talk about more SQL tuning tips and tactics - including how AI can boost your efforts. If you haven't attended one of Pinal's awesome webcasts, you're in for an informative and interactive discussion covering real-world situations and questions, with feedback and tips you can start using right away. Sign up now and join Pinal Dave and IDERA on May 6th!
Still feeding XLSX files into LLMs? You might be paying more—for less. XLSX is a zipped bundle of XML (styles, metadata, multiple layers). Most of that is irrelevant for analysis but still gets pulled through your pipeline. For clean, tabular workloads → CSV is the smarter play. - Lower token usage - Cleaner structure (no formatting noise) - More predictable parsing Not a blanket rule—XLSX still matters when formulas, multi-sheet relationships or context are critical. But if your goal is efficient, reliable LLM analysis, start simple. Optimize your data format before you optimize your prompts. #LLM #DataEngineering #AI #Analytics #MachineLearning
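The size gap is easy to demonstrate. Below is a hedged, self-contained sketch (standard library only, with made-up sample rows) that serializes the same table as CSV and as minimal SpreadsheetML-style XML; a real XLSX adds styles, shared strings, and metadata on top of the XML shown here, so the gap in practice is even larger.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical tabular data standing in for a real export.
rows = [["date", "region", "sales"]] + [
    [f"2024-01-{d:02d}", "EMEA", str(100 + d)] for d in range(1, 31)
]

# CSV: one delimiter per field, one newline per row.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
csv_text = buf.getvalue()

# Minimal SpreadsheetML-style markup. Real XLSX wraps this in a
# zip with stylesheets and metadata; this is the lower bound.
sheet = ET.Element("worksheet")
for row in rows:
    r = ET.SubElement(sheet, "row")
    for value in row:
        v = ET.SubElement(ET.SubElement(r, "c"), "v")
        v.text = value
xml_text = ET.tostring(sheet, encoding="unicode")

print(len(csv_text), len(xml_text))  # XML is several times larger
```

Every extra angle bracket is a token the model has to pay for without gaining any information.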
Insight from this week of connecting AI agents with datasets: don't build an MCP server that exposes only a handful of narrow, predefined interactions. Give the agent a SQLite database, a description of the schema, and a couple of example queries.
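A minimal sketch of that setup, using a hypothetical `orders` table: the schema description is pulled straight from `sqlite_master`, so the text handed to the agent never drifts from the actual database.

```python
import sqlite3

# Hypothetical dataset; the point is to hand the agent a real
# database plus its schema, not a fixed menu of tool endpoints.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        total REAL NOT NULL
    );
    INSERT INTO orders (customer, total) VALUES
        ('acme', 120.0), ('acme', 80.0), ('globex', 40.0);
""")

# Schema text for the agent's prompt, read from sqlite_master.
schema = "\n".join(
    row[0] for row in conn.execute(
        "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL"
    )
)

# One example query to include alongside the schema.
example = (
    "SELECT customer, SUM(total) FROM orders "
    "GROUP BY customer ORDER BY customer"
)
result = conn.execute(example).fetchall()
print(schema)
print(result)  # [('acme', 200.0), ('globex', 40.0)]
```

With schema plus examples in context, the agent composes whatever SQL the task needs instead of being limited to the handful of calls you thought to expose.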
General-purpose foundation models are effectively useless for financial time series. Google's TimesFM (500M parameters) posted an R² of -2.80% on zero-shot financial forecasting — worse than random. Kronos, developed at Tsinghua University, takes a fundamentally different approach. It converts K-line OHLCVA 6-dimensional values into discrete tokens using Binary Spherical Quantization (BSQ) with a codebook of 1 million entries. The architecture predicts the next candlestick the same way GPT predicts the next word. Trained on 12B+ K-lines from 45 exchanges, the results are decisive: — RankIC +93% vs general-purpose TSFMs (directional prediction accuracy) — Synthetic K-line fidelity +22% vs DiffusionTS, TimeVAE — Outperformed all 25 baseline models — MIT open source, 16.4K GitHub stars, AAAI 2026 The implication is clear. In domain-specific foundation models, the codebook directly encodes the statistical structure of the training data. Data quality becomes the model's representational capacity — literally. Pebblous provides the diagnostic and validation infrastructure for this kind of domain-specific data through DataClinic. Full analysis: 한국어: https://lnkd.in/gZeMeV3u English: https://lnkd.in/g6pxZerk #Pebblous #DataClinic #DataQuality #FinancialAI #FoundationModels #TimeSeries #AAAI2026
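To make the tokenize-then-autoregress idea concrete, here is a toy stand-in: Kronos's actual quantizer is BSQ with a ~1M-entry codebook, but a uniform grid quantizer (used below purely for illustration) plays the same role of mapping a continuous 6-dimensional candle to one discrete token id, after which the training objective is ordinary next-token prediction.

```python
import random

random.seed(0)
# Toy OHLCVA candles: 6 continuous values per bar (synthetic).
candles = [[random.uniform(-3, 3) for _ in range(6)] for _ in range(64)]

BINS = 8  # toy codebook of 8**6 = 262,144 entries (Kronos: ~1M via BSQ)

def tokenize(candle):
    # Bucket each of the 6 dimensions, then pack the bucket
    # indices into a single integer token id.
    token = 0
    for v in candle:
        b = min(BINS - 1, max(0, int((v + 3) / 6 * BINS)))
        token = token * BINS + b
    return token

tokens = [tokenize(c) for c in candles]

# The objective is then plain next-token prediction: given
# tokens[:t], predict tokens[t] -- exactly how a GPT-style
# decoder is trained on text.
context, target = tokens[:-1], tokens[1:]
```

Once the series is a token stream, the entire transformer-decoder toolchain applies unchanged; the quality of the codebook determines what the model can represent at all.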
GPT-5.4 xHigh in the Codex harness with all of the context and data and tools it needs is the first time I have experienced the genuine feeling of "this is way better than I am". It has been clear that the tools have been better at coding since o1, and data analysis/research since GPT-5, but holy crap.
A knowledge graph, combined with text chunks and their associated embedding vectors, brings critical information into an agent's context—significantly improving the quality of its responses. While property graphs are increasingly recognized as a powerful way to enhance the accuracy of RAG-based knowledge agents, they often require substantial additional effort to build and maintain. To try to address this, I've developed an experimental Codex Skill: graphrag-oracle-builder. It automates the knowledge graph creation and the entities ingestion on Oracle AI DB 26ai, starting from documents and leveraging LLMs—public or private—along with embedding vector generation for combined graph traversal queries and similarity search, a capability uniquely supported by Oracle AI DB 26ai. All of this runs through a single agent in your IDE, powered by SQLcl MCP under the hood, to help you build more accurate knowledge bases for information retrieval. Paper: https://lnkd.in/dTKzMFsF Demo video: https://lnkd.in/dYkdvr2R Give it a try—I'd really appreciate your feedback! #GraphRAG #PGQL #Skill #Codex #SQLcl #OracleDB #26ai
In the age of AI, an open format alone isn’t enough. At Snowflake, we’re working with the community to enable true interoperability across every layer: • Data: Apache Iceberg™ v3 support for semi-structured data, CDC, and more • Governance: Apache Polaris™ to make fine-grained access controls portable across engines • Semantics: Open Semantic Interchange (OSI) to standardize metrics and business logic Plus, innovations like pg_lake are bridging transactional and analytical systems, so data can be accessed where it lives. The goal: give you full agency over your data, without moving it, breaking governance, or losing context. The future of the lakehouse is open, interoperable, and built with the community. Read more: https://bit.ly/4c0fPlI
Most systems are built relational-first — but not all problems should be solved that way. I’ve been working with graph-based systems and multi-database architectures for a while now, and one thing I’ve realized: Traditional relational thinking doesn’t scale well for relationship-heavy problems. Graph databases like Neo4j change how you think about data — especially for security analysis, access control, and system relationships. Currently exploring how these ideas can also apply to offline AI systems. Curious — where have you seen graph models outperform relational systems? #BackendEngineering #GraphQL #Neo4j #SystemDesign #SoftwareEngineering
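One place I've seen the difference: transitive access control. In SQL that question becomes a recursive self-join; as a graph it's a plain traversal. A hedged, self-contained sketch with a hypothetical users/groups/resources graph:

```python
from collections import deque

# Hypothetical access-control graph: users belong to groups,
# groups nest, and groups are granted resources.
edges = {
    "alice": ["devs"],
    "devs": ["engineering"],
    "engineering": ["repo:core", "wiki"],
    "bob": ["repo:core"],
}

def reachable(node):
    # BFS over membership/grant edges: everything `node` can
    # reach, at any depth, in one pass.
    seen, queue = set(), deque([node])
    while queue:
        cur = queue.popleft()
        for nxt in edges.get(cur, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("alice")))
# ['devs', 'engineering', 'repo:core', 'wiki']
```

In Cypher the same question is a single variable-length pattern along the lines of `MATCH (u:User {name:'alice'})-[:MEMBER_OF|GRANTED*]->(r) RETURN r`; the arbitrary-depth part that makes SQL painful is the default in a graph model.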
New Post: Incremental Indexing of Temporal Shortest Paths in Neo4j for Large Knowledge Graphs — Abstract: Dynamic temporal knowledge graphs (KGs) are increasingly employed in domains such as event-centric analytics, predictive maintenance, and real-time recommendation. The Neo4j graph database, with its Cypher query language and property graph model, remains one of the most widely adopted platforms for storing and querying such data. However, traditional shortest-path computations (e.g., Dijkstra, […]) [Source & Legal Disclaimer] This is an AI-generated simulation research dataset provided by Freederia.com, released under the Apache 2.0 License. Users may freely modify and commercially use this data (including patenting novel improvements); however, obtaining exclusive patent rights on the original raw data itself is prohibited. As this is AI-simulated data, users are strictly responsible for independently verifying existing copyrights and patents before use. The provider assumes no legal liability. For future Enterprise API access and bulk dataset purchase inquiries, please contact Freederia.com.
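For readers unfamiliar with the underlying problem: a temporal shortest path must respect edge timing, not just edge weight. The following is a minimal sketch (hypothetical toy graph, not the post's indexing scheme) of an earliest-arrival query, the kind of computation an incremental index would accelerate when run repeatedly over a changing graph.

```python
import heapq

# Toy temporal graph: node -> list of (depart, arrive, neighbor).
# An edge is usable only if you are at its tail before it departs.
edges = {
    "A": [(1, 2, "B"), (2, 6, "C")],
    "B": [(3, 4, "C"), (5, 7, "D")],
    "C": [(5, 6, "D")],
}

def earliest_arrival(src, dst, t0=0):
    # Dijkstra-style search on arrival times instead of weights.
    best = {src: t0}
    heap = [(t0, src)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dst:
            return t
        if t > best.get(node, float("inf")):
            continue  # stale heap entry
        for depart, arrive, nxt in edges.get(node, []):
            if depart >= t and arrive < best.get(nxt, float("inf")):
                best[nxt] = arrive
                heapq.heappush(heap, (arrive, nxt))
    return None

print(earliest_arrival("A", "D"))  # 6, via A -> B -> C -> D
```

Note the route through C arrives at 6 even though the direct B→D edge exists: time constraints, not hop count, decide the winner.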
Databricks AiChemy is a multi-agent system that autonomously searches PubMed, chemical databases, and disease graphs - all at the same time. No human stitching it together. No weeks of manual cross-referencing. Here's the part for builders: The whole thing runs on MCP. It's now the backbone letting agents query OpenTargets, PubChem, and PubMed like a single unified brain. Your private drug library can plug straight in. The search isn't keyword-based either. Feed it a known drug, it scans 250,000 molecules using fingerprint embeddings and surfaces structurally similar candidates. That's real lead generation - not a buzzword version of it. Slide 5 is the one to pay attention to. This stack isn't locked to biotech. Legal, finance, climate - any domain drowning in multi-source, unstructured data runs on the same architecture: MCP for external sources, vector search underneath, agents on top. The insight I'd take from this: your proprietary data is your moat. The agent is just the query layer. What domain do you think gets disrupted by this stack next? Follow Shreesozo for weekly MCP + agentic AI breakdowns. #MCP #AgenticAI #AIInfrastructure #Databricks #DeveloperTools
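The fingerprint-similarity step is simple to illustrate. A hedged miniature (hand-made bit sets standing in for real molecular fingerprints such as Morgan/ECFP, and a three-molecule "library" standing in for 250,000): rank candidates by Tanimoto similarity to a query structure.

```python
# Hypothetical fingerprints: each molecule is the set of
# substructure bits it contains (real pipelines hash actual
# substructures into large bit vectors).
library = {
    "mol_a": {1, 4, 7, 9},
    "mol_b": {1, 4, 8},
    "mol_c": {2, 3, 5},
}

def tanimoto(fp1, fp2):
    # Shared bits over total distinct bits: 1.0 = identical.
    return len(fp1 & fp2) / len(fp1 | fp2)

query = {1, 4, 7}  # fingerprint of a known drug
ranked = sorted(
    library.items(),
    key=lambda kv: tanimoto(query, kv[1]),
    reverse=True,
)
print([name for name, _ in ranked])  # ['mol_a', 'mol_b', 'mol_c']
```

Scaled up, the same ranking runs against a vector index instead of a Python sort, but the lead-generation logic is exactly this: structural neighbors of something that already works.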