3 SQL red flags to avoid:
1️⃣ Hardcoding: Make it dynamic or it won't last.
2️⃣ No Insight: If the data doesn't drive a decision, it's just noise.
3️⃣ Join Guessing: Know the "why" behind your logic, not just the "what."

Stop just pulling data. Start providing value. 📊

Which one are you guilty of? ⬇️

#SQL #DataAnalytics #TechTips #DataScience
I spent an hour debugging a query. The values were correct. The timestamps looked right. Nothing crashed. It was still returning wrong results.

The change that caused it looked minimal.

Before:
BETWEEN current_timestamp - interval '25' hours AND from_unixtime(exec_ts/1000) - interval '1' hour

After:
BETWEEN from_unixtime(prev_ts/1000) AND from_unixtime(exec_ts/1000) - interval '1' hour

I ran both functions in the SQL console - they printed the same values. I checked the Trino docs: from_unixtime() returns timestamp(3). Looked fine.

But we run in Spark SQL. Different dialect. In Spark, from_unixtime() returns a string. So BETWEEN was comparing a string against a timestamp. No error thrown. Just wrong results.

Nothing crashed. Nothing looked obviously broken. That was the whole problem.

Useful reminder: when behavior stops making sense, check the real runtime types in the runtime you actually use - not the docs for a different dialect. The shortest path to the bug is often: values → types → coercion rules.

That was the moment I remembered {} + [] + {} + [1] JavaScript territory. SQL does the same. Just without the memes.
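The same failure mode can be sketched in plain Python, independent of any SQL dialect (the values below are hypothetical, not the actual timestamps from the incident): once two sides of a comparison are silently treated as strings, the ordering changes and nothing errors out.

```python
prev_ts, exec_ts = 9, 10  # hypothetical epoch-ish values

# Compared as numbers, the ordering is what you expect:
assert prev_ts < exec_ts

# Compared as strings, lexicographic order quietly flips it:
# "9" sorts after "10" because '9' > '1' character by character.
assert not (str(prev_ts) < str(exec_ts))

# The fix is the same as in SQL: inspect the real runtime types first.
print(type(prev_ts), type(str(prev_ts)))  # <class 'int'> <class 'str'>
```

No exception either way, just a different answer depending on the types you actually got at runtime.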
🚨 A small mistake that can cost you HOURS (or even days) of debugging…

If you're using LINQ and trusting it blindly, stop for a second. Here's the reality 👇

You write a LINQ query. You run it. You run it again. Same result. Looks correct, right? ✅ But then something feels off 🤔 So you take that exact logic and test it in SQL… 💥 Boom: completely different result.

Yeah. Been there.

The problem? LINQ isn't SQL. It translates to SQL, and sometimes not the way you expect.

⚠️ Things that can go sideways:
• Complex joins behaving differently
• Grouping that doesn't match SQL logic
• Null handling surprises
• Hidden transformations under the hood

And the worst part? It won't throw an error. It'll just quietly give you the wrong data 😬

🔥 What I do now (and you should too):
👉 Step 1: Write & test the query in SQL
👉 Step 2: Validate edge cases (nulls, duplicates, joins)
👉 Step 3: THEN convert it into LINQ
👉 Step 4: Compare results; don't assume

💡 Rule of thumb:
SQL = Source of truth
LINQ = Convenience layer

Don't debug blindly. Verify intentionally. Trust me: this habit will save you more time than any debugger ever will 💯

#Developers #Programming #DotNet #SQL #LINQ #CodingTips #Debugging
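The null-handling surprise is worth seeing once. This isn't LINQ; it's a minimal stand-in using Python's stdlib sqlite3 that shows how SQL's three-valued NULL logic differs from the host language's null comparison, which is exactly the kind of gap that produces "same logic, different result":

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# In SQL, NULL = NULL evaluates to NULL (not TRUE), so the row drops out:
count = conn.execute(
    "SELECT COUNT(*) FROM (SELECT NULL AS v) WHERE v = v"
).fetchone()[0]
assert count == 0

# In the host language, a null compared to itself is simply equal.
# Same "logic", different answer, and no error from either side.
assert (None is None) is True
```

An ORM or query translator has to bridge that gap somewhere, and the bridge is where surprises live.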
Process is as important as, if not more important than, the final result. Methodology and good documentation are intangibles for long-term success that often get overlooked in the day-to-day.

Writing this out, I realize step #2 could be done with a Python script and pandas DataFrames. No need to use Excel or PowerQuery. At the time, I did what I knew how to do and got the job done. I'll bring this self-critique of my own project into future workflows, continue documenting, and continue providing quality consulting services.

Public News Service Contour Tracking Project:
1. Source .txt for desired states from https://lnkd.in/gNa8Dp-g
2. Use PowerQuery to format into .csv for ingestion into python script
3. Python script joins .csv files to FM Contours via FCC API (https://lnkd.in/g3KypZ8b) using unique id
4. Script creates .shp for each individual contour, then merges contours into a single feature class for each state
5. Final dataset contains useful information such as call sign, facility id, and radio service

I'm accepting new clients. Check out my profile, send me a message, whatever works. Let's do it!
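The pandas replacement for step 2 is only a few lines. This is a hypothetical sketch, not the project's actual code: the column names, delimiter, and file names are placeholders, since the real FCC source format may differ.

```python
import pandas as pd
from io import StringIO

# Hypothetical pipe-delimited .txt export; real columns/delimiter may vary.
raw = "call_sign|facility_id|state\nKXYZ|12345|CO\nKABC|67890|CO\n"

# read_csv handles arbitrary delimiters, replacing the PowerQuery step.
df = pd.read_csv(StringIO(raw), sep="|")

# Write a clean .csv for the contour-join script to ingest.
df.to_csv("co_stations.csv", index=False)
```

Same output as the PowerQuery step, but scriptable and repeatable end to end.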
Day 66: Consistent Upskilling in Tech & Analytics

💻 JavaScript: Methods (Function as Property)
• Learned how a Function can be used as an Object Property (Method)
• Understood how to call methods using Object.Method()
• Practiced defining and using Methods inside objects
• Strengthened understanding of Object-Oriented JavaScript

📊 Power BI: Data Cleaning (Power Query)
🔹 Opened Power Query using Transform Data
🔹 Checked and corrected Data Types
🔹 Removed Duplicate Rows
🔹 Handled Missing Values (Nulls)
🔹 Renamed Columns for better clarity
🔹 Applied changes using Close & Apply
🔹 Learned the importance of Data Cleaning for accurate analysis 📊

🗄 SQL: Query Optimization
• Learned Query Optimization Techniques in SQL
• Understood how to improve Query Performance
• Explored concepts like Indexing, Efficient JOINs, and Filtering
• Strengthened knowledge of Database Optimization

📈 Building stronger foundations every day in JavaScript, Power BI, and SQL with a focus on real-world skills.

#Day66 #JavaScript #Methods #PowerBI #PowerQuery #DataCleaning #SQL #QueryOptimization #DataAnalytics #LearningJourney #TechUpskilling #CareerGrowth
Day 91: Consistent Upskilling in Tech & Analytics

💻 JavaScript: Constructors (Continuation)
• Continued practice on Constructor Functions
• Strengthened understanding of Object Creation & Reusability
• Improved confidence in OOP Concepts

📊 Power BI: DAX & Analysis
🔹 Created DAX Measures (Total Sales, Sales North, High Sales)
🔹 Used CALCULATE for modifying filter context
🔹 Applied FILTER for custom conditions
🔹 Used RANKX to rank products
🔹 Created Sales Category using IF Condition
🔹 Analyzed data using Table Visual 📊

🗄 SQL: Views
• Practiced Views in SQL
• Strengthened understanding of Reusable Queries
• Improved knowledge of Data Abstraction

📈 Strong focus on DAX, Analysis, and OOP: building real-world problem-solving skills step by step 🚀

#Day91 #JavaScript #Constructors #OOP #PowerBI #DAX #CALCULATE #RANKX #SQL #Views #DataAnalytics #LearningJourney #TechUpskilling #CareerGrowth
📻 Sequelize Transactions

A transaction is a group of operations that either:
✅ All succeed (commit)
❌ All fail (rollback)
No partial updates.

🎹 Types of Transactions in Sequelize

1️⃣ Managed (Recommended)

await sequelize.transaction(async (t) => {
  // queries run with { transaction: t } auto-commit on success,
  // auto-rollback if the callback throws
});

♦️ Cleaner
♦️ Less error-prone

2️⃣ Unmanaged

const t = await sequelize.transaction();
try {
  await User.create(data, { transaction: t });
  await t.commit();
} catch (err) {
  await t.rollback();
}

♦️ Use when you need fine-grained control

💡 Which one to choose
♦️ Use Managed by default
♦️ Use Unmanaged when you really need control

👉 We'll dive deeper into Fastify Plugins in the upcoming posts. Stay tuned!! 🔔

Follow Nitin Kumar for daily valuable insights on LLD, HLD, Distributed Systems and AI.
♻️ Repost to help others in your network.

#javascript #node #sequelize #sql #mysql
Every backend developer has faced this dilemma:

❌ Auto-increment IDs? You're leaking your business volume to anyone paying attention. order_id=100420 at 10am, 100780 at 2pm = 360 orders in 4 hours. Your growth curve, exposed. 📉

❌ UUID v4? 128-bit random keys destroy your B+ tree index performance once data outgrows memory. Every insert becomes a cache miss. 😩

❌ Snowflake-style timestamp IDs? You married the wall clock. NTP step backward? VM resume after snapshot? Your ID generator either emits duplicates or stalls. ⏱️

There's a fourth option, and it sidesteps all three.

permid64 generates 64-bit IDs from a simple idea:
🔑 "Uniqueness comes from a counter. The random-looking surface comes from a reversible permutation."

A persistent counter provides strictly monotonic uniqueness, with no wall clock involved, ever. A bijection over 64 bits maps those sequential values into opaque-looking IDs that reveal nothing about your business volume.

The best part? It's FULLY DECODABLE. When an anomalous ID appears in a production log, you can instantly recover the instance and sequence number, no database lookup required.

👇 Quick look at the Python implementation:

from permid64 import Id64

gen = Id64.multiplicative(instance_id=42, state_file="orders.state")
token = gen.next_base62()  # e.g. '3kTMd92Hx7Q'
print(f"New Order: ORD_{token}")

# Incident tracing: decode instantly
meta = gen.decode_base62('3kTMd92Hx7Q')
print(f"Instance: {meta.instance_id}, Sequence: {meta.sequence}")

You get compact, URL-safe tokens for external use, and full traceability for ops. No trade-off required.

📦 pip install permid64
🔗 Full technical deep-dive (B+ tree analysis, bijection proofs, Feistel network internals) in the comments 👇

#Python #SoftwareEngineering #BackendDevelopment #Database #IDGeneration #Microservices #OpenSource
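If the "reversible permutation" idea feels abstract, the core trick fits in a few lines of plain Python. To be clear, this is not permid64's actual internals, just a minimal multiplicative bijection over 64 bits; the multiplier is an arbitrary odd constant chosen for illustration.

```python
MASK = (1 << 64) - 1
MULT = 0x9E3779B97F4A7C15      # any odd multiplier is invertible mod 2**64
INV = pow(MULT, -1, 1 << 64)   # its modular inverse (Python 3.8+)

def encode(n: int) -> int:
    """Map a sequential counter value to an opaque-looking 64-bit ID."""
    return (n * MULT) & MASK

def decode(x: int) -> int:
    """Recover the original counter value from the ID."""
    return (x * INV) & MASK

# Sequential inputs, scattered-looking outputs, exact round-trip:
assert all(decode(encode(n)) == n for n in range(10_000))
```

Because the map is a bijection, uniqueness of the counter carries over to the IDs for free, and decoding needs no lookup table.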
A senior developer looked at my SQL query and said: "It works. But it will fail in production."

I had written what I thought was a solid query. It passed every test. It returned the correct output. It even handled edge cases.

Then came the review. "This will scan the entire table. On real data, it won't finish."

That moment changed how I think about SQL. I had been optimizing for correctness. Production systems require optimizing for scale and efficiency.

What I learned the hard way:
→ A correlated subquery inside a WHERE clause can destroy performance
→ NOT IN behaves unpredictably with NULLs
→ Indexes become useless when columns are wrapped in functions
→ EXPLAIN PLAN is not optional; it's the starting point

Most SQL problems are not logic problems. They are execution problems. That's the gap between writing queries… and writing queries that survive production.

If you've worked with large datasets, you've seen this happen.

#SQL #DataEngineering #DataAnalytics #Learning #Python
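The NOT IN point is worth seeing once; it isn't really unpredictable, it's three-valued NULL logic doing exactly what the standard says. A self-contained demo using Python's stdlib sqlite3 (table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders(id INTEGER);
    CREATE TABLE cancelled(id INTEGER);
    INSERT INTO orders VALUES (1), (2);
    INSERT INTO cancelled VALUES (1), (NULL);
""")

# NOT IN against a list containing NULL matches NOTHING -- even id=2
# vanishes, because 2 <> NULL evaluates to NULL, not TRUE:
not_in_rows = conn.execute(
    "SELECT id FROM orders WHERE id NOT IN (SELECT id FROM cancelled)"
).fetchall()
assert not_in_rows == []

# NOT EXISTS sidesteps the NULL trap and returns the intended row:
not_exists_rows = conn.execute("""
    SELECT id FROM orders o
    WHERE NOT EXISTS (SELECT 1 FROM cancelled c WHERE c.id = o.id)
""").fetchall()
assert not_exists_rows == [(2,)]
```

Same intent, two spellings, and only one of them survives a NULL sneaking into the subquery.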
Ever find that your Claude Cowork tasks... drag?

I have built a few dashboards using Cowork, because the source data lives in 4 different systems, and I need to be able to hand these over to non-technical maintainers. This is good in principle BUT has a few drawbacks:
1. It eats all the tokens.
2. It's slow: running the Big Dashboard takes over 30 minutes every day.
3. I have to manually reauthenticate the connectors each time. If I forget, I get a report with "missing data".

I thought about the objective again, not just the tool. The objective here is:

** Extract the same data set from sources, updated periodically (e.g. daily, or on-demand), and populate the same set of interactive HTML files to allow everyone to visualise the data **

This doesn't need an LLM. It needs predictable, cheap execution.

I used Claude to build a Python script that:
1. Logs in to my data sources
2. Downloads the data (same queries every time, remember?)
3. Writes it to a set of JavaScript constants (e.g. const REVENUE_BY_MONTH = [["2026-04","US",x], ["2026-04","GB",y], ["2026-04","NL",z], ...]) that let the HTML dashboard run a set of queries over the top of the data, filtering by geo, by time range, by product category, etc.

It also converted an existing dashboard download into a template that displays this data.

I can now distribute a small package:
- Python script
- HTML template

And that's it!
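Step 3 is the neat part: the "database" the dashboard queries is just a generated JavaScript file. A minimal sketch of that emit step in Python; the data values and file name here are hypothetical stand-ins for the real extract output.

```python
import json

# Stand-in for whatever the real download step returns.
revenue_by_month = [
    ["2026-04", "US", 120.0],
    ["2026-04", "GB", 95.5],
    ["2026-04", "NL", 40.2],
]

# json.dumps output is valid JavaScript array-literal syntax, so a static
# HTML page can load this file with a plain <script src="data.js"> tag.
with open("data.js", "w") as f:
    f.write("const REVENUE_BY_MONTH = " + json.dumps(revenue_by_month) + ";\n")
```

No server, no LLM call, no reauthentication: regenerate the file on a schedule and the dashboard stays current.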
If you're still building SQL queries using string concatenation… you're making your life harder than it needs to be.

Not because SQL is bad, but because treating queries like strings is an engineering liability. It works in dev. It breaks in production.

Developers are still duct-taping raw queries together like this:

"SELECT * FROM users WHERE age > " + str(user_input)

If your queries depend on + str(user_input), you're not just writing brittle code; you're opening the door to bugs and injection risks.

On the flip side, bringing in a massive ORM just to handle a few complex joins is severe overkill. I've been there:
• Debugging messy query strings
• Chasing silent bugs
• Rewriting the same logic again and again

You need a middle ground. That's where PyPika comes in.

PyPika is a pure SQL query builder that sits in the perfect sweet spot and gives you structure without losing control:
✅ Writes in pure Python
✅ Natively parameterizes inputs (safer, avoids injection issues)
✅ Makes queries highly composable (letting FastAPI and Pydantic handle the rest)

I broke down exactly why this tool is a massive upgrade over raw strings and when you should (and shouldn't) use it. Breakdown in the carousel 👇

Curious: how are you handling dynamic SQL today?

#Python #SQL #DataScience #DataEngineering #BackendEngineering #SoftwareArchitecture #TechTips
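Whatever builder you pick, the underlying fix is parameter binding. A self-contained illustration of the concatenation risk using Python's stdlib sqlite3 (the table, row, and hostile input are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users(name TEXT, age INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 30)")

user_input = "0 OR 1=1"  # hostile "age" value

# Concatenation splices the input into the SQL itself, so the
# attacker rewrites your WHERE clause and the filter disappears:
unsafe = "SELECT name FROM users WHERE age > " + str(user_input)
leaked = conn.execute(unsafe).fetchall()
assert len(leaked) == 1  # the row came back despite the bogus "age"

# Parameter binding treats the input as a value, never as SQL,
# so the injection payload is inert:
safe = conn.execute(
    "SELECT name FROM users WHERE age > ?", (user_input,)
).fetchall()
assert safe == []
```

Query builders like PyPika generate the parameterized form for you; the point is that the value and the SQL text travel separately.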