Day 10 - TOP / LIMIT

Every data professional eventually meets one language: SQL.

TOP (SQL Server) or LIMIT (MySQL, PostgreSQL) is used to control how many rows a query returns. It's helpful when you want to preview a small portion of a large dataset, check the first few records, or quickly inspect your data without fetching everything.

#SQL #DataSkills #DataAnalytics #LearningInPublic #THE_Africa_Idealist #30DaysOfSQL #30DaysSQLCards
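A minimal sketch of what LIMIT does, using Python's built-in sqlite3 module as a stand-in for a real server; the `users` table and its contents are illustrative:

```python
import sqlite3

# In-memory database standing in for a much larger table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user_{i}",) for i in range(1000)])

# LIMIT fetches only the first 5 rows instead of all 1000.
preview = conn.execute("SELECT id, name FROM users LIMIT 5").fetchall()
print(preview)
```

On SQL Server the equivalent would be `SELECT TOP 5 id, name FROM users`.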
Cephas Ukuedojor’s Post
#KaliyonaSQL #KaliyonaDataAnalytics #KaliyonaWithGayathriBhat

Opened MySQL Workbench, connected to the local instance, and opened a new query tab.

#SQL #DataAnalytics #LearningJourney #Beginners
Find the query consuming the most total time on your database.

SELECT left(query, 80) AS query_preview,
       calls,
       round(total_exec_time::numeric / 1000, 2) AS total_seconds,
       round(mean_exec_time::numeric, 2) AS avg_ms,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

The key insight: sort by total_exec_time, not mean_exec_time. A query averaging 2ms but called 5 million times consumes 10,000 seconds of database time. A query averaging 500ms called twice consumes 1 second. The first query is 10,000x worse for your database, even though it looks fast.

This is the single most common mistake I see in query monitoring. People optimise the slowest individual query when they should be optimising the query with the highest total cost.

If pg_stat_statements isn't enabled yet:

shared_preload_libraries = 'pg_stat_statements'

Requires a restart, but it's the single most impactful thing you can do for PostgreSQL observability. Every production database should have it.

Run this query. The top result will probably surprise you.

#PostgreSQL #Database #QueryOptimization #Performance #SQL
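The arithmetic behind "total beats mean" can be checked directly; the numbers below just restate the post's illustrative figures:

```python
# Total database time = mean execution time x call count.
# Working in integer milliseconds keeps the arithmetic exact.
fast_but_hot_s = 2 * 5_000_000 / 1000   # 2 ms average, 5 million calls -> seconds
slow_but_rare_s = 500 * 2 / 1000        # 500 ms average, 2 calls -> seconds

print(fast_but_hot_s)   # 10000.0 seconds of total database time
print(slow_but_rare_s)  # 1.0 second
print(fast_but_hot_s / slow_but_rare_s)  # 10000.0x difference in total cost
```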
Continuing my work on query performance.

You ship a query on Monday. It runs in 2 ms. Six months later, the same query, same SQL, same indexes, takes 4 seconds. Nothing changed. Except one thing did: the table grew.

Welcome to Big O notation. If you understand this concept, you can understand why queries that look fine today become a problem at scale. It is one of the most important ideas for understanding how databases really behave as data grows.

I wrote a new post about it: https://lnkd.in/deEQdHUG

#mysql #readyset #acedirector
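A toy sketch of the growth effect, not the author's post: counting comparisons for a full scan (O(n)) versus a binary search standing in for a B-tree index lookup (O(log n)). As the "table" grows, only the first number grows with it.

```python
def linear_find(rows, target):
    """Full table scan: comparisons grow linearly with the row count."""
    steps = 0
    for i, value in enumerate(rows):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

def binary_find(rows, target):
    """B-tree-style lookup approximated by binary search on sorted data."""
    steps, lo, hi = 0, 0, len(rows) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if rows[mid] == target:
            return mid, steps
        if rows[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

rows = list(range(1_000_000))                 # one million sorted "rows"
_, scan_steps = linear_find(rows, 999_999)    # ~1,000,000 comparisons
_, index_steps = binary_find(rows, 999_999)   # ~20 comparisons
```

Same query, same data, wildly different work: that gap is what silently widens as the table grows.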
Turn PostgreSQL into a time-series engine with a single extension ⏱️

In standard PostgreSQL, all rows live in one table. As time-series data grows into the millions, queries cannot skip irrelevant data, so even recent lookups scan far more than needed. Timescale solves this with hypertables, which automatically partition data into time-based chunks. Queries only touch the relevant chunks, leaving the rest untouched.

Other capabilities:
• Shrink storage by up to 95% with columnar compression that stays fully queryable
• Faster queries with continuous aggregates that refresh only new data
• Built-in retention policies to automatically remove old data

Plus, TimescaleDB is open source! Just install the extension and continue using PostgreSQL. 📦

Link: https://fandf.co/3Qef5B0

#PostgreSQL #TimeSeries #SQL #Sponsor
SQL Day 30 (studying: NULL is not zero. NULL is "I don't know." 📌)

Today I learned something that sounds simple but changes everything.

NULL ≠ 0. NULL ≠ empty string. NULL = unknown. And if you don't handle it, your calculations break.

SQL has built-in functions to handle NULL values; the most common are:

COALESCE() - the preferred standard (works in MySQL, SQL Server, PostgreSQL and Oracle)
IFNULL() - MySQL
ISNULL() - SQL Server
NVL() - Oracle
IsNull() - MS Access

SQL syntax:

SELECT COALESCE(column_name, 'No data') FROM table_name;

If the column is NULL, show "No data" instead.

Why this matters: a NULL value doesn't break your query anymore. You decide what fills the gap.

#DataAnalytics #LearningSQL #DataJourney #SQLBeginners #WomeninTech
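A runnable sketch of both points, using Python's sqlite3 module (which also supports COALESCE); the `orders` table and its rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, note TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "rush"), (2, None), (3, None)])

# COALESCE returns its first non-NULL argument, so NULL notes become 'No data'.
rows = conn.execute("SELECT id, COALESCE(note, 'No data') FROM orders").fetchall()
print(rows)

# NULL is "unknown", not zero: COUNT(note) silently skips the NULL rows.
count_all, count_notes = conn.execute(
    "SELECT COUNT(*), COUNT(note) FROM orders").fetchone()
```

That COUNT(*) vs COUNT(column) gap is exactly the kind of silent calculation difference the post warns about.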
A quick look at CHAR vs VARCHAR in MySQL 🚀 CHAR stores values at a fixed length, padding shorter strings, while VARCHAR stores only the required length plus a small length prefix. This small difference plays a big role in storage usage and performance. Choosing the right type helps in building efficient databases. #MySQL #SQL #DatabaseDesign
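A simplified Python model of the storage difference — the 1-byte length prefix is an illustrative simplification (MySQL uses 1 or 2 bytes depending on the declared length), not the exact on-disk format:

```python
# CHAR(10): always pads the value out to the declared fixed width.
def char_storage(value, width=10):
    return len(value.ljust(width).encode())   # always `width` bytes

# VARCHAR: a small length prefix plus only the bytes actually used.
def varchar_storage(value):
    return 1 + len(value.encode())            # length byte + actual content

print(char_storage("hi"))      # 10 bytes regardless of content
print(varchar_storage("hi"))   # 3 bytes
```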
🔹 FOREIGN KEY in MySQL (in words)

A foreign key is a column in one table that is used to connect it with another table.

👉 It takes its values from a column (usually the primary key) in another table
👉 It helps maintain relationships between tables
👉 It prevents invalid or unrelated data

🔹 Simple explanation

One table = parent table (original data)
Another table = child table (uses that data)
The foreign key makes sure the child table only stores values that exist in the parent table.

🔹 Example in words

The Department table has dept_id. The Employee table also has dept_id. An employee can only use department IDs that already exist.

🔹 Key points

Ensures data consistency
Avoids wrong entries
Creates links between tables

Special thanks to Anand Kumar Buddarapu Uppugundla Sairam Saketh Kallepu

#ForeignKey #MySQL #SQL #DBMS #DataIntegrity #TableRelationship #DatabaseBasics #LearnSQL
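The department/employee example above, made concrete with Python's sqlite3 module (SQLite needs `PRAGMA foreign_keys = ON` per connection; MySQL with InnoDB enforces foreign keys by default). Table and column names follow the post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific: enable enforcement
conn.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY,
    dept_id INTEGER REFERENCES department(dept_id))""")

conn.execute("INSERT INTO department VALUES (10, 'Engineering')")
conn.execute("INSERT INTO employee VALUES (1, 10)")      # OK: parent row exists

try:
    conn.execute("INSERT INTO employee VALUES (2, 99)")  # 99 is not a real dept
    rejected = False
except sqlite3.IntegrityError:
    rejected = True                                      # invalid entry blocked
```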
Building SQL fundamentals step by step through hands-on practice 💻

In this video, I worked with MySQL through Command Prompt and performed core database operations:

• Connected to the MySQL server using root credentials
• Listed available databases using SHOW DATABASES;
• Selected my working database with USE ankitadb;
• Initiated table creation using CREATE TABLE

During the process, I also encountered and corrected a database selection error, which improved my understanding of SQL syntax and command accuracy.

Small practical exercises like these are helping me strengthen my foundation in SQL and database management as I continue learning data-related technologies. Looking forward to exploring more advanced SQL concepts and real-world database operations 🚀

#SQL #MySQL #DatabaseManagement #DataAnalytics #LearningByDoing #TechJourney #SQLPractice #DataScienceLearning
📌 Why SQL Indexing Matters

An SQL index is typically implemented using data structures like B-Trees (the default in many databases) that allow the database to locate rows efficiently without scanning the full table.

Suppose you frequently run:

SELECT * FROM users WHERE email = 'abc@example.com';

Without an index, the database performs a full table scan (O(n)). Create an index:

CREATE INDEX idx_users_email ON users(email);

With the index, the database can traverse the B-Tree and find matching rows much faster (O(log n)).

✅ Faster filtering on WHERE clauses
✅ Better performance for joins
✅ Can optimize ORDER BY / GROUP BY
✅ Critical for scaling read-heavy applications

There are tradeoffs as well: extra storage usage and slower writes, because indexes must also be updated on every insert, update or delete. Use indexing for high-read, low-write columns, foreign keys and join columns, and frequently filtered or sorted fields. Do not index every column blindly. The best index is not "more indexes"; it's the right indexes for your query patterns.

#SQL #DatabaseOptimization #BackendDevelopment #SystemDesign #PostgreSQL #MySQL #SoftwareEngineering
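You can watch the planner switch from a scan to an index search with EXPLAIN QUERY PLAN. A sketch using Python's sqlite3 module (PostgreSQL and MySQL have their own EXPLAIN output formats); the table and index names mirror the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])

query = "SELECT * FROM users WHERE email = 'user500@example.com'"

# Without an index, the plan is a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

conn.execute("CREATE INDEX idx_users_email ON users(email)")

# With the index, the plan becomes an index search.
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

print(before)  # e.g. "SCAN users"
print(after)   # e.g. "SEARCH users USING INDEX idx_users_email (email=?)"
```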