Built an Event Scheduler Using Heaps and Hash Tables in Python

Hi everyone, this week I implemented an Event Scheduler system focusing on algorithm efficiency and scalable data structures.

Key Data Structures Used:
• Hash Table → O(1) event lookup by ID
• Min-Heap → O(log n) priority management
• Timestamp filtering → efficient range queries

The scheduler supports:
✔ Adding events
✔ Updating priorities
✔ Cancelling events
✔ Retrieving the next event
✔ Querying events within a time range

This project reinforced how critical data structure selection is for system performance. Efficient design can transform potentially O(n) operations into O(1) or O(log n).

Excited to continue building more algorithm-focused systems.

#Python #DataStructures #Algorithms #ComputerScience #Heap #HashTable
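The post doesn't include code, so here is a minimal sketch of how a dict and `heapq` can combine for O(1) lookup/cancel and O(log n) priority management, using lazy deletion. All names are illustrative, not the author's implementation:

```python
import heapq
import itertools

class EventScheduler:
    """Min-heap ordered by priority; dict for O(1) lookup; lazy deletion."""

    def __init__(self):
        self._heap = []                # entries: (priority, seq, event_id)
        self._events = {}              # event_id -> (priority, payload)
        self._seq = itertools.count()  # tie-breaker for equal priorities

    def add(self, event_id, priority, payload=None):
        self._events[event_id] = (priority, payload)
        heapq.heappush(self._heap, (priority, next(self._seq), event_id))

    def cancel(self, event_id):
        # O(1): drop from the dict; the stale heap entry is skipped later
        self._events.pop(event_id, None)

    def update_priority(self, event_id, new_priority):
        _, payload = self._events[event_id]
        self.add(event_id, new_priority, payload)  # old heap entry goes stale

    def next_event(self):
        # Pop until the heap head matches the live dict entry
        while self._heap:
            priority, _, event_id = heapq.heappop(self._heap)
            live = self._events.get(event_id)
            if live is not None and live[0] == priority:
                del self._events[event_id]
                return event_id, priority
        return None

sched = EventScheduler()
sched.add("backup", priority=5)
sched.add("deploy", priority=1)
sched.add("email", priority=3)
sched.cancel("email")
print(sched.next_event())  # ('deploy', 1)
```

Lazy deletion keeps `cancel` at O(1) by leaving stale entries in the heap; they are discarded the next time `next_event` pops them.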
Getting the "plumbing" right before the ML takes over.

I'm currently building a House Price Valuation System, and if there's one thing my CS background has taught me, it's that a model is only as good as the data pipeline behind it.

This screenshot is from the Data Preprocessing phase. I'm using Python (Pandas/NumPy) to handle the messy reality of raw data, things like categorical imputation and logical defaults, so the data is actually structured and ready for testing in the ML models.

Whether it's an ML project or a business dashboard, I've found that the real engineering happens in the "boring" parts: the cleaning, the logic, and the automated pipelines. Once the technical foundation is solid, the rest usually falls into place.

#CSEngineer #Python #MachineLearning #SystemArchitecture #BuildingInPublic
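The screenshot isn't reproduced here, but the two preprocessing steps named (categorical imputation and logical defaults) might look roughly like this in pandas. Column names are made up for illustration, not taken from the author's project:

```python
import numpy as np
import pandas as pd

# Toy housing data with the kinds of gaps raw datasets have
df = pd.DataFrame({
    "garage_type": ["Attached", None, "Detached", None],
    "garage_cars": [2, np.nan, 1, np.nan],
    "lot_frontage": [65.0, np.nan, 80.0, 70.0],
})

# Categorical imputation: a missing garage type usually means "no garage"
df["garage_type"] = df["garage_type"].fillna("None")

# Logical default: no garage implies zero garage capacity
no_garage = df["garage_type"] == "None"
df.loc[no_garage, "garage_cars"] = df.loc[no_garage, "garage_cars"].fillna(0)

# Numeric imputation: fill remaining gaps with the column median
df["lot_frontage"] = df["lot_frontage"].fillna(df["lot_frontage"].median())

print(int(df.isna().sum().sum()))  # 0 (no missing values remain)
```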
Day 24 of 100 Completed

Today reinforced cycle detection patterns and continued working with real-world data through EDA.

• #141 - Linked List Cycle (Easy) - solved
• Continued EDA on dataset

🔎 Focus Areas
• Fast-slow pointer technique for cycle detection
• Recognizing repeated patterns across different problem types
• Going deeper into data understanding and cleaning

💡 Key Takeaways (DSA)
📌 #141 Linked List Cycle
This is a classic application of Floyd's Cycle Detection:
• use slow and fast pointers
• if they meet → cycle exists
• no extra space needed, efficient and elegant
Key insight: cycle detection isn't limited to numbers; it applies to linked structures as well.

🚀 Python + EDA
Continued working on EDA and exploring the dataset further.

💡 Key Takeaways (Python)
• Better understanding of missing values and distributions
• More confidence in using Pandas for exploration
• Visualization is helping uncover patterns in data

⚡ Honest Reflection
This was a steady day. Not very difficult, but important for reinforcing patterns. Cycle detection is now clearly a recurring concept across problems, which makes it easier to recognize. EDA still needs depth, especially in drawing meaningful insights instead of just running operations.

Consistency is holding. Progress is gradual but real.

Patterns recognized: Fast-Slow Pointers | Cycle Detection | Linked Lists | Data Cleaning | EDA | Pattern Recognition

#100DaysOfCode #DSA #Python #EDA #LinkedList #LeetCode #BuildInPublic #CodingJourney #Consistency
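The post describes the fast-slow pointer approach without showing code; the standard implementation it refers to (not necessarily the author's exact solution) fits in a few lines:

```python
class ListNode:
    def __init__(self, val=0):
        self.val = val
        self.next = None

def has_cycle(head):
    """Floyd's cycle detection: O(n) time, O(1) extra space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next           # advances one step
        fast = fast.next.next      # advances two steps
        if slow is fast:           # pointers can only meet inside a cycle
            return True
    return False

# Build 1 -> 2 -> 3 -> (back to 2)
a, b, c = ListNode(1), ListNode(2), ListNode(3)
a.next, b.next, c.next = b, c, b
print(has_cycle(a))  # True
```

If there is a cycle, the fast pointer gains one node on the slow pointer per step, so it must eventually land on it; if there is no cycle, the fast pointer simply runs off the end.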
If You Don't Understand This Problem, You Don't Understand Stacks

Today I tackled a fundamental problem that looks simple at first, but really tests your understanding of logic and data structures.

💡 The Challenge:
Given a string of brackets () {} [ ], determine whether it is valid.

🧠 My Approach:
Instead of checking everything at the end, I used a stack (LIFO principle) to validate each step in real time.
• Push opening brackets
• On closing bracket → match with the last opened one
• If mismatch occurs → invalid
• If everything matches & stack is empty → valid

🔥 Key Learning:
This problem taught me how powerful simple data structures can be when used correctly.

🐍 Python Solution 👇

📌 Consistency in solving such problems is helping me build strong problem-solving skills.

#Python #DSA #FullStack #AI #Logic #LeetCode #AIDriven
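The solution image didn't survive into this text version; the stack-based approach described above reconstructs to something like this (a standard version of the algorithm, not the original screenshot):

```python
def is_valid(s: str) -> bool:
    """Validate a bracket string using a stack (LIFO)."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)          # remember the opener
        elif ch in pairs:
            # a closer must match the most recent unmatched opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                  # every opener must have been closed

print(is_valid("{[()]}"))  # True
print(is_valid("([)]"))    # False
```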
I recently built a Python project exploring optimal trade execution using the Almgren–Chriss framework. The model compares TWAP, VWAP-style expected-volume execution, and AC-optimal strategies under market impact, slippage, and execution risk.

Key components:
→ Market data ingestion (yfinance + synthetic fallback)
→ Heuristic calibration of market impact parameters
→ Pathwise slippage simulation
→ Monte Carlo analysis of execution costs
→ Parameter sensitivity study
→ Alpha decay modeling
→ Walk-forward backtesting
→ 9-panel visualization dashboard

This project helped me better understand the trade-off between execution cost, risk, and timing in real-world trading.

GitHub: https://lnkd.in/ev6Pxf66

#QuantFinance #AlgorithmicTrading #Python #Finance #DataScience #Quant
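For readers unfamiliar with the framework: the closed-form Almgren–Chriss holdings trajectory, the quantity an AC-optimal strategy is built on, can be sketched in a few lines. Parameter values below are arbitrary illustrations, not calibrated figures from the project:

```python
import math

def ac_trajectory(X, T, N, lam, sigma, eta):
    """Almgren-Chriss optimal holdings x(t_k): front-loaded when the
    risk-aversion lam > 0; approaches TWAP as lam -> 0."""
    kappa = math.sqrt(lam * sigma**2 / eta)   # urgency parameter
    return [X * math.sinh(kappa * (T - T * k / N)) / math.sinh(kappa * T)
            for k in range(N + 1)]

def twap_trajectory(X, T, N):
    """Linear liquidation: equal-sized slices."""
    return [X * (1 - k / N) for k in range(N + 1)]

# Liquidate X = 1000 shares over T = 1 day in N = 10 slices
ac = ac_trajectory(X=1000, T=1.0, N=10, lam=2e-4, sigma=0.95, eta=2.5e-6)
print(round(ac[0]), round(ac[-1]))  # 1000 0
```

With positive risk aversion the sinh profile sells faster early (holdings sit below the TWAP line at every intermediate step), trading higher expected impact cost for lower variance, which is exactly the cost/risk/timing trade-off the post mentions.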
Struggling to improve your ML pipeline? Looking for new feature ideas that actually help your model?

We built features_goldmine — a Python package designed to automate feature engineering for tabular data.
👉 https://lnkd.in/d_VzuKMb

Instead of manually trying random transformations, it:
• generates a wide range of candidate features,
• applies different feature engineering strategies,
• removes weak or redundant ideas,
• keeps only features that show predictive value.

It works directly on raw tabular data and integrates easily into existing ML workflows. The goal is simple: improve model performance with minimal code changes and less manual feature engineering.

If you work with tabular datasets, give it a try — and let me know what you think.
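The package's own API isn't shown in the post, so here is a toy sketch of the generate-then-filter idea it describes, in plain pandas/NumPy. This illustrates the concept only and is not features_goldmine's actual interface:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"a": rng.uniform(1, 10, 200), "b": rng.uniform(1, 10, 200)})
y = df["a"] * df["b"] + rng.normal(0, 1, 200)   # target depends on a*b

# 1. Generate candidate features from simple transformation strategies
candidates = {
    "a_times_b": df["a"] * df["b"],
    "a_over_b": df["a"] / df["b"],
    "log_a": np.log(df["a"]),
    "noise": pd.Series(rng.normal(size=200)),   # a deliberately useless one
}

# 2. Keep only candidates that show predictive value for the target
kept = {name: col for name, col in candidates.items()
        if abs(np.corrcoef(col, y)[0, 1]) > 0.3}

print("a_times_b" in kept, "noise" in kept)  # True False
```

A real tool would use stronger scoring than raw correlation (e.g. model-based importance) and also prune redundant survivors, but the pipeline shape is the same: generate broadly, score, keep the winners.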
🚀 Learning Update: Scaling and Pipelines

I learned something subtle but very important: why scaling your data can completely change your results.

⚠️ The Problem
K-Means uses distance. So if one feature has larger values than the others, it dominates the clustering.

📊 Example
- Feature A range: 0 → 1
- Feature B range: 0 → 1000
Feature B will control everything.

✅ The Solution: StandardScaler
Scaling transforms the data so that:
- Mean = 0
- Variance = 1
Now all features contribute equally.

🔗 Best Practice: Pipelines
Instead of doing the steps separately (scale the data, then cluster), we combine them:
👉 Pipeline = clean + reusable workflow

💡 My Takeaway
Before this, I thought clustering was just algorithm + data. Now I see that data preparation can make or break one's model. Small step, big impact.

#MachineLearning #DataPreprocessing #DataScience #Python #DataCamp #DataCampAfrica
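The scale-then-cluster workflow described above is a one-liner with scikit-learn's Pipeline. A minimal sketch with made-up data, reproducing the post's two-feature example:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Feature A in [0, 1], feature B in [0, 1000]; the two true clusters
# are separated only along A, the small-scale feature
X = np.vstack([
    np.column_stack([rng.uniform(0.0, 0.1, 50), rng.uniform(0, 1000, 50)]),
    np.column_stack([rng.uniform(0.9, 1.0, 50), rng.uniform(0, 1000, 50)]),
])

# One reusable object: fit_predict() scales first, then clusters
model = make_pipeline(StandardScaler(),
                      KMeans(n_clusters=2, n_init=10, random_state=0))
labels = model.fit_predict(X)

# After scaling, the clusters split cleanly along feature A
print(len(set(labels[:50])), len(set(labels[50:])))  # 1 1
```

Without the `StandardScaler` step, feature B's 0 to 1000 range dominates every distance computation and the split along A is lost, which is exactly the failure mode the post describes.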
I have been working on a Smart EDA (Exploratory Data Analysis) system that automatically analyzes datasets and generates interactive reports.

It includes:
📊 Missing value analysis
📈 Correlation heatmaps
📉 Distribution plots
🍰 Pie charts for categorical data
⚠️ Data quality alerts
📄 HTML report generation

This project helped me understand how real-world data analysis systems are built.

💡 Next goal: Convert it into a full Streamlit web application.

#DataScience #Python #EDA #MachineLearning #DataAnalysis
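A stripped-down sketch of the first report sections (missing value analysis, the numbers behind a correlation heatmap, and a quality alert) in plain pandas. Column names and the alert threshold are illustrative, not taken from the author's system:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "price": [100.0, 120.0, np.nan, 95.0, 110.0],
    "area": [50.0, 62.0, 48.0, np.nan, 55.0],
    "rooms": [2, 3, 2, 2, np.nan],
})

# Missing value analysis: percentage of NaNs per column
missing_pct = df.isna().mean().mul(100).round(1)

# Correlation matrix: the data behind a heatmap
corr = df.corr(numeric_only=True)

# Data quality alert: flag columns above a missingness threshold
ALERT_THRESHOLD = 30.0
alerts = missing_pct[missing_pct > ALERT_THRESHOLD].index.tolist()

print(missing_pct.to_dict())  # each column: 1 NaN out of 5 rows -> 20.0
print(alerts)                 # [] (nothing exceeds 30%)
```

From there, an HTML report is mostly a matter of rendering these tables (e.g. `DataFrame.to_html()`) plus plot images.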
Raw data doesn’t become useful because you visualise it – it becomes useful because you model it properly. SQL for shaping logic. Python for cleaning and exploration. dbt for turning transformations into reliable, version-controlled data products. And GitHub is where all of it stops being “analysis” and starts becoming engineering. That’s the shift: from writing queries to building systems.
🚀 Built & shipped my own Python package: finind

Over the past few weeks, I've been working on a lightweight library focused on financial market analysis using pandas — and I'm happy to share that it's now live on PyPI.

📊 What it does:
• Core indicators: SMA, EMA, RSI, ATR
• Signal generation: Golden/Death Cross, MACD crossovers, RSI signals
• Market structure: Higher Highs, Lower Lows, Swing points

Everything is vectorized, clean, and designed to plug directly into quant workflows, dashboards, or ML pipelines.

💡 Why I built it:
While working on market dashboards and analysis, I realized that:
• existing libraries can be bulky or rigid,
• signal logic often gets duplicated across projects,
• there's room for a cleaner, modular approach.

So I built finind to keep things simple, reusable, and extensible.

📈 Current traction:
• Already crossed 960+ downloads 🚀
• Actively iterating with new features

🔗 Check it out: https://lnkd.in/guKPMD2J
⚙️ Tech stack: Python, pandas, numpy
📦 Use cases: Quant research, backtesting, trading dashboards, feature engineering

Would love feedback from folks working in:
• Data science / ML
• Quant / trading systems
• Time-series analytics

If you find it useful, feel free to check it out and share your thoughts!

#Python #DataScience #Quant #Finance #OpenSource #MachineLearning #Trading
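To show what "vectorized" means in this context, a golden-cross signal in plain pandas looks something like the following. finind's actual API isn't documented in the post, so this uses generic pandas calls and synthetic price data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
close = pd.Series(100 + rng.normal(0, 1, 300).cumsum(), name="close")

# Vectorized moving averages: no Python-level loop over rows
fast = close.rolling(50).mean()
slow = close.rolling(200).mean()

# Golden Cross: fast MA crosses above slow MA (Death Cross is the reverse)
above = fast > slow
golden_cross = above & ~above.shift(1, fill_value=False)

print("golden crosses:", int(golden_cross.sum()))
```

Because every step is a whole-Series operation, the same few lines scale from a single ticker to a wide DataFrame of instruments, which is what makes this style convenient for dashboards and ML feature pipelines.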