If You Don't Understand This Problem, You Don't Understand Stacks

Today I tackled a fundamental problem that looks simple at first but really tests your understanding of logic and data structures.

💡 The Challenge: Given a string of brackets () {} [], determine whether it is valid.

🧠 My Approach: Instead of checking everything at the end, I used a stack (LIFO principle) to validate each step in real time.
• Push opening brackets
• On a closing bracket, match it against the most recently opened one
• If a mismatch occurs, the string is invalid
• If everything matches and the stack is empty at the end, the string is valid

🔥 Key Learning: This problem taught me how powerful simple data structures can be when used correctly.

🐍 Python Solution 👇

📌 Consistency in solving such problems is helping me build strong problem-solving skills.

#Python #DSA #FullStack #AI #Logic #LeetCode #AIDriven
Validating Brackets with a Stack in Python
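The approach described in the post can be sketched in a few lines (a minimal version; the function name `is_valid` is mine, and LeetCode's variant is usually called `isValid`):

```python
def is_valid(s: str) -> bool:
    """Return True if every bracket closes the most recently opened one."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)           # push opening brackets
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False           # mismatch, or nothing left to close
    return not stack                   # valid only if everything was closed
```

The stack holds exactly the brackets that are still open, so each closing bracket only ever needs to be compared against the top element.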
I just unlocked a new superpower this week: Regular Expressions (RegEx). It’s not just about searching for text anymore; it’s about defining complex patterns to validate and clean messy data. Key takeaways: ✅ Data Validation: Building robust patterns to verify user inputs like emails and URLs. ✅ Pattern Precision: Learning symbols like ^, $, \w, and + to match exactly the text I intend, nothing more. ✅ Efficiency: Using the re library to replace complex if/else logic with elegant, single-line patterns. #Python #RegEx #DataValidation #SoftwareEngineering #CleanCode #CS50P
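As a small illustration of the ^, $, \w, and + symbols in action, here is a deliberately simple email check with the `re` library (a sketch, not an RFC-complete validator):

```python
import re

# ^ and $ anchor the pattern to the whole string, \w matches word
# characters, and + means "one or more of the preceding item".
EMAIL = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def looks_like_email(s: str) -> bool:
    """Rough plausibility check, not full RFC 5322 validation."""
    return EMAIL.match(s) is not None
```

One compiled pattern replaces a chain of if/else checks on the string's shape, which is the efficiency win the post describes.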
Getting the "plumbing" right before the ML takes over. I’m currently building a House Price Valuation System, and if there’s one thing my CS background has taught me, it’s that a model is only as good as the data pipeline behind it. This screenshot is from the Data Preprocessing phase. I’m using Python (Pandas/NumPy) to handle the messy reality of raw data—things like categorical imputation and logical defaults—so the data is actually structured and ready for testing in the ML models. Whether it’s an ML project or a business dashboard, I’ve found that the real engineering happens in the "boring" parts: the cleaning, the logic, and the automated pipelines. Once the technical foundation is solid, the rest usually falls into place. #CSEngineer #Python #MachineLearning #SystemArchitecture #BuildingInPublic
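As an illustration of the kind of preprocessing described above, here is a minimal pandas sketch (the column names are hypothetical, not from the actual project):

```python
import numpy as np
import pandas as pd

# Hypothetical raw slice of housing data (illustrative columns only)
df = pd.DataFrame({
    "neighborhood": ["A", None, "B", "A"],
    "garage_cars": [2.0, np.nan, 1.0, np.nan],
    "pool_quality": [None, "Good", None, None],
})

# Categorical imputation: fill with the most frequent value (the mode)
df["neighborhood"] = df["neighborhood"].fillna(df["neighborhood"].mode()[0])

# Logical default: a missing pool quality usually means "no pool"
df["pool_quality"] = df["pool_quality"].fillna("None")

# Numeric imputation with the median, which is robust to outliers
df["garage_cars"] = df["garage_cars"].fillna(df["garage_cars"].median())
```

The point is that each fill rule encodes a domain decision, not just a mechanical cleanup step.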
🚀 Just built an end-to-end ML model to predict Insurance Charges! Worked on the classic insurance.csv dataset using Python, pandas, seaborn & scikit-learn. What I did: EDA + visualizations (age, BMI, smoker impact) Preprocessed data (OneHotEncoder + StandardScaler) Trained Linear Regression & Random Forest Regressor Model Results: Linear Regression: R² = 0.7836 | MAE = $4,181 Random Forest: R² = 0.8656 | MAE = $2,544 (Winner 🔥) Sample Prediction (40M, BMI 28.5, 2 kids, non-smoker, northwest): → Linear: $8,416 → Random Forest: $6,894 Great hands-on practice with regression pipelines! Would love your feedback 👇 Have you worked on similar projects? #DataScience #MachineLearning #Python #ScikitLearn #Regression
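A minimal sketch of such a regression pipeline, using synthetic stand-in data since insurance.csv isn't attached (the column names follow the post; the charge formula is invented for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny synthetic stand-in for insurance.csv
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "bmi": rng.normal(28, 4, n),
    "children": rng.integers(0, 4, n),
    "smoker": rng.choice(["yes", "no"], n),
    "region": rng.choice(["northwest", "northeast",
                          "southwest", "southeast"], n),
})
df["charges"] = (250 * df["age"] + 20000 * (df["smoker"] == "yes")
                 + rng.normal(0, 1000, n))

numeric, categorical = ["age", "bmi", "children"], ["smoker", "region"]

# Preprocessing and model in one object, so scaling/encoding is learned
# only from the training data
pipe = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("model", RandomForestRegressor(n_estimators=100, random_state=42)),
])

pipe.fit(df[numeric + categorical], df["charges"])

# Prediction input mirroring the post's example (40M, BMI 28.5, 2 kids,
# non-smoker, northwest)
sample = pd.DataFrame([{"age": 40, "bmi": 28.5, "children": 2,
                        "smoker": "no", "region": "northwest"}])
pred = pipe.predict(sample)
```

Wrapping OneHotEncoder and StandardScaler in a ColumnTransformer inside the Pipeline keeps the whole workflow in one fit/predict interface, which is what makes model comparison (Linear Regression vs Random Forest) clean.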
Today's topic is a tool combo breakdown focusing on three exciting combinations that can revolutionize your workflow and save you time. Whether it’s integrating Claude Code with Obsidian for a seamless knowledge management system or harnessing n8n combined with the Claude API to automate complex tasks, these tools offer specific benefits. Let's dive into one of our options: using Python along with the Claude API. This combo allows developers to leverage AI capabilities directly within their existing workflows. Here’s how you can set it up:

1. **Setup**: Ensure you have Python installed, then install the official `anthropic` SDK (`pip install anthropic`) and put your API key in the `ANTHROPIC_API_KEY` environment variable. If you plan to automate with n8n later, install that as well.

2. **Write Your Script**: Start with a simple script that sends a prompt to the Claude API and returns the reply (the model name below is illustrative; substitute a current one):

```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def get_response(prompt: str) -> str:
    """Send a prompt to Claude and return the text of the reply."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative; use a current model
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

# Example usage. Note: Claude has no live data access unless you add tools,
# so ask something self-contained rather than, say, today's weather.
result = get_response("Summarize the LIFO principle in one sentence.")
print(result)
```

3. **Integrate with Obsidian**: Next, you can wire this script into Obsidian using n8n to automate tasks. This setup can save significant time and effort, reducing manual processing and allowing for more efficient workflows.

Would you be interested in exploring further AI integration opportunities like this one? Let us know your thoughts or challenges in the comments below. #ClaudeCode #AIAutomation #AITools #BuildWithAI #loopfeedai
t-SNE: Visualizing What We Can't See Imagine 784 dimensions compressed to 2 — and the clusters you see tell you everything about the structure of the data. t-SNE makes the invisible visible. Day 27 of 60 → t-SNE — the most beautiful data visualization tool in ML. PCA finds linear components. t-SNE finds NON-LINEAR structure — preserving local neighborhoods. The idea: 1. Measure which points are close in high-dimensional space 2. Lay them out in 2D preserving those closeness relationships 3. Similar points cluster together, dissimilar ones spread apart What good t-SNE output looks like: → Tight clusters = data has natural groupings → Fuzzy boundaries = gradual transitions between groups → Outlier points far from clusters = anomalies CRITICAL caveats: 1. Distances between clusters are NOT meaningful (only within-cluster distances) 2. Results depend on "perplexity" parameter (try 5, 30, 50) 3. Never interpret the x/y axis — they're arbitrary t-SNE is for EXPLORATION, not prediction. But for making the invisible visible? Nothing compares. #tSNE #DataVisualization #MachineLearning #Python #60DaysOfML
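A minimal t-SNE run with scikit-learn, using the built-in 8x8 digits dataset (64 dimensions, a lightweight stand-in for MNIST's 784):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)  # 1797 samples, 64 features each
X = X[:500]                          # subsample to keep the run fast

# perplexity roughly controls the neighborhood size t-SNE tries to
# preserve; the post's advice applies: try 5, 30, 50 and compare
emb = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(X)
```

Plot `emb[:, 0]` against `emb[:, 1]` colored by label and the digit classes form visible clusters; per the caveats above, only the within-cluster structure is interpretable, not the axes or the gaps between clusters.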
🔷A simple train test split is not always enough. I learned this the hard way when my model looked great on paper and struggled on real data. 📌Here is what nobody tells you about splitting data properly. The basic split gives you two sets. Training and testing. That works for simple projects. But what if you need to tune your model? You test different settings, pick the best one, and evaluate on the test set. The problem is that you have now indirectly used the test set to make decisions. It is no longer a fair judge. This is where a three way split becomes important. 🔹X_train, X_temp, y_train, y_temp = train_test_split( X, y, test_size=0.3, random_state=42 ) 🔹X_val, X_test, y_val, y_test = train_test_split( X_temp, y_temp, test_size=0.5, random_state=42 ) Now you have three sets. Training set. The model learns here. 70 percent of your data. Validation set. You tune and compare models here. 15 percent. Test set. You evaluate the final model here. Once. Never again. 15 percent. The test set is sacred. You look at it exactly one time at the very end. One more thing that most people miss. Always stratify your split when your target column is imbalanced. 🔹train_test_split(X, y, stratify=y, test_size=0.2) stratify=y makes sure both sets have the same proportion of each class. Without it you might end up with a training set that barely sees the minority class and a model that has no idea it exists. The split is not a formality. It is a decision that shapes every result that follows. Get it right before you touch anything else. ❓What split ratio do you use for your projects and why? #DataScience #MachineLearning #Python
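Putting the post's two snippets together, a stratified 70/15/15 split looks like this (synthetic data for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Imbalanced toy dataset: roughly 90% class 0, 10% class 1
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           random_state=42)

# First cut: 70% train, 30% held back, stratified on the target
X_train, X_temp, y_train, y_temp = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Second cut: split the held-back 30% evenly into validation and test,
# stratifying again so both keep the class proportions
X_val, X_test, y_val, y_test = train_test_split(
    X_temp, y_temp, test_size=0.5, stratify=y_temp, random_state=42)
```

Stratifying both cuts guarantees the minority class shows up in all three sets in roughly the same proportion as in the full dataset.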
So there’s this exciting concept in data called “imputation.” Okay it’s not that exciting, I just like the name, but it’s actually pretty important. It’s basically when you deal with missing values by filling them in using the rest of the dataset. Not in a vague “surrounding data” way, but using actual methods like mean, median, or mode, sometimes forward or backward fill, and in more serious cases even models to estimate what should be there. The other option is to just delete the missing data. Either drop the rows or even the whole column. This is common with large datasets, especially when the missing values are small enough that removing them won’t mess with the overall analysis. But it’s not something you just do blindly, because depending on why the data is missing, you can end up biasing your results without realizing it. So yeah, it sounds like a small step, but it actually matters. #LearningInPublic #Python #DataCleaning #DataAnalysis #Data
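The options described above, sketched in pandas on toy data:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])

mean_filled = s.fillna(s.mean())      # mean imputation
median_filled = s.fillna(s.median())  # median imputation (robust to outliers)
ffilled = s.ffill()                   # forward fill: carry the last value down
bfilled = s.bfill()                   # backward fill: pull the next value up

# The other option: deletion
df = pd.DataFrame({"a": [1.0, np.nan, 3.0],
                   "b": [np.nan, np.nan, np.nan]})
dropped_rows = df.dropna()                   # drop rows with any missing value
dropped_cols = df.dropna(axis=1, how="all")  # drop all-missing columns
```

Which of these is safe depends on why the values are missing, which is exactly the bias point the post warns about.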
Since July I've been building a new reporting system from scratch. My IT team is Claude AI. 11 countries. Weekly Excel files. One automated pipeline. This is what it looks like. ↓ Over the next weeks I'll share how each part works — ingestion, data model, quality checks, outputs. First up: how do you parse 11 different Excel formats into one unified structure? #financeautomation #python #duckdb #claudeai
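One common pattern for unifying heterogeneous Excel exports is a per-source column map. A minimal sketch (the column names and countries here are hypothetical, not from the actual pipeline):

```python
import pandas as pd

# Hypothetical per-country column maps (illustrative only)
COLUMN_MAPS = {
    "DE": {"Umsatz": "revenue", "Datum": "date"},
    "FR": {"CA": "revenue", "Date": "date"},
}

def normalize(df: pd.DataFrame, country: str) -> pd.DataFrame:
    """Rename source columns to the unified schema and tag the origin."""
    out = df.rename(columns=COLUMN_MAPS[country])
    out["country"] = country
    return out[["country", "date", "revenue"]]

# In a real system each frame would come from pd.read_excel(...)
frames = [
    normalize(pd.DataFrame({"Umsatz": [100], "Datum": ["2024-01-05"]}), "DE"),
    normalize(pd.DataFrame({"CA": [200], "Date": ["2024-01-05"]}), "FR"),
]
unified = pd.concat(frames, ignore_index=True)
```

Keeping the schema knowledge in a declarative map per country means adding country number 12 is a config change, not new parsing code.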
My model hit 89% accuracy. I was proud of it. Then I tested it on different data. It dropped to 71%. Just like that. Same model. Same code. Totally different result. I had no explanation. The problem wasn't the model. It was how I was testing it. I was splitting my data once, 80% train, 20% test, trusting whatever number came out. My model wasn't learning real patterns. It was memorising that one specific slice of data. Cross-validation changed how I think about this completely. Instead of trusting one number, you get five. But here's what nobody told me early on: The standard deviation matters more than the mean. Mean: 0.87 │ Std: 0.02 → Stable. Trust it Mean: 0.87 │ Std: 0.12 → Fragile. Dig deeper Both look identical on a single split. Cross-validation exposes the truth. A single accuracy number isn't a result. It's a guess. I now run this before trusting any model, because a model that only works on the data you showed it isn't a model. It's just an expensive lookup table. Have you ever confidently presented a model that later turned out to be wrong? 👇 #MachineLearning #Python #DataScience #CrossValidation #LearningInPublic
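A minimal cross-validation run that reports both the mean and the standard deviation (synthetic data; the model choice is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=42)

# Five folds -> five scores instead of one; the spread matters as much
# as the average
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean={scores.mean():.2f}  std={scores.std():.2f}")
```

A high mean with a large std is exactly the "fragile" case above: the model's quality depends on which slice of data it happened to see.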