Daily Learning Log: DSA + Development — Day 13 🚀

Day 13 was about applying backend concepts practically.

Python (DSA):
✅ Practiced calculating the time complexity of different loop structures
✅ Solved questions involving nested loops
✅ Did dry runs to manually count iterations (see the sketch below this post)
✅ Strengthened my understanding of how O(n) and O(n²) behave as input size grows

Development (Node.js & MongoDB):
✅ Created a Schema and Model using Mongoose
✅ Implemented a POST API using User.create()
✅ Implemented a PUT API using findByIdAndUpdate()
✅ Understood the importance of { new: true }

Understanding how data is stored and how code scales makes backend development stronger. Small bugs teach big lessons.

Consistency continues. Learning one concept at a time 🚀

#Python #DSA #NodeJS #MongoDB #BackendDevelopment #LearningInPublic #MCA
Practicing DSA and Backend Development with Python and Node.js
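To make the dry-run idea concrete, here is a minimal Python sketch of counting iterations by hand; the loop shapes are illustrative examples, not the exact problems from the log.

```python
# Minimal sketch: counting iterations to compare how O(n) and O(n^2) loops grow.

def count_linear(n):
    ops = 0
    for _ in range(n):          # runs n times -> O(n)
        ops += 1
    return ops

def count_nested(n):
    ops = 0
    for i in range(n):          # outer loop: n iterations
        for j in range(n):      # inner loop: n iterations per outer pass -> n * n
            ops += 1
    return ops

for n in (10, 100, 1000):
    print(n, count_linear(n), count_nested(n))
# 10x more input means ~10x more work for O(n) but ~100x more for O(n^2).
```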
Daily Learning Log: DSA + Development — Day 17 🚀

Day 17 was focused on understanding Server-Side Rendering and debugging real project issues.

Python (DSA):
✅ Revised memory concepts (heap vs stack)
✅ Strengthened understanding of references and objects
✅ Practiced thinking about how data flows in memory

Development (Node.js + Express):
✅ Learned Server-Side Rendering
✅ Created the UI of my URL Shortener
✅ Fixed the “view not found” error
✅ Solved the “urls is not defined” issue by properly passing data to the template

Key takeaway:
👉 Backend development is not just about writing routes. It’s about understanding how data moves from database → server → view → browser.

Always open to feedback and guidance.

#Python #DSA #NodeJS #ExpressJS #MongoDB #LearningInPublic #MCA
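As a small illustration of the references-and-objects point above, here is a hedged Python sketch; the variable names and values are made up for the example.

```python
# Variables hold references to objects, so two names can point at the same list.
a = [1, 2, 3]
b = a              # b references the SAME list object, not a copy
b.append(4)
print(a)           # [1, 2, 3, 4] -> the change is visible through both names
print(a is b)      # True: one object, two names

c = a[:]           # a shallow copy creates a new list object
c.append(5)
print(a)           # still [1, 2, 3, 4] -> unaffected by changes to c
print(a is c)      # False
```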
🚀 Today I went deeper into building backend APIs using Python, FastAPI, and Async SQLAlchemy.

Instead of just learning theory, I implemented a mini backend system that includes:
🔹 REST API endpoints with FastAPI
🔹 Request & response validation using Pydantic
🔹 Async database integration with SQLAlchemy
🔹 SQLite async engine configuration
🔹 UUID-based primary keys for scalable data models
🔹 Automatic database table creation at startup
🔹 Error handling using HTTPException

Example endpoints implemented:
• GET /posts/{id} → Retrieve a post
• POST /post → Create a new post

Tech stack used today: Python 🐍, FastAPI ⚡, Pydantic 📦, Async SQLAlchemy 🗄️, SQLite (async driver)

Key learning today: Modern Python backend development is moving toward asynchronous architectures, which significantly improve scalability and performance for real-world applications.

Next steps in this project:
✔ File upload API
✔ Authentication system (JWT)
✔ Cloud deployment
✔ Production-grade database integration

Building every day. Improving every day.

#Python #FastAPI #BackendDevelopment #AsyncPython #SQLAlchemy #SoftwareEngineering #BuildInPublic
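For anyone curious what this stack can look like in code, here is a hedged, minimal sketch under the same assumptions (FastAPI, Pydantic, async SQLAlchemy, async SQLite, UUID keys, table creation at startup, HTTPException). The Post model and its fields are my own placeholders, not the exact schema from the project.

```python
# Minimal sketch only; requires: pip install fastapi uvicorn sqlalchemy aiosqlite
import uuid
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from sqlalchemy import String
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

engine = create_async_engine("sqlite+aiosqlite:///./posts.db")   # async SQLite engine
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)

class Base(DeclarativeBase):
    pass

class Post(Base):                                 # placeholder model
    __tablename__ = "posts"
    id: Mapped[str] = mapped_column(String(36), primary_key=True,
                                    default=lambda: str(uuid.uuid4()))  # UUID key
    title: Mapped[str] = mapped_column(String(200))
    body: Mapped[str] = mapped_column(String())

class PostIn(BaseModel):                          # request validation via Pydantic
    title: str
    body: str

class PostOut(PostIn):                            # response model adds the id
    id: str

app = FastAPI()

@app.on_event("startup")
async def create_tables():                        # automatic table creation at startup
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)

@app.post("/post", response_model=PostOut)
async def create_post(data: PostIn):
    async with SessionLocal() as session:
        post = Post(title=data.title, body=data.body)
        session.add(post)
        await session.commit()
        return PostOut(id=post.id, title=post.title, body=post.body)

@app.get("/posts/{post_id}", response_model=PostOut)
async def get_post(post_id: str):
    async with SessionLocal() as session:
        post = await session.get(Post, post_id)
        if post is None:                          # error handling with HTTPException
            raise HTTPException(status_code=404, detail="Post not found")
        return PostOut(id=post.id, title=post.title, body=post.body)
```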
📣 Part 2 is here. In our latest blog, we take pg_semantic_cache from concept to production — covering the practical details that matter when running semantic caching in real environments.

This post dives into:
✔ Tagging strategies for better cache organization
✔ Eviction policies to keep performance sharp
✔ Monitoring for visibility into cache behavior
✔ Python integration for seamless application use

If you’re building LLM-powered applications on PostgreSQL and want faster responses with fewer redundant calls, this follow-up is a must-read.

🔗 Read Part 2: https://hubs.la/Q045nKyF0

#PostgreSQL #Postgres #SemanticCaching #AI #LLMs #pgEdge #Python #OpenSource #PostgresqlProfessionals #PostgresqlDBA

Muhammad Aqeel
Daily Learning Log: DSA + Development — Day 20 🚀

🧠 Python (DSA):
✅ Practiced array traversal (normal + reverse)
✅ Solved frequency count problems using a dictionary
✅ Focused on avoiding off-by-one errors

🌐 Development (Node.js + Express + MongoDB):
✅ Revised authentication concepts
✅ Started building authentication for the URL shortener
✅ Fixed small bugs in backend routing

Day 20 complete ✅

#DSA #100DaysOfCode #BackendDevelopment #Consistency #LearningJourney
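As a rough illustration of the frequency-count and reverse-traversal practice above (the sample data is made up):

```python
# Frequency count with a plain dict: one pass over the data, O(n).
def frequency_count(items):
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    return counts

print(frequency_count([2, 3, 2, 5, 3, 2]))   # {2: 3, 3: 2, 5: 1}

# Reverse traversal without off-by-one errors: start at len(arr) - 1, stop before -1.
arr = [10, 20, 30]
for i in range(len(arr) - 1, -1, -1):
    print(arr[i])                             # 30, 20, 10
```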
Day 14/100 – Mastering Python Dictionary Methods

Today, I explored Python dictionary methods, one of the most important concepts for handling structured data efficiently. Dictionaries are widely used in real-world applications like APIs, data processing, backend development, and analytics. Understanding their methods helps write cleaner and more optimized code.

Here are the key methods I practiced today:
🔹 get() – Safely retrieves the value of a key without raising an error.
🔹 keys() – Returns all dictionary keys.
🔹 values() – Returns all dictionary values.
🔹 items() – Returns key-value pairs as tuples.
🔹 update() – Updates or adds new key-value pairs.
🔹 setdefault() – Adds a key with a default value if it doesn’t exist.
🔹 pop() – Removes a specific key-value pair.
🔹 popitem() – Removes the last inserted key-value pair.
🔹 clear() – Removes all items from the dictionary.

📌 Key Learning: Dictionary methods are extremely powerful for data manipulation and are commonly asked about in interviews for Python, Data Analytics, and Backend roles.

Every small step is building a stronger foundation. Consistency is the real key to growth. 🔥

#Day14 #100DaysOfCode #Python #DataAnalytics #LearningJourney #BCA #FutureReady
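A quick runnable demo of the methods listed above; the sample dictionary is made up for illustration.

```python
student = {"name": "Asha", "dept": "MCA"}

print(student.get("marks"))          # None instead of a KeyError for a missing key
print(student.keys())                # dict_keys(['name', 'dept'])
print(student.values())              # dict_values(['Asha', 'MCA'])
print(student.items())               # key-value pairs as tuples

student.update({"marks": 88})        # add or overwrite keys
student.setdefault("year", 2024)     # added only because "year" does not exist yet
student.pop("dept")                  # remove a specific key and return its value
student.popitem()                    # remove the most recently inserted pair ("year")
student.clear()                      # remove everything
print(student)                       # {}
```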
I’ve just concluded my Python Bootcamp in two intensive sessions. We moved beyond basic logic to focus on how Python interacts with the hardware and the network. To my network and students: here is the final technical summary on Context Managers, Concurrency, and Distributed Task Queues.

Topic 1: Managing Resources with Context Managers
A context manager ensures that setup and cleanup are guaranteed, preventing memory leaks and orphaned file handles.
- Internal mechanics: The with statement triggers __enter__ (initialization) and __exit__ (cleanup).
- Engineering value: The __exit__ method is guaranteed to run even if an exception occurs, mirroring a try...finally block but in a reusable, Pythonic object.
- Implementation: Use the class-based approach for complex logic or the contextlib decorator for lightweight production code (a short sketch of both styles follows this post).

Topic 2: Concurrency, Parallelism, and Asynchrony
Optimizing execution requires identifying the bottleneck: is it the CPU or the network (I/O)?
- CPU-bound (parallelism): Use multiprocessing to bypass the Global Interpreter Lock (GIL) and utilize multiple CPU cores.
- I/O-bound (concurrency): Use asyncio to manage thousands of connections via a single-threaded event loop.

Topic 3: Distributed Scalability with Celery
For tasks that exceed 500 ms (emails, video processing), we decouple the execution from the web server using the Producer-Broker-Worker model.
- Producer: Your Django views use .delay() to “fire and forget” the task.
- Broker (Redis): Acts as the persistence layer, ensuring zero data loss if a worker fails.
- Worker: A dedicated process that handles the heavy lifting independently.

Final Engineering Insights
- Memory overhead: multiprocessing is heavy because it copies the interpreter state; asyncio is lightweight because it stays in one process.
- Reliability: Always prioritize a message broker for long-running tasks to ensure system resilience.

#Python #SoftwareEngineering #Backend #Django #Scalability #KigaliTech
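As promised, a minimal sketch of the two context-manager styles from Topic 1; the timing example is my own illustration, not the bootcamp code.

```python
import time
from contextlib import contextmanager

class Timer:                                   # class-based: explicit __enter__/__exit__
    def __enter__(self):                       # setup
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):     # cleanup, guaranteed even on exceptions
        self.elapsed = time.perf_counter() - self.start
        return False                           # do not swallow exceptions

@contextmanager
def timer():                                   # lightweight contextlib-decorator style
    start = time.perf_counter()
    try:
        yield                                  # the with-block body runs here
    finally:                                   # mirrors try...finally cleanup
        print(f"took {time.perf_counter() - start:.4f}s")

with Timer() as t:
    sum(range(1_000_000))
print(f"took {t.elapsed:.4f}s")

with timer():
    sum(range(1_000_000))
```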
Want to retrieve exactly what you need from your Azure Cosmos DB with #Python? In this tutorial, Gwyneth Peña-Siguenza walks through practical querying techniques using Python:
• Filters and projections
• Parameterized queries
• Searching arrays
• Case-insensitive matching

You’ll get clear examples of SQL queries optimized for document data and patterns that help you write efficient, reliable code. If you’re building Python apps on Azure Cosmos DB, this is a great way to level up your query skills.

Watch here: https://msft.it/6044QZWFr

#AzureCosmosDB #PythonDev
Azure Cosmos DB Python Basics: Querying
https://www.youtube.com/
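For a taste of what those techniques look like with the azure-cosmos Python SDK, here is a hedged sketch; the endpoint, key, database, container, and field names are placeholders, not values from the tutorial.

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("appdb").get_container_client("products")

# Parameterized query with a projection, a filter, and an array search
items = container.query_items(
    query=(
        "SELECT c.id, c.name FROM c "
        "WHERE c.category = @category AND ARRAY_CONTAINS(c.tags, @tag)"
    ),
    parameters=[
        {"name": "@category", "value": "books"},
        {"name": "@tag", "value": "python"},
    ],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"], item["name"])

# Case-insensitive matching: STRINGEQUALS takes an ignore-case flag as its third argument
matches = container.query_items(
    query="SELECT * FROM c WHERE STRINGEQUALS(c.name, @name, true)",
    parameters=[{"name": "@name", "value": "azure cosmos db"}],
    enable_cross_partition_query=True,
)
```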
🚀 Day 21 of My AI & Development Learning Journey

Today I explored how to connect Python with a MySQL database and run SQL queries directly from Python.
🔹 Installed and used mysql-connector-python
🔹 Established a connection between Python and MySQL
🔹 Executed SQL queries using Python
🔹 Retrieved and displayed data from a database table

💡 Real-life example: Imagine a college student management system. When a student logs into the portal, Python can connect to the MySQL database to fetch their details like name, department, and marks. This is exactly how many real applications manage and display user data.

Understanding how programming languages interact with databases is a key step toward building AI-powered applications, websites, and intelligent systems.

Every day is a new opportunity to learn and build! 📈

#Day21 #AIJourney #Python #MySQL #Database #LearningInPublic #TechDevelopment
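Here is a small sketch of that student-portal example using mysql-connector-python; the connection details and the students table are made up for illustration.

```python
import mysql.connector

conn = mysql.connector.connect(
    host="localhost",
    user="app_user",
    password="app_password",
    database="college",
)
cursor = conn.cursor(dictionary=True)          # rows come back as dicts

cursor.execute(
    "SELECT name, department, marks FROM students WHERE roll_no = %s",
    (101,),                                    # parameterized to avoid SQL injection
)
student = cursor.fetchone()
if student:
    print(student["name"], student["department"], student["marks"])

cursor.close()
conn.close()
```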
Python 3.11 was released over two years ago, but I still see a lot of developers ignoring or simply not knowing about asyncio.TaskGroup.

Most of us learned to use asyncio.gather() to run multiple async tasks concurrently. It’s what older tutorials teach, and it mostly works—until things go wrong.

The issue with gather() is how it handles failures. Imagine you are running three concurrent database queries. If the first query fails and raises an exception, gather() instantly throws that exception back to you. But here is the catch: the other two queries are not cancelled. They keep running in the background as "orphaned" tasks. This leads to:
• Wasted CPU, memory, and database connections
• Unexpected race conditions later in your application lifecycle

You can handle this manually with gather(), but it requires verbose try/except/finally blocks to explicitly track and cancel tasks.

TaskGroup fixes this by bringing structured concurrency to Python. It ties the lifetimes of concurrent tasks together using a standard context manager:

    async with asyncio.TaskGroup() as tg:
        tg.create_task(fetch_data(1))
        tg.create_task(fetch_data(2))
        tg.create_task(fetch_data(3))

If any task in that block fails, the TaskGroup automatically cancels all other pending sibling tasks. No orphaned background processes, and no manual cancellation boilerplate.

Additionally, if multiple tasks fail at the exact same time, it bundles them into an ExceptionGroup (also introduced in 3.11) so you can see all the errors instead of just the first one.

It’s not some massive paradigm shift, just a solid feature that makes handling concurrent async operations cleaner and much safer. If your tasks depend on each other or should fail as a single unit, it’s worth making the switch.

Are you still using gather(), or have you made the switch to TaskGroup?

#Python #Asyncio #SoftwareEngineering #BackendDevelopment
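Here is a runnable sketch of that behavior; fetch_data is a stand-in for a real I/O call such as a database query.

```python
import asyncio

async def fetch_data(i: int):
    if i == 1:
        raise ValueError(f"query {i} failed")   # one task fails immediately
    try:
        await asyncio.sleep(5)                  # pretend this is a slow query
        return i
    except asyncio.CancelledError:
        print(f"query {i} cancelled")           # siblings are cancelled, not orphaned
        raise

async def main():
    try:
        async with asyncio.TaskGroup() as tg:
            for i in range(1, 4):
                tg.create_task(fetch_data(i))
    except* ValueError as eg:                   # except* unpacks the ExceptionGroup
        for err in eg.exceptions:
            print("caught:", err)

asyncio.run(main())                             # requires Python 3.11+
```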