🚨 I thought this was a simple array problem… until binary search showed up.

Day 28 of my Backend Developer Journey — and today was about combining logic + optimization 🧠

🏆 LeetCode Breakthrough
Solved a problem using Binary Search + Reverse Thinking 💡

What clicked:
→ Reverse one array to simplify comparison
→ Apply upper_bound (binary search)
→ Maximize distance efficiently ⚡

The real trick:
👉 Don't solve the problem as it is…
👉 Transform it into something easier

🔍 Key Insight
Instead of brute force:
👉 Preprocess the data (reverse an array)
👉 Use binary search to reduce complexity
⚡ From O(n²) → O(n log n)

🔗 My Submission: https://lnkd.in/gF3_5BrW

☕ Spring Boot Learning
🐘 PostgreSQL + DBeaver Setup

Today I stepped into real backend setup 👇
👉 Installed PostgreSQL locally
👉 Connected to the database using DBeaver
👉 Explored tables, queries, and DB structure

⚡ Why this matters
💡 Backend isn't complete without DB understanding
👉 Writing code is one part
👉 Managing real data is the real game

🔥 Big Win Today
✅ Successfully connected Spring Boot to PostgreSQL
✅ Understood how applications talk to databases

🧠 The Shift
👉 Optimization comes from thinking differently
👉 Tools like DB clients make dev life easier
👉 Backend = Code + Database + Efficiency

📈 Day 28 Progress:
✅ Learned a binary search application deeply
✅ Set up a real database environment
✅ Took one step closer to a production-level backend

💬 When did you first realize backend is more than just writing APIs? 👇

#100DaysOfCode #BackendDevelopment #SpringBoot #Java #PostgreSQL #LeetCode #CodingJourney
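The post doesn't name the exact problem, but the pattern it describes (reverse an array to restore sorted order, then binary-search each query) fits problems like "maximum distance between a pair of values". A minimal Python sketch under that assumption — here `bisect_left` plays the role of C++'s `lower_bound`/`upper_bound` family:

```python
from bisect import bisect_left

def max_distance(nums1, nums2):
    """Max j - i with i <= j and nums1[i] <= nums2[j],
    assuming both arrays are non-increasing (the usual setup).
    Reversing nums2 makes it non-decreasing, so binary search applies."""
    n = len(nums2)
    rev = nums2[::-1]                  # non-increasing -> non-decreasing
    best = 0
    for i, v in enumerate(nums1):
        k = bisect_left(rev, v)        # first index with rev[k] >= v
        if k < n:
            j = n - 1 - k              # rightmost j with nums2[j] >= v
            if j >= i:
                best = max(best, j - i)
    return best
```

Each of the n elements costs one O(log n) search, which is exactly the O(n²) → O(n log n) drop the post mentions.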
More Relevant Posts
Built a production-grade backend from scratch — here's what I learned.

TaskAlloc is an employee and task allocation REST API I built with FastAPI and PostgreSQL. Not a tutorial follow-along: I designed the architecture, made the decisions, and figured out why things break.

What's under the hood:
→ 3-tier role system (Admin / Manager / Employee) with access enforced at the query layer, not just filtered in the response
→ JWT auth with refresh token rotation. Raw tokens never touch the database; only SHA-256 hashes are stored. If the DB leaks, the tokens are useless.
→ Task state machine: PENDING → IN_PROGRESS → UNDER_REVIEW → COMPLETED. Invalid transitions are rejected before any database write.
→ Middleware that auto-logs every mutating request with who did it, what resource they touched, and the HTTP status code
→ 67 passing tests against an in-memory SQLite database. No external database needed to run the suite.

35+ endpoints. Soft delete. UUID primary keys. Docker + Docker Compose. Full Swagger docs.

The thing that surprised me most was how much I learned just from trying to do things the right way — not "make it work" but "make it work correctly". Things like why audit logs shouldn't have a foreign key to users, or why you write the activity log before the status update commits.

GitHub in the comments.

#FastAPI #Python #BackendDevelopment #PostgreSQL #SoftwareEngineering #BuildingInPublic #OpenToOpportunities #Development
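The "store only the hash" refresh-token pattern described above can be sketched in a few lines. This is a generic illustration, not TaskAlloc's code:

```python
import hashlib
import secrets

def issue_refresh_token():
    """Generate a refresh token; only its SHA-256 digest is persisted,
    so a leaked database row cannot be replayed against the API."""
    raw = secrets.token_urlsafe(32)                     # sent to the client once
    stored = hashlib.sha256(raw.encode()).hexdigest()   # this goes in the DB
    return raw, stored

def verify_refresh_token(raw, stored_hash):
    """On use, re-hash the presented token and compare digests."""
    candidate = hashlib.sha256(raw.encode()).hexdigest()
    # constant-time compare avoids timing side channels
    return secrets.compare_digest(candidate, stored_hash)
```

Rotation then just means: on every successful refresh, revoke the old hash and issue a new pair.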
Leveling up the backend journey!

After mastering core REST APIs and demystifying Spring Security with JWTs in my previous projects, it was time to push the boundaries with complex data relationships and a brand-new database.

What happened:
I recently designed and built "Quill", a fully-fledged blogging and article publishing platform. While I had a blast designing the frontend to bring the platform to life, my true mission was under the hood: architecting a scalable backend capable of handling posts, user interactions, and media.

What I learned:
This project was a massive step up in complexity. I engineered the backend with Java and Spring Boot, but this time I leveled up my tech stack:

🔹 PostgreSQL Debut: This was my first time using Postgres, and honestly? It was incredibly fun! Transitioning to it gave me a fresh perspective on database management, and I really enjoyed leveraging its robustness for this project.

🔹 Complex Data Relationships: I went deep into Spring Data JPA, mapping out complex One-to-Many and Many-to-Many relationships across Users, Posts, Comments, and Tags without compromising query performance.

🔹 Multipart File Handling: I stepped beyond pure text/JSON data and implemented a custom Image Controller to securely handle, store, and serve multipart file uploads for article cover images.

🔹 Security at Scale: I carried over the custom JWT authentication architecture from my previous "LetsGo" project, applying it to a much larger surface area to protect diverse endpoints, user roles, and content ownership.

Key takeaway:
Building "Quill" taught me that a well-structured database schema is the heartbeat of any good application. Moving to Postgres and handling complex table relations proved that when your backend architecture is solid, scaling the rest of the application feels incredibly natural.

GitHub link: https://lnkd.in/gxz9a7eY

What was your experience like when you first switched databases, or when you first tried PostgreSQL? Let me know in the comments! 👇

#Java #SpringBoot #PostgreSQL #BackendDevelopment #DatabaseDesign #RESTAPI #ProjectBasedLearning #SoftwareEngineering #LearningInPublic #DeveloperJourney
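The post's JPA mappings aren't shown, but at the schema level a Many-to-Many always comes down to an association table (the table `@ManyToMany` generates for you). A language-neutral sketch in plain SQL via Python's `sqlite3` — table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE tags  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE post_tags (                      -- the association table
  post_id INTEGER REFERENCES posts(id),
  tag_id  INTEGER REFERENCES tags(id),
  PRIMARY KEY (post_id, tag_id)               -- prevents duplicate links
);
INSERT INTO posts VALUES (1, 'Hello Quill'), (2, 'JPA Mappings');
INSERT INTO tags  VALUES (1, 'java'), (2, 'postgres');
INSERT INTO post_tags VALUES (1, 1), (1, 2), (2, 1);
""")

# Resolving the relationship is a double join through the link table
rows = conn.execute("""
SELECT p.title, t.name
FROM posts p
JOIN post_tags pt ON pt.post_id = p.id
JOIN tags t       ON t.id = pt.tag_id
ORDER BY p.title, t.name
""").fetchall()
```

Keeping the composite primary key on the link table is what stops the same tag being attached to a post twice.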
Day 2/60: Production Infrastructure That Actually Scales

What most developers do: start with SQLite, hardcode credentials, skip migrations, write blocking database calls, then wonder why it breaks at 10K users.

What I built today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + Backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on the database layer

Technical decisions:
- Async everything: non-blocking I/O handles 100 concurrent users on a single thread
- Connection pooling: QueuePool (5 pooled + 10 overflow) for PostgreSQL, NullPool for SQLite
- Health checks: pg_isready with retry logic; services wait for their dependencies
- Type safety: mypy --strict passes, Mapped[T] catches bugs at compile time

Architecture highlight:
A DatabaseManager singleton manages the lifecycle. Session context managers handle transactions. Automatic rollback on errors. Zero connection leaks.

Why it matters:
Technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's working:
```
docker-compose up -d → All services healthy
pytest → 31/31 tests passing
Database connection → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 tomorrow: database models (User, Organization, Channel, Post), the first Alembic migration, and schema design for ML features.

Buffer - Building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer
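QueuePool's "5 + 10" in SQLAlchemy means five connections are kept and reused, and up to ten temporary overflow connections may be opened under load and discarded on return. A toy sketch of that behavior (not SQLAlchemy's implementation; names are hypothetical):

```python
import queue

class ToyPool:
    """Toy model of QueuePool(pool_size=5, max_overflow=10):
    `size` connections are pooled and reused; up to `overflow`
    extras may exist under load and are dropped when returned."""
    def __init__(self, factory, size=5, overflow=10):
        self._q = queue.Queue(maxsize=size)
        self._factory = factory
        self._capacity = size + overflow
        self._created = 0

    def acquire(self):
        try:
            return self._q.get_nowait()        # reuse a pooled connection
        except queue.Empty:
            if self._created >= self._capacity:
                raise RuntimeError("pool exhausted")
            self._created += 1
            return self._factory()             # open a fresh connection

    def release(self, conn):
        try:
            self._q.put_nowait(conn)           # back into the pool
        except queue.Full:
            self._created -= 1                 # overflow connection: drop it
```

NullPool for SQLite is the degenerate case: no reuse at all, which is the right call when "connections" are cheap file handles.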
Optimizing for Milliseconds: Building a Custom Trie Search 🚀

As my logistics project, Vela Route, grows, I realized that standard database queries weren't enough for the "search-as-you-type" experience I wanted. Today, I moved beyond CRUD and implemented a custom Trie (prefix tree) data structure in Java.

🧠 The Challenge: Database LIKE queries can get sluggish as records scale.
🛠️ The Solution: I built an in-memory retrieval tree that "warm-starts" via a Spring Boot CommandLineRunner.
⚡ The Result: Tracking-number lookups now happen in O(L) time (based only on the length of the string), making search nearly instantaneous regardless of database size.

It's been an incredible deep dive into memory management, recursion, and bridging the gap between PostgreSQL and RAM.

Check out the implementation on my GitHub!

#Java #SpringBoot #DataStructures #SoftwareEngineering #VelaRoute #BackendDevelopment #Trie #CodingBootcamp #BuildingInPublic
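The post's implementation is Java; here is a minimal Python sketch of the same idea, showing why walking to a prefix costs only O(L) in the key length, independent of how many keys are stored:

```python
class Trie:
    """Minimal prefix tree: insert and prefix descent both run in O(L),
    where L is the key length, regardless of the number of stored keys."""
    def __init__(self):
        self.root = {}

    def insert(self, key):
        node = self.root
        for ch in key:
            node = node.setdefault(ch, {})
        node["$"] = True                     # end-of-key marker

    def starts_with(self, prefix):
        node = self.root
        for ch in prefix:                    # O(L) descent to the prefix node
            if ch not in node:
                return []
            node = node[ch]
        # collect every completion under this node
        out, stack = [], [(node, prefix)]
        while stack:
            n, acc = stack.pop()
            if "$" in n:
                out.append(acc)
            for ch, child in n.items():
                if ch != "$":
                    stack.append((child, acc + ch))
        return sorted(out)
```

The "warm-start" step the post describes would then just be loading every tracking number from the database into `insert()` once at application startup.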
🚀 Built a Complete Spring Boot REST API with CRUD Operations

I'm excited to share my latest project: a RESTful API built with Spring Boot and MySQL. It demonstrates full CRUD functionality and follows a clean layered architecture.

🔧 Tech Stack:
• Spring Boot
• Spring Data JPA
• MySQL
• REST API
• RAPID API (testing)

📁 Architecture:
Client → RestController → Service → Repository → Entity → Database

📌 Features Implemented:
✅ Create Student (POST)
✅ Get All Students (GET)
✅ Get Student By ID (GET)
✅ Update Student (PUT)
✅ Delete Student (DELETE)

🔗 API Endpoints:
POST /students
GET /students
GET /students/{id}
PUT /students/{id}
DELETE /students/{id}

This project helped me understand:
• REST API design
• Layered architecture
• Database integration using JPA
• Testing APIs using RAPID API CLIENT

Looking forward to feedback and suggestions!

#SpringBoot #RESTAPI #Java #MySQL #BackendDevelopment #SpringDataJPA #Learning #CRUD #Developer
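The Client → Controller → Service → Repository flow above can be sketched framework-free. This is a hypothetical in-memory Python sketch of the layering idea, not the post's Java code — the point is that each layer only talks to the one below it:

```python
class StudentRepository:                  # persistence layer ("database")
    def __init__(self):
        self._db, self._seq = {}, 0
    def save(self, student):
        self._seq += 1
        student = {**student, "id": self._seq}
        self._db[self._seq] = student
        return student
    def find_by_id(self, sid):
        return self._db.get(sid)

class StudentService:                     # business rules live here
    def __init__(self, repo):
        self.repo = repo
    def create(self, name):
        if not name:
            raise ValueError("name is required")
        return self.repo.save({"name": name})

class StudentController:                  # maps HTTP verbs to service calls
    def __init__(self, service):
        self.service = service
    def post(self, body):                 # POST /students
        return 201, self.service.create(body.get("name"))
    def get(self, sid):                   # GET /students/{id}
        s = self.service.repo.find_by_id(sid)
        return (200, s) if s else (404, None)
```

Swapping the dict-backed repository for a JPA one changes nothing in the controller or service, which is exactly why the layering pays off.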
Sunday ship: @perryts/postgres is out. A pure-TypeScript Postgres driver that speaks the wire protocol directly. No libpq. No native addons. No FFI.

Why another one?
→ Every Node Postgres driver worth using wraps libpq or ships a platform-specific .node addon. That's a dead end for AOT: Perry compiles TypeScript to a statically-linked native binary via LLVM, and there's no V8 at execution time to host a C addon.
→ The Perry-native Postgres GUI this drives (Tusk) needed things most drivers throw away for ergonomics: exact numerics (not floats), full column metadata (attnum, tableOid, typmod), structured errors with every documented ErrorResponse field, and raw row bytes on demand.

So: one TypeScript source, three runtime targets.
→ Node.js 22+
→ Bun 1.3+
→ Perry AOT → 4.6 MB static binary, 1.8 MB RSS, single-digit-ms cold start

Honest performance story: V8's JIT beats Perry-native on per-query wall time in a warm, long-running process. Perry wins everywhere else — cold start, memory footprint, deploy size, and the platforms Node and Bun can't reach at all (CLIs, serverless cold paths, mobile, embedded Linux).

What's in the box: SCRAM-SHA-256 / MD5 / cleartext auth, TLS with mid-stream upgrade, simple + extended query protocols, 20 type codecs, exact numerics via a Decimal wrapper, structured PgError, the cancel protocol, LISTEN/NOTIFY, a connection pool, transactions, libpq URLs, PG* env vars.

MIT. Feedback welcome. https://lnkd.in/dyeDTJG7
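To make "speaks the wire protocol directly" concrete: the first frame any such driver sends is the StartupMessage. Here is a Python sketch of that frame per the documented v3 protocol layout (an illustration of the format, not Perry's actual code):

```python
import struct

def startup_message(user, database):
    """Build a PostgreSQL v3 StartupMessage by hand:
    Int32 total length (including itself), Int32 protocol version 3.0
    (196608 = 0x0003_0000), then NUL-terminated key/value parameter
    pairs, closed by a final NUL terminator."""
    params = b""
    for key, value in (("user", user), ("database", database)):
        params += key.encode() + b"\x00" + value.encode() + b"\x00"
    body = struct.pack("!i", 196608) + params + b"\x00"
    return struct.pack("!i", len(body) + 4) + body   # length prefix first
```

Everything after this handshake (auth, queries, row data) follows the same discipline: a type byte, a length, and a typed payload, which is what makes a from-scratch driver tractable.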
An Introduction to SQL

SQL (Structured Query Language) is the standard language for managing and manipulating relational databases. It lets you create, read, update, and delete data (the CRUD operations) in a structured way.

✅ Schemas & basics
✅ Basic CRUD
✅ SELECT basics
✅ JOINs
✅ Aggregation & GROUP BY
✅ Subqueries & CTEs
✅ Window functions
✅ Transactions & concurrency
✅ Indexes & performance

Save & share with your team!

Stay connected with TheVinia and keep learning the latest in Web Development, React, and Tech Skills:
🎥 YouTube – tutorials, roadmaps, and coding guides 👉 https://lnkd.in/gfKgVVFf
📸 Instagram – daily coding tips, updates, and learning content 👉 https://lnkd.in/gK4S-ah8
💼 Telegram – our journey, insights, and professional updates 👉 https://lnkd.in/gU8M8hwd
💼 Medium: https://lnkd.in/gy9iSHqv

✨ Join our community and grow your tech skills with us. If you found this guide helpful, follow TheDevSpace | Dev Roadmap, w3schools.com, and JavaScript Mastery for more tips, tutorials, and cheat sheets on web development. Let's stay connected! 🚀

#SQL #Databases #Query #Postgres #MySQL
🚀 Building the SoftTech Solutions Project with Spring Boot & MySQL

I recently worked on a SoftTech Solutions project, designing and building a backend application with Spring Boot, DevTools, and MySQL, and it gave me a solid understanding of real-world application development.

💡 About the Project:
This project focuses on managing software-related operations such as employee data, project tracking, and resource handling through a structured backend system.

👨‍💻 Tech Stack Used:
✔️ Spring Boot – for rapid backend development
✔️ Spring DevTools – for faster development & auto-restart
✔️ MySQL – for efficient data storage and management

🔹 Key Features Implemented:
✔️ RESTful API development using @RestController
✔️ Full CRUD operations (Create, Read, Update, Delete)
✔️ Database integration using Spring Data JPA
✔️ Clean layered architecture (Controller → Service → Repository)
✔️ Real-time testing of APIs

📌 Sample API Flow:
Client Request → Controller → Service Layer → Repository → MySQL Database → Response

🔥 Why Spring Boot?
➡️ Simplifies backend development with minimal configuration
➡️ Built-in server (Tomcat) for easy deployment
➡️ Seamless database integration
➡️ Scalable and maintainable architecture

💭 Key Takeaway:
Working on this project helped me understand how backend systems are structured, how APIs interact with databases, and how to build scalable applications efficiently.

Grateful for the learning experience and guidance throughout the development journey. I want to thank my mentor #AnandKumarBuddarapu

#SpringBoot #Java #MySQL #BackendDevelopment #SoftwareDevelopment #LearningJourney #TechProjects
156 SQL migrations and no backend server. That's what the mydba.dev architecture looks like after a year of development. Zero backend application code. Every API endpoint is a PostgreSQL function.

The stack is almost absurdly simple:
• React + TypeScript frontend (Vercel)
• PostgREST auto-generates REST endpoints from the database schema
• Clerk JWTs validated via JWKS
• Row-level security handles authorization
• A Go collector writes metrics directly to PostgreSQL

Adding a new API endpoint means writing `CREATE FUNCTION` in a SQL migration file. Not a route handler. Not a controller class. Just SQL.

𝗪𝗵𝗮𝘁 𝘄𝗼𝗿𝗸𝘀 𝗿𝗲𝗮𝗹𝗹𝘆 𝘄𝗲𝗹𝗹:
Deployment simplicity. There's no backend to deploy, scale, or monitor. The frontend ships via `git push`; database changes ship via migration files. That's the entire deployment process.

Performance is excellent. PostgREST is fast, and PostgreSQL functions with proper indexes are fast. No ORM overhead, no serialization layers, no N+1 query problems. The database IS the truth.

𝗪𝗵𝗮𝘁'𝘀 𝗴𝗲𝗻𝘂𝗶𝗻𝗲𝗹𝘆 𝗵𝗮𝗿𝗱:
Debugging SQL functions is painful compared to stepping through Python or Go. Stack traces are cryptic. Testing is awkward: you're essentially writing integration tests against a real database.

Schema migrations on compressed TimescaleDB hypertables are a special kind of adventure. You can't just ALTER TABLE casually when columnar compression is enabled. I've built patterns around it (rename tables, security-barrier views, careful migration ordering), but it's complexity a normal backend wouldn't have.

There's no middleware layer, so cross-cutting concerns like request logging, rate limiting, and input validation all need creative solutions. Some of those solutions are elegant. Some are ugly. All of them live in SQL.

Would I do it again? Absolutely. But I'd invest in better migration tooling earlier. And I'd accept from day one that some things are just harder in SQL, and that's a worthwhile tradeoff for the simplicity you get everywhere else.

Anyone else running a PostgREST-only architecture in production? I'd love to compare notes.

#PostgreSQL #PostgREST #BuildingInPublic #Architecture #SoftwareEngineering
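A tiny way to play with the "endpoint = database function" idea without a PostgreSQL install: SQLite has no `CREATE FUNCTION`, but Python's `sqlite3` can register one, which is enough to sketch logic living behind a SQL call (this is an analogy, not how PostgREST itself works; names and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (host TEXT, cpu REAL)")
conn.executemany("INSERT INTO metrics VALUES (?, ?)",
                 [("db1", 0.42), ("db1", 0.58), ("db2", 0.90)])

def health_label(avg_cpu):
    """The 'endpoint' body: pure logic, invoked from inside SQL."""
    return "hot" if avg_cpu > 0.8 else "ok"

# Register the function so queries can call it, PostgREST-style
conn.create_function("health_label", 1, health_label)

rows = conn.execute("""
SELECT host, health_label(AVG(cpu))
FROM metrics GROUP BY host ORDER BY host
""").fetchall()
```

In the real architecture this function body would be PL/pgSQL inside a migration file, and PostgREST would expose it at `/rpc/health_label`.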
Built a production-style backend system from scratch using FastAPI and PostgreSQL.

Core components:
• REST API with FastAPI
• PostgreSQL database with SQLAlchemy ORM
• JWT-based authentication (access + refresh tokens)
• User management (signup, login, update, delete)
• File upload and download system with integrity checks (SHA-256)

System design:
• Modular architecture (routers, models, schemas, utils)
• Separation of concerns across layers
• Database schema with relationships and constraints
• Secure password handling with bcrypt

What this phase focused on:
• structuring a backend like a real system, not a script
• handling data flow from request → database → response
• enforcing validation and consistency
• building endpoints that are actually usable and testable

GitHub: https://lnkd.in/dEEsGg3G
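The SHA-256 integrity check mentioned above can be sketched as chunked hashing on upload and re-verification on download. A minimal sketch with hypothetical helper names, not the project's actual code:

```python
import hashlib
import io

def sha256_stream(stream, chunk_size=8192):
    """Hash an upload in fixed-size chunks so large files
    never have to sit fully in memory."""
    h = hashlib.sha256()
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        h.update(chunk)
    return h.hexdigest()

def verify_download(stream, expected_digest):
    """Integrity check on download: recompute and compare digests."""
    return sha256_stream(stream) == expected_digest
```

Store the digest alongside the file row at upload time; any later mismatch means the stored bytes were corrupted or tampered with.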