Scaling Async Python Backends with asyncio and PostgreSQL

Scaling Python backends with asyncio and PostgreSQL (asyncpg) requires thinking beyond async/await syntax. If you don't map your coroutines to the underlying OS-level sockets and memory buffers, you will hit silent deadlocks, connection exhaustion, and OOM crashes. I've spent a lot of time reading and building lately, and I wanted to share the most important lessons about building high-performance async database layers. Here's what I've learned:

Throttle with asyncio.BoundedSemaphore: Don't dump 10,000 tasks onto the event loop at once. Match your semaphore limit to your connection pool's max_size. This provides backpressure and prevents task queue timeouts and event loop thrashing. (Tip: prefer BoundedSemaphore over Semaphore so a rogue .release() raises instead of silently widening the limit.) A sketch combining this with executemany() and TaskGroup follows after this post.

Pipeline with executemany(): Stop calling .execute() in a loop. executemany() uses the Postgres extended query protocol (PARSE once, BIND/EXECUTE many), packing the TCP window and eliminating thousands of network round trips (RTTs).

Isolate state with savepoints: Use nested async with conn.transaction() blocks to handle partial payload failures. When an inner block fails, Postgres simply marks that subtransaction (SubXID) as aborted (leaving dead tuples for VACUUM) while the parent transaction can still commit safely.

Prevent OOMs with server-side cursors: Never use .fetch() for multi-million-row exports. Stream them via async for row in conn.cursor(query, prefetch=chunk_size). Your Python process memory stays bounded by the chunk size, no matter how large the table gets.

Shield your cleanup: If a client abruptly drops an HTTP connection, the ASGI server injects an asyncio.CancelledError. If you don't wrap your pool.release() and tx.rollback() in asyncio.shield() inside your unit of work, the connection can be left permanently checked out, leading to a silent pool deadlock. (A cancellation-safe sketch also follows after this post.)

Adopt asyncio.TaskGroup (Python 3.11+): Move away from bare asyncio.gather(). TaskGroups provide structured concurrency: if one concurrent validation query fails, its siblings are cancelled promptly, returning their leased connections to the pool immediately.

Avoid distributed transactions: Don't attempt two-phase commit (2PC) across microservices on the event loop; it destroys throughput. Rely on the Transactional Outbox pattern: commit your local database mutation and an event payload in the same transaction, and let your message broker manage eventual consistency.

Stop treating the event loop like magic. Treat it like an I/O multiplexing coordinator.

#Python #Asyncio #PostgreSQL #BackendEngineering #SoftwareArchitecture #DistributedSystems
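To make the throttling and pipelining points concrete, here is a minimal sketch (not from the original post) of semaphore-bounded workers writing batches through an asyncpg pool with executemany(). The DSN, table, column names, and pool/batch sizes are illustrative assumptions you would tune for your own workload.

```python
import asyncio
import asyncpg

# Illustrative assumptions: DSN, table, column names, and sizes are placeholders.
DSN = "postgresql://app:secret@localhost:5432/appdb"
POOL_MAX = 20  # pool max_size and semaphore limit kept identical on purpose

async def insert_chunk(pool: asyncpg.Pool, sem: asyncio.BoundedSemaphore, rows) -> None:
    # The semaphore caps in-flight tasks at the pool size: backpressure, not a task pile-up.
    async with sem:
        async with pool.acquire() as conn:
            # Extended query protocol: PARSE once, then BIND/EXECUTE for every row in the batch.
            await conn.executemany(
                "INSERT INTO events (id, payload) VALUES ($1, $2)", rows
            )

async def main() -> None:
    pool = await asyncpg.create_pool(DSN, min_size=5, max_size=POOL_MAX)
    # BoundedSemaphore raises ValueError on a rogue extra release()
    # instead of silently widening the concurrency limit.
    sem = asyncio.BoundedSemaphore(POOL_MAX)
    rows = [(i, f"payload-{i}") for i in range(10_000)]
    chunks = [rows[i:i + 500] for i in range(0, len(rows), 500)]
    try:
        # Structured concurrency (Python 3.11+): one failed chunk cancels its siblings.
        async with asyncio.TaskGroup() as tg:
            for chunk in chunks:
                tg.create_task(insert_chunk(pool, sem, chunk))
    finally:
        await pool.close()

if __name__ == "__main__":
    asyncio.run(main())
```

Keeping the semaphore equal to max_size means a task only starts real work once a connection is plausibly available, so the pool's internal wait queue never balloons.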
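The shielded-cleanup point is easiest to see as a unit of work. The following is a rough sketch under the same assumptions (hypothetical table and query); the only point it makes is that rollback and release run under asyncio.shield(), so a client disconnect cannot strand a pooled connection.

```python
import asyncio
import asyncpg

async def handle_request(pool: asyncpg.Pool, order_id: int) -> None:
    # Hypothetical unit of work: acquire, mutate, commit; always give the connection back.
    conn = await pool.acquire()
    tx = conn.transaction()
    await tx.start()
    try:
        await conn.execute(
            "UPDATE orders SET status = 'paid' WHERE id = $1", order_id
        )
        await tx.commit()
    except BaseException:
        # A client disconnect surfaces as asyncio.CancelledError (a BaseException),
        # so a bare `except Exception` would silently skip this rollback.
        await asyncio.shield(tx.rollback())
        raise
    finally:
        # Shield the release too; cancellation mid-release is what leaves a
        # connection permanently checked out and quietly deadlocks the pool.
        await asyncio.shield(pool.release(conn))
```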
More Relevant Posts
Is writing custom Python scripts for ingestion a sign of seniority, or a sign of inefficiency? 🐍💻

In 2026, the "hand-coded" vs. "low-code" debate has moved past the surface level. We are finally asking the right question: where should a Data Engineer spend their "Code Capital"? If you are still writing boilerplate scripts to move data from a Postgres database to an S3 bucket, you might be falling into the Maintenance Trap. Here is why the industry is shifting toward a hybrid model:

1. The Maintenance Trap 🪤
Writing the first script is fun. Maintaining 100+ individual ingestion scripts is a nightmare. Every time an API version changes, a primary key is renamed, or a source schema drifts, your weekend is gone. Managed ELT tools like Airbyte or Fivetran treat these "connectors" as a commodity, handling the boring parts so you don't have to.

2. Spend Your "Code Capital" Wisely 💎
Your time is your most valuable asset. Spending it on basic data movement is like an architect laying bricks: necessary work, but not where the value is created. The rule: use low-code for the "pipes" (ingestion). Save the custom Python/SQL for the "engine" (transformations, business logic, and complex SCD logic).

3. The Hybrid Reality 🛠️
Low-code isn't a silver bullet. High-seniority engineering comes into play when you hit the limits of a managed tool:
- Complex API rate limits: when you need custom backoff strategies (a minimal sketch follows after this post).
- Deeply nested JSON: when the out-of-the-box flattener creates a mess.
- Proprietary sources: when a pre-built connector simply doesn't exist.

4. Productivity = Control + Speed 🚀
Seniority in 2026 isn't about how much code you write; it's about how much value you deliver with the least amount of code to maintain. Choosing a managed tool for 80% of your sources allows you to focus 100% of your energy on the 20% that actually drives business insights.

The bottom line: don't be a "script collector." Be a Platform Architect. Build systems that scale, not just scripts that run.

Are you still writing custom ingestion code for standard sources, or have you made the leap to fully managed ELT? Let's hear your take in the comments! 👇

#DataEngineering #Airbyte #Python #ETL #ModernDataStack #DataArchitecture #CloudComputing #SoftwareEngineering #DataOps #BigData
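For the "custom backoff" case in point 3, here is a hedged, minimal sketch of the kind of retry logic a managed connector rarely lets you express. The endpoint and limits are made up; the pattern is exponential backoff that respects a Retry-After header.

```python
import time
import requests

def fetch_with_backoff(url: str, max_retries: int = 5) -> dict:
    """Fetch a JSON payload, backing off exponentially on HTTP 429 and 5xx responses."""
    delay = 1.0
    for attempt in range(max_retries):
        resp = requests.get(url, timeout=30)
        if resp.status_code == 429 or resp.status_code >= 500:
            # Respect the server's hint if present, otherwise back off exponentially.
            wait = float(resp.headers.get("Retry-After", delay))
            time.sleep(wait)
            delay = min(delay * 2, 60)
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError(f"Giving up on {url} after {max_retries} attempts")

# Hypothetical usage:
# payload = fetch_with_backoff("https://api.example.com/v1/orders?page=1")
```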
🚀 How I Finally Understood What "Knowing Python" Means for Data Engineering

A few years ago, I thought learning Python for data engineering meant memorizing syntax, solving coding challenges, and building side projects. But the first time I had to build a real data pipeline, reality hit me hard. I realized Python wasn't just a language. It was the glue holding the entire data ecosystem together. Let me tell you the version I wish someone had told me on day one.

🧠 Lesson 1: Data Lives on Disk, but Work Happens in Memory
I remember loading a "small" CSV file and watching my laptop freeze. That's when I learned the golden rule: data sits on disk, but Python processes it in memory. RAM is limited. Your data probably isn't. That's why tools like Spark, DuckDB, and Polars exist: they help you work with data bigger than your laptop can handle. (A chunked-processing sketch follows after this post.)

🐍 Lesson 2: Python Basics Actually Matter
I used to skip the fundamentals. Big mistake. In real pipelines, you rely on:
- Lists and dictionaries to structure data
- Loops and comprehensions to transform it
- Functions to keep your code clean
- Classes when things get complex
- Exception handling so your pipeline doesn't explode at 2 AM
These aren't "beginner topics." They're survival skills.

🔄 Lesson 3: ETL/ELT Is Where Python Shines
Once I understood ETL, everything clicked. Python helps you:
- Extract data from APIs, databases, and cloud storage
- Transform it using Pandas, Polars, Spark, or SQL
- Load it into warehouses like Snowflake or BigQuery
It's not about writing fancy scripts. It's about moving data from chaos to clarity.

🧰 Lesson 4: The Tools Become Your Superpowers
I used to think data engineers only needed Python. Then I met psycopg2 for databases, boto3 for AWS, requests for APIs, plus Parquet, JSON, CSV, XML, Kafka, Kinesis, and SFTP. Suddenly, Python felt less like a language and more like a Swiss Army knife.

✔️ Lesson 5: Data Quality Is Non-Negotiable
Your pipeline isn't done when it runs. It's done when the data is trustworthy. Tools like Great Expectations and Cuallee help you validate data before anyone sees it.

🧪 Lesson 6: Tests Save You From Yourself
The first time I broke production, I learned this the hard way. pytest became my best friend. Tests catch bugs before your users do.

⏱️ Lesson 7: Schedulers & DAGs Are the Real Magic
Pipelines don't run because you press "Run." They run because schedulers like Airflow, Dagster, and cron wake them up at the right time. And DAGs make sure everything runs in the right order.

🎯 Final Thought
Learning Python for data engineering isn't about mastering every feature of the language. It's about understanding how Python connects systems, moves data, and keeps pipelines reliable. Once you see Python as the orchestrator of your data world, everything changes.
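As a small, hedged illustration of Lesson 1 (work in bounded memory), here is a sketch that aggregates a CSV far larger than RAM in fixed-size chunks. The file name and columns (country, amount) are hypothetical.

```python
import pandas as pd

# Process a file larger than RAM in bounded chunks instead of a single read_csv() call.
totals: dict[str, float] = {}
for chunk in pd.read_csv("events.csv", chunksize=100_000):
    grouped = chunk.groupby("country")["amount"].sum()
    for country, amount in grouped.items():
        totals[country] = totals.get(country, 0.0) + amount

# Top 10 countries by total amount, computed without ever holding the whole file in memory.
print(sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10])
```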
Here's my Ultimate Advanced Python Tricks Cheatsheet for Data Analysts.
(Save this - these are the ones that actually matter in real work)

Every analyst knows pd.read_csv() and df.head(). The ones getting promoted know what comes after that. Here are 15 advanced Python tricks that separate junior analysts from senior ones 👇

1. Memory-Optimized Data Loading: specify data types while loading to reduce memory and speed up processing.
2. Select Columns Efficiently: always load only the columns you need, never the entire dataset.
3. Conditional Filtering with Multiple Rules: apply complex business logic to slice data precisely in one line.
4. Vectorized Feature Engineering: multiply columns directly instead of looping; faster and more scalable.
5. Use query() for Cleaner Filtering: write SQL-like filter conditions that are readable and easy to maintain.
6. Advanced GroupBy with Multiple Aggregations: generate sum, mean, and max insights across categories in one operation.
7. Window Functions, SQL Style: rank rows within groups directly in Python, exactly like SQL window functions.
8. Rolling Window Analysis: calculate 7-day moving averages to smooth trends for time-series reporting.
9. Handle Missing Data Strategically: fill nulls with the median to preserve the distribution instead of distorting it.
10. Efficient Deduplication with Priority: sort by date first, then drop duplicates to keep the most recent record per user.
11. Merge Datasets Like SQL Joins: combine two dataframes on a key column exactly like a SQL LEFT JOIN.
12. Pivot Tables for Quick Reporting: summarize revenue by category and region instantly without building a dashboard.
13. Explode Nested Data: transform list-like columns into individual rows for deeper granular analysis.
14. Apply Custom Functions Efficiently: use np.where for conditional logic; significantly faster than apply() on large datasets.
15. Chain Operations for Clean Pipelines: drop nulls, filter, and engineer features in one readable chained expression.

(Several of these are sketched in code right after this post.)

Most analysts use Python like a calculator. Senior analysts use it like a pipeline. The difference is not knowing more functions. It is knowing how to chain them together to go from raw, messy data to a clean business insight in minutes.

Save this. Practice each one on a real dataset. Watching is not learning. Building is.

Which of these are you not using yet?

♻️ Repost to help someone level up their Python skills
💭 Tag a data analyst who needs to see this
📩 Get my full Python analytics guide: https://lnkd.in/gjUqmQ5H
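As a hedged illustration (not from the original post), here is a small sketch touching tricks 1, 2, 4, 5, 6, 14, and 15 on a hypothetical sales.csv; the column names, dtypes, and thresholds are made up.

```python
import numpy as np
import pandas as pd

# Tricks 1 & 2: load only the needed columns with explicit dtypes to cut memory.
df = pd.read_csv(
    "sales.csv",
    usecols=["order_id", "region", "category", "units", "unit_price"],
    dtype={"order_id": "int32", "region": "category", "category": "category",
           "units": "int16", "unit_price": "float32"},
)

# Trick 4: vectorized feature engineering instead of a loop.
df["revenue"] = df["units"] * df["unit_price"]

# Trick 5: SQL-like filtering with query().
big_orders = df.query("revenue > 1000 and region == 'EMEA'")

# Trick 6: multiple aggregations per category in one pass.
summary = df.groupby("category", observed=True)["revenue"].agg(["sum", "mean", "max"])

# Trick 14: np.where is much faster than apply() for simple conditional logic.
df["tier"] = np.where(df["revenue"] > 500, "high", "standard")

# Trick 15: chain operations into one readable pipeline.
report = (
    df.dropna(subset=["region"])
      .query("units > 0")
      .assign(margin=lambda d: d["revenue"] * 0.2)
      .groupby("region", observed=True)["margin"]
      .sum()
      .sort_values(ascending=False)
)
print(summary.head(), report.head(), sep="\n\n")
```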
𝗗𝗮𝘆 𝟲𝟰: 𝗛𝗼𝘄 𝗣𝘆𝘁𝗵𝗼𝗻 𝗖𝗹𝗮𝘀𝘀𝗲𝘀 𝗕𝗲𝗰𝗼𝗺𝗲 𝗗𝗷𝗮𝗻𝗴𝗼 𝗠𝗼𝗱𝗲𝗹𝘀

Today I linked two big ideas: Python object-oriented programming and Django models. They are the same thing. A Django model is just a Python class.

Here is what I learned about Python classes. A class is a blueprint. An object is a built thing from that blueprint.
- class Car: defines the blueprint.
- __init__ runs when you build the object. self points to that new object.
- Instance attributes like self.brand are unique to each object.
- Class attributes like company are shared by all objects.

Methods live inside classes.
- Instance method: uses self. Works with your object's data.
- Class method: uses cls. Decorated with @classmethod. Works with the class itself.
- Static method: uses no self or cls. Decorated with @staticmethod. Just a function inside the class.

Inheritance lets a child class reuse a parent class.
- class Dog(Animal): Dog gets all of Animal's code.
- Use super() to run the parent's __init__.

Python does not have private. It has conventions.
- _name is protected: a hint to other coders.
- __name is private: Python changes its name to _ClassName__name.

Dunder methods define how your object acts with Python's built-ins.
- __str__: for print() and str().
- __len__: for len().
- __eq__: for ==.

Abstract Base Classes force subclasses to write specific methods.
- from abc import ABC, abstractmethod
- @abstractmethod means "you must write this method".

Now for Django. A Django model is a Python class that inherits from models.Model.
- Each class attribute becomes a database column.
- Django reads these attributes and creates the SQL table for you.

You do not write SQL. You run two commands:
- python manage.py makemigrations
- python manage.py migrate

The __str__ method in your model controls what you see in the Django admin. Without it you see "Post object (1)". With it you see your post title.

OOP is the foundation. Django models are the practical application. A model class maps directly to a database table. Fields map to columns. Your __str__ method controls the display. Understanding Python classes first makes Django models obvious. (A small side-by-side sketch follows after this post.)

Source: https://lnkd.in/gChPWWZS
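Here is a minimal side-by-side sketch of that mapping. The field names are hypothetical, and the Django half assumes it lives inside an installed app of a configured Django project.

```python
from django.db import models  # requires a configured Django project to actually run


# Plain Python class: a blueprint; __init__ builds the object, __str__ controls display.
class Post:
    def __init__(self, title: str, body: str):
        self.title = title  # instance attribute, unique to each object
        self.body = body

    def __str__(self) -> str:
        return self.title


# The same idea as a Django model: each class attribute becomes a database column,
# and Django generates the SQL table via makemigrations / migrate.
class PostModel(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self) -> str:
        # Without this, the admin shows "PostModel object (1)" instead of the title.
        return self.title
```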
Choosing between Python and Go for your next microservice is one of the most common architectural decisions backend teams face in 2026. Both languages power microservices at massive scale. But the real answer isn't "always pick X." It depends on what your service actually does.

THE PERFORMANCE REALITY
JSON API benchmark (4-core VM, 100 concurrent connections):
→ Go: 95,000 req/s, 1.05ms avg latency, 12MB memory
→ Python (FastAPI): 12,500 req/s, 8.1ms avg latency, 52MB memory
→ A 7.6x throughput gap
But context matters:
→ <1,000 req/s: both languages are adequate
→ 1,000-10,000 req/s: Python works, Go is more efficient
→ >10,000 req/s: Go's efficiency means fewer instances and lower costs

CONTAINER COMPARISON
→ Go: 8-15MB images, 10-50ms startup, 5-10MB idle memory
→ Python: 180-350MB images, 1-3s startup, 35-60MB idle memory
→ Cost impact: 5-10x reduction for high-traffic services with Go

WHEN TO CHOOSE PYTHON
→ ML/AI integration (PyTorch, TensorFlow ecosystem)
→ Data processing and ETL (pandas, NumPy unmatched)
→ Rapid prototyping (FastAPI auto-docs, no compile step)
→ Your team knows Python well

WHEN TO CHOOSE GO
→ High-throughput API services (>10K req/s)
→ Infrastructure and platform services
→ Low-latency requirements (sub-5ms p99)
→ Small container footprint matters (edge, serverless)

REAL-WORLD PATTERN
Companies like Uber, Dropbox, and Spotify use BOTH strategically:
→ Go handles the HOT PATH (performance-critical user requests)
→ Python handles the SMART PATH (ML models, data analytics)

DECISION FRAMEWORK
1. ML/AI model serving? → Python
2. >10K req/s or <5ms p99 latency? → Go
3. Data processing/ETL? → Python
4. Infrastructure service? → Go
5. Team expertise? → Use what your team knows

KEY INSIGHT
The best microservices architectures use both languages strategically. Python and Go aren't competitors; they're complements. Start with the language your team knows. If a Python service hits performance limits, profile first (most issues are algorithmic, not language-related). If you genuinely need Go-level throughput, rewrite that specific service. Microservices exist precisely to make this targeted migration possible.

The right question isn't "Python or Go?" It's "Which services need Python's strengths and which need Go's?"

Complete performance comparison: https://lnkd.in/d9WXQ_vA

What's your team's approach to language selection?

#Python #Go #Microservices #Performance #SoftwareArchitecture #BackendDevelopment
I've been writing Python for years. So when I needed to look inside a JSON file, Python was always the answer. Quick script, parse the structure, print what I need. Not because it was the right tool, but because it was the familiar one. Once I started using jq, I realized how much time I'd been wasting on tasks that don't need a script at all.

jq is an open-source command-line JSON processor. Single binary, no runtime dependencies, written in C, fast as hell. It's basically SQL for JSON: you query, filter, and reshape structured data right in your terminal. And most data engineers I work with have never installed it.

Here's the kind of stuff I use it for daily.

Debugging an API response? Pipe it through jq and pull out the three fields you actually care about from a nested array. One line, no script, no notebook.

Checking which Glue jobs failed overnight? The AWS CLI returns JSON by default. With jq you can filter for failed runs and extract just the timestamps and error messages. No scrolling through walls of output.

Validating JSONL files before loading them into a pipeline? A quick jq one-liner with wc -l tells me how many records have null values in a given field. No pandas. No kernel startup. No waiting.

Every one of those used to be a throwaway Python script for me. Ten lines of code to do what jq handles in one.

jq won't replace your transformation layer. But for the dozens of small JSON tasks that come up every week (inspecting payloads, parsing cloud CLI output, sanity-checking files) nothing else is faster.

Most data engineers don't need more tools. They need to use the terminal better.
This PySpark pattern is a classic implementation of a reliable streaming pipeline. ⚙️

Phase 1: The Continuous Engine
This part of the code tells Databricks to keep the "engine" running 24/7:

(spark.readStream.table("source_append_table")
    .filter("(status IS NULL) AND (record_type = 'file_type')")
    .writeStream
    .foreachBatch(load_all_and_route_errors)   # calls the logic below
    .option("checkpointLocation", "/mnt/delta/checkpoints/dual_target_load")
    .trigger(processingTime='10 seconds')      # ✅ makes it run continually
    .start())

🛠️ Phase 2: The Validation & Routing Function
This is the internal logic (load_all_and_route_errors) that runs every time new data is detected.

1. Persisting data (the memory guard) 💾
microBatchDf.persist()
Action: caches the incoming micro-batch in memory.
Why: since we write to two tables (main and error), we don't want Spark to do the work twice. Caching here roughly halves the job.

2. The validation engine (the inspector) 🔍
errors = F.array_remove(F.array(
    F.when(F.col("order_id").isNull(), "Missing order_id"),
    F.when(F.col("price") < 0, "Negative price")
), None)
Action: captures WHY a row failed by building a list of error reasons for every row.
Note: unlike a simple filter, this gives you an audit trail of reasons for every bad record.

3. Flagging the data 🚩
validated_df = microBatchDf.withColumn(
    "validation_status",
    F.when(F.size(errors) > 0, "Invalid").otherwise("Valid")
)
Action: tags every single row as either Valid or Invalid based on the results of the validation engine.

🍴 Phase 3: The Fork in the Road (Dual Write)

Path A: the clean production table ✅ 🏦
only_valid_records = validated_df.filter("validation_status = 'Valid'")
(only_valid_records.write
    .format("delta")
    .mode("append")
    .saveAsTable("main_target_table"))
Strategy: only rows with zero errors move forward. This keeps your business dashboards clean and trustworthy.

Path B: the quarantine/error table 🚨 🚧
invalid_records = validated_df.filter("validation_status = 'Invalid'")
if not invalid_records.isEmpty():
    (invalid_records.write
        .format("delta")
        .mode("append")
        .saveAsTable("error_records_table"))
Strategy: redirects bad data to a separate log. Because we captured the reasons, engineers can immediately see that "row X failed because of a negative price."

🧹 Phase 4: Final Cleanup
microBatchDf.unpersist()
Action: clears the cached micro-batch.
Why: in a continually running job, if you forget this, your cluster memory fills up over time and eventually crashes with an OOM error.

💡 Summary of "continuous" best practices
Use job clusters: in Databricks, run this as a "Continuous" job type so Databricks automatically restarts it if the cloud provider has a hiccup.
Final tip: since you are now running this continually, ensure your cluster is sized correctly for a 24/7 workload!
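Pulled together, the phases above form one foreachBatch function plus one streaming query. The following is a hedged, self-contained sketch using the post's illustrative table names, checkpoint path, and rules; as small deviations, it keeps the error-reason array as a column (so the quarantine table records why each row failed) and drops the passed-rule slots with F.filter rather than array_remove.

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

def load_all_and_route_errors(micro_batch_df: DataFrame, batch_id: int) -> None:
    # Cache once: the same micro-batch feeds both the main and the error table.
    micro_batch_df.persist()
    try:
        raw_errors = F.array(
            F.when(F.col("order_id").isNull(), F.lit("Missing order_id")),
            F.when(F.col("price") < 0, F.lit("Negative price")),
        )
        validated = (
            micro_batch_df
            # Keep only the rules that actually fired (the others evaluate to null).
            .withColumn("validation_errors", F.filter(raw_errors, lambda e: e.isNotNull()))
            .withColumn(
                "validation_status",
                F.when(F.size("validation_errors") > 0, "Invalid").otherwise("Valid"),
            )
        )

        # Path A: clean rows go to the production table, without the helper columns.
        (validated.filter("validation_status = 'Valid'")
            .drop("validation_errors", "validation_status")
            .write.format("delta").mode("append")
            .saveAsTable("main_target_table"))

        # Path B: quarantined rows keep their error reasons for debugging.
        invalid = validated.filter("validation_status = 'Invalid'")
        if not invalid.isEmpty():
            (invalid.write.format("delta").mode("append")
                .saveAsTable("error_records_table"))
    finally:
        # Release the cache even if a write fails, or cluster memory leaks over time.
        micro_batch_df.unpersist()

(spark.readStream.table("source_append_table")
    .filter("(status IS NULL) AND (record_type = 'file_type')")
    .writeStream
    .foreachBatch(load_all_and_route_errors)
    .option("checkpointLocation", "/mnt/delta/checkpoints/dual_target_load")
    .trigger(processingTime="10 seconds")
    .start())
```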
Improving your Python skills is not just about writing code that works. It is about writing code that is efficient, readable, scalable, and production ready. These Python tips and tricks focus on practical improvements that make a real difference:

➜ Writing clean and Pythonic code using best practices
➜ Using list, dict, and set comprehensions effectively
➜ Leveraging built-in functions for faster execution
➜ Optimizing loops and reducing time complexity
➜ Understanding memory usage and performance tuning
➜ Mastering functions, lambda expressions, and closures
➜ Applying object-oriented design properly
➜ Handling exceptions and debugging efficiently
➜ Working smartly with files and data processing
➜ Using generators and iterators for memory efficiency
➜ Structuring projects with modules and virtual environments
➜ Writing reusable, maintainable, and testable code
➜ Avoiding common mistakes that slow down applications

Perfect for developers who want to move from basic scripting to writing professional-level Python code. Learn more from w3schools.com. Code smarter. Build faster. Think like a pro.

𝐂𝐨𝐮𝐫𝐬𝐞𝐬 𝐲𝐨𝐮 𝐰𝐢𝐥𝐥 𝐫𝐞𝐠𝐫𝐞𝐭 𝐧𝐨𝐭 𝐭𝐚𝐤𝐢𝐧𝐠 𝐢𝐧 𝟐𝟎𝟐𝟔:
1. Meta Front-End Developer 🔗 imp.i384100.net/g1KEQ5
2. Programming with JavaScript 🔗 imp.i384100.net/XYDqvg
3. Machine Learning Specialization 🔗 imp.i384100.net/XYQ9jy
4. Deep Learning Specialization 🔗 imp.i384100.net/jroLxe
5. IBM Data Science Professional Certificate 🔗 imp.i384100.net/LXbNjj
6. Python for Data Science, AI & Development 🔗 imp.i384100.net/1rq3Km
7. Google Data Analytics 🔗 imp.i384100.net/KjnNrn
8. Google Cybersecurity 🔗 imp.i384100.net/Or5L6G
9. Google Project Management 🔗 imp.i384100.net/OeRLoP
10. Meta Social Media Marketing 🔗 imp.i384100.net/RGyDYv
11. Google Cloud 🔗 imp.i384100.net/19Pz7D
12. Data Structures and Algorithms 🔗 imp.i384100.net/VxYdN6
13. IBM Full Stack Developer 🔗 imp.i384100.net/JKVZ22
14. Full Stack Java Developer 🔗 imp.i384100.net/o4LJOo
15. MEAN Stack Developer 🔗 imp.i384100.net/BnykPB

Happy Learning 🌟
♻️ Repost and share to help others.

#python #aicourses #aicommunity #linkdin #upskill #career #growth #freecourses #microsoft LinkedIn Learning
I stood on the big stage at PyCon DE & PyData 2026 last week and told 2,000 Python engineers something nobody wanted to say out loud: "AI can do everything except?" The room went quiet. Then the heads started nodding. That was my lightning talk, and it set the tone for ending the day with honest, grounded engineering conversations.

Summary of the other talks and tutorials I attended on Day 2:

- "Ship Data with Confidence: Declarative Validation for PySpark & Pandas." Ryan Sequeira's talk hit on something most teams get backwards: we build monitoring to catch bad data after it breaks things, when we should be blocking it from entering the pipeline in the first place. His open-source approach embeds validation directly into the pipeline, so errors surface at the earliest possible stage. Reactive is expensive. Proactive is a design choice.

- "From Struggling to Mastery: A Practical Guide to Data Pipeline Operations." Akif Cakir introduced an Operational Excellence Maturity Pyramid: a 5-level framework (Struggling → Basic → Decent → Strong → Mastery) for data teams trying to grow without falling apart. The uncomfortable truth he put on the slide: most teams know they need to improve, but they have no shared definition of what "better" even looks like. You can't measure progress without a map.

- "Building Secure Environments for CLI Code Agents." Harald Nezbeda made a practical case for containerizing CLI code agents in full isolation from your host system: persistent auth, workspace access via volume mounts, full API logging, all sandboxed. As AI agents get more capable, this kind of thinking moves from "nice to have" to "why didn't we do this sooner."

- "Accelerate FastAPI Development with OpenAPI Generator." Evelyne G. & Kateryna Budzyak's tutorial was a 90-minute deep dive into a workflow I wish I'd known earlier: design your API as an OpenAPI spec in YAML, and the generator spits out your FastAPI endpoints and strictly typed Pydantic models automatically. No GenAI, just clean contract-first engineering. Less ambiguity between teams. Less debugging. More trust.

- "Build a Web Coding Platform with Python, Run in WebAssembly." Maris Nieuwenhuis built an interactive Python coding platform using Pyodide + WebAssembly that executes code entirely client-side. No backend. No security risks from running user code on a server. No infrastructure overhead. Just Python, in the browser, actually working. The crowd reaction said everything.

- "Django-Q2: Async Tasks Made Simple." Moin Uddin made a compelling case for Django-Q2, the alternative nobody told you about: async tasks and cron jobs using your existing database as a broker. No Redis, no RabbitMQ, no 3-page config file. For small to medium projects that need to move fast, this might be the most practical thing I heard all day.

6 sessions. One lightning talk I'll remember for a while.