Ever formatted strings only to lose track of what was hardcoded and what came from variables? Once you flatten everything, debugging and safety checks get tricky fast. This matters because the line between literal text and user input is exactly where bugs and vulnerabilities hide: think escaping HTML, handling SQL inputs, or enforcing formatting rules.

Meet t-strings, a new Python feature (PEP 750, landing in Python 3.14) that looks like f-strings but keeps structure instead of collapsing to one string. Switch f'...' to t'...' and you get an object with parts you can inspect: which segments are plain text and which are injected values. With that clarity, you can target only the dynamic pieces and make changes or sanitization precise, without guessing.

Examples you can implement:
- Uppercase just the variable values and leave your literals intact.
- Escape or validate only the user-provided segments.
- Emit something other than a plain string, such as a DOM object or a domain-specific data structure.

If you build web backends, data pipelines, or any system that formats user data, this pattern can make your code safer and easier to maintain. Worried about a steep learning curve? The mental model mirrors f-strings and starts with a single prefix change. Need a plain string at the end? You can still materialize one when required.

At borntoDev, we make emerging Python patterns practical for your day-to-day. Ready to grow from practitioner to Tech Expert? Share how you would use t-strings and follow borntoDev for more practical deep dives. 🚀

#borntoDev #Python #CleanCode #ApplicationSecurity #WebDevelopment #SoftwareEngineering
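t-strings expose the literal-vs-dynamic split as a first-class object. On any current Python, the core idea can be sketched with the standard library's string.Formatter, which already parses a template into literal and interpolated segments (render_escaped is an illustrative name for this sketch, not part of the t-string API):

```python
import html
from string import Formatter

def render_escaped(template, **values):
    """Walk a template, keeping literal text intact and escaping only
    the interpolated values -- the separation t-strings make explicit."""
    out = []
    for literal, field, _spec, _conv in Formatter().parse(template):
        out.append(literal)            # static text: trusted, untouched
        if field is not None:
            out.append(html.escape(str(values[field])))  # dynamic: sanitized

    return "".join(out)

print(render_escaped("<p>Hello, {name}!</p>", name="<script>alert(1)</script>"))
# -> <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>
```

Only the injected value is escaped; the surrounding markup the developer wrote stays exactly as written, which is the whole point of keeping the structure around.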
Prevent Bugs with Python's t-strings: Safeguarding User Input
I just reviewed 1,000 lines of code. No bugs. Perfect syntax. Impeccable patterns.

I rejected it. Why? The code:

def a(b, c):
    d = []
    for e in b:
        if e > c:
            d.append(e)
    return d

What it actually does: Nobody knows. Including the developer who wrote it.

The problem with "clever" code:

Developer thinks:
→ Short variable names save time
→ Concise code is better
→ Comments are for weak developers

Reality:
→ Code is read 10x more than written
→ Future maintainers waste hours deciphering
→ Bugs hide in obscurity

The same code, readable:

def filter_high_value_transactions(transactions, minimum_amount):
    """
    Filter transactions above a minimum amount.

    Args:
        transactions: List of transaction amounts
        minimum_amount: Threshold for filtering

    Returns:
        List of transactions above threshold
    """
    high_value_transactions = []
    for transaction_amount in transactions:
        if transaction_amount > minimum_amount:
            high_value_transactions.append(transaction_amount)
    return high_value_transactions

Which one would you rather maintain?

Code should optimize for:
❌ Brevity
❌ Cleverness
❌ Showing off skills
✅ Readability
✅ Maintainability
✅ Clarity

My new code review checklist:

Variable names:
→ Descriptive (not x, y, temp)
→ Pronounceable (can discuss verbally)
→ Searchable (can grep codebase)

Function names:
→ Verb-based (describes action)
→ Clear purpose
→ No abbreviations

Comments:
→ Explain WHY, not WHAT
→ Document complex logic
→ Clarify business rules

Structure:
→ Small functions (< 50 lines)
→ Single responsibility
→ Obvious flow

Tests:
→ Readable test names
→ Clear expectations
→ Document edge cases

The developer's response: "But isn't this too verbose?"

Me: "Would you rather spend:
→ 5 extra minutes writing clear code
→ Or 5 hours debugging cryptic code?"

They rewrote it. Now everyone on the team can understand it.

Code is communication. Write for humans first. Machines second.

#CleanCode #CodeReview #SoftwareEngineering #Readability #BestPractices
🚀 Considering code-based vs no-code web scraping tools? Let's break it down! When it comes to web scraping, choosing between code-based and no-code tools can be a crucial decision. Code-based tools like Python's BeautifulSoup offer flexibility and customization, but they require coding skills. On the other hand, no-code tools like Octoparse or Import.io are user-friendly and don't require programming knowledge. Here's a tip: If you're a beginner looking to quickly extract data without coding, a no-code tool might be your best bet. However, if you need specific data points or advanced functionalities, a code-based approach could be more suitable. Which type of web scraping tool do you prefer using, and why? Let's discuss in the comments below! #WebScraping #NoCode #CodeBased #DataExtraction #TechTools #DataMining #DataAnalytics
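For a feel of the code-based side, here is a minimal scraping sketch using only the standard library's html.parser (no third-party dependency; a real project would likely reach for BeautifulSoup or a no-code tool instead):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags -- the kind of targeted,
    customizable extraction that code-based scraping makes possible."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

parser = LinkExtractor()
parser.feed('<p><a href="/a">A</a> and <a href="/b">B</a></p>')
print(parser.links)  # -> ['/a', '/b']
```

Ten lines of code buys exact control over what gets extracted, which is precisely the flexibility no-code tools trade away for convenience.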
I just migrated the backend of a major project I'm working on from Flask to FastAPI. Same endpoint. Same logic. Completely different developer experience.

Flask version:
- Manual input validation
- Manual error handling
- Manual JSON serialization
- Manual Swagger setup
- try/except blocks standing guard like overworked security

FastAPI version: 4 lines. Type hints. Return statement. That's it.

What's actually happening:
- user_id: int validates automatically
- Invalid input → 422 response (automatic)
- Response models serialize clean JSON (automatic)
- OpenAPI docs generate themselves (automatic)
- Async support without duct tape

No magic. Just smart design around Python's type system.

This isn't just "less code." It's fewer edge cases, fewer hidden bugs, better performance, and APIs that document themselves.

In simple terms, Flask feels like: "Here are the tools. Build carefully." FastAPI feels like: "I already installed the guardrails. You're welcome."

This is what "work smarter, not harder" looks like in API architecture. If you're building APIs in Python, this difference is huge.

#Python #FastAPI #BackendDevelopment #SoftwareEngineering #APIDevelopment #backendMigration #FlaskVSFastAPI #Flask
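The mechanism behind that "automatic" validation can be sketched in plain Python. The toy decorator below is illustrative only (FastAPI's real machinery is built on Pydantic and does far more); it shows how a type hint like user_id: int can drive validation and a 422-style response without any hand-written checks:

```python
import inspect

def validate(func):
    """Toy sketch: reject keyword arguments that do not match the
    function's type hints, mimicking FastAPI's automatic 422 response."""
    sig = inspect.signature(func)

    def wrapper(**kwargs):
        for name, value in kwargs.items():
            anno = sig.parameters[name].annotation
            if anno is not inspect.Parameter.empty and not isinstance(value, anno):
                return {"status": 422, "detail": f"{name} must be {anno.__name__}"}
        return {"status": 200, "data": func(**kwargs)}

    return wrapper

@validate
def read_user(user_id: int):        # hypothetical endpoint handler
    return {"user_id": user_id}

print(read_user(user_id=7))         # valid input passes straight through
print(read_user(user_id="abc"))     # wrong type gets a 422-style error
```

The handler body stays four lines; all the guard-rail logic lives in reusable machinery keyed off the annotations, which is the design idea the post is praising.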
When dealing with a problem, engineers and analysts have long been split into two camps:
• Click-driven software: fast, structured and convenient
• Code-first environments: powerful, flexible and programmable

However, switching between them breaks your workflow and often your reproducibility.

At PE Bytes, that trade-off is gone. You don't have to pick a side; you can have both. Our engineering & data analytics platform now allows you to execute the exact Python code generated by the web apps directly in your browser, powered by the same deterministic computational core.

What this enables in practice:
✅ Start with the structured UI for rapid setup
✅ Get the deterministic Python code that produced your result
✅ Execute and explore "what-if" scenarios in the browser
✅ Move the same validated code into your full Python environment when needed

The video demonstrates two workflows:
1) Food Engineering: search milk → define a new product (milk powder) via Python → compute thermo-physical properties in one unified environment.
2) Data Analytics: two-sample t-test via UI → identical Python code generation → guaranteed reproducibility between UI and Python (web or native).

This is not just convenience. It is reproducible analytics infrastructure for modern R&D teams who want both simplicity and control, without fragmentation.

#engineering #statistics #dataanalysis #python #reproducibility
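As an illustration of the kind of computation behind that second workflow (this is a generic sketch, not PE Bytes' generated code), the two-sample Welch t statistic fits in a few lines of standard-library Python:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    va, vb = variance(a), variance(b)   # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

print(round(welch_t([1, 2, 3, 4], [2, 3, 4, 5]), 4))  # -> -1.0954
```

Because the computation is a deterministic function of its inputs, running the same code in a browser or a native Python environment gives bit-for-bit the same answer, which is the reproducibility claim in a nutshell.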
🧠 LEETCODE CONSISTENCY SERIES 🚀
Day 1️⃣1️⃣ of 365 Days 🔁

📘 Topic: Strings
🧩 Problem: Longest Common Prefix (LeetCode #14)
⏱ Time Taken: ~15 minutes
🔹 Approach: Sorting + Character Comparison

🧠 After sorting the array of strings, only the first and last strings need to be compared. Any common prefix shared by all strings must also be common between these two.

🧩 Step-by-Step Algorithm 🔍
🧱 Step 1: Initialization / Setup — Sort the list of strings lexicographically. This groups similar prefixes together.
🧵 Step 2: Starting State / Base Case — Store the first and last strings from the sorted list. Initialize an empty string prefix to store the result.
🔁 Step 3: Main Loop / Traversal — Iterate character-by-character up to the minimum length of the first and last strings.
⚖️ Step 4: Key Logic / Condition Check — If characters at the same index match, add them to prefix. Stop the loop as soon as a mismatch is found.
🎯 Step 5: Termination Condition — Loop ends on the first mismatch or when one string ends.
🏁 Step 6: Return Result — Return the accumulated prefix.

🧠 Why This Works 💡
Sorting ensures the maximum possible difference appears between the first and last strings. If these two share a prefix, all the middle strings must share it too. This avoids unnecessary comparisons with every string.

⏱ Time Complexity ⌛
O(n log n) string comparisons in all cases, since sorting dominates. (With strings up to length m, each comparison can touch m characters, so a tighter bound is O(m · n log n).)

💾 Space Complexity 📦
O(1) extra space (ignoring the sort), since only a few variables are used for comparison.

🚀 What I Learned Today 📚
- Sorting can simplify string comparison problems.
- Comparing just two extreme elements can be enough.
- Clean logic often beats brute force.

📌 Next Goal 🎯 Explore vertical scanning and Trie-based solutions for this problem.

Consistency > Motivation 💪
🔗 GitHub: https://lnkd.in/dRGB_B8Z

#DSA #LeetCode #Python #ProblemSolving #365DaysOfCode #CodingJourney 🚀
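The six steps above, sketched in Python (names are illustrative, not the exact submitted code):

```python
def longest_common_prefix(strs):
    """Longest common prefix via sorting + first/last comparison."""
    if not strs:
        return ""
    strs = sorted(strs)                  # Step 1: lexicographic sort
    first, last = strs[0], strs[-1]      # Step 2: the two extremes
    prefix = []
    for a, b in zip(first, last):        # Step 3: walk both together
        if a != b:                       # Step 4: stop on first mismatch
            break                        # Step 5: loop ends here or at min length
        prefix.append(a)
    return "".join(prefix)               # Step 6: accumulated prefix

print(longest_common_prefix(["flower", "flow", "flight"]))  # -> fl
```

zip stops at the shorter of the two strings, so the "minimum length" bound from Step 3 comes for free.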
I just released nestdict v2.0.0a1 — a complete rewrite of a Python library I built for working with nested dicts.

Honestly, v1 wasn't good enough. It worked for basic cases, but the design had limitations and edge cases. Over time I realised fixing it incrementally wouldn't get it to the level I wanted. So I rewrote it from scratch with a cleaner architecture and stronger guarantees.

This problem comes up everywhere when working with APIs or JSON:

city = response.get("data", {}).get("customer", {}).get("address", {}).get("city")

With nestdict:

from nestdict import NestDict

nd = NestDict(api_response)
city = nd.get("data.customer.address.city")

Much cleaner and easier to work with. It also supports:
• setting deeply nested values without worrying about intermediate dicts
• flatten / unflatten nested structures
• working with lists (users.[0].name)
• functional API for one-off operations
• fully typed, zero dependencies

One line that captures the idea well: Pydantic is for data you define. Nestdict is for data that arrives.

This is still alpha (v2.0.0a1). I'm sharing early to get feedback and improve the API before stabilizing it. You can try it with:

pip install nestdict==2.0.0a1 --pre

This rewrite taught me a lot about API design, handling edge cases, and building reliable abstractions in Python.
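The core lookup idea is easy to sketch from scratch (a simplified illustration of dotted-path access, not nestdict's actual implementation):

```python
def deep_get(data, path, default=None):
    """Dotted-path lookup over nested dicts -- a minimal sketch of the
    idea behind libraries like nestdict, not the library itself."""
    current = data
    for key in path.split("."):
        if isinstance(current, dict) and key in current:
            current = current[key]
        else:
            return default   # any missing level short-circuits safely
    return current

response = {"data": {"customer": {"address": {"city": "Oslo"}}}}
print(deep_get(response, "data.customer.address.city"))  # -> Oslo
print(deep_get(response, "data.customer.phone", "n/a"))  # -> n/a
```

The value of a real library over this sketch is everything around the happy path: setting values, lists, flattening, and typing, exactly the edge cases the rewrite was about.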
I recently came across a post from a user on Reddit who was frustrated with tracking ski trails and lift conditions. They wanted a way to pull real-time data from Stratton Mountain's report (grooming status and lift/trail openings) without manually clicking through a website every day. I took the challenge and built a web scraping solution using Python.

Technical Hurdle: The project was more than just a simple scrape. The resort's website uses dynamic elements, meaning the desired data was only accessible after certain JavaScript functions were executed. A standard "static" scrape would return empty results.

Solution: To deliver the functionality the user needed, I implemented a headless browser session using Selenium. This allowed the script to:
- Simulate a real user visit to trigger the necessary JavaScript
- "Wait" for the dynamic trail elements to render completely before extraction
- Capture and transform raw web data into a clean, actionable format

Result: By automating this workflow, I provided the user with a reliable way to access mountain data. Instead of being locked behind a browser, the information is now structured and ready for any future needs, from custom dashboards to automated notifications.

Check out the code on GitHub: https://lnkd.in/gHQ3JtAd

What's one manual task in your hobby or business that you wish you could automate?
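That "wait for the dynamic elements" step boils down to a polling pattern. Selenium packages it as explicit waits (WebDriverWait); stripped of the browser specifics, the pattern itself looks like this illustrative sketch:

```python
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll until condition() returns a truthy value, or time out.
    The same idea Selenium's explicit waits apply to DOM elements."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")

# Simulated "page": the trail data appears only after a few polls.
calls = {"n": 0}

def trail_data_ready():
    calls["n"] += 1
    return ["Upper Standard: groomed"] if calls["n"] >= 3 else None

print(wait_for(trail_data_ready))  # -> ['Upper Standard: groomed']
```

Polling with a deadline, rather than a fixed sleep, is what makes dynamic scrapes both reliable (no race with slow JavaScript) and fast (no over-waiting when the page renders early).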
Day 4 of #200DaysOfCode! 🚀

After tackling "3Sum" yesterday, I decided to circle back to the problem that started it all for many of us: "Two Sum" (LeetCode 1). While this is often the first problem developers solve, revisiting it with a focus on optimization is always valuable.

The Strategy: Space vs. Time
Yesterday, for 3Sum, I used sorting and pointers to save space. Today, for Two Sum, I used a Hash Map (Dictionary) to maximize speed.

The Logic (One-Pass Hash Map):
Instead of using a nested loop to find a pair (which would be a slow O(N^2)), I utilized a dictionary to "remember" the numbers I've seen so far.
1. Iterate through the array.
2. Calculate the complement: diff = target - nums[i].
3. Check if this diff already exists in our dictionary. If yes, we found our pair! Return the indices immediately.
4. If no, store the current number and its index in the dictionary for future lookups.

This approach trades a bit of memory, O(N), for a massive gain in speed, bringing the time complexity down to a linear O(N).

The Result: My Python solution hit a perfect 0 ms runtime, beating 100.00% of submissions. ⚡

It's fascinating how different data structures (Hash Maps vs. Pointers) solve similar "Sum" problems in completely different ways.

Day 4 down. The foundation is solid. 🧱

Which pattern do you prefer implementing: the "Two Pointer" dance or the "Hash Map" lookup? 👇

#200DaysOfCode #Python #LeetCode #TwoSum #Algorithms #HashMap #DataStructures #ProblemSolving #DeveloperJourney #Optimization
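The one-pass hash map described above, in Python (a sketch following those steps; names are illustrative):

```python
def two_sum(nums, target):
    """One-pass hash map: O(N) time, O(N) extra space."""
    seen = {}                        # value -> index of numbers seen so far
    for i, n in enumerate(nums):
        diff = target - n            # the complement we still need
        if diff in seen:             # seen it before? pair found
            return [seen[diff], i]
        seen[n] = i                  # remember this one for future lookups
    return []                        # no pair exists

print(two_sum([2, 7, 11, 15], 9))  # -> [0, 1]
```

Each element is visited once and each dictionary lookup is expected O(1), which is where the linear running time comes from.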
Docker-style sandboxes for AI agents may no longer be the default.

Monty is a minimal, secure Python interpreter written in Rust, designed specifically for running LLM-generated code safely. Instead of spinning up containers or heavyweight sandboxes, Monty embeds directly into your agent runtime.

What makes it interesting 👇
• Runs a controlled subset of Python — enough for agents to express logic
• Completely blocks filesystem, env, and network access by default
• Host interaction only through explicit, developer-defined functions
• Extremely fast startup (single-digit microseconds)
• Supports modern Python type hints and typechecking
• Can snapshot interpreter state and resume later
• Tracks and limits memory, execution time, and stack depth
• Callable from Rust, Python, or JavaScript

This isn't a general-purpose Python replacement. It's a safe execution layer for agent code, without container overhead or high latency. The implications for tool-using agents, evaluators, and planners are big.

👇 I break down patterns like this — secure execution, agent tooling, and real architectures — here:
👉 https://lnkd.in/dE86ybTc

GitHub repo link in comments ⬇️

♻️ Share with someone building agent runtimes

#AIAgents #Python #Rust #DevTools #LLM #AgenticAI
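A conceptual sketch of that "explicit, developer-defined functions" contract (this is not Monty's API, and CPython's exec is NOT a security boundary; the snippet only illustrates the shape of the interface, where guest code sees nothing but what the host chooses to expose):

```python
# Host side: the only capabilities the guest is allowed to call.
def get_weather(city):
    return f"sunny in {city}"          # hypothetical host-side tool

host_api = {"__builtins__": {}, "get_weather": get_weather}

# Guest side: LLM-generated code that can use the exposed functions
# but has no filesystem, network, or builtins in scope.
guest_code = "result = get_weather('Oslo')"

namespace = dict(host_api)
exec(guest_code, namespace)            # a real sandbox enforces this boundary
print(namespace["result"])             # -> sunny in Oslo
```

The design idea is capability passing: rather than trying to strip dangerous features out of a full interpreter, the runtime starts from nothing and the host grants each callable explicitly.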