Built FCEF (File Compression & Export Formats) → a desktop application focused on rule-based file processing and deterministic outputs, not just image compression.

Link for more details: https://lnkd.in/gq-x3k4j

The idea was simple: take one input file → apply predefined compression rules → generate multiple optimized outputs in a single run.

What FCEF does:
- Converts images into JPG, PNG, WEBP, and PDF
- Applies three compression tiers (high / mid / low) with clearly defined rules
- Performs smart resizing based on compression tier while preserving aspect ratio
- Generates a predictable directory structure so outputs are easy to reason about and debug

Each compression tier is treated as a deterministic configuration:
- High → no resizing, max quality
- Mid → balanced quality with controlled resizing
- Low → aggressive compression with smaller dimensions

Internally, the project is structured around format handlers and a centralized processor, making it easy to extend with new formats or rules (PDF compression is next).

This project reinforced how much clarity you gain when you treat file operations as a processing pipeline rather than as UI-driven actions.

Tech: Python, Pillow, CustomTkinter

More than a desktop app → this was an exercise in system design, extensibility, and correctness.

#BackendDevelopment #Python #SystemDesign #SoftwareEngineering #DeveloperProjects #FullStackDevelopment
FCEF Desktop App for Image Compression and Optimization
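The tier-as-configuration idea can be sketched in plain Python. Note the quality and dimension numbers below are illustrative assumptions, not FCEF's actual rules; only the high/mid/low semantics (no resizing at high, progressively smaller dimensions below) come from the post.

```python
# Hypothetical tier table; quality/max_dim values are assumed for illustration.
TIERS = {
    "high": {"quality": 95, "max_dim": None},   # no resizing, max quality
    "mid":  {"quality": 75, "max_dim": 1600},   # balanced, controlled resizing
    "low":  {"quality": 50, "max_dim": 800},    # aggressive, smaller dimensions
}

def target_size(width, height, max_dim):
    """Smart-resize rule: cap the longest side at max_dim, preserving aspect ratio."""
    if max_dim is None or max(width, height) <= max_dim:
        return width, height
    scale = max_dim / max(width, height)
    return round(width * scale), round(height * scale)

def output_path(stem, fmt, tier):
    """Deterministic layout: <stem>/<tier>/<stem>.<ext> for every format/tier pair."""
    return f"{stem}/{tier}/{stem}.{fmt.lower()}"
```

Because both functions are pure, every (input, tier, format) triple maps to exactly one output size and path, which is what makes the outputs easy to reason about and debug.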
🚀 DSA Challenge – Day 172
Problem: House Robber with Color Constraints 🏠🎨💰

Today's problem is a twist on the classic House Robber DP, but with an additional constraint that changes the decision logic.

🧠 Problem Summary
You are given:
👉 nums[i] → money in the ith house
👉 colors[i] → color code of the ith house

Rule: ❌ You cannot rob two adjacent houses if they share the same color.

🎯 Goal
Return the maximum amount of money you can rob.

💡 Key Insight
This is a Dynamic Programming problem. Unlike the classic House Robber, the restriction isn't just adjacency; it also depends on whether adjacent houses share the same color. So every decision depends on:
- Current index
- Whether the previous house was robbed
- Whether colors conflict

⚙️ Approach
We use bottom-up DP.

State definition: dp[index][prevTaken], where:
- index → current house
- prevTaken → whether the previous house was robbed

At each house we decide:
1️⃣ Skip → move to the next house
2️⃣ Take → only if the previous house wasn't taken OR the colors are different

We compute from the end toward the start.

📈 Complexity
Time Complexity: O(n)
Space Complexity: O(n)

✨ Key Takeaway
This problem reinforces an important lesson:
➡️ When constraints depend on previous decisions, think in terms of state-based DP.

Small modifications to classic problems often require:
- Careful state design
- Clear transition rules
- Edge-case handling

That's where real interview difficulty lies 🔥

🔖 #DSA #100DaysOfCode #Day172 #DynamicProgramming #HouseRobber #LeetCode #Algorithms #ProblemSolving #CodingChallenge #InterviewPrep #Python #SoftwareEngineering #DeveloperJourney #TechCommunity
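A minimal bottom-up sketch of the state design described above, computed from the end toward the start. The function name is mine, and the problem is implemented exactly as stated in the post (this is a variant, not a standard LeetCode problem):

```python
def rob_with_colors(nums, colors):
    """dp[i][prev] = best loot from houses i..n-1, where prev says
    whether house i-1 was robbed. Robbing house i is blocked only
    when the previous house was robbed AND shares its color."""
    n = len(nums)
    dp = [[0, 0] for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):          # from the end toward the start
        for prev in (0, 1):
            skip = dp[i + 1][0]             # skip house i → next sees prev=0
            take = 0
            if prev == 0 or (i > 0 and colors[i] != colors[i - 1]):
                take = nums[i] + dp[i + 1][1]   # rob house i → next sees prev=1
            dp[i][prev] = max(skip, take)
    return dp[0][0]
```

Sanity check: when every house has the same color, the constraint collapses to the classic House Robber adjacency rule, so [2, 7, 9, 3, 1] should still yield 12.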
HistFactory CLs limits in your browser: no server, no Python, no install.

We compiled NextStat's inference engine to WebAssembly (542 KB). Drop a pyhf workspace.json, pick an analysis mode (Brazil band, profile scan, MLE fit, hypothesis test), and get results in milliseconds. Everything runs locally in your browser tab. No data leaves your machine.

Try it: https://lnkd.in/eXiqyHFe

Under the hood, this is the same Rust inference core as NextStat (AD + L-BFGS-B + profile likelihood + CLs), now running client-side. The included example computes a 41-point Brazil band in ~12 ms.

Why this matters for HEP: a lot of "quickly sanity-check this workspace" moments currently require a full Python environment (pyhf) or a ROOT install. Now it's a browser tab.

#HighEnergyPhysics #HistFactory #WebAssembly #Rust #Statistics
We just stress-tested a FastAPI app with ~1,200 concurrent users and millions of monthly API calls. And the first thing that broke… wasn't Python. Not even close.

Real-world bottlenecks were:

1️⃣ Database connection pooling
Async code was fine; bad pooling was the problem.

2️⃣ External I/O latency
Storage, third-party APIs, and network latency mattered way more than CPU.

3️⃣ Zero actual observability
No metrics or tracing means scaling is just a guess.

The shocking truth: FastAPI never actually became a bottleneck. Our design did.

After addressing the real problems:
• Consistent peak traffic
• Predictable latency
• Simplified horizontal scaling

Which leads me to wonder: how many teams are framework-optimizing… instead of addressing the actual production bottleneck?

#FastAPI #Backend #Scalability #Python #SystemDesign
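To see why the pool, not the async code, becomes the ceiling, here is a toy asyncio sketch (class, names, and numbers are mine, not from the post's stack): 50 concurrent requests contend for a 5-connection pool, and in-flight "queries" never exceed the cap; the excess requests queue.

```python
import asyncio

class ToyPool:
    """Toy connection pool: a semaphore caps concurrent 'connections'."""
    def __init__(self, max_size):
        self._sem = asyncio.Semaphore(max_size)
        self._in_use = 0
        self.peak = 0                   # highest number of connections in flight

    async def acquire(self):
        await self._sem.acquire()       # requests queue here when pool is exhausted
        self._in_use += 1
        self.peak = max(self.peak, self._in_use)

    def release(self):
        self._in_use -= 1
        self._sem.release()

async def handle_request(pool):
    await pool.acquire()
    try:
        await asyncio.sleep(0.01)       # simulated database round trip
    finally:
        pool.release()

async def main():
    pool = ToyPool(max_size=5)
    # 1,200 users in the post; 50 here to keep the sketch fast
    await asyncio.gather(*(handle_request(pool) for _ in range(50)))
    return pool.peak

peak = asyncio.run(main())
```

Real pools (asyncpg, SQLAlchemy's async engine) work the same way conceptually: if the cap is too small for the traffic, latency piles up in the queue no matter how fast the framework is.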
I prompted Claude Code to use ffmpeg to trim an eight-hour file at two hours, then split it into ten-minute chunks, then spin up a subagent for every chunk and use Whisper to extract the speech to text in parallel. It's a great idea, except that Whisper is resource-intensive and it seized the computer.

Plan B:
1. Started with an 8.6-hour video, but only the first 2 hours had content
2. Trimmed to the first 2 hours
3. Split into 8 × 15-minute WAV audio segments for crash resilience
4. Tried Python whisper with MPS (GPU); failed with NaN errors (known PyTorch/Metal bug)
5. Downloaded the ggml-large-v3 model (~2.9 GB) for whisper-cpp
6. Now running whisper-cpp with proper Metal GPU support, transcribing each segment with progress output; will merge into full_transcript.txt when done.

Output location: /Users/rtp/Movies/whisper_segments/full_transcript.txt

#ffmpeg
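The trim and split steps can be sketched as Python functions that build the ffmpeg invocations without running them (file names, segment length, and the 16 kHz mono sample rate are my assumptions; the post does not show its exact commands). Run them with subprocess.run when ffmpeg is installed.

```python
def trim_cmd(src, dst, duration="02:00:00"):
    # Stream-copy the first `duration` of the input (no re-encode, so it's fast).
    return ["ffmpeg", "-i", src, "-t", duration, "-c", "copy", dst]

def segment_cmd(src, pattern, seconds=900):
    # Drop the video track (-vn) and emit fixed-length mono 16 kHz WAV chunks,
    # the input format whisper-family tools generally expect. 900 s = 15 min.
    return ["ffmpeg", "-i", src, "-vn", "-ac", "1", "-ar", "16000",
            "-f", "segment", "-segment_time", str(seconds), pattern]
```

Usage would look like subprocess.run(trim_cmd("input.mp4", "trimmed.mp4"), check=True) followed by subprocess.run(segment_cmd("trimmed.mp4", "seg_%03d.wav"), check=True); segmenting up front is what buys the crash resilience, since a failed transcription only loses one 15-minute chunk.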
I recently read about Multithreading, Multiprocessing, and AsyncIO, and thought I'd share this mental model because it's just too good:

🧵 Multithreading = One kitchen, multiple cooks
- Everyone shares the same space
- Only one can use the stove at a time (thanks, GIL!)
- Great for waiting around (I/O tasks), not for heavy cooking

🧠 Multiprocessing = Multiple kitchens, each with their own stove
- True parallel cooking
- Each process gets its own memory & CPU core
- Heavy lifting? This is your move.

⚡ AsyncIO = One super-efficient cook
- Rice simmering? Chop vegetables.
- Waiting for the oven? Prep the salad.
- Single-threaded but intelligently switching: no wasted time, no extra salaries.

🚀 It's not about which is best; it's about matching your problem to the right kitchen setup.
- I/O-bound (API calls, file reads) → Threading or AsyncIO
- CPU-bound (data crunching, image processing) → Multiprocessing
- Massive scale, single thread → AsyncIO

#Python #Concurrency #SoftwareEngineering #Coding #TechTips
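The "one super-efficient cook" picture maps directly onto asyncio.gather; a minimal sketch (the dish names are mine):

```python
import asyncio
import time

async def simmer_rice():
    await asyncio.sleep(0.2)    # "waiting on the stove" = an I/O wait
    return "rice"

async def prep_salad():
    await asyncio.sleep(0.2)    # another wait the cook doesn't stand around for
    return "salad"

async def main():
    start = time.perf_counter()
    # One cook, two dishes: the event loop switches while each dish "cooks".
    dishes = await asyncio.gather(simmer_rice(), prep_salad())
    return dishes, time.perf_counter() - start

dishes, elapsed = asyncio.run(main())
```

Both 0.2-second waits overlap, so the whole meal takes about 0.2 s instead of the 0.4 s a strictly sequential cook would need; that overlap is the entire payoff for I/O-bound work.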
Ever wished you could visualize analog series with perfectly aligned structures and highlighted differences, all in one clean grid? I've built a free web tool that does exactly that: https://lnkd.in/gRE2M7Fi

What it does: Upload a SMILES string or SDF file with your analogs, provide the parent structure, and click "Load & Align." The app uses MCS-based alignment and highlights structural changes in green, perfect for QSAR analysis, SAR presentations, or medicinal chemistry reports.

Key features:
✓ Customizable grid layout (columns, label size, bond width)
✓ Multiple labeling options via dropdown menu
✓ Manual controls: rotate, mirror, and edit highlights atom-by-atom when MCS needs refinement
✓ Export high-resolution PNGs for publications and presentations
✓ Fully local: runs in your browser with zero data transmission

Built with Claude Code 🤖 and RDKit.js. For those who prefer Python, I've also created a PyQt5-based desktop version: https://lnkd.in/gr4aweYR

This is a work in progress; I welcome your feedback, bug reports, and feature suggestions!
We often discuss models, but structure interpretation comes first. Tools focusing on clean alignment and explicit chemical differences can meaningfully support SAR-driven reasoning.
Postdoctoral Researcher at UCSF | Computer-accelerated drug discovery | Modern computational methods for better drugs
🚀 DSA Challenge – Day 171
Problem: Exclusive Time of Functions ⏱️🔥

Today's problem is a classic stack-simulation question that tests your understanding of function execution flow in a single-threaded CPU.

🧠 Problem Summary
You are given:
👉 n functions with IDs from 0 to n-1
👉 A list of logs in the format: "{function_id}:{start|end}:{timestamp}"

Rules:
- The CPU is single-threaded
- Only one function runs at a time
- Functions can call other functions (nested / recursive)
- When a function starts → push to the stack
- When it ends → pop from the stack

🎯 Goal
Return an array where res[i] represents the exclusive time of function i.
Exclusive time = actual execution time, excluding time spent in child function calls.

💡 Key Insight
This is a stack simulation problem. Why? Because the most recently started function is always the one running, and nested calls pause the parent function.

When a function ends:
- Duration = end_time - start_time + 1
- Add the duration to that function
- Subtract this duration from its parent (if one exists)

The +1 is important because end timestamps are inclusive.

⚙️ Approach
1️⃣ Initialize: res = [0] * n, stack = []
2️⃣ For each log: parse fid, type, time
3️⃣ If "start": push (fid, time) onto the stack
4️⃣ If "end": pop from the stack, compute the duration, add it to the current function, and subtract it from the parent (if the stack is not empty)

📈 Complexity
Time Complexity: O(m), where m = len(logs)
Space Complexity: O(n) (stack depth in the worst case)

✨ Why This Problem Is Important
This teaches:
✅ Stack simulation
✅ Handling nested intervals
✅ Inclusive time calculation
✅ Managing parent-child execution overlap

Very common in:
- System design interviews
- OS-related questions
- Execution trace problems
- Call stack simulation

🔖 #DSA #100DaysOfCode #Day171 #Stack #Simulation #LeetCode #Algorithms #InterviewPrep #CodingJourney #Python #ProblemSolving #SoftwareEngineering #TechCommunity
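The approach above can be sketched directly; this is my rendering of the post's steps (add the duration to the finished function, subtract it from the parent), not official solution code:

```python
def exclusive_time(n, logs):
    res = [0] * n
    stack = []                       # (function_id, start_time)
    for log in logs:
        fid, typ, t = log.split(":")
        fid, t = int(fid), int(t)
        if typ == "start":
            stack.append((fid, t))
        else:
            sfid, st = stack.pop()
            duration = t - st + 1    # end timestamps are inclusive, hence the +1
            res[sfid] += duration
            if stack:                # a child's time is not exclusive to its parent
                res[stack[-1][0]] -= duration
    return res
```

On the classic example logs ["0:start:0", "1:start:2", "1:end:5", "0:end:6"] with n = 2, function 1 runs for 4 units and function 0 keeps only 3 of its 7, giving [3, 4].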
FastAPI just shipped a 2x JSON serialization performance improvement. One line of code: declare your return type annotation on the route handler. That's it.

@app.post("/items/")
async def create_item(item: Item) -> Item:
    return item

When you declare the return type, FastAPI hands serialization directly to Pydantic. Pydantic v2 runs its JSON serialization on the Rust side. No intermediate steps. No jsonable_encoder conversion. Straight to bytes.

Without the return type annotation, FastAPI runs additional processing before serialization. With it, you skip that layer entirely and let Rust handle the heavy lifting.

Three things worth knowing:
- This works out of the box after upgrading to FastAPI 0.131.0
- You do not need orjson or any extra dependency
- The performance gain scales with payload size; larger responses see a bigger difference

If you are already on Pydantic v2 and using FastAPI, this is a free performance upgrade. Add return type annotations to your route handlers, upgrade to 0.131.0, and your API gets faster without touching your business logic.

#FastAPI #Python #Rust #BackendDevelopment #APIs
DSA Series ... 🚀 Two-pointer approach (last problem)
LeetCode #42 (Level: Hard): Trapping Rain Water ...

🎯 Problem: Given an elevation map (the bars), calculate how much rainwater can be trapped between the "peaks".

Working Principle ... 🧐
Think of it as two walls moving inward from the left and right sides.
** Pointers: start one pointer at the very beginning (left) and one at the end (right). **
** leftMax and rightMax track the bottleneck on each side. **
** trappedWater accumulates the water level values. **

- By maintaining leftMax and rightMax variables and moving inward from both ends of the array, we can calculate the trapped water in a single pass.
- If height[left] is smaller than height[right], the water level is bounded by the left side, so compare height[left] with leftMax.
  * If it is greater than the max value, replace leftMax; otherwise, add (leftMax minus the current height) to trappedWater.
  * Then move (increase) the left pointer.
- Otherwise, compare height[right] with rightMax.
  * If it is greater than the max value, replace rightMax; otherwise, add (rightMax minus the current height) to trappedWater.
  * Then move (decrease) the right pointer.
- When the left and right pointers cross, terminate the loop and return trappedWater.

That's all ... 😎
* Time Complexity: O(n): we only pass through the array once.
* Space Complexity: O(1): no extra arrays needed, just a few variables.

Get in touch : ) ... 🚀
I'll see you all in the next post (chapter), about another DSA approach: "Sliding Window" ... 🔥
If you have any suggestions or questions, comment or ping me ... 💬

#Java #CodingInterview #Algorithms #LeetCode #SoftwareEngineering #DataStructures #DSA #TwoPointers #TrappingRainWater
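The walk described above, sketched in Python rather than the post's Java (variable names follow the post):

```python
def trap(height):
    left, right = 0, len(height) - 1
    left_max = right_max = trapped_water = 0
    while left < right:
        if height[left] < height[right]:
            # Water over `left` is bounded by the left wall.
            if height[left] >= left_max:
                left_max = height[left]                  # new bottleneck
            else:
                trapped_water += left_max - height[left]
            left += 1
        else:
            # Water over `right` is bounded by the right wall.
            if height[right] >= right_max:
                right_max = height[right]                # new bottleneck
            else:
                trapped_water += right_max - height[right]
            right -= 1
    return trapped_water
```

On the canonical LeetCode example [0,1,0,2,1,0,1,3,2,1,2,1] this returns 6, in one pass and O(1) extra space as the post claims.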