Wow... just checked the stats and Python PLY Preview has officially passed 20,000 downloads! I'm blown away. It's been two years since I first published this little VS Code extension, and I'm thrilled so many people find it useful for debugging 3D data.

To celebrate, I've just pushed v0.0.5. This update fixes some annoying bugs and adds features I've wanted for a while, especially for data science work:

- PyTorch & NumPy support: The big one! You can now preview PyTorch tensors and NumPy arrays with (n, 6) color data right from the debugger.
- No more junk files: The extension now automatically cleans up temp files after your debug session ends. (They're now stored in .vscode/ply_preview.)
- Smarter activation: I've tweaked the logic so it stops trying to activate on variables inside comments or strings.

This project is, and always will be, open source. If you find a bug, have an idea, or want to contribute, please head over to GitHub. Pull requests are always welcome!

Thanks for all the support and feedback over the last two years.

Get the update: https://lnkd.in/eAHnUdyD
Report bugs or contribute: https://lnkd.in/dBkee7Tp

#VSCode #Python #PyTorch #NumPy #DataScience #3DVisualization #PointCloud #OpenSource #DeveloperTools
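For anyone wondering what "(n, 6) color data" means in practice, here's a minimal sketch (the data and variable names are my own illustration, not from the extension): each row is one point, laid out as x, y, z followed by r, g, b.

```python
import numpy as np

# Hypothetical example: 100 random points with per-point RGB color.
n = 100
xyz = np.random.rand(n, 3)                # x, y, z coordinates
rgb = np.random.randint(0, 256, (n, 3))   # r, g, b values in 0-255

# One row per colored point -> shape (n, 6), the layout the post describes.
points = np.hstack([xyz, rgb])
print(points.shape)  # (100, 6)
```

A PyTorch tensor of the same shape would follow the same row layout.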
Some people hold the opinion that good visualization isn't really Python's strength. But here's a wonderful counterexample to that idea: python-graph-gallery.com by Yan Holtz. It's truly inspiring, and it's also an excellent, well-structured guide to creating great visualizations in Python on your own!
"Python creates only ugly charts" ❌

I've heard this so many times! It's true that most of the matplotlib charts out there are... not polished. It's also true that Matplotlib's API is hard to grasp, and the docs are... not easy to follow.

But with:
- a bit of #dataviz design theory
- a few simple fundamental concepts about the syntax
- and a bit of iteration...

✨ You can create literally anything!

I've spent years of my life gathering examples at python-graph-gallery.com. I've also created Matplotlib-journey.com with Joseph Barbier to provide the best learning experience.

Let's push the limits of Matplotlib and get rid of its bad reputation!

----
Original chart by Gilbert Fontana 🙏
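A minimal sketch of those "few simple fundamental concepts" in practice (the data and styling choices here are my own illustration, not from the gallery): a handful of one-liners take a default matplotlib plot most of the way from "ugly" to polished.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Hypothetical data: a single series to demonstrate the polishing steps.
years = list(range(2015, 2025))
values = [3, 4, 6, 5, 8, 9, 12, 11, 14, 16]

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(years, values, color="#2b6cb0", linewidth=2)

# A few small design-theory touches that transform the default look:
ax.spines["top"].set_visible(False)    # drop the chart-junk borders
ax.spines["right"].set_visible(False)
ax.grid(axis="y", alpha=0.3)           # light horizontal guides only
ax.set_title("A default matplotlib line, lightly polished", loc="left")

fig.savefig("polished.png", dpi=150)
```

Iterate from there: tweak one element, look, tweak again.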
Another great article from Real Python: "Python MarkItDown: Convert Documents Into LLM-Ready Markdown." It walks through using the MarkItDown package to process various file formats (PDF, Word, PowerPoint, Excel, etc.) and convert them into clean Markdown to be passed on to an LLM for various workflows. The tutorial covers using the tool from the CLI and creating an MCP server in Claude Desktop to give the LLM the use of MarkItDown as a tool. A very insightful article, and something I'd highly recommend to friends who work with this technology! #Python #RealPython #LLM #AItools #MarkItDown #GenerativeAI #AIDevelopment #PythonProgramming #MachineLearning #AIWorkflow #TechEducation #DataScienceTools #ClaudeDesktop #MCP #EdTech #HardingUniversity #AIInnovation
Python flies for product code, until a CPU-bound task slams into the GIL. My go-to fix: keep the FastAPI app in Python, move the hot loop to Rust via PyO3.

Simple idea, senior impact: write a tiny Rust function, expose it as a Python module, and call it from your endpoint. PyO3 lets you release the GIL and run true parallel threads, with safe memory and minimal copying of bytes/NumPy buffers.

Examples that feel advanced yet stay readable:
1. Text similarity: compute cosine similarity for 10k embeddings to re-rank search results or recommendations.
2. Image uploads: resize + quick blur/noise check before storing, so you catch bad images early.
3. Data crunching: fast JSON/CSV parsing with type coercion and basic validation for ETL jobs.
4. Security/telemetry: compress and hash log chunks before shipping to storage to cut bandwidth and cost.

My playbook: prototype in Python, profile, isolate the pure-CPU part, design a tiny interface (in/out as bytes or arrays), then return clean Python types.

#Python #Rust #PyO3 #FastAPI #Backend #Software #Development #Performance
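The playbook's first step can be sketched like this: a minimal Python prototype of the cosine-similarity re-ranker from example 1 (the function name and shapes are my own illustration), with the tiny array-in/clean-Python-out interface that would later port to a PyO3 function.

```python
import numpy as np

def rank_by_similarity(query: np.ndarray, embeddings: np.ndarray, top_k: int = 5) -> list[int]:
    """Pure-CPU hot loop: the piece you'd profile, isolate, and eventually port to Rust.
    In: float arrays. Out: clean Python ints (row indices), best match first."""
    # Normalize rows so a plain dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = e @ q
    return np.argsort(scores)[::-1][:top_k].tolist()

# Usage sketch: re-rank 10k hypothetical embeddings against one query.
rng = np.random.default_rng(0)
emb = rng.normal(size=(10_000, 128))
query = emb[42] + rng.normal(scale=0.01, size=128)  # near-duplicate of row 42
print(rank_by_similarity(query, emb)[0])  # row 42 should rank first
```

Once this is profiled and proven hot, the same signature maps naturally onto a PyO3 function that releases the GIL while it crunches.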
(aiohttp & asyncio) vs Requests: Comparing Python HTTP Libraries

1. Requests - The Simple Synchronous Library

What it is:
- Synchronous, blocking HTTP library
- Simple, intuitive API
- The most popular choice for basic HTTP operations
- Blocks execution until the response is received

Best for:
- Simple scripts
- Sequential API calls
- Learning/prototyping
- When performance isn't critical

```python
import requests
import time

# Basic GET request
response = requests.get('https://lnkd.in/gZr2tbvq')
print(response.json())

# Making multiple requests (BLOCKING - one at a time)
def fetch_multiple_sync():
    urls = [
        'https://lnkd.in/geAKeWr3',
        'https://lnkd.in/gTPFcRiG',
        'https://lnkd.in/gndc7mJB',
    ]
    start = time.time()
    results = []
    for url in urls:
        response = requests.get(url)
        results.append(response.json())
    print(f"Time taken: {time.time() - start:.2f} seconds")
    # Output: ~3 seconds (1 second per request, sequential)
    return results
```

POST request with headers and data: https://lnkd.in/g9ze5V35
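For contrast, here's a minimal sketch of the asyncio side of the comparison. It simulates the I/O with `asyncio.sleep` instead of real aiohttp calls so it runs anywhere, but the shape is the same: three "requests" that each take one second finish in about one second total, not three.

```python
import asyncio
import time

async def fetch_fake(url: str) -> str:
    # Stand-in for an aiohttp request: pretend the server takes 1 second.
    await asyncio.sleep(1)
    return f"response from {url}"

async def fetch_multiple_async():
    urls = ['https://example.com/1', 'https://example.com/2', 'https://example.com/3']
    start = time.time()
    # gather() runs all three coroutines concurrently on one event loop.
    results = await asyncio.gather(*(fetch_fake(u) for u in urls))
    print(f"Time taken: {time.time() - start:.2f} seconds")  # ~1 second, not ~3
    return results

results = asyncio.run(fetch_multiple_async())
```

With real aiohttp you'd open a `ClientSession` and `await session.get(url)` inside the coroutine; the concurrency pattern is identical.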
Day 1 of documenting my data analysis journey. 📝

After getting comfortable with Excel, I moved on to Python and started learning about arrays. When working with data in Python, especially with libraries like NumPy and Pandas, arrays form the foundation of how data is stored and processed. They let you slice, filter, and transform data in a clean and efficient way.

Arrays are important because they make computations faster and more structured. NumPy arrays, for example, are much quicker than Python lists since they're stored in a contiguous block of memory.

One key concept I focused on today was array indexing. It's simply how you access specific elements, rows, or columns of an array, similar to how you'd select parts of a table.

That's it for today's progress.🤸 Next, I'll be exploring array transposition and shape manipulation. I'm taking it one step at a time and enjoying the process of understanding how data really works. Excited to see how this builds up over time.😊

#DataAnalysis #DataforHealth #Data #Datajourney #Documentation
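A quick sketch of the indexing idea from today's notes (toy numbers, purely illustrative): a 2-D array behaves like a small table, and square brackets select elements, rows, columns, or sub-tables.

```python
import numpy as np

# A small 2-D array: think of it as a 3-row, 3-column table.
table = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]])

print(table[0, 1])    # single element: row 0, column 1 -> 20
print(table[1])       # whole row 1 -> [40 50 60]
print(table[:, 2])    # whole column 2 -> [30 60 90]
print(table[:2, :2])  # top-left 2x2 slice
```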
I spent 3 days debugging one whitespace.

I used to ignore the "Golden Rule" of Python strings. It cost me hours of frustration until I realized: strings are immutable.

I was writing `text.strip()` thinking it cleaned my data. But the variable remained dirty because I wasn't assigning the result back.

Once I fixed my workflow, I discovered the 3 tools that actually separate pros from beginners:

1. The Janitor: Data is rarely clean. `.strip()` removes the hidden spaces that break your code, while `.zfill()` perfectly pads your IDs.
2. The Power Duo: `.split()` and `.join()` are the most powerful text-processing team. They turn messy CSV strings into structured lists instantly.
3. The Modern Standard: Stop using `.format()`. F-strings are cleaner, faster, and the absolute standard for injecting variables.

Stop fighting your data. Start formatting like a pro.

---
#Python #DataScience #CodingTips #EdTech #TechSkills #DigitalTransformation #DeveloperLife

💡 What is the one coding error you keep making? Share below!
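A minimal sketch of the immutability gotcha plus all three tools in one place (the sample strings are made up for illustration):

```python
# Strings are immutable: .strip() returns a NEW string, it never edits in place.
text = "  A123 \n"
text.strip()          # result silently discarded - `text` is still dirty
assert text == "  A123 \n"
text = text.strip()   # the fix: assign the result back
assert text == "A123"

# The Janitor: .zfill() pads IDs to a fixed width.
print("42".zfill(6))              # 000042

# The Power Duo: .split() and .join() for messy CSV-ish strings.
row = " alice , bob ,carol"
names = [part.strip() for part in row.split(",")]
print(names)                      # ['alice', 'bob', 'carol']
print(" | ".join(names))          # alice | bob | carol

# The Modern Standard: f-strings instead of .format().
user, count = "alice", 3
print(f"{user} has {count} tasks")  # alice has 3 tasks
```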
Built a quick reference guide for RAG patterns. Spent some time documenting 5 common patterns I kept seeing in production systems: • Semantic chunking • HyDE (query expansion) • Re-ranking • Metadata filtering • Query decomposition Each one has working Python code and notes on when it's worth using vs. when it's overkill. Also threw in some case studies with actual numbers ($900K-$2.3M impact range) from real implementations I researched. Nothing groundbreaking - just a clean reference for the trade-offs (latency, cost, quality) since I couldn't find one that laid it out clearly. Live docs: https://lnkd.in/edsUB5mA Code: https://lnkd.in/e9Xqrme6 #MachineLearning #RAG #Python
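To give a flavor of the first pattern on the list, here's a naive, standard-library-only sketch of chunking with overlap (this is my own illustration of the idea, not code from the linked repo; real semantic chunking splits on meaning boundaries, usually via embeddings, rather than plain sentence punctuation):

```python
import re

def chunk_sentences(text: str, max_chars: int = 200, overlap: int = 1) -> list[str]:
    """Naive sketch: split on sentence boundaries, pack sentences into chunks
    under a size budget, and repeat the last `overlap` sentences of each chunk
    at the start of the next so context isn't lost at the seams."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], []
    for s in sentences:
        if current and len(" ".join(current) + " " + s) > max_chars:
            chunks.append(" ".join(current))
            current = current[-overlap:]  # carry trailing sentences forward
        current.append(s)
    if current:
        chunks.append(" ".join(current))
    return chunks

doc = ("RAG retrieves context. Chunk size matters. Overlap preserves continuity. "
       "Too-large chunks dilute relevance. Too-small chunks lose meaning.")
for c in chunk_sentences(doc, max_chars=60):
    print("-", c)
```

The overlap parameter is exactly the latency/cost/quality trade-off the post is about: more overlap means more redundant tokens per query, but fewer answers lost at chunk boundaries.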
In this blog post, I share a small experiment sparked by a question from my manager: can an SVG DAX measure from a semantic model be rendered outside Power BI? Using a Fabric Notebook (with a bit of AI-assisted coding 😁), I queried the SVG via SemPy, scaled it in Python, and rendered it directly in the notebook. Once it worked, new ideas opened up quickly: automated KPI images, embeddable SVG badges, and small scheduled reports without touching Desktop. I hope this inspires you to have fun exploring new ways to use semantic models and SVG in your BI workflows.

#powerbi #fabricnotebook #svgdaxquery
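The "scaled it in Python" step can be sketched with nothing but the standard library (the SVG string below is a made-up stand-in for what a DAX measure might return; the post itself fetches the real one via SemPy):

```python
import xml.etree.ElementTree as ET

# Keep the default SVG namespace unprefixed when serializing back out.
ET.register_namespace("", "http://www.w3.org/2000/svg")

def scale_svg(svg_text: str, factor: float) -> str:
    """Rescale an SVG string by rewriting its width/height attributes.
    The viewBox is left untouched, so the drawing itself is unchanged."""
    root = ET.fromstring(svg_text)
    for attr in ("width", "height"):
        value = float(root.get(attr, "0").rstrip("px"))
        root.set(attr, str(value * factor))
    return ET.tostring(root, encoding="unicode")

# Hypothetical measure output: a tiny 24x24 KPI dot.
svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" '
       'viewBox="0 0 24 24"><circle cx="12" cy="12" r="10" fill="green"/></svg>')
scaled = scale_svg(svg, 4.0)
print(scaled)
```

In a notebook you'd then hand `scaled` to `IPython.display.SVG` (or embed it in HTML) to render it inline.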
Code's cheap, and reports are easier than ever. Business is the new CS degree.

Python's killing it here: Quarto turns Markdown with code into PDF, DOCX, or PowerPoint with no sweat (like Jupyter, but leaner). For Excel charts, Python-in-Excel gives you matplotlib visuals right in cells, skipping the clunky native stuff.

With LLMs mastering Python, generating reports is just table stakes. The real win? Your insights: picking what data matters and spinning it into a story.

Who's making business their superpower?

#SystemDesign #Python #Data