We have been talking about how great web-based notebooks are… but they do hit real computational limits. Try something simple like computing the eigenvalues of a 1000×1000 matrix: you're looking at seconds of runtime and ~40 MB of memory consumption just for that single experiment. Now imagine stacking multiple experiments on top of that. That's where things start to break.

With the new compute notebooks you can:
• Run multiple notebooks in parallel
• Handle heavy numerical workloads
• Terminate sessions instantly and reclaim memory
• Keep your interactive apps separate from computation

In the short video demo below:
• Interactive notebook apps
• Eigenvalue computation of a 1000×1000 matrix

#Python #Engineering #DataScience #WebAssembly #Simulation #Computing #concurrency
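For scale, the experiment the post describes can be sketched in a few lines of NumPy (the matrix contents are arbitrary and made symmetric so the eigenvalues are real; timings will vary by machine):

```python
import time

import numpy as np

# Random 1000x1000 matrix, symmetrized so the eigenvalues are real
rng = np.random.default_rng(0)
a = rng.standard_normal((1000, 1000))
sym = (a + a.T) / 2

start = time.perf_counter()
eigvals = np.linalg.eigvalsh(sym)  # symmetric eigensolver, ascending order
elapsed = time.perf_counter() - start

# The input alone holds 1000 * 1000 float64 values, i.e. 8 MB;
# solver workspace and intermediate copies push peak usage well beyond that.
print(f"{len(eigvals)} eigenvalues in {elapsed:.2f} s")
```

Running several of these side by side is exactly the workload that strains a single in-browser runtime.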
⚡ Excited to share a project I built with my groupmate Hansani Hathurusinghe! We developed SCHEDVIZ — a CPU Scheduling Simulator that implements and compares all major CPU scheduling algorithms for any given set of processes! 🖥️

🔧 Algorithms implemented:
✅ First Come First Served (FCFS)
✅ Round Robin (configurable time quantum)
✅ Shortest Process Next (SPN)
✅ Shortest Remaining Time First (SRTF)
✅ Priority Scheduling

📊 Key Highlights: The simulator takes process inputs, runs all selected algorithms, generates Gantt charts for each, computes average waiting time & turnaround time, and automatically recommends the best algorithm for the given workload! 🎯

🛠️ Tech Stack: Python · HTTP Server · HTML/CSS · JavaScript

A great hands-on experience understanding how algorithm choice directly impacts CPU performance! 💡

#CPUScheduling #OperatingSystems #Python #Algorithms #StudentProjects #Engineering #SystemsProgramming
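As a tiny illustration of the metrics such a simulator reports, here is a minimal FCFS calculation (the function name and the (arrival, burst) input format are my own choices for this sketch, not SCHEDVIZ's actual interface):

```python
def fcfs_metrics(processes):
    """processes: list of (arrival, burst). Returns (avg waiting, avg turnaround)."""
    # FCFS serves processes strictly in arrival order
    procs = sorted(processes, key=lambda p: p[0])
    clock = 0
    waiting, turnaround = [], []
    for arrival, burst in procs:
        clock = max(clock, arrival)         # CPU may sit idle until the process arrives
        waiting.append(clock - arrival)     # time spent in the ready queue
        clock += burst
        turnaround.append(clock - arrival)  # completion time minus arrival time
    n = len(procs)
    return sum(waiting) / n, sum(turnaround) / n

# P1 arrives at 0 (burst 5), P2 at 1 (burst 3), P3 at 2 (burst 1)
avg_wait, avg_tat = fcfs_metrics([(0, 5), (1, 3), (2, 1)])
```

With these three processes FCFS makes the short P3 wait behind both longer jobs, which is precisely the kind of effect that comparing against SPN or SRTF exposes.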
The Demo Gap

Your demo works perfectly. On your laptop. On your GPU. On your data. In your Jupyter notebook. With your Python version.

Ship it to production and it falls apart in ways that take engineers days to debug. This is the demo gap. And it exists because the development and deployment environments are two completely different technology stacks held together by Docker and hope.

#NeuralEcosystems closes the gap. Same language. Same runtime. Development to production. No translation layer. What works on your laptop works everywhere.

https://lnkd.in/d2EqRgzc

#NeuralOS #NeuralSCRIPT #NeuralSCRIPT++ #NeuralCPU #NeuralGPU #NeuralFUSE #NeuralRV #NeuralEDGE #NeuralDB #NeuralPIPE #NeuralSENSE #NeuralAUTO #NeuralFUZZY #NeuralIP #NeuralSDR #NeuralMESH #NeuralUI #NeuralZONE #NeuralGAURD #NeuralSHARE #NeuralGHOST #NeuralBIO #NeuralHEALTH #NeuralNAV #NeuralWEB #UAE #Innovation
llmfit 🔧

llmfit is originally a terminal tool written in Rust that helps determine which LLM models can run on your machine based on RAM, CPU, and GPU. Original project: https://lnkd.in/dFmp-8w6

Core idea: with a single command, it detects your hardware and scores models across quality, speed, memory fit, and context length to show which models are actually practical on your device.

Main features in the original version:
- Database of hundreds of models from providers like Meta Llama, Mistral, Qwen, Gemma, DeepSeek and others
- MoE support for models like Mixtral and DeepSeek
- Dynamic quantization from Q8_0 down to Q2_K
- Ollama integration

My Python version 🚀 I built a Python implementation here: https://lnkd.in/dGhzNryU Compared with the original, my version adds:
- Python CLI implementation
- Extended modern model database
- Hardware scoring system
- Estimated tokens/sec output
- Detailed fit levels: Perfect, Good, Marginal, Too Tight
- CPU / GPU / CPU+GPU / MoE execution mode detection
- Memory percentage calculation
- Recommendation engine for best model selection before download

Why this matters: before downloading very large offline AI models (sometimes 20GB+), you can check whether the model fits your hardware, whether it will run smoothly, whether quantization helps, and whether MoE models save memory. This avoids wasting time and storage on models that will not run efficiently.

#Python #AI #LLM #OpenSource #LocalAI
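As a rough sketch of the core idea: weight memory scales with parameter count times bits per weight, plus runtime overhead. The bit counts, the 1.2× overhead factor, and the fit thresholds below are approximations I chose for illustration, not llmfit's actual numbers:

```python
# Approximate effective bits per weight for common GGUF quantizations
QUANT_BITS = {"F16": 16, "Q8_0": 8.5, "Q4_K_M": 4.8, "Q2_K": 2.6}

def estimated_gb(params_b: float, quant: str, overhead: float = 1.2) -> float:
    """Estimate memory (GB) for a model with params_b billion parameters.

    overhead covers KV cache, activations, and runtime buffers.
    Billions of params * bytes per param maps directly to GB.
    """
    bytes_per_param = QUANT_BITS[quant] / 8
    return params_b * bytes_per_param * overhead

def fit_level(model_gb: float, free_ram_gb: float) -> str:
    """Bucket the model/RAM ratio into coarse fit levels."""
    ratio = model_gb / free_ram_gb
    if ratio < 0.5:
        return "Perfect"
    if ratio < 0.75:
        return "Good"
    if ratio < 0.95:
        return "Marginal"
    return "Too Tight"

# A 7B model at Q4_K_M on a machine with 16 GB free RAM
gb = estimated_gb(7, "Q4_K_M")
level = fit_level(gb, 16)
```

Even this crude estimate is enough to rule out a 70B model on a 16 GB laptop before a 40 GB download starts.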
🚀 Day 28 of the DSA grind: Today I tackled Maximum Width of Binary Tree (LeetCode 662), and it perfectly highlighted the difference between standard logic and system architecture. 🌳

When calculating width, the instinct is to use spatial coordinates: moving left is -1 and moving right is +1. I call this the "Shadow" trap. It works for vertical alignment, but it completely breaks tree physics! Why? Because trees grow exponentially, not linearly. If you just add +1 and -1, you completely lose track of the massive empty null gaps in the middle of the tree.

⚙️ 1. The Heap Index Engine
Instead of mapping coordinates, treat the tree like a massive array. If a parent is at index i:
Left Child = (2 * i) + 1
Right Child = (2 * i) + 2
Because you are multiplying by 2, the indices grow exponentially. If nodes are missing, their numbers are simply skipped. The width of a level is then: Last Index - First Index + 1.

🛡️ 2. The Overflow Fix
LeetCode sets a massive trap here. Because indices double every level, a skewed tree will overflow a 32-bit integer in C++ and produce garbage widths. The Fix: reset the math back to zero at the start of every level! Grab the minIndex of the queue and normalize every node: currIndex = originalIndex - minIndex.

#DSA #LeetCode #DataStructures #Algorithms #BinaryTrees #SoftwareEngineering #TechJourney #Coding #CPlusPlus #InterviewPrep
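The heap-index approach with per-level normalization looks roughly like this in Python (names are illustrative; Python's integers never overflow, but the normalization step is exactly what keeps a fixed-width C++ solution safe):

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def width_of_binary_tree(root):
    if not root:
        return 0
    best = 0
    queue = deque([(root, 0)])  # (node, heap index within its level)
    while queue:
        # Level width = rightmost index - leftmost index + 1
        min_idx = queue[0][1]
        best = max(best, queue[-1][1] - min_idx + 1)
        for _ in range(len(queue)):
            node, idx = queue.popleft()
            idx -= min_idx  # normalize so each level restarts near zero
            if node.left:
                queue.append((node.left, 2 * idx + 1))
            if node.right:
                queue.append((node.right, 2 * idx + 2))
    return best
```

On the classic example tree [1,3,2,5,3,null,9], the bottom level spans indices 1 through 4 after normalization, giving a width of 4 even though two of those slots are null gaps.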
🎙️ Just dropped on Talk Python To Me (Ep. 544): WheelNext is on Air! 🎧 Listen here → https://lnkd.in/ehWRzNcG

`pip install <package>` is incredible for pure Python, but often insufficient for scientific Python, where compiled code is all the rage: large wheels, no GPU detection, poor CPU optimizations. We're fixing this!

A coalition from NVIDIA, Astral, Quansight, Meta, AMD, Intel, Red Hat, and many others has been building WheelNext, a community focused on re-inventing the wheel (pun intended)! The goal: the right CUDA version picked automatically, the right CPU optimizations, smaller wheels, and better performance unlocked for scientific computing. PEP 825 is live: https://lnkd.in/exJMa4Tk

On the episode, Ralf Gommers, Charlie Marsh, Michael Kennedy, and I dig into why this matters, how it works, and when it's coming to your workflow.

Credit to many of the fantastic people who helped us get this far: Michal Gorny, Konstantin, Andrey Talman, Dr. Andy R. Terrel (he/him), Michael Sarahan, Barry Warsaw, Donald Stufft, Emma Smith, Eli Uriegas, Chris Gottbrath

If you maintain a package with native code, now is the time to get involved!

#Python #OpenSource #CUDA #PythonPackaging #DeepLearning #pytorch #NVIDIA
Open source thrives when we bridge the gap between an advanced research tool and the high school classroom. Jupyter Everywhere is the result of a massive community relay. We integrated Pyodide for in-browser Python, xeus-r for statistical computing, and JupyterLite to bypass the server-side hurdles that usually hinder K-12 tech adoption. Agriya Khetarpal details how Quansight collaborated with Skew The Script and CourseKata to turn these modular tools into a cohesive, "single-click" environment for students. Read the technical journey: https://lnkd.in/dpCmQRJ6
Need to generate molecules that match a specific 3D shape? MLConfGen uses DDPM diffusion to generate novel drug-like conformers fitting your target geometry. Open-source, one API call. Give it a spin and drop a star on GitHub!
MLConfGen has a stable release, v0.3.3 🔥 A spatially-aware molecule generation framework in a Python package, designed to be simple to install, easy to run, and useful in real research workflows, even on CPU.

With `pip install mlconfgen[torch]`, you can quickly:
- generate molecules fitting arbitrary STL shapes (protein pockets and more)
- use it as a true random real-molecule generator
- generate molecules with constrained synthesizable fragments
- explore several polished inference workflows included in the package

A lot of care went into making this package not just functional, but actually pleasant to use for day-to-day experimentation and research. If it looks interesting, please give it a try — and if you like it, a ⭐ star on GitHub would mean a lot. If you'd like to explore using MLConfGen in production, I'd be happy to connect.

GitHub: https://lnkd.in/eNNGnTYu
PyPI: https://lnkd.in/efByfxDh
Article: https://lnkd.in/eWBQ4e9b
I deployed my first end-to-end Machine Learning project: a Laptop Price Predictor 💻📈 This web app helps users estimate the market value of a laptop based on specs like RAM, CPU, and GPU.

Key Highlights:
📌 Model: built using a Random Forest Regressor for high accuracy.
📌 Data: cleaned and processed a dataset of 1,300+ laptops using Pandas.
📌 Tech Stack: Python, Flask, Scikit-learn, and HTML/CSS.
📌 Deployment: live on PythonAnywhere.

Live Demo: https://lnkd.in/gEbevAaH

Solving deployment challenges and handling categorical data was a great learning experience. On to the next one! 📈

#MachineLearning #Python #DataScience #Flask #WebDev #Portfolio
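A minimal sketch of this kind of pipeline, with made-up column names and toy data rather than the author's actual dataset: one-hot encode the categorical specs, pass numeric ones through, and fit a Random Forest on top.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in for a cleaned laptop dataset (columns are hypothetical)
df = pd.DataFrame({
    "ram_gb": [8, 16, 32, 8, 16, 32],
    "cpu": ["i5", "i7", "i9", "i5", "i7", "i9"],
    "gpu": ["none", "rtx3050", "rtx4060", "none", "rtx3050", "rtx4060"],
    "price": [550, 900, 1800, 600, 950, 1750],
})

# One-hot encode categorical specs; numeric columns pass through unchanged
pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["cpu", "gpu"])],
    remainder="passthrough",
)

model = Pipeline([
    ("pre", pre),
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
])
model.fit(df.drop(columns="price"), df["price"])

# Predict a price for a hypothetical 16 GB / i7 / RTX 3050 laptop
pred = model.predict(pd.DataFrame({"ram_gb": [16], "cpu": ["i7"], "gpu": ["rtx3050"]}))
```

`handle_unknown="ignore"` is the detail that matters at deployment time: a spec value never seen in training encodes to all zeros instead of crashing the Flask endpoint.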
URGENT HELP NEEDED!!! Someone recently asked me a question, and I need your suggestions for my reply.

Someone: "Why don't you just build or find a Python library that can efficiently run, train, and deploy heavy LLMs on CPU instead of GPU?"

Me: 🤨?????

Open to suggestions before I accidentally reinvent physics.
AI will probably have a second boom the day we figure out how to quantum compute at scale.

I recently made a big leap: I finished building all the features I wanted for Spinachlang, a quantum computing language I created to throw out ideas for what the form should look like. At the moment, quantum languages are okay because we don't do much with them, but when we want to create real things with them, we will need more abstractions. That's why I built Spinachlang.

It's a PyPI library, so you can install it with "pip install spinachlang". The documentation is right here: https://lnkd.in/eWdFYJ4X Also, check out the project on GitHub: https://lnkd.in/emCfAZE6 I made a demo showing how it can be used in Jupyter: https://lnkd.in/ewQH-QnB and a bunch of examples here: https://lnkd.in/ez-vt6xG

#quantum #ai #programminglanguage #tket #pipy #python