Simulating Complexity with JavaScript & KaTeX: The Race of Functions 💻

In my spare time, I love bringing mathematical concepts to life through visualization. My latest project is a "Function Race" that pits different growth rates against each other to see which one truly dominates as x approaches 50. While we often hear that the factorial function (x!) grows incredibly fast, it's fascinating to watch x^x (self-exponentiation) step onto the track and leave everything else in the dust.

Technical Highlights of the Project:
✅ Rendering: Built with HTML5 Canvas for smooth, high-performance animations.
✅ Typography: Integrated KaTeX for high-quality, real-time LaTeX mathematical notation rendered directly over the canvas.
✅ Dynamic Sorting: Implemented a real-time easing algorithm to rearrange the bars based on their current values, creating a smooth "racing" effect.
✅ Mathematical Precision: Used Stirling's approximation to ensure smooth growth transitions for x! across the real number line.

This was a fun way to visualize why choosing the right algorithm and understanding Big O notation is so critical in software development. Polynomial growth might look manageable at first, but once you hit exponential or super-exponential scales, the world changes!

What's your favorite "hidden boss" function that grows faster than people expect? Let's discuss in the comments! 👇

🔗 Explore the logic of mathematics and engineering on my YouTube channel: https://lnkd.in/dDc4Tv5z

#CreativeCoding #JavaScript #WebDevelopment #Mathematics #DataVisualization #Algorithms #BigO #STEM #SoftwareEngineering #CodingLife
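For anyone curious about that last highlight, here is a minimal sketch of how Stirling's approximation can stand in for x! between integer values. The function name is mine and the project may compute this differently:

// Stirling's approximation: x! ≈ sqrt(2πx) · (x/e)^x, defined for any real x > 0.
// Hypothetical helper, not the project's actual code.
function smoothFactorial(x) {
  if (x <= 0) return 1; // clamp for the start of the race
  return Math.sqrt(2 * Math.PI * x) * Math.pow(x / Math.E, x);
}
// smoothFactorial(5) ≈ 118.02 vs the exact 5! = 120; the relative error shrinks as x grows.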
More Relevant Posts
-
Coding agents can write code but can't see what it renders. They generate a component, maybe run a build, and hope for the best. There's no feedback loop between "I wrote this UI" and "this is what it actually looks like."

I've been building something on the side to fix this. RVST is a native desktop rendering engine for Svelte. Written in Rust. No browser. No webview. Your Svelte components compile to JS, RVST runs them in an embedded runtime, lays out with Taffy, renders with Vello on the GPU, and displays in a native window.

But the real point isn't replacing Electron. It's RenderQuery. Agents get a test harness that opens a real GPU-rendered window and takes JSON commands on stdin. Snapshot the scene graph. Find elements by role. Click by text. Diff state changes. Every interaction auto-runs lints — contrast regressions, lost focus, missing handlers — surfaced without asking.

8 ASCII introspection modes let agents read a UI without a single screenshot. Semantic trees, layout rects, pixel renders, structure maps — all from the CLI, all pipeable. Built-in analyzers run on the live render: WCAG contrast from actual pixels, density heatmaps, accessibility audits, auto diagnostics.

Open source. Apache 2.0.
npm install -g @rvst/cli
https://lnkd.in/eAHN8kDH
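As a purely hypothetical illustration of the JSON-on-stdin pattern described above — the binary name, command names, and JSON shapes below are invented for the sketch and are not RenderQuery's actual protocol:

// Hypothetical only: 'rvst-query' and these command objects are invented to show
// how an agent might drive a stdin-based harness; see the RVST docs for the real protocol.
import { spawn } from 'node:child_process';

const harness = spawn('rvst-query', ['App.svelte'], { stdio: ['pipe', 'pipe', 'inherit'] });
harness.stdout.setEncoding('utf8');
harness.stdout.on('data', (reply) => console.log('harness reply:', reply));

const send = (cmd) => harness.stdin.write(JSON.stringify(cmd) + '\n'); // one JSON command per line
send({ op: 'snapshot' });              // dump the current scene graph
send({ op: 'click', text: 'Submit' }); // click an element by its visible text
send({ op: 'diff' });                  // report state changes since the snapshot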
-
This is the same problem we see in creative work, just in a different form. You can generate endlessly (ads, UIs, code), but without a tight feedback loop on what actually renders and performs, you're left guessing. The idea of agents interacting with real outputs, not just generating them, feels like the shift from "more content" to "better decisions." Different layer, same problem. Shane Murphy is clearly thinking deeply about this layer of the stack, and I'm excited to see where it goes. Definitely worth following his work.
-
Video built entirely by prompting. Built this reel with Claude + Remotion (video by "talking to" Claude): https://lnkd.in/eiBQM-QP

Frames stripped from the original, MCP tool wired up, Claude rebuilt the whole video. The video you're watching is the output of a fully agentic video pipeline. A Python script stripped every image asset from the original MGEN Showcase render. Then I built an MCP tool that let Claude talk to Remotion directly — not through code suggestions, not through screenshots, but actually controlling scene composition, timing, motion, and layout. I asked Claude to strip the Grinch character out of every scene and rebuild the reel programmatically from the extracted assets. What you're watching is that rebuild — 19 student cards, no Grinch, generated end to end by Claude driving Remotion through MCP.

What's inside the reel: 19 Northeastern MGEN and ISE students — their projects, stats, and one-liners. Same roster as the main MGEN Showcase. Different production path.

If you want to try this yourself, the code is on GitHub and set up as a starter. Requires Node 18+ and works best with Claude Code. The repo includes a CLAUDE.md file that gives Claude the workspace rules — so you can open the folder, say "Read CLAUDE.md and help me add a scene for [Name]," and be off. Swap the scenes, adapt the brand file, render your own. https://lnkd.in/ecYbDfnZ

Quick start:
npm install
npm start
That opens Remotion Studio at localhost:3000.

Render one scene: npx remotion render Scene01Naimisha out/scene-01.mp4
Render the full reel: npx remotion render MGENFullReel out/full-reel.mp4
Add a scene: drop Scene-NN-Firstname.tsx into src/compositions, register it in src/Root.tsx, and it appears in the studio on save (see the sketch below).

Repo: https://lnkd.in/ecYbDfnZ
Remotion: https://remotion.dev
Claude Code: https://claude.ai/code
Node.js: https://nodejs.org

The argument underneath the demo: if an LLM can drive a video renderer through a tool protocol, the bottleneck stops being the edit. It becomes the brief.

#ClaudeCode #Remotion #MCP #AIVideo #NortheasternMGEN
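A minimal sketch of what that Root.tsx registration typically looks like with Remotion's Composition API; the scene name, duration, fps, and dimensions here are placeholders, not the repo's actual values:

// Sketch only: component name and numbers are placeholders for a real scene.
import { Composition } from 'remotion';
import { Scene20Firstname } from './compositions/Scene-20-Firstname';

export const RemotionRoot = () => (
  <>
    <Composition
      id="Scene20Firstname"
      component={Scene20Firstname}
      // ~8 seconds at 30 fps (assumed values)
      durationInFrames={240}
      fps={30}
      width={1080}
      height={1920}
    />
  </>
);

Once registered, the scene shows up in Remotion Studio on save and can be rendered with npx remotion render Scene20Firstname out/scene-20.mp4.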
-
Recently, the brilliant engineer Cheng Lou (https://lnkd.in/dKs3RBkw) released pretext — a groundbreaking text dynamics engine and layout framework for JavaScript that calculates massive typography equations mathematically in less than a microsecond. I was so blown away by his geometric algorithms that I decided to translate and port his entire physics engine natively to Dart!

*Introducing flutter_pretext!*

All the credit for the underlying abstractions, pure-code measuring mechanics, and cursor math goes entirely to *Cheng Lou*. By leveraging his architecture in Flutter, we can bypass the heavy multi-pass DOM/Widget tree rebuilds that normally break native performance.

It drops right in with a few widgets:
ObstacleTextFlow: Throw floating avatars or moving geometry over a paragraph natively, and the text shatters and wraps fluidly on both sides in real time.
ShrinkWrapText: Eliminates the "dead trailing space" bug in chat bubbles by bounding text strictly to the longest drawn line.
BalancedText: Brings professional typography to dynamic app data by ending awkward orphaned headline words natively.

I'm incredibly grateful to Cheng Lou for open-sourcing the foundations of this. You can check out his original engine repository here: https://lnkd.in/d2_VbBsr

Head over to my ported package on pub.dev to test out the 60 FPS fluid physics loop in the example app!
Pub: https://lnkd.in/dQ-Zb6GE
Repo: https://lnkd.in/dyKEEQ-5

#Flutter #Dart #FlutterDev #OpenSource #UIUX
-
"Make the heading bigger." 5 words. 12 messages to Claude before it touches the right element. That was my workflow every single day. "It's the h1 in the hero section." "No, the one in components/Hero.tsx." "It uses Tailwind, the class is text-3xl I think." "Actually it might be in pages/index.tsx." I was spending more time describing UI elements than actually designing them. So I built Design Mode. Point at your UI. Click. Type what you want. Claude changes the code. That's it. No more playing "guess which DOM element I mean." Here's how it works: → Hover any element to see its box model (margin, padding, border — all color-coded) → Click to annotate — just type plain English like "make this bigger" or "add more spacing" → Claude automatically reads your annotations and edits the source file → It detects your stack (Tailwind, CSS Modules, styled-components) and edits accordingly → One click to test responsive — mobile, tablet, desktop The killer feature? Every message you send to Claude silently checks for new annotations in the background. You don't even have to say "read my annotations." You just... annotate and talk. It feels invisible. It works with React, Vue, Svelte — anything with a dev server. Open source. MIT licensed. Two ways to use it: 🔌 Claude Code plugin: /plugin install design-mode ⚡ Standalone MCP: works with Claude Desktop, Cursor, Windsurf The gap between "what I see" and "what I can tell the AI" was the bottleneck. Design Mode closes it. Link in comments 👇 What's the most time you've wasted trying to describe a UI element to an AI? #ClaudeCode #DeveloperTools #AI #WebDevelopment #OpenSource
-
🪬 I built a real-time hand-tracking experience in pure HTML + JavaScript — no backend, no frameworks, just a browser and your hands.

Inspired by Doctor Strange's Mystic Arts, I combined MediaPipe Hand Landmarker with canvas physics to create:
✋ 15+ spell modes — Fire Mandala, Astral Portal, Rasengan, Chidori, Kamehameha, Susanoo, Infinity Gauntlet and more
🔥 Fire Pen & Neon Pen — draw in mid-air with your index finger
🌈 Rainbow mode, color picker, undo, save as PNG
⚡ Two-hand energy beam — bring both palms together to fire a beam
🖐 Zero mouse needed — hover your finger over any button for 1.5s to click it using dwell detection (sketched below)

The coolest part? The entire UI is gesture-controlled. No keyboard. No mouse. Just your hands.

Built this as a side project to explore real-time computer vision in the browser. The possibilities for AR, education, and accessibility are wild.

Drop a 🪬 if you want the source code!

#MachineLearning #ComputerVision #MediaPipe #JavaScript #HandTracking #WebDev #AIProjects #OpenSource #BuildInPublic #Innovation
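A minimal sketch of dwell-click detection, assuming the index fingertip has already been mapped to canvas pixels and the buttons are plain rectangles; the names and timing constant are mine, not the project's:

// Sketch only: `tip` is {x, y} in canvas pixels, `buttons` is [{x, y, w, h, onClick}].
const DWELL_MS = 1500;
let dwell = { target: null, since: 0 };

function updateDwell(tip, buttons, now = performance.now()) {
  const hit = buttons.find(b =>
    tip.x >= b.x && tip.x <= b.x + b.w &&
    tip.y >= b.y && tip.y <= b.y + b.h);
  if (hit !== dwell.target) {
    // Fingertip moved onto a new button (or off all of them): restart the timer.
    dwell = { target: hit || null, since: now };
  } else if (hit && now - dwell.since >= DWELL_MS) {
    hit.onClick();                      // held still long enough: treat it as a click
    dwell = { target: null, since: 0 }; // reset so it doesn't re-fire immediately
  }
}

Call updateDwell once per tracked frame with the fingertip landmark; everything else (drawing a progress ring, hand tracking itself) stays in the main render loop.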
-
𝗠𝗮𝗻𝗱𝗲𝗹𝗯𝗿𝗼𝘁 𝗦𝗲𝘁 𝗶𝗻 𝗝𝗦: 𝗙𝗶𝘅𝗶𝗻𝗴 𝗭𝗼𝗼𝗺 𝗮𝗻𝗱 𝗣𝗿𝗲𝗰𝗶𝘀𝗶𝗼𝗻

My last Mandelbrot demo broke after about 16 zooms. The fractal turned black and blocky. This post explains why and how I fixed it.

Computers store numbers with limited precision. JavaScript uses 64-bit doubles with about 15-17 useful digits. Zooming reduces the coordinate range very fast. Each click previously zoomed to 20% of the previous range. After 16 clicks, the range drops to about 1.5e-12. At that scale, two adjacent pixel coordinates share all 15 digits. Their difference becomes zero. Every pixel gets the same value. The image grids and blacks out. This is catastrophic cancellation.

The fix involves three key changes.

First, switch from click to mouse wheel. This gives smooth zoom in both directions. The zoom always centers exactly on the cursor.

Second, reduce the zoom step. Old factor: 0.1 (to 20% range per click). New factor: 0.8 (to 80% range per scroll tick). The slower zoom extends the useful life from ~16 to ~130 steps before hitting the precision wall.

Third, add a precision guard. We stop zooming in when the range drops below 1e-12. The image freezes at the last good level instead of turning black.

Here is the new wheel listener logic. It maps the cursor to the complex plane. It builds a new symmetric window around that point. It blocks the page scroll with passive: false. It cancels queued renders to avoid slowdowns during fast scrolling.

// Throttle with requestAnimationFrame for smoothness
let rafId;
canvas.addEventListener('wheel', (e) => {
  e.preventDefault();
  updateCoordinates(e);
  cancelAnimationFrame(rafId);
  rafId = requestAnimationFrame(() => Mandelbrot());
}, { passive: false });

A comparison of the before and after.
Before: click only, zoom in only, ~16 steps max, no guard, page scroll unaffected.
After: wheel in/out, exact cursor center, ~130 steps, stops at 1e-12, blocks page scroll.

To zoom deeper than ~130 steps, you need arbitrary-precision math. Libraries like decimal.js add 50 or more digits. This slows computation 10-100x. Professional tools use perturbation theory for efficient deep zooms. That is a future upgrade.

Other remaining limits:
- Full canvas re-render on every zoom event.
- No mobile pinch-to-zoom support.
- Single worker for all columns.
- Fixed MAX_ITERATION (1000). Deep zoom areas need more iterations.

You can tweak the experience. Try different ZOOM_FACTOR values. 0.5 is aggressive. 0.95 is very smooth.

Find the demo at quijosakaf.com. Source code is on GitHub. Full post: https://lnkd.in/gicKthrt

This post was written with help from Claude. I used AI to understand IEEE 754 and arbitrary precision. The core concepts were new to me.

Join my Telegram channel for more: https://t.me/GyaanSetuAi

What limits do you hit in your visual projects?
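For completeness, here is a hedged sketch of what updateCoordinates could look like; the variable names and the exact window math are my assumptions, not the code from the repo:

// Sketch only: xMin/xMax/yMin/yMax, ZOOM_FACTOR and MIN_RANGE are assumed names.
const ZOOM_FACTOR = 0.8; // new range = 80% of the old range per scroll tick
const MIN_RANGE = 1e-12; // precision guard for 64-bit doubles

function updateCoordinates(e) {
  const rect = canvas.getBoundingClientRect();
  // Map the cursor from pixel space into the complex plane.
  const cx = xMin + ((e.clientX - rect.left) / rect.width) * (xMax - xMin);
  const cy = yMin + ((e.clientY - rect.top) / rect.height) * (yMax - yMin);

  // Wheel up zooms in (smaller range), wheel down zooms out.
  const scale = e.deltaY < 0 ? ZOOM_FACTOR : 1 / ZOOM_FACTOR;
  const newXRange = (xMax - xMin) * scale;
  const newYRange = (yMax - yMin) * scale;
  if (newXRange < MIN_RANGE) return; // freeze at the last good zoom level

  // Build a new symmetric window of the reduced size, centered on the cursor point.
  xMin = cx - newXRange / 2; xMax = cx + newXRange / 2;
  yMin = cy - newYRange / 2; yMax = cy + newYRange / 2;
}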
-
Static portfolios are dead. I replaced mine with a system. https://lnkd.in/gUgqy-k3

You don't scroll it. You operate it. I got tired of static portfolio websites, so I built one you can actually interact with.

When I started designing my personal site, I didn't want another digital resume listing skills in Python, AI/ML, and Networking. I wanted the interface itself to demonstrate how I think and build systems. So I went a bit overboard (in a good way).

🔹 The entire experience is built as an interactive, brutalist-inspired web app
🔹 The "About" and "Network View" sections run on a custom physics-based SVG engine
🔹 Every node is draggable, zoomable, and fully responsive
🔹 No heavy third-party libraries — I implemented force-directed graph logic (repulsion, attraction, center gravity) from scratch (see the sketch after this post)
🔹 Optimized to maintain ~60fps on both desktop and mid-range mobile devices

I even made the contact form simulate a TCP handshake… because why not turn networking concepts into UI?

If you're into:
• Minimal, dark-mode, hacker-style interfaces
• Creative coding and system-level thinking
• Breaking things just to see how they behave
You might enjoy this.

Try it, mess with the physics, or even wire up your own network topology: https://lnkd.in/gUgqy-k3

Feedback from builders, engineers, and curious minds is welcome.

#SoftwareEngineering #ReactJS #WebDevelopment #FrontendDev #CreativeCoding #PhysicsEngine #Networking #BuildInPublic #UIEngineering #JavaScript #TechInnovation #DeveloperPortfolio #100DaysOfCode #TechCareers
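A minimal sketch of one force-directed layout step under the repulsion / attraction / center-gravity model described above; all names and constants are assumptions, not the site's actual implementation:

// Sketch only: nodes are [{x, y, vx, vy}], edges are [i, j] index pairs.
const REPULSION = 2000, ATTRACTION = 0.01, GRAVITY = 0.02, DAMPING = 0.85;

function step(nodes, edges, cx, cy) {
  // Pairwise repulsion (Coulomb-like): pushes every node away from every other node.
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const dx = nodes[i].x - nodes[j].x, dy = nodes[i].y - nodes[j].y;
      const d2 = dx * dx + dy * dy || 1;
      const f = REPULSION / d2;
      nodes[i].vx += f * dx; nodes[i].vy += f * dy;
      nodes[j].vx -= f * dx; nodes[j].vy -= f * dy;
    }
  }
  // Spring attraction (Hooke-like) pulls connected nodes together along each edge.
  for (const [a, b] of edges) {
    const dx = nodes[b].x - nodes[a].x, dy = nodes[b].y - nodes[a].y;
    nodes[a].vx += ATTRACTION * dx; nodes[a].vy += ATTRACTION * dy;
    nodes[b].vx -= ATTRACTION * dx; nodes[b].vy -= ATTRACTION * dy;
  }
  // Center gravity toward (cx, cy), then integrate velocity with damping.
  for (const n of nodes) {
    n.vx += GRAVITY * (cx - n.x); n.vy += GRAVITY * (cy - n.y);
    n.vx *= DAMPING; n.vy *= DAMPING;
    n.x += n.vx; n.y += n.vy;
  }
}

Running step once per animation frame and redrawing the SVG node positions is enough to get the draggable, settling behavior the post describes.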
-
One thing I have noticed is that a lot of modern libraries and utilities have released versions that are technically very tight but ultimately not good in the real world. They feel kind of forced.

1. Tailwind 4: I get the logic, but switching to pure CSS config is always messy, because CSS is still messy. Tokens will always be JS. Tailwind is JS. Let's stick with JS. (I'm sticking with JS.) Tailwind 3 still cooks.

2. Font Awesome 6: we're just getting too complicated now.

3. Material Icons 2: we call them "Material Symbols" but also icons too if you select the dropdown? But now we have different icons and a whole bunch of new options. They should've created a new product and left the old Material Icons alone (which were fine).

4. Dart Sass: I get the deprecation of @import -- but omg, what a trial it is to get this to work without warnings (an AI cannot figure this out at all).
-
"The project is never-ending, but the results are finally here" project is closing in , If you understand the math, you can build anything. Tommorow is the day,Ready to show this to my supervisor tomorrow—wish me luck! I’ve always believed that the best way to understand the web is to build without the "safety net" of heavy libraries. I just finished building 4 custom interactive backgrounds from scratch. The Engineering Breakdown: The Math: Used the distance formula d = sqrt (x2-x1)^2 + (y2-y1)^2 for real-time particle proximity detection and "web" connectivity. The Geometry: Leveraged Math.cos() and Math.sin() for algorithmic shape generation—no static images, just pure trigonometry. The Rendering: Bypassed the React Virtual DOM in favor of the Canvas 2D Context, ensuring a locked 60 FPS by talking directly to the GPU. #Javascript #CanvasAPI #WebPerformance #CreativeCoding #MathInCode