LeetCode POTD 💫: Description: There are n 1-indexed robots, each having a position on a line, a health value, and a movement direction.

You are given 0-indexed integer arrays positions and healths, and a string directions (directions[i] is either 'L' for left or 'R' for right). All integers in positions are unique.

All robots start moving on the line simultaneously, at the same speed, in their given directions. If two robots ever share the same position while moving, they collide. When two robots collide, the robot with lower health is removed from the line, the health of the other robot decreases by one, and the survivor continues in the same direction it was going. If both robots have the same health, both are removed.

Return an array containing the health of the remaining robots, in the order they were given in the input, after no further collisions can occur. If there are no survivors, return an empty array. Note: the positions may be unsorted.

Here's my solution: https://lnkd.in/gr_iruSp

#Python #DSA #Leetcode #DailyChallenge #Hard
Surviving Robots Health After Collision
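For context, here is a minimal stack-based sketch of one common way to solve this problem: process robots in position order, keep right-moving robots on a stack, and resolve each left-moving robot against them. The function name is mine and this is not necessarily the code behind the linked solution.

```python
def surviving_robot_healths(positions, healths, directions):
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    stack = []            # indices of right-moving robots still alive
    alive = healths[:]    # working copy; 0 marks a destroyed robot

    for i in order:
        if directions[i] == 'R':
            stack.append(i)
            continue
        # Robot i moves left: it can only collide with right-movers already passed.
        while stack and alive[i] > 0:
            j = stack[-1]
            if alive[j] < alive[i]:
                stack.pop()
                alive[j] = 0
                alive[i] -= 1
            elif alive[j] > alive[i]:
                alive[i] = 0
                alive[j] -= 1
            else:               # equal health: both removed
                stack.pop()
                alive[j] = 0
                alive[i] = 0

    # alive is indexed by original robot index, so input order is preserved.
    return [h for h in alive if h > 0]
```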
I’ve been bouncing back and forth between Isaac Lab and Isaac Sim a lot lately. Train in Lab, deploy in Sim, test with ROS, repeat. Training is fine. Deployment… not so much. Not because it’s conceptually hard; it’s just fragile. A long chain of manual steps that are easy to mess up:

➡️ Rebuilding observation tensors by hand
➡️ Matching joint ordering between training configs and USD articulations
➡️ Wiring action scaling and PD gains
➡️ Writing the Omniverse extension boilerplate from scratch

Get one index wrong and your robot’s hip commands go to the knee. Ask me how I know.

So I built something to fix it. A Lab-to-Sim tool that automates the whole handoff. You point it at your training outputs (env.yaml, agent.yaml, policy.pt) and it generates a complete, runnable Isaac Sim extension: config, policy controller, observation builder, scenario runner, keyboard teleop, UI, the lot.

Under the hood it:
➡️ Parses Isaac Lab IO Descriptor YAMLs and Python task configs
➡️ Maps training joint names to USD articulation DOF order (exact + fuzzy matching)
➡️ Heuristically infers robot type (manipulator, quadruped, humanoid) based on kinematic structure
➡️ Validates all tensor dimensions before generation, so you catch issues early, not at runtime
➡️ Spits out a self-contained extension you can drop straight into Isaac Sim

It also includes a browser-based wizard with a visual joint-mapping diagram, so you can see exactly where things would go wrong before they do.

#Robotics #IsaacSim #IsaacLab #NVIDIA #ReinforcementLearning #Omniverse
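The joint-mapping step is where the "hip commands go to the knee" failures come from. A hypothetical sketch of the exact-then-fuzzy matching idea; the function name, signature, and cutoff are mine, not the tool's actual API:

```python
import difflib

def map_joint_order(training_joints, usd_dof_names, cutoff=0.6):
    """Map each training joint name to an index in the USD articulation's DOF list."""
    mapping = {}
    for name in training_joints:
        if name in usd_dof_names:                      # exact match first
            mapping[name] = usd_dof_names.index(name)
            continue
        close = difflib.get_close_matches(name, usd_dof_names, n=1, cutoff=cutoff)
        if not close:                                  # fail loudly instead of guessing
            raise ValueError(f"No USD DOF found for training joint '{name}'")
        mapping[name] = usd_dof_names.index(close[0])  # fuzzy fallback
    return mapping
```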
LeetCode POTD 💫: Description: There is a robot starting at the position (0, 0), the origin, on a 2D plane. Given a sequence of its moves, judge if this robot ends up at (0, 0) after it completes its moves. You are given a string moves that represents the move sequence of the robot where moves[i] represents its ith move. Valid moves are 'R' (right), 'L' (left), 'U' (up), and 'D' (down). Return true if the robot returns to the origin after it finishes all of its moves, or false otherwise. Note: The way that the robot is "facing" is irrelevant. 'R' will always make the robot move to the right once, 'L' will always make it move left, etc. Also, assume that the magnitude of the robot's movement is the same for each move. Here's my solution: https://lnkd.in/gkXEWXtP #Python #DSA #Leetcode #DailyChallenge #Easy
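As a rough illustration of the coordinate-tracking approach (the function name is mine, not necessarily the linked solution):

```python
def judge_circle(moves: str) -> bool:
    x = y = 0
    for m in moves:
        if m == 'R':
            x += 1
        elif m == 'L':
            x -= 1
        elif m == 'U':
            y += 1
        else:  # 'D'
            y -= 1
    return x == 0 and y == 0
```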
🔥 Day 551 of #750DaysOfCode 🔥
🚧 LeetCode 3661: Maximum Walls Destroyed by Robots (Hard)

Today’s problem was a great mix of greedy thinking + binary search + DP optimization 🤯

🧠 Problem Insight
We are given robots with positions & shooting distances, and walls on a number line. Each robot can shoot left or right, but:
👉 Bullets stop if they hit another robot
👉 The goal is to maximize unique walls destroyed

💡 Key Challenges
Choosing left vs right direction optimally
Avoiding overlapping counts of walls
Handling blocking by neighboring robots
Ensuring maximum unique destruction

⚙️ Approach Breakdown
✅ Step 1: Sort robots & walls 👉 makes range queries possible
✅ Step 2: Use binary search (lowerBound / upperBound) 👉 efficiently count walls in a range (see the helper sketch below)
✅ Step 3: Precompute:
left[i] → walls destroyed if robot i shoots left
right[i] → walls destroyed if robot i shoots right
num[i] → overlapping walls between robots
✅ Step 4: Apply dynamic programming, maintaining:
subLeft → max walls if the current robot shoots left
subRight → max walls if the current robot shoots right
👉 The transition carefully avoids double counting

⏱ Complexity
Sorting: O(n log n)
Binary search per robot: O(log n)
DP traversal: O(n)
👉 Overall: O(n log n)

🧠 What I Learned
How to combine binary search + DP
Handling interval overlaps smartly
Thinking in terms of choices per index (left/right)
Writing clean helper functions (lowerBound, upperBound)

📌 Takeaway
Hard problems are not about complex code; they’re about breaking constraints into structured decisions.

#LeetCode #DataStructures #Algorithms #CodingChallenge #Java #ProblemSolving #100DaysOfCode #SoftwareEngineering #DSA #750DaysOfCode
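The post's solution is in Java; as a small sketch of just the range-count helper it mentions (Step 2), here is the same idea in Python using bisect instead of hand-rolled lowerBound/upperBound. The full DP is omitted.

```python
from bisect import bisect_left, bisect_right

def walls_in_range(walls_sorted, lo, hi):
    """Number of wall positions w with lo <= w <= hi (walls_sorted is ascending)."""
    if lo > hi:
        return 0
    return bisect_right(walls_sorted, hi) - bisect_left(walls_sorted, lo)
```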
Most robotics data collection is just logging with extra steps. We designed something different.

Typical approach: start a Python script recording, manually synchronize timestamps later, write custom scripts for each data format, hope the episode boundaries make sense during training.

Trossen SDK: episode-first architecture, where the runtime knows what an episode is from the start. Here's what that actually means:

→ Configuration-driven episodes
Define your episode structure in YAML. Arms, cameras, mobile bases, force sensors, whatever your embodiment needs. The runtime handles synchronization automatically.

→ Timestamp alignment at capture time
No post-processing. Hardware sync, software alignment with bounded drift compensation where necessary. Sub-millisecond accuracy across modalities.

→ Training-ready outputs
Direct MCAP and LeRobot format export. No conversion pipelines. No manual timestamp alignment. The data comes out ready for your training loop.

The real difference: failure modes are handled by design, not by accident. Operator accidentally starts recording early? The runtime handles episode trimming. Camera drops frames? Interpolation and drift correction are built in. Mobile base loses sync? Graceful degradation with clear error states.

This is what happens when you design for real data collection workflows instead of adapting general-purpose logging tools.

Check it out here: http://bit.ly/4t6hlc7
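To make the "alignment with a drift bound" idea concrete, here is a generic nearest-timestamp matcher. This is not the Trossen SDK's API, just an illustration of matching one sensor stream against a reference clock and rejecting matches that drift too far.

```python
import numpy as np

def align_streams(ref_ts, other_ts, max_drift_s=0.001):
    """For each reference timestamp, return the index of the closest sample in
    other_ts, or -1 if the best match drifts more than max_drift_s.
    ref_ts, other_ts: 1-D numpy arrays of seconds; other_ts must be sorted."""
    idx = np.searchsorted(other_ts, ref_ts)
    idx = np.clip(idx, 1, len(other_ts) - 1)
    left, right = other_ts[idx - 1], other_ts[idx]
    pick = np.where(np.abs(ref_ts - left) <= np.abs(right - ref_ts), idx - 1, idx)
    drift = np.abs(other_ts[pick] - ref_ts)
    return np.where(drift <= max_drift_s, pick, -1)
```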
🤖 I built a full AI Engineering Assistant from scratch in 7 days — here's what I learned.

As someone learning AI engineering, I wanted to go beyond tutorials and build something real. So I challenged myself: one week, one complete AI system.

Here's what the system can do:
✅ Debug robotics problems using RAG (Retrieval-Augmented Generation)
✅ Generate Arduino, ESP32 & ROS2 code on demand
✅ Diagnose machine images using a custom-trained YOLOv8 model
✅ Remember conversation context across multiple turns
✅ Fall back to web search when local knowledge runs out

🔧 The tech stack I built with:
→ FastAPI — REST API backend with 4 endpoints
→ ChromaDB — vector database for semantic search
→ sentence-transformers — text embeddings (all-MiniLM-L6-v2)
→ Ollama + phi3 — local LLM inference on my RTX 4050 GPU
→ YOLOv8 — fine-tuned on 4,131 Arduino/ESP32/Raspberry Pi images
→ Streamlit — web UI with 4 tabs

💡 What I learned that no tutorial teaches you:
→ RAG without an LLM just prints chunks. The retrieval and the generation must work together.
→ Windows multiprocessing needs if __name__ == '__main__' or YOLO training crashes silently
→ Hardcoded backslash paths break everything. Always use raw strings or os.path.join()
→ PyTorch ships CPU-only by default. Getting GPU working needs a specific index URL
→ "Application startup complete" and "server is running" are not the same thing

The biggest lesson? Debugging IS the learning. Every error I hit taught me something the docs never mention.

🔗 Full project on GitHub: https://lnkd.in/ds2QZhJW

If you're building with AI/ML — what's the most unexpected bug you've had to debug?

#AIEngineering #MachineLearning #Robotics #RAG #YOLOv8 #FastAPI #Python #BuildInPublic #LLM #ComputerVision
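The "retrieval and generation must work together" point is the core of the stack above. A stripped-down sketch of that loop using ChromaDB and the Ollama Python client; the collection name, the sample document, and the prompt wording are mine, not the repo's, and it assumes a local Ollama server with the phi3 model pulled.

```python
import chromadb
import ollama

client = chromadb.Client()
docs = client.get_or_create_collection("robotics_notes")
docs.add(
    documents=["ESP32 GPIO34-39 are input-only pins and have no internal pull-ups."],
    ids=["note-1"],
)

def answer(question: str) -> str:
    # Retrieval: pull the most relevant chunks for the question.
    hits = docs.query(query_texts=[question], n_results=2)
    context = "\n".join(hits["documents"][0])
    # Generation: hand the retrieved context to the local LLM.
    prompt = f"Use this context to answer:\n{context}\n\nQuestion: {question}"
    reply = ollama.chat(model="phi3", messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]
```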
LeetCode Daily | Day 69 🔥
LeetCode POTD – 657. Robot Return to Origin (Easy) ✨

📌 Problem Insight
Given a robot starting at (0, 0):
✔ Moves are given as a string of 'U', 'D', 'L', 'R'
✔ Each move shifts the position by 1 unit
✔ Need to check → does it come back to the origin?

🔍 Initial Thinking – Simulation ✔
💡 Idea:
✔ Track x and y coordinates
✔ For every move:
• 'U' → y++
• 'D' → y--
• 'R' → x++
• 'L' → x--
✔ At the end → check (x == 0 && y == 0)
⏱ Complexity: O(n) time, O(1) space
✔ Single pass
✔ No extra data structures needed

💡 Cleaner Insight – Count Balance 🔥
👉 Instead of coordinates:
✔ Count of 'U' must equal count of 'D'
✔ Count of 'L' must equal count of 'R'
✔ If both balances hold → back at the origin (see the sketch below)

🧠 Key Learning
✔ Simple simulation problems test clarity, not complexity
✔ Coordinate tracking is a powerful pattern
✔ The balance/count approach simplifies thinking
✔ Always look for symmetry in movement problems

A very clean problem where observation > optimization ⚡

#LeetCode #DSA #Algorithms #CPlusPlus #ProblemSolving #CodingJourney #DataStructures
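A minimal sketch of the count-balance idea, written in Python here for consistency with the rest of this page (the post's own solution is in C++):

```python
from collections import Counter

def judge_circle_counts(moves: str) -> bool:
    c = Counter(moves)
    return c['U'] == c['D'] and c['L'] == c['R']
```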
This robot decides who to chase — and always picks the closest target!!!

"Autonomous target selection (ROS 2)" 🎯 Project: Turtlesim Catch Them All

A master turtle autonomously detects, tracks, and “catches” other turtles in the environment. 👉 Notice how it always goes to the nearest target instead of a farther one? That behavior is not random — it’s controlled through decision-making logic using ROS 2 parameters.

⚙️ What’s happening under the hood
Multiple nodes: turtle_controller, turtle_spawner, turtlesim_node
Real-time communication using topics (pose, cmd_vel, alive_turtles)
Service-based actions (/spawn, /kill, custom /catch_turtle)
Client–server interaction between controller and spawner
Custom messages for structured data exchange
Parameter-driven behavior (spawn rate, targeting strategy, naming)
Launch + YAML configuration to run the entire system with one command

🧠 What I learned
Even though it looks simple, a lot of concepts, logic, and effort are working behind the scenes to make this system run smoothly. This project helped me truly understand how different ROS 2 components connect and work together in a real system. 💭

🧿 Personal note
This took me longer than expected — but instead of rushing, I focused on understanding things properly. Glad I was able to build this completely on my own, step by step.

📹 Demo video below 👇

#ROS2 #Robotics #AutonomousSystems #RobotNavigation #Python #Engineering #StudentProjects #LearningInPublic
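The "always chase the nearest target" behavior comes down to a simple distance comparison over the alive turtles. An illustrative sketch, assuming pose objects with x/y fields like turtlesim/msg/Pose; this is not the project's exact code:

```python
import math

def pick_closest(my_pose, alive_turtles):
    """Return the turtle whose pose is closest to my_pose (Euclidean distance)."""
    return min(
        alive_turtles,
        key=lambda t: math.hypot(t.x - my_pose.x, t.y - my_pose.y),
    )
```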
This week's note: spatial transforms — one of the most fundamental tools in robotics and something I reference constantly. Covers rigid transform definitions, frame changes, composition, and a practical example showing how to compose batches of transforms efficiently with vectorization. https://lnkd.in/gP67yYci #robotics #kinematics #python
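As a small sketch of what vectorized composition looks like with 4x4 homogeneous matrices (the linked note goes into more depth):

```python
import numpy as np

def compose(T_ab, T_bc):
    """Compose batches of rigid transforms: the result maps frame c into frame a.
    T_ab, T_bc: arrays of shape (N, 4, 4)."""
    return np.einsum('nij,njk->nik', T_ab, T_bc)

# Quick sanity check: composing with identities returns the original batch.
T = np.tile(np.eye(4), (10, 1, 1))
assert np.allclose(compose(T, T), T)
```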
Excited to finally open-source NervLynx — a modular robotics runtime framework I’ve been building.

In my experience, most robotics projects (including the ones I’ve built) start with quick Python scripts for prototypes. That works great for demos… until you need to move to production. Suddenly you’re fighting non-deterministic behavior, impossible-to-debug data flows, missing observability, and fragile lifecycle management.

NervLynx is my attempt to close that exact gap. It’s not a full autonomy stack or a ROS replacement. It’s the runtime foundation that sits underneath your nodes, giving you:
Deterministic + async execution with priority scheduling
Traceable dataflow (every message carries topic, source, sequence, timestamp, schema, and trace_id)
Built-in safety primitives (watchdogs, backpressure detection, startup dependency supervision)
Full observability out of the box (replayable JSONL traces, latency/flow stats, Prometheus-style metrics)
Config-driven graph wiring via YAML + a clean plugin SDK
First-class support for both Python and C++ runtimes
Ready-to-use deployment profiles (Docker + systemd)

Everything is MIT licensed, thoroughly tested, and ships with working smoke tests and examples so you can try it immediately. I'm currently working on sharing realtime results from actual sensors using NervLynx, and I'll be posting those results soon.

If you’re building mobile robot software (perception → planning → actuation pipelines, surveillance stacks, or any hardware-in-the-loop system) and you’ve ever struggled moving from prototype scripts to something maintainable and observable, I’d genuinely love your feedback.

👉 Repo: https://lnkd.in/gGu29zMW

#Robotics #OpenSource #RobotSoftware #AutonomousSystems #Embedded #Python #CPP
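To give a feel for the "traceable dataflow" idea, here is a hypothetical message envelope carrying the metadata the post lists. The field names come from the post itself, but the class and defaults are my own sketch, not NervLynx's actual API:

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Any

@dataclass
class TracedMessage:
    topic: str          # where the message is published
    source: str         # node that produced it
    sequence: int       # per-source monotonic counter
    payload: Any        # the actual data
    schema: str = "unversioned"
    timestamp: float = field(default_factory=time.monotonic)
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)
```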
Building a "Smart Calculator" by merging Hardware with AI! 🤖💡 I’m excited to share some progress on our current "Robotics" class project! Our goal is to transform a LEGO EV3 robot into a smart calculator capable of autonomously scanning and recognizing physical wooden numbers and mathematical operators. How does it work? We are currently using a single motor to rotate a platform, while a Color Sensor reads the light reflectivity of the piece during a 360-degree rotation. Since all the wooden pieces are painted black and placed on a white surface, the sensor acts almost like a shape scanner. Because each number has a distinct physical shape and outline (different edges and gaps), it reflects a unique sequence of light and shadow, creating a distinct "Light Reflectivity Signature." We collected this raw data, cleaned it, and trained a custom Machine Learning model (KNN) from scratch to recognize these signatures! 📊 Current Progress & Results: We focused heavily on building a solid data pipeline. So far, we've successfully trained our model on numbers (1 to 4) and the addition (+) operator. The initial results are amazing: we achieved a 100% Accuracy in object recognition! (I'll drop a screenshot of the classification report in the comments 👇). Our Tech Stack: • Hardware: EV3 MicroPython, Color Sensor, Touch Sensor. • Software & AI: Python, Pandas, Scikit-learn (KNN), Jupyter Notebooks via SSH. The project is still in progress as we work on training the model on the rest of the numbers and operators. Building this pipeline and solving the hardware-software integration challenges has been an incredible learning experience! #Robotics #MachineLearning #ArtificialIntelligence #EV3 #Python #DataScience #Engineering #ComputerEngineering #BuildingInPublic