✅ Day 93 of 100 Days LeetCode Challenge

Problem: 🔹 #657 – Robot Return to Origin
🔗 https://lnkd.in/gfZBi3XR

Learning Journey:
🔹 Today’s problem involved tracking the movements of a robot on a 2D plane.
🔹 I used a dictionary to map each move to its coordinate change:
• 'U' → +1 (y-axis)
• 'D' → -1 (y-axis)
• 'R' → +1 (x-axis)
• 'L' → -1 (x-axis)
🔹 Maintained a coordinate array ans = [0, 0] representing (x, y).
🔹 Iterated through each move and updated the respective axis.
🔹 Finally checked whether the robot returned to the origin [0, 0] (see the sketch after this post).

Concepts Used:
🔹 Coordinate Simulation
🔹 HashMap / Dictionary
🔹 String Traversal

Key Insight:
🔹 The robot returns to the origin only if horizontal and vertical movements cancel out.
🔹 Net displacement in both the x and y directions must be zero.

Complexity:
🔹 Time: O(n)
🔹 Space: O(1)

#LeetCode #Algorithms #DataStructures #CodingInterview #100DaysOfCode #Python #ProblemSolving #LearningInPublic #TechCareers
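A minimal sketch of the dictionary-based simulation described above; the function name judge_circle and the (axis, delta) encoding are illustrative choices, not the exact posted code:

```python
def judge_circle(moves: str) -> bool:
    # Map each move to (axis, delta): index 0 is x, index 1 is y.
    step = {'U': (1, 1), 'D': (1, -1), 'R': (0, 1), 'L': (0, -1)}
    ans = [0, 0]  # [x, y], starting at the origin
    for move in moves:
        axis, delta = step[move]
        ans[axis] += delta
    # The robot is home only if net displacement is zero on both axes.
    return ans == [0, 0]

print(judge_circle("UD"))  # True: up then down cancels out
print(judge_circle("LL"))  # False: net displacement of -2 on x
```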
Robot Return to Origin LeetCode Challenge
More Relevant Posts
LeetCode POTD 💫:

Description: There is a robot starting at the position (0, 0), the origin, on a 2D plane. Given a sequence of its moves, judge if this robot ends up at (0, 0) after it completes its moves.

You are given a string moves that represents the move sequence of the robot where moves[i] represents its ith move. Valid moves are 'R' (right), 'L' (left), 'U' (up), and 'D' (down). Return true if the robot returns to the origin after it finishes all of its moves, or false otherwise.

Note: The way that the robot is "facing" is irrelevant. 'R' will always make the robot move to the right once, 'L' will always make it move left, etc. Also, assume that the magnitude of the robot's movement is the same for each move.

Here's my solution: https://lnkd.in/gkXEWXtP

#Python #DSA #Leetcode #DailyChallenge #Easy
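The linked solution is not reproduced here; as one possible version, here is a minimal counting-based sketch — the robot returns to the origin exactly when opposite moves occur equally often:

```python
from collections import Counter

def judgeCircle(moves: str) -> bool:
    # Opposite moves must cancel: equal counts of U/D and of L/R.
    counts = Counter(moves)
    return counts['U'] == counts['D'] and counts['L'] == counts['R']

print(judgeCircle("UD"))  # True
print(judgeCircle("LL"))  # False
```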
Gemini Robotics-ER 1.6: in this video, I move from theory to real coding and show how to run the API on your PC using Python. I cover the basic setup, how the model understands images and gives coordinates, how it can write and run code to analyze things like gauges, and how it breaks down tasks like a robot “brain.”

⚠️ This is still in preview, so results may vary; always use safety checks before connecting to real robots. Try it yourself using the Gemini API docs and AI Studio.

Do check out my YouTube channel: https://lnkd.in/gZbzm-dR
Check out the docs: https://lnkd.in/gredDXJK

#GeminiRobotics #GoogleDeepMind #AI #Robotics #EmbodiedAI
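As a starting point, here is a minimal sketch of an image-plus-prompt call using the google-genai Python SDK. This is not the code from the video; the model id and image file are placeholder assumptions, so check the Gemini API docs linked above for the current Robotics-ER preview model name.

```python
from google import genai
from google.genai import types

client = genai.Client()  # expects GEMINI_API_KEY in the environment

with open("gauge.jpg", "rb") as f:  # placeholder image
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumption: check docs for the current preview id
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Locate the gauge needle and return its pixel coordinates.",
    ],
)
print(response.text)
```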
Work in progress from our IBM AI Racing League project.

We’re building a Python autonomous driver for TORCS, using telemetry from each lap to tune braking, racing lines, and segment-level speed targets on the Corkscrew track. This screenshot shows the simulator running alongside our live driver logs — speed, track position, lap time, and controller state are all being captured for post-lap analysis (a minimal logging sketch follows below).

Current focus: making the fastest lap repeatable while keeping it clean through the kink, S-turn, and final hairpin.

Tech stack: Python, TORCS, VS Code, telemetry analysis, IBM SkillsBuild / Granite-assisted development.

#IBMSkillsBuild #IBMAIRacingLeague #Python #AI #AutonomousDriving #Telemetry #TORCS
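A minimal sketch of how per-lap telemetry like this could be captured to CSV for post-lap analysis; the field names and example values are assumptions, since the project's actual driver code is not shown in the post:

```python
import csv
import time

FIELDS = ["t", "speed", "track_pos", "lap_time", "controller_state"]

class TelemetryLogger:
    """Appends one row per control tick for post-lap analysis."""

    def __init__(self, path: str):
        self.file = open(path, "w", newline="")
        self.writer = csv.DictWriter(self.file, fieldnames=FIELDS)
        self.writer.writeheader()

    def log(self, speed, track_pos, lap_time, controller_state):
        self.writer.writerow({
            "t": time.time(),
            "speed": speed,
            "track_pos": track_pos,
            "lap_time": lap_time,
            "controller_state": controller_state,
        })

    def close(self):
        self.file.close()

# Hypothetical usage inside the driver's control loop:
logger = TelemetryLogger("corkscrew_lap.csv")
logger.log(speed=182.4, track_pos=-0.12, lap_time=74.3, controller_state="brake")
logger.close()
```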
The current modular stack of robotic perception (separate blocks for detection, depth, and tracking) is a temporary artifact of our computational limits. Unified architectures eliminate the latency and accumulated error that separate blocks introduce. By 2030, we won't be fusing distinct outputs; we will be querying a single, holistic scene representation that encodes geometry, semantics, and affordance simultaneously.

We are moving from "pipelines" to "foundation world models." Simplifying the stack will be the greatest driver of reliability.

Here is my prediction for the next decade of perception. 👇

#SpatialAI #python #3d #research
Just solved LeetCode 657: Robot Return to Origin, and it’s a great reminder that clean, fundamental thinking often beats over-engineering.

The Problem: Given a string of moves (U, D, L, R), determine if a robot starting at (0,0) returns to the origin after executing all commands.

My Approach: Coordinate Tracking Simulation
Instead of reaching for complex data structures, I simply tracked position:
• x → horizontal (R = +1, L = -1)
• y → vertical (U = +1, D = -1)
• Return true if x == 0 and y == 0

Time: O(n) | Space: O(1)

Key Insight: Opposite moves cancel each other out. If count(U) == count(D) and count(L) == count(R), the robot must return home. Sometimes a quick count beats step-by-step simulation! (A sketch of the tracking version follows below.)

This pattern (coordinate tracking + cancellation logic) is foundational for grid problems, path simulations, and even early-stage BFS/DFS intuition. Mastering the basics makes medium problems feel much more approachable.

#LeetCode #Algorithms #CodingInterview #SoftwareEngineering #Python #ProblemSolving #TechCareers #DataStructures
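A minimal sketch of the coordinate-tracking simulation described above; the function name is illustrative, not the posted code:

```python
def returns_to_origin(moves: str) -> bool:
    # Track net displacement on each axis; the robot is home
    # only if both come back to zero.
    x = y = 0
    for m in moves:
        if m == 'R':
            x += 1
        elif m == 'L':
            x -= 1
        elif m == 'U':
            y += 1
        else:  # 'D'
            y -= 1
    return x == 0 and y == 0

print(returns_to_origin("UD"))  # True
print(returns_to_origin("RRL"))  # False
```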
I recently completed a computer vision project focused on monocular forward-warning perception for ADAS. The goal of this project was to build a modular prototype that can process forward-facing driving video, detect road users, track relevant objects, assign proxy-based risk scores, and generate frame-level alert outputs.

The project includes two comparable pipelines:
Baseline pipeline: Detection → Tracking → ROI-based relevance filtering → Risk scoring → Primary target selection → Alert generation
Cascade pipeline: Baseline perception → Proxy distance/headway reasoning → Risk-gate assignment → Budgeted crop refinement → Acceptance filtering → Coarse-to-refined fusion

The main idea was not to build a production-ready ADAS system, but to explore how forward-scene objects can be prioritized using monocular video and how additional computation can be allocated only to the more relevant targets.

Key technologies used: Python, YOLOv8, OpenCV, multi-object tracking, ROI filtering, risk scoring, and data visualization.

This project helped me improve my understanding of computer vision pipelines, forward-scene perception, risk-based object prioritization, and alert visualization for ADAS-related applications. (A minimal sketch of the first two baseline stages follows below.)

GitHub repository: https://lnkd.in/d4dx2-AA

#ComputerVision #ADAS #AutonomousDriving #YOLOv8 #OpenCV #Python #IntelligentMobility #MachineLearning #GitHubProjects
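A minimal sketch of the first two baseline stages (detection and tracking) using the Ultralytics YOLOv8 API; the video source, ROI heuristic, and thresholds are illustrative assumptions, not the repository's actual logic:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained model for the sketch

# Stream detections with persistent track IDs over a driving video.
for result in model.track(source="forward_camera.mp4", persist=True, stream=True):
    frame_h, frame_w = result.orig_shape
    for box in result.boxes:
        if box.id is None:
            continue  # detection not yet associated with a track
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        track_id = int(box.id.item())
        # Crude ROI relevance filter: keep objects in the lower-center
        # of the frame (roughly the ego lane). Purely illustrative.
        cx = (x1 + x2) / 2
        if y2 > 0.5 * frame_h and 0.25 * frame_w < cx < 0.75 * frame_w:
            print(f"track {track_id}: candidate forward target at ({cx:.0f}, {y2:.0f})")
```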
AI is only as powerful as it is visible. 📡

After perfecting the Inference Engine on Day 5, I realized that industrial data is only useful if it can be monitored in real time. Today, I moved the project out of the terminal and into a professional Live Diagnostic Dashboard.

🛠️ What’s new in Day 6:
Live UI: Built a web-based monitoring interface using Streamlit.
Real-Time Telemetry: Visualizing 1 Hz sensor data (Temperature & Gradient) from a simulated 5V rail.
Dynamic AI Logic: The dashboard feeds live data into my Random Forest model, calculating failure probability on the fly.
Automated Alerts: Notice the System Warning in the video — that's the AI detecting a high-risk anomaly and flagging it before a failure occurs.

Watching the data plot live on my desktop makes the transition from "code" to "industrial tool" feel real. 🚀 (A minimal dashboard sketch follows below.)

Source code for the Dashboard is live on GitHub:
🔗 [Link in the first comment!]

#AIML #Python #PredictiveMaintenance #DataScience #BuildInPublic #StJosephsInstituteOfTechnology #ECE #MachineLearning #Streamlit #SaiSrinath18
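A minimal sketch of a Streamlit dashboard with this shape, assuming a simulated 1 Hz sensor and a stand-in probability function where the trained Random Forest would go; the author's actual code is linked from the post's comments:

```python
import time
import random
import streamlit as st

st.title("Live Diagnostic Dashboard")
chart = st.line_chart()   # rolling temperature plot
status = st.empty()       # placeholder for alert messages

def read_sensor() -> float:
    # Stand-in for the real 1 Hz telemetry from the 5V rail.
    return 45.0 + random.gauss(0, 3)

def failure_probability(temp: float) -> float:
    # Stand-in for the trained Random Forest; any fitted sklearn
    # model's predict_proba output would slot in here instead.
    return min(1.0, max(0.0, (temp - 45.0) / 15.0))

for _ in range(60):  # one minute of 1 Hz updates
    temp = read_sensor()
    p_fail = failure_probability(temp)
    chart.add_rows({"temperature": [temp]})
    if p_fail > 0.8:
        status.error(f"System Warning: failure probability {p_fail:.0%}")
    else:
        status.info(f"Nominal: failure probability {p_fail:.0%}")
    time.sleep(1)
```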
🚫 Mouse? I Replaced It With Air Gestures 👋

Just hand gestures → full control.
👉 ☝️ Index finger → Move cursor
👉 👍 + ☝️ → Left click
👉 ☝️ + ✌️ → Right click

No mouse. No touch. Just vision + code.

💡 What’s powering this?
• OpenCV → real-time camera processing
• MediaPipe → accurate hand tracking
• PyAutoGUI → system cursor control

🎯 Result: Smooth, responsive, and feels like controlling your PC with air gestures. (A minimal cursor-control sketch follows below.)

This is just the beginning — turning ideas into real-world interaction.

👇 Watch the demo (30 sec)
Would you use this in daily life?

#ComputerVision #OpenCV #MediaPipe #Python #Innovation #TechProjects #AI #StudentDeveloper #FutureTech #BuildInPublic #MachineLearning #DeveloperLife #Engineering #TechInnovation #SmartSystems #Automation
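A minimal sketch of the cursor-movement part of such a setup (OpenCV + MediaPipe + PyAutoGUI); the click-gesture logic from the demo is omitted, and this is not the author's exact code:

```python
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror so movement feels natural
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; coordinates are normalized
        # to [0, 1], so scale them to the screen resolution.
        tip = results.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    cv2.imshow("gesture control", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```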
🤖 I built a full AI Engineering Assistant from scratch in 7 days — here's what I learned.

As someone learning AI engineering, I wanted to go beyond tutorials and build something real. So I challenged myself: one week, one complete AI system.

Here's what the system can do:
✅ Debug robotics problems using RAG (Retrieval-Augmented Generation)
✅ Generate Arduino, ESP32 & ROS2 code on demand
✅ Diagnose machine images using a custom-trained YOLOv8 model
✅ Remember conversation context across multiple turns
✅ Fall back to web search when local knowledge runs out

🔧 The tech stack I built with:
→ FastAPI — REST API backend with 4 endpoints
→ ChromaDB — vector database for semantic search
→ sentence-transformers — text embeddings (all-MiniLM-L6-v2)
→ Ollama + phi3 — local LLM inference on my RTX 4050 GPU
→ YOLOv8 — fine-tuned on 4,131 Arduino/ESP32/Raspberry Pi images
→ Streamlit — web UI with 4 tabs

💡 What I learned that no tutorial teaches you:
→ RAG without an LLM just prints chunks. The retrieval and the generation must work together. (A minimal retrieval sketch follows below.)
→ Windows multiprocessing needs if __name__ == '__main__' or YOLO training crashes silently.
→ Hardcoded backslash paths break everything. Always use raw strings or os.path.join().
→ PyTorch ships CPU-only by default. Getting GPU working needs a specific index URL.
→ "Application startup complete" and "server is running" are not the same thing.

The biggest lesson? Debugging IS the learning. Every error I hit taught me something the docs never mention.

🔗 Full project on GitHub: https://lnkd.in/ds2QZhJW

If you're building with AI/ML — what's the most unexpected bug you've had to debug?

#AIEngineering #MachineLearning #Robotics #RAG #YOLOv8 #FastAPI #Python #BuildInPublic #LLM #ComputerVision
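A minimal sketch of a retrieval layer of this shape (ChromaDB + sentence-transformers with all-MiniLM-L6-v2); the collection name and documents are illustrative, not the repository's actual data:

```python
import chromadb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.Client()  # in-memory for the sketch
collection = client.create_collection("robotics_docs")

docs = [
    "ESP32 GPIO pins are not 5V tolerant; use a level shifter.",
    "ROS2 nodes communicate over topics using a DDS middleware.",
]
collection.add(
    ids=[f"doc{i}" for i in range(len(docs))],
    documents=docs,
    embeddings=embedder.encode(docs).tolist(),
)

query = "Can I wire a 5V sensor straight to an ESP32 pin?"
hits = collection.query(
    query_embeddings=embedder.encode([query]).tolist(),
    n_results=1,
)
# Retrieval alone "just prints chunks" — the retrieved text would be
# passed as context to the LLM (Ollama + phi3 in the post).
print(hits["documents"][0][0])
```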
Robot vibe coding is a thing now.

In this video, I’m not writing a single line of robot code myself. Instead, I’m using an AI Copilot inside VSCode to generate and refine a complete robot program, just by describing what I want.

This is built on top of our NOVA SDK, using Python, VSCode, and NVIDIA Omniverse as the environment. Which means: all the recent AI coding advances suddenly apply to robotics as well.

But here’s what really surprised me: the Copilot doesn’t just generate code. It actually understands robotics problems and how to solve them for me. Singularities. Velocity limits. Out-of-reach targets. It interprets the SDK’s planning error messages, creates sub-programs to search for solutions, and iterates towards a working solution.

So now I’m wondering: if this keeps improving at the current pace, will we all be vibe coding robots sooner than we think?