The 'monocopter' is a type of micro aerial vehicle (MAV) largely inspired by the flight of botanical samaras (Acer palmatum). A large section of its fuselage forms the single wing where all of its useful aerodynamic forces are generated, giving it a highly efficient mode of flight. However, compared to a multi-rotor of similar weight, monocopters can be large and cumbersome to transport, mainly due to their large, rigid wing structure. In this work, a monocopter with a foldable, semi-rigid wing is proposed and its resulting flight performance is studied. The wing is non-rigid when not in flight and relies on centrifugal forces to straighten during flight. The wing construction uses a special technique for its lightweight, semi-rigid design, and together with a purpose-designed autopilot board, the entire craft can be folded into a compact, pocketable form factor, decreasing its footprint by 69%. Overall, the vehicle weighs 69 grams, achieves a maximum lateral speed of about 2.37 m/s, an average power draw of 9.78 W, and a flight time of 16 min with its semi-rigid wing. The proposed craft accomplishes controllable flight in 5 degrees of freedom using only one thrust unit. It achieves altitude control by regulating the force generated by the thrust unit across multiple rotations. Lateral control is achieved by pulsing the thrust unit at specific instances during each rotation cycle. Closed-loop feedback control is achieved using a motion-capture camera system, with a hybrid Proportional Stabilizer Controller and Proportional-Integral Position Controller. Research paper: https://lnkd.in/gbtUTExx Authors: Shane Kyi Hla Win, Luke Soe Thura Win, Danial Sufiyan, Shaohui Foong #robotics #engineering #quadcopter #drones #innovation #technology
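The once-per-rotation pulsing scheme described above lends itself to a very small control step. Below is a minimal, hypothetical Python sketch of the idea: a base thrust regulates altitude, and an extra pulse is added only while the spinning thrust unit points near the commanded direction. All names, the gains, and the 60-degree pulse window are illustrative assumptions, not the paper's implementation, and the phase lag between pulsing and the resulting lateral motion is ignored.

```python
import math

def thrust_command(heading, base_thrust, pulse_gain, desired_dir,
                   pulse_width=math.radians(60)):
    """One control step for a spinning single-actuator craft (hypothetical API).

    heading:     current azimuth of the thrust unit in rad (e.g. from mocap)
    base_thrust: slowly varying component regulating average lift -> altitude
    pulse_gain:  extra thrust added once per rotation -> lateral motion
    desired_dir: azimuth (rad) toward which lateral motion is commanded
    """
    # Smallest signed angle between the thrust unit heading and the target azimuth
    err = math.atan2(math.sin(heading - desired_dir),
                     math.cos(heading - desired_dir))
    # Pulse only inside a window around the commanded direction, once per rotation
    pulse = pulse_gain if abs(err) < pulse_width / 2 else 0.0
    return base_thrust + pulse
```

In the paper itself, base_thrust and desired_dir would be supplied by the closed-loop controllers (the proportional stabilizer and proportional-integral position controller) fed by the motion-capture system.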
Drone Control Methods for Micro Robotics
Explore top LinkedIn content from expert professionals.
Summary
Drone control methods for micro robotics are advanced techniques used to guide and stabilize tiny flying machines, often inspired by insects or plants, using lightweight sensors, vision systems, and artificial intelligence. These methods allow micro drones to navigate, balance, and respond to their environment with remarkable agility, even at miniature scales.
- Explore vision-based systems: Consider using small cameras and neural networks to estimate a drone’s orientation and control its flight, reducing reliance on bulky sensors.
- Utilize foldable designs: Implement lightweight, collapsible wing structures that make transportation and storage easier without sacrificing aerial performance.
- Apply reinforcement learning: Train neural networks to manage drone movements and adapt to unpredictable environments, making micro drones more resilient and precise.
The Nature Portfolio journal npj Robotics just published our article on estimating and controlling a drone's flight attitude purely based on vision 👁

In order to fly, both drones 🚁 and insects 🐝 have to estimate and control their "attitude", i.e., the angle they make with respect to gravity. Drones typically estimate their attitude with the help of tiny sensors that measure accelerations. In contrast, no specific sense of acceleration has been found in insects. So how do insects find the gravity direction?

In 2022, we showed in an article in Nature that attitude can be estimated by combining optical flow with a motion model. Optical flow is the motion we perceive visually when moving ourselves or looking at dynamic objects in our environment 👀. A motion model captures what happens when we take actions 🦾. For instance, it can model that to accelerate sideways, a helicopter-like drone has to tilt to the side.

At the time, we demonstrated the theory with flying robot experiments 🤖. However, we always needed to include rotation rate measurements from the gyros. There is a rough equivalent for gyros in nature: the "halteres" of Dipterans such as mosquitoes 🦟. However, not all insects have such halteres. Moreover, the theory predicted that gyros are not necessary for attitude estimation.

In our new article, "All eyes, no IMU: learning flight attitude from vision alone", we train neural networks 🧠 to estimate the attitude and rates purely based on vision. In particular, we used event-based cameras 🎥 These cameras do not capture image frames at a fixed rate; instead, each pixel transmits an "event" when its brightness changes. This results in very low latency and a high dynamic range.

Our study showed that neural networks with memory were able to successfully map incoming events to the drone's attitude, allowing for closed-loop, onboard attitude control. An investigation of the neural networks revealed that they exploit not only motion cues but also "pictorial" cues 🖼, in this case the edges of the ground surface at the border of the field of view. Excluding the borders led to slightly lower performance in the environment used for training, but generalized better across different environments.

The developed method is promising for insect-sized flying drones (~100 mg), as it allows for an even smaller sensory package. Finally, it raises interesting questions about how insects use and fuse visual and other sensory information to estimate their attitude.

Congratulations to Jesse H. and Stein Stroobants for their enormous efforts to make this happen, and to co-author Sander Bohté for the great collaboration. A big thank you to the organizers of the special collection at npj Robotics, Nitin J Sanket and Alessio Franci. Finally, we are grateful for funding from the Air Force Office of Scientific Research (AFOSR) and the NWO (Dutch Research Council), in the context of the Dutch Research Agenda (NWA).
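To make the "neural networks with memory" idea concrete, here is a minimal PyTorch sketch of a recurrent estimator that maps binned event-camera frames to attitude and rates. The input encoding (fixed-duration two-channel event-count frames), layer sizes, and output choice are assumptions for illustration only; the article's actual architecture, event representation, and training setup differ.

```python
import torch
import torch.nn as nn

class EventAttitudeNet(nn.Module):
    """Sketch: recurrent network mapping event frames to attitude (hypothetical).

    Assumed input: events pre-binned into 2-channel count images per time step
    (positive / negative polarity). Assumed output: roll, pitch and their rates.
    """
    def __init__(self, height=64, width=64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(          # compress each event frame
            nn.Conv2d(2, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 32 * (height // 4) * (width // 4)
        self.rnn = nn.GRU(feat, hidden, batch_first=True)  # memory over time
        self.head = nn.Linear(hidden, 4)       # roll, pitch, roll rate, pitch rate

    def forward(self, event_frames):
        # event_frames: (batch, time, 2, H, W) binned event counts
        b, t = event_frames.shape[:2]
        z = self.encoder(event_frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(z)
        return self.head(out)                  # per-step attitude estimate
```

For example, `EventAttitudeNet()(torch.randn(1, 10, 2, 64, 64))` returns a (1, 10, 4) tensor, one attitude estimate per time bin. The recurrence is what lets the network integrate motion cues over time rather than relying on a gyro.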
MIT researchers flew the first end-to-end deep reinforcement learning powered flight with micro aerial vehicles!

Soft-actuated insect-scale micro aerial vehicles (IMAVs) have gained traction due to their resilience to collisions. IMAVs are promising, but they lack computationally efficient controllers, especially since uncertainty is amplified at the millimeter scale. Basically, the sim-to-real bridge is (or was) not there.

Here's how they bridged the sim-to-real gap:
1) Replaced the entire control scheme with a single neural network (NN) trained through reinforcement learning, without relying on hand-tuned demonstrations (a 2-layer MLP with 32 neurons).
2) Accounted for system delay during initialization of the NN through re-matching in behavior cloning (BC).
3) Incorporated delay in RL training with proximal policy optimization (PPO).
4) Randomized domain parameters in both BC and RL for robustness against system uncertainty.

This resulted in a successful 50-second zero-shot flight with 1.34 cm lateral and 0.05 cm altitude error.

The efficiency of the training procedure deserves a mention: the modified BC phase requires less than 5 minutes on an M1 MacBook Air, and each PPO fine-tuning iteration takes about 15 minutes. Even with the incorporation of system delay into both the BC and RL phases and randomized domain parameters!

The researchers proved that such high-performance control no longer needs to be computationally intensive. Congrats to Nemo Yi-Hsuan Hsiao, Weitung Chen, Yun Chang, Pulkit Agrawal, and Kevin Chen!

Here's the paper: https://lnkd.in/gaKH8S5v

I post my latest interesting reads in robotics, follow me to stay updated!
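As a rough illustration of points 1, 2, and 4 above, here is a minimal PyTorch sketch: a 2-layer, 32-neuron MLP policy initialized by behavior cloning under randomized dynamics parameters. The state/action sizes, randomization ranges, and the sample_demo stub are all hypothetical placeholders; the paper's simulator, delay re-matching, and PPO fine-tuning are not reproduced here.

```python
import torch
import torch.nn as nn

# Point 1 (sketch): the controller is a tiny 2-layer MLP with 32 neurons.
# Input/output sizes here are assumptions, not the paper's exact state/action spec.
policy = nn.Sequential(
    nn.Linear(12, 32), nn.Tanh(),   # e.g. position/attitude errors + rates
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 4),               # actuator commands
)

def randomized_params():
    """Point 4 (sketch): randomize uncertain physical parameters each rollout
    (hypothetical ranges) so the learned policy tolerates model mismatch."""
    return {
        "mass_scale":   float(torch.empty(1).uniform_(0.8, 1.2)),
        "thrust_scale": float(torch.empty(1).uniform_(0.7, 1.3)),
        "delay_steps":  int(torch.randint(1, 4, (1,))),  # points 2/3: actuation delay
    }

def sample_demo(params):
    """Hypothetical stand-in for rolling out an existing controller in a
    randomized simulator and replaying it with `delay_steps` of latency.
    Returns placeholder (observation, expert action) batches."""
    return torch.randn(64, 12), torch.randn(64, 4)

# Point 2 (sketch): behavior-cloning initialization regresses the MLP onto the
# expert controller's actions before PPO fine-tuning takes over.
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for step in range(1000):
    obs, expert_action = sample_demo(randomized_params())
    loss = nn.functional.mse_loss(policy(obs), expert_action)
    opt.zero_grad(); loss.backward(); opt.step()
```

The appeal of a network this small is exactly what the post highlights: it is cheap enough to train in minutes on a laptop and, in principle, to evaluate within the tight compute budget of an insect-scale vehicle.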