1. What does an exoskeleton's AI algorithm learn, and how long does it take?
As a wearable device that assists or enhances human movement, an exoskeleton must learn the right parameters to provide natural assistance and adapt to each user's gait. Traditional exoskeletons are tuned through manual parameter adjustment, which typically requires the user to keep moving for more than 30 minutes before the assistance parameters are set appropriately. Moreover, the assistance mode is fixed, so the parameters must be re-tuned whenever the user or scenario changes, which is time-consuming and labor-intensive. Learning time is therefore crucial to the quality of the exoskeleton experience.
Currently, leading exoskeleton companies are introducing AI to replace manual parameter tuning, allowing the device to learn autonomously and shortening the learning time. Representative products include the "Pi" from Kenqing Technology and the "Hypershell Pro X" from Hypershell (JiKe Technology).
Figure 1: Kenqing Technology's "Pi"
Figure 2: Hypershell Technology's Hypershell Pro X
2. What are the main types of AI algorithms for exoskeletons and what is their learning time?
The AI for exoskeletons available on the market can be broadly divided into three categories:
① Rule-based and threshold-based (no learning):
This is the earliest and most basic approach: it uses gyroscopes, angle sensors, and stride speed/frequency sensors together with threshold conditions to determine the user's movements and provide assistance. Strictly speaking, this type of exoskeleton has no "learning period," but it also has virtually no ability to adapt to individual differences or to complex changes in terrain and pace, so users will not see performance improve over time.
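As a minimal illustration of this category (not any vendor's actual firmware; the thresholds and sensor names below are hypothetical), rule-based control reduces to a handful of fixed threshold checks:

```python
# Minimal sketch of rule/threshold-based assist control. Illustrative only:
# the thresholds and assist modes are made up, not taken from any product.

def select_assist_level(thigh_angular_velocity_dps: float,
                        cadence_steps_per_min: float) -> str:
    """Pick a fixed assist mode from simple threshold rules."""
    if cadence_steps_per_min < 20:
        return "idle"            # user is essentially standing still
    if thigh_angular_velocity_dps > 150:
        return "high_assist"     # fast leg swing -> strong assistance
    return "normal_assist"       # default walking assistance

print(select_assist_level(80.0, 10.0))    # standing -> idle
print(select_assist_level(200.0, 110.0))  # brisk walk -> high_assist
```

Because the rules are fixed, the output for a given sensor reading never changes, which is exactly why this category cannot adapt to individual users.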
② Statistical Learning/Pattern Recognition (seconds to minutes):
This is the approach used by many lightweight consumer exoskeletons: it adjusts assistance curves based on historical data, distinguishing common modes such as walking and ascending/descending. Compared with rule-based control, the experience is smoother. The intelligent control logic of Hypershell and Π is closest to this type. Hypershell can even recognize varied terrain such as mountains, snow, sand, and gravel. While Π does not offer as many assistance modes as Hypershell, its standard mode transitions particularly smoothly between walking, slopes, and stairs.
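The core idea of this category can be sketched as classifying a window of IMU features against statistics learned from past data. The sketch below uses a nearest-centroid classifier with invented feature values; it is a generic illustration of pattern recognition, not either product's actual algorithm:

```python
# Illustrative sketch of statistical gait-mode recognition from IMU features.
# Centroids and feature values are hypothetical, for demonstration only.
import math

# Per-mode centroids over two features: (mean vertical accel [g], cadence [steps/min])
CENTROIDS = {
    "walking":     (1.1, 105.0),
    "stairs_up":   (1.3, 80.0),
    "stairs_down": (1.4, 90.0),
}

def classify_gait(features: tuple) -> str:
    """Nearest-centroid classification of the current gait window."""
    def dist(c):
        # Scale cadence so both features contribute comparably.
        return math.hypot(features[0] - c[0], (features[1] - c[1]) / 100.0)
    return min(CENTROIDS, key=lambda mode: dist(CENTROIDS[mode]))

print(classify_gait((1.12, 102.0)))  # -> walking
```

Updating the centroids from each user's recent data is what gives this category its "seconds to minutes" learning period.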
③ Real-time Spatial Visual Perception + Adaptive AI (milliseconds):
This is an innovative approach in recent years and also a key direction adopted by Kenqing Technology in its latest exoskeleton, the "Π6". The primary goal is to equip the device with "eyes" that utilize spatial perception information to learn and predict the most appropriate assistance method for the user. By incorporating data from multiple sources, including vision and the IMU, more parameters and indicators are provided for the AI. Combined with the results of analyzing a large amount of gait data from diverse user types using big data models, the accuracy of terrain recognition and action prediction is improved. This allows the AI to understand the user's intentions more quickly and makes the learning process virtually imperceptible to the user.
3. Differences Between the AI Algorithms
(1) Statistical Learning/Pattern Recognition
Technically, conventional statistical learning/pattern recognition algorithms are primarily based on inertial measurement units (IMUs) worn on the legs. These IMUs detect changes in acceleration and angular velocity in the thighs and calves to determine the user's current movement state and the terrain. When the system detects a change in gait or movement characteristics, it can typically complete the recognition process in seconds and switch to the corresponding assistance mode. The advantage of this approach is its clear structure and reliable response.
However, in principle, IMUs are better at "sensing what the body is doing" than at "predicting what the environment will do." Assistance adjustments are therefore usually made after the movement has occurred, based on the recognition result. This also means that on rapidly changing or continuously mixed terrain, the assistance curve switches between discrete modes, and the transitions may not always be continuous and smooth. Furthermore, when terrain is inferred from inertial information alone, recognition accuracy suffers on complex or atypical surfaces, requiring users to briefly adapt or to adjust the parameters for a better match.
In general, the design philosophy of the Hypershell and π exoskeletons is akin to that of a well-trained, experienced assistant: they maintain a relatively consistent user experience in standard walking situations, but cannot guarantee rapid and precise transitions when moving across varying terrain.
(2) Spatial Visual Perception + Real-Time Adaptive AI: Making the AI Learning Period Almost Disappear
What changes does spatial visual perception bring to exoskeletons?
According to the research, the total delay when visual sensors assist in adjusting a lower-limb exoskeleton is shown in the table below. The data show that the total time from perception to planning and generating a new trajectory is approximately 81.83 milliseconds (about 0.08 seconds). This is much faster than the support phase of a single human step (typically 400-600 milliseconds), meaning the system has enough time to plan the new assistance curve and gait before the user takes the next step.
Table 1: Mean and standard deviation of key experimental data for visual-algorithm perception and gait-trajectory generation time (obstacle detection error, perception delay (RV time), planning delay (CFFTG time)).
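The timing margin implied by these numbers can be checked with simple arithmetic (81.83 ms is the study's reported mean; taking the 400 ms lower bound of the stance phase gives the worst case):

```python
# Quick check of the timing budget: perception + planning (~81.83 ms total)
# must finish within one stance phase (typically 400-600 ms).
PERCEPTION_PLUS_PLANNING_MS = 81.83   # mean reported in the cited study
STANCE_PHASE_MIN_MS = 400.0           # worst case: shortest typical stance phase

margin_ms = STANCE_PHASE_MIN_MS - PERCEPTION_PLUS_PLANNING_MS
print(f"Worst-case margin before the next step: {margin_ms:.2f} ms")  # 318.17 ms
```

Even in the worst case, roughly four-fifths of the stance phase remains after perception and planning, which is why the adjustment feels instantaneous to the wearer.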
Meanwhile, the YOLO-based vision algorithm not only responds extremely quickly but also achieves over 95% accuracy in multi-terrain recognition③. The research and experimental results are shown in the figure below.
Figure 3: Evaluation metrics of the real-time gait-environment classification vision system. (B) Confusion matrix of gait-environment categories classified by the YOLO-LSTM vision system. (D) Environment classification results of the integrated vision system over time.
These studies clearly demonstrate that visual perception allows an exoskeleton to anticipate the terrain of the next step. Users no longer need to wait for the exoskeleton to learn and adjust, enabling true intelligence. The latest generation of Kenqing Technology's exoskeleton, the Π6, is designed around this goal, ultimately combining "personalized analysis before use" with "real-time sensing and adjustment during use."
① Before use: Personalized recommendations are completed in advance through gait video AI analysis. Unlike most exoskeletons that "only begin learning after being put on," the Π6's learning process begins before the user even puts on the device.
Users simply upload a video of their daily walking gait via their mobile phone, and the Π6 uses AI gait analysis algorithms to model and analyze the user's stride length and cadence, lower limb movement rhythm, walking stability, and symmetry. It automatically calculates personalized recommended assistance parameters before the device is activated. Compared to exoskeleton solutions that primarily rely on general gait models (such as Hypershell), the Π6 has already achieved the first step of "personalized" customization from the very beginning.
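The kinds of metrics such a pre-use analysis derives can be sketched generically. The Π6's actual video pipeline is proprietary; the functions below are standard gait metrics (cadence and step-duration symmetry) computed from hypothetical foot-contact timestamps that a video analysis might extract:

```python
# Hedged sketch of generic gait metrics a video-analysis step might produce.
# Not the Π6's actual algorithm; timestamps are invented example data.

def cadence_steps_per_min(step_times_s: list) -> float:
    """Steps per minute from a list of foot-contact timestamps (seconds)."""
    duration = step_times_s[-1] - step_times_s[0]
    return (len(step_times_s) - 1) / duration * 60.0

def symmetry_ratio(left_step_s: float, right_step_s: float) -> float:
    """1.0 = perfectly symmetric left/right step durations; lower = asymmetric."""
    return min(left_step_s, right_step_s) / max(left_step_s, right_step_s)

print(cadence_steps_per_min([0.0, 0.55, 1.1, 1.65, 2.2]))  # ~109.1 steps/min
print(symmetry_ratio(0.55, 0.60))                          # ~0.917
```

Metrics like these are what would let a system map a short walking video to recommended starting parameters before the device is ever worn.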
② In use: Forward-facing camera + d-TOF for real-time recognition of strides and terrain changes
During actual walking, the Π6 does not rely on a single sensor or a fixed assistance curve. Instead, by installing an RGB-D camera and a d-TOF sensor on the hip, it detects the ground surface and obstacles in real time. Through algorithms, it automatically calculates collision-free foot placement points and gait trajectories, achieving rapid response.
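Conceptually, "collision-free foot placement" means choosing a landing spot near the nominal stride point that avoids obstacles seen in the depth data. The sketch below illustrates that idea on a 1-D height profile with invented values; it is a generic illustration, not the Π6's actual planner:

```python
# Illustrative sketch of collision-free foothold selection from a depth-derived
# height profile along the walking direction. Values are hypothetical.

def pick_foothold(heights_m: list, nominal_idx: int,
                  max_step_height_m: float = 0.05) -> int:
    """Return the cell nearest the nominal foothold whose height change
    relative to both neighbors stays under the step-over threshold."""
    def is_flat(i):
        return (0 < i < len(heights_m) - 1 and
                abs(heights_m[i] - heights_m[i - 1]) < max_step_height_m and
                abs(heights_m[i + 1] - heights_m[i]) < max_step_height_m)
    candidates = [i for i in range(len(heights_m)) if is_flat(i)]
    return min(candidates, key=lambda i: abs(i - nominal_idx))

# Ground profile with a 12 cm obstacle at index 3 (the nominal foothold):
profile = [0.00, 0.00, 0.00, 0.12, 0.00, 0.00]
print(pick_foothold(profile, nominal_idx=3))  # -> 1 (flat ground before the obstacle)
```

The real system would do this in 2-D/3-D with fused RGB-D and d-TOF data, but the principle is the same: the depth map lets the planner reject unsafe footholds before the foot ever lands.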
In summary, compared with exoskeletons that rely primarily on inertial sensors to determine state (such as Hypershell and π), Kenqing Technology's "Π6" not only "senses the person" but also "sees the environment" in advance. It truly lets users skip learning (no technical details to understand), skip waiting (no long break-in period), and skip changing (no need to alter their walking style), becoming a helpful partner that "understands you" the moment it is put on.
① Experiment-free exoskeleton assistance via learning in simulation - PMC - https://pmc.ncbi.nlm.nih.gov/articles/PMC11344585/
② Low Obstacle Avoidance for Lower Limb Exoskeletons - https://www.research.unipd.it/retrieve/d17fbf00-51d3-4ea2-9824-869fc059ad32/short11.pdf
③ Adaptive Vision-Based Gait Environment Classification for Soft Ankle Exoskeleton - https://www.mdpi.com/2076-0825/13/11/428





