My research interests are Robot Control Methods and Artificial Intelligence.
If a robot is placed in a working environment shared with humans, it must perform optimal control across various environments.
I am interested in properly judging complex information in these diverse environments,
choosing the best among a myriad of possible control actions, and implementing this through artificial intelligence and control technologies.
● Pose for Metaverse | 2022.06.03 | Co-Author | ICROS'22, Sono Calm Geoje | Awarded and presented
● Development of the IMU Sensor Based Gait Phase Detection Algorithm that is Robust to Changes in Terrain and Walking Speed | 2022.10.19 | First Author | AF2022-0560 | KSPE (Korean Society for Precision Engineering) Fall Conference
● Intuitive Robotic Arm Teleoperation Framework Utilizing Vision Based Real-Time Hand Sign Tracking | 2024.02.21 | First Author | KROS 2024 in Pyeongchang |
International
● Multitask Learning for Multiple Recognition Tasks: A Framework for Lower-limb Exoskeleton Robot Applications | 2023.08.28 | First Author | IEEE RO-MAN 2023 | Pre-released on arXiv
● An Intuitive Framework to Minimize Cognitive Load in Robotic Control: A Comparative Evaluation of Body Parts | 2024.06.27 | First Author | IEEE UR 2024 |
Award
● 2022.06.03 | Hanyang University | team. YOROKE | Capstone Design Fair, Gold Medal | 1st in the department, 2nd overall in the College of Engineering
● 2022.08.08 | Hanyang University | team. YOROKE | SHARE Challenge, Bronze Medal
This project aims to develop 3D LiDAR scanning and object recognition technology for collaborative robots, using a 2D LiDAR that rotates 360 degrees. This method allows data acquisition from surrounding objects without blind spots and can be implemented at a relatively low cost.
In this project, I was responsible for detecting human objects and drawing 3D bounding boxes using the Person Minkowski U-Net, based on the projected 3D point cloud data.
Implementation Video
Project Duration :
2023.06 ~ 2024.01
Data Collection for Personalized Walking Assistance in Lower-Limb Exoskeleton Robot
Project Background
Data collection for what?
I participated in a project that optimally controls a lower-limb exoskeleton robot for personalization.
In general, human walking differs from person to person, and it shows varied and irregular patterns even within an individual.
Therefore, when controlling a lower-limb exoskeleton robot, it is necessary to cope properly with these variations in order to apply the most suitable assistance.
To do this, I collected data in this project to develop AI models that make robust judgments despite these variances, and to analyze personalization parameter factors.
AMTI Force Plate: used to calculate the ground reaction force more accurately.
What I had to do here was synchronize and integrate all the data from the equipment above.
Sensor Software Communication
When collecting data, if the sensor software packages were incompatible with each other, the data had to be collected in one software package through communication protocols or dedicated synchronization equipment.
The software I used was Motive, Xsens Analyze, and ROS.
In the Motive software, AMTI Force Plate, Delsys Trigno EMG, and Optitrack Marker data were compatible.
In the Xsens Analyze software, IMU data and Optitrack Marker data were compatible. Its IMU data is not a raw sensor signal, but information computed for each joint link.
In ROS, Optitrack Marker, EMG, IMU, and Insole FSR data were compatible. Its IMU data is the raw sensor signal (not calculated).
Since ROS communicates over TCP/IP, it could be made compatible with any software that supports TCP/IP.
Means of Communication
I was able to communicate between sensors through two methods: the Esync equipment and the TCP/IP protocol.
Data Extraction
The data I wanted to collect were the EMG signal, Optitrack Marker data, Force Plate data, and the IMU sensor's raw signal.
Force Plate, Optitrack Marker, and EMG signals, excluding IMU, were compatible with the Motive software.
Xsens Analyze, the software dedicated to the Xsens IMU, can communicate with Motive through Esync. One problem here was that getting raw IMU sensor values was hard with the Xsens Analyze software: the values it sent were not raw values, but values calculated in some way.
The only way to collect the raw IMU sensor signal was to collect it from ROS via TCP/IP communication.
Synchronizer
Force Plate, EMG, and Optitrack Marker data, excluding IMU, can be collected in Motive.
Optitrack Marker, EMG, and IMU data, excluding Force Plate, can be collected in ROS.
However, the data I wanted to collect were the EMG signal, Optitrack Marker data, Force Plate data, and the raw IMU signal, all together.
For this, I came up with the idea of using the FSR insole sensor.
If the insole FSR sensor is placed on the force plate and stepped on, both devices register the signal at the same physical instant, so it is possible to determine which moment is the simultaneous point. A minimal sketch of this alignment idea follows.
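The sketch below shows this event-based alignment in Python; the thresholds and array shapes are illustrative assumptions, not the project's actual values:

```python
import numpy as np

def first_strike_time(t, signal, threshold):
    """Timestamp of the first sample whose value crosses the threshold."""
    return t[np.argmax(signal > threshold)]

def align_ros_to_motive(t_motive, force_z, t_ros, fsr,
                        force_thresh=20.0, fsr_thresh=0.5):
    """Shift the ROS time axis so that the step that hits the force plate
    and the insole FSR at the same instant lines up in both streams."""
    offset = (first_strike_time(t_motive, force_z, force_thresh)
              - first_strike_time(t_ros, fsr, fsr_thresh))
    return t_ros + offset  # ROS timestamps expressed on the Motive clock
```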
Development of the Synchronization GUI Program
I developed a GUI program that synchronizes the data collected in Motive with the data collected in ROS.
The GUI program made it easier for fellow researchers who are not familiar with programming to perform data processing that would otherwise be complicated.
Experiment Protocol
I collected gait data from 15 subjects according to terrain and speed conditions.
Terrain:
Levelground Walk
Stair Ascent / Descent
Ramp Ascent / Descent
Speed:
For level ground: 70, 90, 110, 130 BPM
For ramp and stairs: 70, 90, 100 BPM
All subjects walked along to metronome BPM sounds.
Project Duration :
2022.06 ~ 2023.03
Multitask Learning for Multiple Recognition Tasks: A Framework for Lower-limb Exoskeleton Robot
Introduction
What is the Gait Phase?
The gait phase is a quantitative value between 0 and 100% that indicates the current stage of the walking cycle.
Why is it Important?
For optimized gait control of an exoskeleton robot, it is necessary to assist with the appropriate magnitude of force at the correct timing.
This requires clarifying the stage of the user's current walk so that the optimization algorithm can be applied appropriately.
I developed a CNN model that robustly judges this gait phase regardless of terrain, walking speed and direction, and environmental changes in data collection,
with a minimum number of IMUs (only one IMU sensor, worn on the thigh).
In the future, I plan to apply this algorithm to a real exoskeleton robot.
Implementation Focus
● Use as few IMU sensors as possible
● Be robust against various speed and terrain changes
● Perform terrain and phase judgments simultaneously by changing the head of one backbone network
CNN Model Implementation
A Convolutional Neural Network is an artificial neural network that uses convolution kernels to filter the characteristic distribution of data.
Using this property, I expected the convolutional kernel to filter the characteristic changes over time in walking data.
Input Pipelining Algorithm
The feature sets that can be obtained from an IMU sensor are angular velocity, linear acceleration, and orientation (quaternion).
Despite this limited information, I tried to find the most suitable preprocessing for the model.
The entire preprocessing process is as follows.
First, IMU data are stacked in order of arrival time and cut every T seconds to form 2D information.
After that, up/down sampling is performed to a fixed 200 records, followed by average filtering and min-max normalization. (A minimal sketch of this pipeline follows the list below.)
The main points of this process are as follows.
● Orientation-free: performance has to be preserved regardless of direction
Linear acceleration and angular velocity form a feature set independent of the user's walking direction, because they are defined in the IMU's local reference frame.
However, the orientation feature can have a different value depending on the user's walking direction.
So I excluded orientation information from the input features.
● Stacking up data over time to create inputs that look like 2D photos
The features used are angular velocity and linear acceleration.
To compensate for the small number of features, data accumulated over T seconds was used as the input.
● Average Filtering
If every window is resampled to the same number of records (200), then whenever the original sampling count is lower than 200 there are points where the graph is discretely cut off.
The convolutional kernel detects characteristic changes in data, and during experiments I found that these discrete points had a bad effect on learning.
So, to smooth the data, an average filter was used.
(I mixed delaying and pulling filters so that there is no time delay.)
● Normalization to be robust against various environmental changes
For linear acceleration and angular velocity, the scaling of the values may vary slightly if the attachment site is slightly shifted.
Min-max scaling was performed so that the model could respond robustly to these variances.
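Below is a minimal NumPy sketch of this pipeline. The window shape, moving-average width, and channel layout are illustrative assumptions, not the project's exact settings:

```python
import numpy as np

def preprocess_window(window, n_records=200, avg_width=5):
    """window: (n_samples, 6) array of [angular velocity (3), linear acceleration (3)]
    stacked in arrival order over T seconds; orientation is deliberately excluded."""
    n_samples, n_features = window.shape

    # Up/down-sample every channel to a fixed 200 records by linear interpolation.
    src = np.linspace(0.0, 1.0, n_samples)
    dst = np.linspace(0.0, 1.0, n_records)
    resampled = np.stack([np.interp(dst, src, window[:, i])
                          for i in range(n_features)], axis=1)

    # Moving-average filter to smooth the discrete steps introduced by upsampling.
    kernel = np.ones(avg_width) / avg_width
    smoothed = np.stack([np.convolve(resampled[:, i], kernel, mode="same")
                         for i in range(n_features)], axis=1)

    # Min-max normalization per channel, robust to small scale changes
    # caused by slightly shifted sensor placement.
    mins, maxs = smoothed.min(axis=0), smoothed.max(axis=0)
    return (smoothed - mins) / (maxs - mins + 1e-8)  # shape: (200, 6)
```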
Model Structure
I designed the model structure by referring to LeNet, VGGNet, and ResNet, which were created using convolutional kernels in the field of computer vision.
All convolutional kernels and pooling layers are one-dimensional filters, oriented along the 200-sample axis. That is, the model learns by downsampling the 200-length data of each channel.
The characteristics of the implemented model are as follows.
● The basic block consists of a convolutional kernel, batch normalization, and an activation function.
● In the first two layers, downsampling is stronger than in the other layers thanks to a 2x1 pooling layer.
● Batch normalization was put in each block to make learning faster.
● Only two FC layers and one activation function were used for each output layer ("head" in multitask learning), and one head each is attached for terrain recognition and gait-phase recognition. A minimal sketch of this structure follows.
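A minimal PyTorch sketch of this multitask structure; the channel counts, kernel sizes, block depth, and number of terrain classes are illustrative assumptions, not the project's exact hyperparameters:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Basic block: 1-D convolution + batch normalization + activation."""
    def __init__(self, c_in, c_out, pool):
        super().__init__()
        self.conv = nn.Conv1d(c_in, c_out, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm1d(c_out)
        self.act = nn.ReLU()
        self.pool = nn.MaxPool1d(2) if pool else nn.Identity()

    def forward(self, x):
        return self.pool(self.act(self.bn(self.conv(x))))

class GaitNet(nn.Module):
    """Shared backbone with two heads: terrain classification and gait-phase regression."""
    def __init__(self, n_terrains=5):
        super().__init__()
        # Input: (batch, 6 channels, 200 samples); the first two blocks downsample harder.
        self.backbone = nn.Sequential(
            ConvBlock(6, 32, pool=True),
            ConvBlock(32, 64, pool=True),
            ConvBlock(64, 128, pool=False),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        # Each head: two FC layers and one activation function.
        self.terrain_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_terrains))
        self.phase_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        z = self.backbone(x)
        return self.terrain_head(z), torch.sigmoid(self.phase_head(z))  # phase in [0, 1]
```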
Development of the Stewart-Gough Platform Impedance Control Algorithm
Introduction
Project Mission
What I had to do in this project was to implement impedance control of the Stewart-Gough platform manipulator using robotics theory.
What is the Stewart-Gough Platform?
The Stewart-Gough platform (a.k.a. Stewart platform) is a type of 6-DOF parallel manipulator with six prismatic actuators.
The Stewart platform generally has the following structure:
- Top Plate
- Spherical Joint (3-DOF)
- Linear Actuator
- Universal Joint (2-DOF)
- Bottom Plate
Because each Stewart platform motor is oriented similarly to the direction in which the force is to be applied (vertical to the upper plate),
it has the advantage of a very high carrying load (high robustness), and it can also firmly control motion in six degrees of freedom even with a simple mechanism.
What is Impedance Control?
Impedance control is a robot control method that considers the interaction between the robot and its environment.
This method measures the differences between the actual and desired values of q'', q', q (the q-set) generated during the interaction between the robot and the environment, and adjusts the robot's motion accordingly.
The q-set can be a Cartesian-space variable or a joint-space variable.
The control method I applied was in Cartesian space.
Therefore, my goal was to implement a system in which a virtual spring and damper exert force against deviations from the desired q-set of the end-effector. A minimal sketch of this spring-damper law follows.
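As a minimal sketch of this virtual spring-damper law, assuming diagonal stiffness and damping gains on a 6-D Cartesian pose error (the actual gains used in the project are not stated here):

```python
import numpy as np

def impedance_force(x_d, xd_d, x, xd, K, D):
    """Cartesian impedance force: a virtual spring K and damper D pull the
    end-effector pose x (6-D: position + orientation) toward the desired x_d."""
    return K @ (x_d - x) + D @ (xd_d - xd)

# Illustrative diagonal gains for translational and rotational error components.
K = np.diag([500.0] * 3 + [50.0] * 3)   # stiffness
D = np.diag([40.0] * 3 + [4.0] * 3)     # damping
F = impedance_force(np.zeros(6), np.zeros(6), np.full(6, 0.01), np.zeros(6), K, D)
print(F)  # restoring wrench for a 0.01 offset in every coordinate
```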
Workflow
1. Implementing an Overall High-Level Control Structure
2. Implementing the Dynamic Controller
3. Implementing the Forward Kinematics and Force Torque Mapping
Implementation
Overall High-level Control Structure
The first thing I did was create the overall control structure.
Since my knowledge of this was insufficient, I looked at how impedance control is performed on a typical robot arm, the UR5.
As a result, I found many hints in the paper below.
Abstract — Robot manipulators are designed to interact with their surroundings. Even if a task does not specifically involve interaction, the robot may collide with unknown obstacles during its motion. To overcome these problems, it is necessary to consider possible interactions inside the control system. This paper aims to design a controller that allows the manipulator to reach a final pose, …
This paper implements impedance control that moves the actual q-set value, obtained from the motor encoders and forward kinematics, to the desired q-set.
The entire control structure is described below.
A brief summary of the structure above is as follows.
1. Take "Position in Current Cartesian Space" from the controller and calculate "the Force each actuator must Exert to go to Desired Position"
2. Change Joint Space Force to Acceleration in Cartesian Space in Forward Dynamics
3. Integrate acceleration in Cartesian Space twice to switch to Position in Cartesian Space
4. Convert "Position in Catesian Space" to "Position in Joint Space" using Inverse Kinematics (a.k.a I.K Solver)
(in revolute actuator, position means 'rotation Angle')
5. The motor driver applies a position value to the motor and transfers the actual position of the current motor from the motor encoder to the Forward Kinematics (a.k.a F.K Solver) node.
6. Use Forward Kinematics to convert the current Actual Joint Position to the Actual Cartesian Position, which is fed back to the Controller node.
Repeat entire Process
The Stewart platform manipulator is a closed-chain mechanism, not an open-chain robotic arm as above,
so the internal structure of the controller is very different,
but the overall control structure is the same.
There is only one difference. The input value accepted by the motor driver is a position value for the UR robot,
but my motor driver accepted a torque value, so it did not need the conversion process using forward dynamics and the I.K. solver.
Using the above picture as a hint, I drew the control structure of the Stewart platform as shown below.
As you can see in the picture above, it is a simpler structure in which the joint force only needs to be converted to joint torque.
The dynamic controller is the key to the overall control structure.
The dynamic controller's inputs are the desired position (P), velocity (P'), and acceleration (P'') in Cartesian space,
and the actual position (P), velocity (P'), and acceleration (P'') in Cartesian space.
※ P is the same as Q.
The output is the joint force that each actuator has to exert.
Dynamic Controller
The Dynamic Controller was created by referring to the paper below.
In this paper, an analytical study of the kinematics and dynamics of Stewart platform-based machine tool structures is presented. The kinematic study includes the derivation of closed form expressions for the inverse Jacobian matrix of the mechanism and its time derivative. An evaluation of a numerical iterative scheme for an on-line solution of the forward kinematic problem is also presented. …
From the above paper, I borrowed the inverse kinematics, the Jacobian mapping, and the inverse dynamics.
The expression that this controller implements is as follows.
● Fdy
A node that calculates the actuator force required for gravity compensation under the configured gravity when the Cartesian-space position, velocity, and acceleration values are input.
This node can be created by referring to the above paper, and I built it in MATLAB to verify that it works.
● Fim
The impedance force is calculated as a Cartesian force from the difference from the desired position set in Cartesian space, as explained above.
The joint force that follows the desired position set is then obtained by mapping the Cartesian force through the Jacobian. A minimal sketch of this mapping follows.
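A minimal sketch of that mapping, under the assumption that the paper's inverse Jacobian maps the platform twist to the six leg-length rates; by the principle of virtual work, the leg forces then come from solving its transpose:

```python
import numpy as np

def joint_forces_from_wrench(J_inv, F_cartesian):
    """J_inv: 6x6 inverse Jacobian of the Stewart platform (platform twist -> leg rates).
    Virtual work gives F = J_inv^T @ f, so the actuator (leg) forces are obtained
    by solving that linear system for f."""
    return np.linalg.solve(J_inv.T, F_cartesian)
```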
Forward Kinematics and Force Torque Mapping
○ Forward Kinematics
Because the forward kinematics of the Stewart platform is a closed-chain problem, unlike the typical kinematics solutions of open-chain manipulators,
it is much more difficult to solve than the inverse kinematics.
Therefore, the forward kinematics node is usually implemented with the Newton-Raphson method rather than an analytic method,
and I implemented the node using the open source below. A minimal sketch of the iteration follows.
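As a minimal sketch of the Newton-Raphson iteration, with hypothetical helper functions `inverse_kinematics(x)` (the six leg lengths for a pose) and `inverse_jacobian(x)`; the actual open-source node differs in detail:

```python
import numpy as np

def forward_kinematics(l_measured, x0, inverse_kinematics, inverse_jacobian,
                       tol=1e-8, max_iter=50):
    """Newton-Raphson FK: find the platform pose x whose IK leg lengths
    match the measured lengths l_measured. x0 is an initial pose guess."""
    x = x0.copy()
    for _ in range(max_iter):
        residual = inverse_kinematics(x) - l_measured  # leg-length error (6,)
        if np.linalg.norm(residual) < tol:
            break
        # d(leg lengths)/d(pose) is exactly the inverse Jacobian J_inv.
        x -= np.linalg.solve(inverse_jacobian(x), residual)
    return x
```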
○ Force Torque Mapping
Strictly, this part can only be done accurately by calculating how much current should be supplied through dynamic modeling of the motor.
But this process was too complicated and I did not have the time, so I assumed that force and torque are in a first-order linear relationship, found the slope heuristically, and used that.
Project Duration :
2021.06 ~ 2022.06
Development of Pose Estimation & Analysis for an AI Metaverse Exercise Platform
Introduction
What Problem do we want to Solve?
Virtual reality fitness and health care applications require accurate, real-time pose estimation for interactive features.
Yet they suffer either a limited angle of view when using handheld devices such as smartphones and VR gear for capturing human pose, or a limited input interface when using distant imaging/computing devices such as Kinect.
Solution
Our team's goal was to solve this trade-off by separating the metaverse and the pose estimation system.
The embedded platform NVIDIA Jetson Xavier and the camera devices are integrated into one unit, separated from the other computing platforms, to provide a viewing angle covering the user's entire body along with a close input and visualization interface for the user.
Contribution
● This system is a platform that provides the pleasure of exercising with multiple people to those who want to exercise with others but are reluctant to reveal themselves.
● By separating the camera recognition device from the display and input device that the user looks at and operates, we established an interface that has a wide camera angle and is comfortable for the user.
● In addition, by implementing 3D mapping with a 2D camera, we built a system with the potential to reduce equipment costs for pose estimation. (Existing 3D cameras and other motion-capture equipment are very expensive.)
AI Part
I was in charge of the AI part in this project.
My goal was to predict 2D skeleton information using a 2D camera and map it to 3D markers.
In addition, based on the estimated 2D skeleton information, I implemented a simple machine learning model that determines the current user's pose, such as sitting, standing, or walking.
Through this process, the user's movement information was fed into the 3D metaverse world.
3D Pose Estimation
The 3D pose mapping procedure is illustrated as follows.
First, human skeleton information was extracted as 2D (x, y) coordinates using an AI pose estimation model. To minimize the inference time, we used MobileNet, the lightest model for pose estimation.
Then I used a pose-lifting model that lifts the 2D (x, y) poses into 3D (x, y, z) poses, so that the extracted 2D data can be mapped into the 3D metaverse world.
The metaverse world was implemented in Unity, and the system transmitted the marker data over TCP/IP using Python socket programming. A minimal sketch of the sender side follows.
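A minimal sketch of the Python sender side; the host, port, and newline-delimited JSON framing are illustrative assumptions, not the actual Unity-side protocol:

```python
import json
import socket

# Illustrative endpoint; the Unity listener's actual host/port may differ.
HOST, PORT = "127.0.0.1", 9000

def send_markers(sock, markers):
    """markers: list of (x, y, z) tuples for each lifted skeleton joint.
    Newline-delimited JSON is an assumed framing, not the project's exact format."""
    payload = json.dumps({"markers": markers}) + "\n"
    sock.sendall(payload.encode("utf-8"))

with socket.create_connection((HOST, PORT)) as sock:
    send_markers(sock, [(0.0, 1.0, 0.5)] * 17)  # e.g. 17 lifted joints
```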
In addition to mapping poses, it was necessary to distinguish what kind of movements users performed and how many repetitions they did during a specific exercise. The model for this is a simple machine learning model that takes the previously predicted 2D (x, y) pose coordinates as input.
All the data for training the Random Forest model was collected directly by recording the output of the 2D pose estimation model (MobileNet) during actual inference. A minimal sketch of such a classifier follows.
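A minimal scikit-learn sketch of such a pose classifier; the stand-in data, feature layout (17 joints × 2 coordinates), and labels are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: one row per frame of flattened 2D keypoints
# (e.g. 17 joints * (x, y) = 34 features); labels 0=sitting, 1=standing, 2=walking.
rng = np.random.default_rng(0)
X = rng.random((600, 34))
y = rng.integers(0, 3, 600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```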
Team members posing in squat positions for data collection
Award & Publication
● 2022 Hanyang Capstone Design Fair, Gold Medal (1st in the department, 2nd overall in the College of Engineering)
The task was to build a binary classifier that judges defective products in a water manufacturing process using machine learning models.
Characteristics of the Dataset
● Curse Of Dimensionality
The number of features is too large for the number of records:
the dataset has 1,763 records but 1,559 features.
● Highly Imbalanced Dataset
There is much more data on normal products than on defective ones.
● Domain Knowledge Unavailable
Because the company's process is confidential, what each data feature means is not disclosed. In other words, domain knowledge is not available.
Solution
Feature Selection
Because the number of features was too large and the feature meanings could not be known, proper visualization and analysis of the data was impossible.
Even with the EDA process omitted, I wanted to determine which features are the most influential for learning and remove the features that have no influence, or a bad influence, on learning.
While looking for ways to select the feature set properly, I found the wrapper method.
The wrapper method extracts the best feature subset for a model.
In other words, it trains on feature-set combinations one by one to find which combination is the most effective for learning.
There are two types of the wrapper method: the forward wrapper method and the backward wrapper method.
Forward Wrapper Method:
Start with no features and add the most important feature at each iteration until there is no further improvement in performance.
Backward Wrapper Method:
Start with all features and remove the least important feature one by one until there is no further improvement in performance.
Among them, I chose the forward wrapper method; a minimal sketch follows.
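A minimal scikit-learn sketch of forward selection; the base estimator, scoring, CV settings, and stand-in data are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Stand-in for the real (confidential) dataset: 1,763 records, 1,559 features,
# heavily imbalanced toward the normal class.
X, y = make_classification(n_samples=1763, n_features=1559,
                           weights=[0.9, 0.1], random_state=0)

# Greedy forward selection: start from zero features and add the one that
# improves the cross-validated score the most, until improvement falls below tol.
selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),   # illustrative base estimator
    n_features_to_select="auto", tol=1e-3,
    direction="forward",
    scoring="roc_auc", cv=5,
)
selector.fit(X, y)
X_selected = selector.transform(X)       # keeps only the chosen columns
```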
Ensemble Learning
Ensemble learning refers to combining several weak classifiers to create a strong classifier. The data entering each classifier in the ensemble must be a different dataset.
(Techniques such as bagging and pasting determine how the different datasets are drawn.)
However, if each classifier received a different subset of records, there was a concern that classifier performance would be greatly degraded by the insufficient number of records.
To prevent this problem, given that this dataset is highly imbalanced, I considered giving each model a different subset of features (not records).
Here, the forward wrapper method was applied to each learner in order to give each weak classifier its most suitable feature set.
The forward wrapper method is a feature selection method that returns the feature combination most suitable for the model among all possible combinations.
(This method has the disadvantage of taking a very long time, because it finds the best combination by comparing candidates one by one.)
After running the wrapper for each classifier, I selected the most suitable features for each classifier and trained the models.
Among the trained ML models, I selected the three weak classifiers with the best performance.
(All models are implemented with the scikit-learn package.)
Voting Method
In general, the types of voting include hard voting, soft voting, and weighted voting.
The method I chose here is weighted voting.
I multiplied the probability output of each weak classifier by a weight, summed them, and classified the total against a threshold.
The weights (w1, w2, w3) and the threshold were optimized via grid search over 10 random seed values to achieve the best performance. A sketch of this search appears below.
(It took a very long time…)
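A minimal sketch of this weighted vote and its grid search; the weight grid, threshold range, and metric are illustrative assumptions:

```python
import itertools
import numpy as np
from sklearn.metrics import recall_score

def weighted_vote(probas, weights, threshold):
    """probas: (3, n_samples) defect probabilities from the three weak classifiers.
    Predict 'defective' when the weighted sum crosses the threshold."""
    score = np.tensordot(weights, probas, axes=1)  # (n_samples,)
    return (score >= threshold).astype(int)

def grid_search(probas, y_true, metric=recall_score):
    """Exhaustively search weights (w1, w2, w3) and the threshold for the best metric."""
    grid = np.linspace(0.1, 1.0, 10)
    best, best_score = None, -np.inf
    for w1, w2, w3, th in itertools.product(grid, grid, grid,
                                            np.linspace(0.1, 0.9, 17)):
        s = metric(y_true, weighted_vote(probas, np.array([w1, w2, w3]), th))
        if s > best_score:
            best, best_score = (w1, w2, w3, th), s
    return best, best_score
```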
Performance Evaluation
The key criteria for selecting the final model were the AUC and recall values.
Due to the nature of the task, I judged that the most fatal case was the model deciding a product has no problem when it is actually defective. So, even at the cost of accuracy and precision, I chose the model that reduces false negatives (FN) as much as possible.
The figure above shows the performance indices of the ensemble model with the best performance; its recall and AUC scores over the threshold were the highest.
Project Duration :
2nd semester project for 3rd year of bachelor course
Design & Control of a Quadrupedal Spider Robot
Introduction
Quadrupedal Spider Robot
At the end of the first semester of my second year, I did a simple project to control a spider robot using inverse kinematics.
Hardware Design
A simple representation of the robot structure is shown below.
Each leg has three motors.
To give it a robot shape, I designed the parts using the SolidWorks CAD tool and a 3D printer.
Analytical Inverse Kinematics Solver
Each leg of the spider robot is a 3-DOF manipulator.
The space to be mapped is three-dimensional (x, y, z), so I could solve the inverse kinematics with the simple law of cosines, as you can see in the figure above. A minimal sketch follows.
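A minimal sketch of this analytic IK for one leg, assuming a coxa-yaw joint followed by a planar femur-tibia chain; the link lengths and frame convention are illustrative, not the actual robot's dimensions:

```python
import math

def leg_ik(x, y, z, l_coxa=0.03, l_femur=0.08, l_tibia=0.12):
    """Analytic 3-DOF leg IK using the law of cosines.
    Returns (coxa yaw, femur pitch, knee angle) in radians for foot target (x, y, z)."""
    theta1 = math.atan2(y, x)              # yaw toward the foot
    r = math.hypot(x, y) - l_coxa          # planar reach past the coxa link
    d = math.hypot(r, z)                   # distance from femur joint to foot
    # Law of cosines in the femur-tibia triangle (clamped for numerical safety).
    cos_knee = (l_femur**2 + l_tibia**2 - d**2) / (2 * l_femur * l_tibia)
    theta3 = math.acos(max(-1.0, min(1.0, cos_knee)))
    cos_femur = (l_femur**2 + d**2 - l_tibia**2) / (2 * l_femur * d)
    theta2 = math.atan2(z, r) + math.acos(max(-1.0, min(1.0, cos_femur)))
    return theta1, theta2, theta3

print(leg_ik(0.10, 0.05, -0.08))  # joint angles for an example foot target
```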
○ Motion Test
Trial & Error
Since this quadrupedal spider robot consists of cheap motors and simple position control,
there were difficulties in controlling it.
○ Failure
After several motion optimizations, I finally succeeded in making this robot walk.
○ Success
Project Duration :
1st semester project for 2nd year of bachelor course