profile photo
Jeonghwan Kim
[CV] [Github] [Email]

  I'm a Robotics PhD student at Georgia Tech, advised by Dr. Sehoon Ha. I received my Master's degrees in ECE and Math from Georgia Tech, and my Bachelor's degree from Seoul National University, majoring in Electrical and Computer Engineering.
  I love working on robot learning, computer animation, and ML-based control. My goal is to develop algorithms that enable robots to seamlessly interact in everyday environments.



Publications
ARMP: Autoregressive Motion Planning for Quadruped Locomotion and Navigation in Complex Indoor Environments

Jeonghwan Kim, Tianyu Li, Sehoon Ha
IROS 2023

We present a data-driven motion planner that autoregressively generates locomotion trajectories for complex indoor navigation scenarios.

Project Page / Video
ACE: Adversarial Correspondence Embedding for Cross-Morphology Motion Retargeting from Human to Nonhuman Characters

Tianyu Li, Jungdam Won, Alexander Clegg, Jeonghwan Kim, Akshara Rai, Sehoon Ha
SIGGRAPH Asia 2023

Project Page / Paper / Video
Auto-rigging 3D Bipedal Characters in Arbitrary Poses

Jeonghwan Kim, Hyeontae Son, Jinseok Bae, Young Min Kim
EUROGRAPHICS 2021 (Short Paper)

We train a neural network that performs rigging and skinning of a 3D model based on its mesh and volumetric data.

PDF / Code
Learning to generate 3D shapes with Generative Cellular Automata

Dongsu Zhang, Changwoon Choi, Jeonghwan Kim, Young Min Kim
International Conference on Learning Representations (ICLR) 2021

We train Generative Cellular Automata that can generate and complete 3D shapes represented as voxels.

PDF / Code


Projects
Implementation of PPO for Multi-Agent Path Finding with Dynamic Obstacles

Efficient multi-agent path finding is essential for reducing cost when deploying robots in logistics warehouses. In this project, we train a multi-agent variant of the Proximal Policy Optimization (PPO) algorithm for multi-agent path finding with dynamic obstacles.

PDF
Stabilizing Controllers with Root Based Polynomial Regression

We propose a novel method of stabilizing classical controllers using techniques from machine learning. We use the Polynomial Root Kernel (PRK) and Polynomial Root Gradients (PRG) to train neural networks that generate both discrete and continuous controllers satisfying root-criterion stability. We successfully generated stabilizing feedback controllers and parallel feed-forward compensators (PFC), along with a unique application to the Belgian chocolate problem.

Design and Control of Scalable Magnetic Levitation System with Deep Reinforcement Learning

We model a 3-DoF levitating magnetic ball above a 2D plane of electromagnets in MATLAB/Simulink. Three-dimensional position control of the levitating object is achieved with the Deep Deterministic Policy Gradient (DDPG) algorithm.

PDF



Teaching Experience
CS4496/7496 Computer Animation
[Spline Visualizer] [Vector Field Visualizer]
CS3451 Computer Graphics

The site is generated using a template from Maks Sorokin.