The Autonomous Control and Decision Systems (ACDS) Laboratory is part of the Flight Mechanics and Controls (FMC) group at the Daniel Guggenheim School of Aerospace Engineering, Georgia Institute of Technology (GaTech). The ACDS Laboratory is also affiliated with the Institute for Robotics and Intelligent Machines (IRIM) and the Decision Control Lab (DCL) at GaTech. Forty years after the first Apollo mission, technological and industrial development, together with the need for deep space exploration, has created new challenges. Among these is the challenge of building autonomous robotic systems that can accomplish difficult missions in remote, unknown, or partially known environments and adapt to changing, dynamic situations. Robotic systems should be able to walk robustly, navigate, explore efficiently, quickly learn new motor skills, and generalize these skills to unseen conditions. While full autonomy is critical, robotic systems should also cooperate safely with humans, and be teleoperated efficiently by them, during motor control tasks in manufacturing and space missions.
Besides autonomy, our interests include investigating the computational principles related to neural organization, computation, function, and behavior. A few questions illustrate our point of view: How does neural-hardware organization relate to function? Are there variational principles in control theory and machine learning that explain this relationship? What are the underlying neural optimization algorithms used to perform motor control and learning tasks? Can we transfer control and hardware design principles from neural organisms to robotic and aerospace systems?
On the mathematical side, our research lies at the intersection of Control and Dynamical Systems Theory, Machine Learning, Information Theory, and Statistical Physics. At the core of this intersection are 1) the optimality principles of control theory, namely Dynamic Programming and the Pontryagin Maximum Principle, 2) the fundamental connections between Partial Differential Equations and Stochastic Differential Equations, 3) information-theoretic interpretations of control and learning, and 4) efficient machine learning algorithms for statistical inference, learning, and control.
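As a sketch of how items 1) and 2) connect, consider the standard path integral control setting: for control-affine stochastic dynamics with quadratic control cost, Dynamic Programming yields the Hamilton-Jacobi-Bellman PDE, and an exponential transformation of the value function linearizes it so that the Feynman-Kac formula gives a probabilistic (sampling-based) representation of its solution. The equations below follow this well-known derivation; the specific symbols ($q$, $R$, $\Sigma$, $\lambda$, $\phi$) are generic placeholders, not notation taken from the lab's papers.

```latex
% Control-affine stochastic dynamics
dx = f(x)\,dt + G(x)\bigl(u\,dt + \sqrt{\Sigma}\,dW\bigr)

% Value function: minimal expected cost-to-go
V(x,t) = \min_{u(\cdot)} \mathbb{E}\Bigl[\phi(x_T)
  + \int_t^T \bigl(q(x_s) + \tfrac{1}{2}\,u_s^\top R\,u_s\bigr)\,ds\Bigr]

% Hamilton-Jacobi-Bellman PDE (Dynamic Programming), with optimal control
-\partial_t V = q + \min_u\Bigl[\tfrac{1}{2}u^\top R u
  + (f + G u)^\top \nabla_x V\Bigr]
  + \tfrac{1}{2}\,\mathrm{tr}\bigl(G\Sigma G^\top \nabla_x^2 V\bigr),
\qquad u^{\ast} = -R^{-1} G^\top \nabla_x V

% Under the exponential transform \Psi = e^{-V/\lambda} with \lambda R^{-1} = \Sigma,
% the PDE becomes linear in \Psi, and Feynman-Kac gives the path integral:
\Psi(x,t) = \mathbb{E}\Bigl[\exp\Bigl(-\tfrac{1}{\lambda}\int_t^T q(x_s)\,ds\Bigr)\,
  e^{-\phi(x_T)/\lambda}\Bigr]
```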
- Statistical Physics and Nonlinear Stochastic Optimal Control:
- Infinite Dimensional Optimization and Control:
- Parallel Computation for Stochastic Optimal Control and Learning:
Parallel Stochastic Path Integral Control for a single quadrotor navigating through moving obstacles.
Parallel Stochastic Path Integral Control for a team of 9 quadrotors navigating through moving obstacles.
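The basic computational pattern behind sampling-based path integral control can be sketched as follows: perturb a nominal control sequence, roll out each perturbed sequence through the dynamics, and reweight the perturbations by the exponentiated negative trajectory cost. This is a minimal single-process sketch, not the lab's parallel implementation; the function name, interfaces, and default parameters are invented for illustration.

```python
import numpy as np

def path_integral_control(x0, u_nom, dynamics, cost,
                          K=256, lam=1.0, sigma=0.5, rng=None):
    """One sample-based path integral control update (illustrative sketch).

    x0       -- initial state
    u_nom    -- nominal control sequence, shape (T, m)
    dynamics -- callable (x, u) -> next state
    cost     -- callable (x, u) -> scalar running cost
    Returns an updated (T, m) control sequence.
    """
    rng = np.random.default_rng(rng)
    T, m = u_nom.shape
    eps = rng.normal(0.0, sigma, size=(K, T, m))   # control perturbations
    costs = np.zeros(K)
    for k in range(K):                             # K independent rollouts
        x = x0
        for t in range(T):
            u = u_nom[t] + eps[k, t]
            costs[k] += cost(x, u)
            x = dynamics(x, u)
    costs -= costs.min()                           # numerical stability
    w = np.exp(-costs / lam)
    w /= w.sum()                                   # softmax weights over rollouts
    # Cost-weighted average of the perturbations updates the nominal controls
    return u_nom + np.einsum('k,ktm->tm', w, eps)
```

Because the K rollouts are independent, this inner loop is what parallelizes naturally across cores or GPU threads in the parallel implementations shown in the videos above.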
- Learning Motor Control skills in Robotics using Reinforcement Learning:
- Neural Networks for Stochastic Optimal Control:
- Predictive Control for Neuromodulation:
Application of Reinforcement Learning methods for suppressing epileptic seizures.