Introduction to Track D

Track D introduces members to decision-making and optimal control in Artificial Intelligence. Deep Reinforcement Learning is one of the most active and fastest-growing branches of Artificial Intelligence today. Unlike Computer Vision, which focuses on perception, Deep Reinforcement Learning addresses decision-making and optimal control problems, and is widely considered a core technology of future Artificial Intelligence. The victories of AlphaGo and AlphaZero over human players were built on Deep Reinforcement Learning.

Recently, Deep Reinforcement Learning has made breakthroughs in sample efficiency, stability, and generalization, making it ready for a wide range of applications. For instance:

1. It improves the accuracy of Computer Vision;
2. It is used in autonomous systems such as autonomous driving and robot learning;
3. It is used for optimizing financial transaction systems and city management systems;
4. It is used for optimizing manufacturing processes.

The objective of this track is to enable members with basic Machine Learning and Deep Learning knowledge to tackle real-world decision-making and optimal control problems.

Prerequisites

A basic understanding of Machine Learning and Deep Learning is required before learning Deep Reinforcement Learning.

Main Contents

In this track, members will learn the following topics and skills:

Value Iteration
Policy Iteration
Monte Carlo Learning
TD Learning
Q-Learning
Deep Q-Learning (DQN)
Double DQN
Dueling Network Architecture
Soft Q-Learning
Sarsa
Policy Gradient Methods
Vanilla Policy Gradient
REINFORCE Algorithm
Actor-Critic/A3C/A2C
Natural Policy Gradient/Trust Region Policy Optimization (TRPO)
Proximal Policy Optimization (PPO)
Deterministic Policy Gradient (DPG)
Deep Deterministic Policy Gradient (DDPG)
Model-based Reinforcement Learning
Meta Learning
Hierarchical Reinforcement Learning
Robot Learning
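
To give a concrete taste of one topic from the list above, here is a minimal tabular Q-Learning sketch on a toy one-dimensional corridor environment. The environment, hyperparameters, and function names are illustrative assumptions for this sketch, not part of the track materials.

```python
# Minimal tabular Q-Learning sketch on a toy 1-D corridor MDP.
# States 0..4; the agent starts at 0 and receives reward 1.0
# on reaching the goal state 4. All details here are illustrative.
import random

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # move left or move right

def step(state, action):
    """Return (next_state, reward, done) for the corridor MDP."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action_index]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = 0 if q[state][0] > q[state][1] else 1
            next_state, reward, done = step(state, ACTIONS[a])
            # Q-Learning update: bootstrap from the greedy next-state value
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][a] += alpha * (target - q[state][a])
            state = next_state
    return q

q = train()
policy = [ACTIONS[0 if q[s][0] > q[s][1] else 1] for s in range(N_STATES - 1)]
print(policy)  # the learned greedy policy should move right toward the goal
```

The key line is the update toward `reward + gamma * max(q[next_state])`: Q-Learning bootstraps from the best next-state action regardless of the action actually taken, which is what makes it off-policy (contrast with Sarsa, which bootstraps from the action the behavior policy actually selects).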