Reinforcement Car Racing with A3C

Se Won Jang
Jesik Min
Chan Lee
[email protected]
[email protected]
[email protected] ∗
Stanford University
Abstract

In this paper, we introduce our implementation of the asynchronous advantage actor-critic (A3C) model and discuss how different convolutional neural network designs affect its performance in the OpenAI Gym CarRacing-v0 environment. We also explore how A3C hyperparameters (learning rate, number of threads) influence the effectiveness of the CNN feature extractor by visualizing the filter weights and comparing the network scores. Our implementation of the A3C model with a carefully designed CNN feature extractor shows an average reward increase of approximately 100.0 points over the vanilla A3C implementation, currently achieving fourth place on the CarRacing-v0 environment leaderboard.

1. Introduction

The OpenAI Gym CarRacing-v0 environment is one of the very few unsolved environments in the OpenAI Gym framework. While many recent deep reinforcement learning algorithms such as DDQN, DDPG, and A3C are reported to perform well in simple environments such as the Atari games[10][8][9], the car racing environment is particularly difficult to solve with these prior algorithms due to its complexity and randomness. When random, complicated pixel inputs are fed into the CNN portion, the deep RL networks that follow the CNN do not learn well. We try to solve this environment with a carefully designed CNN component and our modification of the state-of-the-art A3C algorithm based on the concept of "continuous certainty". To our knowledge, there has been no in-depth, extensive exploration of how the image processing part of deep RL pipelines, primarily composed of convolutional neural networks, affects the performance of the entire game-solving pipeline. We hope that the discussion and improved understanding of the front-end CNN portion of deep RL pipelines, drawn from this work on a particular 2D game, can provide useful insights into more complex real-world challenges such as autonomous vehicles.

∗ Jaehyun Kim ([email protected]), who is not taking this class but CS234, worked with us in devising and implementing A3C with continuous certainty. We consulted Guillaume Genthial, our CS234 mentor, for comments and for guidance on the direction of the paper. We are also grateful to the course staff of CS231A, CS231N, and CS234, including Rishi Bedi, our CS231N mentor, who gave us very useful feedback throughout the quarter.

2. Background

2.1. Simulation Environment

Our target environment is the OpenAI Gym CarRacing-v0 environment, an experimental OpenAI Gym environment for a 2D car racing game that has not been solved yet. This environment requires many graphics dependencies, including Box2D and OpenGL. Since we implement algorithms such as A3C that depend on multi-threading, we had to either modify the OpenAI Gym environment itself or create a wrapper class that handles race conditions in order to circumvent these issues. The environment has several rules. As shown in Figure 1, each frame consists of 96 × 96 pixels with 3 color channels. Reward is computed at every frame: −0.1 per frame, plus 1000/N for every track tile visited, where N is the total number of tiles in the track. Since the track is randomly generated for each game, N varies, ranging from 230 to 320 in most cases. An episode finishes when all tiles are visited or 1000 frames have been played. One of the most distinctive characteristics of the environment is its continuous action space, as opposed to many other games, including the Atari games, which offer only discrete actions. Each action consists of 3 continuous values corresponding to steering, acceleration, and brake. Steering ranges from −1.0 (the leftmost steering angle) to 1.0 (the rightmost steering angle); acceleration and brake range from 0 to 1. Most importantly, the environment defines "solving" as achieving an average reward of 900 over 100 consecutive trials. We found this standard rather strict, because even an experienced human player found it hard to consistently score
above 900.0, and no model has solved the game yet.
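The reward scheme of Section 2.1 can be sketched as follows; the function is an illustrative reconstruction of the stated rules, not the environment's internal code.

```python
# Sketch of the CarRacing-v0 reward rules described in Section 2.1:
# -0.1 per frame, plus 1000/N for each newly visited track tile,
# where N is the total number of tiles in the randomly generated track.
def episode_reward(frames_played, tiles_visited, num_tiles):
    return -0.1 * frames_played + 1000.0 * tiles_visited / num_tiles

# A full 1000-frame episode that visits every tile of a 300-tile track
# scores about 900, exactly the environment's "solved" threshold.
assert abs(episode_reward(1000, 300, 300) - 900.0) < 1e-6
```

This also makes the strictness of the threshold concrete: an agent that finishes a 300-tile track must do so well within the 1000-frame limit, since every extra frame costs 0.1 points.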
(a) OpenAI Gym CarRacing-v0 Webpage
(b) In-game Screenshot of CarRacing-v0

Figure 1. Our Target Simulated Environment: OpenAI Gym CarRacing-v0. Figure 1(a) is a screenshot of the main page of the OpenAI Gym CarRacing-v0 experimental environment. Figure 1(b) is a screenshot of the actual game being played by a person.

2.2. Related Work

There have been recent breakthroughs in convolutional neural networks for image recognition and related computer vision tasks. Since the landmark work of AlexNet[7], one of the first to tackle the ImageNet classification challenge with convolutional neural networks, there have been many modifications and improvements over vanilla CNNs, such as VGGNet[16], SqueezeNet[4], and hierarchical CNNs (HD-CNN)[18]. Many studies have focused on implementing deep yet efficient and stable CNN architectures. One of the breakthroughs that made many vision tasks easier is "transfer learning," which can improve learning performance by avoiding much of the expensive data-labeling effort[13]. There have been many applications of transfer learning from well-trained networks like VGGNet to various domains, including cancer detection, video classification, and handwriting recognition[15][5][1].

Aided by such remarkable achievements in CNN techniques over the last few years, scholars have developed unprecedented deep reinforcement learning algorithms as well. Since Mnih et al. redefined the era of deep RL[10], many different games (e.g., Atari and OpenAI Gym) have been solved with various architectures[14]. Recently, there have been significant efforts to tackle environments with continuous settings, for example continuous action spaces, because such games closely resemble the real world[8][11][9][17]. However, while many deep RL algorithms receive raw pixels as inputs and therefore rely heavily on CNNs to process those image inputs, there has been no extensive exploration of how to find a proper CNN architecture, how to preprocess the inputs for CNNs, or whether pretrained CNNs can be applied to a target environment. Some works have rigorously experimented with the hyperparameters of CNNs alone[2], and some have benchmarked different types of deep RL algorithms[3]. Although Kawaguchi et al. showed that deep learning needs to be carefully configured to achieve the expected performance[6], there has been no exploration of CNNs combined with state-of-the-art deep RL. Here, we extensively discuss how our novel modification of A3C achieves a competitive score through careful CNN design.

3. Method

3.1. Convolutional Neural Network and Image Preprocessing

As mentioned in the simulation environment section, the CarRacing-v0 environment represents each game state as a 96 × 96 × 3 RGB array. The bottom 12 × 96 × 3 pixel area (see Figure 1(b)) contains the car dashboard, displaying state information such as the velocity, acceleration, gyroscope reading, and relative steering wheel angle of the car agent. It is important to note that this state information is only indirectly represented by the 96 × 96 × 3 game state array and is not explicitly accessible anywhere within the framework. Since the state frame contains only raw RGB pixel data, and we do not know exactly how each piece of state information is rendered on the bottom dashboard, we decided to use a convolutional neural network as the feature extractor for our A3C model. This CNN module behaves as an add-on to the A3C model, extracting visual information from the given state frames and squashing it into an n-dimensional real vector. The A3C model, upon receiving the feature vector, computes the policy and expected return of the agent at the given state. For a number of reasons further explained in the results section, instead of training the CNN feature extractor on raw RGB pixel data, we take several steps to preprocess the state frames. First, we apply grey-scaling to the original raw pixel
frame, reducing the depth of the image from 3 to 1. Second, we subtract 127.0 from each pixel so that the values are zero-centered in the range −127 to 128; this makes the model more robust to the randomization of the filter weights. Third, as mentioned in the previous paragraph, we remove the dashboard by cropping the 12 × 96 pixel area at the bottom of each frame. We additionally crop a 96 × 6 pixel strip from both the left and right ends of each frame so that each frame becomes an 84 × 84 square array. Since we cropped out the bottom dashboard area, we need a way to restore the lost velocity, acceleration, and steering wheel position information to our game state. Therefore, we lastly concatenate 5 consecutive 84 × 84 preprocessed frames to construct a single state representation. This is the same method Mnih et al.[10] used to extract spatial movement information in a number of the Atari game problems. The resulting state array has the shape 84 × 84 × 5 and is fed to our CNN feature extractor.

As further explained in the results section, we experiment with a variety of CNN layers and hyperparameters. The criteria we considered are 1) the performance of the CarRacing agent as defined by the environment, and 2) training time. Since many RL algorithms, and A3C models in particular, are known to take a long time to train and converge, we put heavy focus not only on performance but also on training time. Deeper CNNs, as seen in recent trends, may be able to represent the state more accurately as feature vectors. However, with a large number of parameters and high computational complexity, it would be very hard to distinguish whether our model has already converged or is still being optimized. This is especially true for the CarRacing-v0 problem, which is computationally much heavier than other OpenAI Gym environments and hence takes longer to execute an action at each step.

Since training time is a very important criterion, our implementation of the CNN does not have max-pooling, batch normalization, or dropout layers between the convolutional layers. Instead, we attempt to replace these layers with 1) zero-centering of the raw pixel values, 2) Xavier weight initialization with reduced scale, and 3) a relatively shallow network. Our results verify this hypothesis. As shown in Table 1, the shallower and lighter CNN feature extractors showed better performance, with our 2-layer CNN model ultimately achieving fourth rank on the OpenAI Gym leaderboard.
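The preprocessing pipeline described in Section 3.1 (grey-scaling, zero-centering, cropping, and 5-frame stacking) can be sketched as follows; the exact crop offsets and the padding behavior at episode start are our assumptions, consistent with the text but not copied from our implementation.

```python
import numpy as np
from collections import deque

# Sketch of the Section 3.1 preprocessing: grayscale, zero-center,
# crop the dashboard and side strips, then stack 5 consecutive frames.
def preprocess(frame):                              # frame: 96 x 96 x 3 uint8
    gray = frame.astype(np.float32).mean(axis=2)    # depth 3 -> 1
    gray -= 127.0                                   # zero-center to [-127, 128]
    return gray[:84, 6:90]                          # drop bottom 12 rows, 6 px per side

class FrameStack:
    """Concatenates the 5 most recent preprocessed frames into one state."""
    def __init__(self, k=5):
        self.frames = deque(maxlen=k)

    def push(self, frame):
        self.frames.append(preprocess(frame))
        while len(self.frames) < self.frames.maxlen:
            self.frames.append(self.frames[-1])     # pad at episode start (assumption)
        return np.stack(list(self.frames), axis=-1) # shape (84, 84, 5)

state = FrameStack().push(np.zeros((96, 96, 3), dtype=np.uint8))
assert state.shape == (84, 84, 5)
```

The crop `[:84, 6:90]` removes the 12-row dashboard and a 6-pixel strip from each side, yielding the 84 × 84 frames described above.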
3.2. Continuous Certainty

We introduce the concept of "continuous certainty" into the vanilla A3C model. It smooths out the discrete action space that the trained agent can choose from, so that the output of the policy network becomes completely continuous. We simply cherry-picked five possible actions for the agent to take, but this carries a stark disadvantage: regardless of how well the network is trained, the optimal action our agent chooses is always one of those five actions. That is, even though the action space of the environment is entirely continuous, the agent ignores this important characteristic simply because of the intrinsic architecture of the vanilla A3C model (see Figure 2 for a vanilla A3C example). As we will see in Section 5, DDPG, which targets continuous action spaces, is not apt for our target environment due to the environment's complexity, while the vanilla A3C model with cherry-picked discrete actions gives somewhat reasonable results. However, we suggest that the simple A3C model can still be improved by incorporating the continuous nature of the softmax probability when choosing the optimal policy. Instead of simply taking the argmax of the softmax probability vector as the optimal action, which can be extreme when the softmax probabilities of the actions are close to each other (as we discuss in the example below, Figures 3 and 4), we multiply the softmax probability with the argmax action, smoothing the five discrete actions into a fully continuous space. Mathematically, incorporating continuous certainty does not harm the backpropagation procedure either, since it is merely an additional multiplication.

Figure 2. An example of optimal policy output in the vanilla A3C model. In this contrived example, the softmax probability of the third action is the greatest, so the policy network chooses the third action as the optimal action to take from the given state.

Figure 3. A bad scenario for optimal policy selection with the vanilla A3C model. The softmax probabilities of the actions are very close to each other, and simply taking an argmax results in one extreme discrete value.

Let us see when our certainty multiplication is particularly helpful. Suppose an agent encounters a corner that yields a softmax probability of, for example, [0.19, 0.19, 0.24, 0.19, 0.19], as demonstrated in Figure 3. The simple A3C model will choose action 3, applying acceleration with magnitude 1.0, i.e., full acceleration. This might work in some cases, but taking full acceleration over consecutive time frames in this game is usually not a good strategy. If an agent encounters an unfamiliar, sharp corner after consecutive full accelerations, the agent (or even a well-trained human) cannot control the car properly and will most likely deviate from the track, resulting in a seriously bad score at the end. Now let us observe what happens when an agent extracts more information from the softmax probability. As described in Figure 4, by multiplying 0.24 with [0.0, 1.0, 0.0], the agent takes the action [0.0, 0.24, 0.0], which stabilizes learning because the agent lessens how much it accelerates. In other words, instead of taking full acceleration, the agent, being quite "uncertain" about which action is definitively better, becomes more careful in taking the argmax action. In this particular example, the "certainty" of taking the argmax action ([0.0, 1.0, 0.0]) is only 0.24, so the agent applies an acceleration of 0.24. One can notice that this is how a novice human driver learns to drive when driving an area for the first time.

Figure 5. Our Final A3C Model Architecture. A stack of five preprocessed frames is input to the network. The front-end two-layered CNN extracts image features from the pixel frames and passes them to the policy network and the value network, which output a 5 × 1 softmax vector and a 1-dimensional scalar value estimate, respectively. The argmax of the softmax vector from the policy network is then multiplied with the softmax value to give the optimal action the agent should take.
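The continuous-certainty rule above can be sketched in a few lines; the five [steer, gas, brake] triples below are illustrative stand-ins for our cherry-picked action set, not necessarily the exact actions we used.

```python
import numpy as np

# Sketch of "continuous certainty": scale the argmax one-hot action by its
# own softmax probability. The action set below is an assumed example of
# five cherry-picked [steering, acceleration, brake] triples.
ACTIONS = np.array([
    [-1.0, 0.0, 0.0],   # steer left
    [ 1.0, 0.0, 0.0],   # steer right
    [ 0.0, 1.0, 0.0],   # accelerate
    [ 0.0, 0.0, 1.0],   # brake
    [ 0.0, 0.0, 0.0],   # no-op
])

def continuous_certainty_action(softmax_probs):
    i = int(np.argmax(softmax_probs))
    return softmax_probs[i] * ACTIONS[i]    # certainty-scaled continuous action

# The low-certainty case from the text: full acceleration is
# softened to an acceleration of 0.24.
action = continuous_certainty_action([0.19, 0.19, 0.24, 0.19, 0.19])
assert np.allclose(action, [0.0, 0.24, 0.0])
```

Because the scaling is a single multiplication on the network output, gradients flow through it unchanged during backpropagation, as noted above.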
Figure 4. A diagram showing how a simple additional multiplication of the continuous probability with the argmax action smooths out the extreme choice of a discrete action. It is the same case described in Figure 3, but multiplying by the softmax probability gives a different, less extreme action.

5. Results/Analysis

5.1. CNN Architectures

As mentioned in Section 2, the game agent does not know how each piece of state information (velocity, acceleration, steering wheel position) is computed and rendered on the bottom dashboard of the game state frame. We hypothesized that a carefully designed CNN would be able to extract visual state information from the RGB frames, providing the A3C RL agent with sufficient data to infer critical information such as the curve angle, velocity, acceleration, steering wheel position, distance from the road center, etc. The CNN, upon extracting this information, would embed it in an n-dimensional state vector, from which the A3C network in turn computes the policy and the expected reward. Thus, the training process of the two networks (CNN and A3C) is joint rather than separate.

5.1.1 Limitations of Pretrained CNN
We initially considered pretraining the CNN on the visual components present in the environment. However, due to a number of limitations, we decided it would be better to train the CNN along with the A3C network. First, we could not use CNNs pretrained on other classification problems. The visual objects present in each frame can only be found in this specific environment, and nowhere else, which means that none of the existing CNNs could correctly detect the visual components of this game environment. Second, the number of classes (objects) on which to pretrain the network was not well defined. As explained in the simulation environment section, this environment randomly generates the game map for every episode, making it virtually impossible to label the frames with a finite number of classes, let alone the fact that there is no
way to guarantee the correctness of labels generated by automated scripts. Since the number of classes is a critical hyperparameter for image detection and classification CNNs, and since it is very hard to change once the network is trained, pretraining the CNN on the game environment is bound to be a very complex task. Lastly, since CNNs are designed to perform well on image classification and detection tasks, pretrained CNNs may not be well suited for an RL objective, which, in this case, is to maximize the reward by completing the course in a timely manner. Due to this difference in objective, pretrained CNN modules may neglect crucial information that the A3C RL agent needs to perform better. For these reasons, we deemed it unreasonable to use a pretrained network for this problem.

4. Experiments

For evaluation, we followed the evaluation rule of OpenAI Gym CarRacing-v0. We tested many different architectures, discussed in Section 5, and for each configuration we ran 100 games and took the average score.

5.1.2 Limitations of Raw RGB Pixel States

In the methods section, we noted our design decision to preprocess the frames and stack 5 consecutive frames to construct a single state. Interestingly, we found that it is not easy to train the CNN feature extractor to a reasonable performance with just the raw RGB state frames. Upon close analysis of the evaluation videos recorded by the OpenAI Gym monitoring feature, we noticed that our model trained on pure RGB state frames completely fails to learn to make curves or to slow down. This led us to realize that the 96 × 96 × 3 state representation, unlike the 1024 × 1024 × 3 frame rendered for human players during interactive gameplay, is too crude to capture the subtle changes in velocity, acceleration, steering wheel position, etc. For example, the acceleration bar in the game state representation is only 2 to 3 pixels high on average, indicating that our CNN had access to only an extremely lossy representation of the original 1024 × 1024 × 3 state frame. Therefore, we changed the architecture of our CNN to take 84 × 84 × 5 preprocessed state pixel arrays instead of 96 × 96 × 3 raw frames, as described in Section 3.1. To extend the discussion on RGB pixel states: after trying the several preprocessing schemes compared in Table 2, we found that Canny edge detection yields noisy, low-quality edges and that HOG features simply lose important information about the track; neither improves the pipeline, and both actually degrade performance. On the other hand, applying Laplacian edge detection improved the average performance by a small amount (about 20 points), but its standard deviation was twice that of our initial choice. Therefore, for evaluation purposes, we chose the simple grayscale, mean-shift, and crop strategy for image preprocessing instead of the Laplacian edge detector, in order to make our performance more consistent and stable during evaluation.

5.1.3 CNN Performance Analysis

We tested our model with multiple CNN architectures of varying depths, from 2 to 7 layers. For the deeper CNNs, we used a filter size of 3 and a stride of 2 for most layers so that the network still has a large enough receptive field to detect large, critical visual components such as a wide curve or a curve-start indicator. For the shallower networks, we had to use larger filter sizes and strides (8 and 4 for the 2-layer model) for the same purpose. Table 1 shows the performance comparison of our models in the CarRacing-v0 environment. Note that although we tested more CNN architectures, only the models that showed reasonable performance are listed in the table.

In Table 1, we can clearly see that the model with the shallowest CNN architecture performs best on the given task. The deeper CNNs converge to scores of about 300, while our shallowest model, with 2 layers and wider filters, converges to 571 (https://gym.openai.com/evaluations/eval_IEdi97CIQeC7ZFKmM9L3dA). Moreover, a close inspection of the episodic results shows that the model achieves a very high score of over 700 in approximately 25 percent of all evaluation episodes. The reason the mean score is 571.68 is that in three out of one hundred episodes, the car agent scores close to zero. Although the evaluation videos were not saved for these episodes, we were later able to reproduce this behavior with the same model. It stems from the randomness of the racing circuit generation in the CarRacing-v0 environment: in the episodes where our car agent ends up with a very low score, the opening frames contain a very sharp 160- to 180-degree turn, and the frame looks as if there are two tracks to choose from. The car agent then gets confused about which road to take and gets stuck between the two roads in the grass zone, resulting in very low scores of around 0 to 20 points.

The result supports our initial hypothesis that deeper CNNs would not perform better than shallower ones, for several reasons. First, as we can see in Figure 1, the state frames returned by the CarRacing-v0 environment are not complex enough to require a deep CNN architecture. Tracks are colored grey, grass green, and the car red. A frame is only 84 × 84 (after preprocessing), even smaller than Atari Pong, which has a state size of 210 × 160 × 3, and much smaller than the average ImageNet example. Moreover, each frame typically contains only about 5 different colors, with very simple shapes such as straight lines and square patches of grass. We therefore do not need a complex, deep CNN architecture for this problem. Second, deeper and more complex CNN architectures are harder to train. Since our training objective is not a classification softmax score, backpropagation is conducted not after an image and a label are shown, but after the agent receives a reward for executing an action. This means that a deep CNN known to achieve super-human performance on image classification tasks may not be the best model for our RL objective. It would be very hard to train the large number of parameters that come with the deeper models, and the model would very likely fall into a local minimum at an early stage of training. Third, deeper models take longer to train. Deeper convolutional networks inherently have more parameters, and hence require more training examples and more iterations to converge. This problem becomes more evident with our A3C model, which is known to take a long time to train. For example, even our model with the simplest CNN architecture (CNN Model #4 of Table 1) took 1 million iterations to converge. Since the A3C model utilizes multicore CPUs rather than GPUs, the training time increases significantly with the number of parameters. Moreover, we found it very hard to tell whether an A3C model for this environment has converged. As we describe in our CS234 paper, the model improves performance after sudden bursts in the losses and after long periods of extremely low rewards. We have not yet found the right way to decide whether the model has converged. In our implementation, we deemed the model converged if the average reward did not increase by 10 points over 200,000 iterations. However, this may not be the right way to determine the convergence of an A3C model with a CNN, which means that the models with deeper CNNs could potentially have converged to parameters with higher performance. Still, this does not change the fact that the deeper models took significantly more time to reach a given performance level, to a degree we consider quite unreasonable (over 2 days on Google Cloud 8-core CPU machines).
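The convergence heuristic described above (deeming the model converged when the average reward fails to improve by at least 10 points over 200,000 iterations) can be sketched as follows; the bookkeeping details are illustrative assumptions, not our training code.

```python
# Sketch of the convergence heuristic from Section 5.1: declare convergence
# once the best running average reward has not improved by at least
# `min_gain` points for `patience` consecutive checks.
class ConvergenceMonitor:
    def __init__(self, min_gain=10.0, patience=200_000):
        self.min_gain, self.patience = min_gain, patience
        self.best, self.since_best = float("-inf"), 0

    def update(self, avg_reward):
        if avg_reward >= self.best + self.min_gain:
            self.best, self.since_best = avg_reward, 0  # meaningful improvement
        else:
            self.since_best += 1
        return self.since_best >= self.patience         # True -> deem converged

m = ConvergenceMonitor(patience=3)
assert [m.update(r) for r in [100, 105, 104, 103]] == [False, False, False, True]
```

As noted above, a heuristic like this can declare convergence prematurely, since A3C rewards in this environment improve in sudden bursts after long flat stretches.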
CNN Model #1 (5 layers)
    L1-L5: F=16, W=3, S=2
    2 threads: 187.03 ± 44.54    4 threads: 169.25 ± 41.87

CNN Model #2 (4 layers)
    L1: F=16, W=8, S=3; L2: F=32, W=5, S=2; L3: F=32, W=4, S=2; L4: F=16, W=3, S=2
    2 threads: 182.42 ± 31.71    4 threads: 198.23 ± 36.96

CNN Model #3 (3 layers)
    L1: F=16, W=8, S=3; L2: F=32, W=3, S=2; L3: F=32, W=2, S=1
    2 threads: 391.26 ± 22.62    4 threads: 370.02 ± 32.14

CNN Model #4 (2 layers)
    L1: F=16, W=8, S=4; L2: F=32, W=3, S=2
    2 threads: 571.68 ± 19.38    4 threads: 481.65 ± 17.91

Table 1. Effects of the CNN architecture and the number of threads on the overall performance of the pipeline. The best performance was observed with the two-layered CNN architecture (Model #4) and 2 threads.
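CNN Model #4 from Table 1 can be sketched as follows. This is a hedged reconstruction in PyTorch (not necessarily the framework we used): only the filter counts, widths, and strides come from the table; the valid padding, ReLU activations, and the 256-unit hidden layer are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of CNN Model #4 (Table 1): two conv layers over the 84x84x5 state,
# feeding the A3C policy and value heads. Hidden size and activations are
# assumptions; filter shapes/strides are from the table.
class Model4(nn.Module):
    def __init__(self, n_actions=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(5, 16, kernel_size=8, stride=4), nn.ReLU(),   # 84 -> 20
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),  # 20 -> 9
            nn.Flatten(),
        )
        self.hidden = nn.Sequential(nn.Linear(9 * 9 * 32, 256), nn.ReLU())
        self.policy = nn.Linear(256, n_actions)  # 5-way softmax over actions
        self.value = nn.Linear(256, 1)           # scalar state-value estimate

    def forward(self, x):                        # x: (batch, 5, 84, 84)
        h = self.hidden(self.features(x))
        return torch.softmax(self.policy(h), dim=-1), self.value(h)

pi, v = Model4()(torch.zeros(1, 5, 84, 84))
assert pi.shape == (1, 5) and v.shape == (1, 1)
```

The wide 8 × 8 stride-4 first layer gives this shallow network the large receptive field discussed in Section 5.1, while keeping the parameter count small enough for CPU-based A3C training.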
Grayscale, mean-shift, and crop    571.68 ± 19.38
Canny edge detector                430.15 ± 36.71
Laplacian edge detector            590.90 ± 45.01
HOG features                       390.28 ± 29.23

Table 2. Effects of different image preprocessing strategies on the overall performance of the entire pipeline. The best mean performance was observed with the Laplacian edge detector, but with a high standard deviation.
5.2. CNN Filter Activations

To check what the CNN actually learned in the well-performing pipeline, we visualized the CNN weights of the two convolutional layers. As stated for CNN Model #4 in Table 1, the weights of the first convolutional layer have shape 8 × 8 × 5 × 16 and the weights of the second layer have shape 3 × 3 × 16 × 32. As shown in Figure 6, most filters of the first convolutional layer detect a progression over the five stacked frames from a straight line to a corner. As shown in Figure 7, some filters of the first layer detect a progression from a corner back to a straight track. Considering that these two types of progression are the most important track features the agent needs to detect to perform well, we can say that the CNN with optimal hyperparameters learned something useful. The weights of the second convolutional layer were harder to interpret, but we assume they capture finer details of the large patches captured by the first layer.
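A mosaic like those in Figures 6 and 7 can be produced by tiling the first-layer weights; `w1` below is a random stand-in for the trained 8 × 8 × 5 × 16 tensor, since the checkpoint itself is not part of this paper.

```python
import numpy as np

# Sketch of the filter-weight visualization behind Figures 6 and 7:
# tile the conv-1 weights so that rows correspond to the 5 stacked input
# frames and columns to the 16 filters.
def tile_filters(w):                        # w: (H, W, in_frames, n_filters)
    h, wd, t, f = w.shape
    lo, hi = w.min(), w.max()
    grid = (w - lo) / (hi - lo + 1e-8)      # normalize weights to [0, 1] for display
    return np.concatenate(
        [np.concatenate([grid[:, :, i, j] for j in range(f)], axis=1)
         for i in range(t)],
        axis=0)

# Random stand-in for the trained 8x8x5x16 first-layer weights.
w1 = np.random.randn(8, 8, 5, 16)
img = tile_filters(w1)
assert img.shape == (8 * 5, 8 * 16)         # 40 x 128 mosaic
```

Reading one column top to bottom then shows how a single filter responds across the five consecutive frames, which is how the straight-to-corner progressions in Figures 6 and 7 were identified.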
Figure 6. The 5th and 11th filters of the first convolutional layer.
Figure 7. 13th filter of the first convolution layer.
Figure 8. 23rd filter of the second convolution layer.
5.3. Different RL Algorithms

While we have focused on how the CNN portion of the pipeline influences overall performance, the other core component of the pipeline is the deep reinforcement learning network. We tried many different deep RL algorithms, including classic DDQN with a discrete action space[10], vanilla DDPG[8], our modification of DDPG with a human-replay buffer, vanilla A3C[9], and our novel implementation of A3C with continuous certainty. As shown in Figure 9, the classical DDQN approach did not perform well. More precisely, the agent trained with DDQN had a hard time after the very first corner. Such behavior was expected: since the map is randomized for each game and the DDQN exploration epsilon decays linearly, the agent struggles to learn anything useful, particularly at the early learning stage. DDPG, which was originally designed for continuous action spaces, also did not perform well. Even aided with a human-replay buffer, performance did not improve, and the optimal action values output by the agent quickly saturated to extreme values, for instance [−1.0, 1.0, 1.0]. In other words, at least in this particular environment, DDPG quickly fell into a local minimum and could not recover. The agent trained with the A3C algorithm gave a very decent performance. However, since the vanilla version of A3C simply outputs a discrete action, we could further improve performance with our idea of "continuous certainty." It gave us the best performance among all the models and ranked fourth on the entire leaderboard. We believe that our concept of reusing the softmax probability after taking the argmax of the output can be applied broadly to deep neural networks, including CNNs, that require continuous outputs. For instance, for video prediction, the task of predicting future video frames from past frames[12], one might add continuous certainty at the end of the pipeline to obtain a continuous output.
Figure 9. Performance of different deep RL algorithms. We evaluated the five different models we implemented to solve the task; each reported average score is the performance of the best model for each architecture. The A3C model with continuous certainty recorded 571.68 with a standard deviation of 19.38, a competitive record compared with some of the best scores uploaded to OpenAI Gym so far.
6. Conclusion

We have presented our exploration of the OpenAI Gym CarRacing-v0 environment, in which most deep reinforcement learning algorithms perform poorly due to the complex and random nature of the environment. We built on top of the asynchronous advantage actor-critic (A3C) model with the concept of "continuous certainty," which multiplies the argmax action by its softmax probability to avoid extreme actions in uncertain circumstances. More importantly, we experimented with many different CNN architectures along with various image preprocessing techniques that might enhance CNN performance, such as gray-scaling, mean-shifting, cropping, and edge detection. While there has been no significant focus on the CNN component of game-solving deep reinforcement learning pipelines, as deep RL networks become deeper and more complicated, as in the case of our multi-threaded A3C with continuous certainty, a carelessly designed CNN portion (e.g., unnecessarily deep layers or arbitrary image preprocessing without verification) can degrade the overall performance of the pipeline. As a result, our final implementation, using a 2-layer CNN and 2 threads, shows an average reward increase of 100.0 points over the vanilla A3C implementation, currently achieving fourth place on the CarRacing-v0 environment leaderboard.

For future work, we have several ideas to improve the model presented in this paper. First, we may try a deep residual CNN for the CNN component. Based on our observations so far, we believe this will not significantly improve, and may even harm, performance, as the CNN becomes harder to train given the low resolution of the inputs; but it is still worth trying, to test whether applying a state-of-the-art CNN is always advantageous in a deep RL framework. In addition, after training for more than 4,000,000 iterations, we noticed that different models at different checkpoints tended to be good at one sub-task but not at another. For example, some overfitted models performed better on straight segments, while some more generalizable models did well at cornering but not as well as the overfitted ones on straight lanes (they would oscillate left and right on a straight line, which is an obstacle to getting a higher score). Hence, we would like to try an ensemble of models that are particularly good at straight lanes and models that are good at cornering. We could also try other image processing techniques to boost performance. Last but not least, we would like to explore the performance of other models that we did not have time to try. For instance, we suspect that A3C with LSTMs could enhance performance significantly. Our current model attempts to capture a window of frames to reflect recent history for the next move, but an LSTM is a more modern, reliable, and explicit way for the agent to learn to determine its next action from past states. We would also like to see the performance of simple policy gradients and other modifications of DDQN if possible.
7. Github Repositories
A3C: https://github.com/sjang92/car racing
DDPG1: https://github.com/jessemin/racing ddpg
DDPG2: https://github.com/jessemin/car racing
DDQN: https://github.com/jakekim1009/hw2 for racing
We implemented our DDQN agent based on the code from CS234 Assignment 2.
References
[1] D. C. Cireşan, U. Meier, and J. Schmidhuber. Transfer learning for latin and chinese characters with deep neural networks. In Neural Networks (IJCNN), The 2012 International Joint Conference on, pages 1–6. IEEE, 2012.
[2] T. Domhan, J. T. Springenberg, and F. Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In IJCAI, pages 3460–3468, 2015.
[3] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.
[4] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360, 2016.
[5] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014.
[6] K. Kawaguchi. Deep learning without poor local minima. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 586–594. Curran Associates, Inc., 2016.
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[8] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[9] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the International Conference on Machine Learning, 2016.
[10] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. In NIPS Deep Learning Workshop, 2013.
[11] G. Neumann. The reinforcement learning toolbox, reinforcement learning for optimal control tasks. 2005.
[12] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in atari games. In Advances in Neural Information Processing Systems, pages 2863–2871, 2015.
[13] S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
[14] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008.
[15] H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, and R. M. Summers. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Transactions on Medical Imaging, 35(5):1285–1298, 2016.
[16] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[17] K. G. Vamvoudakis and F. L. Lewis. Online actor–critic algorithm to solve the continuous-time infinite horizon optimal control problem. Automatica, 46(5):878–888, 2010.
[18] Z. Yan, H. Zhang, R. Piramuthu, V. Jagadeesh, D. DeCoste, W. Di, and Y. Yu. HD-CNN: hierarchical deep convolutional neural networks for large scale visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 2740–2748, 2015.