CS 7642: Reinforcement Learning and Decision Making

Lunar Lander Project #2
1 Problem
1.1 Description
For this project, you will be implementing and training an agent to successfully land the “Lunar Lander” in OpenAI gym. You are free to use and extend any type of RL agent.
1.2 Lunar Lander Environment
The problem consists of an 8-dimensional continuous state space and a discrete action space. The four discrete actions available are: do nothing, fire the left orientation engine, fire the main engine, and fire the right orientation engine. The landing pad is always at coordinates (0, 0), and the coordinates are the first two numbers in the state vector. The total reward for moving from the top of the screen to the landing pad ranges from 100 to 140 points, depending on the lander's placement on the pad. If the lander moves away from the landing pad, it is penalized by the amount of reward that would be gained by moving towards the pad. An episode finishes if the lander crashes or comes to rest, receiving an additional -100 or +100 points respectively. Each leg's ground contact is worth +10 points. Firing the main engine incurs a -0.3 point penalty per occurrence. Landing outside of the landing pad is possible. Fuel is infinite, so an agent could learn to fly and then land on its first attempt. The problem is considered solved when the agent achieves a score of 200 points or higher on average over 100 consecutive runs.
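The "solved" criterion above is a rolling average over the most recent 100 episode scores; a minimal sketch of that check (the function name and signature are illustrative, not part of the assignment):

```python
def is_solved(episode_rewards, window=100, threshold=200.0):
    """Return True once the mean total reward over the last `window`
    consecutive episodes reaches `threshold`."""
    if len(episode_rewards) < window:
        return False
    recent = episode_rewards[-window:]
    return sum(recent) / window >= threshold

# e.g. 100 consecutive episodes all scoring 210 counts as solved
print(is_solved([210.0] * 100))  # True
```

Note that a single good episode is not enough; the average must hold over 100 consecutive runs, so a check like this is typically evaluated after every episode during training.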
1.3 State Representation
As noted earlier, there are four actions available to your agent: do nothing, fire the left orientation engine, fire the main engine, fire the right orientation engine. Additionally, at each time step, the state is provided to the agent as an 8-tuple:
(x,y,vx,vy,θ,vθ,legL,legR)
x and y are the x- and y-coordinates of the lunar lander's position. vx and vy are the lander's velocity components on the x and y axes. θ is the angle of the lander, and vθ is its angular velocity. Finally, legL and legR are binary values indicating whether the left leg or right leg of the lander is touching the ground. Note, again, that you are working in a six-dimensional continuous state space with the addition of two discrete variables.
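When debugging, it can help to give the eight state components names rather than indexing the raw observation vector; a sketch using a NamedTuple (the class and field names are our own, chosen to mirror the 8-tuple above):

```python
from typing import NamedTuple

class LanderState(NamedTuple):
    x: float        # horizontal position (pad is at x = 0)
    y: float        # vertical position (pad is at y = 0)
    vx: float       # horizontal velocity
    vy: float       # vertical velocity
    theta: float    # lander angle
    vtheta: float   # angular velocity
    leg_l: float    # 1.0 if the left leg touches the ground, else 0.0
    leg_r: float    # 1.0 if the right leg touches the ground, else 0.0

# an observation vector from the environment can be wrapped directly:
obs = [0.0, 1.4, 0.0, -0.5, 0.02, 0.0, 0.0, 0.0]
s = LanderState(*obs)
print(s.y, s.leg_l)  # 1.4 0.0
```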
1.4 Procedure
This problem is more sophisticated than anything you have seen so far in this class. Make sure you reserve enough time to consider what an appropriate solution might involve and, of course, enough time to build it.
• Create an agent capable of solving the Lunar Lander problem found in OpenAI gym
– Upload/maintain your code in your private repo at https://github.gatech.edu/gt-omscs-rldm
– Use any RL agent discussed in the class as your inspiration and basis for your program
• Create graphs demonstrating
– The reward for each training episode while training your agent
– The reward per episode for 100 consecutive episodes using your trained agent
– The effect of at least 3 hyper-parameters of your choosing on your agent
∗ You will select the ranges to be evaluated.
∗ Evaluate your results in the context of the algorithm and the problem. Why do your results make sense?
• The quality of the code is not graded, so you don't have to spend countless hours adding comments, etc. However, it will be examined by the TAs.
• Make sure to include a README.md file for your repository
– Include thorough and detailed instructions on how to run your source code in the README.md
• You will be penalized by 25 points if you:
– Do not have any code or do not submit your full code to the GitHub repository
– Do not include the git hash for your last commit in your paper
• Write a paper describing your agent and the experiments you ran
– Include the hash for your last commit to the GitHub repository in the paper’s header.
– Make sure your graphs are legible and you cite sources properly. While it is not required, we recommend you use a conference paper format. Just pick any one.
– 5 pages maximum – really, you will lose points for longer papers.
– Explain your experiments.
– Graph: Reward at each training episode while training your agent and discussion of results
– Graph: Reward per episode for 100 consecutive episodes using your trained agent and discussion of the results.
– Graph: Effect of hyperparameters and discussion of the results.
– Explanation of pitfalls and problems you encountered.
– Explanation of algorithms used: what worked best? what didn’t work?
– What would you try if you had more time?
– Save this paper in PDF format.
– Submit to Canvas!
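If you base your agent on DQN, one common (but not required) choice for this problem, an experience replay buffer is a central component and also a natural hyperparameter to study (its capacity and batch size). A minimal sketch, independent of any specific library; the class name and interface here are our own:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) tuples."""

    def __init__(self, capacity=100_000, seed=0):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform sampling without replacement within the batch
        return self.rng.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for t in range(5):
    buf.push([0.0] * 8, t % 4, -0.3, [0.0] * 8, False)
batch = buf.sample(3)
print(len(buf), len(batch))  # 5 3
```

Sampling transitions uniformly from a buffer like this breaks the correlation between consecutive experiences, which is one of the stabilizing ideas behind DQN-style agents.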
1.5 Notes
• If you get a Box2D error when running gym.make('LunarLander-v2'), you will have to compile Box2D from source. Please follow these steps and try running Lunar Lander again (https://github.com/openai/gym/issues/100):

pip uninstall box2d-py
git clone https://github.com/pybox2d/pybox2d
cd pybox2d/
python setup.py clean
python setup.py build
sudo python setup.py install
• Even if you don’t build an agent that solves the problem you can still write a solid paper.
1.6 Resources
1.6.1 Lectures
• Lesson 8: Generalization
1.6.2 Readings
1.7 Source Code
• https://github.com/openai/gym
• https://github.com/openai/gym/blob/master/gym/envs/box2d/lunar_lander.py
1.7.1 Documentation
• http://gym.openai.com/docs/
1.8 Submission Details
• Your written report in PDF format (Make sure to include the git hash of your last commit)
To complete the assignment, submit your written report to Project 2 under your Assignments on Canvas: https://gatech.instructure.com
Note: Late is late. It does not matter if you are 1 second, 1 minute, or 1 hour late. If Canvas marks your assignment as late, you will be penalized. Additionally, if you resubmit your project and your last submission is late, you will incur the penalty corresponding to the time of your last submission.
1.9 Grading and Regrading
When your assignments, projects, and exams are graded, you will receive feedback explaining your errors (and your successes!) in some level of detail. This feedback is for your benefit, both on this assignment and for future assignments. It is considered a part of your learning goals to internalize this feedback. This is one of many learning goals for this course, such as: understanding game theory, random variables, and noise.
It is important to note that because we consider your ability to internalize feedback a learning goal, we also assess it. This ability is considered 10% of each assignment. We default to assigning you full credit. If you request a regrade and do not receive at least 5 points as a result of the request, you will lose those 10 points.