MarathonEnvs v2.0.0-alpha.1
Pre-release
What's new in MarathonEnvs-v2.0.0-alpha.1
WebGL Demo / support for running in the browser
- See Web Demo
marathon-envs Gym wrapper (Preview)
- Use marathon-envs as an OpenAI Gym environment - see the documentation and the usage sketch below
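A minimal usage sketch, assuming the wrapper exposes the standard Gym API. The import path and constructor shown here (`marathon_envs.envs.MarathonEnvs`, the environment-name argument) are illustrative assumptions, so check the linked documentation for the actual names:

```python
# Sketch: driving a MarathonEnvs build through a Gym-style API.
# The module path and constructor below are illustrative assumptions.
from marathon_envs.envs import MarathonEnvs  # hypothetical import path

env = MarathonEnvs("MarathonMan-v0")          # pick one of the bundled agents
obs = env.reset()

done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()        # random policy for illustration
    obs, reward, done, info = env.step(action)
    total_reward += reward

print("episode return:", total_reward)
env.close()
```

Any Gym-compatible training library (for example OpenAI.Baselines) can then drive the environment through this interface.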
ml-agents 0.14.1 support
- Updated to work with ml-agents 0.14.1 / new inference engine
Unity 2018.4 LTS
- Updated to use Unity 2018.4 LTS. Later versions should also work, although Unity occasionally makes breaking physics changes.
MarathonManBackflip-v0
- Train the agent to complete a backflip based on motion capture data
- Merged from the StyleTransfer experimental repo
MarathonMan-v0
- Optimized for Unity3D and fixes some bugs in the DeepMind.xml version
- Merged from the StyleTransfer experimental repo
- Replaces DeepMindHumanoid
MarathonManSparse-v0
- Sparse-reward version of MarathonMan-v0
- A single reward is given at the end of the episode (see the sketch below)
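Because the reward is sparse, the per-step reward stream should be zero until the terminal step. A quick check, reusing the same hypothetical wrapper names as the Gym example above:

```python
# Sketch: inspect the sparse reward signal of MarathonManSparse-v0.
# Intermediate steps are expected to return 0; the single episode reward
# arrives on the terminal step. Wrapper names are hypothetical.
from marathon_envs.envs import MarathonEnvs  # hypothetical import path

env = MarathonEnvs("MarathonManSparse-v0")
obs, done = env.reset(), False
rewards = []
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    rewards.append(reward)

print("non-zero rewards:", [r for r in rewards if r != 0])  # expect only the last step
env.close()
```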
TerrainHopperEnv-v0, TerrainWalker2dEnv-v0, TerrainAntEnv-v0, TerrainMarathonManEnv-v0
- Random terrain environments
- Merged from the AssaultCourse experimental repo
SpawnableEnvs (Preview)
- Set the number of environment instances you want for training and inference
- Environments are spawned from prefabs, so there is no need to duplicate them manually
- Supports selecting from multiple agents in one build
- Unique physics scene per environment (makes it easier to port environments, but runs slower)
- SelectEnvToSpawn.cs - Optional menu to enable user to select from all agents in build
Scorer.cs
- Scores the agent against a 'goal' (for example, max distance) to distinguish rewards from goals
- Reports the mean and std-dev over 100 agents (see the sketch below)
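The scoring itself lives in the C# Scorer.cs; as a rough Python-side illustration of the same summary (mean and std-dev of episode scores, again using the hypothetical wrapper names from above):

```python
# Sketch: Scorer-style summary (mean and standard deviation of episode
# scores) computed over 100 episodes on the Python side. Wrapper names
# are the same hypothetical ones as in the earlier examples.
import numpy as np
from marathon_envs.envs import MarathonEnvs  # hypothetical import path

env = MarathonEnvs("MarathonMan-v0")
scores = []
for _ in range(100):                          # Scorer.cs reports over 100 agents
    obs, done, episode_return = env.reset(), False, 0.0
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
        episode_return += reward
    scores.append(episode_return)

print("mean:", np.mean(scores), "std-dev:", np.std(scores))
env.close()
```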
Normalized observations (-1 to 1) and rewards (0 to 1)
- No need to use the normalize flag in training; helps with OpenAI.Baselines training (see the sketch below)
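The exact scaling applied by the environment code is not spelled out here; the sketch below shows the general idea under the assumption of simple min-max scaling, with the ranges (`obs_low`, `obs_high`, `reward_max`) as illustrative placeholders:

```python
# Sketch of the normalisation idea: observations scaled into [-1, 1] and
# rewards into [0, 1]. The actual ranges used by the C# environment code
# are assumptions here (obs_low, obs_high, reward_max are placeholders).
import numpy as np

def normalize_observation(obs, obs_low, obs_high):
    """Min-max scale each observation component into [-1, 1]."""
    scaled = 2.0 * (obs - obs_low) / (obs_high - obs_low) - 1.0
    return np.clip(scaled, -1.0, 1.0)

def normalize_reward(reward, reward_max):
    """Scale a non-negative reward into [0, 1]."""
    return float(np.clip(reward / reward_max, 0.0, 1.0))
```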
Merged CameraHelper.cs from StyleTransfer. Controls are:
- 1, 2, 3 - Slow-mo modes
- arrow keys or w-a-s-d - rotate around the agent
- q-e - zoom in / out
Default hyperparams are now closer to OpenAI.Baselines
- 1M steps for hopper, walker, and ant; 10M for humanoid