Great work! We are trying to replicate your experiments.

Description of our setup: Ubuntu 16.04, with rl-generalization and Docker installed per the instructions in the README.

We came across what appears to be a bug. We wanted to see the performance of HalfCheetah when varying only density, so we ran `python -m examples.run_experiments examples/test_density.yml /tmp/output` with the following yml file, and with `SunblazeHalfCheetahRandomExtreme` changing only the density to 1000000 in `mujoco.py`, as below:
```python
class RandomExtremeHalfCheetah(RoboschoolXMLModifierMixin, ModifiableRoboschoolHalfCheetah):
    # Edited to vary only density
    def randomize_env(self):
        self.density = 1000000  # manually changed density value
        with self.modify_xml('half_cheetah.xml') as tree:
            for elem in tree.iterfind('worldbody/body/geom'):
                elem.set('density', str(self.density))

    def _reset(self, new=True):
        if new:
            self.randomize_env()
        return super(RandomExtremeHalfCheetah, self)._reset(new)

    @property
    def parameters(self):
        parameters = super(RandomExtremeHalfCheetah, self).parameters
        parameters.update({'density': self.density})
        return parameters
```
Looking at the JSON output of `run_experiments`, the `SunblazeHalfCheetah` model's testing reward on both `SunblazeHalfCheetah` and `SunblazeHalfCheetahRandomExtreme` (with density manually set to 1000000) is nearly the same. The last two rewards of both testing environments are below:

How can we confirm the density is actually changing? It doesn't seem plausible that the Mujoco HalfCheetah simulation could move at all at a density of 1000000, nor that it should have testing rewards similar to the nominal environment.
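One way to check, independent of training, is to replicate the `modify_xml` loop on the XML and inspect the result. A minimal sketch with a toy XML stand-in (the real `half_cheetah.xml` has more geoms and attributes):

```python
import xml.etree.ElementTree as ET

# Toy stand-in for half_cheetah.xml; the real file has multiple geoms.
xml_src = """<mujoco>
  <worldbody><body><geom name="torso" density="1000"/></body></worldbody>
</mujoco>"""

tree = ET.ElementTree(ET.fromstring(xml_src))
density = 1000000

# Same loop as randomize_env: rewrite every geom's density attribute.
for elem in tree.iterfind('worldbody/body/geom'):
    elem.set('density', str(density))

# Confirm the attribute was actually rewritten.
for elem in tree.iterfind('worldbody/body/geom'):
    assert elem.get('density') == '1000000'
print(ET.tostring(tree.getroot(), encoding='unicode'))
```

Printing (or diffing) the XML that `modify_xml` writes out should show whether the density edit reaches the file at all; if it does, the question becomes whether the simulator ever reloads it.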
I think the environment changes take effect when you add `RoboschoolForwardWalkerMujocoXML.__init__(self, self.model_xml, 'torso', action_dim=6, obs_dim=26, power=0.9)` in `randomize_env(self)`. I could be wrong, because I'm using a different version of roboschool.
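The suggestion above matches a common pitfall: the simulator parses the XML once at construction, so edits written to the file afterwards don't propagate until the model is reloaded. A toy, roboschool-free sketch of that caching behavior (the `Sim` class and XML here are illustrative stand-ins, not roboschool's actual API):

```python
import xml.etree.ElementTree as ET

XML = "<mujoco><worldbody><body><geom density='1000'/></body></worldbody></mujoco>"

class Sim:
    """Toy stand-in for a simulator that parses its XML once at construction."""
    def __init__(self, xml):
        root = ET.fromstring(xml)
        # The parsed value is cached at load time.
        self.density = float(root.find('worldbody/body/geom').get('density'))

sim = Sim(XML)
new_xml = XML.replace("'1000'", "'1000000'")

# Editing the XML after construction does not touch the cached model...
assert sim.density == 1000.0

# ...until the simulator is re-initialized from the modified XML,
# which is what the suggested __init__ call would accomplish.
sim = Sim(new_xml)
assert sim.density == 1000000.0
```

If this is what is happening, the training rewards would indeed look identical to the nominal environment, since the physics never sees the new density.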