We are familiar with using Ray RLLIB and Stable-Baselines3 to solve similar RL problems, but both require Gym v0.21.
Do you have a version of the Airlift Challenge code that uses Gym v0.21?
Do you have specific RL libraries that you're expecting participants to use?
Posted by: Ultra_Labs @ Feb. 3, 2023, 4:47 p.m.

Thank you for bringing this to our attention. We are not expecting participants to use any specific RL library, and are fine with Ray RLLIB and Stable-Baselines3.
We are going to look into allowing the use of Gym v0.21, but will unfortunately have to get back to you about it. I am not sure offhand whether our environment will work with that version...
Something else to be aware of is that for automatic evaluation, we are only providing CPUs and not GPUs.
Posted by: abeckus @ Feb. 3, 2023, 5:31 p.m.

Hi, we've tested Gym 0.21 and it isn't compatible at the moment. We'll discuss this issue further and keep you updated.
Posted by: ccafeccafe @ Feb. 6, 2023, 1:25 p.m.

Hello @Ultra_Labs,
Looking at the RLLIB documentation, it seems that RLLIB treats OpenAI Gym environments as single-agent environments.
So, I wonder if it would be better to use RLLIB's PettingZoo wrapper to interface with the simulator for training: https://docs.ray.io/en/latest/rllib/rllib-env.html#pettingzoo-multi-agent-environments
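For instance, training might look something like the sketch below (a rough, untested outline assuming a recent Ray 2.x; `make_airlift_pettingzoo_env` is a hypothetical adapter that would first need to expose our simulator through the PettingZoo AEC API):

```python
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.env.wrappers.pettingzoo_env import PettingZooEnv
from ray.tune.registry import register_env

def env_creator(env_config):
    # Placeholder: an adapter exposing the Airlift simulator through the
    # PettingZoo AEC API would be needed here -- no such helper exists yet.
    aec_env = make_airlift_pettingzoo_env(env_config)
    return PettingZooEnv(aec_env)

register_env("airlift_pz", env_creator)

config = (
    PPOConfig()
    .environment("airlift_pz")
    .rollouts(num_rollout_workers=2)
    .framework("torch")
)
algo = config.build()
for _ in range(10):
    result = algo.train()
    print(result["episode_reward_mean"])
checkpoint = algo.save()  # reload later with Algorithm.from_checkpoint
```

One caveat: RLLIB's PettingZooEnv wrapper assumes all agents share the same observation and action spaces, which would need to be checked against our simulator.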
And then, the "compute_single_action" method could be used in our "MySolution" class for evaluation.
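Something along these lines might work for inference (again untested; the checkpoint path is a placeholder, and the "policies" hook and import path follow the starter-kit template, so adjust as needed):

```python
from airlift.solutions import Solution  # base class from the starter kit
from ray.rllib.algorithms.algorithm import Algorithm

class MySolution(Solution):
    def __init__(self):
        super().__init__()
        # Placeholder path; point this at a checkpoint saved during training.
        self._algo = Algorithm.from_checkpoint("/path/to/checkpoint")

    def policies(self, obs, dones):
        # obs is assumed to be a dict of per-agent observations.
        actions = {}
        for agent, agent_obs in obs.items():
            if not dones.get(agent, False):
                actions[agent] = self._algo.compute_single_action(
                    agent_obs, explore=False
                )
        return actions
```

Passing explore=False should make the actions deterministic at evaluation time.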
That being said, we have not had a chance to test the PettingZoo wrapper, so we are not positive it works.
I realize that it is getting late in the competition. We do hope to run future iterations, and will look at providing explicit support for RLLIB then.