Given the simple challenge:
I have implemented the Q(\lambda)-learning algorithm for an agent that chooses its next action/move based on the readings of a sensor that provides the agent/robot with 360 distance measurements (i.e., one laser scan per degree, giving the distance to the nearest object within a range of x meters).
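For concreteness, the update I'm referring to is the standard Watkins-style Q(\lambda) rule with eligibility traces (my implementation may differ in details such as how traces are reset):

    \delta_t = r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t)
    Q(s,a) \leftarrow Q(s,a) + \alpha \, \delta_t \, e_t(s,a) \quad \text{for all } (s,a)
    e_t(s,a) = \gamma \lambda \, e_{t-1}(s,a), \text{ incremented by 1 at } (s_t, a_t) \text{ and reset to 0 after an exploratory action}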
I don't care about the type of simulator. It only needs to feed my algorithm frequently with 360 distance values (from the 360-degree laser scan sensor), from which I can decide my next action/move. Moreover, I want to roughly sketch the environment the agent/robot is in, e.g. a circular, rectangular, or otherwise shaped environment surrounded by walls.
What methods, simulators, or applications can I use to see how my algorithm navigates/steers the agent/robot?
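For illustration, here is roughly the kind of sensor feed I have in mind: a minimal 2D ray-casting sketch in Python, assuming a point robot inside an axis-aligned rectangular room (the room size, sensor range, and function name below are placeholder assumptions on my part, not any particular simulator's API):

    import math

    # Minimal 2D "laser scan" sketch: a point robot inside an axis-aligned
    # rectangular room [0, W] x [0, H]. All sizes are placeholder values.
    W, H = 10.0, 6.0     # room size in meters (assumption)
    MAX_RANGE = 5.0      # sensor range "x" in meters (assumption)

    def scan_360(x, y, heading):
        """Return 360 distances (one per degree) from (x, y) to the walls."""
        distances = []
        for deg in range(360):
            a = heading + math.radians(deg)
            dx, dy = math.cos(a), math.sin(a)
            # Distance along the ray to each wall plane (inf if parallel).
            tx = ((W - x) / dx) if dx > 0 else ((0 - x) / dx) if dx < 0 else math.inf
            ty = ((H - y) / dy) if dy > 0 else ((0 - y) / dy) if dy < 0 else math.inf
            distances.append(min(tx, ty, MAX_RANGE))
        return distances

    # Example: one scan from the middle of the room, facing along +x.
    scan = scan_360(5.0, 3.0, 0.0)
    print(scan[0], scan[90])  # distance straight ahead and to the left

A non-rectangular room would only change the per-ray intersection test; the 360-value interface my algorithm consumes stays the same.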
Thanks!