This is an extension of BrmBot Turing. We would like to equip the Roomba vacuum cleaner with a rat-in-a-box and allow the rat to control the Roomba's movements. We aim to set up and execute a scientifically plausible experiment on whether a rat can learn the abstract concepts required to control a vehicle such as a Roomba.
We can change the licenses if anyone has a problem with them.
Precautionary note: This is a cognitive, non-invasive experiment that will not harm the rat at all - after the experiment, it will continue living a long, calm life. We love rats! And we believe something as fun and interesting as this experiment is a great thing to happen to such an immensely curious animal as a rat.
We have most of the technical setup ready. However, it turned out that carrying out the experiment successfully would require several learning sessions per day, every day, which is something we unfortunately simply do not have time for. If anyone who could commit to that would help, we could get the experiment going in a few days, but until then, we have to put this project on hold.
The details of the Roomba are easily available; its control interface is described at BrmBot Turing. An important thing to remember is that the Roomba cannot move in an arbitrary direction at any moment; it can only drive the way it is headed and/or turn by a given angle. Thus, the rat cannot simply point in the direction it wants to go; it needs to decompose the route into turning and driving actions.
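The decomposition itself is simple geometry: to reach a point, turn in place until headed at it, then drive straight. A minimal sketch (the function name and coordinate conventions are our own, not part of any Roomba API):

```python
import math

def decompose(x, y, heading):
    """Given a target at (x, y) relative to the robot and the robot's
    current heading (radians, 0 = +x axis), return the (turn, distance)
    pair needed to reach it: turn in place first, then drive straight."""
    target_bearing = math.atan2(y, x)
    # Normalize the turn into (-pi, pi] so we always take the shorter way round.
    turn = (target_bearing - heading + math.pi) % (2 * math.pi) - math.pi
    distance = math.hypot(x, y)
    return turn, distance
```

For example, a target one unit to the left of a robot headed along +x yields a 90-degree left turn and a one-unit drive. This two-step structure is exactly what the rat has to discover with its three levers.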
The interface between the rat and the Roomba will be an Arduino and three sensitive push sensors. (In future iterations, nose sensors could replace them.) The rat will be enclosed in a plastic box with these three sensors accessible. They control the three available actions: left, forward and right.
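In Python pseudocode, the mapping from the three actions to the Roomba's serial protocol could look roughly like this. This is a sketch, not a tested driver: the Drive opcode and special radius values are taken from iRobot's SCI documentation (opcode 137, big-endian signed velocity and radius, radius 0x8000 = straight, +1/-1 = spin in place), but the speed constant is arbitrary and the serial wiring/baud setup is omitted - verify everything against the interface notes at BrmBot Turing:

```python
import struct

DRIVE = 137   # SCI "Drive" opcode
SPEED = 200   # mm/s; an arbitrary gentle speed, an assumption of this sketch

def drive_command(action):
    """Map an abstract action ("left"/"forward"/"right"/None) to the
    5-byte SCI Drive packet: opcode, then velocity and radius as
    big-endian signed 16-bit values."""
    if action == "forward":
        velocity, radius = SPEED, -32768   # 0x8000: drive straight ahead
    elif action == "left":
        velocity, radius = SPEED, 1        # spin counterclockwise in place
    elif action == "right":
        velocity, radius = SPEED, -1       # spin clockwise in place
    else:                                  # no sensor pressed: stop
        velocity, radius = 0, -32768
    return bytes([DRIVE]) + struct.pack(">hh", velocity, radius)
```

On the Arduino the same five bytes would simply be written to the serial port connected to the Roomba whenever a sensor state changes.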
The rat needs to receive rewards (tiny food pellets). So far, we do not have a dispensing mechanism, so the pellets will need to be administered personally - this means that the person will be part of the experiment!
It would be nice to also equip the Arduino with some sensor that could be used to track and record the trajectory; otherwise, only approximate information from the Roomba's odometry can be used for that.
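For the odometry fallback, the Roomba's sensor packets report incremental distance and angle since the last poll, so an approximate trajectory can be reconstructed by dead reckoning. A minimal sketch, assuming we already have a list of `(distance_mm, angle_deg)` increments from polling:

```python
import math

def dead_reckon(increments, x=0.0, y=0.0, heading=0.0):
    """Integrate (distance_mm, angle_deg) odometry increments into a
    trajectory of (x, y) points. Errors accumulate over time, which is
    why this is only approximate."""
    path = [(x, y)]
    for dist, angle in increments:
        heading += math.radians(angle)   # apply the turn first (a modeling choice)
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
        path.append((x, y))
    return path
```

For example, the increments `[(100, 0), (0, 90), (100, 0)]` (drive 100 mm, turn 90 degrees left, drive 100 mm) end near the point (100, 100).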
We need to achieve two goals: Make the rat want to go to some specific place in the room, and teach it how to control the vehicle to achieve that.
(The following is based heavily on Frantisek Susta's comments.)
For a rat to set some device in motion by itself and drive somewhere is very complicated as a whole, but it can be decomposed into simple steps and taught from the last step backwards. It is also necessary to realize that the rat perceives space differently than we do, and the place it is supposed to aim for will need to be clearly marked - e.g. there will be a light source, a smell source or a sound source, and the rat will move in the direction of the increasing gradient towards that source. There it will then receive its reward.
There is a problem with this, though - the Roomba can only drive forward and turn. This means that if the rat designates the target by stepping on a particular spot on the floor of its box, the position of that spot within the box will not correspond to the Roomba's position within the room. Either we decide that the rat will somehow cope with this, or we will have to teach it direct control of turning. Or it will simply learn it by itself during step #3, which is actually quite likely, because the lever for turning in a given direction will be the one closest to the gradient. Turning by 180 degrees will be hard.
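Whether the "press the lever closest to the gradient" strategy can in principle steer a turn-and-drive vehicle to the source can be sanity-checked in simulation. A toy sketch (all dynamics, step sizes and the success radius are invented for illustration; the source sits at the origin and its gradient always points straight at it):

```python
import math

def simulate(x, y, heading, step=0.1, turn=math.radians(15), max_steps=500):
    """Greedy policy: if the source (at the origin) lies roughly ahead,
    press forward; otherwise press the turn lever on the side the
    gradient points to. Returns steps taken to get within 0.2 units,
    or None if the budget runs out."""
    for n in range(max_steps):
        if math.hypot(x, y) < 0.2:
            return n
        # Bearing of the source relative to the current heading, in (-pi, pi].
        bearing = (math.atan2(-y, -x) - heading + math.pi) % (2 * math.pi) - math.pi
        if abs(bearing) < turn:          # source roughly ahead: forward lever
            x += step * math.cos(heading)
            y += step * math.sin(heading)
        elif bearing > 0:                # source to the left: left lever
            heading += turn
        else:                            # source to the right: right lever
            heading -= turn
    return None
```

In this idealized model the greedy policy reaches the source even when starting with the source directly behind (the 180-degree case), though it needs a long run of consecutive turn presses first - which matches the expectation above that the 180-degree turn will be the hard part for the rat.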
Another thing to consider is that in the first iteration, a human will administer the pellets (always standing at the target spot). A human is a very intense smell source. Will it interfere with the experiment? Could multiple humans alternate in the experiments, or does it have to be the same person every time?
Device: See above.
This is for the first stage, with Algernon likely being used. If things progress well, in a second stage with Yossarian we would like to have an automated pellet dispenser and nose sensors instead of buttons.
The goal of the experiment is to find out whether a rat can learn to reach a destination by executing abstract actions, and how easily.
We will consider a rat mastering step #3 a success already, though there's probably not a long way to #4. We will measure the total time spent learning and the final average time (and trajectory length?) to reach the target from a random position.
We have two young rats, if we are to try it out, we need to hurry *a lot*. This is the plan:
All parts are at Letnany, and we have some (limited) space to hack there. Anyone is welcome to visit! Please write a short note to the mailing list if you are interested and we will arrange some hacking sessions.
Feedback from these people has been invaluable so far: Cyril Brom, Frantisek Susta, Iveta Fajnerova, Tereza Nekovarova.