Complex Motion Patterns
We have developed Kouretes Color Classifier (KC2), a graphical tool for labeling images and learning a classifier for color segmentation. Its usage is demonstrated in this video clip [wmv].
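To give a flavor of what such a tool can produce, here is a minimal sketch of learning a color classifier from labeled pixels: a nearest-centroid classifier in (Y, U, V) color space. The class names, the training data, and the centroid approach itself are illustrative assumptions, not the actual KC2 method.

```python
def train_centroids(labeled_pixels):
    """labeled_pixels: dict mapping class name -> list of (y, u, v) samples.
    Returns one mean color (centroid) per class."""
    centroids = {}
    for cls, samples in labeled_pixels.items():
        n = len(samples)
        centroids[cls] = tuple(sum(s[i] for s in samples) / n for i in range(3))
    return centroids

def classify(pixel, centroids):
    """Assign a pixel to the class with the nearest centroid."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: d2(pixel, centroids[c]))
```

In practice, the learned classifier would be baked into a lookup table over the color space so that whole camera frames can be segmented in real time.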
We have developed Kouretes Localization (KLoc), a fully functional and parametric module for robot self-localization based on visual landmarks, using an auxiliary particle filter. The proposed approach includes a simple method for quickly learning the motion and sensor models under the current operating conditions. An example of localization on the SPL field is shown in this video [wmv].
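As a rough illustration of the particle-filtering idea, the sketch below runs one predict/weight/resample cycle for a pose (x, y, theta) given a measured distance to a known landmark. It is a plain bootstrap filter, simpler than the auxiliary particle filter used in KLoc, and the noise parameters are invented.

```python
import random
import math

def predict(particles, dx, dy, dtheta, noise=0.05):
    """Motion model: shift each particle and add Gaussian noise."""
    return [(x + dx + random.gauss(0, noise),
             y + dy + random.gauss(0, noise),
             th + dtheta + random.gauss(0, noise))
            for (x, y, th) in particles]

def weight(particles, landmark, measured_dist, sigma=0.2):
    """Sensor model: Gaussian likelihood of the measured landmark distance."""
    lx, ly = landmark
    ws = []
    for (x, y, _) in particles:
        d = math.hypot(lx - x, ly - y)
        ws.append(math.exp(-0.5 * ((d - measured_dist) / sigma) ** 2))
    s = sum(ws) or 1.0
    return [w / s for w in ws]

def resample(particles, weights):
    """Draw a new particle set with probability proportional to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))
```

After a few such cycles the particle cloud concentrates around poses consistent with the observed landmarks, and the pose estimate is taken as a weighted mean or mode of the cloud.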
Microsoft Robotics Studio Simulation
We have used reinforcement learning to learn a kick motion while standing on one leg without falling (Webots simulator). Check out this video [wmv].
We have experimented with using programmable central pattern generators (CPGs) to capture the walk pattern provided by Aldebaran and enhance it further by altering the parameters of the CPG.
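A heavily simplified sketch of the oscillator underlying such a controller is shown below: a single sinusoidal unit whose amplitude, offset, frequency, and phase are exactly the kind of parameters one can alter to reshape a captured walk pattern. A real programmable CPG couples several adaptive oscillators per joint; the values here are made up for illustration.

```python
import math

def cpg_trajectory(amplitude, offset, freq_hz, phase, duration_s, dt=0.01):
    """Generate a joint-angle trajectory from one sinusoidal CPG unit."""
    steps = int(duration_s / dt)
    return [offset + amplitude * math.sin(2 * math.pi * freq_hz * t * dt + phase)
            for t in range(steps)]
```

Tuning the parameters of each unit (and the phase relations between units driving different joints) is what lets the same controller produce faster, slower, or differently shaped gaits.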
Team formations, tactics, and strategies remain a largely unexplored area in Four-Legged League RoboCup research. In our work, we took a radical step in behavior control and implemented robot soccer strategies based on the human strategies used in real soccer games. Considering that the ultimate goal of RoboCup is a game between robots and professional human football players, we believe our work takes a step towards this goal.
According to our coordination scheme, the team strategy is realized through tactics with well-defined roles for each player. So far, we have defined and implemented four tactics: Passive Defence, Pressing Defence, Counter Attack, and Passing Attack. Each tactic includes four roles: Attacker, Midfielder, Defender, and Goalkeeper. Each role in each tactic is implemented using Petri-Net Plans. The figure below shows the plan for the Attacker role in the Counter Attack tactic. A finite state machine combined with a broadcast communication scheme decides the team tactic and the player roles at any time, depending on the current position of the ball in the field and the location of each player. This work resulted in improved team play, with better field coverage by the players.
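The tactic/role selection described above can be sketched as follows. This is a hypothetical, simplified stand-in for the actual finite state machine: the field coordinates, thresholds, and greedy nearest-to-ball role assignment are invented for illustration.

```python
def choose_tactic(ball_x, field_length=6.0):
    """Pick a team tactic from the ball's position along the field
    (own goal at x = 0, opponent goal at x = field_length)."""
    if ball_x < 0.25 * field_length:
        return "PassiveDefence"
    if ball_x < 0.5 * field_length:
        return "PressingDefence"
    if ball_x < 0.75 * field_length:
        return "CounterAttack"
    return "PassingAttack"

def assign_roles(players, ball):
    """Greedy role assignment: the player closest to the ball attacks,
    the next closest plays midfield, and so on.
    players: dict name -> (x, y); ball: (x, y)."""
    def dist(name):
        px, py = players[name]
        return ((px - ball[0]) ** 2 + (py - ball[1]) ** 2) ** 0.5
    order = sorted(players, key=dist)
    return dict(zip(order, ("Attacker", "Midfielder", "Defender", "Goalkeeper")))
```

In a real team the Goalkeeper is of course fixed rather than assigned by distance, and the broadcast scheme ensures that all players agree on one consistent tactic and role assignment.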
Robust Visual Recognition
The main sensor of the Aibo robots is the CCD camera. The robots rely on visual information to isolate particular colors, identify objects of specific shapes in the field, and estimate their distance. Our work focuses on making visual recognition robust against object occlusion and faulty color segmentation. In particular, we use histograms to represent the distribution of the target color along the various scanlines over the image. Identifying the histogram modes leads to correct recognition of the field landmarks. The figure below shows an example of recognizing the ball, a beacon, and a goal in the same camera frame. Check out our SETN 2008 paper [pdf].
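The histogram idea can be sketched in a few lines: count target-color pixels per scanline (here, per image column) and locate modes as contiguous runs above a threshold; each run marks a candidate object even when occlusion splits or shrinks the color blob. The binary segmentation mask and the threshold are illustrative assumptions, not the paper's exact procedure.

```python
def column_histogram(mask):
    """mask: 2-D list of 0/1 values (1 = target color).
    Returns the count of target pixels in each column."""
    return [sum(row[c] for row in mask) for c in range(len(mask[0]))]

def find_modes(hist, threshold=1):
    """Return (start, end) column ranges where the count reaches the threshold."""
    modes, start = [], None
    for i, v in enumerate(hist + [0]):  # trailing 0 closes an open run
        if v >= threshold and start is None:
            start = i
        elif v < threshold and start is not None:
            modes.append((start, i - 1))
            start = None
    return modes
```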
The scaling properties of these algorithms, obtained by exploiting domain knowledge, make them attractive for the RoboCup domain. The representation can be factored on the basis of the proximity between players during a game. In addition, these techniques should be particularly useful for learning sophisticated motion skills on the Aibo robot. The large number of degrees of freedom on the Aibo implies a huge joint action space. This obstacle could again be overcome by appropriately factoring the representation on the basis of joint proximity on the robot body. Kouretes is our venue for adapting these algorithms and testing their potential in a difficult task with real-time constraints.