 
 
Research
 

Complex Motion Patterns

We have developed the Kouretes Motion Editor (KME), an interactive tool for designing complex motion patterns.
Video: [wmv]
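
To give a flavor of the kind of keyframe-style motion design such a tool supports, the sketch below interpolates joint angles between hand-designed poses. The joint names, pose values, and timing are hypothetical, shown only to illustrate the general idea rather than KME itself.

    # Hypothetical sketch of keyframe-based motion playback, the kind of motion
    # a designer might compose in an editor such as KME. Joint names, poses,
    # and durations are made up for illustration.

    def interpolate(pose_a, pose_b, t):
        """Linearly interpolate between two poses (dicts of joint -> angle) for t in [0, 1]."""
        return {j: (1 - t) * pose_a[j] + t * pose_b[j] for j in pose_a}

    def play(keyframes, dt=0.02):
        """Expand a list of (pose, duration) keyframes into a dense joint-angle trajectory."""
        trajectory = []
        for (pose_a, duration), (pose_b, _) in zip(keyframes, keyframes[1:]):
            steps = max(1, int(duration / dt))
            for k in range(steps):
                trajectory.append(interpolate(pose_a, pose_b, k / steps))
        trajectory.append(keyframes[-1][0])
        return trajectory

    # Example: a two-keyframe "nod" motion for a single head joint.
    keyframes = [({"HeadPitch": 0.0}, 0.5), ({"HeadPitch": -0.4}, 0.5)]
    print(len(play(keyframes)), "interpolated poses")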






Vision

We have developed a tool for labeling images and learning a classifier for color segmentation.
Video: [wmv]
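
As a rough sketch of what such a pipeline involves, one can train a pixel classifier on hand-labeled color samples and then segment images pixel by pixel. The color space (YUV), the choice of classifier (a decision tree), and the class labels below are illustrative assumptions, not our actual implementation.

    # Hedged sketch: train a pixel classifier on labeled color samples, then
    # use it to segment an image. Color space, classifier, and labels are
    # illustrative assumptions.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical labeled samples: (Y, U, V) triples with a color class.
    X = np.array([[ 60, 100, 200],   # "orange" (ball)
                  [ 70, 110, 205],
                  [180, 128, 128],   # "white" (lines)
                  [ 40, 180,  90]])  # "blue" (goal)
    y = np.array(["orange", "orange", "white", "blue"])

    clf = DecisionTreeClassifier(max_depth=5).fit(X, y)

    def segment(image_yuv):
        """Classify every pixel of an (H, W, 3) YUV image into a color class."""
        h, w, _ = image_yuv.shape
        labels = clf.predict(image_yuv.reshape(-1, 3))
        return labels.reshape(h, w)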



Microsoft Robotics Studio Simulation

We have worked on all aspects of Microsoft Robotics Studio (MSRS) soccer games.
Video: [wmv]

Webots Simulation

We have worked on all aspects of Webots soccer games (RoboStadium competition).
Video: [wmv]

Skill Learning

We have used reinforcement learning to learn a kick motion performed while standing on one leg without falling (in the Webots simulator).
Video: [wmv]
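
The sketch below conveys the general shape of such an experiment, not our actual setup: an episodic loop that perturbs kick parameters and keeps changes that improve a reward combining kick strength and stability. The simulator interface, parameter vector, and reward terms are hypothetical, and the simple stochastic search stands in for the actual learning algorithm.

    # Hedged sketch of learning kick parameters by trial and error in simulation.
    # run_kick_episode is a toy stand-in for a simulator call (the real work used
    # Webots); the parameters and reward terms are hypothetical.
    import random

    def run_kick_episode(params):
        """Toy stand-in for one simulated kick: returns (ball_distance, fell_over)."""
        distance = 2.0 - sum((p - 0.3) ** 2 for p in params)   # toy objective
        fell_over = any(abs(p) > 1.0 for p in params)
        return distance, fell_over

    def reward(ball_distance, fell_over):
        # Reward kick strength, strongly penalize losing balance.
        return ball_distance - (100.0 if fell_over else 0.0)

    def learn(initial_params, episodes=500, step=0.05):
        best_params = list(initial_params)
        best_r = reward(*run_kick_episode(best_params))
        for _ in range(episodes):
            candidate = [p + random.gauss(0.0, step) for p in best_params]
            r = reward(*run_kick_episode(candidate))
            if r > best_r:                      # keep only improvements
                best_params, best_r = candidate, r
        return best_params

    best = learn([0.0, 0.0, 0.0])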

Localization

We are in the process of learning motion and sensor models to be used with our particle filter localization method.
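
For reference, a bare-bones particle filter has the predict/weight/resample structure sketched below. The Gaussian noise models used here are placeholders; the learned motion and sensor models mentioned above would take their place, and the real filter operates on 2-D poses rather than this 1-D toy state.

    # Minimal 1-D particle filter sketch: predict with a motion model, weight
    # with a sensor model, resample. The Gaussian models are placeholders for
    # the learned motion and sensor models.
    import math, random

    def gaussian(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def particle_filter_step(particles, odometry, measurement, landmark=0.0):
        # 1. Predict: move each particle according to odometry plus motion noise.
        particles = [p + odometry + random.gauss(0.0, 0.05) for p in particles]
        # 2. Weight: how well does each particle explain the measured distance to the landmark?
        weights = [gaussian(abs(landmark - p), measurement, 0.2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # 3. Resample: draw a new particle set proportionally to the weights.
        return random.choices(particles, weights=weights, k=len(particles))

    particles = [random.uniform(-1.0, 1.0) for _ in range(200)]
    particles = particle_filter_step(particles, odometry=0.1, measurement=0.5)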

Bipedal Walk

We are using programmable central pattern generators (CPGs) to capture the walk pattern provided by Aldebaran and enhance it further by altering the parameters of the CPG.
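
As a rough illustration of what a programmable CPG does, a pair of coupled phase oscillators can generate rhythmic, phase-locked joint trajectories whose shape changes when the amplitude, frequency, offset, or phase-lag parameters are altered. The oscillator form and parameter values below are generic, not the specific model we use.

    # Generic sketch of a programmable CPG: two coupled phase oscillators whose
    # outputs drive a left/right joint pair. Amplitude, frequency, offset, and
    # phase lag are the "programmable" knobs; values are illustrative.
    import math

    def cpg(steps=500, dt=0.01, freq=1.0, amp=0.3, offset=0.0,
            phase_lag=math.pi, coupling=2.0):
        theta = [0.0, 0.1]            # oscillator phases (slightly desynchronized)
        left, right = [], []
        for _ in range(steps):
            # Phase dynamics: own frequency plus coupling toward the desired lag.
            d0 = 2 * math.pi * freq + coupling * math.sin(theta[1] - theta[0] - phase_lag)
            d1 = 2 * math.pi * freq + coupling * math.sin(theta[0] - theta[1] + phase_lag)
            theta[0] += d0 * dt
            theta[1] += d1 * dt
            left.append(offset + amp * math.sin(theta[0]))
            right.append(offset + amp * math.sin(theta[1]))
        return left, right

    left_hip, right_hip = cpg(freq=1.2, amp=0.25)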


Team Coordination

Team formations, tactics, and strategies are a largely unexplored area in four-legged RoboCup research. In our work, we took a radical step in behavior control and implemented robot soccer strategies based on human strategies used in real soccer games. Considering that the ultimate goal of RoboCup is a game between robots and professional human football players, we believe that our work takes a step towards this goal.

Under our coordination scheme, the team strategy is realized through tactics with well-defined roles for each player. So far, we have defined and implemented four tactics: Passive Defence, Pressing Defence, Counter Attack, and Passing Attack. Each tactic comprises four roles: Attacker, Midfielder, Defender, and Goalkeeper, and each role in each tactic is implemented using Petri-Net Plans. The figure below shows the plan for the Attacker role in the Counter Attack tactic. A finite state machine combined with a broadcast communication scheme decides the team tactic and player roles at any given time, depending on the current position of the ball in the field and the location of each player. This work resulted in improved team play with better field coverage by the players.

Videos of the Pressing Defence Tactic: [defender] [midfielder] [attacker] [all]
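
The following sketch conveys the flavor of the tactic and role selection described above. The thresholds, field coordinates, and the proximity-based assignment rule are hypothetical, and the Petri-Net Plans that actually execute each role are not shown.

    # Hedged sketch: pick a team tactic from the ball position, then assign
    # roles by player proximity. Thresholds, coordinates, and the assignment
    # rule are hypothetical; each selected role would be executed by its
    # Petri-Net Plan.

    def select_tactic(ball_x, own_goal_x=-3.0, opponent_goal_x=3.0):
        """Pick a tactic from the ball's position along the field length."""
        if ball_x < own_goal_x + 1.0:
            return "Pressing Defence"
        if ball_x < 0.0:
            return "Passive Defence"
        if ball_x < opponent_goal_x - 1.0:
            return "Counter Attack"
        return "Passing Attack"

    def assign_roles(ball_x, player_positions):
        """Attacker goes to the player closest to the ball, then Midfielder and
        Defender by increasing distance; the Goalkeeper is fixed (player 0)."""
        field = sorted((p for p in player_positions if p != 0),
                       key=lambda p: abs(player_positions[p] - ball_x))
        roles = {0: "Goalkeeper"}
        for role, player in zip(["Attacker", "Midfielder", "Defender"], field):
            roles[player] = role
        return roles

    # Example broadcast step: every robot runs the same rule on the shared state.
    positions = {0: -2.8, 1: -1.0, 2: 0.5, 3: 1.8}   # player id -> x position
    ball_x = 1.2
    print(select_tactic(ball_x), assign_roles(ball_x, positions))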




Robust Visual Recognition

The main sensor used by the Aibo robots is the CCD camera. The robots rely on visual information to isolate particular colors, identify objects of specific shapes in the field, and estimate their distance. Our work focuses on making visual recognition robust against object occlusion and faulty color segmentation. In particular, we use histograms to represent the distribution of the target color along the various scanlines over the image; identification of the histogram modes leads to correct recognition of the field landmarks. We are also developing a classification-based color segmentation scheme that is insensitive to illumination variability, the main problem we faced during games in 2007. The figure below shows an example of recognizing the ball, a beacon, and a goal in the same camera frame.
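
As a simplified illustration of the scanline-histogram idea, the sketch below counts how often the target color appears along each vertical scanline and takes local maxima of those counts as candidate object positions, which remain detectable even when part of the object is occluded. The scanline layout, bin width, and mode test are assumptions for illustration.

    # Simplified sketch of histogram-based detection over scanlines: count
    # target-color pixels per vertical scanline, then report local maxima
    # (modes) as candidate column positions of the object. The layout and
    # mode test are illustrative assumptions.
    import numpy as np

    def column_histogram(segmented, target_label):
        """segmented: (H, W) array of color labels. Returns per-column counts."""
        return (segmented == target_label).sum(axis=0)

    def find_modes(hist, min_count=3):
        """Columns whose count is a local maximum above a small threshold."""
        modes = []
        for x in range(1, len(hist) - 1):
            if hist[x] >= min_count and hist[x] >= hist[x - 1] and hist[x] > hist[x + 1]:
                modes.append(x)
        return modes

    # Toy example: a 10x12 label image with a target-colored blob around column 7.
    img = np.zeros((10, 12), dtype=int)
    img[3:8, 6:9] = 1                      # label 1 = target color
    print(find_modes(column_histogram(img, target_label=1)))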



Reinforcement Learning

Reinforcement learning has been used widely in many robotic applications, mostly in its single-agent form. The multi-agent versions have not been adopted widely due to difficulties with efficiency and scaling to realistic domains. Recent independent research work by our team leaders has led to extensions of classic planning and reinforcement learning algorithms to collaborative multi-agent learning, where many agents learn to collaborate as a team.

The scaling properties of these algorithms, obtained through exploitation of domain knowledge, make them attractive for the RoboCup domain: the representation can be factored on the basis of the proximity between players during a game. In addition, these techniques will be particularly useful for learning sophisticated motion skills for the Aibo robot. The large number of degrees of freedom of the Aibo implies a huge joint action space; this obstacle could likewise be overcome by appropriate factorization of the representation, this time on the basis of joint proximity on the robot body. Kouretes is our venue for adapting these algorithms and testing their potential in a difficult task with real-time constraints.
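
To make the factorization idea concrete, the global value of a joint action can be approximated as a sum of small local terms defined only over nearby agents, so each table stays small even as the team grows. The pairing rule and table layout below are illustrative assumptions, not the published algorithms.

    # Illustrative factored value function: the joint Q-value is a sum of local
    # terms over pairs of nearby agents, so each table covers only two agents'
    # actions instead of the full joint action space. Pairing rule and table
    # layout are assumptions for illustration.
    from itertools import combinations

    def nearby_pairs(positions, radius=2.0):
        """Pairs of agents whose distance is below a threshold."""
        return [(i, j) for i, j in combinations(positions, 2)
                if abs(positions[i] - positions[j]) < radius]

    def factored_q(local_q, positions, joint_action):
        """Sum local Q terms over nearby pairs; local_q maps
        (i, j, a_i, a_j) -> value (default 0 for unseen entries)."""
        return sum(local_q.get((i, j, joint_action[i], joint_action[j]), 0.0)
                   for i, j in nearby_pairs(positions))

    positions = {1: 0.0, 2: 1.0, 3: 5.0}            # agent id -> field position
    local_q = {(1, 2, "pass", "receive"): 1.5}      # learned local term (toy value)
    print(factored_q(local_q, positions, {1: "pass", 2: "receive", 3: "defend"}))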

