Kouretes Robocup Team


Research

Nao Robots


Complex Motion Patterns

We have developed Kouretes Motion Editor (KME), an interactive tool for designing complex motion patterns. Check out our ICRA 2009 paper [pdf] as well as the related ICRA 2009 video [wmv].

KME Screenshot


Snapshots of a stand-up motion designed using KME.

Vision

We have developed Kouretes Color Classifier (KC2), a graphical tool for labeling images and learning a classifier for color segmentation. Its usage is shown in this video clip [wmv].

KCC Screenshot
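A minimal sketch of the color-segmentation idea, assuming a lookup-table classifier over quantized YUV pixels. The binning scheme, function names, and training data below are hypothetical illustrations, not the tool's actual implementation:

```python
# Hypothetical simplification of learning a color classifier from labeled
# pixels: each labeled pixel votes into a coarse YUV lookup table, and
# classification is then a single table lookup, fast enough for per-frame
# segmentation.

BINS = 16  # quantize each YUV channel into 16 bins (illustrative choice)

def quantize(pixel):
    """Map a (y, u, v) pixel with 0-255 channels to a table cell."""
    return tuple(c * BINS // 256 for c in pixel)

def train_table(labeled_pixels):
    """labeled_pixels: iterable of ((y, u, v), class_name) pairs."""
    votes = {}
    for pixel, label in labeled_pixels:
        cell = quantize(pixel)
        votes.setdefault(cell, {}).setdefault(label, 0)
        votes[cell][label] += 1
    # keep the majority label per cell
    return {cell: max(counts, key=counts.get) for cell, counts in votes.items()}

def classify(table, pixel, default="unknown"):
    return table.get(quantize(pixel), default)

# Toy training data: orange ball pixels vs. green field pixels.
data = [((200, 110, 190), "ball")] * 5 + [((120, 90, 100), "field")] * 5
table = train_table(data)
```

Nearby pixel values fall into the same coarse cell, so the table generalizes slightly beyond the exact labeled pixels.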

Kouretes Localization

We have developed Kouretes Localization (KLoc), a fully functional and parametric module for robot self-localization from visual landmarks using an auxiliary particle filter. The approach includes a simple method for quickly learning the motion and sensor models under the current operating conditions. An example of localization on the SPL field is shown in this video [wmv].

KLoc Screenshot
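The auxiliary-particle-filter idea can be sketched on a hypothetical 1D example: a robot moves along a line and measures its distance to a single landmark. The noise models, parameter values, and function names below are illustrative assumptions; the actual KLoc module works in 2D on the SPL field with visual landmarks:

```python
import math
import random

def gauss(x, mu, sigma):
    """Gaussian density, used for both motion and sensor noise here."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def apf_step(particles, control, measurement, landmark,
             motion_sigma=0.1, sensor_sigma=0.3):
    """One auxiliary-particle-filter step on a 1D corridor.

    particles: list of (x, weight); control: commanded displacement;
    measurement: observed distance to the landmark at position `landmark`.
    """
    # 1) First stage: weight each particle by the measurement likelihood of
    #    its *predicted* (noise-free look-ahead) state.
    aux = []
    for x, w in particles:
        mu = x + control
        aux.append(w * gauss(abs(landmark - mu), measurement, sensor_sigma))
    total = sum(aux)
    aux = [a / total for a in aux]

    # 2) Resample parent indices according to the auxiliary weights.
    idx = random.choices(range(len(particles)), weights=aux, k=len(particles))

    # 3) Propagate chosen parents through the noisy motion model, then
    #    correct: second-stage likelihood divided by first-stage likelihood.
    new = []
    for i in idx:
        x_parent = particles[i][0]
        x_new = x_parent + control + random.gauss(0, motion_sigma)
        w = gauss(abs(landmark - x_new), measurement, sensor_sigma) / \
            gauss(abs(landmark - (x_parent + control)), measurement, sensor_sigma)
        new.append((x_new, w))
    total = sum(w for _, w in new)
    return [(x, w / total) for x, w in new]
```

The look-ahead in step 1 is what distinguishes the auxiliary variant from a plain particle filter: particles are resampled where the upcoming measurement says they will be useful, before the noisy propagation.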

Microsoft Robotics Studio Simulation

We have worked on all aspects of the Microsoft Robotics Studio (MSRS) competitions. Check out these two videos [wmv] [mp4].

MSRS Screenshot

Webots Simulation

We have worked on all aspects of the RobotStadium (Webots) competitions. Check out these two videos [wmv1] [wmv2].

Webots Screenshot

Skill Learning

We have used reinforcement learning to learn a kick motion while standing on one leg without falling (in the Webots simulator). Check out this video [wmv].

Webots Screenshot

Bipedal Walk

We have experimented with using programmable central pattern generators (CPGs) to capture the walk pattern provided by Aldebaran and enhance it further by altering the parameters of the CPG.

Reconstructed 5-step walk joint trajectories
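The CPG idea can be sketched as phase-locked oscillators, one per joint, whose amplitude, phase, and offset are the parameters one would alter to reshape the walk. This is a hypothetical simplification (a plain sinusoidal oscillator rather than a full programmable CPG), with illustrative joint names and values:

```python
import math

def cpg_joint_angle(t, freq, amplitude, phase, offset):
    """Angle of one joint at time t (seconds) for a CPG running at `freq` Hz."""
    return offset + amplitude * math.sin(2 * math.pi * freq * t + phase)

def walk_trajectories(joints, freq, duration, dt=0.02):
    """Sample trajectories for all joints over `duration` seconds.

    joints: dict name -> (amplitude, phase, offset).
    Returns dict name -> list of sampled angles.
    """
    steps = round(duration / dt)
    return {
        name: [cpg_joint_angle(k * dt, freq, a, p, o) for k in range(steps)]
        for name, (a, p, o) in joints.items()
    }

# Two hip joints oscillating in anti-phase, as in a basic walk pattern
# (joint names follow Nao conventions; the parameter values are made up).
joints = {"LHipPitch": (0.3, 0.0, -0.4), "RHipPitch": (0.3, math.pi, -0.4)}
traj = walk_trajectories(joints, freq=1.0, duration=2.0)
```

Because each joint is described by three numbers rather than a recorded trajectory, the walk can be tuned or enhanced by adjusting those parameters alone.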

Aibo Robots


Team Coordination

Team formations, tactics, and strategies remain a largely unexplored area in four-legged RoboCup research. In our work, we took a radical step in behavior control and implemented robot soccer strategies based on the human strategies used in real soccer games. Considering that the ultimate goal of RoboCup is a game between robots and professional human football players, we believe that our work takes a step towards this goal.

According to our coordination scheme, the team strategy is realized through tactics with well-defined roles for each player. So far, we have defined and implemented four tactics: Passive Defence, Pressing Defence, Counter Attack, and Passing Attack. Each tactic involves four roles: Attacker, Midfielder, Defender, and Goalkeeper. Each role in each tactic is implemented using Petri-Net Plans. The figure below shows the plan for the Attacker role in the Counter Attack tactic. A finite state machine combined with a broadcast communication scheme decides the team tactic and the player roles at any given time, depending on the current position of the ball in the field and the location of each player. This work resulted in improved team play with better field coverage by the players.

Check out our ICTAI 2007 paper [pdf] and videos of the Pressing Defence tactic: [defender] [midfielder] [attacker] [all]

An example of a Petri-Net Plan.
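The tactic and role selection described above can be sketched as follows. The thresholds, player names, and distance-based role assignment are illustrative assumptions, not the team's actual decision logic:

```python
# Hedged sketch of tactic/role selection: a small state machine picks the
# team tactic from the ball's field position, and field roles are assigned
# by each player's distance to the ball (goalkeeper is fixed).

def select_tactic(ball_x, half_length=3.0):
    """Pick a tactic from the ball's x position (negative = own half)."""
    if ball_x < -0.5 * half_length:
        return "Passive Defence"
    if ball_x < 0:
        return "Pressing Defence"
    if ball_x < 0.5 * half_length:
        return "Counter Attack"
    return "Passing Attack"

def assign_roles(players, ball):
    """players: dict name -> (x, y); the player named 'goalie' keeps the
    Goalkeeper role, the rest get roles ordered by distance to the ball."""
    def dist(p):
        return ((p[0] - ball[0]) ** 2 + (p[1] - ball[1]) ** 2) ** 0.5
    field = {n: p for n, p in players.items() if n != "goalie"}
    ordered = sorted(field, key=lambda n: dist(field[n]))
    roles = dict(zip(ordered, ("Attacker", "Midfielder", "Defender")))
    roles["goalie"] = "Goalkeeper"
    return roles
```

In the actual scheme each resulting role is then executed as a Petri-Net Plan, and the assignment is agreed via broadcast communication among the robots.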

Robust Visual Recognition

The main sensor on the Aibo robots is a CCD camera. The robots rely on visual information to isolate particular colors, identify objects of certain shapes in the field, and estimate their distance. Our work focuses on making visual recognition robust to object occlusion and faulty color segmentation. In particular, we use histograms to represent the distribution of the target color along the various scanlines over the image. Identifying the histogram modes leads to correct recognition of the field landmarks. The figure below shows an example of recognizing the ball, a beacon, and a goal in the same camera frame. Check out our SETN 2008 paper [pdf].

A screenshot of the visualization tool.
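A minimal sketch of the histogram-mode idea, assuming a binary target-color mask and one histogram bin per image column (the real system works on scanlines over the full image and handles multiple target colors):

```python
# Count target-color pixels per image column, then find contiguous runs
# (modes) above a threshold: each mode is a candidate object, which stays
# detectable even under partial occlusion or patchy color segmentation.

def column_histogram(mask):
    """mask: 2D list of 0/1 rows (1 = pixel classified as the target color)."""
    return [sum(col) for col in zip(*mask)]

def find_modes(hist, threshold=1):
    """Return (start, end) column spans whose counts reach the threshold."""
    modes, start = [], None
    for i, count in enumerate(hist + [0]):      # sentinel closes a final run
        if count >= threshold and start is None:
            start = i
        elif count < threshold and start is not None:
            modes.append((start, i - 1))
            start = None
    return modes

# A ball split in two by an occluding robot leg still yields two nearby
# modes, which a later merging step can join into one detection.
mask = [
    [0, 1, 1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1, 1, 0],
]
```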

Reinforcement Learning

Reinforcement learning has been used widely in robotic applications, mostly in its single-agent form. Multi-agent versions have not been adopted widely due to difficulties with efficiency and with scaling to realistic domains. Recent independent research by our team leaders has led to extensions of classic planning and reinforcement learning algorithms to collaborative multi-agent learning, where many agents learn to collaborate as a team.

The scaling properties of these algorithms, achieved through exploitation of domain knowledge, make them attractive for the RoboCup domain. Factorization of the representation can be done on the basis of the proximity between players during a game. In addition, these techniques will be particularly useful for learning sophisticated motion skills for the Aibo robot. The large number of degrees of freedom on the Aibo implies a huge joint action space. This obstacle could again be overcome by appropriate factorization of the representation, based on joint proximity on the robot body. Kouretes is our venue for adapting these algorithms and testing their potential in a difficult task with real-time constraints.
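The factored-representation idea can be sketched with a coordination graph: the global value of a joint action is approximated as a sum of pairwise terms over neighbouring agents, so only nearby players' actions interact. The brute-force maximization and the toy payoff below are illustrative; practical coordination-graph algorithms use variable elimination instead of enumeration to scale:

```python
import itertools

def best_joint_action(agents, actions, edges, q_pair):
    """Maximize the sum of pairwise Q terms over all joint actions.

    agents: list of agent names; actions: shared per-agent action set;
    edges: list of (i, j) neighbour index pairs in the coordination graph;
    q_pair(i, j, ai, aj) -> float, the local Q term for that edge.
    """
    best, best_val = None, float("-inf")
    for joint in itertools.product(actions, repeat=len(agents)):
        val = sum(q_pair(i, j, joint[i], joint[j]) for i, j in edges)
        if val > best_val:
            best, best_val = joint, val
    return dict(zip(agents, best)), best_val

# Toy payoff: an edge scores only when its agents play complementary actions.
def q_pair(i, j, ai, aj):
    return 1.0 if (ai, aj) == ("pass", "receive") else 0.0

joint, value = best_joint_action(["a", "b", "c"], ["pass", "receive"],
                                 [(0, 1), (1, 2)], q_pair)
```

Because the Q-function decomposes over edges, learning and action selection only ever consider small groups of agents, which is what makes the approach plausible for proximity-based factorization on a soccer field or across joints on a robot body.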