In this document, we provide a brief overview of the software architecture developed to generate and run ODOI gaits.
We focus on gait simulation and execution only. A later step will describe the software able to trigger specific behaviors based on sensory inputs.
The objective is to generate, test and validate a set of “standard” robot gaits and to observe whether, by changing the stride length for instance, some “patterns” emerge.
These patterns can then be used to automatically generate new gaits. The main steps are shown in Figure 1 below:
Figure 1: Gait generation.

There are three main steps.
The module referred to as “ODOI Gait cycle Simulation” computes the position of each joint based on the following inputs:
- Dimensions of the limbs;
- Description of the sensors (position, threshold);
- Gait parameters – mainly whether it is a slow or a fast gait;
- Gait type, such as moving forward or backward, or rotating left or right (in which case we also provide the radius);
- And finally the “emo-mimic” that we want to compute, i.e. the emotion that the robot will have to mimic.
The output of the simulator consists, for each limb, of a list of positions for a given time increment; a minimal data sketch is given below.
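As an illustration only, the inputs and outputs above could be represented as follows. This is a minimal sketch; all names are hypothetical assumptions and do not come from the actual ODOI code.

    # Hypothetical data structures for the "ODOI Gait cycle Simulation" module.
    # All names are illustrative assumptions, not the actual implementation.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Sensor:
        name: str
        position: str      # where the sensor sits, e.g. "left_foot_sole"
        threshold: float   # trigger threshold for this sensor

    @dataclass
    class GaitRequest:
        limb_dimensions: Dict[str, float]  # limb name -> length (m)
        sensors: List[Sensor]
        speed: str                         # "slow" or "fast"
        gait_type: str                     # "forward", "backward", "rotate_left", "rotate_right"
        radius_m: float = 0.0              # rotation radius, used for rotations only
        emo_mimic: str = "neutral"         # emotion the robot has to mimic

    @dataclass
    class GaitResult:
        dt_s: float                        # time increment (s)
        positions: Dict[str, List[float]]  # joint name -> one position per increment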
The graphical interface displays the robot in the three anatomical planes (sagittal, frontal and horizontal). At this stage (September 2014) only the limb dimensions and the gait parameters have been implemented – see Figure 2.
Figure 2: Graphic display of the ODOI gaits.
The module referred to as “ODOI Gait computation” computes, for each limb, the position and the speed at a given frequency.
The inputs are:
- The gait duration;
- The ratio (%) of each phase in the cycle, as illustrated in the sketch after this list.
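For instance, the per-phase durations can be derived from these two inputs as in the following sketch. The function name, phase names and ratio values are hypothetical, chosen for illustration only.

    # Hypothetical sketch: derive per-phase durations from the gait duration
    # and the ratio (%) of each phase in the cycle.
    def phase_durations(gait_duration_s, phase_ratios_pct):
        """Map each phase name to its duration in seconds."""
        total = sum(phase_ratios_pct.values())
        assert abs(total - 100.0) < 1e-6, "phase ratios must sum to 100%"
        return {phase: gait_duration_s * pct / 100.0
                for phase, pct in phase_ratios_pct.items()}

    # Example with illustrative values: a 1.2 s cycle split into four phases.
    durations = phase_durations(1.2, {
        "double_support_1": 10.0,
        "single_support_left": 40.0,
        "double_support_2": 10.0,
        "single_support_right": 40.0,
    })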
There are two graphical outputs:
- The first one displays the angular positions for each limb in the different planes – see Figure 3;
- The second one displays the speed variation for each limb in the different planes.
The idea is to send, at a given frequency, a position and a speed to a given list of motors. However, the frequency will vary depending on the phase (during the double-support phases, for instance, a high frequency will be used for the supporting leg) and on the limb (a low frequency for the trunk, shoulder girdle and arms); a possible scheme is sketched after Figure 3.
Figure 3: Graphic display of the limb angular positions.
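The following sketch shows one way such phase- and limb-dependent frequencies could be organized. The frequency values, phase names and limb names are assumptions made for illustration, not measured requirements of the robot.

    # Hypothetical sketch of phase- and limb-dependent command frequencies.
    # All frequency values are illustrative assumptions.
    UPDATE_HZ = {
        # (phase, limb role) -> command frequency in Hz
        ("double_support", "supporting_leg"): 100.0,  # high frequency
        ("double_support", "swing_leg"): 50.0,
        ("single_support", "supporting_leg"): 100.0,
        ("single_support", "swing_leg"): 50.0,
    }
    LOW_HZ_LIMBS = {"trunk", "shoulder_girdle", "left_arm", "right_arm"}

    def command_frequency(phase, limb):
        """Frequency at which (position, speed) commands are sent to a limb."""
        if limb in LOW_HZ_LIMBS:
            return 20.0  # low frequency for trunk, shoulder girdle and arms
        return UPDATE_HZ.get((phase, limb), 50.0)  # default for other cases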
The output of this module will be a table (see Figure 4).
Figure 4: Description of the plan to be executed by the onboard computer.
For each ∆T the table may or may not contain (a data-structure sketch follows this list):
- The phase id;
- A list of motor ids, each with an angular position to be reached and the associated angular speed;
- A list of sensors with the associated thresholds to be checked, in order to make sure that everything is on track.
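A minimal sketch of how one row of this table could be represented is given below; the field names are hypothetical and do not reflect the actual table format of Figure 4.

    # Hypothetical sketch of one plan row per ∆T (cf. Figure 4).
    # Field names are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class MotorCommand:
        motor_id: int
        target_angle_deg: float    # angular position to be reached
        angular_speed_dps: float   # associated angular speed (deg/s)

    @dataclass
    class SensorCheck:
        sensor_id: int
        threshold: float           # value the reading is checked against

    @dataclass
    class PlanRow:
        dt_index: int                        # which ∆T slot this row covers
        phase_id: Optional[str] = None       # may be absent for a given ∆T
        motor_commands: List[MotorCommand] = field(default_factory=list)
        sensor_checks: List[SensorCheck] = field(default_factory=list)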
The module referred to as “Execution and Fine Tune” executes the plan downloaded into the robot.
This module actually sends commands to the motors and checks, by reading the sensor values, that everything runs smoothly.
The next step will be to introduce “reactions” in case there are inconsistencies between the values read from the sensors and the expected ones, typically a loss of balance; a sketch of such a loop is given below.
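The execution loop could look like the following sketch, reusing the hypothetical PlanRow structure above. Here send_command, read_sensor and react_to_anomaly are placeholders for the onboard primitives, which this document does not describe, and the direction of the threshold test is an assumption.

    import time

    # Hypothetical execution loop for the "Execution and Fine Tune" module.
    # send_command, read_sensor and react_to_anomaly stand in for the real
    # onboard primitives; the threshold test direction is an assumption.
    def execute_plan(plan, dt_s, send_command, read_sensor, react_to_anomaly):
        for row in plan:                     # one row per ∆T (see Figure 4)
            for cmd in row.motor_commands:
                send_command(cmd.motor_id, cmd.target_angle_deg,
                             cmd.angular_speed_dps)
            for check in row.sensor_checks:
                value = read_sensor(check.sensor_id)
                if value > check.threshold:  # reading off track
                    react_to_anomaly(row, check, value)  # e.g. loss of balance
            time.sleep(dt_s)                 # wait for the next ∆T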
Please note that the work related to this table (Figure 4) is still in progress.