Thursday, 25 November 2010

lab 11

Attendees: Carsten, Dan
Duration: 11:30 - 14:30

Goal

The goal this week is to look into projects.

Plan

We discuss different possible projects. We analyse the difficulty and
main problems of each project.


  • Road navigation

  • Synthetic creatures

  • Sex bots

Road navigation

The road navigation problem is to have a robot identify and map a Lego
road network and then travel from point to point in it.

Software and hardware


  • Robot able to:

    • Drive around

    • Use sensors to navigate the roads.

    • Know where it is (tachometer).

  • Software capable of:

    • Storing the road network (as a graph)

    • Identifying road sections

    • Resisting poor localization (tachometer errors)

    • Handling a lot of sensor input to build a consistent view of the world
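Storing the road network as a graph with point-to-point routing could be sketched like this (all class and method names are our own, illustrative only, not from any lab code):

```java
import java.util.*;

// Sketch: the road network as an undirected adjacency map, with a
// breadth-first search for a point-to-point route.
public class RoadGraph {
    private final Map<String, Set<String>> adj = new HashMap<>();

    // Add a road section between two junctions (undirected).
    public void addRoad(String a, String b) {
        adj.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        adj.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    // Breadth-first search: route with the fewest road sections.
    public List<String> route(String from, String to) {
        Map<String, String> parent = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        parent.put(from, from);
        queue.add(from);
        while (!queue.isEmpty()) {
            String cur = queue.poll();
            if (cur.equals(to)) break;
            for (String next : adj.getOrDefault(cur, Collections.<String>emptySet()))
                if (!parent.containsKey(next)) {
                    parent.put(next, cur);
                    queue.add(next);
                }
        }
        if (!parent.containsKey(to)) return Collections.emptyList();
        LinkedList<String> path = new LinkedList<>();
        for (String cur = to; !cur.equals(from); cur = parent.get(cur))
            path.addFirst(cur);
        path.addFirst(from);
        return path;
    }

    public static void main(String[] args) {
        RoadGraph g = new RoadGraph();
        g.addRoad("A", "B");
        g.addRoad("B", "C");
        g.addRoad("A", "D");
        g.addRoad("D", "C");
        System.out.println(g.route("A", "C"));
    }
}
```

Breadth-first search gives the route with the fewest sections; with measured section lengths one would switch to Dijkstra instead.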

Main difficulty


  • Initially building the network

    • The robot needs to drive for a long time with many turns (which increases the tachometer error).

    • Making sure the entire network is covered.

    • Connecting information from different sensors.

What to present at the end of the project


  • Robot to build a map of a road network

  • Move from point to point in the constructed network

  • How sensors can operate together to increase precision

  • How many small behaviors can work together.

Synthetic creatures

Make a robot that emulates a creature, with desires and behaviors of
said creature.

Software and hardware


  • A potentially weird robot (as it needs to match some creature)

  • Special form of locomotion

  • Simplifying creature behavior to something a robot can do.

  • Emulate complex creature behaviors with a small amount of code.

Main difficulty


  • Simulate complicated creatures with simple Lego blocks.

  • Realistic simulation of behavior and needs

What to present at the end of the project


  • Behavior and need simplifications

  • Robot that looks and behaves like a creature.

  • How simple sensors were used to emulate complex sensors (smell/touch/…)

Sex bots

Robots that can share code and information with each other (over IR/Bluetooth).


Different robots with same set of behaviors. Robots have different
priorities on their behaviors. Pairing robots can combine their
priorities to form a new robot.


Behaviors can simulate an artificial environment defining the needs of
the robot (water, food, hunt, …).
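One way the pairing could combine priorities is a simple crossover with mutation; a rough sketch (the combination rule and all names here are our own assumptions, not decided lab design):

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of "breeding" two robots' behavior priorities: each child
// priority is taken from one parent at random, with a small chance of
// mutation. Purely illustrative.
public class PriorityBreeder {
    public static int[] combine(int[] p1, int[] p2, Random rnd) {
        int[] child = new int[p1.length];
        for (int i = 0; i < child.length; i++) {
            child[i] = rnd.nextBoolean() ? p1[i] : p2[i];
            if (rnd.nextInt(10) == 0)           // 10% chance of mutation
                child[i] += rnd.nextInt(5) - 2; // small random tweak
        }
        return child;
    }

    public static void main(String[] args) {
        int[] parentA = {10, 5, 80};  // e.g. weights for water, food, hunt
        int[] parentB = {60, 30, 10};
        System.out.println(Arrays.toString(combine(parentA, parentB, new Random())));
    }
}
```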

Software and hardware


  • Simple robot. Maybe something to determine if one robot (set of
    priorities) yields better results than a different one.

  • Bluetooth

  • Getting information about robot developments.

  • More than one robot; possible with only one if we can determine a
    score for a set of priorities.

Main difficulty


  • Defining interesting behaviors.

  • Communication between robots

  • Comparing properties of different sets of priorities

  • Getting Bluetooth to work.

  • Combining robots.

  • Feedback:

    • Knowing exactly what behavior each robot is doing at any given
      time.

    • Getting Bluetooth to work

What to present at the end of the project


  • Unexpected good/bad behavior we wouldn't have come up with ourselves.

  • Evolutionary solutions to problems (define a problem, then
    randomize/breed robots to find the best solution).

Progress

We got Bluetooth to work.


PC programs using Bluetooth need pccomm.jar and bluecove.jar.
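For reference, compiling and launching such a PC-side program means putting both jars on the classpath, along these lines (the file name and jar paths are illustrative and depend on the leJOS install; use ';' instead of ':' on Windows):

```shell
# Illustrative: build and run a PC-side program against the
# leJOS PC communication jars.
javac -cp pccomm.jar:bluecove.jar MyPcProgram.java
java -cp pccomm.jar:bluecove.jar:. MyPcProgram
```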

Thursday, 18 November 2010

lab session 10

Attendees: Carsten, Dan
Duration: 11:30 - 13:30

Goal

The goal this week is to experiment with the leJOS behavior-based
architecture. We also look at sensor listeners.

Plan

We experiment with a robot called Bumper Car. This robot drives around
until it bumps into a wall (detected using a touch sensor). After
bumping into something it backs up and turns before continuing to
drive around.

We follow the exercise plan.

  • Investigate bumper car and its source code.
  • Implement a new behavior.
  • Implement a sensor listener
  • Improve detect wall
  • Put detection of wall into the detection of wall sequence.

Robot

Our robot this week is the same as from lab 9 with a touch sensor.

PICTURE

Code

Our modified BumperCar.java.

Investigate bumper car and its source code.

When the touch sensor is pressed, the robot backs up and turns
left. When the sensor is held down, the robot continuously backs up
and turns left. This shows that the avoid behavior has higher priority
than driving forward.

The Arbitrator loops through its behaviors, checking whether
takeControl() returns true. If a behavior returns true, the arbitrator
breaks out of the loop, meaning it stops checking the remaining
behaviors. So when DetectWall is active, DriveForward's takeControl()
is not checked.

It seems the highest priority is the last entry in the behaviors
array. The [2] link from the exercise text suggested it was the opposite.
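The loop we observed can be pictured with a toy model (a simplified sketch of the idea, not the actual lejOS Arbitrator source; here the last entry in the array wins, matching our observation):

```java
// Toy model of the arbitration loop: scan from highest priority down
// and stop at the first behavior whose takeControl() returns true.
interface SimpleBehavior {
    boolean takeControl();
    String name();
}

public class MiniArbitrator {
    // Returns the name of the winning behavior, or null if none wants control.
    static String select(SimpleBehavior[] behaviors) {
        // Last entry in the array = highest priority.
        for (int i = behaviors.length - 1; i >= 0; i--)
            if (behaviors[i].takeControl())
                return behaviors[i].name(); // "break" out of the loop here
        return null;
    }

    public static void main(String[] args) {
        SimpleBehavior driveForward = new SimpleBehavior() {
            public boolean takeControl() { return true; } // always wants control
            public String name() { return "DriveForward"; }
        };
        SimpleBehavior detectWall = new SimpleBehavior() {
            public boolean takeControl() { return true; } // wall currently detected
            public String name() { return "DetectWall"; }
        };
        // DetectWall is last, i.e. highest priority, so it wins and
        // DriveForward's takeControl() is never reached.
        System.out.println(select(new SimpleBehavior[]{driveForward, detectWall}));
    }
}
```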

Implement a new behavior.

  • Implement a new behavior that checks for Escape presses and stops
    the program if detected.

We implemented this by checking whether the Escape button is pressed
in takeControl(). We gave it the highest priority by making it the
last entry in the array of behaviors.

When the robot is going forwards, Escape is registered
immediately. When the touch sensor is held down and the robot is
continuously avoiding, Escape is not always registered. This is
because takeControl() is only run on our escape behavior between each
avoidance pattern.

The Sound.pause(x) command pauses the robot for x ms. Setting it to
2000 gives a delay of 2 seconds between each avoidance pattern (when
the touch sensor is kept pressed). This pause is required to allow the
sound waves sent from the ultrasonic sensor to get back to the robot
and be registered.

Implement a sensor listener

We make a thread that checks the ultrasonic sensor for a detected
wall. With this thread running in the background we don't have to
pause in DetectWall's takeControl() to check the sensor.

We implemented this by making a sampler thread, started from
DetectWall. In DetectWall's takeControl() function we ask the sampler
thread whether the last ultrasonic value is close enough to indicate
that a wall is detected.

With a separate thread, a Sound.pause(2000) doesn't cause pauses
between avoidance patterns as it did before.
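The sampler idea can be sketched as a background thread that stores the latest reading in a volatile field, which takeControl() inspects without blocking (the sensor is faked here and all names are ours, not the lab code):

```java
// Sketch of a background sampler: a daemon thread polls a (here faked)
// distance sensor and stores the latest reading in a volatile field,
// so takeControl() can check it without pausing.
public class Sampler extends Thread {
    private volatile int lastDistance = 255; // latest reading, cm
    private final java.util.function.IntSupplier sensor;

    public Sampler(java.util.function.IntSupplier sensor) {
        this.sensor = sensor;
        setDaemon(true); // don't keep the program alive
    }

    public void run() {
        while (true) {
            lastDistance = sensor.getAsInt();
            try { Thread.sleep(50); } catch (InterruptedException e) { return; }
        }
    }

    // Called from takeControl(): no pause needed here.
    public boolean wallDetected() {
        return lastDistance < 25;
    }

    public static void main(String[] args) throws Exception {
        Sampler s = new Sampler(() -> 10); // fake sensor: always 10 cm
        s.start();
        Thread.sleep(200); // give the sampler time to take a reading
        System.out.println(s.wallDetected());
    }
}
```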

Improve detect wall

We add 1 second of backing up before turning in the avoidance
pattern. We initially did this by going backwards, waiting a second
using Sound.pause(1000), and then turning. This caused the robot to
make a weird jump in the transition from backing up to turning. This
is probably because the backward and rotate motor functions operate at
different motor speeds.

We improved the behavior by checking whether the touch sensor is still
pressed after the robot has backed up. If it is, we move backwards for
another 2 seconds. We implemented this in DetectWall's action() method
by checking the touch sensor after having gone backwards. If it is
still pressed, we return from the function; the arbitrator will start
it again since the touch sensor is still pressed.

Put detection of wall into the detection of wall sequence.

Motivation functions have each behavior's takeControl() function
return an integer signifying how much it wants control.

To implement that in our DetectWall, and have it be able to run again
if touch is pressed again afterwards, we could use these rules:

  • previous_motivation = 0 and touch.isPressed(): return 10;
  • previous_motivation > 0 and touch.isPressed(): return previous_motivation / 2;
  • !touch.isPressed(): return 0;

This sets a high motivation value if the touch sensor is pressed; if
it is continuously pressed, the motivation value decreases. But if the
touch sensor is released and pressed again, we get a high motivation
again.
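Those rules could be sketched as a small pure function (illustrative only, not from our code):

```java
// Sketch of a motivation-based takeControl() for DetectWall:
// high motivation on a fresh press, halving while held, zero when
// released. Written as a pure function for clarity.
public class Motivation {
    public static int detectWallMotivation(boolean touchPressed, int previousMotivation) {
        if (!touchPressed) return 0;
        if (previousMotivation == 0) return 10; // fresh press: high motivation
        return previousMotivation / 2;          // held down: decay
    }

    public static void main(String[] args) {
        int m = 0;
        // Simulate holding the sensor for four cycles, releasing, pressing again.
        boolean[] pressed = {true, true, true, true, false, true};
        for (boolean p : pressed) {
            m = detectWallMotivation(p, m);
            System.out.print(m + " ");
        }
    }
}
```

Note that with integer division the motivation reaches 0 after a few cycles even while the sensor stays pressed, after which the first rule fires again; a lower bound on the decayed value would avoid that if it turned out to be a problem.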

Thursday, 11 November 2010

lab note week 9

Attendees: Carsten, Dan
Duration: 11:30 - 14:00


Goal

The goal this week is to experiment with robot positioning. We do this
by using the fact that our motors keep track of how many rotations
they have done.

Plan

We start by making a simple program to get a feel for how it works. We
can determine precision using this simple program. Then we implement
avoidance to understand the challenges present when working with motor
rotations.


  • Testing precision: Testing how accurate the motor measurements are

  • Avoiding objects: Avoiding objects while traveling.

  • Improved navigation: Comparing 2 methods of computing position and
    direction of the robot.

Robot

Our robot this week is the same as from lab 8. We use the motors (both
to move and as sensors of how much the wheels have moved) and the
sonic sensor to avoid objects. The 2 light sensors are not in use (but
still attached).


PICTURE

Testing precision

We couldn't find a ruler so we used a Lego block to measure with. Each
of our units is about 0.5 cm. We indicate the starting position by
placing a Lego block (a measuring block) on the ground.


We didn't attach a pencil or marker to the robot to record its
path. To detect inconsistencies the robot needs to travel large
distances, which would require a lot of paper to measure on, which we
didn't have access to.

Initial test

Go forward 200 then backwards. It is precise enough that we can't
measure an error.


We had a minor problem measuring because the robot's front wheel
points in a different direction when the robot is going forwards
versus backwards. We have to make sure the front wheel is in the same
position when starting as when the robot has finished moving.

Going in a square

When the robot drove in a square path with sides of 50 (with four
90-degree turns), we measured an inconsistency of 1 to 2 cm.


The robot travels the same distance as in our initial test (200), but
it has an inconsistency of 1-2 cm compared to none (from the initial
test). This suggests the inconsistency is an effect of the turns.

Code

These experiments were done with an old version of our avoidance robot
software, available later. The methods goStraightAndBack and goSquare
were used.

Avoiding objects

Here we make a robot that travels forwards a fixed distance; if an
object is in the way, the robot will go around it but still end up
having moved forwards the fixed distance.


We assume objects blocking the robot can be circumvented by doing:
turn right, forward, turn left, forward, turn left, forward, turn
right.

Solution

We avoid objects using the pattern explained above. The avoidance
pattern has 2 segments going sideways and 1 going forwards. The
sideways segments should not be counted towards the distance travelled
by the robot.


When we reach an object we:


  • Save distance travelled (tmp1).

  • Turn right, go forward, turn left.

  • Reset the motor counter.

  • Go forward.

  • Save distance travelled (tmp2).

  • Turn left, go forward, turn right.

  • go forward a distance equal to the total distance - (tmp1 + tmp2).

In practice tmp2 isn't saved, as it's part of the avoidance pattern,
so we already know the distance.


We set the 3 forward distances used in the avoidance pattern to 40.
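The bookkeeping for the travelled distance can be illustrated with a tiny calculation (the numbers other than the 40 are made up; names are ours):

```java
// Sketch of the distance bookkeeping during an avoidance pattern.
// The two sideways segments do not count towards the total; only tmp1
// (distance before the object) and tmp2 (the forward segment of the
// pattern) do.
public class AvoidanceDistance {
    static int remaining(int total, int tmp1, int tmp2) {
        return total - (tmp1 + tmp2);
    }

    public static void main(String[] args) {
        int total = 200; // target distance
        int tmp1 = 80;   // travelled before hitting the object
        int tmp2 = 40;   // fixed forward segment of the avoidance pattern
        System.out.println(remaining(total, tmp1, tmp2)); // distance left to drive
    }
}
```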


We detected objects using the ultrasonic distance sensor, and to
simulate an object we used a black laptop case.

Observations

We tested the robot with 0, 1 and 2 avoidance patterns. We estimated
the final distance travelled to be the same. Our robot would swerve to
the left, which made estimating the actual distance travelled hard.


To account for the swerve, we initially thought about turning the
robot around and going back to the start. But the swerve would just
make the robot go further off course. Negating the swerve on the way
back would mean having the robot go backwards, but then we would need
another distance sensor.


To verify that the distance travelled was the same despite objects in
the way, the final distance only needed to be precise within a
distance of 20 (half the forward distance of our avoidance pattern),
which we could verify by eye.


If we assume the inconsistency from turning is the same as our initial
tests, the travelled distance with 1 and 2 objects in the way would be
off by 2-4 and 4-8, respectively.

Code

link

Improved navigation

Here we discuss 2 methods to estimate the position and direction of a
robot from the motor rotations, the diameter of the wheels, and the
distance between the 2 wheels.

Dead Reckoning

Dead reckoning uses simple geometry to calculate how far the robot
travels per wheel rotation. It calculates the direction by assuming
the wheels turn in opposite directions for some number of rotations,
then uses sin and cos to determine how the direction has changed. This
method looks at both wheels.
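As a sketch, a dead-reckoning update from the two tacho counts could look like this (the wheel diameter and wheel base are made-up values, and the heading update is the usual small-step approximation, not taken from any lab code):

```java
// Sketch of dead reckoning: convert left/right wheel rotations into a
// new (x, y, heading). The constants are illustrative, not measured
// from our robot.
public class DeadReckoning {
    static final double WHEEL_DIAMETER = 5.6; // cm (assumed)
    static final double WHEEL_BASE = 12.0;    // cm between the wheels (assumed)

    double x = 0, y = 0, heading = 0;         // heading in radians

    // degreesLeft/degreesRight: tacho counts since the last update.
    void update(int degreesLeft, int degreesRight) {
        double distLeft = Math.PI * WHEEL_DIAMETER * degreesLeft / 360.0;
        double distRight = Math.PI * WHEEL_DIAMETER * degreesRight / 360.0;
        double dist = (distLeft + distRight) / 2.0;
        heading += (distRight - distLeft) / WHEEL_BASE; // small-step approximation
        x += dist * Math.cos(heading);
        y += dist * Math.sin(heading);
    }

    public static void main(String[] args) {
        DeadReckoning dr = new DeadReckoning();
        dr.update(360, 360); // one full rotation of both wheels: straight ahead
        System.out.printf("x=%.1f y=%.1f%n", dr.x, dr.y);
    }
}
```

Calling update() often, with small tacho deltas, keeps the approximation error low; the turning inconsistency we measured above shows up here as accumulated heading error.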

Forward Kinematics

Forward kinematics saves poses, which give the robot's x, y
coordinates and heading at a given time. To determine a new pose, a
matrix transformation based on angular rotation around a point is
used.

Thursday, 4 November 2010

lab note 8

Attendees: Carsten, Dan
Duration: 11:30 - 13:15

Goal

The main goal is to experiment with defining multiple behaviors and
defining how they work together.

Plan

We experiment with the SoundCar.java program, which has 3 behaviors:
driving randomly, avoiding obstacles, and playing sounds. The plan
consists of:



  • Understanding the LCD output.


  • Analysing the behavior of our robot.


  • Understanding how priority of behaviors are implemented.


  • Include the light follower behavior from last week.

Robot

We added an ultrasonic distance sensor to our robot from lab 7. The
robot now has 2 light sensors and 1 ultrasonic sensor.


Understanding the LCD output

0/1: in random drive mode everything is 0. When the robot is backing
up (avoiding), drive shows 1. When the sound is playing, both drive and
avoid show 1. This fits with the code: a thread shows 1 if its motor
use is suppressed and 0 if that thread can access the motors.
s/f/b: stop/forward/back. Indicates what a thread commands the motors
to do (go forward, go back, or stop). When drive shows f and avoid
shows b, the robot is going backwards.

Analysing the behavior of our robot

With only the drive thread running, the robot just drives forward
randomly. With avoid and sound also active, the avoid behavior stays
inactive unless something is put in front of the robot (then it
avoids). It can be seen that the robot stops in the middle of an avoid
if the sound is played.

Understanding how priority of behaviors are implemented

Daemon

One reason to use daemon threads is that the program stops when only
daemon threads remain (so in this case we don't need to stop the 3
threads explicitly). Daemon threads are services running in the
background; marking these threads (behaviors) as daemons signifies
that they are services and not directly part of the program (they just
always run in the background).

Priority

The suppressCount integer signifies how many higher-priority threads
have requested access to the motors (by calling suppress()).


When the robot is avoiding, it calls suppress() (which sets
drive.suppressCount to 1), then does its avoidance, then calls
release() (which sets drive.suppressCount back to 0).

If the sound is played in the middle of an avoidance, sound calls
suppress(), which sets avoid.suppressCount to 1 and drive.suppressCount
to 2. When sound releases, avoid.suppressCount is set back to 0 and
drive.suppressCount to 1 (which means avoid now has access to the
motors while drive still does not).
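In miniature, this bookkeeping can be sketched as a toy model (not the SoundCar source; the counts below follow the scenario just described):

```java
// Toy model of the suppressCount mechanism: each behavior counts how
// many higher-priority behaviors currently suppress it, and may use
// the motors only while that count is zero.
public class SuppressDemo {
    static class Beh {
        int suppressCount = 0;
        boolean hasMotorAccess() { return suppressCount == 0; }
    }

    public static void main(String[] args) {
        Beh drive = new Beh();
        Beh avoid = new Beh();

        // Avoid starts avoiding: it suppresses drive.
        drive.suppressCount++;
        // Sound starts in the middle: it suppresses both lower behaviors.
        avoid.suppressCount++;
        drive.suppressCount++;
        System.out.println("during sound: drive=" + drive.suppressCount
                + " avoid=" + avoid.suppressCount);

        // Sound releases: avoid regains motor access, drive is still suppressed.
        avoid.suppressCount--;
        drive.suppressCount--;
        System.out.println("after sound: avoid has motors: " + avoid.hasMotorAccess()
                + ", drive has motors: " + drive.hasMotorAccess());
    }
}
```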

Include the light follower behavior from last week

Our light follower from lab 7 used motors directly, but for the
suppress priority to work it needs to use Car, so we started by
modifying that.

We insert the light follower thread above the drive but below the
avoid in the hierarchy. This results in the robot driving randomly
when there is no light source of significant intensity (above the
average of the last couple of seconds). The robot will spend most of
its time driving around after the light because the robot will
usually start up again when the average stabilizes. If the robot gets
too close to an obstacle the avoid thread takes over and tries to
avoid. While playing a sound the robot will still stop moving.

The final implementation of the LightFollower class looks like this:
import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.MotorPort;
import lejos.nxt.SensorPort;
import lejos.nxt.addon.RCXLightSensor;

public class LightFollower extends Behavior {

    private int MAXLIGHT;
    private int MINLIGHT;

    RCXLightSensor lightSensorLeft = new RCXLightSensor(SensorPort.S3);
    RCXLightSensor lightSensorRight = new RCXLightSensor(SensorPort.S2);
    MotorPort leftMotor = MotorPort.C;
    MotorPort rightMotor = MotorPort.B;

    private final int forward = 1,
                      backward = 2,
                      stop = 3;

    int averageLeft = 0, averageRight = 0;

    final int BETA = 25;

    public LightFollower(String name, int LCDrow, Behavior b) {
        super(name, LCDrow, b);
    }

    public void run() {
        MAXLIGHT = 0;
        MINLIGHT = 1000;
        while (!Button.ESCAPE.isPressed()) {
            int normalizedLightLeft = normalize(lightSensorLeft.getNormalizedLightValue());
            int normalizedLightRight = normalize(lightSensorRight.getNormalizedLightValue());
            averageLeft = average(averageLeft, normalizedLightLeft);
            averageRight = average(averageRight, normalizedLightRight);
            int powerToLeft = 0;
            int powerToRight = 0;
            if (normalizedLightLeft > averageLeft) {
                powerToLeft = normalizedLightLeft;
            }
            if (normalizedLightRight > averageRight) {
                powerToRight = normalizedLightRight;
            }
            suppress();
            forward(powerToLeft, powerToRight);
            delay(100);
            stop();
            release();
        }
    }

    // Scale a raw reading to 0-100 between the lowest and highest
    // values seen so far.
    int normalize(int light) {
        if (light > MAXLIGHT)
            MAXLIGHT = light;
        if (light < MINLIGHT)
            MINLIGHT = light;
        int output;
        if (MAXLIGHT == MINLIGHT)
            output = 0;
        else
            output = 100 * (light - MINLIGHT) / (MAXLIGHT - MINLIGHT);
        if (output > 100)
            output = 100;
        LCD.drawString("output is: " + output, 0, 7);
        return output;
    }

    // Exponential moving average of the light level.
    int average(int formerAverage, int light) {
        int output = formerAverage + (BETA * (light - formerAverage)) / 100;
        return output;
    }
}

The robot behaves mostly like the one from last week, except it now
also avoids obstacles. The sound is played every few seconds. The
random drive thread is not noticeably active; this is likely because
the light follower is only inactive while waiting for the average
light level to level out.

Results

We have analyzed and understood the SoundCar.java program, and we have
learned how to implement different behaviors with different priorities
using threads.