Thursday, January 20, 2011

Wednesday, January 12, 2011

lab 16 - Presentation preparation

Date: 2011-01-12 Wed
Attendances: Carsten, Dan
Duration: 11:15 - 16:00

Preparing for presentation.

Finished up GUI. Bigger font and color (colors to easily identify
robots).

We didn't add hunger updates to the GUI because we use event-based
message passing, and hunger updates constantly, which would make it
difficult to implement correctly.

Final changes: the kill event now gives the killing robot 20 hunger
but does not affect the killed robot.

Made the track pretty by using smooth-cornered cut pieces of paper
instead of just A4 sheets. Robots can still move away from food while
eating (because of their eating move pattern).

The light sensor on the red robot was unreliable, so we lowered the
error threshold. This could make the other two robots less likely to
detect the border, but we didn't observe that behavior.

When a robot kills something (detects an object within a value of 30)
its hunger is set to 40. If we just decreased the hunger value, it
would very quickly drop to 0 or beyond. Setting it to 40 makes hunting
less effective than before (when it was effectively set to 0).
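The kill rule above can be sketched as follows. This is our own
illustration, not the project's code: the class and method names are
made up, and only the set-to-40 rule from the notes is modeled.

```java
// Sketch of the kill rule from the notes; names are ours, not the project's.
class HungerRules {
    static final int KILL_HUNGER = 40;  // hunger level after a successful kill
    static final int MAX_HUNGER = 100;  // hunger of 100 means death

    // Rather than subtracting a fixed amount (which would quickly reach 0
    // or beyond), a kill simply sets hunger to a fixed level.
    static int afterKill(int hunger) {
        return KILL_HUNGER;
    }
}
```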

Individual robot behavior

Blue (Sif)

  • Motivation: eat = 25, hunt = 80
  • Starts in hunt
  • Goes to search when hunger = 56
  • Eats until hunger = 43

Red (NXT)

  • Motivation: eat = 25, hunt = 50
  • Starts in hunt
  • Goes to search when hunger = 25
  • Eats until hunger = 18

White (NXT2)

  • Motivation: eat = 15, hunt = 25
  • Starts in hunt
  • Goes to search when hunger = 12
  • Eats until hunger = 0

Presentation

Our intended project

  • Build robots with behaviors
  • Have robots interact
  • Define new robot personalities

Show robots

  • Hunger: constantly increasing, 100 means death
  • Modes: search, eat, hunt, avoid
  • On death, a robot gets a new personality that is the average of the other 2
  • Behavior scores
    • search: hunger + eatingMotivation
    • hunt: huntingMotivation
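A minimal sketch of how these two scores compete (names are our own;
the real controller also handles eat and avoid):

```java
// Sketch of the score competition between search and hunt described above.
class BehaviorScores {
    static int searchScore(int hunger, int eatingMotivation) {
        return hunger + eatingMotivation;
    }

    static int huntScore(int huntingMotivation) {
        return huntingMotivation;
    }

    // The behavior with the highest score is executed.
    static String pick(int hunger, int eatingMotivation, int huntingMotivation) {
        return searchScore(hunger, eatingMotivation) > huntScore(huntingMotivation)
                ? "search" : "hunt";
    }
}
```

For example, with Blue's personality (eat = 25, hunt = 80), a hunger of
56 gives a search score of 81, which just beats the hunt score of 80.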

Elements used and how to connect

  • 3 Robots with the same software and setup
  • PC to show robot behavior
  • PC to send new behavior to robots when they die
  • light sensor to detect track, border and food
  • sound sensor to detect other robots (kill)

What we are proud of

  • Simple system of behaviors resulting in varied personalities

What we are not proud of

  • Interaction. Can't communicate (kill/death)
  • Track borders

What we would have done with more time

  • Collision detection
  • Communication
  • More, and more advanced, behaviors

Thursday, December 16, 2010

lab 14

Date: 2010-12-16 Thu
Attendances: Carsten, Dan
Duration: 9:15 - 16:00

Continuing project

Fixing Bluetooth trouble

Getting Bluetooth communication working in our current code
framework. We fixed it by sending overlong strings to force read() to
stop blocking.

Bluetooth working with 2 robots

Behaviors

4 modes

  • Hunt: Killing other robots.
    • Uses the distance sensor to detect how close other robots are.
    • If close enough, kills the other robot.
    • Cannot find food in this mode (the light sensor doesn't register
      it).
    • The light sensor still needs to detect the border.
    • Increases hunger.
  • Search: Searching for food.
    • Sonic sensor not used.
    • Uses the light sensor to detect food.
    • Increases hunger.
  • Eat: Eating food.
    • Entered after having found food.
    • Stands still and eats.
    • Decreases hunger.
  • Avoid: Don't move outside the level.
    • Simple sequence of moving backwards and turning around.

Personality

A robot's personality is defined by:

  • Hunting Motivation: Determines how hungry the robot will get
    before it stops hunting and starts searching for food.
  • Eating Motivation: Determines how much the robot eats after
    having found a food source.
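As a sketch, a personality is just these two numbers. The class and
field names below are our own, not necessarily the project's:

```java
// Sketch of a personality: the two motivation values described above.
class Personality {
    final int huntingMotivation; // how hungry the robot gets before it stops hunting
    final int eatingMotivation;  // how much the robot eats at a food source

    Personality(int huntingMotivation, int eatingMotivation) {
        this.huntingMotivation = huntingMotivation;
        this.eatingMotivation = eatingMotivation;
    }
}
```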

Initial test

Our initial test with 1 robot and no avoidance gave some interesting
results. First of all, the robot moved way too fast. We need the
robots to move as slowly as possible to be able to see what they are
doing. Also, our initial guess at a personality gave a good mix of
hunting, searching and eating. It would hunt for a while, then get
hungry and go into search mode. And when finding something to eat it
would stay and eat for a good while before going back to hunting.

sensor reading interval

Our robot has a step count of 0.5 seconds, which indicates when to
increase hunger. But 0.5 seconds is enough time for the robot to move
over a border or food without behaviors acting on it. So we simulated
concurrency with a busy-wait: we sleep 10 ms at a time and on each
iteration check if a border or some food is detected. If so, the 0.5
second sleep is interrupted.
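The interruptible 0.5 second step can be sketched like this. It is a
simplification under our own names: the real code polls the light
sensor rather than a generic detector interface.

```java
// Sketch of the simulated concurrency: sleep in 10 ms slices and bail out
// early if a border or food is detected in the meantime.
class SteppedWait {
    interface Detector { boolean detected(); }

    // Returns true if the detector fired and the wait was cut short.
    static boolean sleepInterruptibly(int stepMs, Detector detector) {
        try {
            for (int waited = 0; waited < stepMs; waited += 10) {
                if (detector.detected()) return true;
                Thread.sleep(10);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return false;
    }
}
```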

The reason this works is that only search and hunt can be
interrupted, and neither of those does any waiting; they just choose
new speed settings for the wheels. Avoid and eat, by contrast, do use
Thread.sleep.

Track

We set up a track. A beige rectangle with gaffer's tape as an edge and
pieces of white paper signifying food.

Killing

We implemented the kill mechanics naively by checking if the distance
sensor detects something within 10 cm (or at least when the sonic
sensor returns a value less than 10). When a robot kills something it
will make a sound. It is up to the humans to detect and remove the
robot that was killed. Having a robot detect when it is killed is not
easy, and we decided to just emulate that behavior ourselves.

Track test

In our initial test, one robot with an average personality would run
around and hunt a bit, but spent most of its time searching and
eating. It stayed alive throughout the trial of 5 minutes or so. The
main problem was that the robot would sometimes not detect edges
properly and run off the track, requiring help to get back. Another
problem was that our pieces of paper were not taped down, so the robot
would move the paper around, confusing itself.

Personality

The PC can now transmit a new personality to a robot. This was
difficult because leJOS NXJ doesn't report NullPointerExceptions
thrown in its threads.

Thursday, December 9, 2010

lab 13

Date: 2010-12-09 Thu
Attendances: Carsten, Dan
Duration: 11:15 - 16:15

Continuing project.

This week we implemented the score-based behavior controller.

The controller is very similar to the one from lab 10, except this one
assigns a score to each behavior, sorts the list and executes the
behavior with the highest score.

The main problem here is designing a robot with a behavior that has
some good middle ground. If simply having one of its behavior
properties as high or low as possible is always the best option, it is
not an interesting problem.

We have ended up with a robot that sometimes searches for water and
sometimes hunts for other robots. When in search-mode it won't catch
other robots even if it bumps into one. When finding a source of water
(a patch of coloured paper on the ground) the robot stands still and
refills its energy. When in hunt-mode the robot goes faster and if it
bumps into another robot it kills it.

Our robots move around randomly. They do this by having 5 different
move modes (forward, slightly left, slightly right, sharply left,
sharply right). Each mode has an even chance of being selected. Each
mode runs for some time interval, after which a new move mode is
selected.
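The random movement above can be sketched as follows. This is our own
illustration: mode names are labels for the five patterns, and the
real code drives motors instead of returning strings.

```java
import java.util.Random;

// Sketch of the random movement: five move modes, each equally likely,
// with a new mode picked after each time interval.
class MoveModes {
    static final String[] MODES = {
        "forward", "slightly left", "slightly right", "sharply left", "sharply right"
    };

    static String pick(Random rng) {
        return MODES[rng.nextInt(MODES.length)]; // even chance for each mode
    }
}
```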

The basic premise of the robot is that it has a hunger indicator. The
hunger indicator decreases over time. When a robot is hungry it
searches for food. When it is not hungry it hunts for other robots.

The track we imagine is a flat white area. It is surrounded by a black
edge, and food sources are grey pieces of paper. The light sensor can
be calibrated to distinguish the track, the edge and food.
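The calibration could boil down to two thresholds, as in this sketch.
The threshold values here are invented for illustration; the real
ones come from calibrating the light sensor on the actual track.

```java
// Sketch of classifying a calibrated light reading; the thresholds are
// made-up examples, not the values from our calibration.
class LightClassifier {
    static final int EDGE_MAX = 35; // black edge reads darkest
    static final int FOOD_MAX = 50; // grey food reads in between

    static String classify(int lightValue) {
        if (lightValue <= EDGE_MAX) return "edge";
        if (lightValue <= FOOD_MAX) return "food";
        return "track";             // white track reads brightest
    }
}
```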

Hunting for other robots is done using the sonic distance sensor.

Log

We started by making the behavior controller

We then made some behaviors, and decided on hardware setup

We then made the logger. We had trouble with this. We stopped before
we figured it out.

classes

  • [X] Bluetooth
  • [X] PC logger (server)
  • [X] NXJ logger (client)
  • [X] score-based behavior controller
  • [X] behaviors
    • Move randomly (search)
    • Hunt
    • Eat (check light sensor, stop and eat if food is found and hunger
      is low enough)
  • [X] Hardware setup
  • [X] Light sensor values calibration
  • [X] Behaviors using sensors correctly
  • [X] Personality
  • [X] PC breeder (Personality merge)
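The PC breeder's personality merge could look like this sketch (the
real class layout may differ; here a personality is just an int pair):

```java
// Sketch of the breeder: a dead robot's replacement personality is the
// average of the two surviving robots' motivation values.
class Breeder {
    // Each personality is {huntingMotivation, eatingMotivation}.
    static int[] merge(int[] a, int[] b) {
        return new int[] { (a[0] + b[0]) / 2, (a[1] + b[1]) / 2 };
    }
}
```

For example, merging Blue (hunt = 80, eat = 25) with Red (hunt = 50,
eat = 25) would give a new personality of (hunt = 65, eat = 25).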

Thursday, December 2, 2010

lab note 12, starting project bluetooth communication

Attendances: Carsten, Dan
Duration: 11:15 - 15:00

Continuing project

Borrowed 2 standard robots.

classes

  • Bluetooth
  • PC logger (server)
    • Made initial server
    • Uses ObjectOutputStream (paired with ObjectInputStream on the
      receiver) to send object instances across Bluetooth.
    • Instances being sent implement Message.
  • NXJ logger (client)
    • Made initial client
  • score-based behavior controller
  • behaviors
  • PC breeder
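The object-stream mechanism the logger uses can be demonstrated with
in-memory streams. TextMessage below is a hypothetical stand-in for
one of the project's Message implementations, and byte-array streams
stand in for the Bluetooth connection.

```java
import java.io.*;

// Hypothetical stand-in for one of the project's Message implementations.
class TextMessage implements Serializable {
    final String text;
    TextMessage(String text) { this.text = text; }
}

// Round-trips a message through object streams, the same mechanism the
// logger uses over Bluetooth, demonstrated here with byte-array streams.
class MessagePipe {
    static TextMessage roundTrip(TextMessage m) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            new ObjectOutputStream(buf).writeObject(m);
            ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()));
            return (TextMessage) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```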

Thursday, November 25, 2010

lab 11

Attendances: Carsten, Dan
Duration: 11:30 - 14:30

Goal

The goal this week is to look into projects.

Plan

We discuss different possible projects. We analyse the difficulty and
main problems of each project.


  • Road navigation

  • Synthetic creatures

  • Sex bots

Road navigation

The road navigation problem is to have a robot identify and map a Lego
road network and be able to go from point to point in it.

Software and hardware


  • Robot able to:

    • Drive around

    • Sensors to navigate the roads.

    • Know where it is (Tacho meter)

  • Software capable of:

    • Storing road network (make graph)

    • identifying road sections

    • Resistance to poor location (tacho meter errors)

    • Handling a lot of sensor input to build a consistent view of the world

Main difficulty


  • Mainly the initial building of the network

    • The robot needs to drive for a long time with a lot of turns (which increases tacho error)

    • Making sure the entire network is covered.

    • Connecting information from different sensors.

What to present at end of project.


  • Robot to build a map of a road network

  • Move from point to point in the constructed network

  • How sensors can operate together to increase precision

  • How many small behaviors can work together.

Synthetic creatures

Make a robot that emulates a creature, with desires and behaviors of
said creature.

Software and hardware


  • A potentially weird robot (as it needs to match some creature)

  • Special form of locomotion

  • Simplifying creature behavior to something a robot can do.

  • Emulate complex creature behaviors with a small amount of code.

Main difficulty


  • Simulate complicated creatures with simple Lego blocks.

  • Realistic simulation of behavior and needs

What to present at end of project.


  • Behavior and need simplifications

  • Robot that looks and behaves like a creature.

  • How simple sensors were used to emulate complex sensors (smell/touch/…)

Sex bots

Robots that can share code and information with each other (over IR/Bluetooth).


Different robots with same set of behaviors. Robots have different
priorities on their behaviors. Pairing robots can combine their
priorities to form a new robot.


Behaviors can simulate an artificial environment defining the needs of
the robot (water, food, hunt, …).

Software and hardware


  • Simple robot. Maybe something to determine if one robot (set of
    priorities) yields better results than a different one.

  • Bluetooth

  • Getting information about robot developments.

  • More than 1 robot; possible with 1 if we can determine a score for
    a set of priorities.

Main difficulty


  • Defining interesting behaviors.

  • Communication between robots

  • Comparing properties of different sets of priorities

  • Getting Bluetooth to work.

  • Combining robots.

  • feedback:

    • Knowing exactly what behavior each robot is doing at any given
      time.

    • Getting Bluetooth to work

What to present at end of project.


  • Unexpected good/bad behavior we wouldn't have come up with ourselves.

  • Evolutionary solutions to problems. (define problem. randomize/breed
    robots to find best solution)

Progress

We got Bluetooth to work.


PC programs using Bluetooth need pccomm.jar and bluecove.jar.

Thursday, November 18, 2010

lab session 10

Attendances: Carsten, Dan
Duration: 11:30 - 13:30

Goal

The goal this week is to experiment with the leJOS behavior-based
architecture. We also look at sensor listeners.

Plan

We experiment with a robot called Bumper Car. This robot drives around
until it bumps into a wall (detected using a touch sensor). After
being bumped it backs up and turns before continuing to drive around.

We follow the exercise plan.

  • Investigate bumper car and its source code.
  • Implement a new behavior.
  • Implement a sensor listener
  • Improve detect wall
  • Put detection of wall into the detection of wall sequence.

Robot

Our robot this week is the same as from lab 9 with a touch sensor.

PICTURE

Code

Our modified BumperCar.java.

Investigate bumper car and its source code.

When the touch sensor is pressed, the robot backs up and turns
left. When the sensor is held down, the robot continuously backs up
and turns left. This shows that the avoid behavior has higher priority
than drive forward.

The Arbitrator loops through its behaviors checking if takeControl()
returns true. If a behavior returns true, the arbitrator breaks out of
the loop, meaning it stops checking the remaining behaviors. So when
DetectWall is active, DriveForward's takeControl() is not checked.
It seems the behavior with the highest priority is the last one in the
behaviors array. The [2] link from the exercise text suggested the
opposite.
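A minimal model of the arbitration order we observed (this is our own
sketch, not the real leJOS Arbitrator, which also handles suppression
and runs the winner's action()):

```java
// Minimal model of the arbitration we observed: scan the array from the
// end, so the last behavior has the highest priority, and stop at the
// first behavior that wants control.
class MiniArbitrator {
    interface Behavior {
        boolean takeControl();
        String name();
    }

    static String select(Behavior[] behaviors) {
        for (int i = behaviors.length - 1; i >= 0; i--) {
            if (behaviors[i].takeControl()) return behaviors[i].name();
        }
        return null; // no behavior wants control
    }
}
```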

Implement a new behavior.

  • Implement a new behavior that checks for Escape presses and stops
    the program if detected.

We implemented this by checking if the escape button is pressed in
takeControl(). We made it the highest priority by setting it as the
last entry in the array of behaviors.

When the robot is going forward, escape is registered
immediately. When the touch sensor is held down and the robot is
continuously avoiding, the escape press is not always registered. This
is because takeControl() is only run on our escape behavior between
each avoidance pattern.

The Sound.pause(x) command pauses the robot for x ms. Setting it to
2000 gives a delay of 2 seconds between each avoidance pattern (when
the touch sensor is kept pressed). This pause is required to allow the
sound waves sent from the sonic sensor to get back to the robot and be
registered.

Implement a sensor listener

We make a thread that checks the sonic sensor for a detected
wall. With this thread running in the background, we don't have to
pause in DetectWall's takeControl() to check the sensor.

We implemented this by making a sampler thread, started from
DetectWall. In DetectWall's takeControl() function we check whether
the sampler thread's last sonic value is close enough to indicate that
a wall is detected.

With a separate thread, Sound.pause(2000) doesn't cause pauses
between avoidance patterns as it did before.

Improve detect wall

We add 1 second of backing up before turning in the avoidance
pattern. We initially did this by going backwards, waiting a second
using Sound.pause(1000), and then turning. This caused the robot to
make a weird jump in the transition from backing up to turning,
probably because the backward and rotate motor functions operate at
different motor speeds.

We improved the behavior by checking if the touch sensor is pressed
after the robot has backed up. If it is pressed, we move backwards for
another 2 seconds. We implemented this in DetectWall's action() method
by checking if the touch sensor is pressed after having gone
backwards. If it is, we stop the function. The function will be
started again by the arbitrator since the touch sensor is still
pressed.

Put detection of wall into the detection of wall sequence.

Motivation functions have each behavior's takeControl() function
return an integer signifying how much it wants control.

To implement this in our DetectWall, and have it able to run again if
touch is pressed again afterwards, we could use these rules:

  • previous_motivation = 0 and touch.isPressed(): return 10;
  • previous_motivation > 0 and touch.isPressed(): return previous_motivation / 2;
  • !touch.isPressed(): return 0;
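Those rules can be sketched as follows (the class, field and starting
value of 10 are our own illustration):

```java
// Sketch of the motivation rules listed above: a fresh press scores high,
// a held press halves the score each check, and a release resets the
// state so a new press scores high again.
class DetectWallMotivation {
    private int previousMotivation = 0;

    int takeControl(boolean touchPressed) {
        if (!touchPressed) {
            previousMotivation = 0;
            return 0;
        }
        previousMotivation = (previousMotivation == 0) ? 10 : previousMotivation / 2;
        return previousMotivation;
    }
}
```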

This gives a high motivation value when the touch sensor is first
pressed; if it is continuously pressed the motivation value
decreases. But if the touch sensor is released and pressed again, we
get a high motivation again.