Thursday, December 16, 2010

lab 14

Date: 2010-12-16 Thu
Attendees: Carsten, Dan
Duration: 9:15 - 16:00

Continuing project

Fixing Bluetooth trouble

Getting Bluetooth communication working in our current code
framework. We fixed it by sending overly long strings to force read()
to return.

Bluetooth working with 2 robots

Behaviors

3 modes

  • Hunt: Killing other robots.
    • Uses the distance sensor to detect how close other robots are.
    • If close enough, kills the other robot.
    • Cannot find food when in this mode (the light sensor doesn't
      register it).
    • The light sensor still needs to detect the border.
    • Increases hunger.
  • Search: Searching for food.
    • The sonic sensor is not used.
    • Uses the light sensor to detect food.
    • Increases hunger.
  • Eat: Eating food.
    • Entered after having found food.
    • Stands still and eats.
    • Decreases hunger.
  • Avoid: Don't move outside the level.
    • A simple sequence of moving backwards and turning around.

Personality

A robot's personality is defined by:

  • Hunting Motivation: Determines how hungry the robot will get before
    it stops hunting and starts searching for food.
  • Eating Motivation: Determines how much the robot eats after having
    found a food source.
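
A minimal sketch of how such a personality could be represented (the
class and field names here are our own, not from the final code):

public class Personality {
    // Hunger level at which the robot stops hunting and starts searching.
    public final int huntingMotivation;
    // How much the robot eats (hunger reduction) at a found food source.
    public final int eatingMotivation;

    public Personality(int huntingMotivation, int eatingMotivation) {
        this.huntingMotivation = huntingMotivation;
        this.eatingMotivation = eatingMotivation;
    }
}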

Initial test

Our initial test with 1 robot and no avoidance gave some interesting
results. First of all, the robot moved way too fast. We need the
robots to move as slowly as possible to be able to see what they are
doing. Also, our initial guess at a personality gave a good mix of
hunting, searching and eating. It would hunt for a while, then get
hungry and go into search mode. And when it found something to eat it
would stay and eat for a good while before going back to hunting.

Sensor reading interval

Our robot has a step count of 0.5 seconds; each step indicates when to
increase hunger. But 0.5 seconds is enough time for the robot to move
over a border or food without the behaviors acting on it. So we
simulated concurrency with a busy-wait: we sleep 10 ms at a time and
on each iteration check whether a border or some food is detected. If
so, the 0.5 second sleep is interrupted.

The reason this works is that only search and hunt can be interrupted,
and neither of those does any waiting; they just choose new speed
settings for the wheels. Avoid and eat, on the other hand, do use
Thread.sleep.
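
A minimal sketch of the busy-wait, assuming borderDetected() and
foodDetected() helpers that read the light sensor:

public class HungerStep {
    int hunger = 0;

    void interruptibleStep() throws InterruptedException {
        // Sleep 0.5 s in 10 ms slices, aborting early on border/food.
        for (int slept = 0; slept < 500; slept += 10) {
            if (borderDetected() || foodDetected()) {
                return; // abort the step so a behavior can react immediately
            }
            Thread.sleep(10);
        }
        hunger++; // a full step passed: increase hunger
    }

    boolean borderDetected() { return false; } // light sensor check, omitted
    boolean foodDetected() { return false; }   // light sensor check, omitted
}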

Track

We set up a track. A beige rectangle with gaffer's tape as an edge and
pieces of white paper signifying food.

Killing

We implemented the kill mechanics naively by checking whether the
distance sensor detects something within 10 cm (or at least when the
sonic sensor returns a value less than 10). When a robot kills
something it makes a sound. It is up to the humans to detect and
remove the robot that was killed. Having a robot detect when it is
killed is not easy, and we decided to just emulate that behavior
ourselves.
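
A minimal sketch of that check (the sensor port is an assumption):

import lejos.nxt.Sound;
import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;

public class KillCheck {
    static final UltrasonicSensor sonic = new UltrasonicSensor(SensorPort.S4);

    // getDistance() returns cm; 255 means nothing in range.
    static boolean tryKill() {
        if (sonic.getDistance() < 10) {
            Sound.beepSequenceUp(); // signal the kill; a human removes the victim
            return true;
        }
        return false;
    }
}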

Track test

In our initial test, one robot with an average personality would run
around and hunt a bit, but spend most of its time searching and
eating. It stayed alive throughout the trial of 5 minutes or so. The
main problem was that the robot would sometimes not detect edges
properly and run off the track, requiring help to get back. Another
problem was that our pieces of paper were not taped down, so the robot
would move the paper around and confuse itself.

Personality

The PC can now transmit a new personality to a robot. This was
difficult as the NXJ doesn't report null pointer exceptions in its
threads.

Thursday, December 9, 2010

lab 13

Date: 2010-12-09 Thu
Attendees: Carsten, Dan
Duration: 11:15 - 16:15

Continuing project.

This week we implemented the score-based behavior controller.

The controller is very similar to the one from lab 10, except this one
assigns scores to each behavior, sorts the list and executes the
behavior with the highest score.
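
A minimal sketch of the idea, using our own interface (not the leJOS
Behavior); picking the maximum is equivalent to sorting and taking the
top element:

interface ScoredBehavior {
    int score(); // how much this behavior wants control right now
    void act();  // run one step of the behavior
}

class ScoreArbitrator {
    private final ScoredBehavior[] behaviors;

    ScoreArbitrator(ScoredBehavior[] behaviors) {
        this.behaviors = behaviors;
    }

    // Execute the behavior with the highest score.
    void step() {
        ScoredBehavior best = behaviors[0];
        for (ScoredBehavior b : behaviors) {
            if (b.score() > best.score()) {
                best = b;
            }
        }
        best.act();
    }
}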

The main problem here is designing a robot with behavior that has some
good middle ground: if simply having one of its behavior properties as
high or low as possible is always the best option, it is not an
interesting problem.

We have ended up with a robot that sometimes searches for water and
sometimes hunts for other robots. When in search-mode it won't catch
other robots even if it bumps into one. When finding a source of water
(a patch of coloured paper on the ground) the robot stands still and
refills its energy. When in hunt-mode the robot goes faster and if it
bumps into another robot it kills it.

Our robots move around randomly. A robot does this by having 5
different move modes (forward, slightly left, slightly right, sharply
left, sharply right). Each mode has an even chance of being
selected. Each mode runs for some time interval, after which a new
move mode is selected.
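
A minimal sketch of the move-mode selection (motor ports and power
levels are assumptions):

import java.util.Random;
import lejos.nxt.MotorPort;

public class RandomDrive {
    static final int FORWARD = 1; // controlMotor mode constant
    static final MotorPort left = MotorPort.C;
    static final MotorPort right = MotorPort.B;
    static final Random rnd = new Random();

    // Called once per interval: each of the 5 modes has an even chance.
    static void pickMoveMode() {
        switch (rnd.nextInt(5)) {
            case 0: drive(60, 60); break; // forward
            case 1: drive(60, 40); break; // slightly right
            case 2: drive(40, 60); break; // slightly left
            case 3: drive(60, 0); break;  // sharply right
            case 4: drive(0, 60); break;  // sharply left
        }
    }

    static void drive(int l, int r) {
        left.controlMotor(l, FORWARD);
        right.controlMotor(r, FORWARD);
    }
}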

The basic premise of the robot is that it has a hunger indicator. The
hunger indicator decreases over time. When a robot is hungry it
searches for food. When it is not hungry it hunts for other robots.

The track we imagine is a flat white area, surrounded by a black edge,
with grey pieces of paper as food sources. The light sensor can be
calibrated to distinguish the track, the edge and the food.

Hunting for other robots is done using the sonic distance sensor.

Log

We started by making the behavior controller.

We then made some behaviors and decided on a hardware setup.

We then made the logger. We had trouble with this and stopped before
we figured it out.

classes

  • [X] Bluetooth
  • [X] PC logger (server)
  • [X] NXJ logger (client)
  • [X] score-based behavior controller
  • [X] behaviors
    • Move randomly (search)
    • Hunt
    • Eat (check light sensor, stop and eat if food is found and hunger
      is low enough)
  • [X] Hardware setup
  • [X] Light sensor values calibration
  • [X] Behaviors using sensors correctly
  • [X] Personality
  • [X] PC breeder (Personality merge)

Thursday, December 2, 2010

lab note 12, starting project Bluetooth communication

Attendees: Carsten, Dan
Duration: 11:15 - 15:00

Continuing project

Borrowed 2 standard robots.

classes

  • Bluetooth
  • PC logger (server)
    • Made initial server.
    • Uses object streams (ObjectInputStream/ObjectOutputStream) to
      send object instances across Bluetooth; see the sketch after
      this list.
    • Instances being sent implement Message.
  • NXJ logger (client)
    • Made initial client
  • score-based behavior controller
  • behaviors
  • PC breeder
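
A minimal PC-side sketch of the Message-over-object-streams idea
(obtaining the Bluetooth streams via lejos.pc.comm is omitted;
LogMessage is a made-up example type):

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

interface Message extends Serializable {}

class LogMessage implements Message {
    final String text;
    LogMessage(String text) { this.text = text; }
}

class LoggerEndpoint {
    // Write one Message to the (Bluetooth-backed) stream.
    static void send(ObjectOutputStream out, Message m) throws Exception {
        out.writeObject(m);
        out.flush();
    }

    // Read the next Message from the stream.
    static Message receive(ObjectInputStream in) throws Exception {
        return (Message) in.readObject();
    }
}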

Thursday, November 25, 2010

lab 11

Attendees: Carsten, Dan
Duration: 11:30 - 14:30

Goal

The goal this week is to look into projects.

Plan

We discuss different possible projects. We analyse the difficulty and
main problems of each project.


  • Road navigation

  • Synthetic creatures

  • Sex bots

Road navigation

The road navigation problem is to have a robot identify and map a Lego
road network and be able to go from point to point in it.

Software and hardware


  • Robot able to:

    • Drive around.

    • Sensors to navigate the roads.

    • Know where it is (tachometer).

  • Software capable of:

    • Storing the road network (making a graph).

    • Identifying road sections.

    • Resistance to poor localization (tachometer errors).

    • Handling a lot of sensor input to build a consistent view of the world.

Main difficulty


  • Mainly the initial building of the network:

    • The robot needs to drive for a long time with a lot of turns
      (which increases tachometer error).

    • Making sure the entire network is covered.

    • Connecting information from different sensors.

What to present at end of project.


  • A robot that builds a map of a road network

  • Move from point to point in the constructed network

  • How sensors can operate together to increase precision

  • How many small behaviors can work together.

Synthetic creatures

Make a robot that emulates a creature, with desires and behaviors of
said creature.

Software and hardware


  • A potentially weird robot (as it needs to match some creature)

  • Special form of locomotion

  • Simplifying creature behavior to something a robot can do.

  • Emulate complex creature behaviors with a small amount of code.

Main difficulty


  • Simulating complicated creatures with simple Lego blocks.

  • Realistic simulation of behavior and needs

What to present at end of project.


  • Behavior and need simplifications

  • Robot that looks and behaves like a creature.

  • How simple sensors were used to emulate complex sensors (smell/touch/…)

Sex bots

Robots that can share code and information with each other (over IR/Bluetooth).


Different robots with same set of behaviors. Robots have different
priorities on their behaviors. Pairing robots can combine their
priorities to form a new robot.


Behaviors can simulate an artificial environment defining the needs of
the robot (water, food, hunting, …).

Software and hardware


  • Simple robot. Maybe something to determine if one robot (set of
    priorities) yields better results than a different one.

  • Bluetooth

  • Getting information about robot developments.

  • More than one robot; possible with one if we can determine a score
    for a set of priorities.

Main difficulty


  • Defining interesting behaviors.

  • Communication between robots

  • Comparing properties of different sets of priorities

  • Getting Bluetooth to work.

  • Combining robots.

  • Feedback:

    • Knowing exactly what behavior each robot is doing at any given
      time.

    • Getting Bluetooth to work

What to present at end of project.


  • Unexpected good/bad behavior we wouldn't have come up with ourselves.

  • Evolutionary solutions to problems (define a problem, then
    randomize/breed robots to find the best solution).

Progress

We got Bluetooth to work.

PC programs using Bluetooth need pccomm.jar and bluecove.jar.

Thursday, November 18, 2010

lab session 10

Attendees: Carsten, Dan
Duration: 11:30 - 13:30

Goal

The goal this week is to experiment with the leJOS behavior-based
architecture. We also look at sensor listeners.

Plan

We experiment with a robot called Bumper Car. This robot drives around
until it bumps into a wall (detected using a touch sensor). After
bumping into something it backs up and turns before continuing to
drive around.

We follow the exercise plan.

  • Investigate bumper car and its source code.
  • Implement a new behavior.
  • Implement a sensor listener
  • Improve detect wall
  • Put detection of wall into the detection of wall sequence.

Robot

Our robot this week is the same as from lab 9 with a touch sensor.

PICTURE

Code

Our modified BumperCar.java.

Investigate bumper car and its source code.

When the touch sensor is pressed, the robot backs up and turns
left. When the sensor is held down, the robot continuously backs up
and turns left. This shows that the avoid behavior has higher priority
than drive forward.

The Arbitrator loops through its behaviors checking if takeControl()
returns true. If a behavior returns true the arbitrator breaks out of
the loop, meaning it stops checking the other behaviors. So when
DetectWall is active, DriveForward's takeControl() is not checked.

It seems the highest priority is the last in the behaviors array. The
link [2] from the exercise text suggested it was the opposite.

Implement a new behavior.

  • Implement a new behavior that checks for Escape presses and stops
    the program if detected.

We implemented this by checking if the escape button is pressed in
takeControl(). We set it as the highest priority by making it the last
entry in the array of behaviors.
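
A minimal sketch of such an exit behavior (the subsumption package
name may differ between NXJ versions):

import lejos.nxt.Button;
import lejos.robotics.subsumption.Behavior;

public class Exit implements Behavior {
    public boolean takeControl() {
        return Button.ESCAPE.isPressed();
    }

    public void action() {
        System.exit(0); // terminate the whole program
    }

    public void suppress() {
        // nothing to do: action() never returns control
    }
}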

When the robot is going forwards, escape is registered
immediately. When the touch sensor is held down and the robot is
continuously avoiding, the escape press is not always registered. This
is because takeControl() is only run on our escape behavior between
each avoidance pattern.

The Sound.pause(x) command pauses the robot for x ms. Setting it to
2000 gives a delay of 2 seconds between each avoidance pattern (when
the touch sensor is kept pressed). This pause is required to allow the
sound waves sent from the sonic sensor to get back to the robot and be
registered.

Implement a sensor listener

We make a thread that checks the sonic sensor for whether a wall is
detected. With this thread running in the background we don't have to
pause in DetectWall's takeControl() to check the sensor.

We implemented this by making a sampler thread, which is started from
DetectWall. In DetectWall's takeControl() we check whether the sampler
thread's last sonic value is close enough to indicate that a wall is
detected.

With a separate thread, a Sound.pause(2000) doesn't cause pauses
between avoidance patterns as it did before.
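
A minimal sketch of the sampler thread (port and threshold are
assumptions):

import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;

public class Sampler extends Thread {
    private final UltrasonicSensor sonic = new UltrasonicSensor(SensorPort.S4);
    private volatile int lastDistance = 255; // 255 = nothing detected

    public Sampler() {
        setDaemon(true); // don't keep the program alive on our account
    }

    public void run() {
        while (true) {
            lastDistance = sonic.getDistance(); // cache the latest reading
            try { Thread.sleep(20); } catch (InterruptedException e) {}
        }
    }

    // Called from DetectWall.takeControl(); no pausing needed there.
    public boolean wallDetected() {
        return lastDistance < 25;
    }
}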

Improve detect wall

We add one second of backing up before turning in the avoidance
pattern. We initially did this by going backwards, waiting a second
using Sound.pause(1000), and then doing the turning. This caused the
robot to make a weird jump in the transition from backing up to
turning. This is probably just because the backward and rotate motor
functions operate at different motor speeds.

We improved the behavior by checking whether the touch sensor is still
pressed after the robot has backed up. If it is, we move backwards
another 2 seconds. We implemented this in the DetectWall action()
method by checking if the touch sensor is pressed after having gone
backwards. If it is, we stop the function; the function will be
started again by the arbitrator since the touch sensor is still
pressed.

Put detection of wall into the detection of wall sequence.

Motivation functions have each behavior's takeControl() return an
integer signifying how much that behavior wants control.

To implement this in our DetectWall, and have it able to run again if
touch is pressed again afterwards, we could use these rules:

  • previous_motivation = 0 and touch.isPressed(): return 10;
  • previous_motivation > 0 and touch.isPressed(): return previous_motivation / 2;
  • !touch.isPressed(): return 0;

This gives a high motivation value when the touch sensor is first
pressed; if it is continuously pressed the motivation value
decreases. But if the touch sensor is released and pressed again we
get a high motivation again.
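
A minimal sketch of these rules as an int-returning takeControl() (the
sensor port is an assumption):

import lejos.nxt.SensorPort;
import lejos.nxt.TouchSensor;

public class DetectWallMotivation {
    private final TouchSensor touch = new TouchSensor(SensorPort.S1);
    private int previousMotivation = 0;

    public int takeControl() {
        if (!touch.isPressed()) {
            previousMotivation = 0;  // released: no desire for control
        } else if (previousMotivation == 0) {
            previousMotivation = 10; // fresh press: high motivation
        } else {
            previousMotivation /= 2; // held down: motivation decays
        }
        return previousMotivation;
    }
}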

Thursday, November 11, 2010

lab note week 9

Attendees: Carsten, Dan
Duration: 11:30 - 14:00


Goal

The goal this week is to experiment with robot positioning. We do this
by using the fact that our motors keep track of how many rotations
they have done.

Plan

We start by making a simple program to get a feel for how it works. We
can determine precision using this simple program. Then we implement
avoidance to understand the challenges present when working with motor
rotations.


  • Testing precision: Testing how accurate the motor measurements are

  • Avoiding objects: Avoiding objects while traveling.

  • Improved navigation: Comparing 2 methods of computing position and
    direction of the robot.

Robot

Our robot this week is the same as from lab 8. We use the motors (both
to move and as sensors of how much the wheels have moved) and the
sonic sensor to avoid objects. The 2 light sensors are not in use (but
still attached).


PICTURE

Testing precision

We couldn't find a ruler so we used a Lego block to measure with. Each
of our units is about 0.5 cm. We indicate the starting position by
placing a Lego block (a measuring block) on the ground.


We didn't attach a pencil or marker to the robot to record its
path. To detect inconsistencies the robot needs to travel large
distances, which would require a lot of paper to measure on, which we
didn't have access to.
Initial test

Go forward 200 then backwards. It is precise enough that we can't
measure an error.


We had a minor problem measuring because the robot's front wheel
points in a different direction when the robot is going forwards than
when it is going backwards. We have to make sure the front wheel is in
the same position when starting as when the robot has finished moving.

Going in a square

When the robot went in the path of a square with sides of length 50
(with four 90 degree turns), we measured an inconsistency of 1 to 2 cm.


The robot travels the same distance as in our initial test (200) but
has an inconsistency of 1-2 cm compared to none from the initial
test. This suggests the inconsistency is an effect of the turns.

Code

These experiments were done with an old version of our avoidance robot
software, available later. The methods goStraightAndBack and goSquare
were used.

Avoiding objects

Here we make a robot that travels forward a fixed distance; if an
object is in the way, the robot will go around it but still end up
having moved forward that fixed distance.


We assume objects blocking the robot can be circumvented by doing:
turn right, forward, turn left, forward, turn left, forward, turn
right.

Solution

We avoid objects using the pattern explained above. The avoidance
pattern has 2 segments going sideways and 1 going forwards. The
sideways segments should not be counted towards the distance travelled
by the robot.


When we reach an object we:


  • Save distance travelled (tmp1).

  • Turn right, go forward, turn left.

  • Reset the motor counter.

  • Go forward.

  • Save distance travelled (tmp2).

  • Turn left, go forward, turn right.

  • Go forward a distance equal to the total distance - (tmp1 + tmp2).

In practice tmp2 isn't saved: it is part of the avoidance pattern, so
we know the distance.
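
A minimal sketch of the bookkeeping, using one wheel's tacho counter
as the distance unit (the sonic check and turn helpers are omitted):

import lejos.nxt.Motor;

public class AvoidAndResume {
    static final int SIDE_STEP = 40; // forward leg inside the pattern (tmp2)

    static void travel(int total) {
        int remaining = total;
        Motor.B.resetTachoCount();
        Motor.B.forward();
        while (Motor.B.getTachoCount() < remaining) {
            if (obstacleAhead()) {
                int tmp1 = Motor.B.getTachoCount(); // distance done so far
                avoidPattern(); // right, fwd, left, fwd SIDE_STEP, left, fwd, right
                remaining = remaining - tmp1 - SIDE_STEP;
                Motor.B.resetTachoCount();
                Motor.B.forward();
            }
        }
        Motor.B.stop();
    }

    static boolean obstacleAhead() { return false; } // sonic check, omitted
    static void avoidPattern() {}                    // the turns, omitted
}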


We set the three forward distances used in the avoidance pattern to 40.


We detected objects using the ultrasonic distance sensor, and to
simulate an object we used a black laptop case.

Observations

We tested the robot with 0, 1 and 2 avoidance patterns. We estimated
the final distance travelled to be the same in each case. Our robot
would swerve to the left, which made estimating the actual distance
travelled hard.


To account for the swerve, we initially thought about turning the
robot around and going back to the start. But the swerve would just
take the robot further off course. Negating the swerve on the way back
would mean having the robot go backwards, but then we would need
another distance sensor.


To verify that the distance travelled was the same despite objects in
the way, the final distance only needed to be precise within a
distance of 20 (half of the forward distance of our avoidance
pattern), which we could judge by eye.


If we assume the inconsistency from turning is the same as our initial
tests, the travelled distance with 1 and 2 objects in the way would be
off by 2-4 and 4-8, respectively.

Code

link

Improved navigation

Here we discuss 2 methods to estimate the position and direction of a
robot from the motor rotations, the diameter of the wheels and the
distance between the 2 wheels.

Dead Reckoning

Dead reckoning uses simple geometry to calculate how far the robot
travels per wheel rotation. It calculates the direction by assuming
the wheels turn in opposite directions for some amount of rotation,
then uses sin and cos to determine how the direction has changed. This
method looks at both wheels.
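
A minimal sketch of such an update for a differential drive (the
wheelbase W is an assumption; dl and dr are the distances each wheel
has travelled since the last update, computed from tacho counts and
wheel diameter):

public class DeadReckoning {
    double x = 0, y = 0, heading = 0; // pose: position and direction (radians)
    static final double W = 12.0;     // distance between the wheels, in cm

    void update(double dl, double dr) {
        double d = (dl + dr) / 2.0;  // distance travelled by the robot's center
        heading += (dr - dl) / W;    // wheels moving differently change direction
        x += d * Math.cos(heading);  // project the movement onto the plane
        y += d * Math.sin(heading);
    }
}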

Forward Kinematics

Forward kinematics saves poses, which give the robot's x, y
coordinates and heading at a given time. To determine a new pose, a
matrix transformation based on angular rotation around a point is
used.

Thursday, November 4, 2010

lab note 8

Attendees: Carsten, Dan
Duration: 11:30 - 13:15

Goal

The main goal is to experiment with defining multiple behaviors and
defining how they work together.

Plan

We experiment with the SoundCar.java program, which has 3 behaviors:
it drives randomly, avoids obstacles and plays sounds. The plan
consists of:

  • Understanding the LCD output.
  • Analysing the behavior of our robot.
  • Understanding how the priority of behaviors is implemented.
  • Including the light follower behavior from last week.

Robot

We added an ultrasonic distance sensor to our robot from lab 7. The
robot now has 2 light sensors and 1 sonic sensor.


Understanding the LCD output

0/1: in random drive mode everything is 0. When the robot is backing
up (avoiding), drive is 1. When the sound is playing, both drive and
avoid are 0. This fits with the code: a thread shows 1 if its engine
use is suppressed and 0 if that thread can access the motors.

s/f/b: stop/forward/back. Indicates what a thread commands the motors
to do (go forward/back or stop). When drive shows f and avoid shows b,
the robot is going backwards.

Analysing the behavior of our robot

With only the drive thread running, the robot just drives around
randomly. With avoid and sound also active, it behaves the same unless
something is put in front of it (then it avoids). It can be seen that
it will stop in the middle of an avoid if the sound is played.

Understanding how priority of behaviors are implemented

Daemon

One reason to use daemon threads is that the program stops when only
daemon threads remain (so in this case we don't need to stop the 3
threads explicitly). Daemon threads are services running in the
background; setting these threads (behaviors) as daemons signifies
that they are services and not directly part of the program (they are
just always running in the background).

Priority

The suppressCount integer signifies how many higher-priority threads
want access to the motors (each having called suppress()).


When the robot is avoiding, avoid calls suppress() (which sets
drive.suppressCount to 1), then does its avoidance, then calls
release() (which sets drive.suppressCount back to 0).

If the sound is played in the middle of an avoidance, sound calls
suppress(), which sets avoid.suppressCount to 1 and drive.suppressCount
to 2. When sound releases, avoid.suppressCount is set back to 0 and
drive.suppressCount to 1 (which means avoid again has access to the
motors and drive still does not).
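
A minimal sketch of that bookkeeping (not SoundCar's actual code):

class PrioritizedBehavior {
    int suppressCount = 0;     // number of higher-priority threads wanting motors
    PrioritizedBehavior lower; // next behavior down the hierarchy, or null

    void suppress() {          // called by the behavior above when it starts
        suppressCount++;
        if (lower != null) lower.suppress(); // cascades all the way down
    }

    void release() {           // called by the behavior above when it is done
        suppressCount--;
        if (lower != null) lower.release();
    }

    boolean mayUseMotors() {
        return suppressCount == 0;
    }
}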

Include the light follower behavior from last week

Our light follower from lab 7 used the motors directly, but for the
suppress priority to work it needs to go through Car, so we started by
modifying that.

We insert the light follower thread above drive but below avoid in the
hierarchy. This results in the robot driving randomly when there is no
light source of significant intensity (above the average of the last
couple of seconds). The robot spends most of its time driving after
the light, because the light follower usually starts up again once the
average stabilizes. If the robot gets too close to an obstacle the
avoid thread takes over and tries to avoid. While playing a sound the
robot will still stop moving.

The final implementation of the LightFollower class looks like this
(the important parts are the suppress() and release() calls around the
motor commands):
import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.MotorPort;
import lejos.nxt.SensorPort;
import lejos.nxt.addon.RCXLightSensor;

public class LightFollower extends Behavior {

    private int MAXLIGHT;
    private int MINLIGHT;

    RCXLightSensor lightSensorLeft = new RCXLightSensor(SensorPort.S3);
    RCXLightSensor lightSensorRight = new RCXLightSensor(SensorPort.S2);
    MotorPort leftMotor = MotorPort.C;
    MotorPort rightMotor = MotorPort.B;

    private final int forward = 1,
                      backward = 2,
                      stop = 3;

    int averageLeft = 0, averageRight = 0;

    final int BETA = 25;

    public LightFollower(String name, int LCDrow, Behavior b) {
        super(name, LCDrow, b);
    }

    public void run() {
        MAXLIGHT = 0;
        MINLIGHT = 1000;
        while (!Button.ESCAPE.isPressed()) {
            int normalizedLightLeft = normalize(lightSensorLeft.getNormalizedLightValue());
            int normalizedLightRight = normalize(lightSensorRight.getNormalizedLightValue());
            averageLeft = average(averageLeft, normalizedLightLeft);
            averageRight = average(averageRight, normalizedLightRight);
            int powerToLeft = 0;
            int powerToRight = 0;
            // Drive a wheel only if its sensor reads brighter than its average.
            if (normalizedLightLeft > averageLeft) {
                powerToLeft = normalizedLightLeft;
            }
            if (normalizedLightRight > averageRight) {
                powerToRight = normalizedLightRight;
            }
            // The important part: claim the motors, drive, then release them.
            suppress();
            forward(powerToLeft, powerToRight);
            delay(100);
            stop();
            release();
        }
    }

    // Map a raw reading onto 0-100 using the running min/max.
    int normalize(int light) {
        if (light > MAXLIGHT)
            MAXLIGHT = light;
        if (light < MINLIGHT)
            MINLIGHT = light;
        int output;
        if (MAXLIGHT == MINLIGHT)
            output = 100;
        else
            output = (100 * (light - MINLIGHT)) / (MAXLIGHT - MINLIGHT);
        if (output < 0)
            output = 0;
        if (output > 100)
            output = 100;
        LCD.drawString("output is: " + output, 0, 7);
        return output;
    }

    // Exponential moving average with smoothing factor BETA/100.
    int average(int formerAverage, int light) {
        int output = formerAverage + (BETA * (light - formerAverage)) / 100;
        return output;
    }
}

The robot behaves mostly like the one from last week, except that it
also avoids things. The sound is played every few seconds. The random
drive thread is not noticeably active; this is likely because the
light follower is only inactive while waiting for the average light
level to level out.

Results

We have analyzed and understood the SoundCar.java program, and we have
learned how to implement different behaviors with different priorities
using threads.

Thursday, October 28, 2010

lab note 7


Attendees: Carsten, Dan
Duration: 11:30 - 13:45

** Simple program
Simply set the light value to the appropriate motor value.

*** Program
motor1 = light1;
motor2 = light2;

*** Results
Both motors run at full speed at all times. This is because the light
sensor values are between 300 and 600, which is more than the motors'
max value of 100.

Naively normalizing this could result in the motors going faster the
darker it is (instead of the expected behavior of going towards the
light). But we use the API function getNormalizedLightValue, which has
already negated the sensor values.

** Setup
After the modifications, the robot looked as follows:

** Advanced program
We have extended the program by maintaining running min and max light
values. These are used to normalize the light sensor output to the
same range of values as the motors' input.

*** Program
import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.MotorPort;
import lejos.nxt.SensorPort;
import lejos.nxt.addon.RCXLightSensor;

public class LightFollower {

    private int MAXLIGHT;
    private int MINLIGHT;

    RCXLightSensor lightSensorLeft = new RCXLightSensor(SensorPort.S3);
    RCXLightSensor lightSensorRight = new RCXLightSensor(SensorPort.S2);
    MotorPort leftMotor = MotorPort.C;
    MotorPort rightMotor = MotorPort.B;

    private final int forward = 1,
                      backward = 2,
                      stop = 3;

    int averageLeft = 0, averageRight = 0;

    final int BETA = 25;

    public LightFollower() {
        MAXLIGHT = 0;
        MINLIGHT = 1000;
        while (!Button.ESCAPE.isPressed()) {
            int normalizedLightLeft = normalize(lightSensorLeft.getNormalizedLightValue());
            int normalizedLightRight = normalize(lightSensorRight.getNormalizedLightValue());
            averageLeft = average(averageLeft, normalizedLightLeft);
            averageRight = average(averageRight, normalizedLightRight);
            LCD.drawString("NLeft: " + normalizedLightLeft + " ", 0, 0);
            LCD.drawString("NRight: " + normalizedLightRight + " ", 0, 1);
            LCD.drawString("ALeft: " + averageLeft, 0, 2);
            LCD.drawString("ARight: " + averageRight, 0, 3);
            LCD.drawString("MaxLight:" + MAXLIGHT, 0, 4);
            LCD.drawString("MinLight:" + MINLIGHT, 0, 5);
            int powerToLeft = 0;
            int powerToRight = 0;
            // Drive a wheel only if its sensor reads brighter than its average.
            if (normalizedLightLeft > averageLeft) {
                powerToLeft = normalizedLightLeft;
            }
            if (normalizedLightRight > averageRight) {
                powerToRight = normalizedLightRight;
            }
            leftMotor.controlMotor(powerToLeft, forward);
            rightMotor.controlMotor(powerToRight, forward);
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
            }
        }
    }

    // Map a raw reading onto 0-100 using the running min/max.
    int normalize(int light) {
        if (light > MAXLIGHT)
            MAXLIGHT = light;
        if (light < MINLIGHT)
            MINLIGHT = light;
        int output;
        if (MAXLIGHT == MINLIGHT)
            output = 100;
        else
            output = (100 * (light - MINLIGHT)) / (MAXLIGHT - MINLIGHT);
        if (output < 0)
            output = 0;
        if (output > 100)
            output = 100;
        LCD.drawString("output is: " + output, 0, 7);
        return output;
    }

    // Exponential moving average with smoothing factor BETA/100.
    int average(int formerAverage, int light) {
        int output = formerAverage + (BETA * (light - formerAverage)) / 100;
        return output;
    }

    public static void main(String[] args) {
        LightFollower lf = new LightFollower();
    }

}

*** Results
The motors both ran slowly and didn't respond to small
fluctuations. Holding the robot up to a lamp would set both wheels
going at full speed though. We think this is because our max/min light
values are the overall max/min instead of a running average (which
would adapt to light/dark environments).

We added a bunch of debug output to see what was happening.

We got some weird results where a value appeared to be outside the
range 0-100 despite the program clearly enforcing it. This was fixed
by adding an extra LCD.drawString. We suspect this was an output error
and the value was actually correct.

We initially forgot to set our minimum light value (so it defaulted to
0). This resulted in our average always being 0.

We had some trouble getting the average to work because we used
doubles. We fixed this by switching to ints and multiplying then
dividing by 100.

We also forgot to set the motor values to 0 when the current light
values were below the average.

We ended up with a beta value of 0.25 (25 in our program), which
stabilizes to a new value after about 5 seconds (with the 10 ms delay).

We could have improved the robot further by calculating separate
min/max values for each light sensor (one of our sensors was
noticeably different from the other).

But in the end our robot worked and could be seen to move towards or
away from lit areas, depending on which sensors were connected to
which motors.

Thursday, September 30, 2010

Week 5 lab

We were all here this week (Zoi, Carsten and Dan). We worked from
11:15 to 13:30.

** light values:

We used the program from last time that reads the normalized light
value (which is the raw light value? (the range is 0-1024 according to
the API)). The light conditions are just the Zuse building. For the
dark conditions we wrapped a black bag around the sensor.

- in light conditions: white=555, black=370, green=460
- in dark conditions: white=540, black=312, green=415

** carfollower using dark-white:

*** initial program

- on hill: lost the track when going over the edge. The light sensor
gets too far away from the floor and can no longer tell black from white.
- on track: goes off the track in sharp turns.

*** with proportional

We made the robot turn less when the light value was close to the
average of the black and white light values. The robot would more
easily stray from the black line in corners and sometimes even on
straight lines.
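
A minimal sketch of the proportional idea (the calibrated values are
from our light measurements; the gain, port and base power are
assumptions):

import lejos.nxt.LightSensor;
import lejos.nxt.MotorPort;
import lejos.nxt.SensorPort;

public class PFollower {
    static final int BLACK = 370, WHITE = 555;
    static final int MID = (BLACK + WHITE) / 2; // the edge we follow
    static final int BASE = 60;                 // base motor power
    static final int FORWARD = 1;

    public static void main(String[] args) {
        LightSensor light = new LightSensor(SensorPort.S1);
        while (true) {
            int error = light.getNormalizedLightValue() - MID;
            int turn = error / 4; // proportional: small error, small turn
            MotorPort.C.controlMotor(clamp(BASE + turn), FORWARD);
            MotorPort.B.controlMotor(clamp(BASE - turn), FORWARD);
        }
    }

    static int clamp(int p) {
        return Math.max(0, Math.min(100, p));
    }
}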

For next week we could implement a smarter solution: when the error
gets high enough we could make the other wheel turn in reverse, making
the robot turn on its own axis.

*** Finding green

From our experiments with reading light values we know that green is
between black and white. But in our proportional line follower, that
is exactly the optimal line. We try to overcome this by adding a check
to our main loop that stops the car if the light value is close to the
calibrated green value.

Through experiments we ended up with green being defined as the
calibrated value ±1. Even so, our robot would very often stop on the
black line. Next week we can use a more precise light value (by using
a different API function) to get a more accurate detection.

Thursday, September 23, 2010

Week 4

This week we had to experiment with a balancing robot utilizing the
light sensor.

Using the light measurements from the calibration phase of the
program, we determined that the light values vary from around 540 when
tilted forward (closer to the ground) to 530 when tilted backwards.

The program uses the three error terms (P, I, D) to determine a course
of action.
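
A generic sketch of one PID step (the gains and names here are
placeholders, not the program's own):

public class Pid {
    double kP = 1.0, kI = 0.0, kD = 0.0; // tuning constants
    double integral = 0, previousError = 0;

    double step(double error, double dt) {
        integral += error * dt;                           // I: accumulated error
        double derivative = (error - previousError) / dt; // D: rate of change
        previousError = error;
        return kP * error + kI * integral + kD * derivative;
    }
}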

We tried changing the effect of the D term to decrease the jerking
motions that caused the robot to overcompensate and fall over. But our
changes either didn't noticeably affect the robot, or had too great an
effect and the robot would fall over by not compensating enough.

In the end we had the most success by spending a very long time in the
calibration phase finding a very accurate balance value.

Thursday, September 16, 2010

week 3

Test of sound sensor:

Clapping seems to make it go up to 50%, whistling made it go up to 98%.

Clapping at 1-5m away reads a value of about 20%.

Sounds from behind the sensor also register.

sample:

We recorded a single clap about 1 meter away. (PLOT)

sound control car:

We implemented a ButtonListener as described in the API. The program
works as expected. It only accepts very loud sounds, which meant we
had to whistle very close to the sensor to make it switch settings.

Sound identification.

We tried distinguishing between a clap and a general loud noise.

We implemented the sound storage as an array of 60 integers, updated
as a circular array. We use a circular array because it is very fast,
and every ms it takes to add an integer is added to the update time
(2 ms processing time + 5 ms Thread.sleep time = 7 ms update time). If
our update time becomes too big, our clap detection analysis would fail.

We implemented the clap detection with 3 integers. The 1st contains
the average volume of the last 25 ms. The 2nd has the average from
25 ms in the past to 225 ms in the past. And the 3rd has the average
from 225 ms to 250 ms in the past. A clap is detected when the 1st and
3rd integers are between 30 and 50 and the 2nd is above 85.
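
A minimal sketch of the detector, assuming roughly 7 ms between
samples:

public class ClapDetector {
    static final int SIZE = 60;          // about 420 ms of history
    final int[] samples = new int[SIZE]; // circular buffer of sound levels
    int pos = 0;

    void add(int level) {
        samples[pos] = level;
        pos = (pos + 1) % SIZE;
    }

    // Average of the samples from 'fromMs' ago to 'toMs' ago.
    int windowAverage(int fromMs, int toMs) {
        int from = fromMs / 7, to = toMs / 7, sum = 0;
        for (int i = from; i < to; i++) {
            sum += samples[(pos - 1 - i + 2 * SIZE) % SIZE];
        }
        return sum / (to - from);
    }

    boolean isClap() {
        int first = windowAverage(0, 25);    // last 25 ms
        int second = windowAverage(25, 225); // 25-225 ms ago
        int third = windowAverage(225, 250); // 225-250 ms ago
        return first >= 30 && first <= 50
            && third >= 30 && third <= 50
            && second > 85;
    }
}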

Our program detected about every 5th to 10th clap.

Thursday, September 9, 2010

Week 2

The sensor has a very narrow cone, so the farther we wanted to
measure, the bigger a target we needed.

It takes sound approximately 15 ms to travel 2 * 255 cm (the round
trip at the sensor's maximum range: 5.1 m at roughly 340 m/s takes
about 15 ms). The NXJ is capable of updating faster than this, so the
speed of sound can interfere with programs using a fast update rate.

The tracker program is a linear control system, as the speed drops the
closer the robot gets to the preferred distance.

We tried 3 settings for minPower in the tracker program.

Default (60%): It would oscillate back and forth around the 35 cm
mark.

At 255 cm, its power is max(60, (255-35) * 0.5) = 110%.

At 0 cm, its power is -max(60, |0-35| * 0.5) = -max(60, 17.5) = -60%.

At 35 cm, its power is max(60, (35-35) * 0.5) = 60%.

Change 1 (0%): It would stop at around 138 cm from the wall, with a
power of 51%. So 51% is the least amount of power needed to make the
wheels turn, which suggested it could be a good minPower.

Change 2 (51%): It wasn't. The engine power needed to make the wheels
turn is variable, and 51% isn't always enough. When backing up it
would stop at 35 cm though. So we need a higher minPower, but then
we're back to 60%.

We implemented a simple program that turned towards the wall at high
distances and away from the wall at lower distances. This worked
somewhat well. When the robot was placed at the correct distance it
would keep that distance until the oscillations eventually became too
big and it turned 180 degrees.

Thursday, September 2, 2010

02-09-2010

Well... We had most of the Lego blocks and we actually managed to build the car. Furthermore, we got this blog created!