Objective

The objective of this lab was to tie together work from previous labs to accomplish navigation and planning, both in simulation and on the physical robot.

Simulation

To start, I worked on the robot in simulation. First, I opened the provided code for Lab 10 and substituted in my own occupancy matrix, based on the room I mapped in Lab 7 and have been working with since. Next, I reviewed the navigation and planning algorithms from our lectures earlier this semester. I decided that the easiest approach, and a very practical one given my configuration, would be a Bug algorithm. Of these, I found the most efficient to be the Bug2 algorithm, which follows a straight line toward the goal until it runs into an obstacle, then circumnavigates that obstacle until it can get back on track toward the goal.
To implement this, I first created a simple function in planner_query.py that takes the randomly generated start and end points and computes, block by block, the fastest way there. I did this in only two dimensions, allowing the robot to move up, down, left, or right. The script checks which of those four directions is the best next step, based on the magnitudes of delta Y and delta X, and attempts to move that way. If there is a wall in that direction, the wall-following function is called. This code is called iteratively: each new point becomes currPoints while goalPoints stays the same, so the loop terminates when currPoints equals goalPoints.
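For context, here is a minimal sketch of that outer loop; the planner object and the assumption that points are (row, col) tuples are illustrative, not my exact script:

      # repeatedly ask the planner for the next cell until the goal is reached
      currPoints = startPoints          # (row, col) tuple
      interm_steps = [currPoints]       # accumulates the step-by-step plan
      while currPoints != goalPoints:
          currPoints = planner.add_line(currPoints, goalPoints)
          interm_steps.append(currPoints)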

Going to the next step

 
      def add_line(self, currPoints, goalPoints): 
        
        deltaX = -currPoints[1] + goalPoints[1]
        deltaY = -currPoints[0] + goalPoints[0]
        
        # step along whichever axis has the larger remaining distance
        if (abs(deltaX) > abs(deltaY)):
            if(deltaX > 0): #positive delta: goal is to the right
                if(self.grid[currPoints[0], currPoints[1]+1] == 1): #obstacle that way
                    return self.follow_wall(currPoints, goalPoints, 0)
                else: #no obstacle, go that way 
                    return currPoints[0], currPoints[1]+1
                
            else: #negative delta: goal is to the left
                if(self.grid[currPoints[0], currPoints[1]-1] == 1): #obstacle that way 
                    return self.follow_wall(currPoints, goalPoints, 1)
                else: #no obstacle, go that way 
                    return currPoints[0], currPoints[1]-1
        else: 
            if(deltaY > 0): #positive delta: goal is below
                if(self.grid[currPoints[0]+1, currPoints[1]] == 1): #obstacle that way
                    return self.follow_wall(currPoints, goalPoints, 2)
                else: #no obstacle, go that way 
                    return currPoints[0]+1, currPoints[1]
                
            else: #negative delta: goal is above
                if(self.grid[currPoints[0]-1, currPoints[1]] == 1): #obstacle that way 
                    return self.follow_wall(currPoints, goalPoints, 3)
                else: #no obstacle, go that way 
                    return currPoints[0]-1, currPoints[1]
    

Wall following

 
      def follow_wall(self, currPoints, goalPoints, moveDir):
        
        deltaX = -currPoints[1] + goalPoints[1]
        deltaY = -currPoints[0] + goalPoints[0]
        
        # check which of the four neighboring cells are occupied
        leftWall   = self.grid[currPoints[0], currPoints[1]-1] == 1
        rightWall  = self.grid[currPoints[0], currPoints[1]+1] == 1
        topWall    = self.grid[currPoints[0]-1, currPoints[1]] == 1
        bottomWall = self.grid[currPoints[0]+1, currPoints[1]] == 1

        if(moveDir == 0): #blocked moving right
            if(deltaY > 0): #goal below
                if(not bottomWall): 
                    return currPoints[0]+1, currPoints[1] #move down
                elif (not leftWall):
                    return currPoints[0], currPoints[1]-1 #move left
                else: 
                    return currPoints[0]-1, currPoints[1] #move up
            else: #goal above
                if(not topWall): 
                    return currPoints[0]-1, currPoints[1] #move up
                elif (not leftWall):
                    return currPoints[0], currPoints[1]-1 #move left
                else: 
                    return currPoints[0]+1, currPoints[1] #move down
        # moveDir 1-3 (blocked left, down, up) are handled analogously
    
Wall following is very naive: it first determines where the walls are relative to the current position and, knowing the direction the robot would ideally like to move, picks the next best option. For instance, if the robot would like to move right but is blocked, and moving down would also bring it closer to the goal, it moves down instead. The algorithm deprioritizes, but still allows, moving away from the goal when that is the only option. I ran several examples of the algorithm working; since my map did not originally have a large obstacle that would obstruct many paths, I later added a false obstacle to ensure the wall following was working as intended.
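As a rough illustration of that false-obstacle test, something like the following marks a block of grid cells as occupied (the indices are arbitrary here, not the ones I actually used, and I am assuming the grid is a NumPy array):

      # inject a short vertical wall into the occupancy grid so that several
      # straight-line paths are forced into the wall follower
      planner.grid[5:9, 12] = 1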


Now that I had a working algorithm, I wanted to incorporate it into the simulator to visualize how the robot should move along these pre-ordained vectors. I first had to translate the planner's grid points into map coordinates, which was a simple linear conversion:
 
      stepY = (interm_steps[i+1][0] * -.79/16) * 2
      stepX = (interm_steps[i+1][1] * (2.05/23) - 1.85) * 2
    
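Wrapped up as a small helper (the function name is mine, but the constants are the same map-extent scalings as above), the conversion looks like:

      def grid_to_map(row, col):
          # scale grid indices to simulator map coordinates, in meters
          y = (row * -.79/16) * 2
          x = (col * (2.05/23) - 1.85) * 2
          return x, y

For example, grid_to_map(8, 11) returns roughly (-1.74, -0.79).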
Once I had done that, I could visualize the points by plotting them iteratively as ground truth points in the simulator, which immediately worked very well.
Next, I wanted to set up the world so that I could test the robot moving around similarly to how mine would in real life. With some help from Vivek on Campuswire, I edited my playground script to set up the simulator the same way my real-world map was set up, and changed the robot's starting position to the one generated by the random placement generator. After that, I simply looped through each point in the previously generated navigation plan, converted it to map coordinates, and sent it to the robot to move.
 
      # Reset Plots
      robot.reset()
      loc.plotter.reset_plot()

      for i in range(len(interm_steps)):
          # convert the grid point to map coordinates
          nextStepY = (interm_steps[i][0] * -.79/16) * 2
          nextStepX = (interm_steps[i][1] * (2.05/23) - 1.85) * 2

          # plot the planned point as ground truth
          loc.plotter.plot_point(nextStepX, nextStepY, GT)

          if i != len(interm_steps) - 1:
              stepY = (interm_steps[i+1][0] * -.79/16) * 2
              stepX = (interm_steps[i+1][1] * (2.05/23) - 1.85) * 2

              move_to_next((nextStepX, nextStepY, 0), (stepX, stepY, 0))
              loc.plotter.plot_point(robot.get_gt_pose()[0], robot.get_gt_pose()[1], ODOM)
    
As you can see in the above code, I have a move_to_next function that actually moves the simulated robot from point to point. It uses closed-loop feedback on the robot's ground truth pose in x, y, and theta, and moves accordingly. For instance, for a move to the left, the robot turns until its angle is 180 degrees, then creeps forward until its ground truth position is close enough to the next point. Below is that one direction; the others are very similar.
 
      def move_to_next(currPos, nextPos):
        deltaX = nextPos[0] - currPos[0]
        deltaY = nextPos[1] - currPos[1]

        if(currPos[0] > nextPos[0]): #moving left
            print("moving left")
            nextAng = 180
            robot.set_vel(0, .1) #spin in place
            while(round(robot.get_gt_pose()[2]) != nextAng): #wait till it gets to the right angle
                pass

            robot.set_vel(.1, 0) #creep forward
            while(round(robot.get_gt_pose()[0], 1) != round(nextPos[0], 1)): #wait till it gets to the right position
                pass

            robot.set_vel(0, 0)
        # the other three directions are handled analogously
    
This code worked great, and I was able to watch the robot move from point to point. Comparing where the robot actually moved to where it was supposed to move, the two were very close. The robot moved very slowly, so I sped the recording up 8x.


Real Life


Next, I worked on implementing this on my physical robot. I was able to largely port over the move_to_next code for most of the navigation.
First, in my Arduino script, I pass the coordinates from the calculated navigation plan as an array of points. The main loop simply iterates through these points, handing each consecutive pair to the moveToPoint function.
    void loop()
    {
        setMotors(0, 0);
        //number of points, computed from the array size
        int numPoints = sizeof(movePoints) / sizeof(movePoints[0]);
        for(int i = 0; i < numPoints - 1; i++){ //till penultimate row
          moveToPoint(i, movePoints[i][1], movePoints[i][0], movePoints[i+1][1], movePoints[i+1][0]); //index, currX, currY, nextX, nextY
        }
    }

To start off, I created several helper functions that read odometry and set the motor values, so that the primary script could be more concise. getYaw uses the same code as in the PID lab and other earlier programs, taking a running integral of gyro values, so there is still a bit of drift; setMotors simply sets the motor values; and getDist() uses the TOF sensor to determine how far away the robot is from the wall or obstacle in front of it.
      ///returns current value of yaw, normalized to [0, 360)
      int getYaw(){
        while(not(myICM.dataReady())){} //block until a new sample is ready
        //Read yaw data
        myICM.getAGMT();                //the values are only updated when you call 'getAGMT'
        dt = (millis() - lastRead) / 1000.0; //seconds since the last read
        lastRead = millis();
        yaw_g = yaw_g + myICM.gyrZ()*dt; //running integral of the gyro z rate

        //normalize the angle
        float normyaw = yaw_g;
        while(normyaw < 0){
          normyaw = normyaw + 360;
        }
        return(int(normyaw));
      }

      void setMotors(int motor1, int motor2){
        myMotorDriver.setDrive( 1, 1, motor1); 
        myMotorDriver.setDrive( 0, 1, motor2);
      }

      int getDist(){
        distanceSensor.startRanging(); //write configuration bytes to initiate measurement
        distance = distanceSensor.getDistance(); //get the result of the measurement from the sensor
        distanceSensor.clearInterrupt();
        distanceSensor.stopRanging();
        return distance;
      }

    
Then, I created the main program. It does essentially the same thing as my sim: it determines where (up, down, left, or right) the next point is, turns the robot to face that direction, and moves until it has gone far enough to cover one "block," which I initially set to 0.15 m. Below is an example of one of the directions; the others are handled similarly.

      void moveToPoint(int i, int currX, int currY, int nextX, int nextY){

        if(currX > nextX){ //moving left
            Serial.print("moving left");
            nextAng = 180;

            ///////Turn left///////
            setMotors(motorSpeed, -motorSpeed); //spin in place
            while(currAngle != nextAng){ //wait till it gets to the right angle
              currAngle = getYaw();
              delay(10);
            }
            Serial.println("angle correct");

            ///////Move forward one block///////
            startDist = getDist();
            currDist = startDist; //initialize before the loop
            setMotors(motorSpeed, motorSpeed);
            while(currDist >= startDist - blockDistX){ //wait till it has traveled one block
              currDist = getDist();
            }

            setMotors(0, 0);
        }
        //the other three directions are handled analogously
      }

    


I tried working with this for a while, but the gyro drift was so bad that the robot would just keep spinning in circles forever, lost because it didn't know its true angle. Therefore, I switched to an open-loop configuration. I used a similar method to determine whether the robot should move up, down, left, or right. Then I did a simple check of where it was facing versus where it needed to face, keeping track of the heading as a 0, 1, 2, or 3 (in the snippet below, 0 is left, 1 is up, 2 is right, and 3 is down). When determining where to go next, I would use this known quantity and the desired direction of travel to choose how long to rotate for.
      setMotors(motorSpeed, 0); //pivot clockwise on one wheel
      if(dirStatus == 1)      { delay(spinTime*3); } //was facing up, turning to left
      else if(dirStatus == 2) { delay(spinTime*2); } //was facing right, turning to left
      else if(dirStatus == 3) { delay(spinTime);   } //was facing down, turning to left
      dirStatus = 0; //now facing left
    
spinTime was a variable that I tuned to produce approximately a 90-degree turn. To get my robot to turn properly, I did end up having to put masking tape on the wheels, which made them slippery enough to pivot. Once this was working, which I confirmed by printing to the serial monitor with the motors disengaged, I would move the robot forward one square, again by setting the motors to a constant speed and waiting for an empirically tuned, constant delay, as sketched below.
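As a sketch, that forward move looks something like this (blockTime is a stand-in name for my tuned delay constant, not necessarily what I called it):

      //drive forward one ~0.15 m block, fully open loop
      setMotors(motorSpeed, motorSpeed);
      delay(blockTime); //empirically tuned time to cover one block
      setMotors(0, 0);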
Once I moved to real testing, I found that error propagated the longer the robot spun. Therefore, instead of trying to spin 270 degrees, I simply reversed the direction of the motor and spun only 90 degrees in cases where I previously would have used spinTime*3.
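That change looks roughly like the following; which wheel to power for the reversed spin is an assumption on my part:

      //one 90-degree turn in the opposite direction, instead of spinTime*3
      if(dirStatus == 1){ //facing up and want to face left
        setMotors(0, motorSpeed); //pivot on the other wheel to reverse the spin
        delay(spinTime);
      }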
Unfortunately, I had to move houses before I was able to test this on my map. However, since the control was fully open loop and didn't use obstacle avoidance or localization, I just needed a wide open space to test in, and the behavior would be the same as if the robot were on the map itself. I recorded two successful runs along with the corresponding planned paths.

What I Would Change


Unfortunately, this lab was not as fleshed out as I would have liked. Ideally, I could have used localization in the sim and compared that data to the known ground truth position. Another issue is that my Bug2 algorithm has no memory, so if it got stuck in a dead-end hallway it would keep looping around itself (one way to add that memory is sketched below). My map doesn't have any dead ends, though, so I decided it was not worth my time to implement.
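For reference, a minimal sketch of that fix, assuming a NumPy occupancy grid, would be to track visited cells and only consider unvisited free neighbors:

      def unvisited_neighbors(grid, visited, r, c):
          # candidate moves that are in bounds, free, and not yet visited
          steps = [(r+1, c), (r-1, c), (r, c+1), (r, c-1)]
          return [(nr, nc) for nr, nc in steps
                  if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                  and grid[nr, nc] == 0 and (nr, nc) not in visited]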
Furthermore, on the robot itself, I would have preferred not to use open-loop control; it did not work especially well, as you can see from the imprecise angles. I also would have liked to run it on the map, so I could see how good it was at actually avoiding walls and obstacles. Then I could have used my original plan of using the TOF sensor to measure, in a closed-loop fashion, how far the robot had gone.
Overall, I was pleased with my results given that I had completed all the groundwork in previous labs, and the most fun part was playing around with navigation algorithms.