Photo credits above: VEX Robotics, RobotC wiki archive

Next up in my journey through Sensor Land is the VEX Line Tracker. Those white lines of tape on the VEX competition field? They’re not just for show (or to help the field resetters), they create a path your robot can follow during autonomous.

This is a long post, with lots of information to understand & things to consider in using line trackers. Please don’t let this description make you think it’s too hard. As I describe below, I helped my daughter implement basic PID on a line-following squarebot for her 8th grade science fair project (after I learned it at 2am a few days in a row). It’s definitely doable, and I hope that this post will help you learn a few fewer things “the hard way.”


Line-following sensors

The Line Tracker Component

$39.99 for one set

The line tracker comes in a 3-pack, and the three sensors are used together as a single group. Each sensor gets plugged into an analog port, and each sensor returns a value; your programming will combine the information from the 3 and decide how the robot should react.

So, $39.99 is not cheap; I’d recommend buying a package only if you have a good chance of using it (as opposed to, “I’ll buy one just to have on hand.”). Hopefully this article will provide some details to help with those purchasing decisions.

Installation

Line tracker installation height

Photo: VEX product info sheet

These babies need to be positioned REALLY close to the ground. Recommended distance is 1/8″ (3mm) off the floor, according to the VEX info sheet PDF. Let that sink in: 3 millimeters off the ground.

I have had the opportunity to use these just once, and I’ll say that mounting them was a challenge for us (ours were positioned off the front of the robot, like the image at right). If one of them is slightly farther from the ground than the others, or the front of one is screwed in tighter than the back, or one of them is at a slight angle, then that one sensor will deliver different minimum and maximum values than its neighbors. Mounting them flat, even, and close to the ground is key to making use of the line tracker with the smallest amount of excess/compensating stuff in your programming.

When teams are bolting these on to the bottom of the robot, be sure to make the attachment screws accessible (as opposed to we-can’t-reach-it-now) because there will probably be some adjusting needed to get them just the right distance from the floor (e.g., adding washers or spacers).

Install the sensors as close to the front of the robot as possible. There’s a slight delay across the cycle of reading the data, having your program figure out what’s what, issuing commands to the motors, and those commands taking effect in real life. Putting the sensors close to the front of the robot will permit that series of actions to take place before your robot goes astray. (More on programming below.)

Think about the game you’re building for. If the game has obstacles that your robot must drive over (such as In The Zone’s starting bar), give consideration to where you place these sensors on your robot and how you drive over the obstacle. The necessary 1/8″ from the floor may not play nice with a lot of scraping and banging.

How Does It Work?

Like the optical shaft encoders, these sensors include an infrared light and infrared sensor. In this case, there’s an infrared LED shining down, and a sensor that measures the rebounding light. Dark-colored surfaces reflect less; white—like the tape on the competition field—reflects more. The product info sheet suggests mounting the trackers near the center of the underside of a robot to shield them from any environmental infrared “light pollution,” such as from tungsten lighting.

In any usage situation, the line being followed must be sufficiently differentiated from the surfaces around it. If you’ve got black-on-grey, both with equal reflectivity, you’re going to have a more difficult time than if you’re trying to follow, say, a nice black line on a white surface. In VEX, the field tiles are not really all that dark, but the white tape is much more reflective than the field tiles.

Analog Sensors and Voltage

Digital sensors send the cortex one of two signals: HIGH (1) or LOW (0); that’s it. The cortex can do lots of useful things with those 1’s and 0’s, as it does with the ultrasonic sensor and shaft encoders, but the communication between the sensor and cortex is only either HIGH or LOW.

Analog sensors, on the other hand, return a voltage to the cortex, in the range of 0–5V, allowing a huge number of possible intermediate values. For this sensor, white surfaces (or highly-reflective surfaces) will return a low voltage, and dark surfaces will return a high voltage (kinda the opposite of what my brain wants to think, but hey, c’est la vie).

As with other VEX sensors, the red wire is for voltage, the black wire is the ground, and the white wire is the control signal. If you use 3-wire extensions to reach the cortex, the 3 colors of the wires must match when you plug them together.

Sensor Output

As with the potentiometer, this sensor returns different values depending on your programming language.

System   White surface   Dark surface   Pointed away from everything   Sensor max
easyC    38              662            770                            1023
RobotC   153             2650           3076                           4095

Source: VEX product page and product info sheet.

You Gotta Test

Before you attach these to your robot, attach them to a cortex and use the online window (easyC) or debugger window (RobotC) to test each one (or, in any programming language, you can have the sensor data print to an LCD screen). Since these 3 are identical devices, I suggest putting a piece of blue tape on the back of each one, with A, B, C or 1, 2, 3, so that you can know which is which.
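To make that testing easier, here is a minimal RobotC sketch that just streams all 3 raw values to the debugger window and an LCD while you move things around. The sensor names and analog ports (lineLeft, lineCenter, lineRight on in1–in3) are my own assumptions; substitute whatever you configure.

    #pragma config(Sensor, in1, lineLeft,   sensorLineFollower)
    #pragma config(Sensor, in2, lineCenter, sensorLineFollower)
    #pragma config(Sensor, in3, lineRight,  sensorLineFollower)

    task main()
    {
      while(true)
      {
        // Raw readings: roughly 0-4095 in RobotC; lower = more reflective (whiter) surface
        writeDebugStreamLine("L=%d  C=%d  R=%d",
                             SensorValue[lineLeft],
                             SensorValue[lineCenter],
                             SensorValue[lineRight]);

        // Same numbers on the LCD, if you have one attached
        displayLCDNumber(0, 0, SensorValue[lineLeft]);
        displayLCDNumber(0, 8, SensorValue[lineCenter]);
        displayLCDNumber(1, 0, SensorValue[lineRight]);

        wait1Msec(100);   // 10 readings per second is plenty for hand-testing
      }
    }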

Why do I recommend testing them? Because when using any sensor for the first time, you have to understand what it is you’ve got, and if it works the way you think it does and the way the product info sheet describes. You will save yourself tons of time and headache down the road by understanding the ins and outs of your device. My recommended process:

  • Point the sensor toward the middle of the room (“away from everything,” as in the chart above), so that no light is bouncing back to the sensor; what’s the value? How does it compare with the chart above? Write this number down in your engineering notebook.
  • Next hold it about 1/8″ away from something that’s very black. Write that number down.
    • In all of these tests, be consistent with the distance between the sensor and the object. You might want to make a little frame with standoffs and a plate so that you can have a consistent testing environment.
  • Hold it 1/8″ away from a white piece of paper. Ditto.
  • Hold it 1/8″ away from the grey field tiles. You guessed it, write this down in your engineering notebook.
  • Hold it 1/8″ away from a piece of white tape on the field. Try to position the sensor right over the tape, without overlapping the grey tiles. Write that down.
        
  • NOW, do this all again for the next sensor in the package. Do this for all 3 trackers, one at a time.

In this controlled testing, readings from each device under the exact same conditions make their outputs directly comparable.

Now you have a complete chart for all 3 sensors. Notice anything? I bet that all 3 give you slightly different values for each item, and all are slightly off from the table above. No one’s perfect, and these trackers aren’t either. The information gathered here will be really, really important for your programmers to know, so that they understand what the readings should be when the robot is on the line, or when it is not.

AAAAAND, after you attach the 3 to your robot, you need to test them again by positioning each tracker right over the white line and grey tiles to make sure that they are installed evenly and that their values are similar to your single-component testing. (Write down the numbers from these tests in your engineering notebook too.) This process also serves as a double-check that your wiring is correct, and the sensors are delivering information to the cortex.

See an important note below about dynamic, on-the-field calibration of your robot before the start of each match.

Three Sensors Together

The line tracker is a package of 3, and you generally use all 3, attached right next to each other on the underside of the robot, facing the ground.

So how do you actually use this device to get your robot to follow a line? You take readings from the sensors and compare them to what they would be if the robot were perfectly on the line (using your amazingly-well-documented testing from above). Then you adjust the robot’s left-side or right-side power as needed to get back on track. Here’s an example of the 3 scenarios that can happen.

VEX line-following examples

Here’s a table of what each sensor would be experiencing in these scenarios:

Sensor           Example A (sees / value)    Example B (sees / value)    Example C (sees / value)
1                dark / high                 dark / high                 dark & light / medium
2                light / low                 dark & light / medium       dark & light / medium
3                dark / high                 dark & light / medium       dark / high
Action to take   No change to motor power    More power to left side     More power to right side

Programming It

According to the table above, the side that’s veering off needs “more power.” How much is “more”? First off, a larger deviation from the line will need a larger adjustment; smaller problem, smaller fix.

Motor power is on a 0-to-127 scale, but these sensors are on a much larger scale (0 to 1023 in easyC; 0 to 4095 in RobotC). How do I take a sensor value and turn it into a motor power level? How do I know what the right adjustment amount is?

In many posts until now, I’ve glossed over the work needed to write a PID algorithm, or even what one is. There’s no way to describe what you need to do with a line sensor without it, so here’s a simplistic version. Scenario: the robot is drifting over to the left:

  • I’ll give the left side more power.
  • Oops! It’s gone too far the other way! Now I need to give more power to the right side!
  • Oh gosh! Now it’s overshot the line again and veered off to the left.
  • Repeat and fade.

You can imagine a robot doing these movements—it will make big zig-zags across the line. Well, it will make zig-zags only if you’re lucky. Most of the time if it’s a large change in power to one side, the robot will go so far off to the other side so quickly that—before the next reading-and-adjustment iteration can be enacted—it’s off the line, all 3 sensors read dark, and the robot is completely lost.

One can also see the problem of making power adjustments that are too small: the robot just keeps veering off to the side, and doesn’t have enough power to correct until eventually all 3 sensors are off to that side, and the robot is lost.

A Note About Safety

When you write your program, you must include a fail-safe clause. Imagine that the robot has gone too far off to the side and all sensors read dark. The program keeps taking readings after you’ve left the line, and no matter what adjustments it’s trying to make, the motors will stay on and the robot will keep driving. Forever.

My experience with the line tracker was for my daughter’s middle school science fair project. While we were learning everything the hard way, we learned about a fail-safe the hard way too. If the robot lost the line, it would just keep going, across the room, into the next room, until it crashed into a chair leg somewhere.

So, as part of any program, before you ever try it on your robot, you must include a check in your loop: If all 3 sensors read dark for xxx milliseconds, or if the sum total of all 3 sensors is greater than yyy, then set all motors to 0 and break out of the loop. You can get fancier in this step to try & re-find the line—after you’ve mastered the basic line-following action. See Free Range Robotics’ (New Zealand) guide on line tracking for more sophisticated ideas.
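Here is a rough RobotC sketch of that fail-safe inside a line-following loop. The thresholds, port assignments, and names (DARK_THRESHOLD, LOST_LINE_MS, leftDrive, rightDrive) are all placeholders for numbers you would pull from your own testing, not official values.

    #pragma config(Sensor, in1,   lineLeft,   sensorLineFollower)
    #pragma config(Sensor, in2,   lineCenter, sensorLineFollower)
    #pragma config(Sensor, in3,   lineRight,  sensorLineFollower)
    #pragma config(Motor,  port2, leftDrive,  tmotorVex393_MC29, openLoop)
    #pragma config(Motor,  port3, rightDrive, tmotorVex393_MC29, openLoop, reversed)

    // Placeholder constants -- replace with numbers from your own testing
    const int DARK_THRESHOLD = 2500;   // above this, a sensor is "seeing dark"
    const int LOST_LINE_MS   = 250;    // how long all-dark can last before we give up

    task main()
    {
      clearTimer(T1);
      while(true)
      {
        // ... your normal line-following adjustments go here ...

        // Fail-safe: if all 3 sensors have read dark for too long, stop and bail out
        if(SensorValue[lineLeft]   > DARK_THRESHOLD &&
           SensorValue[lineCenter] > DARK_THRESHOLD &&
           SensorValue[lineRight]  > DARK_THRESHOLD)
        {
          if(time1[T1] > LOST_LINE_MS)
          {
            motor[leftDrive]  = 0;     // stop everything
            motor[rightDrive] = 0;
            break;                     // and leave the loop
          }
        }
        else
        {
          clearTimer(T1);              // still seeing the line; reset the lost-line timer
        }

        wait1Msec(25);
      }
    }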

PID, as Briefly as I Can Manage

So what is this PID thing? It’s a mathematical way to calculate and implement small, frequent changes in motor power so that the robot tracks the line (or, in the case of shaft encoders, that it keeps driving straight). I highly recommend reading George Gillard’s fantastic, really clear, easy-to-read An Introduction to PID Controllers. You can also read my PID Beginner’s Guide, which includes sample code.

Here we go:

  • P stands for “proportional”;
  • I is for “integral”; and
  • D is for “derivative.”

This sounds a lot like calculus, but my students haven’t taken calculus yet! No problem. In order to understand and apply this concept, calculus experience is not required.

PID starts with establishing a target. In the example of shaft encoders, you’re trying to match the clicks of side A to the clicks of side B—side B’s click count is your target (in PID-speak, it’s your “set point”). In the case of line-tracking, you must make use of your initial testing from above to establish what a target sensor value is. As the robot is driving along, there are many ways to figure out if it’s correctly on the line:

  • check the left & right trackers against their preferred values
  • check the left & right against each other
  • check the center tracker against its target, with a follow-up check on the left or right sides if needed
  • compare one or more of the sensors right now to their values from the previous iteration
  • and so on.

These comparisons reveal (a) whether there is a problem that needs a motor power adjustment, and (b) in which direction the problem lies. These two together draw a path to giving the robot instructions on what to do next.

The “P”

Line following robot - black on white

Most line tracking examples show something like this: a black line on a white surface. I have yet to find a decent photo on a VEX field.
Photo: thectarnold, YouTube

At any given moment, the value of a sensor will not exactly match its target. The difference between the target and reality is called the error. You want to convert this error amount into a power amount, and add it to/subtract it from one side of the chassis to adjust the robot’s direction a teeny bit at a time.

But wait! That’s a lot of hand-waving! “Convert this error to a power level.” Really? Yes, really. You will have to figure out your own scaling factor to get from one to the other. That’s the repeat-story in PID programming: everything relies on constants/scaling factors at each step that you must figure out yourself. That’s why this is hard.

Aside: Please read my other post about RobotC’s datalogging feature. It makes figuring out kp, ki, and kd WAAAAAY simpler, because you can very easily see the numeric outputs of the error from each iteration, and allows you to easily graph the variable you’re tracking.
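For what it’s worth, feeding the datalog is only a line or two inside your loop (this assumes RobotC 4’s datalog window functions and a variable named error from the discussion that follows):

    datalogAddValue(0, error);                    // series 0: the error each iteration, graphable later
    datalogAddValue(1, SensorValue[lineCenter]);  // series 1: a raw sensor value, for comparison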

If I add or subtract the full amount of this error-power to one side of the robot, I’ll probably end up with the zig-zag dance described above. SO, I must implement only a fraction, or … how do you say … a proportion of the error amount in order to keep things smooth in the robot’s movement. That magic scaling factor is known as kp, and you have to figure it out yourself.

Where to start with figuring out kp? Do a little mental arithmetic. Start with the table above, and the programming language you’re using. For RobotC a white surface will return a value of about 150; a dark surface will return somewhere way north of 2000 (I’ll use 2000 here as my target value for simplicity). So if your robot strays from the line a little, the sensor value on one side will start going down, let’s say to 1800. Our error is now 2000 – 1800 = 200. Obviously we do not want to add 200 to our motor power as an adjustment! So we want to include, oh, at most 4% of that amount (200 * 0.04 = 8). So you could start with a 0.04 value for kp and see what happens (checking your datalog!). However, if you’re using easyC, the corresponding values might be 40/white tape and 600/grey tiles; this will result in error values (target – sensorValue) being smaller, resulting in a significantly larger value for kp.
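To make the “P” discussion concrete, here’s a sketch of what a P-only line follower might look like in RobotC. Every number and name in it (ports, sensor names, the 2000 target, kp = 0.04, the base speed of 40, even which side gets the plus sign) is an assumption or a starting guess from the paragraphs above, not a known-good value; your testing numbers, and possibly a flipped sign, go in their place. The fail-safe from the Safety section is omitted here for brevity, but you’d want it in the real loop.

    #pragma config(Sensor, in1,   lineLeft,   sensorLineFollower)
    #pragma config(Sensor, in2,   lineCenter, sensorLineFollower)   // used by the fail-safe, omitted here
    #pragma config(Sensor, in3,   lineRight,  sensorLineFollower)
    #pragma config(Motor,  port2, leftDrive,  tmotorVex393_MC29, openLoop)
    #pragma config(Motor,  port3, rightDrive, tmotorVex393_MC29, openLoop, reversed)

    task main()
    {
      const int   target    = 2000;   // what an outer sensor reads over dark tile (from your testing)
      const float kp        = 0.04;   // starting guess; tune it while watching the datalog
      const int   baseSpeed = 40;     // start slow; raise it only after the robot tracks reliably

      while(true)
      {
        // How far each outer sensor is from its "over the dark tile" target.
        // A sensor drifting onto the white tape reads lower, so its error grows.
        int errorLeft  = target - SensorValue[lineLeft];
        int errorRight = target - SensorValue[lineRight];

        // Positive means the left sensor is over more white than the right one,
        // i.e. the robot has drifted to the right of the line.
        int error = errorLeft - errorRight;

        // The "P" part: take only a fraction (kp) of the error as the power adjustment
        int adjust = kp * error;   // truncates to a whole power value

        // Drifted right -> ease off the left side, push the right side, to steer back left.
        // If your robot turns the wrong way, swap the + and - here.
        motor[leftDrive]  = baseSpeed - adjust;
        motor[rightDrive] = baseSpeed + adjust;

        wait1Msec(25);   // give the adjustment a moment to take effect before re-reading
      }
    }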

The good news at this point is that you may be done. Many situations only need the P part of PID. If the robot’s movement is all that you ever hoped for, then I & D can be safely ignored. Yay!

The “I”

If P-only does not produce the desired accuracy, or if the robot has to go ridiculously slow to hold the line, one moves on to the “I” (integral) part of PID, which is equal to the running total of all of the error amounts you’ve calculated so far. You take this running total amount and multiply it by another magic fraction (called ki, which you must determine yourself) and add it to the error-power you calculated in the “P” section. Again, look at your datalog and see what that running total is, after you’ve gotten the “P” part of this process complete, and make a starting guess for ki that would produce a power-level change of 2 or 3 when multiplied against that running total. You may need to go down even more from that value as you test.
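In code, the integral is just a running total kept alongside the proportional term. A sketch of the extra lines you’d add to the P-only example above (ki here is purely a made-up starting point):

    // New, declared before the loop:
    const float ki = 0.001;   // starting guess only -- tune it against your datalog
    int errorSum   = 0;

    // Inside the loop, replacing the P-only "adjust" line:
    errorSum = errorSum + error;                  // the "I" part: running total of every error so far
    int adjust = (kp * error) + (ki * errorSum);  // P term plus I term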

The “D”

Line tracking in action; you can clearly see the 3 sensors on the front of the robot, and they’ve got some great overhead views as it drives along. (Scroll back to the start of the video for the programming discussion.)

If things *still* aren’t good enough, no matter what values you try for kp and ki, then you add in the “D” (derivative) component. The derivative is the difference between the error you measure in this cycle and the error you measured last go-around. You multiply this D figure by yet another magic fraction that you yourself must determine (kd), and add it to the P and I results to get your new desired motor power. For a starting guess, multiply your ki value by 0.5 and see what you get.
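The derivative follows the same pattern: the difference between this iteration’s error and the last one’s, scaled by yet another constant. Added to the sketch above (kd is again just a made-up starting value, and errorSum comes from the “I” step; drop that term if you’re skipping I):

    // New, declared before the loop:
    const float kd = 0.0005;  // starting guess; the suggestion above is roughly half of your ki
    int lastError  = 0;

    // Inside the loop, replacing the "adjust" line again:
    int errorChange = error - lastError;    // the "D" part: how fast the error is changing
    lastError = error;                      // remember it for next time around

    int adjust = (kp * error) + (ki * errorSum) + (kd * errorChange);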

Possible Combinations

As noted in George’s guide, some situations use P & D and skip I, making all of the possible combinations P-only, P & I, P & D, or PID. As mentioned above, lots of people can live with P-only; you’ll have to evaluate your own situation to determine where you go after Step 1.

Faux PID for Line Following

If you’d like to try a simpler method of line-following, you can just add/subtract a fixed amount of power to one side of the chassis when you detect a problem, without ever calculating any error amount or scaling factor. That “fixed amount of power” would generally be determined via trial & error—what if I add/subtract 5 to the motor power when a change is needed? Or 3?

Here you’re essentially doing the “P” part of PID in a static fashion, where the magnitude of the adjustment is not related to the size of the problem at hand. You’ll have a much lower sensitivity level, but it may get you where you need to go. It will likely require slower driving speeds, but again, this is determined by real-world testing on your specific robot.
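A sketch of that faux-PID decision in RobotC (LINE_SEEN and FIXED_BUMP are made-up placeholders you’d find by testing and trial & error, and this would sit inside the same loop, with the same fail-safe, shown earlier):

    // A sensor reading below LINE_SEEN means it's over the white tape.
    if(SensorValue[lineLeft] < LINE_SEEN)            // left sensor on the tape: robot drifted right
    {
      motor[leftDrive]  = baseSpeed - FIXED_BUMP;    // nudge back to the left
      motor[rightDrive] = baseSpeed + FIXED_BUMP;
    }
    else if(SensorValue[lineRight] < LINE_SEEN)      // right sensor on the tape: robot drifted left
    {
      motor[leftDrive]  = baseSpeed + FIXED_BUMP;    // nudge back to the right
      motor[rightDrive] = baseSpeed - FIXED_BUMP;
    }
    else                                             // centered enough: drive straight
    {
      motor[leftDrive]  = baseSpeed;
      motor[rightDrive] = baseSpeed;
    }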

Once you’ve got code working to do faux PID, it’s not a giant leap to making those motor power changes proportional to the problem at hand by using the “P” part of PID.

PID in easyC

easyC PID function block

easyC users take note! There are PID function blocks in your drag-and-drop list. These function blocks are NOT for use in this type of PID.

The easyC function blocks are ONLY designed for the situation of holding a movement arm in place. The other situations where PID is needed—driving straight using encoder values, or tracking a line—do not work with this function block. In these situations you must write your own algorithm. Again, I recommend George Gillard’s document, which includes sample code at every step that you can definitely implement in easyC.

[Edit 10/18/2017] I see that the awesome jpearman from the VEX Forum has a lengthy post (with downloads) for how to implement PID in easyC. Here you go.

Slow Down

Now that you understand the computing that goes on behind the scenes of a line tracker, you can see that if the robot is moving faster than the cortex can process-and-adjust, the robot will go off the line. If you think your code is correct, but the robot keeps going off anyway, try cutting the power in half, or cut it down to *just barely* enough to make the robot drive. If it’s *still* going off the line, then you have a problem with your code, for sure. If it does stick with the line at very slow speeds, then increase the speed in chunks to figure out your maximum.

The closer your robot can stick to the line, the faster you can drive. In other words, the more sophisticated your programming, the faster your robot can likely go. If your code makes large-ish/less-precise changes at each iteration, then you’ll need to go slowly so that the robot doesn’t get lost before the next iteration.

Maximum speed will also be limited by the path of the line. In Nothing But Net (image, next section), the tape lines come together in the center of the field and make sharp-angle turns. Every robot, no matter how good the code is, will need to slow down at such a juncture.

Uses

Uhhhh … following lines? Yes, yes, but which lines, exactly? And how does one make use of these lines in autonomous?

First, in a VEX competition, autonomous lasts only 15 seconds, and line tracking robots need to move relatively slowly. This sensor will have more limited usability in autonomous, and may be most useful in autonomous skills, where you have a full 60 seconds to move about the field (see video below).

In The Zone overhead view

In The Zone overhead view (tape lines enhanced)

In The Zone

As of this writing, my team has gone to 2 In The Zone tournaments so far, and has seen only one robot obviously using this sensor during the 15-second autonomous period: THE RESISTANCE (Team 86868, otherwise known as the Starstruck World Champion).

In his 15-second autonomous, he drives down the side of the field, puts his pre-load cone onto that mobile goal, picks it up, backs all the way up to his starting area, and FOLLOWS THE 5-POINT LINE diagonally backward to a designated location, then turns and drives forward, planting the whole package in the 20-point zone and backing away. Wow. It’s really quite a stunningly beautiful use of the line tracker (combined with information from *other* sensors too).

Looking at the In The Zone playing field (above right), did you notice that one of those white lines sticking out from the perimeter is DIRECTLY underneath the cone-loader? Not a coincidence.

In that location, I can see the benefit of mounting the sensor at the exact center of the robot; when it gets to a place like the cone-loader line, one could tell the robot to stop, make a point-turn, and be positioned in just the right place. The trick would be having enough sensitivity to detect a very small width of tape, stopping quickly, and likely combining that reading with additional sensors (shaft encoders & gyroscope come to mind). Hard. But where would be the fun otherwise?
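If you wanted to experiment with the stop-on-the-line part of that idea, the core of it is tiny (LINE_SEEN and the motor/sensor names are assumptions again); the hard part is everything around it:

    // Drive until the center sensor crosses the white tape, then stop.
    motor[leftDrive]  = 40;
    motor[rightDrive] = 40;
    while(SensorValue[lineCenter] > LINE_SEEN)   // still over grey tile
    {
      wait1Msec(5);                              // poll quickly so we don't blow past the tape
    }
    motor[leftDrive]  = 0;
    motor[rightDrive] = 0;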

On-the-Field Calibration

(Special thanks to Griffin Tabor, who I’ve met through the VEX World Coaches Association Facebook group. If you’re a mentor/coach reading this, and you’re not a group member, sign up today! It’s an amazing resource on every topic you can imagine, and a supportive community.)

Griffin points out that, despite what the VEX product page states (that the tracker is not affected by different lighting conditions), the tracker is indeed sensitive to lighting. Your team’s lab or garage has gentle consumer-grade lighting; tournaments in gymnasiums or other large rooms have industrial lighting. It makes a difference.

For a robot with a line tracker, this means that its sensors will return different values for light and dark than they did back home. How to guarantee that lighting conditions won’t throw off your robot? On-the-field calibration!

The idea is to capture sensor data when you put the robot on the field before the match starts and store it in global variables that the program will use in its auton algorithms, instead of static values as described above. For easyC users, this code would be placed in the “Initialize” tab of your competition program; for RobotC, it’s the “pre-autonomous” section.
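The shape of the idea, as a very rough RobotC sketch (the names and the positioning are my own assumptions; Griffin’s full version, linked below, is the one to actually use):

    // Globals the autonomous code reads instead of hard-coded numbers from home testing
    int fieldDarkValue  = 0;
    int fieldWhiteValue = 0;

    void pre_auton()
    {
      // Robot is placed on the field so that, say, the center sensor is over grey tile
      // and the left sensor is over the white line -- whatever setup you standardize on.
      fieldDarkValue  = SensorValue[lineCenter];
      fieldWhiteValue = SensorValue[lineLeft];
    }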

Here’s the code in a separate post; it’s written in RobotC, but has instructions at each step of how to implement in easyC. Thank you Griffin!

Nothing But Net overhead view

Nothing But Net overhead view (tape lines enhanced)

Sensor Array Variations

If you go back through the VEX Games of Yore, you’ll see that they all (all the ones I’ve seen, anyway) have white tape here & there. Even though the lines are included in all of these games, in my 5 years hanging around VEX I have only ever seen a handful of teams use them. Probably because, well, it’s hard.

I mentioned above that for this year’s cone loader it would be handy to have the sensor in the center of the robot. There are other patterns of line following that would benefit from a different placement of sensors.

Also, just because these trackers come 3-to-a-pack doesn’t mean that you are restricted to using only 3. Some robots use 4 or 5 sensors next to each other to expand their viewing range; this would particularly help when approaching sharp turns.

You could alternatively have 2 sets of sensors—one on the front and one on the back of the robot—and use the combined information to steer. You get the idea. HOWEVER, as with the ultrasonic sensor, the more sensors you add to the cortex, the less often each one will get polled. And the more calculations your cortex has to do for line tracking, the less capacity it has for other things.

As referenced above, the New Zealand Free Range Robotics document is an excellent resource. It has extensive discussion of alternate ways to arrange the sensors, and where/how more-than-3 sensors would help. The second-to-last post of this VEX Forum thread also has a good description of sensor placement alternatives, and the tradeoffs of choosing one or the other. (Although I think the discussion is from the days of whatever came before this infrared sensor, it’s still a useful set of ideas to consider.)

Video: Line Tracking in Action

Here’s a video from a number of years back of a robot doing programming skills, following lines for the vast majority of movements. It’s pretty awesome. As you can see, the robot moves rather slowly, especially when making turns; if you’re trying line tracking, don’t expect to be using motor power 100 on your drive train!

Additional Info

Resources

If you’re just starting out with this sensor, it might take a little while before it behaves just the way you want. Fear not.

Resources for easyC Users

I’ve seen easyC’s so-called “sample program” for this tracker, and it basically just reads the sensor data from one tracker into a variable, prints it to screen, and calls that a “sample.” I feel your pain, which is among the reasons we switched to RobotC this year. I did find this video tutorial, which walks you through the basic line tracker setup (without PID) pretty well.

THAT SAID, I recommend that you look at some VEX Forum posts (Google search on “VEX Forum + line follow” and/or “VEX Forum + line track”) and read the code that many people have posted there. Some of it is pseudo-code, and some of it is full-fledged RobotC. In either case, it is useful to you too. As I mentioned above, there is no easyC function block for this type of PID algorithm, and you’re going to have to write your own. Reading pseudo-code and RobotC code will give you a detailed roadmap of how to write it in easyC. George Gillard’s PID guide is also a great resource.

As I said above, I helped my 8th-grade daughter implement line-following PID on a squarebot with easyC—on an old PIC microcontroller. If that’s possible, you can do it too.

Troubleshooting

As I’ve mentioned (several times), this stuff is hard. Your program is practically guaranteed not to work on the first try.

  • Proper installation. Before anything else, make sure that your sensors are installed according to product specifications; 1/8″ off the floor is really close; if your sensors are farther away, they just won’t work as well.
  • Start small. Make a line-tracker-only test program instead of sticking your new code into your whole auton routine. You’ll have to run this segment many times to “tune” your adjustment factors (P, I, or D), and having a stand-alone program will make repetition easier.
  • Start slow. As mentioned above, if the robot’s speed is too high, then it will drive off the line faster than the cortex can make its next iterative adjustment.
  • Turn it off and on again. The old standby; takes very little time, solves many problems.
  • Print to screen / RobotC datalog. This is your best resource in any type of sensor-related troubleshooting.
    • Having certain values print to the screen inside your looping is the only way to understand what the code itself is trying to do (as opposed to what you want it to do, or thought you told it to do).
    • If you are doing print-to-screen using the orange downloading cable, you’ll have to move around the field with the computer; that’s just how it works (RobotC supports wireless connection to the robot with the $50 programming hardware kit).
    • If you just can’t figure out why the code is not working the way you wanted, have a different status message print to screen (or to LCD) when the program enters each block of code. It’s possible that the way you’ve written your comparison conditions, the program will *never* enter a certain block. Or if you have a while loop, perhaps the code is never exiting the loop. (There’s a tiny example of this right after the list.)
  • Print out your code on paper. There’s just too much code in this algorithm to see on your screen at once. Laying things out on the table is often the only way you’ll ever find certain problems. I can’t stress this enough. Old-school sometimes works best.
  • Check your comparison statements and all mathematical functions. There’s a lot going on here, and you may have a + or – backward, or a < switched with a >, and so on.
  • Did you get left and right confused? Again, there’s so much stuff going on in this algorithm, it’s not hard to switch L and R somewhere along the way.
  • Go back to your original testing values. Are the numbers used in your program consistent with your original testing? There are also a lot of numbers floating around in this process; make sure that your code matches your tests.
  • Do a quick tracker test again. Place the robot over the line and use the debugger/online window to ensure that the values returned by the sensors right now match what you measured when you put them on your robot. If they don’t, why? (And you should update your engineering notebook if values are changed for some reason.)
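For the status-message idea mentioned in the list above, even something this simple (hypothetical branches and wording) tells you which block the program actually entered:

    if(error > 0)
    {
      writeDebugStreamLine("branch: drifted right, error = %d", error);   // or displayLCDString(1, 0, "right")
      // ...correction for this case...
    }
    else
    {
      writeDebugStreamLine("branch: drifted left, error = %d", error);
      // ...correction for the other case...
    }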

♦           ♦           ♦

If you’ve gotten this far in this very, very long post, congratulations. If you’re considering purchasing a line tracker, I hope this tome has helped in your decision-making process. If you’re trying to use one right now, I hope this post and the resources linked here have given you some assistance if you’re having difficulty.


Other Sensors

Here’s a list of all of the sensors in my review:
