Computer Hardware Interfacing

Creating a base platform for autonomous vehicles

Ryan MacKenzie                        2009
Supervisor                              Franz Weehuizen


Creating a base platform for autonomous vehicles
Summary
Introduction
Background
Platform Design Considerations
Hardware
Embedded PC
Microcontroller
GPS
$GPRMC
Compass
Actuators
Proximity sensors
Motor Drivers
Servos
Encoders
Drive train
Summary of parts consideration
Software
C#
Java
LabVIEW
Matlab
Low Level languages
Part Selection
Hardware Platform
Embedded PC
Microcontroller
GPS
Compass
Proximity sensors
Motor Drivers
Servos
Encoders
Drive train
Software
Results
Component Acquisition
Hardware Platform
Software Design
Microcontroller
Graphical user interface
Microcontroller GUI
Compass
GPS
Class GPSStruct
Class NMEA protocols
Location.cs and Satellites.cs
Waypoints
Conclusions
Future Work
Bibliography
Appendix A
Appendix B
Appendix C




Summary

The purpose of this project is to design a high level software program that can read raw data from low level hardware and display that data in a user-friendly manner, with the intent that it serve as the base of an autonomous robot design. The design explicitly aims at enabling research in robot navigation, and the platform must meet criteria such as low cost and simple fabrication, and must carry a suite of sensors to aid navigation. The system is based on a two level architecture where the master controller is an industrial Personal Computer (PC) and the slave controller is an 8051 microcontroller. The master controller is programmed using C# and interfaces with the slave using a hexadecimal code system. The master is also responsible for receiving communication from the sensors, which include a compass and GPS, and displaying the data in a graphical user interface (GUI). The slave is responsible for motor control and for reading data from sensors with high interrupt overheads, like encoders.

The project is open ended, and at the time of completion a GUI has been created which can send commands to the slave, read data from the compass and GPS, and display their respective data on screen. To increase the usefulness of the data, a simple waypoint system has been implemented.



Introduction

Robots are being used in many industries these days, and in recent years the development of robotic platforms has been increasing. Their development is not exclusively tied to large companies and/or well-funded research facilities, but extends down to smaller start-up companies and even individuals. Any time a new technology in this field is developed it needs to be tested, and a new platform is designed to meet the project's requirements. Designing a robot is a coordinated effort integrating the various systems and technologies to achieve the required outcome. When a company intends to deploy a new type of technology it needs to develop an entire platform, and this is normally started from scratch.

Over the last few decades the research in this field has moved away from solely looking at industrial manipulators and towards autonomous vehicles. In developing and testing components for a robotics platform, researchers and companies alike need to start from scratch with a custom solution for their project. The only alternative is to purchase expensive starter kits costing thousands of dollars. Examples of this can be seen in the CoroWare Explorer EX-D robot development platform costing US$8000 and the SR4 Professional from Smart Robots costing nearly US$9500. These alternatives are by far the quickest and require little knowledge of a mechatronics system in order to deploy the required component or technology, but the cost, or the lack of required features or components, is often the reason why developers choose not to purchase these systems.

This study aims to develop a cost effective and flexible control platform which can be used by developers as the base for a varied number of robots and their applications, one that is cheaper than, and has similar functional capabilities to, its commercial counterparts. These limitations form the basis of this project: to find a solution that will allow developers to start with a pre-existing system that is flexible and, to an extent, hardware independent, meaning the user can add hardware and software modules as they need them. At university level it is common for researchers to modify existing remote controlled (RC) cars, trading off design metrics for cost (Jizhong Xiao, 2004) and enabling the designers to use common off the shelf components.

The proposed solution consists of a two tier hardware platform where an industrial computer communicates with a microcontroller via hexadecimal codes. Any microcontroller can be used as long as communications can be set up between the two devices. Both the GPS and compass connect directly to the PC, while sensors that produce frequent interrupts, like encoders, are connected to the microcontroller. The microcontroller is also responsible for motor control and the handling of analogue inputs. All of this is possible because of the way the devices communicate and the way the software is coded.
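To illustrate what a hexadecimal code system between master and slave might look like, the sketch below frames a command with a start byte and a one-byte XOR checksum. The frame layout, start byte and command values here are illustrative assumptions, not the project's actual protocol, and Python is used for brevity (the project GUI itself is written in C#):

```python
def encode_frame(cmd, data):
    """Build a command frame: 0x02 start byte, command byte,
    data bytes, then an XOR checksum over command + data.
    (Hypothetical layout for illustration only.)"""
    payload = bytes([cmd]) + bytes(data)
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([0x02]) + payload + bytes([checksum])

def decode_frame(frame):
    """Return (cmd, data) if the frame is well formed and the
    checksum matches; return None otherwise."""
    if len(frame) < 3 or frame[0] != 0x02:
        return None
    payload, checksum = frame[1:-1], frame[-1]
    x = 0
    for b in payload:
        x ^= b
    if x != checksum:
        return None
    return payload[0], list(payload[1:])
```

A corrupted byte anywhere in the payload changes the XOR result, so the slave can silently drop bad frames rather than act on them.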

This project looks at the interaction of the various devices and exists mostly in software.




Background

Mobile robots have come a long way in recent years. Like the miniaturization of PCs, these mobile platforms are getting smaller and smarter. In the past robots were controlled by heavy and expensive computers that were far too large to be carried onboard the device, which meant that the robot had to be tethered to the computer by a data cable. Computers that are powerful by yesteryear's standards can now be found in many devices, and this has led to small, very sensor-rich mobile robots being created by many companies and research facilities.

The increase in interest in mobile robots is not only because they make great toys or are inspired by science fiction stories, but because they make great tools for engineering education, security, manufacturing and researching control techniques. Mobile robots can be found at almost all universities in both postgraduate and undergraduate programmes, and are useful in studying a wide variety of disciplines, i.e. computer science, electrical and mechanical engineering, as well as cybernetics and mechatronics. (Braunl, 2008)

Autonomous vehicles require the integration of various systems, both hardware and software related. An increasing amount of time and effort over the last few years has been put into developing and optimizing these systems in order to achieve a superior product. This has been made easier with the continuous improvement of components that make up these systems.

Two block diagrams, below, show how some of these systems integrate with each other at the highest level, the first relating to the hardware and the second showing the software overview. It can be seen that even in its simplest form, there are many systems to consider. Design choices and changes need to consider not only the system at hand but also the implications for the overall system.


Figure 1 Platform overview

Figure 1 shows the five main subsystems of the entire project, the arrows in the sensor and control system indicate the flow of communication between the devices. All these systems were looked at in great detail, to better understand how the final design should function.

The PC sits at the top of the hierarchy and is responsible for relaying and processing information from the microcontroller as well as reading data from the compass and GPS. Because the PC has many more tasks to perform per unit time, it reacts to commands and inputs much more slowly than the microcontroller can. The general rule is that microcontrollers can operate in real time, whereas PCs with fully fledged operating systems are slow. (Linda Null, 2006)

The advantages of having a two tier system like this one, and the likely reason why such systems have become so popular, include the following:

The PC:

·         Allows a graphical user interface

·         Can connect to sensors using USB, Bluetooth, and RS232

·         Complex functions can be pushed into the higher layer and programmed in C#

·         Allows for quick networking over Wi-Fi or 3G networks

The microcontroller:

·         Inputs with high CPU overheads can be read on one or more microcontrollers and the data can be packed and sent to the higher layer for interpretation

·         It is faster at reacting to its environment and inputs

·         Can read and output both digital and analogue signals

Two tier architectures allow for a more flexible, feature-rich programming environment, allowing multiple low level controllers to communicate directly with the high level control. The low level controllers look after time critical data and precision, while the high level controller instructs the lower devices what to do and plans how to navigate around the environment.

Figure 2 shows a more detailed overview of the software: how the higher and lower level languages interact with each other, and the flow of data between the classes. It is important to put a lot of thought into how these classes communicate, as this is harder to change once the program is taking shape.


Figure 2 Software architecture



 Platform Design Considerations

The prototype for this project was based on figure 3; more details about what is in each block are given below.

Figure 3


Embedded PC

Embedded PCs (EPCs) are essentially small, robust computers that can operate in extreme conditions. An embedded computer is a microprocessor based system that is embedded in a larger system (which may or may not be a computer system). A very important aspect is that the design of the hardware and software of the embedded system (ES) derives its specifications from the environment with which it will interact. In many cases they have the same inputs and outputs as a standard desktop computer, including USB, SATA and even HDMI in some cases.

EPCs are often classified into different categories depending on their intended use. The simplest may be just a small 200 MHz microprocessor with a real time operating environment, while newer devices may run the latest Intel Atom or Core 2 Duo. Choosing the right device for mobile robots is important, as an EPC with a fast processor will drain batteries faster than its lighter, slower counterpart.

I require my device to run, at a minimum, Windows XP Embedded with the .NET framework to allow for programming in high level languages. This means that the chosen EPC will have many features that are already found in desktop PCs. The recommended minimum hardware requirements are: 300 MHz Pentium class processor, 128 MB RAM and 1 GB of hard drive space.



Microcontroller

Microcontrollers (also called microcontroller units, MCUs or µCs) are extremely versatile devices that are normally used in automatically controlled products, such as automobile engine control systems, remote controls, appliances, and power tools. Microcontrollers make it economical to digitally control devices and processes because they reduce the size and cost of using separate microprocessor, memory and I/O devices.

A microcontroller is essentially a small computer on a single integrated circuit, consisting of a relatively simple CPU with support functions such as a crystal oscillator, timers, serial ports, a watchdog timer, analogue I/O etc. Program memory in the form of ROM is also often included on chip, as well as small amounts of RAM. While some embedded systems are very complicated, many have minimal requirements for memory and program length, with no operating system and low software complexity. Typical input and output devices include switches, relays, solenoids, LEDs, small LCD displays, and sensors for data. Embedded systems seldom have any human interaction devices of any kind. This makes many of them well suited for battery-powered applications. (Heath, 2003)

Looking at some of the required features in more detail:

CPU: Microcontrollers must provide real time response to events in the systems they are controlling. The CPU must be capable of handling all the requirements handed to it by the software, interrupts and input devices, be it through the AD, DAC or directly over the ports. 

UART: One of the microcontroller's requirements is that it has a UART, otherwise known as a serial port, for communication. An integrated serial port means the microcontroller can very easily read and write values over the serial line. Without it, writing a byte to a serial line would be a rather tedious process requiring turning one of the I/O lines on and off in rapid succession to properly "clock out" each individual bit, including start, stop, and parity bits. This is also known as bit banging.

PWM: A dedicated Pulse Width Modulation (PWM) block makes it possible for the CPU to control devices like motors without using CPU resources in tight timer loops, as the work is handed to a dedicated device. I would prefer to have more than one of these as they are used in so many applications.

SPI: The Serial Peripheral Interface Bus or SPI bus is a synchronous serial data link standard, named by Motorola and is similar to I2C which is owned by Philips. SPI operates in full duplex mode, and devices communicate in master/slave mode where the master device initiates the data frame. SPI also allows for multiple slave devices with individual slave select lines. This is useful when communicating with compatible devices as it doesn’t tie up a UART.

Timers: Microcontrollers have timers that may be controlled, set, read, and configured individually. These timers have three general functions:

·         Keeping time and/or calculating the amount of time between events,

·         Counting the events themselves

·         Generating baud rates for the serial port.

Interrupts: As the name implies, an interrupt is an event which interrupts normal program execution. Program flow is always sequential, altered only by instructions which purposely cause program flow to deviate in some way. Interrupts give a way to "put on hold" the normal program flow, then execute a subroutine, and resume normal program flow as if it had never left. This subroutine, called an interrupt handler, is only executed when a certain event occurs. The event may be one of the timers "overflowing," information received on the serial port, or triggered by an external event. The microcontroller may be configured so that when any of these events occur the main program is temporarily suspended, and control is passed to a special section of code which would execute some function related to the event that occurred. Once complete, control would be returned to the original program.

Watchdog Timer: Another useful feature to make sure your application is always running is called a watchdog timer. The watchdog, effectively, makes sure that the running software hasn't crashed. If the watchdog determines that the software has crashed, it automatically reboots the microcontroller. The watchdog also offers a feature that, instead of rebooting the system, will trigger an interrupt. When the watchdog is enabled, the software must write to a certain special function register (SFR) on a regular basis to let the watchdog know that the program is still running correctly. If the program does not perform this write operation within a given period of time, the watchdog assumes the program has crashed and actions a system reboot (or an interrupt depending on how the watchdog has been configured).

A/D and DAC: The A/D converter is a device that is built into almost all microcontrollers that continuously converts varying analogue signals from sensors into binary code for the microcontroller. The DAC essentially reverses this process, by taking the binary code and converting it into a corresponding analogue value.



GPS

The GPS system is important for the navigation of mobile robots across open terrain. Unfortunately the system is complex and governed by many protocols. It is important to understand the many aspects of the GPS system in order to write software that is tolerant of errors, improves its accuracy and precision, and ultimately uses the system to its advantage.

In a considerably simplified approach, each satellite is sending out signals with the following content: I am satellite X, my position is Y and this information was sent at time Z. In addition to its own position, each satellite sends data about other satellites and their positions. Orbit data (ephemeris and almanac data) is then stored by the GPS receiver for later calculations.

To find its position on earth, the GPS receiver compares the time when the signal was sent by the satellite with the time the signal was received. From this time difference, the distance between receiver and satellite can be calculated.

If data from other satellites are taken into account, the present position can be calculated by trilateration (distance from three points). This means that at least three satellites are required to determine the position of the GPS receiver on earth. The calculation of a position from three satellite signals is called a 2D position fix (two-dimensional position). If four or more satellites are used, an absolute position in three dimensional space can be determined. A 3D position fix also gives the elevation above sea level.
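The trilateration idea can be sketched in two dimensions: given three known points and the measured distance to each, subtracting the circle equations pairwise leaves a small linear system for the unknown position. The Python sketch below illustrates the mathematics only; a real GPS fix is solved in three dimensions with the receiver clock bias as a fourth unknown:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """2D trilateration: find (x, y) given three reference points
    p1..p3 and the measured distances d1..d3 to each of them.
    Subtracting the circle equations pairwise gives two linear
    equations in x and y, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = 2 * (x2 - x1)
    B = 2 * (y2 - y1)
    C = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x2)
    E = 2 * (y3 - y2)
    F = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    den = A * E - B * D          # zero if the three points are collinear
    x = (C * E - B * F) / den
    y = (A * F - C * D) / den
    return x, y
```

Note that the three reference points must not be collinear, which mirrors the satellite-geometry (DOP) discussion later in this section.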

By constantly recalculating its position, the GPS receiver can find its speed and direction normally referred to as "ground speed".

Accuracy of GPS positions strongly depends on the nature of the signals. The GPS signal is quite complex and offers the following capabilities:

·         One-way (passive) position determination,

·         Exact distance and direction determination (doppler effect),

·         Transmission of navigation information,

·         Simultaneous receiving of several satellite signals,

·         Provision of corrections for ionospheric delay of signals

·         Insusceptibility against interferences and multi path effects.

The GPS data signal consists of a 50 Hz signal and contains data such as satellite orbits, clock corrections and other system parameters (information about the status of the satellites). This data is constantly being transmitted by each satellite, and from it the receiver gets the date, the approximate time and the position of the satellites.

The complete data signal consists of 37,500 bits, and at a transmission rate of 50 bit/s a total of 12.5 minutes is necessary to receive the complete signal. This is the case if no information about the satellites is stored on the receiver or if the information is outdated.

The first three subframes are identical for all 25 frames. Every 30 seconds the most important data for finding the position is transmitted within these three subframes. From the almanac data the GPS receiver can identify which satellites it is likely to be receiving data from and their actual positions. Once the satellites in line of sight are known, the receiver limits its search to only these, and this accelerates the position determination of the device.

Within the subframes is the specific information required to obtain time, date, longitude, latitude and the other parameters needed for navigation. When the GPS receiver has this data it translates it into a usable form. One of the main forms is called NMEA, a protocol used by receivers to transmit the data to a receiving device, in my case the embedded PC. The protocol output is EIA-422A, which may essentially be considered RS-232 compatible, transmitting data at 4800 baud with 8 data bits, no parity and one stop bit. I will be using the NMEA 0183 standard, in which sentences are all ASCII, start with a $ sign, end with a carriage return, and have comma separated fields. Below is an example taken from a source dedicated to GPS and the NMEA protocol, and is just one of the many types of sentences sent by the GPS receiver.


RMC - Recommended Minimum Navigation Information
 Field Number: 
  1) UTC Time
  2) Status, V=Navigation receiver warning A=Valid
  3) Latitude
  4) N or S
  5) Longitude
  6) E or W
  7) Speed over ground, knots
  8) Track made good, degrees true
  9) Date, ddmmyy
 10) Magnetic Variation, degrees
 11) E or W
 12) FAA mode indicator (NMEA 2.3 and later)
 13) Checksum
A status of V means the GPS has a valid fix that is below an internal
quality threshold, e.g. because the dilution of precision is too high 
or an elevation mask test failed. (Bennett, 1997)
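A minimal parser for the $GPRMC sentence might look like the following Python sketch. It verifies the XOR checksum computed over the characters between '$' and '*', and converts the ddmm.mmmm latitude/longitude fields into decimal degrees. The field positions follow the listing above; the function and dictionary key names are illustrative:

```python
def nmea_checksum(body):
    """XOR of all characters between '$' and '*'."""
    c = 0
    for ch in body:
        c ^= ord(ch)
    return c

def dm_to_degrees(dm, hemisphere):
    """Convert NMEA ddmm.mmmm to signed decimal degrees."""
    v = float(dm)
    degrees = int(v // 100)
    minutes = v - degrees * 100
    d = degrees + minutes / 60.0
    return -d if hemisphere in ('S', 'W') else d

def parse_gprmc(sentence):
    """Parse a $GPRMC sentence; return a dict of fields, or None
    if the checksum fails or the fix is flagged invalid (status V)."""
    body, _, tail = sentence.lstrip('$').partition('*')
    if not tail or nmea_checksum(body) != int(tail[:2], 16):
        return None
    f = body.split(',')
    if f[0] != 'GPRMC' or f[2] != 'A':
        return None
    return {'time': f[1],
            'lat': dm_to_degrees(f[3], f[4]),
            'lon': dm_to_degrees(f[5], f[6]),
            'speed_knots': float(f[7]),
            'track_deg': float(f[8]),
            'date': f[9]}
```

For example, a latitude field of 4807.038,N is 48 degrees plus 7.038 minutes, i.e. roughly 48.1173 degrees north.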

There are three types of start a GPS receiver can initialise with. The first is a cold start, where neither ephemeris nor almanac data nor the last position is known; this can take up to 12.5 minutes. It happens when the receiver has been switched off for several weeks, has been stored without batteries or has travelled approximately 300 km or more since the last position fix. The second is a warm start, which can take as little as 45 seconds to get a position fix; the almanac data needs to be available and the time of the receiver has to be correct, but the ephemeris data may be outdated. The more satellites that have come into view since the last position determination, the longer the warm start takes. The third is a hot start: if the receiver was turned on within 6 hours of it last being shut down and is located in approximately the same place, it can take as little as 15 seconds to acquire a position fix.

Most GPS receivers indicate the number of satellites they are receiving data from, but also their position on the firmament (orbit around earth). This enables the user to judge if a relevant satellite is obscured by an obstacle and if so the user might be able to change the position of the receiver by a couple of meters to improve the accuracy.

DOP values (dilution of precision) are commonly used to describe the quality of the satellite geometry. Depending on which factors enter the calculation, the following DOP values are defined:

·         GDOP (Geometric Dilution Of Precision); overall accuracy; 3D coordinates and time

·         PDOP (Positional Dilution Of Precision); position accuracy; 3D coordinates

·         HDOP (Horizontal Dilution Of Precision); horizontal accuracy; 2D coordinates

·         VDOP (Vertical Dilution Of Precision); vertical accuracy; height

·         TDOP (Time Dilution Of Precision); time accuracy; time

HDOP values below 4 are good; values above 8 are considered bad. HDOP values become worse if the received satellites are high above the receiver. VDOP values, on the other hand, become worse the closer the satellites are to the horizon. PDOP values are best if one satellite is positioned vertically above the receiver and three are evenly distributed close to the horizon. For an accurate position determination, the GDOP value should not be greater than 5. The PDOP, HDOP and VDOP values are part of the NMEA data sentence $GPGSA.
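The DOP values above are related: PDOP combines the horizontal and vertical components as PDOP² = HDOP² + VDOP². A small sketch, using the HDOP thresholds quoted above (the 'moderate' label for the in-between band is an illustrative choice):

```python
import math

def pdop(hdop, vdop):
    """PDOP from its components: PDOP^2 = HDOP^2 + VDOP^2."""
    return math.hypot(hdop, vdop)

def fix_quality(hdop):
    """Rough quality label using the thresholds quoted above
    (below 4 good, above 8 bad)."""
    if hdop < 4:
        return 'good'
    if hdop <= 8:
        return 'moderate'
    return 'bad'
```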

Errors in GPS signals can come from a number of sources, these include:

The multipath effect is caused by the reflection of satellite signals (radio waves) off objects. For GPS signals this effect mainly appears in areas with large buildings or other types of tall objects like trees or hills. A reflected signal takes more time to reach the receiver than the direct signal, and the error caused by the reflections is typically in the range of a few meters.

Another source of inaccuracy is the reduced speed of propagation in the troposphere and ionosphere. While radio signals travel with the velocity of light in the outer space, their propagation in the ionosphere and troposphere is slower.

Clock inaccuracies and rounding errors: despite the synchronization of the receiver clock with the satellite time during the position determination, there is still a slight inaccuracy in the time, which leads to an error of about 2 m in the position. Adding to this are rounding and calculation errors of the receiver that can sum up to about 1 m.

The errors of the GPS system are summarized in the following table. These individual values are not constant values and may or may not be compounded.

Error source                          Typical magnitude

Ionospheric effects                   ± 5 meters
Shifts in the satellite orbits        ± 2.5 meters
Clock errors of the satellites        ± 2 meters
Multipath effect                      ± 1 meter
Tropospheric effects                  ± 0.5 meters
Calculation and rounding errors       ± 1 meter

To increase the accuracy of a GPS system a technique called differential GPS (DGPS) can be used. It is typically used by civil receivers and enables them to achieve accuracies of 5 m or less. In order for this to work, a second, stationary GPS receiver is used to correct the measurements of the first receiver. If the position of the stationary receiver is known very accurately, a correction signal can be broadcast, which is received and analyzed by a receiver connected to the mobile GPS. Like the GPS signals themselves, the correction signal is free of charge in some countries.


Although there is much more to these systems, the sections covered here are those that were found to be important to the project. More information on the GPS system can be found in the sources listed in the bibliography: the first covers GPS in great depth and is where much of the above information is sourced from, the second is the authoritative source for the NMEA protocol, and the third is extremely useful for understanding the sentences of the NMEA protocol.



Compass

A compass is a very useful sensor in mobile robotic applications. It is extremely useful in self localization, as a mobile robot has to use many sensors in order to know its current position and orientation. A standard method of finding orientation is known as dead reckoning, which utilizes the robot's shaft encoders on each wheel and a known starting position. Dead reckoning works by adding up all the driving and turning activities of the robot to estimate its current orientation in its environment. There are however many factors which can cause this approximation to be inaccurate, for instance wheel slippage. Over time these errors add up and can render the estimate useless for determining orientation; this is why it is important to use a compass on mobile robots.

There are two types of compasses available on the market, the first being the cheap analogue variety that can only tell the robot rough directions, normally using a voltage level. This could be directly connected to the A/D converter on a microcontroller and output one of eight possible directions depending on the voltage.

The next type is the digital compass, which is far more complex and has a greater output resolution. Most of these devices have a resolution of 1° and an accuracy of 2°. The digital compass is normally fitted with inclinometers on two axes to help with accuracy when the device is not level. (Appin Knowledge Solutions, 2007)


Actuators

DC motors are arguably the most common type of motor used for locomotion in mobile robots. Many types of DC motors are clean and quiet, and can supply enough power for a number of varied tasks. These motors are also much easier to control and can be run from a portable power source, unlike pneumatic, hydraulic or AC motors, which are harder to control and in high torque applications require an umbilical cord; these other motors are therefore not normally used in small mobile robots.

Standard DC motors rotate continuously, whereas stepper motors move in discrete increments. Stepper motors do not require shaft encoders as their position can be calculated by counting the number of steps the motor has taken. DC motors, however, do not move in steps, so a shaft encoder is required to find the distance travelled. Bräunl states that the first step when building robot hardware is to select the appropriate motor system, the best choice being an encapsulated combination comprising a DC motor, gearbox, and encoder.

In order to find the correct motor with the right output power for a desired task a linear model of a DC motor can be used.

(Michigan Engineering, 2007)




θ      Angular position of shaft
ω      Angular shaft velocity
α      Angular shaft acceleration
i      Current through armature
V      Applied terminal voltage
e      Back-EMF voltage
τm     Motor torque
τa     Applied torque
Pout   Output power
Pin    Input power
Ploss  Power loss to thermal effects
R      Terminal resistance
L      Rotor inductance
J      Rotor inertia
b      Friction constant
Kt     Torque constant
Ke     Back-EMF constant
Ks     Speed constant
Kr     Regulation constant



In this motor model the motor inductance and friction are considered negligible and set to zero. The model can be easily coded into Matlab to find the desired outcomes. (Braunl, 2008) (Bekey, 2005)
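As an illustration, the steady-state form of this linear model, with inductance and friction set to zero, reduces to V = R·i + Ke·ω and τ = Kt·i. The sketch below uses Python in place of Matlab, and the constants in the usage example are arbitrary:

```python
def steady_state_speed(V, load_torque, R, Kt, Ke):
    """Steady-state shaft speed (rad/s) of the linear DC motor model
    with inductance and friction neglected:
        V = R*i + Ke*w   and   load_torque = Kt*i."""
    i = load_torque / Kt        # armature current needed for the load
    return (V - R * i) / Ke

def stall_torque(V, R, Kt):
    """Torque at zero speed: no back EMF, so all voltage drops
    across the terminal resistance R."""
    return Kt * V / R
```

For example, with R = 1 Ω, Kt = Ke = 0.5 and V = 6 V, the unloaded speed is 12 rad/s and the stall torque is 3 N·m; plotting speed against load torque between these two points gives the familiar straight-line torque/speed curve used to size a motor for a task.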

Proximity sensors

Sensors that measure distance are among the most important sensors found on mobile robots.  For many years robots have been equipped with various sensor types to measure distance to the nearest obstacle so the device can navigate around it or in more sophisticated implementations map the surrounding environment. The three main types of sensors used for distance resolving applications are sonar, infrared and laser sensors.

Sonar sensors work by sending out a short acoustic signal of about 1 ms at an ultrasonic frequency of 50 kHz to 250 kHz. This signal is emitted and the time taken to receive the echo is measured. The time taken corresponds to twice the distance of the nearest obstacle. If the emitted signal is not reflected within a certain time frame, no object is detected. Signals are sent roughly 20 times a second, giving the sensor its typical clicking sound.
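The range calculation itself is simple: the echo time covers the round trip, so the distance is half the time of flight multiplied by the speed of sound. A sketch, assuming roughly 343 m/s in air and an illustrative 4 m range window:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C (assumed constant here)

def sonar_distance(echo_time_s, max_range_m=4.0):
    """Distance to the nearest obstacle from the round-trip echo time.
    Returns None when the echo corresponds to a distance beyond the
    sensor's range window, i.e. no object detected."""
    distance = SPEED_OF_SOUND * echo_time_s / 2.0
    return distance if distance <= max_range_m else None
```

An echo arriving after 10 ms, for instance, corresponds to an obstacle about 1.7 m away.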

Sonar sensors do have a number of disadvantages but are still a very useful and powerful device for sensing distance. The sensors commonly have a very narrow cone of typically 15°, so 24 of these devices would be needed to cover the surrounding area. This is not practical, so often there are more sensors in the front and back with only a couple of sensors on the sides of the robot. This causes blind spots in the robot's sensing capabilities, and added to this is another common problem: sonar is extremely susceptible to interference and reflections. Many more disadvantages of sonar sensors have been well documented and can be found in articles published about the devices (Barshan, 2000) (Kuc, 2001).

Infrared sensors are more complicated than sonar sensors: measuring the fast time of flight of a photon directly would require very complicated circuitry, so the system instead uses an infrared LED pulsed at about 40 kHz together with a detection array. The wavelength used is typically 880 nm, which is invisible to the naked eye but can be seen using an IR detector. The angle of the reflected light changes according to distance and can therefore be used to measure distance.

Like the sonar sensor, the detector can provide either an analogue or a digital output, with the value varying according to the distance measured. Unfortunately neither the analogue nor the digital output is linear, which can be tricky to account for in software, because the sensor output signal rises to a peak between the near and far readings. An example of this is shown in the sensor's data sheet and, for convenience, can be found in Appendix B.

Lasers: in many of today's mobile robots, sonar sensors have been replaced by the more sophisticated IR sensors, and for precise obstacle avoidance and environmental mapping, lasers are being used. Lasers are capable of producing almost perfect 2D maps of the local environment, and even 3D maps of the global environment. Like the other sensors the laser has its disadvantages; it is too bulky and heavy for small robot applications, which is generally the reason why sonar and IR are used instead.

Motor Drivers

Most motor applications in mobile robots require two things:

1.       Running the motor forwards and backwards.

2.       The ability to change the motor’s speed.

It is typical that the motors will be connected to an H-bridge, either designed specifically for the motor or purchased as an off-the-shelf solution that meets the needs of the motors. The H-bridge allows the motor to be run in both forward and reverse directions. Bridges designed specifically for the motors will normally accept a PWM input, whereas an off-the-shelf solution can be purchased to accept multiple types of input including serial, analogue, or an RC receiver.

It is important when connecting an H-bridge to a microcontroller’s digital output pins that a power amplifier be used. This is because those outputs have tight power restrictions imposed on them; they can only be used to drive logic chips, never motors directly.
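The two requirements above (direction plus speed) reduce to a direction flag and a PWM duty value for the H-bridge. A minimal illustrative sketch, assuming a signed percentage speed command and an 8-bit PWM register (the names and ranges are this example's, not the report's):

```csharp
using System;

public static class HBridgeCommand
{
    // Map a signed speed (-100..+100 %) to the two values an H-bridge
    // driver needs: forward = true for positive speeds, and an 8-bit
    // PWM duty proportional to the magnitude of the speed.
    public static (bool forward, byte duty) FromSpeed(int speedPercent)
    {
        if (speedPercent > 100) speedPercent = 100;   // clamp command
        if (speedPercent < -100) speedPercent = -100;
        bool forward = speedPercent >= 0;
        byte duty = (byte)(Math.Abs(speedPercent) * 255 / 100);
        return (forward, duty);
    }
}
```

In this project's architecture such a pair would be sent from the master to the slave microcontroller, which drives the bridge's direction and PWM pins.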


A servo motor is a high quality DC motor, typically used in servoing applications, and must be able to handle fast changes in position, speed and acceleration. In contrast, a hobby servo comprises a DC motor with encapsulated electronics using pulse-width (PW) control. A servo has three inputs: Vcc, ground, and the PW control input. PW, unlike PWM, is used to tell the motor what angular position to move to. A servo’s disc cannot rotate continuously like a DC motor’s, but it can rotate ±120° from its centre location. The PW signal has a frequency of 50 Hz, and the pulse width sets the desired position.

Servos are useful in robotics for steering applications or for moving sensors around. Unfortunately servos do not have positional feedback, so their true position can vary from the required position. This may be caused by obstructions or by the load being too high for the device.
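The angle-to-pulse-width mapping described above can be sketched as below. The 1500 µs centre and ±500 µs swing are a common hobby-servo convention assumed here for illustration; the exact range is servo-specific and should be taken from the data sheet.

```csharp
using System;

public static class ServoPulse
{
    public const double CentreUs = 1500.0;            // assumed centre pulse
    public const double UsPerDegree = 500.0 / 120.0;  // assumed ±500 us over ±120°

    // Angle in degrees from centre (clamped to ±120) to pulse width in
    // microseconds; the 50 Hz frame rate is fixed by the servo protocol.
    public static double WidthUs(double angleDeg)
    {
        angleDeg = Math.Max(-120.0, Math.Min(120.0, angleDeg));
        return CentreUs + angleDeg * UsPerDegree;
    }
}
```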




An encoder is a sensor attached to a rotating object (such as a wheel or motor) to measure rotation. By measuring rotation your robot can do things such as determine displacement, velocity, acceleration, or the angle of a rotating sensor.

There are three basic ways positional feedback can be achieved, each with many different types and configurations, all of which stem from the resolver. A general description of each technology follows.


This rotary positioning device is an analogue sensor which has been used for many years in military applications. Resolvers were engineered to be reliable in the harshest of conditions. Resistance to heat, dust, humidity, oil and vibration makes this device outperform encoders and pulse coders. In typical applications the resolver transfers data to a PLC and is then interpreted to perform various functions.


This is a digital device which contains a glass scale. The glass scale contains evenly spaced divisions through which a light passes. The encoder generally contains three sensing cells, one for home position (Z channel) and two for incremental position (A and B channels). Encoders also offer higher resolution and greater accuracy than resolvers. However, they are much less tolerant of shock and of temperatures above 100 °C.
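The A and B channels above form a quadrature pair: comparing the previous and current two-bit state gives the direction of each step. A minimal illustrative decoder (names are this sketch's; invalid transitions from missed samples are not handled):

```csharp
public sealed class QuadratureDecoder
{
    private int _lastState = -1; // -1 until the first sample is taken
    public long Count { get; private set; }

    // a, b: current logic levels of the two encoder channels.
    public void Sample(bool a, bool b)
    {
        int state = (a ? 2 : 0) | (b ? 1 : 0);
        if (_lastState >= 0 && state != _lastState)
        {
            // Forward Gray-code sequence of states: 0 -> 1 -> 3 -> 2 -> 0
            int[] next = { 1, 3, 0, 2 };
            if (next[_lastState] == state) Count++;
            else Count--;
        }
        _lastState = state;
    }
}
```

The Z (home) channel would be sampled separately to zero the count once per revolution.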

Pulse Coders

A pulse coder is very similar to an encoder in that it detects positional feedback. Pulse coders, however, also track the velocity feedback, or rotational speed, of the encoder.

A typical encoder uses optical sensors, a moving mechanical component, and a special reflector to provide a series of electrical pulses to the microcontroller. These pulses can be used as part of a PID feedback control system to determine the distance, rotational velocity, and the angle of a moving robot or robot part.

Knowing the distance or angle between each pulse, and the time from start to finish, position, angle or velocity can easily be determined. Encoders are necessary for making robot arms, and are useful for acceleration control of heavier robots. They are also commonly used for mobile robots.

For every pulse sent out by the encoder, the rotating wheel has travelled a certain angle. In order to do the calculation to find the distance travelled, other factors need to be known, such as wheel diameter and encoder resolution (number of clicks per 360 degrees, or counts per revolution). The way to calculate this is:

wheel circumference / counts per revolution = distance travelled per encoder count

The velocity is just the distance divided by the time:

distance travelled per encoder count / time = velocity
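The two formulas above translate directly into code. The wheel diameter and resolution used in the usage note are illustrative values, not measurements from this platform.

```csharp
using System;

public static class EncoderMath
{
    // wheel circumference / counts per revolution = distance per count
    public static double DistancePerCount(double wheelDiameterM, int countsPerRev)
        => Math.PI * wheelDiameterM / countsPerRev;

    // distance travelled over elapsed time = velocity (m/s)
    public static double Velocity(long counts, double wheelDiameterM,
                                  int countsPerRev, double elapsedS)
        => counts * DistancePerCount(wheelDiameterM, countsPerRev) / elapsedS;
}
```

For example, a 0.1 m diameter wheel with a 360-count encoder moves about 0.87 mm per count, so 360 counts in one second is roughly 0.314 m/s.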

After determining the distance and velocity, these values must then be run through a PID feedback control algorithm so that the robot can match a pre-determined distance and velocity.
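A minimal discrete PID sketch of the feedback step described above; the gains are placeholders, and a real controller would also need tuning and integrator anti-windup.

```csharp
public sealed class PidController
{
    private readonly double _kp, _ki, _kd;
    private double _integral, _lastError;

    public PidController(double kp, double ki, double kd)
    { _kp = kp; _ki = ki; _kd = kd; }

    // error = setpoint - measured value; dt = time step in seconds.
    // Returns the drive effort to apply (e.g. a PWM duty adjustment).
    public double Update(double error, double dt)
    {
        _integral += error * dt;                       // accumulate error
        double derivative = (error - _lastError) / dt; // rate of change
        _lastError = error;
        return _kp * error + _ki * _integral + _kd * derivative;
    }
}
```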

There are several problems with using encoders for robot position control. Errors can quickly build up, caused by slippage, which is why it is not recommended to rely on encoders for position feedback in robot navigation. Encoders also have finite accuracy: if an encoder has a 360-count resolution, a reading can be off by an entire degree. Ambient light needs to be kept out of the sensor, as it could potentially result in false reads.

High resolution encoders used for velocity control can take a lot of computational power. It is therefore better to use a digital counter IC to count encoder pulses than to have a small microcontroller counting them; the microcontroller can then read the counter value serially.



Drive train

The wheel, because of its efficiency and relatively simple mechanical construction, has been one of, if not the, most commonly chosen locomotion mechanisms in mobile robotics and for man-made vehicles in general. Balance is usually not a problem because designs allow all wheels to touch the ground at all times. Three wheels on a robot are sufficient for achieving good balance, but four wheels will improve stability further.

Some of the more important features of wheeled robotic applications are in the areas of traction and stability, manoeuvrability and control. There are four basic wheel types (Roland Siegwart, 2004), each with its strengths and weaknesses:

1.       The standard wheel; two degrees of freedom; rotation around the (motorized) wheel axle and the contact point.

2.       Castor wheel; two degrees of freedom; rotation around an offset steering joint

3.       Swedish wheel; three degrees of freedom; rotation around the (motorized) wheel axle, around the roller and around the contact point

4.       Ball or spherical wheel; realization technically difficult.


Due to their differences in kinematics, the choice of wheel type will have a significant effect on the overall kinematics of the robot.

The standard and castor wheel have a primary axis of rotation and are thus highly directional. To move in a different direction, the wheel must first be steered about a vertical axis. The main difference between these two types of wheel is that the standard wheel can achieve steering motion without side effects, whereas the castor wheel, because it rotates about an offset axis, imparts a force on the robot chassis during steering.

The Swedish wheel and the spherical wheel are both designs that are less restricted by directions. The Swedish wheel functions like the standard wheel  while providing low resistance in other directions such as perpendicular or at an intermediate angle. The small rollers attached to the circumference of the wheel are passive, and the wheel’s primary axis is the only actively powered joint. The main benefit of this is that the wheel, despite being powered along the one principal axis, can kinematically move with very little friction along several possible paths other than just forward and backwards.

The Spherical wheel is a true omnidirectional wheel often designed to be actively powered to spin along any direction.

Robots designed for all-terrain environments, or those with four or more wheels, require a suspension system to achieve constant ground contact when the vehicle travels across uneven ground. One of the easiest ways to achieve good suspension is to build it into the wheel itself, e.g. fitting a deformable tire of soft rubber to a castor wheel. This basic solution naturally cannot compete with sophisticated suspension systems where a robot needs more dynamic suspension for very uneven terrain.



Wheel geometry:

The choice of wheel type for a mobile robot is closely linked with the choice of wheel arrangement. Unlike automobiles, often designed for highly standardized environments, mobile robots are designed for applications in a wide variety of situations. There is no single wheel configuration that maximises the manoeuvrability, control and stability for all environments used by mobile robots.

There are a multitude of wheel configurations commonly used, e.g.:

1.       Two-wheel differential drive: two independently driven wheels in the rear/front with an unpowered omnidirectional wheel in the front/rear.

2.       Three motorized Swedish or spherical wheels arranged in a triangle, making omnidirectional movement possible.

3.       Two motorized wheels in the rear with two steered wheels in the front (the steering angles must differ between the pair to avoid skidding).

4.       Four steered and motorized wheels, or four omnidirectional wheels.

There are important trends and groupings covering the general advantages and disadvantages of each configuration in terms of stability, manoeuvrability and control.


The minimum number of wheels required for stability is two, if the centre of mass is below the wheel axle. However, under most circumstances this solution requires wheel dimensions that are impractically large. Dynamics can also cause a two-wheeled robot to strike the floor with a third point of contact, for instance under sufficiently high motor torques from standstill. Conventionally, static stability requires a minimum of three wheels, with the centre of gravity within the triangle formed by the ground contact points of the wheels. Stability can be further improved by adding more wheels, although once the number of contact points exceeds three, the hyperstatic nature of the geometry will require some form of suspension, as mentioned above.


For omnidirectional robots, powered Swedish or spherical wheels are often chosen to achieve the level of manoeuvrability required for the wheels to move in more than one direction. Ground clearance with these wheels is somewhat limited due to the mechanical constraints of constructing omnidirectional wheels, but recent solutions make this possible, e.g. a four-castor-wheel configuration in which each castor wheel is actively steered and actively translated. Another popular class of highly manoeuvrable mobile robots are those with a circular chassis and an axis of rotation at the centre, allowing the robot to spin without changing its ground footprint, giving it properties similar to those of omnidirectional wheels. One or two additional ground contact points may be added to increase stability, depending on the application specifics.






There is an inverse correlation between controllability and manoeuvrability. For example, omnidirectional designs require significant processing to convert desired rotational and translational velocities to individual wheel commands, and these designs often have more degrees of freedom at the wheel. Controlling an omnidirectional robot along a specific direction of travel is also more difficult and less accurate compared to less manoeuvrable designs. Even in a differential drive vehicle, driving the two motors along exactly the same velocity profile creates its own challenges, e.g. due to variations between wheels, motors and environment.
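For the differential drive case mentioned above, the conversion from desired body velocities to wheel commands is at least straightforward. A standard inverse-kinematics sketch (the parameter names are illustrative, not from the report):

```csharp
public static class DiffDrive
{
    // Convert a desired forward velocity v (m/s) and turn rate omega
    // (rad/s) into (left, right) wheel velocities in m/s, for a robot
    // whose wheels are trackWidthM metres apart.
    public static (double left, double right) WheelSpeeds(
        double v, double omega, double trackWidthM)
    {
        double half = omega * trackWidthM / 2.0; // speed offset per wheel
        return (v - half, v + half);
    }
}
```

Straight-line motion gives equal wheel speeds, and turning on the spot gives equal and opposite speeds; the control difficulty lies in making the real motors actually track these setpoints.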

There is, in summary, no ‘ideal’ drive configuration that simultaneously maximises control, manoeuvrability and stability for a robot; the drive configuration is generally chosen based on the priority among these features.


Summary of parts consideration

From this research of the commercial robots, a parts list was created to estimate the approximate cost of required components (Appendix A). It was found that the components needed to build a similar robot would cost approximately US$1600. Based on this parts list and the objectives and timeframe of this project, I set a budget of $500.





In order to solve problems, there are many programming languages available. There are general languages suitable for common problems (C, C++, Java) and languages developed for special problems (SQL, Perl). The situation is similar when programming robots. General robot programming languages include ARPS/VAL (Advanced Robot Programming System/Variable Assembly Language), SRCL (Siemens Robot Control Language), MML (Model-Based Mobile Robot Language) (Meynard, 2000) and MRL (Multiagent Robot Language) (Hiroyuki Nishiyama, 1998). There are also languages designed for hobbyists (Microsoft Robotics Studio, Lego Mindstorms). These packages require users to purchase hardware that has either been specially designed for the software or for which drivers have been written.

Programming drivers for the hardware is outside the scope of this project, as this is deemed too specialised and time consuming. With the flexibility of the project in mind, the high level software will most likely be programmed in C#, Java, LabVIEW or Matlab. Matlab is considered a high level language here as control of the hardware can be done through Simulink. The choice of low level language is slightly simpler, as only two languages are commonly used, with the exception of the OOPIC. Low level development is more complicated in that there are many compilers available on the market, each with various advantages and disadvantages.



C# is a multi-paradigm programming language encompassing imperative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. It was developed by Microsoft within the .NET initiative and later approved as a standard by ECMA (ECMA-334) and ISO (ISO/IEC 23270). C# is one of the programming languages designed for the Common Language Infrastructure.


C# is intended to be a simple, modern, general-purpose, object-oriented programming language  (ECMA, 2006). Its development team is led by Anders Hejlsberg, the designer of Borland's Turbo Pascal. Mr Hejlsberg said in an interview that its object-oriented syntax is based on C++ and other languages  (Hejlsberg, 2000). James Gosling, who created the Java programming language in 1994, called it an 'imitation' of that language (Gosling, 2002). The most recent version is C# 3.0, which was released in conjunction with the .NET Framework 3.5 in 2007.

The ECMA standard lists the design goals for C# as:

  • The C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
  • The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
  • The language is intended for use in developing software components suitable for deployment in distributed environments.
  • Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
  • Support for internationalization is very important.
  • C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
  • Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

By design, C# is the programming language that most directly reflects the underlying Common Language Infrastructure (CLI). Most of its intrinsic types correspond to value-types implemented by the CLI framework. Some notable distinguishing features of C# are:

  • There are no global variables or functions. All methods and members must be declared within classes. Static members of public classes can substitute for global variables and functions.
  • Local variables cannot shadow variables of the enclosing block, unlike C and C++. Variable shadowing is often considered confusing by C++ texts.
  • C# supports a strict Boolean data type, bool. Statements that take conditions, such as while and if, require an expression of a Boolean type.
  • In C#, memory address pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run. Most object access is done through safe object references, which always either point to a "live" object or have the well-defined null value.
  • Managed memory cannot be explicitly freed; instead, it is automatically garbage collected. Garbage collection addresses the problem of memory leaks by freeing the programmer of responsibility for releasing memory which is no longer needed.
  • Multiple inheritance is not supported, although a class can implement any number of interfaces. This was a design decision by the language's lead architect to avoid complication and simplify architectural requirements throughout CLI.
  • C# is more type safe than C++. The only implicit conversions by default are those which are considered safe, such as widening of integers
  • Enumeration members are placed in their own scope.
  • C# provides properties as syntactic sugar for a common pattern in which a pair of methods, accessor (getter) and mutator (setter) encapsulate operations on a single attribute of a class.
  • Full type reflection and discovery is available.
  • C# currently (as of 3 June 2008) has 77 reserved words.

(ECMA, 2006)
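The "properties as syntactic sugar" point above can be illustrated with a short sketch (the class and range are made up for the example): a property exposes getter and setter logic behind field-like syntax.

```csharp
using System;

public class Motor
{
    private int _speed; // backing field hidden behind the property

    // Callers write motor.Speed = 50; the setter still runs and validates.
    public int Speed
    {
        get { return _speed; }
        set
        {
            if (value < -100 || value > 100)
                throw new ArgumentOutOfRangeException(nameof(value));
            _speed = value;
        }
    }
}
```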

The C# language definition and the CLI are standardized under ISO and ECMA standards, which provide reasonable and non-discriminatory licensing protection from patent claims. Microsoft uses C# and the CLI in its Base Class Library (BCL), which is the foundation of its proprietary .NET framework. This provides a variety of non-standardized classes (extended I/O, GUI, web services).

Using C# allows the programmer to utilise the vast pre-programmed libraries of the .NET framework. The Microsoft .NET Framework is a software framework that can be installed on computers running Microsoft Windows operating systems. It includes a large library of coded solutions to common programming problems, and a virtual machine that manages the execution of programs written specifically for the framework. The .NET Framework is a Microsoft product, intended to be used by new applications designed for the Windows platform.

The framework's Base Class Library provides a large range of features including user interface, data and data access, database connectivity, web application development, numeric algorithms, and network communications. The class library is used by programmers, who combine it with their own code to produce applications.

Programs written for the .NET Framework run in an environment that manages the program's requirements. Part of the .NET Framework runtime environment is known as the Common Language Runtime (CLR). The CLR provides the appearance of an application virtual machine so that programmers don’t need to consider the specific CPU that will run the program. This allows the programmer to write applications for embedded devices on a desktop with ease. The CLR also provides other important services such as security, memory management, and exception handling.




Java was originally developed by James Gosling at Sun Microsystems and released in 1995 as the basis for the Sun Microsystems' Java platform. The language derives much of its syntax from C and C++, but has a simplified object model with fewer low-level facilities. Java applications are typically compiled to a class file that can run on any Java Virtual Machine regardless of computer architecture.

In compliance with the specifications of the Java Community Process, Sun relicensed most of their Java technologies under the GNU General Public License in May 2007. The GNU Compiler for Java and GNU Classpath are alternative implementations of these Sun technologies.

There were five primary goals in the creation of the Java language  (The Java Language Environment, 1997):

  1. It should be "simple, object oriented and familiar".
  2. It should be "robust and secure".
  3. It should be "architecture neutral and portable".
  4. It should execute with "high performance".
  5. It should be "interpreted, threaded, and dynamic".

One characteristic of Java is portability, which means that computer programs written in the Java language must run similarly on any supported hardware/operating-system platform (The Java Language Environment, 1997). This is achieved by compiling the Java language code to a Java byte code, rather than directly to platform-specific machine code. Java byte code instructions are similar to machine code, but are intended to be interpreted by a virtual machine (VM) written specifically for the host hardware.

Standardized libraries provide a common way to access host-specific features such as graphics, threading and networking. A major benefit of using byte code is portability. However, the overhead of running Java applications in the virtual machine means that interpreted programs almost always run more slowly than programs compiled to native executables.



LabVIEW (short for Laboratory Virtual Instrumentation Engineering Workbench) is a platform and development environment for a visual programming language from National Instruments. LabVIEW was originally released for the Apple Macintosh in 1986. It is frequently used for data acquisition, instrument control, and industrial automation on a variety of platforms including Microsoft Windows, various distributions of UNIX, Linux, and Mac OS X. The latest version of LabVIEW is LabVIEW 2009, released August 2009.

The programming language used in LabVIEW, also referred to as “G”, is a dataflow programming language. Execution is determined by the structure of a graphical block diagram (the LV-source code) where different function-nodes can be attached by drawing wires. These wires transmit variables that any node can execute as soon as all input data is available. Since this can be occurring at multiple nodes simultaneously, “G” is essentially capable of parallel execution. Multi-processing and multi-threading hardware is automatically utilised by the built-in scheduler, which multiplexes multiple threads over the nodes ready for execution.

One benefit of LabVIEW over other development environments is its support for instrumentation hardware. Drivers for many different types of instruments are included or available, and are shown as graphical blocks. There is also extensive support for abstraction layers that offer standard software interfaces to communicate with various hardware devices. These provided driver interfaces save program development time, allowing people with limited coding experience to write programs in a shorter time frame compared to more conventional languages.

In terms of performance, LabVIEW has a compiler that generates native code for the CPU platform. The program runs with the help of the run-time engine, which contains some precompiled code to perform common tasks defined by the G language. The run-time engine provides a consistent interface to various operating systems and reduces compile time. Many libraries with a large number of functions for data acquisition, signal generation, mathematics, statistics, signal conditioning and analysis are provided in several LabVIEW package options.

 Like the Java virtual machine, the benefit of the LabVIEW environment is the platform independent nature of the G code, which is portable between the different LabVIEW systems for different operating systems.

There is a low cost LabVIEW Student Edition aimed at educational institutions for learning purposes. There is also an active community of LabVIEW users who communicate through several e-mail groups and Internet forums.



Simulink, developed by The MathWorks, is a commercial tool for modelling, simulating and analysing multi-domain dynamic systems. It is essentially very similar to LabVIEW, with its graphical programming and allowances for direct programming. It primarily uses a graphical interface consisting of blocks, with large sets of block libraries. It integrates well with the rest of the MATLAB environment and can either read from or write to the control window. Simulink is widely used in control theory and digital signal processing for multi-domain simulation and design.

Simulink with a plug-in (Real-Time Workshop) from MathWorks can automatically generate C code for real-time implementation of systems. As the efficiency and flexibility of the code improves, this is becoming more widely used for production systems. It is also, because of its flexibility and capacity for quick iteration, a popular tool for embedded system design work.

xPC Target can be used together with x86-based real-time systems to simulate and test Simulink and Stateflow models in real time. There are also add-ons for other embedded devices.

Matlab, with all its available tools, is freely available at university, and I am very familiar with programming m-files and using Simulink. However, its inability to interface with drivers for various hardware devices makes it problematic for this project, although it might prove to be a powerful testing tool for control algorithms.



Low Level languages

C is a general-purpose computer programming language developed in 1972 by Dennis Ritchie at the Bell Telephone Laboratories for use with the UNIX operating system.

Although C was designed for implementing system software, it is also widely used for developing portable application software.

C is one of the most popular programming languages. It is widely used on many different software platforms, and there are few computer architectures for which a C compiler does not exist. C has greatly influenced many other popular programming languages, most notably C++, which originally began as an extension to C.

C is an imperative (procedural) systems implementation language. It was designed to be compiled using a relatively straightforward compiler, to provide low-level access to memory, to provide language constructs that map efficiently to machine instructions, and to require minimal run-time support. C was therefore useful for many applications that had formerly been coded in assembly language.

Despite its low-level capabilities, the language was designed to encourage machine-independent programming. A standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with little or no change to its source code. The language has become available on a very wide range of platforms, from embedded microcontrollers to supercomputers.

C's design is tied to its intended use as a portable systems implementation language. It provides simple, direct access to any addressable object (for example, memory-mapped device control registers), and its source-code expressions can be translated in a straightforward manner to primitive machine operations in the executable code. Some early C compilers were comfortably implemented (as a few distinct passes communicating via intermediate files) on PDP-11 processors having only 16 address bits, and C compilers for several common 8-bit platforms have been implemented as well, making it an ideal microcontroller language.

Like most imperative languages in the ALGOL tradition, C has facilities for structured programming and allows lexical variable scope and recursion, while a static type system prevents many unintended operations. In C, all executable code is contained within functions. Function parameters are always passed by value. Pass-by-reference is simulated in C by explicitly passing pointer values. Heterogeneous aggregate data types (struct) allow related data elements to be combined and manipulated as a unit. C program source text is free-format, using the semicolon as a statement terminator (not a delimiter) .





C also exhibits the following more specific characteristics:

  • lack of nested function definitions
  • variables may be hidden in nested blocks
  • partially weak typing; for instance, characters can be used as integers
  • low-level access to computer memory by converting machine addresses to typed pointers
  • function and data pointers supporting ad hoc run-time polymorphism
  • array indexing as a secondary notion, defined in terms of pointer arithmetic
  • a pre-processor for macro definition, source code file inclusion, and conditional compilation
  • complex functionality such as I/O, string manipulation, and mathematical functions consistently delegated to library routines
  • A relatively small set of reserved keywords

The relatively low-level nature of the language affords the programmer close control over what the computer does, while allowing special tailoring and aggressive optimization for a particular platform. This allows the code to run efficiently on very limited hardware, such as embedded systems.

C's primary use is for "system programming", including implementing operating systems and embedded system applications, due to a combination of desirable characteristics such as code portability and efficiency, ability to access specific hardware addresses, ability to "pun" types to match externally imposed data access requirements, and low runtime demand on system resources.

Overall, C is a powerful language for programming microcontrollers and other embedded devices. As most microcontroller compilers support C, this gives it a huge advantage over assembly language; assembly is extremely time consuming when programming functions like serial communications.














Part Selection:

The hardware design began with the selection of components for the mobile robotic platform according to the platform diagram. Parts were chosen according to the features that they offered the project. Devices that had a small footprint and consumed less power than their competitors were preferred, as long as the cost was acceptable. The more features parts had, the higher the likelihood that they would be included in the project. In some cases parts were selected by default as they came with other devices; this is mostly prevalent in the chassis, where an RC truck was purchased and many components came in one package.

Hardware Platform

Following the approach many universities have used in the past, I decided that building a platform from an existing RC car would be the quickest way to start assembling the robot and start coding the devices for communication with the PC. Although the Tamiya TX1 is often used, it is rather tall and has poor stability when weight is placed above its centre of gravity. This led me to purchase two smaller RC trucks, strip the parts, and CNC-mill a new base plate that the parts could be mounted back onto.

The base plate template was produced by photocopying the original vehicle’s plate and importing the image into SolidWorks. Once imported, the image was scaled 1:1. It was decided that the vehicle needed to be longer and wider, so the image was copied and pasted three more times and spread out across the screen, with the images overlapping in the centre. An outline of the new plate was made and three of the four images were removed. The original base plate was symmetrical, so the fourth image was moved around the new (also symmetrical) plate to locate the mounting holes for the original drive train.

The plate was then cut out and drilled and the components mounted. More about the mounting can be found in the Drive train section below.

Embedded PC

Massey University has an embedded PC, but because of its low specifications it was not able to run Windows Embedded with the .NET 3.5 Framework. To get around this issue an Asus Eee PC was used to simulate the operating environment and required functionality.


Microcontroller

I chose to use the Silabs 8051F020 as it is extremely versatile. As the C# communication function is very inefficient, it requires a large amount of processing power from the microcontroller. The device also has two UARTs, so serial communications can be set up to other devices and not just the PC. Massey University also has very good compilers (Keil) and support personnel who make programming the devices simpler. Other microcontrollers were not chosen because they lacked the connectivity and communication options that the F020 offered.

Other features that made the microcontroller appealing:



12-Bit ADC

-          Programmable throughput up to 100 ksps

-          8 external inputs; programmable as single-ended or differential

-          Programmable amplifier gain:  16, 8, 4, 2, 1, 0.5

-          Data-dependent windowed interrupt generator

-          Built-in temperature sensor (±3 °C)


High-Speed 8051 µC Core

-          Pipelined instruction architecture; executes 70% of instructions in 1 or 2 system clocks

-          Up to 25 MIPS throughput with 25 MHz system clock

-          22 vectored interrupt sources


8-Bit ADC

-          ±1 LSB INL; no missing codes

-          Programmable throughput up to 500 ksps

-          8 external inputs

-          Programmable amplifier gain:  4, 2, 1, 0.5



-          4352 bytes data RAM

-          64 kB Flash; in-system programmable in 512-byte sectors (512 bytes are reserved)

-          External parallel data memory interface


Two 12-Bit DACs

-          Can synchronize outputs to timers for jitter-free waveform generation


Digital Peripherals

-          64 port I/O; all are 5 V tolerant

-          Hardware SMBus™ (I2C™ compatible), SPI™, and two UART serial ports available concurrently

-          Programmable 16-bit counter/timer array with 5 capture/compare modules

-          5 general-purpose 16-bit counter/timers

-          Dedicated watchdog timer; bidirectional reset

-          Real-time clock mode using Timer 3 or PCA


Two Comparators

Internal Voltage Reference

VDD Monitor/Brown-out Detector


Clock Sources

-          Internal programmable oscillator:  2–16 MHz

-          External oscillator: Crystal, RC, C, or Clock

-          Can switch between clock sources on-the-fly


On-Chip JTAG Debug & Boundary Scan

-          On-chip debug circuitry facilitates full speed, non-intrusive in-system debug (no emulator required)

-          Provides breakpoints, single stepping, watch points, stack monitor

-          Inspect/modify memory and registers

-          Superior performance to emulation systems using ICE-chips, target pods, and sockets

-          IEEE1149.1 compliant boundary scan

Supply Voltage:  2.7 to 3.6 V

-          Typical operating current:  10 mA at 25 MHz

-          Multiple power saving sleep and shutdown modes

100-Pin TQFP

Temperature Range:  –40 to +85 °C




GPS

The GPS device itself is not that important, but rather the information it can receive from the satellites. I required GGA, GSA, GSV, RMC, GLL and preferably VTG. There were many devices that offered these NMEA strings, so I chose a device that also had Bluetooth, allowing it to be mounted anywhere on the robot.

The purchased device was the Qstarz BT-Q1000 Bluetooth GPS Travel Recorder. It had an impressive spec sheet that included fast start-up as well as long battery life, could be charged via USB, and used the MTK GPS chipset.

The MTK GPS chipset is technically one of the latest and most sensitive GPS systems. This means that the data logger will work even under the most adverse conditions. This is important if the robot was to travel in a forest with overhanging trees, or in a city with tall buildings.

The available NMEA strings can also be parsed in software to work out the amount of error incurred, and decisions can then be made on how to proceed. If the signal is really weak the robot may need to rely on other sensors for a period of time until the signal is re-acquired.
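As a minimal sketch of this idea, the fix mode (from GPGSA) and the horizontal dilution of precision (HDOP) could gate how much the navigation trusts the GPS. The threshold values 2.0 and 6.0 here are illustrative choices, not figures from this project.

```c
/* Gate navigation decisions on GPS quality. Fix-mode values follow the
   GPGSA convention; the HDOP thresholds are illustrative assumptions. */
typedef enum { GPS_TRUST = 0, GPS_DEGRADED = 1, GPS_IGNORE = 2 } gps_quality_t;

gps_quality_t classify_fix(int fix_mode, double hdop)
{
    if (fix_mode < 2)   /* 1 = fix not available */
        return GPS_IGNORE;
    if (hdop <= 2.0)    /* tight horizontal dilution: trust the fix */
        return GPS_TRUST;
    if (hdop <= 6.0)    /* usable, but blend with the compass */
        return GPS_DEGRADED;
    return GPS_IGNORE;  /* too weak: fall back to other sensors */
}
```

In the degraded band the robot would weight the compass heading more heavily until the dilution figures improve.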

  The GPS has the following specs:


Compass

Although there were many options to choose from, only the Silabs F350 and the Devantech CMPS03 compass had similar features for a relatively low cost. The Devantech device was smaller and consumed less power but required the microcontroller to receive its signal. The Silabs F350 was more versatile as it could be connected directly to a USB port, removing the overhead from the microcontroller.

The Silabs F350 compass is based on the C8051F350 MCU. This multi-axis, tilt-compensated electronic compass interfaces very small signals directly to the MCU with no additional instrumentation circuitry. There are three separate axes of magnetoresistive sensing elements, a two-axis accelerometer to maintain accuracy when the device is tilted, and a temperature sensor to probe its environment.

Proximity sensors

The proximity sensors are normally connected directly to the microcontroller, and I wanted a device that could measure both near and far (30cm to 5m) as well as being connectable to a PC for testing. As this device is such a small part of the project and it is the associated code that matters, I went for a simple hobbyist device. The solution I found was the Devantech SRF02 low-cost ultrasonic range finder. The SRF02 features both I2C and serial interfaces. The serial interface is a standard TTL-level UART format at 9600 baud, 1 start, 2 stop and no parity bits, and may be connected directly to the serial ports on any microcontroller. Up to 16 SRF02s may be connected together on a single bus, either I2C or serial, which is important as 8 devices are needed to cover the perimeter of the robot.

New commands in the SRF02 include the ability to send an ultrasonic burst on its own without a reception cycle, and the ability to perform a reception cycle without the preceding burst. Because the SRF02 uses a single transducer for both transmission and reception, the minimum range is higher than other dual transducer rangers.

 SRF02 Specification:

·         Voltage - 5V only required

·         Current - 4mA typical

·         Frequency - 40kHz

·         Range - 15cm to 6m

·         Resolution - 3 to 4cm

·         Analogue Gain - Automatic 64-step gain control

·         Connection Modes -

§  Standard I2C bus

§  Serial bus - connects up to 16 devices to any uP or UART serial port

·         Full Automatic Tuning - No calibration, just power up and go

·         Timing - Fully timed echo, freeing the host controller of the task

·         Units - Range reported in µs, mm or inches

·         Light Weight - 4.6g

·         Small Size - 24mm x 20mm x 17mm height

By itself the device can only communicate with the microcontroller but the manufacturer also makes a USB to I2C interface module that proved useful in testing the capabilities of the device.
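A ranging exchange with the SRF02 can be sketched as below. The I2C helper functions are stand-ins for whatever bus driver the microcontroller provides; here they act on a small array so the register arithmetic can be exercised on its own. The command value 0x51 ("range in centimetres") and the result registers 2 and 3 follow the manufacturer's documentation.

```c
/* Stand-in for the SRF02's register file, so the exchange can be
   exercised without real hardware. */
static unsigned char srf02_regs[8];

static void i2c_write(unsigned char reg, unsigned char val) { srf02_regs[reg] = val; }
static unsigned char i2c_read(unsigned char reg)            { return srf02_regs[reg]; }

/* Start a ranging cycle and read back the 16-bit result. */
unsigned int srf02_range_cm(void)
{
    i2c_write(0, 0x51);  /* command register 0, 0x51 = range in cm */
    /* ...a real driver waits roughly 65 ms here for the echo cycle... */
    return ((unsigned int)i2c_read(2) << 8) | i2c_read(3);  /* high, low byte */
}
```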

Motor Drivers

The original motor drivers designed for this project involved building dual H-bridges that would accept a PWM input from the microcontroller. The original design failed to supply the required output current for repetitive starts and with no current feedback tended to burn out in stall conditions. To avoid redesigning the H-bridges a Sabertooth motor controller was purchased.

This added to the overall flexibility of the project as it was capable of running in many operating modes as well as thermal and battery protection.

The specifications are as follows:

Input voltage: 6-24V nominal, 30V absolute max.

Output Current: Up to 25A continuous per channel. Peak loads may be up to 50A per channel for a few seconds.

Recommended power sources are:

·         5 to 18 cells high capacity NiMH or NiCad

·         2s to 6s lithium ion or lithium polymer. Sabertooth motor drivers have a lithium battery mode to prevent cell damage due to over-discharge of lithium battery packs.

·         6v to 24v high capacity lead acid

·         6v to 24v power supply (when in parallel with a suitable battery).


Dimensions: Size: 65 x 80 x 20mm Weight: 3.5oz / 96g

Mixed and independent options:

The Sabertooth features mixed modes designed especially for differential drive robots, where two motors provide both steering and propulsion. It also has independent options in all operating modes. This is useful for dual motor control.

Synchronous regenerative drive:

Going one step farther than just regenerative braking, the Sabertooth motor driver will return power to the battery any time a deceleration or motor reversal is commanded. This can lead to dramatic improvements in run time of systems that stop or reverse often, like a placement robot or a vehicle driving on hilly terrain (Wellington, NZ). This drive scheme also saves power by returning the inductive energy stored in the motor windings to the battery each switching cycle, instead of burning it as heat in the motor windings. This makes part-throttle operation very efficient.

Thermal and over current protection:

Dual temperature sensors and overcurrent sensing protect the device from failure due to overheating, overloading and short circuits.

Easy mounting and setup:

All operating modes and options are set with DIP switches. Its small size and light weight allow for more payload, or smaller and more nimble platform designs.


Carefree reversing:

Unlike some other motor drivers, there is no need for the Sabertooth to stop before being commanded to reverse. It can go from full forward immediately to full reverse or vice versa. Braking and acceleration are proportional to the amount of reversal commanded, so gentle or rapid reversing is possible.

Many operating modes:

With analogue, R/C and serial input modes, as well as dozens of operating options, the Sabertooth has the flexibility to be used for many types of projects or by simply adjusting the selector switch the mobile platform can be controlled via a remote control.

The Sabertooth has a 5V output designed to power microcontrollers and other sensors, allowing the robot to carry fewer batteries.

The Sabertooth dramatically increased the ability to control the motors through a variety of control methods, allowing me to test the platform using the remote control that was supplied with the trucks.


Servos

The cheap hobby servos acquired through the purchase of the RC trucks were not implemented and will be considered future work. I simply ran out of time, and as they connect through the microcontroller they were not considered detrimental to the project if left out.


Encoders

These devices were not purchased as they fell outside the budget. They were not critical in this case, as the direction and acceleration could be determined through the compass, and velocity and distance travelled could be determined using the GPS. I acknowledge that GPS is inaccurate and that with encoders the system would be more reliable. Encoders were not budgeted for in the project schedule as I knew there would be little time to implement them. I do have code that I have used in the past and will include it once the project has finished.


Drive train

From the background section I knew I would be using the RC truck configuration as well as driving the robot in differential mode. Using two RC trucks meant I now had twice the motor power and could carry a much heavier payload.

The drive trains were taken from the vehicles and mounted back onto the new base plate. The wheel and suspension that would have been in the middle were removed altogether and the differential gearboxes were locked. As the new plate was longer than the original, the drive shafts had to be extended. In addition to the two motors, the platform now also had four gearboxes and independent suspension on each wheel.

The four powered wheels will allow the vehicle to turn in a very small circle; in theory the turning circle should be the same as the size of the chassis, increasing the robot's manoeuvrability. As stated in the research, the increase in manoeuvrability decreases controllability, as the wheels need to turn at exactly the same speed. This problem cannot be overcome until the encoders and the corresponding control algorithms are implemented.
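The wheel-speed requirement can be sketched with the standard differential-drive kinematics. The track width used here is an illustrative value, not a measurement from this platform.

```c
/* Differential-drive kinematics sketch; 0.30 m is an assumed track width. */
#define TRACK_WIDTH_M 0.30  /* distance between left and right wheels */

/* Convert a desired forward speed v (m/s) and turn rate w (rad/s) into
   individual wheel speeds. Turning within the chassis diameter requires
   the two sides to run at exactly opposite speeds, which is why any
   mismatch between them widens the turning circle. */
void diff_drive(double v, double w, double *left_mps, double *right_mps)
{
    *left_mps  = v - w * TRACK_WIDTH_M / 2.0;
    *right_mps = v + w * TRACK_WIDTH_M / 2.0;
}
```

With encoders closing the loop on each side, the two wheel speeds could be held to these commanded values even on uneven terrain.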



Software:

The main reason for not using the languages designed for building mobile robots is that they don't offer the flexibility that I require for this project. The limitations of current languages were so frustrating that Ferenc (Ferenc Vajda, 2003) wrote an article about creating a new, object-oriented language to solve this problem. Although I will not write my own language for this project, I saw the greatest flexibility in languages that included extensive libraries for communication and mathematical operations.

Simulink and LabVIEW use proprietary source code, and I foresaw problems connecting and communicating with devices. They also use run-time environments, like Java, which would make them slower when operating on a high-level platform. The exception is LabVIEW, which can be deployed as a real-time operating environment. To avoid these potential problems I decided to use C# as the high-level language.

C# had its own problems in that I had never programmed in this language before, but because it is object-oriented it was very similar to programming in Java. C# had extensive libraries and sample code that proved to be useful in my program.

C# also has a well-established programming environment that is free for students who have access to AAMSDN. Microsoft Visual Studio 2008 is far more advanced and feature-rich than the other programming environments I investigated.

I also searched for example code available on the internet and found many more examples in C# than I did in other languages. This was the deciding factor that led me to program the software in C#.

As I was going to use the 8051 microcontroller from Silabs, I only had the choice of programming the device in Assembler or C. Coding in C is far quicker and more intuitive than programming in Assembler, which was the main reason for choosing the higher-level language.


Results

The emphasis for this project was the software design, in order to allow the various sensors to communicate with each other. Once the sensors were interfacing with the PC, the software was enhanced to interpret and perform calculations on the incoming data. In order to test the navigation software and motor control, a mobile platform was also developed.

Component Acquisition

The bill of materials can be found in Appendix A, with the component suppliers and their respective prices listed. There are also extra mounts and sensors listed, as they provide a comparison between what is available on the market and what is either implemented in this project or will be implemented in the future. This comparison is discussed in the conclusion section of this report.

Hardware Platform

Once the component selection was complete, the overall size of the platform was calculated and a new base plate was designed in SolidWorks. The platform was constructed out of aluminium, and the components from the two RC trucks were reassembled on the custom base plate that formed the spine of the hardware platform.

A comparison of the base plates is shown in Figure 4. The original plate from the manufacturer is the blue design in the middle, and has been enlarged to show more detail. The image on the right shows the actual size difference between the original design and the new platform.

Figure 5 shows the final design and component layout with regard to the drive train. From the dual drive shafts it can be seen that the robot is driven in differential mode. As outlined in the component selection part of this report, the vehicle needed suspension, and this was achieved using oil-filled shocks and foam inserts inside the tyres. Stability was improved by the increased width and length of the base, as shown in Figure 4.

The motors were estimated to draw a maximum of 15 Amps, based on the internal resistance of the windings and the battery voltage of 7.2V. An H-bridge was designed to meet these specifications using MOSFETs in parallel, as components with the correct specifications could not be sourced in time. Initially the bridges heated up to the point of failure, and it was thought that the heat sinking was insufficient. Larger heat sinks and fans were used to test the design, and although the device still failed it took significantly longer to do so. After more tests were carried out, the MOSFET drivers started failing; this problem has still not been resolved, but the suspected cause is the paralleling of the devices to achieve the required current.

Later tests were carried out to confirm the motors' specifications, and it was found that each motor was capable of a stall current of 22 Amps. This was because I did not account for the batteries holding close to 8.5V just after charging. In order to avoid redesigning the bridges and implementing code in the microcontroller to limit the PWM on motor stalls (via current sensing), a commercial solution to the problem was purchased.

Figure 5 also shows the RC components on the chassis; these were used to test the weight-carrying capabilities and run time. It was found that the device could run for approximately 15 minutes carrying 4kg of weight. Taking into account the weight of the embedded PC, microcontroller, and sensors, as well as their power sources, that leaves just over 2kg for extra payloads. At maximum payload the chassis would bend and put a lot of force on the drive shafts; this was an expected problem, and provisions were made in the design to add supports over the drive shafts to increase rigidity and stabilise the platform.

The four powered wheels allow the vehicle to turn in a very small circle, and in theory the turning circle should be the same as the size of the chassis. In testing this was only the case at low speeds on a smooth, even surface. With high torques on the motors while on uneven terrain, turning within the diameter of the platform proved impossible. I believe this is because the lugs on the wheels would grip intermittently and cause a much wider turning circle, as there was an uneven force applied on either side of the vehicle. There is surely also a slight variation in power output on each side, and a test was set up to check this. With no friction on the wheels, the difference in current drawn was negligible. Unfortunately a test with friction could not be carried out.

Figure 5 Hardware platform

Software Design

Software design was broken into two parts: the microcontroller and the EPC (simulated on a desktop). The microcontroller was programmed first, as I was familiar with it from a previous project. Following this, the C# Graphical User Interface (GUI) was designed. The GUI was programmed in steps according to the complexity of the device being integrated. The programming order was as follows:

·         Microcontroller

o   Communications

o   Motor control

·         Embedded PC

o   Microcontroller interface

o   Proximity sensors

o   Compass

o   GPS

The research I did on the devices proved very useful, as there were often code examples that I could use to help program the corresponding device. The original code for these devices was examined in detail and the excess fat was trimmed. The code was then changed to suit my needs. In all high-level programming cases a separate GUI was coded for each device, in order to debug the algorithms and keep the main code segment clean. Once algorithms were tested and debugged they were added to the main code segment.

This section relies on code snippets to help explain how the code was implemented, and is best viewed in colour to easily differentiate the code comments.


Microcontroller

The code for the microcontroller was originally based on the Silabs example called Blinky.c. This simple program allowed the user to blink the onboard LED at a given speed via a serial connection to a PC. The code was modified to allow input from buttons and to send messages back to the computer. Changes were also made to the way the code sets the blink speed of the LED. An example is shown below.

Communications were easy to set up as the example program covered most of the groundwork. The program uses UART 0, with port 0 as the communications channel. When the interrupt service routine (ISR) for UART 0 is initiated and new data is found in the input buffer SBUF, it tells the main routine that there is information to interpret. The UART0 ISR is also responsible for the manual button inputs, comparing the port values to known values, acting on these inputs and sending the commands back to the PC. Sending information back to the PC is also relatively simple, as this is handled by the output buffer. The output buffer SBUF0 sends four bytes back to the PC, letting it know what the microcontroller is doing.

When the microcontroller receives a new command, a flag is set to one in the main routine. In the case of motor control, the switch case in the main routine looks at the value of the hex code and then sends a decimal code to Timer 3. The code sent to Timer 3 sets the drive mode (forward, left or right) of the robot and the speed of the wheels: slow, medium or fast.
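The PC side of this hex-code scheme can be sketched as a small decoding table. The 0x01 and 0x04 values mirror the "Left Slow" and "Right Slow" cases in the main routine below, and 0xCC is the start code from the button handler; any other mapping shown here is hypothetical.

```c
#include <string.h>

/* Decoded view of a command byte; the string values are for display. */
typedef struct {
    const char *mode;   /* drive mode: stop, left, right, start */
    const char *speed;  /* slow, medium, fast, or none */
} drive_cmd_t;

drive_cmd_t decode_command(unsigned char code)
{
    drive_cmd_t cmd = { "stop", "none" };
    switch (code) {
    case 0x01: cmd.mode = "left";  cmd.speed = "slow"; break;
    case 0x04: cmd.mode = "right"; cmd.speed = "slow"; break;
    case 0xCC: cmd.mode = "start"; cmd.speed = "none"; break;
    default:   break;  /* unknown codes fall back to stop */
    }
    return cmd;
}
```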

The start of UART0_ISR, looking for the received byte and acting on manual button presses:

//-- pending flags RI0 (SCON0.0) and TI0 (SCON0.1)

if (RI0 == 1) //-- interrupt caused by received byte
{
      received_byte = SBUF0; //-- read the input buffer
      RI0 = 0;               //-- clear the flag
}

//-- START: buttons 1 and 2 together

if (P5 == 0x03)
{
      // lcd_goto(0x00);   //-- go to first Row
      printf("   Start   ");
      // lcd_goto(0x40);   //-- go to Second Row
      printf("  but 2 & 3:  ");

      command = 0xCC; // DEC = 204
}



The code below shows the microcontroller code for sending data back to the PC. The small delay ensures the computer receives the first byte before the microcontroller sends the next. The messages sent are the same as those shown on the GUI for easy interpretation.

//-- set output buffer

outBuffer[0] = command;
outBuffer[1] = P2;
outBuffer[2] = message;
outBuffer[3] = received_byte;

for (count = 0; count < 4; count++)
{
      SBUF0 = outBuffer[count]; //-- send the byte to the output buffer
      while (TI0 == 0);         //-- wait until the byte has been sent
      TI0 = 0;                  //-- clear the transmit-complete flag
}




The main routine is shown to clarify how the motors are controlled.

if (new_cmd_received == 1) //-- if a command is received do this
{
      switch (received_byte)
      {
            case 0:
                  lcd_clear();       //-- clear the display
                  lcd_goto(0x00);    //-- go to first Row
                  printf("  ");
                  blink_speed = 0;
                  break; // stop

            case 1:
                  lcd_goto(0x00);    //-- go to first Row
                  printf("  Left Slow");
                  blink_speed = 1000001;
                  break; // slow

            case 4:
                  lcd_goto(0x00);    //-- go to first Row
                  printf("  Right Slow");
                  blink_speed = 1000002;
                  break; // slow
      }
}

void Timer3_ISR (void) interrupt 14
{
      TMR3CN &= ~(0x80); //-- clear TF3

      if ((LEFT_count % 10) == 0) //-- do every 10th count
      {
            switch (blink_speed)
            {
                  case 0: // STOP
                        LEFT = 0;
                        RIGHT = 0;
                        LEFT_count = 0;
                        break;

                  case 1000001: // LEFT
                        LEFT = ~LEFT; //-- change state of LEFT
                        RIGHT = 0;
                        LEFT_count = 0;
                        break;

                  case 1000002: // RIGHT
                        LEFT = 0;
                        RIGHT = ~RIGHT; //-- change state of RIGHT
                        LEFT_count = 0;
                        break;
            }
      }
}



Graphical user interface

The GUI is used for the initial setup of the mobile robot platform. Ideally the robot will be programmed before it sets off on its mission. The main reason for using a high-level operating system like Windows is so that the operator can connect to the onboard PC remotely and change goals while the vehicle is moving. The basic functionality of the program has been finished, but there is scope for future work to polish the user interface and add extra functionality.

The C# GUI program is from here on known simply as the program. The program consists of a solution file with two project files and nine class files. The majority of these classes fall under the GPS project file; only three of the classes belong to the SerialPortCommunications project file.

Microcontroller GUI

The microcontroller's interface was the first program to be written in C#, because I had set up the communications on the microcontroller and needed to test it. Furthermore, C# was a new language for me and there are a multitude of tutorials that cover setting up communication over a serial port.

The Micro tab allows communication to be set up with any microcontroller programmed to receive and send commands over a serial port. Allowances were also made for sending information in formats other than hexadecimal codes, as this coding system is limited in its functionality. Text boxes are provided for displaying the raw data as well as viewing the code, port value, message and received byte.

The reason for this setup was to allow the microcontroller to send information that could be used for various purposes in future implementations. I envision the following: the code is used for flagging important events, the port value is the actual value read on a port, and the message and received byte are provided for informative reasons.

For example, when the robot is turning left the GUI displays the flagged message ("Turning Left") alongside the corresponding port value and received byte in their respective text boxes.


Provision for manual control has also been added; initially it was used for testing and debugging the code. The manual controls were left in the code to allow control of the robot through a remote desktop session on the embedded PC, and if a vision system were added to the project in the future then manual control could prove very useful.






Compass

Once the microcontroller was coded, the compass interface was coded. The compass is simply a Silabs F350 microcontroller set up to communicate with a PC via USB. In many ways the compass communication code is the same as the microcontroller's, with the exception of the baud rate, parity and stop bits. These exceptions were easily overcome by hard coding them into the software.

The compass sends a total of nine bytes to the PC; the breakdown is as follows:

1.       Degree

2.       Degree

3.       Minutes

4.       Temperature

5.       Temperature

6.       Inclination on X axis

7.       Inclination on Y axis

8.       Status Register

9.       Cyclic redundancy check (CRC)

The difficulty with the compass code is converting between strings, byte arrays and hexadecimal, and knowing when to convert. Help with the algorithms was sourced from the Silabs example application, which was written in C.
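Unpacking the nine-byte frame listed above can be sketched as follows. How the paired degree and temperature bytes combine (high byte first), and the XOR standing in for the CRC, are assumptions made for illustration; the real layout is defined by the Silabs firmware.

```c
/* Decoded view of the nine-byte compass frame. */
typedef struct {
    int degrees, minutes;              /* heading */
    int temperature;
    signed char incline_x, incline_y;  /* tilt on X and Y axes */
    unsigned char status;              /* status register */
} compass_frame_t;

/* Returns 1 on success, 0 if the check byte does not match.
   The XOR of the first eight bytes is an assumed check, standing in
   for the device's actual CRC. */
int decode_compass(const unsigned char frame[9], compass_frame_t *out)
{
    unsigned char check = 0;
    int i;
    for (i = 0; i < 8; i++)
        check ^= frame[i];
    if (check != frame[8]) return 0;

    out->degrees     = (frame[0] << 8) | frame[1];  /* assumed high byte first */
    out->minutes     = frame[2];
    out->temperature = (frame[3] << 8) | frame[4];
    out->incline_x   = (signed char)frame[5];
    out->incline_y   = (signed char)frame[6];
    out->status      = frame[7];
    return 1;
}
```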





GPS

The GPS implementation was by far the largest single part of the project, and was therefore included in the main program as a project file. In order to use the data from the GPS device, multiple NMEA strings from the satellites had to be interpreted. The NMEA strings included information pertaining not only to position but also to the accuracy of the received information. The NMEA strings implemented were GPRMC, GPGSV, GPGSA and GPGGA.

The GPS tab uses the parsed information from these various strings and displays it. In addition to the interpreted strings, the tab also displays the raw NMEA data as well as the satellite data.

The GPS project file is made up of five class files, and their important features are covered below. Separating the GPS project into smaller pieces makes the code more manageable and easier to follow. Another advantage of using classes is that new instances can be created; if multiple instances are used and the code needs to change, then only the class needs to be changed for all instances to be updated. Simply put, if the same lines of code are being used over and over again, create a class and call a new instance of that class. In frmMain.cs there is some code that still needs to be put into classes.

Looking at the current classes in GPS:

·         GPSStructs.cs

·         Location.cs

·         NMEAProtocol.cs

·         Satellite.cs

·         Waypoint.cs

Class GPSStruct

The class GPSStruct is used for storing the smaller parts of the program that would otherwise make the program layout bloated. The class draws on three useful features in C#: the first being the struct, the second the enum, and the third is essentially a small class.

The structures in GPSStruct are used to store the values of classes that are very short. In the project code, structures are used for storing the various lightweight objects that make up the NMEA sentences. They are extremely efficient when it comes to memory use, and this is why they are used extensively in the code.

When a struct object is created using the new operator, it gets created and the appropriate constructor is called. Unlike classes, structs can be instantiated without using the new operator. If the new operator is not used, the fields remain unassigned and the object cannot be used until all of the fields are initialized.

There is no inheritance for structs as there is for classes. A struct cannot inherit from another struct or class, and it cannot be the base of a class. Structs, however, inherit from the base class Object. A struct can implement interfaces, and it does that exactly as classes do.

Here, as an example, the GPGSA struct is used for storing the data relating to the precision of the GPS data.

public struct GPGSAData
{
      public char Mode;            // M = manual, A = automatic 2D/3D
      public byte mFixMode;        // 1 = fix not available, 2 = 2D, 3 = 3D
      public int[] SatsInSolution; // IDs of sats in solution
      public double PDOP;          // position dilution of precision
      public double HDOP;          // horizontal dilution of precision
      public double VDOP;          // vertical dilution of precision
      public int Count;
}


GPSStruct also uses enums to store variables that are incremental of one another. The enum keyword is used to declare an enumeration, a distinct type consisting of a set of named constants called the enumerator list. Every enumeration type has an underlying type, which can be any integral type except char. The default underlying type of the enumeration elements is int. By default, the first enumerator has the value 0, and the value of each successive enumerator is increased by 1. For example:

public enum Cardinal : uint
{
      North = 0,
      South = 1,
      East = 2,
      West = 3,
}


Class NMEA protocols

The string is passed into the NMEA class, the data is read, and the type of NMEA string is determined. Once the type of string is found, the data is passed into another method that breaks the string up according to its contents. The string is separated into fields and substrings using regular expressions (Regex), in order to obtain the relevant data in each of the received fields.
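Before a sentence is split into fields, its checksum can also be verified: the two hex digits after the '*' are the XOR of every character between '$' and '*'. This check is not part of the report's Regex-based C# parser; the standalone sketch below is in C.

```c
#include <stdio.h>

/* Verify an NMEA sentence checksum. Returns 1 if valid, 0 otherwise. */
int nmea_checksum_ok(const char *sentence)
{
    unsigned char sum = 0;
    unsigned int expected;
    const char *p = sentence;

    if (*p != '$') return 0;
    for (p++; *p && *p != '*'; p++)
        sum ^= (unsigned char)*p;             /* XOR the payload characters */
    if (*p != '*') return 0;
    if (sscanf(p + 1, "%2x", &expected) != 1) return 0;
    return sum == expected;
}
```

Sentences that fail this check can be discarded before any field parsing takes place.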

Once the NMEA string command has been found it is passed through a switch statement to determine which method to run; in the case below sCmd was equal to GPRMC, so the switch called the ProcessGPRMC method and passed on the data received from the satellite.

public void ProcessCommand(string sCmd, byte[] bData)
{
      string data = EncodeToString(bData);

      switch (sCmd)
      {
            case "GPRMC":
                  ProcessGPRMC(data);
                  break;
      }
}

Looking inside the ProcessGPRMC method it can be seen that the data is split and variables are set to the values of the fields or substrings.

public void ProcessGPRMC(string data)
{
            string[] fields = Regex.Split(data, ",");

            // Time: hour, minute, second (time is Zulu/UTC)
            GPRMC.Hour = Convert.ToInt32(fields[0].Substring(0, 2));
            GPRMC.Minute = Convert.ToInt32(fields[0].Substring(2, 2));
            GPRMC.Second = Convert.ToInt32(fields[0].Substring(4, 2));

            // Date: day, month, year
            GPRMC.Day = Convert.ToInt32(fields[8].Substring(0, 2));
            GPRMC.Month = Convert.ToInt32(fields[8].Substring(2, 2));
            GPRMC.Year = Convert.ToInt32(fields[8].Substring(4, 2));

            GPRMC.DataValid = Convert.ToChar(fields[1]);

            GPRMC.Latitude = Convert.ToDouble(fields[2]) / 100;
            if (fields[3] == "S")
                GPRMC.LatitudeHemisphere = Cardinal.South;
            else
                GPRMC.LatitudeHemisphere = Cardinal.North;

            GPRMC.Longitude = Convert.ToDouble(fields[4]) / 100;
            if (fields[5] == "E")
                GPRMC.LongitudeHemisphere = Cardinal.East;
            else
                GPRMC.LongitudeHemisphere = Cardinal.West;

            GPRMC.GroundSpeed = Convert.ToDouble(fields[6]);
}
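A caveat on the division by 100 above: NMEA latitudes and longitudes are in ddmm.mmmm format, so dividing by 100 leaves the minutes as a decimal fraction rather than converting them to degrees. This may be one source of the rounding errors noted in the Waypoints section. A hedged sketch of the standard conversion (a helper of my own, not project code):

```csharp
using System;

public static class NmeaConvert
{
    // Converts an NMEA ddmm.mmmm (or dddmm.mmmm) value to decimal degrees.
    public static double ToDecimalDegrees(double raw)
    {
        double degrees = Math.Floor(raw / 100);   // dd
        double minutes = raw - degrees * 100;     // mm.mmmm
        return degrees + minutes / 60;
    }

    public static void Main()
    {
        // 4807.038 means 48 degrees 7.038 minutes, i.e. about 48.1173
        // degrees, whereas 4807.038 / 100 would wrongly give 48.07038.
        Console.WriteLine(ToDecimalDegrees(4807.038));
    }
}
```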



Location.cs and Satellites.cs

These two classes are used for holding data until the next time they are updated. They are updated from the NMEAProtocal.cs class. They provide the ability for the main program to operate on their values while new incoming strings are being parsed. They are similar to the GPSStruct.cs class but are required to be placed within their own .cs files.


Waypoints

Waypoints were implemented but not fully tested, and initial findings are that there are bugs in the code. These bugs most likely lie in the way the distance is calculated and in rounding errors in the longitude and latitude. The project ran out of time before the errors could be corrected.

Even if the calculations were correct they do not account for elevation, and a vehicle travelling up a hill would be likely to suffer "distance travelled" errors. The code does, however, account for the curvature of the earth. Navigation is a research area in itself, and this simple implementation is an example of what could have been done.

Two formulas were used to determine the distance between the two co-ordinates.

The first is the spherical law of cosines:

d = R · acos( sin(lat1) · sin(lat2) + cos(lat1) · cos(lat2) · cos(long2 − long1) )

where R = 6,371 km is the mean radius of the earth.
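As a sketch, the law of cosines can be implemented directly (a standalone helper written for this report, not the project's own code; inputs are assumed to be in radians):

```csharp
using System;

public static class GreatCircle
{
    const double R = 6371; // mean Earth radius in km

    // Spherical law of cosines; latitudes and longitudes in radians.
    public static double Distance(double lat1, double lon1,
                                  double lat2, double lon2)
    {
        return R * Math.Acos(Math.Sin(lat1) * Math.Sin(lat2) +
                             Math.Cos(lat1) * Math.Cos(lat2) *
                             Math.Cos(lon2 - lon1));
    }
}
```

Note that for very small distances the haversine formula below is numerically better behaved, which is presumably why both were tried.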

The second is called the Haversine function, and is shown in the code below.





public void Distance()
{
    const double R = 6371; // mean Earth radius in km

    // Convert decimal degrees to radians (degrees * PI / 180)
    float Lat1Radians = (float)(WPS.GPSLatTemp4 * Math.PI / 180);
    float Lat2Radians = (float)(WPS.Lat2 * Math.PI / 180);
    float Long1Radians = (float)(WPS.GPSLongTemp4 * Math.PI / 180);
    float Long2Radians = (float)(WPS.Long2 * Math.PI / 180);

    float dLat = Lat2Radians - Lat1Radians;
    float dLon = Long2Radians - Long1Radians;

    float a = (float)(Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                      Math.Cos(Lat1Radians) * Math.Cos(Lat2Radians) *
                      Math.Sin(dLon / 2) * Math.Sin(dLon / 2));

    float c = 2 * (float)Math.Asin(Math.Min(1, Math.Sqrt(a)));
    float d = (float)R * c;

    WPS.Distance = d.ToString();
}




public void Bearing()
{
    // Convert decimal degrees to radians (degrees * PI / 180)
    float Lat1Radians = (float)(WPS.GPSLatTemp4 * Math.PI / 180);
    float Lat2Radians = (float)(WPS.Lat2 * Math.PI / 180);
    float Long1Radians = (float)(WPS.GPSLongTemp4 * Math.PI / 180);
    float Long2Radians = (float)(WPS.Long2 * Math.PI / 180);

    float longitudeDifference = Long2Radians - Long1Radians;

    float y = (float)(Math.Sin(longitudeDifference) * Math.Cos(Lat2Radians));
    float x = (float)(Math.Cos(Lat1Radians) * Math.Sin(Lat2Radians) -
                      Math.Sin(Lat1Radians) * Math.Cos(Lat2Radians) *
                      Math.Cos(longitudeDifference));

    float degree = (float)((Math.Atan2(y, x) * (180 / Math.PI) + 360) % 360);
    WPS.Heading = degree.ToString();
}













Conclusions

This project aimed to interface four relatively simple devices and set up communications between these devices and an embedded PC, for the purpose of developing a cost effective solution that could serve as the basis for a range of other mobile robotic projects. Although the initial program simply allowed devices to display data in a user friendly manner, it became complicated when all this data had to be processed, e.g. for navigation. During the course of this project, particularly while programming the GUI, it became apparent to me that the project was far bigger than I had initially anticipated, but I decided to persevere with it. Issues had to be solved as they arose and could not be put aside, as later parts of the project normally depended on all the previous parts. This became troublesome, as solving them often took more time than I had originally planned for.

The most interesting part of the project for me was learning more about the subsystems that make up mobile robots, as well as developing code in languages I had not used before. This project gave me a broader perspective of mobile robotic platforms and provided opportunities to further improve my programming skills.

The cost of the project was within my budget of $500; in addition to this, the purchase of a commercial motor driver is still needed, as it would take too long to redesign and build a suitable driver. In conclusion, the completion of the robot chassis, control software, and a GUI demonstrates that it is far cheaper for companies, researchers and individuals to develop their own custom solution for their specific project needs. Interfacing hardware is neither complicated nor time consuming; the limiting factors are apparent in the coding for efficiently manipulating and using the data.

I did not cover aspects such as control or navigation in much detail, and I hope to continue this project after I graduate and further improve it by implementing these.

Altogether I am pleased with meeting my objectives and with the overall outcome of my project.


Future Work

The original aim of the project was to allow sensors to communicate with higher level software, and this intrinsically means that there are possibilities for future work on this project. As it stands the project is a shell for further development.

One area that needs to be addressed is the inaccuracies found in the waypoint class, as well as developing the feedback control systems for the motors.

Navigation is only implemented at a basic level; ideally, with all the sensors covered in this project, a Kalman filter should be implemented to remove uncertainty and noise from the measurements.
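As a hedged illustration of the idea, a one-dimensional filter is sketched below; the noise constants are made-up values, and a real implementation for this platform would need a multi-dimensional state combining GPS, compass and encoder data:

```csharp
public sealed class Kalman1D
{
    double x;            // state estimate (e.g. heading)
    double p = 1;        // estimate variance
    readonly double q;   // process noise variance
    readonly double r;   // measurement noise variance

    public Kalman1D(double processNoise, double measurementNoise)
    {
        q = processNoise;
        r = measurementNoise;
    }

    // Fold one noisy measurement into the running estimate.
    public double Update(double measurement)
    {
        p += q;                      // predict: uncertainty grows
        double k = p / (p + r);      // Kalman gain
        x += k * (measurement - x);  // correct towards the measurement
        p *= 1 - k;                  // uncertainty shrinks
        return x;
    }
}
```

Feeding repeated noisy compass readings through Update would converge the estimate toward the true heading while damping individual outliers.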

Mapping could be included to help the robot navigate around obstacles. Global waypoints could be replaced with geographical mapping waypoints (X and Y coordinates), as they are far easier to work with: the coordinates are mapped in pixels against the known longitude and latitude, and they give visual feedback of the vehicle's location. This could be achieved by importing maps of the target area and converting them using commercial software, or by using the Google Earth API directly in the software.
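The pixel mapping could be sketched as a simple linear interpolation (all names here are illustrative assumptions; this presumes a north-up map image of a small area whose corner coordinates are known):

```csharp
public static class MapProjection
{
    // Maps (lat, lon) to pixel coordinates on a map image whose top-left
    // and bottom-right corner coordinates are known. Over a small area a
    // linear interpolation is an acceptable approximation.
    public static void ToPixel(
        double lat, double lon,
        double topLat, double leftLon,
        double bottomLat, double rightLon,
        int widthPx, int heightPx,
        out int x, out int y)
    {
        // Pixel Y grows downwards, so it is measured from the top latitude.
        x = (int)((lon - leftLon) / (rightLon - leftLon) * widthPx);
        y = (int)((topLat - lat) / (topLat - bottomLat) * heightPx);
    }
}
```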

The software interface is programmed for functionality and is not optimised for usability. User trials need to be carried out to test the human computer interaction (HCI).

To increase the flexibility of the mobile platform's motor control, it would be recommended to program the software to accept at least four of the most common drive types used in the mobile robot industry.

In addition, further improvements could be achieved by implementing vision systems or indoor navigation. Adding these systems would make this mobile platform comparable in functionality to robots currently found on the market, but at a significantly reduced price.







Bibliography

Appin Knowledge Solutions. (2007). Robotics. Hingham: Infinity Science Press.

Barshan, A. U. (2000). Neural network based target differentiation using sonar for robotics applications. IEEE Robotics and Automation, Vol. 16, no. 4, 435-442.

Bekey, G. A. (2005). Autonomous Robots. London: The MIT Press.

Berkeley University. (n.d.). Unit 1 - Introduction to GPS. Retrieved July 07, 2009, from Berkeley University:

Braunl, T. (2008). Embedded Robotics. Berlin: Springer.

Dana, P. H. (n.d.). Receiver Position, Velocity, and Time. Retrieved June 07, 2009, from University of Colorado at Boulder:

ECMA. (2006, June 1). C# Language Specification. Retrieved July 15, 2009, from Ecma International:

Ferenc Vajda, T. U. (2003). High-Level Object-Oriented Program Language for Mobile Microrobot Control. Budapest: BUTE BME.

Gosling, J. (2002, January 1). Why Microsoft's C# isn't. (W. Wong, Interviewer)

Heath, S. (2003). Embedded Systems Design. Oxford: Newnes.

Hejlsberg, A. (2000, July 1). Deep Inside C#: An Interview with Microsoft Chief Architect Anders Hejlsberg. (J. Osborn, Interviewer)

Hiroyuki Nishiyama, H. O. (1998). A Multiagent Robot Language for Communication and Concurrency Control. International Conference on Multiagent Systems, 206-213.

Jay Farrell, M. B. (1999). The Global Positioning System and Inertial Navigation. McGraw-Hill.

Jizhong Xiao, A. C. (2004). A Mobile Robot Platform with DSP-based Controller. International Conference on Robotics and Biomimetics (pp. 844-848). Shenyang: University of New York.

Krause, L. (1987). A Direct Solution to GPS-Type Navigation Equations. Aerospace and Electronic Systems, 225-232.

Kuc. (2001). Pseudoamplitude scan sonar maps. IEEE Robotics and Automation, Vol. 17, no. 5, 767-770.

Linda Null, J. L. (2006). The Essentials of Computer Organization and Architecture. Sudbury: Jones and Bartlett.

Meynard, J. P. (2000). Control of industrial robots through high-level task programming. Linköping: Linköping University.

Michigan Engineering. (2007, November 9). DC Motor Speed Modeling in Simulink. Retrieved November 16, 2009, from Control Tutorials for Matlab and Simulink:

Qstarz. (2009, January 12). BT-Q1000. Retrieved February 5, 2009, from Qstarz International:

Robot Electronics. (2009, September 3). SRF02 Ultrasonic range finder. Retrieved February 26, 2009, from Robot Electronics:

Silicon Labs. (2007, May 14). Support Documents C8051F02x. Retrieved November 17, 2009, from Silicon Labs:

Society of Robots. (2009, March 10). Encoders. Retrieved November 17, 2009, from Society of Robots:

Tamiya New Zealand. (2009, January 09). Tamiya (#58280) - Tamiya 1/8 TXT-1 TAMIYA XTREME TRUCK. Retrieved January 09, 2009, from Tamiya shop:

Teemu Kemppainen, T. K. (2009). Robot Brothers EasyWheels and ReD in Field Robot Event. Helsinki: University of Helsinki.

The Java Language Environment. (1997). Retrieved May 2009, from Sun Developer Network:

Vogel Miklós, V. F. (2001). Miniaturized microbiological laboratories to develop robotic systems. Budapest: BUTE BME.

Wikimedia Foundation, Inc. (2009, November 16). C. Retrieved November 16, 2009, from Wikipedia:

Wikimedia Foundation, Inc. (2009, November 11). C#. Retrieved November 17, 2009, from Wikipedia:

Wikimedia Foundation, Inc. (2009, November 11). LabVIEW. Retrieved November 17, 2009, from Wikipedia:

Wikimedia Foundation, Inc. (2009, November 1). Simulink. Retrieved November 16, 2009, from Wikipedia:

Wikimedia Foundation, Inc. (2009, November 16). Java. Retrieved November 16, 2009, from Wikipedia:

Wobner, A. K. (2009, April 19). The GPS System. Retrieved July 07, 2009, from kowoma:

Appendix A

Parts list:

Motor Drivers

Dimension Engineering Sabertooth Dual 25A 6V-24V Regenerative Motor Driver


Servo Controller

Controls up to 32 servo motors


Device Mounts

Servo and motors


Chassis parts

Wheels, drive train, motors


Aluminium base plate


Sensor pan and tilt

For camera or laser


Sonar sensors

6x Devantech SRF02




Current sensor

Phidgets 30amp



CMUcam3 Robot Vision System


Relay module

Devantech 8 channel


Wireless module



Micro controller

Silabs 8051F020



Silabs 8051F020


Embedded PC











Appendix B

Project management

Task Description

Duration (days)



Points (0/100)

Find hardware

7 days




Order parts

1 day




Research software options

21 days




     Micro controller software

4 days





7 days





2 days




    Comms bus for PC to MC

5 days




    PC i.e. Java C# C++...

3 days




Finalize Topology of hardware

3 days




126 days




Program micro controller motor controller

28 days




Program I2C for PC interface

14 days




Program interface for sensors to PC

21 days




     Proximity sensor

7 days





7 days





7 days





63 days




     Proximity Sensors

14 days





21 days





28 days






Project goals for computer hardware interfacing:



Due to the nature of the project being mostly software based, on both the low and high level languages, the project will be divided into three separate goals. The goals will reflect the inherent complexity involved in solving problems as they arise.


Goal one:


A micro controller (MC) will be programmed to control two motors. The MC will receive a bit pattern on one of its ports, or through a bus from a PC, and act accordingly. A GUI will be set up on the PC with arrows much like those found on a keyboard; when one of the arrow keys is pressed the corresponding motor(s) will move. Once this is implemented two other modules will be added: firstly a range finder and then a compass module. Information from these modules will also be displayed in the GUI.


Once all these are implemented, a simple navigation script will be added to the GUI with basic obstacle avoidance. The navigation will work on the heading from the compass and time; if an obstacle is encountered the code should slow down the motors and then move around the object. Two shaft encoders may be added to the drive shafts to help calculate speed, distance and direction. Testing this goal will most likely be simulated in software, with generated compass headings.
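The encoder arithmetic this relies on can be sketched as follows (the counts-per-revolution and wheel diameter here are illustrative values, not measurements from the platform):

```csharp
using System;

public static class Odometry
{
    // Distance travelled, derived from accumulated encoder counts.
    public static double DistanceMetres(long counts,
                                        int countsPerRev,
                                        double wheelDiameterM)
    {
        double revolutions = (double)counts / countsPerRev;
        return revolutions * Math.PI * wheelDiameterM; // revs * circumference
    }

    // Speed follows from the change in distance over a sample interval.
    public static double SpeedMps(long deltaCounts,
                                  int countsPerRev,
                                  double wheelDiameterM,
                                  double intervalSeconds)
    {
        return DistanceMetres(deltaCounts, countsPerRev, wheelDiameterM)
               / intervalSeconds;
    }
}
```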


Goal two:


For this goal two more modules will be added, namely GPS and waypoint planning. Both these modules will be implemented in the GUI. Ideally the GPS will have its own screen showing longitude and latitude as well as the locations of the satellites and the number of satellites connected to. The waypoints will be on the main screen of the GUI so a user can easily input this data. Path planning will find the heading to the next waypoint, and then use the compass to navigate on that heading. The position will periodically be checked against the GPS data to see if the device has reached its location. The same simple object detection will be used as in goal one. The equipment will most likely be mounted on a trolley and moved around a car park.


Goal three:


The equipment will be mounted on a rolling platform that I built last year. If this is successful, more range finders will be added and a better obstacle detection/avoidance algorithm implemented.


Goal three would be the ultimate success in this project, but being able to achieve goal two would be a reasonable undertaking in the time given for the project.



Appendix C

Commercial robots to which this project was compared.

SuperDroid HD2 Robot Kit with Wi-Fi Control and PTZ Camera

USD $11748.60


Robot Base

·         HD2 Robot base with 26" widened chassis

·         10 Ah 24V NiMH batteries

·         Sabertooth 2x25 motor controller

·         285 RPM gear motors

·         Wi-Fi Custom Control Interface

·         Gamepad

·         Camera System
 360+ degree heavy duty camera pan
 -10 to +60 degree heavy duty camera tilt.
 Samsung 30x Optical zoom (with very low light feature)
 10 inch clear plastic UV resistant dome.
 3 fixed pinhole

·         4 camera IP video server*

SuperDroids 4WD  Wi-Fi Controlled (ATR) All Terrain Robot

USD $3798.50

·         1x Welded Aluminium Base for 32mm Motors

·         1x Vectoring Robot Hardware Kit

·         2x  Wheel and Shaft Set Pair - Large

·         4x Electric Motor Hook-up Kit

·         1x Electric Power Hook-up Kit

·         2x 15 Amp Connector Set

·         4x 24VDC 190 RPM Gear Motor

·         4x PWM Motor Controller 3A 12-55V

·         2x 12V 4000 mAHr NiMH 2x5 Battery Pack

·         1x Internet Camera Server with Audio

·         1x 360+ degree Pan and Tilt  1x Wi-Fi

·         1x Gamepad controller

Smart Robots SR4



·         Microprocessor (ARM 9) CPU @200MHz

·         Memory SDRAM 32 MB

·         Data storage 32 MB Cache and 4 GB (Gigabyte) USB Flash Operating system Linux; GNU Linux

·         Programming languages Java, Derby, XML C and C++ Communications Wireless 802.11g and

·         Connectivity USB, Serial, I2C and Ethernet ports

·         User Panel (Temperature and Buttons)

·         Motor driver board (Real-time Motor Current)

·         Batteries Two 12v12Ah gel cell batteries

·         Navigation Map-based way-point navigation

·         Sensors Wheel encoders, beacon triangulation, Polaroid sonar  bumpers floor-edge IR detectors Web connectivity On-board web

·         Two independent drive wheels (differential steering) and two casters

SuperDroid LT-F Treaded Robot w/ IR Camera and PS2 Control

USD $7843.50

Welded Aluminium Base for 52mm Motors (qty 1)

• Aluminium Chassis. (qty 2)

• Two positive traction drive treads

• 2x Electric Motor

• Electric Power Hook-up parts (Wire, fuses, switch, connectors, etc.)
• Includes all required hardware, bearings, chains, sprockets, etc.
• Upgraded Type 04 IG52 285 RPM motors (Qty 2)
• 24V 2200mAHr battery packs (Qty 2)
• Battery mount (Qty 1)
• Sabertooth Dual 10A Motor Driver (Qty 1)
• Rear flipper stabilizer arms (Qty 1)
• SyRen 10A motor controller for flipper/stabilizer arm (Qty 1)
• Colour IR camera mounted in the front of the robot (Qty 1)
• 2.4GHz video transmitter (Qty 1)
• 900MHz data transceiver with PS2 controller and 5" colour 2.4GHz LCD receiver (Qty 1)

CoroWare Explorer EX-D Robot Development Platform (Linux / XP Dual Boot)

USD $8000

• Base Payload Capacity: 15 pounds
• CPU: 2.0GHz
• RAM: 1 GB
• Wireless Networking: 802.11n Wi-Fi PC-card included
• Disk space: 80 GB
• Battery: 13 AH
• Battery life: 2.5 - 4 Hours
• Base Type: Articulated 4 wheel drive
• Steering: Skid
• Wheel Encoders: YES
• Voltage Sensor: YES
• Camera 1600x1200 Colour
• I/O: 8 Digital inputs, 8 Digital Outputs, 4 Analogue Inputs, USB, RS-232, and IEEE-1394 (firewire)
• IR Sensors: Front And Back
• Operating System: Windows XP/Linux Dual Boot