
So you’ve written the ultimate ROS program: after thousands of lines of code your robot will finally achieve sentience and bring about the singularity!

One by one you launch your nodes, each one bringing the Apocalypse ever closer. You hit enter on the last command. And… nothing happens. What went wrong? How will you find, and forever squash, the bug that prevented your moment of triumph? This blog attempts to answer those questions, and more*.

At BLUEsat we’ve had our share of complicated ROS debugging problems. The best ones happen when you are halfway through a competition task, with time ticking away on the clock; although this article will also look at the more common situation of debugging in a less time-pressured, and less fire-prone, environment**.

Below are several tools and techniques that we have successfully deployed to debug our ROS environment.

Keep Calm And … FIRE!

You’ve probably heard this before, but it’s very important when debugging not to jump to conclusions or apply fixes you haven’t tested properly. Google, for example, has a policy of rolling back changes on its services rather than trying to push a fix. A similar idea applies in a competition or other time-pressured situation: make sure you have thought through that patch that removes the “don’t kill humans” safety from your robot! That being said, a roll back is unlikely to be applicable in a competition situation, nor is it likely to put out that fire you just started on your robot. So we can’t just copy Google, but we should think about what we are doing before we do it.

Basically, any patch or configuration fix you apply during such a situation is a calculated risk, and you should make sure you understand those risks before you act. During the European Rover Challenge last year I found it was possible to tweak small settings, restart certain nodes, and re-calibrate systems; but it was too risky to power cycle the rover during a task, due to the time it took to establish communication. Likewise, restarting our drive systems or cameras was very disruptive to our pilot, so it could only be done in situations where the damage done by not fixing the system could be worse. That being said, after a critical camera failed we did attempt to programmatically power cycle that device, the decision being that the camera was important enough to attempt such a risky move. (In the end we weren’t able to do this during the task, and our pilot managed to navigate the rover without the camera in question.)

In a non-time-pressured situation you can be more flexible: it is possible to test different options and see if they work, provided they don’t damage your robot. However, a structured approach is often beneficial for the more complicated bugs. I often find that when I’m debugging an intermittent or hard-to-reproduce problem it is easy to lose track of what I’ve tried, or to get results mixed up. A technique I’ve found very useful is to record what I’m doing as I do it, especially if the problem involves sensor data. We had a number of problems with our Rover‘s steering system when we first implemented our swerve drive, and I found that writing down ADC values and rotation readings in different situations really helped debug it (you can read more about how we use ADCs in our steering system in one of our previous articles).

Basically, the main point is to keep your head clear, and to think through the consequences before you act. Know the risks, and have your E-Stop button ready! Now let’s look at some tools you can use to aid you in your debugging.

The BLUEtongue 2.0 Rover being debugged during the 2016 ERC, with Team Members Assisting. (LTR): Timothy Chin, Sebastian Holzapfel, Simon Ireland, Nuno Das Neves, Harry J.E Day.
Debugging is often a team effort.




It’s one thing to design a satellite or rover, but without manufacturing you’re dead in the water. Over the years at BLUEsat the problem, or more specifically the cost, of manufacturing has been a recurring issue for our mechanical engineering teams. It’s not unusual for the bill for manufacturing our designs to come in at several times our material costs, not to mention long lead times, lack of quality control, and no second chances once the part comes in.

Late last year the society decided that enough was enough and purchased a CNC router. At its core, a CNC router is a simple machine: a rapidly spinning cutting tool is mounted on driven guide rails that control its position in space. Using this system in combination with computer control, a CNC router can cut out almost any geometry we choose.

BLUEsat’s CNC Router

The process for making a part on the CNC has three stages:

  1. Model the part in CAD (we use Autodesk Inventor).
  2. Create a tool path using CAM software (we use HSM).
  3. Secure your material to the CNC router, load the tool path, and begin cutting.

One of the parts that we made recently was an aluminium work-holding jig. The model is shown below. This part has some complex features, such as bottom rails, notched sides, counterbored holes and raised supports. Making this part by hand would take days and a very competent machinist; we had access to neither.

Jig Plate CAD model

Using this model, a tool path was developed with CAM software. The program does most of the heavy lifting, but the user must define the position of each feature, the speed the machine moves at, and how fast the tool spins. These speeds are very important to the quality of the final piece and must be tailored to each feature. Below is an example of what the tool path looks like on the computer: red lines indicate the machine is moving between cuts, blue lines show where it is cutting.
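For readers wanting a starting point for those speeds, the commonly quoted machinist’s rules of thumb (in imperial units, and very much first estimates to be tuned by ear and by surface finish) are: spindle speed (RPM) ≈ 3.82 × cutting speed (in surface feet per minute, a property of the tool and material pair) ÷ tool diameter (in inches), and feed rate (inches per minute) = RPM × chip load per tooth × number of flutes.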

 

Jig Plate CAM operations

Finally, with our tool path created, we were ready to set up the CNC itself. The material needs to be secured to the surface of the bed to prevent any movement during the cutting operation. This can be done in a number of ways, such as using a machine vice or work holding clamps. For this piece, we started with work holding clamps and then secured it using holes drilled into the material itself.

Now onto the fun part: the cutting. The tool path is loaded onto the CNC and the machine is set to run. Generally we do a single operation at a time, which gives us time to clean up after each cut and inspect whether it was successful. Here are a few videos of cutting.


All up, this part took six hours to machine, including the setup, cutting and cleaning up of the part. Below is the final part:

Completed Jig Plate
Bottom View


Using our CNC has allowed rapid prototyping of parts, drastically reduced lead times and, most importantly, cut manufacturing costs by an order of magnitude.





At the start of semester we ran a number of seminars on different skills that we use at BLUEsat. In the first of these videos, former Rover Software Team Lead Harry J.E Day gives an introduction to the Robot Operating System (ROS), covering setting up ROS catkin workspaces, and basic publishers and subscribers.

You will need the vm image from here: http://bluesat.com.au/vm-image/

The slides from this presentation can be found here: http://bluesat.com.au/an-introduction…

Errata:

  • Some slides in the recording refer to “std_msg”; there should be an ‘s’ on the end (i.e. “std_msgs”).
  • On the slide “A Basic Node – CMakeLists.txt (cont)” there should only be one ‘o’ in “node.”
  • In step 4 of the “Publisher (cont)” section there should be an e on the end of “pub_node”
  • The person on the last slide was the first robotics COO at BLUEsat, not CTO.

These have all been corrected in the slides.



One of the most surprising things about our experience at the European Rover Challenge last year was how incredibly close to total failure we came. Two days before the competition began, while we were in Poland, our control board failed. In addition to having to port our entire embedded codebase to Arduino in two days, we had to fix the overcurrent protection mechanism on the rover’s claw. This was a critical system, since it prevented the claw servo from overheating when trying to pick up objects. Before we developed our original software solution, a large number of servos had been destroyed by overheating. Due to errors we’d made in calibration during the port to Arduino, our original software solution didn’t work, and we had to think of something else.

Seb Holzapfel and I realised that a hardware solution could also solve this problem, so we designed the circuit shown below. It consists of an op-amp, a diode, a MOSFET and a few resistors. It was designed so that when a large amount of current flows through the 100 mΩ resistor, the PWM signals are cut off from the claw. This causes the servo motor in the claw to stop drawing current, and therefore prevents overheating.
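To get a rough feel for the numbers involved (the exact trip point depended on the reference we set with the resistors, so treat these figures as illustrative only): the shunt develops V = I × R, so a 5 A draw through the 100 mΩ resistor produces 0.5 V across it. Once that sense voltage exceeds the reference on the op-amp’s other input, the op-amp output changes state and the PWM signal is blocked.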

But why was this worth writing about? Well, we had to build this in a very short period of time and we didn’t really have the correct spare parts on hand. We only had a few op-amps, some jumper cables, some veroboard and a few resistors. This wasn’t enough to build the circuit shown above. We had to improvise. I realised that since our control boards had all failed, we could, in fact, harvest them for the parts we needed. Fortunately, after doing a quick stocktake of the parts on the old control boards, I determined that all the parts we would need were present. We just had to salvage them.

 

Seb is shown above trying, ultimately unsuccessfully, to fix one of our control boards.

 

 

While everyone else was out testing the rover, and after we had ported the code to Arduino successfully, Seb and I found a bit of spare time on our hands: about two hours. We got to work. I desoldered parts from the dead control boards with the hot air gun, while Seb put those parts together into the monstrosity you see below.

It isn't pretty, but it worked.

We then tested it using a coil of wire as a load and verified that it worked. It was then deployed onto the rover. Despite being built in just an afternoon, it actually worked better than the previous software solution when we tested it with the rover. And with this “solution”, we came 9th.

And that’s how we built a critical system in just 2 hours from parts we salvaged from dead control boards.

 

BLUEsat OWR ERC Team.
Back row (left to right): myself, Timothy Chin, Denis Wang, Simon Ireland, Nuno Das Neves, Helena Kertesz.
Front row (left to right): Harry J.E. Day, Seb Holzapfel


In our last article, as part of our investigation into different Graphical User Interface (GUI) options for the next European Rover Challenge (ERC), we looked at a proof of concept for using QML and Qt5 with ROS. In this article we will continue with that proof of concept by creating a custom QML component that streams a ROS sensor_msgs/Image topic, and adding it to the window we created in the previous article.

Setting up our Catkin Packages

  1. In qt-creator reopen the workspace project you used for the last tutorial.
  2. For this project we need an additional ROS package for our shared library that will contain our custom QML Video Component. We need this so the qt-creator design view can deal with our custom component. In the project window, right click on the “src” folder, and select “add new”.
  3. Select “ROS>Package” and then fill in the details so they match the screenshot below. We’ll call this package “ros_video_components” and  the Catkin dependencies are “qt_build roscpp sensor_msgs image_transport” The QT Creator Create Ros Package Window
  4. Click “next” and then “finish”
  5. Open up the CMakeLists.txt file for the ros_video_components package, and replace it with the following file.
    ##############################################################################
    # CMake
    ##############################################################################
    
    cmake_minimum_required(VERSION 2.8.3)
    project(ros_video_components)
    
    ##############################################################################
    # Catkin
    ##############################################################################
    
    # qt_build provides the qt cmake glue, roscpp the comms for a default talker
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    include_directories(include ${catkin_INCLUDE_DIRS})
    # Use this to define what the package will export (e.g. libs, headers).
    # Since the default here is to produce only a binary, we don't worry about
    # exporting anything.
    catkin_package(
        CATKIN_DEPENDS qt_build roscpp sensor_msgs image_transport
        INCLUDE_DIRS include
        LIBRARIES RosVideoComponents
    )
    
    ##############################################################################
    # Qt Environment
    ##############################################################################
    
    # this comes from qt_build's qt-ros.cmake which is automatically
    # included via the dependency call in package.xml
    find_package(Qt5 COMPONENTS Core Qml Quick REQUIRED)
    
    ##############################################################################
    # Sections
    ##############################################################################
    
    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/ros_video_components/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    
    ##############################################################################
    # Sources
    ##############################################################################
    
    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    
    ##############################################################################
    # Binaries
    ##############################################################################
    add_library(RosVideoComponents ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(RosVideoComponents Quick Core)
    target_link_libraries(RosVideoComponents ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(RosVideoComponents PUBLIC include)
    
    

    Note: This code is based on the auto-generated CMakeLists.txt file provided by the qt-create ROS package.
    This is similar to what we did for the last example, but with a few key differences:

    catkin_package(
        CATKIN_DEPENDS qt_build roscpp sensor_msgs image_transport
        INCLUDE_DIRS include
        LIBRARIES RosVideoComponents
    )
    

    This tells catkin to export the RosVideoComponents build target as a library to all dependencies of this package.

    Then in this section we tell catkin to make a shared library target called “RosVideoComponents”, rather than a ROS node, that links the C++ source files with the Qt MOC/header files and the Qt resources.

    add_library(RosVideoComponents ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(RosVideoComponents Quick Core)
    target_link_libraries(RosVideoComponents ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(RosVideoComponents PUBLIC include)
    
  6. Next we need to fix our package.xml file: the qt-creator plugin has a bug where it puts all the ROS dependencies in one build_depend and one run_depend tag, rather than listing them separately. You need to separate them like so:
      <buildtool_depend>catkin</buildtool_depend>
      <build_depend>qt_build</build_depend>
      <build_depend>roscpp</build_depend>
      <build_depend>image_transport</build_depend>
      <build_depend>sensor_msgs</build_depend>
      <build_depend>libqt4-dev</build_depend>
      <run_depend>qt_build</run_depend>
      <run_depend>image_transport</run_depend>
      <run_depend>sensor_msgs</run_depend>
      <run_depend>roscpp</run_depend>
      <run_depend>libqt4-dev</run_depend>
    
  7. Again, we need to create src/, resources/ and include/ros_video_components folders in the package folder (as before, use the terminal or a file browser, since qt-creator can’t create plain folders).
  8. We also need to make some changes to our gui project to depend on the library we generate. Open up the CMakeLists.txt file for the gui package and replace the following line:
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)

    with

    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport ros_video_components)
  9. Then add the following lines to the gui package’s package.xml:
    <build_depend>ros_video_components</build_depend>
    <run_depend>ros_video_components</run_depend>
    

Building the Video Streaming Component

When we are operating the rover, the primary purpose the GUI serves in most ERC tasks is displaying camera feed information to the operators. It therefore felt appropriate to use streaming video from ROS as a proof of concept, to determine whether QML and Qt5 would be an appropriate technology choice.

We will now look at building a QML component that subscribes to a ROS image topic, and displays the data on screen.

  1. Right click on the src folder of the ros_video_components folder, and select “Add New.”
  2. We first need to create a class for our Qt component, so select “C++>C++ Class”.
  3. We’ll call our class “ROSVideoComponent”, and give it the custom base class “QQuickPaintedItem.” We’ll also need to select “Include QObject”, and adjust the path of the header file so the compiler can find it. Make sure your settings match those in this screenshot:
    Qt Creator C++ Class Creation Dialogue
  4. Open up the header file you just created and update it to match the following
     
    #ifndef ROSVIDEOCOMPONENT_H
    #define ROSVIDEOCOMPONENT_H
    
    #include <QQuickPaintedItem>
    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <sensor_msgs/Image.h>
    #include <QImage>
    #include <QPainter>
    
    class ROSVideoComponent : public QQuickPaintedItem {
        // the Q_OBJECT macro marks this class as a QObject so Qt's MOC can process it
        Q_OBJECT
        
        public:
            ROSVideoComponent(QQuickItem *parent = 0);
    
            void paint(QPainter *painter);
            void setup(ros::NodeHandle * nh);
    
        private:
            void receiveImage(const sensor_msgs::Image::ConstPtr & msg);
    
            ros::NodeHandle * nh;
            image_transport::Subscriber imageSub;
            // these are used to store our image buffer
            QImage * currentImage;
            uchar * currentBuffer;
            
            
    };
    
    #endif // ROSVIDEOCOMPONENT_H
    

    Here, QQuickPaintedItem is a Qt class that we can override to provide a QML component with a custom paint method. This will allow us to render our ROS video frames.
    Also in the header file we declare a setup function, which we use to initialise our ROS subscriptions (since we don’t control where this class’s constructor is called), as well as a conventional ROS subscriber callback.

  5. Open up the ROSVideoComponent.cpp file and change it so it looks like this:
     
    #include <ros_video_components/ROSVideoComponent.hpp>
    
    ROSVideoComponent::ROSVideoComponent(QQuickItem * parent) : QQuickPaintedItem(parent), currentImage(NULL), currentBuffer(NULL) {
    
    }
    

    Here we use an initialiser list to call our parent constructor, and then initialise our currentImage and currentBuffer pointers to NULL. The latter is very important, as we use it to check whether we have received any ROS messages.

  6. Next add a “setup” function:
    void ROSVideoComponent::setup(ros::NodeHandle *nh) {
        image_transport::ImageTransport imgTrans(*nh);
        imageSub = imgTrans.subscribe("/cam0", 1, &ROSVideoComponent::receiveImage, this, image_transport::TransportHints("compressed"));
        ROS_INFO("setup");
    }
    

    This function takes a pointer to our ROS NodeHandle, and uses it to create a subscription to the “/cam0” topic. We use image_transport, as recommended by ROS for video streams, and direct it to call the receiveImage callback.

  7. And now we implement said callback:
    void ROSVideoComponent::receiveImage(const sensor_msgs::Image::ConstPtr &msg) {
        // check to see if we already have an image frame, if we do then we need to delete it
        // to avoid memory leaks
        if(currentImage) {
            delete currentImage;
        }

        // allocate a buffer of sufficient size to contain our video frame
        uchar * tempBuffer = (uchar *) malloc(sizeof(uchar) * msg->data.size());

        // and copy the message into the buffer
        // we need to do this because the QImage api requires the buffer we pass in to continue to exist
        // whilst the image is in use, but the msg and its data will be lost once we leave this context.
        memcpy(tempBuffer, msg->data.data(), msg->data.size());

        // we then create a new QImage, this code below matches the spec of an image produced by the ros gscam module
        currentImage = new QImage(tempBuffer, msg->width, msg->height, QImage::Format_RGB888);

        ROS_INFO("Received");

        // we then free the previous frame's buffer (if any), and keep a pointer to the
        // new one so it can be freed in turn when the next frame arrives
        // (free() matches the malloc() used above)
        if(currentBuffer) {
            free(currentBuffer);
        }
        currentBuffer = tempBuffer;

        // and re-render the component to display the new image
        update();
    }
    
  8. Finally we override the paint method
    
    void ROSVideoComponent::paint(QPainter *painter) {
        if(currentImage) {
            painter->drawImage(QPoint(0,0), *(this->currentImage));
        }
    }
    
  9. We now have our QML component, and you can check that everything is working as intended by building the project (the hammer icon in the bottom left of the IDE, or catkin_make). In order to use it we must add it to our QML file, but first, since we want to be able to use it in qt-creator’s design view, we need to add a plugin class.
  10. Right click on the src folder and select “Add New” again.
  11. Then select “C++>C++ Class.”
  12. We’ll call this class OwrROSComponents, and use the following settings: OwrROSComponents class creation dialogue
  13. Replace the header file so it looks like this
    #ifndef OWRROSCOMPONENTS_H
    #define OWRROSCOMPONENTS_H
    
    #include <QQmlExtensionPlugin>
    
    class OWRRosComponents : public QQmlExtensionPlugin {
        Q_OBJECT
        Q_PLUGIN_METADATA(IID "bluesat.owr")
    
        public:
            void registerTypes(const char * uri);
    };
    
    #endif // OWRROSCOMPONENTS_H
    
  14. Finally make the OwrROSComponents.cpp file look like this
    #include "ros_video_components/OwrROSComponents.hpp"
    #include "ros_video_components/ROSVideoComponent.hpp"
    
    void OWRRosComponents::registerTypes(const char *uri) {
        qmlRegisterType<ROSVideoComponent>("bluesat.owr",1,0,"ROSVideoComponent");
    }
    
  15. And now we just need to add it to our QML and application code. Let’s do the QML first. At the top of the file (in edit view) add the following line:
    import bluesat.owr 1.0
    
  16. And just before the final closing brace, add this code to place the video component below the other image:
    ROSVideoComponent {
       // @disable-check M16
       objectName: "videoStream"
       id: videoStream
       // @disable-check M16
       anchors.bottom: parent.bottom
       // @disable-check M16
       anchors.bottomMargin: 0
       // @disable-check M16
       anchors.top: image1.bottom
       // @disable-check M16
       anchors.topMargin: 0
       // @disable-check M16
       width: 320
       // @disable-check M16
       height: 240
    }
    

    This adds our custom “ROSVideoComponent”, whose type we just registered in the previous steps, to our window.

    Note: the @disable-check M16 comments prevent qt-creator from getting confused about our custom component, which it doesn’t detect properly. This is an unfortunate limitation of using cmake (catkin) rather than Qt’s own build system.

  17. Then, because Qt’s runtime and qt-creator use different search paths, we also need to register the type on the first line of our MainApplication::run() function:
    qmlRegisterType<ROSVideoComponent>("bluesat.owr",1,0,"ROSVideoComponent");
    
  18. Finally, we need to add the following lines to the end of our run function in MainApplication, to connect our video component to our NodeHandle:
    ROSVideoComponent * video = this->rootObjects()[0]->findChild<ROSVideoComponent*>(QString("videoStream"));
    video->setup(&nh);
    

    And the relevant #include

    #include <ros_video_components/ROSVideoComponent.hpp>
    
  19. To test it, publish a video stream using your preferred ROS video library.
    For example, if you have the ROS gscam library set up and installed, you could run the following to stream video from a webcam:

    export GSCAM_CONFIG="v4l2src device=/dev/video0 ! videoscale ! video/x-raw,width=320,height=240 ! videoconvert"
    rosrun gscam gscam __name:=camera_1 /camera/image_raw:=/cam0

Conclusion

In our previous post we learnt how to set up Qt and QML in ROS’s build system, and got that all working with the Qt-Creator IDE. This time we built on that system to develop a widget that takes ROS video data and renders it to the screen, demonstrating how to integrate ROS’s message system into a Qt/QML environment.

The code in this tutorial forms the basis of BLUEsat’s new rover user interface, which is currently in active development. You can see the current progress on our github, where a number of additional widgets should be added in the near future. If you want to learn more about the kind of development we do at BLUEsat, or are a UNSW student interested in joining, feel free to send an email to info@bluesat.com.au.

Acknowledgements

Some of the code above is based on a Stack Overflow answer by Kornava about how to create a custom image rendering component, which can be found here.



The BLUEsat Off-World Robotics software team is rebuilding our user interface, in an effort to address the maintenance and learning-curve problems we have with our existing GLUT/OpenGL based GUI. After trying out a few different options, we settled on a combination of Qt and QML. We liked this option as it allows easy maintenance, with a reasonable amount of power and flexibility. We decided to share a simple tutorial we made for working with Qt and ROS.

In part one of this article we go through the process of setting up a ROS package with Qt5 dependencies and building a basic QML application. In the next instalment we will look at streaming a ROS sensor_msgs/Image video feed into a custom QML component. The article assumes that you have ROS Kinetic set up, and some understanding of how ROS works. It does not assume any prior knowledge of Qt.

Full sources for this tutorial can be found on BLUEsat’s github.

Setting up the Environment

First things first, we need to set up Qt, and because one of our criteria for GUI solutions is ease of use, we also need to set up qt-creator so we can take advantage of its visual editor features.

Fortunately there is a ROS plugin for qt-creator (which you can find more about here). To setup we do the following (instructions for Ubuntu 16.04, for other platforms see the source here):


sudo add-apt-repository ppa:beineri/opt-qt571-xenial
sudo add-apt-repository ppa:levi-armstrong/ppa
sudo apt-get update && sudo apt-get install qt57creator-plugin-ros

We also need to install the ROS Qt packages; these allow us to easily set up some of the catkin dependencies we will need later. (Note: unfortunately these packages are currently designed for Qt4, so we can’t take full advantage of them.)


sudo apt-get install ros-kinetic-qt-build

Setting up our ROS workspace

  1. We will use qt-creator to create our workspace, so start by opening qt-creator.
  2. On the welcome screen select “New Project”. Then choose “Import Project>Import ROS Workspace”.
    The QT-Creator new project dialogue display the correct selection for creating a new ros project.
  3. Name the project “qt-gui” and set the workspace path to a new folder of the same name. An error dialogue will appear, because we are not using an existing workspace, but that is fine.
  4. Then click “Generate Project File”. The QT Import Existing ROS Project Window
  5. Click “Next”, choose your version control settings then click “Finish”
  6. For this project we need a ROS package that contains our gui node. In the project window, right click on the “src” folder, and select “add new”.
  7. Select “ROS>Package” and then fill in the details so they match the screenshot below. We’ll call it “gui” and  the Catkin dependencies are “qt_build roscpp sensor_msgs image_transport” QT Creator Create Ros Package Window
  8. Click “next” and then “finish”
  9. Open up the CMakeLists.txt file for the gui package, and replace it with the following file.
    
    ##############################################################################
    # CMake
    ##############################################################################
    
    cmake_minimum_required(VERSION 2.8.3)
    project(gui)
    
    ##############################################################################
    # Catkin
    ##############################################################################
    
    # qt_build provides the qt cmake glue, roscpp the comms for a default talker
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    set(QML_IMPORT_PATH "${QML_IMPORT_PATH};${CATKIN_GLOBAL_LIB_DESTINATION}" )
    set(QML_IMPORT_PATH2 "${QML_IMPORT_PATH};${CATKIN_GLOBAL_LIB_DESTINATION}" )
    include_directories(${catkin_INCLUDE_DIRS})
    # Use this to define what the package will export (e.g. libs, headers).
    # Since the default here is to produce only a binary, we don't worry about
    # exporting anything. 
    catkin_package()
    
    ##############################################################################
    # Qt Environment
    ##############################################################################
    
    # this comes from qt_build's qt-ros.cmake which is automatically 
    # included via the dependency call in package.xml
    #rosbuild_prepare_qt4(QtCore QtGui QtQml QtQuick) # Add the appropriate components to the component list here
    find_package(Qt5 COMPONENTS Core Gui Qml Quick REQUIRED)
    
    ##############################################################################
    # Sections
    ##############################################################################
    
    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/gui/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    
    ##############################################################################
    # Sources
    ##############################################################################
    
    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    
    ##############################################################################
    # Binaries
    ##############################################################################
    
    add_executable(gui ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(gui Quick Core)
    target_link_libraries(gui ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(gui PUBLIC include)
    install(TARGETS gui RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
    

    Note: This code is based on the auto-generated CMakeLists.txt file provided by the qt-create ROS package.
    Let’s have a look at what this file is doing:

    cmake_minimum_required(VERSION 2.8.3)
    project(gui)
    
    ...
    
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    include_directories(${catkin_INCLUDE_DIRS})
    

    This part is simply setting up the ROS package, as you would expect in a normal CMakeLists.txt file.

    find_package(Qt5 COMPONENTS Core Gui Qml Quick REQUIRED)
    

    In this section we set up catkin to include Qt5, and tell it we need the Core, Gui, QML, and Quick components. This differs from a normal qt_build CMakeLists.txt, because we need Qt5 rather than Qt4.

    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/gui/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    

    This section tells cmake where to find the QT resource files, and where to find the QT header files so we can compile them using the QT precompiler.

    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    

    And this tells cmake where to find all the QT (and ROS) source files for the project

    add_executable(gui ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(gui Quick Core)
    target_link_libraries(gui ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(gui PUBLIC include)
    install(TARGETS gui RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
    

    Finally, we set up a ROS node (executable) target called “gui” that links the C++ source files with the Qt MOC/header files and the Qt resources.

  10. Next we need to fix our package.xml file: the qt-creator plugin has a bug where it puts all the ROS dependencies in one build_depend and one run_depend tag, rather than listing them separately. You need to separate them like so:

      <buildtool_depend>catkin</buildtool_depend>
      <build_depend>qt_build</build_depend>
      <build_depend>roscpp</build_depend>
      <build_depend>image_transport</build_depend>
      <build_depend>sensor_msgs</build_depend>
      <build_depend>libqt4-dev</build_depend>
      <run_depend>qt_build</run_depend>
      <run_depend>image_transport</run_depend>
      <run_depend>sensor_msgs</run_depend>
      <run_depend>roscpp</run_depend>
      <run_depend>libqt4-dev</run_depend> 
    
  11. Create src/, include/<package name> and resources/ folders in the package. (Note: it doesn’t seem possible to do this in qt-creator; you have to do it from the terminal or a file browser.)

Our ROS workspace is now setup and ready to go. We can start on our actual code.

Creating a Basic QML Application using the Catkin Build System

We’ll start by creating a basic ROS node that displays a simple QML file.

  1. Right click on the src folder in the gui package, and select “Add New”
  2. Select “ROS>Basic Node” and then click choose
  3. Call it “guiMain” and click Finish. You should end up with a new file open in the editor that looks like this: Qt creator window displaying a main function with a basic ROS "Hello World" in it
  4. We’ll come back to this file later, but first we need to add our QML application. In order to call ros::spinOnce without having to implement threading, we need to subclass the QQmlApplicationEngine class so we can use a Qt ‘slot’ to trigger it in the application’s main loop (more on slots later). So to start we need to create a new class: right click on the src directory again and select “Add New.”
  5. Select “C++>C++ Class “, then click “Choose”
  6. Set the “Class Name” to “MainApplication”, and the “Base Class” to “QQmlApplicationEngine.”
  7. Rename the header file so it has the path “../include/gui/MainApplication.hpp”. This allows the CMakeLists.txt file we set up earlier to find it, and run the MOC compiler on it.
  8. Rename the source file so that it is called “MainApplication.cpp”. Your dialog should now look like this: Qt Creator "C++ Class" dialogue showing the settings described above.
  9. Click “Next”, then “Finish”.
  10. Change your MainApplication.hpp file to match this one:
    #ifndef MAINAPPLICATION_H
    #define MAINAPPLICATION_H
    
    #include <ros/ros.h>
    #include <QQmlApplicationEngine>
    
    class MainApplication : public QQmlApplicationEngine {
        Q_OBJECT
        public:
            MainApplication();
            //this method is used to setup all the ROS functionality we need, before the application starts running
            void run();
            
        //this defines a slot that will be called when the application is idle.
        public slots:
            void mainLoop();
    
       private:
            ros::NodeHandle nh;
    };
    
    #endif // MAINAPPLICATION_H
    

    Two important parts here. First, we add the line “Q_OBJECT” below the class declaration; this tells the Qt MOC compiler to do Qt magic here in order to make this into a valid Qt object.
    Secondly, we add the following lines:

    public slots:
        void mainLoop();
    

    What does this mean? Well, Qt uses a system of “slots” and “signals” rather than the more conventional “listener” system used by many other GUI frameworks. In layman’s terms, a slot acts similarly to a callback: when an event it has been “connected” to occurs, the function gets called.

  11. Now we want to update the MainApplication.cpp file. Edit it so it looks like the following:
    #include "gui/MainApplication.hpp"
    #include <QTimer>
    
    MainApplication::MainApplication() {
        
    }
    
    void MainApplication::run() {
        
        //this loads the qml file we are about to create
        this->load(QUrl(QStringLiteral("qrc:/window1.qml"))); 
        
        //Setup a timer to get the application's idle loop
        QTimer *timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(mainLoop()));
        timer->start(0);
    }
    

    The main things here are that we load a QML file at the resource path “qrc:/window1.qml” (we will create this file in a moment), and the timer code. We create a timer object, connect the timer object’s “timeout” event (signal) to our “mainLoop” slot (which we will also create in a moment), and then set the timeout to 0, causing the event to trigger whenever the application is idle.

  12. Finally, we want to add the mainLoop function to the end of our MainApplication code. It simply calls ros::spinOnce to get the latest ROS messages whenever the application is idle.
    void MainApplication::mainLoop() {
        ros::spinOnce();
    }
    
  13. In our guiMain.cpp we need to add the following lines at the end of our main function:
        QGuiApplication app(argc, argv);
        MainApplication engine;
        engine.run();
        
        return app.exec();
    

    This initialises our QML application, calls our run function, then enters QT’s main loop.

  14. You will also need to add these two #includes, to the top of the guiMain.cpp file
    #include <QGuiApplication>
    #include <gui/MainApplication.hpp>
    
  15. We now have all the C++ code we need to run our first demo. All that remains is writing the actual QT code. Right click on the “resources” folder, and select “Add New.”
  16. In the New File window, select “Qt” and then “QML File (Qt Quick 2)”, and click “Choose.”
  17.  Call the file “window1” and finish.
  18. We want to create a window rather than an item, so change the qml file so it looks like this:
    import QtQuick 2.0
    import QtQuick.Window 2.2
    
    Window {
        id: window1
        visible: true
    
    }
    
  19. Now we will use the visual editor to add a simple image to the window. With the QML file open, click on the far left menu and select “Design” mode. You should get a view like this:
    QT Creator QML Design View
  20. From the left hand “QML Types” toolbox drag an image widget onto the canvas. You should see rectangle with “Image” written in it on your canvas.
    Qt Creator Design Window with an image added
  21. We need to add an image for it to use. To do this we need a resource file, so switch back to edit mode. Right click on the resources folder and select “Add New.”
  22. Select “Qt>Qt Resource File” and then click “Choose”
  23. Call the resource file “images,” and finish.
  24. This should open the resource file editor, first you need to add a new prefix. Select “add>New Prefix”
    QT Creator Resource File Editor: Select Add>Add Prefix
  25. Change the “prefix” to “/image”.
  26. We now want to add an image. Find an appropriate image file on your computer, then click “Add Files” and navigate to it. If the file is outside your project, qt-creator will prompt you to save it to your resources folder, which is good. You should now have a view that looks like this:
    QT Creator Resource File Editor with an image added
  27. Switch back to the qml file and design view.
  28. Click the image; on the right-hand side will be a drop-down marked “source”. Select the image you just added from it. (Note: if the designer has auto-filled this box but the image preview is not appearing, you may need to select another image and then reselect the one you want.) I used the BLUEsat logo as my image:
    I used the BLUEsat logo for my example
  29. Now we just need to put the qml somewhere we can find it. As in steps 21 to 26, create a new resource file in the resources folder called qml and add the qml window1.qml to it under the “/” prefix.
  30. At this point you should be able to build your project. You can build using catkin_make as you normally would, or by clicking the build button in the bottom left corner of the IDE.
  31. To run your ROS node you can either run it as you would normally from the command line using “rosrun”, or you can run it from the IDE. To set up running it from the IDE, select “Projects” from the left-hand menu, then under “Desktop” select run.
  32. Under the run heading, select “Add Run Step>rosrun step.” You should get a screen like this: QT Creator - Project settings screen
  33. Set the package to “gui”, and the target should auto fill with “gui” as well.
  34. Press run in the bottom left corner. You should get something like this (note: depending on where you placed the image you may need to resize the window to see it). Important: as always with ROS, roscore needs to be running for nodes to start correctly. Window Displaying BLUEsat Logo
  35. You now have a working GUI application, compiled with catkin and running a ROS spin loop whenever the application is idle. But this is a bit useless without using some information from other ROS nodes in the GUI, so in our next article we will look at streaming video from ROS into Qt. Stay tuned!

 



Embedded Programming

Wrapping up the BLUEtongue 2.0 Rover’s Drive System series, following articles by Chris about the mechanical re-design of the system and Harry about the high-level software implementation, this article will outline the role of the embedded system in connecting the electrical motors to the high-level software. Primarily, this article will focus on analogue-to-digital converters and their use in the drive system of BLUEtongue 2.0. Some understanding of electrical circuits and microprocessors is assumed in the following explanations.

ADC methodologies

Analog-to-Digital Converters (ADCs) are a cornerstone of signal processing, and are used in nearly all electrical devices today. The objective of an ADC is to convert an analog voltage signal into a digital representation. Various methods exist for implementing an ADC, each with its own benefits and purpose. To compare them, there are two key concepts to consider when analysing ADC methods: speed and cost.

The speed of an ADC reflects how fast a sample-conversion sequence is performed, and is most often measured as how many sample-conversions can be done within a certain time-frame (in samples per second). Higher speed is of course useful when high bandwidth is needed. Cost, on the other hand, describes how expensive the ADC is to implement (and to improve in resolution), and is influenced by the complexity and number of hardware components required in the design.

Typical ADC methods demonstrate that an increase in the speed of the solution causes an increase in its cost. This is indicative of the trade-off between parallel and sequential logic: parallel logic is faster, but requires more hardware components. For this article, I will give a brief outline of three key ADC solutions:

  • The first method that will be addressed is the Dual Slope, also known as the Integrating method. This method works by charging up an integrator circuit from the voltage being sampled for a fixed time t_up, then discharging the same circuit at a known reference voltage back to zero charge. By using a counter to measure the discharge time t_down, the circuit can accurately derive a digital equivalent of the original analogue signal using V_in = V_ref × (t_down / t_up). Because it requires charging and discharging a capacitor, this is one of the slower methods in use, but it is also not very costly.
  • Next is Successive Approximation (SAR), which, as the name suggests, operates by building up the digital output bit by bit, from the MSB to the LSB. At each step it sets the current result bit to HIGH (bit = ‘1’), performs a Digital-to-Analog Conversion (DAC) of the result so far, and checks whether the analog equivalent is greater than the sampled analog voltage, setting the bit back to LOW (bit = ‘0’) if it is, before moving on to the next MSB. Doing this ensures that the resulting digital value is the closest binary representation that is still less than the sampled voltage (a short code sketch of this search appears below). This method is typically faster than the dual-slope method, but costs more as a result of its more complex circuitry.
  • The last method mentioned is the Flash ADC, one of the fastest ADC methods. A flash ADC is a group of parallel comparators which individually check the input voltage against the reference voltages for all possible digital outputs, and then uses a priority encoder to select the appropriate binary result. The cost of this method is the largest of the three, as it requires enough components to perform all of these voltage comparisons in parallel.
The internal working of BLUEtongue 2.0

In addition to the methods described here, there are also interesting ADC solutions such as the Sigma-Delta, but we will leave that for the reader to explore.
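To make the successive-approximation search more concrete, here is a minimal sketch of the bit-testing loop in C. The compareToInput function is a hypothetical stand-in for the hardware’s internal DAC-and-comparator step (a real SAR module does all of this in silicon), so treat this purely as an illustration of the algorithm:

#include <stdbool.h>

// Returns the closest numBits-wide value not exceeding the sampled input.
// compareToInput() stands in for the hardware: it returns true when the
// DAC output for 'trial' is greater than the sampled input voltage.
unsigned int sarConvert(int numBits, bool (*compareToInput)(unsigned int trial)) {
    unsigned int result = 0;
    int bit;
    for (bit = numBits - 1; bit >= 0; bit--) {
        result |= 1u << bit;          // tentatively set the current bit
        if (compareToInput(result)) { // DAC output above the input?
            result &= ~(1u << bit);   // too high: clear the bit again
        }
    }
    return result;
}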

ADCs on BLUEtongue

One of the primary uses of the ADC on BLUEtongue was to implement the swerve drive system. To ensure the system’s functionality, it was important for the real-time wheel headings to be known as accurately as possible. To achieve this, potentiometers (pots) were integrated into the front two wheel-rotator shafts and fed back to the control board, where the analogue read-out of each pot was converted into a digital signal that was then passed through to the on-board computer via USB.
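To give a feel for the scaling involved, here is a hypothetical conversion from a raw 12-bit reading to a wheel heading. The centre offset and pot range below are illustrative only; the real values were calibrated on the rover:

#include <math.h>

// Convert a raw 12-bit ADC reading (0..4095) into a wheel heading in radians.
double countsToHeading(int counts) {
    const double CENTRE_COUNTS = 2048.0; // assumed reading with the wheel straight ahead
    const double POT_RANGE_RAD = M_PI;   // assumed 180 degrees of usable pot travel
    return (counts - CENTRE_COUNTS) * (POT_RANGE_RAD / 4096.0);
}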

In addition to the swerve drive, ADCs were also used in feedback systems for arm manipulation.

 

ADC on the PIC

For BLUEtongue 2.0 the control board consisted of a custom-made PCB housing the dsPIC33EP512MC806 microprocessor (PIC) from Microchip (read more here). The ADC on the PIC used for the rover is an implementation of the SAR system, with a few additional features.

The PIC provides two independent SAR modules. The first module (ADC1) can operate at 12-bit resolution with one-channel S&H (Sample and Hold, where the analogue input is captured for the length of the conversion) if desired, whilst both modules can operate at 10-bit resolution with four-channel S&H.

The resulting conversions are stored in a dedicated 16×16-bit buffer (one for each ADC module), allowing convenient access upon completion. Furthermore, to signify that a conversion sequence has been performed, the PIC can generate an interrupt or, alternatively, set a ‘done’ bit/flag. The former is useful for time-sensitive, synchronous data, whilst the latter (which would be read through some form of polling) is better suited to less time-critical, asynchronous conversions.
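As a rough illustration of the polled alternative (this is not the rover’s approach, and it assumes manual sampling rather than the auto-sampling configured in the code further down):

// a hedged sketch of a manual, polled conversion on the same ADC module
unsigned int readAdcPolled(void) {
    AD1CON1bits.SAMP = 1;      // start sampling the selected input
    // ... wait at least the required acquisition time here ...
    AD1CON1bits.SAMP = 0;      // end sampling and begin the conversion
    while (!AD1CON1bits.DONE); // spin on the module's 'done' flag
    return ADC1BUF0;           // read the 12-bit result from the buffer
}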

For the purposes of the swerve drive, we ran ADC1 in 12-bit resolution and used the ‘Channel Scan Select’ feature (which allows the module to sequentially scan multiple ADC pins) to get the best resolution possible whilst still meeting the conversion requirements of the multiple feedback sources. We used the interrupt method, as feedback data for the swerve system was time-critical.

Programming the PIC

The following code demonstrates how the ADC was set up on the PIC.

// ** Code to setup adc for reading potentiometers ** //
// ** Uses the input scan select system to allow reading ** //
// ** of multiple analog inputs within a single module ** //

void setupADC1(void) {
    // Set appropriate pins as inputs (to read from the pots)
    TRISBbits.TRISB8 = 1;
    TRISBbits.TRISB10 = 1;
    TRISBbits.TRISB12 = 1;
    TRISBbits.TRISB15 = 1;
    TRISEbits.TRISE0 = 1;
    TRISEbits.TRISE1 = 1;
    TRISEbits.TRISE2 = 1;
    TRISEbits.TRISE3 = 1;

    // Setup the pins to read analog values
    ANSELBbits.ANSB8 = 1;
    ANSELBbits.ANSB10 = 1;
    ANSELBbits.ANSB12 = 1;
    ANSELBbits.ANSB15 = 1;
    ANSELEbits.ANSE0 = 1;
    ANSELEbits.ANSE1 = 1;
    ANSELEbits.ANSE2 = 1;
    ANSELEbits.ANSE3 = 1;

    // Set the control registers to zero, these contain garbage after a reset
    // This also ensures the ADC module is OFF
    AD1CON1 = 0;
    AD1CON2 = 0;
    AD1CON3 = 0;

    // clear ADC1 control registers: CON4, CHS0, CHS123 and CHSSH/L
    AD1CON4 = 0;
    AD1CHS0 = 0;
    AD1CHS123 = 0;
    AD1CSSH = 0;
    AD1CSSL = 0;

    AD1CON1bits.AD12B = 1; // Activate 12 bit adc.

    // *** CLOCK SETTINGS *** //
    // Changes the ADC module clock period for both conversion and sampling.
    // Tad must be greater than 117.6 ns (electrical specs, pg560), T_CY is 1/70MHz
    // Tad = T_CY * (ADCS + 1)
    // Tad/T_CY - 1 = ADCS
    // ADCS >= (117.6*10^-9)*(70*10^6) - 1
    // ADCS >= 7.232 ~ 8

    AD1CON3bits.ADCS = 0x0F; // T_AD = T_CY * (ADCS + 1)
    AD1CON3bits.SAMC = 0x1F; // sample for 31 TAD (12-bit mode requires at least 14)

    // Auto-sampling, automatically end sampling and begin conversion
    AD1CON1bits.SSRC = 0b111;

    // Select the pins that will be cycled through via input scan select
    // NOTE: The ADC scans in ascending order of analog number, i.e.
    // if connecting an4, 9, 5, 12 the buffer will be filled:
    // 4, 5, 9, 12. Ensure any changes enforce this convention!
    AD1CON2bits.CSCNA = 1; // Activate channel scan select
    AD1CSSLbits.CSS8 = 1;
    AD1CSSLbits.CSS10 = 1;
    AD1CSSLbits.CSS12 = 1;
    AD1CSSLbits.CSS15 = 1;
    AD1CSSHbits.CSS24 = 1;
    AD1CSSHbits.CSS25 = 1;
    AD1CSSHbits.CSS26 = 1;
    AD1CSSHbits.CSS27 = 1;

    // Will need to interrupt after (N-1) sample/conversion sequences,
    // where N = number of signals being read (e.g. an16 + an24 = 2 signals => SMPI = 1)
    AD1CON2bits.SMPI = 7; // interrupt after every 8th sample/conversion sequence

    //automatically begin sampling whenever last conversion finishes, SAMP bit will be set automatically
    AD1CON1bits.ASAM = 1;

    // Clear interrupt flag, set interrupt priority
    _AD1IF = 0;
    _AD1IP = 3;

    // Enable the interrupt
    _AD1IE = 1;

    //enable ADC1
    AD1CON1bits.ADON = 1;
}

// ADC interrupt service routine (ISR). This sets a flag so that the main function
// knows that a conversion has finished and can read from the buffer.
void __attribute__((__interrupt__, no_auto_psv)) _AD1Interrupt(void) {
    _AD1IF = 0;    // clear the interrupt flag so the interrupt can fire again
    adc_ready = 1; // flag declared elsewhere (ideally volatile); polled by the main loop
}
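To complete the picture, here is a hedged sketch of how the flag set by the ISR might be consumed from the main loop. pot_values and the consuming function are illustrative only, and adc_ready should be declared volatile so the compiler does not optimise the check away:

extern volatile int adc_ready; // set by the ISR above
unsigned int pot_values[8];    // one slot per scanned analog input

void pollAdcResults(void) {
    if (adc_ready) {
        adc_ready = 0;
        // with SMPI = 7 the module fills eight buffer slots before each
        // interrupt, in ascending order of analog input number
        pot_values[0] = ADC1BUF0;
        pot_values[1] = ADC1BUF1;
        pot_values[2] = ADC1BUF2;
        pot_values[3] = ADC1BUF3;
        pot_values[4] = ADC1BUF4;
        pot_values[5] = ADC1BUF5;
        pot_values[6] = ADC1BUF6;
        pot_values[7] = ADC1BUF7;
    }
}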

 

The ERC 2016 team, posing with the rover. From left: (standing:) Jim Gray, Timothy Chin, Denis Wang, Simon Ireland, Nuno Das Neves, Helena Kertesz, (kneeling:) Harry J.E. Day, Seb Holzapfel

Conclusion

Going forward, the Off-World Robotics team will continue to develop and expand its use of signal processing, with the aid of ADCs, for the drive system as well as other key systems such as the fine control of the arm. The experience gained from programming the microprocessor and implementing the ADCs has been very rewarding for me. The knowledge will also prove invaluable to the team as we look to enhance the embedded system for the next iteration of the rover, code-named NUMBAT, with a Controller Area Network (which will appear in a future article!). I hope you have enjoyed this write-up and found the series informative.

To view the entire embedded system repo, click here.

Thank you for reading, to keep up to date with BLUEsat and the Rover, like us on Facebook and stay tuned for more posts on this site. If you are interested in getting involved, you can find more here.



Welcome back to the second article in our three-part series on the BLUEtongue 2.0 Rover’s suspension and drive system. In our last post Chris wrote about the mechanical re-design of the system; in this post we will look at how we designed the high-level software architecture for the system, and at some of the challenges we faced along the way.

The System

BLUEtongue 2.0, with its four wheel modules. You can see the front left module is turning.

The BLUEtongue 2.0 Rover has four independently controlled wheels, with the front two also able to steer. This was a big departure from BLUEtongue 1.0’s skid-steer system, which used six wheels and turned by having the wheels on one side of the rover spin in the opposite direction to those on the other side. The system was meant as a stepping stone towards a full swerve drive system on either BLUEtongue or our next rover platform, NUMBAT.

Furthermore, the BLUEsat Off-World Robotics code base is built around the ROS (Robot Operating System) framework. This framework provides a range of existing software and hardware integrations, and is based around the idea of many separate processes (referred to as nodes) that communicate over TCP-based ROS ‘topics’ using data structures called ‘messages’.
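For readers who haven’t used ROS before, a minimal node following this pattern looks something like the sketch below (the node and topic names are illustrative, not the rover’s actual ones):

#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char **argv) {
    ros::init(argc, argv, "example_drive_input");
    ros::NodeHandle nh;

    // advertise a topic carrying geometry_msgs/Twist messages
    ros::Publisher pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);

    ros::Rate rate(10); // publish at 10Hz
    while (ros::ok()) {
        geometry_msgs::Twist cmd;
        cmd.linear.x = 0.5; // half speed forwards (x is forwards/backwards)
        pub.publish(cmd);   // any node subscribed to "cmd_vel" receives this
        rate.sleep();
    }
    return 0;
}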

This framework choice, along with BLUEsat’s nature as a student society, placed some interesting requirements on our design:

  • The system needed to work with only two steerable wheel modules, but as much as possible the code needed to be reusable for a system with four such modules.
  • The system needed to avoid being heavily tied to the motors and embedded systems used on BLUEtounge, as many of them would be changing for NUMBAT.
  • Due to European Rover Challenge (ERC) requirements, the system needed to support user input, and be able to be controlled by an AI.

As a consequence of the above, and to avoid reinventing the wheel (no pun intended), the system needed to use standard ROS messages and conventions as much as possible. It also needed to be very modular to improve reusability.

User Input

The user controls the rover’s speed and rotation using an Xbox controller. After some investigation, our initial approach was to have one of the analogue sticks control the rover’s direction whilst the other controlled its speed, primarily because we had found that using a single stick to control both direction and speed was not very intuitive for the user.

As ROS joystick analogue inputs are treated as a range between -1 and 1 on two axes, the first version of the system simply used the up/down axis of the left stick as the magnitude applied to a unit vector formed by the position of the right stick. The code looked a bit like this:

double magnitude = joy->axes[SPEED_STICK] * SPEED_CAP;
cmdVel.linear.x = joy->axes[DIRECTION_STICK_X] * magnitude;
cmdVel.linear.y = joy->axes[DIRECTION_STICK_Y] * magnitude * -1; 

(Note that all code in this article uses the ROS standard of x being forwards <-> backwards, and y being port <-> starboard.)

This code produced a geometry_msgs::Twist message that was used by our steering system. However, we found that this system had several problems:

  • It was very difficult to do fine manoeuvring of the rover, because the range of slow speeds corresponded to too small an area on the joystick. However, since we could only control the power rather than the velocity of the motors, we couldn’t simply reduce the overall power of the rover as this would mean it was unable to traverse steep gradients.
  • Physical deadzones on the joysticks meant that driving the rover could be somewhat jerky.
  • The code above had a mathematical problem: because the direction vector was never normalised, the rover’s maximum speed whilst steering was higher (by up to a factor of √2) than could be achieved travelling in a straight line.
  • Having a two axis direction control was unintuitive for the driver, and hard to control accurately.

In response to this, one of our team members (Sean Thompson) developed a new control system that used only one axis of each stick: the left stick for power, and the right stick for (port/starboard) steering. The system also implemented dead zones and exponential scaling, which allowed for finer manoeuvring of the rover at low speeds whilst still being able to utilise the rover’s full power. A sketch of the idea is shown below.

Full source code for this implementation can be found here.
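To illustrate the idea, here is a minimal sketch of dead zone handling and exponential scaling. The constants and the cubic curve are illustrative assumptions; see the linked source above for the actual implementation:

#include <cmath>

// Hypothetical constants for illustration only.
const double DEADZONE = 0.08;   // stick deflections below this are ignored
const double SPEED_CAP = 1.0;   // maximum commanded speed

// Map a raw stick axis in [-1, 1] to a shaped output in [-1, 1].
double shapeAxis(double raw) {
    if (std::fabs(raw) < DEADZONE) {
        return 0.0;  // inside the dead zone: no movement
    }
    // re-scale so the output ramps from 0 at the dead zone edge to 1 at full deflection
    double scaled = (std::fabs(raw) - DEADZONE) / (1.0 - DEADZONE);
    // cubic scaling: fine control at small deflections, full power at the extreme
    return std::copysign(scaled * scaled * scaled, raw);
}

// In the joystick callback: one axis per stick.
// cmdVel.linear.x = shapeAxis(joy->axes[POWER_STICK]) * SPEED_CAP;
// cmdVel.linear.y = shapeAxis(joy->axes[STEERING_STICK]) * SPEED_CAP;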

The rover uses the following control configuration whilst driving. Diagram Credit: Helena Kertesz.

Steering

The steering system for the rover allows the rover to rotate about a point on the line formed between the two rear wheels. In order to achieve this, each wheel must run at a separate speed and the two front wheels must have separate angles. The formulas used to determine these variables are displayed below.

\( \theta_{turn} = \mathrm{atan2}(v_y, v_x) \qquad r_{turn} = \dfrac{d_{rear}}{\sin(\theta_{turn})} \qquad \omega_{rover} = \dfrac{\sqrt{v_x^2 + v_y^2}}{r_{turn}} \)

(Here \( (v_x, v_y) \) is the commanded velocity vector, and \( d_{rear} \) is the distance from the rover’s centre to the line through the rear wheels; these correspond to turnAngle, rotationRadius and angularVelocity in the code below.)

The rover steers by adjusting both the speed of its wheels and the angle of its front wheels. Diagram Credit: Chris Miller.

In order to accommodate this, a software module was built that converted the velocity vector (\( \vec{v} = (v_x, v_y) \)) discussed in the previous section into the rotational velocities required for each of the wheel modules, and the angles needed for the front two wheels. The system would publish these values as ROS messages in a form compatible with the standard ros_command module, enabling easier testing in ROS’s Gazebo simulator and hopefully good compatibility with other ROS systems we might need to use in the future.

The following code was used to implement these equations:

        const double turnAngle = atan2(velMsg->linear.y,velMsg->linear.x);
        const double rotationRadius = HALF_ROVER_WIDTH_X/sin(turnAngle);
        
        // we calculate the point about which the rover will rotate
        // relative to the centre of our base_link transform (0,0 is the centre of the rover)

        geometry_msgs::Vector3 rotationCentre;
        // the x axis is in line with the rear wheels of the rover, as shown in the above diagram
        rotationCentre.x = -HALF_ROVER_WIDTH_X;
        // and the y position can be calculated by applying Pythagoras to the rotational radius of the rover (r_turn) and 
        // half the length of the rover
        rotationCentre.y = sqrt(pow(rotationRadius,2)-pow(HALF_ROVER_LENGTH_Y,2));
        // omega_rover is then calculated by the magnitude of our velocity vector over the rotational radius
        const double angularVelocity = fabs(sqrt(pow(velMsg->linear.x, 2) + pow(velMsg->linear.y, 2))) / rotationRadius;

        //calculate the radii of each wheel about the rotation centre
        //NOTE: if necessary this could be optimised
        double closeBackR = fabs(rotationCentre.y - ROVER_CENTRE_2_WHEEL_Y);
        double farBackR = fabs(rotationCentre.y + ROVER_CENTRE_2_WHEEL_Y);
        double closeFrontR = sqrt(pow(closeBackR, 2) + pow(FRONT_W_2_BACK_W_X, 2));
        double farFrontR = sqrt(pow(farBackR, 2) + pow(FRONT_W_2_BACK_W_X, 2));
        
        //v = omega * r
        double closeBackV = closeBackR * angularVelocity;
        double farBackV = farBackR * angularVelocity;
        double closeFrontV = closeFrontR * angularVelocity;
        double farFrontV = farFrontR * angularVelocity;
        
        //work out the front wheel angles
        double closeFrontAng = DEG90 - atan2(closeBackR, FRONT_W_2_BACK_W_X);
        double farFrontAng = DEG90 - atan2(farBackR, FRONT_W_2_BACK_W_X);
        
        //if we are in reverse, we just want to go round the same circle in the opposite direction
        if (velMsg->linear.x < 0) {
            //flip all the motor velocities
            closeFrontV *= -1.0;
            farFrontV *= -1.0;
            farBackV *= -1.0;
            closeBackV *= -1.0;
        }
        
        //finally we flip the values if we want the rotational centre to be on the other side of the rover
        if(0 <= turnAngle && turnAngle <= M_PI) {
            output.frontLeftMotorV = closeFrontV;
            output.backLeftMotorV = closeBackV;
            output.frontRightMotorV = farFrontV;
            output.backRightMotorV = farBackV;
            output.frontLeftAng = closeFrontAng;
            output.frontRightAng = farFrontAng;
            ROS_INFO("right");
        } else {
            output.frontRightMotorV = -closeFrontV;
            output.backRightMotorV = -closeBackV;
            output.frontLeftMotorV = -farFrontV;
            output.backLeftMotorV = -farBackV;
            output.frontLeftAng = -farFrontAng;
            output.frontRightAng = -closeFrontAng;
            ROS_INFO("left");
        }

Separating steering from the control of individual joints also had another important advantage: it significantly improved the testability and ease of calibration of the rover’s systems. Steering code could be tested to some extent in the Gazebo simulator using existing plugins, whilst control of individual joints could be tested without the additional layer of abstraction provided by the steering system. It also allowed the joints to be calibrated in software (more on this in our next article).

Joint Control System

In BLUEtounge 1.0, our joint control system consisted of many lines of duplicated code in the main loop of our serial driver node. This code took incoming joystick messages and converted them directly into PWM values to be sent through our embedded systems to the motors. It was developed rapidly and was quite difficult to maintain. With the addition of the feedback loops needed for our swerve drive, the need to provide valid transforms for 3D and automation purposes, and our desire to write code that could be easily moved to NUMBAT, a new solution was needed.

We took an object-oriented approach to solving this problem. First, a common JointController class was defined. This abstract class handled subscribing to the joint’s control topic, calling the joint’s update functions, and providing a standard interface for use by our hardware driver (BoardControl in the diagram below) and transform publisher (part of JointsMonitor). This class would be inherited by classes for each type of joint, where the control loop for that joint type could be implemented (for example, the drive motors’ control algorithm was implemented in JointVelocityController, whilst the swerve motors were handled by JointSpeedBasedPositionController).
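As a rough sketch of the shape of this hierarchy: extrapolateStatus and the jointInfo fields below appear in the real code, but the message type, constructor and the remaining members are illustrative assumptions, not the exact interface.

#include <ros/ros.h>
#include <std_msgs/Float64.h>
#include <string>

// mirror of the fields used by the monitoring code later in this article
struct jointInfo {
    std::string jointName;
    double position, velocity, effort, pwm, targetPos;
};

class JointController {
public:
    JointController(ros::NodeHandle &nh, const std::string &controlTopic,
                    const std::string &jointName)
        : name(jointName), setpoint(0.0) {
        // each joint listens on its own control topic
        sub = nh.subscribe(controlTopic, 1, &JointController::controlCallback, this);
    }
    virtual ~JointController() {}

    // run this joint type's control loop, returning the value to hand to the
    // hardware driver (e.g. a PWM value)
    virtual int updateController() = 0;

    // estimate the joint's state at a given time; used by JointsMonitor to
    // publish transforms and debugging information
    virtual jointInfo extrapolateStatus(ros::Time cycleStart, ros::Time estimateTime) = 0;

protected:
    void controlCallback(const std_msgs::Float64::ConstPtr &msg) {
        // subclasses interpret the setpoint as a velocity (drive motors)
        // or a position (swerve motors)
        setpoint = msg->data;
    }

    std::string name;
    double setpoint;
    ros::Subscriber sub;
};

// e.g. class JointVelocityController : public JointController { ... };
//      class JointSpeedBasedPositionController : public JointController { ... };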

The BLUEtounge 2.0 Rover’s joint system consisted of a JointsMonitor class, used to manage timings and transforms, as well as an abstract JointController class that was used to implement the different joint types with a standard interface. Diagram Credit: Harry J.E Day, with amendments by Simon Ireland and Nuno Das Neves.

In addition, a JointsMonitor class was implemented. This class stored a list of joints, and published debugging and transform information at set increments. This was a significant improvement in readability over our previous ROS_INFO based system, as it allowed us to quickly monitor the joints we wanted. The bulk of this class’s work was done in the endCycle function, which was called after the commands had been sent to the embedded system. It looked like this:

// the function takes in the time the data was last updated by the embedded system
// we treat this as the end of the cycle
void JointsMonitor::endCycle(ros::Time endTime) {
    cycleEnd = endTime;
    owr_messages::board statusMsg;
    statusMsg.header.stamp = endTime;
    ros::Time estimateTime = endTime;
    int i, j;
    // currentStateMessage holds the state of all the joints, and is used to
    // publish their transforms
    currentStateMessage.velocity.resize(joints.size());
    currentStateMessage.position.resize(joints.size());
    currentStateMessage.effort.resize(joints.size());
    currentStateMessage.name.resize(joints.size());

    // we loop through each joint and estimate its transform for a few intervals in the future
    // this improves our accuracy as our embedded system didn't update fast enough
    for (i = 0; i < numEstimates; i++, estimateTime += updateInterval) {
        currentStateMessage.header.stamp = estimateTime;
        currentStateMessage.header.seq += 1;
        j = 0;
        for (std::vector<JointController*>::iterator it = joints.begin(); it != joints.end(); ++it, j++) {
            jointInfo info = (*it)->extrapolateStatus(cycleStart, estimateTime);
            publish_joint(info.jointName, info.position, info.velocity, info.effort, j);
        }
        statesPub.publish(currentStateMessage);
    }
    // we also publish debugging information for each joint
    // this tells the operator where we think the joint is,
    // how fast we think it is moving, and what PWM value we want it to be at
    for (std::vector<JointController*>::iterator it = joints.begin(); it != joints.end(); ++it) {
        jointInfo info = (*it)->extrapolateStatus(cycleStart, endTime);
        owr_messages::pwm pwmMsg;
        pwmMsg.joint = info.jointName;
        pwmMsg.pwm = info.pwm;
        pwmMsg.currentVel = info.velocity;
        pwmMsg.currentPos = info.position;
        pwmMsg.targetPos = info.targetPos;
        statusMsg.joints.push_back(pwmMsg);
    }
    debugPub.publish(statusMsg);
}
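To give a feel for how these pieces fitted together, here is a rough sketch of a driver main loop. The boardControl.update() call and the loop rate are assumptions for illustration; only endCycle is shown in this article:

// Illustrative only: roughly how the classes above might be driven.
ros::Rate loopRate(10);  // assumed control loop frequency
while (ros::ok()) {
    ros::spinOnce();                      // deliver queued joint control messages
    boardControl.update();                // hypothetical: send commands to the embedded system
    monitor.endCycle(ros::Time::now());   // publish transforms and debug info
}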

Overall this system proved to be extremely useful: it allowed us to easily adjust code for all motors of a given type, and to reuse code when new components were added. In addition, the standardised interface allowed us to quickly debug problems (of which there were many) and easily add new functionality. One instance where this came in handy was with our lidar gimbal. The initial code to control this joint was designed to be used by our autonomous navigation system, but we discovered that for some tasks it was extremely useful to mount a camera on top and use the gimbal to control the angle of the camera. Due to the existing standard interface it was easy to add code to our joystick system to enable this, and we didn’t need to make any major changes to our main loop, which would have been risky that close to the competition.

Conclusion

Whilst time-consuming to implement and somewhat complex, this system enabled us to have a much more manageable code base. This was achieved by splitting the code into separate ROS nodes that supported standard interfaces, and by using an object-oriented model for our joint control. As a result it is likely that this system will be used on our next rover (NUMBAT), even though the underlying hardware and the way we communicate with our embedded systems will be changing significantly.

Next in this series you will hear from Simon Ireland on the embedded systems we needed to develop to get position feedback for a number of these joints, and some of the problems we faced.

Code in this article was developed for BLUEsat UNSW with contributions from Harry J.E Day, Simon Ireland and Sean Thompson, based on the BLUEtounge 1.0 steering and control code by Steph McArthur, Harry J.E Day, and Sam Scheding. Additional assistance in review and algorithm design was provided by Chris Squire, Chris Miller, Yiwei Han, Helena Kertesz, and Sebastian Holzapfel. Full source code for the BLUEtounge 2.0 rover as deployed at the European Rover Challenge 2016, as well as a full list of contributors, can be found on GitHub.


Posted on by

This is the first part in a small three-part series about the re-design of the rover suspension. We’ll touch on work from several parts of the team, but for now I’ll introduce you to the mechanical side of things.

However, before I talk about this re-design, I feel it necessary to explain why such a substantial change was needed. When we first began the design of BLUEtongue back in 2013, the team opted for a Rocker-Bogie style of suspension due to its many benefits in traction and stability when operating in rocky environments.

The BLUEtounge 1.0 rover, in its initial mechanical build, on the Globe Lawn steps.

Resulting from the complexity and cost attached to steerable wheels (such as swerve drives), we utilised skid steering, like you’ll find on a tank or bobcat. Unfortunately, to a significant extent, we misunderstood the physical nature of the suspension we were designing and the ramifications our choice to pursue skid steering would have. Upon initial testing, the inherent problems in the system began to make themselves known. First, the suspension was too tall and insufficiently rigid for a skid steering design. Due to this, attempts to turn the rover either caused flexure of the structure or caused the bogie to “kick”, rendering the rover immobile. You may see older photos of the rover with what we’ve called “bracing bars”.

Addition of bracing bars: you can see them attached to each of the rover’s wheel assemblies.

These bars lock the bogie to the rocker, permitting limited steering capability and allowing the rover to limp around. Secondly, the rocker was too long and couldn’t fit in conventional luggage. As we’d planned from the start to flat pack the rover into our personal luggage for transit to and from the contest, we had to search long and hard to find a suitable enclosure. Thirdly, the construction order. As many undergraduates quickly realise when they build things for the first time, build order is a very important thing to consider. In a Computer Aided Design (CAD) environment, assembly really is as easy as a few clicks. Need to mount a motor in a tight spot? Sure! Try to do this in physical space, where motors can’t fly through walls? Not so easy. Due to this, our assembly process was very convoluted, requiring gearboxes to be adjoined to the motors within other structures, then removed again for disassembly, and so on. All in all, our first suspension iteration was an utter nightmare. Hindsight really is 20/20.

So, now that we’re on the same page as to the why, I want to introduce you to the what. After our first appearance at the European Rover Challenge in 2015, we realised the suspension was one of the key limiting factors of the BLUEtongue rover platform. With the knowledge that a fundamental redesign was needed, we got to work over the next few months. The final design is a parallel swing arm suspension with a full rotation swerve drive. The new system was designed with a heavy focus on steering, dynamic response, assembly and transport.

A CAD render of the BLUEtounge 2.0 rover with its new suspension system.

As seen in the video below, steering is achieved through the actuation of a radially free, but axially constrained, shaft. Due to the low loads experienced and the limited rotation speeds, this arrangement is achieved with radial bearings and circlips. The design originally called for the use of a swivelling hub (really just a small-scale Lazy Susan) to provide the axial restoring force. However, during initial testing these were omitted to allow power cabling to pass through the shaft centres, and it quickly became evident the hubs were unnecessary. Luckily so, as this topside location was later used to mount analogue potentiometers for feedback, once it was established that the intended locating method of relative encoders and magnetically activated homing was insufficient (stay tuned for our next two articles for more on this). In order to drive the shaft, a DC motor with gearhead was mounted parallel to it, with an additional reduction gear step used to mechanically link the two. Further problems arose from this arrangement: the torque loading during operation consistently began to “strip” the lock screw of the brass pinion gear, leading to un-actuated free rotation of the shaft. This was easily solved through the use of thicker-walled carbon steel (1045, for anyone interested) replacement pinion gears.

Coupled with the problem of rover steering is the dynamic response. Due to time pressures, we were unable to properly characterise the design to validate our solution. As a result, we opted to take a leaf out of the hobbyists’ book and use shock absorbers designed for large-scale RC cars. Whilst a little smaller than ideal, the readily available variety of damping fluids and compression springs allowed for on-the-fly adaptation and variability. This allowed us to tailor the dynamics of the system to those desirable for the rover. This design will serve as a starting point to aid in the verification of analytical and numerical modelling, laying the foundation for our upcoming NUMBAT rover. I’ve included some slow motion video below; it’ll give you a good idea of how the suspension operates under an impulse loading. Watch this space for future posts about this kind of thing; we’ll be revisiting it later (eventually…).

I’m not going to dive too deep into the remaining points on assembly and transport, as they deal more with how you design something as opposed to what you’re designing. Our main objective here was to decouple mounting arrangements such that subassemblies could be shipped separately and then joined with minimal effort. If you take a look at the suspension, it can be boiled down into three main parts: the suspension subassembly, the rotation subassembly and the wheel subassembly. When mated, these form a completed suspension and drive assembly that can then easily be joined to the rover chassis. All in all, we only need to insert or remove a total of nine screws to join or remove each suspension unit. This is a major improvement over the Rocker-Bogie, which required a complete disassembly of both the wheel and suspension structures. (Lessons learned!)

Thank you for reading, like us on Facebook or stay tuned here for more articles, and feel free to get involved with the project if this grabbed your interest. You can find more about joining here.