
Did you want to be an astronaut growing up? Were your lofty ambitions brought down as you got older?

I’m here today to tell you to aim high once again – to aim for space. Maybe not as high as actually personally going to space, but you can get pretty close thanks to advancements in miniature spacecraft. It has never been easier to send something you built yourself to space. While it’s still a lot of work, the rewards are incredible.

In recent years, increasing numbers of small satellites have been launched by people and organisations that historically had no ability to reach space. The most common architecture for these small satellites is the CubeSat. CubeSats are built with commercial off-the-shelf parts and can be developed by individuals or small teams in the space of a few years. They are launched into space by hitchhiking on launches carrying larger satellites. These advances mean that CubeSats can be as much as 1000 times cheaper than traditional satellites. This cost decrease has enabled the rise of NewSpace startups such as Planet, which grew to a valuation of over a billion dollars in five years.

The first pair of Planet’s Dove CubeSats being deployed from the International Space Station.

 

Here’s what you’ll need to get started on developing your own CubeSat mission:

  1. An idea;
  2. Some money; and
  3. A few skills.

It doesn’t sound like much, does it? Let’s go into a bit more depth.

The Idea

The idea you come up with will be what your bit of space hardware does once it's up there; in other words, its mission. Satellites are the invisible MVPs of today's world, taking care of weather forecasts, global navigation, communications and much more. If you want to send some hardware up there, in the form of a satellite or otherwise, you will first need to find a problem to solve with it.

There are over 2000 operational satellites in space today, all doing their part for us. However, the small satellites and hosted payloads you or I can send up will not be doing the same work as the bigger, billion-dollar satellites. I mention this because the key to finding and building on a good idea isn't sitting around and thinking really hard. To build a solid idea, you will have to read widely, speak to the people whose problem you're looking to solve, and listen carefully to their feedback.

Money

While money isn’t as big an issue nowadays as it once was thanks to the NewSpace revolution, reaching space is still an expensive ordeal. You will most likely need hundreds of thousands of dollars to pay for construction, testing, launch, and operations.

Now, there is a way to reverse this problem entirely, and instead make money from your space mission. The way to do this is to go back to your idea and to ask: Is this something people would pay for? Am I tackling a big enough pain point for people? While this is not the traditional way, you and I are even less likely to find success begging NASA or ESA for money.

Skills

Now here is where we at BLUEsat come in! As engineers with few ideas and little money, skills are where we try to excel.

Some serious engineering ability is still needed nowadays to reach space. But with open source architectures and modular off-the-shelf parts becoming more readily available, the level of knowledge needed has dropped considerably. A bit of background on the basics of spacecraft engineering, electrical engineering and coding is all you’ll need to get started. Learning the rest will happen automatically as you design and build.

This is more or less how BLUEsat approaches spacecraft engineering. Students joining BLUEsat aren’t equipped with encyclopedic knowledge of how spacecraft are built and how they work. We simply teach our members the basics, install some software for them and point them towards some problem that we would like to solve. Every one of our senior members has started from such humble origins and slowly googled and built their way to greater understanding.

Members of BLUEsat’s ground station team messing about with RF electronics.

So why am I telling you this?

At BLUEsat, our Orbital Systems Division is hard at work on a number of projects. We have recently put together a team to work on developing a mission for our own CubeSat, and we need your help. No matter your year or degree, we will gladly take you in and help build your space engineering capabilities. We meet at Electrical Engineering (G17) room 419 every Saturday between 10:30AM and 5PM. Feel free to pop in and say hi.

I’ll see you folks in Part 2, where we talk a little more about how to come up with space mission ideas.



BUCKLE UP EVERYONE WE’RE GOING TO GO ON A WILD AND EXHILARATING JOURNEY INVOLVING SPREADSHEETS AND LOTS OF MEETINGS

BLUEsat does a lot of cool stuff. Robots, satellites and radios are all super cool. They get you engaged, using practical skills and building something physical that you can show off.

TO DO ALL OF THAT, YOU NEED MONEY

Soft drinks paid for a surprising amount of the robot

Before you can even start on figuring out how to allocate money to all the people who need it, you need to work out a rough budget for the project itself. As a non-technical member, I have no idea how much stuff costs. It therefore falls on team leads and the CTO to give me the numbers that we need to work with.

Batteries are expensive

I then work out a reasonable amount that can be allocated to each project, based on funding from previous years. Team leads then return a budget, and we discuss which parts are essential, working towards a target amount that everyone is happy with.

It’s at this point that we start stressing about how we’re going to afford all of this.

After figuring out roughly how much money we'll need for the year, how we're going to get the money suddenly becomes very important. Traditionally, BLUEsat has gotten a significant chunk of its funding from the University. In more recent years, our operations have grown and we've started working on two projects in tandem, which naturally increases costs. To keep up with our increasing capacity to churn through cash, we've started to seek sponsors from outside the university.

BLUEsat is currently sponsored by:

Platinum Sponsors:

NSW Government, UNSW

Silver Sponsors:

Arc Clubs

Bronze Sponsors:

ServoCity

Once all of that is done, we've got our budget for the year! Wouldn't life be nice if nothing unplanned happened?

 

 

 



The Waterfall Plot

Want to get your hands metaphorically dirty with some BLUEsat projects but don't have enough cash to fund both your HECS debt and your rover? This project is so simple to follow that even an arts undergraduate could complete it. We will be transforming the radio signals that exist all around you into a graph known as a waterfall plot. It will look something like this:


Leave this running on your computer long enough for your mates to walk by and they'll think you're tapping into Russian communications, and then you can land yourself an internship at Telstra.

Technically you can tap into Russian communications (that part isn't a joke), but other, more practical applications include checking the signal strength in your network, interpreting packet radio, and listening to the Triple J Hottest 100 (as shown in the diagram).

 

So let's get started!

 

What you will need

You will need these to get started:

  • SDR (software defined radio)
  • Antenna
  • GNU Radio
  • Python
  • A computer with at least one USB port

Software-defined radios are extremely handy pieces of equipment thanks to their size, cost and effectiveness. They connect via USB and only require an antenna. We found a source that sells the model we will be using for only $20, which you can find here.

GNU Radio is a Python-based, open-source graphical tool for creating signal flow graphs and generating flow-graph source code. We will be using GNU Radio to communicate with our SDR, and it has the potential to do much, much more. You can download the software here: www.gnuradio.org

Since GNU Radio runs on Python, it makes sense to have Python installed. But what is Python? No, it is not malware, so rest assured it won't swallow up your operating system. Python is a widely used programming language, and the one we will be using in this project. Make sure you get the right bit version by checking whether your downloaded version of GNU Radio runs on 64-bit or 32-bit. You can download it here: www.python.org

Here is an image of the SDR:

 

GNU Radio

Assuming that we have been successful up to this point in purchasing the equipment, downloading GNU Radio and setting everything up, we can begin creating our program.

The template we will be using can be downloaded through this link. This will save you the time of learning how to configure GNU Radio from scratch; in your spare time, you can learn how to add more powerful tools to improve on and diversify from this template. Opening the linked file will give you something like this.

The two main blocks that allow this program to function are the source block and the sink block. The source block funnels the data from the SDR into GNU Radio, and the sink block compiles the information to be displayed on a custom GUI. You may tweak the template as you gradually gain a better understanding of how the program works, including adding an audio sink, which isn't hard but is homework for you to figure out.

The last step is to compile and run the program, either by clicking the 'play' button or by using the F5 shortcut if you can't find it. This will create a new window with the waterfall plot showing all the receivable frequencies in your range. The frequency slider at the bottom will allow you to adjust the centre frequency that you want to listen to.

So now you have your own cheap and miniature device for frequency capture! But now it is time to test it out on bigger and much more expensive equipment, like maybe a 2m antenna on top of the Electrical Engineering building…

Join BLUEsat to participate in bigger and better projects than this by contacting us. Happy tapping into communications in the meantime!



In our last article, as part of our investigation into different Graphical User Interface (GUI) options for the next European Rover Challenge (ERC), we looked at a proof of concept for using QML and Qt5 with ROS. In this article we will continue with that proof of concept by creating a custom QML component that streams video from a ROS sensor_msgs/Image topic, and adding it to the window we created in the previous article.

Setting up our Catkin Packages

  1. In qt-creator reopen the workspace project you used for the last tutorial.
  2. For this project we need an additional ROS package for our shared library that will contain our custom QML Video Component. We need this so the qt-creator design view can deal with our custom component. In the project window, right click on the “src” folder, and select “add new”.
  3. Select “ROS>Package” and then fill in the details so they match the screenshot below. We’ll call this package “ros_video_components” and  the Catkin dependencies are “qt_build roscpp sensor_msgs image_transport” The QT Creator Create Ros Package Window
  4. Click “next” and then “finish”
  5. Open up the CMakeLists.txt file for the ros_video_components package, and replace it with the following file.
    ##############################################################################
    # CMake
    ##############################################################################
    
    cmake_minimum_required(VERSION 2.8.3)
    project(ros_video_components)
    
    ##############################################################################
    # Catkin
    ##############################################################################
    
    # qt_build provides the qt cmake glue, roscpp the comms for a default talker
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    include_directories(include ${catkin_INCLUDE_DIRS})
    # Use this to define what the package will export (e.g. libs, headers).
    # Since the default here is to produce only a binary, we don't worry about
    # exporting anything.
    catkin_package(
        CATKIN_DEPENDS qt_build roscpp sensor_msgs image_transport
        INCLUDE_DIRS include
        LIBRARIES RosVideoComponents
    )
    
    ##############################################################################
    # Qt Environment
    ##############################################################################
    
    # this comes from qt_build's qt-ros.cmake which is automatically
    # included via the dependency call in package.xml
    find_package(Qt5 COMPONENTS Core Qml Quick REQUIRED)
    
    ##############################################################################
    # Sections
    ##############################################################################
    
    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/ros_video_components/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    
    ##############################################################################
    # Sources
    ##############################################################################
    
    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    
    ##############################################################################
    # Binaries
    ##############################################################################
    add_library(RosVideoComponents ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(RosVideoComponents Quick Core)
    target_link_libraries(RosVideoComponents ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(RosVideoComponents PUBLIC include)
    
    

    Note: This code is based on the auto-generated CMakeLists.txt file provided by the qt-create ROS package.
    This is similar to what we did in the last example, but with a few key differences:

    catkin_package(
        CATKIN_DEPENDS qt_build roscpp sensor_msgs image_transport
        INCLUDE_DIRS include
        LIBRARIES RosVideoComponents
    )
    

    This tells catkin to export the RosVideoComponents build target as a library to all dependencies of this package.

    Then in this section we tell catkin to make a shared library target called "RosVideoComponents", rather than a ROS node, that links the C++ source files with the Qt MOC/header files and the Qt resources.

    add_library(RosVideoComponents ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(RosVideoComponents Quick Core)
    target_link_libraries(RosVideoComponents ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(RosVideoComponents PUBLIC include)
    
  6. Next we need to fix our package.xml file; the qt-creator plugin has a bug where it puts all the ROS dependencies in one build_depends and one run_depends tag, rather than listing them separately. You need to separate them like so:
      <buildtool_depend>catkin</buildtool_depend>
      <build_depend>qt_build</build_depend>
      <build_depend>roscpp</build_depend>
      <build_depend>image_transport</build_depend>
      <build_depend>sensor_msgs</build_depend>
      <build_depend>libqt4-dev</build_depend>
      <run_depend>qt_build</run_depend>
      <run_depend>image_transport</run_depend>
      <run_depend>sensor_msgs</run_depend>
      <run_depend>roscpp</run_depend>
      <run_depend>libqt4-dev</run_depend>
    
  7. Again we need to create src/, resources/ and include/ros_video_components folders in the package folder.
  8. We also need to make some changes to our gui project to depend on the library we generate. Open up the CMakeLists.txt file for the gui package and replace the following line:
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)

    with

    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport ros_video_components)
  9. Then add the following lines to the gui package's package.xml:
    <build_depend>ros_video_components</build_depend>
    <run_depend>ros_video_components</run_depend>
    
  10. In the file browser create src and include/ros_video_components folders in the ros_video_components folder.

Building the Video Streaming Component

When we are using the rover, the primary purpose the GUI serves in most ERC tasks is displaying camera feed information to users. Thus it felt appropriate to use streaming video from ROS as a proof of concept to determine whether QML and Qt5 would be an appropriate technology choice.

We will now look at building a QML component that subscribes to a ROS image topic, and displays the data on screen.

  1. Right click on the src folder of the ros_video_components folder, and select “Add New.”
  2. We first need to create a class for our Qt component, so select "C++>C++ Class".
  3. We’ll call our class “ROSVideoComponent” and it has the custom base class “QQuickPaintedItem.” We’ll also need to select that we want to “Include QObject” and adjust the path of the header file so the compiler can find it. Make sure your settings match those in this screenshot:
    Qt Creator C++ Class Creation Dialogue
  4. Open up the header file you just created and update it to match the following
     
    #ifndef ROSVIDEOCOMPONENT_H
    #define ROSVIDEOCOMPONENT_H
    
    #include <QQuickPaintedItem>
    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <sensor_msgs/Image.h>
    #include <QImage>
    #include <QPainter>
    
    class ROSVideoComponent : public QQuickPaintedItem {
        // the Q_OBJECT macro gives this class Qt meta-object support (signals, slots and QML integration)
        Q_OBJECT
        
        public:
            ROSVideoComponent(QQuickItem *parent = 0);
    
            void paint(QPainter *painter);
            void setup(ros::NodeHandle * nh);
    
        private:
            void receiveImage(const sensor_msgs::Image::ConstPtr & msg);
    
            ros::NodeHandle * nh;
            image_transport::Subscriber imageSub;
            // these are used to store our image buffer
            QImage * currentImage;
            uchar * currentBuffer;
            
            
    };
    
    #endif // ROSVIDEOCOMPONENT_H
    

    Here, QQuickPaintedItem is a Qt class that we can override to provide a QML component with a custom paint method. This will allow us to render our ROS video frames.
    Also in the header file we have a setup function, which we use to initialise our ROS subscriptions (since we don't control where the constructor of this class is called), and our conventional ROS subscriber callback.

  5. Open up the ROSVideoComponent.cpp file and change it so it looks like this:
     
    #include <ros_video_components/ROSVideoComponent.hpp>
    
    ROSVideoComponent::ROSVideoComponent(QQuickItem * parent) : QQuickPaintedItem(parent), currentImage(NULL), currentBuffer(NULL) {
    
    }
    

    Here we use an initialiser list to call our parent constructor, and then initialise our currentImage and currentBuffer pointers to NULL. This is very important, as we use these pointers to check whether we have received any ROS messages yet.

  6. Next add a “setup” function:
    void ROSVideoComponent::setup(ros::NodeHandle *nh) {
        image_transport::ImageTransport imgTrans(*nh);
        imageSub = imgTrans.subscribe("/cam0", 1, &ROSVideoComponent::receiveImage, this, image_transport::TransportHints("compressed"));
        ROS_INFO("setup");
    }
    

    This function takes in a pointer to our ROS NodeHandle, and uses it to create a subscription to the "/cam0" topic. We use image_transport, as is recommended by ROS for video streams, and direct it to call the receiveImage callback.

  7. And now we implement said callback:
    void ROSVideoComponent::receiveImage(const sensor_msgs::Image::ConstPtr &msg) {
        // check to see if we already have an image frame, if we do then we need to delete it
        // to avoid memory leaks
        if(currentImage) {
            delete currentImage;
        }

        // allocate a buffer of sufficient size to contain our video frame
        uchar * tempBuffer = (uchar *) malloc(sizeof(uchar) * msg->data.size());

        // and copy the message into the buffer
        // we need to do this because the QImage api requires the buffer we pass in to continue to exist
        // whilst the image is in use, but the msg and its data will be lost once we leave this context.
        memcpy(tempBuffer, msg->data.data(), msg->data.size());

        // we then create a new QImage, this code below matches the spec of an image produced by the ros gscam module
        currentImage = new QImage(tempBuffer, msg->width, msg->height, QImage::Format_RGB888);

        ROS_INFO("Received");

        // we then swap our buffer pointers, releasing the previous frame's buffer
        // (allocated with malloc, so it must be released with free) to avoid memory leaks
        if(currentBuffer) {
            free(currentBuffer);
        }
        currentBuffer = tempBuffer;

        // And re-render the component to display the new image.
        update();
    }
    
  8. Finally we override the paint method
    
    void ROSVideoComponent::paint(QPainter *painter) {
        if(currentImage) {
            painter->drawImage(QPoint(0,0), *(this->currentImage));
        }
    }
    
  9. We now have our QML component, and you can check that everything is working as intended by building the project (the hammer icon in the corner of the IDE, or using catkin_make). In order to use it we must add it to our qml file, but first, since we want to be able to use it in qt-creator's design view, we need to add a plugin class.
  10. Right click on the src folder and select “Add New” again.
  11. Then select “C++>C++ Class.”
  12. We'll call this class OwrROSComponents, and use the following settings: OwrROSComponents class creation dialogue
  13. Replace the header file so it looks like this
    #ifndef OWRROSCOMPONENTS_H
    #define OWRROSCOMPONENTS_H
    
    #include <QQmlExtensionPlugin>
    
    class OWRRosComponents : public QQmlExtensionPlugin {
        Q_OBJECT
        Q_PLUGIN_METADATA(IID "bluesat.owr")
    
        public:
            void registerTypes(const char * uri);
    };
    
    #endif // OWRROSCOMPONENTS_H
    
  14. Finally make the OwrROSComponents.cpp file look like this
    #include "ros_video_components/OwrROSComponents.hpp"
    #include "ros_video_components/ROSVideoComponent.hpp"
    
    void OWRRosComponents::registerTypes(const char *uri) {
        qmlRegisterType<ROSVideoComponent>("bluesat.owr",1,0,"ROSVideoComponent");
    }
    
  15. And now we just need to add it to our QML and application code. Let's do the QML first. At the top of the file (in edit view) add the following line:
    import bluesat.owr 1.0
    
  16. And just before the final closing brace, add this code to place the video component below the other image:
    ROSVideoComponent {
       // @disable-check M16
       objectName: "videoStream"
       id: videoStream
       // @disable-check M16
       anchors.bottom: parent.bottom
       // @disable-check M16
       anchors.bottomMargin: 0
       // @disable-check M16
       anchors.top: image1.bottom
       // @disable-check M16
       anchors.topMargin: 0
       // @disable-check M16
       width: 320
       // @disable-check M16
       height: 240
    }
    

    This adds our custom "ROSVideoComponent", whose type we just registered in the previous steps, to our window.

    Note: the @disable-check M16 comments prevent qt-creator from getting confused about our custom component, which it doesn't detect properly. This is an unfortunate limitation of using cmake (catkin) rather than Qt's own build system.

  17. Then, because Qt's runtime and qt-creator use different search paths, we also need to register the type on the first line of our MainApplication::run() function:
    qmlRegisterType<ROSVideoComponent>("bluesat.owr",1,0,"ROSVideoComponent");
    
  18. Finally we need to add the following lines to the end of our run function in MainApplication to connect our video component to our NodeHandle:
    ROSVideoComponent * video = this->rootObjects()[0]->findChild<ROSVideoComponent*>(QString("videoStream"));
    video->setup(&nh);
    

    And the relevant #include

    #include <ros_video_components/ROSVideoComponent.hpp>
    
  19. To test it, publish a video stream using your preferred ROS video library.
    For example, if you have the ROS gscam library set up and installed, you could run the following to stream video from a webcam:

    export GSCAM_CONFIG="v4l2src device=/dev/video0 ! videoscale ! video/x-raw,width=320,height=240 ! videoconvert"
    rosrun gscam gscam __name:=camera_1 /camera/image_raw:=/cam0

Conclusion

So in our previous post we learnt how to set up Qt and QML in ROS's build system, and got that all working with the qt-creator IDE. This time we built on that system to develop a widget that takes ROS video data and renders it to the screen, demonstrating how to integrate ROS's message system into a Qt/QML environment.

The code in this tutorial forms the basis of BLUEsat's new user interface, which is currently in active development. You can see the current progress on our github, where a number of additional widgets should be added in the near future. If you want to learn more about the kind of development we do at BLUEsat, or are a UNSW student interested in joining, feel free to send an email to info@bluesat.com.au.

Acknowledgements

Some of the code above is based on a Stack Overflow answer by Kornava about how to create a custom image rendering component, which can be found here.



The BLUEsat Off World Robotics software team is rebuilding our user interface in an effort to address the maintenance and learning-curve problems we have with our existing GLUT/OpenGL-based GUI. After trying out a few different options, we've settled on a combination of Qt and QML. We liked this option as it allows easy maintenance, with a reasonable amount of power and flexibility. We decided to share a simple tutorial we made for working with Qt and ROS.

In part one of this article we go through the process of setting up a ROS package with Qt5 dependencies and building a basic QML application. In the next instalment we will then look at streaming a ROS sensor_msgs/Image video into a custom QML component. The article assumes that you have ROS Kinetic set up, and some understanding of how ROS works. It does not assume any prior knowledge of Qt.

Full sources for this tutorial can be found on BLUEsat’s github.

Setting up the Environment

First things first, we need to set up Qt, and because one of our criteria for GUI solutions is ease of use, we also need to set up qt-creator so we can take advantage of its visual editor features.

Fortunately there is a ROS plugin for qt-creator (which you can find out more about here). To set it up we do the following (instructions for Ubuntu 16.04; for other platforms see the source here):


sudo add-apt-repository ppa:beineri/opt-qt571-xenial
sudo add-apt-repository ppa:levi-armstrong/ppa
sudo apt-get update && sudo apt-get install qt57creator-plugin-ros

We also need to install the ROS Qt packages; these allow us to easily set up some of the catkin dependencies we will need later (note: unfortunately these packages are currently designed for Qt4, so we can't take full advantage of them):


sudo apt-get install ros-kinetic-qt-build

Setting up our ROS workspace

  1. We will use qt-creator to create our workspace, so start by opening qt-creator.
  2. On the welcome screen select “New Project”. Then chose “Import Project>Import ROS Workspace”.
    The QT-Creator new project dialogue display the correct selection for creating a new ros project.
  3. Name the project “qt-gui” and set the workspace path to a new folder of the same name. An error dialogue will appear, because we are not using an existing workspace, but that is fine.
  4. Then click "Generate Project File". The QT Import Existing ROS Project Window
  5. Click “Next”, choose your version control settings then click “Finish”
  6. For this project we need a ROS package that contains our gui node. In the project window, right click on the “src” folder, and select “add new”.
  7. Select “ROS>Package” and then fill in the details so they match the screenshot below. We’ll call it “gui” and  the Catkin dependencies are “qt_build roscpp sensor_msgs image_transport” QT Creator Create Ros Package Window
  8. Click “next” and then “finish”
  9. Open up the CMakeLists.txt file for the gui package, and replace it with the following file.
    
    ##############################################################################
    # CMake
    ##############################################################################
    
    cmake_minimum_required(VERSION 2.8.0)
    project(gui)
    
    ##############################################################################
    # Catkin
    ##############################################################################
    
    # qt_build provides the qt cmake glue, roscpp the comms for a default talker
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    set(QML_IMPORT_PATH "${QML_IMPORT_PATH};${CATKIN_GLOBAL_LIB_DESTINATION}" )
    set(QML_IMPORT_PATH2 "${QML_IMPORT_PATH};${CATKIN_GLOBAL_LIB_DESTINATION}" )
    include_directories(${catkin_INCLUDE_DIRS})
    # Use this to define what the package will export (e.g. libs, headers).
    # Since the default here is to produce only a binary, we don't worry about
    # exporting anything. 
    catkin_package()
    
    ##############################################################################
    # Qt Environment
    ##############################################################################
    
    # this comes from qt_build's qt-ros.cmake which is automatically 
    # included via the dependency call in package.xml
    #rosbuild_prepare_qt4(QtCore QtGui QtQml QtQuick) # Add the appropriate components to the component list here
    find_package(Qt5 COMPONENTS Core Gui Qml Quick REQUIRED)
    
    ##############################################################################
    # Sections
    ##############################################################################
    
    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/gui/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    
    ##############################################################################
    # Sources
    ##############################################################################
    
    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    
    ##############################################################################
    # Binaries
    ##############################################################################
    
    add_executable(gui ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(gui Quick Core)
    target_link_libraries(gui ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(gui PUBLIC include)
    install(TARGETS gui RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
    

    Note: This code is based on the auto-generated CMakeLists.txt file provided by the qt-create ROS package.
    Let's have a look at what this file is doing:

    cmake_minimum_required(VERSION 2.8.0)
    project(gui)
    
    ...
    
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    include_directories(${catkin_INCLUDE_DIRS})
    

    This part is simply setting up the ROS package, as you would expect in a normal CMakeLists.txt file.

    find_package(Qt5 COMPONENTS Core Qml Quick REQUIRED)
    

    In this section we set up catkin to include Qt5, and tell it we need the Core, QML, and Quick components. This differs from a normal qt_build CMakeLists.txt, because we need Qt5 rather than Qt4.

    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/gui/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    

    This section tells cmake where to find the QT resource files, and where to find the QT header files so we can compile them using the QT precompiler.

    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    

    And this tells cmake where to find all the QT (and ROS) source files for the project

    add_executable(gui ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(gui Quick Core)
    target_link_libraries(gui ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(gui PUBLIC include)
    install(TARGETS gui RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
    

    Finally we set up a ROS node (executable) target called "gui" that links the C++ source files with the Qt MOC/header files and the Qt resources.

  10. Next we need to fix our package.xml file; the qt-creator plugin has a bug where it puts all the ROS dependencies in one build_depends and one run_depends tag, rather than listing them separately. You need to separate them like so:
     
      <buildtool_depend>catkin</buildtool_depend>
      <build_depend>qt_build</build_depend>
      <build_depend>roscpp</build_depend>
      <build_depend>image_transport</build_depend>
      <build_depend>sensor_msgs</build_depend>
      <build_depend>libqt4-dev</build_depend>
      <run_depend>qt_build</run_depend>
      <run_depend>image_transport</run_depend>
      <run_depend>sensor_msgs</run_depend>
      <run_depend>roscpp</run_depend>
      <run_depend>libqt4-dev</run_depend> 
    
  11. Create src/, include/<package name> and resources/ folders in the package. (Note: it doesn't seem possible to do this in qt-creator; you have to do it from the terminal or a file browser.)

Our ROS workspace is now setup and ready to go. We can start on our actual code.

Creating a Basic QML Application using the Catkin Build System

We’ll start by creating a basic ROS node, that displays a simple qml file.

  1. Right click on the src folder in the gui package, and select “Add New”
  2. Select “ROS>Basic Node” and then click choose
  3. Call it “guiMain” and click Finish. You should end up with a new file open in the editor that looks like this:Qt creator window displaying a main function with a basic ros "Hello World" in it
  4. We'll come back to this file later, but first we need to add our QML Application. In order to call ros::spinOnce without having to implement threading, we need to subclass the QQmlApplicationEngine class so we can use a Qt 'slot' to trigger it in the application's main loop (more on slots later). So to start we need to create a new class: right click on the src directory again and select "Add New."
  5. Select “C++>C++ Class “, then click “Choose”
  6. Set the “Class Name” to “MainApplication”, and the “Base Class” to “QQmlApplicationEngine.”
  7. Rename the header file so it has the path "../include/gui/MainApplication.hpp". This allows the CMakeLists.txt file we set up earlier to find it, and run the MOC compiler on it.
  8. Rename the source file so that it is called “MainApplication.cpp”. Your dialog should now look like this:Qt Creator "C++ Class" dialogue showing the settings described above.
  9. Click “Next”, then “Finish”.
  10. Change your MainApplication.hpp file to match this one:
    #ifndef MAINAPPLICATION_H
    #define MAINAPPLICATION_H
    
    #include <ros/ros.h>
    #include <QQmlApplicationEngine>
    
    class MainApplication : public QQmlApplicationEngine {
        Q_OBJECT
        public:
            MainApplication();
            //this method is used to setup all the ROS functionality we need, before the application starts running
            void run();
            
        //this defines a slot that will be called when the application is idle.
        public slots:
            void mainLoop();
    
       private:
            ros::NodeHandle nh;
    };
    
    #endif // MAINAPPLICATION_H
    

    Two important parts here: first, we add the line "Q_OBJECT" at the top of the class body. This tells the Qt MOC compiler to do Qt magic here in order to make this into a valid Qt Object.
    Secondly, we add the following lines:

    public slots:
        void mainLoop();
    

    What does this mean? Well, Qt uses a system of "slots" and "signals" rather than the more conventional "listener" system used by many other GUI frameworks. In layman's terms a slot acts similarly to a callback: when an event it's been "connected" to occurs, the function gets called.

  11. Now we want to update the MainApplication.cpp file. Edit it so it looks like the following:
    #include "gui/MainApplication.hpp"
    #include <QTimer>
    
    MainApplication::MainApplication() {
        
    }
    
    void MainApplication::run() {
        
        //this loads the qml file we are about to create
        this->load(QUrl(QStringLiteral("qrc:/window1.qml"))); 
        
        //Setup a timer to get the application's idle loop
        QTimer *timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(mainLoop()));
        timer->start(0);
    }
    

    The main things here are that we load a qml file at the resource path "qrc:/window1.qml" (in a moment we will create this file), and the timer code. Basically how this works is: we create a timer object, we create a connection between the timer object's "timeout" event (signal) and our "mainLoop" slot (which we will create in a moment), and then we set the timeout to 0, causing this event to trigger whenever the application is idle.

  12. Finally we want to add the mainLoop function to the end of our MainApplication code; it simply calls ros::spinOnce to get the latest ROS messages whenever the application is idle.
    void MainApplication::mainLoop() {
        ros::spinOnce();
    }
    
  13. In our guiMain.cpp we need to add the following lines at the end of our main function:
        QGuiApplication app(argc, argv);
        MainApplication engine;
        engine.run();
        
        return app.exec();
    

    This initialises our QML application, calls our run function, then enters QT’s main loop.

  14. You will also need to add these two #includes, to the top of the guiMain.cpp file
    #include <QGuiApplication>
    #include <gui/MainApplication.hpp>
    
  15. We now have all the C++ code we need to run our first demo. All that remains is writing the actual QT code. Right click on the “resources” folder, and select “Add New.”
  16. In the New File window, select “Qt” and then “QML File (Qt Quick 2)”, and click “Choose.”
  17.  Call the file “window1” and finish.
  18. We want to create a window rather than an item, so change the qml file so it looks like this:
    import QtQuick 2.0
    import QtQuick.Window 2.2
    
    Window {
        id: window1
        visible: true
    
    }
    
  19. Now we will use the visual editor to add a simple image to the window. With the QML file open, click on the far left menu and select “Design” mode. You should get a view like this:
    QT Creator QML Design View
  20. From the left hand "QML Types" toolbox drag an image widget onto the canvas. You should see a rectangle with "Image" written in it on your canvas.
    Qt Creator Design Window with an image added
  21. We need to add an image for it to use. To do this we need a resource file, so switch back to edit mode. Right click on the resources folder and select “Add New.”
  22. Select “Qt>Qt Resource File” and then click “Choose”
  23. Call the resource file “images,” and finish.
  24. This should open the resource file editor. First you need to add a new prefix: select "Add>New Prefix".
    QT Creator Resource File Editor: Select Add>Add Prefix
  25. Change the “prefix” to “/image”.
  26. We now want to add an image. Find an image file on your computer that is appropriate, then click "Add Files" and navigate to it. If the file is outside your project, qt-creator will prompt you to save it to your resources folder, which is good. You should now have a view that looks like this:
    QT Creator Resource File Editor with an image added
  27. Switch back to the qml file and design view.
  28. Click the image; on the right-hand side there will be a drop-down marked "source". Select the image you just added from it. (Note: if the designer has auto-filled this box but the image preview is not appearing, you may need to select another image and then reselect the one you want.) I used the BLUEsat logo as my image:
    I used the BLUEsat logo for my example
  29. Now we just need to put the qml somewhere we can find it. As in steps 21 to 26, create a new resource file in the resources folder called qml and add the qml window1.qml to it under the “/” prefix.
  30. At this point you should be able to build your project. You can build using catkin_make as you normally would, or by clicking the build button in the bottom left corner of the IDE.
  31. To run your ROS node you can either run it as you would normally from the command line using "rosrun", or you can run it from the IDE. To set up running it from the IDE, select "Projects" from the left hand menu, then under "Desktop" select run.
  32. Under the run heading, select "Add Run Step>rosrun step". You should get a screen like this: QT Creator project settings screen
  33. Set the package to “gui”, and the target should auto fill with “gui” as well.
  34. Press run in the bottom left corner. You should get something like this (note: depending on where you placed the image you may need to resize the window to see it). Important: as always with ROS, roscore needs to be running for nodes to start correctly. Window Displaying BLUEsat Logo
  35. You now have a working GUI application, compiled with catkin and running a ROS spin loop whenever the application is idle, but this is a bit useless without using some information from other ROS nodes in the GUI. In our next article, we will look at streaming video from ROS into Qt, so stay tuned!

 



Okay so in a previous post we covered the basic theory and hardware behind building a reaction wheel – a cool module used in satellites to control their orientation in space. Now that we have our hardware set up, let’s move into software development!

There are three main steps to the detumbling software:

  1. Obtain data from the gyroscope
  2. Process data and decide what output to send
  3. Send output to the motor

Obtaining Data

Most sensors nowadays use serial communications to send data bit-wise (one bit after the other) between devices. There are several serial 'protocols' (methods of sending data) including SPI, CAN and I2C. SPI is very fast, but is generally unreliable for transmission distances of more than a few inches. CAN is reliable over longer distances (it is commonly used for car internals) but tends to be a bit on the slow side. I2C is a happy medium between the two, having a fairly fast speed but still able to be transmitted over distances of a few metres. For a better look into I2C, Declan has written up an easy-to-understand explanation of I2C involving football!

Our gyro chip communicates over I2C, so we need to set up our Arduino to be able to talk to it. This is actually a fairly complex task to do, but fortunately there are many companies and hobbyists out there who write code to do this for us. These ‘libraries’ are commonly available online by searching the chip number – for example, our gyro has the chip number L3GD20. For this project we used the Adafruit_Sensor and Adafruit_L3GD20_U libraries.

Once these libraries are added to our Arduino libraries folder, we can examine the example code to work out how to use the library functions. The following code is extracted from one of these example programs, with the bare essentials to read in gyro data.

#include <Adafruit_Sensor.h>
#include <Adafruit_L3GD20_U.h> //include library header files to use library code later on

Adafruit_L3GD20_Unified gyro = Adafruit_L3GD20_Unified(20); //constructor for gyro object
gyro.begin();             //initialise gyro
sensors_event_t event;    //make a box to put data into
gyro.getEvent(&event);    //get data from gyro and put into box
float wZ = event.gyro.z;  //wZ is now the angular velocity (w) around the z (vertical) axis,
                          // i.e. how fast it is spinning when suspended from a string

 

Processing Data

Removing Bias

Before we can use this gyro data to do anything useful, we need to remove its ‘bias’. Most gyros have a bias in their measurements, meaning that all measurements will have some constant error in them – this is a bad thing! To remove this bias, we simply take a bunch of measurements when the sensor is not moving, average them to estimate the bias, then subtract this bias from all subsequent measurements to get the correct angular velocity. In code it looks like this:

//Calculate bias
for (i = 0; i < 100; i++){
  sensors_event_t event; 
  gyro.getEvent(&event);
  biasZ = biasZ + event.gyro.z;
  delay(10);  //pause for 10ms
}
biasZ = biasZ / 100;

//Correct subsequent measurements
sensors_event_t event; 
gyro.getEvent(&event);
float wZ = event.gyro.z - biasZ;

Controlling Output

Now our goal with the reaction wheel system is to detumble, or in other words, to have an angular velocity of 0°/s. To do that, we calculate the ‘error’ in our system (how much we are off by). For example, if we were spinning by 10°/s clockwise, then our error would be -10°/s (clockwise rotation is negative by convention).

To decide the output value (how fast we spin the motor), we use a very simple algorithm called a ‘proportional controller’ – simply speaking, the output of our motor is set to be proportional to the error in our angular velocity, or in math-speak:

Output = K ⨉ Error

(where K is some constant value). A very small K will not be very effective (like having a very weak motor), but too high a value of K will result in overcorrection (similar to oversteering a car), resulting in oscillations around your target value (0°/s). Tuning K to the optimum value is just a matter of experimenting to get the largest K that doesn’t overshoot.

don't be smol or swol
Effect of K on Controller Performance

 

In code, the controller looks like this (very simple):

float k = 2.0;
float motorSpeed = k * (-wZ); // error=target-wZ where target = 0

(Note that for this system, the best value of K happened to be 2.0, but for other systems it does not need to be a whole number.)

As with hardware, as a first development iteration, code should be as simple as possible (reducing development time at the cost of decreasing performance), but for future iterations a full PID controller is recommended (to reduce the time the system takes to stop spinning).
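Since the article stops at a proportional controller, here is a minimal sketch of what a full PID controller might look like for the same detumbling problem (the gains kp, ki, kd and the loop period dt are illustrative values only, not tuned for this system):

float kp = 2.0;   // proportional gain (same role as K above)
float ki = 0.1;   // integral gain - illustrative value only
float kd = 0.05;  // derivative gain - illustrative value only
float dt = 0.01;  // loop period in seconds

float integral = 0;
float previousError = 0;

float pidUpdate(float wZ) {
  float error = -wZ;                               // target is 0 deg/s, so error = 0 - wZ
  integral += error * dt;                          // I term: accumulates any steady offset
  float derivative = (error - previousError) / dt; // D term: damps overshoot and oscillation
  previousError = error;
  return kp * error + ki * integral + kd * derivative;
}

The extra integral and derivative terms are what reduce the time the system takes to settle: the I term removes any remaining slow drift, and the D term lets you use a larger gain without oscillating.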

 

Outputting Data

Now that we have our desired output amount, we then ‘send’ this value to the motor via an H-bridge. In simple terms, the H-bridge supplies power to the motor straight from the 12V battery pack, at a voltage proportional to the signal received from the Arduino. This allows the Arduino to set the motor speed without having to supply power directly to the motor – if an Arduino supplies too much power it will die a swift and horrible death.
The way you control the motor is very similar to controlling the brightness of an LED, except that you can push current in either direction. For example, if you wanted to set the motor to spin anticlockwise at half speed, you would do the following:

  • Set pin A to 2.5V (half of the max voltage the Arduino can put out)
  • Set pin B to 0V (where pins A and B are connected to the H-bridge)

or in code:

analogWrite(pinA, 128);    //128 is 2.5V (255 is 5V)
digitalWrite(pinB, LOW);   //LOW is 0V

This would cause the H-bridge to supply a +6V voltage (half of the full 12V available) across the motor pins, driving the motor at half speed.

Alternatively, if you wanted to set the motor to spin at full speed clockwise, you would do the following:

  • Set pin A to 0V
  • Set pin B to 5V

corresponding to the code:

digitalWrite(pinA, LOW);
analogWrite(pinB, 255);

which would put a -12V voltage across the motor pins, driving it at full speed in reverse.

(Note that you’ll need pin A and B to be PWM enabled to do this – look up the Arduino pinout for this, or look for a ‘~’ next to the pin numbers.)

Thus, to output our desired motor speed we use the following code:

float cappedMotorSpeed = min(1.0, abs(motorSpeed)); //cap speed at 100%
int motorOut = (int)(255 * cappedMotorSpeed);         //255 = 5V with analogWrite()

if (motorSpeed < 0){
  analogWrite(motorPinA, motorOut);
  digitalWrite(motorPinB, LOW);
} else {
  digitalWrite(motorPinA, LOW);
  analogWrite(motorPinB, motorOut);
}

 

Execute!

By repeating this ‘input-process-output’ sequence many times per second, we can detumble the reaction wheel system.
This is what this process looks like in action (using an RF module to activate/deactivate the wheel):

 

(Code available here)



Introduction

In our modern world of fast, cheap and powerful computing, there are countless different components that do nearly everything imaginable (and a bunch more that you would never have thought of). Combining these different components into a final product, be it a DIY weather station, a homemade rover or a satellite control system, is the essence of electronics design. But how do these components communicate?

This article will discuss one of the ways in which electronic components communicate, called I2C (pronounced "eye-squared-cee"). I2C is very common, well supported and relatively simple to use, and by understanding it we can develop an appreciation for electronic communication. [A brief note to clear up some confusing terminology: I2C is also called the two-wire interface (due to it only using two wires) by some companies, to avoid trademark issues with the I2C name (which belongs to Philips). There is essentially no difference between the two protocols and they are operated identically. SMBus is another very similar protocol, but it is used more in computing and is less common in electronics, with the notable exception of being present in the Raspberry Pi.]

Arduino connected to two circuit boards
Figure 1 – BLUEsat uses I2C to communicate between different components in a satellite

 

The Basics (Writing)

As communication protocols can get technical and overly confusing pretty quickly, let's try an analogy to explain the basics; we'll move into the more technical aspects in subsequent posts.

Imagine it's half time at the football game and the players have gathered around the coach for a stirring half-time speech. The coach has a couple of things he wants to say to the team: he needs to change some players and make some specific comments to a few of them.

Comparison of I2C Wiring Diagram to FNL Speech
Figure 2 – Less of a stretch than it may seem at first

Master and Slave

In this case the coach is called the Master and the players Slaves. The Master is in charge of starting communication and finishing it (you don't want to be talking back to coach out of line, he's not too understanding), while Slaves only respond to what the master instructs them to do. The slaves are allowed to do whatever they want otherwise (e.g. drink some water, have an orange slice, glare at the other team's huddle, etc.), but they can't speak. Typically there's only one Master (there can be multiple, but it gets a bit confusing and we'll ignore that for now, as most projects have only a single master). There can be, and often are, many slaves though.

Importantly, coach needs to let everyone know who he's talking to. He can't just say "okay you come off and you go on in your place"; no one would know who he's talking to. Thankfully, each player is easily identified by the number on the back of their shirt (the coach gave up learning everyone's name after the first week) and coach can instead say "17 listen, come off and 3 go on in your place".

This number is called the Address and each slave has one. They should each be unique: if they're not, there's no way of knowing which slave the master is talking to, and general confusion results. It's sort of like having two friends named Andrew; you're probably going to call one of them Andy (i.e. change their address) to stop getting confused.

Start and Stop Sequences

Coach has a couple of quirks: before he starts to talk he claps his hands and grunts "alright". This makes everyone pay attention to what he's going to say. When he's finished talking he claps his hands again and says "OK".

In I2C these are called Start and Stop Sequences respectively and are unique in that they are only performed at the beginning and end of a message. They let all the Slaves know when a new message is about to start (so they can see if it’s relevant to them) and also when it’s ended (so they can stop listening).

Acknowledgement

To make matters more confusing, coach wants to know you're paying attention (he hates repeating himself just because you weren't listening). Each time he calls for you or finishes talking to you, he wants you to say "yes coach". The only time he doesn't want you to respond is when he's completely finished talking to you (when he's done, he means it).

This is called an Acknowledgement and is important in I2C to determine whether the slave has received the master's messages. If no acknowledgements are received, the master could be speaking to no one at all and it wouldn't know. Acknowledgements are required after each message except start and stop sequences.

Coach talking to player
Figure 3 – Coach is very particular about his communication protocols to avoid any misunderstandings
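The acknowledge bit is also what lets you check whether anyone is listening at all. On an Arduino, a common trick is a little scanner sketch that addresses every possible slave in turn and reports which ones acknowledge. Here is a rough sketch of the idea using the standard Wire library (not part of the original article):

#include <Wire.h>

void setup() {
  Wire.begin();                          // join the I2C bus as the master
  Serial.begin(9600);
  for (byte address = 1; address < 127; address++) {
    Wire.beginTransmission(address);     // start sequence + this address
    byte error = Wire.endTransmission(); // stop sequence; returns 0 if a slave acknowledged
    if (error == 0) {
      Serial.print("Found a device at address ");
      Serial.println(address);
    }
  }
}

void loop() {}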

Example Exchange

So let’s put all this together in an example exchange:

 

Football Action | I2C Terminology | Purpose
Coach: "Alright" *claps* | Master: Start Sequence | Broadcasts to all slaves that a message is about to be sent
C: "14 listen" | M: Address of Slave to be Written To | Identifies which Slave the Master wants to communicate with
Player 14: "Yes coach" | Slave: Acknowledge Bit | Requested Slave lets the Master know they are listening to the message
C: "You're all over the place, stay on your man" | M: Sends Message | Master sends his message to the slave
14: "Yes coach" | S: Acknowledge Bit | Slave lets the master know his message was received
C: "OK" *claps* | M: Stop Sequence | Lets slaves know this communication is over

Not too complicated at all. This basic format is used for nearly all I2C messages with the address and the message being the only things to change in most cases.
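To see what this exchange looks like in practice on an Arduino, here is a minimal sketch using the standard Wire library (the address 14 mirrors "player 14" above, and the byte being written is just an arbitrary example; the library generates the start and stop sequences and checks the acknowledge bits for you):

#include <Wire.h>

const byte SLAVE_ADDRESS = 14;           // "player 14" from the exchange above

void setup() {
  Wire.begin();                          // join the I2C bus as the master
  Wire.beginTransmission(SLAVE_ADDRESS); // start sequence + slave address (write)
  Wire.write(0x2A);                      // the message itself (one arbitrary byte)
  Wire.endTransmission();                // stop sequence
}

void loop() {}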

 

Talking Back (Reading)

But, I hear you ask, what if I need to tell Coach something? What if the slave needs to send some information to the master? This is a very common situation (imagine a thermometer over I2C: if it can't send its temperature to the master it's pretty useless) and is called reading (as opposed to writing, which we've previously been doing). It's a little bit more complicated, but not much.

Let’s go with an example first this time and then breakdown the differences.

Football Action | I2C Terminology | Purpose
Coach: "Alright" *claps* | Master: Start Sequence | Broadcasts to all slaves that a message is about to be sent
C: "11 listen" | M: Address of Slave to be Written | Identifies which Slave the Master wants to communicate with
Player 11: "Yes coach" | Slave: Acknowledge Bit | Requested Slave lets the Master know they are listening to the message
C: "Do you think you can break through their defence" | M: Sends Message | Master sends his message to the slave
11: "Yes coach" (note: this is not the player responding to the coach, but simply acknowledging the message) | S: Acknowledge Bit | Slave lets the master know his message was received
C: "Alright" *claps* | M: Repeated Start Sequence | Broadcasts to all slaves that a message is about to be sent
C: "11 I'm listening" | M: Address of Slave to be Read From | Identifies which Slave the Master wants to hear from
11: "I think I can do it coach if Riggins can block me" | S: Sends Message | Slave sends his message to the master
C: "Thank you" | M: Acknowledge Bit | Master lets the slave know his message was received
C: "OK" *claps* | M: Stop Sequence | Lets slaves know this communication is over

Straight away we can see a lot of similarities. In fact, the first half of the exchange is exactly the same as the writing example, except that instead of ending with a stop sequence we ended with another start sequence. This is because the first step of reading is the Master telling the Slave what information it wants, and this involves the Master writing to the Slave just as before.

Another start sequence (often called a repeated start sequence) is sent to keep communication going. This is where the analogy starts to break down: some devices like to have a stop and a new start in between writing and reading, while others are happy with just repeating the start. When using I2C devices, make sure you check the datasheet for how they like to communicate.

The master then sends the address of the slave it wants to communicate with, but changes the last part of the statement to reflect that it wants to read from the slave, rather than write to it. The slave then sends the message and the master sends an Acknowledge bit (so that the slave knows its message was received). Finally, the master (as the device in charge of communication) sends a stop sequence to indicate the communication is over.
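As a rough sketch of this write-then-read pattern with the same Linux i2c-dev interface (again, the address 0x48 and register 0x00 are made up and error checking is omitted):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void) {
    int bus = open("/dev/i2c-1", O_RDWR);
    ioctl(bus, I2C_SLAVE, 0x48);   // address of the slave we want to hear from

    unsigned char reg = 0x00;      // the "question" we are asking the slave
    write(bus, &reg, 1);           // write phase: tell the slave what information we want

    unsigned char reply[2];
    read(bus, reply, 2);           // read phase: the slave sends its answer back

    // Note: separate write()/read() calls produce a stop and a fresh start between the
    // two phases. Devices that insist on a true repeated start need the I2C_RDWR ioctl
    // (a combined transaction) instead - as mentioned above, check the datasheet.
    close(bus);
    return 0;
}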

 

Conclusion

There we are: hopefully you now have more of an idea of how I2C operates. This is only the tip of the iceberg; I2C also defines a whole range of other events and behaviours, such as what happens when these protocols aren’t followed properly.

Remember to check back on the BLUEsat blog for more technical posts on everything space and engineering in the future.


Posted on by

Embedded Programming

Wrapping up the BLUEtongue 2.0 Rover’s Drive System series, following articles by Chris about the mechanical re-design of the system and Harry about the high-level software implementation, this article will outline the role of the embedded system in connecting the electrical motors to the high-level software. Primarily, this article will focus on analog-to-digital converters and their use in the drive system of BLUEtongue 2.0. Some understanding of electrical circuits and microprocessors is assumed in the following explanations.

ADC methodologies

Analog-to-Digital Converters (ADCs) are a cornerstone of signal processing, and are used in nearly all electrical devices today. The objective of an ADC is to convert an analog voltage signal into a digital representation. Various methods exist for implementing an ADC, each with its own benefits and purpose. To compare them, there are two key concepts to consider: speed and cost.

The speed of an ADC reflects how fast a sample-conversion sequence is performed and is most often measured as how many sample-conversions can be done within a certain time-frame (in samples per second). A higher speed is of course useful when high bandwidth is needed. Cost, on the other hand, describes how expensive the ADC is to implement (and to improve in resolution), and is influenced by the complexity and number of hardware components required in the design.

For typical ADC methods, an increase in speed comes with an increase in cost. This reflects the trade-off between parallel and sequential logic: parallel logic is faster but requires more hardware components. For this article, I will give a brief outline of three key ADC solutions:

  • The first method that will be addressed is the Dual Slope, also known as the Integrating method. This method works by charging up an integrator circuit by the voltage being sampled for a fixed amount of time, then discharging the same circuit at a known reference voltage back to no-charge. By using a counter to track the time taken for the discharge phase, the circuit is able to accurately derive a digital equivalent of the original analogue signal using the relation V_in = V_ref × (t_discharge / t_charge), where t_charge is the fixed charge time and t_discharge is the measured discharge time. Due to requiring the charging and discharging of a capacitor, this method is one of the slower methods used, but also not very costly.
  • Next is Successive Approximation (SAR), which, as the name suggests, operates by estimating the digital output by testing each bit in the final representation progressively from the MSB to the LSB. At each step, it sets the current ‘result’ bit to HIGH (bit = ‘1’), performs a Digital-to-Analog conversion (DAC) and checks whether the analog equivalent is greater than the sampled analog voltage, setting the bit back to LOW (bit = ‘0’) if so, then moves on to the next bit (a toy software model of this loop is sketched just after this list). Doing this ensures that the resulting digital value is the closest binary representation that is still less than the sampled voltage. This method is typically faster than the dual-slope method, but costs more due to the more complex circuitry.
  • The last method mentioned is Flash ADC, one of the fastest ADC methods. A flash ADC is a group of parallel comparators which individually check the input voltage against reference voltage for all possible digital outputs and then uses a priority encoder to select the appropriate binary result. The cost of this method is the largest of the three as it requires enough components to perform all these voltage comparisons in parallel.
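To make the successive-approximation loop described above concrete, here is a toy software model (purely illustrative; the dac_output function and comparator below are stand-ins, not the PIC hardware discussed later):

#include <stdio.h>

#define N_BITS 12

/* stand-in for the DAC: the voltage a given digital code corresponds to */
static double dac_output(unsigned code, double vref) {
    return vref * code / (double)(1 << N_BITS);
}

/* successive approximation: test each bit from the MSB down to the LSB */
unsigned sar_convert(double vin, double vref) {
    unsigned result = 0;
    for (int bit = N_BITS - 1; bit >= 0; bit--) {
        result |= (1u << bit);                 /* tentatively set the current bit HIGH */
        if (dac_output(result, vref) > vin) {  /* comparator: is the DAC now above the input? */
            result &= ~(1u << bit);            /* too high, so set the bit back to LOW */
        }
    }
    return result;  /* closest code that does not exceed the sampled voltage */
}

int main(void) {
    /* 1.65 V against a 3.3 V reference should land near half of full scale (2048) */
    printf("%u\n", sar_convert(1.65, 3.3));
    return 0;
}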
The internal working of BLUEtongue 2.0

In addition to the methods described here, there are also interesting ADC solutions such as the Sigma-Delta, but we will leave that for the reader to explore.

ADCs on BLUEtongue

One of the primary uses of the ADC on BLUEtongue was to implement the swerve drive system. To ensure the system’s functionality, it was important for the real-time wheel headings to be known as accurately as possible. To achieve this, potentiometers (pots) were integrated into the front two shafts of the wheel rotators and fed back to the control board, where the analog read-out of the pot was converted into a digital signal that was then passed through to the on-board computer via USB.

In addition to the swerve drive, ADCs were also used in feedback systems for arm manipulation.

 

ADC on the PIC

For BLUEtongue v2.0 the control board consisted of a custom-made PCB housing the dsPIC33EP512MC806 microprocessor (PIC) from Microchip (read more here). The ADC on the PIC used for the rover is an implementation of the SAR approach, with a few additional features.

The PIC provides two independent SAR modules. The first module (ADC1) can operate at 12-bit resolution with a single-channel S&H (Sample and Hold, where the analogue input is captured for the length of the conversion) if desired, whilst both modules can operate at 10-bit resolution with a 4-channel S&H.

The resulting conversions were stored in a dedicated 16×16-bit buffer (one buffer for each ADC module exists) allowing for convenient access upon completion. Furthermore, to signify that a conversion sequence has been performed, the PIC is able to generate interrupts or, alternatively, set a ‘done’ bit/flag. The former is useful for time-sensitive, synchronous data whilst the latter (which would be implemented through a form of polling) is less time-critical and better for asynchronous conversions.
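As a rough illustration of the polling alternative (this is not code from the rover, which uses the interrupt approach shown further below), a single manually triggered sample-and-convert could look something like this:

AD1CON1bits.SAMP = 1;                     // start sampling the selected channel
for (volatile int i = 0; i < 100; ++i);   // crude wait for the sample-and-hold to settle
AD1CON1bits.SAMP = 0;                     // end sampling and begin the conversion
while (!AD1CON1bits.DONE);                // poll the 'done' flag instead of using an interrupt
unsigned int result = ADC1BUF0;           // read back the converted value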

For the purpose of the swerve drive, we used ADC1 in 12-bit resolution with the ‘Channel Scan Select’ feature (which allows the module to sequentially scan multiple ADC pins), giving the best resolution possible whilst still meeting the conversion requirements of the multiple feedback sources. Furthermore, we used the interrupt method, as the swerve system’s feedback data was time-critical.

Programming the PIC

The following code demonstrates how the ADC was setup on the PIC.

// ** Code to set up the ADC for reading potentiometers ** //
// ** Uses the input scan select system to allow reading ** //
// ** of multiple analog inputs within a single module ** //

#include <xc.h>  // device header providing the register/bit definitions used below

// flag shared with the main loop; set by the ADC interrupt when new data is ready
volatile int adc_ready = 0;

void setupADC1(void) {
    // Set appropriate pins as inputs (to read from the pots)
    TRISBbits.TRISB8 = 1;
    TRISBbits.TRISB10 = 1;
    TRISBbits.TRISB12 = 1;
    TRISBbits.TRISB15 = 1;
    TRISEbits.TRISE0 = 1;
    TRISEbits.TRISE1 = 1;
    TRISEbits.TRISE2 = 1;
    TRISEbits.TRISE3 = 1;

    // Setup the pins to read analog values
    ANSELBbits.ANSB8 = 1;
    ANSELBbits.ANSB10 = 1;
    ANSELBbits.ANSB12 = 1;
    ANSELBbits.ANSB15 = 1;
    ANSELEbits.ANSE0 = 1;
    ANSELEbits.ANSE1 = 1;
    ANSELEbits.ANSE2 = 1;
    ANSELEbits.ANSE3 = 1;

    // Set the control registers to zero, these contain garbage after a reset
    // This also ensures the ADC module is OFF
    AD1CON1 = 0;
    AD1CON2 = 0;
    AD1CON3 = 0;

    // clear ADC1 control registers: CON4, CHS0, CHS123 and CHSSH/L
    AD1CON4 = 0;
    AD1CHS0 = 0;
    AD1CHS123 = 0;
    AD1CSSH = 0;
    AD1CSSL = 0;

    AD1CON1bits.AD12B = 1; // Activate 12 bit adc.

    // *** CLOCK SETTINGS *** //
    // Changes the ADC module clock period for both conversion and sampling.
    // Tad must be greater than 117.6 ns (electrical specs, pg560), T_CY is 1/70 MHz
    // Tad = T_CY * (ADCS + 1)
    // ADCS >= Tad/T_CY - 1
    // ADCS >= (117.6*10^-9)*(70*10^6) - 1
    // ADCS >= 7.232, so round up to 8 (the value below, 0x0F, comfortably satisfies this)

    AD1CON3bits.ADCS = 0x0F; // T_AD = T_CY * (ADCS + 1)
    AD1CON3bits.SAMC = 0x1F; // Auto-sample time = 31 TAD (above the minimum required for 12-bit mode)

    // Auto-sampling, automatically end sampling and begin conversion
    AD1CON1bits.SSRC = 0b111;

    // Select the pins that will be cycled through via input scan select
    // NOTE: The ADC scans in ascending order of analog number, i.e.
    // if connecting an4, 9, 5, 12 the buffer will be filled:
    // 4, 5, 9, 12. Ensure any changes enforce this convention!
    AD1CON2bits.CSCNA = 1; // Activate channel scan select
    AD1CSSLbits.CSS8 = 1;
    AD1CSSLbits.CSS10 = 1;
    AD1CSSLbits.CSS12 = 1;
    AD1CSSLbits.CSS15 = 1;
    AD1CSSHbits.CSS24 = 1;
    AD1CSSHbits.CSS25 = 1;
    AD1CSSHbits.CSS26 = 1;
    AD1CSSHbits.CSS27 = 1;

    // Will need to interrupt after (N-1) sample/conversion sequences,
    // where N = number of signals being read (e.g. AN16 and AN24 = 2 signals, so SMPI = 1)
    AD1CON2bits.SMPI = 7; // 8 channels scanned, so interrupt after every 8th conversion

    //automatically begin sampling whenever last conversion finishes, SAMP bit will be set automatically
    AD1CON1bits.ASAM = 1;

    // Clear interrupt flag, set interrupt priority
    _AD1IF = 0;
    _AD1IP = 3;

    // Enable the interrupt
    _AD1IE = 1;

    //enable ADC1
    AD1CON1bits.ADON = 1;
}

// ADC interrupt serve routine (ISR). This sets a variable so that the main function
// knows that a conversion has finished and can read from buffer.
void __attribute__((__interrupt__, no_auto_psv)) _AD1Interrupt(void) {
    _AD1IF = 0;
    adc_ready = 1;
}
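For completeness, here is a rough sketch of how the main loop might consume the conversions once the ISR has raised the flag (assuming adc_ready is the volatile flag declared alongside the setup code above, and that the eight scanned channels fill ADC1BUF0 through ADC1BUF7 in ascending analog-number order, as noted in the setup comments; the function name is hypothetical):

void pollPotentiometers(void) {
    if (adc_ready) {
        adc_ready = 0;        // clear the flag set by the ISR
        unsigned int raw[8];
        raw[0] = ADC1BUF0;    // AN8
        raw[1] = ADC1BUF1;    // AN10
        raw[2] = ADC1BUF2;    // AN12
        raw[3] = ADC1BUF3;    // AN15
        raw[4] = ADC1BUF4;    // AN24
        raw[5] = ADC1BUF5;    // AN25
        raw[6] = ADC1BUF6;    // AN26
        raw[7] = ADC1BUF7;    // AN27
        // ... convert the raw 12-bit readings into wheel angles and pass them upstream ...
    }
}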

 

The ERC 2016 team, posing with the rover. From left: (standing:) Jim Gray, Timothy Chin, Denis Wang, Simon Ireland, Nuno Das Neves, Helena Kertesz, (kneeling:) Harry J.E. Day, Seb Holzapfel

Conclusion

Going forward, the Off-World Robotics team will continue to develop and expand its use of signal processing with the aid of ADCs for the drive system, as well as other key systems such as the fine control of the arm. The experience gained from programming on the microprocessor and implementing the ADCs has been very rewarding for me. The knowledge will also prove invaluable to the team as we look to enhance the embedded system for the next iteration of the rover, code-named NUMBAT, with a Controller Area Network (which will appear in a future article!). I hope you have enjoyed this write-up and found the series informative.

To view the entire embedded system repo, click here.

Thank you for reading, to keep up to date with BLUEsat and the Rover, like us on Facebook and stay tuned for more posts on this site. If you are interested in getting involved, you can find more here.


Posted on by

Welcome back to the second article in our three-part series on the BLUEtongue 2.0 Rover’s suspension and drive system. In our last post Chris wrote about the mechanical re-design of the system, and in this post we will look at how we designed the high-level software architecture for this system. We will also look at some of the challenges we faced along the way.

The System

The BLUEtongue 2.0 Rover has four wheels, with the front two able to steer independently
BLUEtongue 2.0, with its four wheel modules. You can see the front-left module is turning.

The BLUEtongue 2.0 Rover has four independently controlled wheels, with the front two wheels also being able to steer. This was a big departure from BLUEtongue 1.0’s skid-steer system, which used six wheels and turned by having the wheels on one side of the rover spin in the opposite direction to those on the other side. The system was meant as a stepping stone towards a full swerve drive system on either BLUEtongue or our next rover platform, NUMBAT.

Furthermore, the BLUEsat Off-World Robotics code base is built around the ROS (Robot Operating System) framework. This framework provides a range of existing software and hardware integrations, and is based around the idea of many separate processes (referred to as nodes) that communicate over TCP-based ROS ‘topics’ using data structures called ‘messages’.
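Purely as an illustration of what a small ROS node looks like (this is not the rover’s code; the node and topic names are made up, but geometry_msgs::Twist is the same message type used by the steering code later in this article):

#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

// callback run every time a message arrives on the subscribed topic
void velocityCallback(const geometry_msgs::Twist::ConstPtr &msg) {
    ROS_INFO("Requested velocity: x=%.2f, y=%.2f", msg->linear.x, msg->linear.y);
}

int main(int argc, char **argv) {
    ros::init(argc, argv, "example_listener");  // register this process as a ROS node
    ros::NodeHandle nh;
    // topics are the TCP-backed channels that nodes use to exchange messages
    ros::Subscriber sub = nh.subscribe("cmd_vel", 10, velocityCallback);
    ros::spin();  // hand control to ROS so the callback runs as messages arrive
    return 0;
}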

This framework, along with the nature of BLUEsat as a student society, placed some interesting requirements on our design:

  • The system needed to be able to work with only two wheel modules being able to steer, but as much as possible the code needed to be reusable for a system with four such modules.
  • The system needed to avoid being heavily tied to the motors and embedded systems used on BLUEtongue, as many of them would be changing for NUMBAT.
  • Due to European Rover Challenge (ERC) requirements, the system needed to support user input, and be able to be controlled by an AI.

As a consequence of the above, and to avoid reinventing the wheel (no pun intended), the system needed to use standard ROS messages and conventions as much as possible. It also needed to be very modular to improve reusability.

User Input

The user controls the rover’s speed and rotation using an Xbox controller. After some investigation, our initial approach was to have one of the analogue sticks control the rover’s direction, whilst the other controlled its speed. This was primarily because we had found that using a single stick to control direction and speed was not very intuitive for the user.

As ROS joystick analogue inputs are treated as a range between -1 and 1 on two axes, the first version of the system simply used the up/down axis of the left stick as the magnitude applied to a unit vector formed by the position of the right stick. The code looked a bit like this:

double magnitude = joy->axes[SPEED_STICK] * SPEED_CAP;
cmdVel.linear.x = joy->axes[DIRECTION_STICK_X] * magnitude;
cmdVel.linear.y = joy->axes[DIRECTION_STICK_Y] * magnitude * -1; 

(Note: all code in this article uses the ROS convention of x being forwards <-> backwards, and y being port <-> starboard.)

This code produced a geometry_msgs::Twist message that was used by our steering system. However, we found that this system had several problems:

  • It was very difficult to do fine manoeuvring of the rover, because the range of slow speeds corresponded to too small an area on the joystick. However, since we could only control the power rather than the velocity of the motors, we couldn’t simply reduce the overall power of the rover as this would mean it was unable to traverse steep gradients.
  • Physical deadzones on the joysticks meant that driving the rover could be somewhat jerky.
  • The code above had a mathematical problem, where the rover’s max speed was higher whilst steering than could be achieved travelling in a straight line.
  • Having a two axis direction control was unintuitive for the driver, and hard to control accurately.

In response to this, one of our team members (Sean Thompson) developed a new control scheme that used only one axis of each stick. In this scheme the left stick was used for power, whilst the right stick was used for (port/starboard) steering. The system also implemented deadzone handling and exponential scaling, which allowed for better manoeuvring of the rover at low speeds whilst still being able to utilise the rover’s full power.

Full source code for this implementation can be found here.
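Purely as a sketch of the deadzone and exponential scaling idea (the DEADZONE and EXPO constants here are illustrative, not the values used on the rover; see the link above for the real implementation):

#include <cmath>

static const double DEADZONE = 0.08;  // ignore stick noise near the centre
static const double EXPO     = 3.0;   // higher values give finer control at small deflections

// map a raw stick axis in [-1, 1] to a shaped command in [-1, 1]
double shapeAxis(double raw) {
    if (std::fabs(raw) < DEADZONE) {
        return 0.0;  // inside the deadzone: no output at all
    }
    // rescale so the output starts from 0 at the deadzone edge but still reaches 1.0
    double span = (std::fabs(raw) - DEADZONE) / (1.0 - DEADZONE);
    // exponential curve: small deflections give proportionally smaller commands
    return std::copysign(std::pow(span, EXPO), raw);
}

The shaped value can then be multiplied by the rover’s speed cap before being packed into the Twist message, much like the earlier snippet.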

The rover uses the following control configuration whilst driving. Diagram Credit: Helena Kertesz.

Steering

The steering system for the rover allows the rover to rotate about a point on the line formed between the two rear wheels. In order to achieve this, each wheel must run at a separate speed and the two front wheels must have separate angles. The formulas used to determine these variables are displayed below.

θ_turn = atan2(v_y, v_x),    r_turn = d_rear / sin(θ_turn),    ω_rover = |v| / r_turn
v_wheel = ω_rover · r_wheel,    φ_front = 90° − atan2(r_back, L)

(where d_rear is the distance from the rover’s centre to the rear axle, r_wheel is each wheel’s distance from the rotation centre, r_back is the adjacent rear wheel’s turning radius, and L is the front-to-rear wheel separation)

The rover steers by adjusting both the speed of its wheels and the angle of its front wheels. Diagram Credit: Chris Miller.

In order to accommodate this, a software module was built that converted the velocity vector (v_x, v_y) discussed in the previous section into the rotational velocities required for each of the wheel modules, and the angles needed for the front two wheels. The system would publish these values as ROS messages in a form compatible with the standard ros_command module, enabling easier testing in ROS’s Gazebo simulator and hopefully good compatibility with other ROS systems we might need to use in the future.

The following code was used to implement these equations:

        const double turnAngle = atan2(velMsg->linear.y,velMsg->linear.x);
        const double rotationRadius = HALF_ROVER_WIDTH_X/sin(turnAngle);
        
        // we calculate the point about which the rover will rotate
        // relative to the centre of our base_link transform (0,0 is the centre of the rover)

        geometry_msgs::Vector3 rotationCentre;
        // the x axis is in line with the rear wheels of the rover, as shown in the above diagram
        rotationCentre.x = -HALF_ROVER_WIDTH_X;
        // and the y position can be calculated by applying Pythagoras to the rotational radius of the rover (r_turn) and 
        // half the length of the rover
        rotationCentre.y = sqrt(pow(rotationRadius,2)-pow(HALF_ROVER_LENGTH_Y,2));
        // omega_rover is then calculated by the magnitude of our velocity vector over the rotational radius
        const double angularVelocity = fabs(sqrt(pow(velMsg->linear.x, 2) + pow(velMsg->linear.y, 2))) / rotationRadius;

        //calculate the radiuses of each wheel about the rotation center
        //NOTE: if necessary this could be optimised
        double closeBackR = fabs(rotationCentre.y - ROVER_CENTRE_2_WHEEL_Y);
        double farBackR = fabs(rotationCentre.y + ROVER_CENTRE_2_WHEEL_Y);
        double closeFrontR = sqrt(pow(closeBackR,2) + pow(FRONT_W_2_BACK_W_X,2));
        double farFrontR = sqrt(pow(farBackR,2) + pow(FRONT_W_2_BACK_W_X,2));
        
        //V = wr
        double closeBackV = closeBackR * angularVelocity;
        double farBackV = farBackR * angularVelocity;
        double closeFrontV = closeFrontR * angularVelocity;
        double farFrontV = farFrontR * angularVelocity;
        
        //work out the front wheel angles
        double closeFrontAng = DEG90-atan2(closeBackR,FRONT_W_2_BACK_W_X);
        double farFrontAng = DEG90-atan2(farBackR,FRONT_W_2_BACK_W_X);
        
        //if we are in reverse, we just want to go round the same circle in the opposite direction
        if(velMsg->linear.x < 0) {
            //flip all the motorVs
            closeFrontV *=-1.0;
            farFrontV *=-1.0;
            farBackV *=-1.0;
            closeBackV *=-1.0;
        }
        
        
        //finally we flip the values if we want the rotational centre to be on the other side of the rover
        if(0 <= turnAngle && turnAngle <= M_PI) {
            output.frontLeftMotorV = closeFrontV;
            output.backLeftMotorV = closeBackV;
            output.frontRightMotorV = farFrontV;
            output.backRightMotorV = farBackV;
            output.frontLeftAng = closeFrontAng;
            output.frontRightAng = farFrontAng;
            ROS_INFO("right");
        } else {
            output.frontRightMotorV = -closeFrontV;
            output.backRightMotorV = -closeBackV;
            output.frontLeftMotorV = -farFrontV;
            output.backLeftMotorV = -farBackV;
            output.frontLeftAng = -farFrontAng;
            output.frontRightAng = -closeFrontAng;
            ROS_INFO("left");
        }

Separating steering from the control of individual joints also had another important advantage, in that it significantly improved the testability and ease of calibration of the rover’s systems. Steering code could be tested to some extent in the gazebo simulator using existing plugins, whilst control of individual joints could be tested without the additional layer of abstraction provided by the steering system. It also allowed the joints to be calibrated in software (more on this in our next article).

Joint Control System

In BLUEtongue 1.0, our joint control system consisted of many lines of duplicated code in the main loop of our serial driver node. This code took incoming joystick messages and converted them directly into PWM values to be sent through our embedded systems to the motors. This code was developed rapidly and was quite difficult to maintain, but with the addition of the feedback loops needed to develop our swerve drive, the need to provide valid transforms for 3D and automation purposes, and our desire to write code that could be easily moved to NUMBAT, a new solution was needed.

We took an object-oriented approach to solving this problem. First, a common JointController class was defined. This abstract class handled subscribing to the joint’s control topic, calling the joint’s update functions, and providing a standard interface for use by our hardware driver (BoardControl in the diagram below) and transform publisher (part of JointsMonitor). This class would be inherited by classes for each type of joint, where the control loop for that joint type could be implemented (for example, the drive motors’ control algorithm was implemented in JointVelocityController, whilst the swerve motors were handled by JointSpeedBasedPositionController).
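To give a feel for the shape of this interface, here is a heavily simplified, illustrative sketch (the real classes live in the rover’s repository and differ in their message types and control laws; std_msgs::Float64 and the scaling below are placeholders):

#include <ros/ros.h>
#include <std_msgs/Float64.h>
#include <string>

class JointController {
public:
    JointController(ros::NodeHandle &nh, const std::string &topic, const std::string &name)
        : jointName(name) {
        // every joint subscribes to its own control topic
        sub = nh.subscribe(topic, 1, &JointController::controlCallback, this);
    }
    virtual ~JointController() {}

    // called once per cycle by the hardware driver; each joint type implements its own
    // control loop and returns the raw value (e.g. a PWM) to be sent to the embedded system
    virtual int updateController() = 0;

protected:
    // store the most recent setpoint received on the control topic
    void controlCallback(const std_msgs::Float64::ConstPtr &msg) { setpoint = msg->data; }

    std::string jointName;
    double setpoint = 0.0;
    ros::Subscriber sub;
};

// e.g. a drive motor: turn the requested wheel velocity into a PWM value
class JointVelocityController : public JointController {
public:
    using JointController::JointController;
    int updateController() override {
        // placeholder control law; the real implementation closes the loop on feedback
        return static_cast<int>(setpoint * 100.0);
    }
};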

UML diagram of the BLUEtongue 2.0 Rover's driver control system
The BLUEtongue 2.0 Rover’s joint system consisted of a JointsMonitor class, used to manage timings and transforms, as well as an abstract JointController class that was used to implement the different joint types with a standard interface. Diagram Credit: Harry J.E Day, with amendments by Simon Ireland and Nuno Das Neves.

In addition, a JointsMonitor class was implemented. This class stored a list of joints and published debugging and transform information at set intervals. This was a significant improvement in readability over our previous ROS_INFO-based system, as it allowed us to quickly monitor just the joints we wanted. The main grunt work of this class was done in the endCycle function, which was called after the commands had been sent to the embedded system. It looked like this:

// the function takes in the time the data was last updated by the embedded system
// we treat this as the end of the cycle
void JointsMonitor::endCycle(ros::Time endTime) {
    cycleEnd = endTime;
    owr_messages::board statusMsg;
    statusMsg.header.stamp = endTime;
    ros::Time estimateTime = endTime;
    int i,j;
    // currentStateMessage is a transform message, we publish of all the joints
    currentStateMessage.velocity.resize(joints.size());
    currentStateMessage.position.resize(joints.size());
    currentStateMessage.effort.resize(joints.size());
    currentStateMessage.name.resize(joints.size());
    
    // we look through each joint and estimate its transform for a few intervals in the future
    // this improves our accuracy as our embedded system didn't update fast enough
    for(i =0; i < numEstimates; i++, estimateTime+=updateInterval) {
        currentStateMessage.header.stamp = estimateTime;
        currentStateMessage.header.seq +=1;
        j =0;
        for(std::vector<JointController*>::iterator it = joints.begin(); it != joints.end(); ++it, j++) {
            jointInfo info = (*it)->extrapolateStatus(cycleStart, estimateTime);
            publish_joint(info.jointName, info.position, info.velocity, info.effort, j);

        }
        statesPub.publish(currentStateMessage);
    }
    // we also publish debugging information for each joint
    // this tells the operator where we think the joint is
    // how fast we think it is moving what PWM value we want it to be at. 
    for(std::vector<JointController*>::iterator it = joints.begin(); it != joints.end(); ++it, j++) {
        jointInfo info = (*it)->extrapolateStatus(cycleStart, endTime);
        owr_messages::pwm pwmMsg;
        pwmMsg.joint = info.jointName;
        pwmMsg.pwm = info.pwm;
        pwmMsg.currentVel = info.velocity;
        pwmMsg.currentPos = info.position;
        pwmMsg.targetPos = info.targetPos;
        statusMsg.joints.push_back(pwmMsg);
    }
    debugPub.publish(statusMsg);  
    
    
}

Overall this system proved to be extremely useful: it allowed us to easily adjust code for all motors of a given type and to reuse code when new components were added. In addition, the standardised interface allowed us to quickly debug problems (of which there were many) and easily add new functionality. One instance where this came in handy was with our lidar gimbal. The initial code to control this joint was designed to be used by our autonomous navigation system, but we discovered that for some tasks it was extremely useful to mount a camera on top and use the gimbal to control the angle of the camera. Due to the existing standard interface it was easy to add code to our joystick system to enable this, and we didn’t need to make any major changes to our main loop, which would have been risky that close to the competition.

Conclusion

Whilst time-consuming to implement and somewhat complex, this system enabled us to have a much more manageable code base. This was achieved by splitting the code into separate ROS nodes that supported standard interfaces, and using an OO model for implementing our joint control. As a result, it is likely that this system will be used on our next rover (NUMBAT), even though the underlying hardware and the way we communicate with our embedded systems will be changing significantly.

Next in this series you will hear from Simon Ireland on the embedded systems we needed to develop to get position feedback for a number of these joints, and some of the problems we faced.

Code in this article was developed for BLUEsat UNSW with contributions from Harry J.E Day, Simon Ireland and Sean Thompson, based on the BLUEtongue 1.0 steering and control code by Steph McArthur, Harry J.E Day, and Sam Scheding. Additional assistance in review and algorithm design was provided by Chris Squire, Chris Miller, Yiwei Han, Helena Kertesz, and Sebastian Holzapfel. Full source code for the BLUEtongue 2.0 rover as deployed at the European Rover Challenge 2016, as well as a full list of contributors, can be found on GitHub.


Posted on by

This is the first part in a small three-part series about the re-design of the rover suspension. We’ll touch on aspects across several parts of the team, but for now I’ll introduce you to the mechanical aspects.

However, before I talk about this re-design, I feel it necessary to explain why such a substantial change was needed. When we first began the design of BLUEtongue back in 2013, the team opted for a Rocker-Bogie style of suspension due to its many benefits in traction and stability when operating in rocky environments.

The BLUEtongue 1.0 rover on the Globe Lawn steps.
Initial Mechanical Build

Due to the complexity and cost attached to steerable wheels (such as swerve drives), we utilised skid steering like you’ll find on a tank or bobcat. Unfortunately, to a significant extent, we misunderstood the physical nature of the suspension we were designing and the ramifications our choice to pursue skid steering would have. Upon initial testing, the inherent problems in the system made themselves known. First, the suspension was too tall and insufficiently rigid for a skid-steering design. Because of this, attempts to turn the rover resulted either in flexure of the structure or in the bogie “kicking”, rendering the rover immobile. You may see older photos of the rover with what we called “bracing bars”.

The BLUEtongue 1.0 Rover; you can see the bracing bars attached to each of the rover's wheel assemblies.
Addition of Bracing Bars

These bars locked the bogie to the rocker, permitting limited steering capability and allowing the rover to limp around. Secondly, the rocker was too long and couldn’t fit in conventional luggage. As we’d planned from the start to flat-pack the rover into our personal luggage for transit to and from the contest, we had to search long and hard to find a suitable enclosure. Thirdly, the construction order. As many undergraduates quickly realise when they build things for the first time, build order is a very important thing to consider. In a Computer Aided Design (CAD) environment, assembly really is as easy as a few clicks. Need to mount a motor in a tight spot? Sure! Try to do this in physical space, where motors can’t fly through walls? Not so easy. Because of this, our assembly process was very convoluted, requiring gearboxes to be joined to the motors inside other structures and removed again for disassembly, etc. All in all, our first suspension iteration was an utter nightmare to put together. Hindsight really is 20/20.

So, now that we’re on the same page as to the why, I want to introduce you to the what. After our first appearance at the European Rover Challenge in 2015, we realised the suspension was one of the key limiting factors of the BLUEtongue rover platform. With the knowledge that a fundamental redesign was needed, we got to work over the next few months. The final design is a parallel swing-arm suspension with a full-rotation swerve drive. The new system was designed with a heavy focus on steering, dynamic response, assembly and transport.

A CAD render of the BLUEtongue 2.0 Mars Rover with its new suspension system
CAD Render with new Suspension

As seen in the video attached below, steering is achieved through the actuation of a radially free, but axially constrained, shaft. Due to the low loads experienced and limited rotation speeds, this arrangement is achieved with radial bearings and circlips. The design originally called for the use of a swivelling hub (really just a small-scale Lazy Susan) to provide the axial restoring force. However, during initial testing these hubs were omitted to allow power cabling to pass through the shaft centres, and it quickly became evident they were unnecessary. Luckily so, as this topside location was later used to mount analogue potentiometers for feedback once it was established that the intended locating method of relative encoders and magnetically activated homing was insufficient (stay tuned for our next two articles for more on this). To drive the shaft, a DC motor with gearhead was mounted in parallel, and an additional reduction gear step was used to mechanically link the two. Problems arose from this arrangement: the torque loading during operation consistently began to “strip” the lock screw of the brass pinion gear, leading to un-actuated free rotation of the shaft. This problem was easily solved through the use of thicker-walled carbon steel (1045, for anyone interested) replacement pinion gears.

 

 

Coupled with the problem of rover steering is the dynamic response. Due to time pressures, we were unable to properly characterise the design to validate our solution. As a result, we opted to take a leaf from the hobbyists’ book and use shock absorbers designed for large-scale RC cars. Whilst a little smaller than ideal, the readily available variety of damping fluids and compression springs allowed for on-the-fly adaptation and variability. This allowed us to tailor the dynamics of the system to those desirable for the rover. This design will serve as a starting point to aid in verification of analytical and numerical modelling, laying the foundation for our upcoming NUMBAT rover. I’ve included some slow-motion video for you to enjoy; it’ll give you a good idea of how the suspension operates under an impulse loading. Watch this space for future posts about this kind of thing, as we’ll be revisiting it later (eventually…).

 

 

I’m not going to dive too deep into the remaining points on assembly and transport, as they deal more with how you design something as opposed to what you’re designing. Our main objective here was to decouple mounting arrangements such that subassemblies can be shipped separately and then joined with minimal effort. If you take a look at the suspension, it can be boiled down into three main parts: the suspension subassembly, the rotation subassembly and the wheel subassembly. When mated, these form a completed suspension and drive assembly that can then easily be joined to the rover chassis. All in all, we only need to insert or remove a total of nine screws to join or remove each suspension unit. A major improvement over the Rocker-Bogie, which would require a complete disassembly of both the wheel and suspension structures. (Lessons learned)

Thank you for reading, like us on Facebook or stay tuned here for more articles, and feel free to get involved with the project if this grabbed your interest. You can find more about joining here.