Welcome to our monthly updates! We are going to trial one of these each month to keep you updated on what’s going on at BLUEsat. January has certainly been a busy time for all of our teams here.
It’s been a fantastic month of development. Our team finalised the rover’s suspension system, a component that we anticipate will provide a great deal of stability to the platform (see image). Additionally, our laser-cut chassis parts have arrived, and after a bit of post-machining they will be ready for fitment tests and full chassis construction. Looking ahead, the team plans to have the top and base plates of the chassis manufactured in the next few weeks.
Thomas Renneberg, Robotics CTO & Mech Chapter Lead
From the Software Team
Steady progress is being made across the team. We’ve had a few key developments in the CAN bus network: our embedded system for publishing ROS messages is now sending, and the on-board computer is receiving and routing the packets (the latter part is in final testing). Rover Software still has a few members filtering back from holiday, and with some new members (Oliver and Saksham) joining, we should have an interesting year ahead.
Simon Ireland, Rover Software Chapter Lead
From the Electrical Team
It’s been a good start to the year. Altium training is under way, and a few design projects have been proposed for the coming semester, including a pair of side-module circuit boards and some optional ones such as power-line filter and arm module boards. Work on the power module and connector boards should resume shortly. There has also been a focus on DC-DC converter testing and renewal, both for BLUEtongue droving and NUMBAT construction.
Jonathan Wong, Rover Electrical Chapter Lead
From the Chief Pilot
Rover Droving has finally started again after a hiatus, and we’ve had a number of people interested in Droving. Over the last few weeks it has been particularly difficult to get the Rover started, and on 27 January the step-up transformer from the power supply to the NUC was found to be broken. A new transformer has since been ordered and arrived, and if things go according to plan, we will start full-fledged Droving this month. I also plan to start breaking down the ERC rules into smaller tasks so we can practise them.
This month was very productive for the balloon team. All our subsystems, including a Raspberry Pi-based data logger and a radio-controlled separation mechanism, were completed and integrated. Ultra-low-temperature testing was conducted in a laboratory fridge set to -70°C over 30–31 January to simulate the stratospheric temperature conditions that will be experienced on a flight. This testing highlighted some weaknesses in our payload construction, such as power supplies not operating at low temperatures, and measures will be taken to remedy them. We had planned a launch for early February, but that has been pushed back to late February due to unforeseen circumstances not pertaining to our development.
Steady progress has been made on designing experiments for the biology team. With help from our friends at Flinders University, we designed an incubator capable of controlling the intensity and wavelength of light available to a sample. This will give us valuable information about the on-board conditions required on GreenSat and help minimise the energy requirements of the payload.
The satellite power team focused on completing several prototype designs this month. The designs for a variable test load were finalised and the components ordered. Prototype designs for a lithium battery charger were also completed, with testing of the design to occur next month. Finally, testing of the team’s power regulators at low temperatures was undertaken in conjunction with the balloon team.
Serial Data: The laser being on or off is used to send binary data (1s and 0s). By reading the voltage across a photoresistor, we can recover the binary data.
Audio Data: By controlling the voltage across the laser, we can control its intensity, from which we can obtain analogue data.
We plan to improve on both systems to a point where we can implement either into a PCB.
Balloon Telemetry
Recently, the Groundstation team has partnered with the Balloon team to implement telemetry. We have decided to use a ‘Pi in the Sky’ telemetry board to send data from the payload which is received and processed by a ‘USRP SDR’. We will research how to use the ‘Pi in the Sky’ board over the coming weeks.
Development of Reaction Wheel v3 is underway, with five PCBs already designed. These boards contain the supporting hardware for the reaction wheel board, including power supply and regulation, an on-board computer (OBC), a data-logging sensor board, an ADCS central hub, and a mini groundstation for communicating with and commanding the reaction wheel while the experiment is in motion.
Mark Yeo, ADCS Squad Lead
Operations & Exec
Secretary’s Update
It’s certainly been a busy start to the year. The main focus of the media and events team has been ramping up to O-Week, and we have a lot planned for that, with more to be finalised in the coming weeks!
We’ve also been trialling a new on-boarding approach, where we run a structured session every three weeks rather than accepting new members every week. This came out of a survey we ran last year on onboarding and our recruitment process, and it aims to improve member retention in the first few months. It should also give our team leads more time to focus on their projects in between. A big thank you to Taofiq for spearheading that project!
Our regular social evenings are going well, and we had a very successful “Jackbox Games” night a few weeks back after our Saturday workday.
Finally, I’m very pleased to see the first release of our monthly updates and our email newsletter! These should help improve awareness of the society’s projects, recruit new members, and improve internal communication between teams. I’m looking forward to seeing them in the coming months.
So you’ve written the ultimate ROS program: after thousands of lines of code your robot will finally achieve sentience and bring about the singularity!
One by one you launch your nodes, each one bringing the Apocalypse ever closer. You hit enter on the last command. And… nothing happens. What went wrong? How will you find, and forever squash, that bug that prevented your moment of triumph? This blog attempts to answer those questions, and more*.
At BLUEsat we’ve had our share of complicated ROS debugging problems. The best ones happen when you are halfway through a competition task, with time ticking on the clock, although this article will also look at the more common situation of debugging in a less time-pressured, and less fire-prone, environment**.
Below are several tools and techniques that we have successfully deployed to debug our ROS environment.
Keep Calm And … FIRE!
You’ve probably heard this before, but it’s very important when debugging not to jump to conclusions or apply fixes you haven’t tested properly. Google, for example, has a policy of rolling back changes on its services rather than trying to push a fix. A similar idea applies in a competition or time-pressured situation: make sure you have thought through that patch that removes the “don’t kill humans” safety from your robot! That being said, a rollback is unlikely to be applicable in a competition situation, nor is it likely to put out that fire you just started on your robot. So we can’t just copy Google, but we should think about what we are doing before we do it.
Basically, any patches or configuration fixes you apply in such a situation are a calculated risk, and you should make sure you understand those risks before you act. During the European Rover Challenge last year I found it was possible to tweak small settings, restart certain nodes, and re-calibrate systems; but it was too risky to power cycle the rover during a task, due to the time it took to establish communication. Likewise, restarting our drive systems or cameras was very disruptive to our pilot, so it could only be done in situations where the damage done by not fixing the system could be worse. That being said, after a critical camera failed we did attempt to programmatically power cycle that device, the decision being that the camera was important enough to attempt such a risky move. (In the end we weren’t able to do this during the task, and our pilot managed to navigate the rover without the camera in question.)
In a non-time-pressured situation you can be more flexible. It is possible to test different options and see if they work, provided they don’t damage your robot. However, a structured approach is often beneficial for more complicated bugs. I often find when debugging an intermittent or hard-to-detect problem that it is easy to lose track of what I’ve tried, or to get results mixed up. A technique I’ve found very useful is to record what I’m doing as I do it, especially if the problem involves sensor data. We had a number of problems with our Rover‘s steering system when we first implemented our swerve drive, and I found writing down ADC values and rotation readings in different situations really helped debug it. (You can read more about how we use ADCs in our steering system in one of our previous articles.)
Basically, the main point is to keep your head clear and think through the consequences before you act. Know the risks and have your E-Stop button ready! Now let’s look at some tools you can use to aid your debugging.
At the start of semester we ran a number of seminars on different skills that we use at BLUEsat. In the first of these videos, former Rover Software Team Lead Harry J.E. Day gives an introduction to the Robot Operating System (ROS), covering setting up ROS catkin workspaces, and basic publishers and subscribers.
In our last article, as part of our investigation into different Graphical User Interface (GUI) options for the next European Rover Challenge (ERC), we looked at a proof of concept for using QML and Qt5 with ROS. In this article we will continue with that proof of concept by creating a custom QML component that streams a ROS sensor_msgs/Image topic, and adding it to the window we created in the previous article.
Setting up our Catkin Packages
In qt-creator reopen the workspace project you used for the last tutorial.
For this project we need an additional ROS package for our shared library that will contain our custom QML Video Component. We need this so the qt-creator design view can deal with our custom component. In the project window, right click on the “src” folder, and select “add new”.
Select “ROS>Package” and then fill in the details so they match the screenshot below. We’ll call this package “ros_video_components” and the Catkin dependencies are “qt_build roscpp sensor_msgs image_transport”
Click “next” and then “finish”
Open up the CMakeLists.txt file for the ros_video_components package, and replace it with the following file.
##############################################################################
# CMake
##############################################################################
cmake_minimum_required(VERSION 2.8.3)
project(ros_video_components)
##############################################################################
# Catkin
##############################################################################
# qt_build provides the qt cmake glue, roscpp the comms for a default talker
find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
include_directories(include ${catkin_INCLUDE_DIRS})
# Use this to define what the package will export (e.g. libs, headers).
# Since the default here is to produce only a binary, we don't worry about
# exporting anything.
catkin_package(
CATKIN_DEPENDS qt_build roscpp sensor_msgs image_transport
INCLUDE_DIRS include
LIBRARIES RosVideoComponents
)
##############################################################################
# Qt Environment
##############################################################################
# this comes from qt_build's qt-ros.cmake which is automatically
# included via the dependency call in package.xml
find_package(Qt5 COMPONENTS Core Qml Quick REQUIRED)
##############################################################################
# Sections
##############################################################################
file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/ros_video_components/*.hpp)
QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
##############################################################################
# Sources
##############################################################################
file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
##############################################################################
# Binaries
##############################################################################
add_library(RosVideoComponents ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
qt5_use_modules(RosVideoComponents Quick Core)
target_link_libraries(RosVideoComponents ${QT_LIBRARIES} ${catkin_LIBRARIES})
target_include_directories(RosVideoComponents PUBLIC include)
Note: This code is based on the auto generated CMakeList.txt file provided by the qt-create ROS package.
This is similar to what we did for the last example, but with a few key differences.
This tells catkin to export the RosVideoComponents build target as a library to all dependencies of this package.
Then in this section we tell catkin to make a shared-library target called “RosVideoComponents” that links the C++ source files with the Qt MOC/header files and the Qt resources, rather than a ROS node.
Next we need to fix our package.xml file: the qt-creator plugin has a bug where it puts all the ROS dependencies in one build_depend and one run_depend tag, rather than listing them separately. You need to separate them like so:
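For the dependencies we are using in this package, the separated tags would look roughly like this (assuming the package.xml format 1 tag names):

```xml
<!-- one dependency per tag, rather than all names inside a single tag -->
<build_depend>qt_build</build_depend>
<build_depend>roscpp</build_depend>
<build_depend>sensor_msgs</build_depend>
<build_depend>image_transport</build_depend>
<run_depend>qt_build</run_depend>
<run_depend>roscpp</run_depend>
<run_depend>sensor_msgs</run_depend>
<run_depend>image_transport</run_depend>
```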
Again we need to create src/, resources/ and include/ros_video_components folders in the package folder.
We also need to make some changes to our gui project to depend on the library we generate. Open up the CMakeLists.txt file for the gui package and replace the following line:
Building the Video Streaming Component
When we are using the rover, the primary purpose the GUI serves in most ERC tasks is displaying camera feed information to the user. Thus it felt appropriate to use streaming video from ROS as a proof of concept to determine whether QML and Qt5 would be an appropriate technology choice.
We will now look at building a QML component that subscribes to a ROS image topic, and displays the data on screen.
Right click on the src folder of the ros_video_components folder, and select “Add New.”
We first need to create a class for our Qt component, so select “C++>C++ Class”.
We’ll call our class “ROSVideoComponent”, and it has the custom base class “QQuickPaintedItem”. We’ll also need to select “Include QObject” and adjust the path of the header file so the compiler can find it. Make sure your settings match those in this screenshot:
Open up the header file you just created and update it to match the following
#ifndef ROSVIDEOCOMPONENT_H
#define ROSVIDEOCOMPONENT_H
#include <QQuickPaintedItem>
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <sensor_msgs/Image.h>
#include <QImage>
#include <QPainter>
class ROSVideoComponent : public QQuickPaintedItem {
// this marks the class as a Qt object, enabling signals and slots
Q_OBJECT
public:
ROSVideoComponent(QQuickItem *parent = 0);
void paint(QPainter *painter);
void setup(ros::NodeHandle * nh);
private:
void receiveImage(const sensor_msgs::Image::ConstPtr & msg);
ros::NodeHandle * nh;
image_transport::Subscriber imageSub;
// these are used to store our image buffer
QImage * currentImage;
uchar * currentBuffer;
};
#endif // ROSVIDEOCOMPONENT_H
Here, QQuickPaintedItem is a Qt class that we can override to provide a QML component with a custom paint method. This will allow us to render our ROS video frames.
Also in the header file we have a setup function, which we use to initialise our ROS subscriptions (since we don’t control where this class’s constructor is called), and our conventional ROS subscriber callback.
Open up the ROSVideoComponent.cpp file and change it so it looks like this:
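The file starts with the constructor; a sketch, using the class names from our header (the rest of the file is built up over the following steps):

```cpp
#include "ros_video_components/ROSVideoComponent.hpp"

ROSVideoComponent::ROSVideoComponent(QQuickItem *parent) :
    QQuickPaintedItem(parent),
    currentImage(NULL),
    currentBuffer(NULL) {
}
```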
Here we use an initialiser list to call our parent constructor, and then initialise our currentImage and currentBuffer pointers to NULL. The latter is very important, as we use it to check whether we have received any ROS messages.
This function takes a pointer to our ROS NodeHandle and uses it to create a subscription to the “/cam0” topic. We use image_transport, as recommended by ROS for video streams, and direct it to call the receiveImage callback.
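A sketch of that setup function, assuming the “/cam0” topic name used throughout this article:

```cpp
void ROSVideoComponent::setup(ros::NodeHandle *nh) {
    this->nh = nh;
    // image_transport wraps the subscription so compressed transports also work
    image_transport::ImageTransport imageTransport(*nh);
    imageSub = imageTransport.subscribe("/cam0", 1, &ROSVideoComponent::receiveImage, this);
}
```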
And now we implement said callback:
void ROSVideoComponent::receiveImage(const sensor_msgs::Image::ConstPtr &msg) {
    // check to see if we already have an image frame; if we do then delete it
    // to avoid memory leaks
    if (currentImage) {
        delete currentImage;
    }
    // allocate a buffer of sufficient size to contain our video frame
    uchar *tempBuffer = (uchar *) malloc(sizeof(uchar) * msg->data.size());
    // and copy the message into the buffer
    // we need to do this because the QImage API requires the buffer we pass in to continue
    // to exist whilst the image is in use, but the msg and its data will be lost once we
    // leave this context
    memcpy(tempBuffer, msg->data.data(), msg->data.size());
    // we then create a new QImage; the code below matches the spec of an image produced
    // by the ROS gscam module
    currentImage = new QImage(tempBuffer, msg->width, msg->height, QImage::Format_RGB888);
    ROS_INFO("Received");
    // free the previous frame's buffer (malloc/free pairing) and keep the new one
    if (currentBuffer) {
        free(currentBuffer);
    }
    currentBuffer = tempBuffer;
    // and re-render the component to display the new image
    update();
}
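The matching paint method, which Qt calls whenever we request an update, can then simply draw the latest frame; a minimal sketch:

```cpp
void ROSVideoComponent::paint(QPainter *painter) {
    // until the first frame arrives currentImage is NULL, so draw nothing
    if (currentImage) {
        painter->drawImage(QPoint(0, 0), *currentImage);
    }
}
```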
We now have our QML component, and you can check that everything is working as intended by building the project (the hammer icon in the bottom left of the IDE, or catkin_make). In order to use it we must add it to our QML file, but first, since we want to be able to use it in qt-creator’s design view, we need to add a plugin class.
Right click on the src folder and select “Add New” again.
Then select “C++>C++ Class.”
We’ll call this class OwrROSComponents, and use the following settings:
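A sketch of what this plugin class might contain; the “bluesat.owr” module URI and version numbers here are illustrative choices, not fixed values:

```cpp
#ifndef OWRROSCOMPONENTS_H
#define OWRROSCOMPONENTS_H

#include <QQmlExtensionPlugin>
#include <QtQml>
#include "ros_video_components/ROSVideoComponent.hpp"

class OwrROSComponents : public QQmlExtensionPlugin {
    Q_OBJECT
    Q_PLUGIN_METADATA(IID "org.qt-project.Qt.QQmlExtensionInterface")

public:
    // called by the QML engine to register our types with it
    void registerTypes(const char *uri) {
        // makes ROSVideoComponent usable from QML files that import bluesat.owr
        qmlRegisterType<ROSVideoComponent>("bluesat.owr", 1, 0, "ROSVideoComponent");
    }
};

#endif // OWRROSCOMPONENTS_H
```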
This adds our custom “ROSVideoComponent”, whose type we just registered in the previous steps, to our window.
Note: the @disable-check M16 comment prevents qt-creator from getting confused about our custom component, which it doesn’t detect properly. This is an unfortunate limitation of using cmake (catkin) rather than Qt’s own build system.
Then, because Qt’s runtime and qt-creator use different search paths, we also need to register the type on the first line of our MainApplication::run() function.
To test it, publish a video stream using your preferred ROS video library.
For example, if you have the ROS gscam library set up and installed, you could run the following to stream video from a webcam:
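A sketch of one way to do this, assuming gscam is built for GStreamer 1.0 and a V4L2 webcam on /dev/video0; the pipeline string and the topic remap onto “/cam0” are illustrative:

```shell
# describe the capture pipeline via gscam's GSCAM_CONFIG environment variable
export GSCAM_CONFIG="v4l2src device=/dev/video0 ! videoconvert ! video/x-raw,format=RGB ! videoconvert"
# run the camera node, remapping its output topic to the one our component subscribes to
rosrun gscam gscam camera/image_raw:=/cam0
```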
So in our previous post we learnt how to set up Qt and QML in ROS’s build system, and got that all working with the Qt-Creator IDE. This time we built on that system to develop a widget that takes ROS video data and renders it to the screen, demonstrating how to integrate ROS’s message system into a Qt/QML environment.
The code in this tutorial forms the basis of BLUEsat’s new rover user interface, which is currently in active development. You can see the current progress on our github, where there should be a number of additional widgets being added in the near future. If you want to learn more about the kind of development we do at BLUEsat or are a UNSW student interested in joining feel free to send an email to info@bluesat.com.au.
Acknowledgements
Some of the code above is based on a Stack Overflow answer by Kornava about how to create a custom image rendering component, which can be found here.
The BLUEsat Off-World Robotics software team is rebuilding our user interface, in an effort to address the maintenance and learning-curve problems we have with our existing glut/OpenGL-based GUI. After trying out a few different options, we settled on a combination of Qt and QML. We liked this option as it allows easy maintenance, with a reasonable amount of power and flexibility. We decided to share a simple tutorial we made for working with Qt and ROS.
In part one of this article we go through the process of setting up a ROS package with Qt5 dependencies and building a basic QML application. In the next instalment we will look at streaming a ROS sensor_msgs/Image video feed into a custom QML component. The article assumes that you have ROS Kinetic set up, and some understanding of how ROS works. It does not assume any prior knowledge of Qt.
Full sources for this tutorial can be found on BLUEsat’s github.
Setting up the Environment
First things first, we need to set up Qt, and because one of our criteria for GUI solutions is ease of use, we also need to set up qt-creator so we can take advantage of its visual editor features.
Fortunately there is a ROS plugin for qt-creator (which you can find more about here). To setup we do the following (instructions for Ubuntu 16.04, for other platforms see the source here):
We also need to install the ROS Qt packages; these allow us to easily set up some of the catkin dependencies we will need later. (Note: unfortunately these packages are currently designed for Qt4, so we can’t take full advantage of them.)
sudo apt-get install ros-kinetic-qt-build
Setting up our ROS workspace
We will use qt-creator to create our workspace, so start by opening qt-creator.
On the welcome screen select “New Project”. Then choose “Import Project>Import ROS Workspace”.
Name the project “qt-gui” and set the workspace path to a new folder of the same name. An error dialogue will appear, because we are not using an existing workspace, but that is fine.
Then click “Generate Project File”
Click “Next”, choose your version control settings then click “Finish”
For this project we need a ROS package that contains our gui node. In the project window, right click on the “src” folder, and select “add new”.
Select “ROS>Package” and then fill in the details so they match the screenshot below. We’ll call it “gui” and the Catkin dependencies are “qt_build roscpp sensor_msgs image_transport”
Click “next” and then “finish”
Open up the CMakeLists.txt file for the gui package, and replace it with the following file.
##############################################################################
# CMake
##############################################################################
cmake_minimum_required(VERSION 2.8.0)
project(gui)
##############################################################################
# Catkin
##############################################################################
# qt_build provides the qt cmake glue, roscpp the comms for a default talker
find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
set(QML_IMPORT_PATH "${QML_IMPORT_PATH};${CATKIN_GLOBAL_LIB_DESTINATION}" )
set(QML_IMPORT_PATH2 "${QML_IMPORT_PATH};${CATKIN_GLOBAL_LIB_DESTINATION}" )
include_directories(${catkin_INCLUDE_DIRS})
# Use this to define what the package will export (e.g. libs, headers).
# Since the default here is to produce only a binary, we don't worry about
# exporting anything.
catkin_package()
##############################################################################
# Qt Environment
##############################################################################
# this comes from qt_build's qt-ros.cmake which is automatically
# included via the dependency call in package.xml
#rosbuild_prepare_qt4(QtCore QtGui QtQml QtQuick) # Add the appropriate components to the component list here
find_package(Qt5 COMPONENTS Core Gui Qml Quick REQUIRED)
##############################################################################
# Sections
##############################################################################
file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/gui/*.hpp)
QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
##############################################################################
# Sources
##############################################################################
file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
##############################################################################
# Binaries
##############################################################################
add_executable(gui ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
qt5_use_modules(gui Quick Core)
target_link_libraries(gui ${QT_LIBRARIES} ${catkin_LIBRARIES})
target_include_directories(gui PUBLIC include)
install(TARGETS gui RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
Note: This code is based on the auto generated CMakeList.txt file provided by the qt-create ROS package.
Let’s have a look at what this file is doing:
In this section we set up catkin to include Qt5, and tell it we need the Core, Qml, and Quick components. This differs from a normal qt_build CMakeLists.txt because we need Qt5 rather than Qt4.
Finally, we set up a ROS node (executable) target called “gui” that links the C++ source files with the Qt MOC/header files and the Qt resources.
Next we need to fix our package.xml file: the qt-creator plugin has a bug where it puts all the ROS dependencies in one build_depend and one run_depend tag, rather than listing them separately. You need to separate them like so:
Create src/, include/<package name> and resources/ folders in the package. (Note: it doesn’t seem possible to do this in qt-creator; you have to do it from the terminal or a file browser.)
Our ROS workspace is now set up and ready to go. We can start on our actual code.
Creating a Basic QML Application using the Catkin Build System
We’ll start by creating a basic ROS node, that displays a simple qml file.
Right click on the src folder in the gui package, and select “Add New”
Select “ROS>Basic Node” and then click choose
Call it “guiMain” and click Finish. You should end up with a new file open in the editor that looks like this:
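The generated file is roughly the standard ROS node skeleton; a sketch (the plugin’s exact template may differ slightly):

```cpp
#include <ros/ros.h>

int main(int argc, char **argv) {
    // register this process with the ROS master under the name "guiMain"
    ros::init(argc, argv, "guiMain");
    ros::NodeHandle n;

    // hand control to ROS until the node is shut down
    ros::spin();
    return 0;
}
```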
We’ll come back to this file later, but first we need to add our QML application. In order to call ros::spinOnce without having to implement threading, we need to subclass the QQmlApplicationEngine class so we can use a Qt ‘slot’ to trigger it in the application’s main loop (more on slots later). So to start we need to create a new class: right click on the src directory again and select “Add New.”
Select “C++>C++ Class “, then click “Choose”
Set the “Class Name” to “MainApplication”, and the “Base Class” to “QQmlApplicationEngine.”
Rename the header file so it has the path “../include/gui/MainApplication.hpp”. This allows the CMakeLists.txt file we set up earlier to find it and run the MOC compiler on it.
Rename the source file so that it is called “MainApplication.cpp”. Your dialog should now look like this:
Click “Next”, then “Finish”.
Change your MainApplication.hpp file to match this one:
#ifndef MAINAPPLICATION_H
#define MAINAPPLICATION_H
#include <ros/ros.h>
#include <QQmlApplicationEngine>
class MainApplication : public QQmlApplicationEngine {
Q_OBJECT
public:
MainApplication();
//this method is used to setup all the ROS functionality we need, before the application starts running
void run();
//this defines a slot that will be called when the application is idle.
public slots:
void mainLoop();
private:
ros::NodeHandle nh;
};
#endif // MAINAPPLICATION_H
Two important parts here. First, we add the line “Q_OBJECT” below the class declaration; this tells the Qt MOC compiler to do Qt magic here in order to make this into a valid Qt object.
Secondly, we add the following lines:
public slots:
void mainLoop();
What does this mean? Well, Qt uses a system of “slots” and “signals” rather than the more conventional “listener” system used by many other GUI frameworks. In layman’s terms, a slot acts similarly to a callback: when an event it’s been “connected” to occurs, the function gets called.
Now we want to update the MainApplication.cpp file. Edit it so it looks like the following:
#include "gui/MainApplication.hpp"
#include <QTimer>
MainApplication::MainApplication() {
}
void MainApplication::run() {
//this loads the qml file we are about to create
this->load(QUrl(QStringLiteral("qrc:/window1.qml")));
//Setup a timer to get the application's idle loop
QTimer *timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(mainLoop()));
timer->start(0);
}
The main things here are that we load a QML file at the resource path “qrc:/window1.qml” (in a moment we will create this file), and the timer code. Basically, we create a timer object, create a connection between the timer object’s “timeout” event (signal) and our “mainLoop” slot (which we will create in a moment), and then set the timeout to 0, causing the event to trigger whenever the application is idle.
Finally, we want to add the mainLoop function to the end of our MainApplication code; it simply calls ros::spinOnce to get the latest ROS messages whenever the application is idle.
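In code, that should be as simple as the following sketch, matching the header we wrote earlier:

```cpp
void MainApplication::mainLoop() {
    // dispatch any pending ROS messages to their callbacks, then return
    // control to Qt's event loop
    ros::spinOnce();
}
```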
We now have all the C++ code we need to run our first demo. All that remains is writing the actual QT code. Right click on the “resources” folder, and select “Add New.”
In the New File window, select “Qt” and then “QML File (Qt Quick 2)”, and click “Choose.”
Call the file “window1” and finish.
We want to create a window rather than an item, so change the qml file so it looks like this:
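A minimal window might look like the sketch below; the import versions and dimensions are illustrative choices:

```qml
import QtQuick 2.0
import QtQuick.Window 2.2

Window {
    visible: true
    width: 600
    height: 400
}
```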
Now we will use the visual editor to add a simple image to the window. With the QML file open, click on the far left menu and select “Design” mode. You should get a view like this:
From the left hand “QML Types” toolbox drag an image widget onto the canvas. You should see rectangle with “Image” written in it on your canvas.
We need to add an image for it to use. To do this we need a resource file, so switch back to edit mode. Right click on the resources folder and select “Add New.”
Select “Qt>Qt Resource File” and then click “Choose”
Call the resource file “images,” and finish.
This should open the resource file editor, first you need to add a new prefix. Select “add>New Prefix”
Change the “prefix” to “/image”.
We now want to add an image. Find an appropriate image file on your computer, then click “Add Files” and navigate to it. If the file is outside your project, qt-creator will prompt you to save it to your resources folder, which is good. You should now have a view that looks like this:
Switch back to the qml file and design view.
Click the image; on the right-hand side will be a drop-down marked “source”. Select the image you just added from it. (Note: if the designer has auto-filled this box but the image preview is not appearing, you may need to select another image and then reselect the one you want.) I used the BLUEsat logo as my image:
Now we just need to put the QML somewhere we can find it. As in steps 21 to 26, create a new resource file in the resources folder called “qml” and add window1.qml to it under the “/” prefix.
At this point you should be able to build your project. You can build using catkin_make as you normally would, or by clicking the build button in the bottom left corner of the IDE.
To run your ROS node you can either run it as you would normally from the command line using “rosrun”, or you can run it from the IDE. To set up running it from the IDE, select “Projects” from the left hand menu, then under “desktop” select run.
Under the run heading, select “Add Run Step>rosrun step.” You should get a screen like this.
Set the package to “gui”, and the target should auto fill with “gui” as well.
Press run in the bottom left corner. You should get something like this (note: depending on where you placed the image, you may need to resize the window to see it). Important: as always with ROS, roscore needs to be running for nodes to start correctly.
You now have a working GUI application, compiled with catkin and running a ROS spin loop at the appropriate rate. But this is a bit useless without using some information from other ROS nodes in the GUI, so in our next article we will look at streaming video from ROS into Qt. Stay tuned!
Welcome back to the second article in our three part series on the BLUEtongue 2.0 Rover’s suspension and drive system. In our last post Chris wrote about the mechanical re-design of the system, and in this post we will look at how we designed the high level software architecture for this system. We will also look at some of the challenges we faced along the way.
The System
BLUEtongue 2.0, with its four wheel modules. You can see the front left module is turning.
The BLUEtongue 2.0 Rover has four independently controlled wheels, with the front two also able to steer. This was a big departure from BLUEtongue 1.0’s skid steer system, which used six wheels and turned by having the wheels on one side of the rover spin in the opposite direction to those on the other side. The system was meant as a stepping stone towards a full swerve drive system on either BLUEtongue or our next rover platform, NUMBAT.
Furthermore, the BLUEsat Off-World Robotics code base is built around the ROS (Robot Operating System) framework. This framework provides a range of existing software and hardware integrations, and is based around the idea of many separate processes (referred to as nodes) that communicate over TCP based ROS ‘topics’ using data structures called ‘messages’.
That, along with the nature of BLUEsat as a student society, placed some interesting requirements on our design:
The system needed to be able to work with only two wheel modules being able to steer, but as much as possible the code needed to be reusable for a system with four such modules.
The system needed to avoid being heavily tied to the motors and embedded systems used on BLUEtounge, as many of them would be changing for NUMBAT.
Due to European Rover Challenge (ERC) requirements, the system needed to support user input, and be able to be controlled by an AI.
As a consequence of the above, and to avoid reinventing the wheel (no pun intended), the system needed to use standard ROS messages and conventions as much as possible. It also needed to be very modular to improve reusability.
User Input
The user controls the rover’s speed and rotation using an Xbox controller. After some investigation, our initial approach was to have one of the analogue sticks control the rover’s direction, whilst the other controlled its speed. This was primarily because we had found that using a single stick to control both direction and speed was not very intuitive for the user.
As ROS joystick analogue inputs are treated as a range between -1 and 1 on two axes, the first version of the system simply used the up/down axis of the left stick as the magnitude applied to a unit vector formed by the position of the right stick. The code looked a bit like this:
(Note: all code in this article uses the ROS standard of x being forwards <-> backwards, and y being port <-> starboard.)
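The original code block did not survive in this copy of the post, so here is a hedged sketch of the scheme described, using a plain struct in place of the ROS joystick and geometry_msgs::Twist message types (the struct and function names are stand-ins, not the original code):

```cpp
#include <cmath>

// Stand-in for the linear part of a geometry_msgs::Twist message.
struct Velocity {
    double x;  // forwards <-> backwards (ROS convention)
    double y;  // port <-> starboard
};

// First control scheme: the left stick's up/down axis supplies the
// magnitude, and the right stick's position supplies the direction,
// normalised to a unit vector.
Velocity joystickToVelocity(double leftUpDown, double rightForward, double rightLateral) {
    const double magnitude = leftUpDown;  // joystick axes are in [-1, 1]
    const double norm = std::sqrt(rightForward * rightForward +
                                  rightLateral * rightLateral);
    Velocity v{magnitude, 0.0};           // right stick centred: drive straight
    if (norm > 0.0) {
        // scale the unit direction vector by the commanded magnitude
        v.x = magnitude * (rightForward / norm);
        v.y = magnitude * (rightLateral / norm);
    }
    return v;
}
```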
This code produced a geometry_msgs::Twist message that was used by our steering system. However we found that this system had several problems:
It was very difficult to do fine manoeuvring of the rover, because the range of slow speeds corresponded to too small an area on the joystick. However, since we could only control the power rather than the velocity of the motors, we couldn’t simply reduce the overall power of the rover as this would mean it was unable to traverse steep gradients.
Physical deadzones on the joysticks meant that driving the rover could be somewhat jerky.
The code above had a mathematical problem, where the rover’s max speed was higher whilst steering than could be achieved travelling in a straight line.
Having a two axis direction control was unintuitive for the driver, and hard to control accurately.
In response to this, one of our team members (Sean Thompson) developed a new control system that used only one axis on each stick: the left stick for power, and the right stick for (port/starboard) steering. The system also implemented a dead zone and exponential scaling, which allowed for better manoeuvring of the rover at low speeds whilst still being able to utilise the rover’s full power.
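The dead zone and exponential scaling idea can be sketched as follows. This is a minimal illustration, not Sean's implementation (see the linked source for that); the 0.1 threshold and the cubic curve are assumptions chosen for the example:

```cpp
#include <cmath>

// Shape a raw joystick axis value in [-1, 1]:
//  - ignore small deflections inside the dead zone (stick noise / drift)
//  - apply a cubic response curve so small deflections map to very small
//    outputs (fine control), while full deflection still gives full power.
double shapeAxis(double raw, double deadzone = 0.1) {
    if (std::fabs(raw) < deadzone) {
        return 0.0;
    }
    // rescale so the output starts at 0 at the edge of the dead zone
    // and still reaches 1.0 at full deflection
    const double scaled = (std::fabs(raw) - deadzone) / (1.0 - deadzone);
    const double shaped = std::pow(scaled, 3.0);
    return std::copysign(shaped, raw);
}
```

Half deflection after the dead zone (e.g. raw = 0.55 with a 0.1 dead zone) maps to only 0.125 of full power, which is what gives the driver fine control at low speeds.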
Full source code for this implementation can be found here.
The rover uses the following control configuration whilst driving. Diagram Credit: Helena Kertesz.
Steering
The steering system for the rover allows the rover to rotate about a point on the line formed between the two rear wheels. In order to achieve this, each wheel must run at a separate speed and the two front wheels must have separate angles. The formulas used to determine these variables are displayed below.
The rover steers by adjusting both the speed of its wheels and the angle of its front wheels. Diagram Credit: Chris Miller.
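The formula image from the original post is not reproduced here, but the relations can be reconstructed from the implementation below. Writing $w$ for the code's HALF_ROVER_WIDTH_X, $d$ for HALF_ROVER_LENGTH_Y, $l$ for FRONT_W_2_BACK_W_X, and $r_i$ for each wheel's radius about the rotation centre, they are approximately:

```latex
\theta_{\mathrm{turn}} = \operatorname{atan2}(v_y,\, v_x), \qquad
r_{\mathrm{turn}} = \frac{w}{\sin \theta_{\mathrm{turn}}}
\\[4pt]
y_c = \sqrt{r_{\mathrm{turn}}^2 - d^2}
\qquad \text{(rotation centre, level with the rear wheels)}
\\[4pt]
\omega = \frac{\lVert v \rVert}{r_{\mathrm{turn}}}, \qquad
v_i = r_i\,\omega \qquad \text{(speed of each wheel)}
\\[4pt]
\theta_{\mathrm{front}} = 90^\circ - \operatorname{atan2}(r_{\mathrm{back}},\, l)
\qquad \text{(angle of each front wheel)}
```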
In order to accommodate this, a software module was built that converted the velocity vector discussed in the previous section into the rotational velocities required for each of the wheel modules, and the angles needed for the front two wheels. The system would publish these values as ROS messages in a form compatible with the standard ros_command module, enabling easier testing in ROS’s Gazebo simulator and hopefully good compatibility with other ROS systems we might need to use in the future.
The following code was used to implement these equations:
const double turnAngle = atan2(velMsg->linear.y,velMsg->linear.x);
const double rotationRadius = HALF_ROVER_WIDTH_X/sin(turnAngle);
// we calculate the point about which the rover will rotate
// relative to the centre of our base_link transform (0,0 is the centre of the rover)
geometry_msgs::Vector3 rotationCentre;
// the x axis is in line with the rear wheels of the rover, as shown in the above diagram
rotationCentre.x = -HALF_ROVER_WIDTH_X;
// and the y position can be calculated by applying Pythagoras to the rotational radius of the rover (r_turn) and
// half the length of the rover
rotationCentre.y = sqrt(pow(rotationRadius,2)-pow(HALF_ROVER_LENGTH_Y,2));
// omega_rover is then calculated by the magnitude of our velocity vector over the rotational radius
const double angularVelocity = fabs(sqrt(pow(velMsg->linear.x, 2) + pow(velMsg->linear.y, 2))) / rotationRadius;
//calculate the radii of each wheel about the rotation centre
//NOTE: if necessary this could be optimised
double closeBackR = fabs(rotationCentre.y - ROVER_CENTRE_2_WHEEL_Y);
double farBackR = fabs(rotationCentre.y + ROVER_CENTRE_2_WHEEL_Y);
double closeFrontR = sqrt(pow(closeBackR,2) + pow(FRONT_W_2_BACK_W_X,2));
double farFrontR = sqrt(pow(farBackR,2) + pow(FRONT_W_2_BACK_W_X,2));
//V = wr
double closeBackV = closeBackR * angularVelocity;
double farBackV = farBackR * angularVelocity;
double closeFrontV = closeFrontR * angularVelocity;
double farFrontV = farFrontR * angularVelocity;
//work out the front wheel angles
double closeFrontAng = DEG90-atan2(closeBackR,FRONT_W_2_BACK_W_X);
double farFrontAng = DEG90-atan2(farBackR,FRONT_W_2_BACK_W_X);
//if we are in reverse, we just want to go round the same circle in the opposite direction
if(velMsg->linear.x < 0) {
//flip all the motorVs
closeFrontV *=-1.0;
farFrontV *=-1.0;
farBackV *=-1.0;
closeBackV *=-1.0;
}
//finally we flip the values if we want the rotational centre to be on the other side of the rover
if(0 <= turnAngle && turnAngle <= M_PI) {
output.frontLeftMotorV = closeFrontV;
output.backLeftMotorV = closeBackV;
output.frontRightMotorV = farFrontV;
output.backRightMotorV = farBackV;
output.frontLeftAng = closeFrontAng;
output.frontRightAng = farFrontAng;
ROS_INFO("right");
} else {
output.frontRightMotorV = -closeFrontV;
output.backRightMotorV = -closeBackV;
output.frontLeftMotorV = -farFrontV;
output.backLeftMotorV = -farBackV;
output.frontLeftAng = -farFrontAng;
output.frontRightAng = -closeFrontAng;
ROS_INFO("left");
}
Separating steering from the control of individual joints also had another important advantage: it significantly improved the testability and ease of calibration of the rover’s systems. Steering code could be tested to some extent in the Gazebo simulator using existing plugins, whilst control of individual joints could be tested without the additional layer of abstraction provided by the steering system. It also allowed the joints to be calibrated in software (more on this in our next article).
Joint Control System
In BLUEtongue 1.0, our joint control system consisted of many lines of duplicated code in the main loop of our serial driver node. This code took incoming joystick messages and converted them directly into PWM values to be sent through our embedded systems to the motors. It was developed rapidly and was quite difficult to maintain, but with the addition of the feedback loops needed for our swerve drive, the need to provide valid transforms for 3D and automation purposes, and our desire to write code that could be easily moved to NUMBAT, a new solution was needed.
We took an object oriented approach to solving this problem. First, a common JointController class was defined: an abstract class that handled subscribing to the joint’s control topic, calling the joint’s update functions, and providing a standard interface for use by our hardware driver (BoardControl in the diagram below) and transform publisher (part of JointsMonitor). This class was inherited by classes for each type of joint, where the control loop for that joint type could be implemented (for example, the drive motors’ control algorithm was implemented in JointVelocityController, whilst the swerve motors were handled in JointSpeedBasedPositionController).
The BLUEtongue 2.0 Rover’s joint system consisted of a JointsMonitor class, used to manage timings and transforms, as well as an abstract JointController class that was used to implement the different joint types with a standard interface. Diagram Credit: Harry J.E Day, with amendments by Simon Ireland and Nuno Das Neves.
In addition, a JointsMonitor class was implemented. This class stored a list of joints and published debugging and transform information at set increments. This was a significant improvement in readability over our previous ROS_INFO based system, as it allowed us to quickly monitor the joints we wanted. The main work of this class was done in the endCycle function, which was called after the commands had been sent to the embedded system. It looked like this:
// the function takes in the time the data was last updated by the embedded system
// we treat this as the end of the cycle
void JointsMonitor::endCycle(ros::Time endTime) {
cycleEnd = endTime;
owr_messages::board statusMsg;
statusMsg.header.stamp = endTime;
ros::Time estimateTime = endTime;
int i,j;
// currentStateMessage is a joint state message (used to generate transforms); we publish the state of all the joints
currentStateMessage.velocity.resize(joints.size());
currentStateMessage.position.resize(joints.size());
currentStateMessage.effort.resize(joints.size());
currentStateMessage.name.resize(joints.size());
// we look through each joint and estimate its transform for a few intervals in the future
// this improves our accuracy as our embedded system didn't update fast enough
for(i =0; i < numEstimates; i++, estimateTime+=updateInterval) {
currentStateMessage.header.stamp = estimateTime;
currentStateMessage.header.seq +=1;
j =0;
for(std::vector<JointController*>::iterator it = joints.begin(); it != joints.end(); ++it, j++) {
jointInfo info = (*it)->extrapolateStatus(cycleStart, estimateTime);
publish_joint(info.jointName, info.position, info.velocity, info.effort, j);
}
statesPub.publish(currentStateMessage);
}
// we also publish debugging information for each joint
// this tells the operator where we think the joint is
// how fast we think it is moving, and what PWM value we want it to be at.
for(std::vector<JointController*>::iterator it = joints.begin(); it != joints.end(); ++it, j++) {
jointInfo info = (*it)->extrapolateStatus(cycleStart, endTime);
owr_messages::pwm pwmMsg;
pwmMsg.joint = info.jointName;
pwmMsg.pwm = info.pwm;
pwmMsg.currentVel = info.velocity;
pwmMsg.currentPos = info.position;
pwmMsg.targetPos = info.targetPos;
statusMsg.joints.push_back(pwmMsg);
}
debugPub.publish(statusMsg);
}
Overall this system proved to be extremely useful: it allowed us to easily adjust code for all motors of a given type, and to reuse code when new components were added. In addition, the standardised interface allowed us to quickly debug problems (of which there were many) and easily add new functionality. One instance where this came in handy was with our lidar gimbal. The initial code to control this joint was designed to be used by our autonomous navigation system, but we discovered that for some tasks it was extremely useful to mount a camera on top and use the gimbal to control the camera’s angle. Due to the existing standard interface it was easy to add code to our joystick system to enable this, and we didn’t need to make any major changes to our main loop, which would have been risky that close to the competition.
Conclusion
Whilst time consuming to implement and somewhat complex, this system gave us a much more manageable code base. This was achieved by splitting the code into separate ROS nodes that supported standard interfaces, and by using an object oriented model for our joint control. As a result, it is likely that this system will be used on our next rover (NUMBAT), even though the underlying hardware and the way we communicate with our embedded systems will change significantly.
Next in this series you will hear from Simon Ireland on the embedded systems we needed to develop to get position feedback for a number of these joints, and some of the problems we faced.
Code in this article was developed for BLUEsat UNSW with contributions from Harry J.E Day, Simon Ireland and Sean Thompson, based on the BLUEtongue 1.0 steering and control code by Steph McArthur, Harry J.E Day, and Sam Scheding. Additional assistance in review and algorithm design was provided by Chris Squire, Chris Miller, Yiwei Han, Helena Kertesz, and Sebastian Holzapfel. Full source code for the BLUEtongue 2.0 rover as deployed at the European Rover Challenge 2016, as well as a full list of contributors, can be found on GitHub.
We held our AGM today and voted in our new executive. A big thank you to our outgoing executives Tom Dixon (President), Denis Wang (COO), Helena Kertesz (CTO – Off World Robotics), and Sam Wardhaugh (CTO – Satellite), who have done a brilliant job leading the society this year. We went to the ERC, established radio contact with the International Space Station through our groundstation, and much more, and we look forward to another brilliant year of space engineering!
BLUEsat UNSW’s rover team has achieved 9th place in the European Rover Challenge (ERC)!
We competed against 44 other qualifying teams from across the world, with 22 of those teams making it to Poland for the finals.
A big thank you to all the new and old friends we have made at the competition, as well as to our sponsors and everyone else who has helped us during the competition. We will be posting more information shortly.
BLUEsat is proud to present the BLUEtongue 2.0 Rover for the European Rover Challenge 2016! We have made a lot of improvements this year, and there is still more to come. Check it out.