
It’s one thing to design a satellite or rover, but without manufacturing you’re dead in the water. Over the years at BLUEsat the problem, or more specifically the cost, of manufacturing has been a recurring issue for our mechanical engineering teams. It’s not unusual for the bill for manufacturing our designs to come in at several times our material costs, not to mention the long lead times, lack of quality control and no second chances once the part comes in.

Late last year the society decided that enough was enough and purchased a CNC router. A CNC router is a simple machine at its core: a rapidly spinning cutting tool mounted on driven guide rails that control its position in space. Combined with computer control, a CNC router can cut out almost any geometry we choose.

BLUEsat’s CNC Router

The process for making a part on the CNC has three stages:

  1. Model the part in CAD (we use Autodesk Inventor).
  2. Create a tool path using CAM software (we use HSM).
  3. Secure your material to the CNC router, load the tool path, and begin cutting.

One of the parts that we made recently was an aluminium work holding jig. The model is shown below. This part has some complex features such as bottom rails, notched sides, counterbored holes and raised supports. To make this part by hand would take days and a very competent machinist, and we have access to neither.

Jig Plate CAD model

Using this model, a tool path was developed with CAM software. The program does most of the heavy lifting, but the user must define the positions of each feature, the speed at which the machine moves and how fast the tool spins. These speeds are very important to the quality of the final piece and must be tailored to each feature. Below is an example of what the tool path looks like on the computer: red lines indicate that the machine is moving; blue lines show where it is cutting.

 

Jig Plate CAM operations

Finally, with our tool path created, we were ready to set up the CNC itself. The material needs to be secured to the surface of the bed to prevent any movement during the cutting operation. This can be done in a number of ways, such as using a machine vice or work holding clamps. For this piece, we started with work holding clamps and then secured it using holes drilled into the material itself.

Now onto the fun part, the cutting. The tool path is loaded onto the CNC and the machine is set to run. Generally, we do a single operation at a time. This gives us time to clean up after each cut and inspect if it was successful. Here are a few videos of cutting.

 

All up, this part took six hours to machine, including the setup, cutting and cleaning up of the part. Below is the final part:

Completed Jig Plate
Bottom View


Using our CNC has allowed for rapid prototyping of parts, drastically reduced lead times and most importantly, cut manufacturing costs by an order of magnitude.





At the start of semester we ran a number of seminars on different skills that we use at BLUEsat. In the first of these videos, former Rover Software Team Lead Harry J.E. Day gave an introduction to the Robot Operating System (ROS), covering setting up ROS catkin workspaces, and basic publishers and subscribers.

You will need the vm image from here: http://bluesat.com.au/vm-image/

The slides from this presentation can be found here: http://bluesat.com.au/an-introduction…

Errata:

  • Some slides in the recording refer to “std_msg”; there should be an ‘s’ on the end (i.e. “std_msgs”).
  • On the slide “A Basic Node – CMakeLists.txt (cont)” there should only be one ‘o’ in “node.”
  • In step 4 of the “Publisher (cont)” section there should be an ‘e’ on the end of “pub_node”.
  • The person on the last slide was BLUEsat’s first robotics COO, not CTO.

These have all been corrected in the slides.



Welcome back to my series on How to make a space mission! Last time we talked about how doing space activities has never been easier. CubeSats are making spacecraft cheaper and easier to make. Companies like Spaceflight and Nanoracks are making launch opportunities easier to access. And companies like SpaceX and Rocket Lab are reducing the costs of launch. As happened with the internet, opportunities for science and business are appearing in areas no one could have reasonably expected. For example, who would have expected people to pay to have their ashes put in space? That’s why this is the time to be thinking of ideas for space missions.

Here’s how I try to come up with ideas:

  1. Identify a problem;
  2. Understand the problem;
  3. Establish possible solutions; and
  4. Find the best solution.

As simple as they may sound, these steps are sufficient to build a really strong idea for a space mission. That said, this is by no means an easy process. The more time you put into these steps, the stronger your idea will be. Even if your idea turns out to be unfeasible right now, it just might be achievable in a few short years. And if it doesn’t turn out to be feasible? That’s failure, right? It is failure, but under the fail-fast approach, failing early in this brainstorming phase is best. You don’t want to spend months or years developing software or hardware, only to find out that it’s not possible or that no one’s interested in it!

Let’s dive in.

1) Identifying a problem

Wait a minute, why are we talking about problems? Why aren’t we talking about ideas and solutions? Well, it turns out that engineers, scientists, and startup founders all agree that the problem is the first thing that needs to be identified when trying to build something. It is the first step in the engineering design process, the scientific method, and in the lean startup approach. The engineering design process is shown below. Being an engineer, it is the process I’m most familiar with.

The engineering design process. Source: www.sciencebuddies.org

 

Figuring out the problem you’re trying to solve is probably the most important step. People with money are incredibly stingy folks, whether they be investors, grant providers, or otherwise. They won’t care about your solution, no matter how cool or amazing it is, if you can’t persuade them that the problem you’re solving is important.

Fortunately (or unfortunately), problems aren’t hard to come by. You can read about problems all day on the internet, often in news articles and blogs. Simply asking someone about their day might be enough for you to hear three or four problems. And since you, the reader, are part of several demographics, your problems might well be problems worth solving.

2) Understanding the problem

This step is where I try to understand what the problem actually needs. In engineering, we call this the “Specify Requirements” step. Simultaneously, I aim to determine whether the problem can theoretically be solved within the limits of natural laws and the resources we are capable of gathering. For example, no matter how much various groups might demand faster-than-light travel, we simply do not have any techniques to make a warp drive or hyperdrive! A more grounded example might be something like the following.

The government has found a need to track individual cars for what they assure you are perfectly non-dystopian reasons. To do this, we require a telescope in space capable of resolving objects 1m across or smaller. This is our requirement. Simple, right?

Now, assuming our space telescope is at a 500km altitude and it needs to be able to resolve objects of 1m size, we can do a bit of trigonometry to show that this comes to an angular size of 0.4 arcseconds (or about 0.0001 degrees). Due to something called the diffraction limit, there is a limit to how small an object a telescope can see. The rule can be generalised as: the bigger the telescope, the smaller the things it can see. We can see this relationship below.

The relationship between telescope diameter and angular resolution. Source: en.wikipedia.org

 

Assuming the government wants us to take pictures of cars in visible light, this means that for a resolution of 0.4 arcsecs we need a telescope 16 inches (41cm) in diameter. Considering CubeSats are generally made of 10cm cubes, fitting such a large telescope into a CubeSat would be quite a tall order! This lets us rule out this idea for CubeSats. That said, a larger satellite could quite easily take images of sufficient resolution.
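For the curious, here is a back-of-the-envelope version of that calculation. The 550nm wavelength (green light) is my assumption, and the factor of 1.22 comes from the Rayleigh diffraction criterion:

\[
\theta \approx \frac{1\,\mathrm{m}}{500\,\mathrm{km}} = 2\times10^{-6}\,\mathrm{rad} \approx 0.4'' ,
\qquad
D \gtrsim \frac{1.22\,\lambda}{\theta} = \frac{1.22 \times 550\times10^{-9}\,\mathrm{m}}{2\times10^{-6}\,\mathrm{rad}} \approx 0.34\,\mathrm{m}
\]

This lands in the same ballpark as the 16-inch figure quoted above.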

This step is a hard one, and will require significant research and review of scientific and commercial principles to get through. But again, the more time you spend here, the stronger your case!

3) & 4) Establishing possible solutions and picking the best one

Now we finally move into the design phase! These two steps are where things can get reeeeally complicated very quickly. Coming up with solutions may require some serious imagination and creativity. Picking the best solution is harder still, and may require some serious engineering chops and commercial considerations. As such, I won’t go into very much depth for these steps in this blog post.

While understanding the problem in the previous step, a number of solutions hopefully came to mind already. Indeed, we already considered one possible solution: 40cm telescope satellites at a 500km altitude. But what about other solutions? Why not just have a few drones flying around to take pictures? How about a plane? Or high altitude balloons? Considering my bias towards space, you can guess which solution I would pick! Here’s the justification:

  • A well made satellite will produce images for years and years at a time with a single investment. The other solutions require regularly purchasing flights, fuel, or balloons.
  • The satellite can take images of almost any place in the world without any additional investment, allowing you to make your business global as soon as your satellite launches. In comparison, the other solutions can only take images locally.

There are more issues that I haven’t covered. Nor have I produced any proof for the above statements. The reason for this is simple – I’m only writing a blog post, not proposing an actual mission! In the course of your own efforts, you will need to produce numbers through engineering and market analysis to back your assertions. These will be covered in Part 4 of this series.

 

So we didn’t go into very much depth at all! “Where is the space engineering?” you may ask. What was the point of all this? Well, my dear space-loving reader, it turns out that if you’ve done these steps to a reasonable level of detail, you’ve qualified yourself to take the next step – raising funds!

At BLUEsat, we’re in the midst of developing our own space mission. We’ve named it GreenSat. Creative, right? It’s to be a platform for agricultural and biological experiments in space, with the goal of enabling agriculture in space. Right now, we’re working on step 3 and heading towards step 4. We’ll be taking our work to the International Astronautical Congress, where we will present our ideas to an international audience. Through this, we’ll hopefully be able to get GreenSat funded and launched.

Join me in Part 3 where we’ll discuss the various avenues that now exist to raise money for space missions!



One of the most surprising things about our experience at the European Rover Challenge last year was how incredibly close to total failure we came. Two days before the competition began, while we were in Poland, our control board failed. In addition to having to port all of our embedded codebase to Arduino in two days, we had to fix our overcurrent protection mechanism on the rover’s claw. This was a critical system, since it prevented the claw servo from overheating when trying to pick up objects. Before we developed our original software solution, a large number of servos had been destroyed due to overheating. Due to errors we’d made in calibration during the port to Arduino, our original software solution didn’t work and we had to think of something else.

Seb Holzapfel and I realised that a hardware solution would also solve this problem. We designed the circuit shown below. It consists of an op-amp, a diode, a MOSFET and a few resistors. It was designed such that when a large amount of current flows through the 100mΩ sense resistor, the PWM signals are cut off from the claw. This causes the servo motor in the claw to stop drawing current, and therefore prevents overheating.
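A software cutoff along the same lines is one way to picture the principle: measure the current through a sense resistor and stop driving the servo when it gets too high. Below is a minimal Arduino-style sketch of that idea only; the pin numbers and the 5A trip point are invented for illustration, not our actual values.

#include <Servo.h>

const int CURRENT_SENSE_PIN = A0;  // ADC pin measuring the voltage across the shunt (hypothetical wiring)
const int CLAW_SERVO_PIN = 9;      // PWM pin driving the claw servo (hypothetical)
const float SHUNT_OHMS = 0.1;      // a 100mOhm sense resistor, as in the hardware circuit
const float CUTOFF_AMPS = 5.0;     // illustrative trip point

Servo clawServo;

void setup() {
    clawServo.attach(CLAW_SERVO_PIN);
}

void loop() {
    // Ohm's law: current through the shunt = voltage across it / resistance
    float volts = analogRead(CURRENT_SENSE_PIN) * (5.0 / 1023.0);
    float amps = volts / SHUNT_OHMS;
    if (amps > CUTOFF_AMPS) {
        clawServo.detach();  // stop sending PWM so the servo stops drawing current
    }
}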

But why was this worth writing about? Well, we had to build this in a very short period of time and we didn’t really have the correct spare parts on hand. We only had a few op-amps, some jumper cables, some veroboard and a few resistors. This wasn’t enough to build the circuit shown above. We had to improvise. I realised that since our control boards had all failed, we could, in fact, harvest them for the parts we needed. Fortunately, after doing a quick stocktake of the parts on the old control boards, I determined that all the parts we would need were present. We just had to salvage them.

 

Seb is shown above trying, ultimately unsuccessfully, to fix one of our control boards.


While everyone else was out testing the rover, and after we had successfully ported the code to Arduino, Seb and I found a bit of spare time on our hands: about two hours. We got to work. I desoldered parts from the dead control boards with the hot air gun, while Seb put those parts together into the monstrosity you see below.

It isn't pretty, but it worked.

We then tested it using a coil of wire as a load and verified that it worked. It was then deployed onto the rover. Despite being built in just an afternoon, it actually worked better than the previous software solution when we tested it with the rover. And with this “solution”, we came 9th.

And that’s how we built a critical system in just 2 hours from parts we salvaged from dead control boards.

 

BLUEsat OWR ERC Team.
Back row (left to right): myself, Timothy Chin, Denis Wang, Simon Ireland, Nuno Das Neves, Helena Kertesz.
Front row (left to right): Harry J.E. Day, Seb Holzapfel


In a previous post we covered the software to perform detumbling – the first function of an Attitude Determination + Control System (ADCS). We now move onto the second (and more interesting) function of an ADCS – to point yourself in a certain direction. This functionality is critical when you have any direction-specific equipment on your satellite – whether you have a parabolic antenna for providing the internet or a space laser for destroying the internet, you need your equipment to be pointing in the right direction for it to work properly.

Now in order to point, you must be able to work out what angle you’re currently sitting at – this is normally done by using a magnetometer (an electronic compass). However, in our setup there was a lot of magnetic interference from the motor, making the magnetometer very inaccurate in calculating the platform’s angle. Thus we have to make use of the other sensors on board – a gyro and an accelerometer (the latter being fairly useless for measuring rotation).

The Problem

Imagine that you’re driving a windowless tram, and someone tells you that there are five workers chained to the tracks exactly 100m ahead. Now it just so happens that you’ve brought along your favourite pair of bolt-cutters, but you also happen to be super lazy and would rather drive the tram to them instead of walking.

Bonus points if you use a bang-bang controller
The Trolley Problem for Engineers

 

As you look down at your odometer, you remember your old physics teacher going on about how to calculate your distance from your velocity (remember: distance = velocity ⨉ time). Unfortunately, this formula only works for constant velocities, and the accelerator is way too touchy to keep a constant speed. What do you do?

The Solution

Just like dealing with incriminating evidence, this problem can be solved by chopping it into tiny pieces. Let’s say for the first second of our journey, we recorded our average velocity – say 2m/s. Then we know that we’ve travelled 2m down the road (using our handy formula d = v ⨉ t). Similarly for the next second if we measure our average velocity to be 3m/s, then we know we’ve travelled 3m. So in total we’ve travelled 2m + 3m = 5m. It turns out we can calculate our position by repeating this process until we arrive at the workers.

Now in the case of our ADCS, our trusty gyro measures angular velocity. Angular velocity formulas work the same as linear ones, so we can actually use the same approach to work out the angle of our platform with ease. For example, if after 0.1s we measured our angular velocity to be 2°/s, then 0.1s later we measured it to be 3°/s, then our current angle would be (2⨉0.1 + 3⨉0.1)° = 0.5° anticlockwise from where we started (positive angles are anticlockwise by convention).

In code, this process (also known as ‘integration’) is simple. If every 5ms a new gyro measurement is taken, then the following line can be used to calculate the new platform angle.

currAng = oldAng + angVel * 0.005;
    //where angVel is the latest unbiased gyro measurement

The Controller

Now that we have a way to calculate the angle of our ADCS, we can reuse our proportional controller from our detumbling code:

Output = K ⨉ error

(where K is some tuned constant, and error was the difference between the current value and the target value.)

Now it’s possible to use this same controller to control our angle, but variety is the spice of life so let’s go for something a bit fancier – a ‘proportional derivative’ (PD) controller. In math-speak, a PD controller looks something like this:

Output = K ⨉ error + C ⨉ error’

(where K and C are two constants.)

The little apostrophe indicates a derivative (rate of change) – in this case it’s the derivative of the error, or how fast the error is changing. Remember that the error = target angle – current angle. The target angle is usually constant, so the rate of change of the target angle is 0 (because it’s not changing). Thus

error’ = –current angle’

Now we’re just left with the rate of change of the current angle (how fast the angle of the platform is changing) – sound familiar? The rate of change of the current angle is the same as the angular velocity, i.e. what we originally got from the gyro! So the derivative term simply subtracts a multiple of the gyro reading, which acts to damp the motion.

Putting this all together, in code-speak our PD controller is given by:

output = k * posnErr - c * angVel;
    //where posnErr = targetAng - currAng
    //and angVel = angular velocity from gyro 

Tuning K and C is simply a matter of trial and error, finding the pair of values that minimises the time to reach the target angle without overshooting it.
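Putting the integration and the PD controller together, the whole pointing loop fits in a few lines. In the sketch below, readGyro() and setWheelSpeed() are hypothetical stand-ins for the real gyro driver and reaction wheel interface, and the gains are arbitrary starting values:

const float DT = 0.005;   // 5ms between gyro measurements
const float K = 1.0;      // proportional gain (tune by trial and error)
const float C = 0.5;      // derivative gain (tune by trial and error)

float currAng = 0.0;      // our integrated estimate of the platform angle
float targetAng = 90.0;   // example setpoint

void loop() {
    float angVel = readGyro();                 // latest unbiased angular velocity
    currAng = currAng + angVel * DT;           // integrate to update the angle estimate
    float posnErr = targetAng - currAng;       // proportional term
    float output = K * posnErr - C * angVel;   // the gyro rate doubles as the damping term
    setWheelSpeed(output);
    delay(5);                                  // wait for the next measurement
}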

The Radio

Now it’s a bit boring if your satellite can only point in a predetermined direction – what’s really fun is being able to point it wherever you want while it’s in operation. A simple potentiometer on a separate Arduino allows us to digitise angles:

angle = analogRead(A0)/1024.0*(2*PI);
    //assuming your pot rotates a full 360deg

So now all we need to do is somehow transmit this angle to the Arduino on-board the reaction wheel system – to do this we need to utilise RF (Radio Frequency), the black magic of electrical engineering.
In the realm of RF, things behave in strange ways. Radio signals vary in range depending on the time of day, straight wires transfer more power than bendy ones, and you can even use funny-shaped wires to improve your signal quality. It takes mad skills to truly harness the power of RF, and many believe that RF engineers are actually wizards.

Luckily for us, these wizards also sell radio modules that can interface easily with our Arduinos, such as the NRF24L01 chip. After wiring up a module to both the on-board and remote Arduinos (example), we can transmit data using TMRh20’s RF24 library and the following code:

On the transmitter side:

RF24 radio(9, 10);              //set pins 9 and 10 to be CE and CSN respectively
const byte rxAddr[6] = "00001"; //set address of radio
radio.begin();                  //initialise radio
radio.setRetries(15, 15);       //if message not received, wait 4ms ((15+1)*250us) before retrying, retry 15x before giving up
radio.openWritingPipe(rxAddr);  //open a pipe for writing
radio.stopListening();          //stop listening for RF messages, switch to transmit mode

float angle = analogRead(A0)/1024.0*(2*PI); //read in angle of potentiometer
radio.write(&angle, sizeof(angle));         //transmit angle as a radio message

On the receiver side:

RF24 radio(18, 19);
const byte rxAddr[6] = "00001";
radio.begin();
radio.openReadingPipe(0, rxAddr);  //open a pipe for reading
radio.startListening();            //switch to receive mode, start listening for messages
if (radio.available()){            //if there's an incoming message,
    float rx;
    radio.read(&rx, sizeof(rx));   //store it in the variable rx
}

These code stubs transmit the angle of the potentiometer from the transmitting Arduino to the receiving Arduino (the one on the ADCS). This allows us to update the target angle on the on-board Arduino, thus giving us run-time control of its position.
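To close the loop, the receive branch just needs to feed the incoming value into the controller. Assuming targetAng is the setpoint variable from the controller section, that’s one extra line:

if (radio.available()) {           //if there's an incoming message,
    float rx;
    radio.read(&rx, sizeof(rx));   //read the transmitted angle
    targetAng = rx;                //and make it the PD controller's new setpoint
}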

Here’s a demonstration of this whole thing in action:

 

(As always, code is available here)



Did you want to be an astronaut growing up? Were your lofty ambitions brought down as you got older?

I’m here today to tell you to aim high once again – to aim for space. Maybe not as high as actually personally going to space, but you can get pretty close thanks to advancements in miniature spacecraft. It has never been easier to send something you built yourself to space. While it’s still a lot of work, the rewards are incredible.

In recent years, increasing numbers of small satellites have been launched by people and organisations that historically had no ability to reach space. The most common architecture for these small satellites is the CubeSat. CubeSats are built with commercial off-the-shelf parts and can be developed by individuals or small teams in the space of a few years. They are launched into space by hitchhiking on the backs of larger satellites. These advances mean that CubeSats have become as much as 1000 times cheaper than traditional satellites. This cost decrease has enabled the rise and growth of NewSpace startups such as Planet, which has grown to a valuation of over a billion dollars in five years.

The first pair of Planet’s Dove CubeSats being deployed from the International Space Station.

 

Here’s what you’ll need to get started on developing your own CubeSat mission:

  1. An idea;
  2. Some money; and
  3. A few skills.

It doesn’t sound like much, does it? Let’s go into a bit more depth.

The Idea

The idea you come up with will be what your bit of space hardware does once it’s up there, or in other words, its mission. Satellites are the invisible MVPs of today’s world, taking care of weather forecasts, global navigation, communications and much more. If you want to send some hardware up there in the form of a satellite or otherwise, you will first need to find a problem to solve with it.

There are over 2000 operational satellites in space today, all doing their part for us. However, the small satellites and hosted payloads you or I can send up will not be doing the same work as the bigger billion-dollar satellites. I mention this because the key to finding and building on a good idea isn’t sitting around and thinking really hard. To build a solid idea, you will have to read widely, speak to the people whose problem you’re looking to solve, and listen carefully to their feedback.

Money

While money isn’t as big an issue nowadays as it once was thanks to the NewSpace revolution, reaching space is still an expensive ordeal. You will most likely need hundreds of thousands of dollars to pay for construction, testing, launch, and operations.

Now, there is a way to reverse this problem entirely, and instead make money from your space mission. The way to do this is to go back to your idea and to ask: Is this something people would pay for? Am I tackling a big enough pain point for people? While this is not the traditional way, you and I are even less likely to find success begging NASA or ESA for money.

Skills

Now here is where we at BLUEsat come in! We are engineers with few ideas and little money, so skills are where we try to excel.

Some serious engineering ability is still needed nowadays to reach space. But with open source architectures and modular off-the-shelf parts becoming more readily available, the level of knowledge needed has dropped considerably. A bit of background on the basics of spacecraft engineering, electrical engineering and coding is all you’ll need to get started. Learning the rest will happen automatically as you design and build.

This is more or less how BLUEsat approaches spacecraft engineering. Students joining BLUEsat aren’t equipped with encyclopedic knowledge of how spacecraft are built and how they work. We simply teach our members the basics, install some software for them and point them towards some problem that we would like to solve. Every one of our senior members has started from such humble origins and slowly googled and built their way to greater understanding.

Members of BLUEsat’s ground station team messing about with RF electronics.

So why am I telling you this?

At BLUEsat, our Orbital Systems Division is hard at work on a number of projects. We have recently put together a team to work on developing a mission for our own CubeSat, and we need your help. No matter your year or degree, we will gladly take you in and help build your space engineering capabilities. We meet at Electrical Engineering (G17) room 419 every Saturday between 10:30AM and 5PM. Feel free to pop in and say hi.

I’ll see you folks in Part 2, where we talk a little more about how to come up with space mission ideas.



BUCKLE UP EVERYONE WE’RE GOING TO GO ON A WILD AND EXHILARATING JOURNEY INVOLVING SPREADSHEETS AND LOTS OF MEETINGS

BLUEsat does a lot of cool stuff. Robots, satellites and radios are all super cool. They get you engaged, using practical skills and building something physical that you can show off.

TO DO ALL OF THAT, YOU NEED MONEY

Soft drinks paid for a surprising amount of the robot

Before you can even start on figuring out how to allocate money to all the people who need it, you need to work out a rough budget for the project itself. As a non-technical member, I have no idea how much stuff costs. It therefore falls on team leads and the CTO to give me the numbers that we need to work with.

Batteries are expensive

From there, I work out a reasonable amount that can be allocated to each project, based on funding from previous years. Team leads then return a budget, and we discuss which parts are essential, working towards a target amount that everyone is happy with.

It’s at this point that we start stressing about how we’re going to afford all of this.

After figuring out how much money we’ll roughly need for the year, how we’re going to get the money suddenly becomes very important. Traditionally, BLUEsat has gotten a significant chunk of its funding from the University. In more recent years, our operations have grown and we’ve started working on two projects in tandem. This naturally increases costs. To keep up with our increasing capacity to churn through cash, we’ve started to seek sponsors from outside the university.

BLUEsat is currently sponsored by:

Platinum Sponsors:

NSW Government, UNSW

Silver Sponsors:

Arc Clubs

Bronze Sponsors:

ServoCity

Once all of that is done, we’ve got our budget for the year! Wouldn’t life be nice if nothing unplanned happened?




The Waterfall Plot

Want to get your hands metaphorically dirty with some BLUEsat projects but don’t have enough cash to fund both your HECS debt and your rover? This is a project so simple to follow that even an arts undergraduate can complete it. We will be transforming the radio signals that exist everywhere around you into a graph known as a waterfall plot. It will look something like this:


Leave this running on your computer long enough for your mates to walk by and they’ll think you’re tapping into Russian communications; you might even land yourself an internship at Telstra.

Technically you can tap into Russian communications (it’s not a joke there), but other practical and less anti-fascist applications include checking the signal strength of your network, interpreting packet radio, and listening to the Triple J Hottest 100 (as shown in the diagram above).

 

So let’s get started!

 

What you will need

You will need these to get started:

  • An SDR (software defined radio)
  • An antenna
  • GNU Radio
  • Python
  • A computer with at least one USB port

Software defined radios are extremely handy pieces of equipment thanks to their size, cost and effectiveness. They connect via USB and only require an antenna. We found a source that sells the model we will be using for only $20, which you can find here.

GNU Radio is a Python-based open-source graphical tool for creating signal flow graphs and generating flow-graph source code. We will be using GNU Radio to communicate with our SDR, and it has the potential to do much, much more. You can download the software here: www.gnuradio.org

Since GNU Radio runs on Python, it makes sense to have Python installed. But what is Python? No, it is not malware, so rest assured it won’t swallow up your operating system. Python is a widely used programming language, and the one we will be using in this project. Make sure you get the right bit version by checking whether your downloaded version of GNU Radio runs on 64-bit or 32-bit. You can download it here: www.python.org

Here is an image of the SDR:

 

GNU Radio

Assuming that we have been successful up to this point in purchasing the equipment, downloading GNU Radio and setting everything up, we can begin creating our program.

A template that we will be using can be downloaded through this link. This will save time learning how to configure GNU Radio; in your spare time you can learn how to add more powerful tools to improve on and diversify from this template. Opening the file from the hyperlink, you will see something like this.

The two main blocks that allow this program to function are the source block and the sink block. The source block funnels the data from the SDR into GNU Radio, and the sink block compiles the information to be displayed on a custom GUI. You may tweak the template as you gradually gain a better understanding of how the program works, including adding an audio sink, which isn’t hard but is homework for you to figure out.

The last step is to compile and run the program, either by clicking the play button or using the F5 shortcut if you can’t find it. This will create a new window with the waterfall plot showing all the receivable frequencies in your range. The frequency slider on the bottom will allow you to adjust the centre frequency that you want to listen to.

So now you have your own cheap and miniature device for frequency capture! But now it is time to test it out on bigger and much more expensive equipment, like maybe a 2m antenna on top of the Electrical Engineering building…

Join BLUEsat to participate in bigger and better projects than this by contacting us. Happy tapping into communications in the meantime!



In our last article, as part of our investigation into different Graphical User Interface (GUI) options for the next European Rover Challenge (ERC), we looked at a proof of concept for using QML and Qt5 with ROS. In this article we will continue with that proof of concept by creating a custom QML component that streams a ROS sensor_msgs/Image video topic, and adding it to the window we created in the previous article.

Setting up our Catkin Packages

  1. In qt-creator reopen the workspace project you used for the last tutorial.
  2. For this project we need an additional ROS package for our shared library that will contain our custom QML Video Component. We need this so the qt-creator design view can deal with our custom component. In the project window, right click on the “src” folder, and select “add new”.
  3. Select “ROS>Package” and then fill in the details so they match the screenshot below. We’ll call this package “ros_video_components” and the Catkin dependencies are “qt_build roscpp sensor_msgs image_transport”. The QT Creator Create Ros Package Window
  4. Click “next” and then “finish”
  5. Open up the CMakeLists.txt file for the ros_video_components package, and replace it with the following file.
    ##############################################################################
    # CMake
    ##############################################################################
    
    cmake_minimum_required(VERSION 2.8.3)
    project(ros_video_components)
    
    ##############################################################################
    # Catkin
    ##############################################################################
    
    # qt_build provides the qt cmake glue, roscpp the comms for a default talker
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    include_directories(include ${catkin_INCLUDE_DIRS})
    # Use this to define what the package will export (e.g. libs, headers).
    # Since the default here is to produce only a binary, we don't worry about
    # exporting anything.
    catkin_package(
        CATKIN_DEPENDS qt_build roscpp sensor_msgs image_transport
        INCLUDE_DIRS include
        LIBRARIES RosVideoComponents
    )
    
    ##############################################################################
    # Qt Environment
    ##############################################################################
    
    # this comes from qt_build's qt-ros.cmake which is automatically
    # included via the dependency call in package.xml
    find_package(Qt5 COMPONENTS Core Qml Quick REQUIRED)
    
    ##############################################################################
    # Sections
    ##############################################################################
    
    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/ros_video_components/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    
    ##############################################################################
    # Sources
    ##############################################################################
    
    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    
    ##############################################################################
    # Binaries
    ##############################################################################
    add_library(RosVideoComponents ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(RosVideoComponents Quick Core)
    target_link_libraries(RosVideoComponents ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(RosVideoComponents PUBLIC include)
    
    

    Note: This code is based on the auto-generated CMakeLists.txt file provided by the qt-create ROS package.
    This is similar to what we did for the last example, but with a few key differences:

    catkin_package(
        CATKIN_DEPENDS qt_build roscpp sensor_msgs image_transport
        INCLUDE_DIRS include
        LIBRARIES RosVideoComponents
    )
    

    This tells catkin to export the RosVideoComponents build target as a library to all dependencies of this package.

    Then in this section we tell catkin to make a shared library target called “RosVideoComponents”, rather than a ROS node, that links the C++ source files with the Qt MOC/header files and the Qt resources.

    add_library(RosVideoComponents ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(RosVideoComponents Quick Core)
    target_link_libraries(RosVideoComponents ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(RosVideoComponents PUBLIC include)
    
  6. Next we need to fix our package.xml file: the qt-creator plugin has a bug where it puts all the ROS dependencies in one build_depend and one run_depend tag, rather than listing them separately. You need to separate them like so:
      <buildtool_depend>catkin</buildtool_depend>
      <build_depend>qt_build</build_depend>
      <build_depend>roscpp</build_depend>
      <build_depend>image_transport</build_depend>
      <build_depend>sensor_msgs</build_depend>
      <build_depend>libqt4-dev</build_depend>
      <run_depend>qt_build</run_depend>
      <run_depend>image_transport</run_depend>
      <run_depend>sensor_msgs</run_depend>
      <run_depend>roscpp</run_depend>
      <run_depend>libqt4-dev</run_depend>
    
  7. Again we need to create src/, resources/ and include/ros_video_components folders in the package folder (from the terminal or file browser, as before).
  8. We also need to make some changes to our gui project to depend on the library we generate. Open up the CMakeLists.txt file for the gui package and replace the following line:
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)

    with

    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport ros_video_components)
  9. Then add the following lines to the gui package’s package.xml,
    <build_depend>ros_video_components</build_depend>
    <run_depend>ros_video_components</run_depend>
    

Building the Video Streaming Component

When we are using the rover, the primary purpose the GUI serves in most ERC tasks is displaying camera feed information to users. Thus it felt appropriate to use streaming video from ROS as a proof of concept to determine whether QML and Qt5 would be an appropriate technology choice.

We will now look at building a QML component that subscribes to a ROS image topic, and displays the data on screen.

  1. Right click on the src folder of the ros_video_components folder, and select “Add New.”
  2. We first need to create a class for our qt component so select,  “C++>C++ Class”
  3. We’ll call our class “ROSVideoComponent”, with the custom base class “QQuickPaintedItem.” We’ll also need to select “Include QObject” and adjust the path of the header file so the compiler can find it. Make sure your settings match those in this screenshot:
    Qt Creator C++ Class Creation Dialogue
  4. Open up the header file you just created and update it to match the following
     
    #ifndef ROSVIDEOCOMPONENT_H
    #define ROSVIDEOCOMPONENT_H
    
    #include <QQuickPaintedItem>
    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <sensor_msgs/Image.h>
    #include <QImage>
    #include <QPainter>
    
    class ROSVideoComponent : public QQuickPaintedItem {
        // this macro marks the class as a QObject so Qt's MOC can process it
        Q_OBJECT
        
        public:
            ROSVideoComponent(QQuickItem *parent = 0);
    
            void paint(QPainter *painter);
            void setup(ros::NodeHandle * nh);
    
        private:
            void receiveImage(const sensor_msgs::Image::ConstPtr & msg);
    
            ros::NodeHandle * nh;
            image_transport::Subscriber imageSub;
            // these are used to store our image buffer
            QImage * currentImage;
            uchar * currentBuffer;
            
            
    };
    
    #endif // ROSVIDEOCOMPONENT_H
    

    Here, QQuickPaintedItem is a Qt class that we can override to provide a QML component with a custom paint method. This will allow us to render our ROS video frames.
    Also in the header file we have a setup function, which we use to initialise our ROS subscriptions (since we don’t control where the constructor of this class is called), as well as our conventional ROS subscriber callback.

  5. Open up the ROSVideoComponent.cpp file and change it so it looks like this:
     
    #include <ros_video_components/ROSVideoComponent.hpp>
    
    ROSVideoComponent::ROSVideoComponent(QQuickItem * parent) : QQuickPaintedItem(parent), currentImage(NULL), currentBuffer(NULL) {
    
    }
    

    Here we use an initialiser list to call our parent constructor, and then initialise our currentImage and currentBuffer pointers to NULL. The latter is very important, as we use it to check if we have received any ROS messages.

  6. Next add a “setup” function:
    void ROSVideoComponent::setup(ros::NodeHandle *nh) {
        image_transport::ImageTransport imgTrans(*nh);
        imageSub = imgTrans.subscribe("/cam0", 1, &ROSVideoComponent::receiveImage, this, image_transport::TransportHints("compressed"));
        ROS_INFO("setup");
    }
    

    This function takes in a pointer to our ROS NodeHandle, and uses it to create a subscription to the “/cam0” topic. We use image_transport, as is recommended by ROS for video streams, and direct it to call the receiveImage callback.

  7. And now we implement said callback:
    void ROSVideoComponent::receiveImage(const sensor_msgs::Image::ConstPtr &msg) {
        // check to see if we already have an image frame, if we do then we need to delete it
        // to avoid memory leaks
        if(currentImage) {
            delete currentImage;
        }
    
        // allocate a buffer of sufficient size to contain our video frame
        uchar * tempBuffer = (uchar *) malloc(sizeof(uchar) * msg->data.size());
        
        // and copy the message into the buffer
        // we need to do this because the QImage api requires the buffer we pass in to continue to exist
    // whilst the image is in use, but the msg and its data will be lost once we leave this context.
        memcpy(tempBuffer, msg->data.data(), msg->data.size());
        
        // we then create a new QImage, this code below matches the spec of an image produced by the ros gscam module
        currentImage = new QImage(tempBuffer, msg->width, msg->height, QImage::Format_RGB888);
        
        ROS_INFO("Recieved");
        
    // We then swap our buffer, freeing the old one (malloc'd memory is released with free, not delete)
    if(currentBuffer) {
        free(currentBuffer);
    }
    currentBuffer = tempBuffer;
        // And re-render the component to display the new image.
        update();
    }
    
  8. Finally we override the paint method
    
    void ROSVideoComponent::paint(QPainter *painter) {
        if(currentImage) {
            painter->drawImage(QPoint(0,0), *(this->currentImage));
        }
    }
    
  9. We now have our QML component, and you can check that everything is working as intended by building the project (the hammer icon in the bottom right of the IDE, or catkin_make). In order to use it we must add it to our QML file, but first, since we want to be able to use it in qt-creator’s design view, we need to add a plugin class.
  10. Right click on the src folder and select “Add New” again.
  11. Then select “C++>C++ Class.”
  12. We’ll call this class OwrROSComponents, and use the following settings: OwrROSComponents class creation dialogue
  13. Replace the header file so it looks like this
    #ifndef OWRROSCOMPONENTS_H
    #define OWRROSCOMPONENTS_H
    
    #include <QQmlExtensionPlugin>
    
    class OWRRosComponents : public QQmlExtensionPlugin {
        Q_OBJECT
        Q_PLUGIN_METADATA(IID "bluesat.owr")
    
        public:
            void registerTypes(const char * uri);
    };
    
    #endif // OWRROSCOMPONENTS_H
    
  14. Finally make the OwrROSComponents.cpp file look like this
    #include "ros_video_components/OwrROSComponents.hpp"
    #include "ros_video_components/ROSVideoComponent.hpp"
    
    void OWRRosComponents::registerTypes(const char *uri) {
        qmlRegisterType<ROSVideoComponent>("bluesat.owr",1,0,"ROSVideoComponent");
    }
    
  15. And now we just need to add it to our QML and application code. Let’s do the QML first. At the top of the file (in edit view) add the following line:
    import bluesat.owr 1.0
    
  16. And just before the final closing bracket, add this code to place the video component below the other image:
    ROSVideoComponent {
       // @disable-check M16
       objectName: "videoStream"
       id: videoStream
       // @disable-check M16
       anchors.bottom: parent.bottom
       // @disable-check M16
       anchors.bottomMargin: 0
       // @disable-check M16
       anchors.top: image1.bottom
       // @disable-check M16
       anchors.topMargin: 0
       // @disable-check M16
       width: 320
       // @disable-check M16
       height: 240
    }
    

    This adds our custom “ROSVideoComponent”, whose type we just registered in the previous steps, to our window.

    Note: the @disable-check M16 comments prevent qt-creator from getting confused about our custom component, which it doesn’t detect properly. This is an unfortunate limitation of using cmake (catkin) rather than Qt’s own build system.

  17. Then, because Qt’s runtime and qt-creator use different search paths, we also need to register the type on the first line of our MainApplication::run() function:
    qmlRegisterType<ROSVideoComponent>("bluesat.owr",1,0,"ROSVideoComponent");
    
  18. Finally we need to add the following lines to the end of our run function in MainApplication to connect our video component to our NodeHandle:
    ROSVideoComponent * video = this->rootObjects()[0]->findChild<ROSVideoComponent*>(QString("videoStream"));
    video->setup(&nh);
    

    And the relevant #include

    #include <ros_video_components/ROSVideoComponent.hpp>
    
  19. To test it, publish a video stream using your preferred ROS video library.
    For example, if you have the ROS gscam library set up and installed, you could run the following to stream video from a webcam:

    export GSCAM_CONFIG="v4l2src device=/dev/video0 ! videoscale ! video/x-raw,width=320,height=240 ! videoconvert"
    rosrun gscam gscam __name:=camera_1 /camera/image_raw:=/cam0
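
    If nothing shows up in the video component, it can help to confirm the stream is actually publishing before debugging the GUI itself (these use the /cam0 topic name assumed throughout):

    rostopic info /cam0
    rostopic hz /cam0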

Conclusion

So in our previous post we learnt how to set up Qt and QML in ROS’s build system, and got that all working with the Qt-Creator IDE. This time we built on that system to develop a widget that takes ROS video data and renders it to the screen, demonstrating how to integrate ROS’s message system into a Qt/QML environment.

The code in this tutorial forms the basis of BLUEsat’s new user interface, which is currently in active development. You can see the current progress on our GitHub, where a number of additional widgets should appear in the near future. If you want to learn more about the kind of development we do at BLUEsat, or are a UNSW student interested in joining, feel free to send an email to info@bluesat.com.au.

Acknowledgements

Some of the code above is based on a Stack Overflow answer by Kornava about how to create a custom image rendering component, which can be found here.



The BLUEsat Off World Robotics Software team is rebuilding our user interface, in an effort to address the maintenance and learning-curve problems we have with our existing GLUT/OpenGL-based GUI. After trying out a few different options, we settled on a combination of Qt and QML. We liked this option as it allows easy maintenance, with a reasonable amount of power and flexibility. We decided to share a simple tutorial we made for working with Qt and ROS.

In part one of this article we go through the process of setting up a ROS package with Qt5 dependencies and building a basic QML application. In the next instalment we will look at streaming a ROS sensor_msgs/Image video feed into a custom QML component. The article assumes that you have ROS Kinetic set up, and some understanding of how ROS works. It does not assume any prior knowledge of Qt.

Full sources for this tutorial can be found on BLUEsat’s github.

Setting up the Environment

First things first, we need to set up Qt, and because one of our criteria for GUI solutions is ease of use, we also need to set up qt-creator so we can take advantage of its visual editor features.

Fortunately there is a ROS plugin for qt-creator (which you can find more about here). To setup we do the following (instructions for Ubuntu 16.04, for other platforms see the source here):


sudo add-apt-repository ppa:beineri/opt-qt571-xenial
sudo add-apt-repository ppa:levi-armstrong/ppa
sudo apt-get update && sudo apt-get install qt57creator-plugin-ros

We also need to install the ROS Qt packages; these allow us to easily set up some of the catkin dependencies we will need later (note: unfortunately these packages are currently designed for Qt4, so we can’t take full advantage of them):


sudo apt-get install ros-kinetic-qt-build

Setting up our ROS workspace

  1. We will use qt-creator to create our workspace, so start by opening qt-creator.
  2. On the welcome screen select “New Project”. Then choose “Import Project>Import ROS Workspace”.
    The QT-Creator new project dialogue display the correct selection for creating a new ros project.
  3. Name the project “qt-gui” and set the workspace path to a new folder of the same name. An error dialogue will appear, because we are not using an existing workspace, but that is fine.
  4. Then click “Generate Project File”. The QT Import Existing ROS Project Window
  5. Click “Next”, choose your version control settings then click “Finish”
  6. For this project we need a ROS package that contains our gui node. In the project window, right click on the “src” folder, and select “add new”.
  7. Select “ROS>Package” and then fill in the details so they match the screenshot below. We’ll call it “gui” and the Catkin dependencies are “qt_build roscpp sensor_msgs image_transport”. QT Creator Create Ros Package Window
  8. Click “next” and then “finish”
  9. Open up the CMakeLists.txt file for the gui package, and replace it with the following file.
    
    ##############################################################################
    # CMake
    ##############################################################################
    
    cmake_minimum_required(VERSION 2.8.0)
    project(gui)
    
    ##############################################################################
    # Catkin
    ##############################################################################
    
    # qt_build provides the qt cmake glue, roscpp the comms for a default talker
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    set(QML_IMPORT_PATH "${QML_IMPORT_PATH};${CATKIN_GLOBAL_LIB_DESTINATION}" )
    set(QML_IMPORT_PATH2 "${QML_IMPORT_PATH};${CATKIN_GLOBAL_LIB_DESTINATION}" )
    include_directories(${catkin_INCLUDE_DIRS})
    # Use this to define what the package will export (e.g. libs, headers).
    # Since the default here is to produce only a binary, we don't worry about
    # exporting anything. 
    catkin_package()
    
    ##############################################################################
    # Qt Environment
    ##############################################################################
    
    # this comes from qt_build's qt-ros.cmake which is automatically 
    # included via the dependency call in package.xml
    #rosbuild_prepare_qt4(QtCore QtGui QtQml QtQuick) # Add the appropriate components to the component list here
    find_package(Qt5 COMPONENTS Core Gui Qml Quick REQUIRED)
    
    ##############################################################################
    # Sections
    ##############################################################################
    
    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/gui/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    
    ##############################################################################
    # Sources
    ##############################################################################
    
    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    
    ##############################################################################
    # Binaries
    ##############################################################################
    
    add_executable(gui ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(gui Quick Core)
    target_link_libraries(gui ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(gui PUBLIC include)
    install(TARGETS gui RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
    

    Note: This code is based on the auto-generated CMakeLists.txt file provided by the qt-create ROS package.
    Let’s have a look at what this file is doing:

    cmake_minimum_required(VERSION 2.8.0)
    project(gui)
    
    ...
    
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    include_directories(${catkin_INCLUDE_DIRS})
    

    This part is simply setting up the ROS package, as you would expect in a normal CMakeLists.txt file.

    find_package(Qt5 COMPONENTS Core Qml Quick REQUIRED)
    

    In this section we set up catkin to include Qt5, and tell it we need the Core, QML, and Quick components. This differs from a normal qt-build CMakeLists.txt, because we need Qt5 rather than Qt4.

    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/gui/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    

    This section tells cmake where to find the QT resource files, and where to find the QT header files so we can compile them using the QT precompiler.

    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    

    And this tells cmake where to find all the QT (and ROS) source files for the project

    add_executable(gui ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(gui Quick Core)
    target_link_libraries(gui ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(gui PUBLIC include)
    install(TARGETS gui RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
    

    Finally we set up a ROS node (executable) target called “gui” that links the C++ source files with the Qt MOC/header files and the Qt resources.

  10. Next we need to fix our package.xml file: the qt-creator plugin has a bug where it puts all the ROS dependencies in one build_depend and one run_depend tag, rather than listing them separately. You need to separate them like so:
     
      <buildtool_depend>catkin</buildtool_depend>
      <build_depend>qt_build</build_depend>
      <build_depend>roscpp</build_depend>
      <build_depend>image_transport</build_depend>
      <build_depend>sensor_msgs</build_depend>
      <build_depend>libqt4-dev</build_depend>
      <run_depend>qt_build</run_depend>
      <run_depend>image_transport</run_depend>
      <run_depend>sensor_msgs</run_depend>
      <run_depend>roscpp</run_depend>
      <run_depend>libqt4-dev</run_depend> 
    
  11. Create src/, include/<package name> and resources/ folders in the package. (Note: it doesn’t seem possible to do this in qt-creator; you have to do it from the terminal or file browser.) An example command is shown below.
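For example, from the gui package folder in a terminal (a sketch; adjust the include path if your package name differs):

mkdir -p src include/gui resources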

Our ROS workspace is now set up and ready to go. We can start on our actual code.

Creating a Basic QML Application using the Catkin Build System

We’ll start by creating a basic ROS node that displays a simple QML file.

  1. Right click on the src folder in the gui package, and select “Add New”
  2. Select “ROS>Basic Node” and then click choose
  3. Call it “guiMain” and click Finish. You should end up with a new file open in the editor that looks like this:Qt creator window displaying a main function with a basic ros "Hello World" in it
  4. We’ll come back to this file later, but first we need to add our QML application. In order to call ros::spinOnce without having to implement threading, we need to subclass the QQmlApplicationEngine class so we can add a Qt ‘slot’ to trigger it in the application’s main loop (more on slots later). So to start we need to create a new class: right click on the src directory again and select “Add New.”
  5. Select “C++>C++ Class “, then click “Choose”
  6. Set the “Class Name” to “MainApplication”, and the “Base Class” to “QQmlApplicationEngine.”
  7. Rename the header file so it has the path “../include/gui/MainApplication.hpp”. This allows the CMakeLists.txt file we set up earlier to find it and run the MOC compiler on it.
  8. Rename the source file so that it is called “MainApplication.cpp”. Your dialog should now look like this:
    Qt Creator “C++ Class” dialogue showing the settings described above
  9. Click “Next”, then “Finish”.
  10. Change your MainApplication.hpp file to match this one:
    #ifndef MAINAPPLICATION_H
    #define MAINAPPLICATION_H
    
    #include <ros/ros.h>
    #include <QQmlApplicationEngine>
    
    class MainApplication : public QQmlApplicationEngine {
        Q_OBJECT
        public:
            MainApplication();
            // this method is used to set up all the ROS functionality we need,
            // before the application starts running
            void run();
    
        // this defines a slot that will be called when the application is idle
        public slots:
            void mainLoop();
    
        private:
            ros::NodeHandle nh;
    };
    
    #endif // MAINAPPLICATION_H
    

    There are two important parts here. First, we add the line “Q_OBJECT” at the top of the class body. This tells the Qt MOC compiler to do its magic here, turning this into a valid Qt object.
    Secondly, we add the following lines:

    public slots:
        void mainLoop();
    

    What does this mean? Well, Qt uses a system of “slots” and “signals” rather than the more conventional “listener” system used by many other GUI frameworks. In layman’s terms, a slot acts like a callback: when an event it has been “connected” to occurs, the function gets called. The sketch below illustrates the idea.
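
    As a standalone illustration (a minimal sketch, separate from the tutorial code; the Counter class here is hypothetical), a signal and a slot connect like this:

    // minimal signal/slot sketch: one Counter's slot is driven by another's signal
    #include <QObject>
    
    class Counter : public QObject {
        Q_OBJECT
        public:
            int value() const { return m_value; }
    
        public slots:
            // called whenever a connected signal fires
            void setValue(int value) {
                if (value != m_value) {
                    m_value = value;
                    emit valueChanged(value); // notify anything connected to us
                }
            }
    
        signals:
            void valueChanged(int newValue);
    
        private:
            int m_value = 0;
    };
    
    // usage:
    //   Counter a, b;
    //   QObject::connect(&a, SIGNAL(valueChanged(int)), &b, SLOT(setValue(int)));
    //   a.setValue(12); // b's setValue slot runs, so b.value() is now 12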

  11. Now we want to update the MainApplication.cpp file. Edit it so it looks like the following:
    #include "gui/MainApplication.hpp"
    #include <QTimer>
    
    MainApplication::MainApplication() {
        
    }
    
    void MainApplication::run() {
        
        //this loads the qml file we are about to create
        this->load(QUrl(QStringLiteral("qrc:/window1.qml"))); 
        
        // set up a timer to hook into the application's idle loop
        QTimer *timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(mainLoop()));
        timer->start(0);
    }
    

    The main things here are: we load a QML file at the resource path “qrc:/window1.qml” (in a moment we will create this file), and we set up the timer. How this works is that we create a timer object and connect the timer’s “timeout” event (signal) to our “mainLoop” slot, which we will create in a moment. We then set the interval to 0, causing the timer to fire whenever the application is idle.
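
    Incidentally, Qt5 also offers a pointer-to-member connect syntax that is checked at compile time. As an optional aside (not required for this tutorial), the connection above could equally be written as:

    // equivalent connection using Qt5's compile-time-checked syntax
    connect(timer, &QTimer::timeout, this, &MainApplication::mainLoop);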

  12. Finally, we want to add the mainLoop function to the end of our MainApplication code; it simply calls ros::spinOnce to get the latest ROS messages whenever the application is idle.
    void MainApplication::mainLoop() {
        ros::spinOnce();
    }
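
    One optional extension (our own suggestion, not something this tutorial requires): if you also check ros::ok() in the slot, the window will close cleanly when ROS shuts down, for example after a Ctrl-C:

    #include <QCoreApplication> // add at the top of MainApplication.cpp
    
    void MainApplication::mainLoop() {
        if (!ros::ok()) {
            // ROS has shut down, so exit Qt's event loop as well
            QCoreApplication::quit();
            return;
        }
        ros::spinOnce();
    }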
    
  13. In our guiMain.cpp we need to add the following lines at the end of our main function:
        QGuiApplication app(argc, argv);
        MainApplication engine;
        engine.run();
        
        return app.exec();
    

    This initialises our QML application, calls our run function, and then enters Qt’s main loop.
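
    For reference, once the two #includes from the next step are in place, the whole main function might look something like this sketch (the node name “gui” is our assumption; keep whatever ROS setup code the plugin generated for you):

    #include <ros/ros.h>
    #include <QGuiApplication>
    #include <gui/MainApplication.hpp>
    
    int main(int argc, char **argv) {
        // register the node with the ROS master before making any other ROS calls
        ros::init(argc, argv, "gui");
    
        QGuiApplication app(argc, argv);
        MainApplication engine;
        engine.run();
    
        // enter Qt's main loop; our mainLoop() slot services ROS while idle
        return app.exec();
    }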

  14. You will also need to add these two #includes to the top of the guiMain.cpp file:
    #include <QGuiApplication>
    #include <gui/MainApplication.hpp>
    
  15. We now have all the C++ code we need to run our first demo. All that remains is writing the actual QML. Right click on the “resources” folder, and select “Add New.”
  16. In the New File window, select “Qt” and then “QML File (Qt Quick 2)”, and click “Choose.”
  17. Call the file “window1” and click “Finish.”
  18. We want to create a window rather than an item, so change the qml file so it looks like this:
    import QtQuick 2.0
    import QtQuick.Window 2.2
    
    Window {
        id: window1
        visible: true
    
    }
    
  19. Now we will use the visual editor to add a simple image to the window. With the QML file open, click on the far left menu and select “Design” mode. You should get a view like this:
    QT Creator QML Design View
  20. From the left hand “QML Types” toolbox, drag an image widget onto the canvas. You should see a rectangle with “Image” written in it on your canvas.
    Qt Creator Design Window with an image added
  21. We need to add an image for it to use. To do this we need a resource file, so switch back to edit mode. Right click on the resources folder and select “Add New.”
  22. Select “Qt>Qt Resource File” and then click “Choose”
  23. Call the resource file “images” and click “Finish.”
  24. This should open the resource file editor. First you need to add a new prefix: select “Add>Add Prefix.”
    QT Creator Resource File Editor: Select Add>Add Prefix
  25. Change the “prefix” to “/image”.
  26. We now want to add an image. Find an appropriate image file on your computer, then click “Add Files” and navigate to it. If the file is outside your project, qt-creator will prompt you to save it to your resources folder, which is what we want. You should now have a view that looks like this:
    QT Creator Resource File Editor with an image added
  27. Switch back to the qml file and design view.
  28. Click the image; on the right hand side will be a drop down marked “source”. Select the image you just added from it. (Note: if the designer has auto-filled this box but the image preview is not appearing, you may need to select another image and then reselect the one you want.) I used the BLUEsat logo as my image:
    I used the BLUEsat logo for my example
  29. Now we just need to put the QML file somewhere the application can find it. As in steps 21 to 26, create a new resource file in the resources folder called “qml” and add window1.qml to it under the “/” prefix.
  30. At this point you should be able to build your project. You can build using catkin_make as you normally would, or by clicking the build button in the bottom left corner of the IDE.
  31. To run your ROS node you can either run it as you would normally from the command line using “rosrun”, or you can run it from the IDE. To set up running it from the IDE, select “Projects” from the left hand menu, then under “Desktop” select “Run”.
  32. Under the run heading, select “Add Run Step>rosrun step.” You should get a screen like this:
    Qt Creator project settings screen
  33. Set the package to “gui”, and the target should auto fill with “gui” as well.
  34. Press run in the bottom left corner. You should get something like this (note: depending on where you placed the image, you may need to resize the window to see it). Important: as always with ROS, roscore needs to be running for nodes to start correctly.
    Window Displaying BLUEsat Logo
  35. You now have a working GUI application, compiled with catkin and spinning ROS whenever the application is idle, but it isn’t much use without displaying information from other ROS nodes in the GUI. In our next article we will look at streaming video from ROS into Qt, so stay tuned!