
At BLUEsat UNSW, the Off-World Robotics Software Team uses the Robot Operating System (ROS) as the basis for much of our code. Its modularity, communications framework, and existing packages are all factors in this, but another key benefit is its “transform” system. The transform library, often referred to as “tf”, provides a standardised way of managing the positions of the robots in a ROS system, as well as the moving components of those robots and features in the environment. This article is the first in a series that will introduce you to ROS tf.

The tf system operates by providing a tree of co-ordinate systems, called “frames”. Each frame is an independent co-ordinate system, with an origin that can move relative to its parent in the tree. A common example of this is localising a robot in its environment. To do this you need at least two frames: one for your robot’s local area – let’s call it “map” – and another for the robot itself – let’s call it “base_link”. For ROS tf to localise your robot on the map, we need to publish a transform between the “map” and “base_link” frames. Such a transform will often be published by a SLAM package such as hector_slam, or an EKF package such as the one provided in the ROS navigation packages.
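
To make this concrete, below is a minimal sketch of a node that broadcasts such a transform, written against the tf2 API (the current generation of the tf library). The frame names follow the example above, while the pose values are placeholders standing in for the output of whatever localisation you are running.

    #include <ros/ros.h>
    #include <tf2_ros/transform_broadcaster.h>
    #include <tf2/LinearMath/Quaternion.h>
    #include <geometry_msgs/TransformStamped.h>

    int main(int argc, char **argv) {
        ros::init(argc, argv, "localisation_broadcaster");
        ros::NodeHandle nh;
        tf2_ros::TransformBroadcaster broadcaster;
        ros::Rate rate(10); // publish at 10Hz

        while (ros::ok()) {
            geometry_msgs::TransformStamped tfMsg;
            tfMsg.header.stamp = ros::Time::now();
            tfMsg.header.frame_id = "map";       // the parent frame
            tfMsg.child_frame_id = "base_link";  // the child frame

            // placeholder pose: in a real system these values come from
            // your localisation (SLAM, EKF, odometry, etc.)
            tfMsg.transform.translation.x = 1.0;
            tfMsg.transform.translation.y = 2.0;
            tfMsg.transform.translation.z = 0.0;
            tf2::Quaternion q;
            q.setRPY(0.0, 0.0, 0.5); // roll, pitch, yaw in radians
            tfMsg.transform.rotation.x = q.x();
            tfMsg.transform.rotation.y = q.y();
            tfMsg.transform.rotation.z = q.z();
            tfMsg.transform.rotation.w = q.w();

            broadcaster.sendTransform(tfMsg);
            rate.sleep();
        }
        return 0;
    }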

ROS’s RViz tool can be used to display 3D representations of your transform tree. Here we see BLUEsat’s BLUEtongue Rover as a 3D model in RViz, with each set of axes representing a transform in our tree.

For every link in the tree we need to provide ROS with a “transform” that defines the relative position of the two frames. This is where ROS’s modularity kicks in: you are not restricted to a single node publishing all of your transforms. Instead, as with normal ROS topics, you can have many nodes providing different transforms! This means that you could have one node publishing your robot’s position in your map co-ordinate system, and another node handling the rotation of your rover’s arm. ROS tf then puts these together for you, so that you could, for example, calculate the location of the end of your arm in the map co-ordinate system.
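
For example, any node can then ask tf for the composed transform between two frames, and tf will walk the tree and chain together every transform along the path. Here is a sketch using the tf2 listener API, where “arm_end” is a hypothetical name for the frame at the end of the arm:

    #include <ros/ros.h>
    #include <tf2_ros/transform_listener.h>
    #include <tf2/exceptions.h>
    #include <geometry_msgs/TransformStamped.h>

    int main(int argc, char **argv) {
        ros::init(argc, argv, "arm_pose_query");
        ros::NodeHandle nh;

        tf2_ros::Buffer buffer;
        tf2_ros::TransformListener listener(buffer); // subscribes to /tf for us

        ros::Rate rate(1);
        while (ros::ok()) {
            try {
                // tf composes map -> base_link -> ... -> arm_end for us
                geometry_msgs::TransformStamped t = buffer.lookupTransform(
                    "map", "arm_end", ros::Time(0)); // Time(0) = latest available
                ROS_INFO("Arm end is at (%.2f, %.2f, %.2f) in the map frame",
                         t.transform.translation.x,
                         t.transform.translation.y,
                         t.transform.translation.z);
            } catch (const tf2::TransformException &ex) {
                ROS_WARN("%s", ex.what());
            }
            rate.sleep();
        }
        return 0;
    }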

Transforms are time stamped, which means that nodes can deal with temporary data loss or lag and still do accurate mapping calculations. It also means that transforms can be recorded with rosbag for future playback. However, the time-stamping can also create some issues, which I shall talk about later in the article.

The Justification for ROS TF

So why is this useful? Say I have a LIDAR on a gimbal, and I need to know the positions of its point cloud relative to my rover’s centre of mass. But the LIDAR only publishes a plane of points relative to its own position. Furthermore, the angle of the gimbal is controlled by a system separate from the one publishing the LIDAR data. Sound familiar?

In a traditional solution the code that reads the LIDAR data must be aware of the position of the gimbal and manually transform its output into the desired co-ordinate system. This means that the gimbal must know to provide that data, and your system must have a way of syncing the timing of the inputs.

In ROS all of this work is done for you: the LIDAR driver stamps each message with a header naming the frame the data is in – a frame whose parent is the top of the gimbal – and the instant the data was recorded. The system responsible for controlling the gimbal publishes a transform indicating its position at that instant. And any system that needs the two pieces of data in a common co-ordinate system, such as that of the base of the rover, can simply run the data through a standard filter provided by ROS to get the information it needs. The video below shows our original BLUEtongue 1.0 Mars Rover doing exactly that.
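
That “standard filter” is tf2_ros::MessageFilter, which holds each incoming message until the transform for its exact timestamp becomes available and only then fires your callback. The sketch below uses assumed topic and frame names (“/lidar/point”, “base_link”), and a single stamped point rather than a full point cloud to keep it short:

    #include <ros/ros.h>
    #include <tf2_ros/transform_listener.h>
    #include <tf2_ros/message_filter.h>
    #include <message_filters/subscriber.h>
    #include <tf2_geometry_msgs/tf2_geometry_msgs.h>
    #include <geometry_msgs/PointStamped.h>

    class LidarPointHandler {
    public:
        LidarPointHandler()
            : listener_(buffer_),
              sub_(nh_, "/lidar/point", 10),
              filter_(sub_, buffer_, "base_link", 10, 0) {
            // the filter only fires once the transform for the message's
            // timestamp is available, so the callback can transform safely
            filter_.registerCallback(&LidarPointHandler::handle, this);
        }

    private:
        void handle(const geometry_msgs::PointStamped::ConstPtr &msg) {
            geometry_msgs::PointStamped inBase;
            buffer_.transform(*msg, inBase, "base_link");
            ROS_INFO("Point in base_link: (%.2f, %.2f, %.2f)",
                     inBase.point.x, inBase.point.y, inBase.point.z);
        }

        ros::NodeHandle nh_;
        tf2_ros::Buffer buffer_;
        tf2_ros::TransformListener listener_;
        message_filters::Subscriber<geometry_msgs::PointStamped> sub_;
        tf2_ros::MessageFilter<geometry_msgs::PointStamped> filter_;
    };

    int main(int argc, char **argv) {
        ros::init(argc, argv, "lidar_point_handler");
        LidarPointHandler handler;
        ros::spin();
        return 0;
    }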

Admittedly if only one system is using those two pieces of data there may not be a massive advantage, but imagine if you had many sensors on top of the gimbal, or many separate systems controlling moving parts…

The Transform Tree

As mentioned previously, ROS transforms exist in a tree structure. This has some important implications for how we can structure our graph. The first of these is that a frame can only have one parent, so ROS tf won’t solve more complex graphs, such as kinematic loops. This is especially relevant if you are using something like ROS’s Joint State Publisher to publish the position of joints on your robot, as ROS won’t do the calculations for multi-link joints. You’ll need to do that yourself.

It also means that if one link in your tree fails you won’t be able to do transforms across that gap as there is no redundancy. However, the system is reasonably resilient. You can do a transform between any two connected points, even if the rest of the graph is not fully connected.

Part of the BLUEtongue 2.0 Mars Rover’s Transform (TF) Tree displayed in ROS’s RQT tool.

As well as being reasonably resilient, the tf tree offers several other advantages. Because it’s a tree, each node only needs to know about the two co-ordinate frames it is publishing between, dramatically reducing the complexity of any publisher. This is especially useful in a distributed system with many moving parts, or even a ROS environment with multiple robots! Furthermore, if you follow ROS conventions for naming your frames, you can easily combine third-party tools, such as the many ROS packages used for localisation or for calculating the positions of your robot’s joints, without having to modify them in any way.

The Timing Problem

I’d be remiss if I didn’t mention that the ROS tf system is not without its issues and difficulties. Foremost of these is ensuring that you have the transforms you need when you need them. Normally any data you want to transform will have a standard ROS header as part of its message. This header not only identifies the frame, but also the time at which the data was captured. Let’s look at our example with the LIDAR gimbal to see why this is important. In that scenario, by the time the LIDAR data is ready to publish on our ROS network, the gimbal will have moved on. However, we want to do our transforms using the position at the time the data was collected. The timestamp allows us to do this.

But, unsurprisingly, ROS only keeps a limited buffer of transforms in memory. This can cause problems if one of your transform publishers suffers from lag, or your data takes a long time to process before a transformation is applied. Usually your system will need to be able to cope with dropping frames of data, although there are other ways to handle this that I will discuss later in the series.
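
In practice, coping with dropped frames usually means wrapping your lookup in a try/catch and discarding data whose timestamp can no longer be matched. A minimal sketch, with “/cam0/points” as an assumed topic name:

    #include <ros/ros.h>
    #include <tf2_ros/transform_listener.h>
    #include <tf2/exceptions.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <geometry_msgs/TransformStamped.h>

    tf2_ros::Buffer *tfBuffer = NULL;

    void cloudCallback(const sensor_msgs::PointCloud2::ConstPtr &msg) {
        try {
            // wait up to 100ms for a transform at the data's timestamp;
            // the listener keeps filling the buffer on its own thread
            geometry_msgs::TransformStamped t = tfBuffer->lookupTransform(
                "base_link", msg->header.frame_id, msg->header.stamp,
                ros::Duration(0.1));
            // ... apply t to the cloud and carry on ...
        } catch (const tf2::ExtrapolationException &ex) {
            // the stamp has fallen out of the buffer: drop this frame
            ROS_WARN_THROTTLE(1.0, "dropping frame: %s", ex.what());
        } catch (const tf2::TransformException &ex) {
            // any other failure (frames not connected yet, etc.)
            ROS_WARN_THROTTLE(1.0, "tf lookup failed: %s", ex.what());
        }
    }

    int main(int argc, char **argv) {
        ros::init(argc, argv, "timing_example");
        ros::NodeHandle nh;
        tf2_ros::Buffer buffer;
        tf2_ros::TransformListener listener(buffer);
        tfBuffer = &buffer;
        ros::Subscriber sub = nh.subscribe("/cam0/points", 1, cloudCallback);
        ros::spin();
        return 0;
    }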

Next Steps

Well that’s it for now. I hope the first part of this guide to the ROS tf system has proven useful to you! Keep an eye on our blog for my next article, where I’ll be looking at tools and libraries provided by ROS that take advantage of the tf system. In the meantime you may want to take a look at our guide to ROS debugging, which introduces some of the tools we are going to be looking at, or if you are feeling impatient you can skip ahead and look at the official tutorials. If you are interested in learning more about the robotics we do at BLUEsat UNSW you could also take a look at the Off-World Robotics page.



It’s been another busy month at BLUEsat UNSW. This month’s major achievements include our breakthrough with the steering module of the NUMBAT rover, the creation of a successful SDR radio player in the groundstation team and progress on a new magnetorquer project in the ADCS team!

Rover

From the Rover CTO

It’s been a fantastic month for the team and we have reached some major milestones. Earlier this month we received the results for our ERC proposal submission, scoring an impressive 24/25. Since then the team has been working hard to put together the preliminary design review document.

We have also been awarded a large grant towards our rover from the NSW Office of the Chief Scientist and Engineer. We hope to put this generous donation to good use.

Thomas Renneberg, Robotics CTO & Mech Chapter Lead

From the Electrical Team

This month we continued our work on a few of the PCBs. After verifying it works correctly and making some initial configurations last month, we handed testing of the Generic PCB over to the software members, who have developed working code for the driving system. In the next phase, bulk production and further testing will be carried out. Progress has also been made on the science module and drive module PCBs, including finalising the major design requirements and some research into circuit design. Beyond these, we have also come up with a preliminary wiring scheme for the rover’s electrical system. Following this, the next step will be improvements to power delivery and grounding.

Jonathan Wong, Rover Electrical Chapter Lead

From the Software Team

Another wonderful month for rover software has seen a breakthrough in testing and operating the new steering module for NUMBAT. In the process, we have also been able to verify other fundamental systems, namely embedded libraries and embedded-CAN implementation.

Elsewhere, progress has been made with altering the ROS library for the Linux-side of our ROS-over-CAN implementation, a lovely collection of GUI widgets/featurettes are in the works and development of the manipulator arm control system has begun!

Simon Ireland, Rover Software Chapter Lead

From the Mechanical Team

The mechanical team has been working on small updates to the rover’s suspension system, replacing the old version with our newer, more rigid design. We have also been putting together a prototype of our mechanical manipulator arm and our science module.

Thomas Renneberg, Robotics CTO & Mechanical Chapter Lead

From the Chief Pilot

The older BLUEtongue rover is still under maintenance. We are in the process of debugging the steering system and the arm’s movement after replacing one of the motors. Some small calibrations to the system are underway to allow us to keep training and testing this coming month.

Sajid Anower, Rover Chief Pilot

Satellite

From the ADCS Team

In the Reaction Wheel System project, the manufacturing and programming of electronics are just wrapping up, ready for integration and testing of the RWS during the following weeks.

We also have a new magnetorquer project that’s just coming out of the research phase and is now looking to implement a magnetorquer-based ADCS on a CubeSat PCB!

Mark Yeo, ADCS Squad Lead

From the Groundstation Team

Progress has been made on implementing the receive subsystem in the new SDR groundstation.

We have successfully created an SDR radio player, capable of receiving FM radio broadcasts (commercial radio stations) and playing the audio. This code can be altered to use the data in different ways, for example saving the audio to a .wav file, which will be used in later stages to process the data.

We will attempt to receive satellite signals using the current code when there is a pass.

Joerick Aligno, Groundstation Squad Lead

From the High Altitude Balloon Team

The HAB team at BLUEsat kicked off April by inducting new members into the workings of a high-altitude balloon mission.

Data and pictures from the recent flight were analysed. Studying the motion data, like that in the attached image, will provide the understanding needed to design separation, stabilisation and parachute deployment systems.

The month concluded with a full team meeting, including with our supervisor Elias, where team lead Adithya set out the expected goals and milestones for the next launch.

Adithya Rajendran, Balloon Squad Lead

From the Satellite Power Team

The past month has seen further progress in the power system of the CubeSat.

Within the separate subsystems, there are a few updates since last month. Slow but steady progress is being made to debug the MPPT (Maximum power point tracking) system.

Debugging is continuing for one of the battery charging systems and one of the other competing designs has its PCB ready to print and its components have been ordered. The thermal subsystem is in its infancy and potential components are being researched. A CAD model has also been drawn up and can be seen in the attached photos.

Harry Price, Satellite Power Squad Lead

BLUEsat Operations & Exec

Secretary’s Update

It’s been a busy month for the society with progress across all our teams. On the social events side we’ve had more successful board games nights, whilst from an outreach perspective we have some interesting things planned for next semester. We will also hopefully be organising team merch soon.

We will be holding an EGM in the near future, so some roles will be changing hands, including mine as I graduate at the end of the semester. Consequently, this will probably be my last monthly update as secretary. It’s been great, and I wish good luck to the incoming executive!

Harry J.E Day, Secretary

Want to keep up to date with BLUEsat UNSW? Subscribe to our monthly newsletter.



It’s been another busy month at BLUEsat UNSW. This month’s major achievements include our GreenSat team’s success at the EngSoc pitch fair, and completion of the NUMBAT Mars Rover’s core mechanical construction.

Members of BLUEsat's rover software team including Elliot Smith, William Miles and Sajid Ibne Anower testing software designed to drive the new NUMBAT rover in UNSW's Willis Annex Maker Space.

Rover

From the Rover CTO

BLUEsat UNSW has now entered the European Rover Challenge (ERC), commencing in September of this year. The whole team is looking forward to the competition and to getting the NUMBAT rover operational in time.

Thomas Renneberg, Robotics CTO & Mech Chapter Lead

From the Software Team

An excellent month for the rover software team has seen us finalising many components of our system. In embedded, we have made progress with testing hardware libraries for the ADC and PWM modules on the robot’s generic PCBs, as well as developing parts of our CAN bus solution. In backend systems, we are implementing a new driving system to take advantage of our transition to four-wheel steering, which will hopefully be finished and tested within the week. Finally, we have also added a nice little widget to the GUI that will let us know which way our rover is facing during its tasks.

Simon Ireland, Rover Software Chapter Lead

From the Mechanical Team

William Miles and Thomas Renneberg carrying the NUMBAT rover with its suspension system fully assembled.

This month saw the rover mechanical team finalise the manufacturing of the core NUMBAT systems. With the chassis, suspension, wheels and steering systems all assembled together, we can begin working on some of the rover’s smaller modules.

Thomas Renneberg, Robotics CTO & Mechanical Chapter Lead

From the Electrical Team

It’s been a busy month for BLUEsat’s rover electrical team, with a host of different tasks going on. At the conclusion of the on-boarding workshop earlier this month, we were pleased to see the team double in size. A couple of new design projects have unfolded: the drive module PCB, which interfaces between the generic PCB and the motors; and the science module PCB, which is intended to be a high-tech soil analyser. Testing and assembly are also under way for the NUMBAT rover. The Generic PCB, the brain for almost every module, has successfully delivered a PWM signal to a wheel motor via an array of connector boards, which means the drive system is ready for integration. A small part of the team has also been focused on maintenance, repair and review of the old rover, gaining a lot of new engineering experience along the way.

Jonathan Wong, Rover Electrical Chapter Lead

From the Chief Pilot

The NUMBAT Rover's Generic PCB connected to one of its wheel modules on a desk for electrical testing.
After a bit of panic when our old battery charger failed, our new charger has arrived and with it rover training has resumed. Some of the systems on the old rover are beginning to show their age, so we are working on porting them over to the new NUMBAT rover for testing.

Sajid Anower, Rover Chief Pilot

Satellite

From the ADCS Team

Part of a prototype satellite reaction wheel. It features four spinning metal disks.
Development on the satellite Reaction Wheel System (RWS) has been going swimmingly, with all RWS mechanical components being manufactured and assembled (pictured). Also, PCBs for the RWS and supporting circuitry have been ordered and are currently being manufactured.

In other news, the ADCS team is currently also researching magnetorquer systems – more on this next month!

Mark Yeo, ADCS Squad Lead

From the High Altitude Balloon Team

This month kicked off with a resoundingly successful high-altitude balloon mission. The launch of our payload delivered amazing pictures and valuable data from over 23km altitude.

Development for the next launch has commenced, with the telemetry project already showing progress in transmitting data and pictures over radio. Other projects include manufacturing an integrated enclosure, building an Arduino-based separation mechanism and implementing payload-stabilisation techniques.

Adithya Rajendran, Balloon Squad Lead

From the GreenSat Team

Recently, BLUEsat’s GreenSat team was offered the opportunity to pitch our project at the Project and Pitch Fair 2018, where we won Most Innovative Pitch. We have also finally been approved for PC2 lab space in the Biosciences building. Meanwhile, work on the darkbox and hotbox is continuing thanks to our new members.

Ben Koschnick accepting the prize for "Most Innovative" on stage at the UNSW Engineering Society Pitch Fair.

Ben Koschnick, Greensat Squad Lead

From the Satellite Power Team

This month has been busy for the Satellite Power team, with multiple parts of the system being developed. New members have been inducted and are working on some projects as an intro to BLUEsat and electrical engineering on satellites. The main power system has been making steady progress.

The dummy load is operational. There are three competing battery chargers in the works, all at different stages: one is in the debugging phase, one’s PCB is being designed in Altium, and one is still in its infancy. The Maximum Power Point Tracking PCB is also slowly being debugged.

There has also been some preliminary work on a thermal system, with some fantastic hand-drawn engineering drawings being produced (see photos).

A hand-drawn engineering drawing for a thermal system.

Harry Price, Satellite Power Squad Lead

From the Groundstation Team

The Groundstation team has made slow progress over the past month, mostly due to trouble installing GNU Radio onto MacBooks; however, we have moved past that stage. We have successfully used the USB dongles to receive, and plan to move towards using the USRP over the coming weeks.

Joerick Aligno, Groundstation Squad Lead

BLUEsat Operations & Exec

Secretary’s Update

BLUEsat UNSW members relax after a busy workday to play board games at one of our regular social events.

After a resoundingly successful orientation day, the focus for this month has been on settling in our new members. We even had a few newbies interested in joining our media and events team, and we are hoping to revitalise our school outreach program! (Watch this space for more details.)

Our social events have continued to be a resounding success, with a massive turnout at our most recent board games night! A massive shout-out to Joshua Khan and Taofiq Huq for making these such a big success. Our social events bring together members from all parts of the society and help foster the exchange of ideas and contacts.

Harry J.E Day, Secretary

Want to keep up to date with BLUEsat UNSW? Subscribe to our monthly newsletter.



Welcome to our February monthly update; it’s been another busy month at BLUEsat UNSW. As well as an amazing O-Week, our teams have made massive progress across the board. Highlights include our Balloon Team’s launch on the 3rd of March, and major progress in the construction of the NUMBAT Mars Rover.

Rover

Newly constructed chassis of the NUMBAT Mars Rover.

From the Mechanical Team

The mechanical team has been busy this month assembling the chassis and drive systems of our rover. We have also been training our new members, teaching them how to use different CAD packages as well as manufacturing techniques such as laser cutting, 3D printing and CNC routing.

Thomas Renneberg, Robotics CTO & Mech Chapter Lead

From the Software Team

The rover software team has been preparing for BLUEsat’s orientation day, with an Arduino workshop ready to go. The weeks following will also contain some introductory workshops on how we operate, including a seminar or two on the Robot Operating System (ROS).

Meanwhile, we have continued development on a number of key rover systems, including the GUI and embedded ROS. In addition, we welcomed two new members (Yubai & Daigo) and are looking forward to even more with the start of the new semester!

Altium render of the NUMBAT Rover side module board.

Simon Ireland, Rover Software Chapter Lead

From the Electrical Team

The electrical team has made progress on a few projects. We received two new members and introduced them to the society. With their help we’ve also fixed some components on our old Mars Rover, BLUEtongue 2.0, so that it can be driven properly. The last major PCB for the NUMBAT Rover – the side module board – has been designed and is being reviewed. Assembly is scheduled to take place in around two weeks.

We’re also starting work on testing and programming the generic PCB, which will be a priority task this semester. After the electronics induction, we expect to get more of the members working on it.

Jonathan Wong, Rover Electrical Chapter Lead

Rover Electrical Team Lead Jonathan Wong with a new member and the BLUEtongue 2.0 rover, conducting repairs.

From the Chief Pilot

The rover was run each week. Some range testing was attempted, but no conclusive results were obtained. The team has started debugging the arm. The schedule for rover training and testing is in the works and is expected to come out soon.

Sajid Anower, Rover Chief Pilot

Satellite

From the CTO (Satellites)

The satellite team this month has been busy crafting an exciting program for the new 2018 BLUEsat member Intro Day. With activities spanning the fields of engineering, science, and operations, the satellite team should be proud of themselves.

Timothy Guo, Chief Technical Officer – Satellites

From the High Altitude Balloon Team

Disassembled High Altitude Balloon Payload on a desk in UNSW's Willis Annex.

February was a fairly busy month for the HAB team. Rigging and parachute configurations were finalised. Tracking systems were tested and, with some tweaking, found to be working perfectly. An APRS transmitter (which broadcasts GPS location over the amateur radio network), a commercial SPOT GPS tracker, and an old mobile phone running a live GPS tracking application are all being used on the launch. A launch date has been set for the 3rd and 4th of March, and final preparations, as well as integration testing, are currently taking place. Stay tuned for the outcome of the mission in the next update!

Adithya Rajendran, Balloon Squad Lead

From the Green Sat Team

GreenSat is moving forward, designing a temperature-controlled incubator for biological samples. We are also building a sensor suite prototype to test the electrical system. The biology team is aiming to get into the labs soon to begin working on cyanobacteria samples.

Ben Koschnick, Greensat Squad Lead

From the Satellite Power Team

A dummy load used by our Satellite Power Team to test their designs.

The satellite power team has recently finished work on an electronic dummy load. In the coming semester we plan to use it to test a battery charging design and a maximum power point tracker, both also developed by the team.

We also prepared for orientation day, with a complete introduction to electronics planned. New members will get a taste for electronics design by making a range finder. They will prototype the circuit, design the PCB and then manufacture it, all over two weekends.

Declan Walsh, Satellite Power Squad Lead

From the Groundstation Team

Groundstation has opted to shift towards Software Defined Radio (SDR), which uses USRP receivers and software demodulation. The original setup consisted of radio transceivers and the various equipment needed to operate them.

This new setup offers the following advantages:

  • Better control of the demodulation stages
  • More options for processing of data
  • Easier to interface with computers

The use of SDR has been explored by some current members in the past; however, it is a foreign concept to the majority of our team. We plan to place a greater focus on SDR, starting with the training of new and existing members.

Joerick Aligno, Groundstation Squad Lead

From the ADCS Team

The ADCS team has been working on finalising the designs of the support PCBs to be manufactured in the next few weeks. Initial electrical and mechanical designs for the reaction wheel PCB are also in the works.

Mark Yeo, ADCS Squad Lead

BLUEsat Operations & Exec

The BLUEsat UNSW O-Week stall; you can see a Mars rover wheel and a satellite reaction wheel on the table.

Secretary’s Update

February has certainly been a busy month for the society. O-Week was a massive success with over 150 new sign-ups, many of whom will hopefully be attending our orientation day on the 3rd of March! A big thank you to everyone who helped at our O-Week stall. All of the teams have also been preparing workshops for our orientation day.

In other news our last social event was a big success with many people attending and our regular blog posts have been going well! I’m looking forward to expanding the media team in 2018, and am looking for people to help with our outreach program and website.

Harry J.E Day, Secretary

Want to keep up to date with BLUEsat UNSW? Subscribe to our monthly newsletter.



Welcome to our monthly updates! We are going to trial one of these each month to keep you updated on what’s going on at BLUEsat. January has certainly been a busy time for all of our teams here.

Rover

From the Mechanical Team

It’s been a fantastic month of development. Our team finalised the rover’s suspension system, a component that we anticipate will provide a great deal of stability to the platform (see image). Additionally, our laser-cut chassis parts have arrived, and after a bit of post-machining will be ready for fitment tests and full chassis construction. Looking ahead, the team plans to have the top and base plates of the chassis manufactured in the next few weeks.

Thomas Renneberg, Robotics CTO & Mech Chapter Lead

A screenshot of a serial terminal displaying output from BLUEsat's ROS over CAN system, successfully transmitting a message. The text reads "Recived Full Message. Join 'Hello CAN', pwm 100"

From the Software Team

Steady progress is being made across the team. We’ve had a few key developments in the CAN bus network, with our embedded system for publishing ROS messages now sending, and the on-board computer receiving and routing the packets (the latter part is in final testing). Rover Software still has a few members filtering back from holidays, and with some new members (Oliver and Saksham) joining, we should have an interesting year ahead.

Simon Ireland, Rover Software Chapter Lead

A collage of images. Top left is a wooden prototype of the NUMBAT Rover with Bus PCBs laid out, right is a PCB, bottom left is testing a DC-DC converter for the BLUEtounge Rover and bottom right is a range of DC-DC converters.

From the Electrical Team

It’s been a good start to the year. Altium training is under way, and a few design projects have been proposed for the coming semester, including a pair of side module circuit boards and some optional ones such as the power line filter and arm module boards. Work on the power module and connector boards should resume shortly. There has also been a focus on testing and renewing DC-DC converters, both for BLUEtongue driving and NUMBAT construction.

Jonathan Wong, Rover Electrical Chapter Lead

From the Chief Pilot

Rover driving has finally started again after a hiatus, and we’ve had a number of people interested in piloting. Over the last few weeks it had been particularly difficult to get the rover started, and on 27 January the step-up transformer from the power supply to the NUC was found to be broken. A new transformer has since been ordered and has arrived, and if things go according to plan, we will start full-fledged training this month. I also plan to start breaking down the ERC rules into smaller tasks so we can practise them.

Sajid Anower, Rover Chief Pilot

Satellite

The Stratospheric Balloon Payload, complete with sensor loggers, etc., in an insulating container.

From the Stratospheric Balloon Team

This month was very productive for the balloon team. All our subsystems, including a Raspberry Pi-based data-logger and a radio-controlled separation mechanism, were completed and integrated. Ultra-low-temperature testing was conducted in a laboratory fridge set to -70°C over the 30th-31st of January in order to simulate the stratospheric temperature conditions that will be experienced on a flight. This testing highlighted some weaknesses in our payload construction, such as power supplies not operating at low temperatures, and measures will be taken to remedy them. We had planned a launch for early February, but that has been pushed back to late February due to unforeseen circumstances not pertaining to our development.

Adithya Rajendran, Balloon Squad Lead


From the Green Sat Team

Steady progress has been made on designing experiments for the biology team. With help from our friends at Flinders University we designed an incubator capable of controlling the intensity and wavelength of light available to a sample. This will give us valuable information about the on-board conditions required on the GreenSat and help minimise the energy requirements of the payload.

Ben Koschnick, Greensat Squad Lead

Declan Walsh working on the dummy load.

From the Satellite Power Team

The satellite power team focussed on completing several prototype designs this month. The designs for a variable test load were finalised and the components ordered. Prototype designs for a lithium battery charger were also completed, with testing of the design to occur next month. Finally, testing of the team’s power regulators at low temperatures was undertaken in conjunction with the balloon team.

Declan Walsh, Satellite Power Squad Lead

From the Groundstation Team

Laser Communication

We have achieved transmission of:

  • Serial Data: Turning the laser on and off is used to send binary data (1s and 0s). By reading the voltage across a photoresistor, we can recover the binary data (see the sketch after this list).
  • Audio Data: By varying the voltage through the laser, we can control its intensity, from which we can obtain analogue data.

We plan to improve on both systems to a point where we can implement either into a PCB.
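
For the curious, the receive side of the serial-data scheme can be sketched as an Arduino-style program like the one below. This is illustrative only, not our actual implementation: the pin, threshold and bit period are all assumptions that would need tuning to the real hardware.

    const int SENSOR_PIN = A0;     // analogue pin wired across the photoresistor (assumed)
    const int THRESHOLD = 512;     // mid-scale on a 10-bit ADC; tune for your setup
    const int BIT_PERIOD_MS = 10;  // must match the transmitter's bit rate

    void setup() {
        Serial.begin(9600);
    }

    void loop() {
        // sample once per bit period and threshold the reading into a 1 or a 0
        int reading = analogRead(SENSOR_PIN);
        int bit = (reading > THRESHOLD) ? 1 : 0;
        Serial.print(bit);
        delay(BIT_PERIOD_MS);
    }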

Balloon Telemetry

Recently, the Groundstation team has partnered with the Balloon team to implement telemetry. We have decided to use a ‘Pi in the Sky’ telemetry board to send data from the payload which is received and processed by a ‘USRP SDR’. We will research how to use the ‘Pi in the Sky’ board over the coming weeks.

Joerick Aligno, Groundstation Squad Lead

CAD rendering of a PCB for BLUEsat's ADCS v3.

From the ADCS Team

Development of Reaction Wheel v3 is underway, with 5 PCBs already designed. These boards contain the supporting hardware for the reaction wheel board, including power supply and regulation, an on-board computer (OBC), a data logging sensor board, an ADCS central hub, and a mini groundstation for communicating with and commanding the reaction wheel while the experiment is in motion.

Mark Yeo, ADCS Squad Lead

Operations & Exec

Secretary’s Update

It’s certainly been a busy start to the year. The main focus of the media and events team has been ramping up to O-Week, and we have a lot planned for that, with more to be finalised in the coming weeks!

We’ve also been trialling a new on-boarding approach where we run a structured session every three weeks rather than accepting new members every week. This came as a result of a survey we did last year on onboarding and our recruitment process, and aims to help improve member retention in the first few months. It should also give our team leads more time to focus on their projects in between. A big thank you to Taofiq for spearheading that project!

Our regular social evenings are going well, and we had a very successful “Jackbox Games” night a few weeks back after our Saturday workday.

Finally, I’m very pleased to see the first release of our monthly updates and our email newsletter! These should help improve awareness of the society’s projects, recruit new members, and improve internal communication between teams. I’m looking forward to seeing them in the following months.

Harry J.E Day, Secretary

A CAD rendering of the NUMBAT Mars Rover in "space"

Want to keep up to date with BLUEsat UNSW? Subscribe to our monthly newsletter.



So you’ve written the ultimate ROS program: after thousands of lines of code your robot will finally achieve sentience and bring about the singularity!

One by one you launch your nodes, each one bringing the Apocalypse ever closer. You hit enter on the last command. And. And, nothing happens. What went wrong? How will you find, and forever squash, that bug that prevented your moment of triumph? This blog attempts to answer those questions, and more*.

At BLUEsat we’ve had our share of complicated ROS debugging problems. The best ones happen when you are half-way through a competition task, with time ticking on the clock, although this article will also look at the more common situation of debugging in a less time-pressured, and less fire-prone, environment**.

Below are several tools and techniques that we have successfully deployed to debug our ROS environment.

Keep Calm And … FIRE!

You’ve probably heard this before, but it’s very important when debugging not to jump to conclusions or apply fixes you haven’t tested properly. Google, for example, has a policy of rolling back changes on its services rather than trying to push a fix. A similar idea applies in a competition or time-pressured situation: make sure you have thought through that patch that removes the “don’t kill humans” safety from your robot! That being said, unfortunately a roll back is unlikely to be applicable in a competition situation, nor is it likely to put out that fire you just started on your robot. So we can’t just copy Google, but we should think about what we are doing before we do it.

Basically, any patch or configuration fix you apply in such a situation is a calculated risk, and you should make sure you understand those risks before you do something. During the European Rover Challenge last year I found it was possible to tweak small settings, restart certain nodes, and re-calibrate systems; but it was too risky to power cycle the rover during a task due to the time it took to establish communication. Likewise, restarting our drive systems or cameras was very disruptive to our pilot, so it could only be done in situations where the damage done by not fixing the system could be worse. That being said, after a critical camera failed we did attempt to programmatically power cycle that device – the decision being that the camera was important enough to attempt such a risky move. (In the end we weren’t able to do this during the task, and our pilot managed to navigate the rover without the camera in question.)

In a non-time-pressured situation you can be more flexible. It is possible to test different options and see if they work, provided they don’t damage your robot. However, a structured approach is often beneficial for those more complicated bugs. I often find that when I’m debugging an intermittent or difficult-to-detect problem it is easy to lose track of what I’ve tried, or get results mixed up. A technique I’ve found to be very useful is to record what I’m doing as I do it, especially if the problem involves sensor data. We had a number of problems with our Rover‘s steering system when we first implemented our swerve drive, and I found writing down ADC values and rotation readings in different situations really helped debug it. (You can read more about how we use ADCs in our steering system in one of our previous articles.)

Basically, the main point is to keep your head clear, and think through the consequences before you act. Know the risks and have your E-Stop button ready! Now let’s look at some tools you can use to aid you in your debugging.

The BLUEtongue 2.0 Rover being debugged during the 2016 ERC, with Team Members Assisting. (LTR): Timothy Chin, Sebastian Holzapfel, Simon Ireland, Nuno Das Neves, Harry J.E Day.
Debugging is often a team effort.





At the start of semester we ran a number of seminars on different skills that we use at BLUEsat. In the first of these videos, former Rover Software Team Lead Harry J.E Day gives an introduction to the Robot Operating System (ROS), covering setting up ROS catkin workspaces, and basic publishers and subscribers.
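
As a taste of what the seminar covers, here is a minimal publisher sketch (not the seminar’s own code) using the “pub_node” name referred to in the errata below:

    #include <ros/ros.h>
    #include <std_msgs/String.h>

    int main(int argc, char **argv) {
        // register this node with the ROS master
        ros::init(argc, argv, "pub_node");
        ros::NodeHandle nh;

        // advertise a std_msgs/String topic with a queue of 10 messages
        ros::Publisher pub = nh.advertise<std_msgs::String>("chatter", 10);

        ros::Rate rate(1); // publish at 1Hz
        while (ros::ok()) {
            std_msgs::String msg;
            msg.data = "hello from BLUEsat";
            pub.publish(msg);
            rate.sleep();
        }
        return 0;
    }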

You will need the vm image from here: http://bluesat.com.au/vm-image/

The slides from this presentation can be found here: http://bluesat.com.au/an-introduction…

Errata:

  • Some slides in the recording refer to “std_msg” there should be an ‘s’ on the end (i.e “std_msgs”)
  • On the slide “A Basic Node – CMakeLists.txt (cont)” there should only be one ‘o’ in “node.”
  • In step 4 of the “Publisher (cont)” section there should be an e on the end of “pub_node”
  • The person on the last slide was the first robotics COO at BLUEsat, not CTO.

These have all been corrected in the slides.



In our last article, as part of our investigation into different Graphical User Interface (GUI) options for the next European Rover Challenge (ERC), we looked at a proof of concept for using QML and Qt5 with ROS. In this article we will continue with that proof of concept by creating a custom QML component that streams a ROS sensor_msgs/Image video topic, and adding it to the window we created in the previous article.

Setting up our Catkin Packages

  1. In qt-creator reopen the workspace project you used for the last tutorial.
  2. For this project we need an additional ROS package for our shared library that will contain our custom QML Video Component. We need this so the qt-creator design view can deal with our custom component. In the project window, right click on the “src” folder, and select “add new”.
  3. Select “ROS>Package” and then fill in the details so they match the screenshot below. We’ll call this package “ros_video_components” and the Catkin dependencies are “qt_build roscpp sensor_msgs image_transport”.
    The QT Creator Create Ros Package Window
  4. Click “next” and then “finish”
  5. Open up the CMakeLists.txt file for the ros_video_components package, and replace it with the following file.
    ##############################################################################
    # CMake
    ##############################################################################
    
    cmake_minimum_required(VERSION 2.8.3)
    project(ros_video_components)
    
    ##############################################################################
    # Catkin
    ##############################################################################
    
    # qt_build provides the qt cmake glue, roscpp the comms for a default talker
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    include_directories(include ${catkin_INCLUDE_DIRS})
    # Use this to define what the package will export (e.g. libs, headers).
    # Since the default here is to produce only a binary, we don't worry about
    # exporting anything.
    catkin_package(
        CATKIN_DEPENDS qt_build roscpp sensor_msgs image_transport
        INCLUDE_DIRS include
        LIBRARIES RosVideoComponents
    )
    
    ##############################################################################
    # Qt Environment
    ##############################################################################
    
    # this comes from qt_build's qt-ros.cmake which is automatically
    # included via the dependency call in package.xml
    find_package(Qt5 COMPONENTS Core Qml Quick REQUIRED)
    
    ##############################################################################
    # Sections
    ##############################################################################
    
    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/ros_video_components/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    
    ##############################################################################
    # Sources
    ##############################################################################
    
    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    
    ##############################################################################
    # Binaries
    ##############################################################################
    add_library(RosVideoComponents ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(RosVideoComponents Quick Core)
    target_link_libraries(RosVideoComponents ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(RosVideoComponents PUBLIC include)
    
    

    Note: This code is based on the auto-generated CMakeLists.txt file provided by the qt_create ROS package.
    This is similar to what we did in the last example, but with a few key differences:

    catkin_package(
        CATKIN_DEPENDS qt_build roscpp sensor_msgs image_transport
        INCLUDE_DIRS include
        LIBRARIES RosVideoComponents
    )
    

    This tells catkin to export the RosVideoComponents build target as a library to all dependencies of this package.

    Then, in this section, we tell catkin to make a shared library target called “RosVideoComponents” that links the C++ source files with the Qt MOC/header files and the Qt resources, rather than a ROS node executable:

    add_library(RosVideoComponents ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(RosVideoComponents Quick Core)
    target_link_libraries(RosVideoComponents ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(RosVideoComponents PUBLIC include)
    
  6. Next we need to fix our package.xml file: the qt-creator plugin has a bug where it puts all the ROS dependencies in one build_depend and one run_depend tag, rather than listing them separately. You need to separate them like so:
      <buildtool_depend>catkin</buildtool_depend>
      <build_depend>qt_build</build_depend>
      <build_depend>roscpp</build_depend>
      <build_depend>image_transport</build_depend>
      <build_depend>sensor_msgs</build_depend>
      <build_depend>libqt4-dev</build_depend>
      <run_depend>qt_build</run_depend>
      <run_depend>image_transport</run_depend>
      <run_depend>sensor_msgs</run_depend>
      <run_depend>roscpp</run_depend>
      <run_depend>libqt4-dev</run_depend>
    
  7. Again, we need to create src/, resources/ and include/ros_video_components folders in the package folder.
  8. We also need to make some changes to our gui project to depend on the library we generate. Open up the CMakeLists.txt file for the gui package and replace the following line:
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)

    with

    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport ros_video_components)
  9. Then add the following lines to the gui package’s package.xml,
    <build_depend>ros_video_components</build_depend>
    <run_depend>ros_video_components</run_depend>
    
  10. In the file browser create src and include/ros_video_components folders in the ros_video_components folder.

Building the Video Streaming Component

When we are operating the rover, the primary purpose the GUI serves in most ERC tasks is displaying camera feeds to the operators. Thus it felt appropriate to use streaming video from ROS as a proof of concept to determine whether QML and Qt5 would be an appropriate technology choice.

We will now look at building a QML component that subscribes to a ROS image topic, and displays the data on screen.

  1. Right click on the src folder of the ros_video_components folder, and select “Add New.”
  2. We first need to create a class for our Qt component, so select “C++>C++ Class”
  3. We’ll call our class “ROSVideoComponent” and it has the custom base class “QQuickPaintedItem.” We’ll also need to select that we want to “Include QObject” and adjust the path of the header file so the compiler can find it. Make sure your settings match those in this screenshot:
    Qt Creator C++ Class Creation Dialogue
  4. Open up the header file you just created and update it to match the following
     
    #ifndef ROSVIDEOCOMPONENT_H
    #define ROSVIDEOCOMPONENT_H
    
    #include <QQuickPaintedItem>
    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <sensor_msgs/Image.h>
    #include <QImage>
    #include <QPainter>
    
    class ROSVideoComponent : public QQuickPaintedItem {
        // this marks the component as a Qt Widget
        Q_OBJECT
        
        public:
            ROSVideoComponent(QQuickItem *parent = 0);
    
            void paint(QPainter *painter);
            void setup(ros::NodeHandle * nh);
    
        private:
            void receiveImage(const sensor_msgs::Image::ConstPtr & msg);
    
            ros::NodeHandle * nh;
            image_transport::Subscriber imageSub;
            // these are used to store our image buffer
            QImage * currentImage;
            uchar * currentBuffer;
            
            
    };
    
    #endif // ROSVIDEOCOMPONENT_H
    

    Here, QQuickPaintedItem is a Qt class that we can override to provide a QML component with a custom paint method. This will allow us to render our ROS video frames.
    Also in the header file we have a setup function which we use to initialise our ROS subscriptions since we don’t control where the constructor of this class is called, and our conventional ROS subscriber callback.

  5. Open up the ROSVideoComponent.cpp file and change it so it looks like this:
     
    #include <ros_video_components/ROSVideoComponent.hpp>
    
    ROSVideoComponent::ROSVideoComponent(QQuickItem * parent) : QQuickPaintedItem(parent), currentImage(NULL), currentBuffer(NULL) {
    
    }
    

    Here we use an initialiser list to call our parent constructor, and then initialise our currentImage and currentBuffer pointers to NULL. The latter is very important as we use it to check if we have received any ROS messages.

  6. Next add a “setup” function:
    void ROSVideoComponent::setup(ros::NodeHandle *nh) {
        // store the node handle so it is available to other methods later
        this->nh = nh;
        image_transport::ImageTransport imgTrans(*nh);
        imageSub = imgTrans.subscribe("/cam0", 1, &ROSVideoComponent::receiveImage, this, image_transport::TransportHints("compressed"));
        ROS_INFO("setup");
    }
    

    This function takes in a pointer to our ROS NodeHandle, and uses it to create a subscription to the “/cam0” topic. We use image_transport, as is recommended by ROS for video streams, and direct it to call the receiveImage callback.

  7. And now we implement  said callback:
    void ROSVideoComponent::receiveImage(const sensor_msgs::Image::ConstPtr &msg) {
        // check to see if we already have an image frame, if we do then we need to delete it
        // to avoid memory leaks
        if(currentImage) {
            delete currentImage;
        }

        // allocate a buffer of sufficient size to contain our video frame
        uchar * tempBuffer = (uchar *) malloc(sizeof(uchar) * msg->data.size());

        // and copy the message into the buffer
        // we need to do this because the QImage api requires the buffer we pass in to continue to exist
        // whilst the image is in use, but the msg and its data will be lost once we leave this context.
        memcpy(tempBuffer, msg->data.data(), msg->data.size());

        // we then create a new QImage, the code below matches the spec of an image produced by the ros gscam module
        currentImage = new QImage(tempBuffer, msg->width, msg->height, QImage::Format_RGB888);

        ROS_INFO("Received");

        // we then swap our buffers, releasing the old one only now that the QImage
        // referencing it has been deleted. note the buffer was allocated with malloc,
        // so it must be released with free, and the swap must happen unconditionally
        // or every frame after the first would leak.
        if(currentBuffer) {
            free(currentBuffer);
        }
        currentBuffer = tempBuffer;

        // and re-render the component to display the new image.
        update();
    }
    
  8. Finally we override the paint method
    
    void ROSVideoComponent::paint(QPainter *painter) {
        if(currentImage) {
            painter->drawImage(QPoint(0,0), *(this->currentImage));
        }
    }
    
  9. We now have our QML component, and you can check that everything is working as intended by building the project (the hammer icon in the bottom left of the IDE, or using catkin_make). In order to use it we must add it to our QML file, but first, since we want to be able to use it in qt-creator’s design view, we need to add a plugin class.
  10. Right click on the src folder and select “Add New” again.
  11. Then select “C++>C++ Class.”
 12. We’ll call this class OwrROSComponents, and use the following settings:
    OwrROSComponents class creation dialogue
  13. Replace the header file so it looks like this
    #ifndef OWRROSCOMPONENTS_H
    #define OWRROSCOMPONENTS_H
    
    #include <QQmlExtensionPlugin>
    
    class OWRRosComponents : public QQmlExtensionPlugin {
        Q_OBJECT
        Q_PLUGIN_METADATA(IID "bluesat.owr")
    
        public:
            void registerTypes(const char * uri);
    };
    
    #endif // OWRROSCOMPONENTS_H
    
  14. Finally make the OwrROSComponents.cpp file look like this
    #include "ros_video_components/OwrROSComponents.hpp"
    #include "ros_video_components/ROSVideoComponent.hpp"
    
    void OWRRosComponents::registerTypes(const char *uri) {
        qmlRegisterType<ROSVideoComponent>("bluesat.owr",1,0,"ROSVideoComponent");
    }
    
 15. And now we just need to add it to our QML and application code. Let’s do the QML first. At the top of the file (in edit view) add the following line:
    import bluesat.owr 1.0
    
  16. And just before the final closing bracket add this code to place the video component below the other image
    ROSVideoComponent {
       // @disable-check M16
       objectName: "videoStream"
       id: videoStream
       // @disable-check M16
       anchors.bottom: parent.bottom
       // @disable-check M16
       anchors.bottomMargin: 0
       // @disable-check M16
       anchors.top: image1.bottom
       // @disable-check M16
       anchors.topMargin: 0
       // @disable-check M16
       width: 320
       // @disable-check M16
       height: 240
    }
    

    This adds our custom “ROSVideoComponent”, whose type we just registered in the previous steps, to our window.

    Note: the @disable-check M16 comments prevent qt-creator from getting confused about our custom component, which it doesn’t detect properly. This is an unfortunate limitation of using cmake (catkin) rather than Qt’s own build system.

  17. Then because Qt’s runtime and qt-creator use different search paths we also need to register the type on the first line of our MainApplication::run() function
    qmlRegisterType<ROSVideoComponent>("bluesat.owr",1,0,"ROSVideoComponent");
    
  18. Finally we need to add the following lines to the end of our run function in main application to connect our video component to our NodeHandle
    ROSVideoComponent * video = this->rootObjects()[0]->findChild<ROSVideoComponent*>(QString("videoStream"));
    video->setup(&nh);
    

    And the relevant #include

    #include <ros_video_components/ROSVideoComponent.hpp>
    
 19. To test it, publish a video stream using your preferred ROS video library.
    For example, if you have the ROS gscam library set up and installed, you could run the following to stream video from a webcam:

    export GSCAM_CONFIG="v4l2src device=/dev/video0 ! videoscale ! video/x-raw,width=320,height=240 ! videoconvert"
    rosrun gscam gscam __name:=camera_1 /camera/image_raw:=/cam0

Conclusion

So in our previous post we learnt how to set up Qt and QML in ROS’s build system, and got that all working with the Qt-Creator IDE. This time we built on that system to develop a widget that takes ROS video data and renders it to the screen, demonstrating how to integrate ROS’s message system into a Qt/QML environment.

The code in this tutorial forms the basis of BLUEsat’s new rover user interface, which is currently in active development. You can see the current progress on our GitHub, where a number of additional widgets should be added in the near future. If you want to learn more about the kind of development we do at BLUEsat, or are a UNSW student interested in joining, feel free to send an email to info@bluesat.com.au.

Acknowledgements

Some of the code above is based on a Stack Overflow answer by Kornava about how to create a custom image rendering component, which can be found here.



The BLUEsat Off-World Robotics Software team is rebuilding our user interface, in an effort to address the maintenance and learning-curve problems we have with our existing GLUT/OpenGL-based GUI. After trying out a few different options, we’ve settled on a combination of Qt and QML. We liked this option as it allows easy maintenance, with a reasonable amount of power and flexibility. We decided to share a simple tutorial we made for working with Qt and ROS.

In part one of this article we go through the process of setting up a ROS package with Qt5 dependencies and building a basic QML application. In the next instalment we will look at streaming a ROS sensor_msgs/Image video feed into a custom QML component. The article assumes that you have ROS Kinetic set up, and some understanding of how ROS works. It does not assume any prior knowledge of Qt.

Full sources for this tutorial can be found on BLUEsat’s github.

Setting up the Environment

First things first, we need to set up Qt, and because one of our criteria for GUI solutions is ease of use, we also need to set up qt-creator so we can take advantage of its visual editor features.

Fortunately there is a ROS plugin for qt-creator (which you can find more about here). To setup we do the following (instructions for Ubuntu 16.04, for other platforms see the source here):


sudo add-apt-repository ppa:beineri/opt-qt571-xenial
sudo add-apt-repository ppa:levi-armstrong/ppa
sudo apt-get update && sudo apt-get install qt57creator-plugin-ros

We also need to install the ROS Qt packages; these allow us to easily set up some of the catkin dependencies we will need later (note: unfortunately these packages are currently designed for Qt4, so we can’t take full advantage of them).


sudo apt-get install ros-kinetic-qt-build

Setting up our ROS workspace

  1. We will use qt-creator to create our workspace, so start by opening qt-creator.
  2. On the welcome screen select “New Project”. Then choose “Import Project>Import ROS Workspace”.
    The QT-Creator new project dialogue display the correct selection for creating a new ros project.
  3. Name the project “qt-gui” and set the workspace path to a new folder of the same name. An error dialogue will appear, because we are not using an existing workspace, but that is fine.
  4. Then click “Generate Project File”
    The QT Import Existing ROS Project Window
  5. Click “Next”, choose your version control settings then click “Finish”
  6. For this project we need a ROS package that contains our gui node. In the project window, right click on the “src” folder, and select “add new”.
  7. Select "ROS>Package" and then fill in the details so they match the screenshot below. We'll call it "gui", and the catkin dependencies are "qt_build roscpp sensor_msgs image_transport".
    QT Creator Create Ros Package Window
  8. Click "Next" and then "Finish".
  9. Open up the CMakeLists.txt file for the gui package, and replace it with the following file.
    
    ##############################################################################
    # CMake
    ##############################################################################
    
    cmake_minimum_required(VERSION 2.8.0)
    project(gui)
    
    ##############################################################################
    # Catkin
    ##############################################################################
    
    # qt_build provides the qt cmake glue, roscpp the comms for a default talker
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    set(QML_IMPORT_PATH "${QML_IMPORT_PATH};${CATKIN_GLOBAL_LIB_DESTINATION}" )
    set(QML_IMPORT_PATH2 "${QML_IMPORT_PATH};${CATKIN_GLOBAL_LIB_DESTINATION}" )
    include_directories(${catkin_INCLUDE_DIRS})
    # Use this to define what the package will export (e.g. libs, headers).
    # Since the default here is to produce only a binary, we don't worry about
    # exporting anything. 
    catkin_package()
    
    ##############################################################################
    # Qt Environment
    ##############################################################################
    
    # this comes from qt_build's qt-ros.cmake which is automatically
    # included via the dependency call in package.xml
    #rosbuild_prepare_qt4(QtCore QtGui QtQml QtQuick) # Add the appropriate components to the component list here
    find_package(Qt5 COMPONENTS Core Gui Qml Quick REQUIRED)
    
    ##############################################################################
    # Sections
    ##############################################################################
    
    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/gui/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    
    ##############################################################################
    # Sources
    ##############################################################################
    
    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    
    ##############################################################################
    # Binaries
    ##############################################################################
    
    add_executable(gui ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(gui Quick Core)
    target_link_libraries(gui ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(gui PUBLIC include)
    install(TARGETS gui RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
    

    Note: This code is based on the auto-generated CMakeLists.txt file provided by the qt-create ROS package.
    Let's have a look at what this file is doing:

    cmake_minimum_required(VERSION 2.8.0)
    project(gui)
    
    ...
    
    find_package(catkin REQUIRED COMPONENTS qt_build roscpp sensor_msgs image_transport)
    include_directories(${catkin_INCLUDE_DIRS})
    

    This part is simply setting up the ROS package, as you would expect in a normal CMakeLists.txt file.

    find_package(Qt5 COMPONENTS Core Gui Qml Quick REQUIRED)
    

    In this section we set up the build to include Qt5, and tell it we need the Core, Gui, Qml, and Quick components. This differs from a normal qt_build CMakeLists.txt, because we need Qt5 rather than Qt4.

    file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
    file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/gui/*.hpp)
    
    QT5_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
    QT5_WRAP_CPP(QT_MOC_HPP ${QT_MOC})
    

    This section tells CMake where to find the Qt resource files, and where to find the Qt header files so we can compile them using the Qt meta-object compiler (MOC).

    file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)
    

    And this tells CMake where to find all the Qt (and ROS) source files for the project.

    add_executable(gui ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
    qt5_use_modules(gui Quick Core)
    target_link_libraries(gui ${QT_LIBRARIES} ${catkin_LIBRARIES})
    target_include_directories(gui PUBLIC include)
    install(TARGETS gui RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
    

    Finally, we set up a ROS node (executable) target called "gui" that links the C++ source files with the Qt MOC/header files and the Qt resources.

  10. Next we need to fix our package.xml file: the qt-creator plugin has a bug where it puts all the ROS dependencies in a single build_depend and a single run_depend tag, rather than giving each dependency its own tag. You need to separate them like so:
     
    <buildtool_depend>catkin</buildtool_depend>
      <build_depend>qt_build</build_depend>
      <build_depend>roscpp</build_depend>
      <build_depend>image_transport</build_depend>
      <build_depend>sensor_msgs</build_depend>
      <build_depend>libqt4-dev</build_depend>
      <run_depend>qt_build</run_depend>
      <run_depend>image_transport</run_depend>
      <run_depend>sensor_msgs</run_depend>
      <run_depend>roscpp</run_depend>
      <run_depend>libqt4-dev</run_depend> 
    
  11. Create src/, include/<package name>/ and resources/ folders in the package. (Note: it doesn't seem possible to do this in qt-creator; you have to do it from the terminal or a file browser.)

Our ROS workspace is now set up and ready to go. We can start on our actual code.

Creating a Basic QML Application using the Catkin Build System

We’ll start by creating a basic ROS node, that displays a simple qml file.

  1. Right click on the src folder in the gui package, and select “Add New”
  2. Select “ROS>Basic Node” and then click choose
  3. Call it "guiMain" and click Finish. You should end up with a new file open in the editor that looks like this:
    Qt creator window displaying a main function with a basic ROS "Hello World" in it
  4. We’ll come back to this file later, but first we need to add our QML Application. In order to call ros::spinOnce, without having to implement threading we need to subclass the QQmlApplicationEngine class so we can a Qt ‘slot’ to trigger it in the applications main loop  (more on slots later). So to start we need to create a new class: right click on the src directory again and select “Add New.”
  5. Select “C++>C++ Class “, then click “Choose”
  6. Set the “Class Name” to “MainApplication”, and the “Base Class” to “QQmlApplicationEngine.”
  7. Rename the header file so it has the path "../include/gui/MainApplication.hpp". This allows the CMakeLists.txt file we set up earlier to find it, and run the MOC compiler on it.
  8. Rename the source file so that it is called "MainApplication.cpp". Your dialog should now look like this:
    Qt Creator "C++ Class" dialogue showing the settings described above.
  9. Click “Next”, then “Finish”.
  10. Change your MainApplication.hpp file to match this one:
    #ifndef MAINAPPLICATION_H
    #define MAINAPPLICATION_H
    
    #include <ros/ros.h>
    #include <QQmlApplicationEngine>
    
    class MainApplication : public QQmlApplicationEngine {
        Q_OBJECT
        public:
            MainApplication();
            //this method is used to setup all the ROS functionality we need, before the application starts running
            void run();
            
        //this defines a slot that will be called when the application is idle.
        public slots:
            void mainLoop();
    
        private:
            ros::NodeHandle nh;
    };
    
    #endif // MAINAPPLICATION_H
    

    There are two important parts here. First, we add the line "Q_OBJECT" below the class declaration. This tells the Qt MOC compiler to work its magic here, in order to make this into a valid Qt object.
    Secondly, we add the following lines:

    public slots:
        void mainLoop();
    

    What does this mean? Well, Qt uses a system of "slots" and "signals" rather than the more conventional "listener" system used by many other GUI frameworks. In layman's terms a slot acts similarly to a callback: when an event it has been "connected" to occurs, the function gets called.

  11. Now we want to update the MainApplication.cpp file. Edit it so it looks like the following:
    #include "gui/MainApplication.hpp"
    #include <QTimer>
    
    MainApplication::MainApplication() {
        
    }
    
    void MainApplication::run() {
        
        //this loads the qml file we are about to create
        this->load(QUrl(QStringLiteral("qrc:/window1.qml"))); 
        
        //Setup a timer to get the application's idle loop
        QTimer *timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(mainLoop()));
        timer->start(0);
    }
    

    The main things here are that we load a QML file at the resource path "qrc:/window1.qml" (in a moment we will create this file), and that we set up the timer. How this works is that we create a timer object, and connect the timer object's "timeout" event (signal) to our "mainLoop" slot, which we will create in a moment. We then set the timeout to 0, causing this event to trigger whenever the application is idle.

  12. Finally we want to add the mainLoop function to the end of our MainApplication code; it simply calls ros::spinOnce to get the latest ROS messages whenever the application is idle.
    void MainApplication::mainLoop() {
        ros::spinOnce();
    }
    
  13. In our guiMain.cpp we need to add the following lines at the end of our main function (a complete sketch of the finished file appears at the end of this tutorial):
        QGuiApplication app(argc, argv);
        MainApplication engine;
        engine.run();
        
        return app.exec();
    

    This initialises our QML application, calls our run function, then enters Qt's main loop.

  14. You will also need to add these two #includes to the top of the guiMain.cpp file:
    #include <QGuiApplication>
    #include <gui/MainApplication.hpp>
    
  15. We now have all the C++ code we need to run our first demo. All that remains is writing the actual Qt code. Right click on the "resources" folder, and select "Add New."
  16. In the New File window, select “Qt” and then “QML File (Qt Quick 2)”, and click “Choose.”
  17.  Call the file “window1” and finish.
  18. We want to create a window rather than an item, so change the qml file so it looks like this:
    import QtQuick 2.0
    import QtQuick.Window 2.2
    
    Window {
        id: window1
        visible: true
    
    }
    
  19. Now we will use the visual editor to add a simple image to the window. With the QML file open, click on the far left menu and select “Design” mode. You should get a view like this:
    QT Creator QML Design View
  20. From the left hand "QML Types" toolbox drag an image widget onto the canvas. You should see a rectangle with "Image" written in it on your canvas.
    Qt Creator Design Window with an image added
  21. We need to add an image for it to use. To do this we need a resource file, so switch back to edit mode. Right click on the resources folder and select “Add New.”
  22. Select “Qt>Qt Resource File” and then click “Choose”
  23. Call the resource file “images,” and finish.
  24. This should open the resource file editor. First you need to add a new prefix: select "Add>New Prefix".
    QT Creator Resource File Editor: Select Add>Add Prefix
  25. Change the “prefix” to “/image”.
  26. We now want to add an image. Find an appropriate image file on your computer, then click "Add Files" and navigate to it. If the file is outside your project, qt-creator will prompt you to save it to your resources folder, which is good. You should now have a view that looks like this:
    QT Creator Resource File Editor with an image added
  27. Switch back to the qml file and design view.
  28. Click the image; on the right hand side will be a drop down marked "source". Select the image you just added from it. (Note: if the designer has auto-filled this box but the image preview is not appearing, you may need to select another image and then reselect the one you want.) I used the BLUEsat logo as my image:
    I used the BLUEsat logo for my example
  29. Now we just need to put the QML somewhere we can find it. As in steps 21 to 26, create a new resource file in the resources folder called "qml", and add window1.qml to it under the "/" prefix.
  30. At this point you should be able to build your project. You can build using catkin_make as you normally would, or by clicking the build button in the bottom left corner of the IDE.
  31. To run your ROS node you can either run it as you normally would from the command line using "rosrun", or you can run it from the IDE. To set up running it from the IDE, select "Project" from the left hand menu, then under "desktop" select run.
  32. Under the run heading, select "Add Run Step>rosrun step." You should get a screen like this:
    QT Creator - Project settings screen
  33. Set the package to “gui”, and the target should auto fill with “gui” as well.
  34. Press run in the bottom left corner. You should get something like this (note: depending on where you placed the image you may need to resize the window to see it). Important: as always with ROS, roscore needs to be running for nodes to start correctly.
    Window Displaying BLUEsat Logo
  35. You now have a working GUI application, compiled with catkin and running a ROS spin loop at an appropriate rate. But this is a bit useless without using any information from other ROS nodes in the GUI, so in our next article we will look at streaming video from ROS into Qt. Stay tuned!
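For reference, here is a minimal sketch of how the finished guiMain.cpp might look once the snippets from steps 13 and 14 are combined with the generated template. The node name "gui" passed to ros::init is an assumption for illustration:

#include <ros/ros.h>
#include <QGuiApplication>
#include <gui/MainApplication.hpp>

int main(int argc, char **argv) {
    // standard ROS setup, as generated by the "Basic Node" template
    // (the node name "gui" is assumed for this example)
    ros::init(argc, argv, "gui");

    // initialise the QML application, call our run function, then enter Qt's main loop
    QGuiApplication app(argc, argv);
    MainApplication engine;
    engine.run();

    return app.exec();
}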

 


Posted on by

Welcome back to the second article in our three part series on the BLUEtongue 2.0 Rover's suspension and drive system. In our last post Chris wrote about the mechanical re-design of the system, and in this post we will look at how we designed the high level software architecture for the new drive system. We will also look at some of the challenges we faced along the way.

The System

BLUEtongue 2.0, with its four wheel modules. You can see the front left module is turning.

The BLUEtongue 2.0 Rover has four independently controlled wheels, with the front two wheels also being able to steer. This was a big departure from BLUEtongue 1.0's skid-steer system, which used six wheels and turned by having the wheels on one side of the rover spin in the opposite direction to those on the other side. The system was meant as a stepping stone towards a full swerve drive system on either BLUEtongue or our next rover platform, NUMBAT.

Furthermore, the BLUEsat Off-World Robotics code base is built around the ROS (Robotics Operating System) framework. This framework provides a range of existing software and hardware integrations, and is based around the idea of many separate processes (referred to as nodes) that communicate over TCP-based ROS 'topics' using data structures called 'messages'.
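To make that pattern concrete, here is a minimal sketch of a ROS node: one process that publishes messages on a topic for any interested subscriber. The node and topic names here are made up for illustration:

#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char **argv) {
    ros::init(argc, argv, "example_node");    // this process becomes one ROS node
    ros::NodeHandle nh;
    // advertise a 'topic' that any other node can subscribe to over the network
    ros::Publisher pub = nh.advertise<std_msgs::String>("example_topic", 10);
    ros::Rate rate(1);                        // publish once per second
    while (ros::ok()) {
        std_msgs::String msg;                 // 'messages' are typed data structures
        msg.data = "hello";
        pub.publish(msg);
        rate.sleep();
    }
    return 0;
}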

The ROS framework, along with the nature of BLUEsat as a student society, placed some interesting requirements on our design:

  • The system needed to be able to work with only two wheel modules being able to steer, but as much as possible the code needed to be reusable for a system with four such modules.
  • The system needed to avoid being heavily tied to the motors and embedded systems used on BLUEtongue, as many of them would be changing for NUMBAT.
  • Due to European Rover Challenge (ERC) requirements, the system needed to support user input, and be able to be controlled by an AI.

As a consequence of the above, and to avoid reinventing the wheel (no pun intended), the system needed to use standard ROS messages and conventions as much as possible. It also needed to be very modular to improve reusability.

User Input

The user controls the rover’s speed and rotation using an xbox controller. After some investigation, our initial approach was to have one of the analogue sticks control the rover’s direction, whilst the other controlled its speed. This was primarily because we had found that using a single stick to control direction and speed was not very intuitive for the user.

As ROS joystick analogue inputs are treated as a range between -1 and 1 on two axes, the first version of the system simply used the up/down axis of the left stick as the magnitude applied to a unit vector formed by the position of the right stick. The code looked a bit like this:

double magnitude = joy->axes[SPEED_STICK] * SPEED_CAP;
cmdVel.linear.x = joy->axes[DIRECTION_STICK_X] * magnitude;
cmdVel.linear.y = joy->axes[DIRECTION_STICK_Y] * magnitude * -1; 

(Note that all code in this article uses the ROS standard of x being forwards <-> backwards, and y being port <-> starboard.)

This code produced a geometry_msgs::Twist message that was used by our steering system. However, we found that this system had several problems:

  • It was very difficult to do fine manoeuvring of the rover, because the range of slow speeds corresponded to too small an area on the joystick. However, since we could only control the power rather than the velocity of the motors, we couldn’t simply reduce the overall power of the rover as this would mean it was unable to traverse steep gradients.
  • Physical deadzones on the joysticks meant that driving the rover could be somewhat jerky.
  • The code above had a mathematical problem: because the two direction axes were not normalised, their combined vector could have a length of up to √2, so the rover's maximum speed whilst steering diagonally was higher than could be achieved travelling in a straight line.
  • Having a two axis direction control was unintuitive for the driver, and hard to control accurately.

In response to this, one of our team members (Sean Thompson) developed a new control system that used only one axis on each stick. In this system the left stick was used for power, whilst the right stick was used for (port/starboard) steering. The system also implemented dead-zone handling and exponential scaling, which allowed for better manoeuvring of the rover at low speeds whilst still being able to utilise the rover's full power.

Full source code for this implementation can be found here.
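To give a flavour of the approach, here is a rough sketch of how a dead zone and exponential scaling can be applied to a raw stick axis. This is not the linked implementation; the dead-zone width and exponent are made-up values:

#include <cmath>

// maps a raw joystick axis in [-1, 1] to a scaled output in [-1, 1]
double scaleAxis(double raw) {
    const double DEADZONE = 0.1;   // assumed dead-zone width; ignores small offsets around centre
    const double EXPONENT = 3.0;   // assumed exponent; >1 devotes more stick travel to low speeds
    if (std::fabs(raw) < DEADZONE) {
        return 0.0;
    }
    // re-normalise the remaining travel to [0, 1] so the output ramps up from zero
    double norm = (std::fabs(raw) - DEADZONE) / (1.0 - DEADZONE);
    // exponential-style curve: fine control at low speed, full power at full deflection
    return std::copysign(std::pow(norm, EXPONENT), raw);
}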

The rover uses the following control configuration whilst driving. Diagram Credit: Helena Kertesz.

Steering

The steering system allows the rover to rotate about a point on the line formed by the two rear wheels. In order to achieve this, each wheel must run at a separate speed and the two front wheels must have separate angles. The formulas used to determine these variables are displayed below.

$\theta_{turn} = \operatorname{atan2}(v_y, v_x) \qquad r_{turn} = \dfrac{d}{\sin(\theta_{turn})} \qquad \omega_{rover} = \dfrac{\lVert\vec{v}\rVert}{r_{turn}}$

(Here $\vec{v} = (v_x, v_y)$ is the commanded velocity vector and $d$ is the rover's half-width, the HALF_ROVER_WIDTH_X constant in the code below.)

The rover steers by adjusting both the speed of its wheels and the angle of its front wheels. Diagram Credit: Chris Miller.

In order to accommodate this, a software module was built that converted the velocity vector ($\vec{v}$) discussed in the previous section into the rotational velocities required for each of the wheel modules, and the angles needed for the front two wheels. The system would publish these values as ROS messages in a form compatible with the standard ros_command module, enabling easier testing in ROS's Gazebo simulator and hopefully good compatibility with other ROS systems we might need to use in the future.

The following code was used to implement these equations:

        const double turnAngle = atan2(velMsg->linear.y,velMsg->linear.x);
        const double rotationRadius = HALF_ROVER_WIDTH_X/sin(turnAngle);
        
        // we calculate the point about which the rover will rotate
        // relative to the centre of our base_link transform (0,0 is the centre of the rover)

        geometry_msgs::Vector3 rotationCentre;
        // the x axis is in line with the rear wheels of the rover, as shown in the above diagram
        rotationCentre.x = -HALF_ROVER_WIDTH_X;
        // and the y position can be calculated by applying Pythagoras to the rotational radius of the rover (r_turn) and 
        // half the length of the rover
        rotationCentre.y = sqrt(pow(rotationRadius,2)-pow(HALF_ROVER_LENGTH_Y,2));
        // omega_rover is then calculated by the magnitude of our velocity vector over the rotational radius
        const double angularVelocity = fabs(sqrt(pow(velMsg->linear.x, 2) + pow(velMsg->linear.y, 2))) / rotationRadius;

        //calculate the radiuses of each wheel about the rotation center
        //NOTE: if necessary this could be optimised
        double closeBackR = fabs(rotationCentre.y - ROVER_CENTRE_2_WHEEL_Y);
        double farBackR = fabs(rotationCentre.y + ROVER_CENTRE_2_WHEEL_Y);
        double closeFrontR = sqrt(pow(closeBackR,2) + pow(FRONT_W_2_BACK_W_X,2));
        double farFrontR = sqrt(pow(farBackR,2) + pow(FRONT_W_2_BACK_W_X,2));
        
        //V = wr
        double closeBackV = closeBackR * angularVelocity;
        double farBackV = farBackR * angularVelocity;
        double closeFrontV = closeFrontR * angularVelocity;
        double farFrontV = farFrontR * angularVelocity;
        
        //work out the front wheel angles
        double closeFrontAng = DEG90-atan2(closeBackR,FRONT_W_2_BACK_W_X);
        double farFrontAng = DEG90-atan2(farBackR,FRONT_W_2_BACK_W_X);
        
        //if we are in reverse, we just want to go round the same circle in the opposite direction
        if(velMsg->linear.x < 0) {
            //flip all the motorVs
            closeFrontV *=-1.0;
            farFrontV *=-1.0;
            farBackV *=-1.0;
            closeBackV *=-1.0;
        }
        
        
        //finally we flip the values if we want the rotational centre to be on the other side of the rover
        if(0 <= turnAngle && turnAngle <= M_PI) {
            output.frontLeftMotorV = closeFrontV;
            output.backLeftMotorV = closeBackV;
            output.frontRightMotorV = farFrontV;
            output.backRightMotorV = farBackV;
            output.frontLeftAng = closeFrontAng;
            output.frontRightAng = farFrontAng;
            ROS_INFO("right");
        } else {
            output.frontRightMotorV = -closeFrontV;
            output.backRightMotorV = -closeBackV;
            output.frontLeftMotorV = -farFrontV;
            output.backLeftMotorV = -farBackV;
            output.frontLeftAng = -farFrontAng;
            output.frontRightAng = -closeFrontAng;
            ROS_INFO("left");
        }

Separating steering from the control of individual joints also had another important advantage, in that it significantly improved the testability and ease of calibration of the rover’s systems. Steering code could be tested to some extent in the gazebo simulator using existing plugins, whilst control of individual joints could be tested without the additional layer of abstraction provided by the steering system. It also allowed the joints to be calibrated in software (more on this in our next article).

Joint Control System

In BLUEtongue 1.0, our joint control system consisted of many lines of duplicated code in the main loop of our serial driver node. This code took incoming joystick messages and converted them directly into PWM values to be sent through our embedded systems to the motors. It was developed rapidly and was quite difficult to maintain; with the addition of the feedback loops needed for our swerve drive, the need to provide valid transforms for 3D and automation purposes, and our desire to write code that could be easily moved to NUMBAT, a new solution was needed.

We took an object oriented approach to solving this problem. First, a common JointController class was defined: an abstract class that handled subscribing to the joint's control topic, calling the joint's update functions, and providing a standard interface for use by our hardware driver (BoardControl in the diagram below) and transform publisher (part of JointsMonitor). This class would be inherited by a class for each type of joint, where the control loop for that joint type could be implemented (for example, the drive motors' control algorithm was implemented in JointVelocityController, whilst the swerve motors were handled by JointSpeedBasedPositionController). A rough sketch of this interface follows the diagram below.

The BLUEtongue 2.0 Rover's joint system consisted of a JointsMonitor class, used to manage timings and transforms, as well as an abstract JointController class that was used to implement the different joint types with a standard interface. Diagram Credit: Harry J.E Day, with amendments by Simon Ireland and Nuno Das Neves.

In addition, a JointsMonitor class was implemented. This class stored a list of joints, and published debugging and transform information at set increments. This was a significant improvement in readability over our previous ROS_INFO based system, as it allowed us to quickly monitor the joints we wanted. The main grunt work of this class was done in the endCycle function, which was called after the commands had been sent to the embedded system. It looked like this:

// the function takes in the time the data was last updated by the embedded system
// we treat this as the end of the cycle
void JointsMonitor::endCycle(ros::Time endTime) {
    cycleEnd = endTime;
    owr_messages::board statusMsg;
    statusMsg.header.stamp = endTime;
    ros::Time estimateTime = endTime;
    int i,j;
    // currentStateMessage is a joint state message; we publish the state of all the joints
    currentStateMessage.velocity.resize(joints.size());
    currentStateMessage.position.resize(joints.size());
    currentStateMessage.effort.resize(joints.size());
    currentStateMessage.name.resize(joints.size());
    
    // we look through each joint and estimate its transform for a few intervals in the future
    // this improves our accuracy as our embedded system didn't update fast enough
    for(i =0; i < numEstimates; i++, estimateTime+=updateInterval) {
        currentStateMessage.header.stamp = estimateTime;
        currentStateMessage.header.seq +=1;
        j =0;
        for(std::vector<JointController*>::iterator it = joints.begin(); it != joints.end(); ++it, j++) {
            jointInfo info = (*it)->extrapolateStatus(cycleStart, estimateTime);
            publish_joint(info.jointName, info.position, info.velocity, info.effort, j);

        }
        statesPub.publish(currentStateMessage);
    }
    // we also publish debugging information for each joint
    // this tells the operator where we think the joint is
    // how fast we think it is moving what PWM value we want it to be at. 
    for(std::vector<JointController*>::iterator it = joints.begin(); it != joints.end(); ++it, j++) {
            jointInfo info = (*it)->extrapolateStatus(cycleStart, endTime);
            owr_messages::pwm pwmMsg;
            pwmMsg.joint = info.jointName;
            pwmMsg.pwm = info.pwm;
            pwmMsg.currentVel = info.velocity;
            pwmMsg.currentPos = info.position;
            pwmMsg.targetPos = info.targetPos;
            statusMsg.joints.push_back(pwmMsg);

    }
    debugPub.publish(statusMsg);  
    
    
}

Overall this system proved to be extremely useful: it allowed us to easily adjust code for all motors of a given type, and to reuse code when new components were added. In addition, the standardised interface allowed us to quickly debug problems (of which there were many), and easily add new functionality. One instance where this came in handy was with our lidar gimbal. The initial code to control this joint was designed to be used by our autonomous navigation system, but we discovered that for some tasks it was extremely useful to mount a camera on top and use the gimbal to control the angle of the camera. Due to the existing standard interface it was easy to add code to our joystick system to enable this, and we didn't need to make any major changes to our main loop, which would have been risky that close to the competition.

Conclusion

Whilst time consuming to implement and somewhat complex, this system enabled us to have a much more manageable code base. This was achieved by splitting the code into separate ROS nodes that supported standard interfaces, and using an OO model for implementing our joint control. As a result it is likely that this system will be used on our next rover (NUMBAT), even though the underlying hardware and the way we communicate with our embedded systems will change significantly.

Next in this series you will hear from Simon Ireland on the embedded systems we needed to develop to get position feedback for a number of these joints, and some of the problems we faced.

Code in this article was developed for BLUEsat UNSW with contributions from Harry J.E Day, Simon Ireland and Sean Thompson, based on the BLUEtongue 1.0 steering and control code by Steph McArthur, Harry J.E Day, and Sam Scheding. Additional assistance in review and algorithm design was provided by Chris Squire, Chris Miller, Yiwei Han, Helena Kertesz, and Sebastian Holzapfel. Full source code for the BLUEtongue 2.0 rover as deployed at the European Rover Challenge 2016, as well as a full list of contributors, can be found on GitHub.

