Tag Archives: IoT

IoT Course Week 12 – Remote Firmware Updates

Last week, we explored Load Testing of HTTP and MQTT and how to measure the scalability of your system.

This Week

This week, we’ll continue our focus on non-functional requirements with Remote Firmware Updates. A typical desk lamp, or other non-IoT device, will have the same functionality 10 years after it leaves the factory. The functionality and value of a “smart” device, however, can increase over time as new software functionality is deployed.

As students have experienced, updating the functionality of the Web is relatively straightforward: deploying new code to a server updates the web application. Similarly, as new iOS and Android mobile capabilities are developed, new Apps are published on the iTunes and Google Play stores. But how do you update the software/firmware on your smart device? There could be hundreds of thousands, or even millions, of devices distributed across the country or world, and each embedded system is slightly different. For Week 12, we show students how to remotely update LAMPi.


Debian Packages

Since we are using Raspbian, a Debian-based Linux distribution, for LAMPi, we settled on the Debian package system. It addresses the packaging and installation of software, as well as the distribution and security (authentication and integrity) of those packages.

Create Folder Structure

First, we need an executable to package. We’re going to make a package called “hi” that contains an executable also called “hi”. Let’s make a directory to build our deb package in:

cloud$ mkdir -p ~/pkg/hi/{DEBIAN,opt/hi}
cloud$ cd ~/pkg/hi/

Viewed with tree (you can install tree through apt-get), this folder structure should look like this:

pkg
└── hi
    ├── DEBIAN
    └── opt
        └── hi

So ~/pkg/hi is the directory that holds everything we want to package.

  • DEBIAN is a special folder that contains all the configuration & metadata for the debian package
  • Everything else in ~/pkg/hi will be installed relative to the root of the filesystem. So ~/pkg/hi/opt/hi will install into /opt/hi on the system on which the package is installed. If we wanted to install some supervisor scripts with our package, for example, we could make a ~/pkg/hi/etc/supervisor/conf.d/ directory, and files in it would install into /etc/supervisor/conf.d.

Create Executable

Now let’s build an executable. When the package is installed, we’ll want the executable to end up in /opt/hi/, so create it as ~/pkg/hi/opt/hi/hi:

#!/usr/bin/env python

import os

version = 'Unknown'
version_path = os.path.join(os.path.dirname(__file__), '__VERSION__')
with open(version_path, 'r') as version_file:
    version = version_file.read()

print('Hello Deb! Version {}'.format(version))

Let’s create a file to hold the version of our program. Create ~/pkg/hi/opt/hi/__VERSION__ with the following contents (no whitespace, no newline):

0.1

Save and close both files, mark “hi” as executable, then run it:

cloud$ cd ~/pkg/hi/opt/hi/
cloud$ sudo chmod a+x hi
cloud$ ./hi

Hello Deb! Version 0.1

Create Package Metadata

Now let’s build a control file to describe our package.

Create a file at ~/pkg/hi/DEBIAN/control, replacing {{YOUR_NAME}} with your name:

Package: hi
Architecture: all
Maintainer: {{YOUR_NAME}}
Depends: python, python-dev, python-pip
Priority: optional
Version: 0.1
Description: Hello, Deb!
Section: misc

Note that these metadata files are whitespace-sensitive and do not allow extra empty lines, so be careful while editing.
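
Because a stray blank line or mistyped field will break the build, a quick sanity check is easy to script. This is just an illustrative helper (our control file is a single paragraph with simple `Field: value` lines; the real Debian parser accepts more, such as continuation lines):

```python
# Minimal sanity check for a simple, single-paragraph DEBIAN/control file.
# Illustrative only -- not part of the dpkg toolchain.

def check_control(text):
    """Return a list of problems found in a simple control file."""
    problems = []
    for i, line in enumerate(text.splitlines(), start=1):
        if not line.strip():
            problems.append('line {}: empty line not allowed'.format(i))
        elif ':' not in line:
            problems.append('line {}: missing "Field: value" colon'.format(i))
        elif line[0].isspace():
            problems.append('line {}: unexpected leading whitespace'.format(i))
    return problems

print(check_control('Package: hi\nArchitecture: all\nVersion: 0.1\n'))  # []
print(check_control('Package: hi\n\nVersion 0.1\n'))  # flags lines 2 and 3
```

Running this on your control file before `dpkg-deb --build` can save a confusing build error later.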

Finally, we need to fix file permissions and make root the owner of the entire directory structure. These permissions will travel with the package, so if we don’t do this, the files will be installed with bad permissions.

cloud$ sudo chown -R root:root ~/pkg/hi/

Note that after you do this, further edits to files in this directory will require sudo.

This should be all we need to build our deb package, so let’s go:

cloud$ cd ~/pkg/
cloud$ dpkg-deb --build hi

You should now have a hi.deb in ~/pkg/.
You’ve just created a Debian Package!

Setting up a Debian Repository

We use reprepro, an easy-to-set-up Debian package repository manager, and show students how to publish their packages to that repository, add the repository to LAMPi, and then install the package on LAMPi from it.

Automating Deployment

Every time we change our hi package, there are several things we need to do: increment the version number, rebuild the package, and upload it to our package repository. We teach the students how to build an automated script for these steps so we don’t have to run the commands manually each time. The packaging and deployment script also acts as living documentation of the release process, so future maintainers of the project don’t need to start from scratch. We use a Python module called bumpversion to automate the version updates.
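
In class we use bumpversion, but the core of the version-bump step is simple enough to sketch. A hypothetical stand-in (bumpversion does this for real and can also rewrite the Version: field in DEBIAN/control):

```python
# Sketch of the version-bump step: increment the least-significant
# component of a dotted version string, as in __VERSION__.

def bump(version):
    """Increment the last numeric component: '0.1' -> '0.2'."""
    parts = version.strip().split('.')
    parts[-1] = str(int(parts[-1]) + 1)
    return '.'.join(parts)

print(bump('0.1'))   # 0.2
print(bump('1.9'))   # 1.10
```

A deployment script would apply this to __VERSION__ and the control file, run `dpkg-deb --build`, and upload the result to the reprepro repository.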

Finally

After walking through the above creation and deployment of a Debian package, setting up the reprepro repository, and installing the hi package on LAMPi, the students’ assignment for Week 12 was to demonstrate their understanding by applying these tools to the LAMPi code. The assignment required them to package the LAMPi UI application, the Bluetooth service, and the lamp hardware service into a single package, including maintainer scripts that run before installation (preinst), after installation (postinst), on removal, and so on, and to demonstrate versioning of the package in class.

Next Week – IoT platforms

IoT Course Week 7: Platform Security Part B (#3)

Internet of Things Course

Welcome to Week 7, part B #3

Note: This is part 3 of a 3 part series about assignment 7B in the course. This particular assignment was quite information dense, and if you are unfamiliar with some of the pieces involved, things can be quite confusing.

In Part #2 we covered how the authentication and ACL mechanism works and how we integrated that with the Django web app. Now, we’ll cover how we deal with auth and ACL checking through the WebSockets support on Mosquitto.

Real-time Feedback of LAMPI in our Web App

In order to have real-time feedback in the web app, we are using the Paho MQTT JavaScript library and WebSockets to provide data to the Human User that has the web app open in their web browser. The WebSocket traffic is being supplied by Mosquitto, not our Django app. So, how do we secure this so that you can’t just go in and twiddle some data to get access to someone else’s LAMPI devices?

Well, the first thing to note, since the WebSocket traffic goes through Mosquitto, is what Mosquitto does for the Auth integration with the Django app. Mosquitto expects there to be enough data in the WebSocket request to satisfy the /auth and /acl calls that we reviewed in Part #2. To recap, that is username, password, and topic (for /acl you also need clientid and access type).

That seems easy enough, but we don’t want to send the user’s actual username and password through the wire (or exposed in the JavaScript) on every call. How do we solve this problem then? We use a collapsed UUID-based Auth token from Django that is unique to the current Human User session in the web app. By using the Auth token as the username and a blank password field, we can validate the request by looking up the Human User session in the Django app to be able to respond appropriately to the Mosquitto Auth Plugin call as to the authorization of that particular Pub/Sub request from WebSockets.
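
A “collapsed” UUID is just the hex digits of a UUID with the dashes removed. Generating such a token on the Django side might look something like this (the session bookkeeping is only sketched in comments, since those details vary):

```python
import uuid

# Generate a collapsed (dash-free) UUID to use as a per-session auth token.
token = uuid.uuid4().hex
print(token)  # e.g. 32 hex characters, no dashes

# The browser then supplies this token as the MQTT username (with a blank
# password); when Mosquitto's auth plugin calls back into Django on /auth
# or /acl, Django looks the token up to find the Human User session.
```

Because the token is scoped to a single web session, leaking it is far less damaging than leaking the user’s real credentials.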

So, to make this easy, we wrote our Django view template to inject a snippet of JavaScript that sets up a few variables, one of which is our Auth token, which we then leverage in the JavaScript code that coordinates the WebSocket data calls with UI manipulation. That code passes the token, via the Paho JavaScript client, in the username field, so that the Django app can determine an authorization response when Mosquitto asks. This lets us restrict access to topics by user and device, so that you can’t easily sniff other Human Users’ data. Easy peasy lemon squeezy, right?

What more security do we need?

Well, for starters we should probably TLS encrypt the local backend HTTP traffic between the Cloud Mosquitto broker auth plugin and the Cloud Django web app. While we transfer this traffic over the local loopback device on the server (which means the IP traffic never exits the physical Ethernet device on the server), if you somehow gained access to the server, you could sniff the traffic on that interface. At that point though, we most likely have bigger security problems.

Having the Django auth token being passed around may not be the best, but it is similar to how many REST-based APIs work in that once you authenticate, you then use a token for all subsequent web requests.

Where else do you think the security of this architecture could be improved?

Next Week

Next week, we take a step back from this foray into security for something entirely new: mobile device control. We will forge a path into the unknown as we begin to implement Bluetooth Low Energy connectivity from the smartphone in your pocket directly to the LAMPI hardware.

IoT Course Week 7: Platform Security Part B (#2)

Internet of Things Course

Welcome to Week 7, part B #2

Note: This is part 2 of a 3 part series about assignment 7B in the course. This particular assignment was quite information dense, and if you are unfamiliar with some of the pieces involved, things can be quite confusing.

In Part #1 we covered the overall solution. Now let’s delve into how we are handling authorization for the LAMPI device(s) and human users of the Django web app. Since the primary interface between LAMPI and the Cloud is MQTT (and hence Mosquitto), let’s look at that first.

Mosquitto’s Authentication Plugin

Mosquitto supports an extensible plugin architecture, for both user authentication and Access Control Lists (ACL). ACLs allow for fine-grained permission models. When applied to Pub/Sub systems, they allow the control of which users can publish to particular topics and which users can subscribe to particular topics.

For these purposes, we will be using the mosquitto-auth-plug project. It supports multiple backends, such as MySQL, Redis, Postgres, and HTTP, to name a few. In order to integrate with the Django app, we will use the HTTP backend.

With the plugin configured for HTTP, whenever Mosquitto needs to answer an authentication or ACL question, it will make an HTTP request. What hostname, port, and URLs it uses for the HTTP backend is configurable. The HTTP server then responds with either an HTTP status of 200 (OK), approving the request (e.g., to allow username XYZ to connect with password ABC), or returns a 403 (Forbidden). There is no actual data in either response, just the HTTP response code.
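
The contract is easy to mimic: the response body doesn’t matter, only the status code. A toy version of the decision behind the /auth endpoint (the user table here is hypothetical; the real check consults Django’s user model):

```python
# Toy model of the auth plugin's HTTP contract: the backend answers each
# /auth question with a bare status code -- 200 allows, 403 denies.
# The USERS table is a made-up stand-in for the Django user store.

USERS = {'lamp-owner': 'secret'}

def auth_status(username, password):
    """Return the HTTP status the /auth endpoint would send back."""
    if USERS.get(username) == password:
        return 200
    return 403

print(auth_status('lamp-owner', 'secret'))   # 200
print(auth_status('lamp-owner', 'wrong'))    # 403
```

Wrapping this function in any HTTP framework (Django, in our case) is all the plugin requires.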

The LAMPI “User”

So, how does the Authentication and Authorization work in regards to LAMPI? Well, the Authentication is basically handled by the TLS certificate. The name of the LAMPI device is the Common Name of the TLS certificate on the device that is used when initiating the MQTT connection to the Mosquitto broker instance in the cloud.

When Mosquitto consults the auth plugin, the plugin creates a POST request to the configured location containing the following fields: username, password, topic, and acc. In the case of LAMPI, when making a call to /auth, the password field is blank (the device having already authenticated via TLS), as are the topic and acc fields. If the Django app approves the request when LAMPI connects to the Cloud Mosquitto broker, it returns a 200 HTTP response to Mosquitto.

What about Pub & Sub requests? That is a bit different. When the Mosquitto broker on LAMPI pushes topic requests, through the bridge, to the Mosquitto broker in the Cloud, the auth plugin may defer the ACL check depending on the type of request.

If the LAMPI Mosquitto broker has requested to subscribe to a topic, the ACL of that subscription isn’t immediately checked by the Mosquitto broker in the Cloud. The Cloud Mosquitto broker merely records the subscription request from the LAMPI Mosquitto broker. The Mosquitto broker on LAMPI has no knowledge if it is in fact allowed to subscribe or not.

When a message is published on a topic that the LAMPI Mosquitto broker is subscribed to, the Mosquitto broker in the Cloud sends an /acl request via the auth plugin to determine if the LAMPI Mosquitto broker is in fact allowed to receive that published message. Once again, the auth plugin creates a POST request with the following fields: username, password, topic, acc, and clientid. The topic contains the full path of the message, acc is 1 for a Subscription check, and clientid identifies the client requesting the action. The Django app uses this information to determine if the LAMPI Mosquitto broker is the correct recipient of the published message. If it is, it returns a 200 HTTP response to the Cloud Mosquitto broker; otherwise, a 403 HTTP response is returned.

If the LAMPI Mosquitto broker attempts to Publish a message to the Cloud Mosquitto broker, an ACL check will be performed immediately via the auth plugin. Once again, the Cloud Mosquitto broker will make a call to /acl on the Django app. As with the Subscription check earlier, the same data will be POSTed to the Django app with the exception of acc being a value of 2 (for Publish). If the LAMPI Mosquitto broker is allowed to push that message up to the Cloud Mosquitto broker, then Django would return a 200 HTTP response.
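
Putting the two cases together, the /acl decision the Django app has to make can be sketched as a pure function. The topic layout (devices/&lt;name&gt;/…) and constant names below are assumptions for this sketch; the real check consults the device registrations in Django:

```python
# Illustrative /acl decision: acc is 1 for a Subscription check and 2 for
# a Publish check, matching the fields POSTed by the auth plugin.
# The 'devices/<name>/...' topic scheme is an assumption for this sketch.

ACC_SUBSCRIBE = 1
ACC_PUBLISH = 2

def acl_status(username, topic, acc):
    """Return 200 if this client may receive (1) or publish (2) on topic."""
    if acc not in (ACC_SUBSCRIBE, ACC_PUBLISH):
        return 403
    # A device may only touch topics under its own name.
    if topic.startswith('devices/{}/'.format(username)):
        return 200
    return 403

print(acl_status('lamp01', 'devices/lamp01/state', ACC_PUBLISH))    # 200
print(acl_status('lamp01', 'devices/lamp02/state', ACC_SUBSCRIBE))  # 403
```

Keeping the decision a pure function of (username, topic, acc) is what makes it safe for Mosquitto to cache the answers, as described below.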

This sure does seem like a lot of work for the Cloud Mosquitto broker. It would be, if it made these auth calls every time a message came through. This is why Mosquitto actually caches the responses to these Auth & ACL requests for a time. This allows Mosquitto to quickly evaluate ACL checks internally which gives it a very high message throughput. With this out of the way, let us look at the role of the Human being using the Django Web Application.

The Human “User” Interacting with the Django Web App

In order to let a human user into the Django web app, we are using Django’s built-in user authentication mechanism. The idea is to have a simple system that authenticates a human user via username and password, as well as an authorization mechanism that associates human users with device users, the device users in this case being one or more LAMPIs registered to their respective human users.

The registration of a LAMPI to a human user account in the Django app is used to determine whether that human user, logged in via the Web, has access to publish or subscribe to topics from LAMPIs connected to the Cloud broker. The important point to note, however, is that the web page contains no static device data. All of the data the human user interacts with via the web interface is generated in real time via the WebSockets integration with the Cloud Mosquitto broker.

So how do we authenticate and authorize the human user via WebSockets to see messages from Mosquitto? We’ll find out next week in the final installment of this series on Assignment 7B.

IoT Course Week 7: Platform Security Part B (#1)

Internet of Things Course

Welcome to Week 7, part B #1

Note: This is part 1 of a 3 part series about assignment 7B in the course. This particular assignment was quite information-dense, and if you are unfamiliar with some of the pieces involved, things can be a bit confusing.

In the previous week, we added some security to the system with Transport Layer Security (TLS). This encrypts the web, websockets, and MQTT communications. It also allows clients to authenticate that the servers are who they say they are, and Lampi Devices to authenticate the MQTT bridge connection to the EC2 MQTT Broker. This week we will be focusing on a few remaining security gaps in the model, including:

  • protecting the Cloud MQTT broker namespace
  • limiting which devices Django users can communicate with at the MQTT level
  • limiting how and which topics Lampi devices can map into the global MQTT topic namespace on EC2 (a compromised device could otherwise wreak havoc on the entire system)
  • requiring authentication of users connecting to the MQTT websockets interface

Where do we begin?

There are a few different ways to skin this security cat. We decided that integrating Mosquitto with Django would provide us with a way to have a single source of truth for Human User authentication, Human User authorization, and Device User authorization. Mosquitto is already configured to authenticate the LAMPIs via the TLS certificates we configured in the previous post. Also, we thought that a REST interface would be easiest to implement for the integration. While we used Django, really any system that supports REST would work with the architecture we’re outlining today.

The high-level components are the Django web app, the Cloud Mosquitto broker with its authentication plugin, and the LAMPI devices.

What does this look like overall?

There are a few moving parts to the solution we decided to go with.

You’ll notice there will be a plugin to Mosquitto that we’ll leverage to connect the Django web app with the MQTT system.

Next post will delve into the ACL support for LAMPI and the Human Users of the system. Stay tuned!

LeanDog and CWRU Team Up For Connected Devices Workshop

Over the next few years, it is predicted that billions of devices will become increasingly interconnected and possess the ability to transfer data across networks without the need for human interaction. This environment of interconnectivity is known as the Internet of Things (IoT). In response to the emergence of this approach to product design, an unusual partnership between academia and industry has formed to prepare students to thrive in this new environment. Nick Barendt and the LeanDog Studio have teamed up to offer a Connected Devices Workshop for Case Western Reserve University (CWRU) students during the Fall semester. Students will get practical experience building a proof-of-concept system: physical device, web integration, and mobile development.

Nick is a partner at LeanDog and an adjunct faculty member in the Department of Electrical Engineering and Computer Science at CWRU.  He helps lead the Design and Delivery Studio at LeanDog, building custom software products for clients.

Students will tackle hands-on assignments, learning the Unix command-line shell, network communication (including Bluetooth Low Energy), embedded systems, cloud/web services, essential User Experience design, and native mobile development.

The course is unique in the breadth of material being covered – the goal being to provide students with a systems-level view of Connected Devices, gaining experience with each component in a complicated system. Students are expected to leave the class with the ability to build their own proof of concept for a new product or service. Additionally, students will develop enough experience and confidence to dive deeper into any of the technologies involved (e.g., web services, mobile development, etc.).

Given the ambitious syllabus, students will work in two-person teams, a practice known in the engineering and software development world as “pairing.”

“As Agile and Lean practitioners, we believe strongly in the value of pairing. Pairing with another student gives them the ability to share that journey with someone else who may have more experience in a technology area than they do. This allows the student to learn faster. Two heads are better than one when you are trying to solve a complicated problem,” said Nick Barendt, head of the Studio at LeanDog.

The broad spectrum of technologies covered in the class will be matched by the backgrounds of the students in attendance, whose areas of study range from computer science, to electrical engineering, to computer engineering. By working together in rotating pairs, students will learn to collaborate successfully with peers of different backgrounds and experience levels. This practice echoes the same cross-functional team environments they will likely encounter in their professional careers.

Though Nick is the instructor of record for the course, he is pulling heavily on the depth and breadth of experience of the LeanDog Software Delivery and Design Studio to help provide this opportunity to students.  The LeanDog Studio has exceptional expertise in Agile/Lean design and development across Web, Mobile, Cloud, DevOps, and comprehensive User Experience.  Each lecture and assignment is being designed collaboratively with this cross-functional team.

About LeanDog

LeanDog is an Agile consulting, training and software design company that is redefining smart design and delivery, while helping to transform our clients’ organizations.