IoT Course Week 10: Analytics


Last week we got our feet wet with an introduction to Bluetooth Low-Energy on iOS. This week, we’ll dive into analytics, provide business value, and make some pretty graphs.

Why Analytics?

When building a new product, there are always a variety of options on the table with which to improve that product. At LeanDog, we practice a software development cycle that includes short sprints coupled with an open and honest feedback loop that provides us with the information we need to make informed decisions about where to focus our efforts and resources. This allows us to make sure that we are building the right thing the first time and minimize the amount of risk inherent in the process.

Until relatively recently, collecting feedback about a product in-use was a long process that required either direct observation or careful reading of written user reviews and complaints. Due to the complex and inconsistent nature of users, collecting strong quantitative data about a product experience can be difficult. In a now-infamous incident from 2013, a New York Times journalist wrote a negative review of the Tesla Model S, only to have the car’s onboard analytics refute many of his claims. It is not uncommon for a customer to report one thing, but end up doing something entirely different, and your user experience process will need to account for these inconsistencies. One of the many ways we solve that problem is through the use of analytics platforms and reporting tools.

In addition to uncovering potential pitfalls, analytics are a powerful way for product owners, designers, and developers to understand how a product is actually used. For companies that make physical devices, this provides insights that are difficult to collect otherwise. Imagine receiving a coupon in the mail for a smart GE light bulb you love that’s nearing the end of its lifetime. The only way GE could possibly anticipate that your current bulb is about to go out (without calling you every day to ask how often you turned it on in the last 24 hours) is through analytics. With analytics, you get an avenue outside of sales to start to figure out which features and products your users actually love, which have problems or aren’t worth further development, and even identify disengaged users for retention campaigns.

Enter Keen IO
For this class, we will use a popular analytics platform called Keen IO. Keen is a general-purpose tool, not locked into web, mobile, or embedded specifics. It has a large number of supported software development kits (SDKs), including Ruby, iOS, Python, and .NET. It also offers a generous free tier, which is perfect for the amount of traffic currently being driven by students’ LAMPi systems. Registering and sending an event in Python is as simple as this:

from keen.client import KeenClient

# The project id and write key come from your Keen project settings;
# the values here are placeholders.
client = KeenClient(
    project_id="YOUR_PROJECT_ID",
    write_key="YOUR_WRITE_KEY"
)

client.add_event("sign_ups", {
    "username": "lloyd",
    "referred_by": "harry"
})
This will send an event containing the signup data to Keen’s database. Now back at LAMPi headquarters we can track those signups on a giant web dashboard:

var series = new Keen.Query("count", {
  eventCollection: "sign_ups",
  timeframe: "previous_7_days",
  interval: "daily"
});

client.draw(series, document.getElementById("signups"), {
  chartType: "linechart",
  label: "Sign Ups",
  title: "Sign Ups By Day"
});

Keen also provides a number of ways to pull out the analytics data and do additional processing to get exactly the view we want. For example, we could build a tree of who our top referrers are and what their “networks” look like:
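As a small sketch of that kind of post-processing, assuming the raw sign-up events have already been extracted from Keen (the helper function and the sample data below are hypothetical, not part of the Keen SDK):

```python
from collections import defaultdict

def build_referral_tree(events):
    """Group sign-up events into a referrer -> referred-users mapping."""
    tree = defaultdict(list)
    for event in events:
        referrer = event.get("referred_by")
        if referrer:
            tree[referrer].append(event["username"])
    return dict(tree)

# Events shaped like the ones sent to the "sign_ups" collection above
events = [
    {"username": "lloyd", "referred_by": "harry"},
    {"username": "mary", "referred_by": "harry"},
    {"username": "sea_bass", "referred_by": "lloyd"},
]

print(build_referral_tree(events))
# → {'harry': ['lloyd', 'mary'], 'lloyd': ['sea_bass']}
```

From a mapping like this, rendering the actual tree visualization is a front-end concern.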


What’s next?
Analytics can also provide a leading indicator to help model the number of users that will be pounding on your infrastructure. To learn more about how to address that issue, join us next week when we talk about load testing!

IoT Course Week 9: Introduction to Bluetooth Low Energy


Internet of Things Course

To continue our goal of providing industry experience to the students of EECS397 Connected Devices, this week we will be diving deep into Bluetooth LE on iOS.


Last week students completed setting up a UI on iOS and Android that mirrored the interactions present on the LAMPi display and the web. The goal for this week is to connect those pieces over Bluetooth LE.

CoreBluetooth and Bluetooth 4.0

With the release of iOS 5 and the iPhone 4S, Bluetooth LE became, and continues to be, one of the most common methods of short-range data communication. CoreBluetooth is the framework that Apple provides to developers to interact with Bluetooth LE hardware and peripherals. This is useful, as the current Bluetooth LE spec weighs in at over 2,000 pages in PDF form.

Communication with LAMPi through CoreBluetooth can be broken into a four-step process:

  1. Scan for the LAMPi device (using a provided array of service IDs).
  2. Connect to the discovered service (lamp-service).
  3. Probe its characteristics (hsv, brightness, on/off).
  4. Subscribe to notifications, so the app is notified whenever a property is written.

Scanning for LAMPi

Students began the class by making an update to their LAMPi’s. Each team was given a BlueGiga BLED112 to plug into their Raspberry Pis, as well as updated Python services which allow the LAMPi to act as a Generic Attribute Profile (GATT) server. What the GATT server does is broadcast a number of available services to any BLE devices nearby that care to listen. In the case of the LAMPi, there is only one service being exposed, which is aptly called the Lampi Service.


The service being advertised from the LAMPi includes a device id, which students use to identify their unique LAMPi in a classroom containing many more. Once discovered, it is time to connect.

- (void)startScanningIfEnabled {
    if (self.shouldConnect) {
        [self.delegate onLoading:@"Searching for sensor..."];
        NSArray *services = @[[CBUUID UUIDWithString:LAMPI_SERVICE_UUID]];
        [self.bluetoothManager scanForPeripheralsWithServices:services options:nil];
    }
}



Connecting to a Peripheral


CoreBluetooth abstracts away much of the detail required in making a connection to a BLE peripheral. When a peripheral is discovered, our code will immediately attempt to connect. If that connection is successful, we search through the set of services that exist on the peripheral, looking for one that is recognized.


- (void)centralManager:(CBCentralManager *)central
 didDiscoverPeripheral:(CBPeripheral *)lampPeripheral
     advertisementData:(NSDictionary *)advertisementData
                  RSSI:(NSNumber *)RSSI {

    // Hold a reference to the discovered peripheral, then connect to it
    self.lampPeripheral = lampPeripheral;
    [self.bluetoothManager connectPeripheral:self.lampPeripheral options:nil];
}


- (void)centralManager:(CBCentralManager *)central
  didConnectPeripheral:(CBPeripheral *)lampPeripheral {

    NSLog(@"Peripheral connected");
    [self.delegate onLoading:@"Found lamp! Reading..."];
    lampPeripheral.delegate = self;

    // Search for a known service
    for (CBService *service in lampPeripheral.services) {
        if ([service.UUID isEqual:[CBUUID UUIDWithString:LAMPI_SERVICE_UUID]]) {
            self.lampService = service;
        }
    }
}


Services and Characteristics

At this point, we are connected to a Lamp Service, which is now providing us with a collection of characteristics. Characteristics are how communication in BLE works. To make a comparison to software, Services can be thought of as Classes while Characteristics are more like the properties on an Object. Characteristics support four different actions: read, write, notify and indicate. While read and write are arguably fairly straightforward, notify and indicate both have to do with a subscription flow that we will use heavily in the iOS application.
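To make the Class/property analogy concrete, the Service/Characteristic relationship can be sketched in plain Python. This is not a BLE API; the class names and the notify mechanics here are purely illustrative:

```python
class Characteristic:
    """Like a property on an object: it holds a value, supports read and
    write, and can notify subscribers when the value changes."""
    def __init__(self, name, value=None):
        self.name = name
        self.value = value
        self.subscribers = []

    def read(self):
        return self.value

    def write(self, value):
        self.value = value
        for callback in self.subscribers:  # the notify/indicate flow
            callback(self.name, value)

    def subscribe(self, callback):
        self.subscribers.append(callback)


class Service:
    """Like a class: a named collection of characteristics."""
    def __init__(self, name, characteristics):
        self.name = name
        self.characteristics = {c.name: c for c in characteristics}


lamp = Service("lamp-service", [
    Characteristic("hsv"), Characteristic("brightness"), Characteristic("on/off"),
])

changes = []
lamp.characteristics["hsv"].subscribe(lambda name, value: changes.append((name, value)))
lamp.characteristics["hsv"].write((0.5, 1.0))
print(changes)  # → [('hsv', (0.5, 1.0))]
```

The subscription half of this model is exactly what the iOS app leans on below: write on the lamp side, callback on the phone side.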


Subscribing to Characteristics

Because LAMPi has both on device and cloud controls, we want to be able to track the state of the LAMPi in real time while the iOS app is running. If a user were to change the color of LAMPi by using the Raspberry Pi UI, the Bluetooth service would send a notification to iOS that the HSV characteristic had been changed.  The following block of code is an example of a discovered Characteristic being initialized. It reads the current hue and saturation of the HSV Characteristic, and then tells the app to subscribe to the Notify value (the notification) of the lamp peripheral.


[self.lampPeripheral readValueForCharacteristic:self.hsvCharacteristic];

if (self.hsvCharacteristic.value != nil) {
    float fHue = [self parseHue:self.hsvCharacteristic.value];
    float fSat = [self parseSaturation:self.hsvCharacteristic.value];
    [self.delegate onUpdatedHue:fHue andSaturation:fSat];
}

[self.lampPeripheral setNotifyValue:YES forCharacteristic:self.hsvCharacteristic];


At this point, when the LAMPi HSV Characteristic changes, CoreBluetooth will call a delegate method that is triggered from the setNotifyValue line.


- (void)peripheral:(CBPeripheral *)peripheral
didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic
             error:(NSError *)error;


It is in this block of code that the HSV value is updated in the app, and logic to refresh the UI is executed.

Fun Fact: Origins of Bluetooth Name

As a bonus for making it this far: did you know that the name “Bluetooth” comes from Harald Bluetooth, a c. 970 King of Denmark? In fact, the Bluetooth logo combines the Nordic runes for H (ᚼ) and B (ᛒ), Harald’s initials.

Team Building: Diversity Uncovers What Experience Can't



Diversity tends to bring a broader perspective and a broader perspective is critical to strong team building. A good friend of mine recently told me a story that illustrates just how important diversity (in skill set, age, gender, background, etc.) is to building successful teams and how diversity finds things experience alone will not.

My friend’s daughter recently started an internship as a mechanical engineer with a well-respected global company that manufactures plumbing equipment. Her first assignment was with a group of very talented and experienced engineers who were working on a defect issue with one specific line of faucets. Returns were extremely high and customer ill-will toward the company brand was growing.

The faucet sold well because of its style and features; however, defects on the model were abnormally high. The engineering team, as all good experienced teams would do, had been poring over every aspect of the manufacturing process: looking at packaging, looking at suppliers’ parts, doing detailed reviews of designs and design specs, and assembling and disassembling loads of units right off the line trying to find the issue. My friend’s daughter, being new to faucets and having never installed one before, grabbed a finished product right off the line, sat down with the instructions, and proceeded to put the faucet together according to the steps provided.

No one else had thought to do this! To her amazement, the instructions walked a customer through a group of steps which not only broke the faucet, it voided the warranty as well. The product was mechanically sound and functioned perfectly when assembled properly; however, the average non-plumber customer follows instructions and doesn’t rely on a mechanical engineering degree or years of experience working with plumbing to install their own faucets.

An issue that had cost the company a considerable amount of money, capacity, and consumer goodwill was solved by a rookie mechanical engineering intern without her utilizing her engineering skills. All of the team members working on the project had been putting faucets of ANY kind together for many years without ever pulling out instructions. They could assemble a faucet sight-unseen, on the fly, and it would work perfectly, so no one even thought about considering the instructions as a source of the issue. It wasn’t ego, it was human nature. The team had been so close to the product for so long they could skip steps to get to a “quicker” result. They also had very similar backgrounds and experience levels. It happens in every industry.

When asked by management what made her decide to look for problems with the instructions, my friend’s daughter said this:

“I wasn’t, it seemed like a logical place to start. Women and men think differently. I always read the instructions first. You have a lot of women customers so you need more women engineers.”

The perspective that diversity delivers is important. Don’t make the costly mistake of overlooking it.

Learn more about doing things differently in Climbing Mountains With Agile Methods.

IoT Course Week 8: Intro to Mobile Development

Internet of Things Course

For a change of pace, we are taking a step back from the device-to-cloud model we have been working so diligently on, and instead turning our eyes towards mobile. As more and more people around the world enter the global smartphone market, the Internet of Things space is becoming increasingly reliant on smartphone interfaces to control connected devices. This is because the components native to the smartphones that many of us carry are simple and effective media of interaction with the connected world around us.

The Mission

The goal of week 8 is to provide an introductory course on modern mobile development, in both native iOS with Objective-C and native Android in Java. This week, students will set up a user interface to control the LAMPi, which, next week, will be extended to operate over Bluetooth.


While the students were almost equally divided on Android/iOS, most of the students utilized Mac laptops, so we made the pairs heterogeneous, as each pair had to build the app for both platforms. This is necessary due to restrictions that Apple places on its developers: software meant to run on the iOS platform must be developed on a machine running a recent version of OS X. Students were led through the creation of a project in Xcode, and some initial configuration that Apple requires developers to follow in order to sign and run their code. Students used Xcode to create a single view project, and they got to work.

Working in Interface Builder and a UIViewController, students were guided through adding and connecting a UISlider and UILabel to an IBOutlet.


#import <UIKit/UIKit.h>

@interface LampiViewController : UIViewController

@property (nonatomic, strong) IBOutlet UISlider *slider;
@property (nonatomic, strong) IBOutlet UILabel *label;

@end





#import "LampiViewController.h"

@interface LampiViewController ()
@end

@implementation LampiViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    NSLog(@"slider: %@ \n label: %@", self.slider, self.label);
}

- (void)onSliderChanged:(UISlider *)sender {
    NSLog(@"slider changed");
}

@end



At this point, a slider appeared on the screen that could be interacted with to update the label. However, a problem appeared when the screen was rotated on its side.

[Screenshot: the slider and logo failing to adapt in landscape]

As can be seen, the slider and logo fail to expand when the screen changes. This can quickly be remedied by utilizing one of the much-loved nuances of Xcode’s Interface Builder: defining constraints on the views.

And once applied, everything came together. As can be seen here, the iOS Simulator is capable of fluid rotation from portrait to landscape, without any upsetting of the slider position.



Android development, unlike iOS development, can be done on a much larger swath of machines. Since its introduction, Google’s Android has been open source, with applications written in Java.

Students began by downloading Android Studio and the latest Android SDK Tools. Using the Android Studio interface, students created a new Android Studio project, allowing the template to be generated for a blank activity. Android Studio wants to get developers started quickly, and it offers some shims that help this process. Because our project will be full of custom layouts and widgets, choosing a blank activity is the ideal scenario. After all was said and done, students were left with an application containing one activity.

Students were given a custom UI Widget in order to define a consistent slider for the class to use. The slider was added to the created activity layout in the XML here:


<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Most attributes were lost from the original snippet; the ids and
         dimensions below are illustrative. -->
    <com.leandog.widgets.hsv.sliders.HueSliderView
        android:id="@+id/hueSlider"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true" />

    <View
        android:id="@+id/bar"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true" />

</RelativeLayout>
and when rendered by the running application it looked like this:


An event listener is added to the HueSlider, which tells changeColor to run whenever the slider is interacted with. It also allows the slider to initialize to a certain position or color, which will be useful when starting and restarting the LAMPi application.


package com.leandog.lampi;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.view.View;

import com.leandog.widgets.hsv.sliders.HueSliderView;
import com.leandog.widgets.hsv.sliders.OnSliderEventListener;

public class Lampi extends AppCompatActivity {

    HueSliderView hueSliderView;
    View bar;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_lampi); // layout resource name is illustrative

        bar = findViewById(R.id.bar); // resource ids here are illustrative
        setupHueSlider();
    }

    private void changeColor(int color) {
        bar.setBackgroundColor(color); // tint the bar with the selected color
    }

    private void setupHueSlider() {
        hueSliderView = (HueSliderView) findViewById(R.id.hueSlider);

        hueSliderView.setOnSliderEventListener(new OnSliderEventListener() {
            @Override
            public void onChange(int color) {
                changeColor(color);
            }

            @Override
            public void onSliderInitialized() {
                // Slider is ready; a saved position or color could be restored here
            }
        });
    }
}

Where we end up

The homework for this week was to follow the patterns of user interface and code interaction just demonstrated, and build a fully operational user interface for both iOS and Android. These user interfaces will essentially mirror that which we have already built on both the LAMPi screen, and the web interface.

iOS Simulator screen capture:


Android Emulator screen capture:


What is next?

Next week we will add the Bluetooth functionality that connects our devices directly to the Bluetooth hardware running on the Raspberry Pi-controlled LAMPi. For the sake of brevity and focus, from this point onward, mobile development will be done primarily on the iOS platform.

IoT Course Week 7: Platform Security Part B (#3)

Internet of Things Course

Welcome to Week 7, Part B #3

Note: This is part 3 of a three-part series about assignment 7B in the course. This particular assignment was quite information-dense, and if you are unfamiliar with some of the pieces involved, things can be quite confusing.

In Part #2 we covered how the authentication and ACL mechanism works and how we integrated that with the Django web app. Now, we’ll cover how we deal with auth and ACL checking through the WebSockets support on Mosquitto.

Real-time Feedback of LAMPI in our Web App

In order to have real-time feedback in the web app, we are using the Paho MQTT JavaScript library and WebSockets to provide data to the Human User that has the web app open in their web browser. The WebSocket traffic is being supplied by Mosquitto, not our Django app. So, how do we secure this so that you can’t just go in and twiddle some data to get access to someone else’s LAMPI devices?

The first thing to note is that, since the WebSocket traffic goes through Mosquitto, we should look at what Mosquitto does in relation to the Auth integration with the Django app. Mosquitto expects there to be enough data in the WebSocket request to satisfy the /auth and /acl calls that we reviewed in Part #2. To recap, this is username, password, and topic (for /acl you also need clientid and access type).

That seems easy enough, but we don’t want to send the user’s actual username and password through the wire (or exposed in the JavaScript) on every call. How do we solve this problem then? We use a collapsed UUID-based Auth token from Django that is unique to the current Human User session in the web app. By using the Auth token as the username and a blank password field, we can validate the request by looking up the Human User session in the Django app to be able to respond appropriately to the Mosquitto Auth Plugin call as to the authorization of that particular Pub/Sub request from WebSockets.

So, to make this easy, we wrote our Django view template to inject a snippet of JavaScript that sets up a few variables, one of which is our Auth token, that we then leverage in our JavaScript code that coordinates the WebSocket data calls with UI manipulation. That code then uses the token, via the Paho JavaScript client, as the username field so that the Django app can determine an authorization response when Mosquitto asks. This will then allow us to restrict the access to topics by user & device so that you can’t easily sniff other Human User’s data. Easy peasy lemon squeezy, right?
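For reference, a "collapsed" UUID token is simply a UUID with its dashes removed. A minimal sketch using only the Python standard library (the function name is illustrative, not the actual course code, which would also tie the token to the Django session):

```python
import uuid

def make_session_token():
    """Return a collapsed UUID: the 32 hex digits with no dashes,
    convenient to inject into a template and use as an MQTT username."""
    return uuid.uuid4().hex

token = make_session_token()
print(token)       # e.g. '9f2c1d4e8a7b4c6d9e0f1a2b3c4d5e6f'
print(len(token))  # → 32
```

On the server side, the /auth view would look this value up against active sessions rather than treating it as a real username.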

What more security do we need?

Well, for starters we should probably TLS encrypt the local backend HTTP traffic between the Cloud Mosquitto broker auth plugin and the Cloud Django web app. While we transfer this traffic over the local loopback device on the server (which means the IP traffic never exits the physical Ethernet device on the server), if you somehow gained access to the server, you could sniff the traffic on that interface. At that point though, we most likely have bigger security problems.

Having the Django auth token being passed around may not be the best, but it is similar to how many REST-based APIs work in that once you authenticate, you then use a token for all subsequent web requests.

Where else do you think the security of this architecture could be improved?

Next Week

Next week, we take a step back from this foray into security for something entirely new: mobile device control. We will forge a path into the unknown as we begin to implement Bluetooth Low Energy connectivity from the smartphone in your pocket directly to the LAMPI hardware.

IoT Course Week 7: Platform Security Part B (#2)

Internet of Things Course

Welcome to Week 7, Part B #2

Note: This is part 2 of a three-part series about assignment 7B in the course. This particular assignment was quite information-dense, and if you are unfamiliar with some of the pieces involved, things can be quite confusing.

In Part #1 we covered the overall solution. Now let’s delve into how we are handling authorization for the LAMPI device(s) and human users of the Django web app. Since the primary interface between LAMPI and the Cloud is MQTT (and hence Mosquitto), let’s look at that first.

Mosquitto’s Authentication Plugin

Mosquitto supports an extensible plugin architecture, for both user authentication and Access Control Lists (ACL). ACLs allow for fine-grained permission models. When applied to Pub/Sub systems, they allow the control of which users can publish to particular topics and which users can subscribe to particular topics.

For these purposes, we will be using the mosquitto-auth-plug project. It supports multiple backends, such as MySQL, Redis, Postgres, and HTTP, to name a few. In order to integrate with the Django app, we will use the HTTP backend.

With the plugin configured for HTTP, whenever Mosquitto needs to answer an authentication or ACL question, it will make an HTTP request. What hostname, port, and URLs it uses for the HTTP backend is configurable. The HTTP server then responds with either an HTTP status of 200 (OK), approving the request (e.g., to allow username XYZ to connect with password ABC), or returns a 403 (Forbidden). There is no actual data in either response, just the HTTP response code.
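As a sketch, the Mosquitto side of that configuration might look like the fragment below. The plugin path, port, and endpoint URIs are illustrative and should be checked against the mosquitto-auth-plug documentation for your build:

```conf
# mosquitto.conf (sketch): point the auth plugin at the Django app
auth_plugin /usr/local/lib/auth-plug.so
auth_opt_backends http
auth_opt_http_ip 127.0.0.1
auth_opt_http_port 8000
auth_opt_http_getuser_uri /auth
auth_opt_http_aclcheck_uri /acl
```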

The LAMPI “User”

So, how does the Authentication and Authorization work in regards to LAMPI? Well, the Authentication is basically handled by the TLS certificate. The name of the LAMPI device is the Common Name of the TLS certificate on the device that is used when initiating the MQTT connection to the Mosquitto broker instance in the cloud.

When making a web connection through the auth plugin, Mosquitto creates a POST request to the configured location that contains the following: username, password, topic and acc. In the case of LAMPI, when making a call to /auth, the password field is blank (having already authenticated via TLS) as are the topic and acc fields. If the Django app approves the request when LAMPI connects to the Cloud Mosquitto broker then it will return a 200 HTTP response to Mosquitto.

What about Pub & Sub requests? That is a bit different. When the Mosquitto broker on LAMPI pushes topic requests, through the bridge, to the Mosquitto broker in the Cloud, the auth plugin may defer the ACL check depending on the type of request.

If the LAMPI Mosquitto broker has requested to subscribe to a topic, the ACL of that subscription isn’t immediately checked by the Mosquitto broker in the Cloud. The Cloud Mosquitto broker merely records the subscription request from the LAMPI Mosquitto broker. The Mosquitto broker on LAMPI has no knowledge if it is in fact allowed to subscribe or not.

When a message is published on that topic that the LAMPI Mosquitto broker is subscribed to, then the Mosquitto broker in the Cloud sends an /acl request via the auth plugin to determine if the LAMPI Mosquitto broker is in fact allowed to receive that published message. Once again, the auth plugin creates a POST request with the following fields: username, password, topic, acc and clientid. The topic contains the full path of the message, the acc is a number 1 for a Subscription check, and clientid is the thing that is requesting the action. The Django app will use this information to determine if the LAMPI Mosquitto broker is the correct recipient of the published message. If it is, it will return a 200 HTTP response to the Cloud Mosquitto broker, otherwise a 403 HTTP response will be returned.

If the LAMPI Mosquitto broker attempts to Publish a message to the Cloud Mosquitto broker, an ACL check will be performed immediately via the auth plugin. Once again, the Cloud Mosquitto broker will make a call to /acl on the Django app. As with the Subscription check earlier, the same data will be POSTed to the Django app with the exception of acc being a value of 2 (for Publish). If the LAMPI Mosquitto broker is allowed to push that message up to the Cloud Mosquitto broker, then Django would return a 200 HTTP response.
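The decision the Django /acl view makes can be sketched in plain Python. The acc codes follow the convention above (1 for Subscribe, 2 for Publish); the topic registry, usernames, and function name are hypothetical:

```python
# acc codes used by the auth plugin's /acl POST
ACC_READ, ACC_WRITE = 1, 2

# Hypothetical registry of which topics each user (device or human) may touch
ACL = {
    "lampi-b827eb08": {
        ACC_READ: {"devices/lampi-b827eb08/set"},
        ACC_WRITE: {"devices/lampi-b827eb08/state"},
    },
}

def acl_status(username, topic, acc):
    """Return the HTTP status the /acl view answers with:
    200 (OK) if `username` may perform `acc` on `topic`, else 403."""
    allowed = ACL.get(username, {}).get(acc, set())
    return 200 if topic in allowed else 403

print(acl_status("lampi-b827eb08", "devices/lampi-b827eb08/state", ACC_WRITE))  # → 200
print(acl_status("lampi-b827eb08", "devices/other/state", ACC_WRITE))           # → 403
```

A real view would consult the device registrations in the database rather than a hard-coded dictionary, but the 200/403 contract is the same.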

This sure does seem like a lot of work for the Cloud Mosquitto broker. It would be, if it made these auth calls every time a message came through. This is why Mosquitto actually caches the responses to these Auth & ACL requests for a time. This allows Mosquitto to quickly evaluate ACL checks internally which gives it a very high message throughput. With this out of the way, let us look at the role of the Human being using the Django Web Application.

The Human “User” Interacting with the Django Web App

In order to let a human user into the Django web app, we are using Django’s built-in user authentication mechanism. The idea is to allow a simple system to authenticate a human user via username and password, as well as provide an authorization mechanism to associate human users with device users. The device users, in this case, are one or more LAMPIs registered to their respective human users.

The registration of a LAMPI to a human user account in the Django app is used to determine if that human user logged in via the Web has access to publish or subscribe to topics from LAMPIs connected to the Cloud broker. The important point to take note of however, is the web page contains no static data. All of the data the human user interacts with via the web interface is actually generated in real-time via the WebSockets integration with the Cloud Mosquitto broker.

So how do we authenticate and authorize the human user via WebSockets to see messages from Mosquitto? We’ll find out next week in the final installment of this series on Assignment 7B.

IoT Course Week 7: Platform Security Part B (#1)

Internet of Things Course

Welcome to Week 7, part B #1

Note: This is part 1 of a 3 part series about assignment 7B in the course. This particular assignment was quite information-dense, and if you are unfamiliar with some of the pieces involved, things can be a bit confusing.

In the previous week, we added some security to the system with Transport Layer Security (TLS). This encrypts the web, websockets, and MQTT communications. It also allows clients to authenticate that the servers are who they say they are, and Lampi Devices to authenticate the MQTT bridge connection to the EC2 MQTT Broker. This week we will be focusing on a few remaining security gaps in the model, including:

  • protecting the Cloud MQTT broker namespace
  • limiting which devices Django users can communicate with at the MQTT level
  • limiting how and which topics Lampi devices can map into the global MQTT topic namespace on EC2 (otherwise a compromised device could potentially wreak havoc on the entire system)
  • requiring authentication of users connecting to the MQTT websockets interface

Where do we begin?

There are a few different ways to skin this security cat. We decided that integrating Mosquitto with Django would provide us with a way to have a single source of truth for Human User authentication, Human User authorization, and Device User authorization. Mosquitto is already configured to authenticate the LAMPIs via the TLS certificates we configured in the previous post. Also, we thought that a REST interface would be easiest to implement for the integration. While we used Django, really any system that supports REST would work with the architecture we’re outlining today.

The high-level components:
[Diagram: high-level components]

What does this look like overall?

There are a few moving parts to this solution we decided to go with. Let’s take a look at it overall:

[Diagram: overall architecture of the Mosquitto/Django integration]

You’ll notice there will be a plugin to Mosquitto that we’ll leverage to connect the Django web app with the MQTT system.

Next post will delve into the ACL support for LAMPI and the Human Users of the system. Stay tuned!

Taking Cleveland Tech To New Heights

Whenever you meet a LeanDogger, one thing is always immediately evident: we HEART tech. Like, a lot. We love it so much that we are obsessed with growing and nurturing the tech environment in Northeast Ohio. It’s why we hold five tech meetups a month aboard our floating office, regularly speak at industry conferences, teach a college course on connected devices, and even bring in high-school students for shadowing opportunities. It’s also why we decided to invest in Tech Elevator, an innovative coding bootcamp that prepares novices for careers in the field of software development.


This 14-week, Cleveland-based bootcamp helps students seeking to make a career switch rapidly develop coding abilities, while fully supporting their career goals through Tech Elevator’s hiring network and Pathway Program. As stated on their website, the program gives students “an understanding of the foundational computer science concepts and theory necessary for a professional software developer, with special emphasis on practical application, techniques, and tools.”

The founders are so confident in their training, the program comes with a money-back guarantee. If any student graduates from either the Java or .NET program and does not receive a job offer in a software development related role within 120 days, Tech Elevator will actually refund their tuition.

As Anthony Hughes, Tech Elevator Founder and CEO, puts it: “You’re not just “learning to code”. You’re entering the next chapter of your career and investing in your future. And we’re invested right along with you. ”

“We’re big believers in this business model and the team at Tech Elevator,” says Jon Stahl, President of LeanDog. “We work with some of the largest dev shops in the country and every day we witness the need for more quality developers to fill the growing IT needs of today’s business. Tech Elevator’s model is brilliant in its simplicity: find smart people who are hungry for great careers, and train them with the tech skills that companies are willing to pay for.”

Just like LeanDog, Tech Elevator is committed to sharing best practices in the craft and growing the software developer community in Northeast Ohio. To further those efforts, LeanDog and Tech Elevator will work together to help novice programmers and career changers learn more about the growing field through meetups, mentoring, training, and workshops, as well as collaborate on learning tracks that go beyond coding, to explore Lean and Agile delivery practices.

To get in on some of these upcoming events, follow @leandog and @Tech_Elevator on Twitter. You can also find out more about both companies and our tech-lovin’ ways on our websites.

IoT Course Week 7: Platform Security (Part A)

Anytime you are making a product, platform security is something that needs to be addressed. In this post, we look at platform security and the steps you can take to protect your users.

By now we’ve created a page that controls LAMPI and are serving it out of Django. We have a user authentication scheme, although we’re not putting it to great use just yet. Before we get there, we have a huge problem — there is really no security to speak of anywhere in this system. Our MQTT bridging is “secured” with a username and password, but those are sent unencrypted over the network. It would be trivial to sniff this communication and discover these credentials.

We have two paths into our EC2 server we need to secure — the path from the lamp to the cloud, and the path from the user’s browser back to the cloud.

TLS Certificate

Rather than rely on a username and password alone, we are going to use Transport Layer Security (TLS) to encrypt our communications. This channel of public-key cryptography ensures that messages sent between the lamp and the cloud are computationally infeasible to decrypt or tamper with, unless an attacker first compromises the private key of the Certificate Authority (CA) or of one of the endpoints.

We will create our own local certificate authority which we will use to issue and sign certificates for both the lamp and web client. A CA functions via the following steps:

  1. Someone decides to be a CA
  2. Generate a Public and Private Key Pair
  3. Generate a CA Certificate, including the Public Key, and Sign it with the Private Key
  4. Distribute your CA Certificate to the World

Note: For a real CA, the Private Key should be stored securely, probably on a non-networked computer. Compromise of the Private Key compromises all derived certificates!
If you want to read more about how TLS (and its predecessor SSL) works, here is an excellent Stack Overflow post.
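The four CA steps above can be sketched with the OpenSSL command line. The file names mirror the Mosquitto configuration shown later in this post, but the Common Names, key sizes, and validity periods here are assumptions for illustration:

```shell
# Step 2: generate the CA's public/private key pair (keep the key offline!)
openssl genrsa -out lampi_ca.key 2048

# Step 3: generate a self-signed CA certificate containing the public key,
# signed with the CA private key
openssl req -x509 -new -key lampi_ca.key -sha256 -days 3650 \
    -subj "/CN=LAMPI Example CA" -out lampi_ca.crt

# Issue a device certificate whose Common Name is unique to this LAMPI:
# create a key and a certificate signing request (CSR) for the device...
openssl genrsa -out b827eb74663e_broker.key 2048
openssl req -new -key b827eb74663e_broker.key \
    -subj "/CN=b827eb74663e_broker" -out b827eb74663e_broker.csr

# ...and sign the CSR with the CA key to produce the device certificate
openssl x509 -req -in b827eb74663e_broker.csr -CA lampi_ca.crt \
    -CAkey lampi_ca.key -CAcreateserial -days 365 \
    -out b827eb74663e_broker.crt
```

Step 4 (distribution) is then just copying `lampi_ca.crt` to every machine that needs to verify certificates issued by this CA.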

For the purposes of this discussion, the Common Name of the TLS certificate has to be unique to the LAMPI instance. This is because the Common Name will be used as part of the authentication process later.

Add to Bridging

As was set up in Lecture 4, MQTT bridging between cloud and LAMPi is handled using the tool Mosquitto. Mosquitto supports TLS security natively, and turning it on is as simple as ensuring each device has the appropriate CA and generated keys to communicate with each other. On the LAMPi, the configuration is:


connection b827eb74663e_broker
bridge_cafile /etc/mosquitto/ca_certificates/lampi_ca.crt
bridge_certfile /etc/mosquitto/certs/b827eb74663e_broker.crt
bridge_keyfile /etc/mosquitto/certs/b827eb74663e_broker.key
bridge_tls_version tlsv1.2
topic lamp/set_config in 1 "" devices/b827eb74663e/
topic lamp/changed out 1 "" devices/b827eb74663e/
topic lamp/connection/+/state out 2 "" devices/b827eb74663e/
cleansession true

and on the cloud, the configuration is set to:

listener 52122
cafile /etc/mosquitto/ca_certificates/lampi_ca.crt
certfile /etc/mosquitto/certs/lampi_server.crt
keyfile /etc/mosquitto/certs/lampi_server.key
tls_version tlsv1.2
require_certificate true
use_identity_as_username true

Next, we need to grant that “user” access, so we add the Common Name of the LAMPI instance to Mosquitto’s passwords file. Once these settings are in place and both Mosquitto instances are restarted, the bridging will be TLS secured!
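Because `use_identity_as_username` maps the certificate’s Common Name to the Mosquitto username, an ACL file can also restrict each device to its own topic subtree. This is a hypothetical sketch (the ACL path and exact topic pattern are assumptions, keyed to the device ID used above):

```text
# /etc/mosquitto/acl (hypothetical path)
# Each device, identified by its certificate Common Name,
# may only touch its own devices/<id>/ topics.
user b827eb74663e_broker
topic readwrite devices/b827eb74663e/#
```

The ACL file is then referenced from the broker configuration with an `acl_file` directive.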

Serving Django pages from NGINX

Next we’ll need to secure our web client. Django’s built-in web server does not support TLS, and we want TLS between the cloud and the user’s web browser. Since we already have nginx installed and it is a TLS-capable web server, we will use it. nginx does not natively know how to serve a Django app, so we’ll use uWSGI as a WSGI adapter between nginx and Django. uWSGI provides detailed instructions on making this web server change in their docs.
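The uWSGI side of that handoff is a small configuration file. This is a minimal sketch; the project path, module name, and socket location are assumptions and should be replaced with your own:

```text
; lampi_uwsgi.ini - hypothetical uWSGI configuration
[uwsgi]
chdir        = /home/ubuntu/lampi      ; Django project directory (example)
module       = lampi.wsgi:application  ; Django's WSGI entry point (example)
socket       = /tmp/lampi.sock         ; unix socket nginx will proxy to
chmod-socket = 664
processes    = 2
vacuum       = true                    ; remove the socket on exit
```

nginx then forwards requests to that socket via `uwsgi_pass`, while terminating TLS itself.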


# configuration of the server
server {
    # the port your site will be served on
    listen 443 ssl;
    ssl_certificate     /home/ubuntu/ssl_keys/lampi_server.crt;
    ssl_certificate_key /home/ubuntu/ssl_keys/lampi_server.key;
    ssl_protocols TLSv1.2;
    # the domain name it will serve for
    server_name; # substitute your machine's IP address or FQDN
    charset utf-8;

    # hand requests off to Django via uWSGI (socket path is an example)
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/lampi.sock;
    }
}

Restarting uWSGI and nginx at this point will apply the changes. Traffic to https://{django_web_address} will now be secured via the CA that we created!

Securing Websockets

Since we’re getting some of our real-time information over websockets, with MQTT serving that data via the websockets protocol, we also want to encrypt that transport from MQTT to the client’s web browser. Because this websocket data is loaded from a page already served by our now-encrypted nginx server, MQTT needs to encrypt the websocket traffic with the same TLS certificate used in our nginx configuration.


listener 8081
protocol websockets
cafile /home/ubuntu/ssl_keys/lampi_ca.crt
certfile /home/ubuntu/ssl_keys/lampi_server.crt
keyfile /home/ubuntu/ssl_keys/lampi_server.key

Here is an example of what this would look like:

[Diagram: platform security for the IoT class]



On the LAMPI:

  • no services listening on anything but localhost
  • MQTT bridge using TLS with a unique certificate

On the cloud:

  • Require TLS on MQTT (MQTT and websockets)
  • Require TLS on nginx HTTP (HTTPS)

Next Week

This week laid the groundwork for our platform’s security. Next week, we will continue the buildout, focusing specifically on the MQTT layer: user accounts will be scoped through Django so that communication is locked down to specific LAMPI devices, and the MQTT websockets interface will be locked down as well.

In case you missed it, here’s a link to last week’s post: IoT Week 6: Setting Up User Accounts or, if you’re hankering to snack on some Cucumber, check out How to Run Cucumber Tests with Docker.

Selecting the Right User Research Method

User research can make a huge difference in the success of a product or service. When we understand the behaviors and goals of our users, we are in a better place to serve their needs. With so many methods and user research tools out there, how does a person know which tool to use, and when?


Have you ever found yourself trying to hang a picture, only to realize that you don’t know where your hammer is? You dig around in the junk drawer for a moment before realizing you left it out in the shed. Rather than put your shoes on and go get it (cuz, ugh, outdoors), you instead do what any right-thinking person would and grab the next hard object you can find, then spend 20 minutes smashing the nail into a wall with a wrench. This ultimately results in a bent nail, a dinged up wall, and you, resignedly walking out to the shed to go get the right tool.

Applying the wrong method when conducting user research is a lot like hammering a nail with a wrench. While a wrench is a great tool for certain tasks, it’s just not the right one for the job at hand. Similarly, you would not attempt to loosen a bolt with a hammer…unless you were like, the worst mechanic ever.

These days, most people have some idea of what user research is, its purpose, and the value behind it. They get that it helps mitigate risk, validates assumptions, guides the prioritization of features, increases the likelihood of product adoption, and so on; the list goes on. Unfortunately, the general perception of user research is often confined to just a few strategies, such as surveys or product testing. While these tactics are not without merit, and certainly provide a level of both quantitative and qualitative value, limiting the scope of your research initiative to these few strategies will result in many metaphorically bent data-nails. Not all methodologies work for every app, user group, or situation. To successfully understand your users, you’ll need to employ the right tool based on the research goals you are trying to achieve.

For Example…

A new client recently asked us to send out a survey that would help them understand what their users were thinking while trying to accomplish a certain task. Up until that point, this particular client hadn’t engaged with their users very much, and this initiative represented a positive step: a step towards user-centered design. Sounds great, right? After all, any time you can talk to actual users is a win, so conducting a survey to understand their needs would probably be helpful. Well…sort of.

We certainly could have sent off a well-crafted survey, asked all the right open-ended questions, and vetted the thing left and right to rid it of any shred of bias. In the end though, it wouldn’t have mattered. Not because surveys are a bad way to gather data, but because in this case, they simply weren’t the right tool for the job.

User Research and Data Types

To understand why this was the case, we first need to understand the type of data we were looking for. According to Indi Young, user experience consultant, author, and founding partner at Adaptive Path, user research data can be grouped into three categories: Preference, Evaluative, and Generative. Each of these data types comes with different issues and requires different techniques for extraction. Preference data refers to the opinions, likes, and desires of users; evaluative data pertains to what is understood or accomplished with a tool; and generative data relates to the mental environment in which users get things done.

The chart below covers each of these data categories, the best techniques for extracting that type of data, and their ideal uses.

[Chart: Indi Young’s research method types]

Young, Indi (2008). Mental Models. Rosenfeld Media, Kindle Edition.

What our client really wanted was to understand how their users think: their philosophies, motivations, and environments, all examples of generative data. As you can see in the above chart, surveys are a great tool for determining user preferences or providing demographic information, but they aren’t ideal for gaining an understanding of users’ mental environments.

Employing The Right Tool

After further discussion and collaboration with our client, we decided to conduct non-directed interviews with eight of their users.

During the interviews, we stepped back from the client’s current solution and instead asked users to talk about their roles, allowing the conversation to evolve naturally. The user led the discussion, we actively listened, and asked follow-up questions based on what we heard. We used their words, expressions, terms, and tools, being careful not to introduce our own references.

The results were very insightful. By having this open conversation, guided by the user, we were led down paths we didn’t even know existed. Prior to the interviews, we had no real information about the users’ mental environments. Through our generative research, however, we discovered that these users actually had several different roles, with distinct needs, and that they were approaching the problem space from a completely different direction than our client.

Using the right approach, we were not only able to find out how our client’s users organized material, but we also gained a bunch of other valuable business intelligence. After meeting with the client and presenting our findings, we were left with multiple action items we could pursue:

  • Explore the distinct user roles
  • Prioritize which role to focus on first
  • Address our users’ pain point: lack of time
  • Add data to our proto-persona and lay the groundwork for building our UX personas
  • Use the correct terminology in future prototypes

All information we couldn’t have gotten through a survey.

Moral of the story: Take the time to put on your shoes, go out to the shed, and grab the right tool.

To learn more about UX methodology, check out UX Design: People Over Features, Outcomes Over Output.