IoT Course Week 13 – IoT Platforms


Last week, we explored remote firmware updates for IoT devices using the Debian package system. This week, we’ll be discussing various IoT platforms.

When we started the course, we had an explicit goal to avoid “black box” solutions, platforms, and vendor lock-in as much as possible. We wanted students to understand how these systems are built, as well as the architectural and security considerations involved. The course is, in some ways, “Learn IoT the Hard Way”: students learn by building the various components of an IoT system, stitching those components into a holistic system, and touching on a number of important non-functional requirements, like security, load testing, analytics, and firmware updates. Through that experience (and occasional struggle), we hoped to arm students with enough knowledge and experience to understand both the individual components and the overall system.

You can, of course, purchase a complete IoT system – they’re generally referred to as IoT Platforms. There are many, many choices.


Platform Tradeoffs

When building a product or a business around any technical platform, one must consider the long-term implications of that platform. There are the basic questions of functionality and of offloading work and operations, plus the added complexities of hardware: what does this platform scale to, how quickly can I go from prototype to market, where can I source large quantities of an item, and so on. Software as a service also has a few horror stories of products or companies discontinuing a line that other companies heavily rely on. Controlling your own destiny is very important, and that can be difficult when building your business on a platform that is someone else’s responsibility to keep running. One platform which we feel is here to stay for some time, however, is Amazon Web Services.

AWS IoT

From the beginning of this course, the intention was to never take the easy path in building the LAMPi system. Amazon offers a service encompassing much of the functionality we have spent the past several weeks piecing together, AWS IoT, which provides secure, bidirectional communication between internet-connected things and the AWS cloud. This includes a robust security model, device registry, MQTT message broker, as well as integration ease with the remainder of AWS’ cloud offering. Let’s dive in.


The Message Broker offered through AWS IoT mirrors much of the functionality of the MQTT broker, Mosquitto, that we used for LAMPi. AWS takes it to the next level by providing an HTTP RESTful interface to get and change the current state of your devices. The broker does not retain any messages; it simply provides a central point for the pub-sub model.
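That HTTP interface means you can read or set a device’s state without speaking MQTT at all. The sketch below is not from the course materials; it is a rough illustration using the boto3 SDK, and the thing name "lampi" and the state fields are hypothetical:

import json
import boto3

iot_data = boto3.client('iot-data')  # talks to the AWS IoT data-plane endpoint

# Read the current shadow document for the thing
response = iot_data.get_thing_shadow(thingName='lampi')
shadow = json.loads(response['payload'].read())
print(shadow['state'].get('reported'))

# Request a state change by writing to the "desired" section of the shadow
iot_data.update_thing_shadow(
    thingName='lampi',
    payload=json.dumps({'state': {'desired': {'onOff': True, 'brightness': 0.8}}})
)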

Aptly named, the Thing Registry acts as the central location for managing and identifying the things, or devices, hooked into the AWS IoT system. The Thing Registry keeps track of any resources or attributes associated with a particular thing. It also provides a place to keep track of MQTT client IDs and associated certificates, which improves one’s ability to manage and troubleshoot individual things.

Coupled with the Thing Registry is AWS’ concept of Thing Shadows. A Thing Shadow is a persistent digital representation of the state of a device. In addition to the current reported state of a device, it also holds the desired state, a clientToken used to correlate MQTT request and response messages, and metadata.
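For illustration, a shadow document for a lamp might look roughly like the following. The field values are made up, but the state/metadata/version/clientToken layout follows AWS’ documented shadow format:

{
  "state": {
    "desired":  { "onOff": true,  "brightness": 0.8 },
    "reported": { "onOff": false, "brightness": 0.5 }
  },
  "metadata": {
    "desired":  { "onOff": { "timestamp": 1466000000 } },
    "reported": { "onOff": { "timestamp": 1465990000 } }
  },
  "version": 42,
  "clientToken": "lampi-b827eb08",
  "timestamp": 1466000123
}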

AWS IoT comes with the robust security and identity service that our team has come to know and love throughout this course. Things retain their own credentials, and access is granted to the system through the assignment of rules and permissions. Three identity principals are supported: X.509 certificates, IAM users/groups/roles, and Amazon Cognito identities.
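As a sketch of what that looks like in practice, a policy attached to a device’s X.509 certificate might allow it to connect and publish only on its own topics. The region, account ID, and topic names below are placeholders, not values from the course:

{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "arn:aws:iot:us-east-1:123456789012:client/lampi-*" },
    { "Effect": "Allow",
      "Action": ["iot:Publish", "iot:Receive"],
      "Resource": "arn:aws:iot:us-east-1:123456789012:topic/devices/lampi-*" },
    { "Effect": "Allow",
      "Action": "iot:Subscribe",
      "Resource": "arn:aws:iot:us-east-1:123456789012:topicfilter/devices/lampi-*" }
  ]
}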

All of these services have the added benefit of being fairly cheap. The current rate is $5 per million messages.
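As a purely hypothetical back-of-the-envelope calculation against that rate (the fleet size and message frequency below are made up, not course numbers):

# Hypothetical fleet: 10,000 lamps, each publishing one state update per minute
devices = 10000
messages_per_device_per_day = 60 * 24
monthly_messages = devices * messages_per_device_per_day * 30
cost = monthly_messages / 1000000.0 * 5.00   # $5 per million messages
print('~{:,} messages/month -> ${:,.2f}/month'.format(monthly_messages, cost))
# ~432,000,000 messages/month -> $2,160.00/month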

Next week, join us for the final installment of the IoT Course Blog Series: Week 14 Final Projects and Wrap Up.

Can’t get enough insights? Discover why A Locust Swarm is a Good Thing or how Selecting the Right User Research Method can make all the difference to your product’s success.

Developing an amazing technology product of your own? Take our 1-minute self-assessment to make sure your project is on track for a successful launch! Or, reach out to us at LeanDog.com! We’d love to hear all about it!

Does Your Team Have a Blame Well?

Here is an important role for every budding team…it’s called the “Blame Well”.
Now, the way it works is, each week the team rotates the role of Blame Well to a new team member. During that week, that team member immediately assumes all blame for anything that goes amiss.

The role of Blame Well enables the team and stakeholders to immediately get past trying to figure out whose fault anything is. Instead, the team can directly move into useful discussion about resolving whatever issue came up.

So…this post is definitely tongue-in-cheek. I think if a team really needed a Blame Well there are deeper problems to address.

What I hope folks can take away from this is that focusing energy on assigning fault and blame is pretty pointless. A better approach is simply to figure out the next best thing that can be done in the current context.

This doesn’t mean teams shouldn’t inspect and adapt…but true reflection and growth happens without fault and blame.

I’d enjoy hearing stories about how people helped their own teams or organizations get out of the blame game.

This post was originally published by the same author on odbox.co. It was reposted with permission because we threatened to throw him in the blame well for a month if he said no. 😀

The Multitasking Myth

As a parent of an ADD (Attention Deficit Disorder) child, I have had the unplanned but eye-opening experience of learning how to deal with ADD. Why eye-opening? I have to admit, I have always been skeptical of the validity of some of today’s conditions that become accepted by the medical community at large. The sheer number of new conditions grows each year. When my child was first diagnosed, I wondered if ADD was truly a legitimate issue or if it was created by pharmaceutical companies who conveniently happened to have a medication to manage the condition. It also seemed to me like a convenient excuse for those who were lazy, unmotivated, or simply incapable of handling the daily demands of the modern world. I have since learned that I was completely wrong.

After experiencing the effects ADD has on my child and consequently putting together a plan to manage it, I have seen a 180-degree turn in my child’s ability to deal with the anxiety that accompanies ADD. My child has gone from a very challenged student to a peak performer at school almost immediately. Also, my child’s satisfaction with achievements and self-confidence are at an all-time high.

There are three things we implemented which have directly contributed to the successful turn-around:

  1. A sustainable and recognizable daily routine
  2. Prioritizing what is most important, communicating it to our child, and focusing on that list one item at a time
  3. Constantly re-evaluating #2 and adjusting accordingly

Since being introduced to ADD I have become familiar with the effects it has on performance, self-satisfaction, and self-confidence.   I have also noticed similarities between these effects and the effects of multitasking on performance, self-satisfaction, and self-confidence in the workplace.  Over the years, I have even seen a number of cases of what one could term as “artificially-manufactured ADD”.

Why make this comparison? Because many in the business community treat multitasking the same way I first treated ADD: the effects on productivity are really over-hyped, and those who can’t multitask effectively are just lazy, unmotivated, or incapable of handling the tasks of today’s business climate.

Multitasking is not a modern concept; in fact, it is believed to have been around for a long time. Today’s work environments drive multitasking demands on our time almost by default. CNN describes multitasking as “a post-layoff corporate assumption that the few can be made to do the work of many”. I’m not sure I completely agree with this viewpoint, but some studies show that multitasking is a less efficient approach to work than batching similar types of tasks together or focusing on one specific deliverable at a time. There are many suggestions as to how to address and minimize the effects of multitasking, or how to operate to avoid it altogether. In my experience, the best way to minimize the performance loss of multitasking is similar to the approach we have taken to counteract the effects of ADD with my child:

  1. Be consistent and predictable wherever possible
  2. Prioritize your work and single-thread your efforts whenever possible
  3. Constantly re-evaluate #2 for updated priorities and adjust when needed

Highly productive teams groom, prioritize, re-groom, and re-prioritize their work constantly. They also consistently inquire about priority and adjust accordingly. Most importantly, they work to keep their efforts as single-threaded (one item at a time) as possible to maximize their productivity. The effect on your group’s performance, as well as your group’s output quality and agility, will be greater than you think.

Want to learn more ways to create high-performing teams? Check out another post by Mike Jebber – Team Building: Diversity Uncovers What Experience Can’t.

IoT Course Week 12 – Remote Firmware Updates


Last week, we explored Load Testing of HTTP and MQTT and how to measure the scalability of your system.

This Week

This week, we’ll continue our focus on non-functional requirements with Remote Firmware Update.  A typical desk lamp, or other non-IoT device, will have the same functionality 10 years after it leaves the factory.  The functionality and value of a “smart” device, however, can increase over time, as new software functionality is deployed.  

As students have experienced, updating the functionality of a web application is relatively straightforward: deploying new code to the server updates the application for everyone. Similarly, as new iOS and Android capabilities are developed, the new apps are published on the iTunes and Google Play stores. But how do you update the software/firmware on your smart device? There could be hundreds of thousands, or even millions, of devices distributed across the country or world, and each embedded system is slightly different. For Week 12, we show students how to remotely update LAMPi.


 

Debian Packages

Since we are using Raspbian, a Debian-based Linux system, for LAMPi, we settled on the Debian package system. This addresses the actual packaging and installation of software, as well as the distribution and security (authentication and integrity) of those packages.

Create Folder Structure

First, we need an executable to package. We’re going to make a package called “hi” that contains an executable also called “hi”. Let’s make a directory to build our deb package in:

cloud$ mkdir -p ~/pkg/hi/{DEBIAN,opt/hi}
cloud$ cd ~/pkg/hi/

Viewed with tree (you can install tree through apt-get), this folder structure should look like so:

pkg
└── hi
    ├── DEBIAN
    └── opt
        └── hi

So ~/pkg/hi is the directory that holds everything we want to package.

  • DEBIAN is a special folder that contains all the configuration & metadata for the Debian package
  • Everything else in ~/pkg/hi will be installed relative to the root of the system. So ~/pkg/hi/opt/hi will install into /opt/hi on the system in which it is installed. If we wanted to install some supervisor scripts with our package, for example, we could make a ~/pkg/hi/etc/supervisor/conf.d/ directory, and files in it would install into /etc/supervisor/conf.d.

Create Executable

Now let’s build an executable. When the package is installed, we’ll want the executable to end up in /opt/hi/, so create it as ~/pkg/hi/opt/hi/hi:

#!/usr/bin/env python

import os

version = 'Unknown'
version_path = os.path.join(os.path.dirname(__file__), '__VERSION__')
with open(version_path, 'r') as version_file:
    version = version_file.read()

print('Hello Deb! Version {}'.format(version))

Let’s create a file to hold the version of our program. Create ~/pkg/hi/opt/hi/__VERSION__ with the following contents (no whitespace, no newline):

0.1

Save and close both files, mark “hi” as executable, then run it:

cloud$ cd ~/pkg/hi/opt/hi/
cloud$ sudo chmod a+x hi
cloud$ ./hi

Hello Deb! Version 0.1

Create Package Metadata

Now let’s build a control file to describe our package.

Create a file at ~/pkg/hi/DEBIAN/control, replacing {{YOUR_NAME}} with your name:

Package: hi
Architecture: all
Maintainer: {{YOUR_NAME}}
Depends: python, python-dev, python-pip
Priority: optional
Version: 0.1
Description: Hello, Deb!
Section: misc

Note that these metadata files are whitespace sensitive and do not allow additional empty lines so be careful while editing.

Finally, we need to fix file permissions and make root the owner of the entire directory structure. These permissions will travel with the package, so if we don’t do this, the files will be installed with bad permissions.

cloud$ sudo chown -R root:root ~/pkg/hi/

Note that after you do this, further edits to files in this directory will require sudo.

This should be all we need to build our deb package, so let’s go:

cloud$ cd ~/pkg/
cloud$ dpkg-deb --build hi

You should now have a hi.deb in ~/pkg/.
You’ve just created a Debian Package!

Setting up a Debian Repository

We use reprepro, an easy-to-set-up Debian package repository, and show students how to publish their packages to that repository, add that repository to LAMPi, and then install the package on LAMPi from the repository.
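As a rough sketch (not the course’s exact setup; the hostname, codename, and paths are placeholders, and a real repository should also sign packages with GPG), the flow looks something like this:

# On the cloud server: a minimal conf/distributions describing the repository
cloud$ cat ~/repo/conf/distributions
Codename: jessie
Components: main
Architectures: armhf
Description: LAMPi package repository

# Publish the package into the repository
cloud$ reprepro -b ~/repo includedeb jessie ~/pkg/hi.deb

# On LAMPi: point apt at the repository, then install
lamp$ echo "deb http://your-repo-host/repo jessie main" | sudo tee /etc/apt/sources.list.d/lampi.list
lamp$ sudo apt-get update
lamp$ sudo apt-get install hi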

Automating Deployment

Every time we change our hi package, there are several things we need to do: increment the version number, create the package, and upload it to our package repo. We teach the students how to build an automated script for these steps so we don’t have to run the commands manually each time. The packaging and deployment script also acts as living documentation of the process, so future maintainers of your project don’t need to start from scratch. We use a Python module called bumpversion to automate updating the version information.
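As an illustration only (not the course’s actual script), a .bumpversion.cfg in ~/pkg/hi/ might list the files that carry the version, and a small deploy script might chain the bump, build, and publish steps. Note that bumpversion’s default scheme is MAJOR.MINOR.PATCH, and the repository path and codename below are placeholders:

# ~/pkg/hi/.bumpversion.cfg
[bumpversion]
current_version = 0.1.0

[bumpversion:file:opt/hi/__VERSION__]
[bumpversion:file:DEBIAN/control]

#!/bin/bash
# deploy.sh - bump the version, rebuild the package, and publish it
set -e
cd ~/pkg/hi && sudo bumpversion patch     # sudo because the tree is owned by root
cd ~/pkg && dpkg-deb --build hi
reprepro -b ~/repo includedeb jessie hi.deb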

Finally

After walking through the above creation and deployment of a Debian package, setting up the reprepro repository, and installing the hi package on LAMPi, the students’ assignment for Week 12 was to demonstrate their understanding by applying the tools to the LAMPi code. The assignment required them to package the LAMPi UI application, the Bluetooth service, and the lamp hardware service into a single package, including maintainer scripts that run before installation (preinst), after installation (postinst), when the package is removed (prerm/postrm), and so on, and to demonstrate versioning of the package in class.
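For illustration, a postinst maintainer script might simply restart the relevant services after the package’s files are unpacked. This is a hypothetical sketch, not a student’s actual script, and the supervisor program names are placeholders:

#!/bin/bash
# DEBIAN/postinst - runs after the package's files are unpacked (must be executable)
set -e
# Reload supervisor so it picks up any new or changed program definitions,
# then restart the LAMPi services (names are hypothetical).
supervisorctl reread
supervisorctl update
supervisorctl restart lamp_service ble_service lampi_ui
exit 0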

Next Week –  IoT platforms

Why A Locust Swarm Is A Good Thing

Recently, I gave a talk on how “A Locust Swarm Can be a Good Thing!” at Stir Trek in Columbus. The talk covered our experience of load testing and preparing for hundreds of thousands of users on the first day of the .Realtor site launch.

This was a challenging environment with lots of external dependencies of varying capacity and failure tolerance.  We knew from the beginning that we’d never be able to load test all of our dependencies at once, so we had to figure out ways to test in isolation without spending all our time rewriting our test infrastructure.  Additionally, our load testing started late enough that we would not be able to coordinate an independent test with all of our dependencies in time.  We also needed to figure out what our users would do without having ever observed user behavior on the real site.  We built a model user funnel to capture our expectations and continually tweaked it as we discovered new wrinkles.  This funnel formed the basis for our load test script and allowed us to prioritize our integration concerns.

In the end, we learned a lot about making our workflows asynchronous, linux kernel optimization, decoding performance metrics, and building giant DDoS clouds of load test slaves.  We also learned that load testing should start “earlier”.  Conversations about load and user behavior drive new requirements and testing can uncover fundamental infrastructure problems.  Decoding and isolating performance problems can require a lot of guessing and experimentation; things that are difficult to do thoroughly with an unmovable launch date.  It’s also difficult to make large scale changes to an application with confidence under time pressure.  One of my key takeaways is to be nicer to external partners.  The point of load testing is to find the breaking points of a system and most people don’t like when their toys get broken. Building trust and safety into that relationship is very important before trying to figure out where and how something went wrong.

Check out the slides to the original talk here.

Stir Trek is an excellent conference with extremely thoughtful organizers and friendly people.  20 people came to talk to me in person after my talk, which was great!  Tickets sell out very quickly, but I recommend getting in next year if you can!

Agile…and BEYOND!!

LeanDog Agile experts, Matt Barcomb, Mike Kvintus, and Jeff Morgan, are at Agile and Beyond today and tomorrow. Take a peek at some of the knowledge they will be dropping while they are there:


Barcomb

Matt (@mattbarcomb) is a product development specialist with a penchant for organization design. He works with companies to turn software development into a core competency by integrating product development activities with business practices. Matt takes a pragmatic, systems approach to improvement, working with stakeholders throughout medium and large organizations. He has experience working with product management and software delivery teams as well as executive leadership teams, sales, services, and operations in a variety of industries. Matt enjoys challenging mental models, simplifying the seemingly complex, and uncovering the “why” behind the what. He shares his experiences, questions and ideas at www.odbox.co

Thursday, May 5 @ 10AM Value-focused prioritization & decision making

Does prioritizing your development portfolio seem unclear or mired in politics? Ever feel like the decisions for what gets worked on when are somewhere between arbitrary and emotional? Ever get tired of providing cost estimates for work of uncertain value? If you answered yes to any of the above questions, this session is for you! Matt Barcomb will open with introductory concepts about shifting from a cost focus to a value focus for development work. Next, providing business value for user stories will be debunked. Then, a collaborative framework for prioritization, Benefit Mapping, will be discussed. Finally, Matt will end with ways to simplify the cost evaluation of work and risk.

Friday, May 6 @ 10AM Using Flow-based Road Mapping & Options

If you’d like an alternative to typical, quarter-by-quarter, schedule oriented road mapping (and all the associated waste) then this session is for you. Matt Barcomb will introduce a Cadenced Flow approach to flow-based road mapping. He will first cover how to layout and execute a road map based on models that better fit software planning as well as how to transform your existing plans. Next, using options thinking to frame work will be explored and how to use starting and stopping triggers for options, reducing the need of blind budgeting or project practices. Finally, Matt will wrap up by touching on a few key metrics that will let you monitor and evaluate your new road map.

Cheezy

Cheezy is an international speaker and keynote presenter at Agile conferences. He has spoken six times at the Agile 20XX conferences, as well as at others like Agile Dev East and West, Mile High Agile, Agile and Beyond, and Path to Agility.

Friday, May 6 • 3:00pm – 4:40pm Tested!

You’ve heard that quality belongs to everybody on an Agile team. You’ve heard that testers and developers should “collaborate” in order to drive quality higher. You’ve heard that automated tests help a team continuously validate the quality. It’s time to stop thinking about it! It’s time to stop talking about it! It’s time to make it happen! Watch Ardi and Cheezy do this in front of your eyes. They will build a web application driven by acceptance and unit tests. You will see how a Product Owner, Tester, and Developer create executable user stories, develop the code to validate those stories, and refactor along the way. At the end, you will get a taste of what a Continuous Delivery pipeline looks like. Prepare to collect your jaws from the floor!

Kvintus

Friday, May 6 • 10:00am – 10:45am Worthless Story Card Estimates

How much of your time is wasted estimating story cards? We’ll explore some alternatives to estimating story cards and review real-world comparisons of tracking work using story points vs. counting story cards. Not sure when story card estimates are needed? We’ll discuss that too. All discussions will be based on real-world examples and comparisons of alternatives for several projects. We’ll also discuss #NoEstimates and how it fits in. You’ll leave with an understanding of ways to plan/track agile projects and the tradeoffs involved with alternatives to story card estimates.

IoT Course Week 11: Load Testing


Last week, we dove into the importance of incorporating and collecting analytics through your connected device, how that information helps provide business value, and played with some of the ways that information can be displayed using some pretty graphs.

This Week

This week, we’ll continue our focus on non-functional requirements and start load testing. With connected devices, if the device can’t call home to its shared services, it loses a lot of its value as a smart device. These services need to be highly reliable, but things get interesting when thousands or millions of devices decide to call home at the same time.

To load test, we’ll generate concurrent usage on the system until a limit, bottleneck, unexpected behavior, or issue is discovered. This usage should model real-life usage as closely as possible, so the analytics we put in place last week will be a valuable resource. In instances where we don’t have data to work with, we can build out user funnels and extrapolate based on anticipated usage. Bad things will happen if we ship thousands of products without any idea of how our system will react under load. This data will also be a useful baseline for capacity planning and system optimization experiments.

The LAMPi system has two shared services that we need to put under load. One is the Django web server that handles login, and the other is the MQTT broker that handles sending messages to the lamp.

Load Testing with Locust

To test the web server we use Locust. Locust has become a LeanDog favorite due to its simple design, scalability, extensibility, and scriptability. We’ve used it to generate loads of 200,000 simultaneous users distributed across the US, Singapore, Ireland, and Brazil. These simulated users (locusts) walked through multi-page workflows at varying probabilities, modeling the end-to-end user interaction, complete with locusts dropping out of the user funnel at known decision points.

Locusts are controlled via a locustfile.py. The one below shows a user logging in and going to the home page:

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):

    def on_start(self):
        self.login()

    def login(self):
        response = self.client.get("/accounts/login/?next=/")
        csrftoken = response.cookies.get('csrftoken', '')
        self.client.post("/accounts/login/?next=/", {"csrfmiddlewaretoken": csrftoken, "username": {{USERNAME}}, "password": {{PASSWORD}}})

    @task(1)
    def load_page(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000

In order to run Locust, we’ll need a machine outside of the system to simulate a number of devices. Locust is a Python package, so it can run on most OSes. It uses a master/slave architecture, so you can distribute the simulated users across machines and generate more and more load.
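With the Locust release we used, that looks roughly like the following (host addresses are placeholders; newer Locust versions have since renamed these flags):

# Start the master, which serves the web UI and coordinates the test
loadtest$ locust -f locustfile.py --host=http://[web_server] --master

# Start one or more slaves, on this or other machines, pointing them at the master
loadtest$ locust -f locustfile.py --host=http://[web_server] --slave --master-host=[master_ip]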

Once you install locust and start the process, you control the test via a web interface.


Locust will aggregate the requests to a particular endpoint and provide statistics and errors for those requests.  



Load Testing with Malaria

To test MQTT we used a fork of Malaria. Malaria was designed to exercise MQTT brokers. Like Locust, Malaria spawns a number of processes to publish MQTT messages. Unlike Locust, it’s not easy to script; you have to fork it to do parametric testing or randomize data.

usage: malaria publish [-D DEVICE_ID] [-H HOST] [-p PORT] [-n MSG_COUNT] [-P PROCESSES]

Publish a stream of messages and capture statistics on their timing

optional arguments:
-D DEVICE_ID (Set the device id of the publisher)
-H HOST (MQTT host to connect to (default: localhost))
-p PORT (Port for remote MQTT host (default: 1883))
-n MSG_COUNT (How many messages to send (default: 10))
-P PROCESSES (How many separate processes to spin up (default: 1))

By modulating MSG_COUNT and PROCESSES you can control the load being sent to the broker.

Running some Example loads
Small load: Using 1 process, send 10 messages, from device id [device_id]

loadtest$ ./malaria publish -H [broker_ip] -n 10 -P 1 -D [device_id]

Produces results similar to this:

Clientid: Aggregate stats (simple avg) for 1 processes
Message success rate: 100.00% (10/10 messages)
Message timing mean 344.51 ms
Message timing stddev 2.18 ms
Message timing min 340.89 ms
Message timing max 347.84 ms
Messages per second 4.99
Total time 14.04 secs

Large load: Using 8 processes, send 10,000 messages each, from device id [device_id]

loadtest$ ./malaria publish -H 192.168.0.42 -n 10000 -P 8 -D [device_id]

Monitoring The Broker

Mosquitto provides a set of $SYS topics that allow you to monitor the broker.

This command will show all the monitoring topics (note that the $ is escaped with a backslash):

cloud$ mosquitto_sub -v -t \$SYS/#

The subtopics under $SYS/broker/load/ are of particular interest.
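For example, to watch just the one-minute moving averages of message rates rather than the whole $SYS tree:

cloud$ mosquitto_sub -v -t '$SYS/broker/load/messages/received/1min' -t '$SYS/broker/load/messages/sent/1min'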

Gather data

Before we start testing, we should figure out what metrics we want to measure. Resources on the shared system (CPU, memory, bandwidth, file handles) are good candidates for detecting capacity issues. Focusing on the user experience (failure rate, response time, latency) will help you hone in on the issues that will incur support costs or retention problems. Building the infrastructure to gather, analyze and visualize those metrics can be a significant part of the load testing process – but those tools are also necessary to do useful operational support in production. For the class, students used sysstat, locust, mqtt and malaria to gather metrics. A production-like system might use AWS Cloudwatch, New Relic, Nagios, Cacti, Munin, or a combination of other excellent tools.
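As one example of the kind of system-level data sysstat can collect while a test runs (the sampling interval and count here are arbitrary):

cloud$ sar -u 1 60      # CPU utilization, one sample per second for a minute
cloud$ sar -r 1 60      # memory utilization
cloud$ sar -n DEV 1 60  # network throughput per interface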

The point of load testing is to find the limits and then decide what to do about them. There will be a point where the cost to rectify an issue is greater than any immediate benefit; load testing will help you find that bar. During the class, limits of around 1,000 simultaneous users for the web and 5,000–10,000 MQTT messages per process were common.

Final project

For their final project, two students from the class, Matthew Bentley and Andrew Mason, decided to take on some of the problems with mqtt-malaria and extend Locust to publish MQTT messages. Using Locust, they were able to scale their load test infrastructure across many machines and put a broker under more stress. In their previous testing with Malaria, they found the point where a single device could send no more messages (at a reasonable rate), but they could not scale Malaria to determine at what point the broker would stop processing additional connected devices’ messages. Through their efforts, they reached 100% CPU on the broker, pushing 1 million messages a minute to 4,000 users. As a result of their work, they also open-sourced their contribution to Locust.

IoT Course Week 10: Analytics


Last week we got our feet wet with an introduction to Bluetooth Low-Energy on iOS. This week, we’ll dive into analytics, provide business value, and make some pretty graphs.

Why Analytics?

When building a new product, there are always a variety of options on the table with which to improve that product. At LeanDog, we practice a software development cycle that includes short sprints coupled with an open and honest feedback loop that provides us with the information we need to make informed decisions about where to focus our efforts and resources. This allows us to make sure that we are building the right thing the first time and minimize the amount of risk inherent in the process.

Until relatively recently, collecting feedback about a product in-use was a long process that required either direct observation or careful reading of written user reviews and complaints. Due to the complex and inconsistent nature of users, collecting strong quantitative data about a product experience can be difficult. In a now infamous incident from 2013, a New York Times journalist wrote a negative review of the Tesla Model S, only to have the car’s onboard analytics refute many of his claims. It is not uncommon for a customer to report one thing, but end up doing something entirely different, and your user experience process will need to account for these inconsistencies. One of the many ways we solve that problem is through the use of analytics platforms and reporting tools.

In addition to uncovering potential pitfalls, analytics are a powerful way for product owners, designers, and developers to understand how a product is actually used. For companies that make physical devices, this provides insights that are difficult to collect otherwise. Imagine receiving a coupon in the mail for a smart GE light bulb you love that’s nearing the end of its lifetime. The only way GE could possibly anticipate that your current bulb is about to go out (without calling you every day to ask how often you turned it on in the last 24 hours) is through analytics. With analytics, you get an avenue outside of sales to figure out which features and products your users actually love, which have problems or aren’t worth further development, and even to identify disengaged users for retention campaigns.

Enter Keen IO
For this class, we will use a popular analytics platform called Keen IO. Keen is a general-purpose tool, not locked into web, mobile, or embedded specifics. It has a large number of supported software development kits (SDKs), including Ruby, iOS, Python, .NET, etc. It also offers a generous free tier, which is perfect for the amount of traffic currently being driven by students’ LAMPi systems. Registering and sending an event in Python is as simple as this:

from keen.client import KeenClient

client = KeenClient(
    project_id="xxxx",
    write_key="yyyy",
)

client.add_event("sign_ups", {
    "username": "lloyd",
    "referred_by": "harry"
})

This will send an event containing the signup data to Keen’s database. Now back at LAMPi headquarters we can track those signups on a giant web dashboard:

var series = new Keen.Query("count", {
    eventCollection: "sign_ups",
    timeframe: "previous_7_days",
    interval: "daily"
});

client.draw(series, document.getElementById("signups"), {
    chartType: "linechart",
    label: "Sign Ups",
    title: "Sign Ups By Day"
});


Keen also provides a number of ways to pull the analytics data out and do additional processing to get exactly the view we want, for example, building a tree of who our top referrers are and what their “network” looks like.
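A rough sketch of that kind of query, using the same Python SDK (the read key and grouping field are placeholders, and the exact helper names may vary slightly between SDK versions):

from keen.client import KeenClient

client = KeenClient(
    project_id="xxxx",
    read_key="zzzz",   # analysis queries require the project's read key
)

# Count sign-ups over the last week, grouped by referrer
signups_by_referrer = client.count(
    "sign_ups",
    timeframe="previous_7_days",
    group_by="referred_by",
)

# Or pull the raw events down for custom processing, e.g. building the referral tree
raw_events = client.extraction("sign_ups", timeframe="previous_7_days")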


What’s next?
Analytics can also provide a leading indicator to help model the number of users that will be pounding on your infrastructure. To learn more about how to address that issue, join us next week when we talk about load testing!

IoT Course Week 9: Introduction to Bluetooth Low Energy

 

Internet of Things Course

To continue our goal of providing industry experience to the students of EECS397 Connected Devices, this week we will be diving deep into Bluetooth LE on iOS.

Recap

Last week students completed setting up a UI on iOS and Android that mirrored the interactions present on the LAMPi display and the web. The goal for this week is to connect those pieces.

CoreBluetooth and Bluetooth 4.0

With the release of iOS 5 and the iPhone 4S, Bluetooth LE was positioned as, and continues to be, one of the most common methods of short-range data communication. CoreBluetooth is the framework that Apple provides to developers to interact with Bluetooth LE hardware and peripherals. This is useful, as the current Bluetooth LE spec weighs in at over 2,000 pages in PDF form.

Communication with LAMPi through CoreBluetooth can be broken into a four-step process:

  1. Scan for the LAMPi device (from a provided array of service IDs)
  2. Connect to the discovered service (lamp-service)
  3. Probe its characteristics (hsv, brightness, on/off)
  4. Subscribe to the characteristics, so the app is notified whenever a property is written

Scanning for LAMPi

Students began the class by making an update to their LAMPis. Each team was given a BlueGiga BLED112 to plug into their Raspberry Pi, as well as updated Python services which allow the LAMPi to act as a Generic Attribute Profile (GATT) server. The GATT server broadcasts a number of available services to any BLE devices nearby that care to listen. In the case of the LAMPi, there is only one service being exposed, which is aptly called the Lampi Service.


The service being advertised from the LAMPi includes a device id, which students use to identify their unique LAMPi in a classroom containing many more. Once discovered, it is time to connect.


- (void)startScanningIfEnabled {
    if (self.shouldConnect) {
        [self.delegate onLoading:@"Searching for sensor..."];
        NSArray *services = @[[CBUUID UUIDWithString:LAMPI_SERVICE_UUID]];
        [self.bluetoothManager scanForPeripheralsWithServices:services options:nil];
    }
}

Connecting to a Peripheral


CoreBluetooth abstracts away much of the detail required in making a connection to a BLE peripheral. When a peripheral is discovered, our code will immediately attempt to connect. If that connection is successful, we search through the set of services that exist on the peripheral, looking for one that is recognized.

- (void)centralManager:(CBCentralManager *)central
 didDiscoverPeripheral:(CBPeripheral *)lampPeripheral
     advertisementData:(NSDictionary *)advertisementData
                  RSSI:(NSNumber *)RSSI {
    ...
    [self.bluetoothManager connectPeripheral:self.lampPeripheral options:nil];
}

- (void)centralManager:(CBCentralManager *)central
  didConnectPeripheral:(CBPeripheral *)lampPeripheral {
    NSLog(@"Peripheral connected");
    [self.delegate onLoading:@"Found lamp! Reading..."];
    lampPeripheral.delegate = self;
    ...
    // Search for a known service
    for (CBService *service in lampPeripheral.services) {
        if ([service.UUID isEqual:[CBUUID UUIDWithString:LAMPI_SERVICE_UUID]]) {
            self.lampService = service;
        }
    }
}

Services and Characteristics

At this point, we are connected to a Lamp Service, which is now providing us with a collection of characteristics. Characteristics are how communication in BLE works. To make a comparison to software, Services can be thought of as Classes while Characteristics are more like the properties on an Object. Characteristics support four different actions: read, write, notify and indicate. While read and write are arguably fairly straightforward, notify and indicate both have to do with a subscription flow that we will use heavily in the iOS application.


Subscribing to Characteristics

Because LAMPi has both on-device and cloud controls, we want to be able to track the state of the LAMPi in real time while the iOS app is running. If a user were to change the color of LAMPi using the Raspberry Pi UI, the Bluetooth service would send a notification to iOS that the HSV characteristic had changed. The following block of code is an example of a discovered characteristic being initialized. It reads the current hue and saturation from the HSV characteristic, and then tells the app to subscribe to the notify value (the notification) of the lamp peripheral.

[self.lampPeripheral readValueForCharacteristic:self.hsvCharacteristic];

if (self.hsvCharacteristic.value != nil) {
    float fHue = [self parseHue:self.hsvCharacteristic.value];
    float fSat = [self parseSaturation:self.hsvCharacteristic.value];
    [self.delegate onUpdatedHue:fHue andSaturation:fSat];
}

[self.lampPeripheral setNotifyValue:YES forCharacteristic:self.hsvCharacteristic];

At this point, when the LAMPi HSV Characteristic changes, CoreBluetooth will call a delegate method that is triggered from the setNotifyValue line.

- (void)peripheral:(CBPeripheral *)peripheral
didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic
             error:(NSError *)error;

It is in this block of code that the HSV value is updated in the app, and logic to refresh the UI is executed.

Fun Fact: Origins of Bluetooth Name

As a bonus for making it this far, did you know that the origin of the word “Bluetooth” comes from a c. 970 King of Denmark called Harald Bluetooth? In fact, the Bluetooth logo combines the Nordic runes for H (ᚼ) and B (ᛒ), Harald’s initials.

Team Building: Diversity Uncovers What Experience Can’t

 


Diversity tends to bring a broader perspective and a broader perspective is critical to strong team building. A good friend of mine recently told me a story that illustrates just how important diversity (in skill set, age, gender, background, etc.) is to building successful teams and how diversity finds things experience alone will not.

My friend’s daughter recently started an internship as a mechanical engineer with a well-respected global company that manufactures plumbing equipment. Her first assignment was with a group of very talented and experienced engineers who were working on a defect issue with one specific line of faucets. Returns were extremely high and customer ill-will toward the company brand was growing.

The faucet sold well because of its style and features; however, defects on the model were abnormally high. The engineering team, as all good experienced teams would do, had been poring over every aspect of the manufacturing process: looking at packaging, looking at suppliers’ parts, doing detailed reviews of designs and design specs, and assembling and disassembling loads of units right off the line trying to find the issue. My friend’s daughter, being new to faucets and having never installed one before, grabbed a finished product right off the line, sat down with the instructions, and proceeded to put the faucet together according to the steps provided.

No one else had thought to do this! To her amazement, the instructions walked a customer through a group of steps which not only broke the faucet but voided the warranty as well. The product was mechanically sound and functioned perfectly when assembled properly; however, the average non-plumber customer follows the instructions and doesn’t rely on a mechanical engineering degree or years of experience working with plumbing to install their own faucet.

An issue that had cost the company a considerable amount of money, capacity, and consumer goodwill was solved by a rookie mechanical engineering intern without her even utilizing her engineering skills. All of the team members working on the project had been putting faucets of ANY kind together for many years without ever pulling out the instructions. They could assemble a faucet sight unseen, on the fly, and it would work perfectly, so no one even thought about considering the instructions as a source of the issue. It wasn’t ego, it was human nature. The team had been so close to the product for so long they could skip steps to get to a “quicker” result. They also had very similar backgrounds and experience levels. It happens in every industry.

When asked by management what made her decide to look for problems with the instructions, my friend’s daughter said this:

“I wasn’t, it seemed like a logical place to start. Women and men think differently. I always read the instructions first. You have a lot of women customers so you need more women engineers.”

The perspective that diversity delivers is important. Don’t make the costly mistake of overlooking it.

Learn more about doing things differently in Climbing Mountains With Agile Methods.