IoT Course Week 14 – Final Projects

It’s been an intense 13 weeks for both us and the students. Now it’s finally time to dig into final projects. Students were invited to come up with an idea that added some sort of value to the LAMPI product, and we offered a few suggestions as starting points. Here are some of the highlights from those projects.

Build An Alarm Clock

For this project, the students created an alarm clock system using LAMPI. Through the web interface they had already built, the user can create one or more alarms.

[Screenshot: creating alarms in the LAMPI web interface]

Because LAMPI doesn’t have a speaker, the students improvised: the alarm blinks the light on and off several times instead.
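
A minimal sketch of how that blinking might be driven, assuming the paho-mqtt client and a hypothetical lamp topic (this is our illustration, not the students’ actual code):

#!/usr/bin/env python
# Hypothetical sketch: "ring" the alarm by toggling the lamp over MQTT.
# The topic name and payload format are assumptions, not LAMPI's real API.
import json
import time
import paho.mqtt.client as mqtt

ALARM_TOPIC = 'lamp/set_config'  # placeholder topic

client = mqtt.Client()
client.connect('localhost', 1883)
client.loop_start()

for _ in range(10):  # blink on and off ten times
    client.publish(ALARM_TOPIC, json.dumps({'on': True}))
    time.sleep(0.5)
    client.publish(ALARM_TOPIC, json.dumps({'on': False}))
    time.sleep(0.5)

client.loop_stop()
client.disconnect()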

[Screenshot: the alarm going off on LAMPI]

From LAMPI, the user can see the current time as well as snooze the alarm.

Challenges included handling multiple time zones, transmitting the current time to the device, resolving conflicting alarms, and reconciling the user’s light settings with the alarm behavior.

Natural Light Mode

This was an original idea from the students, not one of our suggested projects. It used LAMPI to reflect the state of the light outdoors, with a handful of benefits outlined in their presentation:

[Slide: benefits of Natural Light Mode, from the students’ presentation]

Using the OpenWeatherMap API, they were able to get sunrise, sunset, and daylight conditions and map them to a color spectrum based on LAMPI’s current time. Because we didn’t have all day to watch the light color slowly change, they also built a demo mode that progressed through a 24-hour cycle in the span of a couple of minutes.
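
As a rough sketch of the approach (not the students’ actual code), the OpenWeatherMap call and the time-to-color mapping might look something like this; the API key, city, and color math are all placeholders:

# Sketch: pick a lamp color from where "now" falls between sunrise and sunset.
# OpenWeatherMap's /data/2.5/weather endpoint returns sunrise/sunset as unix
# timestamps; the color mapping below is invented for illustration.
import time
import requests

API_KEY = 'YOUR_API_KEY'  # placeholder

def get_sun_times(city='Cleveland,US'):
    resp = requests.get('https://api.openweathermap.org/data/2.5/weather',
                        params={'q': city, 'appid': API_KEY})
    sys_data = resp.json()['sys']
    return sys_data['sunrise'], sys_data['sunset']

def natural_light_color(now=None):
    now = now or time.time()
    sunrise, sunset = get_sun_times()
    if now < sunrise or now > sunset:
        return {'hue': 0.08, 'saturation': 0.8, 'brightness': 0.2}  # warm, dim
    t = (now - sunrise) / float(sunset - sunrise)  # 0.0 sunrise .. 1.0 sunset
    midday = 1.0 - abs(t - 0.5) * 2.0              # peaks at solar noon
    return {'hue': 0.08 + 0.05 * midday,
            'saturation': 1.0 - 0.7 * midday,
            'brightness': 0.3 + 0.7 * midday}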

User / Device Association

By the end of the course students had a functional system that connected a single LAMPI to the cloud. This project focused on expanding the system to accept multiple users, each with a unique LAMPI device. Because LAMPI doesn’t have a keyboard, and an on-screen keyboard would likely make for a poor experience, these students built a code-generation system similar to those used by Netflix and other services that run on set-top boxes and smart TVs.

When the user presses a button to connect to the cloud, LAMPI displays a randomly generated code like this:

[Screenshot: a randomly generated pairing code on the LAMPI display]

The user can then log into their LAMPI web portal, enter the code, and the device is connected to their account. Codes are only good for one minute; after that, the user needs to generate a new one.
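
A minimal sketch of how such one-time codes might work, assuming an in-memory store and a 60-second expiry (the function names and storage are our own invention):

# Sketch: short-lived pairing codes, along the lines described above.
import random
import string
import time

CODE_TTL_SECONDS = 60
_pending = {}  # code -> (device_id, issued_at)

def generate_code(device_id, length=6):
    """Called on the device's behalf when the user presses 'connect'."""
    alphabet = string.ascii_uppercase + string.digits
    code = ''.join(random.SystemRandom().choice(alphabet)
                   for _ in range(length))
    _pending[code] = (device_id, time.time())
    return code

def claim_code(code, user_id):
    """Called when the user enters the code in the web portal."""
    entry = _pending.pop(code, None)
    if entry is None:
        return None  # unknown or already-used code
    device_id, issued_at = entry
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return None  # expired; the device must generate a new code
    # A real implementation would persist the user/device association here.
    return device_id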

Distributed Load Testing

While we had covered some basic load testing scenarios in a previous week, there was still work to be done. This team of students took charge and started investigating how to load test a protocol such as MQTT using something like Locust, a LeanDog favorite for load testing web sites. Locust supports HTTP out of the box, but has a plugin system for testing other protocols. These students created their own MQTT plugin for Locust and open-sourced it on GitHub. From there, they ran a “locust swarm” of distributed clients from Digital Ocean to attack their Mosquitto broker in Amazon EC2.

Their results were very promising. They were able to saturate CPU and network capacity but were unable to cause catastrophic failure in the Mosquitto broker. Messages with QoS 1 and 2 eventually got where they were intended to go after the congestion resolved, demonstrating why Mosquitto continues to be our go-to MQTT broker:

[Screenshot: load test results against the Mosquitto broker]

Wrapping Up

With final projects completed, we also ran a brief retrospective. We asked the students to post what worked, what didn’t work, and what surprised them. Lots of good feedback came out of this. We were able to home in on content that was too technical in places and not technical enough in others. We learned that having homework due on Monday caused issues, as students would often wait until the weekend, when we could only provide limited assistance over Slack. It was also validation that we had done something right: we received an overwhelming amount of positive feedback, with several students saying how much they had gotten out of the class due to the breadth of the topics covered.

Looking Forward

With our first class finally wrapped up, it’s time to look ahead. Preparations are already being made for a second run. We’re taking the feedback given, making some needed tweaks, and we’ll be ready for a new round of students in the fall. See you then!

Developing an amazing technology product of your own? Take our 1-minute self-assessment to make sure your project is on track for a successful launch! Or reach out to us at LeanDog.com! We’d love to hear all about it!

6 Tips for Designing Effective Information Radiators

Your team space should deliver a message

Recently I’ve been helping a new team set up their team space. Now, I’m a big fan of hanging stuff on walls. Big visible stuff. But there’s more to it than that. When deciding what to put where, you are actually crafting a message to anyone who walks into the space.

My personal metric for knowing when you’ve done this effectively is to answer the following question: Does it change someone’s mood?

In my opinion, when anyone walks into your team space, from the newest dev to the most senior executive, what’s hanging on the walls should make them feel differently within a matter of minutes. If that doesn’t happen, something’s off.

Now I’m not advocating that teams throw a bunch of crap up to simply appease people, especially those not usually in the space. In fact, the big visible stuff should be minimalistic; the least amount of information required to paint a deep, rich picture of what is going on, what value is being added, when things are happening, and anything else that is useful.

I’m not going to go into what specifically any of the stuff should be. There is plenty of information out there on that: information radiators, styles of team boards, card maps, and lots more. Instead, I encourage you to think about the message you want your space to convey.

Here are a few things to consider in addition to the actual stuff you choose to hang up:

1) If it’s hanging up, make sure it’s actually big and visible 

Sometimes I wish plotters were never invented. Release burn-down charts and code coverage trends are examples of useful things to hang up. Too often, though, I see people (usually PMs with whiz-bang tools) print them out. I assume they do this because they think it’s easier, but walk across to the opposite side of the team space and tell me if you can read it…all of it. It’s not enough to see the trend lines if you don’t know what the chart is trending or what the axes are. Instead, draw it out on some paper or a white board with a big fat marker. Now go walk across the room…yeah, betcha can read that!

2) If it’s hanging up, make sure it’s useful

I like to think of this as pruning. Something that was once useful may have run its course. If so, tear it down. If you need it later, recreate it. However, before just tearing stuff down, make sure the whole team agrees that whatever it is, it’s no longer useful. When in doubt, leave it up; there are usually ways to make more room if you need it. It’s also important for the team to consider organizational usefulness. Not everything will be most useful to the team itself; some things are there for managers or other stakeholders. PMs like release burn-downs and cost burn-ups, CFOs like value stories, etc. These should not dominate the space, but they are still useful, and it’s important for the team to understand their organizational usefulness as well 🙂

3) If you need something useful, make sure you hang it up 

The space is not a fixed thing. Obviously, if we can tear stuff down we can put new stuff up. The process of software development is a journey of learning and discovery. Visualizing different things along the way can help a team communicate, both with each other and with those outside the team. If you think something might be useful, hang it up for a while and try it out. If it doesn’t add the value you thought, ask how it can be improved. If after a while it is still not providing value, see #2.

4) If it’s hanging up, make sure it’s in a good position 

Not all wall space is created equal. This could be due to many things, such as lighting, vantage point, or furniture arrangement. The best thing to do is plan a little bit before you hang something up. How often will it be updated? Is it a conversation centerpiece? Should it be visible to a passer-by? Once decided, go hang it up. If things change and it needs to be moved, move it…it’s only paper and tape, right?

5) If it’s hanging up, take pride in making it

Remember, you are crafting a message. You want things to be visible, digestible and useful. Those traits can be hard to achieve if your big visible stuff looks messy, half-assed or cluttered. It doesn’t take that long to use a straight edge instead of free-hand drawing. Create a color scheme of post-its or stickers. Use different size index cards to mean different things. Create a legend. It doesn’t have to be a work of art, but it should be tidy (within reason) and professional looking to communicate your message effectively. If you think something has gotten too messy over time, clean it up or redo it…it usually doesn’t take very long.

6) If it’s hanging up, it’s a living document

Don’t be afraid to enhance anything that’s hanging up. Feel free to draw, stick stickers, add post-its or whatever else adds value. You can usually tell which documents (big visible charts) a team finds the most useful by the number of enhancements made to them!

Finally, don’t forget to test this stuff out when you think you are done. Stand back from your space and look at it. What does it say to you? Stand in the doorway or hallway and look in. What does it look like from there? Have people from other teams walk through. Is there anything that is unclear to them? Ask a manager, director or executive to walk by. What were they able to tell by just looking? Maybe do a few of these every now and then as your space evolves with different stuff.

Ultimately, don’t forget that original question from way up there: Does it change someone’s mood? Folks should feel better about what the team is doing, where the project is headed, and what they are getting for all the team’s effort. If this is the case, then you’ve successfully crafted both an effective team space as well as an effective message.

This post was originally published by Matt Barcomb on odbox.co and was reposted with permission.

Preaching To The Choir

Recently, I was giving a class, and during one portion I discussed some of the cultural changes usually required for organizational progressive elaboration of work. During the break that followed this particular discussion, one of the attendees came up to me and said:

“Seems you’re preaching to the choir here. We all agree with what you’re sayin’…but it won’t do any good unless we get the managers in here.”

While I certainly understand this reaction, and unfortunately find that it’s not that uncommon, I still find it a bit disheartening…and a little frustrating.

This attitude, “I can’t change the things that influence me” and “what I can control isn’t big enough to really change anything,” is entirely the wrong one.

Changing the things that you can control, no matter how seemingly insignificant, should never be downplayed. One can always take complete control of their own actions, behaviors, and reactions. Combined with a little learning and a bit of creativity, this can become very powerful indeed. So, try changing the things you can; the things over which you have direct control. Share what you’ve tried, what you’ve learned, what worked well and what didn’t. Share up, share down, make it visible, let people ask about it and then share with them too. Eventually this can influence up, down, and across the network of an organization.

Sometimes it only takes one person to make a significant change. Other times it spreads slowly before snowballing into a major organizational shift. And many times making a “major” or “significant” change isn’t even needed. Sometimes it’s the little things that count.

So, the moral of the story is that even the choir can use the message, and even the choir is able to go out into the world and try to make it a better place through their own actions…and employees who think their organization could be better are just as much on the hook for trying to improve it as the managers and executives are.

This post was originally published by Matt Barcomb on odbox.co and was reposted with permission.

IoT Course Week 13 – IoT Platforms

Last week, we explored remote firmware updates for IoT Devices, using the Debian Package system. This week, we’ll be discussing various IoT platforms.

When we started the course, we had an explicit goal to avoid “black box” solutions, platforms, and vendor lock-in as much as possible. We wanted students to understand how these systems are built, as well as the architectural and security considerations. The course is in some ways “Learn IoT the Hard Way”: learning through building the various components of an IoT system, stitching those components into a holistic system, and touching on a number of important non-functional requirements like security, load testing, analytics, and firmware updates. Through that experience (and occasional struggle), we hoped to arm students with enough knowledge and experience to understand both the individual components and the overall system.

You can, of course, purchase a complete IoT system – they’re generally referred to as IoT Platforms. There are many, many choices:

[Image: a sampling of the many IoT platform vendors]

Platform Tradeoffs

When building a product or a business around any technical platform, one must consider the long-term implications of that platform. There are the basic questions of functionality and of offloading work and operations, plus the added complexities of hardware. What does this platform scale to? How quickly can I go from prototype to market? Where can I source large quantities of an item? Software as a service also has a few horror stories of vendors discontinuing a product line that other companies heavily relied on. Controlling your own destiny is very important, and that can be difficult when your business is built on a platform that is someone else’s responsibility to keep running. One platform we feel is here to stay for some time, however, is Amazon Web Services.

AWS IoT

From the beginning of this course, the intention was never to take the easy path in building the LAMPi system. Amazon offers a service encompassing much of the functionality we have spent the past several weeks piecing together: AWS IoT, which provides secure, bidirectional communication between internet-connected things and the AWS cloud. This includes a robust security model, a device registry, an MQTT message broker, and easy integration with the rest of AWS’s cloud offerings. Let’s dive in.

[Diagram: AWS IoT service overview]

The Message Broker offered through AWS IoT mirrors much of the functionality of Mosquitto, the MQTT broker we used for LAMPi. AWS takes it a step further by providing an HTTP RESTful interface to get and change the current state of your devices. The broker does not retain any messages, but simply provides a central point for the pub-sub model.

Aptly named, the Thing Registry acts as the central location for managing and identifying the things, or devices, hooked into the AWS IoT system. The Thing Registry keeps track of any resources or attributes associated with a particular thing. It also provides a place to keep track of MQTT client IDs and associated certificates, which improves one’s ability to manage and troubleshoot individual things.

Coupled with the Thing Registry is AWS’s concept of Thing Shadows. A shadow is a persistent digital representation of the state of a device. As well as the current reported state of a device, it holds the desired state, a clientToken used to correlate MQTT requests and responses, and metadata.
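
To give a feel for what working with a shadow looks like, here is a hedged sketch using boto3’s iot-data client (the thing name and state fields are invented for illustration):

# Sketch: read and update a Thing Shadow with boto3.
import json
import boto3

client = boto3.client('iot-data')

# Ask the device to change state; it sees the desired/reported delta and acts.
client.update_thing_shadow(
    thingName='lampi-001',  # hypothetical thing name
    payload=json.dumps({
        'state': {'desired': {'hue': 0.5, 'brightness': 0.8, 'on': True}}
    }))

# Read back the full shadow document: desired and reported state plus metadata.
shadow = json.loads(
    client.get_thing_shadow(thingName='lampi-001')['payload'].read())
print(shadow['state'])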

AWS IoT comes with the robust security and identity service that our team has come to know and love throughout this course. Things retain their own credentials, and access is granted to the system through the assignment of rules and permissions. Three identity principals are supported: X.509 certificates, IAM, and Amazon Cognito.

All of these services have the added benefit of being fairly cheap. The current rate is $5 per million messages.

Next week, join us for the final installment of the IoT Course Blog Series: Week 14 Final Projects and Wrap Up.

Can’t get enough insights? Discover why A Locust Swarm is a Good Thing or how Selecting the Right User Research Method can make all the difference to your product’s success.

Developing an amazing technology product of your own? Take our 1-minute self-assessment to make sure your project is on track for a successful launch! Or reach out to us at LeanDog.com! We’d love to hear all about it!

Does Your Team Have a Blame Well?

Here is an important role for every budding team… It’s called the “Blame Well”.
Now, the way it works is, each week the team rotates the role of Blame Well to a new team member. During that week, that team member immediately assumes all blame for anything that goes amiss.

The role of Blame Well enables the team and stakeholders to immediately get past trying to figure out whose fault anything is. Instead, the team can directly move into useful discussion about resolving whatever issue came up.

So…this post is definitely tongue-in-cheek. I think if a team really needed a Blame Well there are deeper problems to address.

What I hope folks can take away from this is that focusing energy on assigning fault and blame is pretty pointless. A better approach is to simply figure out the next best thing that can be done in the current context.

This doesn’t mean teams shouldn’t inspect and adapt…but true reflection and growth happen without fault and blame.

I’d enjoy hearing stories about how people helped their own teams or organizations get out of the blame game.

This post was originally published by the same author on odbox.co. It was reposted with permission because we threatened to throw him in the blame well for a month if he said no. 😀

The Multitasking Myth

As a parent of an ADD (Attention Deficit Disorder) child, I have had the unplanned but eye-opening experience of learning how to deal with ADD. Why eye-opening? I have to admit, I have always been skeptical of the validity of some of today’s conditions that become accepted by the medical community at large. The sheer number of new conditions grows each year. When my child was first diagnosed, I wondered if ADD was truly a legitimate issue or if it was created by pharmaceutical companies who conveniently happened to have a medication to manage the condition. It also seemed to me like a convenient excuse for those who were lazy, unmotivated, or simply incapable of handling the daily demands of the modern world. I have since learned that I was completely wrong.

After experiencing the effects ADD has on my child and putting together a plan to manage it, I have seen a 180-degree turn in my child’s ability to deal with the anxiety that accompanies ADD. My child has gone from a very challenged student to a peak performer at school almost immediately. My child’s satisfaction with achievements and self-confidence are also at an all-time high.

There are three things we implemented which have directly contributed to the successful turn-around:

  1. A sustainable and recognizable daily routine
  2. Prioritizing what is most important, communicating it to our child, and focusing on that list one item at a time
  3. Constantly re-evaluating #2 and adjusting accordingly

Since being introduced to ADD, I have become familiar with the effects it has on performance, self-satisfaction, and self-confidence. I have also noticed similarities between these effects and the effects of multitasking on performance, self-satisfaction, and self-confidence in the workplace. Over the years, I have even seen a number of cases of what one could term “artificially manufactured ADD”.

Why make this comparison? Because many in the business community treat multitasking the same way I first treated ADD: the effects on productivity are over-hyped, and those who can’t multitask effectively are just lazy, unmotivated, or incapable of handling the tasks of today’s business climate.

Multitasking is not a modern concept; it has been around for a long time. Today’s work environments drive multitasking demands on our time almost by default. CNN describes multitasking as “a post-layoff corporate assumption that the few can be made to do the work of many”. I’m not sure I completely agree with this viewpoint, but some studies show that multitasking is a less efficient approach to work than batching similar types of tasks together or focusing on one specific deliverable at a time. There are many suggestions for how to address and minimize the effects of multitasking, or how to operate to avoid it altogether. In my experience, the best way to minimize the performance loss of multitasking is similar to the approach we have taken to counteract the effects of ADD with my child:

  1. Be consistent and predictable wherever possible
  2. Prioritize your work and single-thread your efforts whenever possible
  3. Constantly re-evaluate #2 for updated priorities and adjust when needed

Highly productive teams groom, prioritize, re-groom, and re-prioritize their work constantly.  They also are consistently inquiring about priority and adjusting accordingly.  Most importantly, they work to keep their efforts as single-threaded (one item at a time) as possible to maximize their productivity.   The effect on your group’s performance, as well as your group’s output quality and agility, will be greater than you think.

Want to learn more ways to create high-performing teams? Check out another post by Mike Jebber – Team Building: Diversity Uncovers What Experience Can’t.

IoT Course Week 12 – Remote Firmware Updates

Last week, we explored Load Testing of HTTP and MQTT and how to measure the scalability of your system.

This Week

This week, we’ll continue our focus on non-functional requirements with Remote Firmware Update.  A typical desk lamp, or other non-IoT device, will have the same functionality 10 years after it leaves the factory.  The functionality and value of a “smart” device, however, can increase over time, as new software functionality is deployed.  

As students have experienced, updating a web application is relatively straightforward: deploying new code to the server updates the application for every user. Similarly, as new iOS and Android capabilities are developed, new app versions are published to the iTunes and Google Play stores. But how do you update the software/firmware on your smart device? There could be hundreds of thousands, or even millions, of devices distributed across the country or world, and each embedded system is slightly different. For Week 12, we show students how to remotely update LAMPi.

Debian Packages

Since we are using Raspbian, a Debian-based Linux distribution, for LAMPi, we settled on the Debian package system. It addresses the actual packaging and installation of software, as well as the distribution and security (authentication and integrity) of those packages.

Create Folder Structure

First, we need an executable to package. We’re going to make a package called “hi” that contains an executable also called “hi”. Let’s make a directory to build our deb package in:

cloud$ mkdir -p ~/pkg/hi/{DEBIAN,opt/hi}
cloud$ cd ~/pkg/hi/

Viewed in tree (you can install tree through apt-get), this folder structure should look like so:

pkg
└── hi
    ├── DEBIAN
    └── opt
        └── hi

So ~/pkg/hi is the directory that holds everything we want to package.

  • DEBIAN is a special folder that contains all the configuration & metadata for the Debian package
  • Everything else in ~/pkg/hi will be installed relative to the root of the target system, so ~/pkg/hi/opt/hi will install into /opt/hi. If we wanted to install some supervisor scripts with our package, for example, we could make a ~/pkg/hi/etc/supervisor/conf.d/ directory, and files in it would install into /etc/supervisor/conf.d.

Create Executable

Now let’s build an executable. When the package is installed, we’ll want the executable to end up in /opt/hi/, so create it as ~/pkg/hi/opt/hi/hi:

#!/usr/bin/env python

import os

version = 'Unknown'
version_path = os.path.join(os.path.dirname(__file__), '__VERSION__')
with open(version_path, 'r') as version_file:
    version = version_file.read()

print('Hello Deb! Version {}'.format(version))

Let’s create a file to hold the version of our program. Create ~/pkg/hi/opt/hi/__VERSION__ with the following contents (no whitespace, no newline):

0.1

Save and close both files, mark “hi” as executable, then run it:

cloud$ cd ~/pkg/hi/opt/hi/
cloud$ sudo chmod a+x hi
cloud$ ./hi

Hello Deb! Version 0.1

Create Package Metadata

Now let’s build a control file to describe our package.

Create a file at ~/pkg/hi/DEBIAN/control, replacing {{YOUR_NAME}} with your name:

Package: hi
Architecture: all
Maintainer: {{YOUR_NAME}}
Depends: python, python-dev, python-pip
Priority: optional
Version: 0.1
Description: Hello, Deb!
Section: misc

Note that these metadata files are whitespace-sensitive and do not allow extra empty lines, so be careful while editing.

Finally, we need to fix file permissions and make root the owner of the entire directory structure. These permissions will travel with the package, so if we don’t do this, the files will be installed with bad permissions.

cloud$ sudo chown -R root:root ~/pkg/hi/

Note that after you do this, further edits to files in this directory will require sudo.

This should be all we need to build our deb package, so let’s go:

cloud$ cd ~/pkg/
cloud$ dpkg-deb --build hi

You should now have a hi.deb in ~/pkg/.
You’ve just created a Debian Package!

Setting up a Debian Repository

We use reprepro, an easy-to-set-up Debian package repository manager. We show students how to publish their packages to that repository, add the repository to LAMPi, and then install the package on LAMPi from the repository.

Automating Deployment

Every time we change our hi package, there are several things we need to do: increment the version number, create the package, and upload it to our package repo. We teach the students how to build an automated script for these steps so we don’t have to run the commands manually each time. The packaging and deployment script also acts as living documentation of the release process, so future maintainers of the project don’t need to start from scratch. We use a Python module called bumpversion to automate updating the version information.
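
A sketch of what such a script might look like (our class script differed in its details; the paths, repository host, and distribution name here are placeholders):

#!/usr/bin/env python
# Sketch: bump the version, rebuild the .deb, and publish it with reprepro.
import subprocess

PKG_DIR = '/home/user/pkg'      # placeholder build directory
REPO_HOST = 'repo.example.com'  # placeholder package repo host

def run(cmd, **kwargs):
    print('+ ' + ' '.join(cmd))
    subprocess.check_call(cmd, **kwargs)

# 1. Increment the version everywhere it appears (__VERSION__, DEBIAN/control);
#    bumpversion rewrites each file listed in its configuration.
run(['bumpversion', 'patch'], cwd=PKG_DIR)

# 2. Rebuild the package.
run(['dpkg-deb', '--build', 'hi'], cwd=PKG_DIR)

# 3. Copy it to the repository server and include it in the repo.
run(['scp', PKG_DIR + '/hi.deb', REPO_HOST + ':/tmp/hi.deb'])
run(['ssh', REPO_HOST,
     'reprepro -b /var/packages includedeb jessie /tmp/hi.deb'])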

Finally

After walking through the above creation and deployment of a Debian package, setting up the reprepro repository, and installing the hi package on LAMPi, the students’ assignment for Week 12 was to demonstrate their understanding by applying the tools to the LAMPi code. The assignment required them to package the LAMPi UI application, the Bluetooth service, and the lamp hardware service, including maintainer scripts that run before installation (preinst), after installation (postinst), on removal, and so on, and to demonstrate versioning of the package in class.

Next Week – IoT Platforms

Why A Locust Swarm Is A Good Thing

Recently, I gave a talk on how “A Locust Swarm Can be a Good Thing!” at Stir Trek in Columbus. The talk covered our experience of load testing and preparing for hundreds of thousands of users on the first day of the .Realtor site launch.

This was a challenging environment with lots of external dependencies of varying capacity and failure tolerance.  We knew from the beginning that we’d never be able to load test all of our dependencies at once, so we had to figure out ways to test in isolation without spending all our time rewriting our test infrastructure.  Additionally, our load testing started late enough that we would not be able to coordinate an independent test with all of our dependencies in time.  We also needed to figure out what our users would do without having ever observed user behavior on the real site.  We built a model user funnel to capture our expectations and continually tweaked it as we discovered new wrinkles.  This funnel formed the basis for our load test script and allowed us to prioritize our integration concerns.

In the end, we learned a lot about making our workflows asynchronous, Linux kernel optimization, decoding performance metrics, and building giant DDoS clouds of load test slaves. We also learned that load testing should start “earlier”. Conversations about load and user behavior drive new requirements, and testing can uncover fundamental infrastructure problems. Decoding and isolating performance problems can require a lot of guessing and experimentation; things that are difficult to do thoroughly with an unmovable launch date. It’s also difficult to make large-scale changes to an application with confidence under time pressure. One of my key takeaways is to be nicer to external partners. The point of load testing is to find the breaking points of a system, and most people don’t like when their toys get broken. Building trust and safety into that relationship is very important before trying to figure out where and how something went wrong.

Check out the slides to the original talk here.

Stir Trek is an excellent conference with extremely thoughtful organizers and friendly people.  20 people came to talk to me in person after my talk, which was great!  Tickets sell out very quickly, but I recommend getting in next year if you can!

Agile…and BEYOND!!

LeanDog Agile experts Matt Barcomb, Mike Kvintus, and Jeff Morgan are at Agile and Beyond today and tomorrow. Take a peek at some of the knowledge they will be dropping while they are there:

Barcomb

Matt (@mattbarcomb) is a product development specialist with a penchant for organization design. He works with companies to turn software development into a core competency by integrating product development activities with business practices. Matt takes a pragmatic, systems approach to improvement, working with stakeholders throughout medium and large organizations. He has experience working with product management and software delivery teams as well as executive leadership teams, sales, services, and operations in a variety of industries. Matt enjoys challenging mental models, simplifying the seemingly complex, and uncovering the “why” behind the what. He shares his experiences, questions and ideas at www.odbox.co

Thursday, May 5 @ 10AM Value-focused prioritization & decision making

Does prioritizing your development portfolio seem unclear or mired in politics? Ever feel like the decisions about what gets worked on when are somewhere between arbitrary and emotional? Ever get tired of providing cost estimates for work of uncertain value? If you answered yes to any of these questions, this session is for you! Matt Barcomb will open with introductory concepts about shifting from a cost focus to a value focus for development work. Next, the practice of assigning business value to user stories will be debunked. Then a collaborative framework for prioritization, Benefit Mapping, will be discussed. Finally, Matt will end with ways to simplify the cost evaluation of work and risk.

Friday, May 6 @ 10AM Using Flow-based Road Mapping & Options

If you’d like an alternative to typical, quarter-by-quarter, schedule-oriented road mapping (and all the associated waste), then this session is for you. Matt Barcomb will introduce a Cadenced Flow approach to flow-based road mapping. He will first cover how to lay out and execute a road map based on models that better fit software planning, as well as how to transform your existing plans. Next, he will explore using options thinking to frame work, and how starting and stopping triggers for options reduce the need for blind budgeting or project practices. Finally, Matt will wrap up by touching on a few key metrics that will let you monitor and evaluate your new road map.

Cheezy

Cheezy is an international speaker and keynote presenter at Agile conferences. He has spoken six times at the Agile 20XX conferences, as well as at others like Agile Development East and West, Mile High Agile, Agile and Beyond, and Path to Agility.

Friday, May 6 • 3:00pm – 4:40pm Tested!

You’ve heard that quality belongs to everybody on an Agile team. You’ve heard that testers and developers should “collaborate” in order to drive quality higher. You’ve heard that automated tests help a team continuously validate quality. It’s time to stop thinking about it! It’s time to stop talking about it! It’s time to make it happen! Watch Ardi and Cheezy do this in front of your eyes. They will build a web application driven by acceptance and unit tests. You will see how a Product Owner, Tester, and Developer create executable user stories, develop the code to validate those stories, and refactor along the way. At the end, you will get a taste of what a Continuous Delivery pipeline looks like. Prepare to collect your jaws from the floor!

Kvintus

Friday, May 6 • 10:00am – 10:45am Worthless Story Card Estimates

How much of your time is wasted estimating story cards? We’ll explore some alternatives to estimating story cards and review real-world comparisons of tracking work using story points vs. counting story cards. Not sure when story card estimates are needed? We’ll discuss that too. All discussions will be based on real-world examples and comparisons of alternatives for several projects. We’ll also discuss #NoEstimates and how it fits in. You’ll leave with an understanding of ways to plan/track agile projects and the tradeoffs involved with alternatives to story card estimates.

IoT Course Week 11: Load Testing

Last week, we dove into the importance of incorporating and collecting analytics through your connected device and how that information helps provide business value, and we played with some of the ways that information can be displayed using some pretty graphs.

This Week

This week, we’ll continue our focus on non-functional requirements and start load testing. With connected devices, if the device can’t call home to its shared services, it loses a lot of its value as a smart device. These services need to be highly reliable, but things get interesting when thousands or millions of devices decide to call home at the same time.

To load test, we’ll generate concurrent usage on the system until a limit, bottleneck, unexpected behavior, or issue is discovered. This usage should model real-life usage as closely as possible, so the analytics we put in place last week will be a valuable resource. In cases where we don’t have data to work with, we can build out user funnels and extrapolate from anticipated usage. Bad things will happen if we ship thousands of products without any idea of how our system will react under the load. This data will also be a useful baseline for capacity planning and system optimization experiments.
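
As a toy example of that kind of extrapolation (every number below is invented for illustration):

# Toy capacity model: estimate peak concurrent web users from a simple funnel.
DEVICES_SHIPPED = 50000
ACTIVATION_RATE = 0.6      # fraction of devices that ever connect
DAILY_ACTIVE_RATE = 0.4    # fraction of activated devices used on a given day
PEAK_HOUR_SHARE = 0.2      # share of daily sessions in the busiest hour
AVG_SESSION_SECONDS = 90

daily_sessions = DEVICES_SHIPPED * ACTIVATION_RATE * DAILY_ACTIVE_RATE
peak_hour_sessions = daily_sessions * PEAK_HOUR_SHARE
concurrent_users = peak_hour_sessions * AVG_SESSION_SECONDS / 3600.0
print('Plan for roughly %d concurrent users' % concurrent_users)  # ~60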

The LAMPI system has two shared services that we need to put under load. One is the Django web server that handles login, and the other is the MQTT broker that handles sending messages to the lamp.

Load Testing with Locust

To test the web server we use Locust. Locust has become a LeanDog favorite due to its simple design, scalability, extensibility, and scriptability. We’ve used it to generate loads of 200,000 simultaneous users distributed across the US, Singapore, Ireland, and Brazil. These simulated users (locusts) walked through multi-page workflows at varying probabilities, modeling the end-to-end user interaction, complete with locusts dropping out of the user funnel at known decision points.

Locusts are controlled via a locustfile.py. The one below shows a user logging in and going to the home page:

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):

    def on_start(self):
        self.login()

    def login(self):
        response = self.client.get("/accounts/login/?next=/")
        csrftoken = response.cookies.get('csrftoken', '')
        self.client.post("/accounts/login/?next=/",
                         {"csrfmiddlewaretoken": csrftoken,
                          "username": {{USERNAME}},
                          "password": {{PASSWORD}}})

    @task(1)
    def load_page(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000

In order to run Locust, we’ll need a machine outside of the system to simulate a number of devices. Locust is a Python package, so it can run on most OSes. It uses a master/slave architecture, so you can distribute the simulated users across machines and generate more and more load.

Once you install locust and start the process, you control the test via a web interface.

[Screenshot: the Locust web interface for starting a test]

Locust will aggregate the requests to a particular endpoint and provide statistics and errors for those requests.  

[Screenshots: Locust request statistics and failure reports]

Load Testing with Malaria

To test MQTT we used a fork of Malaria. Malaria was designed to exercise MQTT brokers. Like Locust, Malaria spawns a number of processes to publish MQTT messages. Unlike Locust, it’s not easy to script; you have to fork it to do parametric testing or randomize data.

usage: malaria publish [-D DEVICE_ID] [-H HOST] [-p PORT] [-n MSG_COUNT] [-P PROCESSES]

Publish a stream of messages and capture statistics on their timing

optional arguments:
-D DEVICE_ID (Set the device id of the publisher)
-H HOST (MQTT host to connect to (default: localhost))
-p PORT (Port for remote MQTT host (default: 1883))
-n MSG_COUNT (How many messages to send (default: 10))
-P PROCESSES (How many separate processes to spin up (default: 1))

By modulating MSG_COUNT and PROCESSES you can control the load being sent to the broker.

Running Some Example Loads

Small load: Using 1 process, send 10 messages from device id [device_id]:

loadtest$ ./malaria publish -H [broker_ip] -n 10 -P 1 -D [device_id]

Produces results similar to this:

Clientid: Aggregate stats (simple avg) for 1 processes
Message success rate: 100.00% (10/10 messages)
Message timing mean 344.51 ms
Message timing stddev 2.18 ms
Message timing min 340.89 ms
Message timing max 347.84 ms
Messages per second 4.99
Total time 14.04 secs

Large load: Using 8 processes, send 10,000 messages each from device id [device_id]:

loadtest$ ./malaria publish -H 192.168.0.42 -n 10000 -P 8 -D [device_id]

Monitoring The Broker

Mosquitto provides a set of $SYS topics that allow you to monitor the broker.

This command will show all the monitoring topics (note that the $ is escaped with a backslash):

cloud$ mosquitto_sub -v -t \$SYS/#

The subtopics under $SYS/broker/load/ are of particular interest.
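
You can also watch these topics programmatically; here is a small sketch using the paho-mqtt client (the broker address is a placeholder):

# Sketch: print the broker's load statistics as they arrive.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print('%s %s' % (msg.topic, msg.payload.decode()))

client = mqtt.Client()
client.on_message = on_message
client.connect('localhost', 1883)       # placeholder broker address
client.subscribe('$SYS/broker/load/#')  # message/byte rates over 1/5/15 min
client.loop_forever()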

Gather Data

Before we start testing, we should figure out what metrics we want to measure. Resources on the shared system (CPU, memory, bandwidth, file handles) are good candidates for detecting capacity issues. Focusing on the user experience (failure rate, response time, latency) will help you home in on the issues that will incur support costs or retention problems. Building the infrastructure to gather, analyze, and visualize those metrics can be a significant part of the load testing process – but those tools are also necessary for useful operational support in production. For the class, students used sysstat, Locust, MQTT, and Malaria to gather metrics. A production-like system might use AWS CloudWatch, New Relic, Nagios, Cacti, Munin, or a combination of other excellent tools.

The point of load testing is to find the limits and then decide what to do about them. There will be a point where the cost to rectify an issue is greater than any immediate benefit; load testing will help you find that bar. During the class, limits of around 1,000 simultaneous web users and 5,000-10,000 MQTT messages per process were common.

Final Project

For their final project, two students from the class, Matthew Bentley and Andrew Mason, decided to take on some of the problems with mqtt-malaria and extend Locust to publish MQTT messages. Using Locust, they were able to scale their load test infrastructure across many machines and put a broker under more stress. In their previous testing with Malaria, they had found the point where a single device could send no more messages at a reasonable rate, but they could not scale Malaria to determine at what point the broker would stop processing additional connected devices’ messages. Through their efforts, they reached 100% CPU on the broker, pushing 1 million messages a minute to 4,000 users. As a result of their work, they also open-sourced their contribution to Locust.
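
Their actual plugin is the one they published on GitHub; the sketch below only illustrates the general shape of wiring an MQTT client into Locust’s event hooks, with invented topic and class names throughout:

# Sketch: a Locust "device" that publishes MQTT messages and reports the
# timing of each publish through Locust's request events, so results appear
# in the normal Locust UI. Assumes paho-mqtt and the Locust 0.7-era API.
import time
import paho.mqtt.client as mqtt
from locust import Locust, TaskSet, task, events

class MQTTClient(object):
    def __init__(self, host, port=1883):
        self._client = mqtt.Client()
        self._client.connect(host, port)
        self._client.loop_start()

    def publish(self, topic, payload, qos=1):
        start = time.time()
        try:
            self._client.publish(topic, payload, qos=qos).wait_for_publish()
        except Exception as e:
            events.request_failure.fire(
                request_type='mqtt', name=topic,
                response_time=(time.time() - start) * 1000.0, exception=e)
        else:
            events.request_success.fire(
                request_type='mqtt', name=topic,
                response_time=(time.time() - start) * 1000.0,
                response_length=len(payload))

class DeviceBehavior(TaskSet):
    @task
    def report_state(self):
        self.client.publish('devices/demo/state', '{"on": true}')

class MQTTLocust(Locust):
    task_set = DeviceBehavior
    min_wait = 1000
    max_wait = 5000

    def __init__(self):
        super(MQTTLocust, self).__init__()
        self.client = MQTTClient('localhost')  # placeholder broker address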