Author: Gary Johnson

IoT Course Week 14 – Final Projects


It’s been an intense 13 weeks for both us and the students. Now it’s finally time to dig into final projects. Students were invited to come up with an idea that added some sort of value to the LAMPI product, and we offered a few suggestions to get them started. Here are some of the highlights from those projects.

Build An Alarm Clock

For this project, the students created an alarm clock system using LAMPI. Through the web interface they already built, the user can create one or several alarms.


Because LAMPI doesn’t have a speaker, the students had to improvise, blinking the light on and off several times instead.


From LAMPI, the user can see the current time as well as snooze the alarm.

Challenges included handling multiple time zones, transmitting the current time, resolving conflicting alarms, and reconciling light settings with an active alarm.
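The heart of such an alarm system is a timezone-aware check of a stored alarm time against the current moment. A minimal sketch in Python (the names and structure below are illustrative, not the students’ actual code):

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

def should_fire(alarm_time: time, tz_name: str, now_utc: datetime) -> bool:
    """Return True if an alarm set for alarm_time (local to tz_name)
    matches the current UTC moment, to the minute."""
    local_now = now_utc.astimezone(ZoneInfo(tz_name))
    return (local_now.hour, local_now.minute) == (alarm_time.hour, alarm_time.minute)

# 07:30 in New York corresponds to 11:30 UTC during daylight saving time
now = datetime(2016, 7, 19, 11, 30, tzinfo=ZoneInfo("UTC"))
print(should_fire(time(7, 30), "America/New_York", now))  # True
```

Storing alarms in UTC but comparing in the user’s local zone is what makes the “multiple time zones” challenge tractable.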

Natural Light Mode

This was an original idea from the students and not one of our suggested projects. This project used LAMPI to reflect the state of the light outdoors for a handful of benefits, as outlined in their presentation:


Using the API from OpenWeatherMap, they were able to get sunrise, sunset, and daylight conditions and map them to a color spectrum based on LAMPI’s current time. Because we didn’t have all day to watch the light color slowly change, they also built a demo mode that progressed through a 24-hour cycle in the span of a couple of minutes.
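OpenWeatherMap’s current-weather response includes sunrise and sunset as Unix timestamps (under `sys.sunrise` and `sys.sunset`). Here’s a simplified sketch of the mapping step; the interpolation scheme is my own guess at the approach, not the students’ actual implementation:

```python
def daylight_color(now, sunrise, sunset):
    """Map a Unix timestamp to an (r, g, b) color: warm near the horizon
    transitions, cool white at midday, dark blue at night.
    sunrise/sunset would come from the OpenWeatherMap response
    (data["sys"]["sunrise"], data["sys"]["sunset"])."""
    night = (20, 20, 80)       # deep blue
    horizon = (255, 140, 60)   # warm orange at sunrise/sunset
    noon = (255, 250, 240)     # cool daylight white
    if now < sunrise or now > sunset:
        return night
    t = (now - sunrise) / (sunset - sunrise)   # 0.0 at sunrise, 1.0 at sunset
    edge = abs(t - 0.5) * 2                    # 0.0 at midday, 1.0 at the horizon
    blend = lambda a, b, w: tuple(round(a[i] + (b[i] - a[i]) * w) for i in range(3))
    return blend(noon, horizon, edge)

sunrise, sunset = 1468918800, 1468971600  # example timestamps
print(daylight_color((sunrise + sunset) // 2, sunrise, sunset))  # (255, 250, 240)
```

A demo mode is then just a loop that advances `now` through a full day in a couple of minutes.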

User / Device Association

By the end of the course students had a functional system that connected a single LAMPI to the cloud. This project focused on expanding the system to accept multiple users, each with a unique LAMPI device. Since LAMPI doesn’t have a keyboard, and an on-screen keyboard would likely make for a poor experience, these students built a key-generation system similar to those used by Netflix and other services that run on set-top boxes and smart TVs.

When the user presses a button to connect to the cloud, LAMPI displays a randomly generated code like this:


The user can then log into their LAMPI web portal, enter the code, and the device is connected to their account. Codes are only good for one minute; after that, the user needs to generate a new one.
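A minimal sketch of how such a code-based pairing flow can work. The function names and in-memory storage here are illustrative; the real project presumably kept this state server-side in their web stack:

```python
import secrets
import string
import time

CODE_TTL = 60  # seconds a pairing code stays valid

def generate_code(length=6):
    """Generate a short, unambiguous pairing code like those shown on
    set-top boxes (the alphabet drops easily confused characters: O/0, I/1)."""
    alphabet = "".join(c for c in string.ascii_uppercase + string.digits
                       if c not in "O0I1")
    return "".join(secrets.choice(alphabet) for _ in range(length))

pending = {}  # pending pairings: code -> (device_id, issued_at)

def issue(device_id):
    """Called when the lamp's "connect" button is pressed; the lamp shows this code."""
    code = generate_code()
    pending[code] = (device_id, time.time())
    return code

def redeem(code, user_id, now=None):
    """Called by the web portal; attach the device if the code is fresh."""
    now = time.time() if now is None else now
    entry = pending.pop(code, None)  # codes are single-use
    if entry is None:
        return None
    device_id, issued_at = entry
    if now - issued_at > CODE_TTL:
        return None  # expired: the lamp must display a new code
    return (user_id, device_id)  # caller persists the association
```

Using `secrets` rather than `random` matters here, since a guessable code would let an attacker claim someone else’s device.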

Distributed Load Testing

While we had covered some basic load-testing scenarios in a previous week, there was still work to be done. This team of students took charge and started investigating how to load test a protocol such as MQTT using something like Locust, a LeanDog favorite for load testing web sites. Locust supports HTTP out of the box, but has a plugin system for testing other protocols. These students created their own MQTT plugin for Locust and open-sourced it on GitHub. From there, they ran a “locust swarm” of distributed clients from Digital Ocean to attack their Mosquitto broker in Amazon EC2.
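The heart of a custom Locust client is a wrapper that times each protocol operation and reports success or failure to Locust’s stats engine. Stripped of the Locust and paho-mqtt dependencies, the pattern looks roughly like this (the names are illustrative, not taken from the students’ plugin):

```python
import time

class StatsRecorder:
    """Stand-in for Locust's stats engine; a real plugin would fire
    environment.events.request instead of appending to a list."""
    def __init__(self):
        self.entries = []

    def record(self, op, elapsed_ms, error=None):
        self.entries.append({"op": op, "ms": elapsed_ms, "error": error})

def timed(stats, op_name, func, *args, **kwargs):
    """Run one protocol operation (e.g. an MQTT publish), timing it and
    recording success or failure, the way a Locust client wrapper does."""
    start = time.perf_counter()
    try:
        result = func(*args, **kwargs)
    except Exception as exc:
        stats.record(op_name, (time.perf_counter() - start) * 1000, error=str(exc))
        raise
    stats.record(op_name, (time.perf_counter() - start) * 1000)
    return result

stats = StatsRecorder()
timed(stats, "mqtt_publish", lambda: "ok")  # a stand-in for a real publish
print(stats.entries[0]["op"], stats.entries[0]["error"])  # mqtt_publish None
```

With every connect, publish, and subscribe wrapped this way, Locust can aggregate per-operation latency and failure rates across the whole distributed swarm.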

Their results were very promising. They were able to saturate CPU and network traffic but were unable to cause catastrophic failure in the Mosquitto broker. Messages with QoS 1 and 2 eventually got where they were intended to go after the congestion resolved, demonstrating why Mosquitto continues to be our go-to MQTT broker:


Wrapping Up

With final projects completed, we also ran a brief retrospective. We asked the students to post what worked, what didn’t work, and what surprised them. Lots of good feedback came out of this. We were able to home in on content that was too technical or not technical enough. We learned that having homework due on Monday caused issues, as students would often wait until the weekend, when we could only provide limited assistance over Slack. The retrospective also validated that we had done something right: we received an overwhelming amount of positive feedback, with several students saying how much they had gotten out of the class thanks to the breadth of topics covered.

Looking Forward

With our first class finally wrapped up, it’s time to look ahead. Preparations are already being made for a second run. We’re taking the feedback given, making some needed tweaks, and we’ll be ready for a new round of students in the fall. See you then!

Developing an amazing technology product of your own? Take our 1-minute self-assessment to make sure your project is on track for a successful launch! Or, reach out to us! We’d love to hear all about it!

Holiday Virtual Reality Card(board)

It’s that time of year again. The weeks leading up to the holidays when corporate holiday gifts from vendors start trickling into the office; delighting one and all with the kind of sugary, salty, scrumptious snacks that laugh in the face of even the most relaxed diets. It’s all part of the magic of the holiday season and LeanDog typically participates in the sugar rush by sending out bowls of chocolate “dog treats” to each of our clients. This year however, we wanted to do something special.

Don’t get me wrong, we still totally made the dog treats (it’s tradition!), but when The New York Times sent each of their subscribers Google Cardboard VR Viewers, we couldn’t help but get inspired. That’s when we decided to make our own Virtual Reality Holiday Cards, showcasing our unique office space and the incredible views from our spot in the North Coast Harbor.

Given the short timeframe, the idea was ambitious (even by our standards), but we had a vision. With less than four weeks to design, build, and deliver the gifts, it was an all-hands-on-deck effort to make this cool idea a reality. We have shared the story below in the hopes that we will inspire others to think big this holiday season.


The first step in determining feasibility was to identify the risks and unknowns, of which there were a few:

  • How to create 3D panoramas?
  • What does building a Unity app for Google Cardboard look like?
  • Are there any suppliers that can deliver customized Google Cardboard units in a short timeframe?
  • Can we get the iOS app submitted with enough lead time to clear the App Store review process?

Until these questions were all answered, we had to keep in mind the possibility of pulling the plug on the project and falling back on a plan B for holiday cards. The hardware was arguably the most important part and would probably require the most lead time, so we started there.


Many of the vendors listed on the Google Cardboard page do full-color prints with lead times on the order of weeks; however, Unofficial Cardboard had a nice solution using vinyl labels requiring a much shorter lead time.


Our own Will Kesling designed the labels on extremely short notice and built some paper mockups to test on unsuspecting LeanDoggers.


In the meantime, we also had to figure out the software side of the project. Google provides a Google Cardboard SDK for Unity, which is how we created the iOS and Android apps. We started with iOS because we knew the App Store review time was about a week, compared to the couple of hours needed to publish to the Google Play store.

During one of our hack days, Bill Holmes and I attempted to show a 2D panorama in a custom Unity app using the Cardboard SDK. We created a 2D spherical panorama of Studio on our phones using an app called 360 Panorama. To stabilize the camera better, Bill hacked together a tripod rig to hold the phone while we took the panoramas.


We created a new Cardboard project in Unity and imported the panorama as a texture, applying it to a skybox. That was all it took to show a 2D panorama. We ended the day with a functional app we could use to look around Studio on a Google Cardboard viewer.


Next we needed to show a 3D panorama. A skybox was no longer a great option, as we needed two images, one rendered for each eye. An article focused on 3D video for the Oculus Rift proved very helpful in moving this forward. Instead of projecting the image onto a skybox, we switched to using a left and a right sphere with the respective image for each eye, and left and right cameras in the center of the spheres. Once this was done, we finally had a 3D effect in our app.


However, we were still testing with stereo panoramas found using Google Image Search and needed to actually make our own content.


We started by looking at 360Heros’ offering — a solution using a rig holding 12+ GoPro Heros and special software to stitch all the video together. Unfortunately, it was prohibitively expensive for how infrequently we would need it, and there was no option for a local rental. Another tradeoff was that we would have needed to coordinate staged video shoots, which would be lower quality than solutions using high-end digital cameras.

The solution we settled on was based largely on a forum post about creating 3D panoramas with a single DSLR and a software package called PTGui to stitch the resulting photos together. It’s actually a really clever workflow: to make a good 2D panorama, you rotate the camera around its no-parallax point so that objects don’t shift in relation to each other between shots. Parallax would otherwise introduce stitching errors when the photos are combined into a single panorama. This workflow, however, purposefully introduces parallax error and uses the left and right halves of all the images to build the left- and right-eye views. Here you can see the right side of the photo being masked out for (counterintuitively) the right eye view:


One side effect, as previously mentioned, is the introduction of stitching errors. To work around this, you just need to take more photos to minimize how drastically the scene changes between photos. I found, for example, that I could make a great 2D panorama with about 12 photos, which means a photo every 30º. To make a good 3D panorama, I needed to take a photo every 5º, which means 72 photos per panorama.

That wasn’t the only challenge. After a day of shooting we noticed that due to the lighting levels, the windows in indoor photos were all blown out:


Since we didn’t have the luxury of being able to take shots whenever we wanted, we had to use exposure bracketing to take the same photo at multiple exposures, and use PTGui to perform exposure fusion.


It handled this very well, though it meant that each panorama was now created from 216 photos — over a GB of photos per scene. Stitching took about 15 minutes per scene, per eye, until I broke out our Mac Pro, aka “The Obelisk”, aka “Lil’ Trashcan”. PTGui can use OpenCL support on the GPU, which the Mac Pro has in spades:


This took stitching down to a few minutes per scene, restoring my sanity and keeping my MacBook from overheating.
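The photo counts above all follow directly from the step angle and the number of bracketed exposures:

```python
def photos_per_panorama(step_degrees, exposures=1):
    """Photos for a full 360-degree rotation at a fixed step angle,
    multiplied by the number of bracketed exposures per position."""
    return (360 // step_degrees) * exposures

print(photos_per_panorama(30))    # 12  - enough for a good 2D panorama
print(photos_per_panorama(5))     # 72  - needed for the 3D workflow
print(photos_per_panorama(5, 3))  # 216 - with 3-shot exposure bracketing
```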

To take these photos, we had to rent some camera equipment. Here’s the shopping list:

  • Canon EOS 6D
  • Canon EF 8-15mm f/4L Fisheye USM
  • Manfrotto 190 Tripod Legs
  • Manfrotto 338 Levelling Base
  • Manfrotto 303 Panoramic Head

The circular fisheye is important so you can get a full 180º vertically; otherwise, you’d need to take more photos of the floor and ceiling, making post-processing more difficult and possibly introducing more stitching errors.

The panoramic head is pretty interesting. It’s keyed so that you can set the degree of rotation for each shot and it will “click” into place with each pivot.


The Holiday Card

Friday came and our viewers finally arrived. They came out pretty great! A static page was stood up for downloading the mobile apps, and both apps completed their review process shortly thereafter. If you have your own Cardboard VR viewer, you can download them here:

Get it on Google Play

It’s been an intense two weeks but we made a ton of progress, reached a minimum viable product, and now it’s finally time to deliver our holiday cardboards. As for what comes next, expect a few small improvements to the apps to follow shortly and hopefully some awesome new 3D scenes throughout the year.  The whole process was an incredible learning experience and has left us eager to build more apps for Google Cardboard.

Happy Holidays!

Google Cardboard is a trademark of Google Inc.