Monday, March 12, 2018

The Trailhead Electric Imp project

For some time now I've been meaning to complete the Electric Imp Trailhead project. Unfortunately, a couple of things were holding me back.

Firstly, I needed the physical impExplorer Developer Kit hardware.

The board itself is a pretty reasonable $25 USD. Then I got to the shipping options to New Zealand in the cart, and they looked something like this (circa 2017):

Ouch! A bit hard to justify $116.49 shipping on a $25 purchase. To be fair, I checked it again today while writing this post and they now have some much more reasonable options via USPS starting at $16.33 USD.

Anyway, as luck would have it, I found some time to sit down in the Dreamforce '17 Developer Forest and start working through the project on site with the provided hardware. For some reason that I can't recall I opted to work through this module with an existing Developer Edition org (lack of time to delete and then create a new trail org?). That was also my downfall, as the org had a namespace defined and the IoT Contexts didn't seem to support namespaces. There were GACKs all over the place. Lesson learned: next time I'll be tackling it against a clean org.

Fast forward to today, some months after Dreamforce, and I've got some spare time to sit down and tackle this again. And no, I haven't forgotten the second thing that was holding me back. I'm just getting to that now.

I needed a fridge to use for the project. One that also got a good WiFi connection. I don't know about you, but my actual fridge is a bit of a WiFi dead zone. It's probably either the proximity of the microwave or the Faraday cage I wrapped it in to protect my ripe avocados from EMP attacks. This was an easy enough problem to solve. Like anyone else with a 3D printer, the solution was only several hours of plastic extrusion away. It turns out there was a purpose-made fridge ready to go on Thingiverse: Model Fridge for Salesforce IoT Electric Imp Developer Board.

With my new mini fridge, Electric Imp board, and three AA-batteries-I-borrowed-from-my-son's-remote-control-car-but-will-replace-before-he-wakes-up, I embarked on completing the project.

There appear to be two general ways to approach this Trailhead project. You can either follow the instructions carefully and double check everything as you go (my chosen path), or you can just click through as fast as you can, because each step ends with the statement:

We won’t check any of your setup. Click Verify Step to go to the next step in the project.

Boo! This is actually a real pain, as there isn't anything to indicate if you've taken a misstep at any point along the way.

That said, it is all fairly straightforward if you are proficient at following instructions plus cutting and pasting.

The IoT orchestrations were new to me and hence the most challenging part of the project. For instance, I initially found it confusing that the conditions you define on a state are the exit criteria that transition to the other states. I guess this makes sense, as a finite-state machine is only concerned with the transitions it can make from the current state. For example, if I'm in a door-open state, I'm only interested in transitioning back to the default state when the door is closed again.
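If it helps to see the same idea outside the orchestration builder, here's a tiny hand-rolled state machine in Apex. It's purely illustrative (the orchestration itself is configured declaratively, and the class and state names here are my own invention), but it shows why the conditions belong to the state you are exiting from:

// Purely illustrative - the orchestration is configured declaratively, not in Apex.
// The point is that each state only owns its own exit conditions.
public class FridgeDoorStates {
    public enum State { DEFAULT_STATE, DOOR_OPEN }

    public State current = State.DEFAULT_STATE;

    public void onReading(String door) {
        if (current == State.DEFAULT_STATE && door == 'Open') {
            // Exit condition defined on the default state
            current = State.DOOR_OPEN;
        } else if (current == State.DOOR_OPEN && door == 'Closed') {
            // Exit condition defined on the door open state
            current = State.DEFAULT_STATE;
        }
    }
}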

As the Electric Imp communicates with Salesforce via Platform Events, there's an easy mechanism for sending in mock readings via Apex. This helped with testing once my son wanted his AA batteries back.

// Publish a mock reading so the orchestration can be exercised without the actual device
Smart_Fridge_Reading__e mockReading = new Smart_Fridge_Reading__e();
mockReading.deviceId__c = '23733t1ed87bf1ee';
mockReading.door__c = 'Closed';
mockReading.humidity__c = 10.441;
mockReading.temperature__c = 8.6859;
EventBus.publish(mockReading);

Things I'd change? I'd probably take the light sensor LUX reading and relay that directly back to Salesforce rather than having a configured level to indicate that the door is open. I'm assuming it has the current form to show how values can be passed back from the Device to the Agent for additional processing. I'm also interested in the accelerometer and air pressure sensor, which are part of the impExplorer Kit hardware but aren't utilized in this badge. Maybe another state if the fridge door is closed too hard?

If you go away from the project for several days you might get the following error message from the Agent code:

[Agent] ERROR: [ { "message": "Session expired or invalid", "errorCode": "INVALID_SESSION_ID" } ]

In my case that was caused by the agent code persisting the access_token that it gets via the initial OAuth process. There isn't an automated mechanism to drop the expired session details or refresh them. Instead you need to use the Agent URL to complete the OAuth process again.

See Expired OAuth details being restored by getStoredCredentials()

Other related Trailhead modules:

Friday, November 17, 2017

Dreamforce 2017 Round-up / Summary

My carefully curated session agenda. It didn't last long.

In what is becoming a tradition for me I went into Dreamforce 2017 with a wildly optimistic 72 breakouts, theater sessions, and keynotes bookmarked that I wanted to attend. Like Codey desperately holding onto the clock on the Agenda page, I knew that was never going to work.

Instead I had a few key sessions and keynotes that I knew I wanted to attend. Beyond that, I just tried to go with the flow of the conference and to focus on doing things that I knew I could only do there in person.

It was certainly several whirlwind days that I'm only just now starting to piece back together from a trail of photos and tweets on my phone. Even assembling some of it into a single blog post is somewhat daunting.

Table of Contents

  1. Preconference
  2. Self Driving Astro in the IoT Grove
  3. Day One
    • Mini Hacks
    • Main Keynote
  4. Day Two
    • Mass Actions, Composite, and Bulk APIs
    • Salesforce Platform Limits
    • Developer Keynote
    • Meet the Apex Developers
    • Dreamfest
  5. Day Three
    • Advanced Logging Patterns With Platform Events
    • Build Custom Setup Apps & Config Tools With the All-New Apex Metadata API
    • The Future of Salesforce DX
    • True to the Core
    • Meet the Developers
  6. Day Four
    • Lightning Round Table
    • Salesforce Tower
  7. Walking Peanut butter distributor
  8. Random Photos


According to my phone, the first on-the-ground Dreamforce thing I did after picking up my badge was head into Moscone West. Pretty much business as usual, except it was the Sunday before the conference opened.

I was there to set up my Self Driving Astro. More on that below.

It was an interesting glimpse into what goes into setting up such a massive conference. There were a lot of people working hard and putting in some big hours to get everything set up and going in time for the opening the following morning.

On Sunday afternoon I also participated in the MVP volunteering event.

Self Driving Astro in the IoT Grove

Way back in April this year, after giving my automated Einstein-powered cat sprinkler talk to the Sydney Developer User Group, I'd latched onto the idea of using the Einstein Vision Services to make a rudimentary self-driving car. It was an itch that needed to be scratched. So I'd been working on it on and off since then, and submitted it as a possible talk for Dreamforce.

Sadly it didn't get accepted as a talk this year. I suspect it was a little too far off the actual promoted use cases for the Einstein services. So while it was fun in concept, it could have led to some confusion with customers about what the Salesforce offerings would be used for. Contrast it to something like the Identify Protein Structures Using Einstein talk, which sounds interesting in its own right and is on my list of sessions to catch up on.

It did leave me with a partially finished project that I'd put all that effort and those resources into. So rather than just mothball it, I decided to carry on and make it a blog post instead. I'm glad I did, as it has been an excellent excuse to dive deep into the Metamind APIs and contribute to some GitHub projects along the way. It also formed part of an extended Einstein Vision talk that I gave to the Auckland Developer User Group.

I published the blog post about it on the 20th of October. That was just over two weeks out from Dreamforce, to avoid it getting lost in the noise of the conference. Then the next day Reid Carlberg asked if I was bringing it to Dreamforce and if it could be set up on display. I jumped at the opportunity and set about reworking it from something that had only ever been run for 10 to 15 minutes at a time to something that would have to run continuously for 9+ hours a day. I had 11 days before I needed to be on my flight for Dreamforce.

Needless to say, it was a busy run-up to departing for Dreamforce to:

  • Print a new slightly larger seat,
  • Design and print a new steering column,
  • Upgrade to a larger metal gear servo,
  • Print a new steering wheel for the new servo,
  • Print a modified case for the Raspberry Pi to support the HAT and relay,
  • Design and print a case for the Larson Scanner board to prevent shorts,
  • Reinforce all the wiring to survive travel in checked luggage.

Somehow it all came together in time along with a number of software updates to minimize the API counts and make the error handling more resilient. Many thanks to:

  • Reid Carlberg for the opportunity to set it up in the IoT Grove.
  • Michael Machado for giving me a temporary bump on the Metamind API limits.
  • Josh Birk for helping to get it setup and running. Plus keeping an eye on it during the conference along with @codefriar and others in the IoT Grove.

Day One

The first day went past in a bit of a blur. I was in and around the Mini Hacks area volunteering during the morning and then over at the main keynote in the afternoon. In between I did a bit of exploring around Moscone West.

There were a number of areas touched on in the main keynote. In particular:

  • the customization of myLightning to change the color themes and visual designs of the UX.
  • mySalesforce for creating mobile apps in Lightning.
  • myTrailhead for Trailhead customized to other companies.

Also confetti - lots and lots of confetti.

Day Two

The day started off with finding confetti from the keynote just about everywhere in my bag.

Mass Actions, Composite, and Bulk APIs

I managed to catch my first full session today, by @thunderberry and @abhinavchadda. While both the Bulk API v2 improvements and Open API support sound really useful, what intrigued me most was the proposed Mass Actions API. This could radically speed up a number of operations by defining a transform over a query. The exact details of how it will be split into transactions and how limits will be addressed are something to keep an eye on.

Salesforce Platform Limits

This session was again with @thunderberry. The proposal here is to move a number of existing limits to become "soft limits". Then, depending on the pod health, you would be able to burst past the standard limit for short periods. There would still be an upper maximum burst limit. Consistently running past the soft limit will get you on a naughty list, where Salesforce will either encourage you to upgrade or pin you to the existing limits. If the pod is particularly busy you would be more limited in what you could burst to.

This all sounds good in theory, but it will make debugging more difficult. Something that works within the soft limits one moment could fail the next due to the org load. This does seem a bit like the current Apex CPU limit.

A quick tip from this session:

Don't poll the existing limits API too frequently. The calls to it still count against your API request limit.
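For what it's worth, here's a minimal sketch of reading that resource from Apex. The /services/data/v41.0/limits endpoint and the DailyApiRequests key are from the REST API documentation; a Remote Site Setting for your own instance URL is assumed, and note that every one of these calls is itself another API request:

// Each call to the limits resource consumes one of the very API requests it reports on,
// so cache the result rather than polling it in a tight loop.
HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getSalesforceBaseUrl().toExternalForm() + '/services/data/v41.0/limits');
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());

HttpResponse res = new Http().send(req);
Map<String, Object> limits = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
Map<String, Object> apiRequests = (Map<String, Object>) limits.get('DailyApiRequests');
System.debug('API requests remaining: ' + String.valueOf(apiRequests.get('Remaining'))
    + ' of ' + String.valueOf(apiRequests.get('Max')));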

Developer Keynote

The first thing that struck me about the Developer Keynote is what a fashionable bunch the developers are.

A "Suspicious Looking Plant" and some Aloe vera.

If time permits I'll come back to this section to recap some of the content.

Meet the Apex Developers

I asked a question in this session about ENTERING_MANAGED_PKG appearing in debug logs and how it can frequently consume more than 80% of a 2MB log in a dev packaging org. They had been forewarned that this question was coming, and thankfully the response was that they are onto it and will be fixing the excess logging in the future. 🎉


Dreamfest

AT&T Park made an excellent venue in terms of proximity to most of the hotels, visibility of the stage, and facilities for dealing with that many attendees.

Day Three

Advanced Logging Patterns With Platform Events

Using Platform Events to push logging events out and then monitoring them using a Utility Bar app.
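The rough shape of the pattern, as a minimal sketch rather than the presenters' actual code. The Log__e event and its Level__c and Message__c fields are hypothetical stand-ins for whatever the session used; the publish itself is the standard EventBus call, and a subscriber such as a Utility Bar component then listens on the event's streaming channel:

// Hypothetical Log__e platform event with Level__c and Message__c text fields.
// Publishing is fire-and-forget; subscribers listen on the /event/Log__e channel.
Log__e logEvent = new Log__e(
    Level__c   = 'ERROR',
    Message__c = 'Callout to the pricing service timed out'
);

Database.SaveResult result = EventBus.publish(logEvent);
if (!result.isSuccess()) {
    System.debug(LoggingLevel.ERROR, 'Could not publish log event: '
        + String.valueOf(result.getErrors()));
}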

Build Custom Setup Apps & Config Tools With the All-New Apex Metadata API

The Future of Salesforce DX

This session is well worth a repeat watch. Very dense on upcoming changes with DX. Mostly forward looking, but still interesting.

Change Data Capture: Data Synchronization in the Cloud

True to the Core

Meet the Developers

Steve Tamm rallying the troops prior to the Meet the Developers session

I asked a question in this session about raising support cases for GACKs and those without Premier support being turned away to the developer forums. If you have specific examples of this happening that you want to share, please see Examples of being bounced by Salesforce Support to forums when encountering a GACK.

Day Four

Today was a bit interrupted from attending sessions, but in a good way!

Salesforce Tower - Ending Dreamforce on a high

Walking Peanut butter distributor

This year I brought 3kg/6.6lb of Pic's peanut butter with me to give out to people. It's made right here in Nelson, New Zealand and I figured I'd be the only person giving out peanut butter at Dreamforce. At the very least it made for an interesting conversation starter. Not too many people thought I was the nutty one in the transaction (at least out loud).
And to those with peanut allergies - no hard feelings, can we still be friends? I wasn't really trying to kill you.

Sessions I mean to catch up on

Random Photos

Friday, October 20, 2017

Teaching Salesforce to drive with Einstein Vision Services

tl;dr. Skip to the results.

For your consideration, the Mad Catter

There's something in that tweet that speaks to me. Maybe it's time to pull a Tony Stark and take a new direction with my creations. Moving away from the world of weapons (well, questionable development) and more towards something for the public good. Something topical.

Tesla, Apple, Uber, Google, Toyota, BMW, Ford, these are just a few of the companies working on creating self-driving cars. Maybe I should dabble with that too.

My first challenge was that my R&D budget doesn't stretch quite as far as the aforementioned companies'. While they all have market values measured in the billions, my resources are a little more modest. So the kiwi Number Eight Wire mentality will come into play when LIDAR isn't an option.

Steve Rogers: Big man in a suit of armor. Take that off, what are you?
Tony Stark: Genius billionaire playboy philanthropist.
* Actual project did not require hammering an anvil... yet

It's another way I fancy myself a bit like Tony Stark. Less the genius, billionaire, playboy Stark and more the trapped-in-a-cave-with-only-the-resources-at-hand Stark. I'll make do with whatever I've got on hand: some servos, a Raspberry Pi, an old webcam, and a massive amount of cloud computing resources.

That last point is important, while Stark has J.A.R.V.I.S. I've got Einstein Platform Services.

I don't have a background in AI as a data scientist. My AI exposure has been more from copious amounts of science fiction and that one trimester back in university where I did a single paper. I'm assuming the depiction of AI in the media is about as accurate as the depiction of computing in general. So while it's fun to take inspiration from an 80's documentary on self-driving cars, the most useful learning from something like Knight Rider is that Larson scanners make AIs look cool (more on that later).

Elephant in the room

This is probably a good point to pause for a minute and note that, yes, I know a solely cloud-based AI isn't the best option, or even a very good option, for an autonomous vehicle. An image classification AI would only be part of a larger collection of sensor input used to make a self-driving car. Connectivity and latency issues pretty much rule out any full scale testing or moving at significant speed. That, and I don't think my vehicle insurance would cover crashing the family car while it was trying to navigate a tricky intersection by itself. But I want to see how far I can push it anyway with a purely optical solution using a single camera.

I know that a number of the components I'm using on the prototype are less than ideal, e.g. using continuous rotation servos rather than stepper motors, which would have given a more precise measure of the distance traveled. See my earlier point above that I'm mostly working with what I've got on hand. Feel free to contact me if you have a spare sum of cash burning a hole in your pocket that you want to invest in something like this.

Remember, this is just supposed to be a Mark 1 prototype. If it starts to look promising it can be refined in the future.

The Overly Simplistic Plan

The general idea for a minimalistic vehicle is:

  1. A single forward facing web cam to capture the current position on the road and what is immediately ahead.
  2. A Raspberry Pi to capture the image and process it.
  3. Einstein Predictive Vision Services to identify what is in the current image and the implied probable next course of action to take.
  4. Some basic servo based motor functions to perform the required movements.
  5. Repeat from step (2)

I say overly simplistic here in that it doesn't really localize the position of the vehicle in the environment or provide it with an accurate course to follow. Instead it just makes a best guess prediction based on what is currently seen immediately ahead. There is no feedback mechanism based on where the car was previously or how well it is following the required trajectory.

Hardware - Some assembly required

The model for the Mk2 Prototype

In the ideal world I'd have repurposed an RC toy car that had full steering and drive capabilities already. Or even just 3D printed a basic car. This would have given a realistic steering behavior via the front wheels. Instead I've started out with skid steering and a dolly wheel on a lego frame. This was quick and easy to put together and drive with some servos from a previous project.

While simple is good for v1, it was still not without its challenges.

The first problem was that I couldn't just go from zero to full power on the drive wheel servos. It tended to either damage the lego gears or send the whole vehicle rolling over backwards. I needed to accelerate smoothly to get predictable movement.

Custom LiPo based high drain power supply

Another challenge was providing a portable power source. Originally I was running the whole setup off a fairly stock-standard portable cellphone charger to power both the Pi and the servos. However, when I powered up the servos to move forward the Raspberry Pi would reset. The voltage sag when driving the motors was enough to reset the Pi. I was able to work around this by using one of the LiPo (lithium polymer) batteries from my quadcopters with a high C rating (they can deliver a lot of power quickly if required) and a pair of step-down transformers.


The standard software control loop is:

  1. Capture webcam image
  2. Send image off to the Einstein Image service for the steering Model.
  3. Based on the result with the highest probability, activate servos to move the vehicle
  4. Repeat

Capturing the image is fairly straightforward.

Since latency was already going to make fast movement impractical, I opted to relay the calls via Salesforce Apex services. This made the authentication easier, as I only needed to establish and maintain a valid session with the Salesforce instance to then access the Predictive Vision API via Apex. It also meant I didn't need to reimplement the Metamind API and could instead use René Winkelmeyer's salesforce-einstein-platform-apex GitHub project. This already included workarounds for posting the multipart requests required by the API.
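The relay looked roughly like the following. This is a simplified sketch rather than the actual project code: the endpoint name and response shape are mine, and the Einstein call is stubbed out where the salesforce-einstein-platform-apex classes would be invoked with the steering model ID.

// Sketch of the relay: the Pi POSTs a base64 webcam frame using its Salesforce session,
// and the org-side code forwards it to Einstein Vision and returns the top prediction.
@RestResource(urlMapping='/selfdriving/steer')
global with sharing class SteeringPredictionService {

    global class SteeringResult {
        public String label;        // e.g. 'forward', 'left', 'right'
        public Decimal probability;
    }

    @HttpPost
    global static SteeringResult predict(String imageBase64) {
        SteeringResult result = new SteeringResult();

        // Placeholder: call the Einstein Vision wrapper here with imageBase64 and the
        // steering model ID, then copy the highest probability label into the result.
        result.label = 'forward';
        result.probability = 0.0;

        return result;
    }
}

From the Pi's point of view this means a single POST to /services/apexrest/selfdriving/steer with a session ID, rather than dealing with the Einstein OAuth and multipart handling directly.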


This is something that has evolved greatly since my last project using Einstein Vision Services on the automated sprinkler. Previously you created a dataset from a pre-collected set of examples and then created your model from that to make the predictions with. If it needed refining you repeated the entire process. Now you can add feedback directly to a dataset and retrain the model in-place with v2 of the Metamind API. No more adding all the same examples again.

I found it really useful to have four operating modes for the control software to facilitate the different training scenarios.

The first mode would kick in if no DataSet could be found. In that case the car would wait for input at each step. Using a wireless keyboard I could indicate the correct course of action to take. The image visible at the time would be captured and filed away with the required label. This made creating the initial DataSet much easier.

The second mode was used if the DataSet existed but didn't have a Model created yet. In this case the input I gave could be used to directly create examples. This mode wasn't so useful and I'd tend to skip it.

The third mode applied if the Model could be found. In this case I could still indicate the correct course of action to take. If the model returned the expected prediction, nothing further needed to occur. However, if the highest prediction differed from the expected result, I could add the required feedback directly into the model.

The final, fourth mode was to use the model to make predictions and take action solely off the highest prediction returned.

Control theory - PID control

The earlier statement about the software action "Based on the result, activate servos to move the vehicle" is a gross simplification of what is required to have a vehicle follow a desired trajectory. In the real world you don't merely turn your vehicle full lock left or right to follow the desired course (what is referred to as bang-bang steering). Instead you make adjustments proportional to how far off the desired trajectory you are, how rapidly you are approaching the desired trajectory, and finally, if there are any environmental factors that are affecting the steering of the vehicle.

This should be familiar to anyone who makes and flies quadcopters (another interest of mine). It's referred to as PID (proportional–integral–derivative) control and provides the automated mechanism to keep the vehicle/quadcopter at the desired state, adapting the output to meet the desired input. The following video does an excellent job of explaining how it works.
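For reference, the textbook form of the controller output, where e(t) is the error between the desired trajectory and the measured one, and the three gains weight the proportional, integral, and derivative terms:

u(t) = K_p \, e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \, \frac{d e(t)}{d t}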

With my v1 prototype the control loop speed is so slow and the positioning so basic that I'm practically going with bang-bang style steering. It is less than ideal, as most scenarios require fine motor control to accurately follow the curve of the road ahead. Definitely something to look at improving in v2.

The prototype results

Training example

This video shows the feedback training process for an existing model. The initial model was created off a fairly small number of examples in the dataset. I then refined it with feedback whenever the top prediction didn't match what I indicated it should be.

I've added some basic voice prompts to indicate what is occurring as it mostly runs headless during normal operation.

Self steering example

In this video I'm using an entirely different dataset to the training example above. Getting single laps out of this setup was a bit of a challenge. The real world is far less forgiving than software when something goes wrong. Through trial and error I found it was really important to get the camera angle right to give the best prediction for the current position. It was also important to adjust the amount of movement per action. Trying to make large corrective movements, particularly around corners, would often result in getting irrecoverably off course. This comes back to the "bang-bang" style steering mentioned above.

I've edited out some of the pauses to make it more watchable. So don't consider this a real time representation of how fast it actually goes.

This process was also not without its share of failures. The following video shows a more realistic speed for the device and what happens when it misjudges the cornering.


Let's start with the million dollar question:

Can I use this today to convert my car into an AI powered automated driving machine and then live a life of leisure as it plays taxi for all the trips the kids need to make?

Umm, no. Even if you're happy with it only being around 50% confident that it should go in a straight line rather than swerve off to the right on occasion, there are still a number of things holding it back.

For example, while it could be extended to recognize a stop sign, there is currently no way to judge how far away that stop sign is.

That's not even taking into account other things that real roads have, like traffic, pedestrians, roadworks, ... At least I'm still some way off before I need to start worrying about things like the trolley car problem.

Future Improvements

A couple of days before this post came out, Salesforce released the Einstein Object Detection API. It's very similar to the image recognition API I used above, except in addition to the identified label in the image it will also give you a bounding box indicating where it was detected. With something like this I could then guess the relative position of the object in question to the current camera direction, and maybe even the distance based on a known size and how large the bounding box comes back as. Definitely something to investigate for the Mk2.
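Estimating distance from a bounding box would just be the usual pinhole camera approximation (my assumption here, not something the Object Detection API returns): for an object of known real-world width W, a camera focal length f expressed in pixels, and a detected box w pixels wide,

d \approx \frac{f \cdot W}{w}

so a stop sign whose bounding box comes back half as wide is roughly twice as far away.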

See also


Putting together a dedicated Pi Screen
Astro sized 3D printed racing seat
Original clunky power setup. Result of clunky power setup
Creating the blue Larson scanner
Red Astro Larson Scanner
Training setup for driving from Google Street View
Basic servo setup
Lipo battery loaded onto the frame with a mess of wiring