
Friday, October 20, 2017

Teaching Salesforce to drive with Einstein Vision Services

tl;dr. Skip to the results.

For your consideration, the Mad Catter

There's something in that tweet that speaks to me. Maybe it's time to pull a Tony Stark and take a new direction with my creations: moving away from the world of weapons-grade questionable development and more towards something for the public good. Something topical.

Tesla, Apple, Uber, Google, Toyota, BMW, and Ford are just a few of the companies working on creating self-driving cars. Maybe I should dabble with that too.

My first challenge was that my R&D budget doesn't stretch quite as far as the aforementioned companies'. While they all have market values measured in the billions, my resources are a little more modest. So the kiwi number eight wire mentality will come into play when LIDAR isn't an option.

Steve Rogers: Big man in a suit of armor. Take that off, what are you?
Tony Stark: Genius billionaire playboy philanthropist.
* Actual project did not require hammering an anvil... yet

It's another way I fancy myself a bit like Tony Stark. Less like the genius, billionaire, playboy Stark and more like the trapped-in-a-cave-with-only-the-resources-at-hand Stark. I'll make do with whatever I've got on hand: some servos, a Raspberry Pi, an old web cam, and a massive amount of cloud computing resources.

That last point is important: while Stark has J.A.R.V.I.S., I've got Einstein Platform Services.

I don't have a background in AI or data science. My AI exposure has been more from copious amounts of science fiction and that one trimester back in university where I did a single paper. I'm assuming the depiction of AI in the media is about as accurate as the depiction of computing in general. So while it's fun to take inspiration from an 80's documentary on self-driving cars, the most useful learning from something like Knight Rider is that Larson scanners make AIs look cool (more on that later).

Elephant in the room

This is probably a good point to pause for a minute and note that, yes, I know a solely cloud-based AI isn't the best option, or even a very good option, for an autonomous vehicle. An image classification AI would only be part of a larger collection of sensor inputs used to make a self-driving car. Connectivity and latency issues pretty much rule out any full scale testing or moving at significant speed. That, and I don't think my vehicle insurance would cover crashing the family car while it was trying to navigate a tricky intersection by itself. But I want to see how far I can push it anyway with a purely optical solution using a single camera.

I know that a number of the components I'm using on the prototype are less than ideal, e.g. using continuous rotation servos rather than stepper motors, which would have given a more precise measure of the distance traveled. See my earlier point above that I'm mostly working with what I've got on hand. Feel free to contact me if you have a spare sum of cash burning a hole in your pocket that you want to invest in something like this.

Remember, this is just supposed to be a Mark 1 prototype. If it starts to look promising it can be refined in the future.

The Overly Simplistic Plan

The general idea for a minimalistic vehicle is:

  1. A single forward facing web cam to capture the current position on the road and what is immediately ahead.
  2. A Raspberry Pi to capture the image and process it.
  3. Einstein Predictive Vision Services to identify what is in the current image and the implied probable next course of action to take.
  4. Some basic servo based motor functions to perform the required movements.
  5. Repeat from step (2).

I say overly simplistic here in that it doesn't really localize the position of the vehicle in the environment or provide it with an accurate course to follow. Instead it just makes a best guess prediction based on what is currently seen immediately ahead. There is no feedback mechanism based on where the car was previously or how well it is following the required trajectory.

Hardware - Some assembly required

The model for the Mk2 Prototype

In an ideal world I'd have repurposed an RC toy car that already had full steering and drive capabilities, or even just 3D printed a basic car. This would have given realistic steering behavior via the front wheels. Instead I've started out with skid steering and a dolly wheel on a Lego frame. This was quick and easy to put together and drive with some servos from a previous project.

While simple is good for v1, it was still not without its challenges.

The first problem was that I couldn't just go from zero to full power on the drive wheel servos. It tended to either damage the Lego gears or send the whole vehicle rolling over backwards. I needed to accelerate smoothly to get predictable movement.
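As a rough illustration, the ramp-up ends up looking something like the sketch below. It assumes the Adafruit PCA9685 HAT object ("hat") from the servo post further down this page, and the pulse counts and delays are placeholder values rather than the exact ones I used.

// A minimal sketch of ramping the drive servos up rather than jumping straight
// to full power. Pulse values are PCA9685 counts and purely illustrative.
ushort stopPulse = 375;   // roughly neutral for a continuous rotation servo
ushort fullPulse = 600;   // full speed ahead
ushort step = 25;         // how hard to accelerate each iteration

for (ushort pulse = stopPulse; pulse <= fullPulse; pulse += step)
{
    hat.SetAllPWM(0, pulse);                      // same call used in the HAT post below
    await System.Threading.Tasks.Task.Delay(50);  // let the frame settle before pushing harder
}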

Custom LiPo based high drain power supply

Another challenge was providing a portable power source. Originally I was running the whole setup, both the Pi and the servos, off a fairly stock-standard portable cellphone charger. However, when I powered up the servos to move forward the Raspberry Pi would reset. The voltage sag when driving the motors was enough to reset the Pi. I was able to work around this by using one of the LiPo (lithium polymer) batteries from my quadcopters with a high C rating (they can deliver a lot of power quickly if required) and a pair of step-down converters.

Software

The standard software control loop is:

  1. Capture webcam image
  2. Send the image off to the Einstein Vision service for the steering model.
  3. Based on the result with the highest probability, activate servos to move the vehicle
  4. Repeat

Capturing the image is fairly straightforward.
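For reference, on Windows IoT it boils down to a few lines with the UWP MediaCapture API. This is just a minimal sketch, skipping error handling and camera selection:

// Grab a single frame from the web cam as a JPEG using the UWP MediaCapture API.
var capture = new Windows.Media.Capture.MediaCapture();
await capture.InitializeAsync();

var file = await Windows.Storage.ApplicationData.Current.TemporaryFolder
    .CreateFileAsync("frame.jpg", Windows.Storage.CreationCollisionOption.ReplaceExisting);

await capture.CapturePhotoToStorageFileAsync(
    Windows.Media.MediaProperties.ImageEncodingProperties.CreateJpeg(), file);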

Since latency was already going to make fast movement impractical, I opted to relay the calls via Salesforce Apex services. This made the authentication easier, as I only needed to establish and maintain a valid session with the Salesforce instance to then access the Predictive Vision API via Apex. It also meant I didn't need to reimplement the Metamind API and could instead use René Winkelmeyer's salesforce-einstein-platform-apex GitHub project, which already included workarounds for posting the multipart requests required by the API.
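The relay call from the Pi then becomes a plain HTTPS POST to the org. Something along the lines of the sketch below, where the /EinsteinSteering endpoint name and the JSON shape are made up for illustration; the real Apex REST service just hands the image on to the prediction API via René's library.

// Hedged sketch of relaying a captured frame through a hypothetical Apex REST
// service. instanceUrl and sessionId come from however you authenticated to the org.
static async System.Threading.Tasks.Task<string> GetSteeringPrediction(
    byte[] jpegBytes, string instanceUrl, string sessionId)
{
    using (var client = new System.Net.Http.HttpClient())
    {
        client.DefaultRequestHeaders.Authorization =
            new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", sessionId);

        var body = new System.Net.Http.StringContent(
            "{\"imageBase64\":\"" + System.Convert.ToBase64String(jpegBytes) + "\"}",
            System.Text.Encoding.UTF8, "application/json");

        var response = await client.PostAsync(
            instanceUrl + "/services/apexrest/EinsteinSteering", body);
        return await response.Content.ReadAsStringAsync(); // labels and probabilities
    }
}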

Training

This is something that has evolved greatly since my last project using Einstein Vision Services on the automated sprinkler. Previously you created a dataset from a pre-collected set of examples and then created your model from that to make the predictions with. If it needed refining you repeated the entire process. Now you can add feedback directly to a dataset and retrain the model in-place with v2 of the Metamind API. No more adding all the same examples again.
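For the curious, going at the v2 API directly the flow is roughly a feedback call followed by a retrain call. The sketch below is based on my reading of the Metamind docs, so treat the endpoint and field names as assumptions to verify (in practice I relay these calls via Apex anyway); "token" is an Einstein Platform access token.

// Assumed v2 feedback + retrain flow; field names should be checked against the docs.
using (var client = new System.Net.Http.HttpClient())
{
    client.DefaultRequestHeaders.Authorization =
        new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);

    // 1. File the misprediction away as a feedback example with the correct label.
    var feedback = new System.Net.Http.MultipartFormDataContent();
    feedback.Add(new System.Net.Http.StringContent(modelId), "modelId");
    feedback.Add(new System.Net.Http.StringContent("Forward"), "expectedLabel");
    feedback.Add(new System.Net.Http.ByteArrayContent(jpegBytes), "data", "frame.jpg");
    await client.PostAsync("https://api.einstein.ai/v2/vision/feedback", feedback);

    // 2. Retrain the existing model in place, including the feedback examples.
    var retrain = new System.Net.Http.MultipartFormDataContent();
    retrain.Add(new System.Net.Http.StringContent(modelId), "modelId");
    retrain.Add(new System.Net.Http.StringContent("{\"withFeedback\": true}"), "trainParams");
    await client.PostAsync("https://api.einstein.ai/v2/vision/retrain", retrain);
}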

I found it really useful to have four operating modes for the control software to facilitate the different training scenarios.

The first mode would kick in if no DataSet could be found. In this case the car would wait for input at each step. Using a wireless keyboard I could indicate the correct course of action to take. The image visible at the time would be captured and filed away with the required label. This made creating the initial DataSet much easier.

The second mode was used if the DataSet existed but didn't have a Model created yet. In this case the input I gave could be used to directly create examples. This mode wasn't so useful and I'd tend to skip it.

The third option applied if the Model could be found. In this case I could still indicate the correct course of action to take. If the model returned the expected prediction nothing further needed to occur. However, if the highest prediction differed from the expected result I could directly add the required feedback into the model.

The fourth and final mode was to use the model to make predictions and take action solely off the highest prediction returned.

Control theory - PID control

The earlier statement about the software action "Based on the result, activate servos to move the vehicle" is a gross simplification of what is required to have a vehicle follow a desired trajectory. In the real world you don't merely turn your vehicle full lock left or right to follow the desired course (what is referred to as bang-bang steering). Instead you make adjustments proportional to how far off the desired trajectory you are, how rapidly you are approaching the desired trajectory, and finally, if there are any environmental factors that are affecting the steering of the vehicle.

This should be familiar to anyone who makes and flies quadcopters (another interest of mine). It's referred to as PID (proportional–integral–derivative) control and provides the automated mechanism to keep the vehicle/quadcopter at the desired state, where the output needs to adapt to meet the desired input. The following video does an excellent job of explaining how it works.
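For the record, the whole controller fits in a handful of lines. A minimal sketch, where the error would be something like how far the detected road position is from the centre of the frame and the output feeds the steering correction:

// Minimal PID controller sketch. The gains (kp, ki, kd) need tuning per vehicle.
class PidController
{
    private readonly double kp, ki, kd;
    private double integral, previousError;

    public PidController(double kp, double ki, double kd)
    {
        this.kp = kp; this.ki = ki; this.kd = kd;
    }

    public double Update(double error, double deltaSeconds)
    {
        integral += error * deltaSeconds;                            // accumulated past error
        double derivative = (error - previousError) / deltaSeconds;  // how fast we're closing in
        previousError = error;
        return kp * error + ki * integral + kd * derivative;
    }
}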

With my v1 prototype the control loop speed is so slow and the positioning so basic that I'm practically going with bang-bang style steering. It is less than ideal, as most scenarios require fine motor control to accurately follow the curve of the road ahead. Definitely something to look at improving in v2.

The prototype results

Training example

This video shows the feedback training process for an existing model. The initial model was created from a fairly small number of examples in the dataset. I then refined it with feedback whenever the top prediction didn't match what I indicated it should be.

I've added some basic voice prompts to indicate what is occurring as it mostly runs headless during normal operation.

Self steering example

In this video I'm using an entirely different dataset from the training example above. Getting single laps out of this setup was a bit of a challenge. The real world is far less forgiving than software when something goes wrong. Through trial and error I found it was really important to get the camera angle right to give the best prediction for the current position. It was also important to adjust the amount of movement per action. Trying to make large corrective movements, particularly around corners, would often result in getting irrecoverably off course. This comes back to the "bang-bang" style steering mentioned above.

I've edited out some of the pauses to make it more watchable. So don't consider this a real time representation of how fast it actually goes.

This process was also not without its share of failures. The following video shows a more realistic speed for the device and what happens when it misjudges the cornering.

Conclusions

Let's start with the million dollar question:

Can I use this today to convert my car into an AI powered automated driving machine and then live a life of leisure as it plays taxi for all the trips the kids need to make?

Umm, no. Even if you're happy with it only being around 50% confident that it should go in a straight line rather than swerve off to the right on occasion, there are still a number of things holding it back.

For example, while it could be extended to recognize a stop sign, there is currently no way to judge how far away that stop sign is.

That's not even taking into account other things that real roads have, like traffic, pedestrians, roadworks, ... At least I'm still some way off from needing to worry about things like the trolley problem.

Future Improvements

A couple of days before this post came out, Salesforce released the Einstein Object Detection API. It's very similar to the image recognition API I used above, except in addition to the identified label in the image it will also give you a bounding box indicating where it was detected. With something like this I could then guess the relative position of the object in question to the current camera direction, and maybe even the distance, based on a known size and how large the returned bounding box is. Definitely something to investigate for the Mk2.

See also

Gallery

Putting together a dedicated Pi Screen
Astro sized 3D printed racing seat
Original clunky power setup. Result of clunky power setup
Creating the blue Larson scanner
Red Astro Larson Scanner
Training setup for driving from Google Street View
Basic servo setup
Lipo battery loaded onto the frame with a mess of wiring

Saturday, September 30, 2017

Dreamforce 2017 Session picks

Here are some of my current picks for Dreamforce 2017 sessions. I'm aiming for a mix of developer related topics in areas I want to learn more about plus anything that sounds informative. There are other sessions that I think are interesting as well, but I haven't found a way to access my bookmarked sessions to include them.

Keynotes

Meet The *'s

APIs

Salesforce DX

IDE

Peeking under the hood

Platform Events

Platform

It needs two roadmap sessions? That's hardcore!

Einstein / AI

Security

Fun

Lightning

See also

Friday, September 22, 2017

Teaching an Old Log Parser New Tricks - FIT sfdx CLI plugin

I've currently got two problems with the prototype FitDX Apex debug log parser I released a couple of weeks ago.

  1. It comes from the FuseIT SFDC Explorer source and is excessively large for what it does. It's got a lot of baggage for features it doesn't currently expose.
  2. To be a native SFDX CLI plugin I need to be working from node.js.

Enter Edge.js and its quote from "What problems does Edge.js solve?"

Ah, whatever problem you have. If you have this problem, this solves it.

--Scott Hanselman (@shanselman)

As promised, it does what it says on the box and helps solve both problems by connecting the .NET world with node.js - in process.

In my case I've got a .NET library that has many years of work put into it to parse Apex debug logs from a raw string. It's also still actively developed and utilized in the FuseIT SFDC Explorer UI. Between that and my lack of experience with node.js, I wasn't in a hurry to rewrite it all so it could be used as a native SFDX CLI plugin. While I still needed to reduce the .NET DLL down to just the required code for log parsing, I didn't need to rewrite it all in JavaScript. Bonus!

This seems like a good place for a quick acknowledgement to @Amorelandra for pointing me in the right direction with node.js. Thanks again!

With the edge module the node.js code becomes the plumbing to pass input from the Heroku-based plugin framework off to my happy place (the .NET code). And it is all transparent to the user (assuming the OS specific Edge.js requirements are met). A big bonus of being a plugin is that it gives me a prebuilt framework for gathering all the inputs in a consistent way.
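On the .NET side all Edge.js needs is a class it can invoke with a method that takes and returns object. Roughly like the sketch below, where ApexLogParser is a hypothetical stand-in for the real FuseIT parsing code:

// By convention Edge.js looks for a Startup class exposing Invoke(object)
// returning Task<object>. ApexLogParser is a made-up stand-in for the FuseIT parser.
using System.Threading.Tasks;

public class Startup
{
    public Task<object> Invoke(object input)
    {
        dynamic args = input;                 // properties handed over from node.js
        string rawLog = (string)args.log;     // the raw debug log text
        string format = (string)args.format;  // e.g. "summary", "json", "csv"

        var parsed = ApexLogParser.Parse(rawLog);
        return Task.FromResult<object>(parsed.Render(format));
    }
}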

I used Wade Wegner's SFDX plugin as a boilerplate for integrating with the CLI, chopped most of it out, and kept the command to pull the latest debug log from an org. This debug log is then fed directly into the .NET library to use the existing FuseIT parser.

Enough patting myself on the back for getting this all working. How does it work in practice?

Summary

Like with fitdx, this will give you a count for each event type that appears in the log.

>sfdx fit:apex:log:latest -u <targetusername> --summary
41.0 : 41.0 APEX_CODE,DEBUG;APEX_PROFILING,NONE;CALLOUT,NONE;DB,INFO;SYSTEM,INFO;VALIDATION,INFO;VISUALFORCE,NONE;WAVE,INFO;WORKFLOW,INFO
CODE_UNIT_FINISHED : 2
CODE_UNIT_STARTED : 11
CUMULATIVE_LIMIT_USAGE : 18
CUMULATIVE_LIMIT_USAGE_END : 18
DML_BEGIN : 22
DML_END : 22
ENTERING_MANAGED_PKG : 768
LIMIT_USAGE_FOR_NS : 54
SOQL_EXECUTE_BEGIN : 25
SOQL_EXECUTE_END : 25
USER_DEBUG : 2
USER_INFO : 1

768 ENTERING_MANAGED_PKG events! Story of my life.

Debug only

>sfdx fit:apex:log:latest -u  --debugOnly
28.0 APEX_CODE,DEBUG;APEX_PROFILING,INFO;CALLOUT,INFO;DB,INFO;SYSTEM,DEBUG;VALIDATION,INFO;VISUALFORCE,INFO;WAVE,INFO;WORKFLOW,INFO
18:56:59.180 (1181147014)|USER_DEBUG|[4]|ERROR|UserInfo.getSessionId(): 00D700000000001!ApexTestSession
18:56:59.416 (1416651819)|USER_DEBUG|[4]|ERROR|UserInfo.getSessionId(): 00D700000000001!ApexTestSession

Filtering

>sfdx fit:apex:log:latest -u  --filter USER_INFO,CODE_UNIT_STARTED
28.0 APEX_CODE,DEBUG;APEX_PROFILING,INFO;CALLOUT,INFO;DB,INFO;SYSTEM,DEBUG;VALIDATION,INFO;VISUALFORCE,INFO;WAVE,INFO;WORKFLOW,INFO
18:56:58.400 (4013418)|USER_INFO|[EXTERNAL]|005700000000001|daniel.ballinger@example.com|Pacific Standard Time|GMT-07:00
18:56:58.979 (979316201)|CODE_UNIT_STARTED|[EXTERNAL]|01q70000000HFV6|DFB.TestAccountBeforeUpdate1 on Account trigger event BeforeInsert for [new]
18:56:58.982 (982430091)|CODE_UNIT_STARTED|[EXTERNAL]|01q70000000HFVB|DFB.TestAccountBeforeUpdate2 on Account trigger event BeforeInsert for [new]
18:56:58.985 (985467868)|CODE_UNIT_STARTED|[EXTERNAL]|01q70000000blnO|DFB.AccountTrigger on Account trigger event BeforeInsert for [new]
18:56:59.108 (1108633716)|CODE_UNIT_STARTED|[EXTERNAL]|01q70000000Ti5b|DFB.ToolingTest on Account trigger event BeforeUpdate for [0010g00001YxxQ5]
18:56:59.180 (1180950117)|CODE_UNIT_STARTED|[EXTERNAL]|01q70000000H1vs|DFB.RestrictContactByName on Contact trigger event BeforeInsert for [new]

Format Shifting

>sfdx fit:apex:log:latest -u  --json
>sfdx fit:apex:log:latest -u  --csv
>sfdx fit:apex:log:latest -u  --human

The first two should still be fairly self-explanatory, and the latter is still intended to make reading the log easier for those who run on digesting food rather than electricity.

Installation

Most of the steps are covered in the GitHub repo. I found the most challenging part here to be piggybacking off the existing install of node.js and npm that lives under sfdx.

Your results may vary. It is likely safer to install your own instance of node.js. The standard disclaimer applies.

One more thing

Friday, September 8, 2017

Back to the command line - FitDx Apex debug log parser

Love it or loathe it, there are times in a developer's day when you are going to find yourself at the command line. I'll leave the GUI vs command line debate to others to sort out.

My general position is they both have their strengths and weaknesses, and we as developers should try to use the right tool for the job. Very much like clicks and code with Salesforce: use them together where possible to get the most done with the least amount of effort.

Tooling shouldn't be a zero sum game.

Debug logs

Salesforce debug logs have been a pet project area for me for some time now, e.g. 2011, 2014, 2017. They form such a fundamental part of the development process but don't get a lot of love. Let me paint you a picture of a typical day in development ...

Earlier in the day something broke badly in production and you’ve only just been provided a raw 2MB text version of the debug log to diagnose the problem from. What do you do?

You can’t use any of the tooling that the Developer Console provides for log inspection on a text file. There is no way of opening it. So there will be no stack tree or execution overview to help you process a file approximately the same size as a 300 page ebook. I say approximately here because there are so many factors that could affect the size in bytes for a book. The general point is that a 2MB text file contains a lot of information that you aren't going to read in a few minutes.

The interactive debugger isn't any help for a production fault and you don't even know the steps to reproduce the problem let alone if the same thing would happen in a sandbox.

You could start combing the log file with a variety of text processing tools, or, or...

It's normally about this point that I'd direct you to the FuseIT SFDC Explorer debug log tooling which gives you:

  • (somewhat retro) GUI tooling for grouping and filtering events by category.
  • Highlighting of important events such as fatal errors and failed validations.
  • And, my current favorite, a timeline view to make more sense of execution time measured in nanoseconds.

But I'm not going to cover any of that in this post. Instead we're going to cover something more experimental than the software that's been in perpetual beta for who knows how many years.

Fit DX

The FuseIT DX CLI, or FitDx in usage, is a proof of concept that I could take the debug log parser from the explorer product mentioned above and apply it directly to debug logs at the command line. I'm just putting it out there to see if there is any interest in such a thing.

You can get it right now as part of the zip download for the FuseIT SFDC Explorer.

There are certainly areas for improvement. First and foremost is the executable size. It's swollen up a fair bit from features that it doesn't expose. I'll look at stripping those out in a future release, which should result in a significantly smaller package.

But enough about how big FitDx is. What's more important is how you use it.

Summary Command

If you ask sfdx for a debug log, that's exactly what you'll get. The complete, raw, unabridged debug log dumped to the command line. An experienced command line person would at this stage type out some grep regex wizardry to just show them what they wanted to see. Such is the power of the command line, but it isn't always clear where to start.

I wanted something simpler that would give you a very quick overview of what is in the log.

>sfdx force:apex:log:get -i 07L7000004G0U8kEAF -u DeveloperOrg | fitdx --summary

28.0 : 28.0 APEX_CODE,FINE;APEX_PROFILING,FINEST;CALLOUT,ERROR;DB,FINEST;SYSTEM,FINE;VALIDATION,DEBUG;VISUALFORCE,FINER;WORKFLOW,INFO
CODE_UNIT_FINISHED : 14
CODE_UNIT_STARTED : 14
CONSTRUCTOR_ENTRY : 14
CONSTRUCTOR_EXIT : 14
CUMULATIVE_LIMIT_USAGE : 14
CUMULATIVE_LIMIT_USAGE_END : 14
CUMULATIVE_PROFILING : 4
CUMULATIVE_PROFILING_BEGIN : 1
CUMULATIVE_PROFILING_END : 1
ENTERING_MANAGED_PKG : 184
EXCEPTION_THROWN : 2
EXECUTION_FINISHED : 14
EXECUTION_STARTED : 14
FATAL_ERROR : 4
LIMIT_USAGE_FOR_NS : 14
METHOD_ENTRY : 73
METHOD_EXIT : 73
SYSTEM_CONSTRUCTOR_ENTRY : 33
SYSTEM_CONSTRUCTOR_EXIT : 33
SYSTEM_METHOD_ENTRY : 165
SYSTEM_METHOD_EXIT : 165
TOTAL_EMAIL_RECIPIENTS_QUEUED : 14
USER_INFO : 14

The summary will currently give you a count for the events that are covered in the piped input. In this case I can see that there were 4 FATAL_ERROR events.

Filtering

Now that I know what I'm looking for, I want to see just those events of interest. The --filter command accepts a comma separated list of event types that you want to see. Everything else gets dropped. There is also the --debugOnly option, which is a preset to filter for USER_DEBUG only.

>sfdx force:apex:log:get -i 07L7000004G0U8kEAF -u DeveloperOrg | fitdx --filter FATAL_ERROR
28.0 APEX_CODE,FINE;APEX_PROFILING,FINEST;CALLOUT,ERROR;DB,FINEST;SYSTEM,FINE;VALIDATION,DEBUG;VISUALFORCE,FINER;WORKFLOW,INFO
20:25:09.695 (698828931)|FATAL_ERROR|System.AssertException: Assertion Failed: Expected: {"filters":{"footer":{"settings":{"enable":"1","text/plain":"You can haz footers!"}}}}, Actual: {"filters":{"footer":{"settings":{"text/plain":"You can haz footers!","enable":"1"}}}}

Class.DFB.SmtpapiTest.setSetFilters: line 116, column 1
20:25:09.695 (698839161)|FATAL_ERROR|System.AssertException: Assertion Failed: Expected: {"filters":{"footer":{"settings":{"enable":"1","text/plain":"You can haz footers!"}}}}, Actual: {"filters":{"footer":{"settings":{"text/plain":"You can haz footers!","enable":"1"}}}}

Class.DFB.SmtpapiTest.setSetFilters: line 116, column 1
20:25:10.160 (1162726841)|FATAL_ERROR|System.AssertException: Assertion Failed: Expected: {"section":{"set_section_key":"set_section_value","set_section_key_2":"set_section_value_2"}}, Actual: {"section":{"set_section_key_2":"set_section_value_2","set_section_key":"set_section_value"}}

Class.DFB.SmtpapiTest.testAddSection: line 80, column 1
20:25:10.160 (1162743640)|FATAL_ERROR|System.AssertException: Assertion Failed: Expected: {"section":{"set_section_key":"set_section_value","set_section_key_2":"set_section_value_2"}}, Actual: {"section":{"set_section_key_2":"set_section_value_2","set_section_key":"set_section_value"}}

Class.DFB.SmtpapiTest.testAddSection: line 80, column 1

Format Shifting

The final set of alpha features are around changing how the debug log is presented. Options include --json, --csv, and --human. The first two should be fairly self-explanatory on what the output will be. The human option was just an idea to use more column-like alignment rather than vertical bars (|) to separate the elements.

In hindsight I should probably allow for multiple options at once so you can specify both the required filters and output format in one command. For the time being you can just pipe the filtered output back in.

>sfdx force:apex:log:get -i 07L7000004G0U8kEAF -u DeveloperOrg | fitdx --filter EXCEPTION_THROWN | fitdx --human
28.0                           APEX_CODE,FINE;APEX_PROFILING,FINEST;CALLOUT,ERROR;DB,FINEST;SYSTEM,FINE;VALIDATION,DEBUG;VISUALFORCE,FINER;WORKFLOW,INFO
20:25:09.695    EXCEPTION_THROWN              [116]System.AssertException: Assertion Failed: Expected: {"filters":{"footer":{"settings":{"enable":"1","text/plain":"You can haz footers!"}}}}, Actual: {"filters":{"footer":{"settings":{"text/plain":"You can haz footers!","enable":"1"}}}}
20:25:10.160    EXCEPTION_THROWN               [80]System.AssertException: Assertion Failed: Expected: {"section":{"set_section_key":"set_section_value","set_section_key_2":"set_section_value_2"}}, Actual: {"section":{"set_section_key_2":"set_section_value_2","set_section_key":"set_section_value"}}

The Forward Looking Statements

This is the part where I promise you the world with all the cool things I could potentially do in the future. Things like:

  • Direct integration as a SFDX plugin via a node.js to .NET library
  • Expose other features from the FuseIT SFDC Explorer at the command line, such as the alternative version of WSDL2Apex.
  • I've got a crazy idea for extra things like pulling Apex class and trigger definitions directly from a Salesforce StackExchange question or answer and then loading them directly into a newly created burner scratch org. The process could then also be reversed, with the contents of the scratch org converted into the required markdown. This could greatly speed up the process of asking and answering questions on the SFSE.

Friday, August 18, 2017

FuseIT SFDC Explorer 3.6.17184.1 and 3.7.17230.1 - Summer '17

The latest v3.6 release of the FuseIT SFDC Explorer is out and contains a couple of new features around Apex Debug logs.

Aaaaaannnd... because I got distracted before publishing this post, 3.7 is also out. So it's now a twofer release post.

Improvements to log parsing

Carrying on from the previous release, I've made a number of modifications to the debug log viewer.

All the calculations for durations are now done off the underlying nanosecond ticks. This helps prevent compounding errors due to the limited precision of C# TimeSpans.

The improved duration calculation is particularly important for the new grouping functionality. Take the two examples below. With the standard log you can get 200 CODE_UNIT_STARTED events for the validation of each record in a transaction (plus the corresponding 200 CODE_UNIT_FINISHED events). With the new "Group Recurring Events" button in the toolbar, any repeating events of the same type will be collapsed into a single node with a summary.

Before

After

While still on the topic of the Apex Log Tree View, DML operations now have basic icons to indicate which variant of Insert, Update, Upsert, or Delete operation they are. Trigger, Validation, and Workflow events have consistent color coding between the Tree View and the Timeline.

Auto build a Metadata package

There is a new option on the Metadata Deploy tab under "Deploy Selected Files" called "Compare File Hashes". As per the name, it will do a CRC32 diff of the body of Apex classes, triggers, and Visualforce pages between the local file system and the connected Salesforce org. If anything is different it gets added to the current package ready for deployment.
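The compare itself boils down to hashing the local body and the body pulled back from the org and queuing anything that differs. How the tool does it internally isn't documented here, so take the following as a generic sketch of the idea (line ending normalisation left out for brevity):

// Sketch: CRC32 the local file body and the body retrieved from the org.
static uint Crc32(byte[] data)
{
    uint crc = 0xFFFFFFFF;
    foreach (byte b in data)
    {
        crc ^= b;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB88320 : crc >> 1;
    }
    return ~crc;
}

static bool BodyDiffers(string localPath, string orgBody)
{
    byte[] local = System.IO.File.ReadAllBytes(localPath);
    byte[] remote = System.Text.Encoding.UTF8.GetBytes(orgBody);
    return Crc32(local) != Crc32(remote);
}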

If it turns out the local file is the one that should be updated there is a context menu option to update the local file.

Colored logging

Both the Metadata Deployment results and test method error results now include some basic color.

Salesforce DX CLI integration

If you have the Salesforce DX CLI installed (and in the path) you can use it to manage org logins. It might take a moment after pressing "Refresh Orgs" for the list to come back. Once it does, just select an org and press login.

Other changes 3.6

  • Log Tree view. Color code Trigger, Validation, and Workflow events. Open debug log from File. Toggle Timeline and Treeview display.
  • Highlight FLOW_ELEMENT_ERROR messages in the log.
  • Handle log highlighting for Fine, Finer, and Finest logging levels. Provide links for ids for the active org.
  • Optionally display scale on the log. Highlight VALIDATION_FAIL events.
  • Highlight Exception and skipped/maxlog events in the Log Timeline.
  • When parsing a log, identify the log levels applied (if present) in the footer.
  • Attempt to determine if an ApexClass contains tests. Use this to drive direct running tests cases from the class.
  • Improve performance of building the EntityTree from sObject describe results. Especially for orgs with a large number of custom objects and settings.

Other changes 3.7

  • Add optional allowExistingSObjectsWithoutId="true" to the binding configuration element to allow sObjects to be created with a null Id. Typically this isn't allowed as the ID is used to control insert/update operations and to identify relationship types. This setting can be used for more basic SOQL queries where the results won't be subsequently used for DML.
  • Highlight CALLOUT_REQUESTs in the timeline as they often take up large chunks of time.
  • Fix for updating available metadata types with Session change.
  • Improve SessionId validation pre login
  • When exporting results, give the user the choice of folder to export to.
  • Handle SOQL Aggregate Results that return only a single value
  • Improve IDEWorkspace integration. Allow for switching of workspaces.
  • Wsdl2Apex: Handle case where the Schema TargetNamespace is missing
  • Perform Metadata Deploy Hash on Apex Pages. Option to refresh local metadata from Server.

Tuesday, June 13, 2017

Importing the Salesforce Summer 17 (v40.0) Tooling and Metadata APIs to .NET

The Salesforce Summer '17 release includes the return of an old friend in both the Tooling and Metadata APIs. After refreshing to the latest WSDLs and making sure the project builds, it starts producing errors like the following when calling the API:

error CS0029: Cannot implicitly convert type 'XYZ.SalesforceConnector.SalesforceTooling.NavigationMenuItem' to 'XYZ.SalesforceConnector.SalesforceTooling.NavigationMenuItem[]'

I say it's an old friend because I've seen this before with the Partner API when it transitioned to Winter '15 (v32.0) for the QuickActionLayoutItem. It also cropped up with Winter '14 (v29.0). Thankfully the same fix that was applied in those cases will also work here.

The problem is the navigationLinkSet property, which gets generated with a jagged array return type (NavigationMenuItem[][]). The type for the corresponding XmlArrayItemAttribute is incorrect.

Before Fix

/// 
[System.Xml.Serialization.XmlArrayItemAttribute("navigationMenuItem", typeof(NavigationMenuItem), IsNullable=false)]
public NavigationMenuItem[][] navigationLinkSet {
    get {
        return this.navigationLinkSetField;
    }
    set {
        this.navigationLinkSetField = value;
    }
}

After Fix

/// 
[System.Xml.Serialization.XmlArrayItemAttribute("navigationMenuItem", typeof(NavigationMenuItem[]), IsNullable=false)]
public NavigationMenuItem[][] navigationLinkSet {
    get {
        return this.navigationLinkSetField;
    }
    set {
        this.navigationLinkSetField = value;
    }
}

The same fix needs to be applied to both the Tooling API and the Metadata API.

Monday, June 12, 2017

Controlling analog PWM servos from a Raspberry Pi

As part of a project I'm working on I needed a way to generate motion from my Raspberry Pi. It needed to be precise, variable motion rather than the all-or-nothing motion of a straight DC motor. I opted to go with PWM (Pulse Width Modulation) based servos as I'm familiar with them and have some on hand for RC planes. Stepper motors or a DC motor with a rotary encoder were other options, but I have neither in my spare parts bin.

So PWM servos it is. Now, how to control them from the Pi? Getting the Pi to meet the hard real-time timing requirements of a servo directly can be difficult, so I opted instead to offload the task to an Adafruit 16-Channel PWM / Servo HAT.

Some assembly required

Software

Because I'm a .NET person I used Windows IoT to program the Pi. Adafruit even provides the AdafruitClassLibrary for Windows IoT. The rough code, less exception handling, follows.

// Connect to the PCA9685 PWM controller on the HAT at its default I2C address (0x40).
var i2cSettings = new Windows.Devices.I2c.I2cConnectionSettings(0x40);

var gpio = Windows.Devices.Gpio.GpioController.GetDefault();

Pca9685 hat = new Pca9685(0x40);
await hat.InitPCA9685Async();

// Analog servos expect a pulse roughly every 20 ms, so run the PWM at 50 Hz.
hat.SetPWMFrequency(50);

// Pulse lengths are in PCA9685 counts (out of 4096 per cycle), not microseconds.
ushort servoMinPulseLength = 150;
ushort servoMaxPulseLength = 600;

hat.SetAllPWM(0, servoMinPulseLength);

// Sweep all channels between the two extremes every half second.
while (true)
{
    hat.SetAllPWM(0, servoMinPulseLength);

    await System.Threading.Tasks.Task.Delay(500);

    hat.SetAllPWM(0, servoMaxPulseLength);

    await System.Threading.Tasks.Task.Delay(500);
}

The exception handling turned out to be one of the harder parts of getting this to work. I kept getting debug messages like Value cannot be null.:I2C Write Exception: {0}. Not very helpful. Once I found the corresponding GitHub project and worked directly from the source, the problem was easier to isolate. They are catching the majority of the exceptions in their code, dumping them out to the debug log, and then carrying on. This was causing all sorts of problems, with the first occurring in the `InitPCA9685Async` call. The ultimate resolution was to bump the Microsoft.NETCore.UniversalWindowsPlatform NuGet package to at least 5.2.2.

After that I just needed to set sensible values for the PWM frequency and the minimum and maximum pulse lengths. Most of the demo code I've seen has a default frequency of 1000. That seemed way too high, and the servos became a jittery mess. The general consensus for an analog RC servo is somewhere in the 30Hz to 60Hz range. There is some good reading available on the difference between PPM and PWM.
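For anyone following along at home, mapping a steering position onto those pulse lengths is just a linear interpolation between the minimum and maximum counts. A sketch, with values that would still need calibrating against the actual servo end points:

// Map a steering position from -1.0 (full left) to +1.0 (full right) onto
// PCA9685 pulse counts. The default min/max counts here are illustrative.
static ushort SteeringToPulse(double position, ushort minPulse = 150, ushort maxPulse = 600)
{
    position = System.Math.Max(-1.0, System.Math.Min(1.0, position)); // clamp to valid range
    double midPulse = (minPulse + maxPulse) / 2.0;                    // centre / straight ahead
    double halfRange = (maxPulse - minPulse) / 2.0;
    return (ushort)System.Math.Round(midPulse + position * halfRange);
}

// e.g. hat.SetAllPWM(0, SteeringToPulse(0.25)); // gentle right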

The Result

All going well I'll have more posts soon to bring this together with the other parts I'm working on.