Friday, November 17, 2017

Dreamforce 2017 Round-up / Summary

My carefully curated session agenda. It didn't last long.

In what is becoming a tradition for me, I went into Dreamforce 2017 with a wildly optimistic 72 breakouts, theater sessions, and keynotes bookmarked to attend. Like Codey desperately holding onto the clock on the Agenda page, I knew that was never going to work.

Instead I had a few key sessions and keynotes that I knew I wanted to attend. Beyond that, I just tried to go with the flow of the conference and to focus on doing things that I knew I could only do there in person.

It was certainly several whirlwind days that I'm only just now starting to piece back together from a trail of photos and tweets on my phone. Even assembling some of it into a single blog post is a somewhat daunting task.

Table of Contents

  1. Preconference
  2. Self Driving Astro in the IoT Grove
  3. Day One
    • Mini Hacks
    • Main Keynote
  4. Day Two
    • Mass Actions, Composite, and Bulk APIs
    • Salesforce Platform Limits
    • Developer Keynote
    • Meet the Apex Developers
    • Dreamfest
  5. Day Three
    • Advanced Logging Patterns With Platform Events
    • Build Custom Setup Apps & Config Tools With the All-New Apex Metadata API
    • The Future of Salesforce DX
    • True to the Core
    • Meet the Developers
  6. Day Four
    • Lightning Round Table
    • Salesforce Tower
  7. Walking Peanut Butter Distributor
  8. Random Photos

Preconference

According to my phone, the first on-the-ground Dreamforce thing I did after picking up my badge was head into Moscone West. Pretty much business as usual, except it was the Sunday before the conference opened.

I was there to set up my Self Driving Astro. More on that below.

It was an interesting glimpse into what goes into setting up such a massive conference. There were a lot of people working hard and putting in some big hours to get everything set up and running in time for the opening the following morning.

On Sunday afternoon I also participated in the MVP volunteering event.

Self Driving Astro in the IoT Grove

Way back in April this year, after giving my automated Einstein-powered cat sprinkler talk to the Sydney Developer User Group, I'd latched onto the idea of using the Einstein Vision Services to make a rudimentary self-driving car. It was an itch that needed to be scratched. So I worked on it on and off since then and submitted it as a possible talk for Dreamforce.

Sadly it didn't get accepted as a talk this year. I suspect it was a little too far off the actual promoted use cases for the Einstein services. So while it was fun in concept, it could have led to some confusion with customers about what the Salesforce offerings would be used for. Contrast it with something like the Identify Protein Structures Using Einstein talk, which sounds interesting in its own right and is on my list of sessions to catch up on.

It did leave me with a partially finished project that I'd put all that effort and resources into. So rather than just mothball it, I decided to carry on and make it a blog post instead. I'm glad I did, as it has been an excellent excuse to dive deep into the Metamind APIs and contribute to some GitHub projects along the way. It also formed part of an extended Einstein Vision talk that I gave to the Auckland Developer User Group.

I published the blog post about it on the 20th of October. That was just over two weeks out from Dreamforce to avoid it getting lost in the noise of the conference. Then the next day Reid Carlberg asked if I was bringing it to Dreamforce and if it could be set up on display. I jumped at the opportunity and set about reworking it from something that had only ever been run for 10 to 15 minutes at a time to something that would have to run continuously for 9+ hours a day. I had 11 days before I needed to be on my flight for Dreamforce.

Needless to say, it was a busy run up to departing for Dreamforce to:

  • Print a new slightly larger seat,
  • Design and print a new steering column,
  • Upgrade to a larger metal gear servo,
  • Print a new steering wheel for the new servo,
  • Print a modified case for the Raspberry Pi to support the HAT and relay,
  • Design and print a case for the Larson Scanner board to prevent shorts,
  • Reinforce all the wiring to survive travel in checked luggage.

Somehow it all came together in time along with a number of software updates to minimize the API counts and make the error handling more resilient. Many thanks to:

  • Reid Carlberg for the opportunity to set it up in the IoT Grove.
  • Michael Machado for giving me a temporary bump on the Metamind API limits.
  • Josh Birk for helping to get it setup and running. Plus keeping an eye on it during the conference along with @codefriar and others in the IoT Grove.

Day One

The first day went past in a bit of a blur. I was in and around the Mini Hacks area volunteering during the morning and then over at the main keynote in the afternoon. In between I did a bit of exploring around Moscone West.

There were a number of areas touched on in the main keynote. In particular:

  • the customization of myLightning to change the color themes and visual designs of the UX.
  • mySalesforce for creating mobile apps in Lightning.
  • myTrailhead for Trailhead customized to other companies.

Also confetti - lots and lots of confetti.

Day Two

The day started off with finding confetti from the keynote just about everywhere in my bag.

Mass Actions, Composite, and Bulk APIs

I managed to catch my first full session today by @thunderberry and @abhinavchadda. While both the Bulk API v2 improvements and Open API support sound really useful, what intrigued me most was the proposed Mass Actions API. This could radically speed up a number of operations by defining a transform over a query. The exact details of how it will be split into transactions and how limits will be addressed are something to keep an eye on.

Salesforce Platform Limits

This session was again with @thunderberry. The proposal here is to move a number of existing limits to become "soft limits". Then, depending on the pod health, you would be able to burst past the standard limit for short periods. There would still be an upper maximum burst limit. Consistently running past the soft limit will get you on a naughty list, where Salesforce will encourage you to either upgrade or pin you to the existing limits. If the pod is particularly busy you would be more limited in what you could burst to.

This all sounds good in theory, but it will make debugging more difficult. Something that works within the soft limits one moment could fail the next due to the org load. This does seem a bit like the current Apex CPU limit.

A quick tip from this session:

Don't poll the existing limits API too frequently. Calls to it still count against your API request limit.
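
As a rough illustration of that tip: rather than hitting the REST limits resource on every check, cache the response for a short window. A minimal Python sketch, assuming an access token and instance URL obtained elsewhere:

import time
import requests

# Assumed to be obtained elsewhere (OAuth flow, sfdx force:org:display, etc.)
INSTANCE_URL = "https://na1.salesforce.com"   # hypothetical instance
ACCESS_TOKEN = "00D...!AQ..."                 # hypothetical session id

_cache = {"fetched_at": 0, "limits": None}
CACHE_SECONDS = 300  # only re-poll every 5 minutes

def get_limits():
    """Return the org limits, re-querying at most once per CACHE_SECONDS.
    Each call to the limits resource itself counts as an API request."""
    now = time.time()
    if _cache["limits"] is None or now - _cache["fetched_at"] > CACHE_SECONDS:
        resp = requests.get(
            INSTANCE_URL + "/services/data/v41.0/limits",
            headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        )
        resp.raise_for_status()
        _cache["limits"] = resp.json()
        _cache["fetched_at"] = now
    return _cache["limits"]

remaining = get_limits()["DailyApiRequests"]["Remaining"]
print("Daily API requests remaining:", remaining)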

Developer Keynote

The first thing that struck me about the Developer Keynote is what a fashionable bunch the developers are.

A "Suspicious Looking Plant" and some Aloe vera.

If time permits I'll come back to this section to recap some of the content.

Meet the Apex Developers

I asked a question in this session about ENTERING_MANAGED_PKG appearing in debug logs and how it can frequently consume 80% plus of a 2MB log in a dev packaging org. They had been forewarned that this question was coming and thankfully the response was they are onto it and will be fixing the excess logging in the future. 🎉

Dreamfest

AT&T Park made an excellent venue in terms of proximity to most of the hotels, visibility of the stage, and facilities for dealing with that many attendees.

Day Three

Advanced Logging Patterns With Platform Events

Using Platform Events to push logging events out and then monitoring them using a Utility Bar app.
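
I haven't recapped the session content here, but the basic shape of the pattern is easy to sketch: publish a custom platform event per log entry and let subscribers (such as the Utility Bar app) react to them. The sketch below publishes a hypothetical Log_Event__e platform event via the standard REST sObject endpoint; the event and field names are placeholders of mine, not from the session.

import requests

INSTANCE_URL = "https://na1.salesforce.com"   # hypothetical instance
ACCESS_TOKEN = "00D...!AQ..."                 # hypothetical session id

def publish_log_event(message, level="INFO"):
    """Publish a single logging platform event.
    Log_Event__e and its fields are assumed/hypothetical names."""
    resp = requests.post(
        INSTANCE_URL + "/services/data/v41.0/sobjects/Log_Event__e",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        json={"Message__c": message, "Level__c": level},
    )
    resp.raise_for_status()
    return resp.json()  # success flag and id of the published event

publish_log_event("Callout to payment gateway failed", level="ERROR")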

Build Custom Setup Apps & Config Tools With the All-New Apex Metadata API

The Future of Salesforce DX

This session is well worth a repeat watch. Very dense on upcoming changes with DX. Mostly forward looking, but still interesting.

Change Data Capture: Data Synchronization in the Cloud

True to the Core

Meet the Developers

Steve Tamm rallying the troops prior to the Meet the Developers session

I asked a question in this session about raising support cases for GACKs and those without Premier support being turned away to the developer forums. If you have specific examples of this happening that you want to share, please see Examples of being bounced by Salesforce Support to forums when encountering a GACK.

Day Four

Today was a bit interrupted from attending sessions, but in a good way!

Salesforce Tower - Ending Dreamforce on a high

Walking Peanut Butter Distributor

This year I brought 3kg/6.6lb of Pic's peanut butter with me to give out to people. It's made right here in Nelson, New Zealand and I figured I'd be the only person giving out peanut butter at Dreamforce. At the very least it made for an interesting conversation starter. Not too many people thought I was the nutty one in the transaction (at least out loud).
And to those with peanut allergies - no hard feelings, can we still be friends? I wasn't really trying to kill you.

Sessions I mean to catch up on

Random Photos

Friday, October 20, 2017

Teaching Salesforce to drive with Einstein Vision Services

tl;dr. Skip to the results.

For your consideration, the Mad Catter

There's something in that tweet that speaks to me. Maybe it's time to pull a Tony Stark and take a new direction with my creations. Moving away from the world of weapons (well, questionable development) and more towards something for the public good. Something topical.

Tesla, Apple, Uber, Google, Toyota, BMW, Ford, these are just a few of the companies working on creating self-driving cars. Maybe I should dabble with that too.

My first challenge was that my R&D budget doesn't stretch quite as far as the aforementioned companies. While they all have market values measured in the billions my resources are a little more modest. So the kiwi Number Eight Wire Mentality will come into play when LIDAR isn't an option.

Steve Rogers: Big man in a suit of armor. Take that off, what are you?
Tony Stark: Genius billionaire playboy philanthropist.
* Actual project did not require hammering an anvil... yet

It's another way I fancy myself a bit like Tony Stark. Less like the genius, billionaire, playboy Stark and more like the trapped-in-a-cave-with-only-the-resources-at-hand Stark. I'll make do with whatever I've got on hand: some servos, a Raspberry Pi, an old webcam, and a massive amount of cloud computing resources.

That last point is important: while Stark has J.A.R.V.I.S., I've got Einstein Platform Services.

I don't have a background in AI as a data scientist. My AI exposure has been more from copious amounts of science fiction and that one trimester back in university where I did a single paper. I'm assuming the depiction of AI in the media is about as accurate as the depiction of computing in general. So while it's fun to take inspiration from an 80's documentary on self-driving cars, the most useful learning from something like Knight Rider is that Larson scanners make AIs look cool (more on that later).

Elephant in the room

This is probably a good point to pause for a minute and note that, yes, I know a solely cloud-based AI isn't the best option, or even a very good option, for an autonomous vehicle. An image classification AI would only be part of a larger collection of sensor input used to make a self driving car. Connectivity and latency issues will pretty much rule out any full scale testing or moving at significant speed. That, and I don't think my vehicle insurance would cover crashing the family car while it was trying to navigate a tricky intersection by itself. But I want to see how far I can push it anyway with a purely optical solution using a single camera.

I know that a number of the components I'm using on the prototype are less than ideal, e.g. using continuous rotation servos rather than stepper motors, which would have given a more precise measure of the distance traveled. See my earlier point above that I'm mostly working with what I've got on hand. Feel free to contact me if you have a spare sum of cash burning a hole in your pocket that you want to invest in something like this.

Remember, this is just supposed to be a Mark 1 prototype. If it starts to look promising it can be refined in the future.

The Overly Simplistic Plan

The general idea for a minimalistic vehicle is:

  1. A single forward facing web cam to capture the current position on the road and what is immediately ahead.
  2. A Raspberry Pi to capture the image and process it.
  3. Einstein Predictive Vision Services to identify what is in the current image and the implied probable next course of action to take.
  4. Some basic servo based motor functions to perform the required movements.
  5. Repeat from step (2)

I say overly simplistic here in that it doesn't really localize the position of the vehicle in the environment or provide it with an accurate course to follow. Instead it just makes a best guess prediction based on what is currently seen immediately ahead. There is no feedback mechanism based on where the car was previously or how well it is following the required trajectory.
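
In code terms the plan is little more than a loop. A skeletal Python sketch, with placeholder functions standing in for the capture, prediction, and motor routines covered below:

import time

def capture_image():
    # placeholder for grabbing a frame from the forward facing webcam
    return b""

def predict_action(image):
    # placeholder for the Einstein Vision prediction described below
    return "forward"   # one of "forward", "left", "right", "stop"

def drive(action):
    # placeholder for the servo movements that correspond to each label
    print("driving:", action)

while True:
    frame = capture_image()
    action = predict_action(frame)
    drive(action)
    time.sleep(0.5)   # latency dominates anyway, so there is no point rushing the loop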

Hardware - Some assembly required

The model for the Mk2 Prototype

In an ideal world I'd have repurposed an RC toy car that already had full steering and drive capabilities, or even just 3D printed a basic car. This would have given realistic steering behavior via the front wheels. Instead I started out with skid steering and a dolly wheel on a Lego frame. This was quick and easy to put together and to drive with some servos from a previous project.

While simple is good for v1, it was still not without its challenges.

The first problem was that I couldn't just go from zero to full power on the drive wheel servos. It tended to either damage the Lego gears or send the whole vehicle rolling over backwards. I needed to accelerate smoothly to get predictable movement.
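
The fix was a soft-start ramp rather than jumping straight to full power. A minimal sketch using the RPi.GPIO PWM API (the pin number and duty cycle values are assumptions; for a typical continuous rotation servo at 50Hz roughly 7.5% duty is stop, with values either side setting direction and speed):

import time
import RPi.GPIO as GPIO

DRIVE_PIN = 18        # assumed GPIO pin carrying the drive servo signal
STOP_DC = 7.5         # approximate neutral duty cycle for the servo
FORWARD_DC = 8.5      # approximate full-speed-forward duty cycle

GPIO.setmode(GPIO.BCM)
GPIO.setup(DRIVE_PIN, GPIO.OUT)
pwm = GPIO.PWM(DRIVE_PIN, 50)   # 50Hz servo signal
pwm.start(STOP_DC)

current_dc = STOP_DC

def ramp_to(target_dc, step=0.1, delay=0.05):
    """Step the duty cycle gradually towards the target rather than jumping
    straight to full power, which was stripping gears and flipping the car."""
    global current_dc
    while abs(current_dc - target_dc) > step:
        current_dc += step if target_dc > current_dc else -step
        pwm.ChangeDutyCycle(current_dc)
        time.sleep(delay)
    current_dc = target_dc
    pwm.ChangeDutyCycle(current_dc)

ramp_to(FORWARD_DC)   # accelerate smoothly
time.sleep(1.0)       # drive forward briefly
ramp_to(STOP_DC)      # decelerate back to a stop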

Custom LiPo based high drain power supply

Another challenge was providing a portable power source. Originally I powered both the Pi and the servos off a fairly stock-standard portable cellphone charger. However, when I powered up the servos to move forward, the voltage sag from driving the motors was enough to reset the Raspberry Pi. I was able to work around this by using one of the LiPo (lithium polymer) batteries from my quadcopters with a high C rating (they can deliver a lot of power quickly if required) and a pair of step-down voltage converters.

Software

The standard software control loop is:

  1. Capture webcam image
  2. Send image off to the Einstein Image service for the steering Model.
  3. Based on the result with the highest probability, activate servos to move the vehicle
  4. Repeat

Capturing the image is fairly straightforward.

Since latency was already going to make fast movement impractical, I opted to relay the calls via Salesforce Apex services. This made the authentication easier, as I only needed to establish and maintain a valid session with the Salesforce instance to then access the Predictive Vision API via Apex. It also meant I didn't need to reimplement the Metamind API and could instead use René Winkelmeyer's salesforce-einstein-platform-apex GitHub project, which already included workarounds for the multipart requests required by the API.
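
As a rough sketch of that capture-and-relay step (the /drive/predict Apex REST path and the response shape are assumptions for illustration; the real Apex side wraps René's library):

import base64
import cv2
import requests

INSTANCE_URL = "https://na1.salesforce.com"   # hypothetical instance
SESSION_ID = "00D...!AQ..."                   # maintained Salesforce session id

camera = cv2.VideoCapture(0)   # the single forward facing webcam

def capture_frame():
    ok, frame = camera.read()
    if not ok:
        raise RuntimeError("Webcam capture failed")
    ok, jpeg = cv2.imencode(".jpg", frame)
    return base64.b64encode(jpeg.tobytes()).decode("ascii")

def predict(image_b64):
    """Relay the image to an Apex REST service that calls Einstein Vision.
    The /drive/predict endpoint and response shape are assumed for this sketch."""
    resp = requests.post(
        INSTANCE_URL + "/services/apexrest/drive/predict",
        headers={"Authorization": "Bearer " + SESSION_ID},
        json={"imageBase64": image_b64},
    )
    resp.raise_for_status()
    # e.g. [{"label": "forward", "probability": 0.92}, ...]
    return resp.json()

predictions = predict(capture_frame())
best = max(predictions, key=lambda p: p["probability"])
print(best["label"], best["probability"])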

Training

This is something that has evolved greatly since my last project using Einstein Vision Services on the automated sprinkler. Previously you created a dataset from a pre-collected set of examples and then created your model from that to make the predictions with. If it needed refining you repeated the entire process. Now you can add feedback directly to a dataset and retrain the model in-place with v2 of the Metamind API. No more adding all the same examples again.
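
To give a feel for it, something like the following would be the direct-to-API version of that feedback loop (in my case these calls were actually relayed through the Apex library above). Treat the endpoint and field names as assumptions from memory of the v2 API rather than gospel: one call files a misclassified image back against the dataset with the correct label, the other kicks off an in-place retrain of the model.

import requests

EINSTEIN_TOKEN = "eyJ..."          # JWT bearer token, generated elsewhere
MODEL_ID = "YOUR_MODEL_ID"         # hypothetical model id
API = "https://api.einstein.ai/v2/vision"

def add_feedback(image_path, expected_label):
    """Attach a misprediction back onto the dataset with the correct label.
    Field names here are assumptions based on the v2 feedback endpoint."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            API + "/feedback",
            headers={"Authorization": "Bearer " + EINSTEIN_TOKEN},
            data={"modelId": MODEL_ID, "expectedLabel": expected_label},
            files={"data": f},
        )
    resp.raise_for_status()
    return resp.json()

def retrain(with_feedback=True):
    """Retrain the existing model in place, including the feedback examples."""
    resp = requests.post(
        API + "/retrain",
        headers={"Authorization": "Bearer " + EINSTEIN_TOKEN},
        data={"modelId": MODEL_ID,
              "trainParams": '{"withFeedback": true}' if with_feedback else "{}"},
    )
    resp.raise_for_status()
    return resp.json()

add_feedback("misjudged_corner.jpg", "left")
retrain()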

I found it really useful to have four operating modes for the control software to facilitate the different training scenarios.

The first mode would kick in if no DataSet could be found. In this case the car would wait for input at each step. Using a wireless keyboard I could indicate the correct course of action to take. The image visible at the time would be captured and filed away with the required label. This made creating the initial DataSet much easier.

The second mode was used if the DataSet existed but didn't have a Model created yet. In this case the input I gave could be used to directly create examples. This mode wasn't so useful and I'd tend to skip it.

The third option applied if the Model could be found. In this case I could still indicate the correct course of action to take. If the model returned the expected prediction nothing further needed to occur. However, if the highest prediction differed from the expected result I could directly add the required feedback into the model.

The final, fourth mode was to use the model to make predictions and act solely on the highest prediction returned.
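
Pulled together, the mode selection at startup boils down to a couple of existence checks. A trivial sketch (the mode names are mine, not from the actual control software):

def choose_mode(dataset_exists, model_exists, supervised=True):
    """Pick the operating mode based on what already exists server-side and
    whether a human is at the keyboard to confirm predictions."""
    if not dataset_exists:
        return "build_dataset"   # mode 1: capture and label images to seed a dataset
    if not model_exists:
        return "add_examples"    # mode 2: add labelled examples directly (rarely used)
    if supervised:
        return "feedback"        # mode 3: correct the model whenever it gets it wrong
    return "autonomous"          # mode 4: act solely on the highest prediction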

Control theory - PID control

The earlier statement about the software action "Based on the result, activate servos to move the vehicle" is a gross simplification of what is required to have a vehicle follow a desired trajectory. In the real world you don't merely turn your vehicle full lock left or right to follow the desired course (what is referred to as bang-bang steering). Instead you make adjustments proportional to how far off the desired trajectory you are, how rapidly you are approaching the desired trajectory, and finally, if there are any environmental factors that are affecting the steering of the vehicle.

This should be familiar to anyone who makes and flies quadcopters (another interest of mine). It's referred to as PID (proportional–integral–derivative) control and provides the automated mechanism to keep the vehicle/quadcopter at the desired state where the output needs to adapt to meet the desired input. The following video does an excellent job of explaining how it works.
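
For reference, the controller itself is only a handful of lines. A textbook sketch (the gains are illustrative and untuned):

class PID:
    """Classic PID controller: output = Kp*error + Ki*integral + Kd*derivative."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# error = how far off the desired trajectory we are (e.g. lateral offset from the lane)
steering = PID(kp=0.8, ki=0.05, kd=0.2)
correction = steering.update(error=0.3, dt=0.5)   # a proportional adjustment, not full lock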

With my v1 prototype the control loop speed is so slow and the positioning so basic that I'm practically going with bang-bang style steering. It is less than ideal, as most scenarios require fine motor control to accurately follow the curve of the road ahead. Definitely something to look at improving in v2.

The prototype results

Training example

This video shows the feedback training process for an existing model. The initial model was created off a fairly small number of examples in the dataset. I then refined it with feedback whenever the top prediction didn't match what I indicated it should be.

I've added some basic voice prompts to indicate what is occurring as it mostly runs headless during normal operation.

Self steering example

In this video I'm using an entirely different dataset from the training example above. Getting single laps out of this setup was a bit of a challenge. The real world is far less forgiving than software when something goes wrong. Through trial and error I found it was really important to get the camera angle right to give the best prediction for the current position. It was also important to adjust the amount of movement per action. Trying to make large corrective movements, particularly around corners, would often result in getting irrecoverably off course. This comes back to the "bang-bang" style steering mentioned above.

I've edited out some of the pauses to make it more watchable. So don't consider this a real time representation of how fast it actually goes.

This process was also not without its share of failures. The following video shows a more realistic speed for the device and what happens when it misjudges the cornering.

Conclusions

Let's start with the Million Dollar Question:

Can I use this today to convert my car into an AI powered automated driving machine and then live a life of leisure as it plays taxi for all the trips the kids need to make?

Umm, no. Even if you're happy with it only being around 50% confident that it should go in a straight line rather than swerve off to the right on occasion, there are still a number of things holding it back.

For example, while it could be extended to recognize a stop sign, there is currently no way to judge how far away that stop sign is.

That's not even taking into account other things that real roads have, like traffic, pedestrians, roadworks, ... At least I'm still some way off from needing to worry about things like the trolley problem.

Future Improvements

A couple of days before this post came out, Salesforce released the Einstein Object Detection API. It's very similar to the image recognition API I used above, except in addition to the identified label it will also give you a bounding box indicating where in the image it was detected. With something like this I could then guess the relative position of the object in question to the current camera direction, and maybe even the distance based on a known size and how large the bounding box comes back as. Definitely something to investigate for the Mk2.
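
For the distance guess, the usual pinhole camera approximation would probably do: with a known real-world height for the object and the camera's focal length in pixels, the distance falls out of the ratio against the bounding box height. A rough sketch (numbers purely illustrative):

def estimate_distance(real_height_m, bbox_height_px, focal_length_px):
    """Pinhole camera approximation: distance = f * H / h.
    real_height_m    - known physical height of the detected object (metres)
    bbox_height_px   - height of the returned bounding box (pixels)
    focal_length_px  - camera focal length expressed in pixels (calibrated once)
    """
    return focal_length_px * real_height_m / bbox_height_px

# e.g. a 0.75m tall toy stop sign, 120px high in frame, 700px focal length
print(estimate_distance(0.75, 120, 700))   # roughly 4.4 metres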

See also

Gallery

Putting together a dedicated Pi Screen
Astro sized 3D printed racing seat
Original clunky power setup. Result of clunky power setup
Creating the blue Larson scanner
Red Astro Larson Scanner
Training setup for driving from Google Street View
Basic servo setup
Lipo battery loaded onto the frame with a mess of wiring

Saturday, September 30, 2017

Dreamforce 2017 Session picks

Here are some of my current picks for Dreamforce 2017 sessions. I'm aiming for a mix of developer-related topics in areas I want to learn more about, plus anything that sounds informative. There are other sessions that I think are interesting as well, but I don't know how to access my bookmarked sessions.

Keynotes

Meet The *'s

APIs

Salesforce DX

IDE

Peeking under the hood

Platform Events

Platform

It needs two roadmap sessions? That's hardcore!

Einstein / AI

Security

Fun

Lightning

See also

Friday, September 22, 2017

Teaching an Old Log Parser New Tricks - FIT sfdx CLI plugin

I've currently got two problems with the prototype FitDX Apex debug log parser I released a couple of weeks ago.

  1. It comes from the FuseIT SFDC Explorer source and is excessively large for what it does. It's got a lot of baggage for features it doesn't currently expose.
  2. To be a native SFDX CLI plugin I need to be working from node.js.

Enter Edge.js and its quote from "What problems does Edge.js solve?"

Ah, whatever problem you have. If you have this problem, this solves it.

--Scott Hanselman (@shanselman)

As promised, it does what it says on the box and helps solve both problems by connecting the .NET world with node.js - in process.

In my case I've got a .NET library that has many years of work put into it to parse Apex debug logs from a raw string. It's also still actively developed and utilized in the FuseIT SFDC Explorer UI. Between that and my lack of experience with node.js, I wasn't in a hurry to rewrite it all so it could be used as a native SFDX CLI plugin. While I still needed to reduce the .NET DLL down to just the required code for log parsing, I didn't need to rewrite it all in JavaScript. Bonus!

This seems like a good place for a quick acknowledgement to @Amorelandra for pointing me in the right direction with node.js. Thanks again!

With the edge module, the node.js code becomes the plumbing to pass input from the Heroku-based plugin commands to my happy place (the .NET code). And it is all transparent to the user (assuming the OS-specific edge.js requirements are met). A big bonus of being a plugin is that it gives me a prebuilt framework for gathering all the inputs in a consistent way.

I used Wade Wegner's SFDX plugin as a boilerplate for integrating with the CLI, chopped most of it out, and kept the command to pull the latest debug log from an org. That debug log is then fed directly into the .NET library to use the existing FuseIT parser.

Enough patting myself on the back for getting this all working. How does it work in practice?

Summary

Like with fitdx, this will give you a count for each event type that appears in the log.

>sfdx fit:apex:log:latest -u <targetusername> --summary
41.0 : 41.0 APEX_CODE,DEBUG;APEX_PROFILING,NONE;CALLOUT,NONE;DB,INFO;SYSTEM,INFO;VALIDATION,INFO;VISUALFORCE,NONE;WAVE,INFO;WORKFLOW,INFO
CODE_UNIT_FINISHED : 2
CODE_UNIT_STARTED : 11
CUMULATIVE_LIMIT_USAGE : 18
CUMULATIVE_LIMIT_USAGE_END : 18
DML_BEGIN : 22
DML_END : 22
ENTERING_MANAGED_PKG : 768
LIMIT_USAGE_FOR_NS : 54
SOQL_EXECUTE_BEGIN : 25
SOQL_EXECUTE_END : 25
USER_DEBUG : 2
USER_INFO : 1

768 ENTERING_MANAGED_PKG events! Story of my life.

Debug only

>sfdx fit:apex:log:latest -u  --debugOnly
28.0 APEX_CODE,DEBUG;APEX_PROFILING,INFO;CALLOUT,INFO;DB,INFO;SYSTEM,DEBUG;VALIDATION,INFO;VISUALFORCE,INFO;WAVE,INFO;WORKFLOW,INFO
18:56:59.180 (1181147014)|USER_DEBUG|[4]|ERROR|UserInfo.getSessionId(): 00D700000000001!ApexTestSession
18:56:59.416 (1416651819)|USER_DEBUG|[4]|ERROR|UserInfo.getSessionId(): 00D700000000001!ApexTestSession

Filtering

>sfdx fit:apex:log:latest -u  --filter USER_INFO,CODE_UNIT_STARTED
28.0 APEX_CODE,DEBUG;APEX_PROFILING,INFO;CALLOUT,INFO;DB,INFO;SYSTEM,DEBUG;VALIDATION,INFO;VISUALFORCE,INFO;WAVE,INFO;WORKFLOW,INFO
18:56:58.400 (4013418)|USER_INFO|[EXTERNAL]|005700000000001|daniel.ballinger@example.com|Pacific Standard Time|GMT-07:00
18:56:58.979 (979316201)|CODE_UNIT_STARTED|[EXTERNAL]|01q70000000HFV6|DFB.TestAccountBeforeUpdate1 on Account trigger event BeforeInsert for [new]
18:56:58.982 (982430091)|CODE_UNIT_STARTED|[EXTERNAL]|01q70000000HFVB|DFB.TestAccountBeforeUpdate2 on Account trigger event BeforeInsert for [new]
18:56:58.985 (985467868)|CODE_UNIT_STARTED|[EXTERNAL]|01q70000000blnO|DFB.AccountTrigger on Account trigger event BeforeInsert for [new]
18:56:59.108 (1108633716)|CODE_UNIT_STARTED|[EXTERNAL]|01q70000000Ti5b|DFB.ToolingTest on Account trigger event BeforeUpdate for [0010g00001YxxQ5]
18:56:59.180 (1180950117)|CODE_UNIT_STARTED|[EXTERNAL]|01q70000000H1vs|DFB.RestrictContactByName on Contact trigger event BeforeInsert for [new]

Format Shifting

>sfdx fit:apex:log:latest -u  --json
>sfdx fit:apex:log:latest -u  --csv
>sfdx fit:apex:log:latest -u  --human

The first two should still be fairly self-explanatory, and the latter is still intended to make reading the log easier for those that run on digesting food rather than electricity.

Installation

Most of the steps are covered in the GitHub repo. I found the most challenging part here to be piggybacking off the existing install of node.js and npm that lives under sfdx.

Your results may vary. It is likely safer to install your own instance of node.js. The standard disclaimer applies.

One more thing

Friday, September 8, 2017

Back to the command line - FitDx Apex debug log parser

Love it or loathe it, there are times in a developer's day when you are going to find yourself at the command line. I'll leave the GUI vs command line debate to others to sort out.

My general position is they both have their strengths and weaknesses, and we as developers should try to use the right tool for the job. Very much like clicks and code with Salesforce: use them together where possible to get the most done with the least amount of effort.

Tooling shouldn't be a zero sum game.

Debug logs

Salesforce debug logs have been a pet project area for me for some time now, e.g. 2011, 2014, 2017. They form such a fundamental part of the development process but don't get a lot of love. Let me paint you a picture of a typical day in development ...

Earlier in the day something broke badly in production and you’ve only just been provided a raw 2MB text version of the debug log to diagnose the problem from. What do you do?

You can’t use any of the tooling that the Developer Console provides for log inspection on a text file. There is no way of opening it. So there will be no stack tree or execution overview to help you process a file approximately the same size as a 300 page ebook. I say approximately here because there are so many factors that could affect the size in bytes for a book. The general point is that a 2MB text file contains a lot of information that you aren't going to read in a few minutes.

The interactive debugger isn't any help for a production fault and you don't even know the steps to reproduce the problem let alone if the same thing would happen in a sandbox.

You could start combing the log file with a variety of text processing tools, or, or...

It's normally about this point that I'd direct you to the FuseIT SFDC Explorer debug log tooling which gives you:

  • (somewhat retro) GUI tooling for grouping and filtering events by category.
  • Highlighting of important events such as fatal errors and failed validations.
  • And, my current favorite, a timeline view to make more sense of execution times measured in nanoseconds.

But I'm not going to cover any of that in this post. Instead we're going to cover something more experimental than the software that's been in perpetual beta for who knows how many years.

Fit DX

The FuseIT DX CLI, or FitDx in usage, is a proof of concept that I could take the debug log parser from the explorer product mentioned above and apply it directly to debug logs at the command line. I'm just putting it out there to see if there is any interest in such a thing.

You can get it right now as part of the zip download for the FuseIT SFDC Explorer.

There are certainly areas for improvement. First and foremost is the executable size. It's swollen up a fair bit from features that it doesn't expose. I'll look at whipping those out in a future release, which should result in a significantly smaller package.

But enough about how big FitDx is. What's more important is how you use it.

Summary Command

If you ask sfdx for a debug log, that's exactly what you'll get. The complete, raw, unabridged debug log dumped to the command line. An experienced command line person would at this stage type out some grep regex wizardry to just show them what they wanted to see. Such is the power of the command line, but it isn't always clear where to start.

I wanted something simpler that would give you a very quick overview of what is in the log.

>sfdx force:apex:log:get -i 07L7000004G0U8kEAF -u DeveloperOrg | fitdx --summary

28.0 : 28.0 APEX_CODE,FINE;APEX_PROFILING,FINEST;CALLOUT,ERROR;DB,FINEST;SYSTEM,FINE;VALIDATION,DEBUG;VISUALFORCE,FINER;WORKFLOW,INFO
CODE_UNIT_FINISHED : 14
CODE_UNIT_STARTED : 14
CONSTRUCTOR_ENTRY : 14
CONSTRUCTOR_EXIT : 14
CUMULATIVE_LIMIT_USAGE : 14
CUMULATIVE_LIMIT_USAGE_END : 14
CUMULATIVE_PROFILING : 4
CUMULATIVE_PROFILING_BEGIN : 1
CUMULATIVE_PROFILING_END : 1
ENTERING_MANAGED_PKG : 184
EXCEPTION_THROWN : 2
EXECUTION_FINISHED : 14
EXECUTION_STARTED : 14
FATAL_ERROR : 4
LIMIT_USAGE_FOR_NS : 14
METHOD_ENTRY : 73
METHOD_EXIT : 73
SYSTEM_CONSTRUCTOR_ENTRY : 33
SYSTEM_CONSTRUCTOR_EXIT : 33
SYSTEM_METHOD_ENTRY : 165
SYSTEM_METHOD_EXIT : 165
TOTAL_EMAIL_RECIPIENTS_QUEUED : 14
USER_INFO : 14

The summary will currently give you a count for the events that are covered in the piped input. In this case I can see that there were 4 FATAL_ERROR events.

Filtering

Now that I know what I'm looking for, I want to see just those events of interest. The --filter option accepts a comma-separated list of event types that you want to see. Everything else gets dropped. There is also the --debugOnly option, which is a preset to filter for USER_DEBUG events only.

>sfdx force:apex:log:get -i 07L7000004G0U8kEAF -u DeveloperOrg | fitdx --filter FATAL_ERROR
28.0 APEX_CODE,FINE;APEX_PROFILING,FINEST;CALLOUT,ERROR;DB,FINEST;SYSTEM,FINE;VALIDATION,DEBUG;VISUALFORCE,FINER;WORKFLOW,INFO
20:25:09.695 (698828931)|FATAL_ERROR|System.AssertException: Assertion Failed: Expected: {"filters":{"footer":{"settings":{"enable":"1","text/plain":"You can haz footers!"}}}}, Actual: {"filters":{"footer":{"settings":{"text/plain":"You can haz footers!","enable":"1"}}}}

Class.DFB.SmtpapiTest.setSetFilters: line 116, column 1
20:25:09.695 (698839161)|FATAL_ERROR|System.AssertException: Assertion Failed: Expected: {"filters":{"footer":{"settings":{"enable":"1","text/plain":"You can haz footers!"}}}}, Actual: {"filters":{"footer":{"settings":{"text/plain":"You can haz footers!","enable":"1"}}}}

Class.DFB.SmtpapiTest.setSetFilters: line 116, column 1
20:25:10.160 (1162726841)|FATAL_ERROR|System.AssertException: Assertion Failed: Expected: {"section":{"set_section_key":"set_section_value","set_section_key_2":"set_section_value_2"}}, Actual: {"section":{"set_section_key_2":"set_section_value_2","set_section_key":"set_section_value"}}

Class.DFB.SmtpapiTest.testAddSection: line 80, column 1
20:25:10.160 (1162743640)|FATAL_ERROR|System.AssertException: Assertion Failed: Expected: {"section":{"set_section_key":"set_section_value","set_section_key_2":"set_section_value_2"}}, Actual: {"section":{"set_section_key_2":"set_section_value_2","set_section_key":"set_section_value"}}

Class.DFB.SmtpapiTest.testAddSection: line 80, column 1

Format Shifting

The final set of alpha features is around changing how the debug log is presented. Options include --json, --csv, and --human. The first two should be fairly self-explanatory as to what the output will be. The --human option was just an idea to use more column-like alignment rather than vertical bars (|) to separate the elements.

In hindsight I should probably allow for multiple options at once so you can specify both the required filters and output format in one command. For the time being you can just pipe the filtered output back in.

>sfdx force:apex:log:get -i 07L7000004G0U8kEAF -u DeveloperOrg | fitdx --filter EXCEPTION_THROWN | fitdx --human
28.0                           APEX_CODE,FINE;APEX_PROFILING,FINEST;CALLOUT,ERROR;DB,FINEST;SYSTEM,FINE;VALIDATION,DEBUG;VISUALFORCE,FINER;WORKFLOW,INFO
20:25:09.695    EXCEPTION_THROWN              [116]System.AssertException: Assertion Failed: Expected: {"filters":{"footer":{"settings":{"enable":"1","text/plain":"You can haz footers!"}}}}, Actual: {"filters":{"footer":{"settings":{"text/plain":"You can haz footers!","enable":"1"}}}}
20:25:10.160    EXCEPTION_THROWN               [80]System.AssertException: Assertion Failed: Expected: {"section":{"set_section_key":"set_section_value","set_section_key_2":"set_section_value_2"}}, Actual: {"section":{"set_section_key_2":"set_section_value_2","set_section_key":"set_section_value"}}

The Forward Looking Statements

This is the part where I promise you the world with all the cool things I could potentially do in the future. Things like:

  • Direct integration as a SFDX plugin via a node.js to .NET library
  • Expose other features from the FuseIT SFDC Explorer at the command line, such as the alternative version of WSDL2Apex.
  • I've got a crazy idea for extra things, like pulling Apex class and trigger definitions directly from a Salesforce StackExchange question or answer and then loading them directly into a newly created burner scratch org. The process could then also be reversed, with the contents of the scratch org converted into the required markdown. This could greatly speed up the process of asking and answering questions on the SFSE.