Wednesday, August 24, 2016

Dreamforce 2016 Session Picks and General Tips

Here are some of my current picks for Dreamforce 2016 sessions. They are mostly development or architecture focused. I'll be refining the list as more information becomes available. As with previous years, it's likely that I won't actually get to all of these and will need to prioritize activities that can only happen at Dreamforce over things that are being recorded.

Artifacts

Something is coming in the packaging/change set/metadata deployment area. The first session in this list is definitely worth a visit. Here's hoping for some sort of source control integrated deployments.

Mocking and Testing

The Winter '17 release notes include the section "Build a Mocking Framework with the Apex Stub API (Pilot)". I'm led to believe that the first of the talks below will have some more details on using the System.StubProvider interface and System.Test.createStub() method, if only because Aaron Slettehaugh from Salesforce is presenting alongside Jesse Altman.
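Going on the release notes alone, the general shape looks something like the following. This is a minimal, untested sketch of the pilot API, assuming a hypothetical DateProvider class to stub (each class would normally live in its own file, and the org needs to be enabled for the pilot):

// Hypothetical class being stubbed. Note the instance method; the pilot
// docs indicate static methods can't be stubbed.
public class DateProvider {
    public Date today() { return Date.today(); }
}

@isTest
public class DateProviderStubTest implements System.StubProvider {
    // StubProvider callback: invoked for each method call on the stubbed object.
    public Object handleMethodCall(Object stubbedObject, String stubbedMethodName,
            Type returnType, List<Type> listOfParamTypes,
            List<String> listOfParamNames, List<Object> listOfArgs) {
        if (stubbedMethodName == 'today') {
            return Date.newInstance(2016, 10, 5); // canned response for testing
        }
        return null;
    }

    @isTest
    static void stubReturnsCannedDate() {
        DateProvider stub = (DateProvider) Test.createStub(DateProvider.class, new DateProviderStubTest());
        System.assertEquals(Date.newInstance(2016, 10, 5), stub.today());
    }
}

The cast from the returned Object back to DateProvider is what lets the test swap the stub in anywhere a real DateProvider is expected.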

Meet The *'s

The Meet the Developers session on the last day of the conference is always an interesting one and might not be recorded. This year I see two additional variations.

Keynotes

  • Developer Keynote - Thursday Oct 6th 11-12pm
  • Main Dreamforce Keynote - Wednesday Oct 5th 1-3pm
  • Mark & Exec Q&A - Friday Oct 7th 2-3pm

Lightning

It goes without saying that everything will be either Lightning or Trailhead based this year. Probably both.

Custom Metadata

IoT

Miscellaneous


General Tips

Hopefully you've been an adult long enough by now to know that if you're going to do a lot of walking, you need to wear something comfortable on your feet. It seems like an odd thing to have to remind people about. Then again, all my shoes are comfortable. Why are people buying shoes that aren't?

  • "Don’t bring your laptop." In previous years other channels were promoting travelling light with just a cellphone and maybe a tablet. I say bring a small laptop to the developer zone in Moscone West. Seen something cool in a session and want to try it out straight away? A laptop gives you full access to Salesforce and your favorite developer tools. I've never tried to code Apex on my phone or tablet, but I'm pretty sure it would be a frustratingly slow experience. With Trailhead being such a big part of the developer zone this year, it could be useful to knock out a few modules on the go. There are also the development "Mini Hacks" to be completed. It's easier to have your own machine on hand than to wait for a community machine of unknown configuration.
  • Following on from that, create a blank dev org. Maybe a prerelease org. This gives you a blank canvas to experiment from.
  • Bring a power bank type device to charge your cellphone so you can avoid being tied to a powerpoint. You can probably pick several of these up from vendors as giveaways if need be.
  • Talk to the person next to you, find out what they do for a living with Salesforce. Find out what sessions they liked so far and what they intend to attend.
  • If you get a good photo of a presenter during a session, share it with them. The session audio and slides are often recorded, but there may be no other visual proof that they presented at Dreamforce.
  • Be mindful of who you let scan your badge. By all means, if you want to hear from them again scan away. Otherwise, is it worth giving your contact details to a vendor for some shiny trinkets to take home to the kids?
  • A developer can mostly stay within the Moscone West building and find plenty of suitable sessions and vendors to visit. It will be full of technical sessions and activities. That's not to say an excursion out to Moscone North for the main expo isn't worth it.
  • Be adaptable with your scheduling. The majority of the sessions are recorded. It's sad for the presenters who have put so much effort into creating their sessions, but focus on things that you can't catch up on later in the recordings.
  • Stop by the Admin Zone. In previous years they have offered professional headshots. Do this early in the conference before lack of sleep starts to catch up with you.
  • Get your Twitter handle and avatar on your badge. I spend all year interacting with people via Twitter, then struggle to identify them in real life if they don't resemble their abstract avatar.
  • Fleet Week San Francisco is on October 3-10. If you like planes, the airshow is worth a detour or an extended stay if you can manage it.

International Traveler

  • Plan on having an extra bag on the way back in case you pick up some oversized swag.
  • Get a local SIM card. Have a plan if you previously relied on SMS two-factor authentication. Update apps like Uber with your new temporary contact details.
  • Switch your Calendar to PST.
  • If you can time it right, drop ship things to the FedEx office at 726 Market St. It is only a quick walk from the conference and you can get a "Hold at FedEx location" when shipping.

Tuesday, August 16, 2016

Using Two-Factor Authentication in Salesforce with Windows 10 Mobile

As part of the Trailhead User Authentication - Secure Your Users' Identity module I enabled Two-Factor authentication for a user in my developer org.

Upon logging in with the user required to use 2FA I now get the following prompt to download the "Salesforce Authenticator from the App Store or Google Play":

As a Windows Phone / Windows 10 Mobile user this wasn't really an option for me.

Happily, Salesforce is using the IETF RFC 6238 time-based one-time password (TOTP) algorithm. Because it's a standard, we can substitute in another app that is available, such as Microsoft's Authenticator.

  • Use the "Choose Another Verification Method" link at the bottom of the "Connect Salesforce Authenticator" page.
  • Choose "Use verification codes from an authenticator app"
  • Start the Authenticator app on your phone. Use the "add" app bar button. Use the "Scan" button.
  • Optionally tap the new entry to give it a more meaningful name.
  • Use the generated code to complete the authentication process.
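Since TOTP is a published standard, what the app computes is nothing magic. Purely as an illustration, here is a rough, untested sketch of the RFC 6238 calculation in Apex, assuming the shared secret has already been base32-decoded into a Blob and using the default 30-second time step and 6-digit output:

public class TotpSketch {
    private static final String HEX_DIGITS = '0123456789abcdef';

    // Generate a 6-digit TOTP code for the given secret and Unix time in seconds.
    public static String generate(Blob secret, Long unixTimeSeconds) {
        Long counter = unixTimeSeconds / 30; // RFC 6238 default time step
        Blob counterBytes = EncodingUtil.convertFromHex(longToHex16(counter));
        Blob hmac = Crypto.generateMac('hmacSHA1', counterBytes, secret);
        String hex = EncodingUtil.convertToHex(hmac); // 20 bytes = 40 hex chars

        // Dynamic truncation (RFC 4226): the low nibble of the last byte
        // selects a 4-byte window; mask the top bit and keep 6 decimal digits.
        Integer offset = HEX_DIGITS.indexOf(hex.substring(hex.length() - 1));
        Long window = hexToLong(hex.substring(offset * 2, offset * 2 + 8));
        Long truncated = Math.mod(window & 2147483647L, 1000000L);
        return String.valueOf(truncated).leftPad(6, '0');
    }

    // Counter packed as an 8-byte big-endian value, expressed as 16 hex chars.
    private static String longToHex16(Long value) {
        String hex = '';
        Long v = value;
        for (Integer i = 0; i < 16; i++) {
            Integer nibble = Math.mod(v, 16L).intValue();
            hex = HEX_DIGITS.substring(nibble, nibble + 1) + hex;
            v = v / 16;
        }
        return hex;
    }

    private static Long hexToLong(String hex) {
        Long result = 0;
        for (Integer i = 0; i < hex.length(); i++) {
            result = result * 16 + HEX_DIGITS.indexOf(hex.substring(i, i + 1));
        }
        return result;
    }
}

Feeding in DateTime.now().getTime() / 1000 as the time value should produce the same code the phone app is displaying for the same secret.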


Wednesday, August 3, 2016

Integrating UAV/drone remote data acquisition with Salesforce to enhance logistics

Late last year I did a brief interview at the Auckland Salesforce APAC tour event and talked about capturing sensor data from quadcopters in flight and the potential of integrating this data with Salesforce.

One of the first scenarios that I explored was one that is topical here in New Zealand.

New Zealand has no native terrestrial mammals (except for bats and seals).
[Biodiversity of New Zealand and DOC Native animals]

If something is running around in the bush and it isn't an insect or bird (or a very confused native bat or seal) then it is an introduced species.

A number of introduced species, such as rats, possums, and mustelids, survive by predation of native species. And if they aren't directly eating the natives, they are competing for the same food and resources. Possums and feral cats can also spread diseases. All in all, they are unwelcome visitors in the local ecosystem. Monitoring and trapping programs are used by various conservation groups to help control the spread and population of introduced species.

A large number of traps and monitoring stations are deployed out in the wild and on farms in locations where it can be time consuming to check them, either because of their remoteness and distribution over many hectares or simply because of their sheer number. Staff or volunteers need to physically visit each site regularly to check if the trap needs any attention. The labour involved in checking and maintaining the traps can be a significant percentage of the overall cost [Source] and a limiting factor in how many traps can be deployed.

[Figure: Decline (attenuation) in radio signal strength through forest with increasing distance from a transmitter at four frequencies, compared to transmission through free space (i.e. with no vegetation). Source]

There are existing solutions that utilize a sensor and wireless connection on each trap. These sensors rely on having either an internet, cellular, or satellite link to communicate the trap status back. Transmitting through dense forest also reduces radio signal strength.

Which brings me back to Salesforce and UAVs. Can I cut the cost of the equipment deployed with each trap by keeping the sensor and wireless link very basic and then relying on UAV flights in the area to collect the data periodically? The end goal is to allow trap checkers to focus their attention where they will be most productive, and to expand the area of operation.

High level plan:

  • Put a small short range transmitter on each trap that sends out a periodic signal when the trap needs attention. Ideally the sensor on each trap will be very basic and will be able to transmit for a sufficient period of time once triggered. The transmission range should be able to reach a height above the canopy where the UAV can pass by.
  • Record the geolocation where each trap is deployed using a record in Salesforce.
  • Periodically dispatch a UAV to fly a circuit of the traps in the area. The paths for these flights can be determined from the geolocation data in Salesforce.
  • The UAV will carry the receiver and a small computer (Raspberry Pi, Arduino, or similar) to capture the signal data plus additional telemetry (GPS location at the time of the signal).
  • When the UAV has an internet connection relay the collected data back into Salesforce.
  • Use the collected data to notify which traps need attention.
  • Run analytics over the gathered data to identify gains, such as finding areas that would benefit from having more traps deployed.

There are a lot of moving parts here (some more literally than others). So before I get too far ahead of myself, we'll start with some of the basics and I'll cover the remaining parts in a number of subsequent blog posts.

Trap tracking in Salesforce

A good place to start will be how the trap records are stored in Salesforce. In the simplest case this can just be a custom object with fields for the applicable details, such as when and where the trap was deployed. A geolocation compound field is particularly useful for the latter, as it brings native support for distance calculations around a latitude and longitude pair.

I'll take a slight detour here from the immediate scenario above to explore something similar that still covers a number of important points. Another introduced pest species here in Nelson is the great white butterfly. The key difference is that much of the trapping for the great white butterfly occurs in a suburban environment around residential addresses. This allows the use of the new automatic geocoding for addresses that became available in Summer '16.

Before the auto geocoding will occur you need to review and activate the Data.com clean rules. I activated them for Leads as a starting point.
Setup > Administer > Data.com Administration > Clean > Clean Rules

I also found it useful to add formula fields for the geolocation components (latitude, longitude, and accuracy), as they can't otherwise be directly exposed on the page layout.

Now, with just the street address details for the properties of interest entered against Leads in Salesforce, a SOQL query can be used to find the points I need to fly to within my operational area, using the GEOLOCATION function to define the takeoff point and the DISTANCE function to search for sites of interest within the operating area.

SELECT Id, LastName, Latitude, Longitude
FROM Lead
WHERE DISTANCE(Address, GEOLOCATION(-41.264268, 173.291987), 'KM') < 2

Unfortunately the native Visualforce mapping controls aren't available in developer edition orgs. Instead I'll export the locations of interest in the Keyhole Markup Language (KML) used by Google Earth. I was going to use the Google Earth API to embed it in a web page, but it was deprecated by Google (Boo!).


A Visualforce page can be used to generate the KML file from the SOQL query. This is a minimal initial version. I'll parameterize the origin location for the search and likely use it as the starting point for the flight.

Visualforce to generate KML
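A rough, untested sketch of what that page could look like. The controller name and the traps.kml file name suffix on the contentType (which sets the downloaded file name) are assumptions:

<apex:page controller="TrapKmlController" contentType="application/vnd.google-earth.kml+xml#traps.kml"
        showHeader="false" sidebar="false" standardStylesheets="false"><?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <apex:repeat value="{!sites}" var="site">
    <Placemark>
      <name>{!site.LastName}</name>
      <Point>
        <!-- KML coordinate order is longitude,latitude,altitude -->
        <coordinates>{!site.Longitude},{!site.Latitude},0</coordinates>
      </Point>
    </Placemark>
    </apex:repeat>
  </Document>
</kml>
</apex:page>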

Controller to generate KML
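And a matching minimal controller sketch, reusing the SOQL from above with the origin hard-coded for now:

public with sharing class TrapKmlController {
    public List<Lead> sites { get; private set; }

    public TrapKmlController() {
        // Hard-coded takeoff point for now; to be parameterized later.
        sites = [SELECT Id, LastName, Latitude, Longitude
                 FROM Lead
                 WHERE DISTANCE(Address, GEOLOCATION(-41.264268, 173.291987), 'KM') < 2];
    }
}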

That's sufficient to export the KML file into Google Earth for the points of interest.

Visualforce for Google Maps loading KML
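Again a hypothetical sketch; the custom setting name and the Sites URL are placeholders for your own:

<apex:page showHeader="false" sidebar="false" standardStylesheets="false">
    <div id="map" style="width:100%; height:600px;"></div>
    <script src="https://maps.googleapis.com/maps/api/js?key={!$Setup.GoogleMapsSettings__c.API_Key__c}"></script>
    <script>
        var map = new google.maps.Map(document.getElementById('map'), {
            center: {lat: -41.264268, lng: 173.291987},
            zoom: 14
        });
        // The KML must be publicly reachable by Google's servers (hence Sites).
        // The random query string defeats Google's caching of the KML content.
        var kmlUrl = 'https://yourdomain-developer-edition.na1.force.com/TrapKml?r=' + Math.random();
        new google.maps.KmlLayer({ url: kmlUrl, map: map });
    </script>
</apex:page>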


An additional Visualforce page can be set up to embed a Google Map and directly load the KML file in. I've kept the Google API key in a custom setting. The KML file needed to be publicly accessible to the Google mapping servers, so I set up Sites to expose it and allowed the Public Access Settings profile access to Leads and the geolocation fields. It appears Google was caching the KML content; adding a random query string was sufficient to get it updating.


There is still lots to explore here, with the next pressing part being finding a route between all the sites that need to be flown. If you've ever taken a computer algorithms or AI course you'll know this as the Travelling Salesman Problem: how to find the optimal (or close to optimal) path that visits every node.

I'll come back to that shortly, as it is way too interesting not to try something like a genetic algorithm or nearest neighbor algorithm in Apex to look for some solutions.
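As a first taste, the nearest neighbour heuristic only takes a few lines of Apex. A rough sketch, assuming the Lead records from the query above and the standard Location methods:

public class NearestNeighbourRoute {
    // Greedy nearest neighbour: repeatedly fly to the closest unvisited site.
    // Not optimal, but a cheap approximation for the Travelling Salesman Problem.
    public static List<Lead> plan(Location takeoff, List<Lead> sites) {
        List<Lead> route = new List<Lead>();
        List<Lead> remaining = new List<Lead>(sites);
        Location current = takeoff;

        while (!remaining.isEmpty()) {
            Integer nearestIndex = 0;
            Double nearestKm = null;
            for (Integer i = 0; i < remaining.size(); i++) {
                Location candidate = Location.newInstance(
                    remaining[i].Latitude.doubleValue(),
                    remaining[i].Longitude.doubleValue());
                Double km = Location.getDistance(current, candidate, 'km');
                if (nearestKm == null || km < nearestKm) {
                    nearestKm = km;
                    nearestIndex = i;
                }
            }
            Lead next = remaining.remove(nearestIndex);
            route.add(next);
            current = Location.newInstance(next.Latitude.doubleValue(), next.Longitude.doubleValue());
        }
        return route;
    }
}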

In the meantime...


Caveats

There are laws and regulations on where and when you can fly a UAV, quadcopter, kite, helium balloon on a really long string, etc...
You will need to educate yourself about the laws and regulations that may be applicable in your country, state, province or locality.

Within New Zealand the Airshare website is a good starting point for the rules defined by the Civil Aviation Authority.

Tuesday, August 2, 2016

Preventing trigger recursion and handling a Workflow field update

Like a good developer, I've included recursion protection in my managed package after update triggers to prevent interactions with other triggers in an org from creating an infinite update loop that would ultimately end in a "maximum trigger depth exceeded" exception. The recursion protection mechanism is fairly basic. It uses a class level static Set of processed record Ids. The first thing the trigger does is skip additional processing from any record ID already in the Set. After the record is first processed by the trigger its Id is added to the static Set.

The functionality for the triggers in question is dynamic in nature. Admins who install the managed package can configure a list of fields of interest on an Opportunity that will be dynamically mapped to custom records related to the Opportunity. E.g. They may map the Opportunity field with the API name "Description" into a custom record related to the Opportunity. This is then used for further processing when integrating with an external system. The important part is that it is entirely dynamic. Users of the managed package should be able to configure any Opportunity API field name and it will be mapped by the trigger to the custom record for further processing.

This setup works well with one exception: what if a subsequent trigger or workflow field update rule makes further changes to one of the mapped fields? Per the Triggers and Order of Execution documentation, workflow rules execute after the triggers. The workflow field update will cause the trigger to fire again, but the current recursion protection will prevent any further processing from occurring.

12. If the record was updated with workflow field updates, fires before update triggers and after update triggers one more time (and only one more time), in addition to standard validations. Custom validation rules, duplicate rules, and escalation rules are not run again. [Source]

I needed a mechanism that detects if one of the dynamically mapped fields has subsequently changed and to run the trigger auto mapping again. In the simplest case where I was only interested in a single field changing a Map from the record ID to the last processed field value could be used (See How to avoid recursive trigger other than the classic 'class w/ static variable' pattern?). The challenge here is that the fields of interest are dynamic in nature so they can't be predefined in a Map.

In my case the trigger field mapping functionality was idempotent. So while it was important that it didn't run recursively if nothing had changed on the base record, I didn't need to be exact about which fields were changing. Given this, I went with storing the System.hashCode(obj) for the Opportunity at the time it was last processed. The hash code helps here as any change to a field on the Opportunity will change the hash code, making it ideal for detecting whether there have been any field changes on the Opportunity.

The following example was put together directly by hand, so it might contain syntax errors etc...


trigger DynamicOpportunityFieldTrigger on Opportunity (after update) {
    OpportunityFieldMapper ofm = new OpportunityFieldMapper();
    ofm.mapFields(Trigger.new, Trigger.oldMap);
}

public class OpportunityFieldMapper {
    // Record Ids already processed in this transaction, mapped to the
    // Opportunity hash code at the time they were last processed.
    private static Map<Id, Integer> visitedRecordIdsToLastSeenHashMap = new Map<Id, Integer>();

    // Applicable Opportunities, pruned to exclude records already processed
    // (and unchanged) in this transaction.
    private List<Opportunity> recordsToMapFieldsFor = new List<Opportunity>();

    public void addOpportunity(Opportunity opp) {
        if (visitedRecordIdsToLastSeenHashMap.containsKey(opp.Id)) {

            Integer lastSeenHash = visitedRecordIdsToLastSeenHashMap.get(opp.Id);
            Integer currentHash = System.hashCode(opp);

            if (lastSeenHash == currentHash) {
                System.debug(LoggingLevel.DEBUG, 'OpportunityFieldMapper.addOpportunity skipping visited OpportunityId: ' + opp.Id + '. Unchanged hash');
                return;
            }

            System.debug(LoggingLevel.DEBUG, 'OpportunityFieldMapper.addOpportunity Hash for OpportunityId: ' + opp.Id + ' changed from ' + lastSeenHash + ' to ' + currentHash + '. Not skipping due to change');
        }

        visitedRecordIdsToLastSeenHashMap.put(opp.Id, System.hashCode(opp));

        // Queue for later field mapping.
        recordsToMapFieldsFor.add(opp);
    }

    public void mapFields(List<Opportunity> triggerNew, Map<Id, Opportunity> triggerOldMap) {
        for (Opportunity opp : triggerNew) {
            addOpportunity(opp);
        }

        // Use the recordsToMapFieldsFor collection to perform the actual mappings
    }
}

Thursday, June 30, 2016

Monitoring your Salesforce API usage

This seems to be a fairly common request on the Salesforce forums that developers frequent.

  • What is the REQUEST_LIMIT_EXCEEDED: TotalRequests Limit exceeded. error about?
  • How can I find out what caused me to hit it?
  • How do I determine what is making the API calls?
  • How can I get insight into an API call that took place on a certain date for one of our connected apps?

First, some context. This is the "Total API Requests" limit. It is a rolling limit for an organization over the last 24 hours. This means an API call made just now will count towards that limit until 24 hours from now. Don't expect the limit to reset back to zero at midnight.

The exact size of this limit depends on the Salesforce Edition and the number of per user licenses of a given type you have. It is possible to purchase additional API calls without needing more user licenses.

Basic monitoring

There are two locations where it is easy to check the current API usage.

Under Setup > Company Profile > Company Information there is an API Requests field. This will show you the current API call count and the maximum you can reach.

Then, within Reports > Administrative Reports there is the API Usage Last 7 Days report.

This provides slightly finer detail, such as the username that held the session and the Client Id that was used to make the call. The Client Id can be useful as it can identify which external app was consuming the API calls.

Receiving a warning

You can configure an email alert to a user when the API requests exceed a percentage of your maximum requests. This is a RateLimitingNotification record that you create from Setup > Administration Setup > Monitoring > API Usage Notifications.

Monitoring via the API

The REST API has a Limits resource and a specific Organizations Limits resource that includes the "DailyApiRequests".
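For example, the limits resource can be checked from anonymous Apex. A quick sketch; calling your own instance like this requires a Remote Site Setting for it:

HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getSalesforceBaseUrl().toExternalForm() + '/services/data/v37.0/limits');
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
HttpResponse res = new Http().send(req);

// The response includes a DailyApiRequests entry with Max and Remaining values.
Map<String, Object> limits = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
Map<String, Object> apiRequests = (Map<String, Object>) limits.get('DailyApiRequests');
System.debug('API requests remaining: ' + apiRequests.get('Remaining') + ' of ' + apiRequests.get('Max'));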

The SOAP APIs have a LimitInfoHeader that can be used to monitor API usage.

Event Monitoring API

The Event Monitoring API can provide much finer details about API calls over a wider history. With this paid feature you can see exactly what API calls were made in a given time period.

E.g. From the developer console query editor

select Id, EventType, LogDate, LogFileLength from EventLogFile where EventType = 'API' and LogDate = 2016-02-20T00:00:00.000Z

Look for the EventType of API and the LogDate for the UTC day of interest. Unfortunately there isn't a single comprehensive EventType that will allow you to monitor all events that contribute to the limit; there are also separate Bulk API, Metadata API Operation, and REST API event types.

You can then pull down the single LogFile field, which is a base64-encoded CSV with all the API calls for that day.

select Id, LogFile from EventLogFile where ID = '0AT700000005WDaGAM'
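In Apex the base64 field surfaces as a Blob, so decoding it is short work (a sketch; very large log files may exceed Apex heap limits):

EventLogFile elf = [SELECT LogFile FROM EventLogFile WHERE Id = '0AT700000005WDaGAM'];
String csv = elf.LogFile.toString();     // LogFile is exposed as a Blob in Apex
System.debug(csv.substringBefore('\n')); // the first line is the CSV header row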

The FuseIT SFDC Explorer has an Event Log tab that uses the same API calls and will extract the file for you and export it to a CSV.

Within the LogFile, look for the USER_ID, CLIENT_IP, and CLIENT_NAME columns to help identify which app is making the calls.


Tuesday, June 28, 2016

Taming the size of a Salesforce Canvas

For an artist, facing a blank canvas can be a real challenge. For a Salesforce developer a Canvas app can be challenging for an entirely different reason - how to even define what size the Canvas is to start with?

In the ideal world you could just follow the docs and Automatically Resize the Canvas App to suit the content size. In practice this doesn't work so well for all scenarios.

I'm looking to embed an existing web application that was previously located in a Web Tab iframe into a Canvas Visualforce page using <apex:canvasApp />. My ideal goal is that the external web application blends in with Salesforce. There shouldn't be any jarring scrollbars on the iframe that make it look out of place.

The Web Tab approach worked well in that the width of the iframe didn't need to be defined, so it would get an <iframe width="100%">. The iframe can then shrink and grow to follow any changes to the browser size, and the nested app can correspondingly adjust its width to suit the available dimensions. The downside to this approach is that the iframe's height needs to be specified. This is more problematic, and requires a fixed height that is sufficient to hold the majority of content. E.g. 3000px. Ech! Crazy vertical scroll bar!

Back to the Canvas iframe and the problem with dimensions. Firstly, a plain default Canvas app won't size past 2000px in height and 1000px in width. You need to explicitly set the maxHeight and maxWidth attributes to "infinite" (documented in the Docs but not mentioned in other locations). With the browser at full width on a 1080p screen the default 1000px width limit is way too low. Sadly there are currently no corresponding minWidth/minHeight attributes.
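For reference, that means explicitly opting in on the Visualforce page, roughly like so (applicationName is whatever your connected app is called; the initial width and height are arbitrary here):

<apex:page>
    <apex:canvasApp applicationName="My_Canvas_App"
                    width="800px" height="2000px"
                    maxWidth="infinite" maxHeight="infinite"
                    scrolling="no"/>
</apex:page>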

Now that we've pulled the limits off how big the iframe can get, how do we correctly size it to both the browser window and the content within? As mentioned above, the auto resize should be just the thing here. Unfortunately it doesn't play so well with content that dynamically scales to the available space. I found it would either default to the minimum page width defined by the content, or worse still, shrink as the iframe content used JavaScript to resize and then reacted to the auto resizing in an unending loop. If there were a way to define the minimum height and to let the iframe width stay at "100%" it would be infinitely more useful.

The autogrow() resizing script appears to come from canvas-all.js. It is basically a timer that periodically calls resize. I haven't gone through the finer details, but I believe part of the code communicates with the parent window so that the iframe can be resized accordingly.

How can I size a Canvas apps iframe in Visualforce to be the full window width with a height to fit the content?

At this stage I'm experimenting with either using a custom canvas-all.js implementation or manually calling Sfdc.canvas.client.resize(). I'll update this page when I get somewhere workable.
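The manual call looks roughly like the following, from within the canvas app itself. A sketch only; the script path and how you capture the signed request will depend on your app:

<!-- canvas-all.js served from wherever your app hosts the Canvas SDK -->
<script type="text/javascript" src="/sdk/js/canvas-all.js"></script>
<script type="text/javascript">
    // 'sr' is the parsed signed request captured when the app loaded.
    function resizeToContent(sr) {
        Sfdc.canvas.client.resize(sr.client, {
            height: document.body.scrollHeight + 'px'
        });
    }
</script>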

Another potential problem with the height is dynamic content, such as modal dialogs, that may not register as affecting the page height.


Friday, June 3, 2016

The importance of reading the Salesforce Release notes - a cautionary tale

By the time you read this the underlying problem is likely to be a non-issue with the transition to Summer '16 complete. It does highlight the importance of going through the release notes with a fine-tooth comb.

How carefully do you read the release notes for each of the big triannual releases? For Spring '16 the complete document weighed in at 486 pages. "Great!" you say, that's a lot of new features and fixes to play with.

Spoiler alert: I'll admit here that I read them, but didn't commit the entire document to memory. This caught me out as follows:

Some names and identifying details have been changed to protect the privacy of individual pods.

  • Client: For our FooBarWidget records, I need to be able to set if it has one of 4 possible values. Only those 4 values are applicable.
  • Me: That sounds like a good candidate for a picklist field. I'll add one with these values you provided and have it to you shortly for testing.
  • Client: Oh, and we need this in production ASAP.
  • Me: Got it, the usual.
  • Me to dev sandbox: Add a new picklist field to FooBarWidget please.
  • Spring '16 Dev Sandbox: <Presents the shiny new picklist field screen, complete with global picklist options and a "Strictly enforce picklist values" checkbox.>
  • Me to dev sandbox: It's not really a global picklist. And when I looked further at those there is a big red BETA next to them. Let's just define the values here as a one-off thing. That "Strictly enforce picklist values" option sounds good. Definitely don't want those rascally users putting inappropriate values in the picklist again. No siree!
  • <The sound of hammers, saws and typing. Maybe some random metal grinding to look good on camera. End result is a changeset for the new picklist field.>
  • Me to pre-production sandbox: Validate and then quick deploy this change set.
  • Spring '16 Pre-production sandbox: Done, and have a deployment fish for your troubles.
  • <High fives CS6 instance. Which was tricky with the whole cloud thing, but we made it work.>
  • Me to Client: Please test the functionality in pre-prod. When you are happy with it we can deploy it to production.
  • Client: It works. And you did it so quickly! You sir are the most meaningful and valued member of this team!
  • Me to Client: I do what I can.
  • Me to Production: Validate this change set.
  • Spring '16 Production: Woah, woah, woah, back the change set up.
    "Cannot have restricted picklists in this organization."
    No deployment fish for you!
  • Me mumbling to self: What the?
  • Me to Google: "Cannot have restric...
  • <Google reads mind>
  • Google: Error message: Cannot have restricted picklists in this organization
  • <Re-reads release notes>
  • Spring '16 Release Notes:
    If you have a Developer Edition org or sandbox org, no setup is required to use restricted picklists. You can package restricted picklists with an app only in Developer Edition or sandbox orgs.
    For all other editions, restricted picklists are available as a beta feature, which means they’re highly functional but have known limitations. Contact Salesforce to enable restricted picklists.
  • Me grumbling to self: It's enabled by default in all sandboxes and dev orgs, but won't work in production without begging to get on the pilot. AND THERE IS NO UI INDICATION THAT IT IS A BETA FEATURE!
    That's just brilliant!
  • Summer '16 Release Notes: Eliminate Picklist Clutter with Restricted Picklists (Generally Available)
  • Me to client: Soooooo, we can't deploy the change set as is to production. We need to do one of the following:
    1. Wait until Summer '16 releases for the feature to become GA. The trust website has it scheduled for one week from now.
    2. Remove the restriction from the picklist and look for other ways to prevent incorrect values in the short term. Make the field restricted again once Summer '16 deploys
    3. Raise a support case and ask our AE to get on the pilot for the one week until Summer '16 arrives.
  • Client: I'm not so much with the meaningfulness and valuing right now.
    Anyhow... That last one sounds like fun. Let's do that!
  • ...

Things degenerate a bit from there and are best not recorded in this medium. The point is that this one was a bit of a pain. Adding the new picklist field in the sandbox gave no indication that things were anything but fine and business as usual. There were no warnings that the features were still in pilot outside of the sandbox and dev orgs. It even deployed just fine into the pre-production sandbox. Then it exploded when trying to deploy to production.

Needless to say, I'm not a huge fan of implicitly activated pilot features in all sandbox and dev orgs without the corresponding BETA indication in the UI.

Admittedly it was all documented right there in the Spring '16 release notes. And the imminent release of Summer '16 will make it all a moot point.

Moral of the story - keep reading those release notes. And I salute you if you can remember everything you read in them.