Thursday, April 7, 2016

The Search for Astro

Update: Finding Astro just became a whole lot harder as the module no longer appears to be available.


In what has become the hallmark of Trailhead emerges a rather cockamamie module to locate the lost Astro mascot. It may seem like an excuse for the creators to make odd videos and run around in the woods Blair Witch style, and maybe it was, considering it went live on the 1st of April, but you might actually learn something along the way (and maybe win a prize).

The key to finding Astro will be in decoding the clues left in the trails. Expect to watch videos of noir interrogations, Tanooki cosplay, goats, ... and then jump out to the indicated modules to find additional clues to complete the code. What code is this, you ask? The first unit provides more details, but it looks something like this:

Don't worry if you never finished reading the Cryptonomicon. Or if you don't know the difference between a one-time pad and using the Apex Crypto class with AES256.
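For the curious, here's a minimal sketch of the Apex Crypto class with AES256 mentioned above (the message is made up; this is real crypto, unlike the pencil-and-paper decoding the module asks for):

    // Generate a 256-bit AES key, encrypt a message, then decrypt it again.
    Blob key = Crypto.generateAesKey(256);
    Blob encrypted = Crypto.encryptWithManagedIV('AES256', key, Blob.valueOf('Astro was last seen near the trailhead'));
    Blob decrypted = Crypto.decryptWithManagedIV('AES256', key, encrypted);
    System.debug(decrypted.toString()); // round-trips back to the original message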

Instead, download the zip from the first part of the module that includes an Excel file you can use to complete the challenge. Alternatively, I've created an equivalent Google Sheet for decoding the clues.

Other things you might learn in the module:

  • How long it takes to get Cloudy the Goat groomed for a World Tour event.
  • Who sleeps on a pillow stuffed with Astro's hair.
  • "uploading data you get from a random dog you meet in the woods is NOT a Salesforce best practice"
  • Items that Cloudy the Goat has been using for mastication
  • #PancakeHands

So, hit the trail. Decrypt the note. And bring Astro home!


See also:

Wednesday, March 30, 2016

Salesforce Force.com IDE superpowers uncovered

Disclaimer: I've been informed by Salesforce that this is an exceptional case, as the functionality is still being refined and will likely be exposed broadly in a future API version (#SafeHarbour). Although perhaps not in this exact form. As with all undocumented API features, it could disappear at any time in a future release.

Borrowing the Warranty phrasing from Scott Hanselman:
Of course, this is just some dude's blog. Depending on undocumented API functionality is a recipe for losing your job and a messy divorce. Salesforce are likely to make changes to the existing functionality between major releases. As they don't know you are using this functionality they won't tell you. There's no warranty, express or implied. I don't know you and I don't know how you got here. Stop calling. Jimmy no live here, you no call back! [My current landline phone number used to belong to a Thai takeaway shop - True Story]

My blog disclaimer also applies.


In putting together an answer to a Salesforce StackExchange question I came across something odd with the Force.com IDE source code. The question needed a way to find details about installed packages.

I knew from my keyprefix list that InstalledPackageVersion existed. It wasn't, however, exposed via SOQL to the Partner API or Tooling API. So why, when I was Googling around, did it show up in a SOQL query in the source code for the Force.com IDE?

    // P A C K A G E S
    DEVELOPMENT_PACKAGES("SELECT Id, Name, Description, IsManaged FROM DevelopmentPackageVersion"),
    INSTALL_PACKAGES("SELECT Id, Name, Description, IsManaged, VersionName FROM InstalledPackageVersion"),

What makes the Force.com IDE so special that it can run SOQL queries that other API users can't?

The answer lies in the SOAP API CallOptions.client header. The docs say this is "A string that identifies a client." I've used it before, after passing the AppExchange security review, to access the Partner API in Professional Edition orgs. It turns out this is also the key to accessing the hidden abilities of the Force.com IDE. Again from the source code for the Force.com IDE:

    //value is critical for Eclipse-only API support
    private final String clientIdName = "apex_eclipse";

This is later combined with the API version to create the callOptions header.

So what if we use the same string when we establish our Session using the Partner API and then on subsequent calls?

The FuseIT SFDC Explorer supports setting the Client Id on the New Connection login screen and in a saved connection string.

   <add name="ForceCom IDE Login" 
     connectionString="G4S:user id=user@example.com;password=shhSecret;environment=Production;Client=apex_eclipse/v36.0"/>

The raw SOAP POST request:

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <soap:Header>
    <CallOptions xmlns="urn:partner.soap.sforce.com">
      <client>apex_eclipse/v36.0</client>
    </CallOptions>
    <SessionHeader xmlns="urn:partner.soap.sforce.com">
      <sessionId>00D300000000001!AQ0AQDxJ1jtrcX3XNotARealSessionIdThatYouCanUseOAgFtJDkHc.IQ6Wjv6b1rTp7qQlNMOOFcZj6</sessionId>
    </SessionHeader>
  </soap:Header>
  <soap:Body>
    <!-- etc... -->
  </soap:Body>
</soap:Envelope>

Now we have access to additional sObject Types that were previously inaccessible. So far I've tried:

  • DevelopmentPackageVersion
  • InstalledPackageVersion
  • ApexClassIdentifier
  • ApexClassIdentifierRelationship

I'll do some more poking around as time permits to see if there are any other hidden treasures.

Friday, March 4, 2016

FuseIT SFDC Explorer 3.0.16053.2

The latest v3.0 release of the FuseIT SFDC Explorer now supports TLS 1.1 and 1.2. This was required as Salesforce will be disabling TLS 1.0 via critical updates available in Spring '16.

Part of this was to upgrade to the .NET Framework 4.6.1 to get native support for the newer TLS versions. You can get the .NET Framework 4.6.1 installer from https://www.microsoft.com/en-us/download/details.aspx?id=49981. Note that it does need to be 4.6.1; the 4.6 installer won't work.

Without this version you are likely to get an error like the following when attempting to log in:

UNSUPPORTED_CLIENT: TLS 1.0 has been disabled in this organization. Please use TLS 1.1 or higher when connecting to Salesforce using https.

Other notable changes:

  • Update ApexLog parsing to allow for Spring '16 date formats
  • Wsdl2Apex: Add basic WSDL schema validation and reporting via error messages. Detect potential WSDL 1.1 structure issues.
  • Wsdl2Apex: Apex variable names can't start with an underscore. Will be prefixed with 'x'

Thursday, February 18, 2016

Trailhead - Navigate the Salesforce Advantage

From time to time my parents or grandparents will inquire about what I do for a living. With the latter I first explain that there is no plugging in of wires involved, unless I'm breadboarding an IoT experiment, which leads to further confusion.

I've also found that starting out with the term "cloud computing" leads to vague looks. And the name "Salesforce" makes them think I'm in retail. So where to start?

Trailhead to the rescue!

The new Navigate the Salesforce Advantage trail can help explain who and what Salesforce is and how businesses use it. All with a nautical theme!

So what will you learn on this trail?

The entire trail only takes around an hour and gives you a good primer on all things Salesforce.


See also the Spring '16 Release specific module (available for a limited time).

Thursday, January 7, 2016

FuseIT SFDC Explorer v2.12 Release - Event Log viewer

The latest v2.12 release of the FuseIT SFDC Explorer now supports a viewer for the Event Log API. This can be useful if you want to quickly browse the Salesforce Event Log content without having to process the base64 encoded CSV content from the LogFile field of EventLogFile.

You can also export the CSV to open it in an external tool, such as Excel.

The LogFileFieldTypes and LogFileFieldNames from the EventLogFile are used to improve the formatting of the log file.
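For comparison, here's a minimal sketch of the manual approach in anonymous Apex, pulling the CSV straight out of the LogFile field (the ORDER BY and LIMIT are just for illustration):

// Pull back one event log file and turn the base64 encoded LogFile field into CSV text.
EventLogFile elf = [
    SELECT Id, EventType, LogDate, LogFileFieldNames, LogFileFieldTypes, LogFile
    FROM EventLogFile
    ORDER BY LogDate DESC
    LIMIT 1
];
// In Apex the base64 field comes back as a Blob, so the decode is trivial.
String csv = elf.LogFile.toString();
System.debug(csv.substringBefore('\n')); // the CSV header row, matching LogFileFieldNames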

See also:

Tuesday, December 22, 2015

A Tale of Two Triggers

The following is a cautionary tale about the importance of having only one* trigger per object type. It is based on actual events. Only the names, Apex code, and events have been changed.

Imagine you have a trigger something like the one that appears below, along with its associated support Apex class.

Don't worry too much about reading all the code. Just know that the trigger fires after insert or update of an Opportunity record to do some processing. The first order of business is to ensure that the trigger hasn't already run in the current context due to other triggers that may be present in the org. We want to prevent multiple calls to the trigger in one transaction and the potential for trigger recursion. This is achieved via a static Set of Opportunity Ids. Once we've processed a given Opportunity Id and put it in the Set there is no need to revisit it in the current transaction.

trigger MyProduct_Opportunity on Opportunity (after insert, after update) {

    OppTrg_MyProductIntegration productIntegration = new OppTrg_MyProductIntegration ();
    if (trigger.isAfter) {
        productIntegration.handleAfter(trigger.new, trigger.oldMap, trigger.isInsert, trigger.isUpdate);
    }

}
public with sharing class OppTrg_MyProductIntegration {

    private static Set<Id> visitedOpportunityIds = new Set<Id>();

    public void handleAfter(List<Opportunity> triggerNew, Map<Id, Opportunity> triggerOldMap, boolean triggerIsInsert, boolean triggerIsUpdate) {

        List<Task> tasksToInsert = new List<Task>();

        for(Opportunity opp : triggerNew) {

            if(visitedOpportunityIds.contains(opp.Id)) {
                System.debug(LoggingLevel.Debug, 'OppTrg_MyProductIntegration.handleAfter skipping OpportunityId: ' + opp.Id + ' triggerIsInsert:' + triggerIsInsert + ' triggerIsUpdate:' + triggerIsUpdate);
                continue;
            } else {
                visitedOpportunityIds.add(opp.Id);
                System.debug(LoggingLevel.Debug, 'OppTrg_MyProductIntegration.handleAfter processing OpportunityId: ' + opp.Id + ' triggerIsInsert:' + triggerIsInsert + ' triggerIsUpdate:' + triggerIsUpdate);
            }

            // Do further processing using opp.Id
            boolean doStuff = triggerIsInsert;
            // If it's an update, check that certain fields have changed
            if(triggerIsUpdate) {
                Opportunity beforeUpdate = triggerOldMap.get(opp.Id);
                // We only need to do something if the Amount has changed
                doStuff = beforeUpdate.Amount != opp.Amount;
            }
            // Otherwise, if it's an insert, proceed with the additional processing

            System.debug(LoggingLevel.Info, 'OppTrg_MyProductIntegration doStuff:' + doStuff);
            if(doStuff) {
                //System.assertNotEquals(5000, opp.Amount, 'OppTrg_MyProductIntegration deliberate assertion failure doing stuff');
                Task mockTaskToIndicateStuffHappened = new Task();
                mockTaskToIndicateStuffHappened.WhatId = opp.Id;
                mockTaskToIndicateStuffHappened.Subject = 'Follow Up Test Task';
                tasksToInsert.add(mockTaskToIndicateStuffHappened);
            }

        }

        insert tasksToInsert;

    }
}

Now the fun begins. Users report that the desired functionality from the after insert trigger code isn't always occurring in production. The first step is to acquire the associated debug log to see what is going on.

Here is an extract from the DEBUG logging messages for the OppTrg_MyProductIntegration class when the assertion fails:

Time            Event            Info 1  Info 2   Message
14:16:31.573 DML_BEGIN  [26]  Op:Insert Type:Opportunity|Rows:1
14:16:31.714 USER_DEBUG  [15]  DEBUG  OppTrg_MyProductIntegration.handleAfter processing OpportunityId: 0067000000c7d3jAAA triggerIsInsert:false triggerIsUpdate:true
14:16:31.717 USER_DEBUG  [11]  DEBUG  OppTrg_MyProductIntegration.handleAfter skipping OpportunityId: 0067000000c7d3jAAA triggerIsInsert:true triggerIsUpdate:false
14:16:31.721 EXCEPTION_THROWN [29]   System.AssertException: Assertion Failed: Expected: 1, Actual: 0

Look closely at the messages: the after insert/update trigger MyProduct_Opportunity first fires for an update, and only then for the insert. The record was updated before it was inserted!

What's really going on here? The extended debug log shows the cause.

Time            Event              Info 1  Info 2   Message
14:16:31.573 DML_BEGIN    [26] Op:Insert Type:Opportunity|Rows:1
14:16:31.600 CODE_UNIT_STARTED  [EX] 01q70000000TiLP DFB.OpportunityAfterInsertTest on Opportunity trigger event AfterInsert for [0067000000c7d3j]
14:16:31.601 SOQL_EXECUTE_BEGIN [17] Aggregations:0 SELECT Id FROM Opportunity WHERE Id IN :tmpVar1
14:16:31.604 SOQL_EXECUTE_END   [17] Rows:1
14:16:31.605 DML_BEGIN    [21] Op:Update Type:Opportunity|Rows:1
14:16:31.712 CODE_UNIT_STARTED  [EX] 01q70000000TiLK DFB.MyProduct_Opportunity on Opportunity trigger event AfterUpdate for [0067000000c7d3j]
14:16:31.714 USER_DEBUG    [15] DEBUG  OppTrg_MyProductIntegration.handleAfter processing OpportunityId: 0067000000c7d3jAAA triggerIsInsert:false triggerIsUpdate:true
14:16:31.714 USER_DEBUG    [28] INFO  OppTrg_MyProductIntegration doStuff:false
14:16:31.714 CODE_UNIT_FINISHED   DFB.MyProduct_Opportunity on Opportunity trigger event AfterUpdate for [0067000000c7d3j]
14:16:31.714 DML_END       [21]
14:16:31.717 USER_DEBUG    [11] DEBUG  OppTrg_MyProductIntegration.handleAfter skipping OpportunityId: 0067000000c7d3jAAA triggerIsInsert:true triggerIsUpdate:false
14:16:31.717 DML_END     [26]
14:16:31.717 SOQL_EXECUTE_BEGIN [28]   Aggregations:0 SELECT Id FROM Task WHERE WhatId = :tmpVar1
14:16:31.720 SOQL_EXECUTE_END   [28]   Rows:0
14:16:31.721 EXCEPTION_THROWN   [29]   System.AssertException: Assertion Failed: Expected: 1, Actual: 0

Going carefully through the log, there is another rogue after insert trigger in there that is resulting in an update on the record that was just inserted. This subsequent update causes our trigger of interest to execute first in an update context, before returning to execute for the original insert. The Set of processed Ids is tripping us up here. When the trigger first fires for the update the Id isn't in the Set, so the fields are checked for a changed value. Since the update was to another field the check doesn't pass, and the additional processing is skipped. By the time the trigger fires for the original insert the Id is already in the Set, so the insert processing never happens at all.

trigger OpportunityAfterInsertTest on Opportunity (after insert) {
     
    List<Id> toUpdate = new List<Id>();
    for(Opportunity opp : trigger.new) {
        if(opp.Description == 'Hello') {
            toUpdate.add(opp.Id);
        }
    }
    
    if(toUpdate.size() > 0) {
        List<Opportunity> opps = [Select Id from Opportunity where Id in :toUpdate];
        for(Opportunity opp : opps) {
            opp.Description = 'World';    
        }  
        update opps; 
    }
}
@IsTest
public class OppTrg_MyProductIntegration_Test {

    @IsTest
    public static void StuffExpectedToHappen(){
        Opportunity opp = new Opportunity();
        opp.Name = 'Test';
        opp.Description = 'FooBar';
        opp.StageName = 'Closed Won';
        opp.CloseDate = DateTime.now().date();
        opp.Amount = 5000;
        insert opp;
        
        List<Task> tasksInsertedForNewOpp = [Select Id from Task where WhatId = :opp.Id];
        System.assertEquals(1, tasksInsertedForNewOpp.size());
    }
    
    @IsTest
    public static void WhyDidntStuffHappen(){
        Opportunity opp = new Opportunity();
        opp.Name = 'Test';
        opp.Description = 'Hello';
        opp.StageName = 'Closed Won';
        opp.CloseDate = DateTime.now().date();
        opp.Amount = 5000;
        insert opp;
        
        List<Task> tasksInsertedForNewOpp = [Select Id from Task where WhatId = :opp.Id];
        System.assertEquals(1, tasksInsertedForNewOpp.size());
    }
    
}

Which brings us back to the moral of the story: the only way to get predictable trigger ordering is to have one trigger per object type that passes off to other classes in a defined order. This is sometimes easier said than done. Multiple managed packages can all introduce their own set of triggers.
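A minimal sketch of that pattern (the second handler class name is made up to show the idea):

// The only Opportunity trigger in the org; it does nothing but delegate
// to handler classes in an explicit, predictable order.
trigger Opportunity_Single on Opportunity (after insert, after update) {
    if (Trigger.isAfter) {
        new OppTrg_MyProductIntegration().handleAfter(
            Trigger.new, Trigger.oldMap, Trigger.isInsert, Trigger.isUpdate);
        // new OppTrg_SomeOtherIntegration().handleAfter(...); // subsequent handlers run here, in order
    }
}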

In terms of fixing the example triggers above, we can assume that the action should always occur for an insert. Basically, bypass the Set check unless it is an update operation.
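In sketch form, only the guard inside the loop changes; the rest of handleAfter stays as shown earlier:

for (Opportunity opp : triggerNew) {
    // Inserts always do the work. The visited Set only guards against
    // re-processing the same record on a later update in this transaction.
    if (triggerIsUpdate && visitedOpportunityIds.contains(opp.Id)) {
        continue;
    }
    visitedOpportunityIds.add(opp.Id);
    // ... existing insert/update handling from above ...
}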

See also:

Friday, December 18, 2015

Trailhead - Build a Battle Station App

How does one go about building a moon-sized death star? That's a lot of people and supplies to keep track of for a project with a budget over 1,000,000,000,000 galactic credits. Putting together a system to help manage the build is going to be a major project in and of itself. Or is it?

Build a Battle Station App

Lucky for us the Salesforce Trailhead team has the timely launch of the Build a Battle Station App Trailhead project.

Highlights you'll learn from completing this project:

  • How to keep the Exhaust Port Inspectors from dropping the ball again and letting magic space wizards drop bombs down to the core.
  • How to track all the required supplies. Tractor beams, ultra fast hydraulic units for the doors. Don't forget the light bulbs and toilet paper. Handling the guard rail shortage will be left as an exercise to the readers.
  • Get a quick summary of how many people are actually working on the project.
  • Use Lightning Process Builder and Chatter to announce when you've got a fully armed and operational death star!
  • How to help your fat fingered boss use the mobile app.
  • Test the mobile app in Chrome using the Salesforce1 Simulator app.

As an added bonus, there is also a competition running for those who finish the badge by 2015-12-31 11:59pm PST.

Just noticed Model Complex Products with Hierarchical Assets in the Spring '16 release notes. That could be useful in this project.


Apex Integration Services

There is also the new Apex Integration Services module. If you can get past the SOAP bashing there is some good information in there.

How's this for clicks not code? (Or at least minimal code to meet the ceremony required by the challenge)

  1. Take the URL for the SOAP WSDL from the Apex SOAP Callouts Challenge. No need to save it to disk first.
  2. Put it through the FuseIT SFDC Explorer custom WSDL2Apex implementation. Call the output ParkService and check the option to generate test cases.
  3. Rename the generated mock class "ParkServiceMockImpl"
  4. Rename the generated test class "ParkLocatorTest"
  5. Make the ParkLocator class with the static country method to call the generated class (see the sketch after this list).
  6. In ParkLocatorTest duplicate one of the assertions to also call the ParkLocator.country method to give the required coverage.
  7. Add the remote site setting for the callout URL.
  8. Run the generated ParkLocatorTest test case.
  9. Smugly pass the challenge test
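For reference, step 5 boils down to something like the following. It's only a sketch: the ParksImplPort and byCountry names come from the standard Trailhead WSDL, so adjust them to whatever your generated ParkService class actually exposes.

public class ParkLocator {
    // Delegate to the generated SOAP stub to return the park names for a country.
    public static String[] country(String countryName) {
        ParkService.ParksImplPort service = new ParkService.ParksImplPort();
        return service.byCountry(countryName);
    }
}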

OK, maybe not so smugly. There are more steps above than I care for, but most of those are to satisfy the ceremony of the challenge with regards to naming etc... There is also something odd about this WSDL that is throwing off the generated apex_schema_type_info members and requires the elementFormDefault="qualified" boolean (second to last one) to be manually changed from 'true' to 'false'.
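If you hit the same issue, the line to look for in each generated type class is the apex_schema_type_info array; the middle value is the elementFormDefault flag in question (the namespace shown here is illustrative):

// Generated by WSDL2Apex for each schema type; flip the middle value to 'false'
// if the callout serialisation doesn't match what the service expects.
private String[] apex_schema_type_info = new String[]{'http://parks.services/', 'false', 'false'};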

Still, show me the same level of initial setup from a REST API in Apex and I'd be impressed.