Thursday, January 7, 2016

FuseIT SFDC Explorer v2.12 Release - Event Log viewer

The latest v2.12 release of the FuseIT SFDC Explorer now supports a viewer for the Event Log API. This can be useful if you want to quickly browse the Salesforce Event Log content without having to process the base64 encoded CSV content from the LogFile field of EventLogFile.

You can also export the CSV to open it in an external tool, such as Excel.

The LogFileFieldTypes and LogFileFieldNames from the EventLogFile are used to improve the formatting of the log file.
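For reference, pulling the raw log content yourself from Apex is straightforward. Here is a minimal, untested sketch, assuming the org has at least one EventLogFile record available to query:

```apex
// Fetch the most recent event log record; LogFile holds the CSV content as a base64 Blob
EventLogFile elf = [SELECT Id, EventType, LogDate, LogFile
                    FROM EventLogFile
                    ORDER BY LogDate DESC
                    LIMIT 1];

// Convert the Blob to the underlying CSV text
String csvContent = elf.LogFile.toString();

// The first line is the header row, built from the LogFileFieldNames
List<String> lines = csvContent.split('\n');
System.debug(LoggingLevel.Info, 'Header: ' + lines[0]);
```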

See also:

Tuesday, December 22, 2015

A Tale of Two Triggers

The following is a cautionary tale about the importance of having only one trigger per object type. It is based on actual events. Only the names, Apex code, and events have been changed.

Imagine you have a trigger something like the one that appears below, along with its associated supporting Apex class.

Don't worry too much about reading all the code. Just know that the trigger fires after insert or update of an Opportunity record to do some processing. The first order of business is to ensure that the trigger hasn't already run in the current context due to other triggers that may be present in the org. We want to prevent multiple calls to the trigger in one transaction and the potential for trigger recursion. This is achieved via a static set of Opportunity Ids. Once we've processed a given Opportunity Id and put it in the set, there is no need to revisit it in the current transaction.

trigger MyProduct_Opportunity on Opportunity (after insert, after update) {

    OppTrg_MyProductIntegration productIntegration = new OppTrg_MyProductIntegration();
    if (trigger.isAfter) {
        productIntegration.handleAfter(trigger.new, trigger.oldMap, trigger.isInsert, trigger.isUpdate);
    }

}
public with sharing class OppTrg_MyProductIntegration {
    private static Set<Id> visitedOpportunityIds = new Set<Id>();

    public void handleAfter(List<Opportunity> triggerNew, Map<Id, Opportunity> triggerOldMap, boolean triggerIsInsert, boolean triggerIsUpdate) {

        List<Task> tasksToInsert = new List<Task>();
        
        for(Opportunity opp : triggerNew) {

            if(visitedOpportunityIds.contains(opp.Id)) {
                System.debug(LoggingLevel.Debug, 'OppTrg_MyProductIntegration.handleAfter skipping OpportunityId: ' + opp.Id + ' triggerIsInsert:' + triggerIsInsert + ' triggerIsUpdate:' + triggerIsUpdate);
                continue;
            } else {
                visitedOpportunityIds.add(opp.Id);
                System.debug(LoggingLevel.Debug, 'OppTrg_MyProductIntegration.handleAfter processing OpportunityId: ' + opp.Id + ' triggerIsInsert:' + triggerIsInsert + ' triggerIsUpdate:' + triggerIsUpdate);
            }

            // Do further processing using opp.Id
            boolean doStuff = triggerIsInsert;
            // If it's an update, check that certain fields have changed
            if(triggerIsUpdate) {
                Opportunity beforeUpdate = triggerOldMap.get(opp.Id);
                // We only need to do something if the Amount has changed
                doStuff = beforeUpdate.Amount != opp.Amount;
            }
            // Otherwise, if it's an insert, proceed with the additional processing
            
            System.debug(LoggingLevel.Info, 'OppTrg_MyProductIntegration doStuff:' + doStuff);
            if(doStuff) {
                //System.assertNotEquals(5000, opp.Amount, 'OppTrg_MyProductIntegration deliberate assertion failure doing stuff');
                Task mockTaskToIndicateStuffHappened = new Task();
                mockTaskToIndicateStuffHappened.WhatId = opp.Id;
                mockTaskToIndicateStuffHappened.Subject = 'Follow Up Test Task';
                tasksToInsert.add(mockTaskToIndicateStuffHappened);
                
            }

        }
        
        insert tasksToInsert;

    }
}

Now the fun begins. Users report that the desired functionality from the after insert trigger code isn't always occurring in production. The first step is to acquire the associated debug log to see what is going on.

Here is an extract from the DEBUG logging messages for the OppTrg_MyProductIntegration class when the assertion fails:

Time            Event            Info 1  Info 2   Message
14:16:31.573 DML_BEGIN  [26]  Op:Insert Type:Opportunity|Rows:1
14:16:31.714 USER_DEBUG  [15]  DEBUG  OppTrg_MyProductIntegration.handleAfter processing OpportunityId: 0067000000c7d3jAAA triggerIsInsert:false triggerIsUpdate:true
14:16:31.717 USER_DEBUG  [11]  DEBUG  OppTrg_MyProductIntegration.handleAfter skipping OpportunityId: 0067000000c7d3jAAA triggerIsInsert:true triggerIsUpdate:false
14:16:31.721 EXCEPTION_THROWN [29]   System.AssertException: Assertion Failed: Expected: 1, Actual: 0

Look closely at the messages: the after insert/update trigger MyProduct_Opportunity first occurs for update, and then for insert. The record was updated before it was inserted!

What's really going on here? The extended debug log shows the cause.

Time            Event              Info 1  Info 2   Message
14:16:31.573 DML_BEGIN    [26] Op:Insert Type:Opportunity|Rows:1
14:16:31.600 CODE_UNIT_STARTED  [EX] 01q70000000TiLP DFB.OpportunityAfterInsertTest on Opportunity trigger event AfterInsert for [0067000000c7d3j]
14:16:31.601 SOQL_EXECUTE_BEGIN [17] Aggregations:0 SELECT Id FROM Opportunity WHERE Id IN :tmpVar1
14:16:31.604 SOQL_EXECUTE_END   [17] Rows:1
14:16:31.605 DML_BEGIN    [21] Op:Update Type:Opportunity|Rows:1
14:16:31.712 CODE_UNIT_STARTED  [EX] 01q70000000TiLK DFB.MyProduct_Opportunity on Opportunity trigger event AfterUpdate for [0067000000c7d3j]
14:16:31.714 USER_DEBUG    [15] DEBUG  OppTrg_MyProductIntegration.handleAfter processing OpportunityId: 0067000000c7d3jAAA triggerIsInsert:false triggerIsUpdate:true
14:16:31.714 USER_DEBUG    [28] INFO  OppTrg_MyProductIntegration doStuff:false
14:16:31.714 CODE_UNIT_FINISHED   DFB.MyProduct_Opportunity on Opportunity trigger event AfterUpdate for [0067000000c7d3j]
14:16:31.714 DML_END       [21]
14:16:31.717 USER_DEBUG    [11] DEBUG  OppTrg_MyProductIntegration.handleAfter skipping OpportunityId: 0067000000c7d3jAAA triggerIsInsert:true triggerIsUpdate:false
14:16:31.717 DML_END     [26]
14:16:31.717 SOQL_EXECUTE_BEGIN [28]   Aggregations:0 SELECT Id FROM Task WHERE WhatId = :tmpVar1
14:16:31.720 SOQL_EXECUTE_END   [28]   Rows:0
14:16:31.721 EXCEPTION_THROWN   [29]   System.AssertException: Assertion Failed: Expected: 1, Actual: 0

Going carefully through the log, there is another rogue after insert trigger in there that performs an update on the record that was just inserted. This subsequent update causes our trigger of interest to execute first in an update context, before returning to execute for the original insert. The Set of processed Ids is tripping us up here. When the trigger first fires for the update, the Id isn't in the Set, so the fields are checked for a changed value. Because the update was to another field, the check doesn't pass and the additional processing is skipped.

trigger OpportunityAfterInsertTest on Opportunity (after insert) {
     
    List<Id> toUpdate = new List<Id>();
    for(Opportunity opp : trigger.new) {
        if(opp.Description == 'Hello') {
            toUpdate.add(opp.Id);
        }
    }
    
    if(toUpdate.size() > 0) {
        List<Opportunity> opps = [Select Id from Opportunity where Id in :toUpdate];
        for(Opportunity opp : opps) {
            opp.Description = 'World';    
        }  
        update opps; 
    }
}
@IsTest
public class OppTrg_MyProductIntegration_Test {

    @IsTest
    public static void StuffExpectedToHappen(){
        Opportunity opp = new Opportunity();
        opp.Name = 'Test';
        opp.Description = 'FooBar';
        opp.StageName = 'Closed Won';
        opp.CloseDate = DateTime.now().date();
        opp.Amount = 5000;
        insert opp;
        
        List<Task> tasksInsertedForNewOpp = [Select Id from Task where WhatId = :opp.Id];
        System.assertEquals(1, tasksInsertedForNewOpp.size());
    }
    
    @IsTest
    public static void WhyDidntStuffHappen(){
        Opportunity opp = new Opportunity();
        opp.Name = 'Test';
        opp.Description = 'Hello';
        opp.StageName = 'Closed Won';
        opp.CloseDate = DateTime.now().date();
        opp.Amount = 5000;
        insert opp;
        
        List<Task> tasksInsertedForNewOpp = [Select Id from Task where WhatId = :opp.Id];
        System.assertEquals(1, tasksInsertedForNewOpp.size());
    }
    
}

Which brings us back to the moral of the story: the only way to get predictable trigger ordering is to have one trigger per object type that hands off to other classes in a defined order. This is sometimes easier said than done; multiple managed packages can each introduce their own set of triggers.

In terms of fixing the example triggers above, we can assume that the action should always occur in a trigger context. Basically, bypass the Set check unless it is an update operation.
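A minimal, untested sketch of that change inside the loop in OppTrg_MyProductIntegration.handleAfter could look like this (the rest of the loop body stays as before):

```apex
for(Opportunity opp : triggerNew) {
    // Only consult the recursion guard for updates; an insert should always be processed,
    // even if another trigger's update has already put this Id in the Set.
    if(triggerIsUpdate && visitedOpportunityIds.contains(opp.Id)) {
        continue;
    }
    if(triggerIsUpdate) {
        visitedOpportunityIds.add(opp.Id);
    }
    // ... existing processing using opp.Id ...
}
```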

See also:

Friday, December 18, 2015

Trailhead - Build a Battle Station App

How does one go about building a moon-sized death star? That's a lot of people and supplies to keep track of for a project with a budget of over 1,000,000,000,000 galactic credits. Putting together a system to help manage the build is going to be a major project in and of itself. Or is it?

Build a Battle Station App

Lucky for us the Salesforce Trailhead team has the timely launch of the Build a Battle Station App Trailhead project.

Highlights you'll learn from completing this project:

  • How to keep the Exhaust Port Inspectors from dropping the ball again and letting magic space wizards drop bombs into the core.
  • How to track all the required supplies. Tractor beams, ultra fast hydraulic units for the doors. Don't forget the light bulbs and toilet paper. Handling the guard rail shortage will be left as an exercise to the readers.
  • Get a quick summary of how many people are actually working on the project.
  • Use Lightning Process Builder and Chatter to announce when you've got a fully armed and operational death star!
  • How to help your fat fingered boss use the mobile app.
  • Test the mobile app in Chrome using the Salesforce1 Simulator app.

As an added bonus, there is also a competition running for those who finish the badge by 2015-12-31 11:59pm PST.

Just noticed Model Complex Products with Hierarchical Assets in the Spring `16 release notes. That could be useful in this project.


Apex Integration Services

There is also the new Apex Integration Services module. If you can get past the SOAP bashing there is some good information in there.

How's this for clicks not code? (Or at least minimal code to meet the ceremony required by the challenge)

  1. Take the URL for the SOAP WSDL from the Apex SOAP Callouts Challenge. No need to save it to disk first.
  2. Put it through the FuseIT SFDC Explorer custom WSDL2Apex implementation. Call the output ParkService and check the option to generate test cases.
  3. Rename the generated mock class "ParkServiceMockImpl"
  4. Rename the generated test class "ParkLocatorTest"
  5. Make the ParkLocator class with the static country method to call the generated class.
  6. In ParkLocatorTest duplicate one of the assertions to also call the ParkLocator.country method to give the required coverage.
  7. Add the remote site setting for the callout URL.
  8. Run the generated ParkLocatorTest test case.
  9. Smugly pass the challenge test

OK, maybe not so smugly. There are more steps above than I care for, but most of those are to satisfy the ceremony of the challenge with regards to naming etc... There is also something odd about this WSDL that is throwing off the generated apex_schema_type_info members and requires the elementFormDefault="qualified" boolean (second to last one) to be manually changed from 'true' to 'false'.

Still, show me the same level of initial setup from a REST API in Apex and I'd be impressed.

Friday, November 27, 2015

Rejecting the Salesforce CKEditor and substituting your own.

My biggest pain point with the Salesforce developer forums is the CKEditor. In particular, how it interferes with the browser's native spell checker. My spelling skills aren't always the best, but they are good enough when propped up by the red squiggles. The problem is compounded by not being able to edit existing posts. Your spelling mistakes can haunt you forever unless you are willing to delete a post.

There is also the general problem with WYSIWYG editors in that the underlying formatting can get unwieldy. Copying and pasting content from multiple sources is a quagmire of mismatched HTML. It's what you can't see that is important to the formatting.

Also, good luck to you if you've written an answer that is a contender for a Nobel Prize in Literature only to lose it all due to an expired session. Maybe you'll get lucky and can browse back to recover it. More likely it is lost forever and you'll just knock out a one or two line reply instead. Now devoid of formatting, images and hyperlinks. Because why bother doing it all again?

Status Quo

Right now I've got a few techniques for dealing with the CKEditor used in the success forum.

  • Write everything somewhere else where a spellchecker is available and then copy and paste it over at the last minute, adding in any links, images, and formatting just before submission.
  • Use the browser's developer tools to manipulate the HTML source that the CKEditor is working from.
  • Before submitting an answer, copy the content out just in case there is a session issue and the form doesn't submit correctly. You'll likely lose formatting, links, etc..., but you'll keep your prose.

The Hackening

I did a little spelunking into the page source and network traffic. You can see that the CKEditor is currently configured with a request to https://developer.salesforce.com/forums/ckeditor/ckeditor-4.x/rel/sfdc-config.js?t=4.4.6.4. It's this config that toggles various functions in the editor.

Of particular interest here is the a.toolbar_ServiceCommunity array. This configures the toolbar buttons that are available.

    a.toolbar_ServiceCommunity = [
        ["Undo", "Redo"],
        ["Bold", "Italic", "Underline",
            "Strike"
        ],
        ["Link", "sfdcImage", "sfdcCodeBlock"],
        ["JustifyLeft", "JustifyCenter", "JustifyRight"],
        ["BulletedList", "NumberedList", "Indent", "Outdent"]
    ];

Now that we know how the CKEditor is being configured we can start hacking at it. Let's see if we can get the native source view going.

Injecting your own configuration

This option is perhaps a bit heavy handed. The basic idea is to use a custom Chrome extension to intercept the HTTP requests to https://developer.salesforce.com/forums/ckeditor/ckeditor-4.x/rel/sfdc-config.js?t=4.4.6.4 and redirect the browser to config content that I control.

Parts of the extension

manifest.json

Configures capturing web requests from https://developer.salesforce.com

background.js

Listens for requests to https://developer.salesforce.com/forums/ckeditor/ckeditor-4.x/rel/sfdc-config.js?t=4.4.6.4 and redirects them to the static resource config.

See chrome.webRequest

config.js

The replacement for the standard Salesforce config.

The main problem with redirecting the browser to pull alternative content is that it won't easily accept something from another domain.

This can be overcome by hosting the replacement configuration as a static resource in an org of your choosing. I used my developer edition org, as I often have an active session there.

Load the config.js Javascript into an Org of your choice as a static resource. I'd encourage you to modify your own version rather than taking the version I've got here. Who knows what sort of evil Javascript could be hidden away in there. Better to modify something you have full control over.

The Result

With the extension loaded into Chrome as an unpacked extension via chrome://extensions, it will switch out the CKEditor config for the one I control.

I can now jump natively into the HTML source that CKEditor is working from. The built-in Chrome spell checker comes back to life, and I can edit things like blockquotes into the source.

The Future

I mostly post to the various StackExchange sites where they have a well tuned markdown editor. There is a Markdown plugin for CKEditor; it would be great to integrate this.

The Chrome extension is probably a bit heavy handed; it should be possible to run Javascript directly in the page once the CKEditor is loaded to reconfigure it on the fly, maybe with something as simple as a bookmarklet. See Change CKEditor toolbar dynamically.

There are indications that you can re-enable the native browser spell checker using config.disableNativeSpellChecker = false;. I couldn't get this to work.

See Also:


Wednesday, October 21, 2015

Salesforce Debug logs with the Winter '16 Developer Console

I'm in the process of updating the FuseIT SFDC Explorer to use the Winter '16 (v35.0) APIs. One of the first things I've encountered is a number of changes to how debug logging works. These are somewhat documented in the Release Notes on Debugging.

In terms of actually integrating with Salesforce at the API level, TraceFlag no longer has a scopeId field to optionally represent the user being debugged. Instead it has logType and DebugLevelId fields. It's the latter that is particularly interesting.

Say you have two developers working in an Org. Ideally they would both be developing in separate orgs and then migrating changes into a common org, so let's say they are testing the integration of their changes.

  • Developer A starts the Developer Console.
  • A DebugLevel is created for Dev A with the DeveloperName 'SFDC_DevConsole'. Let's call it 7dl70000000000A, or DLA for short.
  • A TraceFlag is created for Dev A that references DLA. The TracedEntityId is Developer A's User Id.
  • Developer B starts the Developer Console in the same org in their own session
  • You can't have two DebugLevel records with the same DeveloperName, so the Developer Console uses the existing DLA record. (See where I'm going with this?)
  • A TraceFlag is created for Dev B that references DLA. The TracedEntityId is Developer B's User Id.

So, what have we got? Both developers will see their logs in the Developer Console. However, they are sharing the same DebugLevel definition by default!

If either developer makes changes to the common SFDC_DevConsole debug levels it will affect the other developer's logging for any future logs. If one developer deletes the common DebugLevel it will cascade delete any associated TraceFlags. Hours of fun!

What can you do about this? You can add new DebugLevel records via the developer console. The one you have selected when you press Done will be sent back to the TraceFlag.

Thursday, October 8, 2015

Salesforce code quality test cases for Apex via static code analysis

Let's assume for a moment that your code coverage isn't a perfect 100% and there are some areas of your Apex that aren't exercised as part of the automated deployment test cases. If your suspension of disbelief goes that far, let's also assume that sometimes you add temporary code to aid in debugging those murky places. Something like:

System.assert(false, 'TEMP this assertion was only for debugging and will stop the execution so you can check the debug log etc...');

The general idea is that you can use this assertion to halt execution during manual testing. At which point it would be easy to inspect the debug log state or write out some useful state information in the assertion message. It's certainly not the most advanced debugging technique, but it is effective.

The main problem with the approach is what happens if you forget to take out that line. And then package the code base for deployment. Don't do that. It's not a good look.

Remember, the line in question may or may not be covered by the existing test cases. If it isn't it will sail right into the production managed package and then spring out and surprise a user at the most inopportune moment.

What if you could stop the package from proceeding based on the content of the Apex classes?

We need the poor man's static code analysis tool! And we need it to run as part of the packaging process.

Luckily, we already have Apex test cases that run as part of the packaging process. And we have the ApexClass sObject exposed to Apex, which includes the Body of the classes within the current namespace. We just need some Apex code that will scan through the Body of the Apex classes and assert that they don't contain the problem string. Then add said test class to the package components. Something like:

@isTest
private class SampleCodeQualityTests {

    @isTest static void ensureThereAreNoTempAssertions() {
        string searchFor = '\'TEMP';

        // Exclude this class from the check. May also need to exclude based on namespace
        for(ApexClass ac : [Select Id, Name, Body From ApexClass where Name != 'SampleCodeQualityTests']) {
            integer index = ac.Body.indexOf(searchFor);
            if(index == -1) { continue; }

            integer lineNumber = lineNumberForIndex(ac.Body, index);
            string problemLine = extractLineByIndex(ac.Body, index);
            System.assertEquals(-1, index, 'Apex class ' + ac.Name + '(' + ac.Id + ') contains the text ' + searchFor + ' at index ' + index + ' lineNumber:' + lineNumber +
                '\n' + problemLine);

            //System.assert(!ac.Body.contains(searchFor), 'Apex class ' + ac.Name + '(' + ac.Id + ') contains the text ' + searchFor);
        }
    }

    private static integer lineNumberForIndex(string body, integer targetIndex) {
        integer lineNumber = body.substring(0, targetIndex).countMatches('\n') + 1;
        return lineNumber;
    }

    private static string extractLineByIndex(string body, integer targetIndex) {
        integer startOfLine = body.lastIndexOf('\n', targetIndex);
        integer endOfLine = body.indexOf('\n', targetIndex);
        // Handle a match on the last line of the class body
        if(endOfLine == -1) { endOfLine = body.length(); }

        string lineOfInterest = body.substring(startOfLine + 1, endOfLine);
        return lineOfInterest;
    }

}

Think of this more as a proof of concept. There are some obvious problems with it. For example, it isn't really parsing the Apex body, just doing a dumb string search for a string that starts with TEMP. No consideration is made for the context; the match might be in a comment. There are likely other issues around new line detection as well.
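One incremental improvement would be to strip comments from the body before searching. Here is a rough, untested sketch using a naive regex (it will also mangle string literals that happen to contain comment tokens, and the reported line numbers would then refer to the stripped body rather than the original):

```apex
// Remove /* ... */ block comments ((?s) lets . match newlines) and // line comments
// before running the dumb string search, so commented-out debug code isn't flagged.
String strippedBody = ac.Body
    .replaceAll('(?s)/\\*.*?\\*/', '')
    .replaceAll('//[^\\n]*', '');

integer index = strippedBody.indexOf(searchFor);
```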

Still, it demonstrates the idea. Using Apex test cases to do static code analysis as part of the packaging and deployment steps.

Now if we could get to the SymbolTable in a testing context (i.e. no callout) there could be all sorts of interesting checks¹:

  • Does each test method make at least one assertion?
  • Does each assertion include a message to the user?
  • Excessive lines of code in one Apex Class
  • Using System.debug statements - too many of these can be detrimental to performance.
  • Empty exception handling catch blocks
  • Unused local variables
  • Methods that aren't referenced by other code

Maybe having meaningless whitespace at the end of a code line, or using somestring != null && somestring != '' rather than String.isNotEmpty(somestring), shouldn't be grounds to block packaging. You'll need to decide where the line gets drawn.

The other part of this type of testing is that it could form the basis of an automated org health check. Is the coverage percentage too low on a large class? Test fails. Trigger not bulkified? Test fails. An Apex class with 4000 lines of code that is padding for test coverage? Test fails!

You could easily drop them into an existing org to get a feel for the state of things before embarking on any modifications.

See also:


1. May or may not actually be possible with the SymbolTable alone.

Monday, September 28, 2015

Microsoft Ignite 2015 Round Up / Summary

I've summarised some of the most interesting/important parts of my TechEd 2015 NZ notes in this post.

The top five sessions for the conference based on session feedback:

  1. The Secret to Telling Awesome Stories from Microsoft's Chief Storyteller
  2. The Microsoft DevOps Vision
  3. Aligning Architecture to Organisation
  4. Public Speaking Skills for Quiet People
  5. Torment Your Colleagues with the 'Roslyn' .NET Compiler Platform

My top sessions, in no particular order

Great Artists Steal: Build better software by applying patterns and ideas from different languages

Orion Edwards - @borland

github.com/borlande/Ignite2015

  • Ruby
  • C# Scoping lambda for locks and transactions, building up and then committing.
  • Consider the parameter naming. Try and form a phrase. C# named parameters.
  • Swift
  • if let - unwraps null references in if/else.
  • C# - Create Optional generic type. Implicitly converts from null. Unwrap to take lambda to handle existing and null cases. Not great for input parameters.
  • Also, HtmlEncoded, Atomic, Tainted, Either
  • C++ - Zero Cost Abstraction
  • Can be used with struct in C# as well.
  • System.DateTime is an example. Useful for multiple data types in combination. When unit of measure is important.
  • If you can get a Haskell program to compile, it is probably free of bugs.
  • Grand Central Dispatch - from Apple
  • Queues are lightweight threads. You "dispatch" lambda functions to them.
  • Like Dispatcher in C#
  • Serial Queue. Jobs won't run in parallel.
  • github.com/borland/SerialQueue
  • Go
  • goroutines and channels
  • Channel with send and receive.

Torment Your Colleagues with "Roslyn" .NET Compiler Platform

Ivan Towlson

"The Duke Nukem Forever of compilers"
  • C# and Visual Basic compilers
  • Syntax trees for code
  • Semantic model
  • Visual Studio editor integration.
  • Roslyn Overview on dotnet github.
  • Syntax Visualizer - browse the entire tree.

The Secret to Telling Awesome Stories from Microsoft's Chief Storyteller

The Internet of Hackable Things

Kirk Jackson @kirkj - Felix Shi @comradpara

Project Premonition: Mosquito seeking drones and Microsoft Azure

So You Want to Present at Ignite

Chris Jackson

  • 60 seconds to get their attention. Then you've got to land the point.
  • How to be effective as an influencer.
  • Preparing your submission - earning the right to be heard.
  • Build your reputation. Humans make the decision on who makes the cut. Who is the person, and are they credible?
  • Be authentic.
  • Volunteer to speak. Start a blog if writing is your thing.
  • They're less likely to pick a total unknown.
  • Understand the goal of the track. What is the story they are trying to tell.
  • Sell yourself to the track lead.
  • Preparing for your talk.
  • Presentation design and presentation aids.
  • What do the track owners look for.
  • What is on the screen should only be there to pull people back on track.
  • Keep the number of words on the slide to a minimum.
  • Max 5 words on 5 lines.
  • Make it real with demos.
  • You already know the material. You don't remember what it is like to not know.
  • The first 60 seconds: if it sounds awkward and unrehearsed, that's because it is unrehearsed.
  • Practice the first 60 seconds the most.
  • What is shipping and what are the trends.
  • Inspire, don't teach. Give them an action to take at the end of 60 minutes. One to three things tops.
  • Tell stories. Lists of facts do little to inspire us.
  • How do I make this real.
  • Pillars and organization. How do you support the primary objective of the talk. Is it driving the outcome you want.
  • Stick your landing. Try to avoid ending on Q&A
  • Get speaker training. Someone who will tell you stuff you need to hear.

The Cinematic Cloud - Pixar Studios

Developing Cross Platform Mobile Apps with XAML and MVVM

See Also: