Thursday, August 6, 2015

Incremental learning with Salesforce Trailhead - Event Monitoring

For a long time my go-to place for small, incremental Salesforce learning exercises has been the Salesforce StackExchange site. It was my Trailhead before Trailhead existed.

If I had a spare 5 minutes due to a long compile, or similar, I'd typically look for questions that I knew a bit about and then research the gaps in my knowledge to hopefully provide the answer. This was great for a number of reasons:

  • I was helping a fellow Salesforce developer/admin.
  • It was someone's real world Salesforce problem. Chances are I'd run into the same or a similar problem at some point.
  • I had a focused reason for going into the documentation. Reading the docs front to back is a bit like reading the dictionary. Yes, you could probably learn lots, but I was more likely to learn something if I had a specific goal.
  • Typically it didn't require a great deal of time to answer a question if I already knew part of the problem. I could do some quick research through the documentation and/or put together a basic prototype.
  • Maybe I got the answer wrong. That's OK too, as I could either revisit it or learn more from another user's answer.
  • Sometimes I'd stray into areas of Salesforce where I had very little experience. It was a great excuse to at least get familiar with a new area or feature.

From the last point, it was often the case that the more I learnt about Salesforce the more I knew I didn't know. This is still very much the case.

Trailhead

Now there is Trailhead, which provides a focused guide through specific areas of Salesforce in manageable chunks. If you have 15 minutes to spare you can work through a small exercise to explore a new area. Modules are usually evaluated either with some applicable questions or, better yet, by Salesforce using the various APIs to inspect the state of the org and execute functionality to check that you have completed the given tasks.

One area I'd been meaning to investigate but hadn't got round to was Event Monitoring, which was introduced in the Winter '15 release. I'd been skimming through Adam Torman's blog posts on it for some time, meaning to crack open the APIs and have a poke around at the available data.

When the Trailhead Event Monitoring Module appeared it was the perfect excuse to delve into the workings. As a module I knew all the parts I needed to get started with the API would be contained in one place, so I could work through each of the challenges as I got a few minutes to spare.

This module differs a bit from others, as it requires interacting with the APIs directly. The first part of the module does a good job of easing admins into accessing the REST API if it was previously a foreign concept. In many ways it provides a good reason to poke around in the SOAP and REST APIs. Using Workbench to interact with the REST API saves you from having to handle session authorization headers and format the responses.

After completing the module it turns out that accessing the Event Monitoring data is really easy with the existing tools at hand. For example, I can use a SOQL query against EventLogFile to pull the available logs. Once I have the required Id, I can then pull the LogFile field to get the CSV data.
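Outside of Workbench, the same two steps can be sketched in Python against the REST API. This is a minimal sketch, not code from the module: the instance URL, session Id, and record Id are placeholders, and the API version is an assumption.

```python
import urllib.parse
import urllib.request

# Assumed API version for illustration; use whatever your org supports.
API_VERSION = "v33.0"

def soql_query_url(instance_url, soql):
    """Build the REST API query URL for a SOQL statement."""
    return (instance_url + "/services/data/" + API_VERSION
            + "/query/?q=" + urllib.parse.quote(soql))

def log_file_url(instance_url, log_id):
    """The LogFile field of an EventLogFile record is exposed as a
    sub-resource that returns the raw CSV content."""
    return (instance_url + "/services/data/" + API_VERSION
            + "/sobjects/EventLogFile/" + log_id + "/LogFile")

def download(url, session_id):
    """GET the URL, passing the session Id as a bearer token."""
    request = urllib.request.Request(
        url, headers={"Authorization": "Bearer " + session_id})
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")

# Example usage (needs a live session; placeholder values shown):
# soql = ("SELECT Id, EventType, LogDate FROM EventLogFile "
#         "WHERE EventType = 'API' ORDER BY LogDate DESC")
# logs = download(soql_query_url("https://na1.salesforce.com", soql), session_id)
# csv_data = download(log_file_url("https://na1.salesforce.com", log_id), session_id)
```

This is essentially what Workbench is doing for you behind its REST Explorer.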


One of my earlier experiences with Trailhead was handling problems when using an older org that was full of testing data. The anonymous Apex that Salesforce was using to test the code was trying to insert a new record, and was encountering a storage limit exceeded error. See the details in Troubleshooting Salesforce Trailhead's execution of Developer Edition org code.

Dreamforce

With Dreamforce 2015 just around the corner it would also be timely to do the #DF15 module. It helps with pressing questions such as:

  • Should I attend the keynotes?
  • Where do I get more information to stay updated?
  • What should I pack in my bag?

I put some notes together on my #DF14 experience for things like on-site food.

Friday, July 3, 2015

FuseIT SFDC Explorer 2.9 Features

The new v2.9 release of the FuseIT SFDC Explorer has some new features.

Run anonymous apex in a testing context

For some time I've been promoting the idea of being able to run anonymous Apex in a testing context. This offers a few benefits:

  • The anonymous apex could then call existing apex test methods. They could then be run in isolation.
  • Helper methods defined in a test class could be accessed.
  • You would be 100% safe from accidentally updating real data in the org. No callouts could occur.

The first point has largely been addressed in The Holy Grail of Apex Testing in Salesforce.

To address the other points I've applied the same technique to run anonymous Apex in a testing context. This can be accessed with the "Use Testing Context" check button. With it active, you can do things like access Test.getStandardPricebookId().

Improved support for running selected test methods

Individual test methods can now be checked in the asynchronous test results viewer. Pressing the "Run Selected Tests" button will run those tests in isolation.

Miscellaneous

Fixed the "Open Log" button on the Apex Tests Queued tab. It will now correctly open the log or prompt to create the TraceFlag to capture it for the next run.

Several users reported problems with the 2.7 and 2.8 releases during the login process. It would fail with the message:

java.lang.NullPointerException Error Id: 1655897833-5655 (293693299)

This was a Salesforce GACK response from a call to the Metadata API describeMetadata() web method. I'm not sure what causes it for some orgs and not others, but I believe I've isolated it from the login process in our code.

Friday, June 26, 2015

A short history of Salesforce bugs and issues

Below is a history of the Salesforce bugs/issues that I've either directly raised with Salesforce or been involved in. While not comprehensive at the moment, it is mostly useful for me to have a single place that references them all.

  1. Salesforce Winter '16 breaking SOAP Partner API changes - Case 12544039
  2. Bulk API gets error "MISSING_ARGUMENT:fiedXXX__c not specified:--" when using "N/A#"
    A customer using the Scribe data loading tool encountered a MISSING_ARGUMENT error when trying to create OpportunityLineItem records where there is a custom field from our managed package. The custom field is an external id, but not required.
    It turned out to be a limitation in Scribe - Topic: "Upsert issue - issue with multiple External Ids?". It can't handle multiple External Id fields and just uses the first one encountered for the upsert, even if you aren't populating it with data.
  3. Events continue to be delivered to a Streaming API client using a session Id that has been explicitly terminated. The Streaming API continues to send messages for expired/deleted sessions - Case 11745520
  4. Spring '15 - Compile error occurs when deploying a test class using an @isTest(SeeAllData=true) method with a separate class using @TestSetup method
  5. "Generate WSDL" generates a WSDL that does not contain the definition of the compound types address and location if API version is 30.0 or above
  6. WSDL-Based Asynchronous Callouts using Continuations can't be unit tested.
    Known Issue: Running Test for WSDL-based asynchronous callouts fails with Internal Salesforce.com Error
  7. The Tooling API isn't enforcing the package version dependency in the class/trigger metadata
  8. Tooling API SymbolTable contains “private” visibility modifiers instead of “protected”
  9. The SOAP Tooling API is missing the SessionHeader in WSDL operation
  10. Querying ApexExecutionOverlayResult via the Tooling API gives an UNKNOWN_EXCEPTION
  11. Test.setCreatedDate doesn't work with a Note Id as the first parameter. Case 13752643
  12. Null Boolean assigned to a custom Checkbox field is converted to false, but won't insert. Case 14025277
  13. Apex Debug logging stops working after calling DateTime format with the current user's Timezone details. Case 14135299
  14. Private methods are polymorphic even when the virtual and override keywords are omitted. Case 14163020
  15. Tooling API in Winter '17 (v38.0) describeGlobal() call is returning sObject types in the "autogen__" namespace. Case 14909168

Friday, June 12, 2015

Using the Salesforce Metadata API to run a single test method in isolation

In my previous post The Holy Grail of Testing in Salesforce I covered my journey towards being able to execute a single Apex test method in isolation from the others defined in an Apex class. Here I'll cover how it actually works.

The Metadata API

In working with the Metadata API deploy() method as part of a continuous integration project, it dawned on me that I already had everything I needed to run a single test method in isolation.

As long as the test class and test method in question were public I could create a temporary wrapper test class that calls just the target test method.

For example, say the target was the Foo Apex class and its bar() method. Something like this, but with more actual testing and assertions going on:

@IsTest
public class Foo {
    public static testMethod void bar() {
        System.Debug(LoggingLevel.Debug,'Example Foo.bar() method');
        // Assertions and stuff
    }
}

Then to invoke this via a Metadata API deploy call all I need to do is create a wrapper class, the associated .cls-meta.xml, and package.xml to send in the zip payload. The wrapper class can be as simple as:

@IsTest
public class FooWrapper {
    public static testMethod void barWrapper() {
        Foo.bar();
    }
}

The most important part of the deploy call is the DeployOptions.

  • Setting checkOnly to true will prevent any changes being actually deployed to the org. The FooWrapper class only exists to execute the test.
  • runTests(string[]) allows the deployment to run a specific test case. This should be set to the name of the wrapper test class that is being deployed. E.g. "FooWrapper"
    Once v34.0 is widely available the testLevel should also be set to RunSpecifiedTests.

The DebuggingHeader can be used to control the debug log contents that comes back in the response.

After starting the deploy process it is just a matter of monitoring the AsyncResult and DeployResult until it completes and then extracting the RunTestsResult.
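The zip payload described above can be assembled in a few lines. Here is a minimal Python sketch of building it, using the standard Metadata API file layout (package.xml plus classes/*.cls and the matching -meta.xml) and the hypothetical FooWrapper class from above; the API version is an assumption.

```python
import base64
import io
import zipfile

PACKAGE_XML = """<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>FooWrapper</members>
        <name>ApexClass</name>
    </types>
    <version>33.0</version>
</Package>"""

CLASS_META_XML = """<?xml version="1.0" encoding="UTF-8"?>
<ApexClass xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>33.0</apiVersion>
    <status>Active</status>
</ApexClass>"""

WRAPPER_CLASS = """@IsTest
public class FooWrapper {
    public static testMethod void barWrapper() {
        Foo.bar();
    }
}"""

def build_deploy_zip():
    """Assemble the zip payload for a Metadata API deploy() call.

    Returns the zip as a base64 encoded string, which is how the
    SOAP deploy() operation expects to receive it."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w") as zf:
        zf.writestr("package.xml", PACKAGE_XML)
        zf.writestr("classes/FooWrapper.cls", WRAPPER_CLASS)
        zf.writestr("classes/FooWrapper.cls-meta.xml", CLASS_META_XML)
    return base64.b64encode(buffer.getvalue()).decode("ascii")
```

The resulting string is passed to deploy() along with the DeployOptions (checkOnly and runTests as above), and then it's a matter of polling for the result.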

The end result

Given an Apex testing class and the name of a test method it contains I can run a checkOnly deployment that will run just that test. As I mentioned before, the test in question needs to be public so that the wrapper class can access it. There are also likely to be issues if the @testSetup annotation is being used in the target class.

The future

It occurred to me that this would also be a great mechanism for running anonymous Apex in a testing context as well. The anonymous Apex could easily be wrapped in a temporary class annotated with @IsTest and a test method. It could optionally specify @IsTest(SeeAllData=true) as well.

Additional test methods could be created in the wrapper class to target other test methods.


Update: Tooling API runTestsAsynchronous

Nate Lipke kindly pointed out to me that the Summer '15 release includes a runTestsAsynchronous method that accepts the target class Ids and a testMethods parameter "containing the name of a test method in the test class."

The REST version takes this as a POST request. The SOAP API also has this method, but it isn't apparent how to specify the target test methods yet.

If you are using the Force.com IDE v34.0.0.20150511 Beta Version you can use the new API to run a specific test method.

From the Summer '15 - Tooling API Updates

POST JSON arrays of test methods to runTestsAsynchronous
You can now POST a JSON array of test methods to the runTestsAsynchronous endpoint. Formerly, in POST methods, the runTestsAsynchronous endpoint accepted only comma-separated lists of class IDs.

Format:
POST
/runTestsAsynchronous/
   Body:
   {"tests":<tests array>}

Example <tests array>:

[{
  "classId" : "<classId 1>",
  "testMethods" : ["testMethod1","testMethod2","testMethod3"]
 },{
  "classId" : "<classId 2>",
  "testMethods" : ["testMethod1","testMethod2"]
}]
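A minimal Python sketch of building that body might look like the following. The class Id shown is a made-up placeholder, and the endpoint path would be prefixed with your instance URL and Tooling API version.

```python
import json

def run_tests_body(class_tests):
    """Build the JSON body for a POST to the Tooling API
    /runTestsAsynchronous/ endpoint.

    class_tests maps an ApexClass Id to the list of test method
    names to run from that class."""
    tests = [{"classId": class_id, "testMethods": methods}
             for class_id, methods in class_tests.items()]
    return json.dumps({"tests": tests})

# The body would then be POSTed to
# <instance>/services/data/v34.0/tooling/runTestsAsynchronous/
# with the usual Authorization and Content-Type: application/json headers.
# Hypothetical class Id for illustration:
body = run_tests_body({"01pxx00000000AB": ["barWrapper"]})
```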

Thursday, June 11, 2015

FuseIT SFDC Explorer 2.7 Features

The new v2.7 release of the FuseIT SFDC Explorer has some new features.

Run a single test method in isolation

After running tests, either in synchronous or asynchronous mode, you can right click on an individual test method to get the context menu. This gives you the option to run just that test method in isolation. See The Holy Grail of Apex Testing in Salesforce.

Basic Apex Class editor

There is a new Apex Classes tab that has search and a basic editor. It's nowhere near as full featured as something like MavensMate in Sublime, but it might be useful in a pinch if the developer console is misbehaving (or you want to bypass certain permission restrictions).

Highlighting code coverage lines

For larger Apex classes, where it can be difficult to find the largest blocks of code that need coverage, I've added coverage markings to the scroll bar.

It's still a bit of a work in progress getting everything aligned correctly, but it should make it easier to scroll to the blocks of lines without coverage.

Alpha version of an Org Metadata diff tool

Here you establish a secondary Salesforce session. Then, using the diff button, the tool will pull down the metadata for both orgs. Once stored in the file system it is passed off to the diff tool of your choosing.

Miscellaneous

There are some general enhancements to Wsdl2Apex to deal with WSDLs that have multiple imports of the same namespace, support for two character pod identifiers, and generally improved error logging.

The Holy Grail of Apex Testing in Salesforce

The quest has been long and difficult, but I think I've finally found it. The Holy Grail of testing Apex in Salesforce. (Skip directly to the end result)

Many would consider the introduction of the @isTest annotation in the Spring '12 release to be one of the most important releases for Apex testing, or even the subsequent ability in Summer '14 to create Price Book Entries without needing @isTest(SeeAllData=true), freeing the majority of remaining cases that still needed to access live data. There has also been the shift from synchronous test execution to asynchronous test execution.

Those are indeed noteworthy, but I think I've found something more useful.

Origins

It seems like a long time ago, but the IdeaExchange tells me it was "about" 2 years ago (in the frustrating way it refuses to show specific dates on anything). It's like there is some psychology at play where not having the specific date doesn't make it feel as old. I digress.

I had the idea that if anonymous apex could be run in a testing context then it could be used to run individual test methods from the suite of tests that an Apex Class contains. Feel free to take a minute and vote for it, I'll wait - Run anonymous apex as if it were a test case.

The end game was to be able to run a single test method of interest without having to manipulate the Apex class to do it. Sure, you can comment out the @IsTest annotations or testMethod keywords on the tests that aren't of interest, but that is just an extra step towards yak shaving and away from what you want to be doing: running a single test method, checking the test result, and inspecting the resulting debug log.

There would also be other side benefits to having anonymous apex code able to run in a testing context. Isolation from live data, testing out code before actually using it, callout mocks...

As with most good ideas worth having, someone else had also had it, perhaps expressed more succinctly towards the actual goal: Allow single test method execution from an Apex Class. Vote for that one too.

Dreamforce 2014

No great quest would be complete without a great journey, so in 2014 I voyaged 11,000 km (6800 miles) from Nelson, New Zealand to Dreamforce in San Francisco to pitch my idea to any Salesforce employee I could find. I also found time for a side quest of presenting at said conference.

The Meet the Developers session was a great place to raise the idea in front of those who might make it happen. I also managed to talk directly to Josh Kaplan about it in the halls.

The Product Managers who say Ni No!!

I demand a 2,500 voting point threshold. Ni!!

As you may have already read on my idea, it was largely hacked to bits with the suggestion to cause a rollback by throwing an error or using an assertion. True, in most cases this would roll back the transaction. It does, however, miss the main goal of running an individual test method. There are other limitations as well; for instance, callouts can't be rolled back.

More to the point, as per the response in the meet the developers session and the response in the other idea, there is a backlog item where Salesforce will add native support for running single test methods. Long term this makes more sense for the platform rather than hacking it in by using anonymous Apex.

Still. "You make me sad"

Not to be deterred by such a small flesh wound, I kept at the end game goal, looking for viable alternatives.

Then an accumulation of the work I'd been doing with various parts of Salesforce gave me an idea. It all came together to make something quite useful.

The Holy Hand Grenade

At this stage I'm not sure I want to expose how it works internally. A determined developer could probably root it out once they realize it is possible.
You can see the details of how it works in Using the Salesforce Metadata API to run a single test method in isolation. It is also available in the FuseIT SFDC Explorer version 2.7 onward.

You can select an individual test method to run in isolation via a context menu on both the Apex Tests and Apex Tests Queued tabs. Then you can see the result of that test and review the resulting Apex log, all in isolation from any other test methods in the class. Be gone, "MAXIMUM DEBUG LOG SIZE REACHED"! No modifications to the Apex test class, or the org in general, are made as part of this process.

As with all such things, there are some requirements. The Apex class containing the test methods and the methods to be targeted both need to be public. Also, I haven't checked it with Apex classes using the @testSetup annotation, but I suspect it won't work. Static constructors should be OK.


UPDATE: Summer '15

So, it turns out that in the Summer '15 (v34.0) release of the Tooling REST API it is possible to POST JSON to /runTestsAsynchronous/ that specifies both the Apex class Ids of the test classes and the test method names. This comes back to the feature being integrated into the platform overall.

It isn't currently apparent if the SOAP version of the Tooling API has the same capability. It does have the runTestsAsynchronous() method that takes a string parameter, and the example in ApexTestQueueItem shows it being passed a comma-separated list of Apex class Ids.


Tuesday, May 19, 2015

The mystery of the nondeterministic Salesforce test case

After making some changes to a trigger that can ultimately be fired after changes to an Opportunity are made I encountered an existing test case that started failing. The particular test case bulk inserts Opportunity and OpportunityLineItem records and ultimately goes on to bulk update OpportunityLineItem records that have revenue schedules.

What's odd is:

  • The test case specifically declares @IsTest(SeeAllData=false)
  • I checked, and the test doesn't use instances of DateTime.now() or similar
  • Same for Math.random()
Due to the nature of the test case I thought the problem might be around bulkification. As a starting point I dropped the counter at the start of the test method that specifies how many Opportunity records to test against.

With only 2 opportunities being inserted the test case appeared to pass with every run.

Scaling it up to 50 opportunities resulted in the test case failing with almost every run.

Opportunity Count  Run 1  Run 2  Run 3  Run 4  Run 5
                2  Pass   Pass   Pass   Pass   Pass
               28  Pass   Fail   Pass   Fail   Pass
               30  Pass   Fail   Pass   Pass   Fail
               31  Fail   Pass   Fail   Pass   Fail
               50  Fail   Pass   Fail   Fail   Pass
               62  Fail   Fail   Fail   Fail   Fail
              150  Fail   Fail   Fail   Fail   Fail

This was odd, as I'd expect a test case that wasn't using random data to produce consistent results. What was more odd was that varying the number of Opportunities altered the pass/fail pattern. Generally, the more opportunities there were the greater the odds of the test failing.

The clue to the answer was in the test failure message, which in hindsight I should have focused on more than the bulkification.

System.DmlException: Insert failed. First exception on row 0; first error: CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY, MyNamespace.FooOpportunityTrigger: execution of AfterInsert

caused by: System.DmlException: Upsert failed. First exception on row 25; first error: DUPLICATE_VALUE, duplicate value found: MyNamespace__OpportunityIdDynamicPropertyIdUnique__c duplicates value on record with id: a0E4000001GRmaU: []

Class.MyNamespace.FooOpportunityHelper.mapUdfFields: line 636, column 1
Class.MyNamespace.FooOpportunityHelper.handleAfter: line 515, column 1
Trigger.MyNamespace.FooOpportunityTrigger: line 15, column 1: []

The OpportunityIdDynamicPropertyIdUnique__c custom field is used on a custom object to make a composite key out of the Opportunity Id and another Id. Its primary purpose is to enforce uniqueness.

The field is populated using a workflow rule formula.

Knowing that the issue was around the custom field that is assigned the Opportunity Id, and that there were failures consistently with 62 records and roughly half the time with around 31 records, indicated the problem was the case sensitivity of the Opportunity Id in the field (see What are Salesforce ID's composed of?).

The Ids of the custom object weren't resetting with each test run. Instead the values were accumulating across each test run. For example, if the Opportunity Ids 0064000000fNiOM and 0064000000fNiOm appeared in the same test run it would fail immediately after the Workflow action completed.

The formula needed to use the case-insensitive version of the Opportunity Id, or the custom field uniqueness needed to be case sensitive. CASESAFEID() seemed like the easiest change, as the case sensitivity couldn't be changed on an existing managed package field. It did, however, create a migration problem for existing data.
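For context on why CASESAFEID() fixes this: it appends a three-character checksum that encodes which positions in the 15-character Id are uppercase, producing the 18-character, case-insensitive-safe form. The Python sketch below is my reading of that documented scheme, not the formula itself, and shows that the two Ids from the failing test no longer collide once the suffix is added:

```python
def case_safe_suffix(id_15):
    """Compute the 3-character suffix CASESAFEID() appends to a
    15-character Salesforce Id. Each of the three 5-character chunks
    contributes one character encoding which positions are uppercase."""
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
    suffix = ""
    for chunk_start in range(0, 15, 5):
        bits = 0
        for i, ch in enumerate(id_15[chunk_start:chunk_start + 5]):
            if ch.isupper():
                bits |= 1 << i  # position i in the chunk sets bit i
        suffix += alphabet[bits]
    return suffix

# The two Ids from the failing test differ only by case, so they
# collide under a case-insensitive unique field...
a = "0064000000fNiOM"
b = "0064000000fNiOm"
# ...but their 18-character forms remain distinct even when compared
# case-insensitively, because the suffixes differ.
```

This is also why the record count mattered: as sequentially assigned Ids cycle through the base-62 character range, case-only collisions between 15-character Ids become increasingly likely.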