After making some changes to a trigger that can ultimately fire after changes to an Opportunity, I encountered an existing test case that had started failing. The test case bulk inserts Opportunity and OpportunityLineItem records and ultimately goes on to bulk update OpportunityLineItem records that have revenue schedules.
What's odd is:
- The test case specifically declares `@IsTest(SeeAllData=false)`
- I checked, and the test doesn't use `DateTime.now()` or similar
- Same for `Math.random()`
Due to the nature of the test case, I thought the problem might be around bulkification. As a starting point, I dropped the counter at the start of the test method that specifies how many Opportunity records to test against.
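Roughly, the shape of the test looked like this. This is a minimal sketch only: the class name, stage value, and structure are my assumptions, and the real test also creates OpportunityLineItem records with revenue schedules before updating them.

```apex
@IsTest(SeeAllData=false)
private class FooOpportunityTriggerTest {

    @IsTest
    static void bulkUpdateLineItems() {
        // The counter that got dropped from 50 down to 2 while isolating the failure.
        Integer opportunityCount = 2;

        List<Opportunity> opps = new List<Opportunity>();
        for (Integer i = 0; i < opportunityCount; i++) {
            opps.add(new Opportunity(
                Name = 'Test Opp ' + i,
                StageName = 'Prospecting',
                CloseDate = Date.today().addDays(30)
            ));
        }

        Test.startTest();
        insert opps; // fires the after-insert trigger under test
        Test.stopTest();
    }
}
```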
With only 2 opportunities being inserted, the test case passed on every run.
Scaling it up to 50 opportunities resulted in the test case failing on most runs.
| Opportunity Count | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 |
|---|---|---|---|---|---|
| 2 | Pass | Pass | Pass | Pass | Pass |
| 28 | Pass | Fail | Pass | Fail | Pass |
| 30 | Pass | Fail | Pass | Pass | Fail |
| 31 | Fail | Pass | Fail | Pass | Fail |
| 50 | Fail | Pass | Fail | Fail | Pass |
| 62 | Fail | Fail | Fail | Fail | Fail |
| 150 | Fail | Fail | Fail | Fail | Fail |
This was odd, as I'd expect a test case that wasn't using random data to produce consistent results. Odder still, varying the number of Opportunities altered the pass/fail pattern. Generally, the more opportunities there were, the greater the odds of the test failing.
> My mind is about to explode due to a #Salesforce test method that is consistently failing for every _second_ run. Pass>Fail>Pass>Fail>...
>
> — Daniel Ballinger (@FishOfPrey) May 18, 2015

> I call it my Schrödinger's #Salesforce Test.
> Will it pass or fail? ¯\_(ツ)_/¯
> Run it and see. pic.twitter.com/3af6SPLDia
>
> — Daniel Ballinger (@FishOfPrey) May 18, 2015
The clue to the answer was in the test failure message, which in hindsight I should have focused on more than the bulkification.
```
System.DmlException: Insert failed. First exception on row 0; first error: CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY, MyNamespace.FooOpportunityTrigger: execution of AfterInsert caused by: System.DmlException: Upsert failed. First exception on row 25; first error: DUPLICATE_VALUE, duplicate value found: MyNamespace__OpportunityIdDynamicPropertyIdUnique__c duplicates value on record with id: a0E4000001GRmaU: []
Class.MyNamespace.FooOpportunityHelper.mapUdfFields: line 636, column 1
Class.MyNamespace.FooOpportunityHelper.handleAfter: line 515, column 1
Trigger.MyNamespace.FooOpportunityTrigger: line 15, column 1: []
```
The `OpportunityIdDynamicPropertyIdUnique__c` custom field is used on a custom object to make a composite key out of the Opportunity Id and another Id. Its primary purpose is to enforce uniqueness.
The field is populated using a workflow rule formula as follows:
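The original formula concatenated the two raw Ids, along these lines. This is a hedged reconstruction; the actual field API names in the managed package are my invention:

```
MyNamespace__Opportunity__c & MyNamespace__DynamicProperty__c
```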
Knowing that the issue was with the custom field that is assigned the Opportunity Id, the failure pattern pointed to case sensitivity: consistent failures at 62 records, failures roughly 50% of the time at around 31 (see What are Salesforce ID's composed of?). The characters of a 15-character Salesforce Id are case-sensitive and increment through a base-62 sequence (0-9 plus upper- and lower-case letters), so inserting 62 consecutive Opportunities cycles the last character through every value, guaranteeing a pair of Ids that differ only in the case of one letter. Around 31 records gives roughly even odds of hitting such a pair. Because the unique custom field is case-insensitive, two values that differ only by letter case count as duplicates.
The record Ids weren't resetting with each test run. Even though test data is rolled back, the Id sequence kept advancing across runs, so each run covered a different range of Ids. For example, if the Opportunity Ids 0064000000fNiOM and 0064000000fNiOm both appeared in the same test run, it would fail immediately after the Workflow action completed.
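You can see why that particular pair collides with a quick bit of anonymous Apex, using the two Ids from the example above. Conveniently, Apex's `==` on Strings is itself case-insensitive, mirroring the unique field's behavior:

```apex
// Two 15-character Ids differing only in the case of the final character.
String idA = '0064000000fNiOM';
String idB = '0064000000fNiOm';

// equals() is case-sensitive: as Ids, these are two distinct records.
System.assert(!idA.equals(idB));

// Apex's == on Strings is case-insensitive, like the unique field:
// both resolve to the same value, hence DUPLICATE_VALUE.
System.assert(idA == idB);
```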
The formula needed to use the case-insensitive version of the Opportunity Id, or the custom field's uniqueness needed to be case-sensitive. CASESAFEID() seemed like the easiest change, as case sensitivity couldn't be changed on an existing managed package field. It did, however, create a migration problem for existing data.
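A sketch of the corrected formula, using the same hypothetical field names as above:

```
CASESAFEID(MyNamespace__Opportunity__c) & CASESAFEID(MyNamespace__DynamicProperty__c)
```

CASESAFEID() returns the 18-character form of an Id, where a 3-character suffix encodes the casing of the first 15 characters, so two Ids that differ only by letter case no longer produce the same case-insensitive value. Existing records still hold the old 15-character values, which is where the data migration comes in.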