Thursday, March 5, 2015

.NET Integration with Salesforce Tooling API Spring 15 (v33.0)

The Spring 15 Salesforce release brings some changes to the Tooling API. From the release notes:

Feature: Metadata namespace
Description: In previous versions of the SOAP Tooling API, metadata type elements were defined in the same namespace as the sObjects of the API, tooling.soap.sforce.com. In order to uniquely name these types, the suffix “Metadata” was appended to the metadata types. In this version of the API, a new namespace is introduced, metadata.tooling.soap.sforce.com, and the suffix is no longer used. The old names will continue to work with clients against older API endpoints.

You can see the change by diffing the v32.0 WSDL with the newer v33.0 WSDL. The metadata complexType for ApexClass, previously named ApexClassMetadata, is now also called ApexClass and resides in the namespace urn:metadata.tooling.soap.sforce.com (with the alias mns).

Having two complexTypes with the same name but different namespaces in the same WSDL creates a bit of a mess with Web References (WSDL.exe) and Service References (SvcUtil.exe).

With a web reference to the WSDL, the ApexClass from the Metadata namespace gets the matching name and the ApexClass from the urn:tooling.soap.sforce.com namespace becomes ApexClass1. Not only does this break all the previous code that referenced ApexClass within the Tooling API, but it is damn ugly.

    [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "4.0.30319.34234")]
    [System.SerializableAttribute()]
    [System.Diagnostics.DebuggerStepThroughAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(Namespace="urn:metadata.tooling.soap.sforce.com")]
    public partial class ApexClass : Metadata {
    // ...
    }

    // ...
    /// 
    [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "4.0.30319.34234")]
    [System.SerializableAttribute()]
    [System.Diagnostics.DebuggerStepThroughAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(TypeName="ApexClass", Namespace="urn:tooling.soap.sforce.com")]
    public partial class ApexClass1 : sObject {
        // ...
        private ApexClass metadataField; 
        // ...
        /// 
        [System.Xml.Serialization.XmlElementAttribute(IsNullable=true)]
        public ApexClass Metadata {
            get {
                return this.metadataField;
            }
            set {
                this.metadataField = value;
            }
        }
        // ...
    }

To be fair, this is an issue with the .NET generation of the classes in different namespaces as much as anything. The same WSDL in the FuseIT version of Wsdl2Apex creates separate Apex classes for each namespace to keep the class definitions distinct.

One brute force way to fix this is to rename the classes after the codegen is complete. Find the ApexClass that inherits from Metadata and rename it to ApexClassMetadata. Then rename ApexClass1 to ApexClass. Rinse and repeat for any other types that derive from Metadata (WorkflowRule1, WorkflowFieldUpdate1, ValidationRule1, Profile1, ...).

It might be possible to use the /namespace: option on SvcUtil.exe to map each WSDL namespace into a separate C# namespace (see the sketch below). Wsdl.exe doesn't appear to have an equivalent option.
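
For example, something along these lines should map the two schema namespaces to distinct C# namespaces. This is an untested sketch; the WSDL file name and the target namespaces SforceTooling and SforceTooling.Metadata are made up.

    svcutil.exe tooling.wsdl ^
        /namespace:"urn:tooling.soap.sforce.com,SforceTooling" ^
        /namespace:"urn:metadata.tooling.soap.sforce.com,SforceTooling.Metadata"

Existing code would still need updating to use the new C# namespaces, but both generated types could then keep the ApexClass name.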

Thursday, February 19, 2015

Avoiding exception wrapping and exposing Exception.Data via log4net.

In .NET, exception handling often takes a layered approach: several classes and methods catch a specific exception and then throw a new exception with some additional details, wrapping the original as the inner exception. The additional details are often useful, but after passing through several layers of try-catch-throw-new-exception the root exception can become obscured somewhere down the stack trace and inner exceptions.

A common exception-wrapping method might look something like this:

public void Foo(string someInput)
{
    try 
    {
        doSomethingWithTheInput(someInput);
    }
    catch(Exception ex)
    {
         throw new FooIsBrokenException("Foo is bad for: " + someInput, ex);
    }
    finally 
    {
        // Common cleanup code to release any resources etc...
    }
}

If the exception is going to be exposed outside the current project then the wrapping exception will likely be useful. However, if it is only for internal use as a way of adding additional context to the base exception, you can save some effort by using Exception.Data.

public void Foo(string someInput)
{
    try 
    {
        doSomethingWithTheInput(someInput);
    }
    catch(Exception ex)
    {
        // Consider what to do if the key is already present.
        ex.Data.Add("someInput", someInput);
        throw;
    }
    finally 
    {
        // Common cleanup code to release any resources etc...
    }
}

log4net

Now that you've got the details in the Data dictionary, they need to be exposed. I'm working on an older project that started in 2007 and uses log4net. There is a suggestion in Logging Exception.Data using Log4Net to use a custom PatternLayoutConverter on each layout appender to render the data dictionary.

I've gone with a slightly different approach: by registering an IObjectRenderer with log4net you can append the additional Exception.Data values whenever an exception is rendered. The advantage here is that it works with all the appenders that render exceptions.

using System;
using System.Collections;
using System.IO;
using log4net.ObjectRenderer;

public class ExceptionObjectLogger : IObjectRenderer
{
    public void RenderObject(RendererMap rendererMap, object obj, TextWriter writer)
    {
        var ex = obj as Exception;

        if (ex == null)
        {
            // Shouldn't happen if only configured for the System.Exception type.
            rendererMap.DefaultRenderer.RenderObject(rendererMap, obj, writer);
        }
        else
        {
            while (ex != null)
            {
                // Render the current exception in the chain rather than the original obj each time.
                rendererMap.DefaultRenderer.RenderObject(rendererMap, ex, writer);
                RenderExceptionData(rendererMap, ex, writer);
                ex = ex.InnerException;
            }
        }
    }

    private void RenderExceptionData(RendererMap rendererMap, Exception ex, TextWriter writer)
    {
        foreach (DictionaryEntry entry in ex.Data)
        {
            if (entry.Key is string)
            {
                writer.Write(entry.Key);
            }
            else
            {
                IObjectRenderer keyRenderer = rendererMap.Get(entry.Key.GetType());
                keyRenderer.RenderObject(rendererMap, entry.Key, writer);
            }

            writer.Write(": ");

            if (entry.Value == null)
            {
                // Data values can be null; calling GetType() on them would throw.
                writer.Write("<null>");
            }
            else if (entry.Value is string)
            {
                writer.Write(entry.Value);
            }
            else
            {
                IObjectRenderer valueRenderer = rendererMap.Get(entry.Value.GetType());
                valueRenderer.RenderObject(rendererMap, entry.Value, writer);
            }
            writer.WriteLine();
        }
    }
}

To register the renderer, an addition to the log4net configuration is needed. Note that the renderer registration only applies to classes inheriting from Exception.
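
A minimal sketch of the configuration addition, assuming ExceptionObjectLogger is compiled into an assembly named MyAssembly (both the type and assembly names here are placeholders to adjust for your project):

    <log4net>
      ...
      <!-- MyAssembly is a placeholder for the assembly containing ExceptionObjectLogger. -->
      <renderer renderingClass="MyAssembly.ExceptionObjectLogger, MyAssembly"
                renderedClass="System.Exception" />
      ...
    </log4net>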




Tuesday, February 17, 2015

Troubleshooting Salesforce Trailhead's execution of Developer Edition org code

I thought I'd knock out a quick Trailhead challenge to see what was involved.

When checking the challenge Manipulating Records with DML I was surprised that my Apex class failed with the message:

Challenge not yet complete... here's what's wrong:
Executing the 'insertNewAccount' method failed. Either the method does not exist, is not static, or does not insert the proper account.

I was reasonably confident my Apex class was correct.

I'm not going to post my class here, as that seems to go against the concept of Trailhead if you could just Google for an answer. Instead, I found something interesting when trying to figure out what was wrong.

Trailhead is using executeAnonymous via the Tooling API to examine the Apex class in my Developer Edition org.

It appears to take a black box approach to confirming that the Apex class is behaving as expected. If the method can be invoked and satisfy the given assertions it passes the challenge.

If you open up the log in the Developer Console you can see the assertions being made, and, in my case, that the check was failing due to a STORAGE_LIMIT_EXCEEDED DmlException. So I could delete some records, free up some space, and get the challenge to pass.

Better yet, if you open the RAW Log you can extract the anonymous Apex that is being used to test the code. Note that the default developer console log viewer won't show the executed code or logging levels like the FuseIT SFDC Explorer does.

Execute Anonymous: Account a = AccountHandler.insertNewAccount('My Test Account'); System.assertNotEquals(a,null); if(a != null) {delete a;}

Once you know this, you can more accurately figure out why your Apex class might be failing. For example, imagine you had implemented a check in your org to ensure the Account Name is unique. If there was already an Account with the Name 'My Test Account' (maybe from a previous run that partially failed) then this would show up in the debug log. The debug log can provide some good clues for failures. The FuseIT SFDC Explorer will show the debug log directly with the anonymous Apex.

In theory there is nothing stopping you from implementing the minimal code to pass the assertions being made by the anonymous Apex. For example, with the above anonymous Apex, as long as insertNewAccount returns a non-null Account that can be deleted, the check will pass. Currently no check is made that the returned Account has the correct name or was indeed inserted as part of the test. Have I gained anything here? Probably not; the goal with Trailhead is to learn about an aspect of Salesforce rather than game the system to earn a badge on your profile. Chances are that if you can figure out a way to work around the assertions, you are also more than capable of completing the actual challenge.

I should note that a challenge may make multiple anonymous Apex calls. Salesforce may also change the assertions made for each challenge at any time.


Historically interesting footnote

The following was true when I was originally going to post this last week. After contacting Salesforce about the issue they have subsequently resolved it. At the time the anonymous apex appeared as:

Execute Anonymous: Account a = AccountHandler.insertNewAccount('My Test Account'); System.assertNotEquals(a,null); System.assertEquals('test','success');

An interesting line of Apex code here is the System.assertEquals('test','success'); assertion. This is similar to the technique mentioned in Adding Eval() support to Apex, where a resulting exception is used to return the result from anonymous Apex. If I just start my Apex class with that assertion then in theory the check would pass...

In fact, it does! Replacing the method body with just that assertion and returning null causes the challenge to pass. My guess would be that the Trailhead code is looking for the "System.AssertException: Assertion Failed: Expected: test, Actual: success" message in the response.

UPDATE: As mentioned above, Salesforce have subsequently removed this assertion. I learned through Salesforce that the final assertion was intended to rollback any side effects of the challenge check. This makes sense. It seems like it would be an ideal scenario for my idea to Run anonymous apex as if it were a test case.

Thursday, November 27, 2014

Adding Eval() support to Apex

In the Codefriar blog post EVAL() in Apex, Kevin presents a technique to allow programmatic evaluation of an Apex string, with the result extracted via an exception's message. Here I present an alternative approach using anonymous Apex and the debug log in place of the intentional exception.

Reasons you may or may not want to eval() Apex

Benefits

  • Where a rollback may be required, the separate context created by the callout means the invoking Apex doesn't need to roll back. You can still progress and make further callouts and subsequent DML operations.
  • JSON can be used in the response message to return structured data.
  • An odd way of increasing limits, e.g. each anonymous Apex context gets its own set of governor limits.

Disadvantages

  • The potential for security issues depending on how the anonymous Apex is composed.
  • The requirement for users to have sufficient permissions to call executeAnonymous. Typically this means having “Author Apex” or running with the restricted access as per Executing Anonymous Apex through the API and the “Author Apex” Permission.
  • The need to parse the DEBUG log message out of the response to get the result. Other code may also write ERROR level debug messages, which will interfere with parsing the response. This could be addressed by improving the parsing of the Apex log, i.e. extracting all the USER_DEBUG entries to a list and then reading the last one. Another alternative is to use delimiters in the debug message to make it easier to parse out.
  • Each eval() call is a callout back to the Salesforce web services. This creates limits on the number of evals that can be made. (Don't forget to add the Remote Site Setting to allow the callout)

API Background

Both the Tooling API and the older Apex API provide an executeAnonymous web method. The main difference is that the older Apex API will return the resulting Apex debug log in a SOAP header. With the Tooling API the Apex debug log is generated but needs to be queried separately from ApexLog. The older Apex API becomes more attractive here, as one API call can execute the dynamic Apex code and return the log that contains the output of the call.

By setting the DebuggingHeader correctly the size of the Apex debug log can be kept to a minimum. For example, requesting only the ERROR level Apex_code messages makes extracting the required USER_DEBUG output easier and reduces the amount of superfluous data returned.

It should be noted that using executeAnonymous won't execute in the same context the way a JavaScript eval() does. Any inputs need to be explicitly included in the Apex string to execute. Also, any return values need to be returned via the log and then parsed out to bring them into the context of the calling Apex.

Note that the current native Salesforce version of WSDL2Apex won't read the response's DebuggingInfo SOAP header. Instead the header needs to be read by making an HttpRequest and parsing the resulting XML response.

Sample Raw SOAP Request/Response

This is sent to the Apex API at https://na5.salesforce.com/services/Soap/s/31.0

Request

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:apex="http://soap.sforce.com/2006/08/apex">
   <soapenv:Header>
      <apex:DebuggingHeader>
         <apex:categories>
            <apex:category>Apex_code</apex:category>
            <apex:level>ERROR</apex:level>
         </apex:categories>
         <apex:debugLevel>NONE</apex:debugLevel>
      </apex:DebuggingHeader>
      <apex:SessionHeader>
         <apex:sessionId>00D700000000001!AQoAQGrYU000NotARealSessionIdUseYourOwnswh4QHmaPFm2fRDgk1zuXcVvWTfB4L9n7BJf</apex:sessionId>
      </apex:SessionHeader>
   </soapenv:Header>
   <soapenv:Body>
      <apex:executeAnonymous>
         <apex:String>Integer i = 314159; System.debug(LoggingLevel.Error, i);</apex:String>
      </apex:executeAnonymous>
   </soapenv:Body>
</soapenv:Envelope>

Response

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="http://soap.sforce.com/2006/08/apex" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <soapenv:Header>
      <DebuggingInfo>
         <debugLog>31.0 APEX_CODE,ERROR
Execute Anonymous: Integer i = 314159; System.debug(LoggingLevel.Error, i);
13:24:24.027 (27564504)|EXECUTION_STARTED
13:24:24.027 (27573409)|CODE_UNIT_STARTED|[EXTERNAL]|execute_anonymous_apex
13:24:24.028 (28065096)|USER_DEBUG|[1]|ERROR|314159
13:24:24.028 (28098385)|CODE_UNIT_FINISHED|execute_anonymous_apex
13:24:24.029 (29024086)|EXECUTION_FINISHED</debugLog>
      </DebuggingInfo>
   </soapenv:Header>
   <soapenv:Body>
      <executeAnonymousResponse>
         <result>
            <column>-1</column>
            <compileProblem xsi:nil="true"/>
            <compiled>true</compiled>
            <exceptionMessage xsi:nil="true"/>
            <exceptionStackTrace xsi:nil="true"/>
            <line>-1</line>
            <success>true</success>
         </result>
      </executeAnonymousResponse>
   </soapenv:Body>
</soapenv:Envelope>

You can see the required output in the response next to |USER_DEBUG|[1]|ERROR|.

Given this the basic process becomes:

  • Build up the anonymous Apex string including any required inputs. Use a System.debug(LoggingLevel.Error, 'output here'); to send back the output data.
  • Call the Apex API executeAnonymous web method and capture the DebuggingInfo soap header in the response
  • Parse the USER_DEBUG ERROR message out of the Apex log (a sketch of this step follows the list).
  • Convert the resulting string to the target data type if required.
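
As a rough illustration of the parsing step, the following Apex sketch pulls the value out of the last USER_DEBUG entry in a debug log string. It is only a sketch: the method name parseUserDebug is made up, and it assumes the single-line USER_DEBUG format shown in the sample response above.

    // Hypothetical helper: extract the value from the last USER_DEBUG entry in a debug log.
    // Assumes single-line entries like: 13:24:24.028 (28065096)|USER_DEBUG|[1]|ERROR|314159
    public static String parseUserDebug(String debugLog) {
        String result = null;
        for (String line : debugLog.split('\n')) {
            if (line.contains('|USER_DEBUG|')) {
                // Split into at most 5 parts so any '|' characters in the value are preserved.
                List<String> parts = line.split('\\|', 5);
                if (parts.size() == 5) {
                    // Keep the last entry found in case other code also wrote ERROR level debug messages.
                    result = parts[4];
                }
            }
        }
        return result;
    }

For the sample response above this would return the string '314159', which can then be converted to the target data type.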

Examples

Here are several examples showing parsing various data types from the Apex log.

 string output = soapSforceCom200608Apex.evalString('string first = \'foo\'; string second = \'bar\'; string result = first + second; System.debug(LoggingLevel.Error, result);');
 System.assertEquals('foobar', output);
 System.debug(output);
 
 integer output = soapSforceCom200608Apex.evalInteger('integer first = 1; integer second = 5; integer result = first + second; System.debug(LoggingLevel.Error, result);');
 System.assertEquals(6, output);
 System.debug(output);
 
 boolean output = soapSforceCom200608Apex.evalBoolean('boolean first = true; boolean second = false; boolean result = first || second; System.debug(LoggingLevel.Error, result);');
 System.assertEquals(true, output);
 System.debug(output);

 string outputJson = soapSforceCom200608Apex.evalString('List<object> result = new List<object>(); result.add(\'foo\'); result.add(12345); System.debug(LoggingLevel.Error, JSON.serialize(result));');
 System.debug(outputJson);
 List<Object> result = 
    (List<Object>)JSON.deserializeUntyped(outputJson);
 System.assertEquals(2, result.size());
 System.assertEquals('foo', result[0]);
 System.assertEquals(12345, result[1]);

Modified Apex API wrapper class

This wrapper around the Apex API SOAP web service uses HTTP Requests so that the DebuggingHeader can be extracted from the response. I've added several methods to execute the eval requests and parse out the expected response type.
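
As an indication of what those typed methods might look like, here is a hedged sketch of an integer variant wrapping the string one. The names evalInteger and evalString match the examples above, but the body shown here is an assumption rather than the actual wrapper code.

    // Hypothetical sketch: a typed eval method delegating to the string version.
    public static Integer evalInteger(String anonymousApex) {
        // evalString executes the anonymous Apex via the Apex API and parses out the USER_DEBUG output.
        String raw = evalString(anonymousApex);
        return raw == null ? null : Integer.valueOf(raw);
    }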

Friday, November 7, 2014

Deep linking to the Salesforce Developer console

One of the nice things about Salesforce initially was, given a record ID, the ability to easily build a URL to view or edit that record. E.g. you could easily view an Apex Class with:

https://na17.salesforce.com/01po0000002oeDP

Or edit the same class by appending a /e. E.g.

https://na17.salesforce.com/01po0000002oeDP/e

This is still possible, but more and more of the built-in Apex editor functionality is moving into the Developer Console. Previously there was no way to deep link directly to a record in the Developer Console. Instead you needed to open the record using the provided menus.

With the Winter 15 release I noticed there were deep links to the Lightning Component Bundles under Setup > Build > Develop > Lightning Components

This opens the developer console with the URL:

https://na17.salesforce.com/_ui/common/apex/debug/ApexCSIPage?action=openFile&extent=AuraDefinition&Id=0Ado000000000fxCAA&AuraDefinitionBundleId=0Abo000000000Y3CAI&DefType=COMPONENT&Format=XML

This seems to work equally well with an Apex Class ID and a hand crafted URL. E.g.

https://na17.salesforce.com/_ui/common/apex/debug/ApexCSIPage?action=openFile&extent=ApexClass&Id=01po0000002oeDP
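
If you want to build such a link from Apex, a sketch along these lines should work. The variable names are made up, and it relies on the same unsupported URL structure, so treat it as illustrative only:

    // Hypothetical sketch: build a Developer Console deep link for an Apex class record Id.
    String apexClassId = '01po0000002oeDP';
    String deepLink = URL.getSalesforceBaseUrl().toExternalForm()
        + '/_ui/common/apex/debug/ApexCSIPage?action=openFile&extent=ApexClass&Id=' + apexClassId;
    System.debug(deepLink);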

Of course, this link structure isn't officially supported. Salesforce may change it at any point in the future (as confirmed in the Dreamforce 2014 Meet the Developers session). Still, there would be times where a link directly into the Apex class or trigger in the developer console would be convenient.

IdeaExchange: Provide a deep link API to the Developer Console

Tuesday, November 4, 2014

Salesforce integration failures due to disabled SSL 3.0 (POODLE)

In response to the published security vulnerability on SSL 3.0 encryption (POODLE) Salesforce has put out the Knowledge Article Salesforce disabling SSL 3.0 encryption.

One important section with regards to web services that are called by Salesforce: (my emphasis)

Additionally, Salesforce recommends customers disable SSL 3.0 encryption in their own IT environment as soon as possible, unless they use call-out integrations. If a customer uses call-out integrations, and they have not already disabled SSL 3.0 in their own environment, then Salesforce recommends that they wait until after Salesforce has disabled SSL 3.0 for outbound requests.

The current timeline for outbound requests dropping support for SSL 3.0 is the 3rd of December 2014 for sandboxes and the 10th of December 2014 for production orgs.

If you work with a web service that disables SSL 3.0 before then, you can start seeing exceptions like:

System.CalloutException
Message:  IO Exception: Remote host closed connection during handshake

I've also seen it reported manifesting in Batch Apex as:

Received fatal alert: handshake_failure

This can occur even if the web service supports TLS1.0 or higher.

Quotes attributed to support:

"We will use TLS with callouts but if it fails. We drop down to SSL and hard code to send it via SSLv3 for 24 hours or an app restart. Which ever comes first. This should address any changes that occur with the way other companies integrate with Salesforce until we completely disable SSL 3.0 on December 10th." Source
"I am from Salesforce Developer Support Team. I have taken ownership of your case regarding POODLE vulnerability. At present some outbound calls are initiated using SSLv3 ClientHello, so if this is disabled on your server, there'll be a handshake failure. Until then, it is advised that you support this for incoming calls (received from Salesforce). At present R&D and our Tech Ops organization are aggressively working on a strategy around this. Once this is finalized, there will be a tech comm broadcast as expected."
"The R&D team shall be releasing this on 11/4/2014. After that you may turn off SSLv3 without running into the handshake failures"
Source

This presents a bit of an immediate problem. Salesforce is trying to fall back to SSL 3.0 on services that only support TLS. At this stage, if you don't have control of the web service or the ability to get it to accept SSL 3.0 again until mid December, the only option might be to wrap it in a proxy service that does support SSL 3.0 encryption.

Another fun part of the change is that inbound SSL 3.0 support will start phasing out from the 7th of November. Between then and the 10th of December there will be cases where Salesforce servers won't be able to call Salesforce hosted web services, at least some of the time when they aren't successfully using TLS.


Update 6/11/2014: Indications are that Salesforce have been silently updating servers to prevent the fallback to SSLv3 if the target server doesn't support it. It's hard to confirm what is going on as there isn't an official known issue that can be linked to.


Update 7/11/2014: Salesforce support responded to some of the outstanding issues:

All I see you guys have various questions and I think I can answer quite a few of them, I manage part of the technical support security team at salesforce. Email me at bestebez@salesforce.com (you can look at my linkedin profile if you question who i am).

Questions that have been raised around Salesforce’s support of SSL 3.0 and TLS 1.0. While we are in the process of disabling SSL 3.0, Salesforce currently supports TLS 1.0 and TLS 1.2 for inbound requests and TLS 1.0 for outbound call-outs.
Our Technology Team has been actively working to address an issue that causes outbound call-outs to use SSL 3.0 more frequently than they should, given we have TLS 1.0 enabled. We understand that this may have caused issues for customers who have already disabled SSL 3.0 in their call-out endpoints. We released a fix to Sandboxes last Friday, October 31, and plan to release the fix to production instances during off-peak hours on Wednesday, November 5, 2014.
Many customers and partners who have tested this fix in their Sandboxes have reported successful connections using TLS 1.0. A few customers continued to experience TLS 1.0 issues on their Sandboxes, and our Technology team is working with them to find a solution.

There was an issue specifically to Na14 that was generating more outbound messages that were using SSLv3 but that has since been fixed. That is probably why a few of you guys saw an issue with.
Source


Monday, November 3, 2014

Dreamforce 2014 Presentation - Improved Apex support for SOAP based web services

At Dreamforce this year I gave a breakout presentation on the WSDL2Apex component of the FuseIT SFDC Explorer. The core idea is to increase support for calling SOAP based web services by generating the required Apex classes from the WSDL.

Breakout Session Blurb

Join us as we review the capabilities of the existing WSDL-to-Apex code generation feature, and explain how we built a tool to provide expanded features using the Tooling API. The resulting tool has greater support for more WSDL features, generates test cases and the associated mocks to maximize code coverage, and optionally includes HttpRequest versions of the callouts.


Using the Tooling API to Generate Apex SOAP Web Service Clients

Offsite Session Video on Vidyard.



The demos went well. I was caught out a bit by the resolution. I went in expecting 1280x720 and it got bumped up to 1920x1080. Hopefully the core parts scaled well enough to be seen. Note to self: have a magnifier tool handy for future demos.

It was difficult to see the audience in the breakout room with three spotlights pointed at the stage - the cellphone camera isn't really showing how blinding it was. I could hear people were out there, but couldn't really see them. Some of the breakout rooms had this setup and some didn't; I'm not sure why.

These were minor things really, and I'm glad I was given the opportunity to present at Dreamforce. I got some great questions and feedback from the session. We are getting more follow-up queries at work now, a couple of weeks after Dreamforce. With so much going on during the conference it seems to take a while for people to filter back to work and follow up on sessions. Also, with so much happening, it was tempting to skip the sessions that were being recorded and attend other activities, such as labs and meeting with other attendees.