Friday, October 14, 2016

Dreamforce 2016 Round-up / Summary

This was my third year attending Salesforce's annual Dreamforce conference in San Francisco.

On the first day of the conference, within the first minutes of entering Moscone West, I walked into a booth demoing Microsoft HoloLens units. That set the pace for the rest of the conference. I certainly didn't get to as many sessions as I would have liked, managing only about a dozen over the four days. According to the agenda builder I had a laughably optimistic 58 sessions bookmarked. Instead I focused on the activities that could only be done while there in person. The backlog of 40-odd sessions will have to wait for the recordings to become available.

Salesforce DX

The clues I'd been seeing about "artifacts" turned out to be the internal name for a replacement for packaging that forms part of the larger Salesforce DX (Developer Experience) changes. While many of the conference headlines went to Salesforce Einstein, as a developer the Salesforce DX changes will have a more immediate and significant impact on my day-to-day work over the coming years (unless they can get a Salesforce Einstein-type product to do intelligent debug log parsing - hint, hint).

Salesforce DX can mostly be thought of as a catch-all name for a number of changes coming for developers (and admins). Numerous changes across several affected areas will need to come together over the next few releases to make it a reality. You might be looking at a pilot in Summer '17 and GA in Winter '18 (the usual disclaimer applies). You can register your interest to be in the pilot when it opens up.

You might like to take a detour to the Salesforce DX posts by Peter Knolle or Matt Lacey.

Session videos:

Source-driven development

The status quo for development, especially of a managed package, is that a single packaging org stands as the source of truth for the current release. With Salesforce DX you will be able to drive deployments from a source control system of your choosing.

To make this possible, we're enabling you to export more of your metadata, define data import files, and easily specify the edition, features, and configuration options of your development, staging, and production environments. [Source]

The metadata for a single custom object can be split over multiple files, e.g. each individual field broken out into its own file.
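To illustrate the idea (the exact on-disk format hadn't been published at the time, so this is a hypothetical sketch reusing the existing Metadata API CustomField shape; the object and field names are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical path: objects/Invoice__c/fields/Total__c.xml -->
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>Total__c</fullName>
    <label>Total</label>
    <type>Currency</type>
    <precision>18</precision>
    <scale>2</scale>
</CustomField>
```

Splitting at this granularity means two developers changing different fields on the same object no longer collide on one monolithic object file in source control.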

Scratch Orgs

Scratch Orgs will be short-lived and provisioned via the new CLI (and, in theory, a backing API call that the CLI uses). The most appealing part of these orgs is that a JSON file provided to the CLI tool will configure which features are enabled or disabled for the org (org shape by declaration). You will no longer need to wait 4+ days for support to enable multi-currency in your development org at some random inopportune moment. It will be interesting to see how flexible the APIs will need to become to reach things that were previously only accessible via the UI, e.g. Remote Site Settings.
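As a sketch of what that declaration might look like (the actual file format wasn't shown in detail at the sessions, so the key names here are guesses):

```json
{
  "orgName": "Feature Test Org",
  "edition": "Developer",
  "features": ["MultiCurrency", "PersonAccounts"]
}
```

The appeal is that this file lives in source control alongside the code, so every scratch org spun up from it has an identical shape.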

I recall hearing that scratch orgs would last around 2 days, but don't hold me to that. Also, they likely wouldn't be hosted on the production instances, which might open up the options for using the interactive debugger.

In another session it was suggested that instead you would have a finite number of scratch orgs and would need to explicitly delete them.

They will combine parts of DE orgs and the Branch Org pilot (giving support for multiple orgs with the same global namespace). The sample Accounts etc. that get created with a standard DE org won't exist.

A SourceMetadataMember entity with a RevisionNum field will be used to keep local file-system source control in sync with any changes made directly in a scratch org. This might be enhanced in the future with a Streaming API topic that can be monitored for direct changes in the org.

force.com CLI interface ++

The extended force CLI appears to be a number of additional commands added to the existing force.com CLI, or something like it if you start to merge in the Force.com Migration Tool. I mostly view this as a command-line-friendly wrapper over the functionality provided by the Tooling and Metadata APIs. The big advantage is that command-line access to these APIs provides an interface that many other developer tools can integrate with. The increased API and CLI support will move many developer tasks away from the UI.

It is implemented in Node.js using the Heroku CLI pluggable architecture, and supports the Web and JWT Bearer Token OAuth flows.

Salesforce Environment Manager

[...] a tool we’ve created to make it easier to manage the orgs you use as part of the development process. Most of these orgs will be scratch orgs, but it also allows you to manage your sandbox and production orgs. Furthermore, the Salesforce Environment Manager/Hub makes it easy to attach your orgs to Heroku so that they can participate inside of Heroku Pipelines, our continuous delivery tool. [Source]

Packaging / Artifacts

Some of the key themes (for the future):

  • Moving from UI-centric tooling to end-to-end API and CLI support
  • Modules and namespaces help with name isolation and metadata organization
  • Multiple packages can share the same namespace, removing the need for extension packages and global Apex classes
  • One AppExchange listing that can install multiple packages

The Tooling API will be extended (GA Winter '17) to allow for package creation. There will also be an API (Enterprise API?) (GA Winter '17) to create Push Requests.

FMA (Feature Management Activation?) will sit beside the LMA in the LMO to allow features to be toggled in the package. It will include an Activation API and Activation Metrics. For admins in subscriber orgs it will appear similar to how existing Salesforce features are activated.

Register to Learn More about Salesforce DX

Peek Under the Hood of the New Apex Compiler

This was a really interesting session.

Firstly, it covered how Apex classes are compiled to byte code and then interpreted for execution on the app servers, with various levels of caching involved at each step.

The latter part of the talk was on how the new Apex compiler was being regression tested to ensure it produced the same output as the existing compiler.

Coming Attractions: Change the Game with Event-Driven Computing on Salesforce

Another really interesting talk, about providing a mechanism to raise and consume events. Events would be published from Apex and consumed in triggers (event objects use an __e suffix). They can be described via the Metadata API, but won't support queries or updates.
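As a rough sketch of how publishing might look in Apex (the feature was still in beta at the time, so treat the details as provisional; Order_Shipped__e and its field are made-up examples):

```apex
// Publish a platform event - note the __e suffix on the object name.
Order_Shipped__e evt = new Order_Shipped__e(Order_Number__c = 'INV-0001');
Database.SaveResult sr = EventBus.publish(evt);
if (!sr.isSuccess()) {
    for (Database.Error err : sr.getErrors()) {
        System.debug('Publish failed: ' + err.getMessage());
    }
}
```

Consuming the event would then be an after insert trigger declared on Order_Shipped__e, iterating Trigger.New like any other trigger.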

Platform Events (Winter '17 Beta) - PDF

Roadmap is:

  • Winter '17 Beta of Platform Events, Apex Subscriber, External Pub/Sub Platform Events
  • Spring '17 GA for Platform Events, Pilot for High Volume Platform Events
  • Summer '17 Beta for High Volume Platform Events. Additional Messaging protocols.

The Dark Art Of CPU Benchmarking

Conclusions...

  • Assignment from static field reference = 20x slower than variable assignment
  • Assignment from dynamic field reference = 30x slower than static reference
  • Use temporary variables instead of referencing a field multiple times!
  • Doing lots of math? Use doubles instead of decimals.
    Doubles are 200 times faster (0.5 µs vs 100 µs for decimals)
  • Serializing data can eat up CPU time, depending on the amount of data being serialized.
  • Processes are much less efficient than the equivalent workflow rules.
  • Use Limits.getCpuTime() to find out if you're getting into trouble, and exit (or go async) if you're getting close.
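A small illustrative sketch combining a couple of those tips (my own example, not from the session):

```apex
Double total = 0; // Double rather than Decimal for heavy arithmetic
for (Integer i = 0; i < 1000000; i++) {
    total += i;
    // Periodically check the remaining CPU budget and exit early
    // (or hand the remainder off to an async job) before the
    // governor limit terminates the transaction.
    if (Math.mod(i, 10000) == 0
            && Limits.getCpuTime() > Limits.getLimitCpuTime() - 1000) {
        break;
    }
}
```

The same pattern applies to the field-reference tip: read a field once into a local variable before a hot loop rather than re-reading it on every iteration.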

True to the core

Meet the developers

Random
