Introduction to Dynamic Record Choices in Flow

Dynamic Record Choices are one of the coolest features in Visual Flow, and one that often gets overlooked.  In this post we'll go over a brief introduction to what Dynamic Record Choices are and how they might be used.  I'll follow this post up with an in-depth dive into how you can use these in different scenarios.  So, what is a Dynamic Record Choice?

A Dynamic Record Choice is a query on an Object in Salesforce that you can add filters to, with the results presented to your End User for selection.  The records returned are dynamic based on your filters, and the End User can select one or multiple records to perform an action on.

I like to think of it as a Report or a List View.  You select your Object, add your Filters, and then let the User select the record(s) they want to work with.

DRC

Let's say your Sales team wants Tasks to be entered with minimal fields and effort.  Sales Users are annoyed because the Contact lookup on Tasks doesn't filter to just the Contacts on the Account record they're on.  A Dynamic Record Choice comes in for the Contact selection: you can query the Contacts on that Account and present the Users with a short-list to pick from.  Here is a visual comparison between Standard functionality and a Dynamic Record Choice:

DRC compare.png

As you can tell, the End User Experience is going to be easier with the Dynamic Record Choice.  With Standard you've got to search, and with a Dynamic Record Choice you're able to present a dropdown (or multi-select picklist/checkboxes).  Since you'll be using Flow, you can easily grab variables for filtering from anywhere in Salesforce.  It also gives you the flexibility to select multiple records at once and perform the same action on all of them.
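
Under the hood, the choice list is just the result of a filtered query that Flow runs for you.  As a rough mental model (purely illustrative; you never write this yourself, and the variable name below is hypothetical), the Contact short-list from the scenario above is the equivalent of:

    SELECT Id, Name, Email
    FROM Contact
    WHERE AccountId = :accountIdFromYourFlow
    ORDER BY Name

The filters you define on the Dynamic Record Choice become the WHERE clause, and the field you assign as the Choice Label becomes what the End User sees in the dropdown.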

Let's talk about some negatives here… The first is that Labels are very limited.  Unfortunately, the only option we have is to concatenate a group of fields together to give more "information" about each result.  If many "columns" of information are needed, then a Dynamic Record Choice might not be the right solution for you.

The second negative is really a Flow limitation around user-entered filters.  Some complicated filters simply aren't possible in Flow.  On top of that, if you do want a filter value from an End User, it needs to be collected on a separate, earlier screen.  Sometimes those extra clicks can make the End User experience negative.

Now that we've got an idea of what a Dynamic Record Choice is and how you can use it, let's recap.

Recap: Dynamic Record Choices rock, and you should look at how they might streamline your End Users' lives.  They're not like duct tape, because they can't fix every problem, but they're pretty awesome.  Dealing with a Dynamic Record Choice can be difficult the first few times, because there are a few moving parts to it, so be patient and make sure you're testing thoroughly that the variables are all being passed through.

Tips for Successfully Deploying Wave Analytics

As much as I love Wave Analytics, it is not very fun to deploy from Sandbox (at this time).  There are many different things to keep in mind when moving Wave from one environment to another (even if it is just Sandbox to Sandbox).  Knowing these shortcomings ahead of your pending deployment will save you a big headache, as you can plan accordingly.  There is a good bit to discuss, so let’s jump right in!

Salesforce-Analytics-Cloud-cropped

Apps

These are nice and easy to deploy, but you have to be careful of the username naming convention you're using if you have any specific sharing to Users on your App.  This can be a problem if you're a consultant who was set up in the Sandbox separately from the Production environment, and the naming convention for your Username was switched up.  Or, it could come from assigning the App to a specific Community User who was only created in your Source Org and not your Target Org.  Either scenario will cause an error.

WaveError.jpg

Dataflow

This is something that you need to be careful with.  The Dataflow that you build in your Source Org will overwrite the existing Dataflow in your Target Org.  This means that if the Target Org's Dataflow already contains anything, you need to make sure it also exists in the Source Org's Dataflow, or it will be deleted.

The key with Dataflows is to immediately run them to get all of your Datasets populated as quickly as possible.  Make sure you’ve got all the fields in your Target Org that the Dataflow references, and that the Analytics Cloud Integration User has FLS.  You can deploy a Dataflow without meeting those requirements, and it will error when you run it if you forget.

Datasets

I'll just come out and say it: you shouldn't deploy any of your Datasets.  Let your Dataflow create them when it runs for the first time.  If you do anything out of sequence, you run the risk of having the Dataset Name adjusted, and that can cause additional issues when you're deploying complex Dashboards (as I'll touch on in more detail shortly).

Dashboards & Lenses

Dashboards and Lenses deploy annoyingly close to complete.  They will show you an error when you first open them, because the Datasets they're targeting are empty.

Dashboard Error.jpg

What you have to do is make sure your Dataflow has successfully populated the new Datasets, and then go into the JSON of your Dashboard or Lens and make the adjustments.  A note on the above error: it will make you press Continue for every Dataset in your Dashboard… so don't think it's broken if you have 10 Datasets, you just have to click 10 times.

The first adjustment: you've heard it a million times… don't hard-code IDs in Salesforce!  Well, in Wave, you've got no other choice.  When you deploy, your Target Org creates a new Dataset ID.  Your Dashboards and Lenses will still be referencing the old Dataset ID, and you need to go in and do a "Find and Replace" for the Dataset ID.  This can be pretty easy if you've got a simple Dashboard with one Dataset, but once you get into double digits, you run into some of the other areas of trouble…

The Dataset Name is also used inside the Connectors, aka dataSourceLinks (how you link Datasets together so they filter each other dynamically), and in any SAQL (pigql) queries.  So, if you deployed a Dataset incorrectly and the name changed, you're going to have to update the Name in these spots the same way you did with the ID.
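
To make that concrete, here is roughly the kind of reference you're hunting for when you crack open the Dashboard JSON.  This is only a sketch with placeholder IDs, and the exact JSON shape varies by dashboard version, but each step's SAQL (the pigql attribute) loads the Dataset by its ID and version ID, and that string is what you find-and-replace after deploying:

    q = load "0Fbxx0000000001CAA/0Fcxx0000000001CAQ";
    q = group q by 'Status';
    q = foreach q generate 'Status' as 'Status', count() as 'count';

The dataSourceLinks section, on the other hand, references the Dataset by name, which is why a renamed Dataset forces a second round of edits.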

Recipes

Recipes became GA in Spring '17.  They're extremely powerful and are getting even stronger with the Summer '17 release, where they become accessible through the REST API.  What they allow you to do is filter and transform an existing Dataset extremely easily.  You can do joins to other Datasets, bucketing of fields, adding of filters, and more.  The issue here is, they don't live anywhere in the metadata (at this time).  So, whatever you create in a Recipe, you're going to have to manually re-create in your Target Org.  At the rate they're improving all aspects of Wave, I am hopeful this becomes deployable with the Winter '18 release.

Security Predicates

Unfortunately, Security Predicates don't live anywhere (at this time) that lets you deploy them.  Luckily, these are typically straightforward, meaning it's usually a quick copy & paste to get your Security Predicate moved into the new environment.  When you're adding a Security Predicate into your new environment, you need to make sure you meet these basic requirements:

  1. Analytics Security User has READ Access to all referenced Fields
  2. The Running User (you) has READ Access to all referenced Fields

In short, make sure you correctly deployed the FLS for the Fields that you had in your previous environment, or you’ll be running around in circles.

RECAP

Sometimes I wonder why I bother to deploy this at all.  It would be (at the time of this post) almost just as much effort to simply copy & paste the work over to the new environment.  Be careful of the known shortcomings and plan accordingly.  Because of these issues, depending on the size of your deployment, you need to be aware of the additional time it will take to deploy.

3 Best Practices for Optimizing Wave Analytics Dataflows

For those of you that are already using Wave Analytics in a Production environment, you hopefully took a look at the Wave Data Monitor when scheduling your Dataflow.  If you’re working in a Full Sandbox or Production environment and running the Dataflow, you’re typically dealing with large data volumes.  In those scenarios, you really want to make sure your Dataflow is built correctly, because that is when you can start hitting some longer times to refresh your data.

Wave.png

Optimize!  Only Import Records Once

As of Summer ’17, this will be much easier to do!  Importing your list of Accounts three times into Wave is silly.  Reuse the “Extract Account” node for anything that is referencing the Account.  Don’t extract all of your Accounts more than once.  You can use a recipe to do the filtering (if any) that needs to be done on your Dataset.  As mentioned, with the new Dataflow builder, this is going to be much easier and not require any JSON code.  However, you’ve got to be aware of the potential issue and make sure you put the effort into reusing your different sfdcDigests that are extracting records.

bi_integrate_dataflow_editor_nodes_on_canvas.png
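
If you're comfortable in the Dataflow JSON, the pattern looks roughly like the sketch below (node names and field lists are made up for illustration): Accounts are extracted by a single sfdcDigest node, and both the Opportunity branch and the Case branch augment from that same node instead of each running their own extract.

    {
      "Extract_Accounts": {
        "action": "sfdcDigest",
        "parameters": {
          "object": "Account",
          "fields": [
            { "name": "Id" },
            { "name": "Name" },
            { "name": "OwnerId" }
          ]
        }
      },
      "Extract_Opportunities": {
        "action": "sfdcDigest",
        "parameters": {
          "object": "Opportunity",
          "fields": [
            { "name": "Id" },
            { "name": "AccountId" },
            { "name": "Amount" }
          ]
        }
      },
      "Extract_Cases": {
        "action": "sfdcDigest",
        "parameters": {
          "object": "Case",
          "fields": [
            { "name": "Id" },
            { "name": "AccountId" },
            { "name": "Status" }
          ]
        }
      },
      "Augment_Opps_with_Account": {
        "action": "augment",
        "parameters": {
          "left": "Extract_Opportunities",
          "left_key": [ "AccountId" ],
          "right": "Extract_Accounts",
          "right_key": [ "Id" ],
          "right_select": [ "Name", "OwnerId" ],
          "relationship": "Account"
        }
      },
      "Augment_Cases_with_Account": {
        "action": "augment",
        "parameters": {
          "left": "Extract_Cases",
          "left_key": [ "AccountId" ],
          "right": "Extract_Accounts",
          "right_key": [ "Id" ],
          "right_select": [ "Name", "OwnerId" ],
          "relationship": "Account"
        }
      }
    }

Both augment nodes point their right side at the same Extract_Accounts node, which is the whole trick: Accounts get pulled out of Salesforce once, no matter how many Datasets end up needing them.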

Use Incremental Loading Where Possible [Enable Replication (Winter ’17)]

Why would you want to bring in records that haven't changed?  Incremental loading speeds up your Dataflow by only pulling in the records that have been modified since your Dataflow last ran.  As of Winter '17, enabling Replication gives you this behavior by default.  This is a huge boost for speed: if you have millions, or hundreds of millions, of records, you'll greatly benefit from this feature.

wa_integrate_datamanager_replication.png

Don’t Bring In Every Field

If you add every Account field into your Dataset, that is more data Wave has to grab.  While Wave is extremely fast, if you don't need it… don't bring it over!  You're just slowing down how long your Dataflow takes to run and adding extra fields that you don't need in your Lens.  From personal experience, Long Text Area fields are the worst offenders.
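
In the Dataflow JSON, this just means being deliberate about the fields array on your sfdcDigest node.  A trimmed-down sketch (the field list here is purely illustrative):

    "Extract_Accounts": {
      "action": "sfdcDigest",
      "parameters": {
        "object": "Account",
        "fields": [
          { "name": "Id" },
          { "name": "Name" },
          { "name": "OwnerId" },
          { "name": "Industry" },
          { "name": "AnnualRevenue" }
        ]
      }
    }

Five fields instead of every field on Account: anything you leave out is data Wave never has to extract or index, and Description-style Long Text Area fields are the first candidates to cut.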

TLDR: Don't bring in extra records or extra fields, and turn on Replication (Incremental Loading).


Comparing using Flow Loops to a Data Update using Excel

There are many similarities between a Flow Loop and a manual Data Update using Excel.  We're going to walk through a process and compare, throughout, how the Excel steps relate to the Flow.  When I say Flow Loop, I'm referring to the process of using a Fast Lookup, looping through the records, and then updating the records with a Fast Update.  (If you've got a better way to describe it, let me know!)

The below Flow is reassigning all of our Accounts to my new Sales team.  All Low Priority Accounts will go to Astro.  All Medium Priority Accounts will go to Einstein.  And, any Accounts without a Priority will be marked as High and all High Priority Accounts get assigned to Codey.

Easy enough, so let’s take the below finished Flow and walk through how you’d do everything this Flow does in Excel and Workbench (or another Data Loader).

FlowFinished

Part 1 – Creating the Report & Exporting to Excel  [Fast Lookup]

Alright, so let’s get a Report that includes the Customer Priority, Account Owner, and Account ID.

AccountReport

What this looks like in Flow is:

FastLookup.png

Note – AccountsToReassign is a Collection Variable

Alright, so now let’s export this data out so we can work with it in Excel!
ExportReport1

ExportReport

Part 2 – Modifying the Data  [Loop]

Now that we've exported our data, it's time to work with it and make our updates!  In Flow, our version of moving to the next row is the Loop element.  In this scenario, it looks like this:

Loop1


We've got a Medium Priority Account, which means we need to assign it to Einstein.

UpdateRow1

In Flow, this is what we just did:

Estine.png

We update the OwnerId of the Account to Einstein’s ID.

assign2e

Then, we add this Account to the Collection/List of Records we are going to update at the end of the Flow.

Add to Collection

Let's go to our second record.  This one should also get assigned to Einstein.  Notice, we've only touched the first two rows; everything else is untouched.  Also, notice that we've actually done NOTHING to Salesforce yet… it's all prep work at this point.  This is important, so I'm going to say it again: Salesforce doesn't yet know of the two Account Ownership changes, because we've not updated the records… we're just getting them ready to update.

UpdateRow2

Once again, in Flow, this is what we just did:

Estine.png

As we keep on Looping through these records, we eventually get to a Low Priority Account, and assign it to Astro.

UpdateRow4

In Flow, this is what we just did:

Astro.png

We update the OwnerId of the Account to Astro’s ID.

assign2a

Then, we add this Account to the Collection/List of Records we are going to update at the end of the Flow.

Add to Collection

As we keep updating each record, "looping" through them, we eventually get to the last record.  This record has no Priority at all.  We need to set the Priority to High AND assign the Owner to Codey.

UpdateRow11

In Flow, this is what we just did:

Codey.png

Notice, our assignment is slightly different than the last two, because we’re updating the Customer Priority Field as well as the OwnerId here.

Assign Updates

For the last time, we add this Account to the Collection/List of Records we are going to update at the end of the Flow.

Add to Collection

Part 3 – Updating the Data  [Fast Update]

Now, we’ve correctly updated our Excel spreadsheet.  We’re ready to commit these changes into Salesforce.  So, let’s navigate to our Data Loader tool of choice, and get ready to Update the data!

Update1

Map the Fields in the Excel file to the Salesforce Fields

Update2

Next, we take that deep breath in, and hit Confirm Update.

Update3.png

In Flow, all that looks like is this:

Update Accounts

Note – AccountsToUpdate is a Collection Variable

Now that we’ve finished our update, we want to navigate back to our report and admire our craftsmanship:

UpdatedAccounts

Recap: If you’re an Admin that has done any sort of data manipulation in Excel and then updated Salesforce, you are equipped to master Fast Lookups, Loops, and Fast Updates/Creates.

Related Resources:

  1. How to use a Fast Lookup
  2. How to use a Loop
  3. Counting in Loops
  4. How to use a Fast Update

Counting Inside a Loop

I often see posts from people who aren't quite sure how to use a Loop in Salesforce.  Loops are often used to count a specific number of records.  This could be for a few different reasons:

  • Watching your Limits
    • Remember, you can't have more than 2,000 elements executed in one transaction.  You could use a counter to ensure that you stop before you hit that limit.
  • Creating n Records
    • If you want to create a specific number of records, like Tasks.  I’ve seen this request come across on the Success Community and other places many times.
  • Custom Roll-up Summary in Flow
    • If you aren't able to use something like Andy's DLRS or a standard Roll-up Summary field, you can summarize an Amount or do a record count in Flow.

In this post, I’m going to go over the Watching your Limits scenario.  If you want to have more information around a Custom Roll-up Summary in Flow, it’s actually one of my first posts (here)!  I would also HIGHLY recommend watching Pete Fife’s Automation Hour presentation on Loops: http://automationhour.com/2017/02/pete-fife-deep-dive-into-loopsflow-21717/… he does a fantastic job covering the topic!

Let’s jump on into the details!

Most importantly, we’re going to need to create a variable to track our iterations inside our loop.  So, let’s create that variable.

LoopCounter


Now we've got a variable that will let us track the number of times we go through our Loop.  I've seen this inefficiently added as its own Assignment inside a Loop.  I'd urge you to add it to one of your existing Assignments inside the Loop instead; it does no harm there, and it saves you an element on every pass through the Loop.

Every time we loop through a record, and we’re doing our Ownership reassignment, we’re adding 1 to the value of our LoopCounter.  If we have 20 records, by the end of the transaction this LoopCounter variable would equal 20.  If we had 10, it would equal 10.

LoopCounterAssignment


In Flow, you have a limited number of elements that can be executed in each transaction.  This means you need to be extra careful when dealing with potentially higher data volumes.  Ideally, you really shouldn't be getting anywhere near that 2,000-element limitation… however, it doesn't hurt to be paranoid and put in a Decision to double-check.  So, that's what we're going to do.  The value you use will vary based on how complicated your Loop is.  Do some math to see the maximum number of records your Flow can handle: for example, if each pass through the Loop executes four elements (the Loop itself, two Assignments, and a Decision), then roughly 2,000 / 4 = 500 records is your ceiling.  I like to always set the threshold below that number to be extra safe.

Decision.png

Fun Fact: You don't have to actually send someone back to the Loop element for them to exit it.  You can exit mid-way through looping over the records, once you've hit the specific number you want to loop through.

Decision in Loop.png

And just like that, we were able to count and tell if we were about to hit a limit.  While this was all concept, did you catch an area I should have included in this?  An alert to the Admin.  Throw in a Chatter Post or an Email to yourself for when you hit that limit, so you can verify everything is still functioning as you'd expect.


Mastering Wave Data Security for Communities with Security Predicate

I've worked on many projects where one of the main goals of the Community implementation is to display Wave Analytics to Community Users.  If you're familiar with security inside of Wave, you might have seen some of the common examples available in the documentation.  Unfortunately, these don't work well when you want to implement security for a Community.  In addition, the way you implement a Security Predicate has changed since it was first released, and the documentation sometimes varies on how you would go about doing this.  This post is going to walk you through how you can set up a Security Predicate and master your Community's Wave deployment!

wave.jpg

Business Case

We're implementing a Customer Community that wants to see Case metrics.  We want the Dataset to be dynamic based on the Running User, allowing them to see only their Account's data.  This lets us use the same Dataset and Dashboard for all of our customers.

That means being able to control access to the data at the record (row) level, just as you do with Sharing Rules in your Org as an Admin today.  This would look like:

Admin View

Admin View

Community User View

RowLevelSecurity

If you're new to Wave Analytics, you might be confused about why Wave doesn't do this natively.  The reason is that Wave Analytics is connected via an Integration User, which is typically an Admin-level, read-only User.  When we access Wave, we're accessing the Dataset with the credentials of that Integration User.  That means, to enforce Row-Level Security, we have to put our Security Predicate inside every Dataset we need to secure.

Alright, let’s get into the how this is done.

To start off, we need to know what our Root is.  In this case (pun intended), we're going to be using the Case Object as our Root.  The Root is the record you want returned.  Since we've already got the Account ID field native to the Case Object, all we need to do is add a custom Text Formula Field called View All Data and make its value "View All Data".

ViewAllData.png

On the User Object, we need to have a Formula Field that returns the same value when it is an Internal User.  To keep it simple, I’m just granting all Internal Users View All Data privileges.  If you wanted to make this more complicated, you can easily expand on this.

ViewAllDataOnUser.png
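
For reference, the two formula fields can be as simple as the sketch below.  The User-side condition is just one simple approach and is an assumption on my part: it treats any User without a Contact on their User record as an Internal User, and returns a throwaway value ("None") for Community Users so the predicate never compares against a blank.  Adjust the condition to fit how your Org distinguishes Internal Users.

    ViewAllData__c on Case (Text Formula):
        "View All Data"

    ViewAllData__c on User (Text Formula):
        IF( ISBLANK( ContactId ), "View All Data", "None" )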

Looking into our Dataset’s JSON, you can see we’ve got the AccountId and ViewAllData__c pulled in.  If you don’t bring in the fields you want to reference in your Security Predicate, it won’t work.

Fields in Dataset

Based on everything above, what we’re looking to have as our Security Predicate is:

'AccountId' == "$User.AccountId" || 'ViewAllData__c' == "$User.ViewAllData__c"

'AccountId' == "$User.AccountId" is how we are adding in the Community User's dynamic filter.  All Cases where the AccountId matches the Community User's AccountId will be shown.  Keep in mind, you could get creative and use formula fields here if you wanted it to work for an Account Hierarchy instead of just one Account.

The || means yes, we can use operators like OR in a predicate, which gives you the control to get creative with your sharing.

'ViewAllData__c' == "$User.ViewAllData__c" is how we grant access to all Internal Users: every Internal User will have "View All Data" populated on their User record, and every Case has "View All Data" filled out as well.

Previously, we'd have to download the JSON and modify the sfdcRegister node to include our row-level security predicate there.  Now, we just need to navigate to our Dataset in Wave.  Note – if you attempt to add your security predicate the old way, it won't work and you'll be left scratching your head.
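
For reference only (this is the old way and, as noted above, it no longer works), the predicate used to live on the sfdcRegister node of the Dataflow JSON as the rowLevelSecurityFilter parameter, with the inner double quotes escaped.  A rough sketch with made-up node names:

    "Register_Cases": {
      "action": "sfdcRegister",
      "parameters": {
        "alias": "Cases",
        "name": "Cases",
        "source": "Augment_Cases_with_Account",
        "rowLevelSecurityFilter": "'AccountId' == \"$User.AccountId\" || 'ViewAllData__c' == \"$User.ViewAllData__c\""
      }
    }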

Once you're at your Dataset, select Edit.

Cases Dataset.png

Insert the Security Predicate

Security Predicate.png

Select Update Dataset.  The next time your Dataflow runs it will update with your Security Predicate.  Just like that… you’re all set!  You’ve mastered row-level security in Wave.

Keep your Automation Simple

I’ll start this off by saying I’m guilty of not always keeping it simple.  But, I strive for simplicity.  Just because you can build something in Salesforce 10 different ways, doesn’t mean all 10 of those ways are right.  The solutions we implement often have a great deal to do with the skills and budget your Org has available.  Back when I was a Solo Admin, I was very guilty of duct taping together solutions, because the alternative solution was to have nothing.  We didn’t have the budget to hire developers for all of our ideas.

The purpose of this post is to discuss how we can simplify our solutions to make them easier for us to comprehend and maintain.  We’re going to walk through this set of requirements our project champion gave us:

  1. On Closed Won Opportunities, Alert Accounting with an Email
  2. Automatically Create a new Project for our Account Manager to run.
  3. Update the Account Owner to be the Account Manager
  4. Alert the Account Manager of their new Project, with a link to take them straight to the project.

Let’s take a first pass at solving this…

kiss-option-1

This accomplishes everything that we were looking to do.  We can now send the project champion a note saying that it has been completed… right?  Hold on!

Looking at this solution from end-to-end, how easy is this going to be for me to maintain?  On the surface, it’s pretty basic, but would an outsider easily comprehend it?

Let’s take another pass at simplifying it…

kiss-option-1b

We were able to simplify our automation by putting the Email Alerts into the Process Builder and Flow.  This looks easier to maintain and understand than our first process.  I would be tempted to call it quits here, but I think we could simplify the process further.

So, let’s take one last pass at simplifying this…

kiss-option-2

I'm feeling pretty good about this now.  Everything is in one location, and I can see all of my automation around this scenario in one spot!  Personally, I would go with our third and final solution if I were going to implement this automation.  Don't go crazy: when possible, avoid having Workflow Rules, Process Builder, Flow, and Apex all working together for one piece of automation.