Driving a successful implementation using Microsoft Dynamics 365 Finance and Operations tools

September 10, 2020

Many users are unaware of the out-of-the-box tools in Dynamics 365 Finance and Operations (D365FO) that can be used in combination with the cross-application capabilities of Azure DevOps, Lifecycle Services (LCS), and the Regression Suite Automation Tool (RSAT). Let's take a look at how this tooling comes into play and how it can help drive a successful implementation.

Configuration Migration

The biggest challenge in past Dynamics implementations (up to and including Dynamics AX 2009) was always the configuration migration strategy across environments. The approach back in the day was to maintain long spreadsheets that tracked each configuration and to manually make sure that each one was in place in the golden environment. Users then needed to promote that configuration to production using a SQL restore, or rely on custom scripts to import it into production.

With Dynamics AX 2012, Microsoft made some investments in the Data Import/Export Framework (DIXF), although even 2012 had its share of challenges in areas like packaging and importing. Nonetheless, DIXF was a major improvement over the out-of-the-box tools in Dynamics AX 2009.

With D365FO, we now have an extensive list of data entities readily available out of the box to aid in configuration migration. There are two readily available features that can make configuration migration a less painful process:

  1. Data templates
  2. Data task automation

Note that these tools do not remove the need for a pristine environment. As a best practice, it is always recommended that you have a golden environment in which to perform your configurations. It is also best practice to make configuration changes in the golden environment first, even if you need to fix some issues later. These tools help you migrate the configurations from one environment to another.

Data templates

Implementers should pay attention to data templates. What are data templates, and what purpose do they serve?

As implementers, I’m sure we all remember the days when we had to toil to prepare the sequence of files that we needed to load, arranging them one by one and uploading them. For example, to load vendors, we always needed to keep in mind what we had already loaded, particularly Terms of payment, Methods of payment, Cash discounts, Vendor groups, and so on, before finally getting to the point of loading the vendors themselves. Over time, you would come to understand the dependencies and get better at handling them, but you would always wish you had something to begin with.

Out-of-the-box (OOTB) data templates in D365FO attempt to solve this problem. Microsoft provides a set of default data templates that give you an overview of how data templates can be used.

To load the default templates in your environment, under the Data management workspace, click on “Templates”.

Click on “Load default templates” and you will see a set of templates that can be imported, segregated by module.

Select templates for whichever module you would like to use. All these default templates are part of the LCS shared asset library.

Once the templates are loaded, you will see that each of the individual templates has sequenced data entities.

If you have additional data entities to load, for example custom configurations, you can add them to the existing templates, and you can modify the data templates to fit your requirements. You may not need all the configurations outlined in the list, and you can remove them if need be.

So how good are these OOTB data templates in D365FO? They're pretty robust and you can discover dependencies on each of the entities. In fact, you can look for dependent configurations and also set up validations. As an example, if you look for dependencies on Financial dimension values, you will see that Financial dimensions show up as a dependency.

Besides these, you can see the different units, levels, and sequences that are present in the templates. These define the order in which the entities are processed.

The unit, level, and sequence of an entity control the order in which the data is exported or imported. Below is a brief explanation of how units, levels, and sequences work in the templates.

  • Entities that have different units are processed in parallel
  • In the same unit, entities are processed in parallel if they have the same level
  • In the same level, entities are processed according to their sequence order in the level
  • After one level has been processed, the next level is processed
  • The default templates use only unit 1, to help guarantee that all dependencies are handled in the correct order

All the templates, including any custom ones that you choose to create, can be exported, and you can load them either into the Configuration and data manager or as a data package asset in the project’s asset library in LCS.

Exporting a data project

Essentially, when you export a data project that consists of data templates, it is created as a data package. Let's take a brief look at how you export a template. In the Data management workspace, when you create either an export or an import project, you will see an option called “Add template”.

When you choose this option, the data entities defined in the template are added to the project, in sequence.

Once you export this data project and download the package, you will see a structure similar to that of a normal data package generated when you add data entities manually and import them.
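For reference, the downloaded file follows the standard data package layout: a package manifest, a package header, and one data file per entity in the template. A rough sketch of the contents is shown below; the file names are illustrative, and the entity data files take whatever source format you selected (Excel by default).

  <package name>.zip
    Manifest.xml          (package manifest describing the entities, their sequencing, and mapping)
    PackageHeader.xml     (package metadata)
    <Entity 1>.xlsx       (data file for the first entity in the template)
    <Entity 2>.xlsx       (data file for the second entity in the template)
    ...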

Data packages generated using templates can be utilized in data task automation. Before we jump into how to use data task automation, let us first understand what it is and what purpose it serves from an implementation perspective.

Many times during the implementation process, you have probably come across a common pain point: getting the data packages loaded with a single click once they are available in a centralized repository.

Data task automation aims to solve this by allowing implementation teams, whether partners or customers, to automate the data migration and configuration migration strategy, leveraging the OOTB data import/export framework functionality and working hand-in-hand with LCS.

Furthermore, for customers and partners with mature ALM (Application Lifecycle Management) processes, data task automation manifests can be managed using source control.

And by no means is data task automation only for configuration and data migration. You could extend it further and use it as a validation tool for your export or integration scenarios. Note, however, that it is not recommended for use in production where any of the import or export tasks depend on APIs; any data task automation that involves an API should be used strictly for automated testing.

Here is a screenshot of the Data task automation form, which is located under the Data management workspace.

As you can see, there are two options highlighted:

1. Load tasks from file: This option allows you to load the manifest file from a local drive. This means that the solution architect or the functional lead should own this process. Remember, the key idea of automating data or configuration migration is better control, and allowing multiple people to maintain different manifests would result in chaos. Of course, if teams are split according to workstreams (Supply Chain, Finance, O2C, P2P, etc.), then it would make sense to allow each team to maintain its own manifests, but even in that case the recommended approach would be to have one contact person handling this.

2. Load default tasks: This is where utilizing data task automation along with a defined ALM strategy comes into play. Customers and partners understand the importance of keeping configuration and data migration controlled. The manifests in which the tasks are embedded are maintained as resources in the AOT. So, once you have a manifest ready, all you need to do is have a developer check it in as a resource; the next time you use this option, your manifest will be available there.

Now that we understand what data task automation is and how it can be utilized, as well as the various ways to manage a manifest, let us review what a manifest looks like and use it in a sample test, loading a data package from an LCS project. In my case, I’m going to load the manifest itself from my local machine.

Structure of a manifest

A manifest is made up of four main parts:

  1. Task manifest: This is the core structure used to define the name of the manifest and the common attributes, parameters, and behaviors shared by all the tasks within the manifest file.
  2. Data files: This defines all the files that will be loaded from the LCS shared asset library or the LCS project asset library.
  3. Data project definition: This defines the data project: whether it is an import or an export project, whether it runs asynchronously or in batch, the legal entity into which the data is imported, and other basic configuration details of the data project.
  4. Entity setup: This defines the characteristics of the individual entities that are part of the package. For example, you may want different kinds of processing for different entities, such as setting default values at the staging level or skipping staging entirely.

You can find a more detailed explanation of manifests here, and there is a tech talk that walks through the use of data task automation in detail.

I have a package that contains the Vendor group and Payment terms data entities, and it is loaded into the LCS project’s asset library.

My manifest looks like this:
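A minimal sketch of such a manifest is shown below. It follows the sample manifests in Microsoft’s data task automation documentation; the manifest name, data package asset name, LCS project ID, legal entity, and test case attributes are illustrative placeholders, and you should check the current documentation for the full set of supported elements.

  <?xml version='1.0' encoding='utf-8'?>
  <TestManifest name='Vendor configuration import'>
      <SharedSetup>
          <!-- Data file: the data package uploaded to the LCS project asset library -->
          <DataFile ID='VendorConfig' name='Vendor group and payment terms' assetType='Data package' lcsProjectId='1234567'/>
          <!-- Data project definition: an asynchronous import into a single legal entity -->
          <JobDefinition ID='ImportConfig'>
              <Operation>Import</Operation>
              <ConfigurationOnly>No</ConfigurationOnly>
              <Mode>Import async</Mode>
              <LegalEntity>USMF</LegalEntity>
          </JobDefinition>
          <!-- Entity setup: default behavior for every entity in the package -->
          <EntitySetup ID='Generic'>
              <Entity name='*'>
                  <SourceDataFormatName>Package</SourceDataFormatName>
                  <SelectFields>All fields</SelectFields>
              </Entity>
          </EntitySetup>
      </SharedSetup>
      <!-- Task manifest body: the task itself, referencing the shared definitions above -->
      <TestGroup name='Vendor configuration'>
          <TestCase Title='Import vendor group and payment terms' ID='1' RepeatCount='1' TimeOut='20'>
              <DataFile RefID='VendorConfig'/>
              <JobDefinition RefID='ImportConfig'/>
              <EntitySetup RefID='Generic'/>
          </TestCase>
      </TestGroup>
  </TestManifest>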

I load this manifest into the data task automation framework and I see that the attributes I have provided appear here:

I select the task in this form and click on “Run tasks”.

You can see that a new import data project is created under the Data management workspace; its name is basically a concatenation of the ID and the title.

When I switch back to the Data task automation screen, you can see that the task is marked as “Completed” and “Passed”, and you can verify the results under “Show validation results”.

So to sum it up:

  1. Use data task automation to streamline your configuration imports
  2. Maintain a manifest that will allow you to import for different companies
  3. Maintain a single repository in LCS and control the manifest using source control

In a future installment, we will take a look at how to use RSAT and deliver excellent results in implementation projects.

About Vamsi Praneeth Donepudi

Vamsi Praneeth Donepudi (LinkedIn) is a senior manager and architect based in Nashville, TN, USA.
