Table configuration
The table configuration section of the template identifies details about design decisions that influence the overall quality of the data model and helps to avoid Common Data Model table configuration mistakes. You should provide answers for the following questions in the template table configuration slides.
Question | Why it matters |
---|---|
Have you removed all unnecessary options when creating a new table? | All unnecessary options (such as notes and connections) should be cleared when you are creating a new table; otherwise, the options might create unnecessary attributes and relationships that can impact the user experience. These options can be added later. |
Are you using out-of-the-box capabilities instead of creating new columns (for example “Sending email” or “Activity table” for party list support, and so on)? | When you need columns such as Attendees, To, From, and Scheduled start, a custom activity table should be used. Custom activity tables contain many default activity columns (such as To, From, and so on) that can be useful in specific scenarios. If you need to send emails to a custom table record, enable it for emails so that an email address column is automatically added to its columns. |
Do you use N:N relationships? | Many-to-many (N:N) relationships are lightweight but limited in their extensibility, and the data association between two records can be difficult to import. |
Instead of N:N relationships, have you considered bridge custom tables? | Instead of standard N:N relationships, an alternative is a bridge table, also known as an intersect table or a manual many-to-many relationship. Rather than using a standard N:N relationship, you can create a bridge table between the two related tables with a lookup column that references each side of the relationship. This approach enables the creation of more columns on the relationship that describe the relationship. Keep in mind that the user experience is more complicated than a normal N:N relationship. |
Do you take advantage of column mappings in your table relationships? | When creating a relationship, you can specify column mappings to automatically populate columns on the related table. This approach is useful in cases where you need to set common values between parent and child records (such as automatically populating the address of a contact when it's created from a parent account). However, this approach should not be used to create duplicate data for critical columns, and alternatives like quick view forms should be considered for displaying read-only attributes from a parent on the record form of a child record. |
Do you use a single publisher for your customization and a custom prefix? | Avoid having multiple publishers with the same technical name but different GUIDs. Create the publisher of your project in a single environment and then deploy it to other development environments with an unmanaged solution. If customizations come from different environments under the same solution publisher (same name and system name), then components such as tables and columns must also be truly identical (same GUID) to avoid deployment issues. If the same schema name is used for a column that was created separately in two environments, the solution import will fail when you import that column into the target environment. |
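To make the bridge-table pattern from the relationship questions above concrete, the following Python sketch builds a Dataverse Web API payload for a manual N:N record that links a contact to an event and carries an extra descriptive column. The table and column names (`new_eventattendees`, `new_ContactId`, `new_EventId`, `new_role`) are hypothetical examples, not from this module; `@odata.bind` is the standard Web API syntax for setting lookup values on create.

```python
# Sketch: creating a record in a hypothetical bridge (intersect) table that
# relates a Contact to an Event through two lookup columns. All table and
# column names here are illustrative.
import json

def build_bridge_record(contact_id: str, event_id: str, role: str) -> str:
    """Return the JSON body for POST /api/data/v9.2/new_eventattendees."""
    body = {
        # Each lookup binds to one side of the manual N:N relationship.
        "new_ContactId@odata.bind": f"/contacts({contact_id})",
        "new_EventId@odata.bind": f"/new_events({event_id})",
        # Extra columns that describe the relationship -- the main advantage
        # over a standard N:N relationship, which cannot carry attributes.
        "new_role": role,
    }
    return json.dumps(body)

payload = build_bridge_record(
    "11111111-1111-1111-1111-111111111111",
    "22222222-2222-2222-2222-222222222222",
    "Speaker",
)
```

The trade-off noted in the table still applies: users must create and navigate bridge records explicitly, so the experience is more complicated than a native N:N subgrid.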
Choice columns, custom tables, and localization
Choice columns are drop-down list columns that help standardize data capture in Dynamics 365 and make searching for records much more convenient. If everything used text columns, then you would eventually have dramatically different spelling and abbreviations of the same values in multiple records, making reporting and usability difficult. However, proper forethought is necessary to ensure that you are using choice columns in an optimal manner. You should answer the following questions in the choice columns, custom tables, and localization section of the template document.
Question | Why it matters |
---|---|
Are you using custom tables to replace choice columns? | When defining a list of options from which a user can select, you have two options: use a choice column, or define a custom table to hold a list of records and replace the choice column with a lookup column. When selecting between choice columns and lookup columns, consider the following factors: Choice column values support translation for many different languages, allowing users to see values in their own languages. Choices (multi-select choice columns) can display multiple values in a single column or in a row on a view. Lookup columns can be deactivated, leaving the value on older records for historical purposes; choice column values can only be deleted, not deactivated. Custom table records that are used for lookup column options can be modified, added, and deleted by non-administrators, while choice column values need to be added or changed by administrators and included in the application lifecycle management (ALM) practices in use. Choice column option values move with solutions; custom table records that are used in lookup columns must be created and maintained in each environment individually, or data migration processes must be established to keep custom table data in sync between environments. Custom tables are better suited for large option quantities because they provide more searchability than choice columns. |
Do you use whole number attributes of format language to filter records based on the user’s language? | Creating a whole number column with the language format option displays a list of the languages that are provisioned for your organization. Using a language whole number column allows you to filter records based on the current user’s language. The values are displayed as a drop-down list of language names, but the data is stored as a number that uses LCID codes. Language codes are four-digit or five-digit locale IDs. Valid locale ID values can be found on the Microsoft Locale ID Values page. |
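The language-format row above can be illustrated with a small query-building sketch. The custom whole-number column name (`new_language`) and entity set (`new_articles`) are hypothetical; the stored values are standard Microsoft locale IDs (LCIDs), such as 1033 for English (United States).

```python
# Sketch: filtering rows by the user's language, assuming a hypothetical
# whole-number column of format Language named "new_language". LCID 1033 is
# English (United States).
LCID_ENGLISH_US = 1033

def language_filter(entity_set: str, column: str, lcid: int) -> str:
    """Build a Dataverse Web API query string returning rows in one language."""
    return f"{entity_set}?$filter={column} eq {lcid}"

query = language_filter("new_articles", "new_language", LCID_ENGLISH_US)
# query == "new_articles?$filter=new_language eq 1033"
```

In a real app you would substitute the current user's LCID (available from the user's settings) rather than a constant.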
Security, relationships, and performance
Security is covered in greater detail in the Review the security model for your Dynamics 365 solutions module. For the purposes of this module, you will evaluate the impact of data model design on the security model. You should provide answers to the following questions in the Security, Relationships, and Performance section of the data model workshop template.
Question | Why it matters |
---|---|
Have you considered using user/team-owned vs. organization-owned tables? | Generally, reference data doesn't require ownership if visibility for the record doesn't need to be limited to specific users or groups. When in doubt, your best choice is to create the table with User or Team ownership because you can always grant users organization-level access to these records. Organization ownership can't be changed after the table is created, and as your company grows, you might need to restrict access to some records later. For this reason, the User or Team option should be selected because it provides maximum flexibility for future business changes. |
Have you reviewed relationships within your data model and their impact on security? | Relationship behaviors that impact security are: Assign (cascading – from parent to children), Share (cascading – from parent to children), Unshare (cascading – from parent to children), and Reparent (not cascading – occurs when child record is associated with parent, which is important for implicit sharing). |
Do you use field-level security? | Field-level security is a good option for securing data in specific fields. However, overuse of secured fields can impact performance and add complexity to administration. |
Have you considered moving sensitive data to a separate table? | Field-level security operates at a global level and is not sensitive to the User / Business Unit context (unless these columns are shared on a per-record basis), while using a custom-related table for sensitive data provides additional flexibility to have different permission levels for sensitive data between business units. |
Do you track or have naming conventions for sensitive personal information columns? | Columns that contain personal information must receive an extra level of attention to maintain compliance with data security regulations. One approach to managing personal information columns is to establish naming conventions for personal information columns, such as prepending “person” or “confidential information” to the schema name for personal information columns. |
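The cascading behaviors listed in the relationships row above are configured on the 1:N relationship metadata. The following Python sketch mirrors the `CascadeConfiguration` shape used by the Dataverse metadata API and flags the behaviors with security impact; the specific values chosen here are illustrative, not a recommendation, and should be verified against the metadata reference.

```python
# Sketch: cascade settings on a hypothetical 1:N relationship. These control
# how Assign, Share, Unshare, and Reparent flow from parent to child records.
cascade_configuration = {
    "Assign": "Cascade",      # reassigning the parent also reassigns children
    "Share": "Cascade",       # sharing the parent shares the children
    "Unshare": "Cascade",     # unsharing the parent unshares the children
    "Reparent": "Cascade",    # new parent's owner gains access (implicit sharing)
    "Delete": "RemoveLink",   # deleting the parent clears the child's lookup
    "Merge": "NoCascade",
}

def behaviors_that_expand_access(config: dict) -> list:
    """List the security-relevant behaviors that are set to cascade."""
    security_relevant = ("Assign", "Share", "Unshare", "Reparent")
    return [k for k in security_relevant if config.get(k) == "Cascade"]
```

Reviewing the output of a check like this across all custom relationships is one way to audit where record access can silently propagate.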
Columns - Alternate keys, calculated, and rollup
In this section, you can capture details about miscellaneous column types by answering the following questions.
Question | Why it matters |
---|---|
Are you using alternate keys? | Alternate keys allow the administrator to define composite keys, including multiple columns from the table. Alternate keys enforce record uniqueness. Key values are indexed, which increases search speed, and they make data import more efficient by allowing the record to match for update based on the alternate key value. Be careful when adding to an existing system that might break record creation logic that doesn't provide unique values. Alternate keys can be valuable for integration scenarios when you want to benefit from upsert capabilities: By providing the key, the system knows if it could create a record or update an existing one. |
Are you using calculated columns? | Excessive JavaScript, business rules, and plug-ins can degrade system performance. Calculated columns are computed in real time when data is retrieved and can be more efficient. Because the value is only updated on retrieval, make sure that you don't need the value to refresh immediately as the source values change. |
Are you using rollup columns? | When rolling up values from child records to a parent, many customers use synchronous plug-ins. This approach adds performance overhead when overused. By using standard rollup columns, you can avoid unnecessary plug-ins, simplify future upgrades, and improve system performance. Keep in mind that rollup columns only recalculate every 12 hours by default, so if you have rapidly changing data, standard rollup columns might not provide up-to-date information, and plug-ins should be used instead. However, by using rollup columns for slower-changing data, you will improve performance. |
Have you verified that the min/max length or values of each of your columns is consistent with business requirements and with any data integration or data mapping? | If the customer will be migrating or integrating Dynamics 365 data with other systems, you need to verify that configured column lengths are consistent with data column lengths in those systems. |
Have you considered using date only or time zone-independent date columns? | The default date column behavior is User Local, which displays date and time values in the time zone of the user who is viewing the record. This behavior is beneficial for dates where the specific time matters, but it can cause issues for other types of dates. Consider a birthday stored in User Local format: if the birthday is January 5 at 12:00 AM, a user in a time zone to the west of the record creator will see the date as the previous day at 11:00 PM. The Date Only format stores only the date value, and the Time Zone Independent format stores the date and time, but in both cases all users see the same value regardless of their time zone. |
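The alternate-key row above mentions upsert: with the Dataverse Web API, a PATCH request addressed by alternate key updates the matching record, or creates one if no match exists. A minimal sketch, assuming a hypothetical `accountnumber` alternate key:

```python
# Sketch: addressing a record by alternate key instead of its GUID. The
# accountnumber key and the values below are illustrative.
def alternate_key_url(entity_set: str, key_column: str, key_value: str) -> str:
    """Build the Web API resource path for an alternate-key upsert."""
    return f"/api/data/v9.2/{entity_set}({key_column}='{key_value}')"

url = alternate_key_url("accounts", "accountnumber", "ACC-0042")
# PATCH {url} with the column values. Add an "If-Match: *" header to force
# update-only, or "If-None-Match: *" to force create-only, when plain upsert
# behavior is not wanted.
```

This is why the table notes that alternate keys make integrations more efficient: the caller doesn't need to query for the record's GUID before deciding between create and update.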
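The User Local pitfall described in the last row can be reproduced with plain Python: a birthday entered as January 5 at 12:00 AM in New York is stored in UTC and renders as the previous day for a viewer farther west. The time zones and dates here are illustrative.

```python
# Worked example of the User Local behavior: the stored value is UTC, and
# each viewer sees it converted to their own time zone.
from datetime import datetime
from zoneinfo import ZoneInfo

entered = datetime(2024, 1, 5, 0, 0, tzinfo=ZoneInfo("America/New_York"))
stored_utc = entered.astimezone(ZoneInfo("UTC"))  # 2024-01-05 05:00 UTC
viewed = stored_utc.astimezone(ZoneInfo("America/Los_Angeles"))

print(viewed.strftime("%Y-%m-%d %H:%M"))  # 2024-01-04 21:00 -- the "wrong" day
```

A Date Only or Time Zone Independent column skips the final conversion step, so every viewer sees the value as entered.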
Auditing
Auditing is important for security traceability as well as troubleshooting issues with data. In the auditing section of the Data Model Workshop template, capture the following details about auditing settings in the customer's environment:
Question | Why it matters |
---|---|
Have you configured auditing? | Many customers who assume that they have configured auditing don't truly have a comprehensive audit strategy. Auditing is not enabled for tables by default. We recommend that you only enable audit for the specific columns that you want to track. When tables are added, it might be easy to forget to turn on auditing for the table. Then, if data is deleted later, the administrator discovers that no audit logs exist, so it will be impossible to know who deleted the data. |
Do you periodically delete audit logs? | Audit log data builds up to large volumes over time and should be periodically deleted to save on audit log storage. |
Do you have a process in place to periodically archive transactional data? | Transactional data that is imported over many months can grow to large quantities, impacting system performance and increasing storage capacity costs. Data retention policies should be established for all non-actionable transactional data, and data outside of the retention period should be archived or deleted out of the system. System bulk deletion jobs can be used to automatically delete records on a scheduled basis. |
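As a hedged sketch of the scheduled cleanup described above, the following Python function assembles a payload in the shape of the Dataverse `BulkDelete` action to purge records older than a retention period on a daily recurrence. The table, column, retention values, and the exact `QueryExpression` JSON shape are assumptions; verify them against the Dataverse Web API reference before use.

```python
# Hedged sketch of a BulkDelete action payload for a scheduled retention job.
# All names and values here are illustrative assumptions.
def build_bulk_delete_request(table: str, date_column: str, months: int) -> dict:
    """Assemble a bulk-deletion request for rows older than `months` months."""
    return {
        "QuerySet": [{
            "EntityName": table,
            "Criteria": {
                "FilterOperator": "And",
                "Conditions": [{
                    "AttributeName": date_column,
                    "Operator": "OlderThanXMonths",
                    "Values": [months],
                }],
            },
        }],
        "JobName": f"Purge {table} older than {months} months",
        "SendEmailNotification": False,
        "ToRecipients": [],
        "CCRecipients": [],
        "RecurrencePattern": "FREQ=DAILY;INTERVAL=1",  # run once per day
        "StartDateTime": "2024-01-01T02:00:00Z",      # off-peak start time
    }
```

Scheduling the job off-peak, as sketched here, limits the impact of deletions on interactive users.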
External data display or integration
Many Dynamics 365 deployments include integrations with other systems, which extends the data model outside the walls of Dataverse. Many data migrations can be replaced with more flexible options, which will reduce the size of the database and improve system performance. Answer the following questions in this section of the data model workshop template.
Question | Why it matters |
---|---|
Do you copy external data in Dynamics 365 tables? | Data integration brings data into Dataverse and provides Dynamics 365 users with access to external system data. However, physical data integrations can have issues: Large data migrations and integrations significantly increase the size and storage capacity cost for the system. Large data migrations are time-consuming and can delay the deployment of the application. Large data migrations and integrations create duplicate copies of sensitive system data, which security and compliance officers might disapprove of. Large and frequently running data integrations might degrade system performance. |
Have you considered using canvas apps from Microsoft Power Apps to display external data? | Canvas apps from Power Apps can be embedded in Dynamics 365 model-driven app forms, using over 300 standard connectors and providing convenient wizards to simplify common user tasks. Canvas apps can also create records in other systems by using connectors, so they can be an ideal bridge between applications without the need to physically copy data. |
Have you considered using Power Apps component framework to display external data? | Power Apps component framework controls allow administrators to replace system controls with custom controls and display external data. |
Have you considered embedding Microsoft Power BI tiles to display external data? | By embedding Power BI tiles into your Dynamics 365 model-driven app forms, you can enable the ability to display data that is stored in other systems and filter it to be contextual to a record while eliminating the need to copy data to Dynamics 365. |
Do you use virtual tables? | Virtual tables don't contain data; they connect to an external data source while displaying the data to users as if it were in Dataverse. This data is available for views, subgrids, and advanced find queries. Currently, virtual tables only support reading data, and all virtual table data is visible to all system users. If granular security is required, you should consider alternative options. |
Have you considered using Export to Azure Data Lake or the Data Export Service if you need Dynamics data for external BI purposes? | The Export to Data Lake service is a pipeline to continuously export data from Dataverse to Microsoft Azure Data Lake Storage Gen2. The Export to Data Lake service is designed for enterprise big data analytics, delivering scalable high availability with disaster recovery capabilities. Data is stored in Common Data Model format, which provides semantic consistency across apps and deployments. The Data Export Service synchronizes Dynamics 365 data with an Azure SQL database, which is ideal for reporting on large datasets from Dynamics because limits are placed on the number of records that can be reported by using direct query from Power BI when connecting directly to Dynamics 365. Reporting from an external database is also preferable from a performance perspective. |
User experience
While the primary purpose of the data model is to describe how tables relate and how data flows through the system, the data model must also support a good user experience to achieve maximum user adoption. The way data is modeled in Dynamics 365 can deeply impact the user experience and how easy or difficult the application is to use. Take the time to "clean" column and table data so that users aren't confused about which data to use. Answer the following questions in the User Experience section of the Data Model Workshop template.
Question | Why it matters |
---|---|
Have you excluded unused columns and relationships from search? | Setting columns to be non-searchable will exclude these columns and relationships from the Advanced Find filter designer and help make finding information easier for users. Limiting quick search “Find” columns to only the most important columns will help make quick search faster. |
Have you prefixed unused columns (for example "ZZ column") to make sure that they appear at the end of the list? | Making columns non-searchable does not exclude them from view layout design. One approach to simplify building views is to prefix the display names of unused columns with "ZZ" or another standardized notation so that they fall last alphabetically and are unlikely to overwhelm users. |
Have you adopted a consistent naming convention for metadata to simplify UX? | When designing a data model, you should create consistent naming conventions to provide consistent and understandable labels. This notion also applies to the column schema names that are displayed for developers or users who are accessing data through APIs (for example, by using Power BI). |
Have you adopted a consistent procedure for managing metadata (such as adding new columns) to avoid conflicts and duplicates? | When multiple teams or project streams work on a single implementation, it is especially important that governance is in place to manage metadata to have a consistent data model and avoid potential column duplicates. |
At the end of the workshop, the solution architect should explain the next steps and the actions that the customer and partner need to take. Explain that findings and recommendations will be sent and that a follow-up meeting will be scheduled with the customer to review the status of follow-up items if needed.