
DAX – Dynamically Showing Title by Slicer in Power BI

When building a Power BI report page for analysing data by a specific dimension (e.g., customer, product or category), we often need to dynamically show a title reflecting the currently selected item(s).


One trick to show the title based on the current filter context is to use a Multi-Row Card visual (a single-row Card visual would also work, but it does not yet support custom text alignment). We can disable the left-side bar and restyle the text so it looks more like a title.


Now we have a dynamic title based on the single selected item.


However, this approach doesn’t work when no item is selected or when multiple items are selected. When no item is selected, the Multi-Row Card visual displays all the items. As the size of the visual is set to be large enough for only a single item, only the first item is visible; the rest are still rendered but cut off.


The title is misleading when no item is selected. We expect to see something like “All Categories Selected” or “No filter applied” as the title.


We meet a similar problem when multiple items are selected: only the first selected item is visible in the title.


Instead, the title needs to make viewers aware that multiple items have been selected, and how many of them.


Now that we have gone through the requirements, we need a solution that handles all three scenarios: single item selection, no selection, and multiple item selection. We can create a single measure to cover all three:

Product Category Title =
IF (
    COUNT ( 'SalesLT ProductCategory'[Product Category] ) = 1,
    VALUES ( 'SalesLT ProductCategory'[Product Category] ),
    IF (
        COUNT ( 'SalesLT ProductCategory'[Product Category] )
            = CALCULATE (
                COUNT ( 'SalesLT ProductCategory'[Product Category] ),
                ALL ( 'SalesLT ProductCategory'[Product Category] )
            ),
        "All Categories Selected",
        MIN ( 'SalesLT ProductCategory'[Product Category] ) & " and the other "
            & ( COUNT ( 'SalesLT ProductCategory'[Product Category] ) - 1 )
            & " categories"
    )
)

First, we check whether only a single item is selected; if so, we return that single value. If not, we check whether all items are selected; if so, we return text indicating that all items are in the current filter context. Otherwise, multiple items have been selected, and we build a text string showing the name of the first item plus the count of the remaining ones.
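The same three-way branching can be sketched outside DAX. Here is an illustrative Python version (the function name and the sample category names are hypothetical, purely for demonstration):

```python
def category_title(selected, all_categories):
    """Mirror the measure's logic: single selection, all/none, or several."""
    if len(selected) == 1:
        return selected[0]
    # In DAX, no selection leaves the full list in the filter context, so
    # "nothing selected" and "everything selected" share the same branch.
    if len(selected) == 0 or len(selected) == len(all_categories):
        return "All Categories Selected"
    return min(selected) + " and the other " + str(len(selected) - 1) + " categories"

print(category_title(["Bikes", "Clothing"], ["Bikes", "Clothing", "Components"]))
# -> Bikes and the other 1 categories
```

Like the MIN in the measure, `min(selected)` picks the alphabetically first item to headline the title.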

Building Power BI Memory Usage Dashboard using DMV

As the VertiPaq engine used in Power BI is an in-memory data analytics engine, the key to optimising your Power BI report performance is to reduce the memory usage of your data model. A smaller data model not only increases data scan speed but also allows you to process a larger dataset with the same hardware capability.

To support the investigation of memory usage in a Power BI data model, I have created a memory usage Power BI dashboard. The pbix file of this dashboard can be downloaded here.


Kasper de Jonge created a PowerPivot version of the Memory Usage Report for tabular Analysis Services. If you are a PowerPivot user, you can find that version here.

To use this dashboard in your data model, you need to edit the two parameters in the Power BI report:


You can find out how to get the ServerName and the Database from Chris Webb’s blog post, which introduces how to analyse Power BI DMV queries in Power BI Desktop.

Once you have connected the dashboard to your Power BI report, you can use the filters to find the memory usage of the tables and columns in your data model, and you can also customise the dashboard as you want using the DMVModelMemoryUsage dataset.

The dashboard helps you identify the tables and columns with a large memory footprint. You can review the columns with heavy memory usage and see what you can do with them. For example, in the snapshot of the dashboard shown above, the Fact Sale table is the largest memory user. When we select this table, the bar chart showing the memory usage of each column in Fact Sale reveals that the two largest memory users are the Sale Key column and the WWI Invoice ID column. After reviewing our data model, we find that neither Sale Key nor WWI Invoice ID is used in relationships with other tables, and neither column will be used for building reports. Therefore, we can remove those keys to reduce memory usage.
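For orientation, the kind of aggregation the dashboard performs can be sketched as follows. The rows mimic records from the $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS DMV (the same family of DMVs Chris Webb’s post analyses); the table/column names and sizes here are made up for illustration:

```python
# Each dict mimics a record from the DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
# DMV: one column segment and the memory it uses (USED_SIZE, in bytes).
# The figures are illustrative, not from a real model.
segments = [
    {"TABLE_ID": "Fact Sale", "COLUMN_ID": "Sale Key",       "USED_SIZE": 4_200_000},
    {"TABLE_ID": "Fact Sale", "COLUMN_ID": "WWI Invoice ID", "USED_SIZE": 3_900_000},
    {"TABLE_ID": "Fact Sale", "COLUMN_ID": "Quantity",       "USED_SIZE": 600_000},
    {"TABLE_ID": "Dim City",  "COLUMN_ID": "City",           "USED_SIZE": 250_000},
]

def memory_by(key, rows):
    """Sum USED_SIZE per table (or per column), largest first."""
    totals = {}
    for r in rows:
        totals[r[key]] = totals.get(r[key], 0) + r["USED_SIZE"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# The largest table, and the heaviest columns within it:
tables = memory_by("TABLE_ID", segments)
columns = memory_by("COLUMN_ID", [r for r in segments if r["TABLE_ID"] == "Fact Sale"])
```

This is exactly the drill-down the dashboard offers: rank tables by total memory, then rank the columns inside the selected table.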

Things Better to Do When Working with Power BI

This is the third part in the blog series summarising my experience with Power BI from the real-world projects.

  1. Create separate QA/UAT app workspace from Dev app workspace

Keep the QA/UAT app workspaces separate from the Dev app workspace. Only app workspace admins have contribute permission in the QA/UAT app workspaces, so normal report authors cannot modify the content there. Any change or bug fix requested from QA/UAT is fed back to the report authors and implemented in the Dev app workspace. The final production Power BI app needs to be published from the QA/UAT app workspace.

  2. Prefer the SSAS Tabular Model over the SSAS Multidimensional Model if a live connection is needed

For a large-scale dataset, we may need to build the data model outside of Power BI and use Power BI for visualisation. Within the Microsoft stack we have two options: the SSAS tabular model and the SSAS multidimensional model. If there is no specific restriction that forces you to use the multidimensional model, prefer the tabular model for the following reasons:

  • The SSAS Tabular Model and Power BI use the same analytics engine and support the same query language (DAX). Your BI developers can reuse their Power BI development skills for SSAS Tabular Model development, instead of learning new skills or seeking external support.
  • The in-memory processing model used by the SSAS Tabular Model has been identified by Microsoft as the preferred direction, and continued investment has made the tabular model a mature capability.
  3. Use DirectQuery mode with caution

The main reason to use DirectQuery mode is to reduce latency on data updates. This mode is recommended only when near-real-time reporting is required, or when the database is too large to fit in the memory of a single Analysis Services server (> 100 billion rows). While DirectQuery mode can reduce data update latency, it queries the same relational engine that handles the data updates, and the data is not cached or organised in a query-efficient structure. Therefore, DirectQuery mode increases the load on data sources and can reduce the performance of end-user report rendering, as reports in DirectQuery mode cannot take advantage of Power BI’s in-memory processing engine.

  4. Source-control Power BI pbix files and release-control organisational Power BI app publishing

Keep Power BI pbix files in a source control repository. The OneDrive storage associated with a Power BI app workspace group (auto-provisioned when an app workspace is created) is a good choice, as it is accessible to business users and does not require extra licences. The release control policy, procedure, and management tooling (which can be built on the team site in the app workspace group) need to be planned.

  5. Use ExpressRoute if possible

You can use Azure ExpressRoute to create private connections between your on-premises data sources and the Power BI service through public peering (not Microsoft peering or private peering), instead of over the public internet. ExpressRoute not only gives you a network with better performance and reliability, but also ensures better security and privacy for your data.

  6. The on-premises gateway needs to be close to the data sources

The on-premises gateway needs to be deployed as close as possible to the on-premises data sources to avoid data transport and network performance issues.

Pain Points of Power BI

This is the second post in the blog series summarising my experience with Power BI from the real-world projects.

This blog post covers some very interesting topics, namely the pain points of Power BI that have driven me crazy.

  1. No cross-page slicer

This looks like a small issue but causes very bad user experience. A Power BI report often contains multiple pages covering various categories of metrics. Power BI slicers are used in the pages to filter content based on various dimensions, e.g., BUs, departments and machine models. As the slicers on one page cannot pass the filtering context to other pages, users must repeat all the slicer settings again when they move to another page.

  2. No report-level measures for SSAS multidimensional model live connections

Report-level measures are only supported for SSAS tabular model live connections, not for SSAS multidimensional model live connections. That causes much pain when you need to design Power BI reports with a live connection to a production SSAS multidimensional cube. It is very common to need additional measures to build a certain type of chart. Without report-level measures, you have to update the SSAS multidimensional cube and go through all the deployment and change management procedures in your organisation, which may take a few months.

  3. No migration from the SSAS tabular model to Power BI

Power BI, the SSAS tabular model, and PowerPivot are siblings: they share the same analytics engine and query language. It is a handy feature that you can migrate your Power BI data model into an SSAS tabular model. However, I once had a requirement to migrate an SSAS tabular model to Power BI, which is not supported, and I had to recreate the tabular model in Power BI.

  4. No fine-grained permissions

The permission model offered by Power BI is very simple, arguably too simple. There are no fine-grained permission levels. When you give a report author permission to build a dashboard in an app workspace, that author can also publish the app, edit dataset settings, and modify all other content in the workspace.

  5. Limitations of Power BI real-time streaming

One of the coolest features of Power BI is the streaming dataset, which allows metrics in a Power BI chart to change dynamically. However, this feature is only supported on Power BI dashboards (not Power BI reports), and the limitations of the Power BI streaming dataset on data ingestion rate and dataset size prevent it from being used in many real-world use cases (further explanation of this issue can be found in a previous blog post: Issues with Azure Streaming Analytics + Power BI Real-Time Streaming for IoT Hot-Path Analytics).

  6. Row-level security is not supported for external sharing

It is a common requirement to distribute Power BI reports to external customers or vendors with personalised content access, i.e., one customer or vendor can only access the data related to them. Row-level security is required for this. However, row-level security is not supported for external sharing, so game over!

  7. No support for incremental refresh

At the time this blog post is written, there is no support for incremental dataset refresh. Without incremental refresh, the Power BI service must reload the full dataset and reprocess the entire tabular model. Fortunately, this feature is on Microsoft’s roadmap and is expected to be released this year, but only for the Premium licence.

  8. Cannot move content between app workspaces

We often need to move content between app workspaces, e.g., from the DEV app workspace to the UAT app workspace. However, Power BI does not support this yet, so we must employ a workaround: either use a content pack or republish from Power BI Desktop (republishing cannot move dashboards created in an app workspace).

Things Better to Know Before Implementing Power BI Service

I love Power BI, sincerely, since as early as the PowerPivot 1.0 era. However, I must admit Power BI does drive me crazy sometimes. It plays very well when you follow the rules it expects you to follow; otherwise, it can cause you much pain. This blog series summarises my experience with Power BI from real-world projects, organised in three parts:

This blog post covers the first part, things better to know before implementing Power BI service:

  1. Location! Location! Location!

The location of your Power BI tenant is important. The data centre hosting the Power BI tenant should be as near as possible to your users, and to your on-premises data sources if they are used. The reason I emphasise location and put it at the top of the list is that the location of a Power BI tenant cannot be changed. You will be in awful pain if you realise your Power BI tenant is provisioned in the wrong region and you must move the tenant (along with all the data already produced) for performance or compliance reasons.

  2. Users in your organisation can sign up for your Power BI service by themselves

At a certain stage of your Power BI service implementation, you may want to release the Power BI service to certain user groups. However, you may find users outside those groups starting to use your Power BI service as well. By default, the Power BI service allows self-service sign-up, and this setting can only be changed by Azure AD administrators through the AllowAdHocSubscriptions AAD setting. Therefore, it is wise to take this into consideration at an early stage of your project.

  3. 70% of the workload is outside of Power BI

For medium-to-large Power BI development projects, although the title of the project may be “Power BI something”, you will find that at least 70% of the workload is actually outside of Power BI. More effort is required on data collection, storage, and ETL before you even start to touch Power BI (although, at an early stage of a project, you may use Power BI as a data exploration or prototyping tool).

  4. Kimball is still useful

Thanks to Microsoft’s well-made demos, people tend to believe that all they need to do to build nice Power BI dashboards is simply connect to the LOB databases and drag and drop some charts. For some extremely simple use cases, it may be like that. However, once the requirements get a little more complex, you will find yourself walking into a big mess without proper data modelling. For example, you may want to skip the effort of building a star schema; however, you will soon find you have to write much more complex DAX, and you may hit circular relationship issues. Similarly, you will find you still need to make design decisions following the Kimball model, e.g., slowly changing dimensions, date dimensions, and fact table types.

  5. Keep column storage in mind

When working with Power BI, it does no harm to keep reminding yourself that Power BI runs on an in-memory columnar database (the VertiPaq engine). Any unnecessary high-cardinality columns, e.g., text descriptions, date-and-time stamps, or transaction IDs, can harm performance badly.
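To illustrate the cardinality point, consider splitting a combined datetime column into separate date and time columns, a common optimisation for columnar engines. The quick Python sketch below (the one-week, one-minute-interval data is made up for illustration) shows how sharply the number of distinct values drops:

```python
from datetime import datetime, timedelta

# One week of readings at one-minute intervals (illustrative data).
start = datetime(2017, 1, 1)
stamps = [start + timedelta(minutes=i) for i in range(7 * 24 * 60)]

# A combined datetime column: every value is distinct.
distinct_datetimes = len(set(stamps))                # 10080
# Split into a date column and a time column: far fewer distinct values
# for a columnar engine to encode (7 + 1440 instead of 10080).
distinct_dates = len({s.date() for s in stamps})     # 7
distinct_times = len({s.time() for s in stamps})     # 1440
```

Fewer distinct values means better dictionary compression, which is exactly why high-cardinality columns hurt an in-memory columnar model.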

  6. No more free meal

Microsoft has changed the Power BI licensing model, removing the sharing features from the free edition. That means the free edition is strictly for personal use only. Now the only way for free-edition report authors to share their work is to send the source pbix file over.

  7. S-shaped learning curve

Many new Power BI users don’t realise the S-shaped learning curve of Power BI. When a user initially plays or prototypes with Power BI, they often feel it is easy to use and that they can build some very nice dashboards with a few clicks. However, when they start to use Power BI to solve real-life problems, they realise there is a lot more to master, and the learning curve starts to get steep. Some users, especially those with experience of a traditional reporting tool such as SSRS, start to feel awkward with Power BI: some work that is easy in SSRS, a pixel-perfect reporting tool, becomes difficult in Power BI. Users also need to learn DAX, a task many users try to keep away from.

For Power BI SMEs within an organisation, it is wise to manage Power BI users’ expectations at an early stage: let them know they may meet some challenges on their Power BI learning journey, but that once they conquer them, they will find Power BI is actually a very flexible, powerful, and productive tool for their work.

  8. Multidimensional SSAS is not a brother of Power BI, or even a friend

This one is better to know when your project requires access to a multidimensional SSAS model. Unlike tabular SSAS, which is the big brother of Power BI (sharing the same analytics engine and query language), multidimensional SSAS is not that close to Power BI, and it is not even friendly with Power BI.

Accessing a multidimensional SSAS model through a live connection comes with many limitations, e.g.:

  • SSAS servers must be running SQL Server 2012 SP1 CU4 or a later version
  • Performance is slow compared with tabular SSAS
  • Report-level measures are not supported, unlike with tabular SSAS
  9. Many cool features are coming! … (but only for the Premium licence)

Many very useful Power BI features are on Microsoft’s roadmap and expected to be released this year, such as incremental loading, pinning datasets to memory, dedicated data refresh nodes, read-only replicas, and geographic distribution. However, this is good news only for users with a Premium setup.

This blog post summarised the things better to know before implementing the Power BI service. The next blog post will cover some very interesting topics, namely the pain points of Power BI that have driven me crazy.

R Visual – Building Component Cycle Timeline

One common approach to detecting exceptions in a machine is to monitor the correlative status of the components in the machine. For example, in normal conditions, two or more components should be running at the same time, or some components should run in a sequential order. When the components are not running as expected, that indicates potential issues with the machine which need attention from engineers.

Thanks to IoT sensors, it is easy to capture component status data from a machine. To help engineers easily spot potential issues, a machine component timeline chart is very helpful. However, Power BI does not provide this kind of timeline chart, and it can be time-consuming to build custom JavaScript-based visuals. Fortunately, we have the R visual in Power BI. With a little help from the ggplot2 library, we can easily build a timeline chart (like the one shown below) in four lines of code.


First, we need to add an R visual onto the Power BI canvas and set the data fields required for the chart.


The minimal fields required are the name/code of the component and the start and end date/time of the component’s running cycle, such as:

ComponentCode | Start DateTime      | End DateTime
Pump          | 2017-01-01 09:12:35 | 2017-01-01 09:18:37
Motor         | 2017-01-01 09:12:35 | 2017-01-01 09:18:37

In the R script editor, we use geom_segment to draw each component’s running cycles. The x axis represents the component running time, and the y axis the component name/code.
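The script itself is short; a minimal sketch of what it could look like is below. It assumes the fields above were named without spaces (ComponentCode, StartDateTime, EndDateTime); Power BI passes the selected fields to the R visual as a data frame named dataset, and your actual column names may differ.

```r
library(ggplot2)

# Power BI exposes the selected fields as the 'dataset' data frame.
# Convert the timestamps, then draw one segment per running cycle.
dataset$StartDateTime <- as.POSIXct(dataset$StartDateTime)
dataset$EndDateTime   <- as.POSIXct(dataset$EndDateTime)

ggplot(dataset) +
  geom_segment(aes(x = StartDateTime, xend = EndDateTime,
                   y = ComponentCode, yend = ComponentCode),
               size = 8)
```

Each component gets its own horizontal lane, so overlapping or out-of-sequence cycles stand out at a glance.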


Apart from showing a Power BI timeline chart for engineers to detect machine issues, as introduced in this blog post, we can also send alerts to engineers using Azure Functions.