Creating SSAS Partitions by Directly Modifying the XML Source File

I ran into a very strange issue when creating SSAS partitions using the Partition Wizard in SSDT 2012. I went through all the steps in the Partition Wizard successfully, but no partition was created, even though everything appeared to run fine. Although I managed to create the partitions in SQL Server Management Studio without any problem, I needed the partition definitions in the SSDT project for source control purposes. After quite a number of attempts with the Partition Wizard, I finally gave up on that approach and decided to create the new partitions by directly modifying the XML source file that holds the partition definitions.

Firstly, the partitions are not defined in the .cube file but in a separate .partitions file, which holds all the partitions for all the measure groups of that cube. In the .partitions file, find the <MeasureGroup> element to which you want to add the new partition, and add the partition definition inside its <Partitions> element.
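As an illustration, a partition definition in the ASSL format looks roughly like the sketch below. Every ID, name, data source and query here is a placeholder, so substitute values from your own measure group; the design-time-name GUID must be unique within the file, and the xsi and dwd namespaces are already declared at the root of the wizard-generated file.

```xml
<!-- Sketch of a partition definition, placed inside the <Partitions>
     element of the target <MeasureGroup>. All IDs, names, the data
     source and the query are placeholders for your own values. -->
<Partition dwd:design-time-name="00000000-0000-0000-0000-000000000001">
  <ID>Internet Sales 2012</ID>
  <Name>Internet Sales 2012</Name>
  <Source xsi:type="QueryBinding">
    <DataSourceID>Adventure Works DW</DataSourceID>
    <QueryDefinition>
      SELECT * FROM FactInternetSales
      WHERE OrderDateKey BETWEEN 20120101 AND 20121231
    </QueryDefinition>
  </Source>
  <StorageMode>Molap</StorageMode>
  <ProcessingMode>Regular</ProcessingMode>
</Partition>
```

Each partition in the same measure group should bind to a non-overlapping slice of the fact table, otherwise rows will be double-counted.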


After you have added the partition definitions to the .partitions file, reopen the cube in SSDT so that the Partitions tab reloads, and you will see the new partitions show up on the Partitions tab.


Tips & Tricks – SSAS Performance Tuning Flow Chart

In the Analysis Services Performance Guide white paper, Microsoft introduces a very handy flow chart for tuning SSAS performance.

SSAS Flow Chart 01

The first and key step of the flow chart is to investigate whether the performance problem is caused by the query processor or the storage engine. If the problem lies in the query processor, SSAS developers need to focus on optimising the MDX scripts. If the problem lies in the storage engine, SSAS developers should consider optimising the dimension design, designing aggregations, or using partitions to improve query performance.

The white paper introduces a method for determining whether the problem lies in the query processor or the storage engine with the help of SQL Server Profiler. Firstly, create a trace that captures only the Query Subcube Verbose event, filtered on EventSubclass = 22 (non-cache data). Then check whether the majority of the query time is spent in the storage engine on long-running Query Subcube events. If it is, there is a high probability that the problem lies in the storage engine; otherwise, the problem lies in the query processor.
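If the trace is saved to a SQL Server table, a quick aggregation gives a rough picture of the storage engine time. This is only a sketch: the table name dbo.ProfilerTrace is a placeholder for wherever you saved the trace, and Duration in Profiler traces is reported in milliseconds.

```sql
-- Sketch: assumes the Profiler trace (Query Subcube Verbose events only,
-- filtered on EventSubclass = 22, i.e. non-cache data) has been saved to
-- a SQL Server table; dbo.ProfilerTrace is a placeholder name.
SELECT COUNT(*)      AS NonCacheSubcubeEvents,
       SUM(Duration) AS StorageEngineTimeMs,
       MAX(Duration) AS LongestSubcubeEventMs
FROM dbo.ProfilerTrace
WHERE EventSubclass = 22;
```

Compare StorageEngineTimeMs against the total duration of the query: if it accounts for most of the elapsed time, the bottleneck is likely the storage engine.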