Performance tuning MDX queries can often be a daunting and challenging task. The first step in deciding where to begin your tuning efforts is to diagnose the source of the problem. There are two areas that could be causing the performance issue: 1. the design of our SSAS solution, or 2. the design of our query. In other words, we need to figure out whether we've written a bad query or designed a bad cube :).
How Do We Test The Query?
Usually an issue is discovered when a user comes to the BI team with a report that appears to be running slowly. For our example, I have a pivot table in an Excel workbook that is running a little slow. Let's walk through this together and diagnose what the problem could be. Below you can see the pivot table.
The first thing you'll need to do is test the query, and the best way to do this is to execute it in isolation so that we can eliminate outside factors as part of the problem. In this example, I would execute the report outside of business hours. I want to run the test outside of peak usage time because one of the things I'll need to do is clear the cache. To prevent SSAS from satisfying our query from the formula engine cache or the storage engine cache, which would skew our results, we need to execute a ClearCache command. To do this, I'll execute the following XMLA script in SSMS.
<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>AdventureWorksDW2012Multidimensional-EE</DatabaseID>
    <CubeID>Adventure Works</CubeID>
  </Object>
</ClearCache>
Also, I’ll execute the following MDX to initialize my cube’s MDX script.
Select {} on 0
From [Adventure Works]
For us to figure out how long our query takes to execute, we're going to fire up SQL Server Profiler and run a trace against SSAS while we execute the query, so we can gather all the nitty-gritty details of the query execution. By running a trace, we will be able to see all kinds of really helpful details: the total duration of the query, the partitions being queried, whether aggregations are being used to answer the query, which attributes are being used, and much more.
To begin our trace, go to Tools > SQL Server Profiler in SSMS.
Flip the Server type to Analysis Services and set the Server name to your SSAS instance that is home to the SSAS database your report is pointed at.
Next you'll see the Trace Properties window. Go to the Events Selection tab, and check the Show all events check box near the bottom right of the window. Scroll down a little more than halfway and find the events "Get Data From Aggregation" and "Query Subcube Verbose." The "Get Data From Aggregation" event is fired when an aggregation is used to satisfy a query. This event is also especially helpful when trying to determine whether the aggregations you have designed are actually useful. The "Query Subcube Verbose" event will give you very detailed information on which members from which dimension attributes are being queried to satisfy the query. Click Run when you're done.
Now that the trace is running, it's time to conduct our test.
1. The first thing I’ll do is execute the Clear Cache command.
<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>AdventureWorksDW2012Multidimensional-EE</DatabaseID>
    <CubeID>Adventure Works</CubeID>
  </Object>
</ClearCache>
2. Initialize the Calculation script in your cube.
Select {} on 0
From [Adventure Works]
3. Execute the report/query. In my case, my report is an Excel pivot table, so I’ll simply click the refresh button in Excel to execute my query.
The report may take a few seconds or minutes to run depending on the query, but it will probably take longer than you're used to because the query is running against an empty cache and SSAS has to retrieve all of the data from storage.
When the query is finished executing, pause your trace in SQL Server Profiler by clicking the pause button at the top of the trace window. Now it's time to take a look at the results. Find the Query End event in the trace results and locate the Duration column. This number displays the total query duration in milliseconds. In my example, the query took just over 13 seconds to execute. So it's not unbelievably slow, but certainly slower than we'd like.
To take a more in-depth look at our query's performance, let's save the trace results to a SQL Server table so we can query it. Go to File > Save As > Trace Table.
Specify where you’d like to save the results and click OK.
Below is a very useful query that will break down where your query is experiencing a slowdown. If you're using this query for your own testing, don't forget to alter the FROM clauses to point at your trace table.
SELECT x.ConnectionID,
x.QueryID,
x.QueryDuration,
p.SEDuration,
CASE
WHEN p.SEDuration > x.QueryDuration THEN NULL ELSE x.QueryDuration - p.SEDuration
END AS FEDuration,
y.[Number of SE Queries],
y.[Thread Duration of SE Queries],
w.[Aggregations Read],
x.TextData
FROM (SELECT a.ConnectionID,
a.Duration AS QueryDuration,
a.TextData,
CAST (HashBytes('SHA1', CAST (reverse(CAST (TextData AS VARCHAR (MAX))) AS NVARCHAR (4000))) AS INT) AS QueryID
FROM MyTraceTable AS a
WHERE a.EventClass = 10) AS x -- Query End Event
LEFT OUTER JOIN
(/* Determine Query Subcube Verbose of Non-cache data */
SELECT ConnectionID,
COUNT(*) AS [Number of SE Queries],
SUM(Duration) AS [Thread Duration of SE Queries]
FROM MyTraceTable
WHERE EventClass = 12 -- Query Subcube Verbose
AND EventSubclass = 22 -- Non-cache data
GROUP BY ConnectionID) AS y
ON y.ConnectionID = x.ConnectionID
LEFT OUTER JOIN
(/* Determine Aggregations that are read from */
SELECT ConnectionID,
COUNT(*) AS [Aggregations Read]
FROM MyTraceTable
WHERE EventClass = 60 -- Read from Aggregations
GROUP BY ConnectionID) AS w
ON w.ConnectionID = x.ConnectionID
LEFT OUTER JOIN
(/* Determine SE time */
SELECT ConnectionID,
SUM(Duration) AS SEDuration
FROM MyTraceTable
WHERE EventClass = 11 -- Query SubCube
GROUP BY ConnectionID) AS p
ON p.ConnectionID = x.ConnectionID;
The results here are very telling. The "QueryDuration" column shows us the total execution time of the query. The "SEDuration" column shows us the amount of time SSAS spent pulling the data from storage (the Storage Engine). The "FEDuration" column shows how long SSAS spent calculating our query's results (the Formula Engine).
In this particular example, the vast majority of our query's execution time is spent in the Formula Engine. Of the 13+ seconds spent executing the query, more than 12 seconds are spent in the Formula Engine and only 297 milliseconds pulling the data from storage. This tells us that the problem is probably not with the design of our SSAS solution, but rather a poorly written query. Unfortunately, since this is an MDX query generated by Excel, there's not a lot we can do to alter the query.
How Do We Fix The Query?
Typically, when deciding where to spend your performance tuning efforts, you want to start in the area where your query spends more than 30% of its time (if it's a 50/50 split, make an educated decision). In the previous example, we determined our problem is with the query.
What can I do to improve my MDX query?
If you determine your query's problem is the query itself, ensure SSAS is utilizing subspace computation instead of cell-by-cell computation. SSAS will usually evaluate groups of cells in your cube at a time, but in certain situations SSAS will evaluate cube space one cell at a time. We want to avoid that. You can get a hint that SSAS is calculating your results one cell at a time if the query against your trace table shows a large number of Storage Engine queries. Certain SSAS/MDX constructs can disable subspace computation:
1. Late-binding functions (e.g. StrToMember, StrToSet)
2. Set aliases
3. LookupCube function
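As a quick illustration of the first point (the measure and member names below come from the Adventure Works sample and are my own example, not a query from the report above), compare a filter resolved from a string at run time with its static equivalent:

// Late binding: the member is resolved from a string at run time,
// which can force cell-by-cell evaluation.
SELECT [Measures].[Internet Sales Amount] ON 0
FROM [Adventure Works]
WHERE StrToMember("[Date].[Calendar Year].&[2013]")

// Static member reference: the engine can evaluate this in subspace mode.
SELECT [Measures].[Internet Sales Amount] ON 0
FROM [Adventure Works]
WHERE [Date].[Calendar Year].&[2013]

If the string isn't truly dynamic, replacing the StrToMember call with the member reference itself is an easy win.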
Also, check out this blog for more info on ways to improve your MDX query.
But what if the problem is my SSAS solution’s design?
If you conduct your test and determine the majority of the query duration is spent pulling the data from storage, there’s a lot to consider when discussing cube design best practices. But here are some brief highlights of things to consider.
1. Can we design aggregations to help our query? Look at your test results and see if aggregations are being used to satisfy the query.
2. Can we implement a partition design strategy that keeps SSAS from having to query larger partitions?
3. Are the right partitions being queried? For example, if your query is asking for data for 2010 and you notice in your trace that the partitions for all the other years are being queried, this could indicate that SSAS is having a hard time figuring out which partition has the correct data. You may need to set the Slice property on your partition.
4. Create Attribute relationships
5. Leverage Natural Hierarchies
There are many more best practices for cube design, but this is probably a good starting point.
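As a sketch of point 3 above (the IDs and member expression here are hypothetical, based on the Adventure Works sample), the Slice property can be set through the partition's properties window in SSDT or in the partition's XMLA/ASSL definition:

<Partition>
  <ID>Internet_Sales_2010</ID>
  <Name>Internet_Sales_2010</Name>
  <!-- source, storage, and aggregation settings omitted -->
  <Slice>[Date].[Calendar Year].&amp;[2010]</Slice>
</Partition>

With the Slice set, SSAS knows at query time that this partition only holds 2010 data and can skip it for queries about other years.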
If you’ve found this helpful, share it with a colleague or a friend and leave me a comment. Feel free to leave me a question or feedback in the comments here or send me a note on Twitter! I love discussing new ideas and learning so don’t hesitate!