Category Archives: MDX

SSAS Static Named Sets Vs. Dynamic Named Sets

So I’m 95% sure that I blogged about this topic at some point over the last couple of years, but every time I try to find the link to show a class I’m teaching or to show a client, I can never find the darn thing. That’s why I’m writing this post. That, and it’s also nice to have a good example of this on hand, which is what we have here.

In SSAS we have the ability to create named sets. A named set is basically an aliased set expression that we can use within our MDX queries. This is very useful if we have a set that is commonly used in our organization’s reporting solution.

But there are two types of named sets: static and dynamic. Static and dynamic sets appear very similar but they actually behave very differently, which is why I present to you the following example.

Below you will see a snippet of MDX from my cube script that creates a named set called Top 10 Customers – Static. This is the basic syntax for creating a named set in your cube’s MDX script. You’ll notice the STATIC keyword, which specifies that we want this named set to be static. The STATIC keyword is actually optional: if we leave it out of the CREATE SET statement, the set will still be created as a static named set, because static is the default.

CREATE STATIC SET CURRENTCUBE.[Top 10 Customers – Static]
AS TopCount(
    [Customer].[Customer].children,
    10,
    [Measures].[Internet Sales Amount]
);

The next CREATE SET statement creates a dynamic named set called Top 10 Customers – Dynamic, the big difference obviously being the DYNAMIC keyword, which specifies that this named set should be created as a dynamic named set.

CREATE DYNAMIC SET CURRENTCUBE.[Top 10 Customers – Dynamic]
AS TopCount(
    [Customer].[Customer].children,
    10,
    [Measures].[Internet Sales Amount]
);

Here you can see the create set statements in my Adventure Works cube script.

[Screenshot: the CREATE SET statements in the Adventure Works cube script]

And here we can see the two sets as they appear in our cube’s metadata tab in SQL Server Management Studio.

[Screenshot: the two named sets as they appear in the cube's metadata pane in SQL Server Management Studio]

Now this is where it gets interesting. Below we have an example of our static named set being used in a query on the row axis. And we can see that it works fine.

[Screenshot: a query with the static named set on the row axis, returning the top 10 customers]
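
If you want to try this yourself, a query along these lines reproduces what’s shown above (a minimal sketch, assuming the Adventure Works sample cube and the set created earlier):

// Sketch against the Adventure Works sample cube
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    [Top 10 Customers – Static] ON ROWS
FROM [Adventure Works];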

But what happens if we add a constraint in the Where clause? Uh oh, we run into an issue. The static named set does not respect the Where clause (or a subselect statement in the From clause, for that matter). The named set displays the same members instead of displaying the top 10 customers for the year 2006. This could be a problem for our users depending on the requirements of the reporting solution.

[Screenshots: the same query with a WHERE clause on calendar year 2006, and its results, which show the same ten customers as before]
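
For reference, the constrained query looks something like this (again a sketch, assuming the Adventure Works sample, where the 2006 calendar year member is keyed as &[2006]); the static set still returns the same ten members:

// Sketch: the static set ignores the WHERE slicer
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    [Top 10 Customers – Static] ON ROWS
FROM [Adventure Works]
WHERE [Date].[Calendar Year].&[2006];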

This is where a dynamic named set may be more useful. Here you can see an example of a query that uses our dynamic named set.

[Screenshot: a query with the dynamic named set on the row axis]

And when we provide a constraint in the Where clause, the dynamic named set respects the Where clause and displays the correct data. I know the Internet Sales Amount numbers are all the same, but that’s just the nature of the Adventure Works data.

[Screenshot: the dynamic named set query with a WHERE clause on calendar year 2006, returning the correct top 10 customers for that year]
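
The query is the same shape as before, just swapping in the dynamic set (a sketch under the same Adventure Works assumptions):

// Sketch: the dynamic set respects the WHERE slicer
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    [Top 10 Customers – Dynamic] ON ROWS
FROM [Adventure Works]
WHERE [Date].[Calendar Year].&[2006];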

I think this perfectly demonstrates the differences between static and dynamic named sets. Static named sets behave exactly as their name suggests: they are evaluated once, when the cube’s MDX script executes, and do not respect the Where clause or a subselect statement in the From clause. A dynamic named set is re-evaluated with each query and will respect a Where clause slicer or a subselect in the From clause.

If this is all a little overwhelming to you and anytime someone mentions using MDX you curl up into the fetal position, suck your thumb, and sob uncontrollably, I would suggest taking a look at the BI xPress calculation builder. BI xPress has a nifty little wizard that will help you create MDX calculations and named sets without you having to do any of the tough MDX writing on your own.

To create a named set with the BI xPress calculation builder, click the little calculate icon in the Calculations tab of the cube designer in BIDS or SSDT. This will open up the MDX Calculation Builder Wizard part of BI xPress.


Choose the Top 10 Count template under the Sets folder and click Next.


On the next screen we can pick the attribute required for our set. In this case, I’ll select the Customer attribute of the Customer dimension in order to create the Top 10 Customers set we were playing with earlier.


Then we select the measure we want to use to rank our customers. I’m selecting the Internet Sales Amount measure.


Lastly, we give our named set a name and click Finish. On this screen we can preview the MDX that the BI xPress MDX Calculation Builder wrote for us.


And we’re done!


The BI xPress MDX Calculation Builder wrote all the MDX for us without us having to know a lick of MDX! Pretty nifty if I do say so myself. For more information on BI xPress or the BI xPress MDX Calculation Builder, head over to PragmaticWorks.com and download the free trial of BI xPress.

And if you have any questions or comments, please feel free to leave a comment or shout out on Twitter @SQLDusty! Thanks!

Recording Now Available For The Webinar, Choosing The Right Analysis Services: MOLAP Vs. Tabular


Thanks to everyone that attended Devin’s and my webinar called Choosing The Right Analysis Services: MOLAP vs. Tabular. I’m pleased to announce that the recording is now available to watch for free over at PragmaticWorks.com, so please go check it out. It’s a little less than an hour so you can watch it during your lunch break.

The PowerPoint slide deck Devin and I used during the webinar is also available now! Please visit this link to download the slide deck.

Now for the questions! Many of you asked some great questions, but unfortunately we ran out of time to answer all of them during the webinar. So here are a few of the ones we didn’t get to.

Q: How do I create a relationship in a Tabular model when the key is made up of more than one column?
A: If you need to create a composite key in a Tabular model table, you will need to create a calculated column that concatenates the columns that make up your composite key. You’ll need to do this in both tables you wish to relate. Once you’ve done that, you can create the relationship between the two tables using your new columns.

Q: Can DAX be used to access cubes?
A: In the SQL 2012 SP1 CU4 release, DAX support for multidimensional cubes was added, so as long as you are running on SQL 2012 SP1 CU4 or later, you should be able to query cubes with DAX expressions. On a side note, MDX can also be used to query a Tabular model.
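
As a quick illustration, an MDX query against a Tabular model looks just like one against a cube. Here is a minimal sketch, assuming the Adventure Works Tabular sample model, where Internet Total Sales is a model measure, the Date table’s Calendar Year column is exposed as an attribute hierarchy, and the model is exposed under the default cube name Model:

// Sketch: assumes the Adventure Works Tabular sample (default cube name "Model")
SELECT
    [Measures].[Internet Total Sales] ON COLUMNS,
    [Date].[Calendar Year].MEMBERS ON ROWS
FROM [Model];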

Q: Since the Tabular solution is in many ways better than Multidimensional, when should we go for a Multidimensional solution?
A: This is one we covered extensively during the webinar. Here are some of the things to consider:

  1. How much data are you dealing with? If it’s too much to fit into memory for your Tabular model, then MOLAP is the way to go.
  2. Do you have a need for complex relationships? If so, MOLAP may be the answer. Role-playing dimensions and many-to-many relationships are possible to create in a Tabular model, but they’re easier to create and manage in a MOLAP cube.
  3. Do you need to perform many complex calculations involving complex Scope assignments? If so, MOLAP is the answer here (see the sketch after this list for the kind of assignment I mean).
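
To give a feel for what a Scope assignment looks like, here is a small sketch modeled on the sales quota allocation in the Adventure Works cube script (the member keys and the 1.35 factor are illustrative); there is no direct equivalent in a Tabular model:

// Illustrative sketch: member keys and the 1.35 factor are examples.
// Seed each FY2005 quarter's Sales Amount Quota from the same
// quarter of the prior fiscal year, scaled up by 35%.
Scope
    (
        [Measures].[Sales Amount Quota],
        [Date].[Fiscal Year].&[2005],
        [Date].[Fiscal].[Fiscal Quarter].Members
    );
    This = ParallelPeriod(
               [Date].[Fiscal].[Fiscal Year], 1,
               [Date].[Fiscal].CurrentMember
           ) * 1.35;
End Scope;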

Q: Can you use a Multidimensional database as the source for a Tabular model and improve performance when creating low-level, granular reports? This goes back to the performance differences between Multidimensional and Tabular when creating granular reports.
A: You can use a Multidimensional database as a data source for a Tabular model, but I would suggest getting the data from the original source for the tabular model. If granular type queries are slow against your cube, those same queries are still going to be slow when you execute them to process your Tabular model.

Thanks to everyone that attended Devin’s and my webinar! If you have any other questions, please feel free to leave a comment or send me a message on Twitter!

SSAS Lessons Learned: 29% Better Compression and 11% Better Query Performance

The Importance of Sort Order

This past week I taught the SSAS Masters class, which is one of the virtual training classes offered by Pragmatic Works. One of the things we discuss in the class is the importance of sorting the fact data in your data warehouse in preparation for Analysis Services. Simply by sorting your fact data, you can see much improved compression, which can improve your query response time as well. But how much improvement in compression and query response could you see? Well, that’s what I set out to discover by running a couple of little tests.

Better Compression? Yes, please.

I started with my beloved Fact Sales measure group in the Contoso Retail Operations cube. The Fact Sales measure group utilizes a named query in the DSV that is a basic select statement from the Fact Sales table in the Contoso Retail database. I checked the size of the single partition that made up the measure group and saw that it was just over 129 MB. Not big, but I thought we could improve on that.

[Screenshot 1: Fact Sales partition properties, unsorted]

So I set out to sort my data. The trick to sorting your data is to start with the three fields that have the least cardinality (the fewest distinct values) and experiment with different sort orders to see what kind of results you get. For the FactSales table, I started with PromotionKey, CurrencyKey, and ChannelKey and went from there. I simply set my partition to be query bound and used the following query:

SELECT   TOP 2147483647 CONVERT (INT, CONVERT (CHAR (8), DateKey, 112)) AS DateKey,
channelKey,
StoreKey,
ProductKey,
PromotionKey,
CurrencyKey,
UnitCost,
UnitPrice,
SalesQuantity,
ReturnQuantity,
ReturnAmount,
DiscountQuantity,
DiscountAmount,
TotalCost,
SalesAmount,
ETLLoadID,
LoadDate,
UpdateDate
FROM     dbo.FactSales
ORDER BY PromotionKey, CurrencyKey, ChannelKey, StoreKey, ProductKey, DateKey;

Arguably there are better ways to sort the data for SSAS but that’s not the point of this blog post so I’ll leave that for you to decide.

I did a quick redeployment of the cube and processed the Fact Sales measure group.

[Screenshot 2: Fact Sales partition properties, sorted]

The partition size dropped to 93.11 MB! That’s a whopping 28% decrease in size! Awesome!

28% is a pretty big storage savings, especially when we could potentially be dealing with a lot more data in an enterprise scenario. Personally, I’ve seen storage savings up to 45% simply by sorting the data in the relational engine.

Better Query Performance? Sign me up!

With smaller fact .data files, we should see better query performance, right? I mean, theoretically it makes sense, but I was curious how much better the query performance would actually be. So I set out on another little experiment.

First, I used Excel to generate a nasty little MDX query for my testing, which I captured with Profiler:

SELECT
{
[Measures].[Sales Amount]
,[Measures].[Sales Quantity]
,[Measures].[Sales Unit Cost]
}
DIMENSION PROPERTIES
PARENT_UNIQUE_NAME
,HIERARCHY_UNIQUE_NAME
ON COLUMNS
,NON EMPTY
CrossJoin
(
Hierarchize
(
{
DrillDownLevel
(
{[Date].[Calendar Week].[All Date]}
,,,INCLUDE_CALC_MEMBERS
)
}
)
,Hierarchize
(
{
DrillDownLevel
(
{[Product].[Product Name].[All Products]}
,,,INCLUDE_CALC_MEMBERS
)
}
)
)
DIMENSION PROPERTIES
PARENT_UNIQUE_NAME
,HIERARCHY_UNIQUE_NAME
,[Product].[Product Name].[Product Name].[Product Available For Sale Date]
,[Product].[Product Name].[Product Name].[Product Brand Name]
,[Product].[Product Name].[Product Name].[Product Category Description]
,[Product].[Product Name].[Product Name].[Product Category Label]
,[Product].[Product Name].[Product Name].[Product Class]
,[Product].[Product Name].[Product Name].[Product Color]
,[Product].[Product Name].[Product Name].[Product Description]
,[Product].[Product Name].[Product Name].[Product Image URL]
,[Product].[Product Name].[Product Name].[Product Label]
,[Product].[Product Name].[Product Name].[Product Manufacturer]
,[Product].[Product Name].[Product Name].[Product Size Range]
,[Product].[Product Name].[Product Name].[Product Size Unit Measure]
,[Product].[Product Name].[Product Name].[Product Status]
,[Product].[Product Name].[Product Name].[Product Stock Type]
,[Product].[Product Name].[Product Name].[Product Stop Sale Date]
,[Product].[Product Name].[Product Name].[Product Style]
,[Product].[Product Name].[Product Name].[Product Subcategory Description]
,[Product].[Product Name].[Product Name].[Product Subcategory Label]
,[Product].[Product Name].[Product Name].[Product Subcategory Name]
,[Product].[Product Name].[Product Name].[Product Unit Of Measure]
,[Product].[Product Name].[Product Name].[Product URL]
,[Product].[Product Name].[Product Name].[Product Weight Unit Measure]
,[Date].[Date].[Date].[Asia Season]
,[Date].[Date].[Date].[Calendar Month]
,[Date].[Date].[Date].[Calendar Week Day]
,[Date].[Date].[Date].[Date Description]
,[Date].[Date].[Date].[Europe Season]
,[Date].[Date].[Date].[Fiscal Month]
,[Date].[Date].[Date].[Is Work Day]
,[Date].[Date].[Date].[North America Season]
ON ROWS
FROM
[Operation]

CELL PROPERTIES
VALUE
,FORMAT_STRING
,LANGUAGE
,BACK_COLOR
,FORE_COLOR
,FONT_FLAGS;

I then modified my partition to use the unsorted data, redeployed, and reprocessed. Executing the query against the Contoso database with a cold cache returned the following execution time, which I captured with Profiler again:

[Screenshot 3: unsorted partition, query duration on a cold cache]

The query finished in just over 56 seconds. Against a warm cache, the query finished in about 50 seconds.

I once again altered my partition to be query bound to the T-SQL query previously mentioned, redeployed, reprocessed, cleared the cache, and ran my query. This time my query finished executing in 49 seconds!

[Screenshot 5: sorted partition, query duration on a cold cache]

So simply by sorting the data for loading into my partitions, I saved 28% storage space and improved my query’s performance by 11%! Not bad for about 10 minutes worth of work, huh? I conducted my tests several times and each time the results were about the same.

The Tradeoff

It’s not all sunshine and roses. There is a slight drawback you should be aware of, and it has to do with additional time spent processing. Adding the Order By clause to your partition queries means those queries will probably take longer to execute, which adds time to processing. Depending on many factors, that additional processing time could be minimal… or not. You’ll have to decide whether it’s worth the improved compression and query performance.

The Conclusion

The lesson to be learned here is the importance of sorting your data before it is loaded into your partitions. Improving the compression of your partitions simply by sorting the data is an easy way to improve both the storage of your data and query performance.

I’d be interested to see if any of my readers could conduct their own tests and see what kind of performance benefits they see. So if you have a few minutes of your own, try this out and then leave a comment with your results. Good luck!

Thanks For Attending My SQLSat192 Session!

Thanks to all who attended my session MDXplosion! Intro to MDX at SQL Saturday 192 in Tampa over the past weekend! It was a great event and I had a blast presenting, networking, and hanging out with some great friends. And a special thanks to all the sponsors, volunteers, and Pam Shaw for making this great event possible.

If you’d like to download my slide deck and code examples from the presentation, you can get that here. If you have more questions about MDX and want to learn more, you can find my content on MDX here. And, of course, you can email me at dryan (at) pragmaticworks.com or find me on Twitter here.

I’m Speaking At SQL Saturday 192 In Tampa


Coming this 2nd day of March, I will be speaking at SQL Saturday #192 in Tampa, Florida! It’s going to be off the chain no doubt, as the kids say.

I’ll be presenting my session, MDXplosion! Intro To MDX. The MDX query language is a powerful tool for extracting data from your cube but can often seem confusing and overwhelming to the untrained eye. But have no fear! It’s easier than you might think. In this session, we’ll go over the basic MDX query syntax, how to navigate a hierarchy, how to create dynamic time calculations, and how to add calculations to your cube. It’s going to be a blast and I’m really looking forward to it!

If you’re in the area or can make it down, get registered for this event! It’s going to be a great time, and with the opportunity for free training of this quality from industry pros, it’d be crazy to pass on it!

Implementing Security With SSAS

Pragmatic Works just published a video on their YouTube channel put together by yours truly on implementing security in your SQL Server Analysis Services cube.

The video covers implementing basic dimension security and cell security, as well as an extended look at implementing dynamic, data-driven security. The video is about an hour long, so grab a bag of popcorn, sit back, and hopefully learn how to make your cube as secure as if it were guarded by a squad of Segway-riding special forces commandos.

So check out my video Implementing Security With SSAS and feel free to post any questions or comments!

NON EMPTY vs Nonempty(): To the Death!

So what is the difference between NON EMPTY and Nonempty()? I’ve had this question asked several times and have been meaning to blog about it for a little while, and here I am finally getting around to it. So let’s jump right in.

We have two queries we’re going to take a look at in order to better understand the difference between NON EMPTY and Nonempty(). Behold our first very boring query:

SELECT
    ([Date].[Calendar Year].&[2005]) ON 0,
    NON EMPTY
    ([Date].[Calendar Quarter of Year].members) ON 1
FROM [Adventure Works]
WHERE [Measures].[Reseller Sales Amount];

Fig. 1a

Now here are the results:

[Fig. 1b: query results, with Q1 and Q2 excluded]

As you can see, Q1 and Q2 are excluded from the results because the cells are empty. The NON EMPTY keyword essentially says, “After we decide which members go where and which cells are going to be returned, get rid of the empty cells.” If we take a look at the execution tree using MDX Studio, we can see the 2005 partition is the only partition being hit because NON EMPTY is being applied at the top level as the very last step in the query execution. The 0 axis is taken into account before evaluating which cells are empty.

[Fig. 1c: MDX Studio execution tree, showing only the 2005 partition being hit]

Also, it’s important to note that the NON EMPTY keyword can only be used at the axis level. I used it on the 1 axis, but I could have used it on each axis in my query. I must also mention that the Nonempty() function accepts an optional second argument, and it’s very important that you specify it: the second argument is the measure (or set of tuples) against which emptiness is evaluated, and if you leave it off, emptiness is evaluated against whatever measure happens to be in the current context.
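
Here’s a quick sketch of the two forms, using the same quarters and measure as the queries in this post:

// Emptiness is evaluated against Reseller Sales Amount:
NonEmpty([Date].[Calendar Quarter of Year].members, [Measures].[Reseller Sales Amount])

// No second argument: emptiness is evaluated against the current
// context (often the cube's default measure), which may not be
// the measure you intended.
NonEmpty([Date].[Calendar Quarter of Year].members)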

Now let’s take a look at our second query:

SELECT
([Date].[Calendar Year].&[2005]) ON 0,
NONEMPTY([Date].[Calendar Quarter of Year].members,[Measures].[Reseller Sales Amount]) ON 1
FROM [Adventure Works]
WHERE [Measures].[Reseller Sales Amount];

Fig. 2a

This time I’m using the Nonempty() function. Because Nonempty() is in fact a function, we can use it anywhere in the query: on rows, on columns, in sub-selects, or in the Where clause. I just happen to be using it in the set defined on the row axis. Anyway, check out the results:

[Fig. 2b: query results, with Q1 and Q2 appearing as empty cells]

What’s this?! Empty cells! You may be asking yourself, “Self, what gives?”. I’ll give you a hint. Take a look at the query results if we execute the same query across all years rather than just for 2005. Here’s the query:

SELECT
([Date].[Calendar Year].children) ON 0,
NONEMPTY([Date].[Calendar Quarter of Year].members, [Measures].[Reseller Sales Amount]) ON 1
FROM [Adventure Works]
WHERE [Measures].[Reseller Sales Amount];

Fig. 2c

And the results:

[Fig. 2d: query results across all calendar years]

Because there are cells for other years outside of 2005, Nonempty() does not eliminate Q1 and Q2, as seen in Fig. 2b. Nonempty() is not evaluated as the last step the way the NON EMPTY keyword is. The Nonempty() function is evaluated when SSAS determines which members will be included in the axis. So before it knows that the query is limited to 2005, Nonempty() has already determined which cells are going to be excluded and included. In this case, no rows are eliminated. Just take a look at the execution tree:

[Fig. 2e: execution tree, showing all partitions being hit]

We can see all partitions are hit because of the Nonempty() function even though our results only display 2005.

With these facts in mind, it’s important to use the NON EMPTY keyword and the Nonempty() function carefully, because they could get you into trouble. In the case of the query shown above, the NON EMPTY keyword is probably the best bet because only the necessary partition is scanned and fewer cells are evaluated. But what about in the case of the following query?

Here’s the query:

SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS
,{
Filter
(
CrossJoin
(
[Customer].[City].MEMBERS
,[Date].[Calendar].[Date].MEMBERS
)
,
[Measures].[Internet Sales Amount] > 10000
)
} ON ROWS
FROM [Adventure Works];

Fig. 3a

Here’s the count of cells returned:

[Fig. 3b: the count of cells returned]

Should we use the NONEMPTY keyword or the Nonempty() function? Let’s try NONEMPTY first.

SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS
,NON EMPTY
{
Filter
(
CrossJoin
(
[Customer].[City].MEMBERS
,[Date].[Calendar].[Date].MEMBERS
)
,
[Measures].[Internet Sales Amount] > 10000
)
} ON ROWS
FROM [Adventure Works];

Fig. 3c

And the cell count:

[Fig. 3d: cell count, identical to Fig. 3b]

You can see the exact same cell set was returned. In this case, NON EMPTY didn’t do anything for us. That’s because the Filter function still evaluates the empty cells; NON EMPTY has not been applied yet at that point. But let’s try the Nonempty() function. Here’s the query:

SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS
,{
Filter
(
NonEmpty
(
CrossJoin
(
[Customer].[City].MEMBERS
,[Date].[Calendar].[Date].MEMBERS
),
[Measures].[Internet Sales Amount]
)
,
[Measures].[Internet Sales Amount] > 10000
)
} ON ROWS
FROM [Adventure Works];

Fig. 3e

But take a look at the cell count:

[Fig. 3f: cell count, only 40 rows returned]

Only 40 rows this time! In the case of the query in Fig. 3a, the Nonempty() function was the optimum solution. My point is that it’s important to understand the differences between the NON EMPTY keyword and the Nonempty() function and to use them properly. I hope you found this useful.

Conclusions

The bottom line difference between the NON EMPTY keyword and the NonEmpty() function is in when the empty space is evaluated.

NON EMPTY keyword: Empty space is evaluated as the very final step in determining which tuples to return.
NonEmpty() function: Empty space is evaluated when the set is determined in the axis, sub-select or Where clause.

The NonEmpty() function also allows you a little more granular control over how empty space is evaluated by using the second argument in the NonEmpty() function.

Depending on your requirements for the query, you may use both NON EMPTY and NonEmpty() or only one of the two.

Resources

Learn more about the NonEmpty() function.

Learn more about the NON EMPTY keyword and working with empty space.

Feedback?

If you found this post useful, please share it! And if you have any questions, please leave a comment or send me a message on Twitter.

Intro to MDX Session Slide Deck and MDX Script

Thanks to everyone who attended my session at SQL Saturday #130 in Jacksonville, FL a couple weeks back. I apologize for posting this so late, but better late than never. To download my session materials, just click this link. In it you’ll find my PowerPoint slide deck and the MDX script I used in the class. If you have any questions about what we went over or any questions regarding the materials, feel free to leave me a comment or shoot me an email. Thanks again to all those who attended… except the guy who gave me a “Did not meet expectations” on my presentation for “talking about SSAS too much”. Last I checked, SSAS and MDX were kind of related, right?

Intro to MDX at SQL Saturday #130 – Jacksonville, FL

This weekend is SQL Saturday #130 in Jacksonville, FL. On Friday, April 27th, there is a great pre-con session by SQL Server MVP Kevin Kline on Performance Tuning SQL Server 2008 R2. Go here to get registered for this phenomenal opportunity to learn from an unquestioned expert.

Also, I’ll be presenting a session on MDX called MDX 101: An Introduction to MDX. I’m going to be giving an introduction to the multidimensional expressions language used to query SSAS cubes. In this session, we will learn the basic MDX query syntax, how to navigate a hierarchy, how to create dynamic time calculations, and more.

Head here to get registered! There are already 500 people signed up for this awesome event, so if you’re thinking about going, you’d better register now before they lock down registrations!

Top 3 Simplest Ways To Improve Your MDX Query

Learning to write MDX is difficult enough, but learning to write efficient MDX and performance tune an MDX query can be even more of a challenge. With that thought, I wanted to put together a few tips that can help you improve the performance of your MDX calculations.

1. Subdivide your calculations

For example, imagine you have an MDX calculation that looks like this one found in the Adventure Works cube:

Create Member CurrentCube.[Scenario].[Scenario].[Budget Variance]

As Case
        When IsEmpty
             (
                (
                  [Measures].[Amount],
                  [Scenario].[Scenario].[Budget]
                )
             )

        Then Null

        When [Account].[Accounts].CurrentMember.Properties("Account Type") = "Expenditures"
             Or
             [Account].[Accounts].CurrentMember.Properties("Account Type") = "Liabilities"

        Then ( [Measures].[Amount], [Scenario].[Scenario].[Budget] )
             -
             ( [Measures].[Amount], [Scenario].[Scenario].[Actual] )

        Else ( [Measures].[Amount], [Scenario].[Scenario].[Actual] )
             -
             ( [Measures].[Amount], [Scenario].[Scenario].[Budget] )
    End,

Format_String = "Currency";

You’ll notice the expressions ([Measures].[Amount],[Scenario].[Scenario].[Budget]) and ([Measures].[Amount],[Scenario].[Scenario].[Actual]) each appear multiple times in the above calculation. Because part of a calculation cannot be cached on its own, each time one of these expressions appears in the calculation, it has to be recalculated. We can subdivide this calculation into multiple calculations that can be individually cached the first time they are run.

Create Member CurrentCube.[Scenario].[Scenario].[Budget Amount] as
    ( [Measures].[Amount], [Scenario].[Scenario].[Budget] );

Create Member CurrentCube.[Scenario].[Scenario].[Actual Amount] as
    ( [Measures].[Amount], [Scenario].[Scenario].[Actual] );

Create Member CurrentCube.[Scenario].[Scenario].[Budget Variance]

As Case
        When IsEmpty
             (
                [Scenario].[Scenario].[Budget Amount]
             )

        Then Null

        When [Account].[Accounts].CurrentMember.Properties("Account Type") = "Expenditures"
             Or
             [Account].[Accounts].CurrentMember.Properties("Account Type") = "Liabilities"

        Then [Scenario].[Scenario].[Budget Amount]
             -
             [Scenario].[Scenario].[Actual Amount]

        Else [Scenario].[Scenario].[Actual Amount]
             -
             [Scenario].[Scenario].[Budget Amount]
    End,

Format_String = "Currency";

Now after the first time the measures Budget Amount and Actual Amount are calculated, they can be cached instead of having to be recalculated all over again.

2. Replace IIF functions with MDX scripting

If your calculation uses an IIF function to test for a specific location in the cube space, chances are it can be replaced with better performing MDX scripting. Examples:

a. If the Current Member is in a specific level
b. If the Current Member is a certain member
c. If the Current Member has a certain parent

Here we have a calculation from the Adventure Works cube:

CREATE
  MEMBER CurrentCube.[Measures].[Ratio to Parent Product] AS
    IIF(
        [Product].[Product Categories].CurrentMember.Level.Ordinal = 0
      ,1
      ,[Measures].[Sales Amount]
        /
          (
            [Product].[Product Categories].CurrentMember.Parent
           ,[Measures].[Sales Amount]
          ))
   ,Format_String = "Percent"
   ,Associated_Measure_Group = 'Sales Summary' ;

The IIF function is testing for the very top level of the hierarchy. We can rewrite this calculation to eliminate the IIF function:

CREATE
  MEMBER CurrentCube.[Measures].[Ratio to Parent Product] AS
    [Measures].[Sales Amount]
        /
          (
            [Product].[Product Categories].CurrentMember.Parent
           ,[Measures].[Sales Amount]
          )
   ,Format_String = "Percent"
   ,Associated_Measure_Group = 'Sales Summary' ;

SCOPE ([Measures].[Ratio to Parent Product],[Product].[Product Categories]);

    THIS = 1;
    FORMAT_STRING(THIS) = "Percent";

END SCOPE;

By using the SCOPE statement, we can still set the top level of the Product Categories hierarchy to 1 and eliminate the IIF statement.

3. Don’t use Set Aliases in your calculations

A set alias is when you assign a name to a set by creating a named set. Named sets are handy when you must define the same set multiple times. But there’s a catch when using named sets: using a named set in a calculation disables block computation. So take this query, for example:

with set [SE States] AS
{
    [Geography].[State-Province].&[FL]&[US],
    [Geography].[State-Province].&[GA]&[US],
    [Geography].[State-Province].&[SC]&[US],
    [Geography].[State-Province].&[TN]&[US]
}

member [Measures].[SE States Sales] as
    SUM([SE States], [Measures].[Reseller Sales Amount]),
    format_string = "currency"

Select [Measures].[SE States Sales] on 0,
    [Date].[Calendar Year].Members on 1
From [Adventure Works]

This calculation is computed cell by cell because of the named set. If you "Analyze" this query in Mosha’s tool, MDX Studio (which is awesome and you should download now), you will see the warning, "Applying aggregation function Sum over named set [SE States] – this disables block computation mode." Because it’s always possible to remove a named set, we should rewrite this query to use block computation:

with member [Measures].[SE States Sales] as
    SUM(
        {
            [Geography].[State-Province].&[FL]&[US],
            [Geography].[State-Province].&[GA]&[US],
            [Geography].[State-Province].&[SC]&[US],
            [Geography].[State-Province].&[TN]&[US]
        },
        [Measures].[Reseller Sales Amount]),
    format_string = "currency"

Select [Measures].[SE States Sales] on 0,
    [Date].[Calendar Year].Members on 1
From [Adventure Works]

I hope you found these few simple tips useful. They’re simple and easy to implement, but they can save you tons of query time.