Category Archives: SSAS

SQL Saturday #298 Session Material Now Available (SSAS Tabular Models)

Another fantastic SQL Saturday event is in the books and it did not disappoint! There was a great crowd at University of North Florida in Jacksonville, FL and plenty of amazing sessions and speakers. Thank you to everyone that attended my session! I hope you enjoyed the session as much as I enjoyed presenting. It was a blast!

If you’re interested, the slide deck I used during the presentation is now available for your viewing pleasure, which you can see here.

If you have any questions or would like to contact me, you can always send me a message on Twitter or email me here. Again, thank you to all of my attendees!

Learn to Design Tabular Models at SQL Saturday #298 Jacksonville, FL on 5/10/2014

I'm excited to announce that in a couple of Saturdays, on Saturday, May 10th, I'll be speaking at SQL Saturday #298 in Jacksonville, FL! SQL Saturday in Jacksonville always has an awesome turnout and I love speaking at this event! There will be tons of great free training sessions from the likes of Devin Knight, Robert Cain, Andy Warren, Adam Jorgensen, Max Trinidad, and plenty of other great experts! If you're in the North Florida area on May 10, you need to get registered for this event!

The session I'll be delivering is called What the Tabular??? and will start at 1:15 PM EST on Saturday, May 10th! If you're interested in learning how to build a Tabular Model, how to follow Tabular Model design best practices, and how to decide if a Tabular Model is the right solution to your business problem, then my session What the Tabular??? is for you! It's a great beginner session and will feature tons of demos and walkthroughs of the technology!

So if you’re nearby, definitely get signed-up for this awesome event! You won’t regret it!

Do You Know Why Your MDX Query Is Slow?

Performance tuning MDX queries can often be a daunting and challenging task. But the first step in deciding where to begin your efforts to improve the performance of your query is to diagnose the source of the problem. There are two areas that could be causing our performance issue: 1. The design of our SSAS solution or 2. The design of our query. We need to figure out if we’ve written a bad query or designed a bad cube :).

How Do We Test The Query?

Usually an issue is discovered when a user comes to the BI team with a report that appears to be running slowly. So for our example, I have a pivot table in an Excel workbook that is running a little slow. Let's walk through this together and diagnose what the problem could be. Below you can see the pivot table.

image

The first thing you'll need to do is test the query, and the best way to do this is to execute the query in isolation so that we can eliminate outside factors as part of the problem. So in this example I would execute the report outside of business hours. I want to make sure I do this outside of peak use time because one of the things I'll need to do is clear the cache. To prevent SSAS from satisfying our query with the formula engine cache or the storage engine cache, we need to execute a ClearCache command so that cached results don't skew our measurements. To do this, I'll execute the following XMLA script in SSMS.

 

<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <Object>
          <DatabaseID>AdventureWorksDW2012Multidimensional-EE</DatabaseID>
          <CubeID>Adventure Works</CubeID>
     </Object>
</ClearCache>

 

Also, I’ll execute the following MDX to initialize my cube’s MDX script.

 

Select {} on 0 
From [Adventure Works]

For us to figure out how long our query is taking to execute, we're going to fire up SQL Server Profiler and run a trace against SSAS while we execute the query so we can gather all the nitty-gritty details of our query execution. By running a trace, we will be able to see all kinds of really helpful details like the total duration of the query, the partitions being queried, whether aggregations are being used to answer the query, which attributes are being used, and much more.

To begin our trace, go to Tools > SQL Server Profiler in SSMS.

image

Flip the Server type to Analysis Services and set the Server name to your SSAS instance that is home to the SSAS database your report is pointed at.

image

Next you'll see the Trace Properties window. Go to the Events Selection tab, and check the check box near the bottom right of the window to Show all events. Scroll down a little more than half way and find the events "Get Data From Aggregation" and "Query Subcube Verbose." The "Get Data From Aggregation" event is fired when an aggregation is used to satisfy a query. This event is also especially helpful when trying to determine if the aggregations you have designed are actually useful. The "Query Subcube Verbose" event will give you very detailed information on which members from which dimension attributes are being queried to satisfy the query. Click Run when you're done.

image

Now that the trace is running, it's time to conduct our test.

1. The first thing I’ll do is execute the Clear Cache command.

 

<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <Object>
          <DatabaseID>AdventureWorksDW2012Multidimensional-EE</DatabaseID>
          <CubeID>Adventure Works</CubeID>
     </Object>
</ClearCache>

 

2. Initialize the Calculation script in your cube.

 

Select {} on 0 
From [Adventure Works]

3. Execute the report/query. In my case, my report is an Excel pivot table, so I'll simply click the Refresh button in Excel to execute my query.

The report may take a few seconds or minutes to run depending on the query, but it should probably take longer than you’re used to simply because the query is running against an empty cache and SSAS will have to retrieve all of the data from storage.

When the query is finished executing, pause your Trace in SQL Server Profiler by clicking the pause button at the top of the Trace window. Now it's time to take a look at the results. Find the Query End event in the Trace results. Find the Duration column. This number displays the total query duration in milliseconds. In my example, my query took just over 13 seconds to execute. So it's not unbelievably slow, but certainly slower than we'd like.

image

In order to take a more in-depth look at our query's performance, let's save the Trace results to a SQL Server table so we can query it. Go to File > Save As > Trace Table.

image

Specify where you’d like to save the results and click OK.

image

Below you'll see a very useful query that will break down where your query is experiencing a slowdown. If you're using this query for your testing, don't forget to alter the From clause to query your trace table.

 

SELECT x.ConnectionID,
       x.QueryID,
       x.QueryDuration,
       p.SEDuration,
       CASE
            WHEN p.SEDuration > x.QueryDuration THEN NULL
            ELSE x.QueryDuration - p.SEDuration
       END AS FEDuration,
       y.[Number of SE Queries],
       y.[Thread Duration of SE Queries],
       w.[Aggregations Read],
       x.TextData
FROM   (SELECT a.ConnectionID,
               a.Duration AS QueryDuration,
               a.TextData,
               CAST (HashBytes('SHA1', CAST (reverse(CAST (TextData AS VARCHAR (MAX))) AS NVARCHAR (4000))) AS INT) AS QueryID
        FROM   MyTraceTable AS a
        WHERE  a.EventClass = 10) AS x -- Query End Event
       LEFT OUTER JOIN
       (/* Determine Query Subcube Verbose of Non-cache data */
       SELECT   ConnectionID,
                 COUNT(*) AS [Number of SE Queries],
                 SUM(Duration) AS [Thread Duration of SE Queries]
        FROM     MyTraceTable
        WHERE    EventClass = 12 -- Query Subcube Verbose
                 AND EventSubclass = 22 -- Non-cache data
        GROUP BY ConnectionID) AS y
       ON y.ConnectionID = x.ConnectionID
       LEFT OUTER JOIN
       (/* Determine Aggregations that are read from */
       SELECT   ConnectionID,
                 COUNT(*) AS [Aggregations Read]
        FROM     MyTraceTable
        WHERE    EventClass = 60 -- Read from Aggregations
        GROUP BY ConnectionID) AS w
       ON w.ConnectionID = x.ConnectionID
       LEFT OUTER JOIN
       (/* Determine SE time */
       SELECT   ConnectionID,
                 SUM(Duration) AS SEDuration
        FROM     MyTraceTable
        WHERE    EventClass = 11 -- Query SubCube
        GROUP BY ConnectionID) AS p
       ON p.ConnectionID = x.ConnectionID;

 

 

The results here are very telling. The column "QueryDuration" shows us the total execution time of the query. The column "SEDuration" shows us the amount of time SSAS spent pulling the data from storage (Storage Engine). The column "FEDuration" shows how long SSAS spent calculating our query's results (Formula Engine).

image

In this particular example, the vast majority of our query's execution time is spent in the Formula Engine. Of the 13+ seconds spent executing the query, the query spends more than 12 seconds in the Formula Engine and only 297 milliseconds pulling the data from storage. This tells us that the problem is probably not with the design of our SSAS solution, but rather with a poorly written query. Unfortunately, this being an MDX query generated by Excel, there's not a lot we can do about altering the query.

How Do We Fix The Query?

Typically when deciding where to spend your performance tuning efforts you want to start in the area where your query spends more than 30% of its time (if it's a 50/50 split, make an educated decision). In the previous example, we've determined our problem is with the query.

What can I do to improve my MDX query?

If you determine your query's problem is the query itself, ensure SSAS is utilizing subspace computation instead of cell-by-cell computation. SSAS will usually evaluate groups of cells in your cube at a time, but in certain situations SSAS will evaluate cube space one cell at a time. We want to avoid that. You can get a hint that SSAS is calculating your results one cell at a time if the query on our trace table shows a large number of Storage Engine queries. Certain SSAS/MDX functions can disable subspace computation (see the sketch after the list below).

1. Late-binding functions (e.g. the StrToMember and StrToSet functions)
2. Set aliases
3. LookupCube function
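
To make this concrete, here's a minimal sketch against the Adventure Works cube (the member key and measure are illustrative, and each query should be run separately). The first calculation resolves a member from a string with StrToMember, which can push SSAS into cell-by-cell mode; the second expresses the same idea with a direct member reference so subspace computation stays available.

-- Can force cell-by-cell evaluation: the member is resolved from a string
WITH MEMBER [Measures].[Sales 2007] AS
     ( StrToMember("[Date].[Calendar Year].&[2007]"), [Measures].[Internet Sales Amount] )
SELECT { [Measures].[Sales 2007] } ON 0,
       NON EMPTY [Product].[Category].Members ON 1
FROM [Adventure Works];

-- The same calculation with a direct member reference keeps subspace computation available
WITH MEMBER [Measures].[Sales 2007] AS
     ( [Date].[Calendar Year].&[2007], [Measures].[Internet Sales Amount] )
SELECT { [Measures].[Sales 2007] } ON 0,
       NON EMPTY [Product].[Category].Members ON 1
FROM [Adventure Works];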

Also, check out this blog for more info on ways to improve your MDX query.

But what if the problem is my SSAS solution’s design?

If you conduct your test and determine the majority of the query duration is spent pulling the data from storage, there’s a lot to consider when discussing cube design best practices. But here are some brief highlights of things to consider.

1. Can we design aggregations to help our query? Look at your test results and see if aggregations are being used to satisfy the query.
2. Can we implement a partition design strategy that keeps SSAS from having to query larger partitions?
3. Are the right partitions being queried? For example, if your query is asking for data for 2010 and you notice in your trace that the partitions for all the other years are being queried, this could indicate that SSAS is having a hard time figuring out which partition has the correct data. You may need to set the Slice property on your partition (see the short example after this list).
4. Create Attribute relationships
5. Leverage Natural Hierarchies
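
Regarding item 3, the Slice property is just an MDX expression that tells SSAS which data a partition contains. As a minimal sketch, a partition holding only 2010 data (the member key is illustrative) might have its Slice property set to:

[Date].[Calendar Year].&[2010]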

There are many more best practices for cube design, but this is probably a good starting point.

If you’ve found this helpful, share it with a colleague or a friend and leave me a comment. Feel free to leave me a question or feedback in the comments here or send me a note on Twitter! I love discussing new ideas and learning so don’t hesitate!

SSAS Static Named Sets Vs. Dynamic Named Sets

So I'm 95% sure that I blogged about this topic at some point over the last couple of years, but every time I try to find the link to show a class I'm teaching or to show a client, I can never find the darn thing. This is why I'm writing this blog. That, and it's also nice to have a good example of this on hand, which is what we have here.

In SSAS we have the ability to create named sets. A named set is basically an aliased set expression that we can use within our MDX queries. This is very useful if we have a set that is commonly used in our organization's reporting solution.

But there are two types of named sets: static and dynamic. Static and dynamic sets appear very similar but they actually behave very differently, which is why I present to you the following example.

Below you will see a snippet of MDX from my cube script that creates a named set called Top 10 Customers – Static. This is the basic syntax for creating a named set in your cube's MDX script. You'll notice the STATIC keyword, which specifies that we wish this named set to be static. The STATIC keyword is actually optional, because if we leave it out of the create set statement, the set will still be created as a static named set.

CREATE STATIC SET CURRENTCUBE.[Top 10 Customers – Static]
AS TopCount(
     [Customer].[Customer].Children,
     10,
     [Measures].[Internet Sales Amount]
);

The next create set statement creates a dynamic named set called Top 10 Customers – Dynamic, the big difference here obviously being the DYNAMIC keyword, which specifies that this named set should be created as a dynamic named set.

CREATE DYNAMIC SET CURRENTCUBE.[Top 10 Customers – Dynamic]
AS TopCount(
     [Customer].[Customer].Children,
     10,
     [Measures].[Internet Sales Amount]
);

Here you can see the create set statements in my Adventure Works cube script.

image 

And here we can see the two sets as they appear in our cube’s metadata tab in SQL Server Management Studio.

image

Now this is where it gets interesting. Below we have an example of our static named set being used in a query on the row axis. And we can see that it works fine.

image

But what happens if we add a constraint in the Where clause? Uh oh, we run into an issue. The static named set does not respect the Where clause (or a subselect statement in the From clause, for that matter). The named set displays the same members instead of displaying the top 10 customers from the year 2006. This could be a problem for our users depending on the requirements of the reporting solution.

image

image

This could be where a dynamic named set may be more useful. Here you can see an example of a query that uses our dynamic named set.

image

This time, when we provide a constraint in the Where clause, the named set respects the Where clause and displays the correct data. I know the Internet Sales Amount numbers are all the same but that's just the nature of the Adventure Works data.

image

I think this perfectly demonstrates the differences between static and dynamic named sets. Static named sets behave exactly as their name suggests: They are static and do not respect the Where clause or a subselect statement in the From clause. The dynamic named set is dynamic and will listen to a Where clause slicer or a subselect in the From clause.
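
If you want to try this yourself, here's roughly what the two queries from the screenshots above look like (run them one at a time; the year member key is illustrative):

-- The static set ignores the slicer and returns the same overall top 10
SELECT { [Measures].[Internet Sales Amount] } ON 0,
       [Top 10 Customers – Static] ON 1
FROM [Adventure Works]
WHERE ( [Date].[Calendar Year].&[2006] );

-- The dynamic set is re-evaluated in the context of the slicer and returns the top 10 for 2006
SELECT { [Measures].[Internet Sales Amount] } ON 0,
       [Top 10 Customers – Dynamic] ON 1
FROM [Adventure Works]
WHERE ( [Date].[Calendar Year].&[2006] );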

If this is all a little overwhelming to you and anytime someone mentions using MDX you curl up into the fetal position, suck your thumb, and sob uncontrollably, I would suggest taking a look at the BI xPress calculation builder. BI xPress has a nifty little wizard that will help you create MDX calculations and named sets without you having to do any of the tough MDX writing on your own.

To create a named set with the BI xPress calculation builder, click the little calculate icon in the Calculations tab of the cube designer in BIDS or SSDT. This will open up the MDX Calculation Builder Wizard part of BI xPress.

image

Choose the Top 10 Count template under the Sets folder and click Next.

image

On the next screen we can pick the attribute required for our set. In this case, I’ll select the Customer attribute of the Customer dimension in order to create the Top 10 Customers set we were playing with earlier.

image 

Then we select the measure we want to use to rank our customers. I’m selecting the Internet Sales Amount measure.

image 

Lastly we give our named set a name and click finish. On this screen we can preview the MDX the BI xPress MDX Calculation Builder wrote for us.

image 

And we’re done!

image 

The BI xPress MDX Calculation Builder wrote all the MDX for us without us having to know a lick of MDX! Pretty nifty if I do say so myself. For more information on BI xPress or the BI xPress MDX Calculation Builder, head over to PragmaticWorks.com and download the free trial of BI xPress.

And if you have any questions or comments, please feel free to leave a comment or shout out on Twitter @SQLDusty! Thanks!

Recording Now Available For The Webinar, Choosing The Right Analysis Services: MOLAP Vs. Tabular

image

Thanks to everyone that attended Devin’s and my webinar called Choosing The Right Analysis Services: MOLAP vs. Tabular. I’m pleased to announce that the recording is now available to watch for free over at PragmaticWorks.com, so please go check it out. It’s a little less than an hour so you can watch it during your lunch break.

The PowerPoint slide deck Devin and I used during the webinar is also available for viewing now! Please visit this link to download the slide deck.

Now for the questions! Many of you asked some great questions but unfortunately we ran out of time to answer all of the questions during the webinar. So here are a few of the questions we didn’t get to.

Q: How do I link if column have more than one column is key column in tabular?
A: If you need to create a composite key in a Tabular model table, you will need to create a calculated column that concatenates the columns that make up your composite key. You'll need to do this in both tables you wish to relate. Once you've done that, you can create the relationship between the two tables using your new columns.
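
As a minimal sketch, the DAX for such a calculated column (the column names are hypothetical) could look like the formula below, added to both tables and then used to create the relationship:

// Hypothetical composite key column added to both tables
= [SalesOrderNumber] & "|" & [SalesOrderLineNumber]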

Q: Can DAX be used to access cubes?
A: In the SQL 2012 SP1 CU4 release, DAX support for multidimensional cubes was added, so as long as you are running on SQL 2012 SP1 CU4 or later, you should be able to query cubes with DAX expressions. On a side note, MDX can also be used to query a Tabular model.

Q: Since tablular solution is many ways better than Muti Dimensional..then my question is when to go for Multi dimensional solution
A: This is one we covered extensively during the webinar. Here are some of the things to consider:

  1. How much data are you dealing with? If it's too much to fit into memory for your Tabular model, then MOLAP is the way to go.
  2. Do you have a need for complex relationships? If so, MOLAP may be the answer. Role playing dimensions and many-to-many relationships are possible to create in a Tabular model, but they’re easier to create and manage in a MOLAP cube.
  3. Do you need to perform many complex calculations involving complex Scope assignments? If so, MOLAP is the answer here.

Q: Can you use a Multidimensional database as the source for a Tabular model and improve performance when creating low level granular reports?? This goes back to the performance differences between Multidimensional vs Tabular when creating granular reports.
A: You can use a Multidimensional database as a data source for a Tabular model, but I would suggest getting the data from the original source for the tabular model. If granular type queries are slow against your cube, those same queries are still going to be slow when you execute them to process your Tabular model.

Thanks to everyone that attended Devin's and my webinar! If you have any other questions, please feel free to leave a comment or send me a message on Twitter!

SSAS Lessons Learned: 29% Better Compression and 11% Better Query Performance

The Importance of Sort Order

This past week I taught the SSAS Masters class, which is one of the virtual training classes offered by Pragmatic Works. One of the things we discuss in the class is the importance of sorting the fact data in your data warehouse in preparation for Analysis Services. Simply by sorting your fact data, you can see much improved compression, which can improve your query response time as well. But how much improvement in compression and query response could you see? Well, that's what I set out to discover by running a couple of little tests.

Better Compression? Yes, please.

I started with my beloved Fact Sales measure group in the Contoso Retail Operations cube. The Fact Sales measure group utilizes a named query in the DSV that is a basic select statement from the Fact Sales table in the Contoso Retail database. I checked the size of the single partition that made up the measure group and saw that it was just over 129 MB in size. Not big, but I thought we could improve that.

1 Partition Properties Fact Sales Unsorted

So I set out to sort my data. The trick to sorting your data is to start with your top three fields with the lowest cardinality (or uniqueness). Try experimenting with different sort orders to see what kind of results you can get. For the FactSales table, I started with PromotionKey, CurrencyKey, and ChannelKey and then went from there. I simply set my partition to Query Bound and utilized the following query:

SELECT TOP 2147483647
       CONVERT (INT, CONVERT (CHAR (8), DateKey, 112)) AS DateKey,
       ChannelKey,
       StoreKey,
       ProductKey,
       PromotionKey,
       CurrencyKey,
       UnitCost,
       UnitPrice,
       SalesQuantity,
       ReturnQuantity,
       ReturnAmount,
       DiscountQuantity,
       DiscountAmount,
       TotalCost,
       SalesAmount,
       ETLLoadID,
       LoadDate,
       UpdateDate
FROM     dbo.FactSales
ORDER BY PromotionKey, CurrencyKey, ChannelKey, StoreKey, ProductKey, DateKey;

Arguably there are better ways to sort the data for SSAS but that’s not the point of this blog post so I’ll leave that for you to decide.

I did a quick redeployment of the cube and processed the Fact Sales measure group.

2 Partition Properties Fact Sales Sorted

The partition size dropped to 93.11 MB! That's a whopping 28% decrease in size! Awesome!

28% is a pretty big storage savings, especially when we could potentially be dealing with a lot more data in an enterprise scenario. Personally, I’ve seen storage savings up to 45% simply by sorting the data in the relational engine.

Better Query Performance? Sign me up!

With smaller fact .data files, we should see better query performance, right? I mean, theoretically it makes sense, but I was curious about how much better query performance we would see. So I set out with another little experiment.

First, I used Excel to generate a nasty little MDX query for my testing, which I captured with Profiler:

SELECT
{
[Measures].[Sales Amount]
,[Measures].[Sales Quantity]
,[Measures].[Sales Unit Cost]
}
DIMENSION PROPERTIES
PARENT_UNIQUE_NAME
,HIERARCHY_UNIQUE_NAME
ON COLUMNS
,NON EMPTY
CrossJoin
(
Hierarchize
(
{
DrillDownLevel
(
{[Date].[Calendar Week].[All Date]}
,,,INCLUDE_CALC_MEMBERS
)
}
)
,Hierarchize
(
{
DrillDownLevel
(
{[Product].[Product Name].[All Products]}
,,,INCLUDE_CALC_MEMBERS
)
}
)
)
DIMENSION PROPERTIES
PARENT_UNIQUE_NAME
,HIERARCHY_UNIQUE_NAME
,[Product].[Product Name].[Product Name].[Product Available For Sale Date]
,[Product].[Product Name].[Product Name].[Product Brand Name]
,[Product].[Product Name].[Product Name].[Product Category Description]
,[Product].[Product Name].[Product Name].[Product Category Label]
,[Product].[Product Name].[Product Name].[Product Class]
,[Product].[Product Name].[Product Name].[Product Color]
,[Product].[Product Name].[Product Name].[Product Description]
,[Product].[Product Name].[Product Name].[Product Image URL]
,[Product].[Product Name].[Product Name].[Product Label]
,[Product].[Product Name].[Product Name].[Product Manufacturer]
,[Product].[Product Name].[Product Name].[Product Size Range]
,[Product].[Product Name].[Product Name].[Product Size Unit Measure]
,[Product].[Product Name].[Product Name].[Product Status]
,[Product].[Product Name].[Product Name].[Product Stock Type]
,[Product].[Product Name].[Product Name].[Product Stop Sale Date]
,[Product].[Product Name].[Product Name].[Product Style]
,[Product].[Product Name].[Product Name].[Product Subcategory Description]
,[Product].[Product Name].[Product Name].[Product Subcategory Label]
,[Product].[Product Name].[Product Name].[Product Subcategory Name]
,[Product].[Product Name].[Product Name].[Product Unit Of Measure]
,[Product].[Product Name].[Product Name].[Product URL]
,[Product].[Product Name].[Product Name].[Product Weight Unit Measure]
,[Date].[Date].[Date].[Asia Season]
,[Date].[Date].[Date].[Calendar Month]
,[Date].[Date].[Date].[Calendar Week Day]
,[Date].[Date].[Date].[Date Description]
,[Date].[Date].[Date].[Europe Season]
,[Date].[Date].[Date].[Fiscal Month]
,[Date].[Date].[Date].[Is Work Day]
,[Date].[Date].[Date].[North America Season]
ON ROWS
FROM
[Operation]

CELL PROPERTIES
VALUE
,FORMAT_STRING
,LANGUAGE
,BACK_COLOR
,FORE_COLOR
,FONT_FLAGS;

I then modified my partition to use the unsorted data, redeployed, and reprocessed. Executing the query against the Contoso database with a cold cache returned the following execution time, which I captured with Profiler again:

3 Unsorted Query Duration Cold Cache

The query finished in just over 56 seconds. Against a warm cache, the query finished in about 50 seconds.

I once again altered my partition to be query bound to the TSQL query previously mentioned, redeployed, reprocessed, cleared the cache, and ran my query. This time my query finished executing in 49 seconds!

5 Sorted Query Duration Cold Cache

So simply by sorting the data for loading into my partitions, I saved 28% storage space and improved my query’s performance by 11%! Not bad for about 10 minutes worth of work, huh? I conducted my tests several times and each time the results were about the same.

The Tradeoff

It's not all sunshine and roses. There is a slight drawback that you should be aware of, and it has to do with additional time spent processing. By adding the Order By clause to the queries for your partitions, those queries will probably take longer to execute, which is going to add time to processing. Depending on many factors, the additional time spent processing could be minimal… or not. You'll have to decide if the additional processing time is worth the improved compression and query performance.

The Conclusion

The lesson to be learned here is the importance of sorting the data you load into your partitions. Simply sorting your data improves the compression of your partitions, which is an easy way to reduce the storage footprint of your data as well as improve query performance.

I’d be interested to see if any of my readers could conduct their own tests and see what kind of performance benefits they see. So if you have a few minutes of your own, try this out and then leave a comment with your results. Good luck!

Building Your First Tabular Model Webinar Recording and Questionnaire Now Available

Thanks to everyone who attended my webinar on the ins and outs and the basics of building your first Tabular Model. I hope everyone learned a little something and maybe picked up a couple tips or tricks along the way.

 Click Here To View The Recording

And thanks to everyone who had questions! I apologize for not being able to get to the questions during the webinar, so here are answers to a few of the questions posted in the chat window.

Question: “Don’t use joins” – that includes views that have joins under the covers, right?
Answer: That’s correct. That’s still going to create additional stress on the relational engine at query time and the last thing you want is to run into a locking/blocking issue during processing because you have complex queries behind the tables in your Tabular Model. If you need to join to other tables to pick up other columns, I would suggest getting with your ETL people and materializing the desired columns in the table so you can eliminate the joins.
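
As a rough sketch of what materializing a column might look like (the table and column names here are purely illustrative, not a prescription), the ETL could land the extra column on the fact table itself instead of relying on a view with joins:

-- Add the column to the fact table once (names are illustrative)
ALTER TABLE dbo.FactSales ADD ProductColor NVARCHAR(20) NULL;

-- Populate it during the ETL load so the Tabular table reads a single, join-free table
UPDATE f
SET    f.ProductColor = p.Color
FROM   dbo.FactSales AS f
       JOIN dbo.DimProduct AS p
         ON p.ProductKey = f.ProductKey;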

Question: Could you please explain what difference of perspective and role?
Answer: A Role is used to define member permissions for your model. The Role defines what actions a user can perform on the model, such as read or process. A Perspective is a viewable subset of the model. A Perspective is similar to a SQL Server View on a Table. It can be used to display a subset of the model to a business unit in order to make it a little more simple to navigate through the model.

Question: What is the difference between Active & Inactive relationships?
Answer: In a Tabular Model, multiple relationships can exist between two tables, but to create accurate calculations, we need a single path from one table to another. This means that even though multiple relationships exist between the two tables, only one will be actively used. The inactive relationships between the tables will not be used.
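
As an illustration (the table and column names are hypothetical), an inactive relationship can still be invoked for a specific calculation with the DAX USERELATIONSHIP function:

// Hypothetical measure that activates an otherwise inactive relationship for one calculation
Sales by Ship Date :=
CALCULATE (
    SUM ( FactInternetSales[SalesAmount] ),
    USERELATIONSHIP ( FactInternetSales[ShipDateKey], DimDate[DateKey] )
)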

Question: Can a tabular “object” once built be put in source control and deployed by IT like SSIS packages and SSAS cubes?
Answer: Yes, Tabular Models can be deployed using the SSAS Deployment Wizard. The Create Script can also be generated from a deployed copy of the database and used to deploy to another server. For more information on SSAS Tabular deployment methods, see here.

Question: is there a way to change the database name and remove the ugly guid?
Answer:
The guid is only appended to the database on the workspace server. The database can be deployed with whatever name you like. You can set the database name by going to the project properties and setting the Database property. Whatever you enter as the Database name will be the name of the database once you deploy it, and it will not include the yucky guid, unless your company standards are to use the guid :).
image

Questions: Does Tabular model have superior performance over OLAP based cubes? Also can Tabular model be used for time series data?
Answers:
I would say generally speaking that a Tabular Model will have much faster query response on average, which is definitely one of the strengths of an in-memory database. OLAP cubes have the ability to contain much more data and can scale out better but in my experience queries against Tabular Models are usually much faster than queries against Multidimensional databases. Tabular Models also allow for time calculations. For more information on the kinds of time intelligence functions and calculations you can create with DAX, check this out.
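
As a small taste of the time intelligence available in DAX, a year-to-date measure (the table and column names are illustrative) might look like this:

// A minimal year-to-date sketch using TOTALYTD
Sales YTD :=
TOTALYTD (
    SUM ( FactInternetSales[SalesAmount] ),
    DimDate[FullDateAlternateKey]
)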

Once again, thank you so much to everyone who attended my webinar. If I didn’t get to answer your question, I apologize. Feel free to post your question in the comments or on the forums of BIDN.com.

And don’t forget to follow me on Twitter!

Performance Tuning of Tabular Models in SSAS 2012 Whitepaper Now Available

Via MSDN, there's now a great whitepaper called Performance Tuning of Tabular Models in SSAS 2012 available for your viewing pleasure. It's a treasure trove of great information, and I highly recommend that anyone developing or reporting on Tabular Models take a look. So definitely take a moment to download and read this whitepaper.

While you download this historic piece of literature, here are three little tidbits of knowledge I picked up after my initial skim-through:

Partitions Don’t Help Query Performance

Partitions do not improve processing time and/or query time. In Tabular Models, partitioning tables in the model only serves to allow the administrator to selectively refresh smaller subsets of data, as is the case with an incremental load of a fact table. If your fact table is incrementally loaded, you can save processing time and make your processing window by only processing the affected partitions, but partitioning alone will not improve processing time if you're still processing all partitions.

Partitions in a table are processed serially, unlike partitions in a measure group of a Multidimensional database, which are processed in parallel. In a Tabular Model, however, since each table is independent of the others, tables can be processed in parallel even if partitions within a given table aren't.

Unlike in a Multidimensional Database, dimension tables in a Tabular Model can be partitioned. This opens the door for incremental processing of those dimensions as well as some unique partitioning strategies for those dimensions.

Memory Is Crucial, Duh!

Obviously with a Tabular Model you need to be able to fit the model in memory to fully utilize the magical power that is Tabular. But if you have a 10 GB model, how much memory do you need? The correct answer is about 30 GB of memory. Why 30 GB you ask? During a Process Full of your Tabular Model, the database is kept online until the transaction for the processing operation is committed. That means that for the given 10 GB model, you need to be able to hold two copies in memory: 10 GB for the old data and 10 GB for the new data. Then you'll likely need around 5 GB – 10 GB for various processing overheads. So keep in mind that you could need significantly more memory than you might think is necessary to support a single Tabular Model.

Table Queries Are Unaltered During Processing

In a Multidimensional database, the heavy lifting for dimensions is transferred from SSAS to the relational database by executing "Select Distinct" statements for each attribute. For measure groups in a Multidimensional database, the query is wrapped in a subselect with only the necessary columns returned. This also means that using CTEs, Order By statements, and procedures is not an option in a Multidimensional database. In a Tabular Model, however, the query for the table is unaltered, which means that using CTEs, procedures, Order Bys, and various other T-SQL features is now possible. But this also means that the responsibility for returning only the required columns in the query is now the developer's. Without removing the unnecessary columns from the query, processing will be less efficient and could take longer.

Like I said earlier, this is a great whitepaper and I highly recommend that you check it out!

The Whaty-What of Data Mining Algorithms in SSAS

SQL Server Analysis Services features nine different data mining algorithms that look for specific types of patterns and trends in order to make predictions about your data. This is a potentially very powerful tool, and since I've been learning more about data mining recently, I figured I'd put together a little bit of the information and research I've done on these algorithms for my own reference as well as for the benefit of others.

Below you will see a list of the Data Mining Algorithms included in SSAS 2008 to 2012. I've included the type of the algorithm, what it does, and an example or two of when one might decide it's an appropriate algorithm for your data and requirements. I've also included links to TechNet for more information.

Decision Tree
Type: Classification

What: This is probably the most popular data mining algorithm simply because the results are very easy to understand. Decision Tree tries to predict the value of a column or columns based on the relationships between the columns you have identified. Decision Tree also determines which input columns are most highly correlated to your prediction column(s).
When: Use the Decision Tree algorithm when you want to try to predict if a customer will buy your product or what characteristics may make a person a potentially good customer. For example, if you want to determine which customers you should send coupons to, use Decision Tree to determine if a person has the qualities and characteristics of past customers.

Linear Regression
Type: Regression

What: This algorithm is used to predict continuous variables using continuous input variables. Linear Regression is a simpler version of Decision Tree without the splits in the tree.
When: Use the Linear Regression algorithm when you want to compute a trend for your sales data.

Naïve Bayes
Type: Classification

What: The Naïve Bayes algorithm is based on Bayes' theorem and is good for performing predictive analytics. This algorithm calculates a probability based on input columns you define.
When: An example of when you might use the Naïve Bayes algorithm would be when you want to predict how likely a customer is to respond to an email blast, or how likely a patient is to get sick or contract a disease based on their demographic information.

Neural Network
Type: Classification/Regression

What: The Neural Network algorithm may be one of the least used data mining algorithms because it is the most difficult to interpret. Basically Neural Network combines each possible input attribute value with each predictable attribute value to determine probabilities. These probabilities can be used for classification or regression, but you may have a difficult time determining how the algorithm reached a particular conclusion.
When: Use this when you want to analyze the relationships between many complex inputs to determine a certain output, such as predicting stock movements or currency fluctuations.

Logistic Regression
Type: Regression

What: The Logistic Regression algorithm is a statistical method for determining the contribution of specified inputs to a particular set of outcomes. This algorithm is similar to Neural Network in the way that it models the relationships between the various inputs.
When: An example of when you might use this algorithm would be to predict what characteristics make a customer a repeat customer or if a convicted criminal is likely to become a repeat offender.

Clustering
Type: Segmentation

What: The Clustering algorithm is probably right behind Decision Tree among the most frequently used data mining algorithms, simply because, like Decision Tree, it is also very easy to understand. The Clustering algorithm groups cases in a data set, using the input columns, into groups or clusters of cases with similar characteristics.
When: This algorithm is great for detecting fraud or anomalies in your data because it is very easy to see which data does not fit into a cluster.

Sequence Clustering
Type: Sequence Analysis

What: Sequence Clustering is similar to Clustering except that instead of looking for clusters based on the similarity of characteristics, the clusters are based on sequences of events. The algorithm groups together cases that contain similar sequences of events.
When: An example of when you might use this algorithm would be when you want to determine which sequence of events are likely to lead to hardware failure in your environment.

Association Rules
Type: Association

What: The Association Rules algorithm defines combinations of items in a set by scanning the data in the specified inputs and identifying frequent combinations of items.
When: Use this algorithm when you want to identify opportunities for cross selling. For example, the Association Rules may determine that if a customer purchases milk there may be an opportunity to market cookies to the customer.

Time Series
Type: Regression

What: The Time Series algorithm is useful in predicting or forecasting continuous variables over time, such as sales metrics. Time Series does not require additional input data to make predictions about the future; it makes predictions a set number of units into the future, such as 3 or 5 periods.
When: Use the Time Series algorithm when you want to predict sales for the next 3 or 5 months.

Gotchas With Referenced Dimensions

image

A referenced dimension relationship allows us to create a relationship between a measure group and a dimension that is indirectly related to the measure group using an intermediate dimension. This is great, because we can now create relationships between our facts and dimensions that may not be directly related. Unfortunately, there are a couple of gotchas that you may not be aware of, which I read about here and here. I’m just offering my take on these issues, so thanks to Alberto and Jesse for the great content.

You can see several examples of referenced relationship types in the Adventure Works cube. One of those examples is the relationship between the Sales Target measure group and the Sales Territory dimension.

image

If you’ve ever created a referenced dimension relationship, you know that the Materialize option is checked by default. So one might assume this is the best option to go with.

To Materialize?
image

But before we can decide if the option to materialize the relationship is best, we must first understand what the materialize option really does. When we check the materialize option, the link between the referenced dimension and the facts is materialized. During processing of the measure group, a join is made to the intermediate dimension to look up the referenced dimension's key value. This leads to slower cube processing but better cube query performance.

There is, however, an important thing to realize: The materialized relationship link between the facts and the correct attribute members is created during the processing of the partition, not the dimension! Why is this important? Imagine we need to update the relationships between the facts and the referenced dimension. We can no longer simply process the dimension. The link to the older referenced dimension attribute member still exists in the partition! The partition must be processed! This could be a huge issue if your cube is very large and you’re only planning to process the most recent partition. If the relationships changed in the referenced dimension, you could find yourself forced to process more than just the most recent partition in order to ensure the relationships between the facts and the referenced dimension are correct.

To Not Materialize?
image

So what's the other option? We could leave the option to materialize the relationship unchecked. The downside to this is that query performance will suffer since the member will need to be looked up on the fly. The upside is that processing will be faster, and we no longer need to process the partition to update the relationship since it's looked up at query time.

But there is also a gotcha with using un-materialized referenced dimension relationships. Aggregations cannot be created for a referenced dimension when the relationship is not materialized. If you run the Aggregation wizard, you'll notice that any referenced dimensions with un-materialized relationships will be left out. This is by design. An aggregation should not be created (which is possible to do manually) for an un-materialized referenced dimension relationship because the relationship is resolved at query time.

What About the TSQL Query?

But what happens with the TSQL query during processing of the measure group? Here’s an example in the Adventure Works cube that we can look at:

image

This referenced relationship on the Reseller Sales measure group is materialized. When the Reseller Sales measure group is processed, a join is made to the intermediate dimension to look up the Geography Key, which you can see here:

image

This join allows SSAS to add the referenced key to the measure group providing better MDX query performance for the users. But if we’re dealing with a significant amount of data in the fact table and a large intermediate dimension, the processing performance impact could be very significant.
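
To give a feel for it, the processing query has roughly the shape sketched below (this is an illustration, not the exact statement SSAS generates, and the column list is trimmed):

-- The fact rows are joined to the intermediate Reseller dimension to pick up GeographyKey
SELECT f.SalesAmount,          -- plus the other measure and granularity columns
       f.ResellerKey,
       r.GeographyKey          -- the referenced key being materialized
FROM   dbo.FactResellerSales AS f
       JOIN dbo.DimReseller AS r
         ON r.ResellerKey = f.ResellerKey;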

But what if I de-materialize the relationship? The good news is that the TSQL query used to load the measure group will perform better because we’re no longer making a join to the intermediate dimension:

image

The bad news is that during executions of MDX queries, SSAS will need to look up the key value for the referenced dimension leading to slower MDX query performance.

The Conclusion

My personal recommendation would be to add the referenced key to the fact table so that you can eliminate the dilemma altogether. I've personally seen referenced relationships in a multi-TB cube destroy processing performance. The resolution for this particular client's issue was to add the referenced key to the fact tables and create regular relationships between the measure groups and the referenced dimensions. After implementing the fix, processing for the partitions was very fast.

If for some reason you can't add the referenced key to the fact table and you must decide between materializing the relationship or not, of course the answer is "It depends". If your cube is extremely large and features an incremental processing strategy, I would recommend not materializing the relationship. This will prevent you from having to process the entire cube should something in the referenced dimension change. You'll just have to live with the somewhat slower MDX query performance.

If, however, your cube is relatively small or your data warehouse is completely reloaded each night, I would probably recommend materializing the relationships. The increased time to process won’t be that big of a deal since the cube is small and your query times will benefit because of the materialized relationship. You’ll also be able to create aggregations since the relationship is materialized. Of course, the “it depends” applies differently to every situation.

Feedback?

I hope you found this useful! If you did, please share it!

Also, if you have any questions or feedback, please leave a comment below! Thanks for reading!