Category Archives: SSAS

PowerPivot 101 Recording Now Available

Thanks to everyone who attended my PowerPivot 101 webinar last Tuesday morning! It was a blast presenting for the 300-strong crowd!

In case you missed the webinar, you can easily view the recording here free of charge! Also, don’t forget to check out the dozens of other free webinar recordings. With this kind of free training available, there really is no excuse for you to not be learning something new!

If you’re looking for a great book to get you started with PowerPivot and DAX, I’d suggest checking out one of the following:

PowerPivot for Excel 2010: Give Your Data Meaning
Marco Russo & Alberto Ferrari

Practical PowerPivot & DAX Formulas for Excel 2010
Art Tennick

Also, check back for my PowerPivot Q&A responses in a later blog post! Thanks again to everyone who attended my session!

Implementing Security With SSAS

Pragmatic Works just published a video on their YouTube channel, put together by yours truly, on implementing security in your SQL Server Analysis Services cube.

The video covers basic dimensional security and cell security, as well as an extended look at implementing dynamic, data-driven security. The video is about an hour long, so grab a bag of popcorn, sit back, and hopefully learn how to make your cube as secure as if it were guarded by a squad of Segway-riding special forces commandos.
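To give a little taste of the dynamic approach, here is a minimal sketch of the kind of allowed-member-set expression you might put on a role. The [Security].[User] dimension and the “Bridge User Region” measure group are assumed names for illustration, not necessarily what the video uses:

-- Allowed member set for a role on [Geography].[Region]:
-- return only the regions mapped to the connected user
-- through a bridge measure group (assumed names, sketch only)
Exists
(
    [Geography].[Region].[Region].Members,
    StrToMember("[Security].[User].[Login].&[" + UserName + "]"),
    "Bridge User Region"
)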

So check out my video Implementing Security With SSAS and feel free to post any questions or comments!

NON EMPTY vs Nonempty(): To the Death!

So what is the difference between NON EMPTY and Nonempty()? I’ve been asked this question several times and have been meaning to blog about it for a while, so here I am finally getting around to it. Let’s jump right in.

We’re going to take a look at two queries in order to better understand the difference between NON EMPTY and Nonempty(). Behold our first very boring query:

SELECT
([Date].[Calendar Year].&[2005]) ON 0,
NON EMPTY
([Date].[Calendar Quarter of Year].members) ON 1
FROM [Adventure Works]
WHERE [Measures].[Reseller Sales Amount];

Fig. 1a

Now here are the results:

[Image: query results, showing only Q3 and Q4 of CY 2005]

Fig. 1b

As you can see, Q1 and Q2 are excluded from the results because the cells are empty. The NON EMPTY keyword essentially says, “After we decide which members go where and which cells are going to be returned, get rid of the empty cells.” If we take a look at the execution tree using MDX Studio, we can see the 2005 partition is the only partition being hit because NON EMPTY is being applied at the top level as the very last step in the query execution. The 0 axis is taken into account before evaluating which cells are empty.

[Image: MDX Studio execution tree, showing only the 2005 partition being hit]

Fig. 1c

Also, it’s important to note that the NON EMPTY keyword can only be used at the axis level. I used it on the 1 axis, but I could have used it on each axis in my query (see the sketch below). I must also mention that the Nonempty() function accepts a second argument; even though the function doesn’t require it, it’s very important that you specify it.
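Here’s that sketch: a variation of the query in Fig. 1a that applies NON EMPTY to both axes. Nothing new beyond what we’ve already seen, just the keyword repeated per axis:

SELECT
NON EMPTY ([Date].[Calendar Year].&[2005]) ON 0,
NON EMPTY ([Date].[Calendar Quarter of Year].members) ON 1
FROM [Adventure Works]
WHERE [Measures].[Reseller Sales Amount];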

Now let’s take a look at our second query:

SELECT
([Date].[Calendar Year].&[2005]) ON 0,
NONEMPTY([Date].[Calendar Quarter of Year].members,[Measures].[Reseller Sales Amount]) ON 1
FROM [Adventure Works]
WHERE [Measures].[Reseller Sales Amount];

Fig. 2a

This time I’m using the Nonempty() function. Because Nonempty() is in fact a function, we can use it anywhere in the query: on rows, on columns, in sub-selects, or in the Where clause. I just happen to be using it in the set defined on the row axis. Anyway, check out the results:

[Image: query results, including empty cells for Q1 and Q2]

Fig. 2b

What’s this?! Empty cells! You may be asking yourself, “Self, what gives?”. I’ll give you a hint. Take a look at the query results if we execute the same query across all years rather than just for 2005. Here’s the query:

SELECT
([Date].[Calendar Year].children) ON 0,
NONEMPTY([Date].[Calendar Quarter of Year].members, [Measures].[Reseller Sales Amount]) ON 1
FROM [Adventure Works]
WHERE [Measures].[Reseller Sales Amount];

Fig. 2c

And the results:

[Image: query results across all years, showing Q1 and Q2 values in years other than 2005]

Fig. 2d

Because there are non-empty cells for years other than 2005, Nonempty() does not eliminate Q1 and Q2, as seen in Fig. 2b. Nonempty() is not evaluated as the last step like the NON EMPTY keyword; the Nonempty() function is evaluated when SSAS determines which members will be included in the axis. So before SSAS knows that the query is limited to 2005, Nonempty() has already determined which cells are going to be excluded and included. In this case, no rows are eliminated. Just take a look at the execution tree:

[Image: MDX Studio execution tree, showing all partitions being hit]

Fig. 2e

We can see all partitions are hit because of the Nonempty() function even though our results only display 2005.

With these facts in mind, it’s important to use the NONEMPTY keyword and the Nonempty() function carefully because they could get you into trouble. In the case of the query shown above, the NONEMPTY keyword is probably the best bet because only the necessary partition is scanned and fewer cells are evaluated. But what about in the case of the following query?

Here’s the query:

SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS
,{
Filter
(
CrossJoin
(
[Customer].[City].MEMBERS
,[Date].[Calendar].[Date].MEMBERS
)
,
[Measures].[Internet Sales Amount] > 10000
)
} ON ROWS
FROM [Adventure Works];

Fig. 3a

Here’s the count of cells returned:

[Image: count of cells returned by the query in Fig. 3a]

Fig. 3b

Should we use the NONEMPTY keyword or the Nonempty() function? Let’s try NONEMPTY first.

SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS
,NON EMPTY
{
Filter
(
CrossJoin
(
[Customer].[City].MEMBERS
,[Date].[Calendar].[Date].MEMBERS
)
,
[Measures].[Internet Sales Amount] > 10000
)
} ON ROWS
FROM [Adventure Works];

Fig. 3c

And the cell count:

[Image: cell count, identical to the count in Fig. 3b]

Fig. 3d

You can see the exact same cell set was returned. In this case, NON EMPTY didn’t do anything for us: the Filter clause still evaluates the empty cells, since NON EMPTY has not been applied yet at that point. So let’s try the Nonempty() function. Here’s the query:

SELECT
{[Measures].[Internet Sales Amount]} ON COLUMNS
,{
Filter
(
NonEmpty
(
CrossJoin
(
[Customer].[City].MEMBERS
,[Date].[Calendar].[Date].MEMBERS
),
[Measures].[Internet Sales Amount]
)
,
[Measures].[Internet Sales Amount] > 10000
)
} ON ROWS
FROM [Adventure Works];

Fig. 3e

But take a look at the cell count:

[Image: cell count, only 40 rows returned]

Fig. 3f

Only 40 rows this time! In the case of the query in Fig. 3a, the Nonempty() function was the optimal solution. My point is that it’s important to understand the differences between the NONEMPTY keyword and the Nonempty() function and to use them properly. I hope you found this useful.

Conclusions

The bottom line difference between the NON EMPTY keyword and the NonEmpty() function is in when the empty space is evaluated.

NON EMPTY keyword: Empty space is evaluated as the very final step in determining which tuples to return.
NonEmpty() function: Empty space is evaluated when the set is determined in the axis, sub-select or Where clause.

The NonEmpty() function also allows you a little more granular control over how empty space is evaluated through its second argument.
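For instance, passing a set of measures as the second argument lets you decide exactly which measures define “empty.” A quick sketch against Adventure Works:

SELECT
([Date].[Calendar Year].&[2005]) ON 0,
NONEMPTY(
    [Date].[Calendar Quarter of Year].members,
    {[Measures].[Reseller Sales Amount], [Measures].[Internet Sales Amount]}
) ON 1
FROM [Adventure Works]
WHERE [Measures].[Reseller Sales Amount];

Here a quarter survives if it is non-empty for either Reseller Sales Amount or Internet Sales Amount, even though only Reseller Sales Amount is displayed.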

Depending on your requirements for the query, you may use both NON EMPTY and NonEmpty() or only one of the two.

Resources

Learn more about the NonEmpty() function.

Learn more about the NON EMPTY keyword and working with empty space.

Feedback?

If you found this post useful, please share it! And if you have any questions, please leave a comment or send me a message on Twitter.

SQL Rally 2012 Recap In Yo’ Face!

This past Thursday and Friday I attended my first PASS SQL Rally event in Dallas, Texas. The week was full of some pretty amazing sessions presented by some even more amazing speakers. If you were a DBA looking for performance tuning tips or other ways to perfect your profession, there were tons of great sessions for you. If you were a developer looking for some great development tips and tricks to make your life easier, there were some really informative sessions given. And if you were a business intelligence developer, like myself, hoping to take your skills to the next level, there was definitely some amazing material to take advantage of.

I focused heavily on the business intelligence space and spent most of my time attending those sessions. I learned some great tips, increased my knowledge in several areas, and also networked a little with some really bright and great all-around people.

So What Sessions Did I Attend And What Did I Learn?

Performance Tuning SSAS Processing with John Welch

The first session I attended was Performance Tuning SSAS Processing, given by John Welch (blog | twitter). There’s no doubt about it: John knows his stuff. I learned a ton from his session. Here are a few of the tips I picked up:

    • Instead of bringing entire tables into your DSV, use views. Views will allow you to tweak the SQL and use query hints to improve the performance of the Select statement.
    • When Analysis Services processes data, it does so one buffer at a time. If AS detects duplicate rows in a buffer, it will aggregate the data. Because AS aggregates duplicate records, you can save space and increase performance by ordering the data in the Select statement in your partitions.
    • Tune SSAS processing in 3 steps:
      • Dimension Processing
      • Partition ProcessData
      • Partition ProcessIndexes
    • Set the AttributeHierarchyOptimizedState property to NotOptimized for attributes that are not used in queries often or that have a high cardinality and a near 1:1 relationship with the key attribute.

The Report Part Library with Jessica Moss

The next session I attended was The Report Part Library with Jessica Moss (blog | twitter). Jessica is an amazing speaker and, without a doubt, an SSRS guru. Her wonderful sense of humor and bubbly personality made the session enjoyable and exciting. We covered how to create report parts, share those report parts across multiple reports, and then update those same report parts. Jessica also covered how to organize and manage a large Report Part Library. It was really great to finally meet Jessica in person.

Stop! Consolidate and Listen! with Jorge Segarra

After lunch, I attended Jorge Segarra’s (blog | twitter) session called “Stop! Consolidate and Listen!”. Jorge covered some of the different methods of consolidation and dug into virtualization. Between Jorge’s sad rap skills and Patrick Leblanc’s even weaker beat boxing, I managed to pick up a few pointers.

Some of the pros of virtualization:

    • Great isolation!
    • Flexibility
      • Snapshotting virtual machines (so if something goes horribly wrong, you can just roll back to the last snapshot)
      • Creating and dropping virtual machines is easy as pie
      • VMs can be migrated across servers without having to be shut down!

Some of the gotchas:

    • Behind the scenes, resources could actually be shared, so make sure you understand the systems your VMs are running on.

Also, check out the free MAP toolkit from Microsoft. Jorge gave a great intro to the tool and how to use it to determine which machines are prime candidates for consolidation and virtualization.

Data Mining with Devin Knight & Adam Jorgensen

The last session I attended was Data Mining with Devin Knight (blog | twitter) and Adam Jorgensen (blog | twitter). Being far less than a master of data mining, I really looked forward to this session.

Data mining does three things:

  • Explores the data
  • Identifies patterns in the data
  • Performs predictions based on those identified patterns

There are several algorithms available for us to use:

  • Decision tree
    • Allows us to see how someone could come to a decision
  • Clustering
    • Measures the distances between the cluster of points on a graph and an outlier
    • This method is often used for fraud detection and data validation
  • Time Series
    • Predicts 3 units (days, months, years, etc.) into the future
    • Based on historical values
  • Sequence Clustering
    • Used to determine somebody’s next step
    • Where will a person click next on a web site?
  • Association Rules
    • Market basket analysis
    • Amazon uses this: people who bought this also bought that
  • Naïve Bayes
    • A data mining algorithm that is literally hundreds of years old
    • A classification algorithm that could be used to determine the differences between customers who buy a product and customers who do not
  • Neural Networks
    • Typically used with FICO scores
    • Takes multiple factors into account but can’t really point to one or two things to tell you how it came to a decision

Zero To Cube: Fast Track To Analytics with Adam Jorgensen & Dustin Ryan

I also had the wonderful pleasure of presenting on SSAS with Adam Jorgensen (blog | twitter). Our session was titled, “Zero To Cube: Fast Track To Analytics”. We had volunteers (some were volunteered by others) drive through creating a cube from the ground up. Within an hour and 15 minutes, we created a connection to our data source, created the data source view, made some enhancements to the data source view, created our cube, built a couple hierarchies and fleshed out the dimensions, and even built a report in Excel with a pivot table and a chart. We had a great, very participative audience and we had a blast! Thank you so much to all those that came to our session!

All in all, this year’s SQL Rally was an amazing event. I learned a lot of new things, made some great friends, and had a great time. If you weren’t able to make the event, you missed out big time. Definitely put next year’s SQL Rally on your calendar and make attending that event a priority. You won’t be sorry.

Intro to MDX Session Slide Deck and MDX Script

Thanks to everyone who attended my session at SQL Saturday #130 in Jacksonville, FL a couple weeks back. I apologize for posting this so late, but better late than never. To download my session materials, just click this link. In it you’ll find my PowerPoint slide deck and the MDX script I used in the class. If you have any questions about what we went over or any questions regarding the materials, feel free to leave me a comment or shoot me an email. Thanks again to all those who attended… except the guy who gave me a “Did not meet expectations” on my presentation for “talking about SSAS too much”. Last I checked, SSAS and MDX were kind of related, right?

Intro to MDX at SQL Saturday #130 – Jacksonville, FL

This weekend is SQL Saturday #130 in Jacksonville, FL. On Friday, April 27th, there’s a great pre-con session by SQL Server MVP Kevin Kline on Performance Tuning SQL Server 2008 R2. Go here to get registered for this phenomenal opportunity to learn from an unquestioned expert.

Also, I’ll be presenting a session on MDX called MDX 101: An Introduction to MDX. I’m going to be giving an introduction to the multidimensional expression language used to query SSAS cubes. In this session, we’ll learn the basic MDX query syntax, how to navigate a hierarchy, how to create dynamic time calculations, and more.

Head here to get registered! There are already 500 people signed up for this awesome event, so if you’re thinking about going you’d better do it now before they lock down registrations!

Create Date Dimension Table Script

On a recent project I created a Date dimension script that I thought I’d share (mostly so I would have access to it from anywhere). I based this script on one written by Mike Davis, but I included a few changes such as first and last day of month flags, an open flag, relative time period attributes, and a few others. Enjoy!

Top 3 Simplest Ways To Improve Your MDX Query

Learning to write MDX is difficult enough, but learning to write efficient MDX and to performance tune an MDX query can be even more of a challenge. With that in mind, I wanted to put together a few tips that can help you improve the performance of your MDX calculations.

1. Subdivide your calculations

For example, imagine you have an MDX calculation that looks like this one found in the Adventure Works cube:

Create Member CurrentCube.[Scenario].[Scenario].[Budget Variance]

As Case
        When IsEmpty
             (
                (
                  [Measures].[Amount],
                  [Scenario].[Scenario].[Budget]
                )
             )

        Then Null

        When [Account].[Accounts].CurrentMember.Properties("Account Type") = "Expenditures"
             Or
             [Account].[Accounts].CurrentMember.Properties("Account Type") = "Liabilities"

        Then ( [Measures].[Amount],[Scenario].[Scenario].[Budget] )
             -
             ( [Measures].[Amount],[Scenario].[Scenario].[Actual] )

        Else ( [Measures].[Amount],[Scenario].[Scenario].[Actual] )
             -
             ( [Measures].[Amount],[Scenario].[Scenario].[Budget] )
    End,

Format_String = "Currency";

You’ll notice the expressions ([Measures].[Amount],[Scenario].[Scenario].[Budget]) and ([Measures].[Amount],[Scenario].[Scenario].[Actual]) appear multiple times in the above calculation. Because a part of a calculation cannot be cached, each time one of these expressions appears in the calculation it has to be recalculated. We can subdivide this calculation into multiple calculations that can be individually cached the first time they are run.

Create Member CurrentCube.[Scenario].[Scenario].[Budget Amount] as
    ( [Measures].[Amount],[Scenario].[Scenario].[Budget] );

Create Member CurrentCube.[Scenario].[Scenario].[Actual Amount] as
    ( [Measures].[Amount],[Scenario].[Scenario].[Actual] );

Create Member CurrentCube.[Scenario].[Scenario].[Budget Variance]

As Case
        When IsEmpty
             (
                [Scenario].[Scenario].[Budget Amount]
             )

        Then Null

        When [Account].[Accounts].CurrentMember.Properties("Account Type") = "Expenditures"
             Or
             [Account].[Accounts].CurrentMember.Properties("Account Type") = "Liabilities"

        Then [Scenario].[Scenario].[Budget Amount]
             -
             [Scenario].[Scenario].[Actual Amount]

        Else [Scenario].[Scenario].[Actual Amount]
             -
             [Scenario].[Scenario].[Budget Amount]
    End,

Format_String = "Currency";

Now after the first time the measures Budget Amount and Actual Amount are calculated, they can be cached instead of having to be recalculated all over again.

2. Replace IIF functions with MDX scripting

If your calculation uses an IIF function to test for a specific location in the cube space, chances are it can be replaced with better-performing MDX scripting. Examples:

a. If the Current Member is at a specific level
b. If the Current Member is a certain member
c. If the Current Member has a certain parent

Here we have a calculation from the Adventure Works cube:

CREATE
  MEMBER CurrentCube.[Measures].[Ratio to Parent Product] AS
    IIF(
        [Product].[Product Categories].CurrentMember.Level.Ordinal = 0
      ,1
      ,[Measures].[Sales Amount]
        /
          (
            [Product].[Product Categories].CurrentMember.Parent
           ,[Measures].[Sales Amount]
          ))
   ,Format_String = "Percent"
   ,Associated_Measure_Group = 'Sales Summary' ;

The IIF function is testing for the very top level of the hierarchy. We can rewrite this calculation to eliminate the IIF function:

CREATE
  MEMBER CurrentCube.[Measures].[Ratio to Parent Product] AS
    [Measures].[Sales Amount]
        /
          (
            [Product].[Product Categories].CurrentMember.Parent
           ,[Measures].[Sales Amount]
          )
   ,Format_String = "Percent"
   ,Associated_Measure_Group = 'Sales Summary' ;
  
SCOPE ([Measures].[Ratio to Parent Product],[Product].[Product Categories]);

    THIS=1;
    FORMAT_STRING(THIS)="Percent";

END SCOPE;

By using the SCOPE statement, we can still set the top level of the Product Categories hierarchy to 1 and eliminate the IIF statement.

3. Don’t use Set Aliases in your calculations

A set alias is when you give a set a name by creating a named set. Named sets are handy when you must define a set multiple times. But there’s a catch when using named sets: using a named set in a calculation disables block computation. So take this query, for example:

with set [SE States] AS

{[Geography].[State-Province].&[FL]&[US],
[Geography].[State-Province].&[GA]&[US],
[Geography].[State-Province].&[SC]&[US],
[Geography].[State-Province].&[TN]&[US]}

member [Measures].[SE States Sales] as

SUM([SE States],[Measures].[Reseller Sales Amount]),
format_string="currency"

Select [Measures].[SE States Sales] on 0,

[Date].[Calendar Year].Members on 1

From [Adventure Works]

This calculation is computed cell by cell because of the named set. If you “Analyze” this query in Mosha’s tool, MDX Studio (which is awesome and you should download now), you will see the warning, “Applying aggregation function Sum over named set [SE States] - this disables block computation mode.” Because it’s always possible to remove a named set, we should rewrite this query to use block computation:

with member [Measures].[SE States Sales] as

SUM({[Geography].[State-Province].&[FL]&[US],
[Geography].[State-Province].&[GA]&[US],
[Geography].[State-Province].&[SC]&[US],
[Geography].[State-Province].&[TN]&[US]},[Measures].[Reseller Sales Amount]),
format_string="currency"

Select [Measures].[SE States Sales] on 0,

[Date].[Calendar Year].Members on 1

From [Adventure Works]

I hope you found these few simple tips useful. They’re simple and easy to implement but can save you tons of query time.

Creating Your First Tabular Model (part 1)

With this post I begin a series of blog posts covering one of the most talked-about features of SQL Server 2012: Tabular Modeling. Being new to this like most of us, I hope to learn a lot about Tabular Modeling as we walk through the basics of building your first Tabular Model. In this installment, we’ll talk about what a Tabular Model is, when a Tabular Model is the right choice, and of course how to create a Tabular Model.

What is a Tabular Model?

With the release of SQL Server 2012, we are (re)introduced to tabular modeling. If you’re familiar with Power Pivot, you’re going to notice many similarities and will most likely pick up the tabular modeling aspect of SSAS pretty quickly. Basically, a Tabular Model is an in-memory database in SQL Server Analysis Services. The VertiPaq engine that was previously only used in Power Pivot is now utilized within Power Pivot and SSAS 2012 Tabular under the name xVelocity. The xVelocity technology allows you to perform complex analytics on your data entirely in memory, making use of columnar storage. This eliminates expensive IO, unlike SSAS Multidimensional modeling where IO is a real concern.

The Tabular Model also allows us to bring together multiple data source types very easily, similar to Power Pivot. Bringing together data stored in a SQL Server database, Oracle, Excel, and Access is not only possible but straightforward.

[Image: available data source types]

Once you’ve imported your data from whatever sources you need, defining relationships is very easy. Simply clicking and dragging an arrow from one object to another is all that is required.

[Image: creating relationships]

When Do I Choose Tabular Over Multidimensional Modeling?

You might be asking yourself, “Self, if Tabular is so fast and great, why would I ever use Multidimensional Modeling?” That’s a valid question, so let’s go over some of the perks of each and when one or the other would be the optimal choice.

1. If you need access to many different external data sources, choose Tabular. Multidimensional can do this to an extent, but if you need to relate an Excel spreadsheet, a text file, an SSRS report feed, and your database data, Tabular is the way to go here.
2. If you need complex calculations, scoping, and named sets, choose Multidimensional.
3. If you need mind-numbing speed and consistently fast query times, choose Tabular.
4. If you need many-to-many relationships, choose Multidimensional. You can model this relationship type in Tabular, but in Multidimensional it is still easier to create and manage these more complex relationships.
5. If you are planning on using Power View, choose Tabular. At this time it’s impossible to build Power View reports against a Multidimensional model, but that could change in the future.
6. If you don’t know DAX and want to use Tabular, either take the time to learn it or use Multidimensional ; ) .
7. If your solution requires complex modeling, choose Multidimensional.

Take these points into consideration when choosing Tabular vs. Multidimensional. This isn’t every consideration, but it should at least get you started in understanding the differences between Tabular and Multidimensional.

How Do I Create a Tabular Model?

So now that we have a general understanding of what the Tabular Model is and some of the scenarios in which we should choose it, let’s start creating our first Tabular Model.

For you to play along with my example, you’re going to need SSAS 2012 installed in Tabular Mode.

[Image: selecting Tabular Mode during installation]

If you’re not sure if your instance of AS is in Tabular Mode, just connect to Analysis Services in SQL Server Management Studio and look at the icons next to your instance of SSAS.

[Image: Tabular and Multidimensional instances in SSMS]

The Tabular SSAS instance has the nifty little blue icon and the Multidimensional instance has the same icon as before in 2008.

You will also need SQL Server Data Tools and the AdventureWorksDW2012 sample database.

So first things first. Open SQL Server Data Tools.

[Image: opening SQL Server Data Tools]

Then go to File, select New, then click Project.

[Image: creating a new project]

Under Business Intelligence, highlight Analysis Services, and select Analysis Services Tabular Project. I’m naming my project FirstTabularProject. Then click OK.

[Image: creating an Analysis Services Tabular Project]

After clicking OK you can see the new project in the Solution Explorer with an empty model, Model.bim.

[Image: Model.bim in the Solution Explorer]

With the project created, your empty model should be open in the Designer Window. So now it’s time to create a connection to our data source(s). In the top left of the menu tool bar, click the Import From Data Source icon.

[Image: Import From Data Source icon]

Then select the type of data source you want to connect to. In my case, I’m connecting to a SQL Server database. Select Microsoft SQL Server and click Next. Specify the Server name, the credentials, and the Database.

[Image: connecting to SQL Server]

After clicking Next, we must specify the Impersonation Information. These are the credentials that Analysis Services will use to access the data source when importing and processing the data. We can either specify specific credentials or tell it to use the AS Service Account. I’m specifying credentials.

[Image: impersonation information]

On the next screen, we need to choose how to import the data. We have two options: we can either select which tables and views we’d like to import from a list, or we can write a query to specify the data to import. I’m selecting from the list of tables.

On the Select Tables and Views screen, you’ll see a list of the tables and views in your database. I can browse through this list and place checks next to all the tables and views I’d like to import. Or I can select a table and then click the Select Related Tables button, which will use the referential integrity of the database to determine which tables to check for you. Be careful clicking Select Related Tables: if you accidentally click the button and the wizard selects 20 other tables, there’s no easy way to unselect the newly selected tables. I’ve selected FactInternetSales and allowed the wizard to select the dimensions based on the referential integrity.

[Image: selecting tables and views]

Important:

Before clicking Finish, you’ll want to make sure that you highlight each table you want to import and then click the Preview & Filter button. This allows you to not only preview the data but also uncheck any fields that you do not wish to import into your model. This matters because the database will be stored in memory: we don’t want to store any data we don’t need. You can see that I’ve gone through the FactInternetSales table and unchecked the fields I don’t want to import.

[Image: unchecking unwanted fields]

After filtering out the unnecessary fields, click Finish, and the data import will begin.

[Image: data import in progress]

Once it is finished, click Close. You’ll notice the data has been imported and is now viewable in the Designer Window.

If after importing your data you decide you need to bring in another table from the same data source, click the Existing Connections icon.

[Image: Existing Connections icon]

Then click Open and you are able to add new tables, views, or named queries to your model.

In the Designer Window we have two views. The Grid view allows us to see the imported data, with each table on an individual tab.

[Image: Grid view]

We can also switch to the Diagram View by clicking the Diagram View icon at the bottom right of the Designer Window. The Diagram View is ideal for viewing all the imported tables and their relationships at one time.

[Image: Diagram View]

So now that we’ve imported our data, we need to add some measures to our model. Switch back to the Grid view and click over to the tab for the fact table, FactInternetSales. Select the first text box in the Measures Grid directly below the Sales Amount field. If the Measures Grid is not visible, just click the Show Measures Grid icon to toggle it back on.

[Image: the Measures Grid]

After highlighting the text box beneath the Sales Amount field, click the Sum (Sigma) icon. This will automatically create a measure with an aggregation type of Sum. Then go into the properties of your new measure and give it a friendly name since this is the name that your users will see when browsing the cube.

[Image: adding a measure]

Then do the same for the Order Quantity field.

Now let’s deploy and process our model. By default, the model will be deployed to the default instance of SSAS on the local machine. We can change the server we want to deploy to by right-clicking the project in the Solution Explorer and selecting Properties. You can also change the name of the database that will be created when you deploy the model.

[Image: project properties]

In the Menu bar, click Build, then click Deploy.

[Image: deploying the project]

This will begin the deployment and processing steps. If you specified specific credentials to use for impersonation, you’ll need to enter the user’s password during this step.

[Image: deployment and processing progress]

Once the model is deployed, we can see it on the AS server and browse it with Excel. Click the Analyze in Excel icon and your model will open in Excel so you can browse it.

[Image: Analyze in Excel]
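As a side note, a deployed Tabular model answers MDX queries just like a Multidimensional cube does, which is how tools like Excel talk to it. Here’s a quick sketch you could run from an MDX query window in SSMS; the cube name [Model] is the default for a Tabular deployment, and the measure and table names assume the model built above (yours may differ, especially if you gave the measure a friendlier name):

SELECT
[Measures].[Sum of Sales Amount] ON 0,
[DimDate].[CalendarYear].Members ON 1
FROM [Model];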

We’ve created our first Tabular Model. I hope this gives you a good introduction to what Tabular is, when Tabular is the right choice, and the basics of creating a Tabular Model.

In the next article, we’ll get more into modifying our model by building hierarchies in our dimensions and other more advanced topics, so stay tuned. And as always, post any questions or comments here and I’ll answer them as best I can.

SQL Rally, Here I Come!


On May 10th and 11th, all SQL hell is breaking loose in Dallas, Texas at SQL Rally! It’s sure to be an incredible event, with tons of experts presenting on an amazingly wide range of SQL-y topics. There will be everything from PowerShell to development to Business Intelligence to SQL 2012 stuff. If you’re not signed up for this great event, make sure you do that soon so you still get the early bird pricing, which ends March 15.

As if attending this great event wasn’t enough, I’ll also be presenting at SQL Rally with Adam Jorgensen! We’ll be presenting a session called “Zero to Cube – Fast Track to Analytics”. This session is basically 100% demo and walks you through getting up and running on Analysis Services 2008 R2 and 2012 right away.

So head over to SQLRally.com and get signed up if you aren’t already. And if you are, make sure you attend our session, “Zero to Cube – Fast Track to Analytics”!