Hi, welcome to video 10 of 12 in this DP-600 exam preparation course. Today we're looking at securing and optimizing semantic models. This is the second part of the semantic modeling section of the study guide, and as you can see we're very close to the end of the course: there are two more modules after this one. Today, though, let's focus on semantic modeling again. Specifically, we're going to look at implementing dynamic row-level security and object-level security, implementing incremental refresh, and implementing performance improvements in queries and report visuals. To do that, we'll dive into the use cases for external tools like DAX Studio and Tabular Editor 2, and then go into a bit more detail about what we can actually do to improve DAX performance in DAX Studio and to optimize semantic models using Tabular Editor. As ever, at the end of this video I'll ask you five sample questions to test your knowledge of the things we go over in this part of the study guide.

As with the last video, I've left links to really good further learning resources from people who are much more experienced in Power BI development and semantic modeling, so I'll caveat this lesson by saying: definitely go and check out those further learning resources from Microsoft MVPs on the Power BI side, because a lot of this content is Power BI, plus third-party tools that connect to Power BI.

Let's start by looking at dynamic row-level security. In general, when we talk about row-level security, we're talking about restricting who can see what data, at the row level, in specific tables in a Power BI report.
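As a quick sketch, a plain (static) row-level security role is just a table filter DAX expression that evaluates to true for the rows a role member is allowed to see. The table and column names here are hypothetical:

```dax
-- Static RLS: members of this role only see rows where Region is "Europe"
-- ('Sales' table and Region column are illustrative names)
'Sales'[Region] = "Europe"
```

Dynamic row-level security, which we look at next, swaps the hard-coded value for a function that depends on who is logged in.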
Now, dynamic row-level security is an extension of row-level security: we apply the row filters using the user principal name. There are other information functions we can use, but the user principal name basically gives us the email address of the logged-in user. So when the user logs into the Power BI Service, behind the scenes we get access to their email address, and we can use that to apply filters to the data in our Power BI report so that the user only sees the information you've configured them to see.

You can configure row-level security in the semantic model, in the data warehouse, and on the T-SQL endpoint of the lakehouse in Fabric, but if you're using Direct Lake mode you want to configure row-level security in the semantic model, otherwise you'll fall back to DirectQuery mode. In general, the whole OneLake security model in Fabric is still somewhat under construction, so where we are currently I'd recommend just using conventional semantic model row-level security.

One thing to bear in mind is that row-level security only really works when the users you're applying it to have Viewer permissions in the workspace. If they have more than Viewer, so Admin, Member, or Contributor, row-level security is effectively not enforced, because they'll have plenty of other ways to access that data. So your users need to be Viewers in the workspace, or viewers of the reports.

At a high level, these are the steps required to implement row-level security in Power BI Desktop: first, create a role; then select the table you want to apply row-level security to; then enter a table filter DAX expression to configure when, and for whom, the row-level security is applied; and finally, validate that row-level security has been applied correctly.
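The table filter DAX expression for a dynamic role might look like this. I'm assuming a hypothetical Users table with an Email column that holds each user's sign-in address:

```dax
-- Dynamic RLS: each user only sees rows tied to their own login
-- USERPRINCIPALNAME() returns the UPN (usually the email address)
-- of the signed-in user in the Power BI Service
'Users'[Email] = USERPRINCIPALNAME()
```

In the Service you then add the relevant users or groups to the role, and the filter propagates through the model's relationships to the related tables.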
Now, as well as row-level security, we can also apply object-level security to our semantic models, but the way we do that is different, because object-level security can only be configured via third-party tools such as Tabular Editor. When we talk about object-level security, we're talking about restricting access to a particular table in a semantic model, or to a specific column: you might have a sensitive column in your Power BI report and want to restrict who can see that particular column of data in your semantic model. As I mentioned, to configure object-level security you need an external tool such as Tabular Editor, and similarly to row-level security, object-level security only restricts data access for users with Viewer permissions, so we can't give them Admin, Member, or Contributor roles in the workspace.

The high-level steps to implement object-level security in Tabular Editor: start by creating a role in Power BI Desktop (you can also create it in Tabular Editor); then, within Tabular Editor, find the role you've created, open the table permissions for that role, and set the permission for the particular table you want to apply object-level security to, either None or Read. Obviously None is for when you don't want to give them access to that table, and Read is for when you do; we can do the same for a particular column as well. Then publish the report to the Service and add the people and groups to the particular role in the Service.
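Under the hood, Tabular Editor writes this into the role definition in the model's metadata. As a sketch, an OLS role expressed in TMSL might look something like the following; the role, table, and column names are all hypothetical, and you'd normally let Tabular Editor generate this rather than hand-editing it:

```json
{
  "name": "Restricted Users",
  "modelPermission": "read",
  "tablePermissions": [
    {
      "name": "Salary",
      "metadataPermission": "none"
    },
    {
      "name": "Employee",
      "columnPermissions": [
        { "name": "HomeAddress", "metadataPermission": "none" }
      ]
    }
  ]
}
```

Setting `metadataPermission` to `none` hides the whole Salary table from role members, while the Employee table stays visible with only its HomeAddress column hidden.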
Next, we're going to talk about incremental refresh in Power BI. One thing to bear in mind here is that incremental refresh is a developing field in the world of Fabric, but for the purposes of this exam I think what they're looking for is the specific feature within Power BI called incremental refresh, not incremental refresh in Dataflow Gen2s or data pipelines or anything like that. Typically this is used on large fact tables, because incremental refresh allows you to pull in only the data that has changed within a given range. So perhaps in the last 24 hours, or the last hour, you might only want to bring in the new data that changed within that period, rather than loading in all the data in the source database.

Doing this incrementally rather than all in one obviously has many benefits. For starters, we need fewer refreshes, and the refreshes are a lot quicker because you're only pulling in what's new. Your resource consumption can be lower as well, because again you're only bringing in what's changed, not everything every time. The refreshes can also be more reliable: pulling in hundreds of thousands or millions of rows every time you refresh the dataset can create long-running open connections on your database, with the potential to be timed out, and when you move to incremental refresh that problem is likely to go away because your refresh is much quicker.

As I mentioned, incremental refresh is currently only possible on the Power BI side. They are working on incremental refresh features for ETL items, Data Factory items like Dataflow Gen2 and the data pipeline, but for the purposes of this exam, when we talk about incremental refresh we're talking about Power BI. Incremental refresh in Power BI is available for Power BI Premium licenses only, so PPU or Premium capacity subscriptions, and the incremental refresh policies are defined within Power BI Desktop.

So let's have a look at how we can implement incremental refresh. It starts by creating two parameters called RangeStart and RangeEnd. They must be called RangeStart and RangeEnd; these are reserved names for these parameters. Then, in Power Query, you apply custom date filters to the table's date column to keep only the rows between RangeStart and RangeEnd. Then you define your incremental refresh policy: you select the table, set your import and refresh ranges, and there are a few optional settings like only refreshing complete days and detecting data changes.
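That custom date filter step might look like this in Power Query M; the `Source` step and the `OrderDate` column are hypothetical stand-ins for your own query and date column:

```m
// Keep only rows inside the incremental refresh window.
// The usual pattern is one inclusive and one exclusive boundary
// (>= RangeStart and < RangeEnd) so no row is loaded twice.
FilteredRows = Table.SelectRows(
    Source,
    each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd
)
```

At refresh time the Service substitutes its own values into RangeStart and RangeEnd for each partition, which is why the parameter names are reserved.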
Then you review and apply your policy, and when you publish your report to the Service, that's when incremental refresh kicks in, based on the policy and the ranges you've set.

Next, we're going to switch our attention to semantic model performance. Whenever we talk about the performance of anything, really, it's useful to think of it in two steps. Number one is monitoring and observation: gathering data about what's happening, in this case in our semantic model. Then comes optimization: once we understand what's going wrong, how can we optimize it to improve performance? Within Power BI, and the external tools you can connect to Power BI, there are quite a few different ways to monitor semantic model performance, so here's a high-level summary of all of them before we dig into each in a bit more detail. For Power Query performance there's the Query Diagnostics tooling. For analyzing visual and query performance we can use the Performance Analyzer in Power BI Desktop, and we can also export that data into other tools like DAX Studio for more fine-grained analysis; we'll take a look at that shortly. For DAX performance we can again use DAX Studio: we can bring in data from Performance Analyzer, and we can also use traces to monitor all the different events, both on the client side and on the server side. And for semantic model performance we can use the Best Practice Analyzer, which is a tool within Tabular Editor.

For the purposes of the exam, you'll need a bit of knowledge about what you can do in Power BI Desktop, the limits of performance analysis in Power BI Desktop, and when to switch to tools like DAX Studio and Tabular Editor for more advanced analysis or a deeper dive into what's going wrong in your DAX and your semantic models. Let's take a look at these in a bit more detail, starting with DAX Studio.
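For context, the kind of DAX query you'd write, capture, or debug in DAX Studio looks something like this; the table, column, and measure names are hypothetical:

```dax
-- A typical visual-style query: a measure grouped by a column
EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Year],
    "Total Sales", [Total Sales]
)
ORDER BY 'Date'[Year]
```

Queries in this shape are exactly what Performance Analyzer records behind each visual, which is why the two tools pair up so well.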
One of the core use cases for DAX Studio is loading the Power BI Performance Analyzer data into DAX Studio for further analysis. Within Power BI we can record different actions while using the report, like clicking on different filters or refreshing different pages, and it logs the query time and the visual load time for every visual on a specific page. But the UI in Power BI for analyzing this Performance Analyzer data is a little limited, so what we can do is export that data and import it into DAX Studio for a deeper-dive analysis: we get better filtering and sorting than in Power BI Desktop, and you can also view the queries behind each visual load. Since Power BI introduced the DAX query view, this has become less of an advantage for DAX Studio, because you can now also run the underlying DAX query for a specific visual in the DAX query view in Power BI Desktop, but it's something to bear in mind.

Another really core use case of DAX Studio is using View Metrics to look at the VertiPaq Analyzer. VertiPaq is the Analysis Services storage engine that Power BI essentially runs on, and we can analyze what's going on in there using View Metrics. We can look at table and column sizes, and obviously the size of your tables and columns has a really big impact on performance. Then we can start to dig deeper into reasons why things might be slow, like the cardinality of a column, the data types you're using for a specific column, and much more. We can also use the VertiPaq Analyzer to look for referential integrity violations, by which we mean a mismatch in the unique keys on the two sides of a relationship. There's lots more information in the VertiPaq Analyzer; for the purposes of the exam I think it's good just to have a look at it and understand what you can do there and the insights you can gather from it.

Another really useful feature in DAX Studio is trace analysis. There are three traces we really need to be aware of. The All Queries trace captures query events from client-side tools such as Power BI Desktop; it's the only one of these traces that can capture your queries on both the client side and the server side. The Query Plan trace only captures query plan events from the Analysis Services tabular server, so if you're using the DAX editor within DAX Studio to run queries and analyze their performance, that's where you'd use the query plan, because it gives you the plan of that specific query on the Analysis Services engine. And the Server Timings trace gives us the query timings from the server's perspective.

Next, let's switch over to Tabular Editor and look at some of the use cases we might want to use it for. The main one, when it comes to optimizing semantic model performance, is the Best Practice Analyzer, a tool that basically performs a scan of your semantic model and checks it
for common issues. There's a list of rules that can be downloaded from GitHub, and you can also create your own custom rules, but the predefined rule set you can download from GitHub is organized into these categories: performance, the actual DAX expressions you're using, error prevention, formatting, and maintenance. These checks can also be run from the Tabular Editor CLI as part of a CI/CD process, so when you deploy a semantic model from a development environment to a testing environment, you might run the Best Practice Analyzer rule sets against your semantic model to get an automated way of assessing the quality of that model, surfacing any errors and anything you need to be aware of from a quality and maintenance perspective.
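Wired into a pipeline, that check might look something like the command below. Treat the exact flag and file names as assumptions from the Tabular Editor 2 CLI documentation and verify them for your version:

```shell
# Run Best Practice Analyzer rules against a model file in CI
# (the model path and rules file name are hypothetical)
TabularEditor.exe "Model.bim" -A "BPARules.json"
```

Rule violations are written to the console output, so the pipeline step can fail the build when errors are found.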
I've left this summary of use cases for DAX Studio and Tabular Editor until the end, just to review everything we've talked about. On the DAX Studio side: we want to use DAX Studio when we want to write, execute, and debug DAX queries, but they do need to be manually copied over to Power BI Desktop, because DAX Studio is read-only with respect to the model. As I mentioned previously, Power BI Desktop now has the DAX query view, so you can write and execute DAX queries and view the results there, similar to what you can do in DAX Studio, but that's a relatively new feature that didn't used to exist in Power BI. We can use the VertiPaq Analyzer to understand the size of your semantic model, as well as individual tables and columns within the model; we can bring Power BI Desktop Performance Analyzer data into DAX Studio to analyze it further; and we can use the trace analysis functionality to analyze query events on both the client side and the server side, depending on the trace you select.

As for the use cases for Tabular Editor: we can quickly edit data models, so we can create measures, perspectives, and calculation groups from the DAX editor within Tabular Editor and publish them directly into the semantic model. That's a key distinction between Tabular Editor and DAX Studio: in Tabular Editor we can actually update our semantic models in the tool itself. There's also functionality for automating repetitive tasks using scripting; as we've mentioned, we can incorporate DevOps into the tabular model life cycle using the CI/CD functionality in the Tabular Editor CLI; and we can use the Best Practice Analyzer to identify common issues in your Power BI semantic model.
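As a quick illustration of that scripting capability, a commonly cited Tabular Editor snippet generates a SUM measure for every selected column. `Selected` and `AddMeasure` come from Tabular Editor's C# scripting API; the naming and format string are just illustrative:

```csharp
// Tabular Editor C# script: create a SUM measure for each selected column
foreach (var c in Selected.Columns)
{
    var m = c.Table.AddMeasure(
        "Sum of " + c.Name,                 // generated measure name
        "SUM(" + c.DaxObjectFullName + ")"  // DAX expression for the measure
    );
    m.FormatString = "#,0";                 // basic thousands formatting
}
```

Scripts like this are what make the repetitive parts of model development, such as bulk-creating measures, practical to automate.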
Finally, a good use case for Tabular Editor is implementing object-level security. As we mentioned before, that's not currently possible within Power BI Desktop, so if you want to define object-level security, that's done in Tabular Editor.

OK, we've covered a lot of ground again there, so let's wrap up this video with some practice questions to test your knowledge of this section of the exam.

Question one: your goal is to analyze Performance Analyzer data in DAX Studio to find the visual in your report with the longest total load time. Put the following steps in the correct order to achieve this. What you've got here is five steps in a process, and your goal is to reorder this list so that it makes sense for this particular goal, analyzing Performance Analyzer data in DAX Studio. So take a moment, get a bit of paper, maybe write these down in the correct order, and I'll show you the answer shortly.

The answer is as follows. First, start a recording in the Power BI Performance Analyzer; we need to actually record it in Power BI to begin with. Then click Refresh Visuals, just to update the visuals on the page, or interact with the report if that's what you want to analyze. Then click Stop Recording. Then export that performance data JSON and import it into DAX Studio. Then go over to the Power BI Performance tab and sort by total milliseconds, descending, to find the longest load time for a particular visual on the page.

Question two: you want to use the Best Practice Analyzer within Tabular Editor
to assess your DAX performance. Which of the following severity codes for Best Practice Analyzer rule violations indicates an error? Is it A, level zero; B, level one; C, level two and above; D, level two; or E, level three and above? The answer is E, level three and above. In the Best Practice Analyzer within Tabular Editor we have different severity levels for rule violations: level one is information only, level two is a warning, and level three and above is an error. So E, level three and above, is the correct answer to this question.

Question three: when implementing dynamic row-level security, which information function should you use to filter the data in a specific table based on the logged-in user's email address? Is it A, USER; B, USEROBJECTID; C, USEREMAIL; D, USERPRINCIPALNAME; or E, USERNAME? The answer is D, USERPRINCIPALNAME. As you'll recall from when we talked about dynamic row-level security, the question is specifically about the logged-in user's email address, and the information function that gives us that is USERPRINCIPALNAME. Now, the USER function doesn't exist, and while USEROBJECTID and USERNAME do exist as information functions in DAX, they're not the correct answer because they won't give us the email address; USERNAME just gives you the domain and the user's name rather than the actual email address. And C, USEREMAIL, is also incorrect; that one is made up, it's not what the function is called. So the answer is D, USERPRINCIPALNAME.

Question four: in DAX Studio, which of the following traces records queries generated by a client tool
like Power BI Desktop? Is it A, the Query Plan trace; B, SQL Profiler; C, the All Queries trace; D, the Server Timings trace; or E, the VertiPaq Analyzer? The answer is C, the All Queries trace. The clue in the question was the mention of a client tool and recording queries generated within the client tool itself, within Power BI Desktop; the All Queries trace is the only one that gives you that information. The VertiPaq Analyzer just analyzes things like referential integrity and the sizing of the different tables and columns within your semantic model, not the queries coming from the client tool, and SQL Profiler, the Server Timings trace, and the Query Plan trace are all server-side, back-end tools; they're not going to give you information about queries generated in Power BI. So the answer is C, the All Queries trace.

Question five: the first step in implementing incremental refresh in Power BI is to: A, add a RangeStart and a RangeEnd column to your dataset; B, add RefreshStart and RefreshEnd parameters to your Power BI Desktop project; C, add a RefreshStart and a RefreshEnd column to your dataset; or D, add RangeStart and RangeEnd parameters to your Power BI Desktop project. The answer is D, adding RangeStart and RangeEnd parameters to your Power BI Desktop project. As we mentioned previously, RangeStart and RangeEnd are reserved names when we're talking about parameters, so you do need to be specific: when you add these parameters, they must be called RangeStart and RangeEnd. Answers A and C talk about adding columns to your dataset, which isn't going to do anything, or at least anything useful, when it comes to incremental refresh; we need parameters, because we're going to use them to filter the date column in the table you're interested in. And as I mentioned previously, B is wrong because it uses RefreshStart and RefreshEnd rather than RangeStart and RangeEnd; we need to be specific when using incremental refresh and use RangeStart and RangeEnd.

Congratulations, you've now completed the first three sections of the exam study guide. We only have two quite short sections to go, so make sure you click here to join me in the next video, where we'll be looking at exploratory analytics in a bit more detail. See you there!