
Unlocking new self-service BI scenarios with ExecuteQueries support in Power Automate

Kay Unkroth

As part of our GA work for the ExecuteQueries REST API, we delivered a new Power Automate action to run queries against Power BI datasets. This action delivers a straightforward low-code/no-code experience to BI users who want to streamline repetitive, mundane tasks and processes. You can conveniently select the desired workspace and dataset in the user interface, paste your DAX query into the query textbox, and automate away! That's it! You don't need to know anything about the REST API. You don't even need to know DAX to get started because you can copy queries straight from Power BI Desktop by using Performance Analyzer. Our new action to run queries against Power BI datasets unlocks entirely new self-service BI scenarios in Power Automate!

Power BI datasets represent a significant investment and a vital asset. Organizations already use shared and certified datasets to support their office workers and decision makers through curated data, verified business logic, and trustworthy analysis. These datasets provide the foundation for rich interactive and paginated data visualizations that deliver business insights that drive success. But what about workflows and other automated business processes? All too often, organizations must reinvent the wheel here because it is just too hard to leverage datasets in these areas directly. You need to be a developer, you need to write code, you need to know Web services and REST APIs, you need to register apps in Azure Active Directory, you need to load client libraries, and so on. Let's remove these hurdles! You already have the curated data, the verified business logic, and the trustworthy analytical measures. Your users already build fantastic Power BI reports on top of your datasets. Let them leverage the very same approved and trustworthy datasets in self-service cloud flows. Let's maximize the return on investment in the Power BI datasets you already have!

The purpose of this blog post is to help you get started running queries against datasets in Power Automate based on the following three scenarios:

  1. Exporting data into csv files. This is a limited scenario because the ExecuteQueries REST API restricts query results to 100,000 rows or 1,000,000 values per query (whichever is encountered first) and the number of queries to 120 per minute. We have no plans to increase these limits. So, if you must export query results, keep the data quantities manageable.
  2. Testing Power BI datasets. The new Power Automate action makes this task a joyful walk in the park. Read in a table with test cases, iterate row by row, run a predefined query, compare the returned results with the expected results, job done! And each individual query can impersonate a different user account if you want to test Row-Level Security (RLS). Note that you must have dataset write permissions to use impersonation, which you generally have if you are the dataset owner or a member of the workspace admin, member, or contributor roles.
  3. Creating BI-driven cloud flows. An event takes place somewhere, another event must take place somewhere else, and BI-based logic in between determines how the events unfold. Power Automate offers a vast assortment of triggers. Perhaps someone uploaded a csv file to SharePoint Online, updated a record in a customer support database, tweeted an important keyword, or updated a Power BI Metric, or perhaps a data alert just fired. There are hundreds of triggers, but whatever the event, your flow now needs to make some data-driven decisions. Instead of going to individual data silos and stitching results together in true one-off fashion, you can now rely on the curated data and verified measures of your shared and certified datasets, which give you trustworthy results.

Sounds too good to be true? Strap on your seatbelt! For starters, to export the results of a dataset query in csv format, you can begin with a manually triggered flow. Add a new step, search for Power BI in the connector search area, select Power BI, and then on the Actions tab, select the action labeled Run a query against a dataset. Note that there is also a Run a JSON query against a dataset action if you want to construct the entire JSON request body from scratch, but the Run a query against a dataset action is much more user friendly because you can paste your DAX query straight into the Query textbox without having to deal with the JSON details. A simple DAX query would be 'EVALUATE <tablename>' or perhaps 'EVALUATE TOPN(100, <tablename>)' to make sure the results stay within the row count limits. Of course, you can create more useful and more realistic DAX expressions using your favorite DAX editor tool. Using AdventureWorks sample data, a sample query could be 'EVALUATE TOPN(100, vFactInternetSalesReason)'. Don't forget to select the desired workspace and dataset conveniently from the listboxes in the action card. Save and test the flow. The results should look as in the following screenshot.
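If you want a slightly richer starting point than a plain table scan, a query along the following lines adds an explicit column selection and a sort order. This is only a sketch: the vFactInternetSalesReason view and its SalesOrderNumber and SalesReasonName columns are assumed from the AdventureWorks sample and may be named differently in your model.

// Sketch only; table and column names are assumed from the AdventureWorks sample.
EVALUATE
    TOPN(
        100,
        SUMMARIZECOLUMNS(
            vFactInternetSalesReason[SalesOrderNumber],
            vFactInternetSalesReason[SalesReasonName]
        )
    )
ORDER BY vFactInternetSalesReason[SalesOrderNumber]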

The remaining steps are routine Power Automate actions. For csv output, you must convert the rows into a csv table. Edit the flow again, add a Create CSV table action, as in the screenshot below, and select the content object called First table rows from the Run a query against a dataset action as input. First table rows contains the results of your query. (In the future, the ExecuteQueries API might support queries with multiple Evaluate blocks resulting in multiple tables in the response. For now, however, only a single Evaluate per query is supported.)

The Create CSV table action converts the query results into a csv table and lets you customize the column headers if desired, but it doesnโ€™t save the data. To save it, choose a Create file action as the next step, such as to create a file in SharePoint Online, and assign the Output from the Create CSV table action as the File content. Save and test the flow. The following screenshot shows the results. Of course, you should also have a new csv file with the expected data in your target SharePoint library.

So far so good. Let's take this one step further by adding a Power Automate visual to a Power BI report, so that the report itself can launch a cloud flow that exports a snapshot of interesting data. Take the Daily Support Incidents report depicted in the following screenshot. This report shows fictitious support call volume by day and by feature area, along with the 30-day moving average of the incident count (green line) and an alert threshold based on the same moving average plus 2 times the 7-day standard deviation (light purple line). Notice the Take snapshot button in the upper right corner of the report? This button lets you export the support incidents of the most recent day directly from your report, based on the current selections in the report.
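For illustration, such an alert threshold could come from a measure along these lines. This is a minimal sketch, assuming a base measure [Incident Count] and the 'SupportRequests'[Created] date column used later in this post; the report's actual measure definitions are not shown here.

// Hedged sketch; [Incident Count] is an assumed base measure.
Alert Threshold :=
VAR Window30 =
    DATESINPERIOD('SupportRequests'[Created], MAX('SupportRequests'[Created]), -30, DAY)
VAR Window7 =
    DATESINPERIOD('SupportRequests'[Created], MAX('SupportRequests'[Created]), -7, DAY)
RETURN
    AVERAGEX(Window30, [Incident Count]) + 2 * STDEVX.P(Window7, [Incident Count])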

For details about how to add a Power Automate visual to a Power BI report, see the topic Trigger a cloud flow from any Power BI report in the product documentation. Among other things, you can add data fields to this visual, which then serve as input parameters to the cloud flow. For example, the report above lets you slice support incident counts based on feature areas. It therefore makes sense to pass the selected Area values to the visual so that the Take snapshot flow can limit the exported support incidents to the currently selected areas as well. If you follow the instructions in the product documentation to create a new cloud flow, the Power Automate visual automatically wires up the input parameters with the On Power BI button clicked trigger. You only need to get the input parameters from the On Power BI button clicked trigger into the query text of our Run a query against a dataset action. The following is the starting point for the query text.

DEFINE
    VAR selectedAreas = TREATAS({"Data access", "Front-end"}, 'SupportRequests'[Area])
    VAR maxAbsDate = MAX('SupportRequests'[Created])
    VAR maxSelDate = CALCULATE(MAX('SupportRequests'[Created]), selectedAreas)

EVALUATE
    SUMMARIZECOLUMNS(
        'SupportRequests'[Created],
        'SupportRequests'[Issue],
        'SupportRequests'[Area],
        'SupportRequests'[Subarea],
        'SupportRequests'[Status],
        'SupportRequests'[Owner],
        FILTER('SupportRequests', maxAbsDate = maxSelDate && 'SupportRequests'[Created] = maxSelDate),
        selectedAreas
    )

The above DAX query retrieves the support incidents for the two areas "Data access" and "Front-end". Notice the use of the TREATAS function. As you can glean from the screenshot below, our cloud flow must replace these static values with a dynamically constructed string, transforming the array of Area values passed in from the report into a string that the flow then inserts into the query text. This is a multistep process. First, add a Select action, click in the From box, and select the Power BI data object. Then switch the Select action into text mode by clicking on the little T button for the Map, and then click into the Map textbox. You could directly choose the Area content object, but it is a better idea to use an expression that escapes any quotation marks in the values with a second quotation mark and encloses each value in quotation marks. This makes it easier to convert the array into a correctly formatted string subsequently. Properly escaped and enclosed in quotation marks, the array members only need to be joined together to create the string that you can then insert into the query. The following table summarizes the steps.

Action: Select
Expression: concat('"', replace(item()?['Area'], '"', '""'), '"')
Result:
[
  "\"Data access\"",
  "\"Front-end\"",
  "\"Login/Auth\"",
  "\"REST APIs\"",
  "\"Web service\""
]

Action: Run a query against a dataset
Expression: join(variables('Selected Areas'), ', ')
Result: "\"Data access\", \"Front-end\", \"Login/Auth\", \"REST APIs\", \"Web service\""
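To see why the escaping matters, here is a worked example of the Select expression, assuming a hypothetical Area value that itself contains quotation marks:

item()?['Area'] returns: Say "Hi"
replace(item()?['Area'], '"', '""') returns: Say ""Hi""
concat('"', replace(item()?['Area'], '"', '""'), '"') returns: "Say ""Hi"""

Because DAX escapes quotation marks inside string literals by doubling them, the joined result can be inserted into the TREATAS list without breaking the query syntax.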

And here is the resulting cloud flow that gets the job done. The Create CSV table and Create file steps are identical to the previous export example. Now you can export data directly from the convenience of your Power BI report! And thatโ€™s all there is to it.

With these data-export scenarios under the belt, let's now turn our attention to an even more dynamic scenario: dataset testing. The following screenshot shows a table with four columns: a test id, a query to run, a user identity in UPN format to impersonate, and the expected result of the query. Our Power Automate solution will parse each row, run the query with the specified user identity, and verify that the returned result matches the expected result. If the results don't match, it will send an email to tell us.

This time let's create a scheduled flow to run the dataset tests daily. Add a List rows present in a table action from Excel Online, specify the correct location and path to your Excel workbook, and select the table with the test definitions. Then add an Apply to each control action and select the dynamic output object value from the List rows present in a table action as the input. Now you can click on Add an action in the Apply to each action card and select the Power BI action Run a query against a dataset. Select your workspace and dataset, and then click into the Query text textbox to display the Dynamic Content dialog box. Select DAX Query as the input for the Query text. Then click on Show advanced options, click into the Impersonate user textbox, and select User UPN from the Dynamic Content dialog. The configuration should look as in the following screenshot.

So far, this is relatively easy. Perhaps slightly more complicated is the comparison of the returned result for each query with its expected result. Let's first complete the flow and then discuss the details. As the next step in the Apply to each action card, add a Condition control action. Click into the left textbox to display the Dynamic Content dialog, click on Expression, and enter the following expression: string(outputs('Run_a_query_against_a_dataset')?['body/firstTableRows']?[0]?['[Result]']). Click OK. Leave the comparison operator set to is equal to, then click into the right box and choose Expected Result from the List rows present in a table action. As mentioned, if the returned result does not match the expected result, the flow should send an email notification. Just add a Send an email action to the If no card and specify recipient, subject line, and message body. It might be useful to include the test id, returned result, expected result, and the actual test query in the email message, as in the following screenshot. Test the flow with an intentionally wrong value in the expected result column of a test case. You should promptly get an email.

The expression to get to the returned results requires a deeper explanation. All test queries use a SUMMARIZECOLUMNS function with the calculated result in a column called [Result]. Per ExecuteQueries REST API convention, columns that are renamed or created in the query will be returned within square brackets. And because the calculations each return a single value, the summary table has effectively only a single row. The following table breaks down the expression string(outputs('Run_a_query_against_a_dataset')?['body/firstTableRows']?[0]?['[Result]']) into its individual parts to get to that calculated value.

Expression: outputs('Run_a_query_against_a_dataset')
Description: Gets the entire response of the ExecuteQueries REST API for the action named Run a query against a dataset.

Expression: outputs(…)?['body/firstTableRows']
Description: Selects all the rows of the first result set within the response body.

Expression: outputs(…)?[…]?[0]
Description: Selects the first row within all the rows of the first result set.

Expression: outputs(…)?[…]?[0]?['[Result]']
Description: Selects the value of the [Result] column of the first row.

Expression: string(…)
Description: Converts the selected value into a string. This is necessary because the returned results might be of any data type while the List rows present in a table action reads the expected result value as a string.
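For reference, a test query in this style could look as follows. This is a minimal sketch, assuming a hypothetical measure [Total Incidents]; the point is simply that the calculation lands in a column named Result, which the ExecuteQueries API then surfaces as [Result]:

// Sketch only; [Total Incidents] is a hypothetical measure.
EVALUATE
    SUMMARIZECOLUMNS(
        "Result", [Total Incidents]
    )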

And there you have it. Testing datasets can be so easy, including impersonation if your dataset is RLS-enabled!

But wait, there is more! BI-driven cloud flows! Going back to the earlier Daily Support Incidents example, wouldn't you want to know more about this massive spike in support volume on the most recent day? Way over the threshold! This support department is getting hammered! What caused this spike? And perhaps even more important, which support engineers would be best suited to help get this situation back under control quickly? A BI-driven cloud flow can deliver these answers before you even ask the questions!

The Daily Support Incidents report already measures the most recent support volume and calculates the alert threshold. From these two parameters, we can construct a trigger for a cloud flow. One option is to use a data alert, as demonstrated in the How-to guide Integrate Power BI data alerts with Power Automate in the product documentation. Yet, data alerts use static threshold values, while our alert threshold is dynamic in nature because it relies on a moving average and standard deviation. Fortunately, there is a much, much better choice these days: Power BI Metrics connected to data values! Power BI Metrics, too, can be integrated into a Power Automate flow, as explained in detail in Use Power Automate to update goals automatically in the product documentation.

Note that Power BI Metrics is the new name of the feature formerly known as Goals. Customer feedback during the goals preview indicated that the old name caused confusion. The name Metrics conveys much more intuitively the purpose of measuring achievement and progress in an effort or initiative, but the Metrics-related Power Automate actions are still labeled 'Goals' until the name changes are fully deployed in Power Automate as well. Because the changes are not fully rolled out yet, the following explanations still refer to goals in the Power Automate actions.

A huge advantage of Power BI Metrics is that data-connected metrics can apply slicers and filters. This makes it very easy to define top-level metrics and submetrics. Take a look at the following scorecard. It has a submetric for each support area. It's the same measures for support volume and alert threshold, just with different Area slicer selections. Status rules define that the metric status should be At risk if the support volume is greater than the alert threshold and On track otherwise. The metric names reflect the support areas so that you can already glean important status information just from the scorecard. You can immediately see that the Front-end area is running hot.

Identifying the troubled area is a good start, but we want more actionable information delivered right to our fingertips when a critical situation unfolds. Accordingly, our cloud flow could use the trigger When status of a goal changes. Yet, our Product Support Scorecard has multiple goals/metrics. Fortunately, the Power BI connector also includes a Get multiple goals action. A scheduled flow can use this action to retrieve all the goals/metrics from the scorecard and then filter the list down to those that have a status of 'At risk'. An Apply to each loop then iterates over these goals/metrics, queries the dataset for the desired actionable information, and then sends a notification email to the manager who owns the support area. The following screenshot depicts the flow and the resulting email message.
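As a sketch of that filtering step, a Filter array action in advanced mode could use a condition along the following lines. Note that the exact name and representation of the status property in the Get multiple goals response is an assumption here, so check the action's actual output:

@equals(item()?['status'], 'At risk')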

The dataset queries in the above screenshot are abbreviated to save space. A subsequent blog post might explore these queries in greater detail. With the assumption that the goal/metric name matches the name of a support area, the first Run a query against a dataset action takes the name as an input parameter and returns the ids or keys of all the next-level subareas that exceeded their individual alert threshold. This finer level of detail within the broader support area helps to narrow down possible root causes more quickly. The flow uses the following expression to escape possible quotation marks in the goal/metric name and then again encloses the entire string in quotation marks, as already explained previously for the data export example: concat('"', replace(item()['name'], '"', '""'), '"').
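Since the queries themselves are not spelled out in this post, here is a purely hypothetical sketch of what the first one could look like. The measures [Incident Count] and [Alert Threshold] are illustrative, and "Front-end" stands in for the dynamically inserted goal/metric name:

// Hypothetical sketch; measure names and the area value are illustrative only.
DEFINE
    VAR selectedArea = TREATAS({"Front-end"}, 'SupportRequests'[Area])
EVALUATE
    FILTER(
        SUMMARIZECOLUMNS(
            'SupportRequests'[Subarea],
            selectedArea,
            "Volume", [Incident Count],
            "Threshold", [Alert Threshold]
        ),
        [Volume] > [Threshold]
    )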

The second query expects the subarea keys and the name of the support manager as input. A Select action creates the array with the subarea keys from the first query's response, which the flow then converts into a string by using the join() function, as in the previous data export example. The support manager, on the other hand, can be found in the goal/metric's owner property (item()['owner']). Again, all values are escaped and enclosed in quotation marks using the same concat() function as before. This second query looks up the support engineers reporting to the indicated manager and then analyzes their past and current support work items to summarize each engineer's experience and current workload. For example, an engineer who has worked on more than 20 support cases in the same subareas is considered experienced, while engineers with no work history have no experience.
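Equally hypothetically, the second query could take a shape like the following. Everything here is assumed: the 'Engineers' and 'WorkItems' tables, their columns, a relationship between engineers and their work items, and the stand-in manager name and subarea keys; only the more-than-20-cases rule echoes the description above:

// Hypothetical sketch; tables, columns, and names are illustrative only.
DEFINE
    VAR targetSubareas = TREATAS({"Gateways", "Credentials"}, 'WorkItems'[Subarea])
EVALUATE
    ADDCOLUMNS(
        CALCULATETABLE(VALUES('Engineers'[Name]), 'Engineers'[Manager] = "Casey Jensen"),
        "PastCases", CALCULATE(COUNTROWS('WorkItems'), targetSubareas),
        "Experience",
            VAR pastCases = CALCULATE(COUNTROWS('WorkItems'), targetSubareas)
            RETURN SWITCH(TRUE(), pastCases > 20, "Experienced", pastCases > 0, "Some experience", "No experience")
    )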

The remaining steps assemble the email message body. They are rather uneventful. You can create an HTML table in much the same way as the CSV table demonstrated earlier in this blog post. The flow also applies a few HTML and CSS cosmetics and then sends the email message on its merry way to the intended recipient.

And that's it for a whirlwind tour through Power Automate flows that use the new action to run queries against Power BI datasets. Hopefully, you find the covered examples relevant to your scenarios and helpful for taking your self-service BI game to the next level. The integration of the ExecuteQueries REST API with Power Automate empowers office workers and decision makers in new and innovative ways, while at the same time maximizing an organization's return on investment in shared and certified datasets. You can compound the benefits further by combining the Run a query against a dataset action with other Power BI actions, such as those to read, update, and even create Power BI Metrics. You no longer need to hunt down data silos and reinvent the wheel. Power BI and Power Automate let you reuse the curated data, verified business logic, and trustworthy analysis you already have in your datasets. You can build BI-driven flows in minutes without having to write any complex code. A few basic expressions to access properties and escape string values are all it takes. With self-service BI scenarios in Power Automate, your insight-driven business processes get even more insight-driven. You can indeed get answers before you even ask the questions. We hope that you are as excited about these groundbreaking new possibilities as we are. And as always, please provide us with feedback as you try out the Power BI actions in Power Automate. We would love to hear from you!