Channel: Service topics

Heavy usage. This report has exceeded the allowed concurrent usage. Try opening it later.


Hello everyone.

I'm having a problem with slow loading of visuals for every report in my workspace when accessed via the public link. I'm currently on a Pro plan. Does anyone know whether upgrading to PPU will resolve this problem? If not, why does it occur and how can I solve it?


Power BI sharing licensing


Hey, can someone shed some light here.

We have a scenario where we have one developer who wants to share content with a couple of people in the organization using Power BI. The question is, what is the most appropriate and cost-effective licensing for this?

My understanding is that the developer needs to be licensed with Power BI Premium Per User and also a Microsoft Fabric SKU of F64 or above to be able to share with the rest of the team, who will have Power BI Free licenses. Or does the developer need only Power BI Premium Per User, with the rest able to view the reports on Power BI Free licenses alone?

Could you please help me by either confirming this or correcting me if I am wrong?

Thank you!

 

Best way to manage data - looking for advice…


Hi All,

 

I am looking for some advice as to how to most efficiently make data available on the Power BI Service to our citizen developers, avoiding data duplication wherever possible.

 

We have data held in an Azure data lake as well as in an on-premises SQL Server database, which is accessed via a gateway.

 

I have looked at setting up dataflows (incremental refresh where appropriate) and then pushing the data over to datasets held in a workspace of their own. Devs wanting to build reports would then consume the data from these datasets, importing or direct querying data as required.

 

The Power BI ecosystem changes so quickly that it's quite difficult to know what to plump for, so I'm looking for any advice anyone can offer.

 

Thanks,

 

Matty

Azure Map conditional formatting not being published to web.


I have a map on desktop which looks like this:

[screenshot: the map as it appears in Desktop]

I applied conditional formatting to the shapes under Visual > Reference layer > Polygons > Fill color, setting the fill color using this formula:

[screenshot: the conditional-formatting formula]

However, when I publish my report, the result on the web looks like this:

[screenshot: the published map with no conditional formatting]

The conditional formatting has been erased. When I go to apply it on the web, I don't even see the option to use a function to determine the fill color of the polygons. Has this just not been implemented yet, or is there something that I am missing?


 

Deployment Pipeline - I can't select items


When I click on "Show more", nothing is shown. I suppose this is a bug, because it worked the first time I tried. I've already tried to unassign/assign the workspace and even to delete the pipeline and create a new one.

 

I hope some good soul can help me out.

[screenshot of the deployment pipeline]

Scheduled Data Refresh Failure - IDbCommand

$
0
0

We are suddenly facing the below errors in our scheduled refreshes on the service.

 

Data source error - The credentials provided for the Snowflake source are invalid. (Source at xxxx.snowflakecomputing.com;<Schema Name>.). The exception was raised by the IDbCommand interface. Table: <Table Name>. No changes were made to the credentials.

 

DataSource.Error: The table has no visible columns and cannot be queried. <Table Name>. The exception was raised by the IDbCommand interface. Table: <Table Name>

 

When we refresh the reports in Power BI Desktop, we do not face any errors and the refresh completes fine.

 

Can someone please help?

 

Regards

Monitoring the Potential Cost impact of Autoscale in Premium Capacity.


We enabled autoscale in Premium capacity (P1). Suddenly we no longer face throttling issues. However, I'm guessing we have only replaced one type of issue with another. I'm not looking forward to seeing the monthly bill from Microsoft.

Is anyone aware of a tool that we might use to try to predict the impact on our monthly bill?

 

We have been watching the "capacity metrics" dashboard, but the behavior of autoscale can't be interpreted very easily. Take the following image as an example.

When I click on autoscale in the legend, a big yellow bar appears over the utilization chart on the right...

 

[screenshot: the capacity metrics utilization chart, with the autoscale series selected in the legend]

It isn't easy to interpret the meaning of this graphic. Should we expect the Power BI bill to be 2x or 20x, compared to the bill we were getting before we enabled autoscale?

Any information would be appreciated. Perhaps there is some other monitoring tool that can more specifically track our autoscale costs.
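The best I've come up with so far is a back-of-the-envelope estimate, since autoscale is billed per v-core per 24-hour period. A minimal sketch in Python (the rate below is a placeholder; check the current Azure pricing page for your region):

# Rough autoscale cost estimator. A sketch, not an official tool.
# RATE_PER_VCORE_DAY is a placeholder; look up the current price of
# Power BI Premium autoscale v-cores on the Azure pricing page.
RATE_PER_VCORE_DAY = 85.00  # USD, hypothetical example rate

def estimate_autoscale_cost(vcore_days_per_month: float) -> float:
    # vcore_days_per_month: total 24-hour periods of autoscaled v-cores,
    # read off the autoscale series in the capacity metrics app.
    return vcore_days_per_month * RATE_PER_VCORE_DAY

# Example: one extra v-core active for 10 days of the month
print(f"${estimate_autoscale_cost(10):,.2f}")  # -> $850.00

Even then, translating the yellow bars in the metrics app into v-core-days is the part I can't do confidently.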

Power BI dataflows connected to Google BigQuery suddenly stop working


I've been using dataflows (Gen1) in the PBI service with the Google BigQuery connector successfully for just over a year now, and it had been running very smoothly until this week, when my daily refreshes started failing with this message:

 

"ERROR [HY000] [Microsoft][BigQuery] (131) Unable to authenticate with Google BigQuery Storage API. Check your account permissions"

 

My permissions within Google seem correct, in accordance with various posts I've seen about having the read-session permissions, so I don't actually think the message is shedding light on the issue (for info, my flows have also always had useStorageApi = false).
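For what it's worth, one way to test whether the Storage API permission itself is the problem is to create a read session directly with Google's Python client, outside of Power BI (a sketch; the project, dataset and table names are placeholders, and it should run with the same credentials the dataflow uses):

# Sketch: verify BigQuery Storage API read permission directly, outside
# Power BI. All names are placeholders; use the same credentials as the
# dataflow's connection.
from google.cloud.bigquery_storage_v1 import BigQueryReadClient, types

client = BigQueryReadClient()  # uses application-default credentials
session = client.create_read_session(
    parent="projects/<billing-project>",
    read_session=types.ReadSession(
        table="projects/<project>/datasets/<dataset>/tables/<table>",
        data_format=types.DataFormat.ARROW,
    ),
    max_stream_count=1,
)
print("Read session created:", session.name)  # success means the permission is fine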

 

When testing for solutions, I noticed that dataflows work for smaller table loads (e.g. a 500-row table) but fail beyond some limit. I limited one table to 100,000 rows and it still failed; I'm not exactly sure what the threshold is.

 

This issue is strictly limited to dataflows; I have a semantic model connected to Google and it loads a 2-million-row table just fine on its scheduled refresh.

 

When I look in the Google BigQuery console, I see the jobs running as normal (i.e. the usual Power BI dataflow-related jobs); the refreshes just aren't succeeding on the PBI side. It's very frustrating because it was running so well before, and nothing about the data has changed otherwise.

 

Please let me know if anyone has come up against this issue and managed to fix it!

 

Cheers

 


TOP N when applying the Single Date Picker in a Slicer visualization


Hello everyone,

I'm following these steps to create a Single Date Picker in Power BI:

https://www.linkedin.com/pulse/power-bi-single-date-picker-without-dax-torsten-wanka/, which works fine. However, I have some tables where I previously applied a TOPN to display data, and this Single Date Picker requires adding another TOPN to ensure it shows the desired date.

To achieve the TOP5 without needing two TOPN filters in the Filters pane, I created a measure, but instead of identifying the TOP5, it returns a value of 1 for all records. How can I create a measure that returns 1 for values that are in the TOP5 and 0 for the rest? I want to use this result in the Filters pane or in the calculated measures to ensure I always get the top 5 results.

 

TOP5 measure:

filtered_top5 =
VAR CurrentClient = SELECTEDVALUE ( table[name] )
VAR MinDate =
    CALCULATE (
        MIN ( table[date] ),
        KEEPFILTERS ( table[date] )
    )
VAR Top5Clients =
    CALCULATETABLE (
        TOPN (
            5,
            VALUES ( table[name] ),
            [measure_percentage], -- percentage-difference measure (posted below); with the TOPN from the Filters pane it worked as expected
            DESC
        ),
        table[date] = MinDate
    )
RETURN
    IF (
        CONTAINS ( Top5Clients, table[name], CurrentClient ),
        1,
        0
    )
 
 Measure using the TOP5:
percentage_top_5 =
VAR MinDate =
    CALCULATE(
        MIN(table[date]),
        KEEPFILTERS(table[date])
    )
RETURN
IF(
    [filtered_top5] = 1,
    CALCULATE(
        [measure_percentage],
        table[date] = MinDate
    ),
    BLANK()
)
 
 Measure Percentage General
measure_percentage =
IF (
    [budget_total] > 300000,
    DIVIDE ( [measure_sales] - [budget_daily], [budget_daily], 0 ),
    BLANK ()
)

Power BI Export All Visuals - Python notebook


I've created a Python notebook that helps automate the review and testing of Power BI reports. It embeds a report and then works through every (visible) page and exports the data from every (visible and exportable) visual.

 

The data is exported in CSV and/or Excel format (both by default). A folder is created for the output, then a sub-folder is created for each page, and a file is created for each visual.
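For anyone curious about the core mechanics, the heart of the notebook is a nested loop over pages and visuals. Here is a simplified sketch (it assumes the powerbiclient package; the real notebook in the repo linked below also handles visibility checks, Excel output and error handling):

# Simplified sketch of the export loop, using the 'powerbiclient' package.
# The full notebook (linked below) adds visibility/exportability checks,
# Excel output and error handling. IDs are placeholders.
import os
from powerbiclient import Report
from powerbiclient.authentication import DeviceCodeLoginAuthentication

auth = DeviceCodeLoginAuthentication()
report = Report(group_id="<workspace-id>", report_id="<report-id>", auth=auth)
# display(report)  # render the embedded report in a notebook cell first

for page in report.get_pages():
    page_dir = os.path.join("output", page["displayName"])
    os.makedirs(page_dir, exist_ok=True)
    for visual in report.visuals_on_page(page["name"]):
        csv_text = report.export_visual_data(page["name"], visual["name"], rows=1000)
        with open(os.path.join(page_dir, visual["name"] + ".csv"), "w", encoding="utf-8") as f:
            f.write(csv_text)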

 

This project is mainly intended to help test Power BI reports. Any change to a report's input data, queries, semantic model, or page designs can affect the results shown to the end users. Bugs or unexpected results can quickly destroy the confidence of your audience and frustrate your testers/reviewers. These can be as minor as leaving a slicer or filter set to an inappropriate selection when the report is saved and published.

 

This project can quickly capture snapshots of all the data presented in open formats (CSV, Excel) that can be independently reviewed and compared using a range of tools and techniques.

 

When used to produce CSV files, the project can be used in combination with the Visual Studio Code Compare Folders Extension to very quickly compare every cell of data from every visual in the report. This could be useful for a "regression test" to determine whether changes between versions have only had the desired effect and have not leaked into other pages or visuals.

 

[screenshot: comparing CSV output folders with the VS Code Compare Folders extension]

 

When used to produce Excel files, a range of techniques can be used to analyse the output. My favourite tool is the Inquire / Compare Files feature of Excel (Inquire add-in). This works row by row and column by column to compare two Excel files, quickly highlighting all the differences for review.

 

I usually start with the VS Code extension against CSV files for a first pass of review, then use the Excel Compare Files add-in when more complex changes need to be reviewed.

 

I zip up the folders of output from each test/review cycle and stash them, e.g. in a SharePoint document library. These are "proof" of my testing/review that any analyst can open and review independently.

 

The notebook embeds a live frame containing the target report. So if you need to first apply specific filters or use slicers, you can run the notebook section by section or cell by cell down to the point where the embedded report frame appears, then interact with it just as if it were a Power BI web browser tab. When you run the remaining notebook cells, the output will reflect your filter or slicer changes. Any changes made will not be saved back to the Power BI web report definition.

 

[screenshot: the report embedded live in the notebook]

 

I've made this notebook freely available in a GitHub project, so anyone can quickly get started reviewing and testing their own reports. There are more technical notes there, including the requirements, but let me know if you get stuck on anything, or raise an issue in GitHub.

https://github.com/Mike-Honey/powerbi-export-all-visuals?tab=readme-ov-file#readme

 

 

Direct Share not Working


I have a Power BI account with a Pro license. Until a few days ago, everything was working fine, from updating reports to sharing them. However, for the past two days, I’ve been encountering an error whenever I try to directly share any report that is already present in service workspaces with recipients who also have a Pro license (and who receive daily emails from us). I immediately receive an error message saying, "Report wasn't shared. We ran into a temporary problem, try again to share the report."

 

I’ve been seeing this message for the last 2-3 days. I’ve tried clearing my history, cache, and cookies, but nothing has worked.

 

Does anyone know if this might be a bug, glitch, or a new error on the service website?

Data update in report combining Direct Query and OneLake Data Hub (Power BI Semantic Model)


Hi, 

I am facing a difficulty with updating a report which uses the OneLake Data Hub and DirectQuery.

I created a report R2 that uses the OneLake Data Hub (Power BI semantic model) connector for model S1, and added DirectQuery-connected data S2 from another source, which forced me to create a "local model".

I thought that if I now made some semantic model updates to S1 (and published it), they would show up in the R2 report online immediately. But it seems they don't.

When I, for example, change the data type of a certain column in S1 and want this change to be visible in R2, I have to open R2 in Desktop, refresh it and then publish it again.

This seems stupid. I would have expected that a change to S1, once published, would immediately take effect on R2 online (the same as if I had just a regular thin report using only S1). But it doesn't.

Any advice? I don't want to spend time opening the "thin reports" that combine multiple sources, refreshing them and republishing every time I have some change in the main report/semantic model. Or is this just the native behavior?

Dashboard In Power BI Apps


Hi everyone, I have a challenge with my Power BI apps. I have a dashboard with several tiles, some of which are pinned live, and I can't see those tiles in my apps; I get an error message saying I don't have permission or the model doesn't exist.
Do apps not support live pinning?
BR

Lakehouse and semantic model issues


I have updated a table in my lakehouse to add two more fields. I have queried it in the SQL analytics endpoint and the table looks refreshed.

 

When I try to update the semantic model where this table is used, by clicking Edit tables, refreshing the source data and confirming, the semantic model won't add the new fields.

I tried making a new model and updating the default semantic model of the warehouse, but the new fields in my table just won't come through.

 

Is there some sync issue going on? I have tried all the routes I know but I can’t find the problem.

 

Similarly, I made a new table via a notebook and wrote it to the lakehouse; I can see the table in the lakehouse but not in the SQL endpoint, which makes me think I do have sync issues, but I'm unsure how to resolve this, or whether that even is my issue.

 

Any help would be appreciated.

Analyze in Excel, users accessing through the App


Was there any recent global change? I have lots of users losing the ability to use Analyze in Excel:

[screenshot of the error]

 

 

What could be causing this?

 

(I am not talking about workspaces; I'm talking about a live report in an App, with users accessing it only through the App)


Replicate a Power BI desktop report in Fabric


Hi All,

 

I have a report that I built in Power BI Desktop, and I am now looking to replicate it in a Fabric workspace, where I ingest the data via dataflows into a Warehouse and from there build a new semantic model and the report.


Bringing in all the DAX queries is pretty straightforward, given that we are now able to write DAX queries in the service. However, what I want to know is whether I have to recreate the entire report, with all the visuals/bookmarks and everything, from scratch, or whether there is a quicker way to do this (like a copy-paste of some sort 🙂).
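One avenue I've been eyeing (unverified, so treat it as a sketch) is the REST API: Reports - Clone Report can copy a report into another workspace and bind the copy to a different semantic model via targetModelId, which would avoid rebuilding the visuals, provided the new model keeps the same table, column and measure names. Roughly:

# Sketch: clone the Desktop-built report into the Fabric workspace and bind
# the copy to the new Warehouse-backed semantic model (Power BI REST API,
# Reports - Clone Report). The token and all IDs are placeholders.
import requests

TOKEN = "<azure-ad-access-token>"   # needs Report.ReadWrite.All
SRC_REPORT_ID = "<source-report-id>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/reports/{SRC_REPORT_ID}/Clone",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "My report (Fabric copy)",
        "targetWorkspaceId": "<fabric-workspace-id>",
        "targetModelId": "<new-semantic-model-id>",
    },
)
resp.raise_for_status()
print(resp.json())

If the field names differ between the old and new model, the visuals will show errors that then have to be fixed one by one.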

Power Automate Error

Hello,

I have a Power BI report published in the Power BI service with a slicer that filters data by email address. The goal is to set up a Power Automate flow that exports the report to PDF for each email address selected in the slicer and then emails each user a report containing only their filtered information.

To start, I simplified my testing approach by attempting a basic export using Power Automate, but I keep receiving a "FeatureNotAvailableError" every time the flow tries to access the report.

What I've checked so far:
  1. Permissions and licensing:
    • I'm an admin of the workspace and have a Premium Per User (PPU) license.
  2. Functionality in the Power BI service:
    • I can manually export the report to PDF in the Power BI service without issues, which suggests the export functionality is enabled.
  3. Power Automate connection:
    • I re-authenticated the Power BI connection in Power Automate to ensure it's valid.

Current situation:
Despite verifying permissions, licensing, environment settings, and tenant configurations, the FeatureNotAvailableError still appears when the flow attempts to access the report in Power Automate. Any insights into additional troubleshooting steps or possible causes would be greatly appreciated!
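One isolation test I'm planning is to call the underlying Export To File REST API directly; if that also fails, the problem is the API/capacity rather than Power Automate. A sketch (the token, IDs and the filter expression are placeholders):

# Sketch: call the Export To File REST API (Reports - Export To File) to see
# whether the export works outside Power Automate. The token, IDs and the
# filter expression are placeholders.
import time
import requests

TOKEN = "<access-token>"
BASE = "https://api.powerbi.com/v1.0/myorg/groups/<workspace-id>/reports/<report-id>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Start the export, filtered to one user's rows via a report-level filter
job = requests.post(f"{BASE}/ExportTo", headers=HEADERS, json={
    "format": "PDF",
    "powerBIReportConfiguration": {
        "reportLevelFilters": [{"filter": "Users/Email eq 'someone@contoso.com'"}]
    },
}).json()

# Poll the export job, then download the PDF once it succeeds
while True:
    status = requests.get(f"{BASE}/exports/{job['id']}", headers=HEADERS).json()
    if status["status"] in ("Succeeded", "Failed"):
        break
    time.sleep(5)

if status["status"] == "Succeeded":
    pdf = requests.get(f"{BASE}/exports/{job['id']}/file", headers=HEADERS)
    with open("report.pdf", "wb") as f:
        f.write(pdf.content)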

 

Semantic model cloud data source connection resets to SSO after getting updates pushed from git


I have a Fabric workspace with a Power BI report that is developed locally in Power BI Desktop. Changes to the PBI project are pushed to a Git repo on Azure DevOps, which is connected to the workspace.

When the semantic model in the Fabric workspace is updated from the Git repository, the cloud data source connection's authentication is changed to SSO. This is because I authenticate with my SSO Azure Entra ID account when I develop the report in PBI Desktop. That information is then propagated through source control into the semantic model on Fabric. So every time I push changes made to the semantic model in PBI Desktop, I must manually edit the settings of the semantic model on Fabric to use a cloud connection instead of SSO in order for the refresh to work.

 

What are the options to get rid of this manual intervention on each semantic model update?
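The only semi-automatic option I've found so far is to script the rebinding as a post-deployment step (e.g. in the Azure DevOps pipeline) using the Datasets - Bind To Gateway REST call. Whether this covers a shareable cloud connection depends on the connection type, so treat the sketch below as an assumption rather than a confirmed fix:

# Sketch: rebind the semantic model to a specific connection after each
# git-triggered update (Power BI REST API, Datasets - Bind To Gateway).
# The token and IDs are placeholders; the call returns 200 with an empty
# body on success.
import requests

TOKEN = "<access-token>"
DATASET_ID = "<semantic-model-id>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/Default.BindToGateway",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "gatewayObjectId": "<gateway-or-connection-id>",
        "datasourceObjectIds": ["<datasource-id>"],
    },
)
resp.raise_for_status()

But I'd rather not maintain glue code like this at all, hence the question.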

How to load data into Fabric from a tabular model (api.powerbi.com/myorg/datamodel)


Is there a way to load the data from a model that I have read access to into a lakehouse, warehouse or dataflow?

I have the model link powerbi://api.powerbi.com/v1.0/myorg/{model}, and I can access it in DAX Studio or Tabular Editor just fine. I can also export all the tables to SQL or CSV in DAX Studio. So there should be some way to load the data into Fabric, right?

I have already tried some options in Dataflows Gen2 and pipelines, but nothing seems to work so far.
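One thing I haven't tried yet is a Fabric notebook with the semantic-link (sempy) library, which can reportedly read a table (or evaluate a DAX query) against a model you have read access to and write the result to a lakehouse. A sketch with placeholder names (check the sempy docs for your runtime version):

# Sketch: copy a table from a semantic model into a lakehouse using
# semantic-link (sempy) in a Fabric notebook. Dataset and table names
# are placeholders.
import sempy.fabric as fabric

# Read a whole table from the model (XMLA read access is sufficient)
df = fabric.read_table("My Semantic Model", "Sales")

# Or shape the extract with a DAX query instead:
# df = fabric.evaluate_dax("My Semantic Model", "EVALUATE 'Sales'")

# Write the result to the default lakehouse as a Delta table
df.to_lakehouse_table("sales_copy", mode="overwrite")

Would something like this be the intended route, or is there a dataflow/pipeline connector I'm missing?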

Configuring Clustered Gateways on Separate Machines and Using Different Power BI Accounts

$
0
0

Hello,

I have a few questions regarding the configuration of on-premises data gateways and would appreciate your insights.

  1. Clustered Gateway Configuration: How can I set up a clustered gateway configuration with three gateways installed on separate machines? I want to ensure that they are properly configured for load balancing and failover.

  2. Different Power BI Accounts: Is it possible to configure these three gateways using different Power BI accounts? If so, what steps should I follow to ensure seamless integration and management?

  3. Multiple Gateways on a Single Machine: Can I install multiple on-premises data gateways on a single machine, each associated with a different Power BI account? If this is possible, how should it be configured to avoid conflicts and ensure proper functionality?

Thank you in advance for your assistance!

 
