I am using an Entra ID token to call the Business Central API from a Fabric notebook.
The token expires after 1 hour. Since the notebook gets at most 20k records per page from BC, I am aggregating the paginated records. I tried to import all the data with this loop:
import requests
import pandas as pd

# Your API endpoint and token (assume the token variable already exists)
api_endpoint = "https://api.businesscentral.dynamics.com/..."
headers = {
    "Authorization": f"Bearer {token['access_token']}",
    "Content-Type": "application/json",
}

# Aggregate all records from the paginated responses
all_records = []
current_url = api_endpoint
while current_url:
    response = requests.get(current_url, headers=headers)
    if response.status_code != 200:
        print("Error fetching data:", response.text)
        break
    data = response.json()
    # Append the fetched records
    records = data.get("value", [])
    all_records.extend(records)
    # Follow the URL for the next page, if there is one
    current_url = data.get("@odata.nextLink")

# Create a Pandas DataFrame with all the aggregated records
df = pd.DataFrame(all_records)
This loop takes longer than the token's one-hour lifetime, so partway through I start getting this error:
Error fetching data: <error xmlns="http://docs.oasis-open.org/odata/ns/metadata"><code>Unauthorized</code><message>The credentials provided are incorrect</message></error>
and I only end up with around half the data (the table has 17M rows; I got about 8.5M). At 20k records per page that is roughly 850 requests, so the loop easily runs past the hour.

What can I do in this situation?
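The direction I was considering is to re-acquire the token inside the loop instead of reusing one token for the whole run. Here is a minimal sketch of that idea, assuming a client-credentials app registration and the msal package (tenant_id, client_id, client_secret, and the api_endpoint value are placeholders, not my real setup):

import msal
import requests
import pandas as pd

# Placeholders for illustration only
tenant_id = "<tenant-id>"
client_id = "<client-id>"
client_secret = "<client-secret>"
api_endpoint = "https://api.businesscentral.dynamics.com/..."

app = msal.ConfidentialClientApplication(
    client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
    client_credential=client_secret,
)

def get_headers():
    # acquire_token_for_client serves the token from MSAL's in-memory
    # cache and only calls Entra ID again when it is close to expiry
    token = app.acquire_token_for_client(
        scopes=["https://api.businesscentral.dynamics.com/.default"]
    )
    return {
        "Authorization": f"Bearer {token['access_token']}",
        "Content-Type": "application/json",
    }

all_records = []
current_url = api_endpoint
while current_url:
    # Rebuild the headers on every page so a refreshed token is picked up
    response = requests.get(current_url, headers=get_headers())
    if response.status_code != 200:
        print("Error fetching data:", response.text)
        break
    data = response.json()
    all_records.extend(data.get("value", []))
    current_url = data.get("@odata.nextLink")

df = pd.DataFrame(all_records)

Would something like this be the right fix, or is there a better pattern for paginated pulls that run past the token lifetime?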
PS: the reason I am using a notebook is that when we tried Dataflow Gen2, we always got an error when fetching the main fact table. We tried turning on Fast Copy, but it doesn't work with Business Central (here is a vlog that shows the error we got with the dataflow: link). We also tried connecting the BC API to a data pipeline instead of a dataflow, but data pipelines don't accept the Business Central API as a source. So the only option we were able to find was a notebook. If there are any other, simpler options, please share them.