Save to file

What are we building here?

We're going to consume all the calls available on the Activate API and save each of them to its own JSON file.

Please note that the number of calls exposed by the API can grow into the thousands, or even hundreds of thousands. If that is the case for your account, you might want to implement a limit.
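One simple way to do that is to cap the total number of calls you are willing to page through. A minimal sketch of the idea, where MAX_CALLS is our own safety cap and not an API parameter:

```python
LIMIT = 20          # page size requested from the API
MAX_CALLS = 1000    # our own safety cap; this is not an API parameter

offset = 0
while offset < MAX_CALLS:
    # fetch one page of calls here (as in the full script below), then advance
    offset += LIMIT

print(offset)  # 1000
```

With this guard the loop stops after fetching at most MAX_CALLS calls instead of walking the whole account.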

To name the JSON file, we're assuming that an original_file field exists in the metadata field of each call. For this to be true, you have to provide it through our JUpload API. As a fallback, the code will use the Unique ID our system assigned to the call on input.
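The naming rule can be sketched in isolation. The payload below is made up for illustration; in the real script it comes back from the API:

```python
# Hypothetical call payload; `original_file` is only present if you
# supplied it through the JUpload API.
call = {"unique_id": "abc123", "metadata": {"original_file": "my_audio_123.mp3"}}

# Prefer the original file name, fall back to the unique ID.
name = call["metadata"].get("original_file") or call["unique_id"]
print(f"{name}.json")  # my_audio_123.mp3.json
```

If metadata carried no original_file, the same expression would yield abc123.json instead.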


For this test, we name the Python file. Yes, this is quite original!

First things first, you need to initiate a connection to our servers. Our API URL is:

Make sure you have a valid token, otherwise you'll receive an error. We recommend you pass tokens in as environment variables, or persist them in a database that is accessed at runtime. You can add a token to the environment by running the script as:
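For example, assuming you saved the script as save_calls.py (a name we just made up for this sketch), prefixing the command sets the variables for that one run only:

```shell
# <your-token> and <our-api-url> are placeholders; use the token issued
# for your account and the API URL given above.
ACTIVATE_API_TOKEN="<your-token>" ACTIVATE_API_URL="<our-api-url>" python3 save_calls.py
```

Variables set this way are visible to the script through os.getenv but never land in your shell history as a standalone export or in the source file itself.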


The code itself looks like:

#!/usr/bin/env python3
import json
import os

import requests

activate_token = os.getenv('ACTIVATE_API_TOKEN')
activate_url = os.getenv('ACTIVATE_API_URL')
headers = {"Authorization": f"Token {activate_token}"}
LIMIT = 20
offset = 0

while True:  # Careful here if you have a lot of calls on your account
    r = requests.get(f"{activate_url}/calls?limit={LIMIT}&offset={offset}", headers=headers)
    data = r.json()
    if not data['data']:
        # No more data to parse
        break
    for call in data['data']:
        if call['metadata'].get('original_file'):
            filename = call['metadata']['original_file']  # for example: `my_audio_123.mp3`
        else:
            filename = call['unique_id']
        filename = f"{filename}.json"  # following our example: `my_audio_123.mp3.json`
        with open(filename, 'w') as jsonfile:
            # Save the call payload to a dedicated file
            json.dump(call, jsonfile)
    offset += LIMIT