Having previously moved my Telegraf instance to a Synology-hosted Docker environment, I’ve spent some time adding additional measurements for tracking and visualization in Grafana. Along the way, I discovered the Telegraf Exec plugin, which lets you execute custom scripts to collect external data for ingestion into InfluxDB (or any of the other Telegraf output options). To prove out the concept, I’m using a Bash script that connects to Fitbit and pulls data from my Fitbit Blaze into InfluxDB.
If you’re new to Telegraf, InfluxDB or Grafana, you can refer to my Home Automation series for information about these platforms and how to get up and running. For now I’ll assume that you’ve already got those basics down and we’ll jump into the configuration of Telegraf to run the script, the script itself, and a starter visualization in Grafana.
Connecting to Fitbit
For the script itself, I’m building on the work here. Before starting, you’ll need to register a new Fitbit app at https://dev.fitbit.com/apps/new. The only important things to note with this process are that you must ensure that you select the “Personal” OAuth Application Type, and that while a Callback URL is required, it doesn’t have to resolve to anything. That said, the next step will be a lot easier if it does. In my case I’m just using the root of my website for the callback.
Once your app has been registered, make a note of the Client ID and Client Secret, then (using an Incognito or InPrivate session) point your browser to the following URL, replacing CLIENTID and REDIRECTURL with the appropriate values.
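Fitbit’s OAuth2 authorization endpoint takes the following general form. The scope list shown here is an assumption for illustration; request only the scopes you actually need:

```
https://www.fitbit.com/oauth2/authorize?response_type=code&client_id=CLIENTID&redirect_uri=REDIRECTURL&scope=activity%20heartrate%20sleep%20weight%20profile
```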
Login to your Fitbit account when prompted, select the metrics you want your collection script to have access to, then click Allow.
If your callback URL resolves, this next step will be super simple since you will now be taken to that URL, along with a “code” query string value. Make a note of this code, as this is the OAuth token our script will use to connect the first time. If your callback doesn’t resolve, you can use the developer tools for your browser to look for the attempted call to the callback URL and obtain the code value.
Let me preface this entire section by saying I’m in no way a Bash expert. This script was very much put together to prove out this concept and I’ll likely release an updated and more polished version of this script in the future. While it works, it likely has lots of room for improvement. Note that this script obtains seven days of data in each call. This script must be placed on the same machine/instance/container as Telegraf, and the script path on line 23 of the script must be correct.
Part 1: Configuration
At the top of the script set your clientid, clientsecret, callbackurl, the access code we just obtained, and your units. We’re also storing today’s date, yesterday’s date and the date 7 days ago for use in the calls we’ll make to the Fitbit API. I set a global platform tag on all of my InfluxDB data so I easily know what system/platform that data came from, and as previously noted it is also critical that the path to the folder containing your script is correct.
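As a sketch of what this configuration section might look like (variable names and placeholder values here are assumptions, not the exact script):

```shell
#!/bin/bash
# --- Part 1: Configuration (sketch; names and values are illustrative) ---
clientid="YOUR_CLIENT_ID"
clientsecret="YOUR_CLIENT_SECRET"
callbackurl="https://example.com/"
code="CODE_FROM_AUTH_STEP"        # the one-time OAuth code obtained above
units="en_US"                     # unit system for Fitbit API responses
scriptpath="/var/scripts/fitbit"  # must match where this script actually lives
globalTags="platform=fitbit"      # tag every point with its source platform

# Dates used in the calls to the Fitbit API (GNU date syntax)
today=$(date +%Y-%m-%d)
yesterday=$(date -d "yesterday" +%Y-%m-%d)
sevendaysago=$(date -d "7 days ago" +%Y-%m-%d)
```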
Part 2: Authentication
The next section in the script performs the actual authentication with Fitbit and updates the OAuth refresh token if necessary. This is done by storing the refresh token in a file called refreshtoken.txt in the same folder as the script. If that file isn’t there, the script assumes it is authenticating for the first time and will use the code we obtained previously. If there is a refresh token stored in the text file, it will use that token, and refresh that token for next time.
Part 3: API Calls
Now that we have obtained a valid access token, we can call any of the Fitbit API’s. One important thing to note is the special work we’ll need to do to get data into the correct timezone. Fitbit returns data based on the user’s configured timezone, and InfluxDB stores everything as UTC. For this reason, the first call we make is to the user profile so that we can determine the UTC offset for the user data and normalize it to UTC for InfluxDB. We also make calls to the APIs for any of the data points we want to track.
Part 4: Processing
In the complete script, I’m processing Fitbit data points for weight, steps, calories, distance, floors, sedentary minutes, lightly active minutes, fairly active minutes, very active minutes, sleep and heart rate. For brevity’s sake, I’ll explain here how step data is processed; you can refer to the complete script for all of the others (they’re all very similar).
As you can see in the Part 3 script above, we’re calling the 7d.json endpoint for step activities, and storing it in a variable called getSteps. You can refer to the Fitbit reference documents for specifics on the data formats, but as you can see in the screenshot below, we should expect to see an activities-steps array containing an object for each day of data, complete with a dateTime (which in the case of step data is actually just a date without a time), and the value of steps.
Our script uses a for-loop to iterate through each object inside the activities-steps array. Here we store the following date values:
measurementDate: the raw dateTime value from Fitbit
measurementFullDate: the dateTime at 0000 UTC
measurementTS: Unix timestamp in seconds for the measurement
measurementCorrectedTS: the measurement timestamp corrected for the user’s timezone offset, and converted to nanoseconds for InfluxDB
Note that ideally we’d be storing this data in seconds rather than nanoseconds to get better performance out of InfluxDB when analyzing the data. I had some issues getting Telegraf Exec to play nicely with a precision of seconds, so look for an update in the future if I get that working. As this is just my own personal Fitbit data and will be a relatively small data set, I’m not overly concerned, but suffice it to say that best practice is certainly not to load everything into InfluxDB at nanosecond precision.
In addition, we store the short day (e.g. Wed) and long day (e.g. Wednesday) and add them to the globalTags. These are purely for display purposes, as they nicely tag the data points with a friendly value for the day in Grafana.
Once we’re done with date math, we extract the value property from the object (the actual step count) and store it in a variable.
The last step is to output an InfluxDB line for Telegraf to see and pick up. This should be formatted as “measure,tag1=tag1value,tag2=tag2value datapoint=value timestamp”, followed by a line break “\n”.
Running the Script
Part 1: Dependencies
Before we can run the script the first time, we need to install jq, the command-line JSON processor the script uses to parse the JSON data returned from the Fitbit API. If you’re running Telegraf on Debian or Ubuntu you can simply install it by running sudo apt-get install jq, but if you’re running in a Docker container like I am, you’ll need to manually execute the following command to install:
wget --no-check-certificate https://raw.githubusercontent.com/stedolan/jq/master/sig/jq-release.key -O /tmp/jq-release.key && \
wget --no-check-certificate https://raw.githubusercontent.com/stedolan/jq/master/sig/v1.5/jq-linux64.asc -O /tmp/jq-linux64.asc && \
wget --no-check-certificate https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64 -O /tmp/jq-linux64 && \
gpg --import /tmp/jq-release.key && \
gpg --verify /tmp/jq-linux64.asc /tmp/jq-linux64 && \
cp /tmp/jq-linux64 /usr/bin/jq && \
chmod +x /usr/bin/jq && \
rm -f /tmp/jq-release.key /tmp/jq-linux64.asc /tmp/jq-linux64
Part 2: Testing the Script
With jq installed, we can now run the script for the first time by simply executing “/var/scripts/fitbit/fitbit.sh”.
Here we can confirm that the data is correctly formatted for InfluxDB and we can configure Telegraf to run the script automatically and push the data to InfluxDB.
The exec plugin is natively included with the Telegraf Docker image, so setting it up is as simple as defining a new input, including the script you wish to run in the commands array, and setting your interval (in my case I’m updating every 20 minutes). Note that the Fitbit API has a limitation of 150 API calls per hour, so keep this in mind when picking your polling interval as the script currently makes 12 API calls every time it runs. With the Exec input, we also must set the data_format parameter to tell the plugin what data format it should expect our script to output. For future reference, you have quite a few options here, but the script is outputting InfluxDB line protocol for simplicity, so set this to “influx”.
With your exec input added to your conf file, restart your Telegraf instance and you should be good to go.
Visualizing the Data
Now that we’re getting data into InfluxDB the world is our oyster in terms of how we want to visualize it. As a quick sample, here I’m graphing step count and number of active minutes (in hours) for the last seven days (yes, I need to do more walking and less coding). The full panel JSON is also available.