Using Angular and Redis to rate-limit requests to the Twitter API

Twitter’s API limits an application to 15 requests on behalf of the same Twitter user in any 15-minute period. If your application allows an end-user to trigger calls to the Twitter API—say, for instance, your app is a Twitter client like wynno that allows users to scroll their timeline indefinitely into the past—you need to ensure your users can’t cause your application to exceed this rate limit.

Client-side

You can and should implement some logic on the client side of your application that prevents regular users from making too many requests. In the case of an Angular app like wynno, we can keep track of our requests to Twitter by creating a service:

angular.module('myApp.services')
  .factory('TwitterService', ['$q', '$http', function($q, $http) {
    var service = {
      lastGet: null,
      getTweets: function() {
        var d = $q.defer();
        var timeSinceLastGet = service.lastGet ?
            new Date().getTime() - service.lastGet :
            null;
        if (timeSinceLastGet !== null && timeSinceLastGet < 60000) {
          d.reject('Please try again in ' +
            Math.ceil((60000 - timeSinceLastGet) / 1000) +
            ' seconds.');
        } else {
          $http({ method: 'POST', url: '/my_server_endpoint', data: {} })
            .success(function(tweetData, status) {
              service.lastGet = new Date().getTime();
              d.resolve(tweetData);
            })
            .error(function(reason, status) {
              // Conceivably, if reason informed us that error occurred after server
              // successfully called Twitter API, we would want to update
              // service.lastGet here too.
              d.reject(reason);
            });
        }
        return d.promise;
      }
    };

    return service;
  }]);

The code here implements a simple guard against making more than one request per minute: whenever getTweets is called, we check whether lastGet is less than 60 seconds old. If it is, we're finished, and reject the promise with a message telling the user how long to wait. If it isn't, we fire off a request to our server, which in turn makes a request to the Twitter API and sends us back the tweet data it gets. When we receive that tweet data, we resolve the promise with it and update lastGet.

An alternative approach, which would allow up to 15 requests at any point within a 15-minute window, would be to keep track of the timestamps of the last 15 requests and check, each time getTweets is called, whether the 15th-to-last of those timestamps is less than 15 minutes old.
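A rough sketch of that idea, replacing lastGet with an array of recent timestamps (the recentGets name and the constants are illustrative, not wynno's actual code):

// Inside the same factory, in place of lastGet:
var WINDOW_MS = 15 * 60 * 1000; // Twitter's 15-minute window
var MAX_REQUESTS = 15;

var service = {
  recentGets: [], // timestamps of requests made within the current window
  getTweets: function() {
    var d = $q.defer();
    var now = new Date().getTime();
    // Discard timestamps older than 15 minutes.
    service.recentGets = service.recentGets.filter(function(t) {
      return now - t < WINDOW_MS;
    });
    if (service.recentGets.length >= MAX_REQUESTS) {
      // The oldest remaining timestamp tells us when a slot frees up.
      var wait = WINDOW_MS - (now - service.recentGets[0]);
      d.reject('Please try again in ' + Math.ceil(wait / 1000) + ' seconds.');
    } else {
      $http({ method: 'POST', url: '/my_server_endpoint', data: {} })
        .success(function(tweetData) {
          service.recentGets.push(new Date().getTime());
          d.resolve(tweetData);
        })
        .error(function(reason) {
          d.reject(reason);
        });
    }
    return d.promise;
  }
};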

Server-side

We also need to implement some kind of rate-limiting logic on the server side. Just as in other aspects of a web application, like user authentication and input validation, client-side enforcement is not sufficient. You can’t rely on requests to your server always following the logic of your client-side application. A malicious user could easily curl 16 requests in rapid succession to the server endpoint used by getTweets. Or in the case of wynno, which fetches new tweets from Twitter when the app first loads up, simply refreshing the browser more than 15 times in 15 minutes would do the trick.

So our server needs to keep track of the requests it makes to Twitter on behalf of each user. Note that that is on behalf of each Twitter user, not simply each user session of our app. If the same person used our app on a desktop and a phone, that would be two different sessions, but Twitter’s API would issue the same user token for each. That token is what we provide in our calls to the API to identify the user whose tweet data we want. So our rate-limiting solution must work independently of user sessions. We can’t, in other words, simply refer to a lastGet timestamp stored in a session.

This is a perfect use case for a key-value store like Redis. To implement a simple one-request-per-minute guard, we can use an id or username for each Twitter user as the key, and store a timestamp of the last call to the Twitter API on behalf of that user as the value. Whenever we're about to make a call to the Twitter API, we'll look up the timestamp for the user and check whether it's more than 60 seconds old.

Actually, we don’t even need to store the timestamp, because Redis allows us to set an expiration time for a key. If we set our key to expire after 60 seconds, the existence of the key itself will tell us whether or not to allow the call to Twitter. If a given key exists in the Redis database, there must have been a call made to Twitter for that user in the last 60 seconds, so we don’t allow another. (Of course, if we want to be able to tell the user how much longer they have to wait, then we would want to store the timestamp.) The advantage of using expiring keys here is that it keeps our Redis database small. We’ll only have as many keys as we have users who’ve called the Twitter API in the last minute. This should make our queries faster than if we kept a key for every user who’d ever called Twitter.

Here’s what the code would look like on an Express server:

var redis = require('redis');
var config = {
  port: 6379, // Redis' default
  host: "127.0.0.1",
  dbNum: 1
};

var client = redis.createClient(config.port, config.host);
// If we're using Redis to store sessions in database 0, then it makes sense
// to keep track of Twitter calls separately in database 1.
client.select(config.dbNum, function() { /* ... */ });

var setRateLimiter;
exports.setRateLimiter = setRateLimiter = function(userId, callback) {
  // Creates a key which expires in 60 seconds containing the current time.

  client.setex(userId, 60, new Date().getTime().toString(), function(err, reply) {
    if (err) {
      callback(err);
    } else {
      callback(null);
    }
  });
};

exports.checkRateLimiting = function(userId, callback) {
  client.get(userId, function(err, reply) {
    if (err) {
      return callback(err);
    }
    var lastGet = parseInt(reply, 10); // reply is a string, or null if the key has expired
    if (lastGet) {
      var timeSinceLastGet = new Date().getTime() - lastGet;
      callback('Please try again in ' +
        Math.ceil((60000 - timeSinceLastGet) / 1000) +
        ' seconds.');
    } else {
      // Proceed with call to Twitter after setting a new rate limiter.
      // Note that we want to set this rate limiter BEFORE we make the call to Twitter
      // as well as AFTER, so that the user cannot trigger >15 calls to the Twitter API
      // in the period before the response to the first call is received.
      setRateLimiter(userId, callback);
    }
  });
};
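
To tie this module into the endpoint that getTweets posts to, the Express route might be wired up roughly as follows. The filenames, the req.user.twitterId property, and the twitter.getTimeline helper are placeholders for however your app identifies the user and calls the Twitter API; the point is simply that checkRateLimiting runs before the Twitter call and setRateLimiter runs again after it, as the comment above describes.

var express = require('express');
var rateLimiter = require('./rateLimiter'); // the module shown above (assumed filename)
var twitter = require('./twitter');         // hypothetical wrapper around the Twitter API

var app = express();

app.post('/my_server_endpoint', function(req, res) {
  var userId = req.user.twitterId; // however your app identifies the Twitter user

  rateLimiter.checkRateLimiting(userId, function(err) {
    if (err) {
      // Either a Redis error or the 'Please try again in N seconds.' message.
      return res.status(429).send(err);
    }
    // checkRateLimiting has already set a fresh 60-second key for this user.
    twitter.getTimeline(userId, function(err, tweetData) {
      if (err) {
        return res.status(502).send(err);
      }
      // Reset the key again now that the call to Twitter has completed.
      rateLimiter.setRateLimiter(userId, function() {
        res.json(tweetData);
      });
    });
  });
});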

If we wanted to allow up to 15 requests at any point within a 15-minute window, rather than just one per minute, we would store an array of the timestamps of (up to) the last 15 requests, setting the key's expiration to 15 minutes after each update of the array.
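
Here's a sketch of that variant, swapping the single setex key for a Redis list (lrange, lpush, ltrim and expire are all standard Redis commands; the constants and key layout are just illustrative). The age filter on read is needed because each new call refreshes the key's 15-minute expiration, so entries older than 15 minutes can still be sitting in the list:

var WINDOW_MS = 15 * 60 * 1000; // 15 minutes
var MAX_CALLS = 15;

exports.checkRateLimiting = function(userId, callback) {
  // The key now holds a list of timestamps, newest first.
  client.lrange(userId, 0, -1, function(err, timestamps) {
    if (err) {
      return callback(err);
    }
    var now = new Date().getTime();
    // Keep only the timestamps from the last 15 minutes.
    var recent = timestamps.map(function(t) {
      return parseInt(t, 10);
    }).filter(function(t) {
      return now - t < WINDOW_MS;
    });
    if (recent.length >= MAX_CALLS) {
      // The oldest timestamp still inside the window tells us when a slot opens up.
      var wait = WINDOW_MS - (now - recent[recent.length - 1]);
      return callback('Please try again in ' + Math.ceil(wait / 1000) + ' seconds.');
    }
    // Record this call, cap the list at 15 entries, and reset the expiration.
    client.lpush(userId, now.toString(), function(err) {
      if (err) {
        return callback(err);
      }
      client.ltrim(userId, 0, MAX_CALLS - 1, function() {
        client.expire(userId, 15 * 60, function() {
          callback(null);
        });
      });
    });
  });
};

Note that with this counting approach each call to Twitter should be recorded only once (here, when the check passes), rather than setting the key both before and after the Twitter call as in the one-per-minute version, since recording it twice would count one user action as two calls.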