Fairly One-Dimensional

Clustering subreddits by common word usage


One of reddit's best features, along with its voting system, is the ability for users to create their own subreddits, forums dedicated to specific topics. There are subreddits for any and every topic one can think of, and redditors know that subreddits quickly take on dynamic personalities. Some subreddits are known for vigorous discussion, while others simply represent a constantly updated collection of entertaining content. Some serve as learning resources for those new to a field, while others are places for debates among experts. Some are incredibly supportive, while others quickly become havens for trolls.

But what defines a subreddit? There are some obvious answers: topic, content type (images, videos, self-posts, or all of the above), and user population. For example, by topic, one might expect subreddits related to video games to be more similar to one another than any of them are to /r/politics. By content type, it seems reasonable to assume that self-post only subreddits like /r/AskHistorians and /r/AskScience are more similar to one another than either is to /r/AdviceAnimals. But are there more subtle differences between subreddits that can be used to group them in meaningful ways as well? Do users of the different subreddits write in distinct, predictable fashion? How much information does it take to categorize a subreddit? As it turns out, not nearly as much as one might think.

Creating subreddit-specific word frequency distributions

To answer the question of whether users in different subreddits write in distinguishable ways, I analyzed the frequency of words used in the comments of each subreddit. Choosing the right number of words to analyze is a bit of a balancing act. Choose too few words (the, a(n), etc.), and the subreddits will be entirely indistinguishable. Choose too many, and you'll quickly start getting subreddit-specific words, such as names, which will trivialize the problem (e.g. "Clinton" is much more likely to appear in politics-related subreddits, while "Bioshock" is much more likely to appear in video game subreddits). For these analyses, I chose to use the 100 words most frequently used across the comments of the top 50 subreddits. This list includes common articles, a lot of pronouns, and a lot of basic verbs. However, there are no words which should be definitively linked to a given subreddit. For example, the 98th, 99th, and 100th words are "going," "want," and "didn't," respectively. You can see a complete list of the words here. Thus, the distribution of these words should provide an intuition into how users write while remaining agnostic to the "jargon" of each subreddit.
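As a rough sketch of how such a list might be built (the exact tokenization isn't shown in this post, so the regex and the all_comments variable below are illustrative assumptions), a simple counter over lowercased comments does the job:

import re
from collections import Counter

# all_comments is assumed to hold every comment string from the top 50 subreddits
word_counts = Counter()
for comment in all_comments:
    word_counts.update(re.findall(r"[a-z']+", comment.lower()))

top_100_words = [word for word, count in word_counts.most_common(100)]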

With these words in hand, I analyzed the comments submitted to the 50 most popular subreddits between March 2 and March 8, 2015. If you're interested in how I acquired this dataset, check out this post. To create word frequency distributions for each subreddit, I simply counted the number of occurrences (case-insensitive) of each of the 100 words, and normalized by the total number of words in each subreddit. This normalization step is key, because if one simply uses the absolute counts, subreddits with longer comments (such as /r/AskReddit) will clearly separate from all the other subreddits.
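Here's a minimal sketch of that step, assuming comments_by_subreddit maps subreddit names to lists of comment strings and top_100_words is the list from the sketch above (comments_by_subreddit and subreddit_names are illustrative names, not variables from the actual notebook):

import re
from collections import Counter

import numpy as np

def word_distribution(comments, vocabulary):
    # frequency of each vocabulary word, normalized by the subreddit's total word count
    vocab = set(vocabulary)
    counts = Counter()
    total_words = 0
    for comment in comments:
        tokens = re.findall(r"[a-z']+", comment.lower())
        total_words += len(tokens)
        counts.update(t for t in tokens if t in vocab)
    return np.array([counts[w] / float(total_words) for w in vocabulary])

# one row per subreddit, one column per word
distributions = np.vstack([word_distribution(comments_by_subreddit[name], top_100_words)
                           for name in subreddit_names])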

A subreddit distance matrix

As a first pass analysis on these data, I calculated the Euclidean distance between the 100-dimensional normalized word distributions for each pair of subreddits, resulting in the following matrix:

Each point in the matrix represents the comparison between two subreddits. Cooler colors signify more similar subreddits, hotter colors subreddits that are more different. Elements along the diagonal represent a comparison of a subreddit to itself, so the distance is 0. Also note that there's no directionality to these comparisons, so the matrix is symmetric.
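For reference, a sketch of how a matrix like this can be computed and plotted with SciPy and matplotlib (the colormap and labels are illustrative; distributions and subreddit_names come from the sketch above):

import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist, squareform

# pairwise Euclidean distances between the 100-dimensional word distributions
distance_matrix = squareform(pdist(distributions, metric='euclidean'))

plt.imshow(distance_matrix, interpolation='nearest', cmap='jet')  # cooler = more similar
plt.colorbar()
plt.xticks(range(len(subreddit_names)), subreddit_names, rotation=90)
plt.yticks(range(len(subreddit_names)), subreddit_names)
plt.show()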

A few observations pop out immediately. First, there are a few bastions of blue off the diagonal. In a lot of ways, these make intuitive sense. /r/funny, /r/pics, /r/gifs, /r/WTF, and /r/videos are all pretty similar to one another. All of these subreddits link to content as opposed to self-posts, they all are or once were default subreddits, and none of them are known for "serious" conversation.

Second, /r/circlejerk is different from every other subreddit. Third, the sports subreddits (/r/nba, /r/nfl, /r/SquaredCircle, and /r/soccer) are all pretty similar as are the variety of video game subreddits.

There are many more observations to be made from this matrix, but it's a little challenging to quickly grasp the clusters using this technique. Let's try a different method which might make this easier.

Clustering subreddits

Instead of plotting a distance matrix, it would be substantially more intuitive to plot the subreddits such that their locations describe their similarity. Unfortunately, we've yet to find a great way to visualize a 100-dimensional space, so I used principal components analysis (PCA), one of the most basic forms of dimensionality reduction, to allow us to better visualize the data. Briefly, PCA is a method which allows us to reveal the underlying structure in the data. While the data may occupy 100 dimensions, if dimensions are strongly correlated, we might only need a few dimensions to describe the majority of the variability. PCA attempts to do this by remapping or "projecting" the data onto these dimensions. As it turns out, in these data, there's quite a bit of structure, as the first three principal components explain more than 50% of the total variance, and the first 15 explain more than 90%.
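The analysis doesn't depend on any particular implementation, but as a sketch, scikit-learn's PCA gets you there in a couple of lines (continuing from the distributions array above):

from sklearn.decomposition import PCA

pca = PCA()
projected = pca.fit_transform(distributions)  # subreddits projected onto the components

# how much variance the leading components capture
print(pca.explained_variance_ratio_[:3].sum())   # more than 50% of the variance
print(pca.explained_variance_ratio_[:15].sum())  # more than 90%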

I then used affinity propagation, a clustering algorithm based on message passing, to cluster the data in the first 3 principal components. One really nice feature of affinity propagation is that, as opposed to k-means clustering, it doesn't require you to specify the number of clusters beforehand. The algorithm clustered the data into 7 nicely separated clusters, as displayed in the image below.
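A minimal sketch of that clustering step, assuming the projected array from the PCA sketch above (scikit-learn's AffinityPropagation with default parameters stands in here for whatever settings were actually used):

from sklearn.cluster import AffinityPropagation

# cluster the subreddits in the space of the first three principal components
af = AffinityPropagation().fit(projected[:, :3])

for label, name in zip(af.labels_, subreddit_names):
    print(label, name)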

From this image, we can see that not only do the data cluster cleanly, but the clusters also make sense. The orange cluster contains all the sports subreddits, the navy blue cluster contains the content subreddits discussed above, the royal blue cluster contains the video game subreddits, the green cluster contains an odd assortment of subreddits with no clear pattern, and the teal cluster contains the more intellectual subreddits.

Interestingly, the most similar pair of subreddits, /r/gentlemanboners and /r/Celebs, define a cluster all on their own, as does /r/circlejerk.

What defines the subreddit clusters?

So we can cluster the subreddits cleanly, but what defines these clusters? As a general overview, we can look at the contribution of each word to each of the principal components.

The above plot shows the sum of the absolute values of each word's contributions to the first three principal components. If we look at the words which have the largest contribution, they tend to be pronouns and possessive pronouns (my, I, you, she, her, etc.), along with a few other miscellaneous words like "looks."
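In scikit-learn terms, those contributions can be read off the component loadings; a sketch, continuing from the PCA above:

import numpy as np

# sum of the absolute loadings of each word across the first three principal components
word_contributions = np.abs(pca.components_[:3]).sum(axis=0)

# the words with the largest total contribution
for idx in np.argsort(word_contributions)[::-1][:10]:
    print(top_100_words[idx], word_contributions[idx])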

But what about individual clusters? To analyze the words that define individual subreddits, I calculated the mean frequency for each word across all the subreddits and then divided each subreddit's distribution by the mean distribution. A value of 1 indicates that the word has the same frequency as the mean frequency for that word. Values above/below 1 indicate that the word is over/underrepresented. So, what does this look like for the gentlemanboners/Celebs cluster?
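A sketch of that calculation (the index lookup for a specific subreddit is purely illustrative):

import numpy as np

mean_distribution = distributions.mean(axis=0)
relative_frequency = distributions / mean_distribution  # 1 = average, >1 = overrepresented

# e.g. the most overrepresented words for one subreddit
i = subreddit_names.index('gentlemanboners')
for idx in np.argsort(relative_frequency[i])[::-1][:5]:
    print(top_100_words[idx], relative_frequency[i, idx])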

Comically, the cluster is defined by a nine-fold overrepresentation of "she," an eight-fold overrepresentation of "her," and a five-fold overrepresentation of "looks," along with an underrepresentation of "he," "his," and "people." The sports subreddits, on the other hand, are defined by pretty much the opposite phenomenon. Take /r/nfl for example:

The sports subreddits show an overrepresentation of male pronouns, along with an underrepresentation of female pronouns and "looks." What about the subreddits in the green cluster such as /r/trollXChromosomes?

Again, the subreddits in this cluster are defined by pronouns, but this time by pronouns associated with oneself such as "I," "my," "me," and "I'm."

The other clusters are defined by more subtle patterns, and are less dominated by individual words. However, I want to point out one more which I find personally gratifying. What defines /r/science?

Again, some pronouns, but perhaps reflecting the collective spirit of science, the singular pronouns are all underrepresented while the only overrepresented pronoun is "we."


Overall, I'm quite pleased with how this analysis turned out. Not only did the subreddits cluster in a reasonable fashion according to topic, but many of the clusters can also be defined by differences in just a few individual words, with pronouns having a disproportionate influence. Perhaps most surprisingly, one can categorize subreddits based on just a small subset of words and comparatively little processing. I suppose how we write says a lot about us.

You can check out the IPython notebook used to perform these analyses here.

Creating a reddit data set


In preparation for the first set of analyses I'm planning for this blog, I spent some time over the last week preparing a package to create data sets from reddit. The package will collect comments and posts from specified subreddits within a custom date range and save them to a sqlite3 database for later analysis.

To do this, I've used PRAW, a Python wrapper for the Reddit API. PRAW allows you to easily retrieve comments and posts from specific subreddits and users and gracefully handles Reddit's API usage limits. However, finding posts within a specific time range is much trickier.

PRAW/Reddit API Basics

This isn't intended as a tutorial for PRAW. If you want that, I recommend visiting their docs. This section will only go through the fundamentals of PRAW necessary to create a data set from reddit.

First, let's import praw and the redditDataset module

import praw
import redditDataset

Next, let's initialize a connection with PRAW as follows:

redditObject = praw.Reddit(user_agent='get_reddit_dataset')

We can grab subreddits using getSubreddits. Here, we'll grab /r/funny and /r/gaming:

subreddits = redditDataset.getSubreddits(redditObject, ['funny', 'gaming'])

PRAW also has a variety of functions to grab subreddits. One of the most useful is the method get_popular_subreddits.

popularSubreddits = redditObject.get_popular_subreddits(limit=200)

This will return a generator containing the 200 most popular subreddits. PRAW has many other methods to grab specific submissions, comments, users, etc., but these are the only ones you'll need to know to use the module.

Now that we have a reddit object and the subreddits to query, let's make a data set.

Grabbing a data set from a set of subreddits

Once you have a generator or list of subreddit objects and your praw object, call createDataset to start downloading comments and posts into a sqlite3 database. The database will be saved in ~\Databases\<dbName>.db.

Let's grab all the posts from the funny subreddit from March 1, 2015:

funnySubreddit = redditDataset.getSubreddits(redditObject, ['funny'])
redditDataset.createDataset(redditObject, funnySubreddit, startDate='150301000000',
                            endDate='150301235959', dbName='March_01_2015_funny_posts')

Basically, you give createDataset the reddit object, the subreddits (in list or generator form), a start and end date, a base name for the database, and a fine scale (which I'll get to in a moment).

For the start and end date, provide a string in the format 'yymmddHHMMSS'. So, in the above example, we're pulling posts between March 1, 2015 at 12:00:00 AM and March 1, 2015 at 11:59:59 PM.

Unfortunately, the reddit API will only provide a list of 1000 posts for any query. What does this mean for us? Well, say we want to get all the posts from 2014. If we request all those posts, we'll only get the 1000 with whatever sort is specified (createDataset uses a 'top' sort). To get around this, createDataset will make many requests in increments of 'fineScale' hours. So, in the example above, we'll actually make six separate queries for a theoretical maximum of 6,000 posts. Because of the overhead associated with getting posts, we want to set this parameter to be as large as possible while still getting all the data we want. I've found that 8 works well for all but the most frequented subreddits.
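Under the hood, the chunking might look something like this sketch (this is illustrative, not the package's actual code; the 4-hour fine scale is just the value that yields six queries for a one-day range):

from datetime import datetime, timedelta

def dateChunks(startDate, endDate, fineScale):
    # split a 'yymmddHHMMSS' date range into fineScale-hour chunks
    start = datetime.strptime(startDate, '%y%m%d%H%M%S')
    end = datetime.strptime(endDate, '%y%m%d%H%M%S')
    step = timedelta(hours=fineScale)
    chunks = []
    while start < end:
        chunks.append((start, min(start + step, end)))
        start += step
    return chunks

print(len(dateChunks('150301000000', '150301235959', 4)))  # 6 queries for one day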

And that's it! It will retrieve all the posts within the desired range along with the top comments from each post (by default, the top 100). One thing to note: because of the reddit API limits, this process is slow. We can only make 30 requests per minute, and currently we only get the data for one post per request. I think this can be improved (potentially up to 25 posts per request), but I haven't gotten around to it yet.

Database structure

The sqlite3 database is pretty simple. It has two tables: submissions and comments.

Each row in submissions represents a single post. The columns contain the postID, postTitle, postBody (text if a self-post, url if a link), postScore (as of when it was downloaded), subredditName, and subredditID.

Each row in comments represents a single comment in a post. The columns contain the commentDate, user, body, comScore (as of when it was downloaded), and the postID.
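Expressed as a schema, it looks roughly like this (the column types here are my own guesses, not necessarily the package's exact DDL):

import sqlite3

conn = sqlite3.connect('March_01_2015_funny_posts.db')
conn.execute('''CREATE TABLE IF NOT EXISTS submissions
                (postID TEXT, postTitle TEXT, postBody TEXT,
                 postScore INTEGER, subredditName TEXT, subredditID TEXT)''')
conn.execute('''CREATE TABLE IF NOT EXISTS comments
                (commentDate TEXT, user TEXT, body TEXT,
                 comScore INTEGER, postID TEXT)''')
conn.commit()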

How to grab posts within a specified time range

If you're just interested in using the package, you can skip this part. Figuring out how to grab posts within a time range was a bit of a pain as there's no native support for it in the reddit API or in PRAW. Reddit offers native support for filtering based on a set date range relative to now. So, for example, it's easy to grab posts from the last hour, day, week, month, or year, but challenging to grab posts from the month before last, or even the last month except for today. I spent a long time searching for an alternative with little success.

I eventually figured out that the reddit search engine accepts timestamp queries with the date provided in the unix time format. So, the search query timestamp:1425186000..1425229199 will return up to 1000 posts from March 1, 2015, sorted however you'd like (new, top, hot, relevance). Importantly, this will not work using the default reddit search engine. You need to add syntax=cloudsearch to the end of the url to enable the native features of Amazon CloudSearch, one of which is timestamps.
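In PRAW terms, the same query can be built from unix timestamps. This is a sketch assuming a PRAW version whose search() method accepts subreddit and syntax arguments, as older releases did; the timestamps are computed in UTC here, so they won't match the example numbers above exactly:

import calendar
from datetime import datetime

# unix timestamps bounding March 1, 2015 (UTC in this sketch)
start = calendar.timegm(datetime(2015, 3, 1, 0, 0, 0).timetuple())
end = calendar.timegm(datetime(2015, 3, 1, 23, 59, 59).timetuple())
query = 'timestamp:{0}..{1}'.format(start, end)

# hand the cloudsearch query off to reddit's search
posts = redditObject.search(query, subreddit='funny', sort='top',
                            syntax='cloudsearch', limit=1000)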


You can check out the code for this project here. I've also collected a data set of almost all the posts along with their top 100 comments from the top 200 subreddits from March 2-8, 2015. You can get this database here.

Now that I have the data, it's time to start asking questions!