Last Word Clustering

I recently discovered the Texas Department of Criminal Justice has a website containing links to the last statements of executed offenders dating from 1982 until the present.  Regardless of your view on capital punishment, I think it’s safe to say that this is a unique, publicly available data set.  I thought an interesting project would be to scrape the data and see how well the statements could be clustered (spoiler alert – not very well).  I didn’t want to read any of the statements ahead of time; rather I wanted to do simple unsupervised clustering and see if the clusters corresponded to any common themes.

After extracting all the data, I had 418 statements.  Removing common articles (“a”, “as”, “the”, etc.), and ignoring any terms present in only a single statement, I was left with 1294 unique words.  I then created a 1294 by 418 matrix A where A_{i,j} records the number of occurrences of word i in statement j.  As some statements are very long (the longest was over 7000 words), these statements will bias most clustering methods.  At the same time, I don’t want to scale each column to be unit norm, as clustering on the sphere doesn’t make intuitive sense either.  Instead, I scale each column by one over the maximum value in that column.  In this way, we have a penalty for long statements with many repeated words while still not greatly penalizing all long statements.  Just as we accounted for differences between statements, some words are very common and tend to be unimportant when clustering.  We account for this by weighting each row in A.  A natural choice is \ln(418/n_i) where n_i is the number of statements containing term i.  This tends to 0 as a term appears in more and more of the 418 statements.  These types of weighting factors are all rolled up under the name “term frequency-inverse document frequency.”  Our task is now to cluster the columns of A.
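For the curious, here is roughly what that weighting looks like in code.  This is only a sketch: `statements` is assumed to be the list of scraped statements (a name I'm making up here), and scikit-learn's built-in English stop-word list stands in for the hand-picked articles.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Raw counts; drop common English stop words and any term appearing in only
# one statement (min_df=2).  `statements` is assumed to hold the scraped text.
vectorizer = CountVectorizer(stop_words="english", min_df=2)
counts = vectorizer.fit_transform(statements)   # (n_statements, n_terms), sparse

A = counts.T.toarray().astype(float)            # words x statements, as above
n_docs = A.shape[1]

# Column scaling: divide each statement by its largest word count.
col_max = A.max(axis=0, keepdims=True)
col_max[col_max == 0] = 1.0                     # guard against empty columns
A /= col_max

# Row weighting: inverse document frequency ln(418 / n_i), where n_i is the
# number of statements containing term i.
n_i = (A > 0).sum(axis=1)
A *= np.log(n_docs / n_i)[:, None]
```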

A common dimensionality reduction method is to use a singular value decomposition (SVD) and take only the components associated with large singular values.  This simple idea takes on many names including principal component analysis (PCA) and Latent Semantic Indexing (LSI).  Under this decomposition, we can examine the contribution of the singular vectors to each statement.  The drawback here is that singular vectors are sometimes difficult to interpret in terms of the original data.  We could also simply cluster our data by grouping statements when they are close to the same singular vector.  Regardless, our matrix A is not ideal; it has one large singular value, and the remaining singular values decay slowly.

[Figure: singular values of A]

That is, A does not have a good low rank approximation – good clustering may be impossible.
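You can check this yourself.  Continuing the sketch above (so `A` is the weighted term-statement matrix), the full set of singular values comes straight from NumPy:

```python
import numpy as np
import matplotlib.pyplot as plt

# Singular values of A, largest first; we only need the values, not the vectors.
singular_vals = np.linalg.svd(A, compute_uv=False)

plt.semilogy(singular_vals, "o", markersize=3)
plt.xlabel("index")
plt.ylabel("singular value")
plt.show()
```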

Since an SVD can be difficult to interpret in terms of the original data, we’ll instead consider non-negative matrix factorization (NMF).  This is also a low rank decomposition, but here we factor our matrix as A=WH where W and H are low rank matrices with non-negative entries.  Now we do have a nice interpretation: the rows of W correspond to words, so the columns of W represent common themes built from these words.  The columns of H represent statements – the entries in each column describe “how much” of the corresponding theme from W appears in that statement.  You can think of NMF as soft clustering where statements may contain multiple themes.  Since A is nowhere near low rank, I don’t expect anything too miraculous, but we’ll see if any themes can be identified by inspection after performing NMF.  I’ll choose W and H to have rank 5.

In NMF, we minimize || A-WH || (Frobenius norm) subject to W,H\geq 0.  A solution method for this optimization problem can be derived from Lagrange multipliers, but I’m just using the algorithm from the Python library scikit-learn (a code sketch follows the theme list below).  With my optimal W and H, || A-WH ||=131.  Comparing this with ||A||=140, WH is a crappy approximation to A.  This isn’t unexpected since A is not even close to low rank.  Thresholding the entries of W, I take the dominant words from each “theme”.  These turn out to be:

  1. ‘family’ ‘god’ ‘hope’ ‘life’ ‘like’ ‘sorry’ ‘say’ ‘thank’ ‘would’ ‘yall’ (remorseful?)
  2. ‘amen’ ‘art’ ‘goodness’ ‘leadeth’ ‘shall’ ‘thou’ ‘thy’ ‘valley’ ‘walk’ ‘waters’ (religious?)
  3. ‘declined’ ‘last’ ‘make’ ‘offender’ ‘statement’ (no statement?)
  4. ‘accept’ ‘blessed’ ‘care’ ‘caused’ ‘free’ ‘kids’ ‘responsible’ ‘set’ ‘stay’ ‘stood’ (???)
  5. ‘against’ ‘answer’ ‘coming’ ‘commit’ ‘crime’ ‘evidence’ ‘lied’ ‘owe’ ‘swear’ ‘upon’ (confrontational?)
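Here is roughly how the factorization and the theme words can be reproduced with scikit-learn.  This continues the earlier sketch; the solver settings are my choice here, not necessarily the ones that produced the numbers quoted above.

```python
import numpy as np
from sklearn.decomposition import NMF

# Rank-5 non-negative factorization A ≈ WH.
model = NMF(n_components=5, init="nndsvd", max_iter=500)
W = model.fit_transform(A)      # words x 5: each column is a "theme"
H = model.components_           # 5 x statements: theme weights per statement

print("||A - WH|| =", np.linalg.norm(A - W @ H))
print("||A||      =", np.linalg.norm(A))

# Dominant words per theme: sort each column of W and keep the largest entries.
terms = np.array(vectorizer.get_feature_names_out())
for k in range(5):
    top = np.argsort(W[:, k])[::-1][:10]
    print(f"theme {k + 1}:", " ".join(terms[top]))
```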

These represent the main groups to which the statements belong, and despite the crappy fit, I think there is still something to see here.  Note that theme 3 seems to indicate no statement was made.  Statements with this theme probably don’t contain many words from any other group.  Let’s check this.  I’ll make a scatter plot where each point represents one of the 418 statements.  The axes correspond to how closely each statement is aligned with theme 3 and theme 4 respectively.

[Figure: scatter plot of theme 3 versus theme 4 components]

The points directly along one of the axes indicate a statement contains one theme and not the other, and indeed, this accounts for most points.  Let’s wrap up by examining a different pair of themes.  We’ll see if statements which appear to be religious (theme 2) cluster separately from statements which have more confrontational wording (theme 5).

[Figure: scatter plot of theme 2 versus theme 5 components]

We again see many points along the axes, but there are more than a few with components from both groups.
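Scatter plots like these can be made directly from the rows of H (continuing the sketch above; the rows are zero-indexed, so theme 3 is `H[2]`, and so on):

```python
import matplotlib.pyplot as plt

# Theme 3 vs. theme 4 component for each of the 418 statements.
plt.scatter(H[2], H[3], s=10)
plt.xlabel("theme 3 component (no statement)")
plt.ylabel("theme 4 component")
plt.show()

# Theme 2 (religious) vs. theme 5 (confrontational).
plt.scatter(H[1], H[4], s=10)
plt.xlabel("theme 2 component (religious)")
plt.ylabel("theme 5 component (confrontational)")
plt.show()
```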

Interestingly, despite the high dimensional structure of our data and the terrible low-rank approximation used, NMF did pick out weak but meaningful themes in the data.
