Simple end-to-end TensorFlow examples

A walk-through with code for using TensorFlow on some simple simulated data sets.

I’ve been reading papers about deep learning for several years now, but until recently hadn’t dug in and implemented any models using deep learning techniques for myself. To remedy this, I started experimenting with Deeplearning4J a few weeks ago, but with limited success. I read more books, primers and tutorials, especially the amazing series of blog posts by Chris Olah and Denny Britz. Then, with incredible timing for me, Google released TensorFlow to much general excitement. So, I figured I’d give it a go, especially given Delip Rao’s enthusiasm for it; he even compared the move from Theano to TensorFlow to going from “a Honda Civic to a Ferrari.”

Here’s a quick prelude before getting to my initial simple explorations with TensorFlow. As most people (hopefully) know, deep learning encompasses ideas going back many decades (done under the names of connectionism and neural networks) that only became viable at scale in the past decade with the advent of faster machines and some algorithmic innovations. I was first introduced to them in a class taught by my PhD advisor, Mark Steedman, at the University of Pennsylvania in 1997. He was especially interested in how they could be applied to language understanding, which he wrote about in his 1999 paper “Connectionist Sentence Processing in Perspective.” I wish I understood more about that topic (and many others) back then, but then again that’s the nature of being a young grad student. Anyway, Mark’s interest in connectionist language processing arose in part from being on the dissertation committee of James Henderson, who completed his thesis “Description Based Parsing in a Connectionist Network” in 1994. James was a post-doc in the Institute for Research in Cognitive Science at Penn when I arrived in 1996. As a young grad student, I had little idea of what connectionist parsing entailed, and my understanding from more senior (and far more knowledgeable) students was that James’ parsers were really interesting but that he had trouble getting the models to scale to larger data sets—at least compared to the data-driven parsers that others like Mike Collins and Adwait Ratnaparkhi were building at Penn in the mid-1990s. (Side note: for all the kids using logistic regression for NLP out there, you probably don’t know that Adwait was the one who first applied LR/MaxEnt to several NLP problems in his 1998 dissertation “Maximum Entropy Models for Natural Language Ambiguity Resolution”, in which he demonstrated how amazingly effective it was for everything from classification to part-of-speech tagging to parsing.)

Back to TensorFlow and the present day. I flew from Austin to Washington DC last week, and the morning before my flight I downloaded TensorFlow, made sure everything compiled, downloaded the necessary datasets, and opened up a bunch of tabs with TensorFlow tutorials. My goal was, while on the airplane, to run the tutorials, get a feel for the flow of TensorFlow, and then implement my own networks for doing some made-up classification problems. I came away from the exercise extremely pleased. This post explains what I did and gives pointers to the code to make it happen. My goal is to help out people who could use a bit more explicit instruction and guidance, using a complete end-to-end example with easy-to-understand data. I won’t give lots of code examples in this post as there are several tutorials that already do that quite well—the value here is in the simple end-to-end implementations, the data to go with them, and a bit of explanation along the way.

As a preliminary, I recommend going to the excellent TensorFlow documentation, downloading it, and running the first example. If you can do that, you should be able to run the code I’ve provided to go along with this post in my try-tf repository on Github.

Simulated data

As a researcher who works primarily on empirical methods in natural language processing, my usual tendency is to try new software and ideas out on language data sets, e.g. text classification problems and the like. However, after hanging out with a statistician like James Scott for many years, I’ve come to appreciate the value of using simulated datasets early on to reduce the number of unknowns while getting the basics right. So, when sitting down with TensorFlow, I wanted to try three simulated data sets: linearly separable data, moon data and saturn data. The first is data that linear classifiers can handle easily, while the latter two require the introduction of non-linearities enabled by models like multi-layer neural networks. Here’s what they look like, with brief descriptions.

The linear data has two clusters that can be separated by a diagonal line from top left to bottom right:

linear_data_train.jpg

Linear classifiers like perceptrons, logistic regression, linear discriminant analysis, support vector machines and others do well with this kind of data because learning these lines (hyperplanes) is exactly what they do.

The moon data has two clusters in crescent shapes that are tangled up such that no line can keep all the orange dots on one side without also including blue dots.

moon_data_train.jpg

Note: see Implementing a Neural Network from Scratch in Python for a discussion of working with the moon data using Theano.

The saturn data has a core cluster representing one class and a ring cluster representing the other:

saturn_data_train.jpg

With the saturn data, a line is catastrophically bad. Perhaps the best one can do is draw a line that has all the orange points to one side. This ensures a small, entirely blue side, but it leaves the majority of blue dots in orange territory.

Example data has been generated in try-tf/simdata for each of these datasets, including a training set and test set for each. These are for the two-dimensional cases visualized above, but you can use the scripts in that directory to generate data with other parameters, including more dimensions, greater variances, etc. See the commented-out code for help visualizing the outputs, or adapt plot_data.R, which visualizes 2-d data in CSV format. See the README for instructions.
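If you want a feel for how such data can be produced, here is a small NumPy sketch in the spirit of the saturn generator. This is an illustration only; the function name and all of its parameters are made up, and the actual scripts in try-tf/simdata differ.

```python
import numpy as np

def saturn_data(n_per_class=100, ring_radius=5.0, noise=0.5, seed=0):
    """Toy 'saturn' data: a Gaussian core (class 0) inside a noisy ring (class 1).

    Mimics the shape of simdata/saturn_data_train.csv; parameters are invented.
    """
    rng = np.random.RandomState(seed)
    core = rng.randn(n_per_class, 2) * noise             # class 0: central blob
    angles = rng.uniform(0, 2 * np.pi, n_per_class)      # class 1: noisy ring
    radii = ring_radius + rng.randn(n_per_class) * noise
    ring = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
    X = np.vstack([core, ring])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

X, y = saturn_data()
```

Plotting X colored by y (e.g. with plot_data.R or matplotlib) should reproduce the core-and-ring picture above.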

Related: check out Delip Rao’s post on learning arbitrary lambda expressions.

Softmax regression

Let’s start with a network that can handle the linear data, which I’ve written in softmax.py. The TensorFlow page has pretty good instructions for how to define a single layer network for MNIST, but no end-to-end code that defines the network, reads in data (consisting of label plus features), trains and evaluates the model. I found writing this to be a good way to familiarize myself with the TensorFlow Python API, so I recommend trying it yourself before looking at my code and then referring to it if you get stuck.

Let’s run it and see what we get.

$ python softmax.py --train simdata/linear_data_train.csv --test simdata/linear_data_eval.csv
Accuracy: 0.99

This performs one pass (epoch) over the training data, so parameters were only updated once per example. 99% is good held-out accuracy, but allowing two training epochs gets us to 100%.

$ python softmax.py --train simdata/linear_data_train.csv --test simdata/linear_data_eval.csv --num_epochs 2
Accuracy: 1.0

There’s a bit of code in softmax.py to handle options and read in data. The most important lines are the ones that define the input data, the model, and the training step. I simply adapted these from the MNIST beginners tutorial, but softmax.py puts it all together and provides a basis for transitioning to the network with a hidden layer discussed later in this post.
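To make those three pieces concrete, here is a minimal plain-NumPy sketch of the same computation: the softmax output, the cross-entropy gradient, and the gradient-descent update. This illustrates the math the TensorFlow graph performs; it is not the code in softmax.py, and the toy clusters, epoch count, and learning rate are made up.

```python
import numpy as np

# Toy linearly separable data in the spirit of simdata/linear_data_train.csv
# (two features, two classes); clusters and learning rate are invented here.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) + [-3.0, 3.0],    # class 0 cluster
               rng.randn(50, 2) + [3.0, -3.0]])   # class 1 cluster
y = np.array([0] * 50 + [1] * 50)
Y = np.eye(2)[y]                                  # one-hot labels

W = np.zeros((2, 2))                              # one weight column per class
b = np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

for epoch in range(5):                            # a few full-batch epochs
    probs = softmax(X @ W + b)
    W -= 0.1 * X.T @ (probs - Y) / len(X)         # gradient of cross-entropy loss
    b -= 0.1 * (probs - Y).mean(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

On data this well separated, the sketch reaches essentially perfect training accuracy within a few epochs, mirroring the behavior of softmax.py on the linear data.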

To see a little more, let’s turn on the verbose flag and run for 5 epochs.

$ python softmax.py --train simdata/linear_data_train.csv --test simdata/linear_data_eval.csv --num_epochs 5 --verbose True
Initialized!

Training.
0 1 2 3 4 5 6 7 8 9
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39
40 41 42 43 44 45 46 47 48 49

Weight matrix.
[[-1.87038445 1.87038457]
[-2.23716712 2.23716712]]

Bias vector.
[ 1.57296884 -1.57296848]

Applying model to first test instance.
Point = [[ 0.14756215 0.24351828]]
Wx+b = [[ 0.7521798 -0.75217938]]
softmax(Wx+b) = [[ 0.81822371 0.18177626]]

Accuracy: 1.0
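The last few lines of that output can be reproduced by hand. A quick NumPy check, plugging in the printed weight matrix, bias vector, and first test point:

```python
import numpy as np

W = np.array([[-1.87038445, 1.87038457],
              [-2.23716712, 2.23716712]])  # weight matrix from the run above
b = np.array([1.57296884, -1.57296848])    # bias vector from the run above
x = np.array([0.14756215, 0.24351828])     # first test instance

logits = x @ W + b                          # Wx+b
probs = np.exp(logits) / np.exp(logits).sum()
print(logits)  # approximately [ 0.7521798 -0.7521794]
print(probs)   # approximately [ 0.8182237  0.1817763]
```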

Consider first the weights and bias. Intuitively, the classifier should find a separating hyperplane between the two classes, and it probably isn’t immediately obvious how W and b define that. For now, consider only the first column, with w1=-1.87038445, w2=-2.23716712 and b=1.57296884. Recall that w1 is the parameter for the `x` dimension and w2 is for the `y` dimension. The separating hyperplane satisfies Wx+b=0, from which we get the standard y=mx+b form.

Wx + b = 0
w1*x + w2*y + b = 0
w2*y = -w1*x - b
y = (-w1/w2)*x - b/w2

For the parameters learned above, we have the line:

y = -0.8360504*x + 0.7031074
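The arithmetic is easy to verify by plugging the first-column parameters into the formulas:

```python
w1, w2, b = -1.87038445, -2.23716712, 1.57296884  # first column of W, first bias

slope = -w1 / w2       # -0.8360504
intercept = -b / w2    #  0.7031074
print(slope, intercept)
```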

Here’s the plot with the line, showing it is an excellent fit for the training data.

linear_data_hyperplane.jpg

The second column of weights and bias separate the data points at the same place as the first, but mirrored 180 degrees from the first column. Strictly speaking, it is redundant to have two output nodes since a multinomial distribution with n outputs can be represented with n-1 parameters (see section 9.3 of Andrew Ng’s notes on supervised learning for details). Nonetheless, it’s convenient to define the network this way.
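The redundancy is easy to see directly: softmax is invariant to adding the same constant to every logit, so one output column could be pinned at zero without changing any predicted distribution. A quick check, using the two logits printed earlier:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

z = np.array([0.7521798, -0.7521794])  # logits from the first test instance
shifted = z - z[1]                     # pin the second logit at zero
print(softmax(z))                      # same distribution either way
print(softmax(shifted))
```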

Finally, let’s try the softmax network on the moon and saturn data.

$ python softmax.py --train simdata/moon_data_train.csv --test simdata/moon_data_eval.csv --num_epochs 2
Accuracy: 0.856

$ python softmax.py --train simdata/saturn_data_train.csv --test simdata/saturn_data_eval.csv --num_epochs 2
Accuracy: 0.45

As expected, it doesn’t work very well!

Network with a hidden layer

The program hidden.py implements a network with a single hidden layer, and you can set the size of the hidden layer from the command line. Let’s try first with a two-node hidden layer on the moon data.

$ python hidden.py --train simdata/moon_data_train.csv --test simdata/moon_data_eval.csv --num_epochs 100 --num_hidden 2
Accuracy: 0.88

So, that was an improvement over the softmax network. Let’s run it again, exactly the same way.

$ python hidden.py --train simdata/moon_data_train.csv --test simdata/moon_data_eval.csv --num_epochs 100 --num_hidden 2
Accuracy: 0.967

Very different! What we are seeing is the effect of random initialization, which has a large effect on the learned parameters given the small, low-dimensional data we are dealing with here. (The network uses Xavier initialization for the weights.) Let’s try again but using three nodes.

$ python hidden.py --train simdata/moon_data_train.csv --test simdata/moon_data_eval.csv --num_epochs 100 --num_hidden 3
Accuracy: 0.969

If you run this several times, the results don’t vary much and hover around 97%. The additional node increases the representational capacity and makes the network less sensitive to initial weight settings.
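For intuition, the forward pass of such a network (a tanh hidden layer feeding the same softmax output as before) is only a few lines. Here is a plain-NumPy sketch with invented weights, sized for the two-feature, two-class, three-hidden-node setup; it is not the code in hidden.py:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(X, W1, b1, W2, b2):
    # The tanh hidden layer supplies the non-linearity that softmax
    # regression lacks; the output layer is an ordinary softmax.
    hidden = np.tanh(X @ W1 + b1)
    return softmax(hidden @ W2 + b2)

rng = np.random.RandomState(0)
X = rng.randn(4, 2)                    # four points, two features
W1, b1 = rng.randn(2, 3), np.zeros(3)  # three hidden nodes
W2, b2 = rng.randn(3, 2), np.zeros(2)  # two output classes
probs = forward(X, W1, b1, W2, b2)
```

Each row of probs is a valid distribution over the two classes; training consists of backpropagating the cross-entropy loss through both layers.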

Adding more nodes doesn’t change results much—see the WildML post using the moon data for some nice visualizations of the boundaries being learned between the two classes for different hidden layer sizes.

So, a hidden layer does the trick! Let’s see what happens with the saturn data.

$ python hidden.py --train simdata/saturn_data_train.csv --test simdata/saturn_data_eval.csv --num_epochs 50 --num_hidden 2
Accuracy: 0.76

With just two hidden nodes, we already have a substantial boost from the 45% achieved by softmax regression. With 15 hidden nodes, we get 100% accuracy. There is considerable variation from run to run (due to random initialization). As with the moon data, there is less variation as nodes are added. Here’s a plot showing the increase in performance from 1 to 15 nodes, including ten accuracy measurements for each node count.

hidden_node_curve.jpg

The line through the middle is the average accuracy measurement for each node count.

Initialization and activation functions are important

My first attempt at doing a network with a hidden layer was to merge what I had done in softmax.py with the network in mnist.py, provided with the TensorFlow tutorials. This was a useful exercise to get a better feel for the TensorFlow Python API, and helped me understand the programming model much better. However, I found that I needed upwards of 25 hidden nodes in order to reliably get >96% accuracy on the moon data.

I then looked back at the WildML moon example and figured something was quite wrong since just three hidden nodes were sufficient there. The differences were that the MNIST example initializes its hidden layers with truncated normals instead of normals divided by the square root of the input size, initializes biases at 0.1 instead of 0 and uses ReLU activations instead of tanh. By switching to Xavier initialization (using Delip’s handy function), 0 biases, and tanh, everything worked as in the WildML example. I’m including my initial version in the repo as truncnorm_hidden.py so that others can see the difference and play around with it. (It turns out that what matters most is the initialization of the weights.)
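For reference, Xavier (Glorot) initialization scales the weights by the layer’s fan-in and fan-out so that activation variance stays roughly constant from layer to layer. A small sketch of the uniform tanh variant (my own function, not Delip’s):

```python
import numpy as np

def xavier_init(n_in, n_out, rng=np.random):
    # Glorot/Xavier uniform initialization: the bound sqrt(6/(n_in+n_out))
    # keeps activation variance roughly constant across tanh layers, which
    # is what lets small networks like these train reliably.
    bound = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-bound, bound, size=(n_in, n_out))

W = xavier_init(2, 3)  # e.g. two inputs feeding three hidden nodes
```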

This is a simple example of what is often discussed with deep learning methods: they can work amazingly well, but they are very sensitive to initialization and choices about the sizes of layers, activation functions, and the influence of these choices on each other. They are a very powerful set of techniques, but they (still) require finesse and understanding, compared to, say, many linear modeling toolkits that can effectively be used as black boxes these days.

Conclusion

I walked away from this exercise very encouraged! I’ve been programming in Scala mostly for the last five years, so it required dusting off my Python (which I taught in my classes at UT Austin from 2005-2011, e.g. Computational Linguistics I and Natural Language Processing), but I found it quite straightforward. Since I work primarily with language processing tasks, I’m perfectly happy with Python since it’s a great language for munging language data into the inputs needed by packages like TensorFlow. Also, Python works well as a DSL for working with deep learning (it seems like there is a new Python deep learning package announced every week these days). It took me less than four hours to go through initial examples, and then build the softmax and hidden networks and apply them to the three data sets. (And a bunch of that time was me remembering how to do things in Python.)

I’m now looking forward to trying deep learning models, especially convnets and LSTMs, on language and image tasks. I’m also going to go back to my Scala code for trying out Deeplearning4J to see if I can get these simulation examples to run as I’ve shown here with TensorFlow. (I would welcome pull requests if someone else gets to that first!) As a person who works primarily on the JVM, it would be very handy to be able to work with DL4J as well.

After that, maybe I’ll write out the recurring rant going on in my head about deep learning not removing the need for feature engineering (as many backpropagandists seem to like to claim), but instead changing the nature of feature engineering, as well as providing a really cool set of new capabilities and tricks.

Improving race relations: a path forward

This is a long and personal post about racism in the USA. It’s an outpouring of some of what I’ve felt this past year, with an appeal for us all, whether white, black, Hispanic, Asian, mixed, decidedly undeclared, or whatever, to not give up, to keep working to make this country, this world, a better place.

The united colors of Baldridge: my hand, my wife’s hand, and our boys’ hands.

The catalyst for me to write this was a series of tweets by Shaun King (@shaunking) several months ago. King has emerged as one of the leaders of #BlackLivesMatter, a movement to document and address racism in the USA, and especially focus on police misconduct and brutality. In those tweets, King noted his acceptance of a pessimistic view that racism is a permanent feature of American society. It’s not an unreasonable perspective, but it deeply saddens me. As a white husband of a black woman and father of biracial children, I desperately want to remain optimistic. I need to remain optimistic. My family lives between two worlds, and we can’t pick sides. In this post, I want to give some support for embracing a more optimistic perspective. But first, let’s establish why there is good cause to be pessimistic.

It’s been a hell of a few years for race relations in the United States of America. From Trayvon Martin through to the Charleston shootings to Sam Dubose and Corey Jones, black people have been disproportionately killed. Included in the body count are far too numerous instances of police misconduct and brutality. This is violence meted out by the state, and the individuals are disproportionately people of color. It’s been going on for years and it’s nothing new to the black community. Nearly ubiquitous video cameras and social media are now finally making it less easy for the wider community to ignore.

As sad, frustrating and angering as this all is, this moment presents a tremendous opportunity. To put it simply: systemic racism can’t be addressed effectively without white Americans being aware of it and acting to reduce it. Until recently, most white people in the country seem to have been living under the convenient but false perception that racism is more or less a problem of the past. Now, white Americans see racism as a national problem, but generally don’t think it is a major problem in their own communities. In general, it seems white Americans tell themselves that perhaps there is some discrimination that we still need to address, but it’s not violent, really serious stuff. Maybe there are some backward people down south who are real racists, but by and large we’ve gotten past it, at least in our own communities. Unfortunately, that’s wishful thinking. Ignorance may at times be bliss, but that only really holds for the privileged. And, anyway, there are outright racist people, and they aren’t just in the south.

My wife is African-American. Our nine years together have been a crash course in race relations for me. There is so much I could never have guessed about the black experience in the United States without being with her. To learn at the age of five that there were people who wanted you dead because of your skin color, and furthermore, to learn this from a six-year-old friend. To wish at the age of seven that you were actually a white girl so that you could avoid the burden of being black (this is not uncommon, and Whoopi Goldberg has a powerful performance about it in her 1985 standup show Direct From Broadway). To hear your mom talk of seeing the severed head of a black man rolling down the street in the 1960s. To ask your husband not to stop in Vidor, Texas—even though you are in a traffic jam, pregnant and really needing to pee—because during college you saw “Nigger, don’t let the sun go down on you in this town” written on a wall there. To fear interactions with the police, even though you are a law-abiding, upstanding citizen with graduate degrees from Harvard and Yale. Just last week, she was driving down a road in Austin, at the speed limit, and a police officer in an SUV pulled up beside her, eyed her and matched her pace for some time—nothing happened, but it felt very threatening. Frankly, I didn’t really get her concerns about the police until last year. Now it is all too clear, and it was really driven home by what happened to Sandra Bland, right here in our home state of Texas, and in a city we often pass on our drive between Houston and Austin.

My wife and I have watched the events of the past year with sadness and horror. We have two bright and beautiful sons. Like any parents, we have huge dreams for them and want to set them up to live the happiest, most fulfilled lives they can possibly realize. Yet, we live in a country and time where not only black men and women, but even children like Trayvon Martin and Tamir Rice, are killed without justifiable cause or with extremely fast judgment. Where a 14-year-old girl in a bikini is thrown to the ground and sat on by a police officer for several minutes—an officer who also pulled a gun out to threaten two other teens who were concerned for her and who swore repeatedly at other teens at the same incident in McKinney, Texas. (I wrote to the chief of police to ask that officer be dismissed.) Where Chris Lollie, a man waiting to pick up his kids in the St. Paul skyway, is apprehended without cause, tasered, and body searched by the police (and being polite didn’t help in the least). Where a 7-year-old boy is handcuffed for an hour for being unruly in class. It goes on and on—far too many to enumerate here. And because of all this, my wife and I frequently find ourselves watching and listening to the advice that many thoughtful people are giving about raising black children in the USA (e.g., Greta Gardner, Clint Smith, W. Kamau Bell). These are all concerns that were foreign to me and played no part in my own upbringing.

This July, my family took a vacation road trip from Texas to DC to Michigan and back. You learn a lot about the different parts of the US as a biracial family on such a trip. We nearly always stop at McDonald’s for bathroom breaks because we know there are cameras more consistently than in gas stations. We are quite accustomed to the hate stares directed at us, especially in poorer regions in the south. We also get disapproving looks from many black people, especially in black neighborhoods in cities like Houston and DC. Though it is usually just looks and stares, one white woman in a North Carolina rest stop loudly stated that she found our family “disgusting”. We planned our driving so that we wouldn’t have to stay the night in Missouri because of the recent racial tensions highlighted in Ferguson. (There is great irony in this, of course—our own state of Texas has its own poor track record with racism and police brutality, including recently the McKinney pool incident and Sandra Bland’s wrongful arrest and death and more.)

There was only one time on our trip that we felt real fear of more than looks and words. We were low on gas at one point and exited the highway to refuel, only to find the station we’d spotted was no longer operational—however, there were several trucks idling around this otherwise abandoned gas station. We immediately started to go, but our six-year-old declared he needed to pee, so I took him to the forest line—during which time more trucks started to show up. I hurried back as quickly as I could, and my wife had already hopped into the driver’s seat. We got out and back onto the highway fast. It may have been nothing, but it felt like something was possible. When I looked the location up later, I found out that it is a small township that hosts a chapter of the KKK. (I’m now definitely going to map the locations of such chapters out before we go on such a road trip again.)

So, we’ve thankfully only experienced mild discomfort as a family (my wife has experienced much more on her own, including being called a nigger by two white men in a car while walking on Harvard Square), but there is lots of stuff that is pretty bad going on out there. Shortly after our road trip, a similar biracial family on a long drive was stopped and cross-examined by police in a very intimidating manner. And there are plenty of people having rallies for the Confederate flag, and they don’t know their history, so let us admit it is not about “heritage”. They even show up at kids’ birthday parties and threaten people. They definitely don’t seem to like black people.

So… where’s the room for optimism? My best guess is that the availability heuristic is playing a big role here, in multiple ways. If it is possible for you at this point, go read the book “Thinking, Fast and Slow”, by Daniel Kahneman, to learn about the availability heuristic and much more. But you probably can’t do that, so here it is briefly: the availability heuristic is a shortcut used by the human mind to evaluate a topic by using examples that are readily retrieved from memory. As an example, consider the question “is the world more violent today than it was in the past?” Perhaps a majority of people would respond yes—it is certainly easy to come to that conclusion if you watch the news. However, Steven Pinker carefully argues in his excellent book “The Better Angels of Our Nature: Why Violence Has Declined” that the data points convincingly toward the opposite conclusion. In fact, he spends a large portion of a large book getting the reader past their own sense of the problem as biased by the availability heuristic. As it turns out, there has in fact never been a time when the probability of a given individual dying violently has been lower. But sex and violence are what sell news, so that’s what we hear about. Then, when we consider the question, the availability heuristic brings those examples quickly to mind. It’s much harder to think about the billions of people just boringly living their lives. There are obviously many pockets of the world and our society where these trends are not as encouraging, so it isn’t time to sit back and say all is well.

It seems quite likely that when someone like Shaun King considers a question like “is racism a permanent feature of American society?”, examples like the ones I’ve mentioned above easily come to mind and dominate the mental computation. Frankly, it happens to me too—it gets me angry and upset and I find myself listening more regularly to artists like Killer Mike, The Roots and even going back to Rage Against the Machine. And, this is not to say “yes” isn’t the right answer. It is just to say that we need to consider the availability heuristic’s potential role in arriving at that answer. I believe we need more data and perspectives before we truly give up hope. The other thing is that it is notoriously hard to make predictions, especially about the future. As just one related example, I heard one family member lament—just a year before Obama’s candidacy—that we’d never have a black president.

As another example, consider American slavery in the decade before the Civil War. It would have been reasonable to feel that slavery would be a permanent feature of American society. In the concluding chapter of “The Slavery Question” from the 1850s, the author, John Lawrence, writes:

Are there any prospects that the long and dreary night of American despotism will speedily end in a joyous morning?

If we turn our eye towards the political horizon we shall find it overspread with heavy clouds portentous of evil to the oppressed. The government of the United States is intensely pro-slavery. The great political parties, with which the masses of the people act, vie with each other in their supple and obsequious devotion to the slaveocracy. The wise policy of the fathers of the Republic to confine slavery within very narrow limits, so that it would speedily die out and be supplanted by freedom, has been abandoned; the whole spirit of our policy has been reversed, and our national government seems chiefly concerned for the honor, perpetuation and extension of slavery.

Lawrence goes on to make further points of how dire the situation is, and quotes Frederick Douglass. But his book is called “The Slavery *Question*”, so he of course isn’t giving up. In fact, he flips it with excellent rhetorical flourish.

But dark as is this picture, there is still hope. The exorbitant demands of the slave power, the extreme measures it adopts, the deep humiliation to which it subjects political aspirants, will produce a reaction.

Inflated with past success it is throwing off its mask and revealing its hideous proportions. It is now proving itself the enemy of all freedom. The extreme servility of the popular churches is opening the eyes of many earnest people to the importance of taking a bolder position. They are finding out that it is a duty to come out from churches which sanction the vilest iniquity that ever existed, or exhaust their zeal for the oppressed in tame resolves, never to be executed.

The truth is gaining ground that slaveholding is a great sin, that slaveholders are great sinners, and that he who apologises for the system is a participator in the guilt and shame.

In other words, it’s a systemic problem, and not taking a position against slavery is to be complicit in its evils. In his concluding paragraph, he declares “The day of deliverance is not distant.” It took a bloody war, but a decade later, slavery was abolished.

And this brings us to what can be so frustrating about discussing current race relations with white Americans—namely that they have a very hard time discussing it. In fact, there is now a term, “white fragility,” that describes the odd sensitivity that nearly all white people have when discussing race. We just aren’t very good at it and it’s for a pretty obvious reason: we aren’t required to navigate race to function in our society, while any person of color must. There is also plenty of ambiguity to deal with since race itself is a social construct with very fluid boundaries, and a frequent white response is the well-intentioned but ultimately naive and counter-productive statement “I don’t see color”. One side of this leads to awkward, relatively harmless everyday encounters that can even be made light of — see “What if black people said the stuff white people say” (see also the videos for Latinos and Asians). But there is a deeper problem of systemic racial disparities that disproportionately benefit white Americans (for a very effective analogy, see this post comparing it to being a bicyclist on the road). The tricky nature of these benefits is that few white Americans realize and admit they are receiving them. They are working hard, dealing with their own successes, failures, pleasures and pains, and it sounds crazy to them that they are privileged. And in fact, this is a natural conclusion to reach when you rely on the availability heuristic to consider the topic.

Another dynamic here is that so few white people have close black friends. I don’t mean your co-worker or a person you see from time to time. I mean deep personal connections that allow true sharing and sympathetic understanding of another person’s life and experiences. It’s not uncommon for a black American to be THE black friend for many white people, and they are probably keeping a good share of themselves out of reach. My wife learned to do that after even simple comments led some friends and acquaintances to go into conniptions. One man asked my wife “is the singing in black churches as good as they say?”, to which she responded “the singing is great in all the black churches I’ve been to.” He became hysterical and declared that this was a racist thing for her to say. She tried to continue the conversation by contextualizing it more specifically, saying she hadn’t been to every black church and every white church, and that she was just stating her own experience. He just became more irate, and it really seemed that he just wanted to validate his existing prejudices. After this exchange and many others like it, it’s often easier just to avoid racial topics altogether.

It is also just common for white Americans to lack deep experiences with black Americans. Until I started dating my wife, I was similarly removed. I grew up in Rockford, Michigan. We had just a few black students in our high school and I didn’t know any of them. My eyes were opened to a number of things by listening to rap in the late 1980s, especially Willie Dee’s album Controversy, which included songs like “Fuck the KKK” (and many unfortunate misogynistic songs on the second side). My freshman college roommate at the University of Toledo was black and we got along great, but we didn’t hang out together much outside the dorm. I recognized that there were many problems for black Americans living in the inner cities, but I had little knowledge or appreciation of the day-to-day hurdles that black Americans faced regardless of their social status and location (often referred to as “paying the black tax”). It was never a personal desire to keep my distance; the connection just didn’t happen until I fell in love with my amazing, wonderful wife in 2006. (Side note: we actually knew each other as students in Toledo in the 1990s. I had a crush on her, but considered her out of my league and didn’t do anything about it at the time. Doh!)

Much of the nation, it seems, expressed huge outrage about the killing of Cecil the Lion. At the same time, we had footage of a police officer shooting Sam Dubose in the head—and it hardly even seemed to register outside the black community. I’m not setting up a false dilemma here: it’s fine to be upset about both killings; however, I’m highlighting the apparently higher proportion of the white population that is moved to express outrage by the former and what that says about priorities (especially when considering that much big game hunting supports nature preserves and endangered animal populations). Regardless, what I actually appreciate most about contrasting the two killings is how Cecil provided a platform for humorous, but serious, comparisons—most importantly, to highlight how every killing of an unarmed black person turns into an analysis of their character and actions and how those led or contributed to their being killed (as if it’s okay for police to be executioners). Doing the same for Cecil highlights the absurdity of this. Don’t forget that #AllLionsMatter, and can we also please have a serious discussion about lion on lion crime?

In case it isn’t obvious, many of the common defenses of police violence meted out to black Americans are not much different from blaming a rape victim because she wore a particular skirt, flirted too much, drank too much, was out too late, and so on. If you don’t believe me, go back and watch the videos of Chris Lollie, Sandra Bland, and Sam Dubose. Consider that for the latter two, the statements about the stops by the officers involved were contradicted by the video evidence. Then consider the many cases where people have died at the hands of the police and there was no video to check the veracity of their version of events—the police are always cleared of wrongdoing. In the case of Sandra Bland, consider that there has been tons of focus on whether she committed suicide or was murdered, but let’s not forget it started with a completely ridiculous traffic stop. She should not have died in that cell because she should never have been there in the first place.

We need the police, but we need them to do their job right. That means serving and protecting all citizens, regardless of race, religion, sexual preference, etc. I hope that efforts in community-oriented and evidence-based policing will start to improve matters. It makes a lot of sense, but the data is still inconclusive as to whether it actually reduces crime and improves public perceptions of the police. I’m also encouraged that many police departments are adopting data-driven methodologies that have the potential to help reduce racial profiling and identify problem officers. We must also analyze and evaluate the potential for both improved policing and even worse racial profiling that are offered by new algorithms—a topic I wrote about in my article “Machine Learning and Human Bias“. Getting policing into better shape in this country will nonetheless require sustained efforts such as Justice Together and Campaign Zero, and those have a greater chance of success if white people are agitating for change as well as black people.

My family at the Lincoln Memorial.

I am optimistic that we can get to a better place as a society. My family’s road trip brought us to Washington DC, and we went to the Lincoln Memorial. It’s a powerful place, especially for a family like ours. The words of Lincoln’s second inaugural address are on the wall. At that time, the nation was nearing the end of its greatest existential crisis, but Lincoln showed tremendous restraint and forward-thinking, concluding:

With malice toward none, with charity for all, with firmness in the right as God gives us to see the right, let us strive on to finish the work we are in, to bind up the nation’s wounds, to care for him who shall have borne the battle and for his widow and his orphan, to do all which may achieve and cherish a just and lasting peace among ourselves and with all nations.

They did it—they actually defeated slavery and kept the nation together. One hundred and fifty years later, we are still working through the divisions created by that vile institution, including how we now view that time and institution itself. It’s hard, but we must remain optimistic as well as realistic.

It’s easy to feel overwhelmed by the scale of racism in the USA. But we can’t just throw up our hands. It’s not enough to be well-meaning and hold good intentions. All of us, black, white and more, must own our part of the solution. There is much that white Americans can do to understand and help. Talk to your kids explicitly about race and racism. My mom has gotten through to white friends who dismiss #BlackLivesMatter by talking about her black daughter-in-law and grandsons and how events impact them directly. Even if you have no strong personal connections to black Americans, you can start by reading books like “Between the World and Me” by Ta-Nehisi Coates to get a better sense of what it means to grow up black in the USA. It’s the best book I’ve read this year. I particularly like it because he states things starkly, with no sugar-coating: he puts forth a grounded, atheist viewpoint that doesn’t romanticize. Coates discusses what is done to black bodies, not black spirits, hopes and dreams.

“The spirit and soul are the body and brain, which are destructible—that is precisely why they are so precious.” – Ta-Nehisi Coates

The focus on the body allows him to dissociate the cultural from the perceived biological components of race, and to remind us that white people aren’t white people, but are “people who believe they are white”. That’s an important, powerful distinction.

Actions such as legislation against racism (possibly limiting free speech) are not likely to improve things, and will likely make other things much worse. Support policies that seek to diminish our out-of-control prison system, which includes locations like Rikers Island and Homan Square where people have been held without being charged, sometimes for years. These places breathe life into Kafka’s book “The Trial”, and they destroy actual lives. Perhaps the biggest payoff on societal issues like racism comes from supporting policies that truly improve educational and economic opportunities for all Americans (no easy problem, I know—the important thing is to realize this is surely more important than symbolic actions). The more that each of us, regardless of our background, can fulfill our potential, the better our chances of getting along.

Black lives matter, and ALL our lives depend on that. Spread love, not hate, and work for justice and equality of opportunity for all. My family wouldn’t have been possible if others hadn’t done the same. Meaningful change generally takes a long time, but it can come relatively rapidly too. Consider that couples like my wife and me could not legally marry in Texas and many other states until 1967—just seven years before we were born (many thanks to the Lovings and others of their generation!). Consider that it wasn’t until 1993 that marital rape became illegal in all 50 states. Consider that gay couples gained the right to legally marry in all 50 states, well, this very year.

We can do this. We must do this.