Final White Paper

Introduction

I’m interested in the intersection of moral psychology, language, and politics, so I decided to focus my final project on Donald Trump’s use of Twitter. One framework on which I’ve based a lot of my own moral psychology research, and on which this final project is based, is Moral Foundations Theory (Graham et al., 2013), which states that morality can be divided into five categories, or foundations, each of which matters differently to different people: care/harm (feeling compassion for the suffering and vulnerable), fairness/cheating (making sure people are getting what they deserve), loyalty/betrayal (keeping track of who is “us” and who is “them”), authority/subversion (valuing order, tradition, and hierarchy), and sanctity/degradation (believing certain things are elevated and pure and shouldn’t be tarnished). 

The creators of this theory describe the moral foundations like taste buds. All people have the same taste receptors, but different cultures use them in different ways to create different cuisines. Similarly, all people have the same moral foundations, but different cultures combine them in different ways to create a diverse range of ethical norms. Recent studies have focused on how moral foundations relate to politics, and they’ve shown that people’s sensitivity to different moral foundations can predict their political ideology. For example, political liberals tend to be disproportionately sensitive to issues of care/harm and fairness/cheating, while political conservatives tend to weigh all five foundations equally (Graham, Haidt, & Nosek, 2009). 

What motivated me to study how language fits into this picture was a series of studies showing that changing the moral language of an argument can change who finds it persuasive. In one study, for example, researchers edited the text of a speech by President Obama and either increased or decreased its use of words related to fairness/cheating, and then asked participants to read the speech and rate how much they agreed with it. They found that increasing the use of fairness language made the speech more agreeable to people high in baseline sensitivity to fairness/cheating (Miles, 2016). And it seems that politicians themselves have caught on to this phenomenon, whether consciously or not: Another study analyzed the text of political advertisements and found that as politicians move from primary elections to general elections, they change their moral language as a signal of moderation. A Democrat, for example, may rely heavily on care/harm and fairness/cheating appeals during the primary election and then, in an attempt to attract more moderates during the general election, shift to include more appeals to loyalty/betrayal, authority/subversion, and sanctity/degradation (Lipsitz, 2018).

So with this in mind, I wanted to look at Trump’s use of moral language on Twitter to learn more about his moral compass—or what he thinks of the moral compasses of those he’s trying to persuade or motivate. Trump is an interesting case not only because of how freely and expressively he uses Twitter, but also because he has alternated between expressing liberal and conservative viewpoints at various points in his life. Would that change be reflected in the data? And would the data show any aspects of his moral palette that haven’t changed?

Data preparation

After spending a few hours trying to figure out how to scrape all of Trump’s tweets from his profile, I was relieved to find out via Dr. McSweeney’s timeline lab that there’s a website, Trump Twitter Archive, dedicated to that very task. So after easily downloading all of his tweets to a CSV file, my next task was to analyze the content of the tweets for moral language. To do this, I used the Moral Foundations Dictionary, created by the researchers behind Moral Foundations Theory, which contains a list of words that have been coded according to a) their connection to one of the moral foundations and b) whether that connection has a positive or negative valence. “Dishonest”, for example, is coded as a negative appeal to the fairness/cheating foundation, while “love” is coded as a positive appeal to care/harm. 

I fed this dictionary, along with Trump’s tweets, to Linguistic Inquiry and Word Count (LIWC), a program that generates word counts for a given body of text that are weighted according to linguistic factors such as the speaker’s/writer’s use of punctuation and capitalization. For example, Trump’s repeated tweets of “PRESIDENTIAL HARASSMENT!” score very highly on the authority/subversion foundation (“presidential” is coded as a positive appeal to authority) and the care/harm foundation (“harassment” is coded as a negative appeal to care). Of course, there are limitations to this method: First, the Moral Foundations Dictionary doesn’t include all possible words that can be used to make a moral appeal, so some morally charged words will inevitably slip through the cracks. And second, there’s a great deal of nuance to moral language—sarcasm, irony, hyperbole, and other habits of speech—which LIWC and the Moral Foundations Dictionary cannot perfectly capture with simple word counts. Still, I think these methods can be useful to get a general sense of the trends in a person’s moral language.
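LIWC itself is proprietary software, but the core of its dictionary-based scoring can be sketched as a simplified, unweighted word count. The tiny word list below is an illustrative stand-in I made up for this sketch, not actual entries from the Moral Foundations Dictionary, and it omits LIWC’s weighting for punctuation and capitalization:

```python
import re
from collections import Counter

# A handful of illustrative (foundation, valence) codings; the real
# Moral Foundations Dictionary contains far more entries.
MFD_SAMPLE = {
    "love": ("care", "positive"),
    "harassment": ("care", "negative"),
    "dishonest": ("fairness", "negative"),
    "presidential": ("authority", "positive"),
}

def score_tweet(text):
    """Count dictionary hits for each foundation/valence dimension."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for word in words:
        if word in MFD_SAMPLE:
            foundation, valence = MFD_SAMPLE[word]
            counts[f"{foundation}_{valence}"] += 1
    return counts

# "PRESIDENTIAL HARASSMENT!" hits both authority (positive) and care (negative).
scores = score_tweet("PRESIDENTIAL HARASSMENT!")
```

Running each tweet through a function like this yields one count per dimension, which is the shape of the data the rest of the analysis works with.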

So, once LIWC generated a new csv file with each tweet given an adjusted word count for each of ten dimensions (positive and negative for all five foundations), I created two separate datasets: one in narrow form and one in wide form. Narrow form makes it easy to analyze repeated measures data, so I planned to use it to generate visualizations of average adjusted word count by each moral foundation within each tweet. In my wide-form dataset, I created a new between-subjects variable: the highest-scoring moral foundation for each tweet.
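The two reshaping steps above can be sketched with pandas, assuming a hypothetical DataFrame with one row per tweet and one column per foundation/valence dimension, which is roughly the shape of the CSV LIWC produces (the column names and values here are invented for illustration):

```python
import pandas as pd

# Invented stand-in for the LIWC output: one row per tweet, one column
# per (foundation, valence) dimension of adjusted word counts.
wide = pd.DataFrame({
    "tweet_id": [1, 2],
    "care_pos": [0.0, 1.2],
    "care_neg": [2.5, 0.0],
    "authority_pos": [1.0, 0.3],
})

# Narrow (long) form: one row per tweet-by-dimension pair, which makes
# it easy to average adjusted word counts by foundation for plotting.
narrow = wide.melt(id_vars="tweet_id", var_name="dimension",
                   value_name="adjusted_count")

# Wide form: add a between-subjects variable recording each tweet's
# highest-scoring dimension.
score_cols = [c for c in wide.columns if c != "tweet_id"]
wide["top_dimension"] = wide[score_cols].idxmax(axis=1)
```

With real data, `idxmax` would flag tweet 1 as primarily a negative care/harm appeal and tweet 2 as a positive one, which is exactly the between-subjects variable described above.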

Data visualization

A basic understanding of Moral Foundations Theory is necessary to understand my project, so the first thing I had to do when designing my visualization was to provide that theoretical background. A wall of text can be off-putting to even the most conscientious and patient viewers, so I did my best to keep the exposition as concise and engaging as possible. I had a brief intro paragraph, the definitions of each of the moral foundations, some examples of morally charged words, and then a segue paragraph that explains what kinds of inferences Moral Foundations data can allow us to make. I learned during the pin-up and via Dr. McSweeney’s feedback that in my first iteration of the project, I hadn’t given the names of the moral foundations enough visual emphasis. So when revising my project, I made them larger and underlined them so that the viewer would be more inclined to remember them as they navigated through the rest of the visualization.

One of the main inspirations for the design of the visualization was a project that we had covered in class: Rody Zakovich’s text analysis of A Christmas Carol. I liked the idea of starting a story with big, bold numbers to create context. I was also inspired by this project to use images to complement the data, and to create a single page with a continuous scroll that helps create the feeling that the visualization is a narrative that the viewer is moving through. As for the data itself, the questions I wanted to ask made it easy to decide what kinds of charts to use. To show, for example, how Trump’s use of moral language changes over time, line charts were the obvious choice, while to compare average word counts across different moral foundations, bar charts seemed most effective. 

One issue that caused some confusion for at least one viewer during the final showcase was that my line charts appeared to show continuous change within each year, even though the x-axis only had one data point per year. This viewer pointed out what seemed to be an interesting trend in early 2016, but it turned out to be a misleading artifact of the line connecting the 2015 average to the 2016 average. I played with the idea of showing the month-to-month trends, but word counts changed so much each month that the data turned into a bunch of illegible spikes. One thing left for me to do is figure out how to show the month-to-month data more clearly, or at least show the year-to-year data in a way that doesn’t pretend to be more than it is.
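One common way to tame a spiky monthly series like the one described above is a rolling average. This is only one possible approach, sketched here with randomly generated stand-in data rather than the actual tweet counts:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly averages; in the real project these would come
# from grouping the adjusted word counts by month.
rng = np.random.default_rng(0)
monthly = pd.Series(
    rng.random(24),
    index=pd.date_range("2015-01-01", periods=24, freq="MS"),
)

# A centered 3-month rolling mean smooths out the spikes while keeping
# the underlying trend, and the result can be plotted as a line chart
# that honestly reflects monthly resolution.
smoothed = monthly.rolling(window=3, center=True).mean()
```

The window size trades detail for legibility: a wider window gives a smoother line but blurs short-lived shifts in moral language.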

Something I thought a lot about as I worked on this project is the importance of minding the distinction between exploratory and explanatory analysis. In Storytelling with Data, Knaflic suggests using exploratory analyses to determine which trends are worth including in the story, and then letting explanatory analyses drive the final visualization (2015). This was a challenge for me throughout all three projects. How do you know when you’ve done enough exploration to have a story worth telling? How clear must the story be? How does a designer find the right balance between pointing out the trends they want the viewer to focus on and letting the viewer make their own inferences? I had varying degrees of success answering these questions, but I feel like the class has helped me hone my intuition and, most importantly, has given me tools that I’m excited to use on my own to continue honing it.

References

Frimer, J. A., Boghrati, R., Haidt, J., Graham, J., & Dehghani, M. (2019). Moral foundations dictionary for linguistic analyses 2.0. Unpublished manuscript.

Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology (Vol. 47, pp. 55-130). Academic Press.

Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029.

Knaflic, C. N. (2015). Storytelling with data: A data visualization guide for business professionals. John Wiley & Sons.

Lipsitz, K. (2018). Playing with emotions: The effect of moral appeals in elite rhetoric. Political Behavior, 40(1), 57-78.

Miles, M. R. (2016). Presidential appeals to moral foundations: How modern presidents persuade cross-ideologues. Policy Studies Journal, 44(4), 471-490.