From: Jeremy Foote Date: Mon, 19 Feb 2018 23:31:34 +0000 (-0600) Subject: Improving a few of the figures; fixing a legend, improving colors; also fixing a... X-Git-Url: https://code.communitydata.science/social-media-chapter.git/commitdiff_plain/ad7c4cad83ce0ecf5d538640ed34b20010fd70db?ds=sidebyside Improving a few of the figures; fixing a legend, improving colors; also fixing a knitr bug --- diff --git a/paper/foote_shaw_hill-computational_analysis_of_social_media.Rnw b/paper/foote_shaw_hill-computational_analysis_of_social_media.Rnw index bbf9a4e..55f01ba 100644 --- a/paper/foote_shaw_hill-computational_analysis_of_social_media.Rnw +++ b/paper/foote_shaw_hill-computational_analysis_of_social_media.Rnw @@ -157,7 +157,7 @@ Data from social media platforms and online communities have fueled the growth o The combination of large-scale trace data generated through social media with a series of advances in computing and statistics have enabled the growth of `computational social science' \citep{lazer_computational_2009}. This turn presents an unprecedented opportunity for researchers who can now test social theories using massive datasets of fine-grained, unobtrusively collected behavioral data. In this chapter, we aim to introduce non-technical readers to the promise of these computational social science techniques by applying three of the most common approaches to a bibliographic dataset of social media scholarship. We use our analyses as a context for discussing the benefits of each approach as well as some of the common pitfalls and dangers of computational approaches. -The chapter walks through the entire process of computational analysis, beginning with data collection. We explain how we gather a large-scale dataset about social media research from the \emph{Scopus} website's application programming interface. In particular, our dataset contains data about every article in the Scopus database that includes the term `social media' in its title, abstract, or keywords. Using this dataset, we perform multiple computational analyses. First, we use network analysis \citep{wasserman_social_1994} on article citation metadata to understand the structure of references between the articles. Second, we use topic models \citep{blei_probabilistic_2012}, an unsupervised natural language processing technique, to describe the distribution of topics within the sample of articles included in our analysis. Third, we perform statistical prediction \citep{james_introduction_2013} in order to understand what characteristics of articles best predict subsequent citations. For each analysis, we describe the method we use in detail and discuss some of its benefits and limitations. +The chapter walks through the entire process of computational analysis, beginning with data collection. We explain how we gather a large-scale dataset about social media research from the \emph{Scopus} website's application programming interface. The dataset we collect contains metadata about every article in the Scopus database that includes the term `social media' in its title, abstract, or keywords. Using this dataset, we perform multiple computational analyses. First, we use network analysis \citep{wasserman_social_1994} on article citation metadata to understand the structure of references between the articles. Second, we use topic models \citep{blei_probabilistic_2012}, an unsupervised natural language processing technique, to describe the distribution of topics within the sample of articles included in our analysis. 
Third, we perform statistical prediction \citep{james_introduction_2013} in order to understand what characteristics of articles best predict subsequent citations. For each analysis, we describe the method we use in detail and discuss some of its benefits and limitations.

Our results reveal several patterns in social media scholarship. Bibliometric network data reveal disparities in the degree to which disciplines cite each other and illustrate that marketing and medical research each enjoy surprisingly large influence. Through descriptive analysis and topic modeling, we find evidence of the early influence of social network research. When we use papers' characteristics to predict which work gets cited, we find that publication venues and linguistic features provide the most explanatory power.

@@ -167,17 +167,17 @@ In carrying out our work in this chapter, we seek to exemplify several current b
A major part of computational research consists of obtaining data, preparing it for analysis, and generating initial descriptions that can help guide subsequent inquiry. Social media datasets vary in how they make it into researchers' hands. There are several sources of social media data which are provided in a form that is pre-processed and ready for analysis. For example, the Stanford Large Network Dataset Collection \citep{leskovec_snap_2014} contains pre-formatted and processed data from a variety of social media platforms. Typically, prepared datasets come formatted as `flat files' such as comma-separated value (CSV) tables, which many types of statistical software and programming tools can import directly.

-More typically, researchers retrieve data directly from social media platforms or other web-based sources. These `primary' sources provide more extensive, dynamic, and up-to-date datasets, but also require much more work to prepare the data for analysis. Typically, researchers retrieve these data from social media sites through application programming interfaces (APIs). Web sites and platforms use APIs to provide external programmers with limited access to their servers and databases. Unfortunately, APIs are rarely designed with research in mind and are often inconvenient and limited for social scientists as a result. For example, Twitter's search API returns a small, non-random sample of tweets by default (what a user might want to read), rather than all of the tweets that match a given query (what a researcher building a sample would want). In addition, APIs typically limit how much data they will provide for each query and how many queries can be submitted within a given time period.
+More typically, researchers retrieve data directly from social media platforms or other web-based sources. These `primary' sources provide more extensive, dynamic, and up-to-date datasets, but also require much more work to prepare the data for analysis. Typically, researchers retrieve these data from social media sites through application programming interfaces (APIs). Web sites and platforms use APIs to provide programmers with limited access to their servers and databases. Unfortunately, APIs are rarely designed with research in mind and are often inconvenient and limited for social scientists as a result. For example, Twitter's search API returns a small, non-random sample of tweets by default (what a user might want to read), rather than all of the tweets that match a given query (what a researcher building a sample would want).
In addition, APIs typically limit how much data they will provide for each query and how many queries can be submitted within a given time period.

APIs provide raw data in formats like XML or JSON, which are poorly suited to most data analysis tasks. As a result, researchers must take the intermediate step of converting data into more appropriate formats and structures. Typically, researchers must also construct measures from the raw data, such as user-level statistics (e.g., number of retweets) or metadata (e.g., post length). A number of tools, such as NodeXL \citep{hansen_analyzing_2010}, exist to make the process of obtaining and preparing digital trace data easier. However, off-the-shelf tools tend to come with their own limitations and, in our experience, gathering data amenable to computational analysis usually involves some programming work.

-Compared to some traditional forms of data collection, obtaining and preparing social media data has high initial costs. It frequently involves writing and debugging custom software, reading documentation about APIs, learning new software libraries, and testing datasets for completeness and accuracy. However, computational methods scale very well and gathering additional data often simply means expanding the date range in a programming script. Contrast this with interviews, surveys, or experiments, where recruitment is often labor-intensive, expensive, and slow. Such scalability, paired with the massive participation on many social media platforms, can support the collection of very large samples.
+Compared to some traditional forms of data collection, obtaining and preparing social media data has high initial costs. It frequently involves writing and debugging custom software, reading documentation about APIs, learning new software libraries, and testing datasets for completeness and accuracy. However, computational methods scale very well and gathering additional data often simply means expanding the date range in a program. Contrast this with interviews, surveys, or experiments, where recruitment is often labor-intensive, expensive, and slow. Such scalability, paired with the massive participation on many social media platforms, can support the collection of very large samples.

\subsection{Our application: The Scopus Bibliographic Database}

-We used a series of Scopus Bibliographic Database APIs to retrieve data about all of the publications in their database that contained the phrase `social media' in their abstract, title, or keywords. We used the Python programming language to write custom software to download data. First, we wrote a program to query the Scopus Search API to retrieve a list of the articles that matched our criteria. We stored the resulting list of \Sexpr{f(total.articles)} articles in a file. We used this list of articles as input to a second program, which used the Scopus Citations Overview API to retrieve information about all of the articles that cited these \Sexpr{f(total.articles)} articles. Finally, we wrote a third program that used the Scopus Abstract Retrieval API to download abstracts and additional metadata about the original \Sexpr{f(total.articles)} articles. Due to rate limits and the process of trial and error involved in writing, testing, and debugging these custom programs, it took a few weeks to obtain the complete dataset.
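As an illustration of what this kind of collection work involves, the sketch below shows roughly how the first of these steps---querying the Scopus Search API and storing the raw JSON responses---might be written in Python. This is a simplified example rather than the collection code actually used for the chapter: the API key, query string, pagination values, and output file names are placeholders, and the endpoint and parameter names follow Elsevier's public API documentation as we understand it.

\begin{verbatim}
# Minimal sketch (not the chapter's actual collection code): page through
# the Scopus Search API and keep every raw JSON response on disk.
import json
import time
import requests

API_KEY = "YOUR-SCOPUS-API-KEY"   # placeholder; issued by Elsevier
URL = "https://api.elsevier.com/content/search/scopus"
QUERY = 'TITLE-ABS-KEY("social media")'

start, page, page_size = 0, 0, 25
while True:
    resp = requests.get(URL,
                        headers={"X-ELS-APIKey": API_KEY},
                        params={"query": QUERY, "start": start,
                                "count": page_size})
    resp.raise_for_status()
    data = resp.json()
    # Store the unmodified response so new measures can be built later
    # without re-downloading anything.
    with open("scopus_raw_%05d.json" % page, "w") as f:
        json.dump(data, f)
    entries = data.get("search-results", {}).get("entry", [])
    if not entries:
        break
    start += page_size
    page += 1
    time.sleep(1)   # simple way to stay under the API's rate limits
\end{verbatim}

The same pattern---request, store the raw response, pause, repeat---carries over to the citation and abstract retrieval steps described next.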
+We used a series of Scopus Bibliographic Database APIs to retrieve data about all of the publications in their database that contained the phrase `social media' in their abstract, title, or keywords. We used the Python programming language to write custom software to download this data. First, we wrote a program to query the Scopus Search API to retrieve a list of the articles that matched our criteria. We stored the resulting list of \Sexpr{f(total.articles)} articles in a file. We used this list of articles as input to a second program, which used the Scopus Citations Overview API to retrieve metadata about all of the articles that cited these \Sexpr{f(total.articles)} articles. Finally, we wrote a third program that used the Scopus Abstract Retrieval API to download abstracts and additional metadata about the original \Sexpr{f(total.articles)} articles. Due to rate limits and the process of trial and error involved in writing, testing, and debugging these custom programs, it took a few weeks to obtain the complete dataset. -Like many social media APIs, the Scopus APIs returns data in JSON format. Although not suitable for analysis without processing, we stored this JSON data in the form it was given to us. Retaining the `raw' data directly from APIs allows researchers to construct new measures they might not have believed were relevant in the early stages of their research and to fix any bugs that they find in their data processing and reduction code without having to re-download raw data. Once we obtained the raw data, we wrote additional Python scripts to turn the downloaded JSON files into CSV tables which could be imported into Python and R, the programming languages we used to complete our analyses. +Like many social media APIs, the Scopus APIs returns data in JSON format. Although not suitable for analysis without processing, we stored this JSON data in the form it was given to us. Retaining the `raw' data as it was provided by APIs allows researchers to construct new measures they might not have believed were relevant in the early stages of their research and to fix any bugs that they find in their data processing and reduction code without having to re-download raw data. Once we obtained the raw data, we wrote additional Python scripts to turn the downloaded JSON files into CSV tables which could be imported into Python and R, the programming languages we used to complete our analyses. \subsection{Results} @@ -212,7 +212,7 @@ ggplot(subj.by.year) + aes(x=year, y=papers, fill = subject) + geom_density(stat="identity", position="stack") + xlab('Year') + ylab('Number of papers') + - guides(fill = guide_legend(reverse = TRUE)) + + guides(fill = guide_legend(title="Scopus Discipline", reverse = FALSE)) + # scale_fill_grey(name="Discipline") theme_bw() #$ @@ -264,17 +264,17 @@ print(xtable(results, \label{tab:citedby} \end{table} -We then consider the impact of this set of papers as measured by the citations they have received. Like many phenomena in social systems, citation counts follow a highly skewed distribution with a few papers receiving many citations and most papers receiving very few. Table \ref{tab:citedby} provides a list of the most cited papers. These sorts of distributions suggest the presence of `preferential attachment' \citep{barabasi_emergence_1999} or `Matthew effects' \citep{merton_matthew_1968}, where success leads to greater success. +We then consider the impact of this set of papers as measured by the citations they have received. 
Like many phenomena in social systems, citation counts follow a highly skewed distribution with a few papers receiving many citations and most papers receiving very few. Table \ref{tab:citedby} provides a list of the most cited papers. These sorts of distributions suggest the presence of `preferential attachment' \citep{barabasi_emergence_1999} or the `Matthew effect' \citep{merton_matthew_1968}, where success leads to greater success. \subsection{Discussion} The summary statistics and exploratory visualizations presented above provide an overview of the scope and trajectory of social media research. We find that social media research is growing -- both overall and within many disciplines. We find evidence that computer scientists laid the groundwork for the study of social media, but that social scientists, learning scientists, and medical researchers have increasingly been referring to social media in their published work. We also find several business and marketing papers among the most cited pieces of social media research even though neither these disciplines nor their journals appear among the most prevalent in the dataset. -These results are interesting and believable because they come from a comprehensive database of academic work. In most social science contexts, researchers have to sample from a population and that sampling is often biased. For example, the people willing to come to a lab to participate in a study or take a phone survey may have different attributes from those unwilling to participate. This makes generalizing to the entire population problematic. When using trace data, on the other hand, we often have data from all members of a community including those who would not have chosen to participate. One of the primary benefits of collecting data from a comprehensive resource like Scopus is that it limits the impact of researcher biases or assumptions. For example, we do not have backgrounds in education or medical research; had we tried to summarize the state of social media research by identifying articles and journals manually, we might have overlooked these disciplines. +These results are interesting and believable because they come from a comprehensive database of academic work. In most social science contexts, researchers have to sample from a population and that sampling is often biased. For example, the people willing to come to a lab to participate in a study or take a phone survey may have different attributes from those unwilling to participate. This makes generalizing to the entire population problematic. When using trace data, on the other hand, we often have data from all members of a community including those who would not have chosen to participate. One of the primary benefits of collecting data from a comprehensive resource like Scopus is that it can reduce some types of bias in the data collection process. For example, we do not have backgrounds in education or medical research; had we tried to summarize the state of social media research by identifying articles and journals manually, we might have overlooked these disciplines. That said, this apparent benefit can also become a liability when we seek to generalize our results beyond the community that we have data for. The large \textit{N} of big data studies using social media traces may make results appear more valid, precise, or certain, but a biased sample does not become less biased just because it is larger \citep{hargittai_is_2015}. 
For example, a sample of 100 million Twitter users might be a worse predictor of election results than a truly random sample of only 1,000 likely voters because Twitter users likely have different attributes and opinions than the voting public. Another risk comes from the danger that data providers collect or filter data in ways that aren't apparent. Researchers should think carefully about the relationship of their data to the population they wish to study and find ways to estimate bias empirically. -Overall, we view the ease of obtaining and analyzing digital traces as one of the most exciting developments in social science. Although the hurdles involved represent a real challenge to many scholars of social media today, learning the technical skills required to obtain online trace data is no more challenging than the statistics training that is part of many PhD programs and opens opportunities for important, large-scale studies. Below, we present examples of a few computational analyses that can be done with this sort of data. +Overall, we view the ease of obtaining and analyzing digital traces as one of the most exciting developments in social science. Although the hurdles involved represent a real challenge to many scholars of social media today, learning the technical skills required to obtain online trace data is no more challenging than the statistics training that is part of many PhD programs. Below, we present examples of a few computational analyses that can be done with this sort of data. \section{Network analysis} @@ -304,12 +304,12 @@ A citation graph is only one possible network representation of the relationship \hline Community & Description & Journals \\ \hline - \colorbox{CarnationPink}{\color{black}Community 1} & biomedicine; bioinformatics & Journal of Medical Internet Research; PLoS ONE; Studies in Health Technology and Informatics \\ - \colorbox{Green}{\color{black}Community 2} & information technology; management & Computers in Human Behavior; Business Horizons; Journal of Interactive Marketing \\ + \colorbox{Thistle}{\color{black}Community 1} & biomedicine; bioinformatics & Journal of Medical Internet Research; PLoS ONE; Studies in Health Technology and Informatics \\ + \colorbox{LimeGreen}{\color{black}Community 2} & information technology; management & Computers in Human Behavior; Business Horizons; Journal of Interactive Marketing \\ \colorbox{Black}{\color{white}Community 3} & communication & Information Communication and Society; New Media and Society; Journal of Communication \\ \colorbox{Cyan}{\color{black}Community 4} & computer science; network science & Lecture Notes in Computer Science; PLoS ONE; WWW; KDD \\ \colorbox{Orange}{\color{black}Community 5} & psychology; psychometrics & Computers in Human Behavior; Cyberpsychology, Behavior, and Social Networking; Computers and Education\\ - \colorbox{Red}{\color{black}Community 6} & multimedia & IEEE Transactions on Multimedia; Lecture Notes in Computer Science; ACM Multimedia \\ + \colorbox{WildStrawberry}{\color{black}Community 6} & multimedia & IEEE Transactions on Multimedia; Lecture Notes in Computer Science; ACM Multimedia \\ \hline \end{tabular} \end{adjustbox} @@ -319,50 +319,50 @@ A citation graph is only one possible network representation of the relationship \subsection{Results} -As is common in social networks, the large majority of articles with any citations connect to each other in one large sub-network called a `component'. 
Figure \ref{fig:hairball} shows a visualization of this large component. The optimal way to represent network data in two-dimensional space is a topic of research and debate. Figure \ref{fig:hairball} uses a force-directed drawing technique \citep{fruchterman_graph_1991}, the most widely used algorithm in network visualization, using the free/open source software package Gephi \citep{bastian_gephi:_2009}. The basic idea behind the algorithm is that nodes naturally push away from each other, but are pulled together by edges between them. Shades in each graph in this section reflect the communities of documents identified by Rosvall and colleagues' `map' algorithm \citep{rosvall_maps_2008, rosvall_map_2010}. Although the algorithm identified several dozen communities, most are extremely small, so we have shown only the largest 6 communities in Figure \ref{fig:hairball}. Each of these communities are summarized in Table \ref{tab:clusters} where the right-most column lists the three most common journals for the articles included in each community.
+As is common in social networks, the large majority of articles with any citations connect to each other in one large `component' or sub-network. Figure \ref{fig:hairball} shows a visualization of this large component. The optimal way to represent network data in two-dimensional space is a topic of research and debate. Figure \ref{fig:hairball} uses a force-directed drawing technique \citep{fruchterman_graph_1991}, the most widely used algorithm in network visualization, implemented in the free/open source software package Gephi \citep{bastian_gephi:_2009}. The basic idea behind the algorithm is that nodes naturally push away from each other, but are pulled together by the edges between them. Shades in each graph in this section reflect the communities of documents identified by Rosvall and colleagues' `map' algorithm \citep{rosvall_maps_2008, rosvall_map_2010}. Although the algorithm identified several dozen communities, most are extremely small, so we have shown only the largest 6 communities in Figure \ref{fig:hairball}. Each of these communities is summarized in Table \ref{tab:clusters} where the right-most column lists the three most common journals for the articles included in each community.

-At this point, we can look in more depth at the attributes of the different communities. For example, in a bibliometric analysis published in the journal \emph{Scientometrics}, \citet{kovacs_exploring_2015} reported summary statistics for articles in each of the major communities identified (e.g., the average number of citations) as well as qualitative descriptions of the nodes in each community. We can see from looking at Table \ref{tab:clusters} that the communities point to the existence of coherent thematic groups. For example, \colorbox{CarnationPink}{\color{black}Community 1} includes biomedical research while \colorbox{black}{\color{white}Community 3} contains papers published in communication journals. Earlier, we relied on an existing category scheme applied to journals to create Figure \ref{fig:pubstime}; all articles published in particular journals were treated as being within one field. Network analysis, however, can identify groups and categories of articles in terms of who is citing whom and, as a result, can reveal groups that cross journal boundaries. PLoS ONE, for example, is a `megajournal' that publishes articles from all scientific fields \citep{binfield_plos_2012}.
As a result, PLoS ONE is one of the most frequently included journals in both \colorbox{CarnationPink}{\color{black}Community 1} and \colorbox{Red}{\color{black}Community 6}. In a journal-based categorization system, articles may be misclassified or not classified at all. +At this point, we could look in more depth at the attributes of the different communities. For example, in a bibliometric analysis published in the journal \emph{Scientometrics}, \citet{kovacs_exploring_2015} reported summary statistics for articles in each of the major communities identified (e.g., the average number of citations) as well as qualitative descriptions of the nodes in each community. We can see from looking at Table \ref{tab:clusters} that the communities point to the existence of coherent thematic groups. For example, \colorbox{Thistle}{\color{black}Community 1} includes biomedical research while \colorbox{black}{\color{white}Community 3} contains papers published in communication journals. Earlier, we relied on an existing category scheme applied to journals to create Figure \ref{fig:pubstime}; all articles published in particular journals were treated as being within one field. Network analysis, however, can identify groups and categories of articles in terms of who is citing whom and, as a result, can reveal groups that cross journal boundaries. PLoS ONE, for example, is a `megajournal' that publishes articles from all scientific fields \citep{binfield_plos_2012}. As a result, PLoS ONE is one of the most frequently included journals in both \colorbox{Thistle}{\color{black}Community 1} and \colorbox{Cyan}{\color{black}Community 4}. In a journal-based categorization system, articles may be misclassified or not classified at all. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figures/cluster_connections.pdf} - \caption{Graphical representation of citations between communities using the same grayscale mapping described in Table \ref{tab:clusters}. The size of the nodes reflects the total number of papers in each community. The thickness of each edge reflects the number of outgoing citations. Edges are directional, and share the color of their source (i.e., citing) community.} + \caption{Graphical representation of citations between communities using the same mapping described in Table \ref{tab:clusters}. The size of the nodes reflects the total number of papers in each community. The thickness of each edge reflects the number of outgoing citations. Edges are directional, and share the color of their source (i.e., citing) community.} \label{fig:cluscon} \end{figure} -Network analysis can also reveal details about the connections between fields. Figure \ref{fig:cluscon} shows a second network we have created in which our communities are represented as nodes and citations from articles in one community to articles in the other communities are represented as edges. The thickness of each edge represents the number of citations and the graph shows the directional strength of the relative connections between communities. For example, the graph suggests that the communication studies community (\colorbox{Black}{\color{white}Community 3}) cites many papers in information technology and management (\colorbox{Green}{\color{black}Community 2}) but that this relationship is not reciprocated. +Network analysis can also reveal details about the connections between fields. 
Figure \ref{fig:cluscon} shows a second network we have created in which our communities are represented as nodes and citations from articles in one community to articles in the other communities are represented as edges. The thickness of each edge represents the number of citations and the graph shows the directional strength of the relative connections between communities. For example, the graph suggests that the communication studies community (\colorbox{Black}{\color{white}Community 3}) cites many papers in information technology and management (\colorbox{LimeGreen}{\color{black}Community 2}) but that this relationship is not reciprocated. \subsection{Discussion} -Like many computational methods, the power of network techniques comes from the representation of complex relationships in simplified forms. Although elegant and powerful, the network analysis approach is inherently reductive in nature and limited in many ways. What we gain in our ability to analyze millions or billions of individuals comes at the cost of speaking about particular individuals and sub-groups. A second limitation stems from the huge number of relationships that can be represented in graphs. A citation network and a co-citation network, for example, represent different types of connections and these differences might lead an algorithm to identify different communities. As a result, choices about the way that edges and nodes are defined can lead to very different conclusions about the structure of a network or the influence of particular nodes. Network analyses often treat all connections and all nodes as similar in ways that mask important variation. +Like many computational methods, the power of network techniques comes from representing complex relationships in simplified forms. Although elegant and powerful, the network analysis approach is inherently reductive in nature and limited in many ways. What we gain in our ability to analyze millions or billions of individuals comes at the cost of speaking about particular individuals and sub-groups. A second limitation stems from the huge number of relationships that can be represented in graphs. A citation network and a co-citation network, for example, represent different types of connections and these differences might lead an algorithm to identify different communities. As a result, choices about the way that edges and nodes are defined can lead to very different conclusions about the structure of a network or the influence of particular nodes. Network analyses often treat all connections and all nodes as similar in ways that mask important variation. -Network analysis is built on the assumption that knowing about the relationships between individuals in a system is often as important, and sometimes more important, than knowing about the individuals themselves. It inherently recognizes interdependence and the importance of social structures. This perspective comes with a cost, however. The relational structure and interdependence of social networks make it impossible to use traditional statistical methods and SNA practitioners have had to move to more complex modeling strategies and simulations to test hypotheses. +Network analysis is built on the assumption that knowing about the relationships between individuals in a system is often as important, and sometimes more important, than knowing about the individuals themselves. It inherently recognizes interdependence and the importance of social structures. This perspective comes with a cost, however. 
The relational structure and interdependence of social networks make it impossible to use traditional statistical methods. SNA practitioners have had to move to more complex modeling strategies and simulations to test hypotheses. \section{Text analysis} Social media produces an incredible amount of text, and social media researchers often analyze the content of this text. For example, researchers use ethnographic approaches \citep{kozinets_field_2002} or content analysis \citep{chew_pandemics_2010} to study the texts of interactions online. Because the amount of text available for analysis is far beyond the ability of any set of researchers to analyze by hand, scholars increasingly turn to computational approaches. Some of these analyses are fairly simple, such as tracking the occurrence of terms related to a topic or psychological construct \citep{tausczik_psychological_2010}. Others are more complicated, using tools from natural language processing (NLP). NLP includes a range of approaches in which algorithms are applied to texts, such as machine translation, optical character recognition, and part-of-speech tagging. Perhaps the most common use in the social sciences is sentiment analysis, in which the affect of a piece of text is intuited based on the words that are used \citep{asur_predicting_2010}. Many of these techniques have applications for social media research. -One natural language processing technique -- topic modeling -- is used increasingly frequently in computational social science research. Topic modeling seeks to identify topics automatically within a set of documents. In this sense, topic modeling is analogous to content analysis or other manual forms of document coding and labeling. However, topic models are a completely automated, unsupervised computational method -- i.e., topic modeling algorithms do not require any sort of human intervention, such as hand-coded training data or dictionaries of terms. Topic modeling scales well to even very large datasets, and is most usefully applied to large corpora of text where labor-intensive methods like manual coding are simply not an option. +One natural language processing technique---topic modeling---is used increasingly often in computational social science research. Topic modeling seeks to identify topics automatically within a set of documents. In this sense, topic modeling is analogous to content analysis or other manual forms of document coding and labeling. However, topic models are a completely automated, unsupervised computational method---i.e., topic modeling algorithms do not require any sort of human intervention, such as hand-coded training data or dictionaries of terms. Topic modeling scales well to even very large datasets, and is most usefully applied to large corpora of text where labor-intensive methods like manual coding are simply not an option. When using the technique, a researcher begins by feeding topic modeling software the texts that she would like to find topics for and by specifying the number of topics to be returned. There are multiple algorithms for identifying topics, but we focus on the most common: \emph{latent Dirichlet allocation} or LDA \citep{blei_latent_2003}. The nuts and bolts of how LDA works are complex and beyond the scope of this chapter, but the basic goal is fairly simple: LDA identifies sets of words that are likely to be used together and calls these sets `topics.' For example, a computer science paper is likely to use words like `algorithm', `memory', and `network.' 
While a communication article might also use `network,' it would be much less likely to use `algorithm' and more likely to use words like `media' and `influence.' The other key feature of LDA is that it does not treat documents as belonging to only one topic, but as consisting of a mixture of multiple topics with different degrees of emphasis. For example, an LDA analysis might characterize this chapter as a mixture of computer science and communication (among other topics). LDA identifies topics inductively from the observed distributions of words in documents. The LDA algorithm looks at all of the words that co-occur within a corpus of documents and assumes that words used in the same document are more likely to be from the same topic. The algorithm then looks across all of the documents and finds the set of topics and topic distributions that would be, in a statistical sense, most likely to produce the observed documents. LDA's output is the set of topics: ranked lists of words likely to be used in documents about each topic, as well as the distribution of topics in each document. \citet{dimaggio_exploiting_2013} argue that while many aspects of topic modeling are simplistic, many of the assumptions have parallels in sociological and communication theory. Perhaps more importantly, the topics created by LDA frequently correspond to human intuition about how documents should be grouped or classified. -The results of topic models can be used many ways. Our dataset includes \Sexpr{nrow(abstracts[grep('LDA',abstracts[['abstract']]),])} publications with the term `LDA' in their abstracts. Some of these papers use topic models to conduct large-scale content analysis, such as looking at the topics used around health on Twitter \citep{prier_identifying_2011,ghosh_what_2013}. Researchers commonly use topic modeling for prediction and machine learning tasks, such as identifying how topics vary by demographic characteristics and personality types \citep{schwartz_personality_2013}. In our dataset, papers use LDA to predict transitions between topics \citep{wang_tm-lda:_2012}, to recommend friends based on similar topic use \citep{pennacchiotti_investigating_2011}, and to identify interesting tweets on Twitter \citep{yang_identifying_2014}. +The results of topic models can be used many ways. Our dataset includes \Sexpr{nrow(abstracts[grep('LDA',abstracts[['abstract']]),])} publications with the term `LDA' in their abstracts. Some of these papers use topic models to conduct large-scale content analysis, such as looking at the topics used around health on Twitter \citep{prier_identifying_2011,ghosh_what_2013}. Researchers commonly use topic modeling for prediction and machine learning tasks, such as predicting a user's gender or personality type \citep{schwartz_personality_2013}. Papers in the dataset also use LDA to predict transitions between topics \citep{wang_tm-lda:_2012}, to recommend friends based on similar topic use \citep{pennacchiotti_investigating_2011}, and to identify interesting tweets on Twitter \citep{yang_identifying_2014}. \subsection{Our application: Identifying topics in social media research} -We apply LDA to the texts of abstracts in our dataset in order to identify prevalent topics in social media research. We show how topics are extracted and labeled and then use data on topic distributions to show how the focus of social media research has changed over time. We begin by collecting each of the abstracts for the papers in our sample. 
Scopus does not include abstract text for \Sexpr{f(total.articles - nrow(abstracts))} of the \Sexpr{f(total.articles)} articles in our sample. We examined a random sample of the entries with missing abstracts by hand, and found that abstracts for many simply never existed (e.g., articles published in trade journals or books). Other articles had published abstracts, but the text of these abstracts, for reasons that are not clear, were not available through Scopus.\footnote{This provides one example of how the details of missing data can be invisible or opaque. It is easy to see how missing data like this could impact research results. For example, if certain disciplines or topics are systematically less likely to include abstracts in Scopus, we will have a skewed representation of the field.} +We apply LDA to the texts of abstracts in our dataset in order to identify topics in social media research. We show how topics are extracted and labeled and then use data on topic distributions to show how the focus of social media research has changed over time. We begin by collecting each of the abstracts for the papers in our sample. Scopus does not include abstract text for \Sexpr{f(total.articles - nrow(abstracts))} of the \Sexpr{f(total.articles)} articles in our sample. We examined a random sample of the entries with missing abstracts by hand, and found that abstracts for many simply never existed (e.g., articles published in trade journals or books). Other articles had published abstracts, but the text of these abstracts, for reasons that are not clear, were not available through Scopus.\footnote{This provides one example of how the details of missing data can be invisible or opaque. It is easy to see how missing data like this could impact research results. For example, if certain disciplines or topics are systematically less likely to include abstracts in Scopus, we will have a skewed representation of the field.} We proceed with the \Sexpr{f(nrow(abstracts))} articles in our sample for which abstract data was available. The average abstract in this dataset is \Sexpr{f(round(mean(word_count),2))} words long, with a max of \Sexpr{f(max(word_count))} words and a minimum of \Sexpr{f(min(word_count))} (``\Sexpr{abstracts[which.min(word_count),'abstract']}''). -We then remove `stop words' (common words like `the,'`of,' etc.) and tokenize the documents by breaking them into unigrams and bigrams (one-word and two-word terms). We analyze the data using the Python \emph{LatentDirichletAllocation} module from the \emph{scikit-learn} library \citep{pedregosa_scikit-learn:_2011}. Choosing the appropriate number of topics to be returned (typically referred to as \textit{k}) is a matter of some debate and research \citep[e.g.,][]{arun_finding_2010}. After experimenting with different values of \textit{k}, plotting the distribution of topics each time in a way similar to graph shown in Figure \ref{fig:ldaplots}, we ultimately set \textit{k} as twelve. At higher values of \textit{k}, additional topics only rarely appeared in the abstracts. +We then remove `stop words' (common words like `the,'`of,' etc.) and tokenize the documents by breaking them into unigrams and bigrams (one-word and two-word terms). We analyze the data using the Python \emph{LatentDirichletAllocation} module from the \emph{scikit-learn} library \citep{pedregosa_scikit-learn:_2011}. 
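To make this step concrete, the sketch below shows how fitting such a model can look with scikit-learn. It is an illustration rather than the chapter's actual analysis code: the input file name, minimum document frequency, and random seed are assumptions, although the preprocessing mirrors what we describe above (removing stop words, building unigrams and bigrams) and the number of topics matches the twelve we settle on below.

\begin{verbatim}
# Illustrative sketch of fitting an LDA topic model to article abstracts
# with scikit-learn; not the chapter's actual analysis code.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = pd.read_csv("abstracts.csv")["abstract"].dropna()  # assumed input

# Document-term matrix of unigrams and bigrams with English stop words removed.
vectorizer = CountVectorizer(stop_words="english", ngram_range=(1, 2),
                             min_df=5)
dtm = vectorizer.fit_transform(abstracts)

# Fit an LDA model with k = 12 topics.
lda = LatentDirichletAllocation(n_components=12, random_state=42)
doc_topics = lda.fit_transform(dtm)  # rows: documents; columns: topic weights

# Show the ten highest-weighted terms for each topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:10]]
    print("Topic %d: %s" % (i + 1, ", ".join(top)))
\end{verbatim}

A matrix like \texttt{doc\_topics}, which holds each document's estimated topic mixture, is the kind of output used later in this section to label topics and track their prevalence over time.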
Choosing the appropriate number of topics to be returned (typically referred to as \textit{k}) is a matter of some debate and research \citep[e.g.,][]{arun_finding_2010}. After experimenting with different values of \textit{k}, plotting the distribution of topics each time in a way similar to the graphs shown in Figure \ref{fig:ldaplots}, we ultimately set \textit{k} as twelve. At higher values of \textit{k}, additional topics only rarely appeared in the abstracts.

\subsection{Results}

Table \ref{topic_table} shows the top words for each of the topics discovered by the LDA model, sorted by how common each topic is in our dataset.

-At this point, researchers typically evaluate the lists of words for coherence and give names to each of the topics. For example, we look at the words associated with Topic 1 and give it the name `Media Use.' Of course, many other names for this topic could be chosen. We might call it `Facebook research' because it is the only topic which includes the term `facebook.' Researchers often validate these names by looking at some of the texts which score highest for each topic and subjectively evaluating the appropriateness of the chosen name as a label for those texts. For example, we examined the abstracts of the five papers with the highest value for the `Media Use' topic and confirmed that we were comfortable claiming that they were examples of research about media use. In this way, topic modeling requires a mixture of both quantitative and qualitative interpretation. The computer provides results, but making sense of those results requires familiarty with the data.
+At this point, researchers typically evaluate the lists of words for coherence and give names to each of the topics. For example, after looking at the words associated with Topic 1, we gave it the name `Media Use.' Of course, many other names for this topic could be chosen. We might call it `Facebook research' because it is the only topic which includes the term `facebook.' Researchers often validate these names by looking at some of the texts which score highest for each topic and subjectively evaluating the appropriateness of the chosen name as a label for those texts. For example, we examined the abstracts of the five papers with the highest value for the `Media Use' topic and confirmed that we were comfortable claiming that they were examples of research about media use. In this way, topic modeling requires a mixture of both quantitative and qualitative interpretation. The computer provides results, but making sense of those results requires familiarity with the data.

\begin{table}
\tiny

@@ -425,12 +425,11 @@ The figures provide insight into the history and trajectory of social media rese

\subsection{Discussion}

-Some of the strengths of topic modeling become apparent when we compare these LDA-based analyses with the distribution of papers by discipline that we created earlier (Figure \ref{fig:pubstime}). In this earlier attempt, we relied on the categories that Scopus provided and found that early interest in social media was driven by computer science and information systems researchers. Through topic modeling, we learn that these researchers engaged in social network analysis (rather than interface design, for example). While some of our topics match up well with the disciplines identified by Scopus, a few are more broad (e.g., `Media Use') and most are more narrow (e.g., `Sentiment Analysis'). This analysis might provide a richer sense of the topics of interest to social media researchers.
Finally, these topics emerged inductively without any need for explicit coding, such as classifying journals into disciplines. This final feature is a major benefit in social media research where text is rarely categorized for researchers ahead of time. +Some of the strengths of topic modeling become apparent when we compare these LDA-based analyses with the distribution of papers by discipline that we created earlier (Figure \ref{fig:pubstime}). In our earlier attempt, we relied on the categories that Scopus provided and found that early interest in social media was driven by computer science and information systems researchers. Through topic modeling, we learn that these researchers engaged in social network analysis (rather than interface design, for example). While some of our topics match up well with the disciplines identified by Scopus, a few are more broad (e.g., `Media Use') and most are more narrow (e.g., `Sentiment Analysis'). This analysis provides a richer sense of the topics of interest to social media researchers. Finally, these topics emerged inductively without any need for explicit coding, such as classifying journals into disciplines. This final feature is a major benefit in social media research where text is rarely categorized for researchers ahead of time. -Topic modeling provides an intuitive, approachable way of doing large-scale text analysis. Its outputs can be understandable and theory-generating. The inductive creation of topics has advantages over traditional content analysis or `supervised' computational methods that require researchers to define labels or categories of interest ahead of time. While the results of topic models clearly lack the nuance and depth of understanding that human coders bring to texts, the method allows researchers to analyze datasets at a scale and granularity that would take a huge amount of resources to code manually. +Topic modeling provides an intuitive, approachable way of doing large-scale text analysis. Its outputs can be understandable and theory-generating. The inductive creation of topics has advantages over traditional content analysis or `supervised' computational methods that require researchers to define labels or categories of interest ahead of time. While topic models clearly lack the nuance and depth of understanding that human coders bring to texts, the method allows researchers to analyze datasets at a scale and granularity that would take a huge amount of resources to code manually. -There are, of course, limitations to topic modeling. Many of LDA's limitations have analogues in manual coding. One we have already mentioned is that researchers must choose the number of topics without any clear rules about how to do so. A similar problem exists in content analysis, but the merging and splitting of topics can be done more intuitively and intentionally. -An additional limitation is that topic modeling tends to work best with many long documents. This can represent a stumbling block for researchers with datasets of short social media posts or comments; in these cases posts can be aggregated by user or by page to produce meaningful topics. The scope of documents can also affect the results of topic models. If, in addition to using abstracts about `social media,' we had also included abstracts containing the term `gene splicing,' our twelve topics would be divided between the two fields and each topic would be less granular. 
To recover topics similar to those we report here, we would have to increase the number of topics created. +There are, of course, limitations to topic modeling. Many of LDA's limitations have analogues in manual coding. One we have already mentioned is that researchers must choose the number of topics without any clear rules about how to do so. Although a similar problem exists in content analysis, the merging and splitting of topics can be done more intuitively and intentionally when using traditional methods. An additional limitation is that topic modeling tends to work best with many long documents. This can represent a stumbling block for researchers with datasets of short social media posts or comments; in these cases posts can be aggregated by user or by page to produce meaningful topics. The scope of documents can also affect the results of topic models. If, in addition to using abstracts about `social media,' we had also included abstracts containing the term `gene splicing,' our twelve topics would be divided between the two fields and each topic would be less granular. To recover topics similar to those we report here, we would have to increase the number of topics created. As with network analysis, a goal of LDA is to distill large, messy, and noisy data down to much simpler representations in order to find patterns. Such simplification will always entail ignoring some part of what is going on. Luckily, human coders and LDA have complementary advantages and disadvantages in this regard. Computational methods do not understand which text is more or less important. Humans are good at seeing the meaning and importance of topics, but may suffer from cognitive biases and miss out on topics that are less salient \citep{dimaggio_exploiting_2013}. Topic models work best when they are interpreted by researchers with a rich understanding of the texts and contexts under investigation. @@ -450,11 +449,11 @@ attach(pred.descrip) We use multiple attributes of the papers in our dataset, including text of their abstracts, to predict citations. About \Sexpr{f(round(table(cited)[["TRUE"]] / total.articles * 100, 0))}\% of the papers (\Sexpr{f(length(cited[cited]))} out of \Sexpr{f(total.articles)}) received one or more citations ($\mu = \Sexpr{f(round(mean(cites),2))}$; $\sigma = \Sexpr{f(round(sd(cites), 2))}$). Can textual features of the abstracts explain which papers receive citations? What about other attributes, such as the publication venue or subject area? A prediction analysis can help evaluate these competing alternatives. -To begin, we generate a large set of features for each paper from the Scopus data. Our measures include the year, month, and language of publication as well as the number of citations each paper contains to prior work. We also include the modal country of origin of the authors as well as the affiliation of the first author. Finally, we include the publication venue and publication subject as provided by Scopus. Then, we build the textual features by taking all of the abstracts and moving them through the following sequence of steps similar to those we took when performing LDA: we lowercase all the words; remove all stop words; and create uni-, bi-, and tri-grams. +To begin, we generate a large set of features for each paper from the Scopus data. Our measures include the year, month, and language of publication as well as the number of citations each paper contains to prior work. 
We also include the modal country of origin of the authors as well as the affiliation of the first author. Finally, we include the publication venue and publication subject area as provided by Scopus. Then, we build the textual features by taking all of the abstracts and moving them through the following sequence of steps similar to those we took when performing LDA: we lowercase all the words; remove all stop words; and create uni-, bi-, and tri-grams. -We also apply some inclusion criteria to both papers and features. To avoid subject-specific jargon, we draw features only from those terms that appear across at least 30 different subject areas. To avoid spurious results, we also exclude papers that fall into unique categories. For example, we require that there be more than one paper published in any language, journal, or subject area that we include as a feature. These sorts of unique cases can cause problems in the context of prediction tasks because they may predict certain outcomes perfectly. As a result, it is often better to focus on datasets and measures that are less `sparse' (i.e.,~characterized by rare, one-off observations). Once we drop the \Sexpr{ f( length(cited) - length(covars['cited']))} papers that do not meet these criteria, we are left with \Sexpr{f(length(covars['cited']))} papers. +We also apply some inclusion criteria to both papers and features. To avoid subject-specific jargon, we draw features only from those terms that appear across at least 30 different subject areas. To avoid spurious results, we also exclude papers that fall into unique categories. Specifically, we remove papers which are the only publications in a given language, journal, or subject area. These sorts of unique cases can cause problems in the context of prediction tasks because they may predict certain outcomes perfectly. As a result, it is often better to focus on datasets and measures that are less `sparse' (i.e.,~characterized by rare, one-off observations). Once we drop the \Sexpr{ f(length(cited) - length(covars['cited']==TRUE))} papers that do not meet these criteria, we are left with \Sexpr{f(length(covars['cited']==TRUE))} papers. -We predict the dichotomous outcome variable \emph{cited}, which indicates whether a paper received any citations during the period covered by our dataset (2004-2016). We use a method of \emph{penalized logistic regression} called the least absolute shrinkage and selection operator (also known as the \emph{Lasso}) to do the prediction work. Although, the technical details of Lasso models lie beyond the scope of this chapter, it, and other penalized regression models, work well on data where many of the variables have nearly identical values (sometimes called collinear variables because they would sit right around the same line if you plotted them) and/or many zero values (this is also called `sparse' data) \citep{friedman_regularization_2010, james_introduction_2013}. In both of these situations, some measures are redundant; the Lasso uses clever math to pick which of those measures should go into your final model and which ones should be, in effect, left out.\footnote{To put things a little more technically, a fitted Lasso model \emph{selects} the optimal set of variables that should have coefficient values greater than zero and \emph{shrinks} the rest of the coefficients to zero without sacrificing goodness of fit \citep{tibshirani_regression_1996}.} The results of a Lasso model are thus more computationally tractable and easier to interpret. 
+We predict the dichotomous outcome variable \emph{cited}, which indicates whether a paper received any citations during the period covered by our dataset (2004-2016). We use a method of \emph{penalized logistic regression} called the least absolute shrinkage and selection operator (also known as the \emph{Lasso}) to do the prediction work. Although the technical details of Lasso models lie beyond the scope of this chapter, the Lasso and other penalized regression models work well on data where many of the variables have nearly identical values (sometimes called collinear variables because they would sit right around the same line if you plotted them) and/or many zero values (this is also called `sparse' data) \citep{friedman_regularization_2010, james_introduction_2013}. In both of these situations, some measures are redundant; the Lasso uses clever math to pick which of those measures should go into your final model and which ones should be, in effect, left out.\footnote{To put things a little more technically, a fitted Lasso model \emph{selects} the optimal set of variables that should have coefficient values greater than zero and \emph{shrinks} the rest of the coefficients to zero without sacrificing goodness of fit \citep{tibshirani_regression_1996}.} The results of a Lasso model are thus more computationally tractable and easier to interpret.

We use a common statistical technique called cross-validation to validate our models. Cross-validation helps solve another statistical problem that can undermine the results of predictive analysis. Imagine fitting an ordinary least squares regression model on a dataset to generate a set of parameter estimates reflecting the relationships between a set of independent variables and some outcome. The model results provide a set of weights (the coefficients) that represent the strength of the relationships between each predictor and the outcome. Because of the way regression works (and because this is a hypothetical example and we can assume data that does not violate the assumptions of our model), the model weights are the best, linear, unbiased estimators of those relationships. In other words, the regression model fits the data as well as possible. However, nothing about fitting this one model ensures that the same regression weights will provide the best fit for some new data from the same population that the model has not seen. A model may be overfit if it excellently predicts the dataset it was fitted on but poorly predicts new data. Overfitting in this way is a common concern in statistical prediction. Cross-validation addresses this overfitting problem. First, the training data is split into equal-sized groups (typically 10). Different model specifications are tested by iteratively training them on all but one of the groups, and testing how well they predict the final group. The specification that has the lowest average error is then used on the full training data to estimate coefficients.\footnote{For our Lasso models, cross-validation was used to select $\lambda$, a parameter that tells the model how quickly to shrink variable coefficients. We include this information for those of you who want to try this on your own or figure out the details of our statistical code.} This approach ensures that the resulting models not only fit the data that we have, but that they are likely to predict the outcomes for new, unobserved cases. For each model, we report the mean error rate from the cross-validation run which produced the best fit.
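For readers who want to see what this procedure looks like in code, the sketch below fits an L1-penalized (Lasso-style) logistic regression with ten-fold cross-validation using scikit-learn in Python. It is a minimal illustration of the general approach rather than the code behind the results reported here (our analysis may rely on different software and settings), and the feature matrix and outcome vector are random placeholders standing in for the paper-level features and citation indicator described above.

\begin{verbatim}
# Minimal sketch of L1-penalized (Lasso-style) logistic regression with
# ten-fold cross-validation in scikit-learn; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# Placeholders standing in for the real design matrix (papers x features)
# and the 0/1 indicator of whether each paper was ever cited.
rng = np.random.default_rng(0)
X = rng.random((500, 40))
y = rng.integers(0, 2, size=500)

model = LogisticRegressionCV(
    Cs=20,               # grid of penalty strengths; C is inversely related
                         # to the lambda parameter discussed in the footnote
    cv=10,               # ten-fold cross-validation
    penalty="l1",        # the Lasso penalty
    solver="liblinear",  # a solver that supports L1 penalties
    scoring="accuracy",
)
model.fit(X, y)

# Coefficients shrunk to exactly zero are, in effect, dropped from the model.
print("Selected penalty:", model.C_[0])
print("Non-zero coefficients:", int(np.sum(model.coef_ != 0)))
\end{verbatim}

Fitting the model on training folds and checking its error on the held-out fold (and, as reported below, on a separate holdout sample) is what guards against the overfitting problem described above.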
@@ -462,7 +461,7 @@ Our analysis proceeds in multiple stages corresponding to the different types of
\subsection{Results}

-Table \ref{tab:predict_models} summarizes the results of our prediction models. We include goodness-of-fit statistics and prediction error rates for each model as we add more features. A `better' model will fit the data more closely (i.e.,~it will explain a larger percentage of the deviance) and produce a lower error rate. We also include a supplementary error rate calculated against the `held-back' data, created from a random subset of 10\% of the original dataset that was not used in any of our models. An intuitive way to think about the error rate is to imagine it as the percentage of unobserved papers for which the model will correctly predict whether or not it receives any citations. The two error rate statistics are just this same percentage calculated on different sets of unobserved papers. Unlike a normal regression analysis, we do not report or interpret the full battery of coefficients, standard errors, t-statistics, or p-values. In part, we do not report this information because the results of these models are unwieldy -- each model has over 2,000 predictors and most of those predictors have coefficients of zero! Additionally, unlike traditional regression results, coefficient interpretation and null hypothesis testing with predictive models remain challenging (for reasons that lie beyond the scope of this chapter). Instead, we focus on interpreting the relative performance of each set of features. After we have done this, we refer to the largest coefficients to help add nuance to our interpretation.\\
+Table \ref{tab:predict_models} summarizes the results of our prediction models. We include goodness-of-fit statistics and prediction error rates for each model as we add more features. A `better' model will fit the data more closely (i.e.,~it will explain a larger percentage of the deviance) and produce a lower error rate. We also include a supplementary error rate calculated against the `holdout' data created from a random subset of 10\% of the original dataset that was not used in any of our models. An intuitive way to think about the error rate is to imagine it as the percentage of unobserved papers for which the model will incorrectly predict whether or not they receive any citations. The two error rate statistics are just this same percentage calculated on different sets of unobserved papers. Unlike in a normal regression analysis, we do not report or interpret the full battery of coefficients, standard errors, t-statistics, or p-values. In part, we do not report this information because the results of these models are unwieldy -- each model has over 2,000 predictors and most of those predictors have coefficients of zero! Additionally, unlike traditional regression results, coefficient interpretation and null hypothesis testing with predictive models remain challenging (for reasons that lie beyond the scope of this chapter). Instead, we focus on interpreting the relative performance of each set of features. After we have done this, we refer to the largest coefficients to help add nuance to our interpretation.\\

\begin{table}
\begin{adjustbox}{center}

@@ -503,7 +502,7 @@ print(xtable(as.matrix(head(nz.coefs, 10)),

\subsection{Discussion}

-The results of our prediction models suggest that two types of features -- publication venue and textual terms -- do the most to explain whether or not papers on social media get cited.
Both types of features substantially improve model fit and reduce predictive error in ten-fold cross-validation as well as on a holdout 10\% sub-sample of the original dataset. However, the venue features appear to have a much stronger relationship to our outcome (citation), with the vast majority of the most influential features in the model coming from the venue data (\Sexpr{f(round(prop.table(table(nz.coefs$Type[1:100]))*100, 2)["venue"])} of the 100 largest coefficients).
+The results of our prediction models suggest that two types of features -- publication venue and textual terms -- do the most to explain whether or not papers on social media get cited. Both types of features substantially improve model fit and reduce predictive error in ten-fold cross-validation as well as on a holdout sub-sample of the original dataset. However, the venue features appear to have a much stronger relationship to our outcome (citation), with the vast majority of the most influential features in the model coming from the venue data (\Sexpr{f(round(prop.table(table(nz.coefs$Type[1:100]))*100, 2)["venue"])} of the 100 largest coefficients).

As we said at the outset of this section, statistical prediction offers an exploratory, data-driven, and inductive approach. Based on these findings, we conclude that the venue where social media research is published predicts whether that work gets cited better than any other feature in our dataset. Textual terms used in abstracts help to explain citation outcomes across the dataset, but the relationship between textual terms and citation only becomes salient in aggregate. On their own, hardly any of the textual terms approach the predictive power of the venue features. Author affiliation and paper-level features such as language or authors' country provide less explanatory power overall.

@@ -521,7 +520,7 @@ Despite our diffuse approach, we report interesting substantive findings about t

In the process of describing our analyses, we tried to point to many of the limitations of computational research methods. Although computational methods and the promise of `big data' elicit excitement, this hype can obscure the fact that large datasets and fast computers do nothing to obviate the fundamentals of high-quality social science: researchers must understand their empirical settings, design studies with care, operationalize concepts in ways that are valid and honest, take steps to ensure that their findings generalize, and ask tough questions about the substantive impacts of observed relationships. These tenets extend to computational research as well.

-Other challenges go beyond methodological limitations. Researchers working with passively collected data generated by social media can face complex issues around the ethics of privacy and consent as well as the technical and legal restrictions on automated data collection. Computational analyses of social media often involve datasets gathered without the sort of active consent considered standard in other arenas of social scientific inquiry. In some cases, data is not public and researchers access it through private agreements (or employment arrangements) with companies that own platforms or proprietary databases. In others, researchers obtain social media data from public or semi-public sources, but the individuals creating the data may not consider their words or actions public and may not even be aware that their participation generates durable digital traces \citep{boyd_critical_2012}.
A number of studies have been criticized for releasing information that researchers considered public, but which users did not \citep{zimmer_okcupid_2016}. In other cases, researchers pursuing legitimate social inquiry have become the target of companies or state prosecutors who selectively seek to enforce terms of service agreements or invoke broad laws such as the federal Computer Fraud and Abuse Act (CFAA).\footnote{See \citepos{sandvig_why_2016} blogpost, ``Why I am Suing the Government,'' for a thoughtful argument against the incredibly vague and broad scope of the CFAA as well as a cautionary tale for those who write software to conduct bulk downloads of public website data for research purposes.}
+Other challenges go beyond methodological limitations. Researchers working with passively collected data generated by social media can face complex issues around the ethics of privacy and consent as well as the technical and legal restrictions on automated data collection. Computational analyses of social media often involve datasets gathered without the sort of active consent considered standard in other arenas of social scientific inquiry. In some cases, data are not public and researchers access them through private agreements or employment arrangements with companies that own platforms or proprietary databases. In others, researchers obtain social media data from public or semi-public sources, but the individuals creating the data may not consider their words or actions public and may not even be aware that their participation generates durable digital traces \citep{boyd_critical_2012}. A number of studies have been criticized for releasing information that researchers considered public, but which users did not \citep{zimmer_okcupid_2016}. In other cases, researchers pursuing legitimate social inquiry have become the target of companies or state prosecutors who selectively seek to enforce terms of service agreements or invoke broad laws such as the federal Computer Fraud and Abuse Act (CFAA).\footnote{See \citepos{sandvig_why_2016} blog post, ``Why I am Suing the Government,'' for a thoughtful argument against the incredibly vague and broad scope of the CFAA as well as a cautionary tale for those who write software to conduct bulk downloads of public website data for research purposes.}

We advise computational researchers to take a cautious and adaptive approach to these issues. Existing mechanisms such as Institutional Review Boards and federal laws have been slow to adjust to the realities of online research. In many cases, efforts to anticipate, monitor, or police irresponsible behavior threaten to impose unduly cumbersome restrictions. In other cases, review boards' policies greenlight research that seems deeply problematic. We believe researchers must think carefully about the specific implications of releasing particular datasets. In particular, we encourage abundant caution and public consultation before disseminating anything resembling personal information about individual social media system users. Irresponsible scholarship harms both subjects and researchers and undermines the public trust scholars need to pursue their work.