Why LGBT Communities and Our Allies Should Care about Net Neutrality

Earlier this year, the LGBT Technology Partnership released research that I co-authored with media scholar and sociologist Jessie Daniels. In it, we lay out the reasons that LGBT-identifying individuals and our communities became early adopters of broadband technology and why the Internet continues to play such a pivotal part in our political and social lives. Net Neutrality—keeping all information equally accessible on the Internet—is something that all LGBT-identifying people and our allies should care about and fight to maintain.

I have researched the Internet’s role in LGBT life for more than a decade. I study how and why LGBT-identifying young people and youth questioning their identities use the Internet and other media. There are two main reasons that marginalized communities, including LGBT people, use the Internet more than the typical U.S. citizen: 1) we are able to go online and connect to people we identify with, without having to battle the stigma and potential physical threat that comes with accessing LGBT-supportive physical spaces, and 2) we are able to access services and information specifically for us—from dating sites to health information—tailored to our needs…not just a clumsy version of what’s made available to our heterosexual peers.

Let me give two concrete examples from my fieldwork among LGBT youth in rural towns throughout Southeast Appalachia. When a young person living in Eastern Kentucky wanted to find other young African-American, bi-identifying people to talk through the pros and cons of coming out before turning 18, he literally knew no one and found no organizations in his town of 5,000 where he could meet other out, bi-identifying youth. He went online and found chat rooms for his region. All of them were dominated by adults. He had to spend a significant amount of time searching through various websites and YouTube videos to find other kids his age to talk with. In a perfect world, he wouldn’t need to work so hard to find someone just like himself online, and he’d have neighbors and friends in his high school to turn to for support. But there is not a critical mass of LGBT-identifying people in his home town (yet! We can hope that changes for him). That makes the Internet an important communication channel connecting him to a broader community of LGBT-identifying folks.

But the Internet is not just for reaching other LGBT-identifying people online. As I said, it has become a vital resource for accessing information specifically tailored to us. Many of the towns I worked in, for example, had no LGBT-specific public health services or HIV prevention information available for LGBT-identifying youth. That meant braving the school nurse or walking into a local health clinic and talking with someone whom they could not assume to be an advocate for LGBT rights. Adults in big cities like DC might struggle with doing that. Imagine being a 14-year-old in a very small town doing that. Youth I work with depend on web-based resources, like the Trevor Project, Advocates for Youth, YouTube, and other non-profits that list resources for LGBT-specific health information.

The Internet is a vital communication and information channel. The presumption that heterosexuality is the default setting makes the Internet a precious resource for LGBT-identifying people. LGBT and questioning youth in particular need places for them and information written for them readily available. It’s not a perk. The Internet has become a basic need and a public good.

From my perspective, the Net Neutrality debate is important to LGBT communities because, simply put, LGBT-identifying people will be collateral damage if Internet Service Providers (ISPs) are allowed to discriminate among content, apps, or services. Without Net Neutrality protections, content providers generating critical information would likely have to pay more to get their content into (and from!) the hands of LGBT people. That means ISPs become the de facto gatekeepers controlling what content survives and what content falls by the wayside in the wake of a market-driven content tsunami. This, in turn, will raise the cost of providing LGBT content, reducing the overall amount of LGBT content available. That will be a significant barrier to the non-profit sources of content that have proven critical to LGBT communities, including information provided by the U.S. Government.

Net Neutrality is a simple principle: don’t make it harder to access or download something on the Internet based on the content of that information or service. Individuals, not our Internet Service Providers, should determine what information they can access online. ISPs should not be legally allowed to block content or limit a private citizen’s opportunity to see what information is available online for them to purchase or access for free.

Like broadcast TV, phones, and libraries, the Internet plays a special and critical role in connecting and educating citizens. I wish that every public school, community center, and local radio and public access TV station offered a wealth of LGBT-specific resources. They do not. The Internet, currently, picks up this important duty for the public.

Right now, like all citizens, LGBT people and our allies have the basic right to access any information available on the Internet. LGBT-specific information on the Internet—from other young people’s websites to the It Gets Better campaign on YouTube—can be vital to LGBT lives, particularly young people looking for affirmation and reflections of themselves. LGBT-specific information is typically hosted or created by non-profits and private individuals who care about LGBT people’s needs. In the same way that it should not be harder at the public library to see the stack of books most relevant to LGBT communities, it shouldn’t be harder or cost more to access information specific to LGBT communities.

The providers of Internet access are not just delivering binge TV through Netflix. They are serving up those webpages that LGBT-identifying and questioning young people rely on to survive and thrive. As much as I love the entire catalogue of Queer as Folk, it is not the same content—and cannot do the same vital community-building work—as coming out videos accessible on YouTube or HIV prevention information, local resource lists, and opportunities to access other LGBT-identifying people available through non-profit websites. If ISPs are allowed to sort content differently, those random, youth-created and youth-driven websites that offer crucial, eclectic information to small, niche audiences are, potentially, at risk of being lost to us. I don’t think we, as LGBT people and allies, can afford that loss.

On Monday, November 10, 2014, President Obama made a statement outlining four “bright-line rules” for maintaining Net Neutrality: no blocking, no throttling, increased transparency, and no paid prioritization. I wish that we could keep content equally accessible without regulation. I sincerely do. But, right now, all we have are promises from the major Internet service providers that they will not block content, throttle download/upload rates, obscure the details of their billing, or let content owners pay ISPs to “cut to the front of the line” at the information highway on-ramp. There are several cases, dating back to the beginning of the content-rich web of the mid-2000s, that suggest Internet service providers will block or slow down content delivery and price some content higher if they know that it’s going to strain their upload and download rates. There are two options: 1) make it illegal for Internet service providers to discriminate among content, apps, or services online, or 2) fund municipal broadband for every community in the United States so that all citizens have access to the Internet’s content. The Internet operates as a public good. We need it to register for many government services at this point. We can’t go back and say, “Internet content and services are just extras that society can do without.” We’ve got to have clear guidance and enforceable rules to maintain the deep investments we’ve already made in making the Internet one of the world’s greatest information repositories and sites for community connection, particularly among communities, like those of LGBT folks, with limited resources and social opposition offline.

Having worked in the rural U.S. for some time, my sense is that the best solution for ensuring an open Internet is recognizing what ISPs have become: common carriers and stewards of a critical public resource. We use our Internet connections to talk to people, pay our parking tickets, and make appointments to get our driver’s licenses. LGBT communities use Internet connections to reach people like them and share strategies for moving through a world that still can’t decide if we have the right to marry the people we love. Those are services and information resources necessary for a robust and healthy civic and civil society. It’s too late to treat the Internet like an expendable frivolity. LGBT communities are particularly dependent on the Internet to find and connect with the people and information that we need to live healthy and productive lives.

MSR Faculty Summit 2014 Ethics Panel Recap

[Cross-posted to the Social Media Collective]

When the Facebook Emotions Study first made international news, I felt strongly (still do) that researchers, from those honing algorithms to people like me studying the social impact of media and technologies, need to come together. There are no easy answers or obvious courses of action. But we all have a stake in understanding the ethical implications of studying social media as equal parts data analysis and human subjects research. And we need common ground.

At the end of the day, researchers are also well-positioned to change things for two simple reasons: 1) individual researchers design and execute research and data analysis for both corporations and universities. If we change how we do things, our institutions will follow suit. 2) Today’s social media researchers and corporate data scientists will mentor and train the next generation of data researchers. Our students will continue and advance the exploration of social media data at jobs based in industry and university settings. The ethical principles that they learn from us will define not only the future of this field but the general public’s relationship to it. But it’s not easy to bring together such a wide range of researchers. Social media researchers and data scientists are rarely all in the same place.

As luck would have it, Microsoft Research’s Faculty Summit, held annually on the MSR Redmond campus in the great state of Washington, USA, gathers just such a mixed scholarly audience. It was scheduled for July 14-15, a mere two weeks into the public fallout over the Study. Through the support of Microsoft Research and MSR’s Faculty Summit organizers, we organized an ad-hoc session for July 14, 2014, 11:30a-12:30p PT, entitled “When Data Science & Human Subject Research Collide: Ethics, Implications, Responsibilities.” Jeff Hancock, co-author of the Facebook Emotions Study, generously agreed to participate in the discussion. I scoured the list of Faculty Summit attendees and found three other participants to round out the conversation: Jeffrey Bigham, Amy Bruckman, and Christian Sandvig. These scholars (their bios are below) offer the expertise and range of perspectives we need to think through what to do next.

Below, you will find a transcript of the brief panel presentations and a long, long list of excellent questions generated by the more than 100 attendees. I have anonymized the sources of the questions, but if you contact me and would like your name attached to your comment or question, please let me know and I’ll edit this document.

I asked that the session not be recorded for public circulation because I wanted all those present to feel completely free to speak their minds. I encouraged everyone to “think before they tweet,” which did not bar social media reports from the event (but I was delighted to see how many of us focused on each other rather than our screens). We agreed early on that the best contribution we could, collectively, make was to generate questions rather than presume anyone had the answers. I hope that you find this document helpful as you work through your own thoughts on these issues. My thanks to MSR and the Faculty Summit organizers (particularly Jaya, who was so patient with the ever-changing details), to the panelists for their participation, and to the audience for their collegiality and kindness. A special shout out to Liz Lawley for sharing her notes with me.

Sincerely,

Mary L. Gray

Session title: When Data Science & Human Subject Research Collide: Ethics, Implications, Responsibilities

Chair: Mary L. Gray, Microsoft Research

Abstract: Join us for a conversation to reflect on the ethics, implications, and responsibilities of social media research, in the wake of the Facebook emotion study. What obligations must researchers consider when studying human interaction online? When does data science become human subjects research? What can we learn as a collective from the public’s reaction to Facebook’s recent research as well as reflection on our own work? Mary L. Gray (Microsoft Research) and Jeff Hancock (Cornell University and co-author of the Facebook emotion study) will facilitate a panel discussion among researchers based at Microsoft Research and across academia from the fields of data science, computational social science, qualitative social science, and computer science.

Panel expertise:

–      anthropology

–      communication studies

–      data science

–      experimental research design

–      HCI

–      human computation

–      information sciences

–      social psychology

–      usability studies

 

Each panelist had 5 minutes to reflect on:

  1. What can we learn?
  2. Where do we go from here?
  3. What is one BURNING QUESTION we should address together?

House rules:

  • think B4 you tweet
  • not a “gotcha!” session
  • step up/step back (if you tend to talk a lot, let someone else take the mic first)

BIOs:

Christian Sandvig—Speaker 1 (able to speak from an Information Sciences perspective)

Associate Professor of Information, School of Information, Faculty Associate, Center for Political Studies, ISR and Associate Professor of Communication, College of Literature, Science, and the Arts. Sandvig is a faculty member at the School of Information specializing in the design and implications of Internet infrastructure and social computing. He is also a Faculty Associate at the Berkman Center for Internet & Society at Harvard University. Before moving to Michigan, Sandvig taught at the University of Illinois at Urbana-Champaign and Oxford University. Sandvig’s research has appeared in The Economist, The New York Times, The Associated Press, National Public Radio, CBS News, and The Huffington Post. His work has been funded by the National Science Foundation, the MacArthur Foundation, and the Social Science Research Council. He has consulted for Intel, Microsoft, and the San Francisco Public Library. Sandvig received his Ph.D. in Communication Research from Stanford University in 2002. https://www.si.umich.edu/people/christian-sandvig

Jeffrey P. Bigham—Speaker 2 (able to speak from a computer science/accessible technologies perspective)

Associate Professor of the Human-Computer Interaction Institute and Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Jeffrey’s work sits at the intersection of human-computer interaction, human computation, and artificial intelligence, with a focus on developing innovative technology that serves people with disabilities in their everyday lives. Jeffrey received his B.S.E. degree in Computer Science from Princeton University in 2003. He received his M.Sc. degree in 2005 and his Ph.D. in 2009, both in Computer Science and Engineering from the University of Washington. http://www.cs.cmu.edu/~jbigham/

Amy Bruckman—Speaker 3 (able to speak from the builder/designer perspective)

Professor in the School of Interactive Computing in the College of Computing at Georgia Tech, and a member of the Graphics, Visualization, and Usability (GVU) Center. She received her PhD from the Epistemology and Learning Group at the MIT Media Lab in 1997, and a BA in physics from Harvard University in 1987. She does research on online communities and education, and is the founder of the Electronic Learning Communities (ELC) research group. Bruckman studies how people collaborate to create content online, with a focus on how the Internet can support constructionist, project-based learning. Her newer work focuses on the products of online collaboration as ends in themselves. How do we support people in this creative process, and what new kinds of collaborations might be possible? How do interaction patterns shape the final product? How do software features shape interaction patterns? How does Wikipedia really work, and why do people contribute to it? http://www.cc.gatech.edu/fac/Amy.Bruckman/

Jeff Hancock—Speaker 4 (co-author of the Facebook Emotions Study)

Dr. Jeffrey T. Hancock is a Professor in the Communications and Information Science departments at Cornell University and is the Co-Chair of the Information Science department. He is interested in social interactions mediated by information and communication technology, with an emphasis on how people produce and understand language in these contexts. His research has focused on two types of language, verbal irony and deception, and on a number of cognitive and social psychological factors affected by online communication. https://communication.cals.cornell.edu/people/jeffrey-hancock

Opening remarks (Mary L. Gray):

I asked each of our speakers to introduce themselves, tell us a little bit about the perspective they’re coming from. The goal of the panel was to bring together as many different disciplinary perspectives as possible among people who are studying what is perhaps best understood as a shared object: social media. We came together to think about the implications and ramifications of the public response to the Facebook study. I gave a special shoutout and thanks to Jeff Hancock for being willing to attend Faculty Summit at the very last minute. I want to publicly say how impressed I am by his collegiality and his willingness to engage. I think we are so lucky that this is the case that became the opportunity for us to talk about this. I think all of us researching social media can imagine really bad cases that could have come to light and instantly eroded public trust in our efforts to understand social media. So I’m really very happy that this opportunity to talk about how to move forward in our research was prompted by the work of a scholar who I really respect and admire. So with that, I handed it off to our first speaker, Christian Sandvig. Each person spoke for a little bit and then we had a chance for them to pose one burning question.

Panelist statements:

CHRISTIAN SANDVIG:

Thanks, Mary. Mary asked us to say a little bit about where we might relate to this topic. I’m a Professor at the School of Information and the Communications Studies Department at the University of Michigan. I’m interested in information and public policy. I’m interested in this particular controversy because I have a forthcoming book about studying human behavior online. I’ve taught about applied ethics and research methods. I have a graduate class called Unorthodox Research Methods, about new research methods and the controversies they provoke. And I’m a former member of an Institutional Review Board. So that’s my background. I want to use my very brief time to mention a study that often comes up in historical reviews of psychology. It’s the Middlemist “bathroom study” (http://www2.uncp.edu/home/marson/Powerpoints/3610Bathroom1.pdf). It’s sometimes called the micturition study, if you have a preference for scientific terminology. To be clear: I’m not trying to say that the Facebook experiment is like the bathroom experiment. But there are some interesting parallels. So I’ll just give you a quick rundown of those parallels. This is a research study conducted by psychologists in a men’s restroom at a large Midwestern university. Basically the researchers built a small periscope-like device that allowed a professor sitting in a toilet stall to observe patrons at the urinals from a side angle. The reason that the researchers did this is that they had a hypothesis about physiologic excitation and personal space. So they designed an experiment in which a confederate, a student on the research team, would stand near or at a distance from an individual who came into the bathroom to use the urinal. They did this without consent and they didn’t have a debriefing process. They timed the urination with a stopwatch to help them draw conclusions about physiologic excitation and the physical proximity of strangers.

The reason that the Middlemist “bathroom study” is a useful parallel to today’s uproar over the Facebook Emotions Study is that public criticism of the research did not focus on physical harms to human subjects but, rather, the perceived indignity and disregard for individual privacy that the study suggested. The researchers defended themselves and used reasonably sound logic, arguing that going to the bathroom is an everyday experience. They studied a public bathroom, after all. The worst that could happen is that a subject feels a little weird that someone’s watching them in a public bathroom. And, in fact, they argued debriefing would have produced the harm in this study. If they’d told men that they’d been watched in a public bathroom, it might then make them uncomfortable. So in fact, telling subjects about the study produces the only harm that could happen. So, they reasoned, we shouldn’t debrief subjects about the study. The debate about this study is extensive. But one of the conclusions that followed from it is that researchers in this case focused on the wrong harms. They argued that individuals in this study probably couldn’t be harmed because it’s only mildly embarrassing or creepy to be watched in a public bathroom. But the harm that the researchers should have addressed or considered was the potential harm to the image of the profession or all of science. Some research subjects were actually very upset about the study and felt it violated human decency and their individual dignity. They were not harmed individually, but found this study creepy and invasive. Avoiding telling people that you’re doing this kind of research because telling them would upset them doesn’t help at all. Researchers simply delay the harm that will follow when the public eventually finds out how the study was conducted. Such delays only leave the public more angry that researchers didn’t tell subjects, at some point in the study, because it suggests that the researchers are hiding something. So the question I have for the panel and the audience is: Is it possible for us to anticipate this kind of harm? Is it possible for us as researchers to design research and say this is something that’s going to cause controversy because people are going to think it’s very creepy, versus this is something that no one’s going to have a problem with? That’s actually a difficult question to answer.

Some people have argued, well, you know, Facebook’s already done a variety of other studies that changed users’ information without their knowledge, so why does this one produce the controversy? I would argue that there are research cases and topics where there are foreseeable harms because we know that people feel differently about certain areas of their lives. People feel differently about whether there’s an intervention or not. People feel differently about the valence of the intervention. For example, people will feel differently about whether an intervention or research experiment is done for science or for a corporation. But, really, the only way that we’re going to be able to predict whether the “creepiness factor” will register as a problem is to involve research participants in the research design at some level. Participants’ involvement could help researchers figure out the level of threat before we execute our research. Fundamentally, researchers aren’t the ones who decide what is threatening or crossing the line for the public. If participants feel our research methods are creepy and they hate it, we don’t want to be in the business of doing that research. We’re not going to be able to argue participants out of their feelings and say, “no, it’s all right; people look at you in the bathroom all the time.” We’re not going to be able to do that. We need a different approach and understanding of “harm” to conduct social research.

JEFF BIGHAM:

I’m Jeff Bigham from Carnegie Mellon University. I approached this research area a little bit differently. I work on building systems to support people with disabilities, often using human computation. Mary asked us to think about what skin we have in this game. So the skin I have in this game is that social media are the primary way that we recruit the people that power the systems we build for people with disabilities, via friend sourcing, community sourcing, citizen science, traditional crowdsourcing. It’s also the resource we have for understanding the people using our systems. As Mary said in her talk earlier this morning, “crowds are people,” and it is really important for us to make these systems work well–to make them sustainable and make them scalable.

We’re increasingly moving away from, say, Amazon Mechanical Turk, to services like Facebook, to power our systems for people with disabilities. Ultimately, we need users to trust the platforms on which we are recruiting workers. So if they don’t trust Facebook, for instance, they may not use it or they may move to closed systems that don’t allow us the kind of access or the ability to incorporate human work into our systems. I’ve tried bootstrapping sociotechnical systems on my own, and it’s actually really hard without piggybacking on existing platforms. So it’s really important that we have continued access to the general public using commercial platforms. I think that we can all agree this is about a lot more than one study or one research article. And so my fear is that, as a result of this experience, we will be more likely to miss out on the upsides and rewards that could come from engaging with users of these services in interesting ways. My hope is that we can find a way to preserve the utility of these sites and our ability to do important research and innovate on social media platforms. I also hope that researchers can continue partnering with industry while addressing the very real concerns of users. So my question is what practical steps should researchers take right now, while public opinion and corporate policies are still being sorted out, to help ensure our long-term ability to work with companies who are running these very interesting platforms?

AMY BRUCKMAN:

Thank you, Mary, so much for organizing this. It’s really timely. I launched an online, programmable virtual world for children in 1995. I got interested in Internet research ethics because I asked people what is the ethical way to do this and nobody knew. So I had to think ethically and invent the ethical things to do. In the 1990s, I was part of three different working groups focused on developing ethical policies for Internet research: One for the Association of Internet Researchers; another for AAS; and a third one for the APA. The APA group, led by Bob Kraut, resulted in a paper which you may find useful and is available on my website, along with a long list of other papers on research ethics. I think it may be time for us to have another round of working groups. It’s been a long time since the ’90s. There are some new issues emerging and we could use some updated statements of what the ethical issues are here and how to handle them. Several of my papers on research ethics have dealt with the issue of disguising subjects’ online identities.

I argue that, in many cases, contrary to the traditional approach of always disguising research subjects, if they are doing creative work on the Internet, for which they deserve credit, we are ethically obligated to ask them: “Do you want me to use your real name?” It would be unethical to hide their names without their consent. I want to be a little bit deliberately provocative here: I have done research on Internet users without their consent, and I would do it again. According to U.S. law, you can do research without consent. You can get a full waiver of consent if the research can’t be practicably done without a waiver, if the benefits outweigh the risk, and if the risk is low. I have a post on my blog at nextbison.wordpress.com about a study that I did in 2003 where we walked into IRC chat rooms and recorded chat room participants’ reactions. Actually, we were really studying whether we would get kicked out of the chat room. We had four conditions: a control, where we walked in and didn’t say anything; a treatment where we walked in and said “Hi. I’m recording this for a study of language online;” an opt-in treatment; and an opt-out treatment. I know this gets very meta. And a little circular. But we found that people really didn’t want us to be in their IRC chat rooms. Almost no one opted in. And no one opted out. We have a colorful collection of the boot messages we received as we were kicked out of these chat rooms. My favorite is “Yo mama’s so ugly she turned Medusa to stone.” So ironically, despite the fact that our research documents that we made people angry, I still think the study itself was ethical. It’s certainly not something that we did lightly. But the level of disturbance we created was relatively small. I think what we learned from it was beneficial to people and to science in general. The original papers are available on my blog. And if you’re interested in more details, I’d be happy to discuss it with you. But my point in referencing this study is to argue that it is possible to do research that upsets people, and we should be careful about overreactions to our work.

I want to say that the reaction to the Facebook study was out of proportion. And I hope that Jeff knows that we, his colleagues, are behind him. The reaction to Facebook, the company, also was excessive. I love a lot of the research that Facebook does. I’m not saying it’s perfect. There’s a lot that all of us have to learn about researching social media. And I will say there’s a lot we can learn from this incident. I’m glad it started this series of conversations. A couple of questions that I have for the future are: Should companies be required to have something more like a real IRB? That’s a tough one. It has a lot of complications. Distinguishing social science research from how companies do their business and make their sites usable is almost impossible. My other burning question, that I hope we can discuss, is: should conferences and journals that do peer review also review the ethics of a study?

A while ago I reviewed an Internet-based study submitted to the CHI conference. I objected to the ethics of the study, and objected violently. I was really offended by this study. I put my objections in my CHI review and I gave the paper a 1. I never give 1s; I’m nice. I got back a response from the program committee that year that the researchers had their study approved by their campus Institutional Review Board (IRB) and they proceeded in good faith; so, we declare this study to be ethical. Therefore, it’s not the reviewer’s place to question the ethics of the study. I’m not sure that’s how we should be handling things. I think we need to think about our ethics review as an incredibly complicated sociotechnical system, with tools and rules and divisions of labor and different activity systems run by different IRBs that come to different solutions. Somehow, there has to be some error correction when we come together to share our work. On the other hand, the practical question of how we do this without causing tremendous practical problems and unfairness in the meta-review is difficult, too. So I don’t think it’s easy. But I don’t think the hand-waving, “oh, it was approved, it’s not our business,” is the right answer, either. So I’m looking forward to more conversations from here. Thanks.

JEFF HANCOCK:

Thank you, everybody, for coming in today. Thank you, Mary, for organizing this, and thanks to the fellow panelists for being part of this on pretty short notice. And thank you all for this morning. I’ve seen many colleagues and friends. It’s been great to feel supported and to have people reaching out to make sure I’m doing okay. It was my first experience with global, worldwide Internet wrath, and it was very difficult, I will admit. My family paid a price for it. I paid a price, but I feel much better being amongst colleagues. Mostly because this is a really important conversation, and I feel now a privilege and a responsibility to be a part of it. I thought I would take a different approach from the rest of the panelists and describe a little bit of what I learned from the various e-mails I received from around the world in response to this. And I’ll keep it a little bit higher level, away from specific identities. Some of them are pretty intense. And I think that the intensity actually points to something important.

I received a couple hundred e-mails from people from around the world. The e-mails that I want to discuss with you are ones from the people using Facebook. This was their role as a stakeholder. These e-mails are distinct from those that I received from other academics with questions about ethical issues, around informed consent, around how IRB dealt with this, et cetera.

Facebook users’ emails tended to fall into three main categories. The first one was: How dare you manipulate my news feed! And this was a really fervent response—and very common. I think it points to something that Christian Sandvig and other scholars thinking about algorithms and the social world have been taking up in their work. As Tarleton Gillespie puts it, we don’t have metaphors in place for what the news feed is. We have a metaphor for the postal service: messages are delivered without tampering from one person to the next. We have a metaphor from the newsroom: editors choose things that they think will be of interest. But there’s no stable metaphor that people hold for what the news feed is. I think this is a really important thing. I’m not sure whether this means we need to bring in an education component to help people understand that their news feeds are altered all the time by Facebook. But the huge number of e-mails about people’s frustration that researchers would change the news feed indicates that there’s just no sense that the news feed was anything other than an objective window into their social world.

The second category of e-mail that I received signals that the news feed is really important to people. I got a number of e-mails saying things like: “You know my good friend’s father just died. And if I didn’t have the news feed I may not have known about it.” This surfaced a theme that the news isn’t just about what people are having for breakfast or all the typical mass media put-downs of Twitter and Facebook. Rather, this thing that emerged about seven years ago [Facebook] is now really important to people’s lives. It’s central and integrated in their lives. And that was really important for me to understand. That was one of the things that caught me off guard, even though maybe in hindsight it shouldn’t have.

The last category of e-mail that I received: A lot of people asked me why I thought this study attracted this kind of attention and controversy, whereas other similar studies did not. I thought a lot about that. One of the things that came out of the e-mails is that, as Christian Sandvig argued earlier, we were looking at the wrong place for what would register as “harm.” People have a very strong sense of autonomy. We know that quite well from social psychology and from sociology. I think our study violated people’s sense of autonomy and the fact that they do not want their emotions manipulated or mood controlled. And I think it’s a separate issue whether we think emotions are being manipulated all the time, through advertising, et cetera. What became very clear in the e-mail was that emotions are special. And I think it’s one example of a class of things that will fall into some of the spaces that Christian Sandvig talked about. If we work on one of these special classes or categories of human experience, like emotion, without informed consent, without debriefing, we could do larger harm than just harm to participants.

I can now have some sense of humor around some of the hate mail. And it’s been an amazing learning experience for me. I hope that by turning it over to the floor here and having ongoing conversations, we can really move things forward. My burning question would be: I think that this is a huge turning point or advance for social sciences potentially in the same way that, say, evolutionary theory was important for biology or the microscope was for chemistry. And I would want us to think about how we would continue doing the research on social media platforms ethically. So in the same way that Stanley Milgram’s study caused us to rethink what ethical research practices are, in the same way that Amy Bruckman’s calling on us to return to reflecting on how we do Internet research, now that we can do social psychology essentially at scale, how do we bring ethics along with that?

MARY L. GRAY:

I think what we can do concretely, with the time we have left — we have a little bit of time remaining. But I think the most productive thing we could do, I would argue, is get a lot of questions on the table. Because we are recording this, I can get a transcript and we can collect all the questions. And I would honestly say I don’t really listen to anybody who tells me right now they have the answer, because we’ve only been studying this thing for about ten years. This is entirely new to us. I don’t know that our objective should be answering anything today. I think we should be listening to each other, hearing our concerns and hearing some really important questions. So with that in mind, let’s hear some questions.

Questions and comments generated by the audience:

  1. Where do you think this [conversation about what to do next] should happen? I don’t think it’s just a matter of us having a special issue of a journal where people publish their opinions, and I know that stuff like that is happening. But it feels like we have to have some real dialogue. Who are the people who you think need to be involved in these conversations, and where do you think some of these conversations can happen?
  2. I think the value of this experiment and the reaction to it is that it has raised awareness of the algorithmic power that these organizations [social media companies] have. What is the responsibility of the Facebooks and the Googles of the world to be aware of this?
  3. Do we all agree that corporations have a role in this conversation?
  4. Information is being presented and it’s being manipulated [through social media interfaces] by definition. If you’re working in a mass medium with a corporation, you’re changing the presentation of information all the time. How do we draw any lines here to distinguish what is an ethical or unethical presentation of this kind of information?
  5. How can we take this up to be a national and an international conversation? I think we need to be thinking [beyond] the campus level. The variability among IRBs is hopeless, because if one campus IRB has approved something, that doesn’t mean it meets some national or international standard. How can we think about this internationally, since these are international corporations and international data we’re talking about? These aren’t just Cornell, Berkeley, or UCLA data.
  6. For the most part, Facebook is occupied all the time by highly vulnerable populations. Even if there were an open consent process there, how do you know the populations there really would have been in a position to give fully informed consent?
  7. Could there be something that companies with social media sites actually do to let end users know about this, or to specify how they want their information to be reused, something like the organic food sticker on foods? Could we create some way to very simply allow people to say to us, “sure, go ahead, modify my stuff,” or “don’t touch my stuff,” or something like that? Maybe there’s some trigger, especially for anything that’s private.
  8. How, as industrial researchers, do we maintain ethical obligations to our subjects similar to those of academic researchers?
  9. As a community, how do we agree, when we acknowledge there are going to be many, many different partners, some in industry, some in academia, doing lots of kinds of research, who is responsible for the ethical treatment of human subjects and their data?
  10. I think if you have a Ph.D., perhaps part of that professional training should mean that we can assume that you can behave ethically until it’s proven otherwise.
  11. What is the argument toward industry [for tighter ethical regulation] that’s going to make sense? Number one is losing your customer base. I’m sure Facebook has taken a hit and every single advertiser has taken a hit, because you’re going to think twice about clicking on the button. How do we speak to corporate organizations and convince them that they should change their actions?
  12. So I’m somewhat still puzzled by what you [Jeff Hancock] think about your findings. Do you really feel like you imposed some sort of negative valence on people that hurt them, or is there a lot of uncertainty here? And how is this different from the day-to-day interactions we have? Why is this special?

Must-reads for how to study people’s online behavior (and navigate the ethical challenges that entails!)

I realized after posting my thoughts on how to think about social media as a site of human interaction (and all the ethical and methodological implications of doing so) that I forgot to leave links to what are, bar none, the best resources on the planet for policy makers, researchers, and the general public thinking through all this stuff.

Run, don’t walk, to download copies of the following must-reads:

Charles Ess and the AOIR Ethics Committee (2002). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee. Approved by the Association of Internet Researchers, November 27, 2002. Available at: http://aoir.org/reports/ethics.pdf

Annette Markham and Elizabeth Buchanan (2012). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee (version 2.0). Approved by the Association of Internet Researchers, December 2012. Available at: http://aoir.org/reports/ethics2.pdf

When Science, Customer Service, and Human Subjects Research Collide. Now What?

I’m frustrated that the state of public intellectualism allows us, individually, to jump into the conversation about the recently published Facebook “Emotions” Study [1]. What we—from technology builders and interface designers to data scientists and ethnographers working in industry and at universities alike—really (really) need right now is to sit down together and talk. Pointing the finger or pontificating doesn’t move us closer to the discussions we need to have, from data sharing and users’ rights to the drop in public funding for basic research itself. We need a dialogue—a thoughtful, compassionate conversation among those who are or will be training the next generation of researchers studying social media. And, like all matters of ethics, this discussion will become a personal one as we reflect on our doubts, disagreements, missteps, and misgivings. But the stakes are high. Why should the Public trust social media researchers and the platforms that make social media a thing? It is our collective job to earn and maintain the Public’s trust so that future researchers and social media builders have a fighting chance to learn and create more down the line. Science, in particular, is an investment in questions that precede and will live beyond the horizon of individual careers.

As more and more of us crisscross disciplines and work together to study or build better social media, we are pressed to rethink our basic methods and the ethical obligations pinned to them. Indeed “ethical dilemmas” are often signs that our methodological techniques are stretched too thin and failing us. When is something a “naturalistic experiment” if the data are always undergoing A/B tweaks? How do we determine consent if we are studying an environment that is at once controllable, like a lab, but deeply social, like a backyard BBQ? When do we need to consider someone’s information “private” if we have no way to know, for sure, what they want us to do with what we can see them doing? When, if ever, is it ok to play with someone’s data if there’s no evident harm but we have no way to clearly test the long-term impact on a nebulous number of end users?

There is nothing obvious about how to design and execute ethical research that examines people’s individual or social lives. The reality is, when it comes to studying human interaction or behavior (for profit or scientific glory), it is no more (or less) complicated whether we’re interviewing someone in their living room, watching them in a lab, testing them at the screen, or examining the content they post online. There is no clearer sign of this than the range of reactions to the news (impeccably curated here by James Grimmelmann) that for one week, back in January 2012, researchers manipulated (in the scientific sense) what 689,003 Facebook users read in their individual News Feed. Facebook’s researchers fed some users a diet containing fewer posts with “happy” and positive words than their usual News Feed; other users received a smaller-than-average allotment of posts ladled with sad words. Cornell-based researchers came in after the experiment was over to help sift through and crunch the massive data set. Here’s what the team found: By the experiment’s last day (which, coincidentally, landed on the day of the SOPA online protests! Whoops), it turned out that a negligible—but statistically detectable—number of people produced fewer positive posts and more negative ones if their Feed included fewer positive news posts from friends; when the researchers scaled back the number of posts with negative cues from friends, people posted fewer negative and more positive posts. This interesting, even if small, finding was published in the June 2014 issue of the Proceedings of the National Academy of Sciences (PNAS). That’s how Science works—one small finding at a time.
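That phrase, “negligible but statistically detectable,” is worth unpacking: with hundreds of thousands of users in each condition, even a minuscule shift in the average positivity of people’s posts will clear conventional significance thresholds. Here is a minimal, illustrative sketch in Python of that arithmetic. The positivity rates, the normality assumption, and the even group split are all made-up assumptions for illustration; none of these numbers come from the study or its data.

```python
# Illustrative sketch only (not the study's code or data): why a tiny effect
# becomes statistically detectable when ~689,000 users are split into two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

n_per_group = 689_003 // 2  # assume roughly half the users in each condition

# Assumed outcome measure: percentage of positive words in a user's posts that week.
control = rng.normal(loc=5.25, scale=2.0, size=n_per_group)  # usual News Feed
reduced = rng.normal(loc=5.20, scale=2.0, size=n_per_group)  # fewer positive friend posts

# Two-sample t-test on the simulated groups.
t_stat, p_value = stats.ttest_ind(control, reduced)

# Cohen's d: difference in means divided by the pooled standard deviation.
pooled_sd = np.sqrt((control.var(ddof=1) + reduced.var(ddof=1)) / 2)
cohens_d = (control.mean() - reduced.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.2e}, Cohen's d = {cohens_d:.3f}")
# With ~344,500 users per group, a simulated difference of 0.05 percentage points
# (d of roughly 0.025, negligible in practical terms) yields p far below 0.05.
```

The point of the toy calculation is simply that, at this scale, statistical significance says almost nothing about practical significance, which is one reason the effect could be both real and tiny.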

At issue: the lead author, Facebook Data Scientist, Adam Kramer, never told users in the study that their News Feeds were part of this experiment, either before or after that week in January. And Cornell University’s researchers examining the secondary data set (fancy lingo for the digital records of more than half a million people’s interactions with each other) weren’t, technically, on the hook for explaining that to subjects either. Mind you, it’s often acceptable in human subjects research to conduct experiments without prior consent, as long as everyone discussing the case agrees that the experiment does not impose greater risk to the person than they might experience in a typical day. But even in those cases, at some point the research subjects are told (“debriefed”) about their participation in the study and given the option to withdraw data collected about them from the study. Researchers also have a chance to study the impact of the stimulus they introduced into the system. So, the question of the hour is: Do we cross a line when testing a product also asks a scientifically relevant question? If researchers or systems designers are “just” testing a product on end users (aka humans) and another group has access to all that luscious data, whose ethics apply? When does “testing” end and “real research” begin in the complicated world of “The Internet?”

Canonical Science teaches us that the greater the distance between researchers and our subjects (often framed as objectivity), the easier it is for us to keep trouble at arm’s length. Having carried out what we call “human subjects research” for much of my scholarly life—all of it under the close scrutiny of Institutional Review Boards (IRBs)—I feel professionally qualified to say, “researching people ain’t easy.” And, you know what makes it even harder? We are only about 10 years into this thing we call “social media”—which can morph into a telephone, newspaper, reality TV show, or school chalkboard, depending on who’s wielding it and when we’re watching them in action. Online, we are just as likely to be passionately interacting with each other, skimming prose, or casually channel-surfing, depending on our individual context. Unfortunately, it’s hard for anyone studying the digital signs of humans interacting online to know what people mean for us to see—unless we ask them. We don’t have the methods (yet) to robustly study social media as sites of always-on, dynamic human interaction. So, to date, we’ve treated the Internet as a massive stack of flat, text files to scrape and mine. We have not had a reason to collectively question this common, methodological practice as long as we maintained users’ privacy. But is individual privacy really the issue?

My brothers and sisters in data science, computational social science, and all of us studying and building the Internet of things inside or outside corporate firewalls, to improve a product, explore a scientific question, or both: We are now, officially, doing human subjects research. Here’s some background to orient us and the people who pay our research bills (and salaries) to this new reality.

Genealogy of Human Subjects Research Oversight in the United States
In 1966, the New England Journal of Medicine published an article by Harvard research physician Henry Beecher chronicling 22 ethically questionable scientific studies conducted between 1945 and 1965 (Rothman, 2003: 70-84). Dr. Beecher’s review wasn’t exposing fringe science on the margins. Federally and industry-funded experiments conducted by luminaries of biomedicine accounted for most of the work cited in his review. Even if today we feel like it’s a no-brainer to call ethical foul on the studies Beecher cited, keep in mind that it took DECADES for people to reach consensus on what not to do. Take, for example, Beecher’s mention of Dr. Saul Krugman. From 1958 to 1964, Krugman injected children with live hepatitis virus at Willowbrook State School on New York’s Staten Island, a publicly-funded institution for children with intellectual disabilities. The Office of the Surgeon General, U.S. Armed Forces Epidemiological Board, and New York State Department of Mental Hygiene funded and approved his research. Krugman directed staff to put the feces of infected children into milkshakes later fed to newly admitted children, to track the spread of the disease. Krugman pressed poor families to include their children in what he called “treatments” to secure their admission to Willowbrook, the only option for poor families with children with intellectual disabilities. After infecting the children, Krugman experimented with their antibodies to develop what would later become the vaccines for the disease. Krugman was never called out for the lack of consent or failure to provide for the children he infected with the virus, now at risk of dying from liver disease. Indeed, he received the prestigious Lasker Prize for Medicine for developing the Hepatitis A and B vaccines and, in 1972, became the President of the American Pediatric Society. Pretty shocking. But, at the time, and for decades after that, Willowbrook did not register as unequivocally unethical. My point here is not to draw one-to-one comparisons of Willowbrook and the Facebook Emotions study. They are not even close to comparable. I bring up Willowbrook to point out that no matter how ethically egregious something might seem in hindsight, often such studies do not appear so at the time, especially when weighed against the good they might seem to offer in the moment. Those living in the present are never in the best position to judge what will or will not seem “obviously wrong.”

News accounts of risky experiments carried out without prior or clear consent, often targeting marginalized communities with little power, catalyzed political will for federal regulations for biomedical and behavioral researchers’ experiments (Rothman, 2003: 183-184). Everyone agreed: there’s a conflict of interest when individual researchers are given unfettered license to decide if their research (and their reputations) are more valuable to Science than an individual’s rights to opt out of research, no matter how cool and important the findings might be. The balance between the greater good and individual risk of research involving human subjects must be adjudicated by a separate review committee, made up of peers and community members, with nothing to be gained by approving or denying a researcher’s proposed project.

The Belmont Report
The National Research Act of 1974 created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research [2]. Five years later, the Commission released The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. The Belmont Report codified the call for “respect for persons, beneficence, and justice” (The Belmont Report, 1979). More concretely, it spelled out what newly mandated university and publicly funded agency-based IRBs should expect their researchers to do to safeguard subjects’ informed consent, address the risks and benefits their participation might accrue, and more fairly distribute science’s “burdens and benefits” (The Belmont Report, 1979).
The Belmont Report now guides how we define human subjects research and the attendant ethical obligations of those who engage in it.

Put simply, the Belmont Report put a Common Rule in place to manage ethics through a procedure focused on rooting out bad apples before something egregious happens or is uncovered after the fact. But it did not—and we have not—positioned ethics as an ongoing, complicated discussion among researchers actively engaging fellow researchers and the human subjects we study. And we’ve only now recognized that human subjects research is core to technology companies’ product development and, by extension, bottom lines. However, there is an element of the Belmont Report that we could use to rethink guidance for technology companies, data scientists, and social media researchers alike: the lines drawn in the Belmont Report between “practice and research.”

The fine line between practice and research
The Belmont Report drew a clear line demarcating the “boundaries between biomedical and behavioral research and the accepted and routine practice of medicine”—the difference between research and therapeutic intervention (The Belmont Report 1979). This mandate, which was in fact the Report’s first order of business, indexes the Commission’s most pressing anxiety: how to rein in biomedicine’s professional tendencies to experiment in therapeutic contexts. The history of biomedical breakthroughs—from Walter Reed’s discovery of the causes of yellow fever to Jonas Salk’s polio vaccine—attests to the profession’s culture of experimentation (Halpern 2004: 41-96). However, this professional image of the renegade (mad) scientist pioneering medical advances was increasingly at odds with the need, pressing by the 1970s, for a more restrained and cautious scientific community driven first by an accountability to the public and only second by a desire for discovery.

In redrawing the boundaries between research and practice, the Belmont Report positioned ethics as a wedge between competing interests. If a practitioner simply wanted to tweak a technique to see if it could improve an individual subject’s experience, the experiment did not meet the threshold of “real scientific inquiry” and could be excused from more formal procedures of consent, debriefing, and peer review. Why? Practitioners already have guiding codes of ethics (“do no harm”) and, as importantly, ongoing relationships built on communication and trust with the people in their care (at least, in theory). The assumption was that practitioners and “their” subjects could hold each other mutually accountable.

But once a researcher tests something out for testing’s sake, or to work more broadly on a scientific puzzle, they are in the realm of research and must consider a new set of questions: Cui bono, who benefits? Will the risk or harm to an individual outweigh the benefits for the greater good? What if that researcher profits from the greater good? The truth is, in most cases, the researcher will benefit, whether they make money or not, because they will gain credibility and status through the experience of their research. Can we say the same for the individual contributing their experiences to our experiments? If not, that is, typically, an ethical dilemma.

Constructing ethical practice in a social media world
Social media platforms and the technology companies that produce our shared social playgrounds blur the boundaries between practice and research. They (we?) have to, in many cases, to improve the products that companies provide users. That’s no easy thing if you’re in the business of providing a social experience through your technology! But that does not exempt companies, any more than it exempts researchers, from extending respect, beneficence, and justice to individuals sharing their daily interactions with us. So we need to, collectively, rethink when “testing a feature” transitions from improving customer experience to more than minimally impacting someone’s social life.

Ethical stances on methodological practices are inextricably linked to how we conceptualize our objects of study. Issues of consent hinge on whether researchers believe they are studying texts or people’s private interactions. Who needs to be solicited for consent also depends on whether researchers feel they are engaged in a single site study or dealing with an infrastructure that crosses multiple boundaries. What ethical obligations, then, should I adhere to as I read people’s posts—particularly on commercial venues such as Facebook that are often considered “public domain”—even when they may involve participants who share personal details about their lives from the walled garden of their privacy settings? Are these obligations different from those I should heed with individuals not directly involved in my research? How can I use this information and in what settings? Does consent to use information from interviews with participants include the information they publicly post about themselves online? These questions are not easily grouped as solely methods issues or strictly ethical concerns.

For me, the most pragmatic ethical practice follows from the reality that I will work with many of the people I meet through my fieldwork for years to come. And, importantly, if I burn bridges in my work, I am, literally, shutting out researchers who might want to follow in my footsteps. I can give us all a bad reputation that lasts a human subject’s lifetime. I, therefore, treat online materials as the voices of the people with whom I work. In the case of materials I would like to cite, I email the authors, tell them about my research, and ask if I may include their web pages in my analyses. I tread lightly and carefully.

The Facebook Emotions study could have included a follow-up email to all those in the study, sharing the cool results with participants and offering them a link to the happy and sad moments that they missed in their News Feed while the experiment was underway (tip of the hat to Tarleton Gillespie for those ideas). And, with more than half a million people participating, I’m sure a few hundred thousand would have opted in to Science and let Facebook keep the results.

We do not always have the benefit of personal relationships, built over time with research participants to guide our practices. And, unfortunately, our personal identities or affinities with research participants do not safeguard us from making unethical decisions in our research. We have only just started (like, last week) to think through what might be comparable practices for data scientists or technology designers, who often never directly talk with the people they study. That means that clear, ethical frameworks will be even more vital as we build new toolkits to study social media as sites of human interaction and social life.

Conclusion
Considering that more and more of social media research links universities and industry-based labs, we must coordinate our methodologies and ethics no matter who pays us to do our research. None of us should be relieved from duty when it comes to making sure all facets of our collaborations are conducted with an explicit, ethical plan of action. There are, arguably, no secondary data sets in this new world.

The Belmont Report was put in place to ensure that we have conversations with the Public, among ourselves, and with our institutions about the risks of the scientific enterprise. It’s there to help us come to some agreement as to how to address those risks and create contingency plans. While IRBs as classification systems can and have provided researchers with reflexive and sometimes necessary intervention, bureaucratic mechanisms and their notions of proper science are not the only or even the best source of good ethics for our work—ongoing and reflexive conversations among researchers and practitioners sharing their work with invested peers and participants are.

Whether from the comfort of a computer or in the thick of a community gathering, studying what people do in their everyday lives is challenging. The seeming objectivity of a lab setting or the God’s eye view of a web scraping script may seem to avoid the biases and desires that could otherwise interfere with the social situations playing out in front of us that we want to observe. But, no matter how removed we are, our presence as researchers does not evaporate when we come into contact with human interaction. One of the values of sustained ethnographic engagement with people as we research their lives is that it keeps researchers constantly accountable, not only to our own scientific (and self) interests but also to the people we encounter in any observation, experiment, or engagement.

Some of my peers argue that bothering people with requests for consent or efforts to debrief them will either “contaminate the data” or “seem creepy” after the fact. They argue that it’s less intrusive and more scientifically powerful to just study “the data” from a distance or adjust the interface design on the fly. I get it. It is not easy to talk with people about what they’re doing online. Keep in mind that by the end of USENET’s long life as the center of the Internet’s social world, many moderated newsgroups blocked two kinds of lurkers: journalists. And researchers. In the long run, keeping a distance can leave the general public more suspicious of companies’, designers’, and researchers’ intentions. People may also be less likely to talk to us down the road when we want to get a richer sense of what they’re doing online. Let’s move away from this legalistic, officious discussion of consent and frame this debate as a matter of trust.

None of us would accept someone surreptitiously recording our conversations with others to learn what we’re thinking or feeling just because “it’s easier” or it’s not clear that we are interested in sharing them if asked outright. We would all want to understand what someone wants to know about us and why they want to study what we’re doing—what do they hope to learn and why does it matter? Those are completely reasonable questions. All of us have a right to be asked if we want to share our lives with strangers (even researchers or technology companies studying the world or providing a service) so that we have a chance to say, “nah, not right now, I’m going through a bad break up.” What would it look like for all of us—from LOLcat enthusiasts and hardcore gamers, to researchers and tech companies—to (re)build trust and move toward a collective enterprise of explicitly opting-in to understand this rich, social world that we call “The Internet?”

Scientists and technology companies scrutinizing data bubbling up from the tweets, posts, driving patterns, or check-ins of people are coming to realize that we are also studying moments of humans interacting with each other. These moments call for respect, trust, mutuality. By default. Every time we even think we see social interactions online. Is working from this premise too much to ask of researchers or the companies and universities that employ us? I don’t think so.


Notes/Bibliography/Additional Reading

[1] The United States Department of Health, Education and Welfare (HEW) was a cabinet-level U.S. governmental department from 1953 to 1979. In 1979, HEW was reorganized into two separate cabinet-level departments: the Department of Education and the Department of Health and Human Services (HHS). HHS is in charge of all research integrity and compliance, including research involving human subjects.

[2] I want to thank my fellow MSR Ethics Advisory Board members, the MSR New England Lab, and the Social Media Collective, as well as the following people for their thoughts on drafts of this essay: danah boyd, Henry Cohn, Kate Crawford, Tarleton Gillespie, James Grimmelmann, Jeff Hancock, Jaron Lanier, Tressie McMillan Cottom, Kate Miltner, Christian Sandvig, Kat Tiidenberg, Duncan Watts, and Kate Zyskowski.

Bowker, Geoffrey C., and Susan Leigh Star

1999    Sorting Things Out: Classification and Its Consequences, Inside Technology. Cambridge, Mass.: MIT Press.

Brenneis, Donald

2006    Partial Measures. American Ethnologist 33(4): 538-40.

Brenneis, Donald

1994    Discourse and Discipline at the National Research Council: A Bureaucratic Bildungsroman. Cultural Anthropology 9(1): 23-36.

Epstein, Steven

2007    Inclusion: The Politics of Difference in Medical Research. Chicago: University of Chicago Press.

Gieryn, Thomas F.

1983    Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists. American Sociological Review 48(6): 781-95.

Halpern, Sydney A.

2004    Lesser Harms: The Morality of Risk in Medical Research. Chicago: University of Chicago Press.

Lederman, Rena

2006    The Perils of Working at Home: Irb “Mission Creep” as Context and Content for an Ethnography of Disciplinary Knowledges. American Ethnologist 33(4): 482-91.

Rothman, David J.

2003    Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decision Making. 2nd pbk. ed. Social Institutions and Social Change. New York: Aldine de Gruyter.

Schrag, Zachary M.

2010    Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965-2009. Johns Hopkins University Press.

Stark, Laura

2012    Behind Closed Doors: IRBs and the Making of Ethical Research. University of Chicago Press.

Strathern, Marilyn

2000    Audit Cultures: Anthropological Studies in Accountability, Ethics, and the Academy. London; New York: Routledge.

United States. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.

1978    Report and Recommendations: Institutional Review Boards. [Washington]: U.S. Dept. of Health, Education, and Welfare: for sale by the Supt. of Docs., U.S. Govt. Print. Off.

Matrix Algebra: how to be human in a digital economy

By Sara C. Kingsley and Dr. Mary L. Gray (cross-posted at CultureDigitally and the Center for Popular Economics)


Ray and Charles Working on a Conceptual Model for the Exhibition Mathematica, 1960, photograph. Prints & Photographs Division, Library of Congress (A-22a) http://www.loc.gov/exhibits/eames/images/uc9616.jpg

 

“Certainly the cost of living has increased, but the cost of everything else has likewise increased,”[1] H.G. Burt, the President of the Union Pacific Railroad, asserted to railroad company machinists and boilermakers. For Burt, the “cost of everything else” included the cost of labor. His remedy: place “each workman on his [own] merit.” In 1902, “workman merit” to a tycoon like H.G. Burt squarely meant equating the value of labor, or the worth of a person, to the amount of output each individual produced. Union Pacific Railroad eventually made use of this logic by replacing the hourly wages of workers with a piece rate system. Employers switched to piecework systems around the turn of the twentieth century largely to reduce labor costs by weeding out lower skilled workers and cutting the wages of workers unable to keep pace with the “speeding up” of factory production.

Employers historically leveraged piecework as a managerial tool, reconfiguring labor markets to the employers’ advantage by allowing production rates, rather than time on the job, to measure productivity.  Whatever a person produced that was not quantifiable as a commodity, in other words, did not constitute work.  We’ve seen other examples of discounted labor in spaces outside the factory.  Feminist economists fight to this day, for example, for the work of caregivers and housewives, largely ignored by mainstream economic theory, to gain recognition as “real” forms of labor.  Real benefits and income are lost to those whose work goes unaccounted.

As the historical record shows, workers do not typically accept arbitrary changes to their terms of employment handed down by management.  In fact, the Union Pacific Railroad machinists protested Burt’s decision to set their wages through a piecework system.  H.G. Burt met their resistance with this question: is it “right for any man to ask for more money than he is actually worth or can earn?”

But what is a person truly worth in terms of earning power?  And what societal, cultural, and economic factors limit a person from earning more?

In 2014, the question of a person’s worth in relation to their work, or the value of labor itself, is no less pressing. The rhetoric surrounding workers’ rights compared to those of business differs little whether one browses the archives of a twentieth century newspaper or scrolls through Facebook posts. Ironically enough, though, in the age of social media and citizen reporting, the utter lack of visibility and adequate representation of today’s workers stands in stark contrast to the piece rate workers of H.G. Burt’s day. Few soundbites or talking points, let alone bylined articles, focus on the invisible labor foundational to today’s information economies. Nowhere is this more clearly illustrated than with crowdwork.

Legal scholar Alek L. Felstiner defines crowdworking as “the process of taking tasks that would normally be delegated to an employee and distributing them to a large pool of online workers, the ‘crowd’” (2011). Hundreds of thousands of people regularly do piecework tasks online for commercial crowdsourcing sites like Amazon.com’s Mechanical Turk (“AMT”).

Over the last year, we’ve worked with Dr. Siddharth Suri and an international team of researchers to uncover the invisible forms of labor online and the people who rely upon digital piecework for a significant portion of their income. Crowdwork is, arguably, the most economically valuable, yet invisible, form of labor that the Internet has ever produced. Take Google’s search engine, for instance. Each time you search for an image online (to create the next most hilarious meme, or to find an infographic for a conference presentation), you’re benefitting from the labor of thousands of crowdworkers who have identified or ranked the images your search returns. While this service may be valuable to you, the workers doing it only receive a few cents for their contributions to your meme or slideshow presentation. Additionally, a typical crowdworker living in the United States makes, on average, 2 to 3 dollars an hour. We need to ask ourselves: what is fair compensation for the value that workers bring to our lives? How would you feel if, tomorrow, all your favorite, seemingly free, online services that depend on these digital pieceworkers disappeared?

Last fall, we spent four months in South India talking with crowdworkers and learning about their motivations for doing this type of work. In the process we met people with far-ranging life experiences but a common story to tell, one perhaps familiar to all of us who’ve earned a wage for our keep: work is not all we are, but most of what we do is work. And increasingly, the capacity to maintain a living above the poverty line is elusive, and complicated by what “being poor” means in a global economy. Our hope for finding more satisfying work, for a life valued for what it is rather than what it is not, is no less real, even as we confront the realities of today.

Moshe Marvit spoke to the complexities of crowdwork as a form of viable employment in a compelling account of U.S. workers’ experience with Amazon Mechanical Turk. He describes this popular crowdsourcing platform as “one of the most exploited workforces no one has ever seen.” Marvit emphasizes how crowdwork remains a thing universally unacknowledged, in that more and more tasks, from researchers’ web-based surveys to Twitter’s real-time deciphering of trending topics, depend on crowdwork. However, most people still don’t know that behind their screen is an army of click workers. Anyone who has ever browsed an online catalogue or searched the web for a restaurant’s physical address has benefited from a person completing a small, crowdworked task online. Pointedly, our web experience is better because of the thousands of unknown workers who labor to optimize the online spaces we employ.

As Marvit points out, and our research also notes, people only earn pennies at a time for doing the small crowd tasks not yet fully automatable by computer algorithms. These crowd tasks, however, add up to global systems whose monetary worth sometimes trumps that of small nations. Yet, when we ask our peers and colleagues, “do you know who the thousands of low-income workers are behind your web browser?” we receive mystified stares, and many reply, “I don’t know.”

The hundreds of thousands of people who regularly work in your web browser are not the youth of Silicon Valley’s tech industry. They likely cannot afford Google Glass or ride to work in corporate buses. Some are college educated but, like many people today, they are stuck in careers that undervalue their real worth, in addition to discounting the investments they’ve already made in their education, skills, and the unique set of values they’ve gained from their own life experiences.

Yet, the more our research team learns about crowdworkers’ lives, the more we realize how little we know about the economic value of crowdwork and the makeup of the crowdworking labor force. And as Marvit notes, we still don’t have a good grasp of what someone is doing, legally speaking, when they do crowdwork. Should we categorize crowdwork as freelance work? Contract labor? Temporary or part-time work?

In the absence of answers to these questions, some have called for policy solutions to mitigate the noted and sometimes glaring inequities in power between those posting tasks (or jobs) to crowdwork platforms and those seeking to do crowdwork online. But, we argue, good labor policy that makes sense of crowdwork, from a legal or technical point of view, can’t be adequately drafted until we understand what people expect and experience doing task-based work online. Who does crowdwork? Where, how, and why do they do it? And how does crowdworking fit into the rest of their lives, not to mention our global workflows? When we can answer these questions, we’ll be ready to talk about how to define crowdwork in more meaningful ways. Until then, we resist dubbing crowdwork “exploitative” or “ideal,” because doing so is meaningless to the millions of people who crowdwork, and it ignores the builders and programmers out there trying to improve these technologies.

We are all implicated in the environments we rely on and utilize in our daily lives, including the Internet. Those who request and outsource tasks to the crowd without regard to crowdworkers’ rights are, perhaps, no more at fault than the rest of us who expect instant, high quality web services every time we search or do other activities online. An important lesson from Union Pacific Railroad still holds true: workers are not expendable.



[1] Omaha daily bee. (Omaha [Neb.]), 01 July 1902. Chronicling America: Historic American Newspapers. Lib. of Congress. <http://chroniclingamerica.loc.gov/lccn/sn99021999/1902-07-01/ed-1/seq-1/>

How to think about ringing in your ears as something bigger than you

I’ve been feeling the need to freshen up the Blog-ish. What better way to do that than to post a link to a new blog post by another researcher I really love to read:

Check out Mack Hagood’s new piece on tinnitus and the prospects of an applied sound studies:

http://soundstudiesblog.com/2012/07/16/listening-to-tinnitus-roles-of-media-when-hearing-breaks-down/

A Message to the “First Responders” in Gay Kids’ Lives: Why We Need to Ditch the Politics of Blame, Stop Talking About “Cyberbullying,” and Move Toward Sharing Responsibility for the Loss of Tyler Clementi

Mary L. Gray

Senior Researcher, Microsoft Research New England, Cambridge, MA

Associate Professor of Communication and Culture, Indiana University

Versions of this post will be cross-posted to socialqueery, the Huffington Post, SocialMediaCollective, and Culture Digitally

Tyler Clementi’s death on 22 September 2010 was one of the first in a wave of highly publicized youth suicides that fall. In several cases, media coverage and political discourse connected these tragedies to cases of on- and offline harassment saturated in homophobic sentiment. Research among students suggests that these hostilely charged environments are the norm rather than the exception. For lesbian, gay, bisexual, transgender, and questioning (LGBTQ) youth contemplating suicide, parents, peers, educators, faith leaders, and LGBTQ community advocates are key “first responders”—caring individuals on the scene, providing support—in the wake of this ubiquitous animus. Rallying to punish Dharun Ravi, the former Rutgers student standing trial for 15 criminal counts, including tampering with witnesses and evidence, invasion of privacy, and bias intimidation of Tyler Clementi, does not do justice to Clementi’s life, nor does it move us one step closer to preventing another young person like him from turning to suicide.

Yet, for the past 2 years, anti-bullying advocates have had their collective frustration and political clout harnessed to further criminalize bullying rather than bolster the roles and resources of invaluable LGBTQ youth “first responders” on the ground. States and school districts rushed to crack down on bullies, prompted, in some cases, by their own convictions but, surely in others, by a political desire to appease constituencies without having to take an explicit stand on anything (who could be for bullying, after all?). As a result, a record number of anti-bullying policies are now on the books. However, we have no concrete evidence that such top-down policies prevent or counteract bullying, particularly so-called “cyberbullying”—harassment carried out through texting and online social networks. Worse yet, some research on violent harassment among youth suggests that framing the problem as “bullying” actually works against youth reporting violence or identifying themselves as targets of it (Marwick and boyd 2011).

As we move forward, we need to shift from a dead-end politics of blame to build out the sources of support and ethos of shared responsibility that could make a difference, literally, between life and death for LGBTQ young people.

Rethinking homophobia. Tracing a causal link between Ravi’s homophobic actions and Tyler Clementi’s suicide dangerously oversimplifies homophobia. This formula suggests that homophobia is something “individuals have” rather than what our cultural norms perpetuate. Rather than presume homophobia vents an individual’s fear of homosexuality, researchers, such as sociologist CJ Pascoe (2007), have persuasively argued that it is a portable (I would argue concealable) weapon for policing sexuality and shoring up the fragile gender identities emblematic of tween and teen life. Young people, like Clementi, searching for communities to reflect who they are must constantly weigh whether talking about how they feel, whether it’s with parents, close friends, or complete strangers, will work for or against them. If we are serious about preventing bullying and suicide, we need a calculus that always works in a young person’s favor.

The homophobia expressed in Ravi’s disgust for Clementi’s intimacy with another man, as much as the racism conveyed in Clementi’s joking suggestions that Ravi’s South Asian parents owned a Dunkin’ Donuts, signal our limited capacity to celebrate difference. We need to stop telling young people what they shouldn’t say or do and start teaching them—and ourselves—the social and emotional literacies they need to challenge the way they see themselves and each other. It’s time to start having direct conversations with students (beyond the platitude that such name calling “isn’t nice”) about the power that words like “fag,” “no homo,” “bitch,” and others circulate, not only through the person targeted by the slur but also through the person hurling it. Only then can we hope to turn homophobia from an easy insult into a powerful analytic tool for mining our own fears, insecurities, and discomforts with difference.

Expanding parental support/holding parents accountable. One of the few things we know for sure is that parents, guardians, and adult mentors make a difference in the lives of LGBTQ youth. A young person, for example, who lives in fear of a parent’s condemnation is more likely to hurt themselves than a young person who feels supported and accepted at home (Ryan 2009). This is not surprising. But by not explicitly recognizing parents’ roles, we undermine their importance as a strategy for combating LGBTQ youth bullying and suicide. Parents and guardians provide a measure of incomparable respite when they celebrate, rather than stand neutral on or second-guess, a young person’s decision to question what it means to be straight. A modest expression of acceptance makes a measurable difference. But even that can be a tall order. Adults must negotiate and account for their own doubts and anxieties when a child asks such questions before they can effectively offer first responder support. Parents shouldn’t have to go it alone and, realistically, can’t do it all. They need allies, from family, faith communities, and other positive social networks, to counter the violence and hostility rampant in school environments and circulating online. We will know we’ve reached our goal when every young adult imagines they’d celebrate, rather than endure or suffer through, having an LGBTQ-identifying child of their own.

Focusing on basic research. Educators, researchers, and policy makers need to acknowledge that we know next to nothing about the quality of young LGBTQ people’s lives before we can even begin to contribute to meaningful strategies for supporting them. The data we arm ourselves with, even the universally cited statistics on higher suicide rates among lesbian and gay youth, perpetuate a rudimentary, generic picture (Waidzunas 2011). But we have no idea what daily life is like for the average LGBTQ-identifying teen. Right now, there is no national instrument for measuring young people’s positive experiences around sexuality and gender. Most states don’t ask a single question about LGBTQ youth on their annual Youth Risk Behavior Survey, effectively erasing them from the discussion at the state and district level. Indeed, Massachusetts remains the only state with a standing Commission on GLBT Youth that, through its department of education, funds support programs in its public schools that gather data on the effectiveness of LGBTQ-specific outreach and education. What we need is a nationally funded, coordinated effort that links programming, outreach, and research on behalf of LGBTQ youth. The Born This Way Foundation, launching February 29 at Harvard University, and the Massachusetts GLBT Youth Commission’s Research Consortium are 2 good examples of what needs to be done.

Where to go from here. Focusing our collective outrage on prosecuting an individual, whether seeking the harshest punishment we can wring out of Ravi’s case or lobbying for so-called “zero-tolerance” policies that automatically expel any student implicated in bullying, implies that homophobia can be rooted out, one bad apple at a time. Turning this into a case of one individual driving Clementi over the edge moves us no closer to seeing the journey that brought Clementi to that edge. When it comes to understanding and preventing youth suicide, our research, educational policies, and legal actions can’t stop at weeding out the presence of homophobic individuals but must demand systems of accountability that address how we individually and collectively perpetuate homophobia in everyday ways. That is why the “first responders” fighting for young people’s federal rights to an equal education and the human right to free expression must call on us to share responsibility, broadly, for making those rights universal rather than narrowly seeking the right bully to blame and lock up.

Citations:

Alice Marwick and danah boyd. (2011). “The Drama! Teen Conflict in Networked Publics.” Paper presented at the Oxford Internet Institute Decade in Internet Time Symposium, September 22. http://ssrn.com/abstract=1926349

CJ Pascoe. (2007). Dude, You’re a Fag: Masculinity and Sexuality in High School. Berkeley: University of California Press.

Caitlin Ryan, David Huebner, Rafael M. Diaz, and Jorge Sanchez. (2009). “Family Rejection as a Predictor of Negative Health Outcomes in White and Latino Lesbian, Gay, and Bisexual Young Adults.” Pediatrics 123(1): 346-352. doi:10.1542/peds.2007-3524

Tom Waidzunas. (2011). “Young, Gay, and Suicidal: Dynamic Nominalism and the Process of Defining a Social Problem with Statistics.” Science, Technology & Human Values. doi:10.1177/0162243911402363

BIO

Mary L. Gray is Senior Researcher at Microsoft Research New England and Associate Professor of Communication and Culture, with affiliations in American Studies, Anthropology, and the Gender Studies Department at Indiana University. She draws on an interdisciplinary background in anthropology and critical media studies to study how people use digital and social media in everyday ways to shape their social identities and create spaces for themselves. Her most recent book, Out in the Country: Youth, Media, and Queer Visibility in Rural America (NYU Press, 2009) examined how youth in rural parts of the United States fashioned “queer” senses of gender and sexual identity and the role that media—particularly internet access—played in their lives and political work.

2 big things happened 1/12/12…

Thursday’s the start of my last grad seminar, at least for the next few years. And it’s also the day word goes out that I’ve officially joined Microsoft Research New England (@MSNewEngland), in Cambridge, MA, as a Senior Researcher. Nancy Baym (@nancybaym) and Kate Crawford (@katecrawford) will be joining the team as well (!!!!!). And, of course, danah boyd (@zephoria) is already on the scene doing amazing work, building bridges between social media research done with a critical, qualitative twist and the phenomenal mathematics, physics, and computer science research that distinguishes MSR. danah boyd’s posted the details here.

And the grad seminar: I’ve got a batch of stellar young scholars here at Indiana University. Together, we’re tackling the thicket of ethnographic approaches to digital media studies to think through what makes ethnographic work different from other approaches to the everyday experiences of media that shape our worlds.

Pretty exciting day.
: )

big day

Tomorrow’s the big day. What makes it so big, you might ask?

Wait for it. Wait for it…