The Electronic Frontier Foundation (EFF) publishes an annual “Who Has Your Back Report” to evaluate which companies defend their users when the government comes knocking. Since 2011, the report has focused on government requests for user information. This year, the report takes on a different topic: government requests to take down information—in other words, censorship on online platforms.
As background, EFF explains how the prevalence of HTTPS and the rise of mixed-use social media sites have made it harder for governments to censor content directly. As a result, governments increasingly turn to online platforms to censor for them.
EFF used five criteria to rate how well companies (“some of the biggest online platforms that publicly host a large amount of user-generated content”) protect their users from unwarranted censorship:
Based on EFF’s description of those criteria, GitHub meets each one. As we explain in our contribution to the report on content moderation by the UN’s free expression expert, we minimize censorship on our platform by providing transparency, notice, appeals, and geographically limited blocking when we find a takedown unavoidable.
Among EFF’s observations in the report are that companies that scored well “serve to provide examples of strong policy language for others hoping to raise the bar on content moderation policy” and that helping companies to review each other’s policies around content moderation “can serve as a guide for startups and others looking for examples of best practices.” A strong motivation behind open sourcing our policies is that we hope to contribute to industry best practices while offering those examples to startups and others who are looking for them. We recognize how important transparency is in how we develop our policies. We also recognize that being transparent about how we moderate content is essential to maintaining our community’s trust and our legitimacy as a platform.
We thank EFF for taking on online censorship in this year’s report. Get in touch with us through email or Twitter if you’re interested in collaboration toward raising the standard among companies involved in online content moderation.
Earlier this month, we shared our contribution to a report about content moderation and free expression written by David Kaye, the United Nations Special Rapporteur on freedom of expression. That report is now available.
While the report focuses on social media platforms that see large volumes of hate speech and misinformation, use automation to moderate content, and receive government takedown requests based on their Terms of Service, many of Kaye’s points are relevant to GitHub’s users.
For example, on how to respond to government takedown requests, the report cites GitHub’s contribution when it states:
Companies should ensure that requests are in writing, cite specific and valid legal bases for restrictions and are issued by a valid government authority in an appropriate format.
At GitHub, when we receive a government takedown request, we confirm:
- that the request came from an official government agency;
- that the official sent an actual notice identifying the content; and
- that the official specified the source of illegality in that country.
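The three confirmations above can be expressed as a short validation sketch. Everything in it (the class, the domain allowlist, and the function name) is hypothetical illustration under assumed names, not GitHub's actual tooling:

```python
from dataclasses import dataclass


@dataclass
class TakedownRequest:
    """Hypothetical model of an incoming government takedown request."""
    sender_domain: str   # domain the request arrived from
    content_urls: list   # specific content the notice identifies
    legal_basis: str     # law cited as the source of illegality locally


# Placeholder allowlist of known official government domains.
OFFICIAL_GOVERNMENT_DOMAINS = {"justice.example.gov"}


def passes_initial_review(request: TakedownRequest) -> bool:
    """Apply the three confirmations: official sender, identified
    content, and a specified source of illegality in that country."""
    came_from_agency = request.sender_domain in OFFICIAL_GOVERNMENT_DOMAINS
    identifies_content = len(request.content_urls) > 0
    cites_legal_basis = bool(request.legal_basis.strip())
    return came_from_agency and identifies_content and cites_legal_basis
```

A request missing any one of the three confirmations fails the review, which mirrors how the checklist is described: all three must hold before a request moves forward.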
Here are some other relevant points from the report.
The report’s top recommendation to companies is to recognize human rights law as “the authoritative global standard for ensuring freedom of expression on their platforms.” As we note in our contribution:
GitHub promotes the freedom of expression in our policies and in our application of those policies to specific cases, consistent with international human rights law’s articulation of the right to freedom of expression and its limitations in the International Covenant on Civil and Political Rights (ICCPR).
The ICCPR allows limitations on free expression when provided by law and necessary, including for the respect of others’ rights or where the content constitutes harassment, abuse, threats, or incitement of violence toward others.
The report calls for companies to provide public input and engagement, and use transparent rulemaking processes. We develop the rules on our platform collaboratively with our community.
The report notes that companies can develop “tools that prevent or mitigate the human rights risks” caused when national laws or demands are inconsistent with international standards.
In these situations, we look for ways to comply that are the least restrictive on human rights—for example, by asking the user to remove part of a repository instead of blocking it entirely, and by geo-blocking content only in the relevant jurisdiction.
One of Kaye’s five recommendations for governments responds to a concern we raised in our contribution. We explained that measures like the European Union’s proposal to require upload filters for copyright infringement “are overly broad in their scope and, as applied to GitHub and our user community, could be so cumbersome as to prevent developers from being able to launch their work.”
The report noted that “automated tools scanning music and video for copyright infringement at the point of upload have raised concerns of overblocking,” and made this recommendation:
States and intergovernmental organizations should refrain from establishing laws or arrangements that would require the “proactive” monitoring or filtering of content, which is both inconsistent with the right to privacy and likely to amount to pre-publication censorship.
Thanks to Special Rapporteur Kaye for his in-depth study of how human rights principles apply to content moderation on online platforms.
If you’d like to participate in the development of our policies, watch our site-policy repository and look out for posts announcing new policies for public comment on our policy blog or our Policy Twitter account. To follow takedowns in real time, watch our gov-takedowns and DMCA repositories.
RightsCon—an annual conference on human rights in the digital age—brought together more than 2,000 people from 115 countries last week in Toronto. On the first day of the conference, we joined non-profits, academics, and other tech companies for a session on working together to protect and promote human rights.
Alongside conversations on bias in artificial intelligence (AI) decision-making and cybersecurity capacity-building, we led the discussion on working with our community to develop the policies that govern the use of our site. In the face of public discourse on who should be deciding what speech is legal—and who should be held accountable for these decisions—we provided this example of how a platform can adopt rules through a transparent, democratic process.
At the session, we also highlighted several other ways in which our policy work promotes human rights, like freedom of expression and privacy. Some examples:
To promote freedom of expression, we limit censorship by making sure requestors meet our detailed requirements for takedown requests and by limiting the impact of the takedown when possible. For example, we geo-block content that isn’t illegal in all jurisdictions and, when possible, ask users to remove parts of a repository that contain infringing content, rather than blocking an entire repository. In addition, we promote the right of access to information (related to the right to free expression) and transparency by publishing transparency reports and posting takedown notices in real time in our government-takedowns and DMCA repositories. We also described at this session (and at another RightsCon session) our work on the global free expression implications of the EU’s copyright proposal.
In our submission to United Nations Special Rapporteur David Kaye’s upcoming report on content moderation and free expression, we note that our approach is consistent with international human rights law. As many speakers at RightsCon pointed out, those international standards are useful for companies looking for a baseline for evaluation that applies to users globally, without imposing one country’s norms on countless others.
Millions of developers trust us with their data—and protecting their privacy is a top priority for us. We didn’t need to change the way we handle user data to comply with the EU’s General Data Protection Regulation (GDPR), which recognizes data protection as a fundamental right. We are proudly in compliance with the GDPR ahead of the law’s deadline this Friday.
GitHub’s Statement Against Modern Slavery and Child Labor outlines the steps we take to make sure modern slavery and child labor are not in our business or supply chain. RightsCon participants were interested to hear how companies that aren’t typically associated with these abuses are taking steps to show how they prevent them, including by placing requirements on their suppliers.
Beyond these examples, a human rights perspective runs through much of our work on issues such as immigration, open source, net neutrality, and cybersecurity. Hopefully, this illustrates how important it is for tech companies to consider the human rights implications of so much of what we do.
Fresh off an invigorating week of learning and collaborating at RightsCon, we look forward to continuing our work to keep the internet free, open, and secure, and to protect human rights.
From fake news to copyright infringement, content moderation—and who should do what to address it—is all over the news and policymaking arenas. Although we are a platform that hosts primarily code uploaded by developers, many of those discussions are relevant to GitHub.
Earlier this year, United Nations Special Rapporteur on the right to freedom of opinion and expression, David Kaye, visited GitHub’s headquarters to discuss how content moderation on our platform affects free expression. His visit was part of his research for a report he will present to the United Nations Human Rights Council in June. To gather views from governments, companies, and others, Special Rapporteur Kaye issued a call for written submissions with questions on topics ranging from how companies handle takedown requests to what role automation plays (and should play) in content moderation.
In GitHub’s response to the Special Rapporteur’s questions:
We walk through our processes for handling takedown requests (government takedowns and copyright infringement notices under the Digital Millennium Copyright Act (DMCA)) and we describe how we work to reduce abuse on our platform without unnecessarily chilling speech. For instance, we geo-block content if it’s not illegal globally and we consider the right of fair use in handling DMCA takedown notices.
We highlight how we promote transparency, for example by involving our community in the development of the policies that govern use of our platform and by posting takedown notices in public repos in real time. We explain that users can appeal removal of content and that we’ll provide reasons for our decision.
We note that our approach is consistent with international human rights law—specifically Articles 19 and 20 of the International Covenant on Civil and Political Rights, which establish the right to free expression and prohibit war propaganda and advocacy of hatred that constitutes incitement to discrimination, hostility, or violence. We also explain that we designed our Community Guidelines to protect the interests of marginalized groups and encourage users to respect each other.
Finally, we explain that we open source our site policies (we’re GitHub, after all!) and hope that our approach gets recognized as a best practice that other platforms adopt.
Contributing to Special Rapporteur Kaye’s report is one way we’re working to define and build on best practices for platform moderation. We also directly participate in the discourse about content moderation, for example at last week’s Content Moderation Summit and this week at RightsCon. In addition, we continue to advocate for approaches to content moderation that promote transparency and free expression while limiting abuse.
We thank the Special Rapporteur for his thoughtful attention to this timely issue and we look forward to reading his report!
At GitHub, we believe that maintaining transparency is an essential part of our commitment to our users. For the past three years we’ve published transparency reports to better inform the public about GitHub’s disclosure of user information and removal of content.
GitHub promotes transparency by:
We hope our transparency report will interest GitHub users and contribute to broader discourse on platform governance. If you’re unfamiliar with GitHub terminology, please refer to the GitHub Glossary.
In this report, we fill you in on 2017 stats for:
New in 2017 are:
GitHub’s Guidelines for Legal Requests of User Data explain how we handle legally authorized requests, including law enforcement requests, subpoenas, court orders, search warrants, and national security orders.
A subpoena (a written order compelling someone to testify or to produce records) does not require review by a judge or magistrate. By contrast, a search warrant or court order does require judicial review.
As we note in our guidelines:
In 2017, GitHub received 51 legal requests to disclose user information, including 42 subpoenas (30 criminal and 12 civil), three court orders, and six search warrants. These include every request we received for user information, regardless of whether we disclosed information or not. Not all of these came from law enforcement; one came from a U.S. government agency, and 12 came from civil litigants requesting information about another party. We also received two cross-border data requests, as described in the next section. Of the 51 requests received, we produced information 43 times.
Governments outside the U.S. can make cross-border data requests for user information through the U.S. Department of Justice via a mutual legal assistance treaty (MLAT) or similar form of cooperation. Of the 51 legal requests for user information described above, GitHub received two requests (one court order and one search warrant) from the U.S. Department of Justice on behalf of non-U.S. government agencies through the MLAT process.
Note that legislative developments could lead to increased cross-border data requests and a need for greater oversight.
In many cases, legal requests are accompanied by a non-disclosure order, commonly referred to as a gag order, that prevents us from notifying users about the request. In 2017, of the 43 requests for which we produced information, we did so without being able to notify users 35 times. This represents a considerable increase from last year and continues a rising trend, up from 27 non-disclosure orders in 2016, seven in 2015, and four in 2014.
We did not disclose user information in response to every request we received. In some cases, the request was not specific enough, and the requesting party withdrew the request after we asked for some clarification. In other cases, we received very broad requests, and we were able to limit the scope of the information we provided.
We are very limited in what we can say about national security letters and Foreign Intelligence Surveillance Act (FISA) orders. The U.S. Department of Justice has issued guidelines that only allow us to report information about these types of requests in ranges of 250, starting with zero. As the chart below shows, in 2017 we received 0-249 notices, affecting 0-249 accounts.
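The band arithmetic behind those ranges is simple to sketch; `reporting_band` is a hypothetical helper name for illustration, not part of any real reporting tool:

```python
def reporting_band(count: int, width: int = 250) -> str:
    """Map an exact count to the DOJ-permitted reporting range:
    bands of 250 starting at zero (0-249, 250-499, ...)."""
    lower = (count // width) * width
    return f"{lower}-{lower + width - 1}"


reporting_band(3)    # -> "0-249"
reporting_band(250)  # -> "250-499"
```

Any exact count inside a band reports identically, which is why a platform receiving zero requests and one receiving 200 both disclose the same "0-249" figure.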
Below, we describe two main categories of requests we receive to remove or block user content: government takedown requests and DMCA takedown notices.
From time to time, GitHub receives requests from governments to remove content that they judge to be unlawful in their local jurisdiction (government takedown requests). When we block content at the request of a government, we post the official request that led to the block in a publicly accessible repository. When we receive a request, we confirm whether:
- the request came from an official government agency;
- the official sent an actual notice identifying the content; and
- the official specified the source of illegality in that country.
If we believe the answer is yes to all three, we block the content in the narrowest way possible. For instance, we restrict the removal to only the jurisdictions where the content is illegal. We then post the notice in our government takedowns repository, creating a public record where people can see that a government asked GitHub to take down content.
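The narrow-blocking idea reduces to a small visibility check. This is a minimal sketch under assumed names (`is_visible`, two-letter jurisdiction codes), not GitHub's actual implementation:

```python
def is_visible(viewer_jurisdiction: str, blocked_jurisdictions: set) -> bool:
    """Geo-blocking sketch: hide content only in jurisdictions where a
    valid government request found it unlawful; leave it up elsewhere."""
    return viewer_jurisdiction not in blocked_jurisdictions


# Hypothetical example: content blocked only in the requesting jurisdiction.
blocked = {"RU"}
is_visible("RU", blocked)  # -> False: blocked where it is illegal
is_visible("DE", blocked)  # -> True: still visible everywhere else
```

The design choice is that the block set, not the content, carries the restriction, so the same content remains available to every viewer outside the affected jurisdictions.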
In 2017, GitHub received eight requests, all from Russia, resulting in eight projects being taken down or blocked: all or part of six repositories, one gist, and one website.
Most content removal requests we receive are submitted under the DMCA, which provides a method for copyright holders to ask GitHub to take down content they believe is infringing. The user who posted the content can then send a counter notice stating that the takedown was erroneous and asking for the content to be reinstated. Each time we receive a complete DMCA takedown notice, we redact any personal information and post it to a public DMCA repository.
Our DMCA Takedown Policy explains more about the DMCA process, as well as the differences between takedown notices and counter notices. It also sets out the requirements for complete requests, which include that the person submitting the notice take into account fair use.
In 2017, GitHub received and processed 1,380 DMCA complete takedown notices and 55 complete counter notices or retractions, for a total of 1,435. In the case of takedown notices, this is the number of separate notices where we took down content or asked our users to remove content.
The notices, counter notices, retractions, and reversals we processed look like this (by month):
From time to time, we receive incomplete or insufficient notices regarding copyright infringement. Because these notices don’t result in us taking down content, we don’t currently keep track of how many incomplete notices we receive, or how often our users are able to work out their issues without sending a takedown notice.
Often, a single takedown notice can encompass more than one project. So, we looked at the total number of projects, such as repositories, gists, and Pages sites, that we had taken down due to DMCA takedown requests in 2017. The projects we took down, and the projects that remained down after we processed retractions and counter notices, looked like this (by month):
Based on DMCA data we’ve compiled over the last few years, we’ve seen an increase in DMCA notices received. This isn’t surprising given that the GitHub community also continues to grow over time. When we overlay the number of DMCA notices with the approximate number of registered users over the same period of time, we can see that the growth in DMCA notices correlates with the growth of the community.
Transparency reports by internet platforms have served to shine a light on censorship and surveillance. The very first of the genre, Google’s 2010 report, stated “greater transparency will lead to less censorship.” In 2018, platforms are under far greater pressure to censor than they were then, and transparency reports have the potential to instead show how willing platforms are to cooperate with censors. More thorough transparency can mitigate this risk—particularly if platforms, users, advocates, academics, and others interested in free speech, privacy, law enforcement, and more use the data to engage in shared conversations that acknowledge common goals.
As the beginning of this report reflects, GitHub sees transparency reports as necessary, but not sufficient, for good governance. We look forward to continuing to engage in discussions with those stakeholders, including our users, as we strive to promote transparency on our platform.
We hope you enjoyed this year’s report and encourage you to let us know if you have suggestions for additions to future reports.