The Git project has disclosed CVE-2018-17456, a vulnerability in Git that can cause arbitrary code to be executed when a user clones a malicious repository. Git v2.19.1 has been released with a fix, along with backports in v2.14.5, v2.15.3, v2.16.5, v2.17.2, and v2.18.1. We encourage all users to update their clients to protect themselves.
Until you’ve updated, you can protect yourself by avoiding submodules from untrusted repositories. This includes commands such as git clone --recurse-submodules and git submodule update.
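As a concrete sketch of that interim mitigation (using a local repository as a stand-in for an untrusted remote), you can clone without submodules and read any .gitmodules file yourself before initializing anything:

```shell
# Interim mitigation sketch: clone WITHOUT --recurse-submodules, then
# review any .gitmodules file before running `git submodule update`.
git init -q untrusted-src            # local stand-in for an untrusted remote
( cd untrusted-src &&
  echo demo > README &&
  git add README &&
  git -c user.name=demo -c user.email=demo@example.com commit -qm init )
git clone -q untrusted-src checkout  # note: no --recurse-submodules here
cd checkout
# Inspect the submodule configuration (if any) before trusting it:
if [ -f .gitmodules ]; then cat .gitmodules; else echo "no submodules declared"; fi
```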
GitHub Desktop versions 1.4.1 and older included an embedded version of Git that was affected by this vulnerability. We encourage all GitHub Desktop users to update to the newest versions (1.4.2 and 1.4.3-beta0), available today in the Desktop app.
Atom included the same embedded Git and was also affected. Releases 1.31.2 and 1.32.0-beta3 include the patch.
Ensure you’re running the latest Atom release.
In order to be protected from the vulnerability, you must update your command-line version of Git, and any other application that may include an embedded version of Git, as they are independent of each other.
Neither GitHub.com nor GitHub Enterprise is directly affected by the vulnerability. However, as with previously discovered vulnerabilities, GitHub.com will detect malicious repositories and will reject pushes or API requests attempting to create them. Versions of GitHub Enterprise with this detection will ship on October 9.
This vulnerability is very similar to CVE-2017-1000117, as both are option-injection attacks related to submodules. In the earlier attack, a malicious repository would ship a .gitmodules file pointing one of its submodules to a remote repository with an SSH host starting with a dash (-). The ssh program, spawned by Git, would then interpret that as an option. This attack works in a similar way, except that the option-injection is against the child git clone itself.
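To make the attack shape concrete, here is what such a .gitmodules file might look like, together with an illustrative check (a sketch for this post, not Git's actual implementation) that flags dash-prefixed submodule URLs:

```shell
# A malicious .gitmodules: the "url" value is really an option that a
# child `git clone` would interpret, rather than a repository address.
cat > .gitmodules <<'EOF'
[submodule "evil"]
    path = sub
    url = -u./payload
EOF

# Illustrative check (not Git's actual code): list submodule url values
# and flag any that begin with a dash.
git config --file .gitmodules --get-regexp '\.url$' |
  awk '$2 ~ /^-/ { print "suspicious:", $2 }' > suspicious.txt
cat suspicious.txt
```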
The problem was reported on September 23 by @joernchen, both to Git’s private security list and to GitHub’s Bug Bounty program. Developers at GitHub worked with the Git community to develop a fix.
The basic fix was clear from the report. However, due to the similarity to CVE-2017-1000117, we also audited all of the .gitmodules values and implemented stricter checks as appropriate. These checks should prevent a similar vulnerability in another code path. We also implemented detection of potentially malicious submodules as part of Git’s object quality checks (which was made much easier by the infrastructure added during the last submodule-related vulnerability).
The coordinated disclosure date of October 5 was selected by Git developers to allow packagers to prepare for the release. This also provided hosting sites (with custom implementations) ample time to detect and block the attack before it became public. Members of the Git community checked the JGit and libgit2 implementations. Those are not affected by the vulnerability because they clone submodules via function calls rather than separate commands.
We were also able to use the time to scan all repositories on GitHub for evidence of the attack being used in the wild. We’re happy to report that no instances were found (and now, with our detection, none can be added).
Please update your copy of Git soon, and happy cloning!
Software developers are most productive when software development is inclusive and accessible. At GitHub, we conduct research in machine learning, design, and infrastructure to make sure everyone can do their best work with the next generation of developer tools and workflows.
This research can take considerable time to reach you, our end users, if it reaches you at all. We rigorously evaluate products for stability, performance, and security. And many experiments don’t meet our success criteria for product release, even when they present a path forward for future innovation.
Although we can’t share everything we do, we’ve launched a collection of demonstrations highlighting our most exciting research projects—and the ideas behind them—with Experiments. We hope these will not only give you insight into our research but inspire you to think audaciously about the future of software development.
For our first demo, we’ve chosen Semantic Code Search. We’ve used machine learning to build semantic representations of code that allow you to use natural language to search for code by intent, rather than just keyword matching. See our blog post for additional detail on how this works.
We’re just getting started, so stay tuned for more examples. If this research excites you as much as it excites us, why not join our team?
Abhijeet Pratap Singh is a student at the Indian Institute of Information Technology in Tiruchirappalli, India. He was selected to be one of the Google Summer of Code (GSoC) students for the Probot project. In this post, Abhijeet recounts his experience working with the Probot Team at GitHub and what he learned about working with other developers on an open source project.
This spring, I was selected to participate in Google Summer of Code (GSoC), a program that pairs student developers with open source projects. I was placed with the Probot Team at GitHub. Probot is an open source framework used to build applications for GitHub based on NodeJS. These applications improve and automate your workflow while working on GitHub.
I’m a Computer Science and Engineering undergraduate at the Indian Institute of Information Technology. When I entered the Institute, I was very interested in knowing more about software. I explored different opportunities and found out about GSoC through Quora earlier this year.
I learned about Probot while researching the different projects partnering with GSoC. I had some previous experience working with bots and NodeJS and decided to try out applications built with Probot. I found the Probot community, a Slack community of developers who use Probot, which pointed me to the Probot Summer of Code and Probot’s GSoC project ideas. The project idea list helped me identify what project I should submit a proposal for.
I initially focused on the Twitter integration proposed in Probot’s ideas repo. After a few days of tinkering, I successfully developed my first Twitter integration bot. I recorded a screencast to log my progress with the bot and shared it with Brandon Keepers (@bkeepers), Jason Etcovitch (@JasonEtco), and Gregor Martynus (@gr2m).
I later submitted proposals for the Twitter integration and Weekly Digest projects when the student application period started. My proposal for the Weekly Digest project was accepted, and I became the Student Developer for Probot. The results were announced on April 23. I was so happy to see that my project was accepted and was really excited for this summer. Gregor Martynus (@gr2m) and Wilhelm Klopp (@wilhelmklopp) were assigned as my mentors.
GSoC kicked off in late April, and I spent more time exploring the community and its best practices. I also met my mentors over video conference. We discussed a plan to move forward with and implement our projects. Then we created the Weekly Digest repository in order to track progress.
The coding round began in mid May. I started by opening a few issues and pull requests, making some commits, and exploring GitHub’s REST API and GitHub’s REST API client for NodeJS. My mentors and I met weekly to go over the work I did and help out whenever I got stuck. It was a great way to discuss best practices, standards, and tools that developers use.
After continued work on the project, the app neared a pre-release state, and I released an alpha version of the Weekly Digest.
Here’s a GIF of the Weekly Digest app in action:
When you install the Weekly Digest in your (or your organization’s) repository, it curates the following data and publishes it as an issue:
The app, as the name suggests, generates these digests and publishes them on a weekly basis, typically on a Sunday. You can change the default configuration of the app by adding a .github/weekly-digest.yml file in your GitHub repository, which allows you to configure the publish date and the specific information included in the digest.
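A configuration file along these lines illustrates the idea; the key names below are examples of the kinds of options the app exposes, so treat them as placeholders and check the Weekly Digest README for the canonical names and defaults:

```yaml
# .github/weekly-digest.yml — illustrative configuration sketch
publishDay: sun              # which day of the week to publish the digest
canPublishIssues: true       # include issue activity
canPublishPullRequests: true # include pull request activity
canPublishCommits: true      # include commit activity
canPublishContributors: true # include the contributor list
```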
Becoming a software developer has been my dream ever since I was a kid. My mentors were happy to answer all of my questions and took time out of their busy schedules to meet with me. Through GSoC, I learned how a project is maintained and how developers work together to deliver solutions. And I got to develop a love for open source through my work with Probot!
I was really lucky to be guided by my mentors, Gregor Martynus (@gr2m) and Wilhelm Klopp (@wilhelmklopp). Special thanks to them, Brandon Keepers (@bkeepers), Jason Etcovitch (@JasonEtco), and the awesome Probot community for accepting me as a Student Developer for this year’s GSoC!
Previously, we checked our asset files into the GitHub repository. This workflow was restrictive and confusing for contributors who might want to iterate on or update an Octicon. We wanted anyone to be able to make contributions, but the old process required contributors to get a whole set of tools and steps working before they could contribute.
To support your project’s contributors, it’s important to make the contributing experience as frictionless as possible. Migrating our Octicons to Figma let us cut out painful steps in our previous workflow. Having Figma’s API available for automating the work has allowed contributors to use powerful, platform-agnostic design tools without any overly complex setup.
Robots are great for doing repeatable tasks, and handing that work off to automated systems frees us up to think about the big picture. We lean on continuous integration to build, export, and distribute the icons.
On every pull request we use CI to export our icons from the file and distribute alpha versions of the libraries.
We also take advantage of Probot, an out-of-the-box robot that makes automating GitHub tasks easy.
Probot has the ability to check our pull requests on Octicons and look for changes in the Figma source URL. When this occurs, our Probot app will query Figma’s platform and look for changes to any of the icons. When it finds those changes, it will comment on the pull request with before and after images. This makes the process easier for both contributors and maintainers.
The API’s potential is the most exciting part, and we can’t wait to see how it improves our workflow. The Design Systems Team at GitHub are designers with an engineering focus. We want to keep our components in code, then make them available for our designers to prototype.
The upcoming Figma write API will allow us to maintain our component library in code and export those as Figma components. Using a team library we can publish updates and make them available to the GitHub Product Design Team to use in their design mockups and prototypes.
Last month GitHub celebrated the fourth year of our Security Bug Bounty program. As we’ve done in the past, we’re sharing some details and highlights from 2017 and looking ahead to where we see the program going in 2018.
Last year was our biggest year yet as our Bug Bounty program continued to grow in participation by researchers, program initiatives, and the rewards paid out.
Diving straight into the numbers, we can review the details of this growth. In 2017, we reviewed and triaged a total of 840 submissions to the program. Of these submissions, we resolved and rewarded a total of 121 reports with an average payout of $1,376 (and swag!). Compared to our previous statistics for 2016, this was a significant increase from 48 out of 795 reports being resolved. In 2017, our rate of valid reports increased from 6% to almost 15%.
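Those percentages follow directly from the raw counts, 48 of 795 in 2016 and 121 of 840 in 2017; a quick check:

```shell
# Sanity-check the acceptance rates quoted above from the raw counts
awk 'BEGIN { printf "2016: %.1f%%\n", 100 * 48 / 795 }'   # 6.0%
awk 'BEGIN { printf "2017: %.1f%%\n", 100 * 121 / 840 }'  # 14.4%
```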
Our total payouts also saw a significant increase, from $95,300 in 2016 to $166,495 in 2017. We attribute this both to the increased number of valid reports and to a re-evaluation of our payout structure in October. Corresponding with HackerOne’s Hack the World competition, we doubled our payout amounts across the board, raising our minimum and maximum payouts to $555 and $20,000 and bringing our bug bounty in line with the industry’s top programs.
To accelerate our program’s growth in 2017, we launched a number of initiatives to help engage researchers. Among the changes to the program was the introduction of GitHub Enterprise to the scope of the Bug Bounty program, which allowed researchers to focus on areas of our applications that may not be exposed on GitHub.com or are specific to certain enterprise deployments. In the beginning of 2017, a number of reports impacting our enterprise authentication methods prompted us to not only focus on this internally, but also identify how we could engage researchers to focus on this functionality. To promote a more targeted review of these critical code paths we kicked off two new initiatives beyond our public Bug Bounty program.
Providing researcher grants is something that has been on our radar since Google launched their Vulnerability Research Grants in 2015. The basic premise is that we pay a fixed amount for a researcher to dig into a specific feature or area of the application. In addition to the fixed payment for the grant, any vulnerabilities identified are also paid out through the Bug Bounty program. Early in the year, we identified a researcher who specializes in assessing troublesome enterprise authentication methods. We reached out and launched our first researcher grant, and we couldn’t have been happier with the results: it provided a depth of expertise and review that was well worth the extra monetary incentive.
In March 2017 we launched GitHub for Business, bringing enterprise authentication to organizations on GitHub.com. We used this feature launch as an opportunity to roll out a new part of the Bug Bounty program: private bug bounties. Through a private program on HackerOne, we reached out to all researchers who had previously participated in our program and allowed them access to this functionality before its public launch. This added to our internal pre-ship security assessments with review by external researchers and helped us identify and remediate issues before general exposure. With the extra review, we were able to limit the impact of vulnerabilities in production while also providing fresh code and functionality for researchers to look into.
Internal improvements to the program have helped us more efficiently triage and remediate submissions from researchers. ChatOps and GitHub-based workflows are core to how we deal with incoming submissions. As soon as new ones arrive, we receive alerts in Slack using HackerOne’s Slack integration. From there, we can triage issues directly from chat, letting the team know which issues are critical and which can wait until later. At the end of our triage workflow, we use ChatOps to issue rewards through HackerOne, so we can close the loop and pay researchers as quickly as possible.
To support these workflows, we’ve continued to build on our Ruby on Rails HackerOne API client and extensively use these and GitHub APIs in our internal processes.
So far, these improvements have made us significantly more efficient. Our average response time in 2017 was 10 hours, valid issues were triaged to developers on average within two days, and bounties were rewarded on average in 17 days. Given the time and effort that researchers dedicate to participating in our program, we feel great about these improvements. And in 2018, we’ll continue to refine our process. We’re always looking for ways to make sure our researchers receive a prompt and satisfactory response to their submissions.
Also in 2018, we’re planning to expand the initiatives that proved so successful last year. We’ll be launching more private bounties and research grants to focus researchers on specific features, both before and after they publicly launch. Later in the year, we’ll announce additional promotions to keep researchers interested and excited to participate.
Given the program’s success, we’re also looking to see how we can expand its scope to help secure our production services and protect GitHub’s ecosystem. We’re excited for what’s next and look forward to triaging and fixing your submissions this year!