This is abridged content from October 2023’s Insider newsletter. Like what you see? Sign up for the newsletter to receive complete, unabridged content in your inbox twice a month. Sign up now >
Are you ready to unlock the secrets of organization, collaboration, and project magic? Buckle up, because we’ve got a handful of GitHub Projects tips and tricks that will turn you into a project management wizard! 🧙‍♂️💼 Keep reading for a list of things you can do with GitHub Projects:
Some folks prefer to work in the terminal, and with the GitHub CLI’s project command, you can manage and automate workflows from the command line. For example, you can create a new project with a command like gh project create. Then, you can add new issues straight to the board using gh issue create with its --project flag, or add existing issues with gh project item-add, making it easy to manage and track your project’s progress from the command line.
If you often find yourself recreating projects with similar content and structure, you can set a project as a template when creating new projects. To set your project as a template, navigate to the project “Settings” page, and under the “Templates” section, toggle on Make template. This will turn the project into a template that can be used with the green Use this template button at the top of your project or when creating a new project.
If you’re an open source maintainer or a developer with multiple clients, you may be working across various organizations at a time. This also means you have multiple issues to keep track of, and GitHub Projects can help you collate issues from any organization onto a single project. You can do this in one of two ways:
Rather than spending time manually updating individual items, you can edit multiple items at once with the bulk editing feature. Let’s say you wanted to assign multiple issues to yourself. On the table layout, assign one issue, highlight and copy the contents of the cell, then select the remaining items you want to be assigned and paste the copied contents. And there you have it: you just assigned yourself to multiple issues at once. Check out this GIF for a visual representation:
Want even more tips and tricks? Check out this blog post for 10 more GitHub Projects tips, or learn how we use GitHub Projects to standardize our workflows and stay aligned. You’re now equipped to work your magic with GitHub Projects!
Want to receive content like this twice a month, right in your inbox? Sign up for the newsletter now >
The post Insider newsletter digest: 4 things you didn’t know you could do with GitHub Projects appeared first on The GitHub Blog.
Starting today, code scanning autofix will be available in public beta for all GitHub Advanced Security customers. Powered by GitHub Copilot and CodeQL, code scanning autofix covers more than 90% of alert types in JavaScript, TypeScript, Java, and Python, and delivers code suggestions shown to remediate more than two-thirds of found vulnerabilities with little or no editing.
Our vision for application security is an environment where found means fixed. By prioritizing the developer experience in GitHub Advanced Security, we already help teams remediate 7x faster than traditional security tools. Code scanning autofix is the next leap forward, helping developers dramatically reduce time and effort spent on remediation.
Even though applications remain a leading attack vector, most organizations admit to an ever-growing number of unremediated vulnerabilities that exist in production repositories. Code scanning autofix helps organizations slow the growth of this “application security debt” by making it easier for developers to fix vulnerabilities as they code.
Just as GitHub Copilot relieves developers of tedious and repetitive tasks, code scanning autofix will help development teams reclaim time formerly spent on remediation. Security teams will also benefit from a reduced volume of everyday vulnerabilities, so they can focus on strategies to protect the business while keeping up with an accelerated pace of development.
Want to try code scanning autofix? If your organization is new to GitHub or does not yet have GitHub Advanced Security (or, its prerequisite, GitHub Enterprise), contact us to request a demo and set up a free trial.
When a vulnerability is discovered in a supported language, fix suggestions will include a natural language explanation of the suggested fix, together with a preview of the code suggestion that the developer can accept, edit, or dismiss. In addition to changes to the current file, these code suggestions can include changes to multiple files and the dependencies that should be added to the project.
Want to learn more about how we do it? Read Fixing security vulnerabilities with AI: A peek under the hood of code scanning autofix.
Behind the scenes, code scanning autofix leverages the CodeQL engine and a combination of heuristics and GitHub Copilot APIs to generate code suggestions. To learn more about autofix and its data sources, capabilities, and limitations, please see About autofix for CodeQL code scanning.
We’ll continue to add support for more languages, with C# and Go coming next. We also encourage you to join the autofix feedback and resources discussion to share your experiences and help guide further improvements to the autofix experience. Together, we can help move application security closer to a place where a vulnerability found means a vulnerability fixed.
To help you learn more, GitHub has published extensive resources and documentation about the system architecture, data flow, and AI policies governing code scanning autofix.
If you want to give code scanning autofix a try, but your organization is new to GitHub or does not yet have GitHub Advanced Security (or, its prerequisite, GitHub Enterprise), contact us to request a demo and set up a free trial.
The post Found means fixed: Introducing code scanning autofix, powered by GitHub Copilot and CodeQL appeared first on The GitHub Blog.
To best collaborate across diverse teams, we use GitHub Projects to plan, manage, and provide updates for each release right next to our code. Project templates allow us to standardize our process and create a reusable framework to ensure the success of each release, with status updates keeping stakeholders informed of progress along the way.
Creating and using project templates ensures each upcoming release can get off the ground quickly and stay smooth sailing all the way to general availability. We don’t want to spend time manually setting up a new project or creating tasks for each release (given there are a lot of them!), so we build templates to track repeatable tasks and establish best practices for our teams.
There are two kinds of templates we use to get started:
While we have created templates that our teams use for our own feature release management, let’s dive into how teams across all of GitHub are utilizing project templates!
At GitHub, we build project templates to standardize our workflows across the organization and create consistency within and across teams, with a growing collection of over 50 templates that we have built to help us accomplish our repeatable tasks. Some of these templates include:
Our template collection at GitHub continues to grow and expand to more use cases across teams and departments, so it’s one click and you’re off!
I use our “Product Feature Release” template to plan out and manage each upcoming release. Once we have officially kicked off and are working through our issues and pull requests, I want to make sure that all involved teams and stakeholders are kept in the loop on progress along the way. To do this, we use project status updates to keep everyone aligned on how the release is progressing, when it is expected, and any risks that we should be aware of, all in a single place in the project.
Status updates allow us to provide short and regular summaries on the progress of the release, such as the Status, Start date, and Target date. We tend to provide additional high-level details as well, including callouts to specific people and teams using @ mentions.

By providing regular updates, we have a feed of history, so our stakeholders can follow along and understand why the target date shifted, why the status moved from On track to At risk, or what the cross-functional dependencies are.
I can then see the status of all of my relevant work and projects all in a single place, so I can drill in to understand more details.
We are continuing to build our collection of project templates at GitHub to help us standardize our processes across teams, spanning feature releases, employee onboarding, and building and sharing our roadmaps. We encourage teams and organizations to build their own library of project templates to help establish and share best practices, and to share status updates on their projects to easily communicate progress with their teams and stakeholders, all in a single place.
Harness the power of GitHub Projects. Learn more or get started now.
The post How we’re using GitHub Projects to standardize our workflows and stay aligned appeared first on The GitHub Blog.
Since the early days of GitHub Copilot, our customers have asked us for a copilot that is customized to their own organization’s code and processes. Developers spend more time deciphering than shipping when they can’t pinpoint and solve the issues, bugs, or vulnerabilities that are unique to their organization’s codebase. What’s more, developers often write code for only a couple of hours a day and, instead of being creative, are bogged down with mundane tasks throughout their day. The inaccessibility of institutional knowledge acts as a blockade, preventing developers from fully exercising their creativity and building more for you.
We’re changing that.
Just by integrating generative AI into the editor, GitHub Copilot has quickly defined a new age of software development, resulting in clear gains in developer productivity and happiness. Today, we are bringing the next frontier of developer tools with the general availability of GitHub Copilot Enterprise, a companion that places the institutional knowledge of your organization at your developers’ fingertips. Now, team members can ask questions about public and private code, get up to speed quickly with new codebases, build greater consistency across engineering teams, and ensure that everyone has access to the same standards and work that’s previously been done. Let’s jump in.
Ubiquitous, customized, and tailored to you
Learn more about what’s included with GitHub Copilot Enterprise >
GitHub Copilot Enterprise comes with three core features:
1. Gain a deeper understanding of your organization’s unique codebase. Copilot Enterprise streamlines code navigation and comprehension for developers, enabling faster feature implementation, issue resolution, and code modernization. It empowers junior developers to contribute quicker, assists senior developers in handling live incidents, and aids in modernizing aging codebases by offering clear code summaries, relevant suggestions, and quick answers to queries about code behavior.
2. Quickly access organizational knowledge and best practices. Copilot Enterprise integrates chat directly into GitHub.com, enabling developers to ask questions about your codebase and receive answers in natural language, guiding them to relevant documentation or existing solutions. This can facilitate rapid iteration at scale while improving code with personalized assistance and suggestions tailored to an organization’s specific codebase and standards.
3. Review pull requests faster. With generated pull request summaries, developers can spend less time drafting and more time merging. And with Copilot Enterprise’s capability to analyze pull request diffs, reviewers can quickly get up to speed with proposed changes and save time understanding the changes while spending more time providing valuable feedback.
As the technology landscape continues to rapidly evolve, we are expanding the capabilities of GitHub Copilot to not only understand your own internal knowledge bases, but to bring in the latest information from the internet as well. By integrating Bing search directly into Copilot Chat (available in beta for GitHub Copilot Enterprise), you can find the latest software development-related information, like updates to CSS or JavaScript frameworks. This means GitHub Copilot can now help your developers explore their curiosity and gain outside knowledge near instantly, at scale.
We’re already hearing from engineering leaders about the benefits they’re seeing from Copilot Enterprise:
GitHub Copilot is becoming an integral part of the developer experience. Its capabilities, such as quickly understanding existing codebases, analyzing code, and accessing knowledge bases, enable developers to concentrate more on what truly counts: delivering impactful results. And not only can they be more productive, developers will be happier and more fulfilled, too.
Our vision extends to making conversational capabilities ubiquitous by integrating context-driven and customized assistance across the GitHub platform. Throughout the development and evolution of GitHub Copilot, we have always placed a priority on security, privacy, compliance, and transparency—and we’ve made that a key focus of GitHub Copilot Enterprise as well. We do not use any of your organization’s private repositories or prompts and suggestions to train the machine learning models that power our products, unless you expressly instruct us to do so, for example with custom models.
Building on the proven success of GitHub Copilot
In the last year, we collaborated with Accenture to evaluate the impact of GitHub Copilot on accelerating innovation within a real-world, enterprise environment. Here’s what we found:
And this was all just with GitHub Copilot as an autocomplete function in the editor. With GitHub Copilot Enterprise, we are building on top of the demonstrated results, dramatically multiplying existing GitHub Copilot productivity gains by adding extensive customization to enable organizations and engineering teams to accomplish more, faster and happier.
The age of copilots has begun. In this new frontier of software development, copilots are ubiquitous, customized, and always at your side. With GitHub Copilot Enterprise, we’re bringing the industry’s premier AI developer tool to every organization for just $39 per user per month. Built with the world’s leading large language model, customized to your organization, and deeply integrated into GitHub’s surfaces, GitHub Copilot Enterprise brings immense value to every organization.
Alongside GitHub Enterprise, our end-to-end developer platform, organizations of any size can now start integrating generative AI across the software development lifecycle—from understanding existing code and internal best practices to fixing bugs and improving functionality to accelerated code reviews and beyond.
Collaboration between humans and intelligent machines will redefine the possibilities of innovation, unlocking novel solutions, and accelerating the pace of software development like never before. Ready to harness the power of GitHub Copilot Enterprise today? Learn more or get started now.
The post GitHub Copilot Enterprise is now generally available appeared first on The GitHub Blog.
This is abridged content from September 2023’s Insider newsletter. Like what you see? Sign up for the newsletter to receive complete, unabridged content in your inbox twice a month. Sign up now >
Whether you’re a software architect, QA engineer, technical lead, or any other type of developer, you’ll most likely benefit from building a controller function in Node.js. A controller function is a specific piece of code within an application’s architecture—often found in the Model-View-Controller (MVC) pattern—that facilitates communication between the user and the underlying application logic.
Think of the controller function as the master chef in your code kitchen. It takes in your ingredients (user input), works its culinary magic behind the scenes by interacting with the data model, and serves up a deliciously updated UI. So, whether you’re coding up a storm or cooking up code, the controller function is your secret sauce to a flavorful application.
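To make that concrete, here is a minimal sketch of what such a controller function might look like in an Express app. The route, model, and field names are illustrative assumptions, not code from the newsletter:

```typescript
import express, { Request, Response } from "express";

const app = express();
app.use(express.json());

// Stand-in "model": in a real MVC app this would be backed by a database.
const recipes: { id: number; name: string }[] = [];

// The controller function: it takes user input, updates the model,
// and returns the data the view (or API client) needs.
function createRecipe(req: Request, res: Response) {
  const { name } = req.body;
  if (!name) {
    return res.status(400).json({ error: "A recipe name is required" });
  }
  const recipe = { id: recipes.length + 1, name };
  recipes.push(recipe);
  return res.status(201).json(recipe);
}

// Wire the controller up to a route.
app.post("/recipes", createRecipe);

app.listen(3000, () => console.log("Listening on port 3000"));
```

The controller stays thin: it validates input, delegates to the model, and shapes the response, which is exactly the kind of boilerplate GitHub Copilot is good at drafting for you.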
And here’s a little extra food for thought: did you know that you can build a controller on the fly with GitHub Copilot?
Follow this recipe to cook up your own controller function in Node.js with our new AI pair programming tool:
One of our very own developer advocates, @LadyKerr, used this exact recipe to help concoct a GPT-3 application with GitHub Copilot that finds the nutritional content of any recipe with AI. You can find the source code and build it yourself here.
Or, if you’re a more visual learner, you can click through to follow the steps in this video:
Ready to whip up a controller function of your own? Let’s get cookin’! Sign up for GitHub Copilot.
Want to receive content like this twice a month, right in your inbox? Sign up for the newsletter now >
The post Insider newsletter digest: Cook up a controller with GitHub Copilot appeared first on The GitHub Blog.
We recently released the latest iteration (v4) of the actions to upload and download artifacts in GitHub Actions. Artifacts are a powerful tool to augment your GitHub Actions workflows, allowing data to be persisted between jobs, and even after a workflow run has completed. For example, artifacts can store test results, code coverage reports, binary files, core dumps, and more. There are approximately four million artifacts created every day on GitHub, and we’re proud to announce they have become up to 10x faster, more reliable, and even more flexible to use.
In this post, we’ll give a brief overview of how the artifacts feature in GitHub Actions came to be, how it’s evolved over the years, and the decisions we made in rebuilding it to better support our customers’ needs.
When GitHub Actions launched in 2019, our initial versions of upload-artifact and download-artifact were extremely simple. In this version, aptly referred to as v1, we utilized existing functionality and plugins within the GitHub Actions runner. For example, the runner’s log upload service was adapted to facilitate artifact uploads and downloads. This approach allowed us to provide crucial download and upload capabilities as soon as GitHub Actions was launched.
Because the v1 upload and download functionality was so tightly coupled with the runner, releases of the artifact actions and the runner were also tied together. This proved to be problematic, so we decided to decouple the upload and download functionality from the runner and rewrite everything in Node.js, just like other actions. However, we wanted developers to be able to leverage artifact upload and download functionality in their own actions, so we decided to create an @actions/artifact npm package with all the functionality that anyone could use. This npm package served as the foundation for v2 of upload-artifact and download-artifact. There were also v3 releases of upload-artifact and download-artifact; however, they were largely identical to the v2 releases, just with a newer Node runtime. Fundamentally, v1-v3 were all powered by the same internal APIs to upload and download artifacts, based on the runner’s log upload service.
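As a rough illustration of what building on that package looks like, here is a small sketch using the current major version of @actions/artifact from a custom action or script step. The artifact name, file paths, and options are placeholders, and the exact API surface has changed between package versions:

```typescript
import { DefaultArtifactClient } from "@actions/artifact";

// Note: this client only works inside a GitHub Actions job, where the
// runtime token and URLs it needs are available in the environment.
const artifact = new DefaultArtifactClient();

async function run(): Promise<void> {
  // Upload a couple of files under a single artifact name (placeholders).
  const { id, size } = await artifact.uploadArtifact(
    "test-results",
    ["results/summary.json", "results/junit.xml"],
    "results",
    { retentionDays: 7 }
  );
  console.log(`Uploaded artifact ${id} (${size} bytes)`);

  // Download it again, for example from a later job in the same workflow.
  const { downloadPath } = await artifact.downloadArtifact(id!);
  console.log(`Downloaded to ${downloadPath}`);
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```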
The retrofitted log upload service that powers v1-v3 artifact upload/download was fine for small uploads, and it covered basic customer needs, but it wasn’t without its drawbacks. The following issues became recurring pain points for our users, so it was essential for us to address them in the latest release.
The first three versions of the actions to upload and download artifacts were initially created to facilitate sharing artifacts between isolated jobs within the same workflow run. Because of this, an artifact is scoped to an entire workflow run and is not “finalized” until the very end of that run. This is necessary for things like billing, where the size of an artifact could change between jobs, and it wouldn’t be until the end of the entire workflow run that the final size would be known. Scoping artifacts to the entire workflow run meant that artifacts would not be available in the UI or REST API until after the workflow run completed. Many users, however, wanted to share artifacts with jobs outside of the current workflow run. This inflexibility made it difficult to build additional processes on top of artifacts during the course of a workflow run, such as viewing artifacts produced by a “build” job before an approval gate that uses those artifacts in a “release” job. We often found users working around this by calling the internal APIs, which are not publicly documented and lack strict API stability, or by polling public APIs from other workflow runs until a run completed and its artifacts could be listed; neither of these workarounds should be necessary.
In the previous upload-artifact versions, it’s possible to upload to the same named artifact multiple times, in multiple jobs, concurrently. This, unfortunately, made it extremely easy to accidentally overwrite files with the same name in matrix scenarios, causing undefined behavior with concurrent writes to different chunks of a file.
Another common case of corruption was transient errors during artifact downloads (via zip). This is due to the nature of how artifact archives are generated for downloads in the previous versions. In the backend blob storage, v3 and below are stored as loose files. On upload, files are gzipped, chunked, and reassembled by the backend service before their final rest in blob storage. When artifact zips are requested via the REST API or the UI, they must be dynamically generated and populated with each file. This design not only increased the likelihood of transient network errors and corruption but also prevented us from providing any sort of checksum or estimated size for the artifact archive.
Often, it was unclear how the reported and billed size of an artifact, the size of the files on upload, and the size of the zip archive downloaded from the UI were computed. As mentioned previously, artifacts in v3 and below are stored in the blob backend as loose files, and the zip is generated dynamically on download. Depending on the type of content, gzip compression could cause the reported size to be drastically different from the actual size and both of those sizes are completely different from the dynamically generated archive size.
The most frequently heard pain point related to v3 and below was the overall speed of artifact download and upload operations. There are two specific cases that can cause major slowdowns:
This was especially a problem for our customers on self-hosted runners outside our hosted compute environment, where it worsened the speed of both upload and download operations.
Our primary focus with the new release was speed. To achieve better operations holistically, we had to simplify the entire process and remove as many moving parts as possible.
First, we eliminated having a proxy service in between uploads from the runner and blob storage. Our backend API delegates shared access signatures (SAS) based on token authorization to a specific file path in blob storage. This allows direct, secure, and limited access for the runner clients to upload the artifact. The SAS is minted against a well-known path, which is how we properly scope customer data and track usage of the uploaded content size. The process is exactly the same for downloads, except the SAS is read only. This is agnostic to all clients, so the same process applies to runner, UI, and REST API downloads.
As part of the above solution, we scoped all the artifact content to a single archive zip on upload. The runner assembles the zip archive in memory, streaming in files as part of the upload specification and uploading it to blob storage in chunks. This allows us to better calculate the file size, as well as compute a checksum of the content. And since this is a singular file upload, it saves precious time on all the network round trips that previously had to be made for every single file. We also have additional inputs, like compression-level, that can be tweaked to trade upload speed against artifact size, depending on how well the content can be compressed.
With all of these improvements, it’s night and day for the worst-case scenarios, like uploading a hefty node_modules directory.
And now that our file is already stored in blob storage as a zip, we no longer need to manually assemble the archive; downloads are served directly from blob storage.
From numerous synthetic and real-world tests, we’ve seen over a 10x improvement in upload and download operations from these changes.
name: Artifacts Comparison
on:
  workflow_dispatch:
jobs:
  node-modules:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-node@v4
      - run: |
          npm i react react-dom react-script create-react-app webpack
      - uses: actions/upload-artifact@v4
        name: "Upload Node Modules - v4"
        with:
          name: node_modules-v4
          path: node_modules
      - uses: actions/upload-artifact@v3
        name: "Upload Node Modules - v3"
        with:
          name: node_modules-v3
          path: node_modules
      - uses: actions/download-artifact@v4
        name: "Download Node Modules - v4"
        with:
          name: node_modules-v4
          path: node_modules-v4
      - uses: actions/download-artifact@v3
        name: "Download Node Modules - v3"
        with:
          name: node_modules-v3
          path: node_modules-v3
  big-artifact:
    runs-on: ubuntu-latest
    steps:
      - run: |
          dd if=/dev/urandom of=1gb-random bs=1M count=1000
      - uses: actions/upload-artifact@v4
        name: "Upload 1GB - v4"
        with:
          name: 1gb-v4
          path: 1gb-random
          compression-level: 0
      - uses: actions/upload-artifact@v3
        name: "Upload 1GB - v3"
        with:
          name: 1gb-v3
          path: 1gb-random
      - uses: actions/download-artifact@v4
        name: "Download 1GB - v4"
        with:
          name: 1gb-v4
          path: 1gb-v4
      - uses: actions/download-artifact@v3
        name: "Download 1GB - v3"
        with:
          name: 1gb-v3
          path: 1gb-v3
Another common request from our users was the ability to download artifacts from the UI or API while the workflow run is in progress. Previously, this was not possible because of the internal APIs and how the data is populated in the backend and because artifacts were scoped to an entire workflow run and not “finalized” until the very end.
Now, in v4, the artifact becomes immediately available in the UI and the API. Even better, the actions/upload-artifact action now has outputs for both the ID of the artifact and the URL for the artifact.
This allows for even more powerful workflows that use artifacts. For instance, you now have the ability to create an approval gate and inspect an artifact’s content manually via the UI before approving for a release. Artifact URLs can also be embedded using bots and comments in pull requests for easy sharing.
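For example, a bot or script could use the REST API to list a run’s artifacts while the run is still in progress and surface their URLs in a pull request comment. This is a hypothetical sketch using Octokit; the owner, repository, and run ID are placeholders:

```typescript
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function listRunArtifacts(owner: string, repo: string, runId: number): Promise<void> {
  // With v4, artifacts show up here even while the workflow run is still in progress.
  const { data } = await octokit.rest.actions.listWorkflowRunArtifacts({
    owner,
    repo,
    run_id: runId,
  });

  for (const artifact of data.artifacts) {
    // archive_download_url points at the zip archive for this artifact.
    console.log(`${artifact.name} (${artifact.size_in_bytes} bytes): ${artifact.archive_download_url}`);
  }
}

// Placeholder values for illustration only.
listRunArtifacts("my-org", "my-repo", 123456789).catch(console.error);
```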
The action to download artifacts has some new add-ons as well. In the list of inputs, we now have github-token, repository, and run-id. Given a properly scoped token with actions:read, artifacts can now be downloaded from other workflow runs and repositories. By default, with no token specified, the action will only be able to download from the current workflow run and any previous run attempts.
Thanks to the single archive file upload, the stated size of the artifact is now consistent between the content uploaded, content downloaded, and what is reported in UI and APIs. Similarly, we are now able to produce checksums as the data is being uploaded on the runner, which we plan to expose for integrity checks soon. Stay tuned for future enhancements that will expose even more artifact metadata.
In an ideal world, we’d add all these new improvements, and everything would just work. Unfortunately, these improvements do come with some tradeoffs. First and foremost, the new major version tag, v4, cannot be mixed and matched with v3 and below. If you are uploading with actions/upload-artifact@v4, you must download with actions/download-artifact@v4.
In v4, users lose the ability to upload to the same named artifact multiple times. Once an artifact is uploaded, it cannot be altered, and there cannot be multiple v4 artifacts with the same name in the same workflow run. An example scenario where this would cause friction is a job matrix where multiple jobs concurrently push uploads to the same artifact. In v4, this would cause an error. To make this scenario possible, artifacts can have the matrix attributes suffixed to the name, and then all the artifacts can be downloaded to the same directory.
jobs:
  upload:
    strategy:
      matrix:
        runs-on: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.runs-on }}
    steps:
      - name: Create a File
        run: echo "hello from ${{ matrix.runs-on }}" > file-${{ matrix.runs-on }}.txt
      - name: Upload Artifact
-       uses: actions/upload-artifact@v3
+       uses: actions/upload-artifact@v4
        with:
-         name: my-artifact
+         name: my-artifact-${{ matrix.runs-on }}
          path: file-${{ matrix.runs-on }}.txt
  download:
    needs: upload
    runs-on: ubuntu-latest
    steps:
      - name: Download All Artifacts
-       uses: actions/download-artifact@v3
+       uses: actions/download-artifact@v4
        with:
-         name: my-artifact
          path: my-artifact
+         pattern: my-artifact-*
+         merge-multiple: true
      - run: ls -R my-artifact
To ensure fair use of these actions, we've imposed a limit: a single job can produce at most 500 artifacts.
Since there is no longer a proxy, users on self-hosted runners must have the appropriate network configuration to reach our new array of storage backends. See the self-hosted runner documentation to ensure all endpoints are reachable.
For additional common migration scenarios, you can reference our migration documentation.
For updates on bringing v4 of the artifact actions to GitHub Enterprise Server, please follow the GitHub public roadmap.
Harness the power of GitHub Actions. Learn more or get started now.
The post Get started with v4 of GitHub Actions Artifacts appeared first on The GitHub Blog.
Copilot in GitHub Support is trained on the official GitHub documentation and offers a responsive conversational experience to deliver reliable advice on a wide range of GitHub-related topics.
Following an initial release in August 2023 to a randomly selected cohort of GitHub Enterprise customers, it’s now ready and waiting to converse with a much wider audience!
Copilot in GitHub Support reduces the need to manually search for the right context across different pages in the official GitHub documentation. The assistant efficiently distills relevant information from multiple GitHub documents at once into a concise, tailored response—significantly reducing the time required to research a topic and often removing the need for a traditional round-trip response from our Support team.
Copilot in GitHub Support also includes links to any documentation it uses when crafting an answer so that you’re able to review the material or bookmark it for future reference.
The great news is that you don’t have to do anything unusual to start a conversation; there’s no magic incantation. The assistant is built right into the existing GitHub Support contact form. All you need to do is choose your account, briefly describe the issue you need some help with, and click Start chat.
If at any point during the conversation you would like to stop chatting with Copilot in GitHub Support and open a ticket for assistance from a human, you can click Submit a ticket. Remember to click My problem is solved if the assistant was able to help you.
You can chat with Copilot in GitHub Support as you would in a typical conversation with one of our Support team members or any other person.
When describing your issue, remember to include relevant information, such as error messages, along with any steps you’ve already tried and their outcomes. If the assistant misunderstands you, you can gently correct it by pointing out its mistake or adding additional context. Doing so often leads to a more relevant response.
Knowing which answers were helpful to you is valuable feedback as we work on continuously improving Copilot in GitHub Support. If you’d like to let us know how it did, you can click thumbs up or thumbs down under each response.
We’re excited for you to try Copilot in GitHub Support and can’t wait to hear what you think of it.
The post Copilot in GitHub Support is now available! appeared first on The GitHub Blog.
GitHub Copilot is widely known for its ability to help developers write code in their IDE. Today, I want to show you how the AI assistant’s abilities can extend beyond just code generation. In this post, we’ll explore 10 use cases where GitHub Copilot can help reduce friction during your developer workflow. This includes pull requests, working from the command line, debugging CI/CD workflows, and much more!
Let’s get into it.
If you ever forget how to run a particular command when you’re working in VS Code, GitHub Copilot Chat is here to help! With the new @terminal agent in VS Code, you can ask GitHub Copilot how to run a particular command. Once it generates a response, you can then click the “Insert into Terminal” button to run the suggested command.
Let me show you what I mean:
The @terminal agent in VS Code also has context about the integrated shell terminal, so it can help you even further.
We’ve all been there: we make a sizable pull request with tons of files and hundreds of changes, and sometimes it can be hard to remember every little detail that we’ve implemented or changed.
Yet it’s an important part of collaborating with other engineers/developers on my team. After all, if I don’t give them a summary of my proposed changes, I’m not giving them the full context they need to provide an effective review. Thankfully, GitHub Copilot is now integrated into pull requests! This means, with the assistance of AI, you can generate a detailed pull request summary of the changes you made in your files.
Let’s look at how you can generate these summaries:
Now, isn’t that grand! All you have to do is go in and edit what was generated and you have a great, detailed explanation of all the changes you’ve made—with links to changed files!
Note: You will need a Copilot Enterprise plan (which requires GitHub Enterprise Cloud) to use PR summaries. Learn more about this enterprise feature by reading our documentation.
I came across this one recently while making changes in VS Code. GitHub Copilot can help you generate commit messages right in your IDE. If you click on the source control button, you’ll notice a sparkle in the message input box.
Click on those sparkles and voilà, commit messages are generated on your behalf:
I thought this was a pretty nifty feature of GitHub Copilot in VS Code and Visual Studio.
Another way to get help with terminal commands is to use GitHub Copilot in the CLI. This is an extension to the GitHub CLI that helps you with general shell commands, Git commands, and gh commands.

GitHub Copilot in the CLI is a game-changer that is super useful for reminding you of commands, teaching you new commands, or explaining random commands you come across online.
Learn how to get started with GitHub Copilot in the CLI by reading this post!
If you’ve ever gone to a new repository and have no idea what’s happening even though the README is there, you can now use GitHub Copilot Chat to explain the repository to you, right in GitHub.com. Just click on the Copilot icon in the top right corner of the repository and ask whatever you want to know about that repository.
On GitHub.com you can ask Copilot general software related questions, questions about the context of your project, questions about a specific file, or specified lines of code within a file.
Note: You will need a Copilot Enterprise plan (which requires GitHub Enterprise Cloud) to use GitHub Copilot Chat in repositories on GitHub.com. Learn more about this enterprise feature by reading our documentation.
Did you know that in addition to asking for suggestions with comments, you can get help with your code inline? Just highlight the code you want to fix, right click, and select “Fix using Copilot.” Copilot will then provide you with a suggested fix for your code.
This is great to have for those small little fixes we sometimes need right in our current files.
My team and I had a use case where we needed to close over 1,600 invalid GitHub Issues submitted to one of our repositories. I created a custom GitHub Action that automatically closed all 1,600+ issues and implemented the solution with GitHub Copilot.
GitHub Copilot Chat helped me to create the GitHub Action, and also helped me implement the closeIssue() function very quickly by leveraging Octokit to grab all the issues that needed to be closed.
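The full action lives in the post mentioned below, but as a rough sketch of the Octokit side, a closeIssue() helper along these lines, combined with pagination over the matching issues, covers the core of it. The owner, repository, and label filter are placeholder assumptions, not the exact code from that post:

```typescript
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const owner = "my-org"; // placeholder
const repo = "my-repo"; // placeholder

// Close a single issue by number.
async function closeIssue(issueNumber: number): Promise<void> {
  await octokit.rest.issues.update({
    owner,
    repo,
    issue_number: issueNumber,
    state: "closed",
  });
}

async function closeInvalidIssues(): Promise<void> {
  // Paginate through every open issue carrying the (placeholder) "invalid" label.
  const results = await octokit.paginate(octokit.rest.issues.listForRepo, {
    owner,
    repo,
    state: "open",
    labels: "invalid",
    per_page: 100,
  });

  // The issues endpoint also returns pull requests, so filter those out.
  const issues = results.filter((item) => !item.pull_request);

  for (const issue of issues) {
    await closeIssue(issue.number);
  }
  console.log(`Closed ${issues.length} issues`);
}

closeInvalidIssues().catch(console.error);
```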
You can read all about how I bulk closed 1000+ GitHub issues in this blog post, but just know that with GitHub Copilot Chat, we went from having 1,600+ open issues, to a measly 64 in a matter of minutes.
We all love documenting our code, but just in case some of us need a little help writing documentation, GitHub Copilot is here to help!
Regardless of your language, you can quickly generate documentation following language-specific formats: docstrings for Python, JSDoc for JavaScript, or Javadoc for Java.
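As a simple illustration, here is the kind of JSDoc-style comment you might ask GitHub Copilot to generate for a function; the function itself is just a made-up example:

```typescript
/**
 * Calculates the total price of an order, including tax.
 *
 * @param items - Line items, each with a unit price and a quantity.
 * @param taxRate - Tax rate as a decimal, for example 0.07 for 7%.
 * @returns The order total, rounded to two decimal places.
 */
function calculateOrderTotal(
  items: { price: number; quantity: number }[],
  taxRate: number
): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}
```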
Error messages can often be confusing. With GitHub Copilot in your IDE, you can now get help with error messages right in the terminal. Just highlight the error message, right click, and select “Explain with Copilot.” GitHub Copilot will then provide you with a description of the error and a suggested fix.
You can also bring error messages from your browser console into Copilot Chat so it can explain those messages to you as well, using the /explain slash command.
Whenever I have a speaking engagement, I like to create my slides using Slidev, an open source presentation slide builder for developers. I enjoy using it because I can create my slides in Markdown and still make them look splashy! Take a look at this one for example!
Anyway, there was a point in time where I had an issue with deploying my slides to GitHub Pages and I just couldn’t figure out what the issue was. So, of course, I turned to my trusty assistant—GitHub Copilot Chat that helped me debug my way through deploying my slides.
Read more about how I debugged my deployment workflow with GitHub Copilot Chat here.
As you see above, GitHub Copilot extends far beyond your editor and code completion. It is truly evolving to be one of the best tools you can have in your developer toolkit. I’m still learning and discovering new ways to integrate GitHub Copilot into my daily workflow and I hope you give some of the above a chance!
Be sure to sign up for GitHub Copilot if you haven’t tried it out yet, and stay up to date with all that’s happening by subscribing to our developer newsletter for more tips, technical guides, and best practices! You can also drop me a note on X if you have any questions, @itsthatladydev.
Until next time, happy coding!
The post 10 unexpected ways to use GitHub Copilot appeared first on The GitHub Blog.
This is abridged content from August’s Insider newsletter. Like what you see? Sign up for the newsletter to receive complete, unabridged content in your inbox twice a month. Sign up now >
In this newsletter, we’ll explore a few insider tips and tricks you can use to make the most of GitHub Mobile, such as:
Let’s dive in.
Have you ever gone for a midday coffee run and then all of a sudden remembered, “Oh, I should’ve updated those issues before I left!” No worries, Issue Composer has your back. With Issue Composer, you can create issues, document bugs, propose features, and engage in discussions directly from your mobile device. And now, with our new Property Bar feature, you can effortlessly add metadata to your issues, like assignees, labels, milestones, and projects. Plus, you can do all of this while you’re in line to place that coffee order.
Check this out:
There are times when taking the scenic route makes sense, but maybe not necessarily when you’re working under a deadline. And that’s the beauty of shortcuts, right? With GitHub Mobile’s Shortcuts, you can access all of the important things, like issues, discussions, or pull requests, right on the home page of your mobile app.
To start customizing your shortcuts, simply open the app and click the “Get started” button.
From there, you can either choose from the “suggested shortcuts” selection or create your own. When creating your shortcuts, the scope for customization is pretty vast. You can select:
Once you save the shortcut, all the things you need are simply a tap away.
GitHub Mobile empowers developers to make meaningful contributions and stay productive no matter where they are. Whether you’re reviewing code, managing issues, or participating in discussions, you can stay connected and engaged with your projects at all times, even during downtime. And now that you’re armed with these tidbits, you’re on your way to becoming a mobile power user.
Ready to try out these tips and features for yourself? Download GitHub Mobile.
Want to receive content like this twice a month, right in your inbox? Sign up for the newsletter now >
The post Insider newsletter digest: Build from anywhere appeared first on The GitHub Blog.
GitHub is thrilled to announce the completion of an upgrade to our GitHub Actions-hosted runners, offering larger, more powerful machines to open source developers.
From now on, any Linux or Windows workflow triggered from a public repository, using GitHub’s default labels, will run on our faster, more powerful 4-vCPU runners.
These new machines provide up to a 25% increase in performance for most CI/CD workloads without any configuration change required.
Since its inception in 2019, GitHub Actions has always been free for public repositories. While it has been very widely adopted by our open source communities, the underlying machines have remained the same 2-vCPU virtual machines. From December 1, 2023, we started upgrading our fleet of Linux and Windows Actions runners to newer 4-vCPU virtual machines. We are pleased to announce that we completed that upgrade today, and we now provide machines that are double their previous specification, with 4 vCPUs and 16 GiB of memory.
Through the rollout, we have seen real-world improvements of around 25% for GitHub Actions workflows on these new machines, without any changes required from you as a developer. These performance improvements will enable faster feedback loops in your pull requests and decrease time spent waiting for builds to finish. For those teams with larger workloads, the new machines offer twice the memory.
GitHub Actions plays a vital role for many open source projects, providing them with free hosted build servers and automation capabilities that are easy to get set up and run with instantly. But the open source community also plays a huge part in why GitHub Actions was quickly adopted by developers around the world. The GitHub Marketplace offers over 20,000 actions and apps for organizations and the GitHub community to discover and build. This wealth of knowledge helps developers at every organization, regardless of size, get started on GitHub Actions and Apps and improve their workflows today.
“GitHub is the home for the open source community, so I’m thrilled we’ve been able to give all open source projects access to Linux and Windows machines that have twice the v-CPUs, twice the memory and 10 times the storage for their builds, for free,” said Martin Woodward, GitHub’s vice president of developer relations. “This investment will give valuable time back to the maintainers of open source projects who benefited from over 7 billion build minutes with GitHub Actions in the past year alone.”
This upgrade offers increased performance and memory with 4 vCPUs and 16 GiB of memory. Public repositories can see the benefits of these updates today, with no changes required. Simply run your public workflows using any of the existing Ubuntu or Windows labels, and they will now automatically run on the new 4-core hosted runners.
The post GitHub-hosted runners: Double the power for open source appeared first on The GitHub Blog.