4 Major Misunderstandings About Pull Requests

This is a guest post by Cameron Laird.

Pull requests are often treated as a kind of medical examination: An untrusted developer receives invasive guidance from a review board of peers or superiors who correct the request so it’s safe to advance. That’s far from the best way to handle such requests, though.

Let’s dissect four confusions about pull requests and explore some corresponding remedies.

1. Style correction is a major goal

Start with a clarification. The focus of this article is on what GitHub users typically call a pull request, which is almost identical to GitLab’s merge request. Other communities have other names. This article uses “review request” to label the central idea of all these variations: A programmer has implemented a correction or feature, and, before the implementation becomes “official,” involves others in a disciplined inspection of the update.

For a sad number of programmers, review requests largely consist of adjustments to spacing, punctuation placement, and spelling. It’s deflating to work hard on a difficult caching improvement, say, only to have peers spend their time fussing about where brackets and semicolons appear in the code.

It’s a fixable disappointment, though. Most programming languages have freely available style checkers. These utilities scan source code and report on its conformance to rules covering qualities such as:

  • Spaces versus tabs
  • Depth of indent
  • Newline versus CRLF
  • Placement of such grouping punctuation as brackets, braces, and parentheses
  • Declaration of unused referents
  • Spelling rules

The most productive way to resolve stylistic review is to institute a continuous testing pipeline that, along with other tests, only allows through requests that pass an automated, team-wide style check. At that point, reviewers needn’t quibble over the placement of spaces and semicolons; automation of the standard already carries that burden. Authors and reviewers alike can focus their attention on weightier matters.
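For example, a team might wire such a check into its pipeline along the following lines. This is a minimal sketch that assumes a Python codebase linted with Black and flake8 under GitHub Actions; the workflow name and tool choices are illustrative assumptions, not anything prescribed here.

# Sketch only: block any pull request that fails the automated style check.
name: style-check
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install black flake8
      # Any formatting or lint violation fails the build,
      # so reviewers never need to debate spacing or semicolons.
      - run: black --check .
      - run: flake8 .

With a required check like this in place, a request that violates the agreed style never reaches human reviewers at all.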

As liberating as this kind of stylistic automation is, it doesn’t solve every problem. Occasionally there might still need to be a discussion of whether a particular variable should be named collection, group, collection_id, or something else.


2. Reviews are a good time for code explanations

Another common anti-pattern in reviews is the explanation. Some teams expect reviews to be filled with answers to questions such as, “Why is Q implemented with a T?” and “Is it safe to use a K before B?”

As well-intentioned as this is, it’s almost certainly a mistake. For instance, suppose a reviewer asks why 43200 appears in

limit = 43200

A direct answer is almost never the best response. It’s far better to update the source to something like

// Product Management decided on a guarantee of 12 hours in May 2020.
limit = 60 * 60 * 12

One aim of successful review requests is that they stand on their own. Source code that requires explanation is, in general, not the right source code. While we often think of source code as a means of expressing what the computer should do, it’s almost equally an expression of intent and method to future maintainers. Names, comments, and structure should be chosen to communicate to those readers in months to come.
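Names can carry that same intent without even needing a comment on every line. Here’s a minimal sketch of the idea; the constant names are hypothetical, chosen only for illustration:

// Hypothetical names; the point is that the arithmetic explains itself.
const SECONDS_PER_HOUR = 60 * 60;
// Product Management decided on a guarantee of 12 hours in May 2020.
const GUARANTEED_HOURS = 12;
const limit = SECONDS_PER_HOUR * GUARANTEED_HOURS; // still 43200, now self-explanatory

A reviewer reading this version has no reason to ask where the number came from, which is exactly the standard a review request should aim for.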

One prominent exception to explanation-free reviews exists: When a reviewer asks a question such as, “What is the purpose of $KEYWORD_M?” or “What does $EXPRESSION_D mean?” sometimes it’s appropriate to answer, “Commercial practice commonly uses that idiom in this domain to construct a continuation,” or, “The language standard now recommends that kind of reference and deprecates the older $ALTERNATIVE.”

While it’s rarely wrong to include such an explanation in source comments, occasionally a coder with deep experience can see that a particular idiom really is on the upswing. If an idiom is easy to research — if it depends on an easily searched keyword, for instance — and is well understood by a majority of readers, then it might be right not to document it further in source comments.

3. Seniors correct and juniors learn the technique

Plenty of teams seem to believe that, in reviews, senior developers straighten out the errors of juniors, while the juniors learn good technique from their seniors.

While it probably happens occasionally, strong patterns of that sort are symptoms of dysfunction. Everyone should come to a review with a critical eye, juniors and seniors alike. Everyone is likely to have areas of expertise and others of ignorance. Part of the point of reviews is that every line can benefit from the best collective insight of the team as a whole.

Think of a related example: Suppose someone new to the team feels unsure about the best way to write a particular class definition. Don’t wait until review time to spring that uncertainty on the reviewers, or have them discover it for themselves. It’s far better to ask a simple question ahead of time, outside of the formal review process.

One possible outcome is that the newcomer has a better technique than the established team has been using. That happens. It’s valuable to do a little preparation ahead of time to allow for that possibility; without a preliminary conversation, it’s too easy for reviewers to see something different and conclude it should be rewritten in the established style.

Another aspect of the teamwork that reviews represent has to do with quality assurance. While it’s rare for testers to participate in reviews, and even rarer for them to make a habit of it, doing so can be a great benefit. Reviews are an opportunity for testers to learn about the issues and techniques that matter to specific applications. Hands-on participation in reviews also helps testers understand the role reviews play in elevating quality.

The same reviews are equally an opportunity for quality assurance specialists to provide their perspective on the testability of implementations. If an update is weak in that regard (it looks OK, but it’s hard to verify its actual function), it’s vital to know that as early as possible. A review is an ideal time to explore such a weakness.

4. Product functionality defines the scope of a review

A final leading confusion is to equate reviews with the functionality customers see. In this view, each review implements one feature or resolves one error.

Traceability from business requirements to development effort is indeed important. It’s more of a 1:n than a 1:1 relationship, though. Sometimes it’s healthiest to introduce a particular request with descriptive context. For example: “This implementation originated as a task to scrub secret passwords and codes from source code. I determined that it would be best to do the work in three distinct steps, each of which is much easier to read and understand on its own than when combined. The current request just reorganizes definitions to centralize secret management in the source file $SOME_NAME. Follow-up appears in assignments 152 and 153.”

The last section mentioned a potential role in reviews for quality assurance. That’s a model for several alternative scopes a review can take: while many requests will correspond to functionality requirements from product management, others might result from a testability threshold, the repair of a security vulnerability, or a new “observability” standard. The latter three change no functionality that end users see directly, but their impact on overall quality might be large.

Conclusion

Reviews should be one of the highlights of your DevOps day; they’re all about learning and progressing.

If your team finds them intimidating, boring, tedious, pointless, or too difficult, then it’s time for the team to improve its execution. Automate away the routine parts, fix the attitudes that get in the way of cooperation, and streamline what’s left to the essentials.

Are reviews so painful for your team that they do them as a last resort? Procrastination is a sure prescription to make them even more unpleasant! Lighten your reviews, cut down their size, and practice, so doing two or three daily is easy and rewarding. Bring in reviewers from other departments, including testing and operations, for the special perspective they provide.

Everyone wins when reviewers see reviews not as a chore to endure, but as an opportunity to learn about techniques and parts of their program they otherwise wouldn’t know.


Cameron Laird is an award-winning software developer and author. Cameron participates in several industry support and standards organizations, including voting membership in the Python Software Foundation. Cameron is a long-time resident of the Texas Gulf Coast, and his favorite applications are for farm automation.
