In today’s landscape of work, organizations everywhere are not just accepting remote and hybrid teams—they’re fully embracing them. So what does that mean for your QA team? While QA lends itself well to a distributed work environment, there are still special considerations to keep in mind when managing distributed QA teams.
Expect to learn:
- The various types of hybrid and remote work models
- How to leverage team working agreements
- Implementing a definition of “done”
- How to gain greater visibility of QA “tech debt”
- Four specific examples of how to enhance processes for distributed teams
- Mechanisms for continuous improvement
Defining “distributed QA teams”
Given the evolving nature of remote work, organizations are adopting different models and approaches. For the purposes of this article, I have defined four distinct distributed work models:
1. Hybrid distributed work model
This approach involves a team composition with members working both on-premise and remotely, spanning various time zones and locations (e.g., having on-premise members in New York and remote members in Lisbon).
2. Remote distributed work model
In this model, teams consist of members spread across different time zones or different locations, with all members working fully remotely.
3. Hybrid centralized work model
In this model, teams blend on-premise and remote members within the same time zone or region.
4. Remote centralized work model
In this model, teams are composed of fully remote members, all situated within the same time zone or region.
Defining and understanding the various hybrid and remote work models is essential when building teams and progressing through Tuckman’s stages of team development (forming, storming, norming, and performing). It also provides valuable insights into challenges and opportunities for improvement unique to each model.
Common challenges
Now that we have a clear understanding of the types of remote and hybrid team structures, let’s pinpoint some common challenges that these models share:
- Inefficiency when dealing with priority items: Difficulty concentrating efforts on, or “swarming,” top-priority items.
- Duplication of SDLC artifacts: Repetition of various software development lifecycle (SDLC) elements, including test cases, defects, and user stories.
- Vague accountability within the SDLC: Unclear ownership of responsibilities leads to ambiguity, exemplified by statements like, “My tests passed before the code merge, so it’s not my fault…”
- Inconsistent team velocity: Team velocity, or the amount of work a development team can complete during a specific iteration or sprint, lacks consistency or predictability.
Common challenges within regulated industries
Additionally, teams working within regulated industries face their own unique challenges. Regulated industries are governed by strict government regulations and include fields such as education and financial services. Here’s an overview of the specific challenges that leaders and team members should consider:
- Diverse compliance standards for international teams: Remote teams spanning international borders may need to navigate and adhere to varying compliance standards across different industries.
- Cloud configurations for disaster recovery: It may become necessary to establish specific cloud configurations for disaster recovery and replication and ensure multiple availability zone coverage for application environments.
- Data access restrictions for confidential information: It may be necessary to implement stringent data access restrictions for team members located outside a specific country, particularly concerning confidential data.
Strategies to maximize communication
Engaging in hybrid and fully remote teams offers numerous benefits, yet effective communication can pose a challenge. To enhance team performance through communication, key areas to focus on include establishing “working agreements” and adopting a “shift left” mindset.
Team working agreements
Team working agreements are a mutually agreed-upon set of “rules” that all team members consent to and adhere to. These agreements are treated as dynamic, “living documents” revisited during sprint retrospective meetings (for agile teams) and root cause analysis sessions.
Considerations for working agreement items can encompass administrative and software development lifecycle (SDLC) topics. These may include aspects such as capacity planning, delineation of team member roles and responsibilities, and workflows for release approvals.
The following example of a team working agreement addresses considerations spanning both administrative (capacity planning) and SDLC (release workflow) aspects.
Team working agreement example
During the sprint retrospective meeting
Agreement 1: Capacity Planning
- Current State: There have been instances where team members felt overwhelmed due to unevenly distributed workloads.
- Discussion: The team discusses the importance of balancing workloads for improved efficiency.
- Adjustment: The team agrees to update the working agreement: “During sprint planning, the team will collectively assess individual workloads. If imbalances are identified, adjustments will be made to ensure equitable distribution of tasks.”
During the root cause analysis session
Agreement 2: Release Workflow
- Current State: The release process has been prone to delays and miscommunications.
- Discussion: The team conducts a root cause analysis to identify bottlenecks in the release workflow.
- Adjustment: The team agrees to include a new working agreement: “A designated release coordinator will be assigned for each sprint. A documented workflow for release approvals and communication channels will be established and adhered to.”
Addressing considerations spanning both administrative and SDLC aspects ensures that the team is aligned not only on software development practices but also on broader organizational and administrative processes that impact their effectiveness.
The definition of “done”
As the vast majority of software development teams now adopt various forms of agile methodology, achieving alignment on the concept of “done criteria” becomes even more critical for distributed teams.
The concept of “done criteria” can vary among teams. Leading Agile defines the definition of done (DoD) as the point “when all conditions or acceptance criteria that a software product must satisfy are met and ready to be accepted by a user, customer, team, or consuming system.”
Examples of “done criteria”
Here are some examples of “done criteria” for various tasks in a software development context:
User story implementation:
- All acceptance criteria are met
- Code is written, reviewed, and approved
- Unit tests and integration tests are written and passed
- User documentation is updated
- Code is merged into the main branch
Bug fix:
- The identified bug is fixed and verified
- Relevant unit tests and regression tests are created and pass
- Documentation is updated to reflect the bug fix
- Code changes are merged and deployed
Feature development:
- All feature requirements are implemented
- Code adheres to coding standards and best practices
- Comprehensive unit tests and integration tests are written and pass
- User documentation and API documentation are updated
- Code is merged into the main branch
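As an illustration, “done criteria” like those above can be modeled as a simple checklist that gates whether a work item counts as complete. This is a minimal, hypothetical sketch (the class and field names are my own, not tied to any specific tool):

```python
from dataclasses import dataclass, field

@dataclass
class DoneCriterion:
    description: str
    met: bool = False

@dataclass
class WorkItem:
    title: str
    criteria: list = field(default_factory=list)

    def is_done(self) -> bool:
        # A work item is "done" only when every agreed criterion is met.
        return all(c.met for c in self.criteria)

    def outstanding(self) -> list:
        # List the criteria still blocking completion.
        return [c.description for c in self.criteria if not c.met]

story = WorkItem(
    title="User story: password reset",
    criteria=[
        DoneCriterion("All acceptance criteria are met", met=True),
        DoneCriterion("Code is written, reviewed, and approved", met=True),
        DoneCriterion("Unit and integration tests pass", met=True),
        DoneCriterion("User documentation is updated", met=False),
        DoneCriterion("Code is merged into the main branch", met=False),
    ],
)

print(story.is_done())      # False
print(story.outstanding())
```

Making the checklist explicit like this is what prevents the “it worked on my side” ambiguity: a distributed team member in any time zone can see exactly which criteria remain outstanding.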
Having a shared understanding of the definition of “done” will ensure your distributed team members are aligned with the standards set for the completion of work items. Team members can leverage team working agreements when situations arise that require clarification and ensure the team continues to execute without impediments.
Enhancing processes for distributed teams
Defining and implementing specific processes within a software development team can significantly influence quality and output. When operating within a distributed team, these factors can be magnified positively or negatively. Here are some key processes that can significantly impact the effectiveness and efficiency of your distributed team:
1. Test case review process
Producing high-quality tests that are treated as an “asset” rather than a liability should be a collective focus, involving not just testers and QA engineers but also developers and other stakeholders. The team should follow a structured review process regardless of the test type (unit, integration, functional, manual, etc.).
Key items to consider include:
- Be aligned on the team working agreement: Peer reviews on test cases should adhere to the guidelines set in the team working agreement.
- Quality gate before code merge: The review process should serve as a quality gate, ensuring thorough examination before test cases are run against the code to be merged.
- Utilize a common platform: Employ a unified platform for tracking, viewing, and resolving comments across various QA testing types, promoting efficient collaboration.
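To make the quality-gate idea concrete, here is a small, hypothetical sketch of a pre-merge check that passes only when every attached test case has been peer-approved. The field names and statuses are illustrative assumptions, not any specific tool’s API:

```python
# Hypothetical review gate: a merge is allowed only when all test cases
# attached to the change carry an "approved" review status.

def review_gate_passed(test_cases):
    """Return True only when every test case has been approved."""
    return all(tc.get("review_status") == "approved" for tc in test_cases)

cases = [
    {"id": "TC-101", "review_status": "approved"},
    {"id": "TC-102", "review_status": "pending"},  # still awaiting peer review
]

print(review_gate_passed(cases))  # False: TC-102 blocks the merge
```

In practice, this kind of check would run in a CI pipeline and query the team’s shared test management platform, so reviewers in any location see the same gate status.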
Image: With the TestRail Enterprise test case review and approval process, users can set up collaborative review and approval processes to ensure test cases accurately define your application and meet your organization’s standards.
2. Defining “environment claims”
Many teams employ several environments of the product or system under test to facilitate rapid development, testing, and acceptance of features. In distributed teams, or where processes are not well established, confusion over how, what, and when environments are deployed and updated can lead to reduced productivity.
Leveraging the concept of “environment claims”
Using the concept of “claiming” or tracking the version and purpose of the team’s environments will empower team members to leverage them throughout the development and milestone promotion process. Here are some examples of processes to help better support the management of your team’s environments:
- Identify team owners and purpose: Clearly identify team owners and the purpose for each deployed environment. Consider adding this information to the team working agreement.
- Maintain an “environment claims” page: Create and maintain an “environment claims” page as a dynamic working document, either manually or through automation.
- Align CI/CD pipelines: Align Continuous Integration/Continuous Deployment (CI/CD) pipelines to deploy automatically or manually, in accordance with the team working agreement on environment deployment and promotions.
- Implement CI/CD and test management integrations: Implement Continuous Integration/Continuous Deployment (CI/CD) and test management integrations that enable the tracking of test executions against corresponding environment promotions before release. This ensures a streamlined process and comprehensive visibility into the testing progress aligned with environment changes.
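The “environment claims” page described above can be sketched as a small registry that records the owner, purpose, and version of each environment and rejects conflicting claims. This is a hypothetical in-memory sketch; in practice the data would live on a shared page or be written automatically by CI/CD:

```python
from datetime import date

# Hypothetical "environment claims" registry keyed by environment name.
claims = {}

def claim_environment(env, owner, purpose, version):
    """Record a claim on an environment; reject double claims."""
    if env in claims:
        raise ValueError(f"{env} is already claimed by {claims[env]['owner']}")
    claims[env] = {
        "owner": owner,
        "purpose": purpose,
        "version": version,
        "claimed_on": date.today().isoformat(),
    }

def release_environment(env):
    """Free the environment for the next team to claim."""
    claims.pop(env, None)

claim_environment("staging-2", owner="payments-team",
                  purpose="Regression for release 4.2", version="4.2.0-rc1")
```

Even a lightweight registry like this answers the “how, what, and when” questions at a glance, which is exactly what distributed team members in other time zones need when no one is online to ask.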
Image: Create and manage unique, custom test case fields in TestRail Enterprise to tag and track what test cases have been executed across test environments as code is promoted prior to release.
3. Enhancing the visibility of QA technical debt
Collaboration within a development team extends beyond software engineers and QA/test roles. Distributed teams often benefit from heightened visibility of technical debt related to infrastructure and testing. Here are practices teams should consider to increase the visibility of technical debt among product owners, stakeholders, and QA:
- Maintain a product backlog: Maintain a dedicated product backlog for testing and quality-related technical debt within your team’s agile work management/tracker tool (e.g., Jira, Rally). This ensures visibility and prioritization.
- Automate test candidate tracking: Track manual tests that are potential candidates for automation versus those already integrated into the team’s automation suite. This aids in efficient decision-making on automation priorities.
- Treat tests like application code: Consider tests on par with application code. Create defects or tasks for flaky or broken tests, initiating reviews and addressing them based on priority and impact during regular “triage” sessions.
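The tracking practices above can be sketched as simple custom-field filters over a test backlog. The field names below (`automation_candidate`, `flaky`) are assumptions for illustration, not a specific tool’s schema:

```python
# Hypothetical test backlog tagged with custom fields, in the spirit of
# tagging automation candidates and flaky tests in a test management tool.
tests = [
    {"id": "TC-1", "type": "manual",    "automation_candidate": True,  "flaky": False},
    {"id": "TC-2", "type": "manual",    "automation_candidate": False, "flaky": False},
    {"id": "TC-3", "type": "automated", "automation_candidate": False, "flaky": True},
]

# Manual tests worth automating feed the automation priority discussion.
automation_backlog = [t["id"] for t in tests
                      if t["type"] == "manual" and t["automation_candidate"]]

# Flaky tests feed the regular "triage" session, just like application defects.
triage_queue = [t["id"] for t in tests if t["flaky"]]

print(automation_backlog)  # ['TC-1']
print(triage_queue)        # ['TC-3']
```

Exporting views like these into the team’s work management tool keeps QA technical debt visible to product owners and stakeholders, not just to the testers who carry it.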
Image: Custom fields in TestRail Enterprise provide a valuable feature for tracking automation testing candidates. By establishing a linkage between custom fields and your team’s agile work management or tracker tool, you can enhance visibility into your testing processes.
4. Focus on continuous improvement
When working on and managing distributed teams, fostering an environment where everyone has opportunities to evaluate their performance and improve becomes even more important.
Conducting “one-plus-one” meetings
For leaders managing distributed teams, gaining insights into individual struggles or successes can be challenging in comparison to centralized teams. Implementing scheduled team meetings using the “one-plus-one” format can be highly effective. This involves:
- Reflect on ONE item that you could personally improve, drawing on objective reports and metrics such as team velocity and defect counts.
- Reflect on ONE aspect where the team excelled, again grounded in metrics such as team velocity, defects, and release quality.
- Identify what ACTIONS need to be taken based on your reflections.
Image: With TestRail, users can automatically generate comprehensive project reports, track test coverage, and build traceability between requirements, tests, and defects. They can also report on test results from dozens of DevOps tools for efficient analysis.
Team “upskilling”
As QA team members face increasing demands in their roles, maintaining a dedication to continuous learning, often labeled as “upskilling,” becomes crucial. Leaders overseeing distributed teams should prioritize and allocate time for learning new skills, testing tools, and testing processes to ensure ongoing professional development.
Two key aspects should be considered:
- Prioritization in sprint planning: Allocate time for self-guided learning and training within team sprint planning, making it an integral part of the overall sprint capacity.
- Measurable objectives: Establish measurable training objectives, incorporating targets like certifications, course completion, and skill-based assessments such as LeetCode challenges and the TestRail Academy. This ensures a tangible and goal-oriented approach to continuous learning.
Image: The TestRail Academy provides free and regularly updated multimedia courses where you can learn best practices, master product features, and train your team at scale!
Managing and working within distributed QA teams can be challenging if you don’t take appropriate steps to maximize the team’s potential. Implementing the tips and strategies in this article will greatly improve communication and collaboration within your distributed team.
Key takeaways
- Define and enforce team working agreements
- Utilize an agreed-upon definition of “Done” to ascertain work item completeness
- Organize and track quality assurance technical debt in the product backlog for visibility
- Maintain environment “claims” and usage throughout the SDLC
- Implement test case review and approvals according to the working agreement
- Conduct “one-plus-one” meetings to reflect on performance and drive improvements
Interested in learning more about how to manage distributed teams? Watch this webinar, “Strategies for managing distributed QA teams,” to get insights on enhancing hybrid and remote QA models applicable across all sectors, including highly regulated industries.
Chris Faraglia is currently a Solution Architect and testing advocate for TestRail. Chris has 15+ years of enterprise software development, integration and testing experience spanning domains of nuclear power generation and healthcare IT. He has managed and interfaced with distributed testing teams in the United States, Central Europe, and Southwest Asia.