Articles Tagged 'graph'

Articles
DORA Metrics: Mean Time To Recover

The DevOps Research and Assessment (DORA) metrics revolutionized how the software industry measures software organization performance and delivery capabilities. In this article we show you how you can create a custom graph in Spira that displays the standard DORA Metric: Mean Time To Recover.

Mean Time to Recover (MTTR) measures how quickly you restore normal service after a production incident, defined as the elapsed time from incident start or detection (e.g., alert fired, SLO breach began) to service restoration (impact ended/SLO back in compliance).

Compute it per service over a recent window as a distribution (median/P90 plus counts), using incident-management timestamps or monitoring data; include incidents tied to deployments as well as other causes unless you explicitly scope to change-related failures. MTTR highlights the effectiveness of detection, rollback/roll-forward, and on-call practices—short MTTR paired with a low Change Failure Rate indicates strong resilience and recovery discipline.
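The per-service distribution described above (median/P90 plus counts) can be sketched as follows. This is a minimal illustration, assuming you already have the recovery durations in minutes for one service over the reporting window; it is not the built-in Spira calculation.

```python
from statistics import median, quantiles

def mttr_stats(durations_minutes):
    """Summarize recovery times as a distribution rather than a single mean.

    durations_minutes: elapsed minutes from incident start/detection to
    service restoration, for one service over the reporting window.
    Returns the median, the 90th percentile, and the incident count.
    """
    if not durations_minutes:
        return {"median": None, "p90": None, "count": 0}
    ordered = sorted(durations_minutes)
    # quantiles(n=10) yields the nine deciles; index 8 is the 90th percentile
    p90 = quantiles(ordered, n=10)[8] if len(ordered) > 1 else ordered[0]
    return {"median": median(ordered), "p90": p90, "count": len(ordered)}

# Hypothetical incident durations (minutes) for one service over 90 days
stats = mttr_stats([12, 30, 45, 18, 240, 22, 35])
```

Reporting the median alongside the P90 and the raw count keeps a single long outage (the 240-minute incident here) from distorting the picture the way a plain mean would.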

DORA Metrics: Change Failure Rate

The DevOps Research and Assessment (DORA) metrics revolutionized how the software industry measures software organization performance and delivery capabilities. In this article we show you how you can create a custom graph in Spira that displays the standard DORA Metric: Change Failure Rate.

Change Failure Rate is the percentage of production deployments that cause a service degradation and require remediation—such as a rollback, hotfix, or incident—within a defined window. Compute it as: failures ÷ total successful production deployments (e.g., in the last 30 or 90 days), where “failure” is operationally defined up front (sev-1/2 incidents, rollbacks, emergency patches, feature flags forced off, etc.).

Measure and report it per service/team to avoid averaging away hotspots, and show both the rate and the underlying counts. Track alongside Mean Time to Recover (MTTR): a low CFR with fast restore times indicates healthy quality and recovery; a high CFR suggests issues in testing, change size, approvals, or release practices.
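The computation above (failures ÷ total successful production deployments, reported with the underlying counts) can be sketched like this. The function name and input counts are illustrative, assuming "failure" has already been operationally defined as the article recommends.

```python
def change_failure_rate(failed_deploys, total_deploys):
    """Change Failure Rate = failures / total successful production deployments.

    'Failure' must be operationally defined up front (sev-1/2 incidents,
    rollbacks, emergency patches, feature flags forced off, etc.).
    Returns the rate as a percentage plus the raw counts for reporting.
    """
    if total_deploys == 0:
        return {"rate_pct": None, "failures": 0, "deployments": 0}
    return {
        "rate_pct": round(100 * failed_deploys / total_deploys, 1),
        "failures": failed_deploys,
        "deployments": total_deploys,
    }

# Hypothetical: 3 of 40 deployments in the last 30 days required remediation
cfr = change_failure_rate(3, 40)
```

Returning the counts together with the rate follows the guidance above: a 7.5% rate means something different at 40 deployments than at 4.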

DORA Metrics: Deployment Frequency

The DevOps Research and Assessment (DORA) metrics revolutionized how the software industry measures software organization performance and delivery capabilities. In this article we show you how you can create a custom graph in Spira that displays the standard DORA Metric: Deployment Frequency.

Deployment Frequency is how often your organization successfully deploys code to production, typically counted as the number of production releases per service (or product) over a standard interval (e.g., per day or per week). It reflects delivery cadence and should be normalized by system/team to avoid masking variation; only successful production deployments are counted, while rollbacks are excluded or tracked separately.

Report it as a time series (e.g., weekly counts and moving averages) and pair with Lead Time for Changes: elite teams ship many small releases frequently (often daily or more), while lower frequencies can signal batching, manual gates, or pipeline friction.
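The weekly time series described above can be sketched as a simple grouping of deployment dates by ISO week. This is a minimal example with hypothetical dates, assuming rollbacks have already been excluded from the input as the article recommends.

```python
from collections import Counter
from datetime import date

def weekly_deploy_counts(deploy_dates):
    """Count successful production deployments per ISO week for one service.

    Rollbacks should be excluded from deploy_dates (or tracked separately).
    Returns {(iso_year, iso_week): count}, suitable for a time-series graph.
    """
    counts = Counter(d.isocalendar()[:2] for d in deploy_dates)
    return dict(sorted(counts.items()))

# Hypothetical deployment dates for one service
deploys = [
    date(2024, 3, 4), date(2024, 3, 6),                     # ISO week 10
    date(2024, 3, 11), date(2024, 3, 14), date(2024, 3, 15) # ISO week 11
]
weekly = weekly_deploy_counts(deploys)
```

Grouping by (ISO year, ISO week) rather than week number alone keeps year boundaries from collapsing week 1 of different years into one bucket.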

DORA Metrics: Lead Time To Change

The DevOps Research and Assessment (DORA) metrics revolutionized how the software industry measures software organization performance and delivery capabilities. In this article we show you how you can create a custom graph in Spira that displays the standard DORA Metric: Lead Time To Change.

The Lead Time to Change measures how long it takes a code change to reach users, defined as the elapsed time from when a change is integrated (typically a PR is merged to the main branch) to when a successful production deployment that includes that change finishes. It captures the speed of your delivery pipeline.

Shorter lead times generally indicate smoother, more automated paths to production and tighter feedback loops, especially when paired with healthy deployment frequency; longer times can reveal bottlenecks in reviews, builds, approvals, or release practices.
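The merge-to-deployment interval defined above can be sketched as follows. The timestamps are hypothetical, assuming you can pair each change's merge time with the finish time of the first successful production deployment that includes it.

```python
from datetime import datetime
from statistics import median

def lead_times_hours(changes):
    """Lead Time for Changes: merge to main -> finish of the first successful
    production deployment containing that change.

    changes: list of (merged_at, deployed_at) datetime pairs.
    Returns per-change lead times in hours plus the median for reporting.
    """
    hours = [(deployed - merged).total_seconds() / 3600
             for merged, deployed in changes]
    return {"hours": hours, "median_hours": median(hours) if hours else None}

# Hypothetical merge/deploy timestamp pairs for three changes
result = lead_times_hours([
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 13, 0)),   # 4 h
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 2, 11, 30)), # 1.5 h
    (datetime(2024, 5, 3, 8, 0), datetime(2024, 5, 4, 8, 0)),    # 24 h
])
```

As with MTTR, reporting a median (or full distribution) rather than a mean keeps one slow release from dominating the metric.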

Graphically display the number of test cases in a test set

A customer in a recent webinar asked how to get a count of the test cases in a test set that are not marked as completed. This KB article answers this request.

Summary graph by test execution status and build

This article explains how to create a table that summarizes the test run count by status and by build.

Create a Custom Monthly Business Unit Trend Report

Customers frequently want to review the monthly processing times of requirements across various requirement types in specific Work-in-Progress (WIP) stages like Developed and Tested, categorizing them into four timelines: under 30 days, under 60 days, under 90 days, and above 90 days. This article addresses this request.

Test Case Creation Productivity Graph

A customer asked how they could run a report to display the number of test cases created per day for a specific date range. You can of course just run the test case summary report and export the data to Excel, but using Spira's custom reporting functionality, you can also do it inside the application.

Accessing the SpiraTeam Graph Data Grid as CSV

When you display a graph in the SpiraTeam reporting page, you can download the graph as a CSV file. Some customers have asked about ways to get the same data by making a REST call. This article explains the components of the API.

Testing Web Applications that use SVG

One of the more challenging types of web application to test is one that uses embedded SVG (Scalable Vector Graphics) in addition to standard HTML DOM elements. This article describes how to use Rapise to write automated testing scripts for such applications.

Graph of requirements by status in open releases

This article explains how to create a graph of the number of requirements by status, assigned to any open release in a product, using the custom graphing engine.

Creating a Custom Requirement Summary Graph using ESQL

A customer recently posted on the forums that they wanted to create a graph similar to the built-in Requirements Summary graph using Spira custom graphs and ESQL. In this article we include an example.

Writing a Custom Report to Display the Count of Incidents By Project and Priority

A customer of ours asked for a custom report/graph displaying the count of incidents by project and by priority.

Creating a Custom Graph based on a set of values in a Custom List

A customer recently asked about creating a custom graph based on a set of values in a custom list on an artifact. This article explains how this can be done. 

Displaying a Graph of Requirements Test Coverage by Custom Property

Imagine you have a situation where you want to display a requirements test coverage graph for requirements organized by a multi-select custom property. In this article we show how you can use that property to display a custom graph in the Spira reporting dashboard.

Creating a Graph of Automated vs. Manual Test Execution Durations

Sometimes you will want to get an idea of how long your manual and automated tests take to run. You can use the custom graphing feature to create a custom graph for this.

Pie Chart of Test Run by Status for Release

Sometimes customers need to show a graph of all the Test Runs by status for a specific Release in Spira. This article explains how to use the custom graphing engine to achieve this.

Pie Chart of Incident Status for a Specific Component

Sometimes customers need to show a graph of all the Incidents by status for a specific Component in Spira. This article explains how to use the custom graphing engine to achieve this.