New SpiraApps On Marketplace
We have released a new set of AI plugins for Spira on our SpiraApps marketplace that add a completely new AWS Bedrock option, as well as updates to our Azure OpenAI and ChatGPT (OpenAI) plugins. All of the plugins provide the same functionality, but let customers choose which LLM platform, and which model within that platform, they want to use. In the coming months we will be rolling out a native Spira AI assistant based on our AWS Bedrock plugin that will provide the same functionality, but allow you to have billing and account management built into your Spira subscription with Inflectra.
Access to New Models
The new plugins provide access to a wider range of Generative AI LLMs than before. In addition to the OpenAI GPT-3.5 and GPT-4 models previously supported, the new plugins support the Meta Llama family of models as well as the Anthropic Claude family of models. These are accessible to any customer who has an AWS account and has enabled the AWS Bedrock service.
The new plugin lets you choose which family of models you wish to use and also allows you to specify additional options:
- Detailed Test Steps - choose whether to generate detailed test steps for each test case or just a single default step.
- Use Artifact Descriptions - by default the plugins send only the artifact name to the LLM; with this option you can also choose to send the long description.
If you choose the Llama family of models, you can specify the exact Llama model to use (we default to Llama 3), as well as the temperature, top-p, and maximum number of generated tokens:
If you choose the Claude family of models, you can specify the exact Claude model to use (we default to Haiku), as well as the temperature and maximum number of generated tokens.
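To make these settings concrete, here is a minimal sketch of how temperature, top-p, and maximum-token settings map onto AWS Bedrock request bodies for the two model families. This is an illustration only: the helper names and default values are assumptions, not the plugin's actual code, although the field names follow the Bedrock request formats for Llama and Claude models.

```python
import json

def build_llama_body(prompt, temperature=0.5, top_p=0.9, max_gen_len=512):
    """Illustrative request body for a Meta Llama model on Bedrock."""
    return json.dumps({
        "prompt": prompt,
        "temperature": temperature,   # randomness of sampling
        "top_p": top_p,               # nucleus-sampling cutoff
        "max_gen_len": max_gen_len,   # cap on generated tokens
    })

def build_claude_body(prompt, temperature=0.5, max_tokens=512):
    """Illustrative request body for an Anthropic Claude model on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    })

# The body would then be sent via the boto3 bedrock-runtime client, e.g.:
#   bedrock = boto3.client("bedrock-runtime")
#   bedrock.invoke_model(modelId="...", body=build_claude_body("..."))
```

Note that Claude models on Bedrock do not use the bare-prompt format of the Llama models; they take a structured messages list, which is why the plugin exposes slightly different options per family.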
Code Generation Options
The latest versions of the plugins now include the option to generate sample implementation and unit testing code for each Task in the project. Since the implementation choices will vary per project, you can specify which coding languages and/or unit test frameworks are being used by the current project:
The End User Experience
When a user logs into Spira, they can create requirements and user stories in either the grid or planning board views as they would using the previous version:
Using the AI Assistant, they can generate a set of Behavior Driven Development (BDD) scenarios for this requirement, written in the Gherkin syntax. The AI Assistant will automatically convert them into an easy-to-read format, with the Gherkin keywords in bold and bullets separating out the components. The previous version performed only limited text formatting and was not as readable as the new version:
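The formatting step described above can be sketched as a small function that bolds the Gherkin keywords and turns each clause into a bullet. The function name and the markdown-style output are assumptions for illustration; they are not the plugin's internal implementation.

```python
import re

# Gherkin step keywords to highlight and bullet
GHERKIN_KEYWORDS = ("Given", "When", "Then", "And", "But")

def format_scenario(gherkin: str) -> str:
    """Turn raw Gherkin text into bulleted lines with bold keywords."""
    pattern = re.compile(r"^(%s)\b(.*)" % "|".join(GHERKIN_KEYWORDS))
    lines = []
    for raw in gherkin.strip().splitlines():
        line = raw.strip()
        match = pattern.match(line)
        if match:
            keyword, rest = match.groups()
            lines.append(f"- **{keyword}**{rest}")
        else:
            lines.append(line)  # e.g. the "Scenario:" title line
    return "\n".join(lines)

scenario = """Scenario: Successful login
Given a registered user
When they enter valid credentials
Then they are taken to their dashboard"""

print(format_scenario(scenario))
```

Running this prints the scenario title unchanged, followed by one bullet per step with its keyword in bold.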
The user can also generate the development tasks appropriate for implementing the feature in question. The assistant will attempt to generate the most likely tasks for the type of feature. It is expected that the user will review and refine the tasks. Unlike the previous version, the tasks will be generated with both a short name and a long description.
Once the tasks have been generated (and for any additional tasks the user manually creates), the user can use the new AI Assistant to generate sample source code for the current development task.
The user can choose whether to generate just the development code in one of the languages specified for the current project, or the development code together with the associated unit tests in the matching test automation framework.
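To illustrate the kind of paired output this produces, here is a hand-written example of an implementation stub for a task plus a matching unit test. The function, its behavior, and the choice of Python with unittest are all assumptions for illustration; the assistant's actual output depends on the task and the languages and frameworks configured for the project.

```python
import unittest

def calculate_order_total(items, tax_rate=0.0):
    """Sum line-item prices and apply a flat tax rate (illustrative stub)."""
    subtotal = sum(item["price"] * item["quantity"] for item in items)
    return round(subtotal * (1 + tax_rate), 2)

class CalculateOrderTotalTest(unittest.TestCase):
    """Matching unit tests, as the assistant might generate alongside the code."""

    def test_applies_tax(self):
        items = [{"price": 10.0, "quantity": 2}]
        self.assertEqual(calculate_order_total(items, tax_rate=0.1), 22.0)

    def test_empty_order(self):
        self.assertEqual(calculate_order_total([]), 0.0)

# Run with: python -m unittest <module>
```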
Once the source code files are created, they will be displayed in the Attachments tab of the current task.
When a user clicks on the appropriate filename, Spira will display the generated source code, with the appropriate syntax highlighting in place:
The same is true for the generated unit testing source code as well:
In parallel with the software development activities, the project manager will usually want to assess the potential risks associated with the current feature. They can use the option in the AI Assistant to automatically identify possible risks:
Unlike the previous version, the risks will include both a short name and a separate long description.
Once the risks have been identified, the user can click on each risk in turn to edit it and make adjustments. In addition, the AI Assistant can automatically suggest potential mitigations for each identified risk:
This functionality can be used to generate potential mitigations for both AI-generated risks and those that were manually identified by the project team:
Finally, the AI assistant can help the quality engineering team by automating the process of writing test cases and test scripts associated with the requirements.
Unlike with the previous version of the plugins, each generated test case is created with detailed test steps, expected results and (where applicable) sample data.
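A generated test case of the kind described above bundles a name with a list of detailed steps, each carrying a description, an expected result, and optional sample data. The sketch below illustrates that shape; the field names and values are assumptions for illustration, not the Spira API schema.

```python
# Illustrative structure of a generated test case with detailed steps
generated_test_case = {
    "name": "Verify login with valid credentials",
    "steps": [
        {
            "description": "Navigate to the login page",
            "expected_result": "Login form is displayed",
            "sample_data": "",
        },
        {
            "description": "Enter the username and password and click Log In",
            "expected_result": "User is redirected to the dashboard",
            "sample_data": "user: fred.bloggs, password: ********",
        },
    ],
}

# Render the steps as a numbered script
for i, step in enumerate(generated_test_case["steps"], start=1):
    print(f"{i}. {step['description']} -> {step['expected_result']}")
```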
In some cases, manually created test cases may not be linked to any requirements. For example, the team imported some test cases from a spreadsheet or MS Word document and now needs to reverse-engineer the requirements. The good news is that the AI assistant can now help with this process as well:
When you click on the option to generate requirements, the system will create multiple requirements for the test case in question:
Conclusion
The new and updated SpiraApps extend the already groundbreaking GenAI functionality in Spira to streamline common project management and quality engineering tasks. Using the latest LLM models and the updated SpiraApps framework, they can speed up the generation of test cases, tasks, and code by a factor of ten.