Overview
The new Generative Artificial Intelligence (GenAI) features introduced in Rapise 8.2 are designed to help human testers create automated test scripts more quickly and easily. They form a flexible, extensible platform that lets customers use our existing GenAI features and craft their own use cases. The goal of the functionality is to shorten the path from a human-readable test scenario to a working automation script. In addition, the functionality simplifies test script maintenance, speeds up synthetic test data generation, and acts as a general-purpose assistant for automation engineers and manual testers.
Using Generative AI in Rapise
When you open the new AI features in Rapise, you will be taken to the AI dashboard. This is where you can use one of the new AI assistants to help automate your test scenarios.
In the following sections, we will illustrate some of the critical default GenAI workflows and capabilities available in Rapise 8.2.
Generating Code from AI Commands
Rapise 8.2 introduces a new conversational interface for interactively communicating with the Rapise AI assistant. Using this assistant, we can provide the details of a test scenario (e.g., logging into a specific application, waiting a few seconds, and then logging out) in natural language, and the AI assistant will write the RVL test actions for us:
We can refine the suggestions using the conversational interface until we are happy with the provided solution. Then, we can use the Apply button to tell Rapise to implement the suggestion into the codeless RVL test scenario. If you prefer a scripted test approach, you can also do the same thing with the JavaScript code editor.
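To make the scripted option concrete, here is a hedged sketch (not actual Rapise output) of the kind of JavaScript the assistant might produce for the login/wait/logout scenario. `SeS()` is Rapise's standard object-access function and `Global.DoSleep()` its wait helper; the object names (`Username`, `Password`, `Log_In`, `Logout`) are assumptions that depend on which objects you have learned in your own test.

```javascript
// Sketch of an AI-suggested scripted scenario: log in, wait, log out.
// Object names below are illustrative; use the names from your object tree.
function LoginWaitLogout()
{
    SeS('Username').DoSetText('librarian');
    SeS('Password').DoSetText('librarian');
    SeS('Log_In').DoClick();
    Global.DoSleep(3000); // wait a few seconds, per the scenario
    SeS('Logout').DoClick();
}
```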
This looks like we're getting close to the "holy grail" of test automation - namely, taking a manual test case or test scenario and automatically turning it into running code. Well, the following workflow gets us pretty close to that...
Using AI Commands in RVL
In addition to using the conversational interface to write natural language test scenarios, we can use the new "AI" Rapise Visual Language (RVL) command to directly execute manual test scenarios from within the RVL editor. In the example below, we have used a single-line RVL statement that contains the natural language test scenario:
Login as librarian/librarian
Click Books link
Click Create New Book link
Logout
And from that simple statement, Rapise will dynamically ask the AI engine to generate the appropriate automated test script "on the fly" and then execute it.
The result is that you can now execute a manual test case via the AI engine in one single step! In the background, Rapise uses the same set of AI prompts to get the JavaScript version of the test script from the manual test scenario and then executes it in real time. For this to work, you must already have learned the appropriate UI objects or API endpoints using the standard Rapise Learn tools.
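For intuition, the four-line scenario above might be translated by the AI engine into something like the following JavaScript before execution. This is a hypothetical sketch: the object names (`Username`, `Books`, `Create_New_Book`, etc.) are assumptions standing in for whatever objects you have learned with the Rapise Learn tools.

```javascript
// Hypothetical "on the fly" translation of the natural-language scenario:
// login as librarian/librarian, click Books, click Create New Book, logout.
function LibraryScenario()
{
    SeS('Username').DoSetText('librarian');
    SeS('Password').DoSetText('librarian');
    SeS('Log_In').DoClick();
    SeS('Books').DoClick();
    SeS('Create_New_Book').DoClick();
    SeS('Logout').DoClick();
}
```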
Generating Data Lists
In addition to automating manual test cases, the GenAI functionality can accelerate other tedious manual tasks, such as creating synthetic test data. For example, suppose we need to generate ten fake company names for use in a test script. Typically we'd create an RVL Map and then populate it from some external source. With Rapise 8.2, you can ask your AI assistant to create them for you:
You can use the conversational interface to refine the question to get exactly what you want. For example, if the company names are too long, you could ask for ten shorter ones. Once you are happy with the results, you can use the Apply button to simply add them to your RVL map:
The nice thing about our AI implementation is that it maintains a "session history", so it remembers your previous questions. That way, if you want to get ten more companies that are similar to the ten you previously entered, you can simply ask it to "Generate 10 more rows" rather than having to write out the full prompt again:
You can then apply these new company names to your RVL Map in exactly the same way:
You now have 20 company names ready for use in your test scripts. You can use the AI data table generator default workflow to generate a table of data with multiple columns in exactly the same way.
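Once applied, the generated names are ordinary test data you can iterate over. The sketch below is illustrative only: the three sample names stand in for the AI-generated list (the real ones would come from your RVL Map), and the `runForEachCompany` helper is a plain data-driven pattern, not a Rapise API.

```javascript
// Illustrative stand-ins for the AI-generated company names.
var companyNames = [
    'Brightpath Solutions',
    'Northgate Systems',
    'Silverline Labs'
];

// Run the same scripted check once per data row (a common data-driven pattern).
function runForEachCompany(names, testFn)
{
    for (var i = 0; i < names.length; i++) {
        testFn(names[i], i);
    }
}
```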
Generic Interactive AI Assistant
Finally, another common use case is to have a "copilot" that helps you with basic (and some not-so-basic) test automation tasks. For example, you might need to create a simple JavaScript utility function to get the current day of the week. Since Rapise uses standard JavaScript and ECMAScript for its engine, you could probably go to StackOverflow or a similar website to find some sample code.
However, with Rapise 8.2, you can skip that step and simply ask Rapise to help you:
Once you are happy with its recommendation, you can then simply insert the generated JavaScript utility function right into your Rapise test script:
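For reference, one plausible shape for such a utility, written in plain ECMAScript so it could be pasted straight into a Rapise test script (this is our own sketch, not the assistant's actual output):

```javascript
// Return the current day of the week as a string, e.g. 'Monday'.
function GetCurrentDayOfWeek()
{
    var days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday',
                'Thursday', 'Friday', 'Saturday'];
    return days[new Date().getDay()]; // getDay(): 0 = Sunday .. 6 = Saturday
}
```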
Ensuring Audit Control using AI Session History
As part of our commitment to responsible AI, Rapise includes a full session history of all AI conversations. This allows you to see all the previous interactions and check for any hallucinations or discrepancies.
Configuring AI in Rapise
When you go to the new Rapise AI dashboard, you will see that it comes with a predefined set of models, agents, and workflows. You can start working immediately with the default workflows, or you can create your own models, agents, and workflows to customize the AI functionality. In the previous section, we illustrated how you can use some of the out-of-the-box AI workflows to perform common tasks. In this section, we'll explain the different elements that make up the AI functionality and how you can configure them to create AI workflows of your own.
Configuring AI: Models
The models dialog lets you define the different Large Language Models (LLMs) that you want to use with Rapise. For example, you might want to use GPT-3.5 Turbo, GPT-4, or another model. It also lets you specify the connection URL, API key, and API version. This is important as you may want to use a public model such as OpenAI GPT-4 or a private LLM such as GPT-4 running under Microsoft Azure OpenAI.
You then give the model a friendly name by which it will be referred to elsewhere. In the example, we have called the model "Sample OpenAI GPT-4 model".
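To summarize the settings involved, here is a hypothetical illustration of a model definition as a plain JavaScript object. The field names are ours, chosen to mirror the dialog fields described above; they are not Rapise's actual configuration schema.

```javascript
// Hypothetical model definition mirroring the dialog fields described above.
var sampleModel = {
    name: 'Sample OpenAI GPT-4 model',          // friendly name used elsewhere
    connectionUrl: 'https://api.openai.com/v1', // public OpenAI endpoint
    apiKey: '<your API key>',                   // keep this secret
    apiVersion: '2023-05-15'                    // mainly relevant for Azure OpenAI
};
```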
Configuring AI: Agents
The agent dialog lets you specify how to process the data coming back from the LLM. For example, whether you want it to return JSON or a comma-separated list of values. You then associate the agent with a specific model and also specify the temperature of the AI prompt:
The temperature determines how "predictable" the answer will be.
At a low temperature, the model sticks to its most likely wording, giving safer, more consistent answers. At a high temperature, it takes more chances, giving more varied and sometimes surprising answers. This setting matters because it controls how creative or careful the AI is when it answers questions or solves problems: a data-generation workflow might benefit from some creativity, while a code-generation workflow usually wants predictability.
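For intuition, temperature is simply a numeric field sent along with each LLM request. The sketch below builds an OpenAI-style chat payload to show where it fits; it only constructs the request object and does not call any service, and the `buildChatRequest` helper is our own illustration, not a Rapise API.

```javascript
// Build an OpenAI-style chat request payload, showing where temperature fits.
function buildChatRequest(model, prompt, temperature)
{
    return {
        model: model,
        messages: [{ role: 'user', content: prompt }],
        temperature: temperature // 0 = most deterministic; higher = more varied
    };
}
```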
Configuring AI: Workflows
Finally, you use the workflow dialog to create a specific AI use case that you want Rapise to provide. The workflow screen allows you to write the actual system prompt that will be sent to the AI model. Any data returned back from the model will then be processed and parsed using the agent you specify:
With these three components (model, agent, and workflow) you can create new AI use cases and assistants right from within Rapise itself.
Unlock your testing team's full potential with Rapise 8.2. Don't just automate: amplify your team's ingenuity with AI. Make tedious tasks a thing of the past, and your testers will be free to focus on what they do best: ensuring exceptional software quality. Explore the possibilities with Rapise 8.2 today!