Software Verification
Ensuring the reliability and safety of Software as a Medical Device (SaMD) requires a balanced approach between automated and manual testing. Automated testing is crucial for early and efficient assessment of software quality: it allows fast execution of unit, integration, and system tests, and its consistent, continuous validation throughout the software lifecycle supports regulatory compliance, such as adherence to IEC 62304.
Manual testing remains relevant, especially for usability and exploratory testing, where human judgment is necessary to evaluate how clinicians and patients interact with the software, and for edge-case validation, ensuring the system correctly handles unexpected inputs or real-world complexities that automated tests might overlook. By combining automation for efficiency and coverage with manual testing for critical judgment and user experience, SaMD developers can create software that is not only compliant and robust but also practical and safe for medical use.
Overview
P4SaMD provides a comprehensive overview of all the tests planned for a version of the software system, organized in three main tabs:
- All tests: Displays the list of individual tests that are applicable to the current Software Version. The list includes all tests associated with the current and other system versions, as long as they are not deprecated.
- Test suites: Shows the available Test Suites and allows navigation into suite details. Test Suites group related tests to simplify their organization and execution.
- Executions: Lists all test executions with access to execution details and reports. An execution can cover a single test or multiple tests run together through Test Suites.
Tests are defined in the integrated ALM, where you can create new ones, update their information, and delete them. The P4SaMD table dynamically reflects any changes made in the ALM.
Users are assisted in evaluating the quality and compliance of the tests thanks to AI-powered evaluation features.
All tests
For each test, the following information is provided:
- Title: the unique identifier (ID or key) and title of the test;
- Suggestions: a list of suggestions generated by P4SaMD (for example if a test has never been executed or is not linked to a requirement);
- Quality: the latest evaluation performed using AI; see the icon legend below;
- Type: the type of test, like integration or system;
- Execution Mode: if the test is executed automatically or manually;
- Test Suite: if the test is part of an automated Test Suite;
- Latest Execution: details about the last test execution, including when it was performed and the outcome. The outcome can refer to the overall test (e.g. passed or failed for manual tests) or to the underlying test cases of the test (e.g. count of successful, failed or skipped test cases);
- Software Items: the number of software items associated with the test;
- Requirements: the number of requirements verified by the test.
Under the Quality column, you can see the following icons indicating the evaluation status of each test. For further details, please refer to the AI evaluation section.
| Evaluation | Icon |
|---|---|
| Missing | |
| Very low quality | |
| Low quality | |
| High quality | |
| Very high quality | |
Actions
In the last column of the table you can perform the following actions:
- Link to Implementation: By clicking the arrow icon, you will be redirected to the implementation of the test. This action is available only for tests with the Implementation Link field populated.
- Link Software Items: by clicking on the link icon, you can link a software item to the test or unlink an already associated one. The linked software items are displayed in the drawer under the Traceability tab.
Drawer
Clicking on a table row opens a drawer displaying detailed information about the selected test.
The drawer shows test-related information in four tabs: Details, Traceability, Suggestions and Executions.
You can navigate between the linked entities - requirements and software items - by selecting them under the Traceability section in the detailed view.
You can browse back to previous entities by accessing the history menu at the top of the detailed view and selecting the entity of interest.
Details
In addition to the information displayed in the table, this tab shows:
- Description: A paragraph describing the test including the steps and phases.
- Implementation Link: Defines the link to the implementation of the test.
Traceability
This tab shows the entities linked to the test (requirements and software items), grouped by type.
Suggestions
This tab shows related suggestions of the test. For further details, please check Insight & Suggestions.
Executions
This tab provides a list of all test executions, most recent first.
For each test execution, the following information is available:
- the user who executed the test;
- when the test was executed;
- the outcome of the test execution (passed, failed, etc.);
- Download report: if available, download the report for the specific execution.
AI evaluation
When performing an evaluation using AI, Assignee, Reporter and Approver information is not provided to the AI. The test title, description and other information, including related requirements and software items, are shared with the AI service.
Do not insert any personal or sensitive information in the AI-processed fields. For more details about third-party organizations' privacy and security measures, please check the [FAQ section][faq-data-sharing].
Also remember that information generated by AI may be inaccurate or misleading, so never make any assumption or decision based solely on that information, and always verify it.
Each test can be evaluated using the AI-powered evaluation features, and the results are displayed both in the test details drawer and in a dedicated column of the test table.
You can assess a test by hovering over the icon under the Quality column in the corresponding table row and clicking the Get evaluation button.
The assessment usually takes about a minute. It runs in the background, so you can keep working in P4SaMD and come back to check the progress at any time.
Once the evaluation is complete, the icon in the table changes color according to the overall rating, and hovering over it shows a preview of the results.
The rating provides an overall score, which is the result of the aggregation of four different scores on specific criteria:
- Clarity and Specificity: if the test is clear, detailed and unambiguous;
- Traceability: if the test is uniquely identified and is linked to requirements and software items;
- Testability and Verification: if the test is written in a way that is easy to execute and replicate;
- Appropriateness: if the test is appropriate according to IEC 62304.
If you select the row, the drawer on the right side of the page shows detailed information about the evaluation under the Suggestions tab.
At the top, a suggested description provides an example of how you could rewrite your test description to address its main weaknesses.
You can also check how the test scored on each criterion listed above, including its specific areas of strength and weakness.
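As an illustration of how four criterion scores could roll up into the quality bands shown in the legend, here is a minimal sketch. Note that P4SaMD does not document its exact aggregation formula, so the 0-100 scale, the plain average, and the band thresholds below are all assumptions for illustration only.

```python
# Illustrative sketch only: the scale, the averaging, and the thresholds
# are assumptions, not the documented P4SaMD aggregation.

def overall_rating(clarity, traceability, testability, appropriateness):
    """Aggregate the four criterion scores (assumed 0-100) into a band."""
    average = (clarity + traceability + testability + appropriateness) / 4
    if average >= 90:
        return "Very high quality"
    if average >= 70:
        return "High quality"
    if average >= 50:
        return "Low quality"
    return "Very low quality"

# A test that is clear and appropriate but weakly testable still lands
# in the "High quality" band under this assumed average.
print(overall_rating(95, 80, 70, 85))
```

The point of the sketch is only that one weak criterion lowers the overall band rather than failing the test outright; the real scoring logic lives in the AI evaluation service.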
Test Suites
In this tab, you can manage Test Suites: create, update, and delete them. The displayed Test Suites refer to the current system version and group multiple tests of the same type. Test Suites can be automatic or manual, and the tests they contain must match the suite's defined type.
For each Test Suite, the following information is displayed:
- Title: Name of the Test Suite.
- Execution Mode: Automatic or manual, strictly tied to the Execution Mode of the associated tests.
- Tests: Number of tests included in the suite.
- Last Execution: Status and date of the last execution.
Managing Test Suites
Creating Test Suites
- Add suite: Create a new Test Suite. NB. Test Suites can be created for software versions that are not yet released. Each Test Suite can contain one or more tests.
Running Test Suites
- Run all: Initiate execution of all automated Test Suites that are configured with an External Test Executor.
- Run Test Suite: Execute a specific automated Test Suite when its API Trigger is configured. Multiple Test Suites can be selected and executed simultaneously.
- Manual executions: Must be handled in the ALM tool, where testers initiate and update execution information.
Other Actions
- View Details: Click on a row to access detailed information about the selected Test Suite.
- Edit Test Suite: Modify Test Suite properties such as the title.
- Delete Test Suite: Remove a specific Test Suite without deleting the associated tests.
Test Suite Details
Clicking on a row displays the Test Suite details in three tabs:
Tests Tab
Shows all tests included in the Test Suite with their individual properties and status.
Executions Tab
Displays chronological executions of the specific Test Suite, with options to download corresponding reports when available.
External Test Executor Tab
Configure the external service required for automatic test suite execution.
Configuration Requirements:
- P4SaMD requires a `jobId` to trigger the executor correctly
- Include the `{{@jobId}}` placeholder in at least one of: Endpoint URL, Payload, or Header
- Without proper `jobId` configuration, the executor cannot run
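The placeholder mechanics can be sketched as a simple string substitution over the configured fields. The field names (`endpoint_url`, `payload`, `header`) and the helper below are illustrative; only the `{{@jobId}}` placeholder itself comes from the product documentation.

```python
# Sketch of how a {{@jobId}} placeholder might be expanded when P4SaMD
# triggers the External Test Executor. Field names are hypothetical.

def expand_job_id(config: dict, job_id: str) -> dict:
    """Replace every {{@jobId}} occurrence in the executor configuration."""
    return {key: value.replace("{{@jobId}}", job_id)
            for key, value in config.items()}

config = {
    "endpoint_url": "https://ci.example.com/run?job={{@jobId}}",
    "payload": '{"job": "{{@jobId}}", "suite": "smoke"}',
    "header": "X-Job-Id: {{@jobId}}",
}
expanded = expand_job_id(config, "job-42")
print(expanded["endpoint_url"])
```

Because the placeholder can appear in the URL, the payload, or a header, the executor always receives the job identifier in at least one location it can echo back later.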
Automatic Test Suite Execution
Execution Flow
- Initialization: User triggers execution → new job created with assigned `jobId`
- Running: `jobId` inserted into configured location → External Test Executor runs → job updates based on response
- Results: External executor sends updates via webhook → execution status is updated → results are processed
- Reports: JUnit format reports are automatically processed and displayed; other received formats are saved and are downloadable
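To show what "automatically processed" means for a JUnit-format report, here is a minimal parsing sketch. The sample XML and the processing code are illustrative, not P4SaMD's internal implementation; the `tests`, `failures`, and `skipped` attributes are standard in the JUnit XML format.

```python
# Sketch of extracting outcome counts from a JUnit-format report,
# similar in spirit to what P4SaMD does when it receives one.
import xml.etree.ElementTree as ET

# Hypothetical report delivered by an External Test Executor.
JUNIT_XML = """<testsuite name="smoke" tests="3" failures="1" skipped="0">
  <testcase name="login_ok"/>
  <testcase name="export_report"/>
  <testcase name="upload_large_file"><failure message="timeout"/></testcase>
</testsuite>"""

suite = ET.fromstring(JUNIT_XML)
total = int(suite.get("tests"))
failed = int(suite.get("failures"))
skipped = int(suite.get("skipped"))
passed = total - failed - skipped
print(f"{passed} passed, {failed} failed, {skipped} skipped")
```

Reports in other formats skip this parsing step entirely: they are stored as-is and offered for download.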
Configuration Prerequisites
- External test executor must be configured
- `{{@jobId}}` placeholder must be set in test suite details
- Without proper configuration, P4SaMD cannot trigger or run test suites
Webhook Integration
External Test Executors update job status via webhook:
https://{{domain}}/p4samd/webhook/update-run-test-suite/:jobId
Supported Parameters:
| Name | Type | Description |
|---|---|---|
| outcome | string | Updates the status of the job execution |
| result | string | Final test results (if report not in JUnit format) |
| message | string | Execution description |
| junitXml | string | JUnit format report (automatically processed) |
| reportFile | file | Report file (downloadable, not auto-processed) |
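A status update from an External Test Executor can be sketched as a plain HTTP POST to the webhook. The snippet below builds the request without sending it; the domain, the `outcome` value, and the sample `junitXml` content are assumptions for illustration, while the parameter names come from the table above.

```python
# Sketch of an External Test Executor posting a status update to the
# P4SaMD webhook. The request is constructed but intentionally not sent.
import json
import urllib.request

job_id = "job-42"  # hypothetical jobId assigned by P4SaMD at run start
url = f"https://example.p4samd.host/p4samd/webhook/update-run-test-suite/{job_id}"

body = json.dumps({
    "outcome": "completed",  # illustrative value; see the table above
    "message": "Suite finished on runner ci-03",
    "junitXml": "<testsuite name='smoke' tests='3' failures='1'/>",
}).encode("utf-8")

request = urllib.request.Request(
    url,
    data=body,
    method="POST",
    headers={"Content-Type": "application/json"},
)
print(request.full_url)
```

Sending `junitXml` lets P4SaMD process the results automatically; an executor with a non-JUnit report would instead use `result` and attach the file via `reportFile`.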
Executions
This tab lists all test executions sorted by date, newest first. Each execution corresponds to a single procedure in which the user selects one or more tests, executes them, and can then access the outcomes and reports. NB. The selection is only allowed for automatic Test Suites, so the executions and their reports refer to automatic tests only. Manual tests are performed by human testers and must be handled in the integrated ALM tool; the related report can be produced through the templating system (see the Documentation engine section).
For each execution, the following pieces of information are available:
- Date and time: When the execution was performed.
- Note: Shows an icon if the execution has notes attached.
- Test suites: Number of Test Suites involved in the execution.
- Outcome: Result of the execution of all Test Suites included in that execution (passed, failed, etc.).
- Executed by: User who performed the execution.
- Download report: Download a ZIP archive containing a summary of the execution and all available individual reports from that execution.
Execution Reports
Each execution provides a downloadable ZIP archive containing:
- README.md: A summary file with execution details, including:
- Execution date and executor name
- Overview of all Test Suites included in the execution
- Final result for each Test Suite (passed, failed, etc.)
- References to corresponding report files when available
- Individual Reports: All available Test Suite reports from that execution
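A user script consuming the downloaded archive only needs standard ZIP handling. The sketch below builds a stand-in archive in memory with the documented layout (a `README.md` summary plus individual reports), since the real file comes from the Download report action; the file names inside are hypothetical.

```python
# Sketch of unpacking an execution report archive with the layout
# described above. The sample contents are illustrative.
import io
import zipfile

# Build a stand-in archive mirroring the documented structure.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as archive:
    archive.writestr("README.md", "# Execution summary\nDate: 2024-05-01\n")
    archive.writestr("reports/smoke-suite.xml", "<testsuite tests='3'/>")

# Read it back the way a consumer script might.
with zipfile.ZipFile(io.BytesIO(buffer.getvalue())) as archive:
    names = archive.namelist()
    summary = archive.read("README.md").decode("utf-8")

print(names)
print(summary.splitlines()[0])
```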
Detail
Clicking on a row opens a new page showing the execution details: the execution date, the user who performed the execution, and any associated notes. The view also lists all Test Suites involved in the execution along with their related tests. You can download a comprehensive report for the Test Suite execution, which includes a summary file and all available individual Test Suite reports as attachments.