
Software Verification

Ensuring the reliability and safety of Software as a Medical Device (SaMD) requires a balanced approach between automated and manual testing. Automated testing is crucial for early and efficient assessment of software quality, allowing fast execution of unit, integration, and regression tests and ensuring regulatory compliance, such as adherence to IEC 62304 and FDA guidelines, thanks to consistent and continuous validation during the entire software lifecycle.
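To make this concrete, the snippet below is a minimal sketch of what such an automated unit/regression test might look like; the dosing function, its limits, and its values are purely hypothetical and are not part of P4SaMD or any standard.

```python
# Hypothetical example: an automated unit test for a SaMD dosing function.
# The function name, safety limit, and values are illustrative only.
import pytest


def compute_bolus_dose(weight_kg: float, units_per_kg: float, max_units: float = 10.0) -> float:
    """Return a bolus dose, capped at a configured safety limit."""
    if weight_kg <= 0 or units_per_kg < 0:
        raise ValueError("invalid input")
    return min(weight_kg * units_per_kg, max_units)


def test_dose_is_capped_at_safety_limit():
    # Regression check: the dose must never exceed the configured maximum.
    assert compute_bolus_dose(weight_kg=90, units_per_kg=0.2) == 10.0


def test_invalid_weight_is_rejected():
    # Invalid inputs must be rejected rather than silently processed.
    with pytest.raises(ValueError):
        compute_bolus_dose(weight_kg=-1, units_per_kg=0.1)
```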

Manual testing remains relevant, especially for usability and exploratory testing, where human judgment is necessary to evaluate how clinicians and patients interact with the software, and for edge-case validation, ensuring the system correctly handles unexpected inputs or real-world complexities that automated tests might overlook. By combining automation for efficiency and coverage with manual testing for critical judgment and user experience, SaMD developers can create software that is not only compliant and robust but also practical and safe for medical use.

Overview

P4SaMD provides a comprehensive overview of all the tests planned for a version of the software system. The main information is displayed in the table, and additional details can be found by clicking on a table row, which opens the drawer.

The tests displayed in the table originate from the integrated ALM, where they are created, updated, and edited. The table dynamically reflects any changes made inside the ALM.

Furthermore, the user is assisted in evaluating the quality and compliance of the tests thanks to the AI-powered evaluation features: users can leverage AI to evaluate tests and collect suggestions for improvement according to IEC 62304.

Table

For each test, the following information is provided (a schematic sketch of these fields follows the list):

  • Title: the unique identifier (ID or key) and title of the test;
  • Suggestions: a list of suggestions generated by P4SaMD (for example if a test has never been executed or is not linked to a requirement);
  • Quality: the latest evaluation performed using AI (see the legend of the different icons below);
  • Type: the type of test, like integration or system;
  • Execution Mode: if the test is executed automatically or manually;
  • Test Suite: if the test is part of an automated test suite;
  • Latest Execution: details about the last test execution, including when it was performed and the outcome (passed or failed);
  • Software Items: the number of software items associated with the test;
  • Requirements: the number of requirements covered by the test.
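For orientation, the sketch below mirrors these fields as a simple record; the field names and types are illustrative assumptions, not P4SaMD's actual data model or API.

```python
# Illustrative sketch of the per-test information shown in the table.
# Field names are hypothetical; they do not reflect P4SaMD's internal model.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class TestRow:
    key: str                                 # unique identifier (ID or key)
    title: str
    suggestions: list[str] = field(default_factory=list)
    quality: Optional[str] = None            # e.g. "missing", "very_low", "low", "high", "very_high"
    test_type: str = "system"                # e.g. "integration", "system"
    execution_mode: str = "manual"           # "automated" or "manual"
    test_suite: Optional[str] = None         # suite name, if part of an automated suite
    latest_execution: Optional[dict] = None  # when it ran and the outcome
    software_items: int = 0                  # number of linked software items
    requirements: int = 0                    # number of covered requirements
```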

Under the Quality column, you can see the following icons indicating the evaluation status of each test. For further details, please refer to the AI evaluation section.

Evaluation          Icon
Missing             Missing evaluation icon
Very low quality    Very low quality icon
Low quality         Low quality icon
High quality        High quality icon
Very high quality   Very high quality icon

Actions

Under the last column of the table, you can perform the following actions:

  • Link Software Items: by clicking on the link icon, you can link a software item to the test or unlink an already associated one. The linked software items are displayed in the drawer under the Traceability tab.

Drawer

Clicking on a table row opens a drawer displaying detailed information about the selected test.

The drawer shows test-related information in four tabs: Details, Traceability, Suggestions, and Executions.

You can navigate between the linked entities - requirements and software items - by selecting them under the Traceability section in the detailed view.

You can browse back to previous entities by accessing the history menu at the top of the detailed view and selecting the entity of interest.

Details

In addition to the information displayed in the table, this tab shows:

  • Description: A paragraph describing the test.

Traceability

This tab shows the linked issues of the test, grouped by requirements and software items.

Suggestions

This tab shows the suggestions related to the test. For further details, please check Insight & Suggestions.

Executions

This tab provides a list of all test executions, from the most recent to the oldest.

For each test execution, the following information is available (see the small sketch after this list):

  • the user who executed the test;
  • when the test was executed;
  • the test outcome (passed, failed, etc.).
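As a minimal illustration of how such records could be ordered from most recent to oldest, assuming hypothetical field names that simply mirror the list above:

```python
# Illustrative only: ordering execution records from most recent to oldest.
# The record fields mirror the list above; names are hypothetical.
from datetime import datetime

executions = [
    {"user": "j.doe", "executed_at": datetime(2024, 3, 1, 9, 30), "outcome": "failed"},
    {"user": "a.rossi", "executed_at": datetime(2024, 4, 12, 14, 5), "outcome": "passed"},
]

# Most recent first, as in the Executions tab.
for run in sorted(executions, key=lambda r: r["executed_at"], reverse=True):
    print(run["executed_at"].isoformat(), run["user"], run["outcome"])
```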

Test suites

P4SaMD enables developers to manage automated tests, including integration and system tests, organized in test suites.

A test suite is a collection of tests that you can manage directly from P4SaMD, although you cannot yet create one from the P4SaMD Control Panel.
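As an illustration of the general concept (not of how P4SaMD stores suites), a test suite can be pictured as a collection of automated tests grouped and run together, for example with Python's standard unittest API:

```python
# Illustrative sketch of a test suite as a collection of automated tests.
# This uses the standard unittest API; it is not how P4SaMD manages suites.
import unittest


class IntegrationTests(unittest.TestCase):
    def test_data_export(self):
        self.assertTrue(True)  # placeholder integration check


class SystemTests(unittest.TestCase):
    def test_end_to_end_flow(self):
        self.assertTrue(True)  # placeholder system check


def build_suite() -> unittest.TestSuite:
    # Group integration and system tests into a single suite.
    suite = unittest.TestSuite()
    suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(IntegrationTests))
    suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(SystemTests))
    return suite


if __name__ == "__main__":
    unittest.TextTestRunner().run(build_suite())
```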

AI evaluation

danger

When performing an evaluation using AI, Assignee, Reporter, and Approver information is not provided to the AI. The test title, description, and other information, including related requirements and software items, are shared with the AI service.

Do not insert any personal or sensitive information in the AI-processed fields. For more details about third-party organizations' privacy and security measures, please check the FAQ section.

Also remember that information generated by AI may be inaccurate or misleading, so never make any assumption or decision based solely on that information, and always verify it.
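To make the rule above concrete, the sketch below illustrates stripping the excluded fields from a record before anything is shared; it is a hypothetical illustration, not P4SaMD's actual implementation or API.

```python
# Illustrative only: the fields kept and removed mirror the note above,
# but this is not P4SaMD's actual implementation.
EXCLUDED_FIELDS = {"assignee", "reporter", "approver"}


def build_ai_payload(test_record: dict) -> dict:
    """Drop personal fields; keep title, description, requirements, software items."""
    return {k: v for k, v in test_record.items() if k not in EXCLUDED_FIELDS}


payload = build_ai_payload({
    "title": "Verify alarm threshold",
    "description": "Check that the alarm triggers above the configured limit.",
    "requirements": ["REQ-12"],
    "software_items": ["SWI-4"],
    "assignee": "j.doe",    # removed before sharing
    "reporter": "a.rossi",  # removed before sharing
})
```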

Each test can be evaluated through the AI-powered evaluation features, and the results are displayed both in the test details drawer and in a dedicated column of the test table.

You can assess a test by hovering over the icon under the Quality column in the corresponding table row and clicking the Get evaluation button.

The assessment may take a while, usually around a minute, so while we process it in the background you can keep working on P4SaMD and come back to check the progress at any time.

After the evaluation has been completed, the icon in the table changes color depending on the overall rating, and by hovering over it you can see a preview of the results.

The rating provides an overall score, obtained by aggregating four scores on specific criteria (an illustrative aggregation sketch follows the list):

  • Clarity and Specificity: whether the test is clear, detailed, and unambiguous;
  • Traceability: whether the test is uniquely identified and linked to requirements and software items;
  • Testability and Verification: whether the test is written in a way that is easy to execute and replicate;
  • Appropriateness: whether the test is appropriate according to IEC 62304.
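The exact aggregation formula is not documented here; as a purely illustrative sketch, assuming each criterion were scored on a 1-5 scale and the overall rating were their simple average:

```python
# Purely illustrative: the real aggregation used by P4SaMD is not documented here.
# Assume each criterion is scored 1-5 and the overall rating is their mean.
criteria_scores = {
    "clarity_and_specificity": 4,
    "traceability": 2,
    "testability_and_verification": 3,
    "appropriateness": 4,
}

overall = sum(criteria_scores.values()) / len(criteria_scores)
print(f"Overall rating: {overall:.2f} / 5")
```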

If you select the row, you can see detailed information about the evaluation in the drawer on the right side of the page, under the Suggestions tab.

At the top you can see a suggested description, which provides an example of how you could rewrite your test description to address its main weaknesses.

Also, you can check how it scored on each specific criterion mentioned above, including the specific areas of strength and weakness.