Industrial communication via OPC UA has arrived in operations and is deployed in growing numbers. To achieve smooth data transmission between different devices, every device has to adhere to the same communication standards. Accredited certification of these devices can ensure conformance to the standards, reducing expensive adaptations during commissioning. Along with the communication mechanisms, OPC UA also allows the transmitted data itself to be standardized. The respective standards are called Companion Specifications. Each Companion Specification describes a data model: building blocks for data representation, including context information such as relationships between data elements. OPC UA device manufacturers implement these standards, and the devices are then tested in an accredited test lab. Graphic 1 shows the context of data model tests. On the left-hand side, the data model of a specification is sketched; on the right-hand side, the data model of an implementation. The implementation is erroneous: the element "C" is of type "Bool" instead of "Int". Test cases for the specifications are developed to find such errors. As sketched in Graphic 2, test cases are today developed by the people who wrote the Companion Specification (orange). They are based on the data model and might, for example, test the existence of elements or their data types.
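The mismatch sketched in Graphic 1 can be illustrated with a minimal sketch. The structures and function names below are hypothetical, not the actual OPC UA test tooling: a specification's data model and an implementation's data model are each reduced to a mapping from element names to data types, and the two are compared.

```python
# Minimal sketch (hypothetical structures, not the OPC UA test tooling):
# compare a specification's data model against an implementation's model
# and report every element whose data type deviates from the specification.

def find_type_mismatches(spec_model, impl_model):
    """Return (element, specified type, implemented type) for each mismatch."""
    mismatches = []
    for element, spec_type in spec_model.items():
        impl_type = impl_model.get(element)
        if impl_type is not None and impl_type != spec_type:
            mismatches.append((element, spec_type, impl_type))
    return mismatches

# The example from Graphic 1: element "C" is "Bool" instead of "Int".
spec = {"A": "Int", "B": "Float", "C": "Int"}
impl = {"A": "Int", "B": "Float", "C": "Bool"}
print(find_type_mismatches(spec, impl))  # [('C', 'Int', 'Bool')]
```

A real conformance test would of course walk an OPC UA address space with its references and attributes rather than a flat dictionary; the sketch only shows the kind of check a test case encodes.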
For the standardizing bodies, test case development requires high manual effort. This effort is twofold: the standardization groups develop test cases that describe what is to be tested and which results are expected, and these test cases then have to be implemented as test scripts usable with the existing OPC UA test environment. Developing both the test cases and the test scripts is time-consuming. Standardization groups could put the available time to better use, and test script developers face a rising workload due to the growing number of Companion Specifications. This poses a problem for the future of standardization in industrial communication. Both the test cases and the test scripts are often repetitive, as in the sketched example: elements of the data model are tested for existence, correct data types and correct references. Such monotonous tasks often lead to human error, e.g. copy-paste errors. In order to transfer the data models of the Companion Specifications into test cases and test scripts, the connections between the model and the test cases need to be explored. Graphic 3 depicts this: the existing specifications contain test cases developed by humans. The connections, drawn as purple arrows, need to be identified and provided in a form suitable for automating test case generation.
The project's main goal is generic automation of test case generation for a large number of different Companion Specifications (purple in Graphic 2). Graphic 4 shows the principle: with a data model as the base, the test cases and test scripts are generated using the connections shown in orange. This automation is intended to reduce the time needed to design test cases from weeks or months to hours or minutes; a similar reduction is expected for test script generation. In addition, human error is expected to be much lower thanks to the automation.
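Assuming the connections between the data model and the test cases can be captured as simple rules (e.g. "every element must exist" and "every element must have the specified data type"), the generation principle of Graphic 4 can be sketched as follows. All names are hypothetical; a real generator for Companion Specifications would also cover references and further model semantics.

```python
# Sketch of rule-based test case generation from a data model
# (hypothetical structure; a real generator would additionally derive
# tests for references and other model properties).

def generate_test_cases(data_model):
    """Derive one existence test and one data-type test per model element."""
    cases = []
    for element, data_type in data_model.items():
        cases.append(f"assert node '{element}' exists")
        cases.append(f"assert node '{element}' has data type '{data_type}'")
    return cases

model = {"A": "Int", "B": "Float", "C": "Int"}
for case in generate_test_cases(model):
    print(case)
```

Because the rules are applied uniformly to every element, the repetitive work that invites copy-paste errors is removed from the human loop; the standardization groups would only need to maintain the data model and the generation rules.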