TÜV SÜD has developed a new test mark for so-called Low Risk AI systems. It is aimed at manufacturers and operators of AI applications that are not subject to the strict requirements of the European AI Act¹, but who still wish to demonstrate transparency and quality in a verifiable manner. The first areas of application include AI algorithms that improve the behaviour of automatic sliding doors to reduce a building’s energy consumption, and autonomous transport vehicles in intralogistics.
Text by TÜV SÜD

With EU Regulation 2024/1689 (also known as the “AI Act”), Europe is introducing uniform requirements for artificial intelligence (AI) for the first time, based on the risk posed by the respective application. There are currently no legal testing requirements for low-risk systems. Nevertheless, there is a growing need for guidance and confidence-building measures. “Many companies already want to take responsibility and have their AI systems assessed voluntarily, even if they are not obliged to do so,” says Benedikt Pulver, Head of the Machine Safety Department at TÜV SÜD. “Our new test mark makes exactly that visible – in a transparent, structured manner and based on technical criteria.”
Objective: Transparency for experts and users
The new TÜV SÜD test mark is intended to provide experts with an objective basis for evaluation, while appealing to end users in areas such as automated building functions and robotics applications in retail and logistics. It is based on application-specific test programmes that take into account factors such as energy efficiency, robustness, data processing and functional safety. One example is autonomous mobile robots that coordinate with each other within factory buildings to optimise routes. Such systems are considered “low risk” under the AI Act – the new TÜV SÜD test mark confirms that they have also been assessed according to traceable standards.
Voluntary participation with a signal effect
The test mark is part of a modular concept that also takes future developments into account, such as the voluntary self-commitment of manufacturers in accordance with Article 69 of the AI Act. “We believe that trust in AI cannot be achieved through regulation alone,” continues Benedikt Pulver. “Voluntary certifications send a strong signal for quality and responsibility – especially where there are no regulations (yet).” The test mark is available in German and English and can also be provided with additional consumer information on request.
¹ Such as the recording obligations and monitoring of the operation of high-risk AI systems.
About this Technical Story
This Technical Story is an article from our Valve World Magazine, September 2025 issue.
