SEQGEN-AI

EXPERIMENT OVERVIEW

The experiment aims to improve the way industrial test procedures are created and executed by integrating a Digital Intelligent Assistant into the GEMS platform, an open-source software solution developed by Gemesis for test bench control and automation. GEMS is a modular software environment used to manage sensors and actuators, control real-time processes, implement safety procedures, and collect and analyse data in laboratory and production testing systems.

In many industrial sectors, including hydrogen technologies, battery systems, and advanced manufacturing components, test sequences are still created manually using spreadsheets or custom programming scripts. These sequences can be extremely long and complex, often containing thousands of steps with conditional logic, loops, and dependencies between measurements. The preparation of such procedures requires significant engineering expertise and time. As a result, the process is resource-intensive, prone to human error, and difficult to scale when new requirements or standards arise.

The experiment proposes the development and integration of a Digital Intelligent Assistant capable of transforming natural language descriptions or structured documents into executable test sequences. Engineers will be able to describe a desired test procedure using plain language or by uploading documents such as Excel files, technical specifications, or standards. The assistant will interpret this input and automatically generate a structured sequence compatible with the GEMS execution environment.

The core artificial intelligence technology is provided by ThinkDeep through its DeepBrain platform, which is based on large language model technology. This platform is designed to interpret technical content and structure it into formalised, machine-readable instructions. The generated test sequence is then transferred to GEMS, where it can be visualised, edited, and validated before execution.
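
To make the hand-off between the assistant and GEMS concrete, a minimal sketch is shown below. It assumes a hypothetical step schema (step identifiers, actions, channels and transitions are illustrative; the actual GEMS sequence format is defined by the platform) and shows the kind of basic consistency check that could run before a generated sequence is presented to the engineer for review.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step of a generated test sequence (illustrative schema, not the real GEMS format)."""
    step_id: str
    action: str                      # e.g. "set", "wait_until", "measure"
    channel: str                     # sensor or actuator name on the test bench
    value: float | None = None       # setpoint, threshold or measurement target
    next_steps: list[str] = field(default_factory=list)

def check_sequence(steps: list[Step], known_channels: set[str]) -> list[str]:
    """Basic consistency checks run before the sequence is shown for human validation."""
    step_ids = {s.step_id for s in steps}
    issues = []
    for s in steps:
        if s.channel not in known_channels:
            issues.append(f"{s.step_id}: unknown channel '{s.channel}'")
        for nxt in s.next_steps:
            if nxt not in step_ids:
                issues.append(f"{s.step_id}: transition to undefined step '{nxt}'")
    return issues

# A tiny pressurisation fragment of the kind an assistant could emit from a plain-language request.
sequence = [
    Step("S1", "set", "H2_supply_valve", 1.0, next_steps=["S2"]),
    Step("S2", "wait_until", "pressure_sensor_1", 2.5, next_steps=["S3"]),
    Step("S3", "measure", "pressure_sensor_1"),
]
print(check_sequence(sequence, known_channels={"H2_supply_valve", "pressure_sensor_1"}))  # -> []
```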

A key component of the experiment is the development of a visual sequence viewer by Gemesis. This viewer will display the generated test steps in a clear, time-based representation, allowing engineers to review the logic, dependencies, and transitions between phases. This human-in-the-loop validation ensures that the system remains safe and transparent, particularly in production or safety-critical environments.

The experiment will demonstrate the solution through real industrial use cases in hydrogen and battery testing. Industrial and research actors will be involved as early adopters to validate the practical applicability of the assistant in real test bench scenarios. The Digital Innovation Hub CARA will support industrial engagement, expert review, and dissemination within its network of manufacturing stakeholders.

The relevance of the experiment lies in its ability to address a clear industrial bottleneck: the complexity and cost of designing test procedures. By combining artificial intelligence with structured automation and human supervision, the solution aims to reduce setup time, improve consistency, and enhance reliability in industrial testing. This contributes directly to increased competitiveness, improved productivity, and safer digital transformation within European small and medium-sized enterprises.

The experiment addresses a structural challenge in industrial testing environments: the growing complexity, cost, and rigidity of test sequence design and management. In sectors such as hydrogen systems, battery technologies, and advanced manufacturing components, test procedures have become increasingly sophisticated. They often involve long sequences with conditional logic, loops, interdependencies between measurements, and strict safety constraints. Despite this complexity, the creation of these test sequences remains largely manual, relying on spreadsheets or custom scripts developed by highly specialised engineers.

From an operational perspective, this situation creates significant inefficiencies. Engineers spend a considerable amount of time translating technical requirements into structured sequences, validating logic, and debugging configuration errors. Each modification to a specification, standard, or customer requirement may require manual rewriting and revalidation of large parts of the sequence. This slows down project delivery, reduces responsiveness to market changes, and limits the scalability of testing activities.

Technically, the challenge lies in managing the increasing volume and interdependence of data within test procedures. Modern test benches integrate numerous sensors, actuators, and control variables that must be synchronised in real time. Ensuring coherence between parameters, thresholds, safety interlocks, and measurement conditions requires deep domain knowledge and careful validation. Manual configuration increases the risk of inconsistencies or omissions that can compromise the reliability of test results.

From a regulatory and compliance perspective, industrial testing environments, particularly in hydrogen and energy-related sectors, are subject to strict safety and quality requirements. Test procedures must be traceable, transparent, and reproducible. However, when sequences are manually defined and modified, maintaining full traceability becomes challenging. The absence of structured, standardised generation methods increases the risk of deviations and makes audits more complex.

Data management is another critical pain point. Test sequences are often defined in formats that are not fully interoperable with digital tools or documentation systems. This limits integration with broader digital transformation initiatives and prevents efficient reuse of test knowledge across projects or teams. As companies seek to adopt more digital and automated workflows, the gap between manual sequence definition and digital execution becomes increasingly problematic.

The challenge also has a strategic dimension. In fast-growing and innovation-driven sectors such as hydrogen and battery technologies, time-to-market is critical. The inability to rapidly configure and validate new test procedures can delay product qualification and industrialisation. Furthermore, the strong dependency on expert knowledge creates bottlenecks and exposes companies to operational risk if key personnel are unavailable.

Overall, the current situation reflects a mismatch between the increasing sophistication of industrial systems and the largely manual methods used to define and manage test procedures. Addressing this challenge is essential to improve efficiency, reduce costs, enhance safety and traceability, and support the digital transformation of industrial testing processes.

Objective 1: 

The first objective of the experiment is to develop and integrate a Digital Intelligent Assistant into the GEMS test bench platform in order to automate the generation of complex industrial test sequences. This objective aims to enable engineers to describe test procedures in natural language or structured documents and automatically convert them into structured, executable sequences compatible with real-time industrial environments.

Objective 2:

The second objective is to ensure safe, transparent, and reliable use of artificial intelligence in industrial testing by implementing a visual sequence viewer and human-in-the-loop validation mechanism. This objective guarantees that all automatically generated sequences can be reviewed, verified, and adjusted by engineers before execution, ensuring compliance with operational and safety constraints.

Objective 3: 

The third objective is to validate and demonstrate the solution in real industrial use cases, particularly in hydrogen and battery testing environments, and to prepare its replication through dissemination within the European Digital Innovation Hub ecosystem. This objective ensures that the developed solution is technically robust, economically relevant, and scalable for small and medium-sized enterprises across Europe.

The experiment is positioned within the advanced manufacturing and energy technology sectors, with a primary focus on hydrogen systems and battery technologies. These sectors are characterised by rapid technological evolution, strict safety requirements, and increasing pressure to accelerate qualification and validation processes. Industrial actors operating in these domains rely heavily on complex laboratory and pre-production test benches to validate components such as fuel cells, electrolyzers, battery cells, and fluidic systems before integration into larger systems or market deployment.
The experiment will be carried out in France, primarily at the premises of Gemesis, where test benches and the GEMS software platform are developed and validated. The operational environment includes laboratory and pilot-scale test systems used for component validation and system qualification. Demonstrations will take place in real testing conditions, where sequences are executed on physical test benches equipped with sensors, actuators, safety interlocks, and data acquisition systems. The experiment therefore operates in a realistic industrial setting rather than a purely simulated environment.
The target users of the solution are test engineers, laboratory operators, automation specialists, and research and development teams working in hydrogen, battery, and related advanced manufacturing sectors. Additional stakeholders include system integrators, industrial SMEs, and research laboratories that require flexible and reliable test automation tools. The Digital Innovation Hub CARA will support engagement with these stakeholders, ensuring alignment with real industrial needs and facilitating broader replication within European manufacturing ecosystems.
Several operational and technical constraints shape the context of the experiment. Hydrogen and battery testing environments are subject to strict safety standards due to high pressures, reactive gases, thermal risks, and electrical hazards. Test procedures must therefore comply with internal safety rules, traceability requirements, and, where applicable, certification frameworks relevant to energy and mobility applications. Any automated generation of test sequences must preserve full transparency and allow human validation before execution.
Integration constraints are also significant. The solution must be compatible with existing test infrastructure, including programmable logic controllers, data acquisition systems, and industrial communication protocols. It must support interoperability with existing software environments and data formats used in industrial testing. Access to data is controlled and limited to operational test parameters and documentation provided within secure environments, ensuring compliance with data protection and confidentiality requirements.
Overall, the experiment takes place in a demanding industrial context where safety, reliability, interoperability, and regulatory compliance are critical. This real-world setting ensures that the developed solution is robust, applicable, and directly relevant to industrial testing challenges faced by European small and medium-sized enterprises.

EXPECTED IMPACT

The expected impact of the experiment is a significant improvement in the efficiency, reliability, and scalability of industrial test sequence design and execution. By integrating a Digital Intelligent Assistant into the GEMS test bench platform, the experiment aims to transform a manual and expertise-dependent process into a structured, assisted, and traceable workflow. Success will be measured through quantifiable improvements in engineering efficiency, reduction of configuration errors, and validated industrial adoption.

From an operational perspective, the primary measurable impact concerns the reduction of test sequence preparation time. Today, complex sequences in hydrogen and battery testing environments may require several days or even weeks of engineering work depending on their complexity. The experiment targets a reduction of at least 50 percent in configuration time through automated sequence generation from natural language or structured documents. This will directly increase productivity and shorten project lead times.

A second measurable impact relates to error reduction and process consistency. Manual sequence creation increases the risk of configuration mistakes, inconsistencies, and logic gaps. By introducing structured generation combined with a visual sequence viewer and human validation, the experiment aims to reduce configuration-related errors and improve repeatability of test execution. Success will be evaluated through qualitative assessments and comparison of detected configuration issues before and after implementation.

For end users, including test engineers and laboratory operators, the solution is expected to simplify interaction with complex test systems. The assistant will lower the barrier to entry for less experienced users while maintaining expert-level control for advanced engineers. This improves labour efficiency and reduces dependency on a small number of highly specialised profiles, strengthening organisational resilience.

From a competitiveness standpoint, the experiment is expected to enhance Gemesis’ capacity to scale its solutions across multiple industrial sectors. Faster configuration and improved reliability will increase customer satisfaction and strengthen the company’s market positioning in hydrogen, battery, and advanced manufacturing applications. Industrial demonstrators and early adopters will provide tangible validation of the solution’s relevance.

In terms of sustainability, the experiment contributes indirectly to environmental performance by optimising testing workflows and reducing unnecessary test repetitions caused by configuration errors. More efficient validation cycles can shorten development time for energy technologies such as hydrogen and batteries, which are critical for the energy transition. Furthermore, the digitalisation of test procedures reduces reliance on paper-based documentation and improves structured data reuse.

The experiment will be considered successful if the defined performance indicators are achieved, including measurable reduction in setup time, validated industrial use cases, positive user feedback, and successful deployment of the solution through the European innovation ecosystem. Together, these impacts demonstrate not only technical feasibility but also economic relevance and long-term sustainability of the developed solution.

MINDIA

EXPERIMENT OVERVIEW

The MINDIA (Manufacturing Intelligent Digital Assistant) experiment focuses on designing, deploying, and validating a multimodal digital assistant tailored for real-time decision support and operator assistance in the plastics manufacturing environment. The primary objective is to bridge the gap between fragmented operational data and real-time shop-floor decision-making by turning scattered experience and static documentation into an accessible, intelligent interface. This solution is built around the WASABI ecosystem and is implemented as an Open Voice OS (OVOS)-based digital assistant, which is deployed through a containerized infrastructure using Docker Compose. To ensure a robust data integration layer, the system utilizes standard industrial communication protocols, specifically Open Platform Communications Unified Architecture (OPC UA) and Message Queuing Telemetry Transport (MQTT), to collect real-time machine and sensor data.
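
As an illustration of this data integration layer, the sketch below shows how real-time machine values published over MQTT could be collected into a small in-memory state that the assistant queries; the broker address, topic hierarchy and payload shape are assumptions for illustration, not the actual Cromic Plastik configuration.

```python
import json
import paho.mqtt.client as mqtt  # standard Python MQTT client

# Latest known value per signal, e.g. {"cromic/line1/injection_pressure": 612.4}
machine_state: dict[str, float] = {}

def on_message(client, userdata, msg):
    """Keep the most recent reading so the assistant can answer questions about it."""
    payload = json.loads(msg.payload)
    machine_state[msg.topic] = float(payload["value"])   # assumed payload shape: {"value": ...}

client = mqtt.Client()                # paho-mqtt 1.x constructor; v2 additionally takes a callback API version
client.on_message = on_message
client.connect("broker.local", 1883)  # assumed broker host and port
client.subscribe("cromic/line1/#")    # assumed topic hierarchy for one production line
client.loop_start()                   # network loop runs in the background while the assistant serves queries
```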

The assistant significantly enhances operator efficiency by connecting to a specialized component called DocuBoT, which enables users to query technical manuals, material specifications, and operational guidelines using natural language. This allows for instant summarization and contextual retrieval of information, ensuring that operators can access critical instructions without interrupting their manual tasks. During the experiment, the system will demonstrate real-time monitoring of production deviations and a streamlined scrap management workflow. Interaction is primarily designed to be hands-free through voice commands, but it also supports visual guidance via Augmented Reality (AR) interfaces on mobile devices or lightweight glasses, such as Xreal, through the COALA application.
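
The exact retrieval pipeline behind DocuBoT is not detailed here; purely as an illustration of the idea, the sketch below indexes manual passages with TF-IDF and returns the one most relevant to an operator question (scikit-learn is used for brevity, and the example passages are invented; an LLM-based pipeline would replace the ranking and add summarization).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative manual snippets; in practice these would come from the indexed documentation.
passages = [
    "If the melt temperature exceeds 240 °C, reduce barrel zone 3 by 5 °C and re-check after 10 minutes.",
    "Colour change procedure: purge the screw with natural material until the melt runs clear.",
    "Scrap from mould changes must be logged with material type and weight before recycling.",
]

vectorizer = TfidfVectorizer().fit(passages)
passage_vectors = vectorizer.transform(passages)

def answer(question: str) -> str:
    """Return the passage most similar to the operator's question."""
    q = vectorizer.transform([question])
    scores = cosine_similarity(q, passage_vectors)[0]
    return passages[scores.argmax()]

print(answer("what do I do when the melt temperature is too high?"))
```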

This effort is a collaborative project involving three specialized partners: Cromic Plastik serves as the industrial coordinator and provides the real-world manufacturing environment for validation; Lider Teknoloji Geliştirme (LTG) acts as the technical developer responsible for software design and system integration; and Eskişehir Osmangazi University Intelligent Factory and Robotics Laboratory (IFARLAB-EDIH) provides scientific guidance and ethical validation. MINDIA is highly relevant to modern manufacturing as it supports the transition to Industry 5.0 and adheres to Trustworthy Artificial Intelligence principles by prioritizing human oversight and sustainability. Ultimately, the validated solution will be packaged and distributed via the WASABI White Label Shop to ensure that the outcomes are market-ready and easily replicable by other Small and Medium-sized Enterprises across Europe.

Before implementing the MINDIA experiment, Cromic operates its manufacturing processes using largely manual and fragmented workflows where production-related information is distributed across multiple systems, physical documents, and informal communication channels. This fragmentation limits real-time visibility on the shop floor and makes timely decision-making difficult because there are no integrated tools to support the operators. Consequently, process deviations or equipment issues are often detected only after manual inspections or routine checks, resulting in slow response times, increased downtime, and a higher likelihood of scrap generation before any corrective actions can be taken.

Operators currently rely heavily on printed manuals or the verbal experience of colleagues to interpret faults, which makes identifying the root cause of an issue time-consuming and dependent on individual expertise. Accessing operational knowledge is inefficient because the necessary information is typically stored in static documents dispersed across different systems, making it difficult to retrieve during active production. As a result, operators must frequently interrupt their manual tasks to search for guidance, which increases their cognitive load and heightens the risk of human error.

From a sustainability perspective, the lack of structured and digital reporting for material usage and scrap events hinders the company’s ability to implement timely corrective measures. Communication with the logistics department regarding mold and color changes is entirely manual, leading to significant delays in scrap collection and recycling coordination. These systemic challenges negatively impact production costs and resource utilization, highlighting a critical need for a digital solution that can transform scattered experience and static documentation into an accessible, intelligent interface.

Objective 1: 

The primary technical objective of the MINDIA experiment is to develop and validate a multimodal Digital Intelligent Assistant (DIA) that integrates real-time machine monitoring, voice-based interaction, and augmented reality (AR) visualization to support manufacturing operations. By leveraging Open Voice OS (OVOS) and Large Language Model (LLM) reasoning, the project aims to turn scattered production data and operational experience into an accessible, intelligent interface that provides operators with hands-free, context-aware guidance. This integration is designed to bridge the gap between theoretical plans and field execution, significantly reducing operator cognitive load while accelerating response times to production deviations by an estimated 30%.

Objective 2: 

The second objective is to enhance sustainability and operational efficiency within the plastics manufacturing sector through optimized resource management and improved process transparency. MINDIA targets measurable improvements in material usage, specifically aiming for a 10% reduction in raw material consumption and a 5% increase in the recovery of recyclable materials through structured scrap reporting and real-time alerts. By enabling the early detection of inefficiencies and fostering human-AI collaboration, the experiment promotes a more sustainable and productive shop-floor environment that maintains full human oversight in line with Industry 5.0 principles.

Objective 3:

The final objective is to ensure the scalability and market-readiness of the developed solution by packaging MINDIA as a reusable, open-source asset for distribution via the WASABI White Label Shop. The experiment validates how a domain-specific manufacturing assistant can be containerized and successfully deployed in real industrial conditions, providing a replicable blueprint for other SMEs to adopt with minimal integration effort. By demonstrating the commercial viability and technological maturity of open-source components within the WASABI ecosystem, the project supports the broader goal of fostering human-centered digital transformation and digital sovereignty across European manufacturing.

The MINDIA experiment is situated within the plastics manufacturing sector, a field that increasingly requires high levels of process awareness and efficient material management to remain competitive. The experiment will be conducted at the industrial facilities of Cromic Plastik in Eskişehir, Turkey, which serves as the primary pilot site. While initial development and functional testing take place in the controlled laboratory environment of IFARLAB-EDIH, the core of the experiment involves a real-world pilot deployment on the production floor. This setting allows the system to be validated under actual operating conditions, focusing on specific production lines where material waste and process deviations have the highest impact.

The target users for this digital assistant are shop-floor operators and production supervisors who require instant, hands-free access to machine data and technical documentation during their daily tasks. Key stakeholders include the MINDIA consortium partners (Cromic Plastik as the industrial coordinator, Lider Teknoloji Geliştirme (LTG) as the technology provider, and IFARLAB-EDIH as the scientific advisor), as well as the broader WASABI consortium, which provides the underlying architectural framework. These stakeholders are collectively invested in proving that AI-driven tools can enhance operator performance while maintaining human-centered control in an industrial environment.

Operating in a real manufacturing environment introduces several critical constraints and requirements that the MINDIA solution must address. From a safety perspective, the system is designed for fully hands-free voice interaction to ensure that operators can receive guidance without diverting their attention from dangerous machinery or interrupting manual tasks. Technical development follows rigorous engineering standards, including IEEE 12207 and AQAP2210, while the AI deployment is governed by the EU AI Act and GDPR to ensure ethical and trustworthy operation. Furthermore, the solution must seamlessly integrate with existing factory systems via OPC-UA and MQTT protocols and operate within the secure, containerized WASABI Docker Compose stack to maintain data integrity and role-based access control.

The pilot is carried out together with our manufacturing partner who is experienced in regulated medical device production environments. The experiment focuses on an assembly process involving wearable devices and sensitive electronic components. It examines how a conversational Digital Intelligent Assistant (DIA) can be introduced as a supportive layer within existing workflows. Particular attention is given to privacy-by-design principles, user acceptance and maintaining a non-intrusive interaction model.

Primary users are assembly workers performing detailed tasks, while stakeholders include technical teams and organisational decision-makers interested in practical approaches to human-centred digitalisation. The exploration seeks to better understand how such assistants are perceived in everyday work contexts and what conditions support meaningful adoption.

EXPECTED IMPACT


ELECTRA

EXPERIMENT OVERVIEW

ELECTRA addresses a common situation in small and medium-sized enterprises (SMEs) in food manufacturing: heavy reliance on manual supervision and legacy systems with limited real-time data access. Key tasks such as counting packaged products, registering production batches, checking stock levels, and monitoring machine status are typically performed manually, leading to fragmented information, delayed reporting, and higher operational costs. Inventory checks often require physical verification in storage (e.g., repeatedly opening fridges to confirm availability), creating energy management issues, and increasing the risk of inefficient stock handling and even expired stock.

Within WASABI, ELECTRA will develop, deploy, and demonstrate the use of a task-oriented Digital Intelligent Assistant (DIA) that combines real-time Closed-Circuit Television (CCTV) video streams from packaging machinery with data from energy meters installed on key machinery operating within the food production facility. The aim is to generate actionable insights through an Artificial Intelligence (AI) conversational interface by automatically monitoring machinery activity, counting and logging packaged items, and detecting visual inconsistencies that may indicate defects or packaging errors. The DIA correlates this with batch data and energy‑consumption patterns, providing important information to workers via a natural‑language conversational interface (e.g., questions on current stock, energy use, or products nearing expiry). In this way, the experiment directly supports inventory tracking, packaging-line supervision, and quality assurance, enhancing efficiency, traceability, and sustainability across HELIOS’s manufacturing processes.

 

How the solution works

ELECTRA adopts a retrofit, low-barrier approach that builds directly on data sources that are often already available in production facilities (CCTV cameras and low-cost energy meters). Instead of introducing new proprietary hardware, ELECTRA leverages these ubiquitous data sources and transforms them into intelligent, actionable insights through a DIA which combines computer vision, energy monitoring, and conversational AI to support operators in day-to-day manufacturing activities, including inventory management, product counting, packaging quality assurance, machinery anomaly detection, and order fulfilment based on available stock.

The architecture of ELECTRA links the food manufacturing machinery, the DIA, and the WASABI integration layer through a unified and modular data flow. CCTV cameras stream video via RTSP, while energy meters provide real-time power and operational data from packaging-line equipment. These data sources are processed by the DIA’s analytics modules, where computer vision algorithms extract production metrics and correlate them with energy performance indicators.
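
A minimal sketch of the video side of this data flow is shown below; the RTSP URL, the region of interest and the brightness-based counting logic are assumptions for illustration, standing in for the tailored vision models that will be developed for the HELIOS packaging line.

```python
import cv2  # OpenCV reads the RTSP stream and performs the basic image processing

cap = cv2.VideoCapture("rtsp://camera.local/stream1")   # assumed camera URL
packaged_count = 0
prev_occupied = False

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[200:400, 300:600]                        # assumed region the packs pass through
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
    occupied = cv2.countNonZero(mask) > 5000             # crude presence check standing in for the real detector
    if occupied and not prev_occupied:                    # a pack just entered the region
        packaged_count += 1
    prev_occupied = occupied

cap.release()
print("packaged items counted:", packaged_count)
```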

The DIA will be integrated with different WASABI components (e.g. OVOS, WISE, and PREVENTION), enabling natural, task-oriented interaction to support workers in the food production environment. ELECTRA’s architecture is deliberately modular, replicable and scalable. Built on open-source frameworks such as OVOS and Docker, open LLMs such as Llama, and standard data exchange formats (JSON, MQTT, REST APIs), it can be adapted to other production environments with minimal integration effort. In support of replicability, the developed assistant will be published through the WASABI White-Label Shop (WWLS) instance, together with documentation and demonstration materials, allowing other SMEs to reuse, extend, or commercialise the solution.
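
To illustrate what such a reusable skill could look like on the OVOS side, a minimal sketch follows. It uses the ovos-workshop skill conventions (class and decorator names should be verified against the OVOS version actually deployed), and the intent file name and stock lookup are hypothetical placeholders for the ELECTRA inventory logic.

```python
from ovos_workshop.skills import OVOSSkill
from ovos_workshop.decorators import intent_handler

class StockQuerySkill(OVOSSkill):
    """Answers spoken questions such as 'how many units of product X are in stock?'."""

    @intent_handler("stock.query.intent")         # hypothetical intent file shipped with the skill
    def handle_stock_query(self, message):
        product = message.data.get("product", "that product")
        quantity = self.lookup_stock(product)      # would call the DIA analytics / REST layer
        self.speak(f"There are {quantity} packaged units of {product} in stock.")

    def lookup_stock(self, product: str) -> int:
        # Placeholder: in ELECTRA this would query the inventory state derived from CCTV counts.
        return 0
```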

 

What will be demonstrated

The experiment is designed to demonstrate the DIA in realistic conditions and show both operational value at HELIOS and replicability for other food-manufacturing SMEs.

First, the ELECTRA solution’s technical components will be developed and integrated into the DIA, leveraging a consistent data flow from energy meters and cameras, to support workers through a natural-language interface for at least four food-manufacturing tasks, such as counting packaged items, monitoring machinery activity, and logging production batches.

Once the development of the ELECTRA solution is finalized, it will be deployed on the operational packaging line at HELIOS, tested in real time, and validated under real production conditions so that usability, reliability, and impact on production efficiency and sustainability can be evaluated. The solution’s outcomes and impact will be monitored and assessed, aligned with the experiment’s key performance indicators.

Finally, the developed DIA will be published via a PrestaShop-based WWLS marketplace instance so it can be offered as a reusable skill for other SMEs. The marketplace instance will include documentation and demonstration material to enable straightforward replication and adoption, including at least one open technical webinar and a short video demonstration. The shop and its core modules will also be evaluated using the evaluation forms provided by WASABI.

 

Who will be involved

ELECTRA is implemented by a three-entity consortium with clearly defined roles:

HELIOS: Coordination and pilot owner

HELIOS is the lead SME and project coordinator, providing a real production environment, with already deployed monitoring hardware, as the pilot site for ELECTRA. HELIOS leads overall coordination, provides business/process requirements for the experiment, leads deployment, ensures operator training on the packaging line, and contributes to evaluation through operator feedback and iterative improvements.

 

Plegma Labs: Technical development and exploitation

Plegma leads the technical work on the DIA and the WASABI integration. Its key activities include designing the system architecture and the interfaces that will bring together WASABI components, analytics, and data integration through CCTV/energy meters, including WWLS integration. Plegma will also implement the DIA’s core functionalities, integrate the DIA with HELIOS data sources via open data exchange formats and REST APIs, provide technical support during deployment, validation, and improvement activities at HELIOS, and lead exploitation planning. Finally, Plegma will deploy and configure the WWLS marketplace instance, distribute the ELECTRA DIA via the WWLS, set up the core functional module for the seller profile, and upload the related OVOS skill to the shop.

 

DIGIAGRIFOOD: Ethics, legal compliance and dissemination

 

DIGIAGRIFOOD EDIH will lead communication activities, referencing this experiment as a regional showcase of trustworthy, human-centric AI adoption. It will also contribute its expertise in digital transformation and sustainability for agri-food SMEs, ensuring that the experiment aligns with ethical AI principles, the AI Act, and GDPR compliance, and it will consult on responsible AI compliance, support data governance, and oversee ethics compliance.

 

Why the experiment is relevant

ELECTRA is directly relevant as it targets the core operational challenges of food-manufacturing SMEs. By turning real-time video and energy-meter data into actionable insights accessible through natural-language interaction, ELECTRA aims to enhance human and AI collaboration in manufacturing, as the developed DIA will reduce manual counting and logging effort, provide continuous digital visibility of inventory levels, assist workers towards evidence-based decision making, improve packaging-line quality assurance, and increase traceability through automated batch logging. Sustainability and resilience are supported by integrating energy monitoring with production information (including reducing unnecessary cold-storage checks), while responsible deployment is ensured through full compliance with EU principles for trustworthy AI and relevant legislation, including the AI Act and GDPR.

Currently, SMEs largely depend on manual supervision and legacy systems with limited real-time data access, despite significant efforts towards improving efficiency and sustainability in the food manufacturing domain. This makes it difficult to optimize production processes and energy usage, since multiple tasks such as product counting, registering batches, and machine monitoring are often manual, resulting in fragmented data, delayed reporting, and higher operational costs. Existing systems for packaging control and inventory management mostly rely on complex high-cost infrastructure, such as dedicated computer vision solutions, barcode/RFID tracking systems, and advanced warehouse management platforms. Such solutions require substantial capital investment and specialized expertise, placing them beyond the practical reach of most food manufacturing SMEs.

Challenge 1: Limited real-time data access in food manufacturing.

SMEs often face inventory management inefficiencies, product batch handling issues, and higher production line costs, as the integration of data is often delayed and/or fragmented.

Challenge 2: Manual activities can lead to inefficiencies and wasted resources.

Existing manual processes adopted by SMEs, such as counting and verifying packaged products, slow production and potentially introduce errors, resulting in overproduction and energy management issues (e.g., opening fridges often to check inventory), and even expired stock.

Challenge 3: Low adoption of intuitive and human-centered digital tools.

Food manufacturing facilities either do not integrate digital tools to aid workers or adopt tools with complex dashboards that discourage engagement and require specialized training.

Challenge 4: Lack of openness and interoperability across digital solutions.

SMEs often adopt proprietary or isolated digital tools, which limits interoperability and knowledge transfer. This fragmentation slows digital transformation and increases scaling costs.

Objective 1: 

Develop and demonstrate a DIA for inventory management, production line efficiency, and packaged product quality assurance in food manufacturing facilities.

A task-oriented, conversational assistant will be developed, integrated with the Closed-Circuit Television (CCTV) system and energy-metering infrastructure at HELIOS Bakery. The solution will automatically count packaged items, monitor machinery activity, log production batches, and provide real-time natural-language insights to workers, and it will be deployed on the operational packaging line at HELIOS with real-time testing and iterative feedback from operators to validate usability, reliability, and sustainability. Through this approach, the assistant will enhance production efficiency, improve packaging-line quality assurance, and reduce manual workload and human errors in stock handling and process supervision.

Objective 2: 

Promote sustainability, resilience, and human-AI collaboration in food manufacturing and ensure transparency and interoperability with open, modular, and replicable technologies.

Integration of energy meters for packaging-line machinery will enable the monitoring and analysis of power consumption, and the assistant will correlate energy data with production throughput, helping operators identify periods of unnecessary consumption or anomalies and supporting optimization of energy usage and insights into equipment efficiency. The solution will also integrate open components from the WASABI ecosystem, such as OVOS, RASA, and open LLMs (e.g., Llama) for task-oriented dialogue, rely on open data exchange formats, and be deployed with Docker, while complying with GDPR and AI Act principles.
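
As a simple illustration of the energy-throughput correlation described above, the sketch below computes energy per packaged item over fixed intervals and flags intervals where the line keeps consuming energy while producing nothing; the interval length, threshold and example values are assumptions for illustration.

```python
def energy_per_item(energy_kwh: list[float], items: list[int]) -> list[float | None]:
    """Energy consumed per packaged item for each interval (None when nothing was produced)."""
    return [e / n if n else None for e, n in zip(energy_kwh, items)]

def idle_consumption_alerts(energy_kwh: list[float], items: list[int], idle_kwh: float = 0.5) -> list[int]:
    """Flag intervals where the line consumed energy but produced nothing (assumed threshold)."""
    return [i for i, (e, n) in enumerate(zip(energy_kwh, items)) if n == 0 and e > idle_kwh]

# Hourly meter readings and CCTV-derived counts (illustrative values only).
energy = [3.2, 3.1, 2.9, 3.0]
counts = [410, 395, 0, 402]
print(energy_per_item(energy, counts))        # kWh per packaged item, interval by interval
print(idle_consumption_alerts(energy, counts))  # -> [2]: energy drawn with no output
```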

Objective 3: 

Distribute the developed DIA via a WASABI marketplace instance by deploying a WWLS instance that hosts the developed DIA as a reusable skill for other SMEs in the food manufacturing sector. The marketplace instance will include documentation and demonstration material, enabling straightforward replication and adoption across the food manufacturing domain.

ELECTRA is carried out in the food manufacturing sector, focusing on the packaging quality assurance and inventory management of a food manufacturing SME. The experiment targets day‑to‑day shop‑floor needs such as inventory tracking, packaging‑line supervision, and quality assurance, by introducing a DIA that combines real‑time CCTV feeds and energy‑meter data from packaging machinery to generate actionable insights through a conversational AI interface.

The experiment will be carried out at HELIOS’s facilities in Spata (Attica), Greece, in real-life food manufacturing conditions. The ELECTRA solution will be deployed on the operational packaging line at HELIOS, with real‑time testing and iterative feedback from operators to validate usability in a real food manufacturing environment. HELIOS provides the pilot site and access to a real production environment with already deployed hardware (IoT sensors, energy meters, CCTV) that will be used as a basis for the DIA integration and validation.

The target users are employees in the facility who interact with packaging and inventory processes, i.e., workers/operators and supervisors who will use the assistant through natural, task‑oriented dialogue (through intuitive voice or text commands) to ask for information (e.g., inventory levels, production needs, energy consumption) and receive real‑time answers without need for technical expertise or manual data entry. Beyond the pilot, the solution is intended for other SMEs in the food manufacturing sector via distribution through a WASABI marketplace instance.

ELECTRA does not introduce any additional safety, regulatory, or technical constraints beyond those already present in the pilot environment. The solution operates purely at software level and builds upon already installed and operational infrastructure, including energy meters and CCTV systems at HELIOS, without requiring new physical installations, electrical modifications, or changes to certified equipment. Therefore, it does not pose any additional safety risks nor does it require new certifications, as it neither replaces nor alters existing certified components but functions as a data-driven analytical and conversational assistance layer. Access to the necessary operational data is already ensured through the established collaboration between HELIOS, as consortium leader, and PLEGMA, with existing data governance and technical pathways in place. Furthermore, integration is limited to the systems already described in the experiment documentation (energy meters and cameras). As such, all relevant constraints regarding safety standards, certification, data access, and system interoperability have already been addressed within the current framework. In terms of data flow, the DIA will integrate with HELIOS data sources and the WASABI ecosystem using open and standardized data exchange formats, i.e., data from cameras and energy meters will be transmitted via open protocols such as RTSP, HTTP, and MQTT, while energy data will be exposed in formats such as JSON and XML through RESTful APIs.

EXPECTED IMPACT

The ELECTRA experiment is expected to deliver measurable impact at HELIOS by introducing a DIA that acts as an interactive co-worker on the production floor and supports inventory tracking, packaging-line supervision, and quality assurance through natural, task-oriented dialogue. The assistant will combine real-time video analytics and real-time energy-metering data to automatically count packaged items, monitor machinery activity, and log production batches, while providing contextual, voice- or text-based feedback to supervisors. Expected operational benefits include reduced manual counting and logging effort; improved real-time visibility and evidence-based decisions, resulting in reduced over-production and, in turn, cost savings; improved packaging-line quality assurance; and higher traceability via automated batch logging, with improved worker support through natural-language interaction. ELECTRA also targets sustainability and environmental benefits through improved energy awareness and more efficient control of energy-intensive processes. A practical and measurable impact will come from improved control of energy-intensive cold-storage facilities. By providing continuous digital visibility of inventory levels, the DIA will eliminate the need for workers to repeatedly open fridge doors simply to verify stock, reducing thermal losses and unnecessary energy consumption.

Beyond the pilot site, ELECTRA is expected to generate technological, economic, and business value for all consortium members while contributing to the wider digital transformation of Europe’s manufacturing SMEs. For DIGIAGRIFOOD EDIH, the experiment reinforces its role as a regional enabler of responsible AI adoption in the agri-food and manufacturing sectors, strengthening advisory and training activities through a concrete demonstration of trustworthy, human-centric AI integration. Outreach and dissemination activities aim to reach over 300 SMEs and stakeholders and will include at least one open technical webinar, and a short video demonstration published on the WWLS. For Plegma Labs, the experiment advances its applied industrial AI activities by extending capabilities toward building conversational, task-oriented AI assistants and opens new commercial opportunities through distribution via the WASABI White Label Shop (WWLS) and an AI-as-a-Service (AIaaS) direction. Success will be demonstrated through real deployment and validation in HELIOS’s operational environment (the solution deployed on the operational packaging line, real-time testing, iterative feedback from operators, and an evaluation including adoption of the DIA and its deployment through the WWLS).

Some of the experiment’s measurable KPIs that will be monitored are the following, with stated baselines and post-experiment targets:

  • KPI 2 Food-manufacturing tasks handled by DIA: Current data (pre-experiment): 0. Expected outcome (post-experiment): ≥ 4.
  • KPI 5 Number of meters integrated in the DIA: Current (pre-experiment): Existing meters but not connected to DIA. Expected outcome (post-experiment): > 3.
  • KPI 8 Number of open components integrated to DIA: Current (pre-experiment): 0. Expected outcome (post-experiment): > 3.
  • KPI 9 Number of IoT devices integrated: Current (pre-experiment): Existing devices not connected to DIA. Expected outcome (post-experiment): > 7.
  • KPI 10 Positive feedback: Current data (pre-experiment): 0. Expected outcome (post-experiment): ≥ 5.
  • KPI 11 Total outreach of SMEs/stakeholders: Current data (pre-experiment): Expected outcome (post-experiment): ≥ 300.
  • KPI 12 Newsletters: Current data (pre-experiment): 0. Expected outcome (post-experiment): 2.
  • KPI 13 Posts on social media: Current data (pre-experiment): Expected outcome (post-experiment): ≥ 4.

TERIYAKI

EXPERIMENT OVERVIEW

The TERIYAKI experiment takes place in a manufacturing SME operating in textile printing, where task coordination, quality assurance and production tracking are currently manual and experience-driven. This leads to inefficiencies, variability in quality control, and limited use of production data for performance monitoring and improvement.

The goal of the experiment is to design, implement and validate a modular Digital Intelligent Assistant (DIA) for manufacturing environments, addressing the above issues. The system will integrate voice-enabled interaction, intelligent task coordination, computer vision-based quality assurance and structured performance data collection into a unified solution. The experiment will demonstrate how such an assistant can be deployed in a real SME production setting using open, interoperable technologies and a containerised architecture.

 

TERIYAKI DIA components & architecture

The TERIYAKI DIA will be composed of four interconnected functional components designed to address the current inefficiencies and limited data visibility within the production environment.

First, a voice interaction component will be the main DIA interface, enabling workers to communicate with the system using natural speech, reducing reliance on manual reporting and tracking. This component will handle speech recognition, intent detection and spoken feedback, supporting hands-free and structured task logging. Furthermore, a computer vision-based quality assurance module will capture product images during manufacturing and run a tailored computer vision model to detect quality defects in near real-time. This capability will aim to reduce inspection variability and strengthen consistency in quality control while maintaining human oversight. Moreover, a task coordination module will manage production orders by organising, sequencing and assigning tasks based on predefined workflows and available resources, supporting more structured coordination across concurrent activities. Finally, an analytics and visualisation component will collect structured production data such as task durations, defect rates and workload distribution, presenting them through an intuitive dashboard to enable improved performance monitoring and data-driven decision-making over time.

Architecturally, the system will be divided into a client layer and a server layer (see Figure 1 below). The client layer will host the user interfaces and voice services, while the server layer contains the task coordination logic, data storage and computer vision processing pipeline. Communication between layers will be handled through an application programming interface (API), enabling modular upgrades and potential replication in other manufacturing environments.
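
To make the client–server contract concrete, a minimal sketch of one server-side endpoint is shown below. FastAPI is used only as an example framework; the route, field names and in-memory storage are illustrative assumptions rather than the final TERIYAKI interface.

```python
from datetime import datetime
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
task_log: list[dict] = []            # stand-in for the server-layer data store

class TaskEvent(BaseModel):
    """Event sent by the client layer when a worker starts or finishes a task by voice."""
    task_id: str
    worker: str                      # e.g. "SPS" or "HP"
    event: str                       # "start" or "finish"

@app.post("/tasks/events")           # illustrative route, not the final TERIYAKI API
def log_task_event(evt: TaskEvent):
    record = {
        "task_id": evt.task_id,
        "worker": evt.worker,
        "event": evt.event,
        "timestamp": datetime.utcnow().isoformat(),
    }
    task_log.append(record)          # the analytics component would later aggregate these records
    return {"status": "logged", "count": len(task_log)}
```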

 

TERIYAKI Operational Scope and Use Case Definition

The following analysis defines the functional boundaries and operational scenarios that will drive the design, development and validation of the TERIYAKI DIA. This structured definition establishes the core system capabilities to be implemented, clarifies the roles and responsibilities of the actors interacting with the system, and defines the concrete operational scenarios that will be validated throughout development and testing.

The tables below therefore act as a formal bridge between system architecture, implementation planning and experimental validation activities.

Table 1. Teriyaki core system functionalities

  • F1 – Voice-Enabled Human–System Interaction: Wake-word activation, speech-to-text, intent recognition and real-time verbal feedback between workers and the DIA.
  • F2 – Automatic CV-Backed Quality Assurance: HD image capture, template alignment, defect detection, heatmap generation, quality scoring, and escalation of borderline cases (illustrated in the sketch after this table).
  • F3 – Intelligent Task Coordination, Optimization & Control: Assisted task scheduling considering worker availability, order priorities and resource constraints, with PM review and control.
  • F4 – Production Performance Analytics & Visualization: Performance logging (defect rates, task durations and more) and dashboard-based visualization for monitoring and decision support.
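
As a rough illustration of the F2 pipeline referenced above, the sketch below compares a captured print against a reference image, derives a difference heatmap and a simple quality score. The file names, threshold and the resize-based alignment shortcut are assumptions for illustration; the production pipeline will use a tailored model and proper template alignment.

```python
import cv2
import numpy as np

def qa_score(reference_path: str, capture_path: str, defect_threshold: int = 40):
    """Compare a captured print with its reference and return (score, defect heatmap)."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    cap = cv2.imread(capture_path, cv2.IMREAD_GRAYSCALE)
    cap = cv2.resize(cap, (ref.shape[1], ref.shape[0]))   # crude stand-in for template alignment
    diff = cv2.absdiff(ref, cap)                           # per-pixel deviation from the reference
    heatmap = cv2.applyColorMap(diff, cv2.COLORMAP_JET)    # visual heatmap for the operator
    defect_ratio = float(np.count_nonzero(diff > defect_threshold)) / diff.size
    score = 1.0 - defect_ratio                             # 1.0 means a perfect match
    return score, heatmap

score, heatmap = qa_score("reference_579_red.png", "capture_579_red.png")  # assumed file names
print(f"quality score: {score:.3f}")                       # borderline scores would be escalated for review
```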

 

The four functionalities listed in the table above define the technological and operational scope of the experiment. Each subsequent use case references one or more of these functionalities to ensure traceability between requirements, development and testing. The table below details the types of actors that interact with the system during the experiment:

Table 2. Teriyaki system actors

  • ACT1 – Screen Printing Specialist (SPS): Operator responsible for executing printing tasks on membrane switches. DIA interactions: start/end printing tasks via voice; listen to real-time defect alerts during printing; continue or adjust printing based on system feedback.
  • ACT2 – Hybrid Professional (HP): Operator responsible for non-printing production tasks and final QA operations. DIA interactions: start/end assigned tasks via voice; execute final QA tasks; receive QA feedback; escalate issues if required.
  • ACT3 – Production Manager (PM): Supervisor responsible for order creation, oversight and decision-making. DIA interactions: create production orders via UI; review scheduling proposals; monitor production performance via PM dashboard; review borderline QA cases.

 

Finally, the use cases presented in Table 3 below will be the basis of the primary interaction scenarios that will be executed and validated during the experiment. They demonstrate how voice interaction, automated quality inspection, task coordination and performance logging operate together within real production workflows.

Table 3. The core Teriyaki use cases

  • UC1 – Voice-Based Task Execution & Logging (actor: HP; functionalities: F1, F4): An HP issues a voice command (e.g., “TERIYAKI start task X” / “TERIYAKI finish task X”) to start or complete a production task; the DIA registers the task and logs its execution (a command-parsing sketch follows this table).
  • UC2 – In-Process QA Automation (actor: SPS; functionalities: F1, F2, F4): An SPS issues a voice command (e.g., “TERIYAKI start printing product 579 red”); the DIA activates the in-process QA inspection pipeline, which issues alerts when a defect is found.
  • UC3 – Final QA Automation (actor: HP; functionalities: F1, F2, F4): An HP issues a voice command (e.g., “TERIYAKI start final QA product 579”); the DIA activates the final QA inspection pipeline, which issues alerts when a defect is found.
  • UC4 – Production Order Setup & Oversight (actor: PM; functionalities: F3, F4): A PM creates a new production order via the PM Dashboard; the DIA generates the corresponding task plan and provides production-related data visualization.
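
As a small illustration of how the voice commands quoted in the use cases above could be mapped to structured events after speech-to-text, the sketch below applies a regular expression to the recognised utterance; the grammar covers only the example commands listed in Table 3 and assumes wake-word detection has already happened upstream.

```python
import re

# Matches the example commands from Table 3, e.g. "TERIYAKI start task 12" or
# "TERIYAKI start printing product 579 red".
COMMAND = re.compile(
    r"teriyaki\s+(?P<action>start|finish)\s+(?P<target>task|printing product|final qa product)\s+(?P<ref>.+)",
    re.IGNORECASE,
)

def parse_command(utterance: str) -> dict | None:
    """Turn a recognised utterance into a structured event, or None if it is not understood."""
    m = COMMAND.match(utterance.strip())
    if not m:
        return None
    return {"action": m["action"].lower(), "target": m["target"].lower(), "ref": m["ref"]}

print(parse_command("TERIYAKI start printing product 579 red"))
# -> {'action': 'start', 'target': 'printing product', 'ref': '579 red'}
```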

TERIYAKI Implementation, Testing and Evaluation Phases

The functional scope, actors and operational use cases, defined above, form the basis for the evaluation and validation framework of the experiment. These elements will be further refined into structured user stories guiding development activities and internal verification procedures. Moreover, a set of System Integration Tests (SITs) and User Acceptance Tests (UATs) will be defined for the different validation phases.

During the first implementation phase (M2-M6), the user stories will drive internal testing cycles to ensure traceability between defined functionalities and technical integration. The SITs will be conducted to validate the first fully integrated version of the DIA, marked as a project milestone (MS2) in Month 6. At this stage, the interaction layer, computer vision module, task coordination logic and analytics component will be deployed together for the first time within the factory environment. Following this milestone, the period from Month 7 to Month 10 will focus on stabilising integrations across modules, enhancing the CV algorithm, and improving overall system robustness. During the same period, the Web3-related capabilities, including the NFT module, will be integrated alongside the technical preparation for packaging and publication within the WASABI marketplace environment.

The core TERIYAKI evaluation will take place during Months 10–12, encompassing the execution of the UATs and the measurement of defined KPIs following the completion of training activities for the involved production personnel. This phase will assess usability, operational compatibility, system stability and overall performance under real production conditions, ensuring that the TERIYAKI DIA operates effectively within everyday workflows. During the same period, WASABI marketplace evaluation activities will be conducted to validate the assistant’s containerised packaging, interoperability compliance and Web3-enabled components prior to formal publication.

Demonstration activities to the WASABI consortium will be coordinated according to the maturity level reached at each point in the timeline. Depending on the phase of the project, demonstrations may range from technical integration walkthroughs and backend processing showcases to application interface demonstrations and live operational scenarios within the production environment.

The experiment addresses several interrelated operational and technical challenges within a small manufacturing environment where production coordination and quality assurance processes are largely experience-driven and manually executed. Production orders are managed through human supervision, while visual inspections depend heavily on skilled personnel. This creates bottlenecks, variability in inspection consistency and limited structured data for performance analysis. The challenge is not only technological but also organisational, as any introduced system must integrate into existing workflows without disrupting productivity.

From an operational perspective, task coordination across multiple concurrent orders requires continuous managerial oversight. In environments with limited staff, this can lead to suboptimal task sequencing and idle periods between production steps. Introducing a digital assistant must therefore balance automation with human control, ensuring that the system supports rather than overrides managerial decision-making.

Technically, the reliability of voice interaction in an industrial environment presents a significant challenge. Background noise from printers and machinery, short or ambiguous commands, worker accents and potential multi-device interference may reduce speech recognition accuracy. Ensuring robust wake-word activation, reliable intent detection and effective clarification mechanisms is essential to avoid operational disruption caused by misinterpreted commands.

Similarly, computer vision-based quality assurance introduces its own complexity. Variability in lighting conditions, camera positioning and image alignment can directly affect defect detection performance. The system must ensure stable hardware setup, controlled lighting, calibration procedures and threshold tuning to minimise false positives and false negatives while maintaining practical inspection speed.

Deployment within the factory environment presents additional constraints. The introduction of tablets, cameras and embedded computing equipment must not interfere with established production flows or workspace ergonomics. Hardware positioning and mounting structures must therefore be carefully designed to prevent operational disturbance.

Finally, organisational adoption and user trust represent critical challenges. Workers and managers must perceive the system as supportive rather than intrusive. Ensuring intuitive interaction, gradual integration and adequate training is essential to achieving sustainable adoption and long-term impact.

Objective 1: 

Develop and integrate a fully operational Digital Intelligent Assistant that combines voice interaction, task coordination, computer vision-based quality assurance and analytics into a unified system, ensuring functional coherence and technical correctness across all modules.

Objective 2: 

Evaluate and validate the Digital Intelligent Assistant within the real production environment of the manufacturing SME, executing pilot scenarios, measuring performance against defined indicators and assessing its operational feasibility and user acceptance.

Objective 3: 

Prepare, publish and validate the Digital Intelligent Assistant within the WASABI marketplace ecosystem by packaging the solution according to marketplace requirements, integrating Web3-enabled capabilities and completing the necessary interoperability and compliance evaluations.

The experiment is positioned within the industrial electronics manufacturing sector, specifically focusing on membrane switch production. The primary goal is to integrate and test the TERIYAKI Digital Intelligent Assistant directly within real manufacturing conditions at the SKOUPAS production premises in Greece. The validation will take place under normal operating conditions, including screen printing, assembly and quality control, ensuring that the solution is assessed in an authentic industrial environment rather than a laboratory or simulated setting.

The main users involved in the experiment include the screen printing specialist, responsible for operating the printing process and interacting with the quality assurance system during production; the Production Manager, who oversees workflow coordination, scheduling, and prioritization across multiple production orders; and a hybrid professional role combining technical understanding with operational responsibilities, acting as a bridge between production staff and the digital solution deployment. These users will interact with the system through workflow coordination tools, computer-vision-assisted quality inspection, and voice-enabled task support.

Relevant stakeholders include company management evaluating operational efficiency improvements, technical partners responsible for solution development and integration, digital innovation support organizations facilitating the experiment, and potentially end customers interested in improved product consistency and traceability. These stakeholders contribute to defining requirements, validating outcomes, and assessing the broader applicability of the solution within manufacturing environments.

Several operational and technical constraints must be carefully addressed during the experiment. The computer-vision-supported quality assurance system operates in-process during screen printing and therefore requires high sensitivity and stability in image acquisition. Multiple variables may affect consistency and defect detection accuracy, including lighting conditions, camera alignment, surface reflections, vibration, print variability, and environmental factors. Ensuring reliable defect identification under these dynamic production conditions represents a critical technical challenge. In parallel, the voice-enabled interaction must function within a real factory environment characterized by background noise from printers and other machinery. Potential issues such as wake-word false positives or false negatives, worker accents, short command structures, and possible interference between multiple devices must be considered to maintain usability and operator trust. These constraints require robust calibration and careful system integration to ensure that TERIYAKI performs reliably without disrupting production continuity or operator workflow.

EXPECTED IMPACT

The TERIYAKI experiment is expected to generate measurable operational improvements within SKOUPAS’ membrane switch manufacturing environment by enhancing task coordination, CV-based quality assurance, and workflow visibility under real production conditions. The primary impact will be demonstrated through improvements in production efficiency, reduction of waste, and enhanced decision-making support for production staff and management.

Operationally, the experiment aims to reduce average order lead time by approximately 18% compared to the current baseline. This reduction will reflect improved task prioritization, reduced waiting time between process steps, and faster and more reliable quality validation cycles. Scrap rate, currently averaging approximately 33% in critical stages, is expected to decrease to 25% through the implementation of computer-vision-assisted quality checks and earlier defect identification during screen printing. Additionally, worker idle time and scheduling time are both targeted for a reduction of approximately 20%, reflecting improved coordination and reduced dependency on centralized manual task allocation.
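
For reference, the targeted scrap-rate change from roughly 33% to 25% corresponds to a relative reduction of about 24%, as the short calculation below makes explicit (the figures are the targets stated above; the snippet is purely illustrative).

# Illustrative KPI arithmetic based on the targets stated above.
baseline_scrap, target_scrap = 0.33, 0.25
relative_reduction = (baseline_scrap - target_scrap) / baseline_scrap
print(f"Relative scrap reduction: {relative_reduction:.0%}")  # about 24%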

For end users within the factory environment — namely the screen printing specialist, Production Manager, and hybrid professional — the expected benefits include clearer task visibility, more structured workflow guidance, faster identification of print defects, and reduced need for repeated coordination exchanges. The Production Manager is expected to experience reduced cognitive load through improved workflow transparency and structured task support. Operators should benefit from more consistent inspection support and improved responsiveness during high-production periods.

For broader stakeholders, including company management and digital innovation partners, success will be measured through quantifiable improvements in efficiency and consistency, as well as validation of the Digital Intelligent Assistant in a real industrial setting. Demonstrating successful integration of voice interaction and in-process computer vision under real manufacturing constraints will strengthen scalability potential across similar small and medium-sized manufacturing environments.

From a sustainability perspective, scrap reduction directly contributes to lower material waste, reduced energy consumption per accepted product, and decreased rework cycles. Earlier defect detection limits value-added processing on defective units, minimizing unnecessary use of inks, substrates, laminates, and labour. Reduced idle time and improved scheduling efficiency also contribute indirectly to better resource utilization and optimized energy use across production operations.

WORKWELL

EXPERIMENT OVERVIEW

The experiment focuses on improving the wellbeing and safety of workers in high-precision manufacturing sectors, such as medical devices, wearables and electronics. Workers in these environments perform detailed assembly tasks that demand focus, accuracy and physical endurance.

We develop and validate the WorkWell Assistant, a conversational Digital Intelligent Assistant (DIA) designed to support workers directly on the shop floor. The system follows a modular approach that combines voice-based interaction with visual perception capabilities, enabling touch-free communication, context-aware guidance and accessible feedback without interrupting ongoing work processes.

The experiment brings together expertise in artificial intelligence, edge computing and medical device manufacturing to evaluate the assistant under real-world conditions. The goal is to demonstrate technical feasibility and evaluate user experience, interaction quality and privacy-aware deployment in real manufacturing environments.

High-precision assembly environments require sustained concentration, fine motor control, and prolonged static postures. Workers may experience physical strain, fatigue or reduced ergonomic comfort during repetitive tasks while still having to meet strict quality requirements. Supporting worker wellbeing without disrupting established workflows remains a significant challenge.

Currently, finding the right support tools for these specific environments is difficult for many SMEs. Traditional monitoring systems are often costly, technically complex or difficult to integrate into agile SME environments, while standard training materials and safety instructions tend to be static and provide no contextual or real-time support during work. Furthermore, generic voice assistants often rely on cloud processing, which raises concerns regarding latency, reliability and data privacy. 

As a result, there is a lack of accessible, privacy-first technologies that align with operational realities and support worker wellbeing without compromising established processes or data protection expectations.

Objective 1: Human-centred multi-channel worker support

The project aims to explore how a conversational digital assistant can support workers’ well-being and situational awareness during precision assembly tasks. The assistant combines voice-based interaction with selected vision-supported features to enable touch-free use, accessible feedback and context-aware guidance. It is designed to integrate naturally into existing workflows without interrupting the working process. The system will follow a modular architecture that allows flexible configuration and future extension, while enabling adaptation to different workplace needs without changing the core interaction experience.

Objective 2: Scalability and long-term sustainability

This objective focuses on preparing the WorkWell OVOS Skill for broader adoption and distribution through the WASABI ecosystem. It will be packaged as a modular deployment so that it can be integrated and adapted by other European SMEs according to their specific needs. The approach supports long-term sustainability by providing a structured framework that allows future extensions without requiring fundamental redesign.
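
As an indication of what such a modular deployment could look like, the sketch below shows a minimal voice-skill skeleton assuming the OpenVoiceOS (OVOS) skill API; the class name, intent files and dialog names are hypothetical and do not describe the actual WorkWell implementation.

# Minimal sketch of a hypothetical WorkWell-style OVOS skill.
# Assumes the ovos-workshop skill API; all names are illustrative only.
from ovos_workshop.skills import OVOSSkill
from ovos_workshop.decorators import intent_handler


class WorkWellSkill(OVOSSkill):
    """Illustrative skill offering hands-free ergonomic guidance."""

    @intent_handler("ergonomic.tip.intent")
    def handle_ergonomic_tip(self, message):
        # Wording lives in dialog files, keeping the skill itself modular
        # and easy to adapt or translate for other SMEs.
        self.speak_dialog("ergonomic.tip")

    @intent_handler("next.step.intent")
    def handle_next_step(self, message):
        # A production version would query local workflow state here;
        # processing stays on the edge device for privacy reasons.
        self.speak_dialog("next.step")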

Objective 3: Trustworthy and privacy-first AI deployment

The project places strong emphasis on privacy, transparency and ethical AI practices. The assistant is designed to operate primarily on local edge devices, reducing dependency on cloud processing and supporting GDPR-aligned data handling. This objective focuses on demonstrating that conversational and multi-channel AI systems can be implemented in a way that respects user trust while remaining technically practical.

The experiment takes place in the context of high-precision manufacturing. These environments require consistent procedures, careful handling of materials, and adherence to strict quality and safety requirements. Workers often perform repetitive or fine-motor tasks that demand sustained concentration and static postures. In many cases, the use of protective equipment or specialised work attire further limits conventional interaction methods such as touch-based interfaces, reinforcing the need for accessible and non-intrusive interaction modalities.

The pilot is carried out together with our manufacturing partner who is experienced in regulated medical device production environments. The experiment focuses on an assembly process involving wearable devices and sensitive electronic components. It examines how a conversational Digital Intelligent Assistant (DIA) can be introduced as a supportive layer within existing workflows. Particular attention is given to privacy-by-design principles, user acceptance and maintaining a non-intrusive interaction model.

Primary users are assembly workers performing detailed tasks, while stakeholders include technical teams and organisational decision-makers interested in practical approaches to human-centred digitalisation. The exploration seeks to better understand how such assistants are perceived in everyday work contexts and what conditions support meaningful adoption.

EXPECTED IMPACT

The most important impact we aim to achieve is to improve the wellbeing and daily working experience of people involved in high-precision manufacturing. The experiment focuses on ergonomic support by enabling hands-free access to guidance and interaction during demanding assembly tasks. The assistant is intended to contribute to a more supportive working environment by encouraging ergonomic awareness, making interaction intuitive and allowing workers to stay engaged with their tasks while maintaining comfort.

Success will be assessed through a combination of human-centred and technical indicators. On the human-centred side, evaluation focuses on worker acceptance, perceived usefulness and overall satisfaction. From a technical perspective, we examine practical indicators such as stable interaction behaviour, integration within the modular architecture and feasibility of deployment into operational workflows.

For organisations, the expected benefit is practical insight into how privacy-aware, edge-based assistants can be introduced in ways that encourage worker acceptance without requiring major infrastructure changes. The WorkWell OVOS skill provides a modular foundation that supports continued development and allows for adaptation to specific operational needs. Beyond the project itself, the work aligns with broader Industry 5.0 developments by exploring human-centred and privacy-conscious digitalisation approaches with potential relevance for future implementations.