GENius PM

EXPERIMENT OVERVIEW

The GENius PM experiment focuses on the development and deployment of a Digital Intelligent Assistant (DIA) designed to enhance the efficiency of maintenance operations in manufacturing environments. The solution integrates Generative AI with an existing AIoT (Artificial Intelligence of Things) platform, utilizing the OpenVoiceOS (OVOS) framework and containerization technologies to create a responsive, voice-enabled interface. The primary goal is to increase the value of predictive maintenance outputs by making them accessible to shop floor workers through natural language interaction.

The experiment will demonstrate a modular architecture featuring three core skills: a predictive maintenance skill for detecting anomalies, a voice-driven feedback skill for capturing worker insights to improve AI models, and a troubleshooting skill that uses Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs) to provide step-by-step repair instructions. The project involves The Data Cooks as the technology developer, Cefriel as the Digital Innovation Hub for validation, and BEKO Europe as the industrial partner providing the validation environment. This experiment is highly relevant because it addresses the disconnect between complex AIoT systems and the practical reality of the shop floor, ensuring that technical knowledge is turned into actionable guidance for operators.
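The three-skill modular architecture can be illustrated with a minimal routing sketch in plain Python. The class names, keywords, and keyword-matching heuristic below are purely illustrative assumptions, not the project's actual OVOS implementation:

```python
# Illustrative sketch: route an utterance to one of the three DIA skills.
# All names and keywords are assumptions for demonstration purposes.

class Skill:
    name = "base"
    keywords: tuple = ()

    def matches(self, utterance: str) -> bool:
        text = utterance.lower()
        return any(kw in text for kw in self.keywords)

class PredictiveMaintenanceSkill(Skill):
    name = "predictive_maintenance"
    keywords = ("anomaly", "vibration", "sensor")

class FeedbackSkill(Skill):
    name = "feedback"
    keywords = ("report", "feedback", "confirm")

class TroubleshootingSkill(Skill):
    name = "troubleshooting"
    keywords = ("how do i", "repair", "fix")

SKILLS = [PredictiveMaintenanceSkill(), FeedbackSkill(), TroubleshootingSkill()]

def route(utterance: str) -> str:
    """Return the name of the first skill whose keywords match."""
    for skill in SKILLS:
        if skill.matches(utterance):
            return skill.name
    return "fallback"
```

In a real OVOS deployment, intent parsing would replace this keyword heuristic, but the modular dispatch principle is the same.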

The experiment addresses critical operational and technical challenges in the current maintenance management landscape, specifically the “disconnect” between field operations and digital systems. Currently, maintenance personnel rely on Human-Machine Interfaces (HMIs) that display high volumes of notifications, which are often ignored due to information overload. There is a lack of voice-enabled interfaces, forcing workers to rely on screens when hands-free operation is needed. Furthermore, reporting is often delayed until workers return to an office, leading to significant data loss and reduced accuracy in predictive maintenance insights.

Another major pain point is the ineffective integration of maintenance documentation with AI systems. Current solutions cannot dynamically access factory-specific knowledge, such as Original Equipment Manufacturer (OEM) handbooks, standard operating procedures, and maintenance logs. Text-based interfaces are inadequate for the immediate, contextual needs of shop floor conditions. Consequently, valuable technical knowledge remains inaccessible to non-technical operators during critical maintenance tasks, limiting the adoption and effectiveness of advanced manufacturing assistance solutions.

Objective 1: 

The primary objective is to enhance maintenance operations by transforming complex technical data into actionable guidance. This involves integrating voice-enabled AI with simulated real-time sensor data and documentation using the OpenVoiceOS framework. By doing so, the experiment aims to provide manufacturers with intelligent digital assistants that streamline maintenance workflows and improve decision-making on the shop floor.

Objective 2:

The experiment seeks to significantly improve accessibility for non-technical operators. By developing advanced AI capabilities that are accessible via intuitive voice and text interfaces, the project aims to lower the barrier to entry for using sophisticated predictive maintenance tools. This ensures that operators can interact naturally with the system without needing specialized data science expertise.

Objective 3: 

A further objective is to establish a continuous learning cycle through a RAG-LLM architecture. The system is designed to incorporate operator feedback loops, allowing the AI models to refine their accuracy and relevance over time based on human input. This objective creates a scalable solution that adapts to specific manufacturing environments and improves its troubleshooting capabilities through actual usage.

 

The experiment targets the manufacturing sector, specifically focusing on the home appliances industry and broader maintenance management applications. The use case context involves the “G45 cavity production line” at the BEKO Europe facility in Cassinetta, Italy. This setting serves as the industrial validation environment where the system will be tested against the rigors of discrete manufacturing processes.

The operational environment for this specific experiment utilizes a simulation-based approach using historical production and maintenance data provided by BEKO Europe. The target users are maintenance managers, engineers, and shop floor operators who require immediate assistance with equipment anomalies. Relevant constraints include the need for strict adherence to data privacy regulations (GDPR and AI Act) regarding voice data, and the technical requirement to integrate with existing legacy documentation and machine-extracted data logs.

EXPECTED IMPACT

The experiment is expected to deliver significant operational efficiency benefits by providing intelligent, context-aware assistance that reduces the time required for maintenance tasks. Success will be determined by validating the system’s ability to provide accurate, voice-enabled guidance and its acceptance by users. Expected benefits for end-users include a reduction in unplanned downtime, improved maintenance scheduling, and a decrease in wasted materials and energy consumption, leading to overall cost savings.

To measure success, the project will track several Key Performance Indicators (KPIs). The experiment aims for a Voice Command Recognition Accuracy of at least 75% and a Contextual Accuracy of suggestions of at least 80% in user-rated tests. Usability will be measured using the System Usability Scale (SUS), targeting a score of 75 or higher. While real-time operational metrics like Mean Time To Repair (MTTR) are difficult to measure in a simulated environment, the experiment projects a 3-5% reduction in MTTR and a similar improvement in First-Time Fix Rates (FTFR) during the validation phase.
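The SUS target of 75 follows the standard SUS scoring rule: each odd-numbered item contributes (score - 1), each even-numbered item contributes (5 - score), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses, ordered item 1..10."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, a uniformly neutral response sheet (all 3s) yields exactly 50, the midpoint of the scale.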

Skite-mAIn

Company and Team

OVERVIEW

The Skite-mAIn project has officially launched, designed to revolutionise remote assistance in the maintenance of complex machinery, from automotive to aerospace.
Thanks to the collaboration between Nextage and InRebus, Skite™ – InRebus’ flagship diagnostic platform – will be enhanced with multilingual, hands-free, real-time conversational AI technologies developed by Nextage and delivered through the software architecture provided by the European consortium WASABI.

EXPERIMENT

MAIN OBJECTIVES:

  • Improve the remote assistance experience with artificial intelligence tools
  • Implement natural, multilingual voice interaction
  • Test the solution on real critical infrastructures thanks to its integration into the Skite™ platform

 

ROLES IN THE PROJECT:

  • Nextage srl: development of conversational AI and communication activities
  • InRebus Technologies: integration of the solution on Skite™, definition and monitoring of performance indicators, testing on critical infrastructures, business support
  • Neural: dissemination and commercialisation support

Wallabi

EXPERIMENT OVERVIEW

This experiment focuses on integrating OVOS (Open Voice OS) as a Digital Assistance solution into the quality control process of Wall-AI battery manufacturing. Wall-AI is an advanced energy management device designed to improve how solar energy is stored and used in self-consumption photovoltaic systems, helping users reduce electricity costs and increase energy efficiency.

At the core of Wall-AI is the CAVE battery, which is built using recycled electric vehicle battery modules. After careful inspection and testing, these modules are reconditioned and assembled into new battery packs. This approach supports circular economy principles by reducing electronic waste and lowering the environmental footprint of renewable energy systems.

The manufacturing process takes place in Tenerife and is organized into two main stages: battery disassembly and module reassembly. During disassembly, electric vehicle battery modules are extracted, tested, and classified according to their condition and remaining capacity. In the assembly stage, selected modules are interconnected, integrated with control electronics, and prepared for final validation.

The experiment will introduce OVOS to digitally support and enhance the quality control workflow. The system will guide technicians through inspection procedures, assist in data collection, and help detect performance deviations or potential safety issues. By digitalizing and structuring the validation process, the solution aims to improve consistency, traceability, and efficiency.
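As an illustration of how such a digitally guided validation step might look, the sketch below walks a technician through tolerance checks and flags deviations. The parameter names and tolerance bands are assumptions for illustration, not EAVE specifications:

```python
# Illustrative sketch of a guided inspection step: the assistant reads out
# a check, records the measured value, and flags deviations.
# Parameter names and tolerance bands are assumed, not real specs.

INSPECTION_PLAN = [
    {"step": "module voltage", "unit": "V", "min": 3.0, "max": 4.2},
    {"step": "remaining capacity", "unit": "%", "min": 70.0, "max": 100.0},
]

def evaluate(step: dict, value: float) -> dict:
    """Return a structured record for one inspection measurement."""
    ok = step["min"] <= value <= step["max"]
    return {"step": step["step"], "value": value, "unit": step["unit"], "pass": ok}

def run_inspection(measurements):
    """Evaluate measured values against the plan, in order."""
    return [evaluate(s, v) for s, v in zip(INSPECTION_PLAN, measurements)]
```

Recording each check as a structured record like this is what makes downstream traceability and batch comparison possible.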

This experiment is relevant because ensuring the safety and reliability of second-life batteries is essential for large-scale deployment in renewable energy systems. By combining battery reuse with intelligent digital support, the project contributes to more sustainable, reliable, and cost-effective solar energy solutions.

The experiment addresses several operational and technical challenges related to the quality control of second-life batteries used in the Wall-AI device. The manufacturing process involves the disassembly of electric vehicle batteries and the reassembly of selected modules into new CAVE battery packs. While this approach supports circular economy and sustainability goals, it introduces complexity in ensuring consistent quality, safety, and performance.

One of the main challenges is the lack of full digital traceability throughout the disassembly and assembly stages. Currently, information related to battery module testing, classification, and integration may be recorded manually. This can limit the ability to track each module’s history, monitor its condition over time, and ensure complete transparency across the production workflow.

Another important issue is the dependence on manual data entry and subjective evaluations during inspection and classification. Technicians must assess parameters such as voltage behavior, remaining capacity, and physical condition. While their expertise is critical, manual processes can introduce variability and inconsistencies between operators.

There is also limited real-time visibility of quality metrics and test results across the manufacturing line. Without centralized and structured digital monitoring, it becomes more difficult to detect anomalies quickly or compare performance data between batches. This reduces the ability to react proactively to emerging issues and may delay corrective actions.

Finally, identifying early trends or deviations that could affect long-term battery performance remains a challenge. Reused battery modules may show gradual performance changes due to aging or prior usage conditions. Without systematic data analysis and structured monitoring, subtle patterns may go unnoticed until they affect system reliability.
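One simple way to surface such gradual deviations is a trailing-window z-score check, sketched below. The window size and threshold are illustrative defaults, not the experiment's actual method:

```python
import statistics

def deviation_alerts(series, window=5, threshold=2.0):
    """Flag indices where a value deviates from the trailing-window mean
    by more than `threshold` standard deviations. A minimal sketch of
    structured monitoring; window and threshold are illustrative."""
    alerts = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past)
        if stdev and abs(series[i] - mean) > threshold * stdev:
            alerts.append(i)
    return alerts
```

Applied to, say, per-cycle module voltage readings, a sudden drop outside the recent band would be flagged for technician review before it affects a shipped pack.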

The experiment therefore focuses on improving digitalization, standardization, and data-driven quality control within the Wall-AI manufacturing process.

Objective 1: 

Enhancement of product quality:

Integrating the Digital Assistant (DA) into the testing process enables precise, consistent, and automated quality control of each Wall-AI battery. This integration ensures standardized testing procedures, minimizes variability, and improves traceability. As a result, product reliability and long-term performance are strengthened, enhancing customer satisfaction and reinforcing EAVE’s reputation as a provider of high-quality, sustainable energy solutions.

Objective 2:

Competitiveness and cost reduction:

The incorporation of digital assistance into the testing workflow increases operational efficiency by providing operators with clear, standardized guidance and automated data management. This reduces process time, minimizes errors and rework, and optimizes resource utilization. Consequently, production costs are lowered, and EAVE’s competitive position in the energy storage market is reinforced.

Objective 3: 

Labour security and process effectiveness:

Although operators remain responsible for all physical testing tasks, the Digital Assistant supports them by delivering step-by-step guidance, real-time instructions, and consistent enforcement of safety protocols. This support enhances workplace safety, reduces the likelihood of human error, and improves overall process reliability and effectiveness.

 

The experiment is positioned within the renewable energy sector, specifically in the field of solar energy integration and second-life battery reuse. It focuses on the application of OVOS to support the quality control of Wall-AI battery manufacturing. Wall-AI is an advanced energy management device designed to optimize energy storage and consumption in self-consumption photovoltaic systems, helping users reduce electricity costs and improve system efficiency.

The experiment will be carried out at the Institute of Technology and Renewable Energy (ITER) in Tenerife, Spain. It will take place in a real industrial production environment where Wall-AI batteries are disassembled, tested, reassembled, and validated. The Digital Assistance solution will therefore be tested under operational manufacturing conditions rather than in a laboratory-only setting.

The main users involved are battery manufacturing quality control operators. Key stakeholders include renewable energy system operators and future end users of Wall-AI systems.

The implementation of the Digital Assistant must comply with data protection regulations, as limited operational and user interaction data may be processed. The system is designed to comply with the General Data Protection Regulation (GDPR) by minimizing personal data collection and focusing primarily on process-related information.

EXPECTED IMPACT

The primary technological impact involves applying DA to optimise product testing and production processes, leading to reduced waste and testing times, lower energy consumption, and enhanced product quality. Additionally, WASABI aims to equip the workforce to efficiently manage Wall-AI production using AI technologies, fostering human-AI collaboration.

The experiment’s emphasis on reducing energy consumption and improving production efficiency can lead to substantial economic impact through cost savings in manufacturing. The Wall-AI device also provides potential savings for end-users by lowering electricity bills. By integrating advanced technologies and sustainability practices, the experiment can boost the competitive advantage of EAVE, enabling them to stand out in the market.

The commitment to sustainability and technological innovation can enhance the brand reputation of companies involved in the EAVE and WASABI projects, leading to increased commercial impact with customer loyalty. Furthermore, the success of the Wall-AI device and the associated production improvements can facilitate expansion into new geographic markets.

TiConAI

EXPERIMENT OVERVIEW

The experiment titled TiConAI (AI-Powered Conversational Assistant with RAG Integration for Timber Operations) is designed to digitize and optimize knowledge management and operational workflows within the timber trade industry. The project is executed by SWMS Consulting (serving as the technology provider). To validate the solution in a real-world industrial setting, Holzhandel Vogt provides the operational test environment and serves strictly as the use-case provider (representing the manufacturing/trade SME sector), without acting as an officially funded partner of the experiment.

The primary objective of TiConAI is to deploy a Digital Intelligent Assistant (DIA) that acts as a central interface for employees to access complex technical information and real-time operational data. By integrating advanced Artificial Intelligence technologies into the daily workflow, the experiment aims to reduce the dependency on senior experts, accelerate the onboarding of new employees, and significantly reduce the time spent answering regularly recurring internal and operational inquiries.

Technical Solution and Functionality

The core of the solution is a voice-based assistant built upon the Open Voice OS (OVOS) framework. This ensures a modular, privacy-preserving, and customizable architecture. The intelligence of the system relies on a hybrid approach combining two distinct technologies:

  1. Retrieval-Augmented Generation (RAG): The system processes unstructured data sources, such as technical datasheets, supplier catalogs, and internal guidelines (mostly in PDF format). When a user asks a complex technical question (e.g., regarding wood properties or installation guidelines), the system retrieves the relevant document segments and uses a Large Language Model (LLM) to generate a precise, natural language answer based solely on the retrieved context.
  2. Agent-Based API Integration: For real-time data, the DIA is equipped with an “Agent” capability. This allows the system to recognize when a query requires live data from the SME’s internal systems, such as the Enterprise Resource Planning (ERP) or Warehouse Management System (WMS). The assistant can autonomously query these databases via Application Programming Interfaces (APIs) to fetch structured data like stock levels or order statuses.
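The hybrid dispatch described above can be sketched as a small router that sends live-data queries to a (stubbed) ERP lookup and everything else to document retrieval. The keyword heuristic, stub data, and snippet lookup are illustrative stand-ins for the real LLM-based agent and RAG pipeline:

```python
# Hedged sketch of the hybrid dispatch: queries needing live data go to a
# stubbed ERP lookup, everything else to document retrieval. The stubs and
# the keyword heuristic are illustrative only.

ERP_STOCK = {"A-1001": 42, "A-2002": 0}        # stand-in for an ERP API
DOCS = {
    "outdoor use": "Larch decking boards are pressure-treated for outdoor use.",
}

LIVE_DATA_KEYWORDS = ("stock", "order", "commission")

def answer(query: str) -> str:
    q = query.lower()
    if any(kw in q for kw in LIVE_DATA_KEYWORDS):
        # Agent path: extract the article code and query the "ERP".
        for code, qty in ERP_STOCK.items():
            if code.lower() in q:
                return f"{code}: {qty} units in stock"
        return "Article not found in ERP"
    # RAG path: return the best-matching document snippet.
    for key, snippet in DOCS.items():
        if key in q:
            return snippet
    return "No matching document found"
```

In the real system, the routing decision would be made by the LLM agent and the RAG path would retrieve from a vector index of the ingested PDFs rather than a keyword dictionary.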

Demonstration Scenarios and Use Cases

The experiment will demonstrate the DIA’s capability to handle specific, high-value tasks that were previously time-consuming or required manual lookup. Among the key functionalities to be demonstrated are real-time stock availability checks, where employees can verbally ask the DIA via a radio if a specific article is in stock. The system then queries the ERP system and provides an immediate verbal response, eliminating the need to return to a desktop terminal. Furthermore, the assistant handles order status and tracking by retrieving current order statuses and identifying the specific employee who packed a commission, which facilitates faster internal communication and accountability. To further reduce search times in the warehouse, the DIA guides employees directly to the exact location of ready-to-ship packed commissions. Finally, the system enables technical knowledge retrieval by answering specific product questions, such as whether a certain timber is treated for outdoor use, by synthesizing information directly from uploaded manufacturer documents.

Involved Parties and Roles

Regarding the involved parties and their respective roles, SWMS Consulting acts as the sole project executor and technology provider. SWMS is exclusively responsible for the execution of the experiment, which includes the complete software architecture, the development of the OVOS skills and the integration of the RAG pipeline. Holzhandel Vogt serves strictly as the use-case provider and testbed. They act solely to provide the operational test environment needed to validate the solution in a real-world industrial setting. To this end, they supply access to internal data, including ERP access and documents, as well as end-users like warehouse staff and internal advisors for testing and validation purposes only. Holzhandel Vogt does not act as an officially funded partner within this experiment.

Relevance of the Experiment

The timber trade is characterized by a high variance of complex products and a reliance on “tribal knowledge” held by long-term employees. TiConAI addresses the critical industry challenge of knowledge loss due to demographic changes and the skilled labor shortage. By making expert knowledge accessible via a simple conversational interface, the experiment demonstrates how traditional SMEs can leverage Generative AI to increase productivity, reduce error rates in logistics, and improve customer satisfaction through faster, data-driven service. The solution is designed to be compliant with data privacy standards (GDPR), ensuring that sensitive internal data is handled securely, potentially utilizing on-premise hosting for the LLMs.

Information Fragmentation and Accessibility

Operationally, the warehouse workforce faces a practical “information disconnect” that creates noticeable inefficiencies in daily logistics. Frontline workers handling physical goods usually lack direct access to digital systems while moving through the facility. To answer routine logistical or technical questions, such as “Is article X currently in stock?”, “Is this timber batch treated for outdoor use?”, or “Where is commission Y stored?” they frequently need to interrupt their workflow to walk to a central desktop terminal. Alternatively, they must locate and ask a more experienced colleague. This manual information retrieval causes recurring disruptions and unnecessarily ties up the resources of senior staff. Ultimately, the lack of immediate, hands-free data access at the point of action slows down the picking process and leads to avoidable delays in overall warehouse operations.

Operational Inefficiencies in Logistics

From a logistical perspective, the lack of real-time, hands-free information access creates bottlenecks. Warehouse staff often struggle to locate packed commissions or verify stock levels instantly while operating machinery or handling goods. The current environment lacks a digital interface that supports the mobile, hands-busy nature of timber logistics, leading to avoidable search times, picking errors, and delayed order processing.

Acoustic and Hardware Constraints for Speech-to-Text

Beyond operational hurdles, establishing a reliable voice interface in this specific setting presents a massive technical challenge. The timber yard is inherently loud, characterized by the continuous noise of heavy forklifts, machinery, and the physical handling of massive wooden goods. Furthermore, the workforce relies on standard two-way radios for mobile communication. These devices transmit highly compressed, narrowband audio that frequently suffers from static, frequency interference, and clipped sentences (push-to-talk delays). Achieving accurate Speech-to-Text (STT) recognition and extracting exact data points, such as alphanumeric article numbers or matchcodes, from such noise-polluted, low-quality audio streams is highly complex. Standard STT models struggle under these harsh conditions, requiring the underlying AI system to be exceptionally robust in processing garbled input to ensure the assistant remains a helpful tool rather than a source of frustration.
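One common mitigation for extracting matchcodes from noisy transcriptions is fuzzy matching against the known article list, sketched here with Python's standard difflib. The codes, normalisation, and cutoff are illustrative assumptions, not the project's actual approach:

```python
import difflib

# Hypothetical matchcodes for illustration; the real list would come
# from the ERP article master data.
KNOWN_MATCHCODES = ["LARCH-40X60", "SPRUCE-45X95", "OAK-20X120"]

def resolve_matchcode(heard: str, cutoff=0.5):
    """Map a possibly garbled STT transcription to the closest known
    matchcode, or None if nothing is close enough. A simple sketch;
    a production system would also handle spelled-out digits and
    phonetic confusions typical of narrowband radio audio."""
    normalised = heard.upper().replace(" ", "")
    hits = difflib.get_close_matches(
        normalised, KNOWN_MATCHCODES, n=1, cutoff=cutoff
    )
    return hits[0] if hits else None
```

Constraining recognition to the closed set of valid codes like this is far more forgiving of clipped or static-laden input than trusting the raw transcription.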

Objective 1: Enable Real-Time, Hands-Free ERP Interaction

The first and main objective is to bridge the gap between the physical workforce and digital record-keeping systems. The experiment aims to develop and deploy “Agent” capabilities within the Open Voice OS (OVOS) framework that allow the DIA to autonomously query the ERP and WMS via APIs. This will enable logistics staff to perform stock checks, track order statuses, and locate packed commissions using voice commands via radios, targeting a 40% reduction in average information retrieval times.

Objective 2: Validate User Acceptance in a Non-Desk Environment

The second objective is to successfully integrate the conversational AI into the daily routine of the logistics workforce. The experiment seeks to demonstrate that a voice-first interface via radios is a highly viable and preferred tool for a rough, hands-busy industrial setting. A key success metric is achieving an 80% user adoption rate among the warehouse staff within the first two months, proving that the solution effectively reduces their cognitive load without disrupting physical workflows.

Objective 3: Democratize Technical Knowledge via RAG

The third objective is to make complex, unstructured technical knowledge instantly accessible to all operational employees, regardless of their tenure. By implementing a Retrieval-Augmented Generation (RAG) pipeline, the experiment aims to ingest and index vast libraries of supplier documents and internal guidelines. The goal is to enable the Digital Intelligent Assistant (DIA) to answer specific technical queries with high accuracy (target: 90%), thereby reducing the dependency on senior experts and accelerating the autonomy of junior warehouse staff.

Sector and Operational Environment

The experiment takes place in the Timber Trade and Logistics sector. To validate the solution in a real-world scenario, the pilot is carried out in an operational test environment provided by Holzhandel Vogt in Germany, which acts strictly as the use-case provider. The setting focuses specifically on the active warehouse and logistics yard, where goods are actively picked, packed, and loaded. This rough industrial environment is characterized by high noise levels, heavy machinery such as forklifts, and a vast inventory of varied physical products, necessitating robust and hands-free communication methods.

Target Users and Stakeholders

The primary target users are warehouse employees and logistics staff who require immediate, hands-free information access to maintain safety and efficiency during their daily physical operations. By utilizing two-way radios, these workers can retrieve essential operational data without interrupting their physical tasks. Secondary stakeholders include the IT department of the use-case provider, which manages the underlying ERP and Warehouse Management Systems (WMS), as well as the management team, which is highly interested in workforce resilience, knowledge preservation, and process optimization.

Constraints and Requirements

The experiment must adhere to strict data privacy and security standards, including GDPR, particularly regarding any employee data or internal operational metrics processed by the LLM. From a technical perspective, the solution must seamlessly integrate with the existing ERP and other system infrastructure via REST APIs and MQTT, and function reliably through the audio interface of two-way radios operating within the warehouse’s existing communication network. Safety is paramount in this setting; the voice interface must provide clear, concise information without distracting operators while they are maneuvering heavy machinery or handling materials.

EXPECTED IMPACT

Operational Efficiency and KPI Targets

The immediate impact of TiConAI will be a measurable increase in operational speed and accuracy. By automating routine inquiries, the project targets a 40% reduction in response times (lowering the average from 10-20 minutes to 6-12 minutes). Furthermore, the experiment aims to automate 50% of routine internal and logistical inquiries by month 12, freeing up warehouse staff for complex value-added tasks. We expect the DIA to process queries within 5 seconds, drastically cutting down the time currently spent walking to terminals or searching through files.

Strategic Benefits for the SME

Beyond efficiency, the strategic impact lies in workforce resilience. The solution will capture and preserve critical company knowledge, mitigating the risks associated with employee turnover. For new employees, the DIA acts as an on-the-job tutor, significantly shortening the learning curve. This leads to higher employee satisfaction (less frustration finding information) and improved customer satisfaction due to faster, more accurate service.

Sustainability and Error Reduction

From a sustainability perspective, optimizing logistics reduces unnecessary movement within the warehouse (energy saving on forklifts) and minimizes shipping errors (reducing reverse logistics and waste). We plan to track the Accuracy of Data Retrieval with a target of 90%, ensuring that the digital instructions match the physical inventory, thereby reducing resource wastage due to picking errors.

Onboard AI

EXPERIMENT OVERVIEW

The experiment focuses on improving the onboarding process for new employees in complex manufacturing systems by integrating advanced digital technologies, specifically Human–Machine Interaction (HMI) and conversational artificial intelligence (AI). With this approach, we aim to modernize and streamline training, enhancing both efficiency and effectiveness in manufacturing environments. Our production operations are divided into two sectors: plastic injection molding and wire harness assembly.

Traditional onboarding is highly dependent on direct knowledge transfer from experienced workers, which can lead to variability in training quality and place additional pressure on senior staff. The introduction of a Digital Intelligent Assistant (DIA) and virtual reality (VR)-based training enables more consistent and structured knowledge transfer, reducing the training burden on experienced workers.

The ONBOARDING-AI experiment addresses the training challenges associated with these complex manufacturing processes, which involve intricate tasks and specialized machinery.

The experiment utilizes HMI technology to create a digital training platform that simulates real-world manufacturing tasks. This includes interactive interfaces and virtual simulations that allow new employees to practice and learn in a controlled environment. Conversational AI has been integrated into the training platform to provide real-time guidance, feedback, and support.

The main building blocks of the Digital Intelligent Assistant (DIA) are speech-to-text (STT) and text-to-speech (TTS). These technologies enable voice interaction between humans and machines, which is especially important in manufacturing environments, where manual interaction with digital interfaces is often not optimal. STT converts spoken language into digital text. The ONBOARDING-AI project uses the open-source OpenVoiceOS (OVOS) platform, which includes support for various STT engines (e.g., Vosk, DeepSpeech), and the system is designed to allow either local or cloud processing. TTS, on the other hand, allows the digital assistant to respond to the user in spoken language. In our experiment, this component is particularly important for real-time instructions, warnings, and confirmations of commands, and it contributes significantly to a more natural and intuitive user experience. With this approach, ONBOARDING-AI provides a robust, flexible, open-source voice interaction system that is well suited to manufacturing environments.
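The STT, intent handling, and TTS stages can be sketched end-to-end with stubbed engines standing in for the real ones (e.g. Vosk wired up through OVOS). The intents and phrasing below are illustrative only:

```python
# Toy end-to-end sketch of the voice loop: a stubbed STT stage, simple
# intent handling, and a stubbed TTS stage. The stubs stand in for real
# engines; intents and responses are invented for illustration.

def stt(audio: bytes) -> str:
    """Stub: a real engine would transcribe audio; here we decode text."""
    return audio.decode("utf-8")

INTENTS = {
    "start welding": "Starting hot-plate welding. Keep hands clear.",
    "next step": "Place the part on the conveyor belt.",
}

def handle(utterance: str) -> str:
    return INTENTS.get(utterance.strip().lower(), "Sorry, please repeat.")

def tts(text: str) -> str:
    """Stub: a real engine would synthesise speech; here we tag the text."""
    return f"[spoken] {text}"

def voice_turn(audio: bytes) -> str:
    """One full interaction turn: audio in, spoken response out."""
    return tts(handle(stt(audio)))
```

The value of keeping these three stages decoupled, as OVOS does, is that each engine can be swapped (local vs. cloud, different languages) without touching the training logic.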

An AI-driven chatbot and virtual assistants are being implemented into the platform to help trainees navigate tasks, answer questions, and adapt the training content based on individual progress. The platform also features visual guides, video tutorials, and interactive exercises that help trainees develop new skills and knowledge for their future roles. For the training, we used the Meta Quest 3 VR headset.

The VR training scenarios follow a structured sequence of steps that mirror the actual workflow within the production environment at ELVEZ. The design phase of the experiment included capturing spaces at ELVEZ with a 3D camera, carried out by the ARGOxr team, followed by editing the captured production-floor equipment in 3D virtual environment editors. The prototype presented clearly defined areas and functional zones designed to simulate actual operational conditions and workflows. The main elements of the prototype included almost all of the key points planned for the final version of the solution:

Machine operation zone – conveyor belt for part handling, machine for hot-plate welding, machine for tightness testing, quality control points, and packing of the product. All machines, tools, and equipment are modeled with a high degree of realism, closely reflecting their physical counterparts. The prototype and the final version also include interactive and movable elements – such as functional buttons, welding tool mechanisms, and rotating parts – to simulate real machine behavior and usage.

To outline all steps of the process and to help the training participant systematically learn and remember the flow of operations, we first created a flowchart. These diagrams represent the sequence of steps a user must follow to perform certain tasks, allowing for an accurate simulation of a real-world environment. The correctness of the work operation then depends on the decisions made by the individual.
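A flowchart of this kind can be encoded as an ordered list of expected steps and checked against the trainee's decisions. The sketch below is illustrative only; the step names are assumptions, not ELVEZ's actual work instructions.

```python
# Minimal sketch of the flowchart idea: the training scenario as an
# ordered sequence of expected steps, where each user decision is
# compared with the step the flowchart prescribes at that point.
# Step names are illustrative placeholders.

EXPECTED_FLOW = [
    "place_parts_in_cradle",
    "run_hot_plate_welding",
    "run_tightness_test",
    "inspect_for_defects",
    "mark_and_document",
]

def evaluate_run(decisions):
    """Compare a trainee's decisions with the expected flow.

    Returns (steps_taken, mistakes): mistakes lists each position where
    the decision deviated, plus any steps left unfinished.
    """
    mistakes = []
    for step, (expected, actual) in enumerate(zip(EXPECTED_FLOW, decisions)):
        if actual != expected:
            mistakes.append((step, expected, actual))
    missing = EXPECTED_FLOW[len(decisions):]
    mistakes.extend(
        (len(decisions) + i, step, None) for i, step in enumerate(missing)
    )
    return len(decisions), mistakes

# A run that skips the tightness test and stops early:
done, errs = evaluate_run([
    "place_parts_in_cradle",
    "run_hot_plate_welding",
    "inspect_for_defects",  # deviation: tightness test was expected here
])
```

Because correctness depends on the individual's decisions, the evaluator reports deviations per step rather than a single pass/fail verdict, which matches how the flowchart is meant to be used during training.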

The VR training environment incorporates intuitive user interactions through hand gestures and feedback systems. These interactions allow employees to perform realistic tasks such as picking parts, operating machines, pressing buttons to start certain actions, and handling tools within the virtual workspace. In addition, the voice command system allowed a hands-free workflow, which helped participants navigate the virtual working environment effectively.

ELVEZ operates within a labour market that is increasingly diverse in terms of age, educational background, and linguistic capability. Differences in prior technical experience and digital literacy often lead to unequal onboarding outcomes. By enabling immersive VR training with real-time voice interaction, ONBOARDING-AI provides a controlled, supportive environment that can adapt to individual abilities and learning speed. Local, client-side speech processing reduces the need for high language proficiency to interact with the system, enabling more equitable access for multilingual or non-native speakers.

Usability and effectiveness tests will be carried out in ELVEZ. We have selected 10 participants for the testing phase, 5 of whom will be trained traditionally and 5 in a VR environment.
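The planned comparison between the two groups of five can be summarized with a simple error-rate contrast. The numbers below are made-up placeholders to illustrate the analysis, not experimental results.

```python
from statistics import mean

# Placeholder error counts per participant: 5 traditionally trained,
# 5 VR-trained. These values are invented for illustration only.
traditional_errors = [4, 5, 3, 6, 4]
vr_errors = [2, 1, 3, 2, 1]

# The headline metric: average reduction in errors for the VR group.
reduction = mean(traditional_errors) - mean(vr_errors)
```

With groups this small, the averages should be read alongside the per-participant spread rather than as a statistically conclusive result.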

By the end of the experiment, ELVEZ will have a fully integrated digital training system that optimizes onboarding, reduces errors, and enhances worker competence, ultimately contributing to more efficient and sustainable production processes. Through digital, simulation-based training, we also contribute to sustainability by reducing the need for physical training materials and lowering the occurrence of mistakes on the production line.

As part of the automotive industry, we are subject to a range of legal and regulatory frameworks relating to labour law, workplace safety, data protection, product quality, and structural compliance with automotive standards. As a European SME, we operate in accordance with national and European labour legislation, which covers everything from recruitment to induction, equal treatment, skills development, and employee rights, and obliges us to provide adequate and verifiable training, including safety instructions, before employees take on specific production tasks.

These legal aspects impact the design, operation and deployment of the proposed digital solution. Therefore, the trial must be aligned with applicable legal obligations to ensure safe, ethical and compliant implementation.

The onboarding process relied primarily on traditional, hands-on knowledge transfer, which created variability in training quality. It was mostly informal, with senior workers transmitting skills through demonstrations and written or oral instructions. Training materials and communication were often in the native language, which can be a barrier for non-native speakers. This often resulted in errors, inconsistent delivery, material loss, and increased workload.

One of our primary objectives was to enhance workforce readiness by simulating our real manufacturing environment to effectively prepare newly hired employees for actual production operations before they enter the production floor.

Objective 1: 

Design and deploy an interactive digital training platform utilizing Conversational AI and HMI that mimics real-world manufacturing tasks, focusing on plastic injection molding and wire harness assembly. The created virtual platform will allow trainees to practice procedures, troubleshoot scenarios, and build confidence in a safe and controlled environment.

Objective 2:

Create a training module in VR, integrating visual tools such as step-by-step video tutorials, incorporating gamified solutions that reward accuracy, speed, and adherence to standard operating procedures.

Objective 3: 

Implement a feedback mechanism through AI-driven analytics to track trainee performance and adapt training content in real time. These tools will also help measure the effectiveness of the training platform and identify areas for improvement. Performance metrics such as task completion time, accuracy, and safety in the testing environment will be continuously monitored through simulation (VR cast) on a tablet device.
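The per-session metrics named above (completion time, accuracy, safety) could be aggregated as sketched below. The field names and data shape are assumptions, not the platform's actual schema, and the session values are invented for illustration.

```python
from statistics import mean

# Hypothetical per-session records for one trainee; field names are
# illustrative assumptions, values are placeholders.
sessions = [
    {"trainee": "A", "task_time_s": 410, "errors": 3, "safety_violations": 1},
    {"trainee": "A", "task_time_s": 325, "errors": 1, "safety_violations": 0},
    {"trainee": "A", "task_time_s": 290, "errors": 0, "safety_violations": 0},
]

def summarize(sessions):
    """Aggregate one trainee's sessions into the KPIs the trainer sees."""
    return {
        "avg_task_time_s": mean(s["task_time_s"] for s in sessions),
        "total_errors": sum(s["errors"] for s in sessions),
        "total_safety_violations": sum(s["safety_violations"] for s in sessions),
        # Simple adaptivity signal: did errors drop from first to last session?
        "improving": sessions[-1]["errors"] < sessions[0]["errors"],
    }

summary = summarize(sessions)
```

A real implementation would feed a summary like this back into the content-adaptation logic, e.g. repeating the steps where errors cluster.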

 

The experiment will be conducted at ELVEZ, a manufacturing SME based in Višnja Gora, Slovenia, which specializes in producing high-quality automotive components. The company operates two main production lines: injection molding of complex plastic parts and assembly of cable harnesses. The initial phase of the experiment focuses on a work procedure within the injection-molding production line, where operators are responsible for assembling intricate plastic components. In a later phase, the experiment will expand to include the development of a new training program dedicated to the production of cable harnesses and assemblies.

The experiment will take place directly on the injection molding production floor at ELVEZ, representing a real industrial environment rather than a laboratory or pilot setting. This environment includes multiple injection molding machines, welding and assembly equipment, and testing stations used for quality control. The goal is to evaluate the solution under real production conditions (TRL6), where operators must work with precision and adhere to established procedures. On the production floor, we have established a dedicated clean and controlled area specifically prepared for safe VR-based training. This space ensures that operators can participate in immersive training sessions without interfering with ongoing production activities and while maintaining all required safety standards. Before participating in the training, individuals must complete a questionnaire specifically designed to determine whether they are suitable for using VR technology. This assessment helps ensure that participants can safely and comfortably engage with immersive training tools. In addition, each individual participant is given the opportunity to choose whether they prefer to take part in the VR-based training program or follow the traditional training approach, allowing them to choose the method that best aligns with their comfort level and learning preferences.

The use case is based on a multi-step procedure in which an operator is required to produce a fully functional injection-molded plastic part. The component is manufactured in two separate pieces that must be joined through a precise welding process. Operators must follow detailed instructions and perform several critical tasks, such as correctly positioning parts in the cradle, executing the welding operation, testing the components for tightness, inspecting them for defects, and marking and documenting the finished parts. Common errors observed among operators include incorrect placement of parts, insufficient or omitted tightness testing, failure to identify defects in components or assemblies, and incorrect marking or documentation, which can lead to quality-control issues.
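The common error types listed above can be caught as omissions in a per-part quality record. The check below is a hypothetical sketch; the field names are illustrative, not the actual quality-tracking schema at ELVEZ.

```python
# Hypothetical quality-record check for one finished part, mirroring the
# common error types described in the use case. Field names are assumed.

REQUIRED_CHECKS = {
    "cradle_position_ok",
    "weld_done",
    "tightness_tested",
    "defect_inspection_done",
    "marked_and_documented",
}

def find_omissions(record: dict) -> set:
    """Return the checks the operator skipped or that failed."""
    return {step for step in REQUIRED_CHECKS if not record.get(step, False)}

# Example record with two of the common errors: omitted tightness test
# and missing marking/documentation.
record = {
    "cradle_position_ok": True,
    "weld_done": True,
    "tightness_tested": False,
    "defect_inspection_done": True,
}
omissions = find_omissions(record)
```

In contrast to a step-order check, this record-level view flags what was never completed, which is the form quality-control issues typically take when parts reach inspection.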

The primary users involved in the experiment are production operators, particularly newly hired employees who often face challenges due to the complexity of the processes. Additional stakeholders include shift leaders and supervisors who oversee daily operations, quality assurance personnel responsible for maintaining product standards, process engineers who design and optimize workflows, and company management. Customers in the automotive supply chain also indirectly benefit from improved process reliability and product quality.

The experiment must comply with several operational and regulatory constraints. These include adherence to automotive industry safety and quality standards such as IATF 16949, compliance with machine safety regulations for working near injection molding and welding equipment, and meeting certification requirements for operators performing specialized tasks.

Access to production data may be restricted due to proprietary or customer-sensitive information, and the solution must integrate with existing digital systems such as manufacturing execution systems and quality-tracking tools. Additionally, production lines have limited availability for downtime, which requires careful planning to avoid disruptions.

EXPECTED IMPACT

The experiment is expected to result in significant improvements in training effectiveness, operational quality, and workforce readiness within the ELVEZ production environment. While improving training effectiveness with the VR technology solution, we also expect that the adoption of advanced Digital Intelligent Assistant systems will modernize our manufacturing processes, improving accuracy in assembly and enhancing the capability for complex wire harness production. This is expected to result in reduced training costs and increased productivity, strengthening our competitive edge and potentially attracting new clients.

By introducing VR-based training for complex injection-molding procedures, we anticipate a measurable reduction in operator errors, particularly among newly employed individuals who often struggle with the complexity of the tasks. The immersive nature of VR training is expected to enhance knowledge retention, improve procedural accuracy, and increase operator confidence before they begin working on the production line.

The success of individual participants will be evaluated through a combination of key performance indicators. Key performance metrics will include the reduction of common operator mistakes, such as incorrect decisions made during the process, incomplete tasks, failure to identify defects, and inaccurate marking of finished components. Additional indicators of success will include shorter onboarding times, fewer interventions required by supervisors, and improved consistency in following established work instructions. Feedback from operators, supervisors, and quality assurance personnel will also play an important role in assessing the perceived usefulness and usability of the training program.

The experiment is expected to benefit a wide range of stakeholders. Operators will gain a safer, more engaging, and more effective learning experience, while supervisors and quality teams will benefit from a more reliable and better-prepared workforce.

ELVEZ, on the other hand, will gain insights into how advanced training technologies can support productivity, reduce scrap rates, and improve overall process stability. In the later phase, when the training program expands to cable harness production, the company will be able to apply the same methodology to another critical area of its operations.

There are also potential sustainability benefits. By improving operator accuracy and reducing the number of defective or improperly assembled components, the experiment can contribute to lower material waste and reduced energy consumption associated with rework. More efficient training processes may also reduce the need for repeated physical demonstrations, saving time and resources.