

Artificial Intelligence (AI) has rapidly advanced in recent years, generating both excitement and skepticism. As organizations explore AI projects, they question whether these efforts will yield tangible results or prove to be just another wave of technological hype. Narrow AI is currently the most common kind of AI. It operates within a limited, pre-defined range of situations to enhance the efficiency of repetitive tasks, often under human supervision and following specific training. It excels at convergent activities, i.e. tasks with clear and specific outcomes. For example, it can detect whether a worker is wearing protective equipment to improve safety, suggest optimized routing in logistics that takes traffic into account, or differentiate between plastic and other materials on a sorting line.
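To make the idea concrete, here is a minimal, deliberately simplified sketch of such a narrow-AI task: a classifier that separates plastic from other material based on a few pre-extracted sensor features. The feature names and values are illustrative assumptions, not data from a real sorting line.

```python
# Minimal sketch of a narrow-AI task: separating plastic from other
# material on a sorting line, given a few pre-extracted sensor features.
# Features and values below are illustrative only.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training examples: [near-infrared reflectance, weight in g, opacity]
X_train = [
    [0.82, 12.0, 0.30],   # plastic bottle
    [0.15, 150.0, 0.95],  # glass jar
    [0.78, 8.0, 0.25],    # plastic film
    [0.10, 40.0, 0.90],   # metal can
]
y_train = ["plastic", "other", "plastic", "other"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Classify a new item coming off the line
print(model.predict([[0.80, 10.5, 0.28]]))  # e.g. ['plastic']
```

The point is the narrow scope: the model answers exactly one question and nothing else, which is what makes it effective in the short term and, as discussed below, siloed in the long term.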
While narrow AI can deliver impressive short-term results, it has hidden pitfalls. Narrow AI systems typically come with their own data collection pipelines, dedicated databases and specialized algorithms. Their deployment requires infrastructural adjustments, security assessments and ongoing maintenance, just like any modern software project. Communication between different systems is rare and often complex, leading to additional integration and maintenance costs. This common silo-oriented approach is inherently short-sighted: fragmentation of data and repeated effort across multiple processes hinder seamless integration and optimization of AI capabilities.
A more effective approach involves an integrated data strategy: centralized data collection from various devices and platforms, such as machines and field devices, mobile apps and other software, all funneling data into a single storage solution. Our experience with the VIDIM platform, which follows this approach, has shown significant reductions in both implementation and maintenance costs, while enhancing security by reducing the number of potential breach points. Importantly, such an approach lays the foundation for another, more powerful kind of AI: so-called General AI, which provides broad, human-like cognitive capabilities and can autonomously tackle new and unfamiliar tasks. This form of AI can discern, assimilate and apply its intelligence to resolve challenges without human guidance. It can detect and report inefficiencies in a process as they arise, even in situations where people are unaware of them.
Given the complexity and novelty of the field, making informed decisions and defining a clear AI strategy upfront is essential. It is equally important to work with a multidisciplinary team so that no aspect is overlooked: a software engineer to properly design data flows, a database expert to define the data modeling, a data scientist to create the best possible AI models and an interface designer to make sure operators use the systems correctly. Such an approach is fundamental, as a small design flaw can make the difference between cost-effective results and costly failures.
The first step I recommend in any case is to establish a comprehensive data collection workflow and create a data lake, i.e. a centralized repository for all collected data. Even without initially involving AI, such an architecture can provide immediate insights into the current situation and answer critical questions like “What happened last month?”, “How much plant time was wasted due to a particular machine fault?” and “What was the situation when this issue occurred?” These are answers that many plant managers still lack, or have only in terms too vague to be useful. Notably, most clients are already satisfied once they reach this point, discovering that it was not AI they really needed. Or rather, that there was something else they needed before AI: a clear, objective representation of the plant’s situation.
Once a data lake is established, AI can be implemented incrementally to extend its functionality. It can predict upcoming issues, uncover unknown correlations and insights, and improve operational efficiency and decision-making. However, predicting what is going to happen is only valuable if you have a clear overview of the current situation and a clear, objective baseline. Without these elements, there is a high risk of being overwhelmed by irrelevant data and alerts generated by the AI.
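To illustrate how such questions can be answered once data is centralized, here is a minimal sketch of a downtime query over a fault-event log in the data lake. The file path and column layout (machine_id, fault_code, start, end) are assumptions made for the example, not the actual VIDIM data model.

```python
# Minimal sketch: "How much plant time was wasted due to a particular
# machine fault last month?" answered from a centralized fault-event log.
# Path and schema (machine_id, fault_code, start, end) are assumptions.
import pandas as pd

events = pd.read_parquet("data_lake/machine_fault_events.parquet")

# Restrict to last month's events (dates are placeholders)
last_month = events[(events["start"] >= "2024-05-01") & (events["start"] < "2024-06-01")]

# Hours lost per fault code
hours_lost = (last_month["end"] - last_month["start"]).dt.total_seconds() / 3600
summary = last_month.assign(hours_lost=hours_lost).groupby("fault_code")["hours_lost"].sum()

print(summary.sort_values(ascending=False))
```

Nothing in this query is AI; it is simply the clear, objective baseline described above, and the natural substrate on which predictive models can later be layered.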
Moreover, without a sound data management strategy, significant issues can arise, and AI might even mislead you. Inconsistent scales, incorrect measurements and protocol deviations can cause AI not only to make mistakes but to exacerbate them over time, because a system that keeps learning from inaccurate data will amplify its own errors. AI is a powerful tool with substantial potential for efficiency improvement. However, organizations must resist the temptation to invest in the latest AI tools and their enticing short-term promises without a strategic approach. A long-term vision and a foundational architecture are essential to address current needs efficiently and to enable future developments. A clear AI strategy will reduce costs, improve security, deliver significant short-term results and pave the way for advanced AI capabilities.
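As a closing illustration of what sound data management can mean at the lowest level, the sketch below shows basic sanity checks applied before data is used for training, catching the kinds of problems mentioned above. The file path, column names and plausibility limits are assumptions for the example.

```python
# Minimal sketch: sanity checks before training, catching the kinds of
# problems mentioned above (inconsistent scales, impossible measurements).
# Path, column names and plausibility limits are assumptions.
import pandas as pd

readings = pd.read_parquet("data_lake/temperature_readings.parquet")

# Flag values that suggest a unit mix-up (e.g. Fahrenheit logged as Celsius)
suspicious = readings[(readings["temp_c"] < -40) | (readings["temp_c"] > 120)]
if not suspicious.empty:
    print(f"{len(suspicious)} readings outside the plausible range; review before training")

# Remove duplicates and broken rows rather than letting a model learn from them
clean = readings.drop_duplicates().dropna(subset=["temp_c", "machine_id"])
```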