The 90% behind AI: engineering for reliability

Julian Hessels, Jochem van Leeuwen & Anuschka Diderich
July 3, 2025 · 5 min read

AI is not just a technological advancement—it’s a catalyst for smarter decision-making, operational efficiency, and strategic innovation. While discussions often shift between limitless potential and existential risks, the real opportunity lies in harnessing AI as a trusted enabler of business growth. By implementing AI thoughtfully—ensuring transparency, security, and clear governance—organizations can position it as a reliable partner rather than a disruptive force.

Many companies are already leveraging AI in transformative ways, from automating complex processes to generating insights that drive competitive advantage. The technology delivers highly detailed and confident responses, surpassing traditional search capabilities. However, confidence alone does not guarantee accuracy. Like a talented but still-maturing team member, AI thrives when guided by strong oversight and continuous learning.

The key question for leaders isn’t whether to adopt AI, but how to do so in a way that maximizes trust, safeguards compliance, and strengthens strategic autonomy. Businesses that integrate AI with a clear vision and structured safeguards will not only mitigate risks but unlock new opportunities for agility, innovation, and long-term success.

The foundation of trust: narrow AI
The most effective way to make AI a trustworthy partner is by focusing on Narrow AI—AI designed for specific, well-defined tasks. Unlike Artificial General Intelligence (AGI), which aims to replicate human-like reasoning across multiple areas, Narrow AI is purpose-built for precision, control, and reliability, making it the best strategic foundation for businesses integrating AI into their operations.

Narrow AI is already transforming industries through applications such as AI-driven medical diagnostics, automated tax assistance, compliance monitoring, and logistics optimization. Its real strength lies in its ability to meet three critical requirements: accuracy, transparency, and robustness. Accuracy ensures AI produces consistent, high-quality results, often matching or surpassing human performance in specialized tasks.

Transparency is essential for trust and accountability—AI systems must provide clear reasoning behind their decisions, particularly in regulated industries where explainability is a legal and ethical necessity. Robustness ensures that AI can handle real-world complexities, such as incomplete data, input errors, or unexpected scenarios, without breaking down or generating unreliable results.

Because Narrow AI operates within a clearly defined scope, it can be fine-tuned to maximize precision, comply with industry standards, and adapt to business needs, making it far more practical and reliable than broad, one-size-fits-all AI models. By prioritizing these three pillars, businesses can integrate AI with confidence, ensuring it enhances human expertise rather than introducing uncertainty or risk. This structured approach not only strengthens trust in AI but also creates a foundation for sustainable, high-impact innovation, where AI serves as a secure, high-performing tool that drives long-term value.

From 80% to 100%: how engineering turns AI into a reliable business asset

AI alone is not enough: engineering is what bridges the gap between an 80% accurate model and a fully reliable business tool. The model itself is only 10% of the equation. The remaining 90% comes from engineering, specifically data engineering, cloud engineering, and software engineering, which together transform AI from a promising prototype into a robust, enterprise-ready solution. With that 90% in place, AI becomes a scalable, secure, and strategic asset that businesses can confidently integrate into mission-critical processes.

Data Engineering ensures that AI operates on high-quality, accurate, and complete data. Poor data leads to hallucinations, biased decisions, and irrelevant outcomes. To prevent this, organizations must monitor data integrity, consistency, and validity. In mission-critical environments—where thousands of data updates occur per hour—errors can cause costly disruptions. AI must be supported by a strong data governance framework to maintain real-time accuracy and prevent faulty conclusions.
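
To make the data-quality point concrete, here is a minimal sketch of the kind of integrity check such a governance framework might automate before data reaches the AI system. The schema, thresholds, and `validate_batch` helper are illustrative assumptions, not part of any specific product:

```python
from datetime import datetime, timedelta, timezone

# Assumed schema and freshness requirement for a stream of records.
REQUIRED_FIELDS = {"customer_id", "amount", "timestamp"}
MAX_STALENESS = timedelta(hours=1)

def validate_batch(records: list[dict]) -> list[str]:
    """Return human-readable violations; an empty list means the batch passes.

    Timestamps are assumed to be timezone-aware datetime objects.
    """
    violations = []
    seen_ids = set()
    now = datetime.now(timezone.utc)
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:  # integrity: every record must be complete
            violations.append(f"record {i}: missing fields {sorted(missing)}")
            continue
        if rec["customer_id"] in seen_ids:  # consistency: no duplicate records
            violations.append(f"record {i}: duplicate id {rec['customer_id']}")
        seen_ids.add(rec["customer_id"])
        if not isinstance(rec["amount"], (int, float)) or rec["amount"] < 0:
            violations.append(f"record {i}: invalid amount {rec['amount']!r}")
        if now - rec["timestamp"] > MAX_STALENESS:  # validity: data is fresh
            violations.append(f"record {i}: stale data from {rec['timestamp']}")
    return violations
```

Gating every batch through checks like these is what keeps a hallucination-prone model from reasoning over incomplete or outdated inputs in the first place.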

Cloud Engineering gives businesses control over where and how AI models run. While general-purpose workloads can safely rely on external providers, AI models often sit directly in customer interactions and critical workflows, making security and reliability essential. OpenAI, for example, does not guarantee response times—sometimes taking minutes to generate an answer, which is unacceptable for business applications. Companies need alternative AI strategies, such as hosting open-source models and implementing failover mechanisms to ensure continuous performance.
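
The failover mechanism described above can be sketched in a few lines: bound every call to the primary provider by a response budget, and route to a self-hosted model when it is exceeded. `call_hosted_model` and `call_local_model` are hypothetical stand-ins for whatever provider SDK and open-source endpoint an organization actually runs:

```python
from concurrent.futures import ThreadPoolExecutor

RESPONSE_BUDGET_SECONDS = 5.0  # assumed SLA; tune to the business process

def call_hosted_model(prompt: str) -> str:
    # Hypothetical stand-in for the external provider's SDK call.
    raise TimeoutError("simulating a slow or unavailable provider")

def call_local_model(prompt: str) -> str:
    # Hypothetical stand-in for a self-hosted open-source model endpoint.
    return f"[local model] answer to: {prompt}"

pool = ThreadPoolExecutor(max_workers=4)

def answer(prompt: str) -> str:
    """Try the primary provider, but never wait past the response budget."""
    future = pool.submit(call_hosted_model, prompt)
    try:
        return future.result(timeout=RESPONSE_BUDGET_SECONDS)
    except Exception:
        # Timeout or provider error: fail over so the workflow keeps running.
        # A production system would also alert on and track the failed call.
        return call_local_model(prompt)

print(answer("Summarize this contract clause."))
```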

Software Engineering ensures that AI integrates seamlessly into existing IT environments. AI is not just an independent tool—it must function predictably, securely, and efficiently within enterprise systems. Using Test-Driven Development (TDD) helps validate AI behavior before deployment, reducing errors and improving reliability. AI-driven testing tools can further refine models by iterating toward business-specific, high-accuracy outputs, preventing unpredictable behavior.
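
In a TDD setup, the expected behavior is pinned down as executable tests before any model version is trusted. Here is a minimal sketch with pytest, where `classify_invoice` and its label set are assumptions invented for illustration:

```python
import pytest

VALID_LABELS = {"approve", "review", "reject"}

def classify_invoice(invoice: dict) -> str:
    # Hypothetical stand-in for the real AI wrapper; in TDD the tests
    # below exist first, and every new model version must satisfy them.
    if "amount" not in invoice or "vendor" not in invoice:
        raise ValueError("missing required fields")
    return "review" if invoice["amount"] > 10_000 else "approve"

def test_output_is_always_a_known_label():
    assert classify_invoice({"amount": 120.0, "vendor": "ACME"}) in VALID_LABELS

def test_large_amounts_are_never_auto_approved():
    # A business rule captured as an executable specification.
    assert classify_invoice({"amount": 1_000_000.0, "vendor": "ACME"}) != "approve"

def test_malformed_input_fails_safely():
    with pytest.raises(ValueError):
        classify_invoice({})
```

Because the tests encode business rules rather than exact model outputs, they keep passing (or fail loudly) as models are swapped or retrained.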

AI models provide an 80% solution, but engineering closes the gap to 100%. AI must be designed, tested, and governed like any other mission-critical system. Companies that invest in these foundations will unlock AI’s full potential while maintaining control, security, and long-term reliability.

Accountability: AI in the human loop

AI may operate autonomously, but ultimate responsibility remains with humans. The concept of AI in the human loop emphasizes that AI should not replace human judgment but rather support and enhance it. While AI can assist in processes and decision-making, it is the duty of corporate leaders to implement clear safeguards that ensure responsible use and mitigate risks. This is not just about having a human in the loop as a passive observer—it is about acknowledging that leaders and decision-makers remain fully responsible for every AI-driven process and outcome. Accountability cannot be offloaded to algorithms, software vendors, or automated systems.

Regulatory frameworks reinforce this reality: “human in the loop” is not a sufficient safeguard. Instead, organizations must embrace AI in the human loop, underscoring that responsibility always remains with the individuals and boards overseeing AI, not the technology itself.

To build trustworthy and secure AI systems, businesses must focus on four key areas: prevention, monitoring, recovery, and security. Prevention means designing AI models that minimize errors before they occur, ensuring that flawed decisions are caught early. Monitoring is essential, not just to track AI’s input and output but also to continuously learn from its behavior, refining systems to improve accuracy and fairness over time. Recovery mechanisms must be in place to quickly correct errors and manage outages, preventing AI-driven mistakes from escalating into serious operational failures. Security is another critical responsibility—organizations must protect AI systems from threats such as prompt injection attacks, unauthorized data manipulation, and adversarial exploits that could compromise decision-making or expose sensitive information.
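
To illustrate how prevention, monitoring, and security can meet in code: a first, deliberately naive line of defense screens and logs every input before it reaches the model. The patterns below are illustrative only; real deployments layer model-based classifiers and output filtering on top of such checks:

```python
import logging
import re

logger = logging.getLogger("ai.guardrail")

# Deliberately simple screening patterns (prevention); no substitute
# for a full guardrail stack against prompt injection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]

def screen_input(user_text: str) -> str:
    """Log every input (monitoring) and block obvious injections (prevention)."""
    logger.info("ai input received (%d chars)", len(user_text))
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            logger.warning("possible prompt injection blocked")
            raise ValueError("input rejected by guardrail")
    return user_text
```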

Unlike traditional automation, AI is not just an execution tool; it is an adaptive system that learns and evolves. This means oversight is not optional—it is a core requirement. AI systems must be designed to function within clear ethical and regulatory boundaries, ensuring that humans remain in control. While some aspects of oversight can be automated, accountability always lies with the people using AI, not the AI itself. Companies that embed these principles into their AI strategies will be better positioned to harness AI’s potential while maintaining control, compliance, and trust.

Moving beyond proof of concept: Fire-Ready-Aim

Many organizations get stuck in traditional Proof of Concepts (PoCs) that test AI in controlled environments but ultimately fail to connect with real business workflows. While these PoCs may provide technical validation, they often don't translate into meaningful operational impact. To fully leverage AI, businesses must shift from the classic "Ready, Aim, Fire" approach to a more agile "Fire, Ready, Aim" method—experimenting with AI in real-world scenarios, learning from initial results, and then refining the approach. Instead of months of planning, documentation, and risk analysis before deployment, AI adoption should focus on quick experimentation and iterative improvements. This is especially achievable with narrow AI, which excels at solving well-defined problems with high precision. Modern AI tools enable rapid prototyping, allowing organizations to integrate AI into their workflows faster and more effectively.

AI adoption should be fast, hands-on, and results-driven, with teams validating AI’s value through real business cases, not just controlled tests. Unlike traditional IT implementations, which introduce new functionality case by case, AI can tackle a much broader range of operational challenges at once, sometimes automating processes that would previously take teams months or even years to complete. This means organizations must think beyond individual AI use cases and instead adopt a strategic, domain-wide perspective, identifying areas where AI can drive efficiency, automation, and competitive advantage.

This shift in approach requires a fundamental change in mindset. AI adoption is not just about technology—it’s about transforming the organization to integrate AI effectively. The most successful AI implementations actively involve employees from the start, allowing them to gain experience, shape AI’s role in their workflows, and contribute to its refinement. Rather than lengthy validation processes that slow down progress, organizations should take a more agile and collaborative approach. For example, bi-weekly co-creation sessions with subject matter experts, product managers, and business leaders allow AI models to be fine-tuned in real time, ensuring they align with actual business needs.

Instead of waiting until an AI system is "perfect" before deploying it, businesses should embrace AI’s iterative nature, where learning and refinement happen continuously. AI adoption is not about having every detail worked out in advance but about quickly testing, adjusting, and optimizing based on real-world usage. Companies that adopt this mindset will unlock AI’s full potential faster, more effectively, and with greater adaptability, ensuring AI moves from theoretical discussions to delivering real, measurable value.

Scaling AI: from experimentation to enterprise integration

AI adoption should go beyond isolated experiments—a structured AI platform is essential for secure, scalable, and enterprise-wide implementation. The Fire-Ready-Aim approach works best when AI is embedded in a platform that supports continuous innovation while maintaining security, reliability, and governance.

Experimentation should still happen, but within a controlled, ring-fenced environment, ensuring that AI can be tested without the risk of security breaches or data leaks. To maintain control and consistency, organizations must prevent shadow IT, where different teams develop AI solutions without oversight, and instead centralize model governance to enforce uniform standards for compliance, security, and performance. This means implementing failover mechanisms to ensure uninterrupted operations if an AI model fails and enforcing security protocols such as access controls and audit trails to prevent unauthorized modifications and data exposure. By centralizing AI governance, businesses can foster innovation while keeping AI applications reliable, secure, and aligned with strategic objectives.
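
One concrete flavor of the audit-trail requirement: on a centralized platform, every model call can be wrapped so that who asked, which model answered, and when is recorded before any response is returned. The decorator and field names below are illustrative assumptions:

```python
import json
import logging
from datetime import datetime, timezone
from functools import wraps

audit_log = logging.getLogger("ai.platform.audit")

def audited(model_name: str):
    """Record every call to a governed model before returning its answer."""
    def decorator(call_model):
        @wraps(call_model)
        def wrapper(user_id: str, prompt: str) -> str:
            audit_log.info(json.dumps({
                "ts": datetime.now(timezone.utc).isoformat(),
                "user": user_id,              # assumes authenticated callers
                "model": model_name,
                "prompt_chars": len(prompt),  # log size, not content, for privacy
            }))
            return call_model(user_id, prompt)
        return wrapper
    return decorator

@audited("contract-review-llm")  # hypothetical governed model
def review_contract(user_id: str, prompt: str) -> str:
    return "..."  # placeholder for the actual model call
```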

Beyond security, AI platforms serve as knowledge repositories, helping organizations retain and scale expertise. For example, a bank struggling with compliance and vendor management initially considered hiring 15 additional full-time employees but found that AI could handle the workload with just 5 to 6 employees, increasing both efficiency and scalability. AI platforms ensure that knowledge is not lost when employees leave and that AI-driven processes continuously improve.

The rise of multi-agent AI

AI’s next evolution lies in multi-agent systems, where multiple Narrow AI agents operate together to execute complex processes across the value chain. A single AI agent has sensors, memory, instructions, decision-making capabilities, and specific goals. When multiple agents collaborate, they optimize workflows, automate decision-making, and drive operational efficiency. The technology is already available—frameworks like LangChain and AutoGen enable organizations to deploy AI agents that monitor operations, analyze supply chains, and even redesign processes dynamically.
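
Stripped of any particular framework, the agent anatomy just described can be sketched as a simple loop over sensors, memory, instructions, and a decision step. Every name below is illustrative; in practice the `decide` step would be an LLM call made through a framework such as LangChain or AutoGen:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    goal: str
    instructions: str
    observe: Callable[[], dict]               # sensor: reads world state
    decide: Callable[[str, list, dict], str]  # in practice, an LLM call
    memory: list = field(default_factory=list)

    def step(self) -> str:
        observation = self.observe()
        action = self.decide(self.instructions, self.memory, observation)
        self.memory.append((observation, action))  # remember for next step
        return action

# Toy wiring: a compliance-monitoring agent whose sensor reads a delivery
# KPI and whose decision flags SLA breaches (stand-ins for real calls).
monitor = Agent(
    name="contract-monitor",
    goal="flag contractual breaches",
    instructions="Escalate any delivery later than the agreed SLA.",
    observe=lambda: {"delivery_delay_hours": 30, "sla_hours": 24},
    decide=lambda instr, mem, obs: (
        "escalate" if obs["delivery_delay_hours"] > obs["sla_hours"] else "ok"
    ),
)
print(monitor.step())  # -> "escalate"

# Chaining agents: one agent's action can feed the next agent's sensor,
# which is how multi-agent systems span a value chain.
```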

The first step in adopting AI agents is ensuring operational consistency—for example, AI can monitor whether contractual agreements are being followed, reducing the need for human oversight in compliance processes. The next step is leveraging AI for real-time optimization, such as supply chain adjustments, dynamic pricing, and self-adapting production lines based on market fluctuations. Ultimately, AI allows organizations to move beyond incremental improvements to greenfield thinking—redesigning entire processes from the ground up.

The question is no longer whether AI can optimize existing processes, but how businesses can fully integrate AI to create autonomous, self-improving value chains. The technology is ready, but adoption and change management remain the biggest challenges. The first organizations to build fully autonomous, AI-driven operations will set the benchmark for the future of business.

The governance challenge: balancing innovation and risk

AI is both an enabler and a risk factor, and organizations must find the right balance between leveraging AI’s potential and maintaining control over decision-making processes. The key question is not whether AI introduces risk—it does—but whether organizations actively manage that risk or passively expose themselves to it. Business is inherently about taking calculated risks, and AI should be approached the same way: not as an uncontrolled force but as a powerful tool within a structured governance framework. The real challenge lies in creating an environment where AI can drive efficiency and innovation while risks remain controlled and manageable. That balance will look different for every organization, but those that actively engage with both AI’s potential and its limitations will accelerate responsibly rather than slow down due to uncertainty.

The biggest governance risk is setting rules that are too rigid, stifling innovation rather than guiding it. AI is here to stay, and attempting to block or excessively restrict its use is both unrealistic and counterproductive. Employees will find workarounds, leading to shadow IT and uncontrolled AI applications that introduce far greater risks. Instead, governance should focus on empowering teams with clear guidelines that ensure AI is used safely, ethically, and effectively. This means fostering a culture of responsibility, where employees understand the implications of their AI interactions—where data is sent, how it is processed, and whether AI usage aligns with security, compliance, and business goals. A structured AI Manifesto or internal guidelines can set shared expectations without imposing unnecessary restrictions.

At its core, AI governance is about striking the right balance between soft and hard controls. While human responsibility is critical, organizations cannot ignore the technical safeguards required to keep AI secure. Security and compliance must be embedded into AI platforms from the ground up, ensuring data protection, auditability, and alignment with privacy regulations such as GDPR. The best way to achieve this is through cross-functional collaboration—bringing together technology, security, compliance, business, operations, and legal teams to establish governance that works in practice, not just in policy.

A structured approach, such as the "whole system in the room" principle, ensures that AI risks are mitigated without slowing down adoption. By working together, teams can identify challenges early, balance technical and organizational measures, and create AI systems that are both innovative and compliant.

The risks associated with AI are not entirely new—they resemble traditional third-party risk management, data security, and regulatory challenges, now applied to a new technology. The same fundamental risk assessments apply: Where does AI process data? Who controls the infrastructure? What are the dependencies and potential vulnerabilities?

For example, AI models like DeepSeek claim to match OpenAI’s performance while using fewer computing resources, making AI more affordable and accessible. However, a closer look at DeepSeek’s privacy policy reveals potential data retention on Chinese servers, raising serious compliance concerns. This highlights why organizations must carefully evaluate vendor lock-in risks, data sovereignty issues, and security implications before integrating AI into core business processes.

Ultimately, transparency and accountability must be at the core of AI governance. AI systems should be designed with built-in auditing mechanisms and human oversight, preventing them from becoming uncontrolled black boxes.

Organizations that integrate AI governance into their strategy from the start will not only reduce regulatory and security risks but also enable responsible, scalable AI adoption. By embedding security, compliance, and risk management into AI from day one, businesses can ensure AI is a trusted, high-impact enabler rather than an unmanaged liability.

AI as a responsible business enabler

AI is rapidly becoming a core driver of business innovation, but its success depends on how it is engineered, governed, and integrated into operations. Businesses must prioritize precision, oversight, agility, security, and governance to ensure AI delivers measurable value. Narrow AI should be the focus for domain-specific applications, where it can provide high accuracy and tangible business impact. At the same time, AI must always operate within the human loop, ensuring human oversight and accountability. Engineering is critical—data, cloud, and software engineering are what transform AI from an experimental tool into a fully integrated, high-performance business asset.

A Fire-Ready-Aim approach accelerates AI adoption, allowing businesses to move beyond drawn-out proof-of-concept cycles and embrace rapid, iterative deployment that delivers real impact. AI platforms must be secure, stable, and scalable, mitigating risks such as shadow IT, data leaks, and uncontrolled model behaviors. At the same time, strong AI governance frameworks are essential to balance innovation with compliance, transparency, and regulatory alignment.

AI is neither a risk nor a silver bullet—it is a business enabler, and its success depends on how it is designed, deployed, and governed. Organizations that take a strategic, structured approach will gain more than just automation; they will build a trusted AI ecosystem that enhances decision-making, strengthens resilience, and drives competitive advantage—without compromising security, autonomy, or ethical standards.