The Dumb Reason Your AI Project Will Fail

Here is a common story of how companies trying to adopt AI fail. They work closely with a promising technology vendor. They invest the time, money, and effort necessary to achieve resounding success with their proof of concept and demonstrate how the use of artificial intelligence will improve their business. Then everything comes to a screeching halt — the company finds itself stuck at a dead end, its outstanding proof of concept mothballed and its teams frustrated.

What explains the disappointing end? Well, it’s hard — in fact, very hard — to integrate AI models into a company’s overall technology architecture. Doing so requires properly embedding the new technology into the larger IT systems and infrastructure — a top-notch AI won’t do you any good if you can’t connect it to your existing systems. But while companies pour time and resources into the AI models themselves, they often fail to consider how to make those models actually work with the systems they have.

The missing component here is AI Operations — or “AIOps” for short. It is the practice of building, integrating, testing, releasing, deploying, and managing the system that turns the output of AI models into the insights end users need. At its most basic, AIOps boils down to having not just the right hardware and software but also the right team: developers and engineers with the skills and knowledge to integrate AI into existing company processes and systems. Evolved from DevOps, a software engineering practice that aims to integrate software development and software operations, it is the key to converting the work of AI engines into real business offerings and achieving AI at a large, reliable scale.

Start with the Right Environment

Only a fraction of the code in many AI-powered businesses is devoted to AI functionality — actual AI models are, in reality, a small part of a much larger system, and how users interface with them matters as much as the models themselves. To unlock the value of AI, you need to start with a well-designed production environment (the developers’ name for the real-world setting where the code meets the user). Thinking about this design from the beginning will help you manage your project, from probing whether the AI solution can be developed and integrated into the client’s IT environment to deploying the algorithm in the client’s operating environment. You want a setting in which software and hardware work seamlessly together, so a business can rely on it to run its real-time daily commercial operations.

A good production environment must meet three criteria:

Dependability. Right now, AI technologies are fraught with technical issues. For example, AI-driven systems and models will stop functioning when fed incorrect or malformed data. Furthermore, their speed is bound to diminish when they have to ingest large amounts of data. These problems will, at best, slow the entire system down and, at worst, bring it to its knees.

Avoiding data bottlenecks is essential to creating a dependable environment. Putting well-considered processing and storage architectures in place can overcome throughput and latency issues. Furthermore, anticipation is key. A good AIOps team will consider ways to prevent the environment from crashing and prepare contingency plans for when things do go wrong.

Flexibility. Business objectives — and the supporting flows and processes within the overall system — change on an ongoing basis. At the same time, everything needs to run like clockwork at a system level to enable the AI models to deliver their promised benefits: data imports must happen at regular intervals according to some fixed rules, reporting mechanisms must be continuously updated, and stale data must be avoided by frequent refreshing.

To meet ever-evolving business requirements, a production environment needs to be flexible enough for quick and smooth system reconfiguration and data synchronization without compromising running efficiency. Think through how to best build a flexible architecture by breaking it down into manageable chunks, like LEGO blocks that can subsequently be added, replaced, or removed.

Scalability and extendibility. When businesses expand, the “plumbing” within the infrastructure inevitably has to adapt. This can involve scaling up existing capabilities and extending into new competencies. Yet, an inescapable fact is that different IT systems often carry different performance, scalability, and extendibility characteristics. The result: Many problems will likely arise when they try to cross system boundaries.

Being able to maintain business as usual while embedding upgraded AI models is critical to business expansion. Success depends greatly on the team’s ability to constantly adjust, tinker with, and test the existing system against the proposed solution, bringing old and new systems into working equilibrium.

Good Systems Come from Good Teams

The question, therefore, isn’t whether you need an AIOps team; it’s what kind of AIOps team makes the most sense for your business. For most businesses, the most important decision they’ll make about their AIOps team is whether to build it in house or contract it out. There are advantages to both, but here’s what the tradeoffs look like:

Do it yourself. On the plus side, creating your own team to build and maintain a production environment gives you full control over the entire setup. It can also save a lot of the potential management and contractual hassles that come from working with external suppliers. This applies both to large companies, which may want to verticalize the AIOps team, and to small- and medium-sized enterprises, which may want to expand the competencies of their IT team to deal with the production environment directly.

That said, DIY is no small undertaking — it involves significant administrative and organizational burdens, not to mention overhead. Additionally, companies need to develop AIOps expertise and knowledge in house. The upfront economic impact is also likely to be huge: a high initial cash outlay is needed, tied up in depreciating assets like storage hardware and servers. Even with cloud infrastructure, the “trial and error” setup activities will likely push installation costs up.

Plug and play. An alternative is to partner with an AIOps vendor. A good vendor will work closely with its client, offering the expertise required to construct and run a production environment that sits well within the client’s IT infrastructure and can support AI models, be they self-developed or supplied by third parties. (This is what our company, Nexus FrontierTech, does.) With such a service, enterprises can access a robust production environment and a trustworthy AIOps team while freeing up the enormous resources otherwise necessary to run their own AIOps.

However, for many businesses, this may mean forgoing ownership of a proprietary system and a full say in how AIOps is run. It may come across as a compromise between financial constraints and access to a solid, robust AI architecture — one that may not be as bespoke as a native AIOps project but is good enough to help the firm digitize its production.

***

All too often, we are bombarded by news of the wonders created by AI — what it’s going to do for us and how it’s going to change our lives. But that coverage misses a critical point: for any business wanting to leverage the benefits of AI, what truly matters is not the AI models themselves; rather, it’s the well-oiled machine, powered by AI, that takes the company from where it is today to where it wants to be. Impressive demos and one-time projects won’t get it there. AIOps is therefore not an afterthought; it’s a competitive necessity.

This article is also featured on Harvard Business Review and was written by Danny Goh, Mark Esposito, and Terence Tse.

Danny Goh

Serial entrepreneur and early-stage investor; co-founder and CEO of Nexus FrontierTech, with investments in more than 20 early-stage start-ups; currently serves as an entrepreneurship expert with the Entrepreneurship Centre at Said Business School, University of Oxford.

Mark Esposito

Professor of business and economics at Hult International Business School and at the Thunderbird School of Global Management at Arizona State University; a faculty member at Harvard University since 2011; a socio-economic strategist researching the Fourth Industrial Revolution and global shifts.

Terence Tse

Professor at ESCP Business School and a co-founder and executive director of Nexus FrontierTech, an AI company. He has worked with more than thirty corporate clients and intergovernmental organisations in advisory and training capacities.
