Deploy a Successful AI Strategy // Part 5: Governance

Tobias Faiss
3 min read · Sep 14, 2022

In any company, a certain amount of bureaucracy is unavoidable and even necessary. AI projects are no exception.

Consequently, a basic “operating model” for AI products and AI services must be defined in this domain as well.

In traditional corporate IT practice, three scenarios have proven effective in this regard:

  1. Central organization: The entire AI/ML organization is anchored centrally in the company and works across divisions for the individual departments. This setup achieves the best economies of scale, but the individuality of single projects and the timely provision of certain services may suffer.
  2. Decentralized organization: Customized solutions are easier to implement in this scenario than in a centralized organization, and the teams generally hold more domain knowledge. In addition, decentralized teams are “closer to the business,” which translates into faster response times and faster provision of functions and services. On the other hand, there is a risk of redundant functionality across the individual teams, which can significantly reduce economies of scale from a business perspective.
  3. Hybrid organization: The hybrid model provides for individual services to be delivered decentrally, while the common foundation is provided centrally. For example, a company can offer Hadoop/Spark clusters and data preprocessing centrally, while the actual model development takes place decentrally in the individual teams (see the sketch after this list).
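
The division of labor in the hybrid model can be pictured as code. Below is a minimal sketch, assuming Python with scikit-learn; the module split and all names (load_and_preprocess, train_churn_model, the synthetic data) are hypothetical and only illustrate which responsibilities sit with the central platform team and which with the decentralized business teams.

```python
# Hypothetical sketch of the hybrid operating model: central preprocessing,
# decentralized model development. Names and data are illustrative only.
from dataclasses import dataclass

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler


# --- Central platform team: shared infrastructure and data preprocessing ----
@dataclass
class PreparedData:
    X_train: np.ndarray
    X_test: np.ndarray
    y_train: np.ndarray
    y_test: np.ndarray


def load_and_preprocess(seed: int = 42) -> PreparedData:
    """Centrally owned: data access, cleaning and scaling for all teams."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(500, 4))                  # stand-in for a governed data source
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic target for the sketch
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=seed)
    scaler = StandardScaler().fit(X_train)
    return PreparedData(scaler.transform(X_train), scaler.transform(X_test),
                        y_train, y_test)


# --- Decentralized business team: model development on the shared basis -----
def train_churn_model(data: PreparedData) -> float:
    """Owned by the domain team: model choice, features, evaluation."""
    model = LogisticRegression().fit(data.X_train, data.y_train)
    return model.score(data.X_test, data.y_test)


if __name__ == "__main__":
    prepared = load_and_preprocess()  # consumed as a central service
    print(f"Team model accuracy: {train_churn_model(prepared):.2f}")
```

The design point is the interface: the business team never touches raw data plumbing, and the platform team never dictates the model.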

As is so often the case, there is no right or wrong setup here. Depending on the company, its IT strategy, and the industry and its requirements, any of these operating models can be suitable. What matters is gaining clarity in advance about which operating model fits your own company best. Technology companies will probably want (and need) to be “closer to the technology”, whereas FMCG or finance companies tend to be more technology-averse and prefer coverage that is as broad and generic as possible.

A second point is the provision of AI services in production. This topic still receives comparatively little attention today and is all too often neglected. However, it is necessary to create seamless and smooth transitions from the project to production, especially with regard to a good customer experience. The key to success here is end-to-end responsibility! You will need a “caretaker” who has decision-making authority over, and responsibility for, the service from concept to productive deployment. Give this person the freedom they need in terms of customer experience. As so often in life, the dose makes the poison: control your units as much as necessary, but as little as possible.
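
What end-to-end responsibility can look like in practice is sketched below, again in plain Python and purely illustrative: the stage names, the `owner` field and the pipeline class are assumptions, not a prescribed tool. The point is that one accountable person is attached to the service across every step from project to production.

```python
# Hypothetical sketch: one named owner carries an AI service from
# validation through staging into production. All names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class AIServicePipeline:
    name: str
    owner: str  # the single "caretaker" with end-to-end responsibility
    stages: List[Callable[[], None]] = field(default_factory=list)

    def stage(self, fn: Callable[[], None]) -> Callable[[], None]:
        """Register a step of the project-to-production transition."""
        self.stages.append(fn)
        return fn

    def run(self) -> None:
        print(f"{self.name} (owner: {self.owner})")
        for fn in self.stages:
            fn()
            print(f"  done: {fn.__name__}")


pipeline = AIServicePipeline(name="churn-scoring", owner="jane.doe")


@pipeline.stage
def validate_model() -> None:
    """Offline evaluation before any rollout."""


@pipeline.stage
def deploy_to_staging() -> None:
    """Shadow deployment for integration and load tests."""


@pipeline.stage
def deploy_to_production() -> None:
    """Gradual rollout, monitored by the same owner."""


if __name__ == "__main__":
    pipeline.run()
```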

Your team and your customers will thank you.

Key Takeaways

  • What is the current “IT operating model” in the enterprise and what can a matching AI organization look like?
  • What should an E2E process look like that moves AI projects into production?
  • Governance rule of thumb: As much as necessary, as little as possible!

Want to read more? Return to overview
