Or the opportunity for effective operational governance in the enterprise
At a time when Artificial Intelligence and its promises dominate conference talks, the gap between that theoretical universe and what is actually done in the field, in the day-to-day life of companies, is striking. AI first and foremost requires data: lots of it, and the company's own. In traditional companies, at least those with a certain history, this basic prerequisite is precisely what is missing.
Non-integrated IT systems, Office tools still deeply embedded in operating processes, IT projects perceived as long and complex, and manual procedures all mean that, in many entities, the support teams whose daily work de facto revolves around data (Finance & Risk, Performance Management, Compliance, Marketing, etc.) still rely on methods and tools that have evolved little or not at all, and that are less and less suited to the volume and complexity of the operations to be carried out, and to mounting production pressure. The result, and this is the point, is a loss of productivity that is as hard to quantify as it is increasingly significant. Who has never waited, sometimes for hours, for an Excel workbook to refresh, forgotten where THE key formula was hidden, somewhere in the bottom right-hand corner of the nth tab, or hunted for a previous version after an unfortunate crash? In some expert teams today, daily operations boil down to MS Office engineering rather than functional analysis. Not to mention the difficulty, if not impossibility, of reconciling these methods with new regulatory obligations on data management such as the GDPR or BCBS 239.
And that is the irony of the GDPR: written to limit the use of customer data by new companies that master Data perfectly, for the vast majority of companies it boils down to the challenge of simply finding the data in question, copied and recopied across file servers according to operational needs.
So yes, Artificial Intelligence could eventually be a lever for a large number of companies, in every sector. But our conviction is that today, the major Data lever to activate in the company lies rather in the methods and tools of the support teams. Properly equipped and trained, and without even mentioning ChatGPT, a business analysis team can increase its production efficiency by several hundred percent by working on tables rather than files, and with scripts rather than pseudo-code imprisoned and partitioned in cells. The results speak for themselves: with some of our customers, we have seen control procedures drop from more than a day to less than a minute. And the time saved compounds: very often the calculations have to be rerun after a correction or an adjustment, which multiplies the time lost. Not to mention quality, because the longer an operation takes, the easier it is to make a mistake.
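To make the "tables and scripts" point concrete, here is a minimal, hypothetical sketch in Python with pandas: a reconciliation control of the kind that typically lives in VLOOKUP formulas, expressed instead as a script over tables. The data, column names and tolerance logic are invented for illustration; they stand in for whatever a real control procedure would compare.

```python
import pandas as pd

# Hypothetical control: reconcile a booking-system extract against the
# general ledger, a check often done cell by cell with VLOOKUPs in Excel.
bookings = pd.DataFrame({
    "trade_id": ["T1", "T2", "T3"],
    "amount":   [100.0, 250.0, 75.0],
})
ledger = pd.DataFrame({
    "trade_id": ["T1", "T2", "T4"],
    "amount":   [100.0, 240.0, 30.0],
})

# One outer merge replaces thousands of per-cell lookups and runs in
# seconds on millions of rows, instead of an hours-long "refresh".
recon = bookings.merge(ledger, on="trade_id", how="outer",
                       suffixes=("_booked", "_ledger"), indicator=True)

recon["status"] = "ok"
recon.loc[recon["_merge"] != "both", "status"] = "missing"
recon.loc[(recon["_merge"] == "both")
          & (recon["amount_booked"] != recon["amount_ledger"]),
          "status"] = "amount_mismatch"

# The output is itself a table: auditable, versionable, rerunnable.
breaks = recon[recon["status"] != "ok"]
print(breaks[["trade_id", "status"]])
```

Because the whole control is a script, rerunning it after a correction costs a keystroke rather than an afternoon, and the logic is visible in one place instead of hidden in the nth tab.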
Faced with these challenges, our conclusion is that today's analysis teams simply have to change paradigm and accept that their job now includes a degree of technicality. This conclusion has two consequences:
1) An increase in the skills of the teams involved, by overcoming the fear of “becoming IT”.
2) Organising the use of this data at corporate level, which we believe is the ultimate role of Data Governance.
Data Governance. As it stands, the term is somewhat reductive, as is the scope, and often the perception within the company, of the mission of the teams dedicated to the subject, where they exist. Data Governance is frequently associated with the notion of control, and reduced to the Quality Assurance made necessary by regulatory obligations. Its ultimate aim, however, must be to improve the performance of Data-focused business teams, with very tangible benefits in efficiency, processing times, quality of service and results.
When it comes to improving this performance, the quality of source data is indeed one of the most important issues, which is why it should be included in the scope of data governance. However, there are two principles to bear in mind:
1) Error detection costs money. So does correcting it. If decision-makers don’t accept the latter, the former is a sunk investment.
2) Data quality is a team effort: without collaboration and good communication between consumers and producers, and without the involvement of IT departments, quality issues cannot be resolved, and here again error detection becomes a sunk cost.
But beyond quality management, it is in operations themselves that we see the key role of Data Governance: broadening the 'domain of the possible' for business analysts by illustrating in concrete terms the gains they can expect from changing their habits; raising the technical level of the teams through training and support; and putting in place platforms and environments better suited to their day-to-day work, so that the added value created can be shared more quickly while guaranteeing a sufficient level of security.
In other words, the ultimate goal of a dedicated team such as a Data Office should be to support the transformation of these Data support teams. Very often, those calculation files and their kin embody strong business logic: the analysts are experts in their subject. Ultimately, it is a question of freeing up intelligence that is today laboriously produced and locked away, and of better distributing the value created throughout the company.
Artificial Intelligence will certainly deliver spectacular progress in the years to come. But one level down, on the ground, immediate and equally significant progress lies in channelling Real Intelligence. And that is what good operational governance is all about.