On May 19, the CPMC will celebrate its debut as a think tank for the controlling industry with the Controlling & Performance Management Dialog at the Frankfurt School of Finance & Management. The panel could hardly be more distinguished: leading theorists and practitioners will tackle the most hotly debated controlling topics of the moment in their contributions. So the occasional industry buzzword is hard to avoid.
In such cases, it can be worthwhile to ask. A risky undertaking, to be sure, since it may end in the sobering (though at least reassuring) debunking of mere empty phrases. But you can also get lucky – and end up with someone like Prof. Matthias Mahlendorf, research director at the CPMC, who has sophisticated and thoughtful answers to such questions.
Dr. Matthias Mahlendorf holds a professorship in Managerial Accounting at the Frankfurt School of Finance & Management. His research focuses on performance measurement, dynamic target adjustments, data analytics, and the digital transformation of controlling; he is also the initiator and academic director of the part-time Master of Science program Corporate Performance & Restructuring.
As part of the Controlling and Performance Management Dialog, he will give a keynote speech on innovative data use. We took a closer look at what this is all about.
Professor Mahlendorf, what is innovation in controlling? Do the two actually go together?
Mahlendorf: For me, innovation in controlling means evolving with changing information needs and new technologies. I think the two not only fit together, but that this is of central importance if controlling wants to remain relevant for decision-making. The earliest writing in ancient Mesopotamia began with bookkeeping. From then (around 3500 B.C.) to today’s cloud-based enterprise resource planning (ERP) systems with predictive analytics capabilities and the like, there have been plenty of innovations!
What are innovative ways for controllers to use data?
Mahlendorf: There are hardly any limits to creativity; let me give you three examples: First, there is an abundance of statements from customers in social media, for example on Twitter. Automated text analytics help to better understand customers. Controlling can use these analyses to focus resources on value-creating activities, identify new trends, quickly detect quality deficiencies, and more.
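The idea of automated text analytics on customer posts can be sketched in a few lines. The keyword lists and sample posts below are illustrative assumptions, not a production lexicon; a real pipeline would use trained NLP models rather than keyword matching.

```python
# Minimal sketch: tag customer posts as quality complaints, praise, or neutral.
# Keyword sets and sample posts are made up for illustration.

NEGATIVE = {"broken", "defect", "late", "disappointed", "refund"}
POSITIVE = {"great", "love", "fast", "reliable", "recommend"}

def classify(post: str) -> str:
    """Return a coarse label based on which keyword set the post touches."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    if words & NEGATIVE:
        return "quality_issue"
    if words & POSITIVE:
        return "praise"
    return "neutral"

tweets = [
    "The new model arrived broken, I want a refund!",
    "Love the battery life, would recommend.",
    "Just unboxed it.",
]

# Aggregate labels so controlling can spot quality deficiencies early.
summary = {}
for t in tweets:
    label = classify(t)
    summary[label] = summary.get(label, 0) + 1

print(summary)  # → {'quality_issue': 1, 'praise': 1, 'neutral': 1}
```

Aggregated over thousands of posts, even such a crude count can flag a sudden spike in complaints about a specific product.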
Secondly, more and more technical devices are becoming “smart”. The resulting data can be used by Controlling to better allocate resource consumption and thus increase efficiency.
Third, there are rapid advances in the automated analysis of image data. The application examples range from inventory analyses (not only of one’s own company) based on satellite images to the recording and optimization of processes of all kinds – such as in the Amazon Go store, where video image analysis records which products a customer picks up, making the checkout redundant.
Utz Schäffer says, “Digitization and the potential of Big Data call into question whether Small Financial Data and the associated tools of the trade are sufficient as the traditional basis of controlling.” Does that mean controllers can’t get around Big Data? What exactly is the potential of Big Data?
Mahlendorf: First, of course, the classic financial data in the company must be well mastered and linked with other internal data. For many companies, it is still a major challenge to merge data from different internal systems. However, the most conceptually innovative developments are taking place in unstructured data (i.e., texts and images, for example), which are created outside of the classic ERP system. If unstructured data/Big Data can be tapped, it can be used to gain new insights into competitors, customers, employees, internal processes and much more.
This makes the role of the controller even more exciting because – assuming a certain openness to data science – he or she can provide new fact-based suggestions for management.
People tend to stick to a course of action even when new information suggests it is not leading to the desired goal – a phenomenon known as “escalation of commitment.” Change processes in particular have a high potential for failure, as several recent studies suggest. How can innovative data use counteract this and make (strategic) controlling more effective?
Mahlendorf: According to escalation-of-commitment research (on which I wrote my doctoral thesis, by the way), the reasons for sticking with failing projects are often psychological (biases) such as selective perception, self-justification (i.e., wanting to save face), the sunk-cost effect, and overoptimism. On the other hand, there is of course also the opposite problem, namely that innovative projects are abandoned too early. Innovative data use can help uncover and challenge unconscious biases. To give a finance example: when buying shares, there is a so-called home bias, meaning that people invest too much in their own home country. There are systems that automatically issue a warning and ask whether you really want to proceed when you are about to make a type of decision that has turned out poorly in the past (for example, investing in a domestic company yet again). Of course, this is only possible if enough data is available, which is often not the case with strategic projects.
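The warning systems described here can be sketched with a simple rule: if past decisions sharing an attribute (here, the home country) underperformed on average, flag new decisions with the same attribute. The trade log and threshold below are hypothetical, purely for illustration.

```python
# Hedged sketch of an automated bias warning, assuming a hypothetical trade
# log of (country, realized return). Flags a new home-country trade when
# past home-country trades averaged a loss.

HOME_COUNTRY = "DE"

past_trades = [  # illustrative data, not real returns
    ("DE", -0.04), ("DE", -0.01), ("US", 0.07), ("DE", 0.00), ("JP", 0.05),
]

def warn_if_biased(new_trade_country: str) -> bool:
    """Return True if this trade repeats a historically unsuccessful pattern."""
    same = [r for c, r in past_trades if c == new_trade_country]
    if not same:
        return False
    avg = sum(same) / len(same)
    # Warn only for the home-bias pattern with a negative track record.
    return new_trade_country == HOME_COUNTRY and avg < 0

print(warn_if_biased("DE"))  # → True: home-country trades averaged a loss
print(warn_if_biased("US"))  # → False
```

The point is not the rule itself but the principle: the system confronts the decision-maker with their own track record before the bias repeats.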
The approach of driver-based controlling seems to reflect a school of thought you have also referred to in some of your writings: that of diminishing abstraction. The more abstract models are, the more manageable they are (and the more they ‘explain’) – but the more ‘biased’ the explanandum necessarily becomes. However, the reverse does not necessarily hold: maximum granularity in my cause-effect chains does not seem helpful when I am searching for a suitable target corridor in controlling. So what is to be done?
Mahlendorf: In my opinion, driver-based models are a very exciting and promising field. More input factors tend to be better, of course. But one must not forget that there is always a lot of “noise” in data. If you analyze data too granularly and at too small time intervals, you may see only random events. This is not only a waste of time, but can also lead to overreactions. What is the right level of abstraction and time horizon depends on the specific decision problem and the nature of the data. Personally, I’m most attracted to the interface between driver-based models and the strategy map. In the strategy map, after all, top management outlines how it believes the world works. In other words, what are the causal relationships that the company can use to improve its performance. Ideally, driver-based analyses can be used to statistically determine whether causal relationships are actually confirmed by the data. So, for example, does customer satisfaction with a particular product actually lead to more sales, or is customer satisfaction irrelevant in the specific context because everyone just buys the cheapest?
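Testing a strategy-map hypothesis against data, as described above, can be as simple as estimating the slope of sales on customer satisfaction across periods. The figures below are invented for illustration, and a real analysis would use proper regression diagnostics and causal methods rather than a single slope.

```python
# Sketch: does customer satisfaction actually move sales? Ordinary
# least-squares slope computed by hand on illustrative period data.

satisfaction = [6.1, 7.0, 7.4, 8.2, 8.9, 9.1]   # survey score per period
sales        = [100, 104, 103, 115, 121, 124]   # units sold per period

n = len(satisfaction)
mean_x = sum(satisfaction) / n
mean_y = sum(sales) / n

# OLS slope: covariance of (x, y) divided by variance of x.
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(satisfaction, sales))
    / sum((x - mean_x) ** 2 for x in satisfaction)
)

print(round(slope, 1))  # additional units sold per satisfaction point
```

A slope near zero would suggest that, in this context, satisfaction is irrelevant for sales – exactly the kind of check that can confirm or refute the causal links top management has drawn in the strategy map.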
“More transparency!” seems to be an imperative of this time. A trend that the digital transformation and new forms of work and organization are causing and fueling. Schäffer writes that this “dramatically increases the demands on data quality.” Why?
Mahlendorf: More transparency means that more people have insight – into certain key figures, for example. On the one hand, this increases the likelihood that errors will be noticed by third parties – an embarrassment that you might want to reduce by improving data quality. On the other hand, more transparency also means that erroneous data can have far-reaching consequences if data errors are not detected but many people make decisions based on the data.
Why do in-memory technologies represent a change in the way controllers handle data?
Mahlendorf: In-memory technology enables real-time data processing. This means that instead of static monthly reports on paper or in PowerPoint, developments can suddenly be analyzed in real time in dashboards and the causes can be investigated more quickly with so-called “drill-downs” – i.e. clicking down to the transaction level.
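The drill-down idea can be illustrated with a toy example: start from an aggregated KPI and click through to the transactions behind it. The data is invented, and real-time tools run such queries against in-memory column stores rather than a Python list.

```python
# Sketch of a "drill-down": from an aggregated monthly KPI down to the
# individual transactions behind a conspicuous value. Data is illustrative.

from collections import defaultdict

transactions = [  # (month, cost_center, amount)
    ("2023-01", "Marketing", 1200), ("2023-01", "IT", 800),
    ("2023-02", "Marketing", 4500), ("2023-02", "IT", 900),
]

# Top level: monthly totals, as a dashboard would show them.
monthly = defaultdict(int)
for month, _, amount in transactions:
    monthly[month] += amount
print(dict(monthly))  # → {'2023-01': 2000, '2023-02': 5400}

# Spike in February? Drill down to the transaction level behind it.
detail = [t for t in transactions if t[0] == "2023-02"]
print(detail)  # → [('2023-02', 'Marketing', 4500), ('2023-02', 'IT', 900)]
```

In-memory systems make exactly this step – aggregate first, then investigate the underlying records on demand – fast enough to be interactive.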
What controversy do you expect to see in the panel discussion?
Mahlendorf: The controversy perhaps lies primarily in the question of for which data the cost-benefit ratio is right. There are an incredible number of new data sources, but the question is how exactly to derive value-generating decisions from them in a limited amount of time. The data provider Quandl put it very well: finding truly decision-relevant innovative data is like looking for a needle in a haystack – but the haystack is now growing faster than the number of needles. Still, in my opinion we should not ignore the haystack, but watch closely which needles are found, in order to decide when tapping innovative data becomes worthwhile for one’s own company as well!