Revolutionizing data management at a major central bank
All central banks have proprietary data, but few manage the challenge perfectly. This case study shows how Macrobond helped one central bank unlock its full potential.
Our customers have taught us that there are three pain points commonly found at central banks:
- data security,
- staff silos, and
- inconveniently dispersed data.
Opportunities lost: The expensive pitfalls of an in-house solution
Before Macrobond, this central bank spent hundreds of hours developing its own data analytics solution. It also spent a fortune on outside consultants.
The results left many staff dissatisfied. The tool wasn’t very flexible, and keeping it fresh was a constant burden: development was ad hoc, with new features added on a per-request basis – meaning more hours lost dealing with expensive consultants. There was no road map: the central bank had no broad overview of where this data solution was headed.
The silo problem
This central bank has eight main divisions. The platform was developed in just one of them – and used only there. The other divisions had access to the platform, but it was not designed for their workflows, and the data relevant to those teams was often not even included. So they didn’t use it. The expensive in-house solution did not foster collaboration.
Even inside departments, staff opted to use a multitude of programming languages – Python, MATLAB, R – to extract and handle data, usually depending on what they had used before joining the central bank.
The pain points
Working with this central bank, Macrobond helped senior leadership zero in on key challenges.
- Data security. Above all else, this central bank sees itself as a government institution that must preserve market integrity and, more broadly, maintain the public’s trust. Central banks amass sensitive, real-time data that must be protected from breaches and misuse.
- Staff silos. This central bank employs some of the world’s most brilliant economists and public policy experts, but these people were often not communicating with each other as well as they could. Insights were trapped in unwieldy Excel spreadsheets. As stated above, different departments preferred different programming languages – and staff who weren’t whizzes at using code to manipulate data were at a disadvantage. The result: duplicated effort, and teams of PhDs who didn’t know what they didn’t know.
- Dispersed data. Central banks use third-party data feeds. But they also collect data from financial institutions and the wider economy. Their expert staff create proprietary models. Often, these things are not in the same place or organised in a user-friendly way.
The next step: Powerful time travel with data
As central banks know, macroeconomic data is constantly being revised. This presented an interesting challenge for the forecasting department, which uses past trends to model the future – and, hopefully, make the right call on inflation and interest rates. Our client published some forecasts externally; others were reserved for internal discussion.
This central bank’s forecasting team aims to predict inflation by generating forecasts for different scenarios and then weighting them by their relative probability. One scenario might have unemployment rising; another, the reverse. But between a central bank’s rate-setting meetings, government statistics bureaus might retroactively and substantially revise past data points – most famously with US non-farm payrolls.
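To make the weighting step concrete, here is a minimal sketch of a probability-weighted scenario forecast. The scenario names, probabilities and inflation figures are entirely hypothetical, not the bank’s.

```python
# Hypothetical illustration of probability-weighted scenario forecasting.
# Neither the numbers nor the scenario names come from the central bank.
scenarios = {
    "unemployment_rises": {"probability": 0.35, "inflation_forecast": 2.1},
    "unemployment_falls": {"probability": 0.45, "inflation_forecast": 3.0},
    "baseline": {"probability": 0.20, "inflation_forecast": 2.6},
}

# The headline forecast is the probability-weighted average across scenarios.
weighted_forecast = sum(
    s["probability"] * s["inflation_forecast"] for s in scenarios.values()
)
print(f"Probability-weighted inflation forecast: {weighted_forecast:.2f}%")
```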
Such revisions can play havoc with central banks’ models – and thus with the projected trajectory of interest rates. Our central bank told us it was attempting to track 1) when data was released, 2) when it was revised, 3) the reasons for the revisions, and 4) why it had made a specific forecast at a specific time as a result.
This was all done by storing snapshots of the data at different points in the forecasting process. This meant that each graph or spreadsheet had a huge number of versions – which was hard to navigate using the software tool developed in-house.
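The snapshot approach can be pictured as a revision-aware store: every value is recorded alongside the moment it became known, so any past state of a series can be reconstructed on demand. The sketch below is our own illustration of the concept, not the bank’s in-house tool or Macrobond’s implementation; all names and figures are hypothetical.

```python
# Hypothetical sketch: a revision-aware store for one economic series.
# Each row records an observation period, the value reported, and the
# time at which that value became publicly known.
import pandas as pd

revisions = pd.DataFrame(
    {
        "obs_period": ["2024-01", "2024-01", "2024-02"],
        "released_at": pd.to_datetime(
            ["2024-02-02", "2024-03-08", "2024-03-08"]
        ),
        "value": [353_000, 229_000, 275_000],  # payrolls-style revisions
    }
)

def as_of(store: pd.DataFrame, when: str) -> pd.Series:
    """Return the series exactly as it looked at time `when`:
    for each observation period, the latest value released on or
    before that moment. Later revisions remain invisible."""
    visible = store[store["released_at"] <= pd.Timestamp(when)]
    latest = visible.sort_values("released_at").groupby("obs_period").tail(1)
    return latest.set_index("obs_period")["value"]

# What did the forecasting team see before vs. after the March release?
print(as_of(revisions, "2024-02-15"))  # 2024-01 -> 353000 (initial estimate)
print(as_of(revisions, "2024-03-15"))  # 2024-01 -> 229000, 2024-02 -> 275000
```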
Macrobond’s Revision History function has enabled this central bank to easily travel back in time, showing exactly what information was available at any given moment – in particular, at the last rate-setting meeting.
Disparities between data releases are revealed. Vintage data for specific dates is easily accessed. Models can be back-tested to reveal trends and avoid look-ahead bias.
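One way to see why vintage access matters for back-testing: a model scored against today’s fully revised history “knows” figures that did not exist when the forecast was made. Continuing the hypothetical sketch above, the same `as_of` helper separates what a model actually saw from what a naive back-test would feed it.

```python
# Hypothetical continuation of the sketch above: avoiding look-ahead bias.
# A forecast dated 2024-02-15 may only use data visible at that moment.
real_time_view = as_of(revisions, "2024-02-15")  # what the model actually saw
final_view = as_of(revisions, "2024-12-31")      # today's fully revised series

# A naive back-test against final_view would feed the model a January
# figure (229,000) that was not published until March.
print("January payrolls as seen in real time:", real_time_view["2024-01"])
print("January payrolls after revision:      ", final_view["2024-01"])
```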
The results
Macrobond has become this institution’s comprehensive solution to leverage internal and external data more effectively.
Our integrated analytics and data visualisation tools streamlined the way this central bank builds and updates charts and tables, saving hours of repetitive work manually creating and recreating assets each time new data is released. And Macrobond provides a single source of truth, combining the central bank’s internal, confidential data with our proprietary data universe and feeds from third-party providers.
We empower this central bank to spend less time organising data and more time collaborating with colleagues to produce valuable insights.