This is an excerpt from a transcript of The Axway Podcast, “What larger banks do to minimize the difficulty of automating stress testing.”
ANNOUNCER: Forbes writer Tom Groenfeldt recently published a piece titled, “Compliance Efforts Can Bring Business Benefits for Banks,” and he quoted a risk consultant in it who noted that “For the larger banks, over $50 billion, it is more difficult to automate the stress testing because of the sheer volume of data and the number of systems they have in place. Big banks this year identified data integration as their big challenge. That was not the top challenge for banks with $10 to $50 billion.”
PETER BENESH: A lot of the challenges around data integration come down to the ability to access data, especially data that resides on mainframes and legacy systems. Or it may be in the cloud. It may not even be in a database: it could be in Excel spreadsheets, or it could be unstructured data that doesn't reside in any formatted database at all.
ANNOUNCER: That’s Peter Benesh, Axway’s director of solution marketing for the Financial Services industry. We asked him to describe what those larger banks do to minimize the difficulty of automating stress testing.
PETER BENESH: The bigger the bank, the greater the variety of hardware and software technologies it is likely to have: more mainframes, more in-house-developed reporting systems. The data integration challenge really becomes, "Does the technology they have provide all of the connectors required to enable their data integration platform to extract information from every possible data source?"

The next challenge is that once you extract the data, the formats will vary by source. To get all of that information into an analytic server, you have to translate it into one common data structure. The greater the variety of sources, the more connectors you need, and the larger the library of data transformation capabilities you need, so that you can transform data coming from Oracle, from a mainframe, from flat files. All of it has to be put into a common format that an analytic server like Hadoop, for example, can ingest for analytics. That whole process is traditionally called ETL: extract, transform, load.

We aren't really in the business of ETL, per se. But to the extent that any of the data they need to integrate also comes from outside their organization... Let's say they have information in the cloud, or information from partners that they want to integrate into these exercises. Anything that requires more of a B2B-type integration, bringing information through their firewalls, is where our gateway technologies could help.

It's really a function of how broadly they want to scope the integration exercise. If they restrict it to information that resides in their internal systems, then a very robust ETL tool is pretty much what they need. If they expand the scope to include both internal and external data, then they need strong ETL and they need strong MFT (managed file transfer).
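To make the ETL process Benesh describes more concrete, here is a minimal sketch in Python. The file names, field names, and common schema are hypothetical, invented for illustration; a bank-scale integration platform would use dedicated connectors and a far richer transformation library, but the extract, transform, load steps look structurally like this:

```python
import csv
import json

# Hypothetical common schema that every record is normalized into before loading.
COMMON_FIELDS = ("account_id", "balance", "as_of_date")

def extract_flat_file(path):
    """Extract: read rows from a delimited flat file (e.g., a mainframe export)."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def extract_json_export(path):
    """Extract: read records from a JSON export (e.g., a cloud system's dump)."""
    with open(path) as f:
        yield from json.load(f)

def transform(record, field_map):
    """Transform: rename source-specific fields into the common schema."""
    out = {common: record.get(source) for common, source in field_map.items()}
    out["balance"] = float(out["balance"] or 0)  # normalize types as well as names
    return out

def load(records, path):
    """Load: write newline-delimited JSON, a format analytic engines commonly ingest."""
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")

# One field map per source plays the role of an entry in the "transformation library".
mainframe_map = {"account_id": "ACCT_NO", "balance": "CUR_BAL", "as_of_date": "ASOF_DT"}
cloud_map = {"account_id": "accountId", "balance": "balance", "as_of_date": "asOfDate"}

normalized = [transform(r, mainframe_map) for r in extract_flat_file("mainframe_export.csv")]
normalized += [transform(r, cloud_map) for r in extract_json_export("cloud_export.json")]
load(normalized, "common_format.jsonl")
```

The point Benesh makes about variety shows up in the field maps: each new source adds both an extractor (a connector) and a mapping into the common format, which is why the integration effort grows with the number of systems a large bank has in place.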
The article is available in its entirety on Forbes.com.
The podcast (audio only) is available on YouTube.