Can Real-Time Analytics Play a Role in Financial Research?
Real-time analytics is generally not an obvious match for financial research. Historical data is typically static in form, and the general view is that you can apply certain ‘filters’ to a subset of fields to get what you need. This is one reason for Microsoft Excel’s popularity among analysts as a tool for data collection and filtering.
Financial researchers tend to work with historical data, so it can be hard at first to see where real-time analytics fits in.
But what if you have to deal with a very large volume of indirectly related data collected over a long period of time, and the usual filtering you have been using does not scale up to support it?
At a glance, it may appear that you could apply multiple levels of filtering until you arrive at the desired output, but doing this manually could take considerable time and effort. Moreover, if you are dealing with large volumes of unstructured and indirectly related data, it would require a technical team to build complex correlation logic for each filter criterion.
On the other hand, technology solutions built by leveraging the power of real-time analytics already provide the capability to process large volumes of data very quickly. For example, emerging technologies such as Complex Event Processing (CEP) frameworks have been designed to combine simplified information-processing logic with high volumes of fast-moving data. For financial research, we could probably give less prominence to the ultra-high speeds supported by these frameworks, but the simplification of processing logic, combined with high scalability, could certainly play a significant role in financial data analysis.
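To make the contrast with static filtering concrete, here is a minimal Python sketch of the core CEP idea: rules are declared once and then applied continuously to each event as it flows past, rather than re-filtering a static table. The event fields, rule names, and thresholds below are purely illustrative assumptions, not part of any particular framework.

```python
# Minimal sketch of the CEP idea: declare rules once, then apply them
# continuously to each event in a moving stream. All names and
# thresholds here are hypothetical.

def make_rule(predicate, action):
    """Pair a match condition with what to do when it fires."""
    return (predicate, action)

def process_stream(events, rules):
    """Push each event through every rule; collect whatever fires."""
    matches = []
    for event in events:
        for predicate, action in rules:
            if predicate(event):
                matches.append(action(event))
    return matches

# Hypothetical events: price ticks for two instruments.
ticks = [
    {"symbol": "ABC", "price": 101.5},
    {"symbol": "XYZ", "price": 47.2},
    {"symbol": "ABC", "price": 108.9},
]

# One declarative rule: flag ABC trading above an assumed threshold.
rules = [
    make_rule(lambda e: e["symbol"] == "ABC" and e["price"] > 105,
              lambda e: ("ABC spike", e["price"])),
]

print(process_stream(ticks, rules))  # [('ABC spike', 108.9)]
```

The point of the sketch is that adding a new filter criterion means declaring one more rule, not rebuilding the filtering pipeline.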
Let us take the example of an analyst wanting to obtain seasonal variances in the sales data of a particular product across several geographies, overlapped with news feeds that may indicate corresponding market demand. This could be a very large volume of data, demanding a great deal of time to filter manually. What if we could create a procedure to feed this data in batches to a CEP engine, in which multiple filter rules are applied immediately to a ‘free-flowing’ stream of data?
As depicted in the diagram, event definitions can be applied in real time to a stream of data loaded by either a batch runner or live data feeds. The CEP engine uses these rule definitions to match multiple data streams; the output can be fed straight into the analyst's model (possibly through an interim data store) or even used as input to another CEP cycle.
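The pipeline just described can be sketched end to end in a few lines of Python: a batch runner feeds sales records into the engine, a rule correlates them with an indexed news stream, and matched output lands in an interim store an analyst model could read. Every field name, sentiment label, and threshold below is an assumption made up for illustration.

```python
# Illustrative sketch of the pipeline: batch runner -> CEP-style rule
# matching across two streams -> interim data store. All field names,
# values, and the 500-unit threshold are hypothetical.

from collections import defaultdict

sales_batches = [
    [{"product": "P1", "region": "EU", "quarter": "Q4", "units": 900}],
    [{"product": "P1", "region": "APAC", "quarter": "Q4", "units": 400}],
]
news_feed = [
    {"region": "EU", "quarter": "Q4", "sentiment": "demand-up"},
]

# Index the news stream by (region, quarter) so incoming sales events
# can be matched against it immediately.
news_index = defaultdict(list)
for item in news_feed:
    news_index[(item["region"], item["quarter"])].append(item)

interim_store = []  # stands in for the interim data store

for batch in sales_batches:          # the "batch runner"
    for sale in batch:               # events enter the engine
        key = (sale["region"], sale["quarter"])
        for news in news_index[key]: # rule: join sales with news
            if news["sentiment"] == "demand-up" and sale["units"] > 500:
                interim_store.append({**sale, "signal": news["sentiment"]})

print(interim_store)
# Only the EU Q4 record matches; APAC Q4 has no corresponding news item.
```

The second CEP cycle mentioned above would simply treat `interim_store` as a new input stream.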
Where the processing component of the solution is concerned, CEP frameworks generally provide an Event Processing Language (EPL) — a declarative language similar to SQL for dealing with high-frequency, time-based event data. Esper and WSO2 CEP are two of the leading frameworks in the market that can be used when building such solutions related to the data capturing and processing needs of financial research.
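To give a flavour of that declarative, SQL-like style, here is a short query loosely modelled on Esper's EPL. The event type `SalesEvent`, its fields, and the window length are hypothetical, chosen only to mirror the sales example above; consult the framework's own documentation for exact syntax.

```sql
-- Hypothetical event type and fields; syntax loosely modelled on EPL.
-- Computes the average units sold per region over a sliding 30-day window.
select region, avg(units) as avgUnits
from SalesEvent#time(30 days)
group by region
```

A query like this replaces hand-built correlation code: the engine maintains the sliding window and recomputes the aggregate as each event arrives.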
Original source: https://www.acuitykp.com/