Murex performance tuning and optimization: How to turn the right knobs...
A complex system such as Murex MxG2000 or MX.3 brings a number of benefits: it empowers cross-asset trading and risk management, reduces operational risk through a high degree of automation, eases processing in the front-to-back workflows, and still keeps time-to-market at a reasonable level thanks to its modular approach and orthogonal architecture.
However, such a powerful tool will grow over time, and that growth carries inevitable risks.
It gets big. One day, too big. And there are often subtle hints, even early on, that an implementation is heading that way.
The growth may stem from market data volume, market data complexity, trade data overhead, processing inefficiencies, audit facilities, and more. Some of it is implied by legal requirements, some is needed for retrospective analysis, but most comes from just letting it happen. Read: not caring, not cleaning up.
But there is a countermeasure against this degradation:
performance tuning will improve business functionality, optimization will improve technical functionality.
In short: Murex will no longer distract your users, but instead enable them and revitalize your business.
Housekeeping procedures, performed regularly, are one approach to counter the growth. These range from coarse steps (which simply save space):
- Purging market data with a reasonable scheme, while allowing archiving and restoration of market data at future times
- Purging audit trails, yet enforcing revision policy
- Purging workflow history and data dictionary history
- Purging flow document history
- Performing general housekeeping activities
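The purge steps above share a common pattern: copy aging rows into an archive before deleting them from the live tables, so they remain restorable later. A minimal, generic sketch of that pattern using SQLite; the table and column names are invented for illustration, not Murex's schema:

```python
import sqlite3

def archive_and_purge(conn, cutoff_date):
    """Move market data rows older than cutoff_date into an archive
    table, then delete them from the live table. Illustrative only:
    the schema is invented, not Murex's."""
    cur = conn.cursor()
    # Create an empty archive table with the same columns (WHERE 0
    # copies structure, not rows).
    cur.execute("CREATE TABLE IF NOT EXISTS md_archive AS "
                "SELECT * FROM market_data WHERE 0")
    cur.execute("INSERT INTO md_archive SELECT * FROM market_data "
                "WHERE quote_date < ?", (cutoff_date,))
    cur.execute("DELETE FROM market_data WHERE quote_date < ?",
                (cutoff_date,))
    conn.commit()
    return cur.rowcount  # number of rows purged

# Usage with two quotes, one old enough to be purged:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE market_data "
             "(quote_date TEXT, instrument TEXT, price REAL)")
conn.executemany("INSERT INTO market_data VALUES (?,?,?)",
                 [("2020-01-02", "EURUSD", 1.11),
                  ("2024-06-03", "EURUSD", 1.08)])
purged = archive_and_purge(conn, "2023-01-01")  # purges the 2020 row
```

The key design point is that the insert and delete use the same predicate, so a row is never deleted without having been archived first.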
to more refined steps (which, in addition, have a positive effect on resource usage such as memory and CPU):
- Applying templates to generators by matching their patterns, thus reducing the number of generator instances
- Removing unused and unreferenced generators
- Netting trades to aggregate positions, implementing netting constraints where applicable
- Purging trade data after capturing and storing the relevant P&L (logical and physical purge)
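The netting step can be illustrated as aggregating signed trade quantities per (counterparty, instrument) pair into a single net position; the field names here are placeholders, not Murex's trade model:

```python
from collections import defaultdict

def net_positions(trades):
    """Aggregate signed trade quantities into net positions keyed by
    (counterparty, instrument). Field names are illustrative."""
    positions = defaultdict(float)
    for t in trades:
        positions[(t["cpty"], t["instr"])] += t["qty"]
    # Drop fully offset positions: they carry no residual exposure.
    return {k: v for k, v in positions.items() if v != 0}

trades = [
    {"cpty": "LCH", "instr": "IRS_10Y", "qty": 100.0},
    {"cpty": "LCH", "instr": "IRS_10Y", "qty": -60.0},
    {"cpty": "LCH", "instr": "IRS_5Y",  "qty": 25.0},
    {"cpty": "LCH", "instr": "IRS_5Y",  "qty": -25.0},
]
print(net_positions(trades))  # {('LCH', 'IRS_10Y'): 40.0}
```

Four trades collapse into one net position; the fully offset 5Y pair disappears entirely, which is exactly where the space and processing savings come from.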
up to the more complex ones (including a reduction of exposure/risk or of the degree of detail in high-volume markets):
- Closing out or reducing positions with counterparties due to risk offsetting
- Aggregating positions coming from upstream trading facilities
- Reducing processing overhead in workflows
- Introducing new Murex modules like the near-time simulation server LiveBook
- Utilizing parallel processing facilities on GPUs or grid services
- Parallelizing at the right places as turn-key solutions
- Applying specific cost-intensive general settings only for users that require them (e.g. simulated sensitivities)
- Tuning curve calibrations, especially in the context of OIS discounting and custom convexity adjustments
Together with the housekeeping and process optimization activities, there are a number of activities applicable on the technical side, including:
- Measuring the relevant key factors of database and application servers and graphing them while monitoring thresholds of unusual activity
- Database table and index maintenance
- File system maintenance
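The measuring-and-graphing idea boils down to sampling a metric into a sliding window and flagging threshold breaches. A minimal sketch, assuming a generic metric; the threshold and metric name are placeholders, not Murex or vendor settings:

```python
from collections import deque

class ThresholdMonitor:
    """Keep a sliding window of samples for one metric and flag
    values that breach a configured threshold. The metric and its
    limit are placeholders, not Murex-specific."""
    def __init__(self, threshold, window=60):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # retained for graphing

    def record(self, value):
        self.samples.append(value)
        return value > self.threshold  # True signals unusual activity

# e.g. % CPU utilization sampled on the database server:
mon = ThresholdMonitor(threshold=90.0)
alerts = [mon.record(v) for v in (40.0, 85.0, 97.5, 60.0)]
print(alerts)  # [False, False, True, False]
```

In practice the retained window is what gets graphed, and the alert flag is what drives notification; the point is that both come from the same sampling loop.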
then moving to the next level:
- Active performance tuning on database server parameters
- Active and passive monitoring of performance factors inside the application, e.g. index utilization and proper index creation
- Load balancing over hardware resources, using the launcher architecture or more sophisticated measures
- Performance optimization in the application (parametrisation or flows)
- Java JVM tuning
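"Index utilization and proper index creation" can be verified by inspecting the query plan before and after an index is added. A generic SQLite illustration; a real installation would use the Oracle or Sybase plan facilities, and the trades table here is invented:

```python
import sqlite3

# Invented schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades "
             "(trade_id INTEGER, portfolio TEXT, notional REAL)")

def plan(sql):
    # Concatenate the planner's strategy lines for a statement.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[-1] for row in rows)

query = "SELECT * FROM trades WHERE portfolio = 'FX_DESK'"
plan_before = plan(query)  # full table scan at this point
conn.execute("CREATE INDEX ix_trades_portfolio ON trades(portfolio)")
plan_after = plan(query)   # the planner now reports an index search
```

Checking the plan, rather than just the wall-clock time, is what makes this kind of tuning repeatable: the same check can be rerun after every release to catch regressions.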
up to optimization and tuning of
- Dynamic creations, as used in Mreport reports or extractions
- Datamart feeders / batches of feeders
- SQL queries from within filters, or externally in surrounding environments scripts and interfaces
- Payment processing / matching
- MxML exchange (architecture parametrization and JVM tuning)
- Trade processing and import
- Confirmations creation
This list of activities and approaches should give an idea of where performance tuning and optimization can help with a Murex system, keeping it lean and efficient or restoring its former responsiveness. Other measures also support the maintenance of a Murex environment and help reduce regressions during upgrade and extension work. The results are measurable and obvious, as a few figures show:
- Report runtime dropped from 4h to 15min, others (interactively used) dropped from 2h to 1h
- A post-processing Perl/SQL script combination runs 3× faster, decreasing the processing delay in downstream systems
- Simulation performance (both detailed and the runtime of the relevant consolidating process) was improved by up to 28× for a high-volume flow-trading desk
- Deal import was reduced from 4 days to 7h (140k swaps), resulting in a successful participation in the LCH DMP fire drill
- Simulation time was reduced by a factor of 6+ due to database optimizations in a hybrid query plan/statement reordering approach
- Memory usage was reduced by 50% for sessions hitting the 4GB 32-bit memory limit repeatedly
- Database utilization was reduced by 20% on non-market data due to a purge approach
- Near-time access to bank-wide risk figures across multiple portfolios was enabled, avoiding a 4-hour calculation process before current figures become available
- Database space reduced by 35% due to initial netting and trade purge, while ensuring consistent P&L figures
- Proper report configuration template saved 30% of runtime on a detailed bond revaluation report
- Manual trade capture from low-volume yet complex structured products FO trading systems improved from 30min-1h30 to 5min
- During release changes or major overhauls, a regression testing tool digests 2×3GB of raw data produced in the EOD process and analyses it for expected and unexpected differences while cross-referring to known topics and issues; this takes the strain out of time-consuming and error-prone data mangling and creates a reliable basis for the test management process. In figures: it saves 100+ man-days per testing event.
- Oracle database export with Sybase-dump-like performance: your 100GB database dumped in 20min (instead of 28h with a crude export), enabling essential Murex environment features such as a daily t-1 copy
- Identified bottlenecks in Murex and Flex code by applying stochastic sampling at the process level, followed by hot-spot visualization, thus enabling the correction of settings (in a BA role) and of bugs (in a Flex dev role)
- Implemented graphical profiling (flamegraph creation) using embedded profiling code within Murex (the mx binary and custom Flex libraries) to transparently track down time-consuming activities caused by misconfigurations
- Helped a large installation with severe datamart performance problems with root cause analysis of performance degradation, and subsequent solutions or improvement approaches
- Supercharged an MxML exchange workflow for EMIR snapshot reporting (FpML delivery to DTCC): a speed-up by a factor of >40 (from ~60h to 1:15h)
- Suggested a speed-up of liquidation maintenance, and of the accounting for positions based on it, by 30-45min each, implemented through carefully designed indices that reduce CPU load on the database server, plus a liquidation position purge (where the original procedure does not work at all)
- Achieved a threefold speed-up of the XML extraction and XSLT transformation in key reporting components by applying JVM tuning and utilizing xsltc
- Suggested a hybrid architecture of logical deal purge combined with proper dynamic-creation settings, gaining performance from the purge while retaining detailed reporting information for downstream systems (where not otherwise feasible)
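The stochastic-sampling idea behind the profiling items above can be sketched in a few lines: periodically capture another thread's stack and count "collapsed" stack strings, the text format flamegraph tools consume. This is a generic sketch, not the embedded Murex/Flex profiler itself:

```python
import sys
import threading
import time
from collections import Counter

def sample_stacks(target_ident, duration=0.5, interval=0.005):
    """Periodically grab the target thread's current stack via
    sys._current_frames() and count collapsed stack strings
    (root;...;leaf), the input format of flamegraph tools."""
    counts = Counter()
    end = time.time() + duration
    while time.time() < end:
        frame = sys._current_frames().get(target_ident)
        if frame is not None:
            stack = []
            while frame:                       # walk leaf -> root
                stack.append(frame.f_code.co_name)
                frame = frame.f_back
            counts[";".join(reversed(stack))] += 1
        time.sleep(interval)
    return counts

def hot_spot():
    # Deliberately busy function standing in for the code under study.
    deadline = time.time() + 0.6
    while time.time() < deadline:
        sum(range(1000))

worker = threading.Thread(target=hot_spot)
worker.start()
samples = sample_stacks(worker.ident)
worker.join()
# The hottest collapsed stack should contain the busy function's name;
# feeding the counts to flamegraph.pl would render the visualization.
hottest = samples.most_common(1)[0][0]
```

Because sampling only reads stack frames at intervals, its overhead is low and it can be applied to a running process; the counts directly answer "where does the time go" without any instrumentation of the target code.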
What are your concerns with your Murex installation? What are your users complaining about? What bites you most? How can I help you?
While working on long-term Murex projects and contracts, I am also available for short-term assignments to solve your specific tuning or optimization problems, or to hold training-on-the-job sessions on how to approach your requirements.