Probabilistic Logic Interpretation and Uncertainty Quantification of Neural Network Based Decision Making on Financial Time Series


This repository contains the official implementation of the full paper, included as Masters_paper.pdf.

Summary

While neural networks excel at learning unspecified functions from training samples, they often lack interpretability, formal verifiability, and accurate uncertainty quantification. In contrast, logic is both interpretable and verifiable. In this work, we introduce a novel probabilistic logic approach that combines the learnability of neural networks with the verifiability of logic while accounting for the uncertainties of neuron activations, and apply it to financial time series data to enhance portfolio optimisation, risk management, and market trend identification.

Our framework represents a significant step forward in the rigorous application of deep learning in financial trading, ensuring both interpretability and accurate uncertainty quantification. While focused on financial applications, the methodology can be readily adapted to wider domains in deep learning, especially in other high-risk fields such as medicine.

Key Contributions:

  • Developed a novel post-hoc probabilistic logic framework to enhance neural network uncertainty estimates and interpretability, improving the accuracy of uncertainty estimates by up to 58% and enabling the derivation of novel trading strategies.
  • Utilised enhanced uncertainty estimates to optimise a long/short portfolio, achieving Sharpe ratios up to 4 times higher than conventional models through dynamic position hedging on synthetic financial time series data.
  • Improved the robustness and stability of financial feature importance estimates by deriving them directly from the trained network’s parameters, leading to more reliable feature dependence discovery.
  • Reconstructed and analysed the trading strategies derived from the network, revealing key insights into the characteristics of the time series data.
  • Demonstrated that trading strategies derived from the neural network achieved Sharpe ratios equal to those of the network itself, showing that interpretability need not come at the expense of performance.

Setup

1) Clone the Repository

Clone this repository to your local machine using the following command:

git clone https://github.com/Nicholas-McColgan/Neural-Network-Interpretability.git
cd Neural-Network-Interpretability

2) Install Dependencies

To install all required Python libraries, run the following command:

pip install -r requirements.txt

If you encounter any issues installing pyeda, you can manually clone and install it using the following commands:

git clone https://github.com/cjdrake/pyeda.git
cd pyeda
python3 setup.py install

3) Run the Code

To demonstrate the effectiveness of our framework, we utilise both a classification and a policy learning setting.

Classification Setting:

Focused on assessing the framework's ability to enhance the accuracy of uncertainty estimates by evaluating the calibration of model predictions.

Execute the classification setting by running:

python3 Classification.py
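To illustrate what "calibration" measures in this setting (not the repository's own metric, which may differ), here is a minimal expected-calibration-error sketch for binary predictions: predictions are binned by confidence, and the gap between each bin's average confidence and its empirical accuracy is averaged, weighted by bin size.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Expected calibration error for binary predictions.

    probs: predicted probability of class 1; labels: 0/1 ground truth.
    Bins predictions by confidence and averages |accuracy - confidence|,
    weighted by the fraction of samples in each bin. Illustrative only.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        if lo == bins[0]:
            mask = (probs >= lo) & (probs <= hi)  # first bin includes its left edge
        else:
            mask = (probs > lo) & (probs <= hi)
        if not mask.any():
            continue
        confidence = probs[mask].mean()
        accuracy = labels[mask].mean()  # empirical frequency of class 1 in this bin
        ece += mask.mean() * abs(accuracy - confidence)
    return ece

# A perfectly calibrated model scores 0: predictions of 0.75 that are
# correct 75% of the time contribute no calibration error.
print(expected_calibration_error([0.75, 0.75, 0.75, 0.75], [1, 1, 1, 0]))
```

A lower ECE means the predicted probabilities better match observed frequencies, which is the sense in which Classification.py compares uncertainty estimates before and after the probabilistic logic enhancement.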

Policy Learning Setting:

Here, we evaluate the framework’s ability to quantify risk within a portfolio allocation setting, encompassing a broader concept of uncertainty. This setting also highlights the interpretability capabilities of the framework, including:

  • Feature importance estimates
  • Formula reconstruction
  • Insight into the characteristics of the time series data

Execute the Policy Learning setting by running:

python3 Policy_Learning.py
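The portfolio results in this setting are reported as Sharpe ratios. For readers unfamiliar with the metric, here is a minimal sketch of an annualised Sharpe ratio computed from per-period returns; the 252 trading-day annualisation and zero risk-free rate are common conventions, not values taken from the paper.

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualised Sharpe ratio: mean excess return over its volatility,
    scaled by sqrt(periods_per_year). Conventions here are illustrative."""
    excess = np.asarray(returns, dtype=float) - risk_free
    std = excess.std(ddof=1)  # sample standard deviation
    if std == 0:
        raise ValueError("zero volatility: Sharpe ratio undefined")
    return np.sqrt(periods_per_year) * excess.mean() / std

# Example: daily returns with a small positive drift and Gaussian noise.
rng = np.random.default_rng(0)
daily = 0.001 + 0.01 * rng.standard_normal(252)
print(round(float(sharpe_ratio(daily)), 2))
```

Doubling the mean return at the same volatility doubles the Sharpe ratio, which is why the dynamic hedging results are summarised this way rather than as raw returns.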

Data and Reproducing Results

The results presented in the paper are based on three variations of the Ornstein-Uhlenbeck (OU) process, each representing a different financial time series behaviour:

  • Upwards Trend: Captures a persistent upward movement.
  • Reversion: Models mean-reverting behavior.
  • Switching Trend: Simulates alternating trend patterns.
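The three behaviours above can be sketched with a single Euler-Maruyama simulator whose long-run mean is allowed to drift over time; all parameter values below are illustrative defaults, not the ones used in the paper.

```python
import numpy as np

def simulate_ou(n_steps=1000, theta=0.5, mu=0.0, sigma=0.1, x0=0.0,
                drift=0.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of dX = theta*(level(t) - X) dt + sigma dW,
    where level(t) = mu + drift * t * dt.

    drift > 0 gives a persistent upward trend; drift = 0 gives pure mean
    reversion around mu. Flipping the sign of drift at intervals would give
    a switching-trend variant. Parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        level = mu + drift * t * dt  # time-varying long-run mean
        dx = theta * (level - x[t - 1]) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal()
        x[t] = x[t - 1] + dx
    return x

# Mean-reverting series stays near mu; trending series follows the moving mean.
reverting = simulate_ou(drift=0.0)
trending = simulate_ou(drift=1.0)
```

The mean-reversion speed theta controls how tightly the series hugs its long-run mean, so the same simulator covers all three dataset regimes by varying theta and drift.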

To replicate the results for each of these datasets in Classification.py:

  1. Locate the relevant line of code, as shown in the image below.
  2. Comment out all datasets except the one you wish to use. For example, in the image below, the selected dataset is the Switching Trend process.

[Screenshot: dataset selection lines in Classification.py, with the Switching Trend process left uncommented]

For Policy_Learning.py, follow a similar approach:

  1. Locate the corresponding line of code in the script.
  2. Comment out all datasets except the desired one, as illustrated in the image below.

[Screenshot: dataset selection lines in Policy_Learning.py]

Once the desired dataset is selected, you should be able to reproduce the core results of the paper.
