Current projects


Multi-agent insurance pricing using model-based bandits

This project is part of my PhD research. The goal is to study the problem of pricing insurance premiums with bandit algorithms in multi-agent environments. Within the project, we have developed a pricing environment using a bottom-up approach. We have proved the existence of a Nash equilibrium for a single-stage pricing competition. Furthermore, based on an analysis of the payoff functions, we have developed a set of assumptions that guarantee the uniqueness of the Nash equilibrium in pure strategies.
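
As a toy illustration of such a single-stage pricing competition (a minimal sketch, not the project's actual bottom-up environment), the code below finds the pure-strategy Nash equilibrium of a two-insurer game by best-response iteration; the logistic demand split, the claim cost C and the price sensitivity beta are hypothetical choices made for this example.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical stage game: two insurers quote premiums for a risk with
    # expected claim cost C; the customer chooses the first insurer with a
    # logistic probability that decreases in the price difference.
    C, beta = 100.0, 0.05

    def share(p_own, p_other):
        """Logistic market share of the insurer quoting p_own."""
        return 1.0 / (1.0 + np.exp(beta * (p_own - p_other)))

    def payoff(p_own, p_other):
        """Expected profit: margin times market share."""
        return (p_own - C) * share(p_own, p_other)

    def best_response(p_other):
        """Premium maximising expected profit against a fixed opponent."""
        res = minimize_scalar(lambda p: -payoff(p, p_other),
                              bounds=(C, 3 * C), method="bounded")
        return res.x

    # Best-response iteration: under assumptions of the kind described
    # above, this converges to the unique pure-strategy equilibrium.
    p1, p2 = 150.0, 200.0
    for _ in range(100):
        p1, p2 = best_response(p2), best_response(p1)
    print(f"Approximate equilibrium premiums: {p1:.2f}, {p2:.2f}")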

We then perform numerical experiments with bandit algorithms competing within the environment. For that purpose, we have developed a logistic model of the environment that can be incorporated into Bayesian bandit algorithms.
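
As an illustration of how such a model can be used (a minimal sketch, not the project's implementation), the code below runs Thompson sampling against a stationary environment: customer acceptance is modelled as logistic in the quoted premium, a Laplace approximation of the posterior is sampled each round, and the premium with the highest sampled expected profit is quoted. The premium grid, the cost and the true parameters are all hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Hypothetical setup: the agent quotes one of a few candidate premiums
    # and observes whether the customer accepts; acceptance follows a
    # logistic model, P(accept | p) = sigmoid(w0 + w1 * p).
    premiums = np.array([110.0, 130.0, 150.0, 170.0])
    cost = 100.0
    true_w = np.array([6.0, -0.045])   # hidden environment parameters
    feats = np.column_stack([np.ones_like(premiums), premiums])

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def neg_log_posterior(w, X, y, prior_var=100.0):
        """Negative log posterior of the logistic model, Gaussian prior."""
        z = X @ w
        log_lik = np.sum(y * z - np.logaddexp(0.0, z))
        return -log_lik + 0.5 * w @ w / prior_var

    def laplace_posterior(X, y):
        """MAP estimate and covariance from a Laplace approximation."""
        res = minimize(neg_log_posterior, np.zeros(2), args=(X, y))
        p = sigmoid(X @ res.x)
        H = (X * (p * (1 - p))[:, None]).T @ X + np.eye(2) / 100.0
        return res.x, np.linalg.inv(H)

    X_hist, y_hist = [], []
    for t in range(500):
        if t < 5:                                  # forced initial exploration
            quote = rng.choice(premiums)
        else:
            w_map, cov = laplace_posterior(np.array(X_hist), np.array(y_hist))
            w = rng.multivariate_normal(w_map, cov)        # Thompson sample
            quote = premiums[np.argmax((premiums - cost) * sigmoid(feats @ w))]
        accept = rng.random() < sigmoid(true_w @ np.array([1.0, quote]))
        X_hist.append([1.0, quote])
        y_hist.append(float(accept))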

The numerical experiments showed that the model can accelerate the learning process when the environment and all opponents are stationary. However, when the opponents are themselves learning, the model does not provide any advantage over a standard bandit algorithm.

Therefore, any model of the environment, as perceived by the learning agent, has to faithfully represent the truth, i.e. both the pricing environment and the possibly dynamic opponents.


LS314 Project

The LS314 Project is my startup vehicle. It does not have a specific business proposition; instead, it formalises my work on projects that interest me, that I believe have potential value, and that could be converted into business propositions in the future.

The main goal of the LS314 Project is to develop a set of frameworks, procedures, and rules that will allow me to work efficiently on new ideas and projects. That is, I want to make sure that if I follow these frameworks, I will be able to:

  • Avoid wasting time on projects with little value.
  • Plan and execute projects within a specified timeframe, with concrete outcomes.
  • Increase and document my validated learning.

For more information about the LS314 Project, please visit the LS314 website.