This is a fun paper for me, since it uses symbolic regression as a core methodology. My machine learning journey started with symbolic regression, and at the time I didn't even realise the journey I was about to undertake. I was simply experimenting with creating a symbolic regression library in C#. Eventually that experiment would grow into a career, and my interest in the field has been expanding ever since.
The idea in this paper is to use a graph neural network to produce results, then run a symbolic regression library against those results to extract a new equation. That equation is a brand-new way of modelling the system in question, and it appeared to generalise better than the output of the neural network itself. This is something we find in the physical world quite often: simple equations often seem to model our universe better than large models do.
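To make the distillation step concrete, here is a minimal sketch of the second half of that pipeline. The black-box function below is a hypothetical stand-in for a trained network's outputs (the real paper works with a trained GNN); the symbolic search is a deliberately simple brute force over a small basis library with least-squares coefficients and a complexity penalty, not the paper's actual method:

```python
import itertools
import numpy as np

# Hypothetical stand-in for a trained network: a black box we can only query.
def black_box(x):
    return 3.0 * x**2 + 2.0 * np.sin(x)  # the hidden "law" the model learned

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = black_box(x) + rng.normal(0, 0.01, x.shape)  # slightly noisy model outputs

# A tiny library of candidate basis terms for the symbolic search.
basis = {
    "1": np.ones_like(x),
    "x": x,
    "x^2": x**2,
    "x^3": x**3,
    "sin(x)": np.sin(x),
    "exp(x)": np.exp(x),
}

best = None
# Brute-force every combination of up to three terms; fit coefficients by
# least squares and score by error plus a small penalty per extra term,
# so simpler equations win at roughly equal fit.
for k in (1, 2, 3):
    for combo in itertools.combinations(basis, k):
        A = np.column_stack([basis[name] for name in combo])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        mse = np.mean((A @ coef - y) ** 2)
        score = mse + 1e-3 * k
        if best is None or score < best[0]:
            best = (score, combo, coef)

score, combo, coef = best
equation = " + ".join(f"{c:.2f}*{name}" for c, name in zip(coef, combo))
print("distilled equation: y =", equation)
```

Real symbolic regression libraries search a far larger expression space with genetic programming rather than enumeration, but the distil-a-model-into-an-equation loop is the same shape.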
This combination of techniques is fairly rare, and it is interesting to see. I have long felt that symbolic regression is a useful technique that goes underutilised, so it was nice to see a paper built around it. The training process was also interesting: before attempting to discover new equations, the authors first showed they could rediscover known ones from existing datasets. A pipeline that can recover known laws can then plausibly be trusted to derive new laws from unfamiliar datasets.
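That rediscovery sanity check can be illustrated with a deliberately degenerate case, where the symbolic search collapses to fitting a power law. The data below are real (semi-major axes in AU and orbital periods in years for the four inner planets); recovering the exponent 3/2 amounts to rediscovering Kepler's third law:

```python
import numpy as np

# Semi-major axes (AU) and orbital periods (years): Mercury, Venus, Earth, Mars.
a = np.array([0.387, 0.723, 1.000, 1.524])
T = np.array([0.241, 0.615, 1.000, 1.881])

# Candidate family T = c * a^p becomes a line in log space: log T = p*log a + log c.
p, log_c = np.polyfit(np.log(a), np.log(T), 1)
print(f"T = {np.exp(log_c):.3f} * a^{p:.3f}")  # p should come out near 1.5
```

A genuine symbolic regression run would search over many functional forms rather than assuming a power law up front, but the validation logic is the same: if the pipeline cannot recover a law we already know, we should not trust the laws it proposes on data we do not understand.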
I could see the applications of this approach being pretty broad. Any process that can derive laws from data in domains where we currently know none is an exciting one, and I hope to explore it myself.