It would be useful to apply deep learning in the complex plane because it would allow us to work directly with data that has both a magnitude and a phase component. Signal and audio processing are good examples of areas where complex numbers are extremely useful. If neural networks can be adapted to work with complex numbers, then tools from complex analysis, such as the Fourier transform, become usable in a deep learning context. Because most current implementations cannot handle complex numbers, real-valued approximations are often used instead. This paper moves some way past that, using complex numbers in neural networks as much as is currently possible.
THEORY AND IMPLEMENTATION OF COMPLEX-VALUED NEURAL NETWORKS
https://arxiv.org/pdf/2302.08286v1.pdf
J. A. Barrachina, C. Ren, G. Vieillard, C. Morisseau, and J.-P. Ovarlez
It starts with quite a good rundown of what is currently possible and of the frameworks available for this methodology, then gives a strong description of the mathematical background of the approach before describing how common neural network features are implemented for complex numbers. Many adaptations are needed to make this all work. Tests are then conducted against real- and complex-valued datasets. It is a paper well worth a read and is definitely enlightening on the subject matter.
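To make that kind of adaptation concrete, here is a minimal sketch of a complex-valued dense layer with a split-type activation. This is not the paper's own library or method, just an illustration assuming PyTorch's native complex tensor support; the names ComplexLinear and crelu are my own illustrative choices.

# Minimal sketch (assumption: PyTorch complex tensors), not the paper's implementation.
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Fully connected layer with complex-valued weights and bias."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        scale = in_features ** -0.5
        # Complex parameters: magnitude and phase are learned together.
        self.weight = nn.Parameter(
            scale * torch.randn(out_features, in_features, dtype=torch.cfloat))
        self.bias = nn.Parameter(torch.zeros(out_features, dtype=torch.cfloat))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Complex matrix multiplication mixes real and imaginary parts naturally.
        return z @ self.weight.T + self.bias

def crelu(z: torch.Tensor) -> torch.Tensor:
    # Split-type activation: apply ReLU to the real and imaginary parts separately.
    return torch.complex(torch.relu(z.real), torch.relu(z.imag))

if __name__ == "__main__":
    layer = ComplexLinear(8, 4)
    z = torch.randn(2, 8, dtype=torch.cfloat)  # a small batch of complex inputs
    out = crelu(layer(z))
    print(out.shape, out.dtype)  # torch.Size([2, 4]) torch.complex64

The split activation is only one of several options discussed in this line of work; the paper itself surveys the design choices in more detail.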