Abstract
The performance of artificial neural networks (ANNs) can be influenced by
factors such as weight initialization, the choice of activation function, and the
overall design. Common activation functions include the sigmoid, tanh, and
Rectified Linear Unit (ReLU), with most approaches applying the same activation
to every neuron in a layer.
In our research, we introduce a variant of the neural network in which the
activation functions in each layer together form a polynomial basis.
We name this method SWAG, after the authors' surnames.
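To make the idea concrete, here is a minimal sketch of one possible reading of a layer whose activations form a polynomial basis: the k-th unit applies the monomial activation z → z^k, so the layer's outputs together span {1, z, z², …}. This is an illustrative interpretation, not the authors' implementation; the function name `swag_layer` and all parameters are hypothetical.

```python
import numpy as np

def swag_layer(x, W, b):
    """Affine transform followed by per-unit monomial activations.

    Hypothetical interpretation of a polynomial-basis layer:
    unit k uses the activation z -> z**k.
    """
    z = x @ W + b                    # standard affine pre-activation
    powers = np.arange(z.shape[-1])  # one polynomial degree per unit
    return z ** powers               # unit k outputs z_k ** k

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # batch of 4 inputs, 3 features
W = rng.normal(size=(3, 5))          # 5 units, i.e. degrees 0..4
b = np.zeros(5)
out = swag_layer(x, W, b)
print(out.shape)                     # (4, 5)
```

Note that under this reading, the degree-0 unit always outputs 1, acting as the constant term of the basis.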
We evaluated SWAG on a variety of intricate non-linear functions, the
MNIST handwritten digit dataset, and other well-known classification datasets and
CNN architectures. The results suggest that SWAG delivers superior performance
and converges faster than other state-of-the-art fully connected neural networks. With
its efficient computation and its ability to solve challenging problems autonomously,
SWAG holds promise to reshape deep learning methodologies.