dc.description.abstract | A neural network aims to minimize its cost function and thereby provide better performance. This optimization procedure is widely known as gradient descent, a form of iterative learning that starts from a random point on a function and travels down its slope, in steps, until it reaches the lowest point, which makes it time-consuming and slow to converge.
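As a minimal illustration of this iterative process (not drawn from the thesis; the one-dimensional quadratic objective and learning rate below are assumptions chosen for the example), plain gradient descent can be sketched as:

```python
# Minimal gradient descent sketch: step downhill from a random
# starting point until the slope becomes negligible.
# The objective f(w) = (w - 3)^2 and the learning rate are
# illustrative assumptions, not values from the thesis.
import random

def grad(w):
    # Derivative of f(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

w = random.uniform(-10.0, 10.0)  # random starting point
lr = 0.1                         # step size (learning rate)
for step in range(1000):         # many small steps -> slow convergence
    g = grad(w)
    if abs(g) < 1e-8:            # stop once the slope is nearly flat
        break
    w -= lr * g                  # move against the gradient
print(w)                         # approaches the minimum at w = 3
```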
Over the last couple of decades, several non-iterative neural network training algorithms have been proposed, such as random forest and QuickNet. However, these non-iterative training algorithms do not support online training: given a very large training dataset, one needs enormous computing resources to train the network.
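Online sequential learning sidesteps this by refining a least-squares solution one data chunk at a time instead of retraining on the full dataset. The sketch below shows the standard recursive least-squares update used in OS-ELM-style models; it is offered purely as background for this family of methods, not as the thesis's OS-SN or OS-AE algorithms, and the variable names, layer sizes, and chunk handling are assumptions.

```python
# Background sketch of an OS-ELM-style online sequential update:
# output weights beta are refined chunk by chunk via recursive
# least squares, so the full dataset never has to be in memory.
# Generic background only, not the thesis's OS-SN/OS-AE methods;
# names and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 10, 64, 3

# Random, fixed hidden layer (non-iterative: never trained by gradients)
W = rng.standard_normal((n_in, n_hidden))
b = rng.standard_normal(n_hidden)
hidden = lambda X: np.tanh(X @ W + b)

# Initialize on a first chunk (X0, T0)
X0, T0 = rng.standard_normal((100, n_in)), rng.standard_normal((100, n_out))
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(n_hidden))  # ridge term for stability
beta = P @ H0.T @ T0

def sequential_update(P, beta, Xk, Tk):
    """Fold a new chunk (Xk, Tk) into beta without revisiting old data."""
    Hk = hidden(Xk)
    S = np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - P @ Hk.T @ S @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)
    return P, beta

# Stream further chunks
for _ in range(5):
    Xk, Tk = rng.standard_normal((50, n_in)), rng.standard_normal((50, n_out))
    P, beta = sequential_update(P, beta, Xk, Tk)
```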
In this thesis, a non-iterative learning strategy with online sequential training is exploited. In Chapter 3, a single-layer online sequential sub-network node (OS-SN) classifier is proposed that can provide competitive accuracy by pulling out the residual network error and feeding it back into the hidden layers. In Chapter 4, a multi-layer network is proposed whose first portion is built by transforming a multi-layer autoencoder into an online sequential autoencoder (OS-AE), with OS-SN used for classification. In Chapter 5, OS-AE is utilized as a generative model that can construct new data based on subspace features and performs better than conventional data augmentation techniques on real-world image and tabular datasets. | en_us