With the growing availability of data, classification is increasingly important. However, traditional classification algorithms do not scale well to large data sets and are often unsuitable when only limited samples of the data are available at any point in time. The latter arises, for example, with streaming data, where accumulating the data a priori is infeasible either due to limits on memory or computation, or due to privacy and data-ownership constraints. In these situations, traditional classification algorithms are difficult to apply because they generally cannot be trained incrementally on changing data sets. To address this, the paper presents a novel approach that first uses Reinforcement Learning to learn a policy for incrementally building neural network classifiers over a broad distribution of problems, and then applies this policy to new data to learn a classifier for the specific problem at hand. In both phases, learning operates on a sequence of small, randomly drawn subsets of the data, making the approach suitable for streaming data and for very large data sets where processing the entire set is infeasible. Experiments show that it achieves performance comparable to kernel SVMs and to large neural networks trained on the complete dataset. Additional experiments evaluate its performance on real-world streaming datasets and on datasets exhibiting concept drift.
Code for the paper (a neural network classifier with a dynamic architecture): https://ieeexplore.ieee.org/abstract/document/7844549
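
The abstract describes two concrete mechanisms: training on a stream of small, randomly drawn subsets of the data, and a learned policy that incrementally builds the network's architecture. Below is a minimal sketch of that loop, not the paper's implementation: it assumes PyTorch, generates a synthetic two-class stream, and replaces the RL-learned policy with a hand-coded loss-plateau rule. All names in it (`grow_policy`, `widen`, `draw_subset`) are hypothetical stand-ins.

```python
# Sketch only: a hand-coded plateau rule stands in for the RL-learned policy
# that, in the paper, decides how to incrementally build the classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)


def make_net(in_dim, hidden, n_classes):
    """One-hidden-layer classifier whose width the policy may change."""
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, n_classes))


def widen(net, extra):
    """Grow the hidden layer by `extra` units, keeping the trained weights."""
    in_dim, hidden = net[0].in_features, net[0].out_features
    n_classes = net[2].out_features
    new = make_net(in_dim, hidden + extra, n_classes)
    with torch.no_grad():
        new[0].weight[:hidden] = net[0].weight
        new[0].bias[:hidden] = net[0].bias
        new[2].weight[:, :hidden] = net[2].weight
        new[2].weight[:, hidden:] = 0.0  # new units start silent, so growth preserves the current function
        new[2].bias.copy_(net[2].bias)
    return new


def grow_policy(losses, window=20, tol=1e-3):
    """Hypothetical stand-in for the RL-learned policy: grow on a loss plateau."""
    if len(losses) < 2 * window:
        return False
    older = sum(losses[-2 * window:-window]) / window
    newer = sum(losses[-window:]) / window
    return older - newer < tol


def draw_subset(batch=32, in_dim=2):
    """Stand-in for a small, randomly drawn subset of a stream or large dataset."""
    y = torch.randint(0, 2, (batch,))
    x = torch.randn(batch, in_dim) + 2.0 * y.unsqueeze(1).float() - 1.0
    return x, y


net = make_net(in_dim=2, hidden=4, n_classes=2)
opt = torch.optim.SGD(net.parameters(), lr=0.05)
losses = []

for step in range(500):
    x, y = draw_subset()                      # only a small subset is ever in memory
    loss = F.cross_entropy(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())

    if grow_policy(losses):                   # dynamic-architecture step
        net = widen(net, extra=4)
        opt = torch.optim.SGD(net.parameters(), lr=0.05)  # rebind to the new parameters
        losses.clear()                        # restart the plateau window

with torch.no_grad():
    x, y = draw_subset(batch=1000)
    acc = (net(x).argmax(dim=1) == y).float().mean().item()
print(f"hidden units: {net[0].out_features}, held-out accuracy: {acc:.2f}")
```

Zeroing the outgoing weights of newly added units makes each growth step function-preserving, so the streaming behavior of the classifier is not disrupted; the policy actually learned in the paper may make different architectural decisions than this plateau heuristic.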