A Feature Selection Technique based on Distributional Differences


Sung-Dong Kim, Journal of Information Processing Systems, Vol. 2, No. 1, pp. 23-27, Mar. 2006

Keywords: Feature selection, Distributional Differences

Abstract

This paper presents a feature selection technique based on distributional differences, aimed at efficient machine learning. The initial training data consist of instances with many features and a target value. We first classify the instances into positive and negative data according to the target value. We then divide the range of each feature's values into 10 intervals and compute, for the positive and the negative data separately, the distribution of values over those intervals. Finally, we select the features, and the intervals of those features, whose distributional differences exceed a given threshold. Using the selected features and intervals, we obtain reduced training data. In the experiments, we show that the reduced training data cut the training time of the neural network by about 40%, and that the trained functions also yield a higher profit in simulated stock trading.
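
The selection procedure described in the abstract can be summarized as a short sketch. The Python code below is a minimal illustration, not the author's implementation: the function name select_intervals, the equal-width binning, and the default threshold of 0.1 are assumptions, since the abstract fixes only the number of intervals (10) and the existence of a threshold on the distributional difference.

import numpy as np

def select_intervals(X, y, n_bins=10, threshold=0.1):
    # Split instances into positive and negative data by the binary target.
    pos, neg = X[y == 1], X[y == 0]
    selected = {}
    for j in range(X.shape[1]):
        # Divide the feature's observed range into n_bins equal-width intervals.
        edges = np.histogram_bin_edges(X[:, j], bins=n_bins)
        # Normalized histograms approximate the two class-conditional distributions.
        hist_pos, _ = np.histogram(pos[:, j], bins=edges)
        hist_neg, _ = np.histogram(neg[:, j], bins=edges)
        p_pos = hist_pos / max(len(pos), 1)
        p_neg = hist_neg / max(len(neg), 1)
        # Keep the intervals whose distributional difference exceeds the threshold.
        bins = np.where(np.abs(p_pos - p_neg) > threshold)[0]
        if bins.size > 0:
            selected[j] = bins.tolist()
    return selected

# Hypothetical usage: feature 0 drives the target, so its intervals differ most.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
print(select_intervals(X, y, threshold=0.15))

The abstract leaves open exactly how the selected features and intervals produce the reduced training data (e.g., dropping unselected features, or restricting instances to the selected intervals); the sketch covers only the selection step.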



Cite this article
[APA Style]
Kim, S. (2006). A Feature Selection Technique based on Distributional Differences. Journal of Information Processing Systems, 2(1), 23-27.

[IEEE Style]
S. Kim, "A Feature Selection Technique based on Distributional Differences," Journal of Information Processing Systems, vol. 2, no. 1, pp. 23-27, 2006. DOI: .

[ACM Style]
Sung-Dong Kim. 2006. A Feature Selection Technique based on Distributional Differences. Journal of Information Processing Systems 2, 1 (2006), 23-27.