Construction of a Meta-Learner for Unsupervised Anomaly Detection
Abstract
Unsupervised anomaly detection (AD) is required in many real-world applications, such as network security and medical and health equipment. Given the wide range of situations in which AD is applied, no single algorithm has been demonstrated to be superior to all others. The Algorithm Selection Problem (ASP) has attracted particular attention from researchers for supervised classification tasks, through AutoML and meta-learning; unsupervised AD tasks, on the other hand, have received less attention. This work presents a novel meta-learning technique that recommends an effective unsupervised AD algorithm given a set of meta-features extracted from the unlabeled input dataset. The proposed meta-learner is found to outperform the state-of-the-art alternative.
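The pipeline described in the abstract — extract meta-features from an unlabeled dataset, then let a meta-learner map them to a recommended AD algorithm — can be illustrated with a minimal sketch. The specific meta-features, the 1-nearest-neighbour meta-learner, and the small "meta-knowledge base" of past datasets below are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def meta_features(X):
    """Compute simple dataset-level meta-features from an unlabeled matrix X
    (illustrative choices: size, dimensionality, correlation, skewness)."""
    Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # standardize columns
    return np.array([
        np.log10(X.shape[0]),                              # log #samples
        np.log10(X.shape[1]),                              # log #features
        float(np.mean(np.abs(np.corrcoef(X, rowvar=False)))),  # mean |corr|
        float(np.mean(Z ** 3)),                            # mean skewness
    ])

# Hypothetical meta-knowledge base: meta-feature vectors of previously seen
# datasets, each paired with the AD algorithm that performed best on it.
KNOWLEDGE_BASE = [
    (np.array([3.0, 1.0, 0.2, 0.1]), "IsolationForest"),
    (np.array([2.0, 2.5, 0.7, 1.5]), "LOF"),
    (np.array([4.0, 0.5, 0.1, -0.2]), "OneClassSVM"),
]

def recommend(X):
    """1-NN meta-learner: return the algorithm that worked best on the
    past dataset whose meta-features are closest to those of X."""
    m = meta_features(X)
    dists = [np.linalg.norm(m - v) for v, _ in KNOWLEDGE_BASE]
    return KNOWLEDGE_BASE[int(np.argmin(dists))][1]
```

In practice the knowledge base would be built offline by evaluating candidate AD algorithms on many labeled benchmark datasets, and the 1-NN rule could be replaced by any regression or ranking meta-model.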
Keywords:
Model Selection, Unsupervised Anomaly Detection, Meta-Learning, Meta-Features
References
- Hodge, V. J., & Austin, J. (2004). A survey of outlier detection methodologies. Artificial Intelligence Review, 22(2), 85-126.
- Chandola, V., Banerjee, A., & Kumar, V. (2009). Anomaly detection: A survey. ACM Computing Surveys, 41(3), 1-58.
- Lemke, C., Budka, M., & Gabrys, B. (2010). Meta-learning for time series forecasting and forecast combination. Neurocomputing, 73(10-12), 2006-2016.
- Vanschoren, J. (2018). Meta-learning: A survey. arXiv preprint arXiv:1810.03548.
- Torra, V., Narukawa, Y., & Shyamanta, M. (2005). Metalearning in distributed data mining systems. IEEE Transactions on Knowledge and Data Engineering, 17(5), 691-702.
- Rahman, M. M., Islam, M. R., & Murase, K. (2017). Deep meta-learning: Learning to learn in the concept space. arXiv preprint arXiv:1703.03019.
- Swearingen, T. (2000). A semantic approach to the automatic recognition of computer-generated music. In Proceedings of the International Computer Music Conference (pp. 250-253).
- Dai, W., Yang, Q., Xue, G. R., & Yu, Y. (2007). Boosting for transfer learning. In Proceedings of the 24th International Conference on Machine learning (pp. 193-200).
- Jankowski, N., & Grochowski, M. (2006). Generalized instance-based learning algorithm. IEEE Transactions on Neural Networks, 17(6), 1411-1425.
- Fan, H., Zhang, H., Yang, J., & Li, H. (2007). Active transfer learning for boosting. In Proceedings of the 24th International Conference on Machine learning (pp. 273-280).
- Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929-1958.
- Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Chollet, F., & others. (2015). Keras. https://github.com/fchollet/keras.
- Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., ... & Zheng, X. (2016). TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16) (pp. 265-283).
- Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., ... & Vanderplas, J. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825-2830.
Copyright (c) SHISRRJ
This work is licensed under a Creative Commons Attribution 4.0 International License.