Construction of a Meta-Learner for Unsupervised Anomaly Detection

Authors

  • S. E. Suresh, Assistant Professor, Department of MCA, Annamacharya Institute of Technology & Sciences, Tirupati, Andhra Pradesh, India
  • Chennuru Srihari, Post Graduate, Department of MCA, Annamacharya Institute of Technology & Sciences, Tirupati, Andhra Pradesh, India

Abstract

Unsupervised anomaly detection (AD) is required in many real-world applications, such as network security and medical and health monitoring equipment. Given the wide variety of scenarios in which AD is applied, no single approach has been demonstrated to be superior to all others. The Algorithm Selection Problem (ASP) has attracted considerable attention for supervised classification tasks through AutoML and meta-learning, whereas unsupervised AD tasks have received far less. This work presents a novel meta-learning technique that recommends an efficient unsupervised AD algorithm given a set of meta-features extracted from the unlabeled input dataset. The proposed meta-learner is found to outperform the state-of-the-art alternative.
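
To illustrate the workflow described above, the sketch below shows one way such a meta-learner could be assembled in Python: statistical meta-features are extracted from an unlabeled dataset, and a supervised classifier trained on benchmark datasets (where the best detector was determined offline) recommends one of several candidate unsupervised AD algorithms. The meta-feature set, the candidate detectors, and the random-forest meta-learner are illustrative assumptions only and do not reproduce the paper's actual design.

    # Minimal sketch of meta-learning-based algorithm selection for
    # unsupervised anomaly detection. The meta-features, candidate
    # detectors, and meta-learner below are illustrative assumptions.
    import numpy as np
    from scipy import stats
    from sklearn.ensemble import RandomForestClassifier, IsolationForest
    from sklearn.neighbors import LocalOutlierFactor
    from sklearn.svm import OneClassSVM

    # Candidate unsupervised AD algorithms the meta-learner chooses among.
    CANDIDATES = {
        "iforest": lambda: IsolationForest(random_state=0),
        "lof": lambda: LocalOutlierFactor(novelty=True),
        "ocsvm": lambda: OneClassSVM(gamma="scale"),
    }

    def meta_features(X):
        """Extract simple statistical meta-features from an unlabeled dataset."""
        n, d = X.shape
        return np.array([
            np.log(n), np.log(d),                    # size descriptors
            np.mean(np.abs(stats.skew(X, axis=0))),  # average |skewness|
            np.mean(stats.kurtosis(X, axis=0)),      # average excess kurtosis
            np.mean(np.std(X, axis=0)),              # average spread
        ])

    def train_meta_learner(datasets, best_algo_names):
        """Meta-training: map each benchmark dataset's meta-features to the
        name of its best-performing detector (assumed known offline)."""
        F = np.vstack([meta_features(X) for X in datasets])
        meta = RandomForestClassifier(n_estimators=200, random_state=0)
        meta.fit(F, best_algo_names)
        return meta

    def recommend(meta, X_new):
        """Deployment: recommend and instantiate a detector for new unlabeled data."""
        name = meta.predict(meta_features(X_new).reshape(1, -1))[0]
        return name, CANDIDATES[name]()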

Keywords:

Model Selection, Unsupervised Anomaly Detection, Meta-Learning, Meta-Features.

Published

2024-04-30

Issue

Section

Research Articles

How to Cite

[1] S. E. Suresh and Chennuru Srihari, "Construction of a Meta-Learner for Unsupervised Anomaly Detection", Shodhshauryam International Scientific Refereed Research Journal (SHISRRJ), ISSN: 2581-6306, Volume 7, Issue 2, pp. 43-49, March-April 2024.