Comparative Analysis of AI-Supported and Manual JMeter Tests: The Role of Generative AI and LLM in Software Performance Testing



Authors

  • Burak TUZLUTAŞ Ostim Technical University
  • Murat ŞİMŞEK Ostim Technical University

Keywords:

Software Performance Testing, Artificial Intelligence, Generative AI, Large Language Models, Software Testing

Abstract

This paper addresses the challenges of conducting software performance testing, including those encountered during test preparation. The focus is on the importance of software performance testing and its evaluation methodologies; alongside this, the characteristics of large language models (LLMs) and their role in this process are examined.
The overall aim of the study is to investigate how generative AI, and large language models (LLMs) in particular, can be used efficiently at key stages of performance testing, such as creating test plans, constructing test profiles, generating and preparing test data, and interpreting the reports produced by the tests. The advantages of artificial intelligence, and more specifically of generative AI and LLMs, are discussed in terms of optimizing and accelerating the processes carried out in performance testing.
This study is envisioned as a contribution to the traditional methods used in performance testing. The potential of generative AI and LLMs to address the shortcomings of traditional testing methods and to enable more efficient testing processes may guide the development of performance testing methodologies in the future.
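The report-interpretation stage mentioned above can be illustrated with a short sketch. JMeter records results in a JTL file (CSV by default, typically written with the `-l results.jtl` option); the sample rows and the `summarize_jtl` helper below are hypothetical, shown only to indicate the kind of summary an LLM or a script might extract from such a report.

```python
import csv
import io

# Hypothetical sample rows using the standard JTL CSV column layout.
# A real run would read the file JMeter produced with `-l results.jtl`.
SAMPLE_JTL = """\
timeStamp,elapsed,label,responseCode,success
1700000000000,120,Home Page,200,true
1700000001000,340,Home Page,200,true
1700000002000,95,Login,200,true
1700000003000,1250,Login,500,false
"""

def summarize_jtl(jtl_text):
    """Compute per-label average response time (ms) and the overall error rate."""
    rows = list(csv.DictReader(io.StringIO(jtl_text)))
    by_label = {}
    errors = 0
    for row in rows:
        by_label.setdefault(row["label"], []).append(int(row["elapsed"]))
        if row["success"] != "true":
            errors += 1
    averages = {label: sum(v) / len(v) for label, v in by_label.items()}
    error_rate = errors / len(rows)
    return averages, error_rate

averages, error_rate = summarize_jtl(SAMPLE_JTL)
print(averages)     # per-label mean response time in milliseconds
print(error_rate)   # fraction of failed samples
```

A summary of this form (mean latency per request label plus an error rate) is the kind of structured input that can then be handed to an LLM for narrative interpretation, rather than pasting the raw report.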


Author Biographies

Burak TUZLUTAŞ, Ostim Technical University

Software Engineering / Institute of Science, Türkiye

Murat ŞİMŞEK, Ostim Technical University

Artificial Intelligence Engineering / Institute of Science, Türkiye


Published

2024-03-11

How to Cite

TUZLUTAŞ, B., & ŞİMŞEK, M. (2024). Comparative Analysis of AI-Supported and Manual JMeter Tests: The Role of Generative AI and LLM in Software Performance Testing. International Journal of Advanced Natural Sciences and Engineering Researches, 8(2), 109–117. Retrieved from https://as-proceeding.com/index.php/ijanser/article/view/1703
