AST 2024: 5th ACM/IEEE International Conference on Automation of Software Test
Lisbon, Portugal, April 15-16, 2024
Conference website: https://conf.researchr.org/home/ast-2024
Submission link: https://easychair.org/conferences/?conf=ast2024
Submission deadline: December 10, 2023
CALL FOR PAPERS
The 5th ACM/IEEE International Conference on Automation of Software Test (AST 2024)
15-16 April, 2024 @ICSE 2024, Lisbon, Portugal
https://conf.researchr.org/home/ast-2024
The pervasiveness of software in both industry and digital society, together with the proliferation of Artificial Intelligence (AI) technologies, continuously creates emerging needs for both software producers and consumers. Infrastructures, software components, and applications aim to hide their increasing complexity in order to appear more human-centric. However, design errors, poor integrations, and time-consuming engineering phases risk producing unreliable solutions that barely meet their intended objectives. In this context, Software Engineering processes and methods continue to demand the investigation of novel and further refined approaches to Software Quality Assurance (SQA).
Software testing automation is a discipline that has produced noteworthy research over the last decades. The search for solutions to automatically test any kind of software is critical, and it spans several areas: from the generation of test cases, test oracles, and test stubs/mocks; through the definition of selection and prioritization criteria; to the engineering of infrastructures that govern the execution of testing sessions, whether locally or remotely in the cloud.
The Automation of Software Test (AST) conference continues a long record of international scientific forums on methods and solutions for automating software testing. AST 2024 is the 5th edition of a conference that was formerly organized as a workshop series since 2006. The conference promotes high-quality research contributions on methods for software test automation, as well as original case studies reporting practices in this field. We invite contributions that focus on: i) lessons learned from experiments with automated testing in practice; ii) experiences with the adoption of testing tools, methods, and techniques; iii) best practices to follow in testing and their measurable consequences; and iv) theoretical approaches that are applicable to industry in the context of AST.
With the emergence of generative AI, its application to test automation is imminent. The special theme of this year is "Test automation for and with Generative AI", with special sessions on applying test automation technologies to the testing of generative AI applications and on using generative AI to facilitate test automation.
Submission Guidelines
Four types of submissions are invited for both research and industry:
1. Regular Papers (up to 10 pages plus 2 additional pages of references)
- Research Paper
- Industrial Case Study
2. Short Papers (up to 4 pages plus 1 additional page of references)
- Research Paper
- Industrial Case Study
- Doctoral Student Research
3. Industrial Abstracts (up to 2 pages for all materials)
4. Lightning-Talk Abstracts (up to 2 pages for all materials with short presentation)
List of Topics
Submissions on the AST 2024 theme are especially encouraged, but papers on other topics relevant to the automation of software tests are also welcome. Topics of interest include, but are not limited to the following:
- AI for Automated Software Testing
- Testing for AI robustness, safety, reliability and security
- Testing of AI-based Systems
- Effective testing through explainable AI
- Test automation of large complex systems
- Test Automation in Software Process and Evolution, DevOps, Agile, CI/CD flows
- Metrics for testing - test efficiency, test coverage
- Tools for model-based V&V
- Test-driven development
- Standardization of test tools
- Test coverage metrics and criteria
- Product line testing
- Formal methods and theories for testing and test automation
- Test case generation based on formal, semi-formal and AI models
- Testing with software usage models
- Testing of reactive and object-oriented systems
- Software simulation by models, forecasts of behavior and properties
- Application of model checking in testing
- Tools for security specification, models, protocols, testing and evaluation
- Theoretical foundations of test automation
- Models as test oracles; test validation with models
- Testing anomaly detectors
- Testing cyber-physical systems
- Automated usability and user experience testing
- Automated software testing for AI applications
Committees
Program Committee
Pending
Organizing committee
Francesca Lonetti, ISTI-CNR, Italy (General Chair)
Christof J. Budnik, Siemens Corporation, Corporate Technology, USA (Program Co-Chair)
J. Jenny Li, Kean University, USA (Program Co-Chair)
Mehrdad Saadatmand, RISE Research Institutes of Sweden, Sweden (Program Co-Chair)
Publication
Submissions must be unpublished original work and must not be under review or submitted elsewhere while under consideration for AST 2024. AST 2024 will follow a single-blind review process. Accepted regular papers, short papers, case studies, industrial abstracts, and lightning-talk abstracts will be published in the ICSE 2024 Co-located Event Proceedings and included in the IEEE and ACM Digital Libraries.
Special Issue
Authors of the best papers presented at AST 2024 will be invited to submit an extension of their work for possible inclusion in a special issue in Software Testing, Verification, and Reliability (STVR) journal.
Venue
The conference will be held in Lisbon, Portugal on 15-16 April, 2024, co-located with ICSE 2024.
Contact: ast2024[at]easychair.org