Companion of the 2022 ACM/SPEC International Conference on Performance Engineering

Author: Dan Feng

Published: 2022


ICPE'22 is now behind us, and for the first time the conference's companion proceedings are published in the form of post-conference proceedings. The main motivation for this was to give authors of workshop and short papers an opportunity to improve their archived research papers based on discussions during the conference. These post-proceedings collect material for the following tracks:

Work-in-Progress and Vision Track: The work-in-progress and vision track this year was organized by Cristina L. Abad. The goal of this track was for attendees to present, and get feedback on, early ideas. Two papers were accepted in this track.

Poster and Demonstrations Track: Christoph Laaber and Wen Xia headed the poster and demonstrations track. Four papers were accepted and presented in a special session on the first conference day.

Tutorials: Under the leadership of David Daly and Shuibing He, three high-quality tutorials were organized at the conference this year:
- "Optimizing the Performance of Fog Computing Environments Using AI and Co-Simulation", by Shreshth Tuli and Giuliano Casale
- "Automated Benchmarking of cloud-hosted DBMS with benchANT", by Daniel Seybold and Jörg Domaschka
- "SPEC Server Efficiency Benchmark Development - How to Contribute to the Future of Energy Conservation", by Maximilian Meissner, Klaus-Dieter Lange, Jeremy Arnold, Sanjay Sharma, Roger Tipley, Nishant Rawtani, David Reiner, Mike Petrich, and Aaron Cragin

Data Challenge Track: The first-ever data challenge track at ICPE was organized by Cor-Paul Bezemer (University of Alberta), David Daly (MongoDB), and Weiyi Shang (Concordia University), with the support of 5 PC members. In this track, an industrial performance dataset was provided by MongoDB. Participants were invited to formulate research questions about the dataset and study them. The challenge was open-ended: participants could choose the research questions they found most interesting. The data challenge track accepted four short papers, which discuss the proposed approaches and/or tools and their findings.