Workshop on Programming and Performance Visualization Tools (ProTools 20)
Virtual (details forthcoming)
November 12, 2020 (Thursday) 2:30-5:45 pm Eastern Standard Time
Held in conjunction with SC20: The International Conference on High Performance Computing, Networking, Storage and Analysis, and
in cooperation with TCHPC: The IEEE Computer Society Technical Consortium on High Performance Computing
Background
Understanding program behavior is critical to overcoming the architectural and programming complexities of modern HPC platforms, such as limited power budgets, heterogeneity, hierarchical memories, shrinking I/O bandwidths, and performance variability. To do so, HPC software developers need intuitive support tools for debugging, performance measurement, analysis, and tuning of large-scale HPC applications. Moreover, the data collected by these tools, such as hardware counters, communication traces, and network traffic, can be far too large and too complex to analyze in a straightforward manner. New automatic analysis and visualization approaches are needed to help application developers intuitively understand the multiple, interdependent effects that their algorithmic choices have on application correctness and performance.
The Workshop on Programming and Performance Visualization Tools (ProTools) intends to bring together HPC application developers, tool developers, and researchers from the visualization, performance, and program analysis fields for an exchange of new approaches to assist developers in analyzing, understanding, and optimizing programs for extreme-scale platforms.
Workshop Topics
- Performance tools for scalable parallel platforms
- Debugging and correctness tools for parallel programming paradigms
- Scalable displays of performance data
- Case studies demonstrating the use of performance visualization in practice
- Program development tool chains (incl. IDEs) for parallel systems
- Methodologies for performance engineering
- Data models to enable scalable visualization
- Graph representation of unstructured performance data
- Tool technologies for extreme-scale challenges (e.g., scalability, resilience, power)
- Tool support for accelerated architectures and large-scale multi-cores
- Presentation of high-dimensional data
- Visual correlations between multiple data sources
- Measurement and optimization tools for networks and I/O
- Tool infrastructures and environments
- Human-computer interfaces for exploring performance data
- Multi-scale representations of performance data for visual exploration
- Application developer experiences with programming and performance tools
Previous Workshops
The ProTools workshop combines two prior SC workshops: the Workshop on Visual Performance Analytics (VPA) and the Workshop on Extreme-Scale Programming Tools (ESPT).
- ProTools 19 (Denver, CO, USA)
- ESPT 18 (Dallas, TX, USA)
- VPA 18 (Dallas, TX, USA)
- ESPT 17 (Denver, CO, USA)
- VPA 17 (Denver, CO, USA)
- ESPT 16 (Salt Lake City, UT, USA)
- VPA 16 (Salt Lake City, UT, USA)
- ESPT 15 (Austin, TX, USA)
- VPA 15 (Austin, TX, USA)
- ESPT 14 (New Orleans, LA, USA)
- VPA 14 (New Orleans, LA, USA)
- ESPT 13 (Denver, CO, USA)
- ESPT 12 (Salt Lake City, UT, USA)
Papers
Call for Papers
We solicit papers that focus on performance, debugging, and correctness tools for parallel programming paradigms as well as techniques and case studies at the intersection of performance analysis and visualization.
Papers must be submitted in PDF format (readable by Adobe Acrobat Reader 5.0 and higher) and formatted for 8.5” x 11” (U.S. Letter). Submissions should be a minimum of 6 pages and a maximum of 10 pages in the IEEE Conference format. The 10-page limit includes figures, tables, and references.
All papers must be submitted through the Supercomputing 2020 Linklings site. Submitted papers will be peer-reviewed and accepted papers will be published by IEEE TCHPC.
Reproducibility at ProTools20
For ProTools20, we adopt the model of the SC20 technical paper program. Participation in the reproducibility initiative is optional, but highly encouraged. To participate, authors provide a completed Artifact Description Appendix (at most 2 pages) along with their submission. We will use the format of the SC20 appendix for ProTools submissions (see template). Note: A paper cannot be disqualified based on information provided, or not provided, in this appendix, nor if no appendix is submitted. However, the availability and quality of an appendix can be used in ranking a paper. In particular, if two papers are of similar quality, the existence and quality of the appendices can be part of the evaluation process. For more information, please refer to the SC20 reproducibility page and the FAQs below.
FAQ for authors
Q. Is the Artifact Description appendix required in order to submit a paper to ProTools 20?
A. No. These appendices are not required. If you do not submit any appendix, it will not disqualify your submission. At the same time, if two papers are otherwise comparable in quality, the existence and quality of appendices can be a factor in ranking one paper over another.
Q. Do I need to make my software open source in order to complete the Artifact Description appendix?
A. No. You are not required to make your software open source, or to make any changes to your computing environment, in order to complete the appendix. The Artifact Description appendix is meant to provide information about the computing environment you used to produce your results, reducing barriers for future replication of your results. However, in order to be eligible for the ACM Artifacts Available badge, your software must be downloadable by anyone without restriction.
Q. Who will review my appendices?
A. The Artifact Description and Computational Results Analysis appendices will be submitted at the same time as your paper and will be reviewed as part of the standard review process by the same reviewers who handle the rest of your paper.
Q. Does the Artifact Description appendix really impact scientific reproducibility?
A. The Artifact Description appendix is simply a description of the computing environment used to produce the results in a paper. By itself, this appendix does not directly improve scientific reproducibility. However, if this artifact is done well, it can be used by scientists (including the authors at a later date) to more easily replicate and build upon the results in the paper. Therefore, the Artifact Description appendix can reduce barriers and costs of replicating published results. It is an important first step toward full scientific reproducibility.
Dates
Important Dates
- Submission deadline: September 10, 2020 (AoE), extended from August 24
- Notification of acceptance: September 28, 2020 (AoE)
- Camera-ready deadline: October 7, 2020 (AoE)
- Workshop (virtual): November 12, 2020 (2:30-5:45 pm Eastern Standard Time)
Program
Technical Program
The workshop will be held as a virtual event on Thursday, November 12, 2020 from 2:30pm - 5:45pm Eastern Standard Time, and rebroadcast at the same times in Japan Standard Time.
| Time | Session |
|---|---|
| 2:30pm - 2:45pm | Welcome and Opening Remarks. Abhinav Bhatele, David Boehme, Markus Geimer, Andreas Knuepfer. |
| 2:45pm - 3:15pm | OpenACC Profiling Support for Clang and LLVM using Clacc and TAU. Camille Coti, Joel E. Denny, Kevin Huck, Seyong Lee, Allen D. Malony, Sameer Shende, Jeffrey S. Vetter. |
| 3:15pm - 3:45pm | Usability and Performance Improvements in Hatchet. Stephanie Brink, Ian Lumsden, Connor Scully-Allison, Katy Williams, Olga Pearce, Todd Gamblin, Michela Taufer, Katherine E. Isaacs, Abhinav Bhatele. |
| 3:45pm - 4:00pm | Break |
| 4:00pm - 4:30pm | Exascale Potholes for HPC: Execution Performance and Variability Analysis of the Flagship Application Code HemeLB. Brian J. N. Wylie. |
| 4:30pm - 5:00pm | Empirical Modeling of Spatially Diverging Performance. Alexandru Calotoiu, Markus Geisenhofer, Florian Kummer, Marcus Ritter, Jens Weber, Torsten Hoefler, Martin Oberlack, Felix Wolf. |
| 5:00pm - 5:30pm | Simulation-Based Performance Prediction of HPC Applications: A Case Study of HPL. Gen Xu, Xin Jiang, Huda Ibeid, Vjekoslav Svilan, Zhaojuan Bian. |
| 5:30pm - 5:45pm | Closing Remarks. |
Committees
Workshop Chairs
Abhinav Bhatele, University of Maryland, College Park, USA
David Boehme, Lawrence Livermore National Laboratory, USA
Markus Geimer, Juelich Supercomputing Centre, Germany
Andreas Knuepfer, ZIH, Technical University Dresden, Germany
Program Committee
Jean-Baptiste Besnard, Paratools, France
Harsh Bhatia, Lawrence Livermore National Laboratory, USA
Holger Brunst, TU Dresden, Germany
Alexandru Calotoiu, TU Darmstadt, Germany
Karl Fuerlinger, LMU Munich, Germany
Todd Gamblin, Lawrence Livermore National Laboratory, USA
Michael Gerndt, TU Munich, Germany
Judit Gimenez, Barcelona Supercomputing Centre, Spain
Kevin Huck, University of Oregon, USA
Kate Isaacs, University of Arizona, USA
Joshua A. Levine, University of Arizona, USA
John Linford, ARM Ltd
Allen Malony, University of Oregon, USA
Barton Miller, University of Wisconsin-Madison, USA
Heidi Poxon, Cray / Hewlett Packard Enterprise, USA
Paul Rosen, University of South Florida, USA
Martin Schulz, TU Munich, Germany
Nathan Tallent, Pacific Northwest National Laboratory, USA
Gerhard Wellein, FAU, Germany
Brian J. N. Wylie, Juelich Supercomputing Centre, Germany