LLM4HPCAsia: The 1st International Workshop on Foundational Large Language Models Advances for HPC in Asia
Osaka International Convention Center, Osaka, Japan, January 29, 2026
Conference website | https://ornl.github.io/events/llm4hpcasia2026/ |
Submission link | https://easychair.org/conferences/?conf=llm4hpcasia |
Abstract registration deadline | October 13, 2025 |
Submission deadline | October 20, 2025 |
The 1st International Workshop on Foundational Large Language Models Advances for HPC in Asia
to be held in conjunction with SCA/HPC Asia 2026
29 January 2026, Osaka, Japan
Introduction
Since their development and release, modern Large Language Models (LLMs), such as the Generative Pre-trained Transformer (GPT) and the Large Language Model Meta AI (LLaMA), have come to signify a revolution in human-computer interaction, spurred on by the high quality of their results. LLMs have reshaped this landscape thanks to unprecedented investments and enormous models with hundreds of billions of parameters. Their availability has led to growing interest in how they could be applied to a wide variety of applications. The HPC community has recently made research efforts to evaluate current LLM capabilities for HPC tasks, including code generation, auto-parallelization, performance portability, and correctness, among others. These studies concluded that state-of-the-art LLM capabilities are so far insufficient for these targets. Hence, it is necessary to explore novel techniques that further empower LLMs to enrich the HPC mission and its impact.
Call For Papers
Objectives, scope and topics of the workshop
This workshop focuses on LLM advances for major HPC priorities and challenges, with the aim of defining and discussing the fundamentals of LLMs for HPC-specific tasks, including but not limited to hardware design, compilation, parallel programming models and runtimes, and application development, as well as enabling LLM technologies to make more autonomous decisions about the efficient use of HPC. The workshop aims to provide a forum to discuss new and emerging solutions that address these important challenges on the way towards an AI-assisted HPC era. Papers are sought on many aspects of LLMs for HPC, including (but not limited to):
- LLMs for Programming Environments and Runtime Systems
- LLMs for HPC and Scientific Applications
- LLMs for Hardware Design (including non-von Neumann architectures)
- Reliability/Benchmarking/Measurements for LLMs