LLM4AI: Theories and Applications in Large-scale AI Models, Long Beach Convention & Entertainment Center, Long Beach, CA, United States, August 6-10, 2023
Conference website: https://llm-ai.github.io/llmai/
Submission link: https://easychair.org/conferences/?conf=llm4ai
Theories and Applications in Large-scale AI Models-Pre-training, Fine-tuning, and Prompt-based Learning (https://llm-ai.github.io/llmai/)
Workshop held in conjunction with KDD 2023
Deep learning techniques have advanced rapidly in recent years, leading to significant progress in pre-trained and fine-tuned large-scale AI models. For example, in the natural language processing domain, the traditional "pre-train, fine-tune" paradigm is shifting towards the "pre-train, prompt, and predict" paradigm, which has achieved great success on many tasks across different application domains, such as ChatGPT/Bard for conversational AI and P5 for unified recommender systems. Moreover, there has been growing interest in models that combine vision and language modalities (vision-language models), which are applied to tasks like visual captioning and generation.
Considering the recent technological revolution, it is essential to have a workshop at the KDD conference that emphasizes these paradigm shifts and highlights the paradigms with the potential to solve different tasks. This workshop will provide a platform for academic and industrial researchers to showcase their latest work, share research ideas, discuss various challenges, and identify areas where further research is needed in pre-training, fine-tuning, and prompt-learning methods for large-scale AI models. The workshop will also foster the development of a strong research community focused on solving challenges related to large-scale AI models, providing superior and impactful strategies that can change people’s lives in the future.
We invite submissions of long (eight pages) and short (four pages) papers, representing original research, preliminary research results, and proposals for new work in academia or industry. All submissions will be single-blind and will be peer-reviewed by an international program committee of researchers and industrial professionals and experts. Accepted submissions will be required to be presented at the workshop and will be published in dedicated workshop proceedings by the workshop organizers.
Topics of interest in this workshop include but are not limited to:
Pre-training:
- Improvements in pre-training: supervised pre-training, self-supervised pre-training with various auxiliary tasks, meta-learning, prompt-based learning, multi-modal pre-training, etc.
- Novel pre-training methods to maximize generalization
- Model selection for pre-trained models
- Pre-training for various application domains, such as computer vision, natural language processing, robotics, etc.
Fine-tuning:
- Domain/task adaptive fine-tuning
- Intermediate-task, multi-task, self-supervised, MLM fine-tuning
- Parameter-efficient fine-tuning: sparse parameter tuning, pruning
- Text-to-Text, Text-to-image, Image-to-text, multi-modal fine-tuning, effectively using large autoregressive pre-trained models
- Fine-tuning for various application domains, such as computer vision, natural language processing, robotics, etc.
Prompt-based/Instruction-based learning:
- Manual Template Engineering
- Automated Template Learning
- Multi-prompt learning; multi-task instruction tuning
- Instruction tuning with human feedback (HF/RLHF)
- Chain-of-thought (CoT) prompting
Performance:
- Model compression techniques
- Large-scale model deployments
- Efficient and effective training/inference
- Empirical analysis of various pre-training and fine-tuning methods
- Generalization bounds of different pre-training and fine-tuning methods
- Stability, sparsity and robustness strategies
Downstream tasks of large-scale models:
- NLP models for Text Generation, Text Summarization, Question Answering, and other downstream tasks
- CV models for Image Captioning, Semantic Segmentation, Object Tracking, and other downstream tasks
Applications powered by large-scale models:
- Conversational AI, conversational chatbots
- Enhanced web search and search engines
- Unified, personalized next-generation recommender systems
Call for Papers
Paper Submission Deadline: May 26, 2023, 11:59 PM AoE.
Paper Notification: Jun. 13, 2023, 11:59 PM AoE.
Camera Ready Version: Jun. 27, 2023, 11:59 PM AoE.
Half-Day Workshop: Aug. 7, 2023
This workshop follows the submission requirements of KDD 2023.
Instructions:
- Long papers (up to 8 pages) and short papers (up to 4 pages). The page limit includes the bibliography and any appendices.
- Single-blind peer review
- All papers must be formatted according to the ACM sigconf template manuscript style, following the submission guidelines available at: https://www.acm.org/publications/proceedings-template.
- Papers should be submitted electronically in PDF format via the EasyChair submission system.
- All accepted papers will be invited for presentation.
Inquiry Email: llmai.workshop@gmail.com