AI training acceleration refers to the use of techniques and technologies to speed up the training process of artificial intelligence (AI) models. Training AI models, especially deep learning models, can be computationally intensive and time-consuming.
Accelerating this process is crucial for improving efficiency, reducing costs, and enabling the development of more sophisticated and larger models.
This workshop provides a great venue for the international research and industry community to share ideas and techniques to accelerate AI training.
The workshop invites contributions on the following topics:
- Operator fusion; automatic generation, compilation, and integration of optimized operators (e.g., TorchDynamo, TorchInductor)
- Optimized heterogeneous & distributed training (gradient-update quantization, host/device offloading, communication collectives)
- Future workloads: RAG-aware training, episodic memory, state-space models (SSMs), domain-expert model extraction/training
- Fault-tolerant & dynamically scalable system architectures and methods/algorithms; unified training & inference systems
- Rapid model customization & deployment tuning (multi-token inference/Medusa, lightweight low-rank adapters/LoRA, PiM noise resilience, ...)
- Intra-device optimizations: hardware architectures, softmax & gradient computation acceleration, on-device interconnect, low-power optimizations, compression/decompression, quantization
- Scale-out interconnect topologies and supernode architectures
- Optimization techniques for Transformer operators targeting AI accelerators
- AI acceleration through processing-in-memory (PiM)
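To make one of the listed customization topics concrete: a low-rank adapter (LoRA) freezes a pretrained weight matrix W and trains only a low-rank update BA, so the adapted layer computes y = Wx + (alpha/r)BAx with far fewer trainable parameters. The following NumPy sketch is purely illustrative (all names, shapes, and the scaling factor are assumptions, not taken from any submission or reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4   # layer dimensions and adapter rank (illustrative)
alpha = 8.0                  # LoRA scaling factor (illustrative)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B would be trained,
    # cutting trainable parameters from d_out*d_in to r*(d_in + d_out).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(lora_forward(x), W @ x)
```

Because B starts at zero, training begins from the frozen model's behavior and only gradually departs from it, which is one reason low-rank adapters suit rapid customization.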
Submission Guidelines
Papers must be in the ACM conference format and no more than 5 pages long, including everything except references.
Reviewing is single-blind (include author information in the paper; reviewer identities remain anonymous).
Submission Link: https://aita2024.hotcrp.com/
Important Dates
Submission Deadline: March 30, 2024 AoE
Responses to Authors: April 13, 2024
Camera Ready due: TBD (in line with HPDC Camera Ready deadline)
Workshop: TBD (June 3 or 4)
Workshop Organization
Dr. Emily Rozenshine (Huawei European Research Institute)
Dr. Jose Yallouz (Huawei European Research Institute)