ML for Computer Architecture and Systems
(MLArchSys 2025)
ISCA 2025, Tokyo, Japan
Foundation models have become the cornerstone of a new wave of machine learning models. Their applications span natural language understanding, image processing, protein folding, and many more. The main objective of this workshop is to bring the attention of the machine learning and systems communities to the upcoming architectural and system challenges of foundational models, and to drive the productive use of these models in the chip and system design process. Subject areas of the workshop include (but are not limited to):
🆕 Agents for accelerating hardware development and improving hardware design productivity
🆕 System design for extremely large chain-of-thought-reasoning models
🆕 Noisy hardware-efficient approximation (e.g. numerics and analog)
🆕 Generative AI for security and vulnerability detection, design verification and testing
🆕 Self-optimizing hardware using ML
🆕 Hardware accelerators for neurosymbolic and hybrid AI models
🆕 ML-driven resilient computing
System and architecture support of foundational models at scale
Efficient model compression (e.g. quantization, sparsity) techniques
Efficient and sustainable training and serving
Benchmarking and evaluation of foundational models
Learned models for computer architecture and systems optimization
Machine learning techniques for compiler and code optimization
Distributed systems and infrastructure design for machine learning workloads
Machine learning for hardware/software co-design (AutoML for Hardware)
Automated machine learning in EDA tools
Optimized code generation for hardware and software
Evaluation of deployed machine learning systems and architectures
Areas: Computer Architecture, Systems, Compilers, Model Scaling, Security, Self-Attention, Foundational Models, EDA, Foundational Model Compression.
We are committed to fostering an inclusive and diverse environment for all participants. Our vision for this workshop is to build a diverse community and collectively work towards tackling challenges of foundational models. We recognize the value of diversity in promoting innovation, creativity, and meaningful discussions. Therefore, we have made significant efforts to ensure demographic diversity among our organizers and speakers. We acknowledge that achieving diversity is an ongoing process, and we continuously strive to improve our efforts in this regard. We encourage open feedback from our participants and the broader community to help us identify areas where we can enhance our inclusivity initiatives.
The use of LLMs is allowed as a general-purpose writing assist tool. Authors should understand that they take full responsibility for the contents of their papers, including content generated by LLMs that could be construed as plagiarism or scientific misconduct (e.g. fabrication of facts). LLMs are not eligible for authorship.
Authors have the right to withdraw papers from consideration at any time until paper notification. If an author withdraws a paper before the paper submission deadline, it will be deleted from the OpenReview hosting site. However, if an author chooses to withdraw a submission after the paper submission deadline, it will remain hosted by OpenReview in a publicly visible "withdrawn papers" section. Withdrawn papers will be de-anonymized.
Authors can change author order, but cannot add or remove authors. In addition, minor changes to titles and abstracts are allowed if properly justified by the authors.
We welcome submissions of up to 4 pages (not including references). This is not a strict limit, but authors are encouraged to adhere to it if possible.
All submissions must be in PDF format and should follow the MLArchSys'25 LaTeX Template (Overleaf).
Please follow the guidelines provided at ISCA 2025 Paper Submission Guidelines.
Please submit your paper at OpenReview. While the review process is not public, accepted papers and their reviews will be made public after the notification deadline.
Please carefully read and understand the MLArchSys 2025 Paper Checklist Guidelines.
Reviewing will be double blind: please do not include any author names on any submitted documents except in the space provided on the submission form.
We welcome submissions that include parts of ongoing work intended for a future conference submission; however, please ensure that your submitted work has not been previously published at a conference or in a journal.
Bahar Asgari (UMD)
Clive Chan (OpenAI)
Thaleia Dimitra Doudali (IMDEA Software)
Farshad Firouzi (ASU)
Qijing Huang (NVIDIA)
Priya Panda (Yale)
Geonhwa Jeong (Meta)
Suvinay Subramanian (Google)
Neeraja J. Yadwadkar (University of Texas, Austin)
Amir Yazdanbakhsh (Google DeepMind)
Full Paper Submission Deadline: May 1st, 2025 (OpenReview)
Author Notification: May 16th, 2025.
Workshop: June 21, 2025 (Tokyo, Japan).
Contact us at mlarchsys@gmail.com