1st ICML Workshop on In-Context Learning (ICL @ ICML 2024)
In-context learning (ICL) is an emerging capability of large-scale models, including large language models (LLMs) such as GPT-3, to acquire new skills directly from the context of an input example, without separate training or fine-tuning. This allows such models to adapt rapidly to new tasks, datasets, and domains. This workshop brings together diverse perspectives on this new paradigm to assess progress, synthesize best practices, and chart open problems. Core topics include architectural and other inductive biases enabling in-context skill acquisition, and reliable evaluation of ICL in application domains including reinforcement learning, representation learning, and safe and reliable machine learning.
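As a minimal illustration (not part of the workshop materials), the sketch below shows the few-shot prompting pattern that underlies ICL: the "training examples" live entirely in the prompt, and no model weights are updated. The translation task, demonstrations, and prompt format are illustrative placeholders.

```python
# Minimal sketch of in-context (few-shot) learning via prompting.
# The demonstrations below act as the "training set"; the model is expected
# to infer the task from them alone, with no weight updates or fine-tuning.
demonstrations = [
    ("cheval", "horse"),
    ("chien", "dog"),
    ("oiseau", "bird"),
]
query = "chat"

prompt = "Translate French to English.\n"
for source, target in demonstrations:
    prompt += f"{source} -> {target}\n"
prompt += f"{query} -> "

print(prompt)
# This prompt would then be sent to any text-completion LLM; a capable model
# typically answers "cat" purely from the in-context demonstrations.
```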
The workshop took place on Saturday, July 27th, 2024, in Room Lehar 4 of the Messe Wien Exhibition Congress Center, Vienna. Video recordings from the workshop day are now available on the ICML.cc site!
In case of any issues or questions, feel free to email the organizers at iclworkshop@googlegroups.com.
Schedule
All times listed below are in Central European Summer Time (CEST).
8:30 - 9:00 AM | Coffee Break |
9:00 - 9:05 AM | Opening Remarks |
9:05 - 9:45 AM | Invited Talk: "Towards Understanding the Modern Alchemy" (Ekin Akyürek) |
9:45 - 10:25 AM | Invited Talk: "What do you need for in-context learning? Data, subcircuits, and dynamics" (Stephanie Chan) |
10:25 - 11:15 AM | Poster Session + Break |
11:15 - 11:20 AM | Spotlight Awards Ceremony by QuantCo |
11:20 - 11:30 AM | Spotlight Paper: A Theoretical Understanding of Self-Correction through In-context Alignment |
11:30 - 11:40 AM | Spotlight Paper: Transformers Learn Temporal Difference Methods for In-Context Reinforcement Learning |
11:40 - 11:50 AM | Spotlight Paper: LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language |
11:50 AM - 12:30 PM | Invited Talk: "ICL for Bayesians & TabPFN" (Samuel Müller) |
12:30 - 2:00 PM | Lunch Break |
2:00 - 2:40 PM | Invited Talk: "In-Context Deductive Reasoning" (Mehran Kazemi) |
2:40 - 3:20 PM | Invited Talk: "Exploring Model Expressivity and Optimization Landscape in In-Context Learning" (Yingcong Li) |
3:20 - 4:15 PM | Poster Session |
3:30 - 4:00 PM | Coffee Break |
4:15 - 4:55 PM | Panel Discussion |
4:55 - 5:00 PM | Closing Remarks |
Important Dates
Submission Deadline | Monday, May 27th, 2024, Anywhere on Earth (AoE) |
Decision Notification | Monday, June 17th, 2024 |
Camera-ready Deadline | Sunday, July 21st, 2024, Anywhere on Earth (AoE) |
Workshop Date | Saturday, July 27th, 2024 @ Lehar 4, Messe Wien Exhibition Congress Center, Vienna |
Call for Papers
We invite submissions to the ICL 2024 workshop, focusing on the development of new architectures, algorithms, theoretical analysis, empirical studies, and applications of In-Context Learning (ICL). Submissions must present original research that has not been previously published.
Specific topics of interest include, but are not limited to:
- architectures, training paradigms, and inductive biases that enable or improve ICL;
- theoretical analyses and guarantees for ICL methods;
- empirical evaluation of the performance of ICL methods;
- interpretability, controllability, and safety considerations for ICL systems;
- similarities and differences between ICL in large-scale language modeling systems and learned algorithms in other domains;
- the relationship between ICL and few-shot learning, meta-learning, and automated machine learning (AutoML).
Accepted papers will be presented as posters and a subset will be selected for oral presentation. The ICL 2024 workshop will be held in person at ICML 2024 with virtual participation options to be determined.
Submission Guidelines
We welcome both long papers (up to 8 pages) and short papers (up to 4 pages); the track can be selected during submission. Submitted manuscripts should consist of a page-limited main body followed by an unlimited number of pages for references and appendices, all in a single file. Submissions should be uploaded via the ICML 2024 Workshop ICL Submission portal on OpenReview.
Paper templates and style files (adapted from the ICML template) can be found in this Overleaf template. Submissions must follow the template and style, be properly anonymized for double-blind review, and not exceed the page limit for the selected track (excluding references and appendices). The proceedings are non-archival, but accepted papers and their reviews will be shared on OpenReview. We encourage authors to include code with their papers, provided it is anonymized along with the submission.
Dual Submission Policy
Our dual submission policy reflects our aim to host work in preparation that would most benefit from feedback. We accept submissions that are currently under review at other venues. However, per ICML guidelines, we do not accept work that has been accepted for publication in another archival venue as of the workshop submission deadline. A paper accepted at ICML 2024 or KDD 2024 therefore cannot be submitted to the workshop, whereas a paper under review at NeurIPS 2024 is eligible.
Speakers
Ekin Akyürek (Massachusetts Institute of Technology)
Mehran Kazemi (Google Research)
Samuel Müller (University of Freiburg)
Stephanie Chan (Google DeepMind)
Yingcong Li (University of Michigan, Ann Arbor)
Organizers
University of Freiburg
University of Freiburg
University of Freiburg & Charité Berlin
Reviewers
We thank the following reviewers for providing thorough and constructive feedback on submissions to the workshop:
Andrei Margeloiu, Andrey Zhmoginov, Annie Xie, Arvind Mahankali, Bhishma Dedhia, Chenxiao Yang, Dayeon Ki, Dileep George, Eric Bigelow, Eric Todd, Ethan Blaser, Hao Zhao, Haozhe Zhao, Herilalaina Rakotoarison, Hrayr Harutyunyan, Jerry Wei, Jerry Weihong Liu, Johannes Von Oswald, John Bronskill, Jun Seo, Junwei Ma, Keegan Harris, Kevin Christian Wibisono, Lei Li, Lennart Purucker, Liu Yang, Louis Kirsch, Man Luo, Max Vladymyrov, Mohammadreza Pourreza, Mustafa Shukor, Nan Jiang, Oussama Zekri, Pavel Czempin, Pengwei Li, Renat Aksitov, Riccardo Grazzi, Ruiqi Zhang, Ruixin Yang, Ruomin Huang, Rylan Schaeffer, Sadhika Malladi, Samuel Müller, Sharut Gupta, Simon Schrodi, Sivaramakrishnan Swaminathan, Stephanie Chan, Sunita Sarawagi, Taeyoung Kim, Tian Jin, Ting-Yun Chang, Tong Wu, Toni Liu, Weiyang Liu, Wenyang Hu, Xinyi Wang, Xinyi Wu, Yingcong Li, Yixing Jiang, Zefan Cai, Zeming Wei, Zhendong Chu, Zhongbin Fang, Zhu Li, Zijian Zhou, Ziyu Wang.
Accepted Papers
- Improve Temporal Awareness of LLMs for Domain-general Sequential Recommendation
  Authors: Zhendong Chu, Zichao Wang, Ruiyi Zhang, Yangfeng Ji, Hongning Wang, Tong Sun
- In-Context Principle Learning from Mistakes
  Authors: Tianjun Zhang, Aman Madaan, Luyu Gao, Steven Zhang, Swaroop Mishra, Yiming Yang, Niket Tandon, Uri Alon
- In-Context Symmetries: Self-Supervised Learning through Contextual World Models
  Authors: Sharut Gupta, Chenyu Wang, Yifei Wang, Tommi S. Jaakkola, Stefanie Jegelka
- Localized Zeroth-Order Prompt Optimization
  Authors: Wenyang Hu, Yao Shu, Zongmin Yu, Zhaoxuan Wu, Xiaoqiang Lin, Zhongxiang Dai, See-Kiong Ng, Bryan Kian Hsiang Low
- Fast Training Dataset Attribution via In-Context Learning
  Authors: Milad Fotouhi, Mohammad Taha Bahadori, Oluwaseyi Feyisetan, Payman Arabshahi, David Heckerman
- In-context learning in presence of spurious correlations
  Authors: Hrayr Harutyunyan, Rafayel Darbinyan, Samvel Karapetyan, Hrant Khachatrian
- DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning
  Authors: Zijian Zhou, Xiaoqiang Lin, Xinyi Xu, Alok Prakash, Daniela Rus, Bryan Kian Hsiang Low
- TabMDA: Tabular Manifold Data Augmentation for Any Classifier using Transformers with In-context Subsetting
  Authors: Andrei Margeloiu, Adrián Bazaga, Nikola Simidjievski, Pietro Lio, Mateja Jamnik
- Probing the Decision Boundaries of In-context Learning in Large Language Models
  Authors: Siyan Zhao, Tung Nguyen, Aditya Grover
- Transformers Learn Temporal Difference Methods for In-Context Reinforcement Learning
  Authors: Jiuqi Wang, Ethan Blaser, Hadi Daneshmand, Shangtong Zhang
- Many-Shot In-Context Learning in Multimodal Foundation Models
  Authors: Yixing Jiang, Jeremy Andrew Irvin, Ji Hun Wang, Muhammad Ahmed Chaudhry, Jonathan H Chen, Andrew Y. Ng
- Many-shot In-Context Learning
  Authors: Rishabh Agarwal, Avi Singh, Lei M Zhang, Bernd Bohnet, Luis Rosias, Stephanie C.Y. Chan, Biao Zhang, Aleksandra Faust, Hugo Larochelle
- How In-Context Learning Emerges from Training on Unstructured Data: The Role of Co-Occurrence, Positional Information, and Noise Structures
  Authors: Kevin Christian Wibisono, Yixin Wang
- Automatic Domain Adaptation by Transformers in In-Context Learning
  Authors: Ryuichiro Hataya, Kota Matsui, Masaaki Imaizumi
- A Theoretical Understanding of Self-Correction through In-context Alignment
  Authors: Yifei Wang, Yuyang Wu, Zeming Wei, Stefanie Jegelka, Yisen Wang
- Learning Fast and Slow: Representations for In-Context Weight Modulation
  Authors: Andrey Zhmoginov, Jihwan Lee, Max Vladymyrov, Mark Sandler
- Transformers are Minimax Optimal Nonparametric In-Context Learners
  Authors: Juno Kim, Tai Nakamaki, Taiji Suzuki
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars
  Authors: Zhaoxuan Wu, Xiaoqiang Lin, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet, Bryan Kian Hsiang Low
- Can Transformers Solve Least Squares to High Precision?
  Authors: Jerry Weihong Liu, Jessica Grogan, Owen M Dugan, Simran Arora, Atri Rudra, Christopher Re
- Linear Transformers are Versatile In-Context Learners
  Authors: Max Vladymyrov, Johannes Von Oswald, Mark Sandler, Rong Ge
- Transformers Can Perform Distributionally-robust Optimisation through In-context Learning
  Authors: Taeyoung Kim, Hongseok Yang
- In-Context Generalization to New Tasks From Unlabeled Observation Data
  Authors: Anthony Liang, Pavel Czempin, Yutai Zhou, Stephen Tu, Erdem Biyik
- Universal Self-Consistency for Large Language Models
  Authors: Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash, Charles Sutton, Xuezhi Wang, Denny Zhou
- Retrieval & Fine-Tuning for In-Context Tabular Models
  Authors: Valentin Thomas, Junwei Ma, Rasa Hosseinzadeh, Keyvan Golestan, Guangwei Yu, Maksims Volkovs, Anthony L. Caterini
- Can Mamba In-Context Learn Task Mixtures?
  Authors: Yingcong Li, Xupeng Wei, Haonan Zhao, Taigao Ma
- Learning Task Representations from In-Context Learning
  Authors: Baturay Saglam, Zhuoran Yang, Dionysis Kalogerias, Amin Karbasi
- An In-Context Learning Theoretic Analysis of Chain-of-Thought
  Authors: Chenxiao Yang, Zhiyuan Li, David Wipf
- Can LLMs predict the convergence of Stochastic Gradient Descent?
  Authors: Oussama Zekri, Abdelhakim Benechehab, Ievgen Redko
- Cross-lingual QA: A Key to Unlocking In-context Cross-lingual Performance
  Authors: Sunkyoung Kim, Dayeon Ki, Yireun Kim, Jinsik Lee
- LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language
  Authors: John F Bronskill, James Requeima, Dami Choi, Richard E Turner, David Duvenaud
- LLMs learn governing principles of dynamical systems, revealing an in-context neural scaling law
  Authors: Toni J.B. Liu, Nicolas Boulle, Raphaël Sarfati, Christopher Earls
- In-Context Learning of Energy Functions
  Authors: Rylan Schaeffer, Mikail Khona, Sanmi Koyejo
- Polynomial Regression as a Task for Understanding In-context Learning Through Finetuning and Alignment
  Authors: Max Wilcoxson, Morten Svendgård, Ria Doshi, Dylan Davis, Reya Vir, Anant Sahai
- Can large language models explore in-context?
  Authors: Akshay Krishnamurthy, Keegan Harris, Dylan J Foster, Cyril Zhang, Aleksandrs Slivkins
- In-Context Reinforcement Learning Without Optimal Action Labels
  Authors: Juncheng Dong, Moyang Guo, Ethan X Fang, Zhuoran Yang, Vahid Tarokh
- Task Descriptors Help Transformers Learn Linear Models In-Context
  Authors: Ruomin Huang, Rong Ge
- Transformers as Stochastic Optimizers
  Authors: Ryuichiro Hataya, Masaaki Imaizumi
- Verbalized Machine Learning: Revisiting Machine Learning with Language Models
  Authors: Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, Weiyang Liu
- Fine-grained Analysis of In-context Linear Estimation
  Authors: Yingcong Li, Ankit Singh Rawat, Samet Oymak
Sponsors