1st ICML Workshop on In-Context Learning (ICL @ ICML 2024)


In-context learning (ICL) is an emergent capability of large-scale models, including large language models (LLMs) such as GPT-3, to acquire new skills directly from the context of an input example, without separate training or fine-tuning, enabling these models to adapt rapidly to new tasks, datasets, and domains. This workshop brings together diverse perspectives on this new paradigm to assess progress, synthesize best practices, and chart open problems. Core topics will include architectural and other inductive biases enabling in-context skill acquisition, and reliable evaluation of ICL in application domains including reinforcement learning, representation learning, and safe and reliable machine learning.

The workshop took place on Saturday, July 27th, 2024 at Room Lehar 4, Messe Wien Exhibition Congress Center, Vienna. Video recordings from the workshop day are now available on the ICML.cc site!

In case of any issues or questions, feel free to email the organizers at iclworkshop@googlegroups.com.


Schedule


All times listed below are in Central European Summer Time (CEST).

8:30 - 9:00 AM Coffee Break
9:00 - 9:05 AM Opening Remarks
9:05 - 9:45 AM Invited Talk: "Towards Understanding the Modern Alchemy"
Ekin Akyürek
9:45 - 10:25 AM Invited Talk: "What do you need for in-context learning? Data, subcircuits, and dynamics"
Stephanie Chan
10:25 - 11:15 AM Poster Session + Break
11:15 - 11:20 AM Spotlight Awards Ceremony by QuantCo
11:20 - 11:30 AM Spotlight Paper: A Theoretical Understanding of Self-Correction through In-context Alignment
11:30 - 11:40 AM Spotlight Paper: Transformers Learn Temporal Difference Methods for In-Context Reinforcement Learning
11:40 - 11:50 AM Spotlight Paper: LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language
11:50 AM - 12:30 PM Invited Talk: "ICL for Bayesians & TabPFN"
Samuel Müller
12:30 - 2:00 PM Lunch Break
2:00 - 2:40 PM Invited Talk: "In-Context Deductive Reasoning"
Mehran Kazemi
2:40 - 3:20 PM Invited Talk: "Exploring Model Expressivity and Optimization Landscape in In-Context Learning"
Yingcong Li
3:20 - 4:15 PM Poster Session
3:30 - 4:00 PM Coffee Break
4:15 - 4:55 PM Panel Discussion
4:55 - 5:00 PM Closing Remarks


Important Dates


Submission Deadline Monday, May 27th, 2024, Anywhere on Earth (AoE)
Decision Notification Monday, June 17th, 2024
Camera-ready Deadline Sunday, July 21st, 2024, Anywhere on Earth (AoE)
Workshop Date Saturday, July 27th, 2024 @ Lehar 4, Messe Wien Exhibition Congress Center, Vienna



Call for Papers


We invite submissions to the ICL 2024 workshop, focusing on the development of new architectures, algorithms, theoretical analysis, empirical studies, and applications of In-Context Learning (ICL). Submissions must present original research that has not been previously published.

Specific topics of interest include, but are not limited to:

  • architectures, training paradigms, and inductive biases that enable or improve ICL;
  • theoretical analyses and guarantees for ICL methods;
  • empirical evaluation of the performance of ICL;
  • interpretability, controllability, and safety considerations for ICL systems;
  • similarities and differences between ICL in large-scale language modeling systems and learned algorithms in other domains;
  • the relationship between ICL and few-shot learning, meta-learning, and automated machine learning (AutoML).

Accepted papers will be presented as posters and a subset will be selected for oral presentation. The ICL 2024 workshop will be held in person at ICML 2024 with virtual participation options to be determined.

Submission Guidelines


We welcome both long papers (up to 8 pages) and short papers (up to 4 pages); the track can be selected during submission. Submitted manuscripts should consist of a page-limited main body followed by an unlimited number of pages for references and appendices, all in a single file. Submissions should be uploaded via the ICML 2024 Workshop ICL Submission portal on OpenReview.

Paper templates and style files (adapted from the ICML template) can be found in this Overleaf template. Submissions must follow the template and style, be properly anonymized (for double-blind review), and not exceed the page limit for the specified track (excluding references and appendices). The proceedings will be non-archival, but we will share accepted papers and their reviews on OpenReview. We encourage authors to include code with their papers, provided the code is anonymized along with the submission.

Dual Submission Policy


We aim to host work in progress that would most benefit from feedback, which informs our dual submission policy. We accept submissions that are currently under review for publication at other venues. However, as per ICML guidelines, we do not accept work that has been accepted for publication at another archival venue as of the workshop submission deadline. A paper accepted at ICML 2024 or KDD 2024 thus cannot be submitted to the workshop, but a paper under review at NeurIPS 2024 is eligible.



Speakers


Ekin Akyürek
Massachusetts Institute of Technology
Mehran Kazemi
Google Research
Samuel Müller
University of Freiburg
Stephanie Chan
Google DeepMind
Yingcong Li
University of Michigan, Ann Arbor




Organizers


Erin Grant
University College London
Frank Hutter
University of Freiburg
Jelena Bratulić
University of Freiburg
Julien Siems
University of Freiburg
Noah Hollmann
University of Freiburg & Charité Berlin




Reviewers




Accepted Papers




Sponsors


We thank QuantCo for their generous support for this workshop!




Website theme adapted from the SSL-RL workshop which was adapted from the VIGIL workshop.