06 Mar 2026

Reasoning for Summarization in the Era of Large Language Models

ABOUT EVENT

Workshop Description

This workshop focuses on understanding how different reasoning strategies influence the way large language models generate summaries. Attendees can expect to explore a wide range of reasoning approaches, from step-by-step prompting and question-answering reasoning to hierarchical document organization and reflective refinement. We will review how each approach affects summary quality and factual accuracy across diverse types of text.

Prerequisites

Experience with an AI chatbot such as ChatGPT, Gemini, or Claude is helpful.

Recommended link: https://platform.openai.com/docs/quickstart?api-mode=responses

Learning Objectives

This workshop aims to help attendees build an intuitive understanding of how summarization works in the age of large language models (LLMs). We’ll start with the everyday idea of summarizing, and connect it to how modern LLMs handle this task in practical applications.

From there, attendees will learn what “reasoning” means for LLMs and why it has become an important part of getting better, more reliable answers. Finally, we’ll explore how different prompting strategies can lead to very different summaries, and participants will walk away with a clear sense of how small changes in instructions can shape the way an AI thinks and responds.
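To make the idea concrete, here is a minimal Python sketch (not taken from the workshop materials) contrasting a direct summarization instruction with a step-by-step variant. The template wording and the placeholder article text are illustrative assumptions; either prompt string could be sent to any chat model, for example via the OpenAI API linked above.

```python
# Illustrative only: two prompt templates showing how a small change in
# instructions can reshape a model's summarization behavior.

ARTICLE = "Placeholder article text about large language models."

def direct_prompt(text: str) -> str:
    """Plain instruction: ask for a summary in one step."""
    return f"Summarize the following text in two sentences:\n\n{text}"

def step_by_step_prompt(text: str) -> str:
    """Step-by-step variant: ask the model to reason before summarizing."""
    return (
        "Read the text below. First, list its key claims as bullet points. "
        "Then, using only those claims, write a two-sentence summary.\n\n"
        f"{text}"
    )

if __name__ == "__main__":
    for build in (direct_prompt, step_by_step_prompt):
        print(f"--- {build.__name__} ---")
        print(build(ARTICLE))
```

The only difference between the two is the instruction preamble, yet the second prompt pushes the model to extract claims before compressing them, which is the kind of effect the workshop examines.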

Tools Used

  • Python

EVENT SPEAKERS

Registration for: Reasoning for Summarization in the Era of Large Language Models


Register Now
