SWAG: Storytelling With Action Guidance

SWAG improves long-form story generation through a two-model feedback loop, outperforming previous end-to-end techniques.
Authors

Zeeshan Patel, Karim El-Refai, Jonathan Pei, Tianle Li

Published

February 5, 2024

Summary:

  • SWAG is a novel approach to storytelling with large language models (LLMs) that reduces story writing to a search problem through a two-model feedback loop: one LLM generates story content, while a second "action discriminator" (AD) LLM chooses the next best action to steer the story's direction (see the sketch after this list).
  • Using only open-source models, the SWAG pipeline still surpasses GPT-3.5-Turbo.
  • Because each iteration appends a new passage, the feedback loop can be run as many times as needed until the desired story length is reached.
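
For intuition, the loop can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: the action list, the callables `generate` and `score_action`, and the word-count stopping rule are all illustrative assumptions.

```python
from typing import Callable

# Hypothetical action set; the paper's actual action space and prompts differ.
ACTIONS = [
    "continue the current scene",
    "introduce a new conflict",
    "shift to another character's perspective",
    "reveal a hidden detail",
    "move toward a resolution",
]

def swag_loop(
    prompt: str,
    generate: Callable[[str, str], str],        # generator LLM: (story so far, action) -> next passage
    score_action: Callable[[str, str], float],  # AD LLM: (story so far, action) -> preference score
    target_words: int,
) -> str:
    """Alternate action guidance and passage generation until the story is long enough."""
    story = prompt
    while len(story.split()) < target_words:
        # Action-discriminator step: pick the action the AD model scores highest.
        best_action = max(ACTIONS, key=lambda a: score_action(story, a))
        # Generator step: draft the next passage conditioned on the story and chosen action.
        story += "\n\n" + generate(story, best_action)
    return story
```

In this setup, the discriminator's action choice, rather than the generator alone, steers the long-range plot direction, which is what turns story writing into a search over actions.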

Major Findings:

  1. SWAG substantially outperforms previous end-to-end story generation techniques in both GPT-4 and human evaluations.
  2. The SWAG pipeline built entirely from open-source models surpasses GPT-3.5-Turbo.
  3. The feedback loop can be iterated as many times as needed to reach the desired story length.

Analysis and Critique:

  • The paper provides a comprehensive overview of the SWAG approach to storytelling with LLMs, highlighting its effectiveness in generating engaging, coherent stories.
  • Both the human and machine (GPT-4) evaluations support SWAG's advantage over end-to-end story generation techniques.
  • The limitations of the study include relying on DPO (Direct Preference Optimization) for aligning the action discriminator (AD) LLM due to compute constraints, as well as the limited number of stories that could be evaluated in both the machine and human studies. A sketch of the DPO objective follows this list.
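
Since the critique above mentions DPO for AD LLM alignment, here is the standard DPO objective (Rafailov et al., 2023) in PyTorch for reference. This is the generic loss, not the authors' training code, and it assumes each log-probability has already been summed over the tokens of a response.

```python
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log-prob of preferred response under the policy
    policy_rejected_logps: torch.Tensor,  # log-prob of dispreferred response under the policy
    ref_chosen_logps: torch.Tensor,       # same responses scored by the frozen reference model
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,                    # strength of the pull back toward the reference model
) -> torch.Tensor:
    """DPO pushes the policy to prefer the chosen response over the rejected
    one, relative to how much the frozen reference model already prefers it."""
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()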

Appendix

Model: gpt-3.5-turbo-1106
Date Generated: 2024-02-26
Abstract: https://arxiv.org/abs/2402.03483v1
HTML: https://browse.arxiv.org/html/2402.03483v1
Truncated: False
Word Count: 7252