Chain-of-Draft (CoD) Is The New King Of Prompting Techniques | by Dr. Ashish Bamania | Level Up Coding




Reasoning LLMs are a hot topic in AI research today.

We started all the way from GPT-1 to arrive at advanced reasoners like Grok-3.

This journey has been remarkable, with some really important reasoning approaches discovered along the way.

One of them has been Chain-of-Thought (CoT) Prompting (Few-shot and Zero-shot), leading to much of the LLM reasoning revolution that we see today.
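To make the distinction concrete, here is a minimal sketch of zero-shot CoT prompting. The helper function and the example question are illustrative assumptions; the trigger phrase "Let's think step by step." is the one commonly associated with zero-shot CoT.

```python
def build_cot_prompt(question: str) -> str:
    """Zero-shot Chain-of-Thought: append the classic trigger phrase
    so the model writes out its full reasoning before answering."""
    return f"{question}\nLet's think step by step."


# Hypothetical example question; any reasoning task works the same way.
prompt = build_cot_prompt(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)
print(prompt)
```

The resulting prompt is sent to the model as-is; the added line nudges the model into emitting intermediate reasoning steps rather than a bare answer.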

Excitingly, there’s now an even better technique published by researchers from Zoom Communications.

This technique, called Chain-of-Draft (CoD) Prompting, outperforms CoT Prompting in accuracy while using as little as 7.6% of the reasoning tokens per query.
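The core of CoD is a system prompt that tells the model to keep each reasoning step to a terse draft of a few words, then emit the final answer after a separator. A minimal sketch follows; the prompt wording paraphrases the paper's description, and the message format assumes an OpenAI-style chat API.

```python
# Paraphrase of the Chain-of-Draft instruction described in the paper:
# keep each reasoning step to a short draft, answer after a separator.
COD_SYSTEM_PROMPT = (
    "Think step by step, but only keep a minimum draft for each "
    "thinking step, with 5 words at most. Return the answer at the "
    "end of the response after a separator ####."
)


def build_cod_messages(question: str) -> list[dict]:
    """Wrap a question in chat messages using the Chain-of-Draft
    system prompt, ready to pass to an OpenAI-style chat endpoint."""
    return [
        {"role": "system", "content": COD_SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]
```

A response under this prompt might look like `20 - x = 12; x = 8. #### 8` instead of several paragraphs of prose, which is where the token savings come from.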

Figure: Comparison of accuracy and token usage when Claude 3.5 Sonnet is prompted with direct answering (Standard), Chain-of-Thought (CoT), and Chain-of-Draft (CoD) across different reasoning domains.

This is a big win for reasoning LLMs that are currently very verbose, require lots of…
