Chain-of-Draft (CoD) Is The New King Of Prompting Techniques | by Dr. Ashish Bamania | Level Up Coding


AI Summary

Chain-of-Draft (CoD) Prompting: A Breakthrough in LLM Reasoning

The article discusses Chain-of-Draft (CoD) prompting, a novel technique in AI research that significantly improves Large Language Model (LLM) reasoning capabilities. CoD builds upon the previous Chain-of-Thought (CoT) prompting, enhancing accuracy while dramatically reducing the number of tokens required. Researchers from Zoom Communications developed this method, which showcases a substantial advancement in LLM efficiency.

Key Improvements Over Chain-of-Thought (CoT)

CoD matches or outperforms CoT in accuracy while using considerably fewer tokens. Specifically, the article highlights that CoD can use as little as 7.6% of the tokens CoT requires — a reduction of up to 92.4% — while maintaining or improving accuracy across various reasoning tasks.

Impact and Significance

This breakthrough addresses a significant challenge in current LLM technology: verbosity. By significantly reducing token consumption, CoD promises more efficient and cost-effective reasoning capabilities for LLMs, potentially leading to wider applications and accessibility.

  • Improved accuracy compared to CoT.
  • Substantially reduced token usage (up to 92.4% reduction).
  • Developed by researchers at Zoom Communications.

Reasoning LLMs are a hot topic in AI research today.

We have come a long way from GPT-1 to advanced reasoners like Grok-3.

This journey has been remarkable, with some really important reasoning approaches discovered along the way.

One of them has been Chain-of-Thought (CoT) Prompting (Few-shot and Zero-shot), leading to much of the LLM reasoning revolution that we see today.

Excitingly, there’s now an even better technique published by researchers from Zoom Communications.

This technique, called Chain-of-Draft (CoD) Prompting, outperforms CoT Prompting in accuracy while using as little as 7.6% of the reasoning tokens that CoT requires to answer a query.
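To make the contrast concrete, here is a minimal sketch of how CoT and CoD system prompts differ. The CoD instruction below paraphrases the style described in the Zoom paper (keep each reasoning step to a terse draft of a few words, then emit the final answer after a `####` separator); the exact wording, the helper names, and the message format are illustrative assumptions, not the paper's verbatim prompts.

```python
# Illustrative CoT vs. CoD system prompts (paraphrased, not verbatim from the paper).
COT_SYSTEM = (
    "Think step by step to answer the following question. "
    "Return the answer at the end of the response after a separator ####."
)

COD_SYSTEM = (
    "Think step by step, but only keep a minimum draft for each thinking step, "
    "with 5 words at most. "
    "Return the answer at the end of the response after a separator ####."
)

def build_messages(system_prompt: str, question: str) -> list[dict]:
    """Assemble a provider-agnostic chat-completion message list."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

def extract_answer(response_text: str) -> str:
    """Pull the final answer out of a response that ends with '#### <answer>'."""
    return response_text.rsplit("####", 1)[-1].strip()
```

The resulting message list can be passed to any chat-style LLM API; the only change between a CoT run and a CoD run is which system prompt is used, which is what makes the token-usage comparison in the paper an apples-to-apples one.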

Figure: Comparison of accuracy and token usage when Claude 3.5 Sonnet is prompted with a direct answer (Standard), Chain-of-Thought (CoT), and Chain-of-Draft (CoD) on tasks from different reasoning domains.

This is a big win for reasoning LLMs that are currently very verbose, require lots of…
