Chain-of-Draft (CoD) Is The New King Of Prompting Techniques | by Dr. Ashish Bamania | Level Up Coding


Researchers at Zoom Communications have developed Chain-of-Draft (CoD) prompting, a new technique that surpasses Chain-of-Thought (CoT) prompting in both accuracy and efficiency for reasoning LLMs.

Reasoning LLMs are a hot topic in AI research today.

We have come all the way from GPT-1 to advanced reasoners like Grok-3.

This journey has been remarkable, with some really important reasoning approaches discovered along the way.

One of them has been Chain-of-Thought (CoT) Prompting (Few-shot and Zero-shot), leading to much of the LLM reasoning revolution that we see today.

Excitingly, there’s now an even better technique published by researchers from Zoom Communications.

This technique, called Chain-of-Draft (CoD) Prompting, outperforms CoT Prompting in accuracy while using as little as 7.6% of the reasoning tokens when answering a query.
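The difference between the two techniques comes down to the instruction given to the model: CoT asks for full step-by-step reasoning, while CoD asks for a terse "draft" of each step. A minimal sketch of the two prompts, assembled into a chat-style message list (the instruction wording follows the paper's described approach; the example question and the `build_messages` helper are illustrative, not part of the paper):

```python
# Sketch of Chain-of-Thought (CoT) vs Chain-of-Draft (CoD) system prompts.
# CoD caps each reasoning step at a short draft (the paper suggests about
# 5 words per step), which is where the large token savings come from.

COT_PROMPT = (
    "Think step by step to answer the following question. "
    "Return the answer at the end of the response after a separator ####."
)

COD_PROMPT = (
    "Think step by step, but only keep a minimum draft for each thinking "
    "step, with 5 words at most. "
    "Return the answer at the end of the response after a separator ####."
)

def build_messages(system_prompt: str, question: str) -> list[dict]:
    """Assemble a chat-style message list for an OpenAI-compatible client."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

question = (
    "Jason had 20 lollipops. He gave Denny some lollipops. "
    "Now Jason has 12 lollipops. How many did he give to Denny?"
)
messages = build_messages(COD_PROMPT, question)
```

Under CoD, the model's response would read like "20 - x = 12; x = 20 - 12 = 8; #### 8" rather than several full sentences of reasoning, so far fewer completion tokens are generated for the same answer.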

[Figure] Comparison of accuracy and token usage when Claude 3.5 Sonnet is prompted with a direct answer (Standard), Chain-of-Thought (CoT), and Chain-of-Draft (CoD) on tasks from different reasoning domains.

This is a big win for reasoning LLMs that are currently very verbose, require lots of…
