Reasoning LLMs are a hot topic in AI research today.

The field has come a long way, from GPT-1 all the way to advanced reasoners like Grok-3.

This journey has been remarkable, with several important reasoning approaches discovered along the way.

One of them is Chain-of-Thought (CoT) Prompting, in both its few-shot and zero-shot forms, which sparked much of the LLM reasoning revolution we see today.
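
For readers who haven't used it, here is a minimal sketch of what the two CoT variants look like in practice. The `call_llm` helper is hypothetical (a stand-in for whatever LLM client you use), and the example question and few-shot demonstration are made up for illustration, but the prompt patterns follow the standard CoT formulations.

```python
# Hypothetical helper: a stand-in for whatever LLM client you use
# (OpenAI, Anthropic, etc.). Replace the body with a real API call.
def call_llm(prompt: str) -> str:
    return "<model response goes here>"

QUESTION = (
    "A jug holds 4 liters. A cup holds 250 ml. "
    "How many full cups can the jug fill?"
)

# Zero-shot CoT: append a trigger phrase that elicits step-by-step
# reasoning (Kojima et al., 2022).
zero_shot_cot = f"{QUESTION}\nLet's think step by step."

# Few-shot CoT: prepend worked examples whose answers spell out the
# intermediate reasoning steps (Wei et al., 2022).
few_shot_cot = (
    "Q: A box has 3 rows of 4 apples. How many apples are there?\n"
    "A: Each row has 4 apples. There are 3 rows. 3 x 4 = 12. "
    "The answer is 12.\n\n"
    f"Q: {QUESTION}\n"
    "A:"
)

print(call_llm(zero_shot_cot))
print(call_llm(few_shot_cot))
```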

Excitingly, researchers from Zoom Communications have now published an even better technique.

This technique, called Chain-of-Draft (CoD) Prompting, outperforms CoT Prompting in accuracy while using as little as 7.6% of the reasoning tokens to answer a query.
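
To make the contrast concrete, here is a minimal sketch of a CoD-style prompt. The `call_llm` helper is again a hypothetical stand-in, and the instruction text paraphrases the drafting instruction described in the CoD paper (limit each reasoning step to a terse draft of roughly five words, then return the final answer after a separator), so check the paper for the exact wording.

```python
# Hypothetical helper again: swap in your actual LLM client.
def call_llm(prompt: str) -> str:
    return "<model response goes here>"

# CoD instruction, paraphrased from the paper: keep each reasoning step
# to a terse draft (about five words), then return the final answer
# after a separator.
COD_INSTRUCTION = (
    "Think step by step, but only keep a minimum draft for each "
    "thinking step, with 5 words at most. Return the answer at the end "
    "of the response after a separator ####."
)

question = (
    "A jug holds 4 liters. A cup holds 250 ml. "
    "How many full cups can the jug fill?"
)

print(call_llm(f"{COD_INSTRUCTION}\n\nQ: {question}\nA:"))

# Where CoT might produce several verbose sentences per step, a
# CoD-style response looks more like:
#   "4 L = 4000 ml; 4000 / 250 = 16; #### 16"
```

The key design choice is that the model still externalizes its intermediate computation, just without the verbose connective prose, which is where the large token savings come from.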

Figure: Comparison of accuracy and token usage when Claude 3.5 Sonnet is prompted with direct answering (Standard), Chain-of-Thought (CoT), and Chain-of-Draft (CoD) on tasks from different reasoning domains.

This is a big win for reasoning LLMs that are currently very verbose, require lots of…
