Course Curriculum

Course Introduction

    1. Setting up Environment
    2. Course Tips
    3. Getting Started
    4. Quiz

Prompting Tips

    1. Write Clear Instructions
    2. Role Prompting & Delimiters
    3. Step-by-Step Prompting
    4. Providing Examples & Specifying Output Length
    5. Providing Reference Text
    6. Splitting Complex Tasks
    7. Time to Think
    8. Chaining Queries

Few-shot Prompting

    1. Zero-shot Prompting
    2. Few-shot Prompting
    3. Few-shot Prompt Template
    4. Evaluating Few-Shot Prompts
    5. LLM-as-a-Judge
    6. Exercise

Chain-of-thought Prompting

    1. Chain-of-thought Prompting
    2. Inner Monologue & Testing CoT
    3. Discussion: Automatic CoT vs. Manual CoT
    4. Exercise

Prompt Chaining

    1. Prompt Chaining Overview
    2. Applied Prompt Chaining
    3. Exercise

LLMs + External Tools

    1. PAL Overview
    2. Exercise
    3. Function Calling Overview
    4. Advanced Function Calling Tips
    5. Exercise
    6. Applying Function Calling to a Chat App
    7. Exercise

About this course

  • 43 lessons
  • 4 hours of video content
  • Projects to apply learnings
  • Earn a Certificate of Completion
  • Intermediate

Join Pro to Get Started

Get unlimited access to all our AI courses, special tutorials, certificates, live webinars, workshops, office hours, dedicated support, and community discussions.

Instructor(s)

Elvis Saravia, Ph.D.

Founder and Lead Instructor

Elvis is a co-founder of DAIR.AI, where he leads all AI research, education, and engineering efforts. Elvis holds a Ph.D. in computer science, specializing in NLP and language models. His primary interests are training and evaluating large language models and developing scalable applications with LLMs. He co-created the Galactica LLM at Meta AI and supported and advised world-class teams like FAIR, PyTorch, and Papers with Code. Prior to this, he was an education architect at Elastic, where he developed technical curricula and courses on solutions such as Elasticsearch, Kibana, and Logstash.

More about this course

OVERVIEW

Prompt Engineering for Developers is a course on designing and optimizing prompts for LLMs using the latest prompting techniques and methods.

After completing this course, students will know how to apply the best practices and prompting techniques to help them build more effective, efficient, and robust LLM applications and agents.

PREREQUISITES

This course uses the Python programming language, so basic knowledge of Python is required. We primarily use the OpenAI SDK, so you will need a paid OpenAI developer account (any tier should work).

SYLLABUS

Course Introduction

  • Get started with a hands-on, code-first approach to prompt engineering using Python, designed for developers.
  • Learn to set up your local development environment, including configuring your API key, installing packages, and running the initial notebooks (a minimal setup sketch follows below).
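
As a rough idea of what the setup involves, here is a minimal sketch using the OpenAI Python SDK; the packages, `.env` layout, and model name are illustrative assumptions rather than the course's exact configuration.

```python
# Minimal environment check (illustrative sketch, not the course's exact notebook).
# Assumes `pip install openai python-dotenv` and an OPENAI_API_KEY stored in a .env file.
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # load OPENAI_API_KEY from .env into the environment

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice for the example
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```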


Prompting Tips

  • Master foundational prompting techniques by learning to write clear, specific instructions to get more relevant and accurate responses from LLMs.
  • Learn to structure prompts effectively using role prompting, delimiters like XML tags, step-by-step instructions, and providing reference text to reduce hallucinations (a short sketch of these tips follows below).
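
A minimal sketch of these tips combined in one request, assuming the OpenAI Python SDK; the role description, XML tags, reference text, and model name are illustrative, not the course's exact prompts.

```python
# Role prompting + XML delimiters + reference text (illustrative sketch).
from openai import OpenAI

client = OpenAI()

reference_text = "Our refund window is 30 days from the date of purchase."

messages = [
    # Role prompting: the system message sets the persona and ground rules.
    {"role": "system", "content": "You are a concise customer-support assistant. "
                                  "Answer ONLY using the provided reference text."},
    # Delimiters: XML tags separate the reference text from the question.
    {"role": "user", "content": (
        "<reference>\n" + reference_text + "\n</reference>\n"
        "<question>Can I get a refund after 45 days?</question>\n"
        "Think step by step, then answer in one sentence."
    )},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```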


Few-shot Prompting

  • Go beyond basic instructions by learning few-shot prompting, a powerful technique to steer model behavior by providing concrete examples.
  • Learn to format demonstrations as input-output pairs and create a reusable prompt template for complex tasks like information extraction (a minimal template is sketched after this list).
  • Implement a systematic evaluation pipeline using an LLM-as-a-Judge to measure the effectiveness of your few-shot prompts against a baseline.
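
A minimal few-shot template sketch for a simple extraction task; the demonstrations and output schema are invented for illustration, not taken from the course materials. The filled prompt would be sent to the model the same way as any other prompt.

```python
# Few-shot prompt template: demonstrations as input-output pairs, plus a slot
# for the new input (illustrative sketch).
FEW_SHOT_TEMPLATE = """Extract the product and sentiment from the review.

Review: "The battery on this phone lasts two full days, amazing!"
Output: {{"product": "phone", "sentiment": "positive"}}

Review: "These headphones broke after a week."
Output: {{"product": "headphones", "sentiment": "negative"}}

Review: "{review}"
Output:"""


def build_prompt(review: str) -> str:
    """Reuse the same demonstrations for any new review."""
    return FEW_SHOT_TEMPLATE.format(review=review)


print(build_prompt("The laptop screen is gorgeous but the fan is loud."))
```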


Chain-of-thought Prompting

  • Activate deeper reasoning in LLMs using Chain-of-Thought (CoT) prompting, where you instruct the model to "think step-by-step" before providing an answer.
  • Learn to implement an "inner monologue" for your prompts, allowing you to separate the model's reasoning process from the final user-facing response (sketched below).
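
A minimal sketch of CoT with an inner monologue, assuming the OpenAI Python SDK; the tag names, problem, and model name are illustrative assumptions.

```python
# Chain-of-thought with an "inner monologue": the model reasons inside <thinking>
# tags, and only the <answer> portion is surfaced to the user (illustrative sketch).
import re

from openai import OpenAI

client = OpenAI()

COT_PROMPT = (
    "Solve the problem. First reason step by step inside <thinking> tags, "
    "then give only the final result inside <answer> tags.\n\n"
    "Problem: A ticket costs $12 and a group buys 7 tickets with a $10 coupon. "
    "What do they pay in total?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": COT_PROMPT}],
)
full_output = response.choices[0].message.content

# Keep the hidden reasoning for logging or testing, but show only the answer.
match = re.search(r"<answer>(.*?)</answer>", full_output, re.DOTALL)
print(match.group(1).strip() if match else full_output)
```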


Prompt Chaining

  • Discover prompt chaining, a workflow that decomposes a complex task into a sequence of smaller, more manageable sub-prompts for improved reliability.
  • Build a multi-step chatbot that uses separate, chained prompts for reasoning about a menu, extracting a response, refining the output, and verifying its accuracy (a simplified two-step chain is sketched below).
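
A simplified two-step chain in the spirit of the module, assuming the OpenAI Python SDK; the menu, prompts, and model name are illustrative, and the course's chatbot chains more steps than this sketch.

```python
# Prompt chaining: step 1 reasons over a menu, step 2 drafts the reply from
# step 1's output (illustrative sketch).
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # hypothetical model choice


def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


menu = "Margherita pizza $10, Veggie burger $8, Lemonade $3"
question = "What's the cheapest vegetarian meal with a drink?"

# Step 1: reason about the menu and extract only the relevant facts.
analysis = ask(f"Menu: {menu}\nQuestion: {question}\n"
               "List the relevant items and their prices, nothing else.")

# Step 2: turn the extracted facts into a short customer-facing answer.
reply = ask(f"Using only these facts:\n{analysis}\n"
            "Write a one-sentence answer for the customer.")
print(reply)
```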


LLMs + External Tools

  • Learn to augment LLMs by connecting them to external tools, starting with the Program-Aided Language (PAL) model concept.
  • Master the OpenAI Function Calling API, a core feature that allows models to interact with your custom code to retrieve real-time data or take action.
  • Apply your knowledge by building a conversational chatbot that uses a custom function to process a food order and calculate the total price (a simplified sketch follows below).
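
A simplified function calling sketch using the OpenAI Python SDK; the function name, menu prices, schema, and model name are assumptions for illustration, not the course's exact chat app.

```python
# Function calling for a food-order total (illustrative sketch).
import json

from openai import OpenAI

client = OpenAI()

PRICES = {"pizza": 10.0, "burger": 8.0, "lemonade": 3.0}


def order_total(items: list[str]) -> float:
    """Local function the model can request via a tool call."""
    return sum(PRICES.get(item, 0.0) for item in items)


tools = [{
    "type": "function",
    "function": {
        "name": "order_total",
        "description": "Compute the total price of a food order.",
        "parameters": {
            "type": "object",
            "properties": {"items": {"type": "array", "items": {"type": "string"}}},
            "required": ["items"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": "I'd like a pizza and a lemonade."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model returns the function name and JSON arguments; we execute locally.
    args = json.loads(message.tool_calls[0].function.arguments)
    print(f"Total: ${order_total(args['items']):.2f}")
```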


Special LLM Topics

  • Learn to use Structured Outputs with Pydantic schemas to force the model to generate reliable, machine-readable JSON (sketched after this list).
  • Discover how to leverage Prompt Caching to automatically reduce latency and cost for applications with long, repetitive prompts.
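
A minimal Structured Outputs sketch with a Pydantic schema; the schema fields and model name are assumptions, and it relies on a recent version of the openai Python SDK.

```python
# Structured Outputs: constrain the model's reply to a Pydantic schema
# (illustrative sketch).
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()


class OrderSummary(BaseModel):
    items: list[str]
    total_usd: float


completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": "Order: one pizza ($10) and a lemonade ($3)."}],
    response_format=OrderSummary,  # the output is constrained to this JSON schema
)

order = completion.choices[0].message.parsed  # an OrderSummary instance
print(order.items, order.total_usd)
```

Prompt Caching, by contrast, typically requires no code change: it applies automatically when requests share a long, identical prompt prefix.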


Reasoning LLMs

  • Explore advanced reasoning models, like OpenAI's o series, and their native Chain-of-Thought capabilities for solving complex problems.
  • Learn to implement meta-prompting, an automated workflow where a reasoning model acts as an LLM-as-a-Judge to evaluate and iteratively optimize your system prompts.
  • Build a complete evaluation and optimization loop to systematically improve the performance of your prompts on a validation dataset (the judging step is sketched below).
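
A skeleton of the LLM-as-a-Judge scoring step that such a loop is built on, assuming the OpenAI Python SDK; the judge prompt, scoring scale, and model name are illustrative, not the course's exact pipeline.

```python
# LLM-as-a-Judge scoring step (illustrative sketch). A full meta-prompting loop
# would score a validation set, ask the judge model to rewrite the system prompt,
# and repeat until the average score stops improving.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are a strict evaluator.
Task instructions:
{instructions}

Model response:
{response}

Rate how well the response follows the instructions on a 1-5 scale.
Reply with the number only."""


def judge(instructions: str, response: str) -> int:
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # a reasoning model could be substituted here
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            instructions=instructions, response=response)}],
    )
    return int(result.choices[0].message.content.strip())


print(judge("Answer in exactly one sentence.", "Paris is the capital of France."))
```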


TOPICS

Throughout the course, students will use Python to apply best practices for prompting LLMs, including:

  • Prompting Tips: Learn the most important prompting tips to help you prompt modern LLMs efficiently and effectively.
  • Few-shot prompting: Learn the best practices for designing and optimizing robust few-shot prompts.
  • Chain-of-thought prompting: Leverage the step-by-step thinking capabilities of LLMs through carefully designed chain-of-thought prompts.
  • Prompt chaining: Apply the core design pattern of chaining queries with LLMs.
  • LLMs + External Tools: Combine LLMs with external tools to develop complex LLM applications. In addition, learn about function calling and when and how to leverage it.
  • Special LLM Topics: Apply the latest best practices for working with LLMs, ranging from structured outputs to prompt caching.
  • Reasoning: Learn how to prompt reasoning LLMs effectively, using the latest tips and best practices for different use cases. Learn to apply LLM-as-a-Judge with meta-prompting to optimize your LLM applications.


Here's a subset of companies whose employees have benefitted from our courses:

Accrete, Airbnb, Alston & Bird, Amazon, Apple, Arm, Asana, Bank of America, Belong For Me, Biogen, Brilliant, Carebound, CDM, CentralReach, Centric Software, Chime, Coinbase, Digital Green, DoHQ, Elekta, Fidelity Investments, Fivecast, Fulcrum Labs, Google, Guru, Gretel, Harrison Insights, Intel, Intuit, Jina AI, JPMorgan Chase & Co, Khan Academy, KnowBe4, Lawyer.com, LinkedIn, LionSentry, MagmaLabs, MasterClass, Meta, Metopio, Microsoft, Moneta Health, Oracle, OpenAI, Rechat AI, RingCentral, Salesforce, Scale AI, Scribd, Space-O Technologies, Sun Life, TD Bank, TELUS Corporations, Trilogy, TTEC Digital, UniCredit, VaxCalc Labs, Vendr, Walmart, Wolfram Alpha, Zemoso Technologies, Zeplin

Reach out to [email protected] for any questions and team/student discounts.


What people are saying

“DAIR.AI Academy was my best choice for learning about AI in 2025! Elvis is a very competent teacher, and the content is extremely useful. Prompt Engineering, Agents, and so many other topics were covered that I don't need any other resources. Thank you for bringing this knowledge to us!”

Felipe Fontoura | Senior Consultant at 2Fx Tech

“This course helped me understand how prompts actually affect model outputs. Before this, I was just trial and erroring. Now I approach prompt writing more like debugging with structure and logic.”

Rahul Jain | Co-Founder & CEO at Pixeldust Technologies
