Wire the Pipeline

Connect search, build_prompt, generate_answer, and print inside chat_loop


Connecting the pieces

All the functions you need already exist. chat_loop just needs to call them in sequence for each question.

| Step | Function | Input | Output |
| --- | --- | --- | --- |
| 1 | search | client, question, chunks, embeddings | top_chunks — the most relevant chunk dicts |
| 2 | build_prompt | question, top_chunks | prompt — the full text sent to the model |
| 3 | generate_answer | client, prompt | answer — the model's response text |
| 4 | print | f"Assistant: {answer}" | Displayed to the user |

Each function takes the output of the previous one, forming a pipeline from raw question to displayed answer.

Instructions

  1. After if not question: continue, call search(client, question, chunks, embeddings) and assign the result to top_chunks.
  2. Call build_prompt(question, top_chunks) and assign the result to prompt.
  3. Call generate_answer(client, prompt) and assign the result to answer.
  4. Print f"Assistant: {answer}".
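Put together, the wired-up loop might look like the sketch below. The stub bodies for search, build_prompt, and generate_answer are placeholders so the example runs on its own; your real implementations from earlier chapters replace them, and the exact loop structure (the prompt string, the exit condition) is assumed, not prescribed.

```python
def search(client, question, chunks, embeddings):
    # Stub: a real implementation ranks chunks by embedding similarity.
    return chunks[:2]

def build_prompt(question, top_chunks):
    # Stub: a real implementation formats the retrieved chunks as context.
    context = "\n".join(chunk["text"] for chunk in top_chunks)
    return f"Context:\n{context}\n\nQuestion: {question}"

def generate_answer(client, prompt):
    # Stub: a real implementation sends the prompt to the model via client.
    return "(model answer)"

def chat_loop(client, chunks, embeddings):
    while True:
        question = input("You: ").strip()
        if question.lower() == "exit":
            break
        if not question:
            continue
        # The pipeline: question -> top_chunks -> prompt -> answer -> display
        top_chunks = search(client, question, chunks, embeddings)
        prompt = build_prompt(question, top_chunks)
        answer = generate_answer(client, prompt)
        print(f"Assistant: {answer}")
```

Each variable feeds directly into the next call, so a wrong argument order or a skipped assignment surfaces immediately as a NameError or an off-topic answer.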