Beginner · 20 min read · AI 101™

Build a Chatbot

Create your own conversational AI chatbot from scratch with natural language understanding, contextual memory, and a customizable personality.

Introduction

Chatbots are one of the most popular applications of AI. From customer support to personal assistants, conversational AI is everywhere. In this guide, you will build a fully functional chatbot from scratch using Python and the OpenAI API.

By the end of this tutorial, you will have a chatbot that can hold natural conversations, remember context across messages, and handle errors gracefully.

How Chatbots Work

A modern AI chatbot operates through a simple but powerful loop:

  • Input Processing: The user types a message, which is sent to the AI model as text.
  • AI Processing: The language model analyzes the message along with the conversation history to understand context.
  • Response Generation: The model generates a natural language response based on its understanding.
  • Memory Management: The conversation history is stored and sent with each new message to maintain context.
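The loop above can be sketched in a few lines of Python, with a stand-in reply() function in place of a real language model:

```python
# Stand-in for the AI model: echoes the latest user message.
def reply(history):
    return f"You said: {history[-1]['content']}"

history = []  # memory management: the history grows with each turn
for user_message in ["Hello", "Tell me about AI"]:  # input processing
    history.append({"role": "user", "content": user_message})
    answer = reply(history)  # AI processing + response generation
    history.append({"role": "assistant", "content": answer})
    print(answer)
```

The rest of this guide replaces the toy reply() function with a call to a real language model, but the surrounding loop stays essentially the same.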

💡 Types of Chatbots

There are rule-based chatbots (following predefined scripts) and AI-powered chatbots (using language models to generate dynamic responses). This guide focuses on AI-powered chatbots, which are far more flexible and natural.
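For contrast, a rule-based bot can be as simple as a keyword lookup. The keywords and canned replies below are hypothetical, but they show why this approach breaks down for open-ended conversation:

```python
# Hypothetical rule-based bot: matches keywords against predefined replies.
RULES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
}

def rule_based_reply(message):
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I don't understand. Can you rephrase?"

print(rule_based_reply("What are your hours?"))  # → We are open 9am-5pm, Monday to Friday.
```

Any message that doesn't contain a known keyword falls through to the fallback line, which is exactly the rigidity that AI-powered chatbots avoid.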

Setting Up Your Environment

Let's set up the development environment step by step.

Step 1: Install Dependencies

You need the OpenAI Python library and python-dotenv for environment variable management.

bash
pip install openai python-dotenv
Step 2: Configure Your API Key

Create a .env file in your project root to store your API key securely. Never hardcode API keys in your source code.

text
# .env file
OPENAI_API_KEY=your-api-key-here
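To see roughly what python-dotenv's load_dotenv() does under the hood, here is a minimal stand-in that parses KEY=value lines by hand (load_env_file is an illustrative name, not part of any library; in real projects, just use python-dotenv):

```python
import os

def load_env_file(path=".env"):
    # Minimal stand-in for python-dotenv's load_dotenv(): reads
    # KEY=value lines and copies them into os.environ, skipping
    # blank lines and comments. Existing variables are not overwritten.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Because the key ends up in os.environ either way, the rest of your code can read it with os.getenv("OPENAI_API_KEY") regardless of how the .env file was loaded.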

Building a Basic Chatbot

Let's start with the simplest possible chatbot — one that takes a message and returns a response, but has no memory of previous messages.

python
import openai
from dotenv import load_dotenv
import os

load_dotenv()
client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def chat(user_message):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message}
        ],
        temperature=0.7,
        max_tokens=500
    )
    return response.choices[0].message.content

# Main loop
print("Chatbot ready! Type 'quit' to exit.")
while True:
    user_input = input("You: ")
    if user_input.strip().lower() == 'quit':
        break
    response = chat(user_input)
    print(f"Bot: {response}")

Adding Conversation Memory

The basic chatbot forgets everything between messages. To make it conversational, we need to store and send the entire conversation history with each request. Here is a class-based approach:

python
class ChatBot:
    def __init__(self, system_prompt="You are a helpful assistant."):
        self.client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        self.conversation_history = [
            {"role": "system", "content": system_prompt}
        ]
    
    def chat(self, user_message):
        self.conversation_history.append(
            {"role": "user", "content": user_message}
        )
        
        response = self.client.chat.completions.create(
            model="gpt-4",
            messages=self.conversation_history,
            temperature=0.7,
            max_tokens=500
        )
        
        assistant_message = response.choices[0].message.content
        self.conversation_history.append(
            {"role": "assistant", "content": assistant_message}
        )
        
        # Keep the system prompt plus the last 20 messages to bound token usage
        if len(self.conversation_history) > 21:
            self.conversation_history = (
                [self.conversation_history[0]]  # Keep system prompt
                + self.conversation_history[-20:]
            )
        
        return assistant_message

Managing Token Limits

Language models have a maximum context window (e.g., 8K or 128K tokens). When conversations get long, you need to trim older messages. The code above keeps the system prompt and the last 20 messages as a simple strategy.
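That trimming strategy can be pulled out into a small standalone helper (trim_history is an illustrative name, not part of any library):

```python
def trim_history(history, max_messages=20):
    # Keep the system prompt (index 0) plus the most recent
    # max_messages conversation messages; drop everything older.
    if len(history) <= max_messages + 1:
        return history
    return [history[0]] + history[-max_messages:]
```

A simple count-based cutoff like this is easy to reason about; a more precise approach would count actual tokens per message and trim until the total fits the model's context window.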

Customizing Personality

The system prompt defines your chatbot's personality, expertise, and behavior rules. Here are examples of different personalities:

python
# Customer support bot
support_bot = ChatBot(
    system_prompt="""You are a friendly customer support agent 
for a tech company. You are patient, empathetic, and always 
try to resolve issues step by step. If you don't know something, 
say so honestly and offer to escalate to a human agent."""
)

# Coding tutor bot
tutor_bot = ChatBot(
    system_prompt="""You are a Python programming tutor. 
Explain concepts clearly using simple language and examples. 
When the user makes a mistake, guide them to the answer 
instead of giving it directly. Use code examples frequently."""
)

Error Handling

In production, API calls can fail due to rate limits, network issues, or server errors. Always implement retry logic with exponential backoff:

python
import time

# Add this method inside the ChatBot class:
def chat_with_retry(self, user_message, max_retries=3):
    for attempt in range(max_retries):
        try:
            return self.chat(user_message)
        except openai.RateLimitError:
            wait_time = 2 ** attempt
            print(f"Rate limited. Waiting {wait_time}s...")
            time.sleep(wait_time)
        except openai.APIError as e:
            print(f"API error: {e}")
            if attempt == max_retries - 1:
                return "Sorry, I'm having trouble right now."
    return "Please try again later."
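The same backoff pattern generalizes to any flaky call. Here is a sketch of a generic with_retry helper (the name and signature are illustrative), demonstrated with a simulated transient failure instead of a real API error:

```python
import time

def with_retry(fn, max_retries=3, base_delay=1.0, sleep=time.sleep):
    # Call fn(); on failure, wait base_delay * 2**attempt seconds
    # and try again, re-raising the error on the final attempt.
    # The sleep function is injectable so tests don't actually wait.
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Example: a flaky call that succeeds on the third attempt.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retry(flaky, max_retries=3, sleep=lambda s: None))  # → ok
```

Passing sleep as a parameter keeps the helper testable; in production you would leave it as the default time.sleep.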

Deployment Options

Once your chatbot works locally, you have several deployment options:

  • REST API: Wrap your chatbot in a Flask or FastAPI server and deploy to any cloud provider.
  • Web Interface: Build a frontend with React or Next.js and connect it to your chatbot API.
  • Messaging Platforms: Integrate with Slack, Discord, WhatsApp, or Telegram using their bot APIs.
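As a rough sketch of the REST API option, here is a minimal chat endpoint built on Python's standard library alone, with a fake_chat() stand-in instead of a real model call; a production deployment would use Flask or FastAPI as mentioned above:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def fake_chat(message):
    # Stand-in for ChatBot.chat() so the sketch runs without an API key.
    return f"Echo: {message}"

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        reply = fake_chat(payload["message"])
        body = json.dumps({"reply": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), ChatHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Exercise the endpoint once, just like a frontend would.
port = server.server_address[1]
req = Request(
    f"http://127.0.0.1:{port}/chat",
    data=json.dumps({"message": "hi"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.loads(resp.read())["reply"])  # → Echo: hi
server.shutdown()
```

The request/response shape (a JSON body with "message" in and "reply" out) is an assumption for illustration; design whatever contract suits your frontend.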

⚠️ Security Considerations

Never expose your API key to the client side. Always route requests through your own backend server. Implement rate limiting and input validation to prevent abuse.
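One simple way to rate-limit on your backend is a token bucket (RateLimiter below is an illustrative sketch; production systems often delegate this to middleware or an API gateway):

```python
import time

class RateLimiter:
    # Token bucket: allows bursts of up to `capacity` requests,
    # refilling at `rate` tokens per second. The clock is injectable
    # so the behavior can be tested deterministically.
    def __init__(self, capacity, rate, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = RateLimiter(capacity=2, rate=1.0)
print([limiter.allow() for _ in range(3)])  # → [True, True, False]
```

In a real server you would keep one limiter per client (e.g. keyed by IP or API token) and return HTTP 429 when allow() is False.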

Summary

You have built a complete chatbot with memory and personality. Key takeaways:

  • AI chatbots work by sending conversation history to a language model with each request.
  • Conversation memory is essential for natural dialogue — store messages and manage token limits.
  • System prompts define your chatbot's personality, expertise, and behavior boundaries.
  • Always implement error handling and retry logic for production applications.