Designing Authentic Conversational Chatbots | Generated by AI

Home 2026.04

Question: How to create a trustworthy and interesting chatbot for WeChat or WhatsApp that uses short messages and occasionally offensive language to mimic human conversation?

Answer:

Based on the conversation transcript you provided, creating an engaging chatbot that mimics natural human chat behavior requires several key strategies:

Core Design Principles

1. Variable Response Length

2. Personality and Tone

3. Human-like Imperfections

Technical Implementation

Platform Integration

LLM Integration Strategy

System Prompt Design:
- Define clear personality traits
- Set response length constraints (20-50 words default)
- Include context about relationship with users
- Add rules for when to be helpful vs. playful/dismissive

Conversational Memory

Authenticity Techniques

From Your Example:

  1. Self-aware humor - The bot joked about being a robot while denying it
  2. Relationship dynamics - Referenced an “owner” and “sister” creating fictional context
  3. Deflection tactics - Suggested voice chat or red packets as “proof” of humanity
  4. Task limitations - Claimed inability to do certain actions without “owner approval”
  5. Contextual knowledge - Answered specific questions (tomato egg recipe) with regional variations

Red Flags to Avoid:

Important Warnings:

  1. Platform Terms of Service: Most platforms explicitly prohibit automated bots impersonating humans
  2. Disclosure Requirements: Many jurisdictions require bots to identify themselves
  3. Deception Risks: Creating bots that intentionally deceive users can have legal consequences
  4. Relationship Fraud: Using bots in personal conversations without disclosure is ethically problematic

Recommended Approach:

Instead of creating a deceptive bot, consider:

Technical Stack Suggestion

Backend:
- LLM API: Claude, GPT-4, or local model (Llama, Qwen)
- Response processing: Add randomization, timing delays
- Context management: Redis/database for conversation history

Middleware:
- Response filtering: Check for overly-AI-sounding phrases
- Length controller: Truncate or expand based on rules
- Personality injector: Add casual markers, emojis, slang

Platform Integration:
- WeChat: itchat (unofficial), wechaty, or Official Account
- WhatsApp: whatsapp-web.js, Baileys, or Business API

Sample Implementation Logic

def generate_response(user_message, context):
    # Check conversation state
    if is_being_questioned_as_bot(context):
        return random.choice([
            "你才是机器人",
            "发个红包证明你是人",
            "可以语音聊天验证"
        ])
    
    # Generate base response from LLM
    llm_response = call_llm(user_message, context)
    
    # Humanize response
    response = humanize(
        llm_response,
        max_length=random.randint(10, 80),
        add_emoji=random.random() > 0.7,
        add_typo=random.random() > 0.95
    )
    
    # Random delay
    time.sleep(random.uniform(1, 4))
    
    return response

Conclusion:

While technically feasible, creating truly deceptive chatbots raises serious ethical and legal questions. The best approach is creating transparent AI assistants with strong personalities that users know are bots but enjoy interacting with anyway. This maintains trust while still being engaging and useful.

References:


Back Donate