AI Feedback Learning System

Self-Improving AI

Vartovii uses a sophisticated feedback loop to automatically learn from user interactions and improve over time.

Overview

The Feedback Learning System enables Vartovii to:

  1. Collect user feedback (👍/👎) on responses
  2. Classify feedback using AI to identify root causes
  3. Generate learning documents from identified knowledge gaps
  4. Update the RAG knowledge base for future improvements
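The four stages above can be sketched as a single routing function. This is a minimal, illustrative sketch: `classify()` and `write_learning_doc()` are stand-ins for the real Vartovii services, and the trivial rule inside `classify()` replaces the actual AI call.

```python
# Illustrative sketch of the four-stage loop; names are placeholders,
# not the real Vartovii internals.

def classify(feedback: dict) -> dict:
    """Stand-in for the AI classifier (stage 2)."""
    # A real implementation would call an LLM; here we use a trivial rule.
    actionable = "test" not in feedback["comment"].lower()
    return {
        "is_actionable": actionable,
        "knowledge_gap": feedback.get("gap", "N/A"),
    }

created_docs: list = []

def write_learning_doc(gap: str) -> None:
    """Stand-in for learning-document generation (stage 3)."""
    created_docs.append(gap)

def process_feedback(feedback: dict) -> str:
    """Route one piece of negative feedback through the loop."""
    result = classify(feedback)
    if not result["is_actionable"]:
        return "dismissed"            # spam / test feedback
    gap = result["knowledge_gap"]
    if gap not in ("N/A", "technical issue"):
        write_learning_doc(gap)       # picked up by the next RAG sync (stage 4)
        return "learning_created"
    return "actioned"
```

Stage 4 is deliberately absent from the sketch: updating the knowledge base happens later, at sync time, not inline with feedback processing.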

System Architecture

```
FEEDBACK PROCESSING PIPELINE

User submits 👎 feedback
         │
         ▼
┌──────────────────┐
│ 1. AUTO-TRIAGE   │ ◄── AI classifies: category + urgency
└────────┬─────────┘
         │
   ┌─────┴─────┐
   ▼           ▼
[SPAM]    [ACTIONABLE]
   │           │
Dismiss   ┌────┴──────────┐
          ▼               ▼
      [CODE FIX]   [KNOWLEDGE GAP]
          │               │
    GitHub Issue     ┌────┴─────┐
                     ▼          ▼
                  [Add to    [Update
                   RAG KB]    Prompt]
```

Feedback Categories

| Category | Description | Action |
| --- | --- | --- |
| `wrong_data` | Bot provided incorrect information | Fix search/tool logic |
| `outdated_data` | Data exists but is stale | Trigger data refresh |
| `wrong_language` | Responded in wrong language | Update system prompt |
| `unclear_response` | Response was confusing | Improve response templates |
| `feature_request` | User wants new functionality | Add to product backlog |
| `spam` | Irrelevant or test feedback | Dismiss |
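The category-to-action mapping above lends itself to a simple dispatch table. The action strings are taken from the table; the lookup helper itself is an illustrative sketch, not the actual Vartovii code.

```python
# Dispatch table mirroring the feedback categories documented above.
CATEGORY_ACTIONS = {
    "wrong_data": "Fix search/tool logic",
    "outdated_data": "Trigger data refresh",
    "wrong_language": "Update system prompt",
    "unclear_response": "Improve response templates",
    "feature_request": "Add to product backlog",
    "spam": "Dismiss",
}

def action_for(category: str) -> str:
    """Map a classified category to its follow-up action."""
    # Unknown categories fall back to manual review rather than failing.
    return CATEGORY_ACTIONS.get(category, "Review manually")
```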

How Classification Works

When negative feedback is received, the AI analyzes it and returns a structured classification:

```json
{
  "category": "wrong_data",
  "is_actionable": true,
  "urgency": "high",
  "root_cause": "Bot failed to search database correctly",
  "suggested_fix": "Check DB before saying 'not found'",
  "knowledge_gap": "How to handle crypto project name variations"
}
```

Knowledge Gap Detection

When knowledge_gap contains actionable information (not "N/A" or "technical issue"), the system automatically:

  1. Creates a learning document in knowledge-base/docs/ai-agent/learnings/
  2. Includes the problem, correct behavior, and example
  3. Adds metadata for tracking
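Step 1 implies deriving a file path from the gap title. The learnings directory is from this page; the slugification rules below are an assumption for illustration.

```python
# Sketch: derive a learning-document path from a knowledge-gap title.
import re

LEARNINGS_DIR = "knowledge-base/docs/ai-agent/learnings"

def learning_doc_path(gap: str) -> str:
    """Turn a knowledge-gap title into a Markdown file path."""
    # Lowercase, collapse non-alphanumeric runs to hyphens (assumed scheme).
    slug = re.sub(r"[^a-z0-9]+", "-", gap.lower()).strip("-")
    return f"{LEARNINGS_DIR}/{slug}.md"
```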

Learning Document Structure

```md
---
title: "Learning: How to handle crypto project name variations"
description: "AI learning from user feedback"
---

# Problem Identified
**Root Cause:** User asked about "Enso Finance" but project is stored as "Enso".

# Correct Behavior
Use fuzzy matching to find projects with common name variations.

# Example
**User Query:** "Analyze Enso Finance"
**Correct Response:** "I found Enso (ENSO) with Trust Score 66..."
```
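Rendering a document in this format is a templating exercise. The front-matter keys and section headings below match the structure shown above; `render_learning_doc` itself is a hypothetical helper, not the actual Vartovii code.

```python
# Illustrative sketch: assemble a learning document with the same
# front-matter and sections as the documented structure.

def render_learning_doc(gap: str, root_cause: str, behavior: str,
                        query: str, response: str) -> str:
    """Render one learning document as Markdown with front-matter."""
    return (
        "---\n"
        f'title: "Learning: {gap}"\n'
        'description: "AI learning from user feedback"\n'
        "---\n\n"
        "# Problem Identified\n"
        f"**Root Cause:** {root_cause}\n\n"
        "# Correct Behavior\n"
        f"{behavior}\n\n"
        "# Example\n"
        f'**User Query:** "{query}"\n'
        f'**Correct Response:** "{response}"\n'
    )
```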

Admin Panel Controls

Access via Admin Panel → Prioritized Feedback:

  • 🤖 Auto-Classify All - Process all pending feedback with AI
  • 📌 Action - Mark as actionable (needs fix)
  • ✗ Dismiss - Mark as spam/irrelevant
  • Stats Dashboard - View negative/positive/pending counts

API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| `/api/admin/feedback` | GET | List all feedback with categories |
| `/api/admin/feedback/classify-all` | POST | AI-classify all pending |
| `/api/admin/feedback/{id}` | PATCH | Update status manually |
| `/api/admin/feedback/stats` | GET | Get feedback statistics |
| `/api/admin/knowledge/learnings` | GET | List learning documents |
| `/api/admin/knowledge/sync` | POST | Trigger RAG sync |

RAG Integration

Learning documents are indexed by Vertex AI Search:

  1. Documents are created in knowledge-base/docs/ai-agent/learnings/
  2. On deploy, Docusaurus builds the knowledge base
  3. Vertex AI Search indexes the new content
  4. Future queries retrieve relevant learnings via RAG

Best Practices

  1. Review classifications - AI isn't perfect; verify important cases manually
  2. Filter spam - Dismiss test or irrelevant feedback promptly
  3. Track patterns - Multiple similar feedback items indicate a systemic issue
  4. Regular syncs - Deploy regularly to incorporate new learnings

Technical Implementation

  • Backend: services/feedback_processor.py - AI classification
  • Backend: services/knowledge_updater.py - Document generation
  • Frontend: AdminPanel.jsx - Feedback management UI
  • Database: chat_logs table with review workflow fields