$ cat ./self-rag-pipeline/README.md
// self-evaluating rag pipeline: the llm grades its own retrieved docs, checks its draft answer for hallucinations, and validates that the answer addresses the question before returning it. built as a stateful langgraph workflow.
// stack: langgraph · langchain · groq · chroma · huggingface · python
// graph
START
↓
init_groq_model // load llama 3.3 70b via groq
↓
prepare_vector_database // scrape urls → chunk → embed → chroma
↓
fetch_relevant_docs // vector similarity search
↓
filter_docs_by_relevance // llm grades each doc: yes or no
↓ (conditional)
├── no relevant docs → END
└── docs found ↓
produce_answer // rag prompt → answer draft
↓
detect_hallucination // is the answer grounded in the retrieved docs?
↓
validate_answer // does the answer address the question?
↓
END
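// the chunk step inside prepare_vector_database, as a minimal stdlib sketch. the real pipeline presumably uses a langchain text splitter before embedding into chroma; the function name and the chunk_size / chunk_overlap values here are illustrative assumptions, not the project's config.

```python
def chunk_text(text: str, chunk_size: int = 500, chunk_overlap: int = 50) -> list[str]:
    """split text into windows of chunk_size chars overlapping by chunk_overlap."""
    # assumption: simple character windows; real splitters respect separators
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

// the overlap keeps a sentence that straddles a boundary retrievable from either neighboring chunk.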
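// the three grader nodes (filter_docs_by_relevance, detect_hallucination, validate_answer) all reduce an llm reply to yes/no. the real pipeline likely requests structured output from the model; this tolerant fallback parser is an illustrative assumption, not the project's code.

```python
def parse_binary_grade(raw: str) -> bool:
    """map a free-form llm reply to True (yes) / False (anything else)."""
    words = raw.strip().lower().split()
    if not words:
        return False  # empty reply counts as "no"
    # tolerate replies like "Yes, the document is relevant."
    return words[0].strip(".,!:") == "yes"
```

// each grader node runs this on the model's reply, and the conditional edge routes on the boolean.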
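// the graph above, walked in plain python to show the control flow without the langgraph library: state is a dict, each node writes an update, and the conditional edge after grading short-circuits to END when nothing survives. the callables in state (retriever, grader, generator, ...) stand in for the real groq/chroma-backed nodes and are assumptions for illustration.

```python
def run_pipeline(state: dict) -> dict:
    # fetch_relevant_docs: vector similarity search (stubbed via state["retriever"])
    state["docs"] = state["retriever"](state["question"])
    # filter_docs_by_relevance: llm grades each doc yes/no
    state["docs"] = [d for d in state["docs"] if state["grader"](state["question"], d)]
    # conditional edge: no relevant docs -> END
    if not state["docs"]:
        state["answer"] = None
        return state
    # produce_answer: rag prompt -> answer draft
    state["answer"] = state["generator"](state["question"], state["docs"])
    # detect_hallucination: is the draft grounded in the filtered docs?
    state["grounded"] = state["hallucination_grader"](state["answer"], state["docs"])
    # validate_answer: does the draft address the question?
    state["useful"] = state["answer_grader"](state["question"], state["answer"])
    return state
```

// in the actual workflow the same shape is expressed as langgraph nodes plus a conditional edge, with the dict replaced by the graph's shared state.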