The Settings class is the central configuration backbone of RAGLight. It provides a single source of truth for:
  • Provider Identifiers (LLMs, embeddings, vector stores)
  • Default Models (optimized for local use)
  • Environment Configuration (loading .env automatically)
Philosophy: RAGLight avoids “magic values”. All defaults are exposed as constants in Settings, making pipelines predictable, debuggable, and easy to override.

Global Setup

Logging

Before building any pipeline, it is recommended to initialize the logging system. This ensures that RAGLight’s internal logs (ingestion progress, retrieval stats) are formatted correctly.
from raglight.config.settings import Settings

# Call this once at the start of your script to enable structured logging
Settings.setup_logging()

Provider Identifiers

RAGLight uses string constants to identify providers. Using these constants prevents typo-related errors when configuring your RAGConfig.

LLM & Embeddings

These constants are used in RAGConfig.provider and VectorStoreConfig.provider.
Constant                 | Value            | Description
Settings.OLLAMA          | "ollama"         | Recommended for local use. Requires a running Ollama instance.
Settings.OPENAI          | "openai"         | Uses the OpenAI API (requires an API key).
Settings.MISTRAL         | "mistral"        | Uses the Mistral API (requires an API key).
Settings.VLLM            | "vllm"           | Connects to a vLLM server (high throughput).
Settings.LMSTUDIO        | "lmstudio"       | Connects to an LM Studio local server.
Settings.GOOGLE_GEMINI   | "google_gemini"  | Uses Google’s Gemini API.
Settings.HUGGINGFACE     | "huggingface"    | Embeddings only. Runs locally via sentence-transformers.

Vector Stores

Constant          | Value     | Description
Settings.CHROMA   | "chroma"  | Default. Local, persistent vector database.
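
As a minimal sketch, here is how these constants might be wired into the two configs. The import paths for RAGConfig and VectorStoreConfig, and any keyword other than provider, are assumptions — check the signatures in your installed RAGLight version:
from raglight.config.settings import Settings
from raglight.config.rag_config import RAGConfig                    # import path assumed
from raglight.config.vector_store_config import VectorStoreConfig   # import path assumed

# The constants are plain strings, but referencing them through Settings
# turns a typo into an immediate AttributeError instead of a silent
# misconfiguration discovered at query time.
store_config = VectorStoreConfig(provider=Settings.CHROMA)
rag_config = RAGConfig(provider=Settings.OLLAMA)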

Default Configuration

RAGLight ships with “batteries-included” defaults, tuned to run on a typical local machine (Apple Silicon or an NVIDIA GPU).
# Default generation model (usually "llama3")
Settings.DEFAULT_LLM

# Default reasoning / agentic model
Settings.DEFAULT_REASONING_LLM
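
Both constants are ordinary strings, so overriding a default is just a matter of passing a different value wherever your config expects a model name. A small sketch (the model keyword is illustrative, not confirmed by this page):
from raglight.config.settings import Settings

print(Settings.DEFAULT_LLM)             # inspect the shipped default
print(Settings.DEFAULT_REASONING_LLM)

# Override by supplying any model your provider actually serves, e.g.:
#   RAGConfig(provider=Settings.OLLAMA, model="mistral-small")
# ('model' is an assumed keyword; check your RAGConfig signature)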

System Prompts

You can inspect or override the default system prompts used by the pipelines. These are designed to be readable and inspectable.
# Standard RAG prompt
print(Settings.DEFAULT_SYSTEM_PROMPT)

# Agentic RAG prompt (Tool use & reasoning rules)
print(Settings.DEFAULT_AGENT_PROMPT)
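
Because the prompts are plain strings, a common pattern is to extend the default rather than replace it. How the resulting prompt is handed to a pipeline depends on your config; the string construction itself is just Python:
from raglight.config.settings import Settings

# Keep the tested base behaviour and append domain-specific rules
custom_prompt = (
    Settings.DEFAULT_SYSTEM_PROMPT
    + "\nCite the source document for every claim."
)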

Environment Variables

Settings automatically looks for a .env file in your working directory using python-dotenv. This is the secure way to manage API keys and custom endpoints without hardcoding them.
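
If your .env lives outside the working directory, you can point python-dotenv at it yourself before building any config:
from dotenv import load_dotenv

# Explicitly load a .env from a non-default location; variables already
# set in the environment win unless override=True is passed.
load_dotenv(dotenv_path="path/to/your/.env")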

API Keys

Supported keys for remote providers:
  • OPENAI_API_KEY
  • MISTRAL_API_KEY
  • GEMINI_API_KEY
  • ANTHROPIC_API_KEY

Custom Endpoints

URLs for local/custom servers:
  • OLLAMA_CLIENT_URL (default: http://localhost:11434)
  • LMSTUDIO_CLIENT
  • OPENAI_CLIENT_URL (useful for vLLM and other OpenAI-compatible APIs)
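
To verify which endpoint a run will actually use, you can read the variable with its documented fallback:
import os

# Mirrors the documented default for OLLAMA_CLIENT_URL
ollama_url = os.environ.get("OLLAMA_CLIENT_URL", "http://localhost:11434")
print(f"Ollama endpoint: {ollama_url}")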

Example .env File

Create a .env file in your project root:
# Use a custom Ollama instance on another machine
OLLAMA_CLIENT_URL=http://192.168.1.50:11434

# OpenAI Fallback
OPENAI_API_KEY=sk-proj-12345...

Ignore Folders

When using FolderSource to ingest data, RAGLight automatically ignores common build, cache, and system directories to keep your index clean and relevant.
# Returns a list of ignored directories
ignored_list = Settings.DEFAULT_IGNORE_FOLDERS
Includes by default:
  • .git, .vscode, .idea
  • __pycache__, venv, env
  • node_modules
  • dist, build
  • .DS_Store
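
DEFAULT_IGNORE_FOLDERS is a plain Python list, so it can be copied and extended for a specific ingestion run. The keyword used to hand it to FolderSource below is hypothetical — check the FolderSource signature in your version:
from raglight.config.settings import Settings

# Copy first so the shared default list is never mutated in place
my_ignores = list(Settings.DEFAULT_IGNORE_FOLDERS) + ["notebooks", ".cache"]

# e.g. FolderSource(path="./docs", ignore_folders=my_ignores)
# ('ignore_folders' is an assumed parameter name)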