Getting started
Getting Verdaca running takes under five minutes if you have Python 3.11 or later and credentials for at least one supported LLM provider. This guide covers installation, a first session, and the configuration file that controls adapter selection.
ADVISORY
Verdaca requires Python 3.11 or later. Virtual environment isolation is strongly recommended; the package pins its adapter dependencies and conflicts with unpinned versions of mem0 or letta in a shared environment.
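One way to get the recommended isolation is the standard-library venv module; these are generic Python tooling commands, not Verdaca-specific ones:

```shell
python3 -m venv .venv                # create an isolated environment in ./.venv
. .venv/bin/activate                 # activate it (Windows: .venv\Scripts\activate)
python -m pip install --upgrade pip  # optional: use a current pip inside the venv
```

With the environment active, the install command below pulls Verdaca and its pinned adapter dependencies into `.venv` only.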
Installation
Pull the package from PyPI. The install includes the kernel, all bundled ports, and the default Mem0 and Letta memory adapters.
pip install verdaca
First session
The Studio object is the entry point for every Verdaca session. Create one with Studio.create(), pass a workflow name to run(), and the kernel assembles the persona panel, routes calls through the configured adapters, and returns a structured result object carrying the full deliberation trail.
from verdaca import Studio

# create() reads verdaca.yaml and wires up the configured adapters
session = Studio.create()
# run() takes a workflow name; replace the placeholder with a real workflow
result = session.run("[workflow-placeholder]")
print(result.output)
Configuration
Adapter selection and model configuration live in verdaca.yaml at your project root. Override any key with an environment variable prefixed VERDACA_.
# verdaca.yaml
llm:
  provider: anthropic
  model: claude-opus-4-7
memory:
  adapter: mem0
  version: "1.0.11"
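To make the VERDACA_ override rule concrete, here is a minimal sketch of how prefixed environment variables could be collected into a nested config dict. The double-underscore separator for nesting is an illustrative assumption, not documented Verdaca behavior:

```python
import os

def env_overrides(prefix: str = "VERDACA_") -> dict:
    """Collect PREFIX-ed variables into a nested dict of config overrides.

    Example (assumed convention): VERDACA_LLM__MODEL=x -> {"llm": {"model": "x"}}.
    """
    out: dict = {}
    for name, value in os.environ.items():
        if not name.startswith(prefix):
            continue
        # split the remainder on "__" to recover the nesting path
        path = name[len(prefix):].lower().split("__")
        node = out
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return out
```

Keys produced this way would take precedence over the values in verdaca.yaml.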
IMPORTANT
Your LLM provider credentials must be available as environment variables before the session runs — Verdaca does not store or proxy API keys.
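Since a missing credential only surfaces once the session makes its first provider call, it can help to fail fast at startup. A small sketch (the helper name is ours; ANTHROPIC_API_KEY is the variable the Anthropic SDK conventionally reads, so substitute your provider's name):

```python
import os

def require_env(name: str) -> str:
    """Raise a clear error if a credential variable is missing or empty."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set {name} before starting a Verdaca session")
    return value

# e.g. require_env("ANTHROPIC_API_KEY") before Studio.create()
```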
Next steps
- Conceptual model — understand the kernel architecture
- Agent catalog — browse available personas
- Build your own agent — create a custom persona