# Agents & Memory
Integrate Endee with CrewAI to give your agents persistent vector-based memory for contextual reasoning and knowledge retrieval.
## Memory Types
CrewAI supports different memory types that can use Endee as a backend:
| Memory Type | Description |
|---|---|
| ShortTermMemory | Recent context and conversation history |
| EntityMemory | Information about entities (people, places, concepts) |
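Both memory types delegate persistence to a storage backend. As a rough illustration of the `save`/`search`/`reset` contract such a backend fulfils, here is a toy in-memory stand-in (hypothetical and for illustration only; Endee provides the real vector-based implementation, and this sketch substitutes naive keyword matching for similarity search):

```python
class ToyMemoryStorage:
    """Minimal sketch of the storage contract a CrewAI memory backend meets."""

    def __init__(self):
        self._items = []

    def save(self, value, metadata=None):
        # Store the raw text plus any metadata
        self._items.append({"context": value, "metadata": metadata or {}})

    def search(self, query, limit=3, **kwargs):
        # Naive keyword match stands in for vector similarity
        hits = [
            item for item in self._items
            if any(w.lower() in item["context"].lower() for w in query.split())
        ]
        return hits[:limit]

    def reset(self):
        self._items = []


store = ToyMemoryStorage()
store.save("Python is dynamically typed.", {"typing": "dynamic"})
print(store.search("Python typing")[0]["context"])  # → Python is dynamically typed.
```

A real backend embeds each saved text and ranks search results by vector similarity; the interface the memory objects call, however, stays this small.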
## Creating CrewAI Memory Objects
Use your Endee vector store as the storage backend for CrewAI memory:
```python
from crewai.memory import ShortTermMemory, EntityMemory

# Create CrewAI memory objects with Endee as the backend
# (memory_store is the EndeeVectorStore created in the complete example below)
short_memory = ShortTermMemory(storage=memory_store)
entity_memory = EntityMemory(storage=memory_store)
```

## Defining an Agent with Memory
Create an agent that uses Endee-backed memory:
```python
from crewai import Agent, LLM

# Define the LLM (use any valid model)
llm = LLM(model="gemini-2.5-flash-lite", api_key="<GOOGLE_API_KEY>")

# Define an agent with memory
agent = Agent(
    role="Programming Language Expert",
    goal="Answer questions using stored programming language knowledge.",
    backstory="Consult documents and provide concise answers.",
    llm=llm,
    memory=short_memory,
    verbose=True,
)
```

## Defining Tasks
Create tasks for your agents:
```python
from crewai import Task

# Define a task
task = Task(
    description="Answer questions about programming languages.",
    agent=agent,
    expected_output="Concise and accurate answers.",
)
```

## Running a Crew
Bring it all together with a Crew:
```python
from crewai import Crew, Process

# Run the Crew with Endee-backed memory
crew = Crew(
    agents=[agent],
    tasks=[task],
    process=Process.sequential,
    memory=True,
    short_term_memory=short_memory,
    entity_memory=entity_memory,
    verbose=True,
)

result = crew.kickoff()
print(result)
```

## Complete Example
Here’s a full working example:
```python
import time

from endee_crewai import EndeeVectorStore
from crewai import Agent, Crew, Task, Process, LLM
from crewai.memory import ShortTermMemory, EntityMemory

# Initialize the Endee vector store
embedder_config = {
    "provider": "cohere",
    "config": {"model_name": "small", "api_key": "<COHERE_API_KEY>"},
}

memory_store = EndeeVectorStore(
    type="programming_knowledge",
    api_token="<ENDEE_API_TOKEN>",
    embedder_config=embedder_config,
    space_type="cosine",
    crew=None,
)

# Reset the store and populate it with knowledge
memory_store.reset()
time.sleep(2)  # give the reset a moment to complete

documents = [
    ("Python is dynamically typed.", {"creator": "Guido van Rossum", "typing": "dynamic"}),
    ("Go is statically typed with great concurrency.", {"creator": "Robert Griesemer", "typing": "static"}),
    ("JavaScript is the language of the web.", {"creator": "Brendan Eich", "typing": "dynamic"}),
]
for text, meta in documents:
    memory_store.save(text, meta)
    time.sleep(1)  # brief pause between writes while documents are indexed

# Create CrewAI memory objects backed by the store
short_memory = ShortTermMemory(storage=memory_store)
entity_memory = EntityMemory(storage=memory_store)

# Define the LLM
llm = LLM(model="gemini-2.5-flash-lite", api_key="<GOOGLE_API_KEY>")

# Define the agent
agent = Agent(
    role="Programming Language Expert",
    goal="Answer questions using stored programming language knowledge.",
    backstory="Consult documents and provide concise answers.",
    llm=llm,
    memory=short_memory,
    verbose=True,
)

# Define the task
task = Task(
    description="What are the differences between Python and Go?",
    agent=agent,
    expected_output="A concise comparison of Python and Go.",
)

# Run the Crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    process=Process.sequential,
    memory=True,
    short_term_memory=short_memory,
    entity_memory=entity_memory,
    verbose=True,
)

result = crew.kickoff()
print(result)
```

## Helper Functions
Create helper functions to validate your memory store:
```python
def check_language_vectors(memory_store):
    """Check what's stored in the vector memory."""
    queries = ["Python language features", "Go concurrency"]
    for q in queries:
        results = memory_store.search(query=q, limit=3)
        for r in results:
            print(r["context"])

# Test the memory store
check_language_vectors(memory_store)