
Basic Usage

This guide covers creating documents, building vector indexes, and performing semantic search with Endee and LlamaIndex.

Creating Documents

Create documents with metadata for filtering and organization:

from llama_index.core import Document

# Create sample documents with different categories and metadata
documents = [
    Document(
        text="Python is a high-level, interpreted programming language known for its readability and simplicity.",
        metadata={"category": "programming", "language": "python", "difficulty": "beginner"}
    ),
    Document(
        text="JavaScript is a scripting language that enables interactive web pages and is an essential part of web applications.",
        metadata={"category": "programming", "language": "javascript", "difficulty": "intermediate"}
    ),
    Document(
        text="Machine learning is a subset of artificial intelligence that provides systems the ability to automatically learn and improve from experience.",
        metadata={"category": "ai", "field": "machine_learning", "difficulty": "advanced"}
    ),
    Document(
        text="Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning.",
        metadata={"category": "ai", "field": "deep_learning", "difficulty": "advanced"}
    ),
    Document(
        text="Vector databases are specialized database systems designed to store and query high-dimensional vectors for similarity search.",
        metadata={"category": "database", "type": "vector", "difficulty": "intermediate"}
    ),
    Document(
        text="Endee is a vector database that provides secure and private vector search capabilities.",
        metadata={"category": "database", "type": "vector", "product": "endee", "difficulty": "intermediate"}
    )
]

print(f"Created {len(documents)} sample documents")

Creating a Vector Index

Build a searchable vector index from your documents:

from llama_index.core import VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding()

# Create a vector index
# (storage_context is the Endee-backed StorageContext from the setup step)
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    embed_model=embed_model
)

print("Vector index created successfully")
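Under the hood, from_documents embeds each document and stores the resulting vectors; at query time the engine compares the query embedding against them, typically by cosine similarity. A toy sketch of that comparison, using made-up three-dimensional vectors as stand-ins for real embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.9, 0.2]
doc_a = [0.1, 0.8, 0.3]   # points in a similar direction -> high score
doc_b = [0.9, 0.1, 0.0]   # points in a different direction -> low score

print(cosine_similarity(query, doc_a) > cosine_similarity(query, doc_b))  # True
```

Real embeddings have hundreds or thousands of dimensions, and the vector database performs this comparison at scale, but the scoring idea is the same.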

Basic Retrieval with Query Engine

Create a query engine and perform semantic search:

# Create a query engine
query_engine = index.as_query_engine()

# Ask a question
response = query_engine.query("What is Python?")

print("Query: What is Python?")
print("Response:")
print(response)

Example Output:

Query: What is Python?
Response: Python is a high-level, interpreted programming language known for its readability and simplicity.
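The metadata attached when the documents were created can also narrow retrieval: an exact-match filter keeps only documents whose metadata contains the required key/value pairs, and similarity search then runs over that subset. The snippet below is a plain-Python sketch of that filtering step (using shortened stand-ins for the sample documents), not the Endee or LlamaIndex filtering API itself:

```python
# Shortened stand-ins for the sample documents created above
documents = [
    {"text": "Python is a high-level, interpreted programming language...",
     "metadata": {"category": "programming", "language": "python"}},
    {"text": "JavaScript is a scripting language for interactive web pages...",
     "metadata": {"category": "programming", "language": "javascript"}},
    {"text": "Machine learning is a subset of artificial intelligence...",
     "metadata": {"category": "ai", "field": "machine_learning"}},
    {"text": "Vector databases store and query high-dimensional vectors...",
     "metadata": {"category": "database", "type": "vector"}},
]

def filter_by_metadata(docs, **required):
    """Keep only documents whose metadata exactly matches every required key."""
    return [d for d in docs
            if all(d["metadata"].get(k) == v for k, v in required.items())]

programming_docs = filter_by_metadata(documents, category="programming")
print(len(programming_docs))  # 2
```

In a real pipeline the vector database applies this kind of filter server-side, so only matching vectors are considered during similarity search.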

Saving and Loading Indexes

Your vectors are stored in the cloud. Reconnect to your index in future sessions:

# To reconnect to an existing index in a future session:
from llama_index.core import VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding

def reconnect_to_index(api_token, index_name):
    # Initialize the vector store with the existing index
    # (EndeeVectorStore is imported in the setup step)
    vector_store = EndeeVectorStore.from_params(
        api_token=api_token,
        index_name=index_name
    )

    # Load the index directly from the vector store
    index = VectorStoreIndex.from_vector_store(
        vector_store,
        embed_model=OpenAIEmbedding()
    )
    return index

# Example usage
reconnected_index = reconnect_to_index(endee_api_token, index_name)
query_engine = reconnected_index.as_query_engine()
response = query_engine.query("What is Endee?")
print(response)

Important: Save your index_name to reconnect to your data later.
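One way to keep the index_name across sessions is to persist it locally. A minimal sketch using a JSON file (the endee_config.json path is an arbitrary choice for this example, not something the SDK requires):

```python
import json
from pathlib import Path

CONFIG_PATH = Path("endee_config.json")  # arbitrary local path

def save_index_name(index_name):
    """Write the index name to a small local config file."""
    CONFIG_PATH.write_text(json.dumps({"index_name": index_name}))

def load_index_name():
    """Read the index name back in a later session."""
    return json.loads(CONFIG_PATH.read_text())["index_name"]

save_index_name("llamaindex-demo")
print(load_index_name())  # llamaindex-demo
```

An environment variable or a secrets manager works just as well; the point is simply that the name must outlive the Python session.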

Cleanup

Delete the index when you’re done to free up resources:

# Uncomment to delete your index
# endee.delete_index(index_name)
# print(f"Index {index_name} deleted")

Warning: Deleting an index permanently removes all stored vectors and cannot be undone.

Next Steps