Client API Reference
Complete reference for the RecallClient class and all available methods.
RecallClient
The main client class for interacting with Recall's hybrid memory system.
Constructor
```typescript
// TypeScript
new RecallClient(options?: RecallClientOptions)

interface RecallClientOptions {
  redisUrl?: string;
  mem0ApiKey?: string;
  environment?: string;
  cacheConfig?: CacheConfig;
  syncConfig?: SyncConfig;
  [key: string]: any;
}
```

```python
# Python
RecallClient(
    redis_url: str | None = None,
    mem0_api_key: str | None = None,
    environment: str = "development",
    cache_config: CacheConfig | None = None,
    sync_config: SyncConfig | None = None,
    **kwargs
)
```
Parameters
Parameter | Type | Default | Description |
---|---|---|---|
redis_url | string | "redis://localhost:6379" | Redis connection URL |
mem0_api_key | string | None | Mem0 API key for cloud storage |
environment | string | "development" | Environment name (development, staging, production) |
cache_config | CacheConfig | None | Cache configuration options |
sync_config | SyncConfig | None | Synchronization configuration |
Example
```python
from recall import RecallClient, CacheConfig, SyncConfig

# Basic initialization
client = RecallClient(
    redis_url="redis://localhost:6379",
    mem0_api_key="m0-xxxxxxxxxxxx"
)

# With configuration
client = RecallClient(
    redis_url="redis://localhost:6379",
    mem0_api_key="m0-xxxxxxxxxxxx",
    environment="production",
    cache_config=CacheConfig(ttl=3600),
    sync_config=SyncConfig(mode="eager")
)
```
```typescript
import { RecallClient } from "@recall/client";

// Basic initialization
const client = new RecallClient({
  redisUrl: "redis://localhost:6379",
  mem0ApiKey: "m0-xxxxxxxxxxxx",
});

// With configuration
const client = new RecallClient({
  redisUrl: "redis://localhost:6379",
  mem0ApiKey: "m0-xxxxxxxxxxxx",
  environment: "production",
  cacheConfig: { ttl: 3600 },
  syncConfig: { mode: "eager" },
});
```
Core Methods
add()
Add a new memory to the system.
```python
add(
    content: str,
    user_id: str,
    priority: str = "medium",
    metadata: dict | None = None,
    async_mode: bool = False
) -> dict
```

```typescript
add(options: AddMemoryOptions): Promise<Memory>

interface AddMemoryOptions {
  content: string;
  userId: string;
  priority?: Priority;
  metadata?: Record<string, any>;
  asyncMode?: boolean;
}
```
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
content | string | Yes | The memory content to store |
user_id | string | Yes | User identifier |
priority | string | No | Priority level: "critical", "high", "medium", "low" |
metadata | object | No | Additional metadata |
async_mode | boolean | No | If true, returns immediately without waiting for cloud sync |
Returns
A dictionary/object containing:
- id: Unique memory identifier
- content: The stored content
- user_id: Associated user ID
- priority: Assigned priority level
- created_at: Creation timestamp
- metadata: Any additional metadata
Example
```python
memory = client.add(
    content="User prefers dark theme",
    user_id="user_123",
    priority="high",
    metadata={
        "category": "preferences",
        "source": "settings_update"
    }
)

print(f"Memory ID: {memory['id']}")
# Output: Memory ID: mem_abc123xyz
```

```typescript
const memory = await client.add({
  content: "User prefers dark theme",
  userId: "user_123",
  priority: "high",
  metadata: {
    category: "preferences",
    source: "settings_update",
  },
});

console.log(`Memory ID: ${memory.id}`);
// Output: Memory ID: mem_abc123xyz
```
search()
Search for relevant memories using semantic similarity.
```python
search(
    query: str,
    user_id: str | None = None,
    limit: int = 10,
    filters: dict | None = None,
    threshold: float = 0.0
) -> list[dict]
```

```typescript
search(options: SearchOptions): Promise<Memory[]>

interface SearchOptions {
  query: string;
  userId?: string;
  limit?: number;
  filters?: Record<string, any>;
  threshold?: number;
}
```
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
query | string | Yes | Search query |
user_id | string | No | Filter by user ID |
limit | integer | No | Maximum results to return (default: 10) |
filters | object | No | Metadata filters |
threshold | float | No | Minimum relevance score (0.0 to 1.0) |
Returns
Array of memory objects, each containing:
- All memory fields
- score: Relevance score (0.0 to 1.0)
- source: Whether the result came from "cache" or "cloud"
Example
```python
results = client.search(
    query="user preferences for UI",
    user_id="user_123",
    limit=5,
    filters={"category": "preferences"},
    threshold=0.7
)

for memory in results:
    print(f"{memory['content']} (score: {memory['score']:.2f})")
```

```typescript
const results = await client.search({
  query: "user preferences for UI",
  userId: "user_123",
  limit: 5,
  filters: { category: "preferences" },
  threshold: 0.7,
});

results.forEach((memory) => {
  console.log(`${memory.content} (score: ${memory.score.toFixed(2)})`);
});
```
get()
Retrieve a specific memory by ID.
```python
get(memory_id: str) -> dict | None
```

```typescript
get(memoryId: string): Promise<Memory | null>
```
Example
```python
memory = client.get("mem_abc123xyz")
if memory:
    print(f"Content: {memory['content']}")
else:
    print("Memory not found")
```

```typescript
const memory = await client.get("mem_abc123xyz");
if (memory) {
  console.log(`Content: ${memory.content}`);
} else {
  console.log("Memory not found");
}
```
update()
Update an existing memory.
```python
update(
    memory_id: str,
    content: str | None = None,
    priority: str | None = None,
    metadata: dict | None = None
) -> dict
```

```typescript
update(options: UpdateOptions): Promise<Memory>

interface UpdateOptions {
  memoryId: string;
  content?: string;
  priority?: Priority;
  metadata?: Record<string, any>;
}
```
Example
```python
from datetime import datetime

updated = client.update(
    memory_id="mem_abc123xyz",
    priority="critical",
    metadata={"last_accessed": datetime.now().isoformat()}
)
```

```typescript
const updated = await client.update({
  memoryId: "mem_abc123xyz",
  priority: "critical",
  metadata: { lastAccessed: new Date().toISOString() },
});
```
delete()
Delete a memory from both cache and cloud storage.
```python
delete(memory_id: str) -> bool
```

```typescript
delete(memoryId: string): Promise<boolean>
```
Example
```python
success = client.delete("mem_abc123xyz")
print(f"Deleted: {success}")
```

```typescript
const success = await client.delete("mem_abc123xyz");
console.log(`Deleted: ${success}`);
```
get_all()
Retrieve all memories for a user.
```python
get_all(
    user_id: str,
    limit: int | None = None,
    offset: int = 0
) -> list[dict]
```

```typescript
getAll(options: GetAllOptions): Promise<Memory[]>

interface GetAllOptions {
  userId: string;
  limit?: number;
  offset?: number;
}
```
Example
```python
memories = client.get_all(
    user_id="user_123",
    limit=100,
    offset=0
)
print(f"Total memories: {len(memories)}")
```

```typescript
const memories = await client.getAll({
  userId: "user_123",
  limit: 100,
  offset: 0,
});
console.log(`Total memories: ${memories.length}`);
```
Batch Operations
add_batch()
Add multiple memories in a single operation.
```python
add_batch(memories: list[dict]) -> list[dict]
```

```typescript
addBatch(memories: AddMemoryOptions[]): Promise<Memory[]>
```
Example
```python
memories = [
    {
        "content": "Prefers email notifications",
        "user_id": "user_123",
        "priority": "high"
    },
    {
        "content": "Works in tech industry",
        "user_id": "user_123",
        "priority": "medium"
    }
]

results = client.add_batch(memories)
print(f"Added {len(results)} memories")
```

```typescript
const memories = [
  {
    content: "Prefers email notifications",
    userId: "user_123",
    priority: "high",
  },
  {
    content: "Works in tech industry",
    userId: "user_123",
    priority: "medium",
  },
];

const results = await client.addBatch(memories);
console.log(`Added ${results.length} memories`);
```
delete_batch()
Delete multiple memories by ID.
```python
delete_batch(memory_ids: list[str]) -> dict
```

```typescript
deleteBatch(memoryIds: string[]): Promise<BatchDeleteResult>
```
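Example

A minimal sketch of a batch delete; the keys of the returned dictionary are not documented above, so the result is simply printed. The memory IDs are illustrative placeholders.

```python
memory_ids = ["mem_abc123xyz", "mem_def456uvw"]

# The return value summarizes the outcome of the batch delete;
# its exact structure depends on your Recall version.
result = client.delete_batch(memory_ids)
print(result)
```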
Cache Management
cache_stats()
Get detailed cache statistics.
```python
cache_stats() -> dict
```

```typescript
cacheStats(): Promise<CacheStats>
```
Returns
1{2 "size": 1234, # Number of cached items3 "memory_usage": "45.6MB", # Memory used4 "hit_rate": 0.92, # Cache hit rate5 "miss_rate": 0.08, # Cache miss rate6 "evictions": 156, # Number of evictions7 "avg_ttl": 3600, # Average TTL in seconds8 "by_priority": {9 "critical": 10,10 "high": 234,11 "medium": 567,12 "low": 42313 }14}
1interface CacheStats {2 size: number;3 memoryUsage: string;4 hitRate: number;5 missRate: number;6 evictions: number;7 avgTtl: number;8 byPriority: {9 critical: number;10 high: number;11 medium: number;12 low: number;13 };14}
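Example

A brief sketch showing how these statistics might be used for monitoring, using only the fields documented above (Python field names):

```python
stats = client.cache_stats()

print(f"Hit rate: {stats['hit_rate']:.0%}")      # e.g. 92%
print(f"Memory usage: {stats['memory_usage']}")  # e.g. 45.6MB

# Flag a cache dominated by low-priority entries
if stats["by_priority"]["low"] > stats["size"] / 2:
    print("Consider lowering the TTL for low-priority memories")
```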
optimize_cache()
Optimize the cache by removing stale entries and reorganizing entries based on access patterns.
```python
optimize_cache(
    aggressive: bool = False
) -> dict
```

```typescript
optimizeCache(options?: OptimizeOptions): Promise<OptimizeResult>
```
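Example

A minimal sketch based on the signature above; the contents of the returned dictionary are not specified here, so the result is printed as-is:

```python
# Routine optimization
result = client.optimize_cache()
print(result)

# Aggressive optimization reclaims more space, at the cost of
# evicting entries that might still be accessed
result = client.optimize_cache(aggressive=True)
```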
clear_cache()
Clear the cache for a specific user, or clear it entirely.
```python
clear_cache(user_id: str | None = None) -> bool
```

```typescript
clearCache(userId?: string): Promise<boolean>
```
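Example

A minimal sketch based on the signature above:

```python
# Clear cached memories for a single user
client.clear_cache(user_id="user_123")

# Clear the entire cache
client.clear_cache()
```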
Synchronization
sync()
Manually trigger synchronization between cache and cloud.
```python
sync(
    direction: str = "bidirectional",
    force: bool = False
) -> dict
```

```typescript
sync(options?: SyncOptions): Promise<SyncResult>

interface SyncOptions {
  direction?: "bidirectional" | "to_cloud" | "from_cloud";
  force?: boolean;
}
```
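Example

A minimal sketch based on the signature above; the structure of the returned dictionary is not documented here, so it is printed as-is:

```python
# Push local cache changes to cloud storage
result = client.sync(direction="to_cloud")
print(result)

# Force a full bidirectional sync
result = client.sync(direction="bidirectional", force=True)
```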
Health & Monitoring
health_check()
Check the health status of all components.
```python
health_check() -> dict
```

```typescript
healthCheck(): Promise<HealthStatus>
```
Returns
1{2 "status": "healthy",3 "timestamp": "2024-01-15T10:30:00Z",4 "components": {5 "redis": {6 "status": "healthy",7 "latency_ms": 1.2,8 "version": "7.0.5"9 },10 "mem0": {11 "status": "healthy",12 "latency_ms": 45.3,13 "quota_used": 0.2314 },15 "cache": {16 "status": "healthy",17 "size": 1234,18 "memory_usage": "45.6MB"19 }20 },21 "version": "1.0.0"22}
1interface HealthStatus {2 status: "healthy" | "degraded" | "unhealthy";3 timestamp: string;4 components: {5 redis: ComponentHealth;6 mem0: ComponentHealth;7 cache: ComponentHealth;8 };9 version: string;10}
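Example

A brief sketch that polls the health check and reports any non-healthy components, using only the fields shown above:

```python
health = client.health_check()

if health["status"] != "healthy":
    for name, component in health["components"].items():
        if component["status"] != "healthy":
            print(f"Component {name} is {component['status']}")
else:
    print(f"All components healthy (version {health['version']})")
```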
Configuration Classes
CacheConfig
```python
class CacheConfig:
    ttl: int | dict[str, int | None] = 3600
    max_memory: str = "512mb"
    eviction_policy: str = "allkeys-lru"
    compression: bool = False
    warm_cache: bool = True
```

```typescript
interface CacheConfig {
  ttl?: number | Record<Priority, number | null>;
  maxMemory?: string;
  evictionPolicy?: string;
  compression?: boolean;
  warmCache?: boolean;
}
```
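Example

A sketch of a per-priority TTL configuration. It assumes `CacheConfig` is importable from the top-level `recall` package and that a `None` TTL disables expiry for that priority, as the `int | None` annotation suggests:

```python
from recall import RecallClient, CacheConfig

cache_config = CacheConfig(
    ttl={
        "critical": None,  # assumed: never expires
        "high": 86400,     # 24 hours
        "medium": 3600,    # 1 hour
        "low": 600,        # 10 minutes
    },
    max_memory="1gb",
    compression=True,
)

client = RecallClient(cache_config=cache_config)
```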
SyncConfig
```python
class SyncConfig:
    mode: str = "lazy"  # lazy, eager, manual
    batch_size: int = 100
    interval: int = 60
    retry_policy: str = "exponential"
    max_retries: int = 3
```

```typescript
interface SyncConfig {
  mode?: "lazy" | "eager" | "manual";
  batchSize?: number;
  interval?: number;
  retryPolicy?: string;
  maxRetries?: number;
}
```
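Example

A sketch of an eager sync configuration. It assumes `SyncConfig` is importable from the top-level `recall` package and that `interval` is expressed in seconds, consistent with the other time values in this reference:

```python
from recall import RecallClient, SyncConfig

sync_config = SyncConfig(
    mode="eager",               # one of "lazy", "eager", "manual"
    batch_size=50,
    interval=30,                # assumed: seconds between background syncs
    retry_policy="exponential",
    max_retries=5,
)

client = RecallClient(sync_config=sync_config)
```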
Error Handling
Exception Types
```python
from recall.exceptions import (
    RecallError,          # Base exception
    ConnectionError,      # Redis/Mem0 connection issues
    AuthenticationError,  # Invalid API key
    ValidationError,      # Invalid parameters
    CacheError,           # Cache-specific errors
    SyncError,            # Synchronization errors
    RateLimitError        # API rate limiting
)

try:
    client.add(content="", user_id="")
except ValidationError as e:
    print(f"Invalid input: {e}")
except RecallError as e:
    print(f"Recall error: {e}")
```

```typescript
import {
  RecallError,
  ConnectionError,
  AuthenticationError,
  ValidationError,
  CacheError,
  SyncError,
  RateLimitError,
} from "@recall/client";

try {
  await client.add({ content: "", userId: "" });
} catch (error) {
  if (error instanceof ValidationError) {
    console.log(`Invalid input: ${error.message}`);
  } else if (error instanceof RecallError) {
    console.log(`Recall error: ${error.message}`);
  }
}
```
Async Support
Async Client (Python)
```python
from recall import AsyncRecallClient
import asyncio

async def main():
    client = AsyncRecallClient(
        redis_url="redis://localhost:6379",
        mem0_api_key="your-api-key"
    )

    # Async methods
    memory = await client.add(
        content="Async memory",
        user_id="user_123"
    )

    results = await client.search(
        query="async operations",
        user_id="user_123"
    )

    # Concurrent operations
    tasks = [
        client.add(content=f"Memory {i}", user_id="user_123")
        for i in range(10)
    ]
    memories = await asyncio.gather(*tasks)

asyncio.run(main())
```
Next Steps
- Explore advanced features
- Learn about webhooks and events
- Review best practices
- Check SDK references for language-specific details