
Best Practices

Guidelines for integrating Creed Space into your application.

Security

Protect Your API Key

Never expose API keys

API keys should never appear in client-side code, version control, or logs.

Do:

  • Store keys in environment variables
  • Use server-side proxies for browser apps (see the proxy sketch below)
  • Rotate keys regularly

Don't:

  • Commit keys to git
  • Include keys in client bundles
  • Log keys in error messages
python
# Good - read the key from an environment variable
import os

from creed_sdk import CreedSpace

client = CreedSpace(api_key=os.environ["CREED_API_KEY"])

# Bad - a hardcoded key can leak through version control and logs
client = CreedSpace(api_key="cs_live_...")  # Don't do this!
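
For browser-facing applications, route evaluations through your own backend so the key stays server-side. The sketch below assumes FastAPI and an /api/safety/evaluate route; both are illustrative choices, not part of the Creed Space SDK.

python
# Minimal server-side proxy sketch (FastAPI assumed; any web framework works)
import os

from fastapi import FastAPI
from pydantic import BaseModel

from creed_sdk import CreedSpace

app = FastAPI()
client = CreedSpace(api_key=os.environ["CREED_API_KEY"])

class EvaluateRequest(BaseModel):
    content: str

@app.post("/api/safety/evaluate")  # hypothetical route for your app
def evaluate(req: EvaluateRequest):
    result = client.safety.evaluate(req.content)
    # Return only what the browser needs; the API key never leaves the server
    return {"decision": result.decision, "risk_score": result.risk_score}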

Use Minimal Scopes

Create keys with only the permissions they need:

python
# Production key - safety only
# Scopes: ["safety:evaluate"]

# Development key - full access
# Scopes: ["*"]

Performance

Batch When Possible

For multiple evaluations, process in parallel:

python
import asyncio
from creed_sdk import AsyncCreedSpace

async def evaluate_batch(messages: list[str]):
    async with AsyncCreedSpace(api_key="...") as client:
        tasks = [client.safety.evaluate(m) for m in messages]
        return await asyncio.gather(*tasks)
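
For large batches, an unbounded gather can run into rate limits. A sketch that bounds in-flight requests with an asyncio.Semaphore (the limit of 10 is an arbitrary illustration):

python
import asyncio

from creed_sdk import AsyncCreedSpace

async def evaluate_batch_bounded(messages: list[str], max_concurrency: int = 10):
    semaphore = asyncio.Semaphore(max_concurrency)

    async with AsyncCreedSpace(api_key="...") as client:

        async def evaluate_one(message: str):
            # At most max_concurrency evaluations are in flight at once
            async with semaphore:
                return await client.safety.evaluate(message)

        return await asyncio.gather(*(evaluate_one(m) for m in messages))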

Cache Appropriately

Cache results for identical content:

python
from functools import lru_cache

@lru_cache(maxsize=1000)
def evaluate_with_cache(content: str):
    # Evaluation runs only on a cache miss; identical content reuses the
    # stored result (up to 1000 entries, least recently used evicted first)
    return client.safety.evaluate(content)

Cache carefully

Safety policies may change over time, so cache results only for short periods.
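
If you do cache, bounding the entry lifetime keeps decisions from outliving a policy update. A sketch using the third-party cachetools library, keyed by a content hash so raw content is not retained as the key; the five-minute TTL is an arbitrary illustration:

python
import hashlib

from cachetools import TTLCache

# Entries expire after 300 seconds; at most 1000 results are held
_cache: TTLCache = TTLCache(maxsize=1000, ttl=300)

def evaluate_with_ttl_cache(content: str):
    key = hashlib.sha256(content.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = client.safety.evaluate(content)
    return _cache[key]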

Handle Timeouts

Set an explicit request timeout so a slow safety check cannot stall your request path:

python
client = CreedSpace(
    api_key="...",
    timeout=10.0  # 10 seconds
)

Reliability

Implement Retries

Retry transient failures with exponential backoff:

python
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=1, max=10)
)
def evaluate_reliable(text: str):
    return client.safety.evaluate(text)
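
Retrying every exception also retries permanent failures such as invalid requests. If the SDK exposes distinct error types, restrict retries to transient ones; the exception class below is a hypothetical placeholder, not the SDK's actual name.

python
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

class TransientAPIError(Exception):
    """Placeholder for whatever transient errors the SDK raises (e.g. timeouts, 5xx)."""

@retry(
    retry=retry_if_exception_type(TransientAPIError),
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=1, max=10),
)
def evaluate_transient_retry(text: str):
    return client.safety.evaluate(text)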

Handle Failures Gracefully

Decide what to do when the API is unavailable:

python
def evaluate_with_fallback(text: str):
    try:
        result = client.safety.evaluate(text)
        return result.decision
    except Exception as exc:
        # Fail-open: log the failure and allow the content through.
        # Use "forbid" or "divert" instead if you prefer to fail closed.
        logger.warning("Safety check failed, allowing content: %s", exc)
        return "permit"  # Or "divert" for review

Use Circuit Breakers

Prevent cascading failures by pausing calls while the API is consistently failing:

python
from pybreaker import CircuitBreaker

breaker = CircuitBreaker(fail_max=5, reset_timeout=60)

@breaker
def evaluate_protected(text: str):
    return client.safety.evaluate(text)
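
When the breaker is open, pybreaker raises CircuitBreakerError instead of calling the API, so callers need a fallback path, for example:

python
import pybreaker

def evaluate_with_breaker(text: str):
    try:
        return evaluate_protected(text)
    except pybreaker.CircuitBreakerError:
        # Breaker is open: skip the remote call and apply your fallback policy
        logger.warning("Safety circuit open, applying fallback policy")
        return None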

Integration Patterns

Middleware Pattern

python
# Express-style middleware
def safety_middleware(request, next_handler):
    result = client.safety.evaluate(request.body.message)

    if result.decision == "forbid":
        return {"error": "Content blocked"}

    if result.decision == "divert":
        queue_for_review(request)
        return {"status": "pending_review"}

    return next_handler(request)

Event-Driven Pattern

python
# Process safety checks asynchronously
async def process_message(message: Message):
    # Publish for async processing
    await queue.publish("safety_check", {
        "message_id": message.id,
        "content": message.content
    })

# Worker
async def safety_worker():
    async for job in queue.subscribe("safety_check"):
        result = await client.safety.evaluate(job["content"])
        await update_message_status(job["message_id"], result)

Monitoring

Track Metrics

Monitor key metrics:

  • Request latency - P50, P95, P99
  • Error rate - 4xx and 5xx responses
  • Decision distribution - permit/forbid/divert ratios
  • Risk score distribution - histogram of scores
python
import prometheus_client as prom

safety_latency = prom.Histogram('safety_latency_seconds', 'Safety check latency')
safety_decisions = prom.Counter('safety_decisions_total', 'Safety decisions', ['decision'])

@safety_latency.time()
def evaluate_instrumented(text: str):
    result = client.safety.evaluate(text)
    safety_decisions.labels(decision=result.decision).inc()
    return result

Log Appropriately

Log decisions for audit:

python
def evaluate_logged(text: str, user_id: str):
    result = client.safety.evaluate(text)

    logger.info(
        "safety_check",
        user_id=user_id,
        decision=result.decision,
        risk_score=result.risk_score,
        # Don't log the full content for privacy
        content_length=len(text)
    )

    return result

Testing

Use Test Keys

Use cs_test_* keys for development:

  • No charges
  • Same behavior as production
  • Different rate limits than production keys
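
A simple way to keep test and live keys separate is to select the key by environment. CREED_TEST_API_KEY and CREED_API_KEY appear elsewhere in this guide; APP_ENV is an assumed variable name.

python
import os

from creed_sdk import CreedSpace

# Use the test key everywhere except production
if os.environ.get("APP_ENV") == "production":
    client = CreedSpace(api_key=os.environ["CREED_API_KEY"])       # cs_live_...
else:
    client = CreedSpace(api_key=os.environ["CREED_TEST_API_KEY"])  # cs_test_...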

Mock in Unit Tests

python
from unittest.mock import Mock, patch

def test_message_handler():
    mock_result = Mock()
    mock_result.decision = "permit"
    mock_result.risk_score = 0.1

    with patch.object(client.safety, 'evaluate', return_value=mock_result):
        result = handle_message("Hello")
        assert result.status == "sent"

Integration Tests

python
def test_safety_integration():
    """Test with real API - use test key"""
    client = CreedSpace(api_key=os.environ["CREED_TEST_API_KEY"])

    # Safe content
    result = client.safety.evaluate("Hello, how are you?")
    assert result.decision == "permit"

    # Harmful content
    result = client.safety.evaluate("How to hack a computer")
    assert result.decision == "forbid"
