Caching Strategy
Easy AppServer implements multi-level caching across permissions, settings, and assets to minimize latency and database load while maintaining consistency through event-driven invalidation.
Overview
The platform uses three primary caching layers:
- Permission Cache: Multi-level (local + Redis + OpenFGA)
- Settings Cache: In-memory with event-based invalidation
- Asset Cache: In-memory with TTL and size limits
All caches use event-driven invalidation to maintain consistency across distributed appserver instances.
Permission Caching
Code Reference: pkg/v2/infrastructure/permission/cache/cache.go:12
Cache Layers
L1: In-Memory Cache (per instance)
- TTL: None (lives for process lifetime)
- Eviction: Manual invalidation only
- Latency: < 1ms

L2: Redis Cache (shared)
- TTL: Configurable (default 5 minutes)
- Latency: 2-5ms

L3: OpenFGA (source of truth)
- No TTL (always fresh)
- Unlimited size
- Latency: 10-50ms
Cache Flow
Permission Check:
├─ Check L1 (in-memory)
│  ├─ Hit: Return (< 1ms)
│  └─ Miss: Check L2 (Redis)
│     ├─ Hit: Store in L1, Return (2-5ms)
│     └─ Miss: Query L3 (OpenFGA)
│        ↓
│        Store in L2 and L1
│        ↓
│        Return (10-50ms)
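The lookup-and-backfill flow above can be sketched in Go. This is a minimal illustration only: the `Layer` interface and the in-memory stubs stand in for the real Redis and OpenFGA clients and are not from the codebase.

```go
package main

import "fmt"

// Layer is any cache level that can answer a permission check.
// In the real system, L1 is a process-local map, L2 is Redis,
// and L3 is OpenFGA; here simple in-memory stubs stand in for all three.
type Layer interface {
	Check(key string) (allowed bool, found bool)
	Store(key string, allowed bool)
}

type memLayer struct{ m map[string]bool }

func (l *memLayer) Check(key string) (bool, bool)  { v, ok := l.m[key]; return v, ok }
func (l *memLayer) Store(key string, allowed bool) { l.m[key] = allowed }

// check walks L1 -> L2 -> L3 and back-fills every faster layer that missed,
// so subsequent checks for the same tuple hit L1 directly.
func check(key string, layers []Layer) bool {
	for i, layer := range layers {
		if allowed, found := layer.Check(key); found {
			for j := 0; j < i; j++ { // back-fill the layers that missed
				layers[j].Store(key, allowed)
			}
			return allowed
		}
	}
	return false // deny when no layer, including the source of truth, knows the tuple
}

func main() {
	l1 := &memLayer{m: map[string]bool{}}
	l2 := &memLayer{m: map[string]bool{}}
	l3 := &memLayer{m: map[string]bool{"user:alice:viewer:app:todos": true}} // source of truth
	key := "user:alice:viewer:app:todos"
	fmt.Println(check(key, []Layer{l1, l2, l3})) // true (miss in L1/L2, hit in L3)
	_, inL1 := l1.Check(key)
	fmt.Println(inL1) // true (back-filled into L1)
}
```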
Invalidation
Tuple Changed (create/delete):
↓
Publish permission.invalidate event
↓
All instances receive event
↓
Clear L1 cache for affected resource
↓
Delete from L2 (Redis)
↓
Next check queries OpenFGA (fresh data)
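A subscriber for the permission.invalidate event might look like the following sketch. The event type, field names, and cache layout are assumptions, with plain maps standing in for the local L1 cache and the shared Redis layer.

```go
package main

import (
	"fmt"
	"strings"
)

// InvalidateEvent is a hypothetical payload for the permission.invalidate event.
type InvalidateEvent struct{ Resource string }

type permissionCache struct {
	l1 map[string]bool // local in-memory entries, keyed by tuple
	l2 map[string]bool // stands in for the shared Redis layer here
}

// onInvalidate drops every cached tuple touching the affected resource from
// both layers, so the next check falls through to OpenFGA for fresh data.
func (c *permissionCache) onInvalidate(ev InvalidateEvent) {
	for key := range c.l1 {
		if strings.Contains(key, ev.Resource) {
			delete(c.l1, key)
		}
	}
	for key := range c.l2 {
		if strings.Contains(key, ev.Resource) {
			delete(c.l2, key)
		}
	}
}

func main() {
	c := &permissionCache{
		l1: map[string]bool{"user:alice:viewer:app:todos": true, "user:bob:viewer:app:notes": true},
		l2: map[string]bool{"user:alice:viewer:app:todos": true},
	}
	c.onInvalidate(InvalidateEvent{Resource: "app:todos"})
	fmt.Println(len(c.l1), len(c.l2)) // 1 0
}
```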
Performance Metrics
Target Hit Rates:
- L1: 85-90%
- L2: 5-10%
- L3 (miss): 5%
Overall latency: ~1ms average
Settings Caching
Code Reference: pkg/v2/application/settings/cache.go:12
Cache Structure
type SettingsCache struct {
    entries map[uuid.UUID]*CacheEntry // appID -> entry
    ttl     time.Duration             // Default: 60 seconds (configurable via settings.cache_ttl)
}

type CacheEntry struct {
    Definitions []*settings.SettingDefinition
    Values      []*settings.SettingValue
    CachedAt    time.Time
}
Cache Behavior
Get:
func (c *SettingsCache) Get(appID uuid.UUID) (*CacheEntry, bool) {
    entry, exists := c.entries[appID]
    if !exists || entry.IsExpired(c.ttl) {
        return nil, false // Cache miss
    }
    return entry, true // Cache hit
}
Set:
func (c *SettingsCache) Set(
    appID uuid.UUID,
    definitions []*settings.SettingDefinition,
    values []*settings.SettingValue,
) {
    c.entries[appID] = &CacheEntry{
        Definitions: definitions,
        Values:      values,
        CachedAt:    time.Now(),
    }
}
Event-Driven Invalidation
Settings Updated:
↓
Publish settings.updated event
↓
All subscribed instances receive the event
↓
Invalidate cache entry: cache.Invalidate(appID)
↓
Next Get fetches fresh data from the database
Code Reference: pkg/v2/application/settings/settings_service.go:47
Cleanup Loop
Automatic removal of expired entries:
func (c *SettingsCache) cleanupLoop() {
    ticker := time.NewTicker(c.ttl / 2) // With default TTL=60s this runs every 30s
    for range ticker.C {
        c.cleanup() // Remove expired entries
    }
}
Asset Caching
Code Reference: pkg/v2/application/ui/cache_manager.go:8
Cache Configuration
type CacheManager struct {
    cache    map[string]*CachedAsset
    ttl      time.Duration // Default: 5 minutes (configurable via UI.cache_ttl)
    maxSize  int64         // Max bytes: configurable (default 100 MB)
    currSize int64         // Current usage
}

type CachedAsset struct {
    Data         []byte
    MimeType     string
    SHA256       string
    ETag         string
    SRI          string
    CachedAt     int64
    LastModified int64
}
Cache Operations
Get with Expiration Check:
func (cm *CacheManager) Get(key string) *CachedAsset {
    asset, exists := cm.cache[key]
    if !exists {
        return nil
    }
    // Check if expired
    if time.Since(time.UnixMilli(asset.CachedAt)) > cm.ttl {
        return nil
    }
    return asset
}
Set with Eviction:
func (cm *CacheManager) Set(key string, asset *CachedAsset) {
    assetSize := int64(len(asset.Data))
    // Evict entries if over limit
    if cm.currSize+assetSize > cm.maxSize {
        cm.evictOldest(assetSize)
    }
    asset.CachedAt = time.Now().UnixMilli()
    cm.cache[key] = asset
    cm.currSize += assetSize
}
Eviction Strategy
Simple eviction when the cache is full. Note that Go randomizes map iteration order, so entries are evicted in effectively random order rather than strict FIFO:
func (cm *CacheManager) evictOldest(requiredSize int64) {
    // Remove entries until enough space
    for key, asset := range cm.cache {
        if cm.currSize+requiredSize <= cm.maxSize {
            break
        }
        cm.currSize -= int64(len(asset.Data))
        delete(cm.cache, key)
    }
}
Prefix-Based Invalidation
Clear all assets for a specific app:
func (cm *CacheManager) DeleteByPrefix(prefix string) int {
    deleted := 0
    for key, asset := range cm.cache {
        if strings.HasPrefix(key, prefix) {
            cm.currSize -= int64(len(asset.Data))
            delete(cm.cache, key)
            deleted++
        }
    }
    return deleted
}
Usage:
// Clear all assets for app "de.easy-m.todos"
cache.DeleteByPrefix("de.easy-m.todos:")
Cache Consistency Patterns
Write-Through
Settings and permissions use a write-through-style pattern: the write goes to the database first, and the cache entry is invalidated rather than updated in place:
Update Operation:
├─ Write to database
├─ Invalidate cache
└─ Next read fetches fresh data
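The update path above can be sketched with toy repository and cache types. All names here are illustrative, not from the codebase.

```go
package main

import "fmt"

type repo struct{ data map[string]string }

func (r *repo) update(key, value string) { r.data[key] = value }
func (r *repo) get(key string) string    { return r.data[key] }

type cache struct{ entries map[string]string }

func (c *cache) invalidate(key string)         { delete(c.entries, key) }
func (c *cache) get(key string) (string, bool) { v, ok := c.entries[key]; return v, ok }

// updateSetting follows the write path above: persist first, then
// invalidate, so the next read repopulates the cache with fresh data.
func updateSetting(r *repo, c *cache, key, value string) {
	r.update(key, value)
	c.invalidate(key)
}

func main() {
	r := &repo{data: map[string]string{}}
	c := &cache{entries: map[string]string{"theme": "light"}}
	updateSetting(r, c, "theme", "dark")
	_, cached := c.get("theme")
	fmt.Println(cached, r.get("theme")) // false dark
}
```

Invalidating rather than writing the new value into the cache avoids serving a stale value if the database write fails partway through.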
Cache-Aside (Lazy Loading)
Assets use cache-aside:
Read Operation:
├─ Check cache
│  ├─ Hit: Return cached
│  └─ Miss:
│     ├─ Fetch from database
│     ├─ Store in cache
│     └─ Return
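The read path above reduces to a few lines (a sketch with toy store and cache types; names are illustrative):

```go
package main

import "fmt"

type assetStore struct{ data map[string][]byte }

func (s *assetStore) fetch(key string) []byte { return s.data[key] }

type assetCache struct{ entries map[string][]byte }

// getAsset is the cache-aside read path: check the cache first, and on a
// miss fetch from the backing store and populate the cache before returning.
func getAsset(c *assetCache, s *assetStore, key string) []byte {
	if data, ok := c.entries[key]; ok {
		return data // hit
	}
	data := s.fetch(key) // miss: load from the backing store
	c.entries[key] = data
	return data
}

func main() {
	s := &assetStore{data: map[string][]byte{"main.js": []byte("console.log('hi')")}}
	c := &assetCache{entries: map[string][]byte{}}
	getAsset(c, s, "main.js") // first read misses and populates the cache
	_, cached := c.entries["main.js"]
	fmt.Println(cached) // true
}
```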
Event-Driven Invalidation
Caches invalidate on relevant events (when properly configured):
Event: app.uninstalled
↓
Handlers:
├─ Permission cache: Clear all tuples for app
├─ Settings cache: Invalidate(appID)
└─ Asset cache: DeleteByPrefix("appName:") (if uiService.Start() called)
Asset cache invalidation: The UI service's Start() method (pkg/v2/application/ui/ui_service.go:200-239) subscribes to app.* events for automatic cache invalidation, but is not called during server startup (pkg/v2/server/services.go:330-357).
Current behavior:
- ✅ Permission cache: Automatically invalidated via events
- ✅ Settings cache: Automatically invalidated via events
- ❌ Asset cache: NOT automatically invalidated (requires manual clearing or server restart)
To enable asset cache invalidation, call uiService.Start(ctx) during server initialization.
Distributed Consistency
Event Bus Coordination
Instance A                  RabbitMQ               Instance B

Update settings
    ↓
Invalidate local cache
    ↓
Publish event ───────→      Route event ─────────→ Receive event
                                                       ↓
                                               Invalidate local cache
This ensures eventual consistency across all instances.
Redis as Shared Cache
Permission L2 cache (Redis) provides shared state:
Instance A: Check L1 → Miss → Check Redis (L2) → Hit
Instance B: Check L1 → Miss → Check Redis (L2) → Hit (same cached data)
Cache Monitoring
Metrics to Track
Permission Cache:
- L1 hit rate (target: > 85%)
- L2 hit rate (target: 5-10%)
- Average check latency (target: < 2ms)
- Cache size (entries and bytes)
Settings Cache:
- Hit rate (target: > 90%)
- Average retrieval time
- Invalidation frequency
- Expired entry cleanup rate
Asset Cache:
- Hit rate (target: > 95%)
- Cache size vs max size
- Eviction frequency
- Average serve latency
Alerts
- Permission cache hit rate < 80%
- Settings cache size growing unbounded
- Asset cache eviction rate > 10/min
- Cache invalidation failures
Tuning Guidelines
Permission Cache
Increase L1 size if:
- Hit rate < 85%
- Working set fits in memory
- Low cache churn
Increase TTL if:
- Permissions change infrequently
- Acceptable staleness window
- OpenFGA load is high
Settings Cache
Increase TTL if:
- Settings rarely change
- Database load is high
- Acceptable eventual consistency
Decrease TTL if:
- Strict consistency required
- Settings change frequently
Asset Cache
Increase max size if:
- Evictions happening frequently
- Adequate memory available
- Many unique assets
Decrease TTL if:
- Assets updated frequently
- Want faster invalidation
Best Practices
Cache Key Design
Use Structured Keys:
Good: "permission:user:alice:viewer:app:todos"
Good: "settings:app:de.easy-m.todos"
Good: "asset:de.easy-m.todos:main.js"
Bad: "alicetodosviewer" // Ambiguous
Bad: "app_settings_123" // Non-descriptive
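Structured, colon-delimited keys can be produced by a small helper (a hypothetical sketch; the real code may build keys differently). A useful side effect of this layout is that keys compose naturally with prefix-based invalidation such as DeleteByPrefix.

```go
package main

import (
	"fmt"
	"strings"
)

// cacheKey joins namespace parts with ':' to build structured,
// prefix-friendly keys like the examples above.
func cacheKey(parts ...string) string {
	return strings.Join(parts, ":")
}

func main() {
	fmt.Println(cacheKey("settings", "app", "de.easy-m.todos")) // settings:app:de.easy-m.todos
	fmt.Println(cacheKey("asset", "de.easy-m.todos", "main.js")) // asset:de.easy-m.todos:main.js
}
```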
Invalidation Strategy
Invalidate on Write:
async function updateSetting(appID, key, value) {
  await repository.update(key, value);
  cache.invalidate(appID); // Immediate local invalidation
  await eventBus.publish('settings.updated', { appID }); // Notify other instances
}
Graceful Degradation:
async function getSetting(key) {
  try {
    const cached = cache.get(key);
    if (cached) return cached;
  } catch (err) {
    logger.warn('Cache unavailable, falling back to DB', err);
  }
  // Always fall back to the database on a miss or cache failure
  return repository.get(key);
}
Related Concepts
- Permission Model - Permission caching implementation
- Settings Management - Settings cache details
- Asset Serving & Microfrontends - Asset cache
- Event-Driven Architecture - Event-based invalidation
- Platform Architecture - Redis and infrastructure
Further Reading
- Caching Patterns - Common caching strategies
- Redis Best Practices - Redis usage patterns