Effects Composition¶
The CacheStore generator produces an optional Eff<RT, T> composition layer on top of the plain .NET cache interfaces. This enables functional effect pipelines with built-in OpenTelemetry tracing.
Capability Interface¶
The generator produces a single capability interface per cache store:
```csharp
public interface IHasAppCache
{
    IProductCache ProductCache { get; }
    IOrderCache OrderCache { get; }

    // ... one property per cached entity
}
```
Use this as a runtime constraint on effect methods — any runtime type implementing IHasAppCache can resolve caches at execution time.
Effect Methods¶
Effect methods are generated in nested static classes on the cache store container:
```csharp
public static partial class AppCache
{
    public static partial class Products
    {
        public static Eff<RT, Option<Product>> Get<RT>(ProductId id)
            where RT : IHasAppCache => ...

        public static Eff<RT, Unit> Set<RT>(ProductId id, Product value, TimeSpan? expiration = null)
            where RT : IHasAppCache => ...

        public static Eff<RT, Unit> Remove<RT>(ProductId id)
            where RT : IHasAppCache => ...
    }
}
```
Each method lifts the corresponding IProductCache async call into an Eff<RT, T> value.
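Conceptually, each generated method captures the cache from the runtime and lifts the async call into the effect monad. A minimal sketch of what one such lift might look like, assuming a LanguageExt-style `liftEff` and an `IProductCache.GetAsync` member (both assumptions, not the literal generated output):

```csharp
// Sketch only — the real generated body may differ.
public static Eff<RT, Option<Product>> Get<RT>(ProductId id)
    where RT : IHasAppCache =>
    liftEff<RT, Option<Product>>(async rt =>
        await rt.ProductCache.GetAsync(id));
```

The key point is that nothing runs at construction time: the lambda only executes when the `Eff` is eventually run against a runtime that supplies the cache.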
OpenTelemetry Tracing¶
Built-in instrumentation
CacheStore generates its own effect methods with built-in OpenTelemetry instrumentation. Each effect method includes .WithActivity() calls that create tracing spans automatically. Set Instrumented = false on [CacheStore] to disable it.
When Instrumented = true (the default), every effect method creates an OpenTelemetry activity span. This gives you distributed tracing out of the box without any additional configuration.
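The spans surface through the standard OpenTelemetry .NET pipeline, so the only wiring needed is registering the generator's `ActivitySource`. A hypothetical setup — the source name `"AppCache"` is an assumption; check the generated code for the actual name:

```csharp
// Hypothetical wiring — the ActivitySource name is an assumption.
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddSource("AppCache")
        .AddConsoleExporter());
```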
Composing with [Runtime] and [Uses]¶
Wire the capability interface into your runtime to enable effect composition:
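A sketch of the wiring, using the attribute names from this section's heading. The exact `[Uses]` syntax is an assumption and may differ in your version of the generator:

```csharp
// Hypothetical shape — consult the [Runtime]/[Uses] reference for exact syntax.
[Runtime]
[Uses(typeof(IHasAppCache))]
public readonly partial struct AppRuntime;
```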
The runtime resolves all cache interfaces from the DI container, making all AppCache.* effect methods available in your pipelines.
Effect Pipeline Examples¶
Use LINQ from/select syntax to compose cache operations:
```csharp
// Typed IDs
[TypedId] public readonly partial struct ProductId;
[TypedId] public readonly partial struct CategoryId;

// Cached entities
[Cached(ExpirationSeconds = 600)]
public record Product(ProductId Id, string Name, decimal Price);

[Cached(ExpirationSeconds = 3600, KeyPrefix = "cat")]
public record Category(CategoryId Id, string Name);

// Cache store
[CacheStore]
public static partial class AppCache;

// Read-through cache pipeline
var program =
    from cached in AppCache.Products.Get<AppRuntime>(productId)
    from product in cached.Match(
        Some: p => SuccessEff(p),
        None: () =>
            from p in AppStore.Products.GetById<AppRuntime>(productId)
            from _ in p.Match(
                Some: found => AppCache.Products.Set<AppRuntime>(productId, found),
                None: () => unitEff)
            select p.IfNoneUnsafe(() => null!))
    select product;
```
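The pipeline above is only a description of work. To execute it, run the `Eff` against a runtime instance; a sketch assuming a LanguageExt-style `Run` returning `Fin<T>` and an already-constructed `AppRuntime` (how the runtime is built is an assumption):

```csharp
// Fin<Product> carries either the product or the failure.
var result = program.Run(runtime);
result.Match(
    Succ: product => Console.WriteLine($"Got {product.Name}"),
    Fail: error   => Console.WriteLine($"Failed: {error}"));
```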
Cache Invalidation¶
Remove individual entries after mutation:
```csharp
var updateAndInvalidate =
    AppStore.Products.Save<AppRuntime>(updatedProduct)
    >> AppCache.Products.Remove<AppRuntime>(updatedProduct.Id);
```
Provider-specific operations
Bulk invalidation (e.g., "invalidate all products") is a provider-specific operation. Redis supports pattern-based key deletion via SCAN+DEL, but the standard IDistributedCache does not. Provider-specific cache packages (e.g., Deepstaging.Redis) can offer these capabilities as dedicated effect modules.
Concurrency¶
The cache-aside pattern (Get → miss → GetById → Set) is not atomic. Within a single from/select pipeline, effects execute strictly in order — but separate pipelines handling concurrent requests can interleave freely.
Stale write after invalidation¶
The most common race occurs when a reader populates the cache between a writer's store update and cache removal:
```
Time  Request A (Writer)       Request B (Reader)
────  ───────────────────────  ──────────────────────────────
T1                             Cache.Get(id) → None (miss)
T2                             Store.GetById(id) → Product v1
T3    Store.Save(Product v2)
T4    Cache.Remove(id)
T5                             Cache.Set(id, Product v1) ← stale
```
After T5, the cache holds v1 while the store has v2. Every subsequent reader sees stale data until the entry expires or is explicitly removed again.
Thundering herd¶
When a popular entry expires, many concurrent requests all miss the cache and hit the datastore simultaneously. Correctness is preserved (all write the same value), but the datastore sees a burst of redundant queries.
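A common application-level mitigation, independent of CacheStore, is request coalescing ("single-flight"): concurrent callers for the same key share one in-flight datastore load instead of each issuing their own. A minimal sketch in plain C#:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Generic single-flight guard (not part of CacheStore): concurrent callers
// for the same key await the same Task; the entry is removed on completion
// so later misses trigger a fresh load.
public sealed class SingleFlight<TKey, TValue> where TKey : notnull
{
    private readonly ConcurrentDictionary<TKey, Lazy<Task<TValue>>> _inFlight = new();

    public async Task<TValue> RunAsync(TKey key, Func<Task<TValue>> load)
    {
        var lazy = _inFlight.GetOrAdd(key,
            _ => new Lazy<Task<TValue>>(load, LazyThreadSafetyMode.ExecutionAndPublication));
        try
        {
            return await lazy.Value;
        }
        finally
        {
            _inFlight.TryRemove(key, out _);
        }
    }
}
```

Wrapping the `GetById` step of a cache-aside pipeline in `RunAsync` turns a burst of N concurrent misses into a single datastore query.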
TTL is the primary mitigation¶
Always set ExpirationSeconds on [Cached] entities that are backed by a mutable data store. TTL bounds how long stale data can persist — even in the worst-case interleaving, the entry self-heals when it expires.
```csharp
// Good — staleness bounded to 10 minutes
[Cached(ExpirationSeconds = 600)]
public record Product(ProductId Id, string Name, decimal Price);

// Risky — stale data persists indefinitely
[Cached]
public record Product(ProductId Id, string Name, decimal Price);
```
These races are inherent to the cache-aside pattern in any system, not specific to Deepstaging. For most workloads, short TTLs provide sufficient consistency.