LatticeDB: Deterministic Storage for Time-Travel Simulations
Version: 1.0.0 | Status: In Development | Last Updated: 2026-01-10
Overview
LatticeDB is Orix’s deterministic storage system built for time-travel simulations. It provides tick-aware storage, built-in Chronicle time-travel, CRDT support, and deterministic query ordering—features that traditional databases lack.
The Problem
Traditional databases don’t understand simulation time. LatticeDB was built to solve specific problems that arise when you need deterministic, time-aware storage for simulations:
1. No Simulation Time Awareness
- PostgreSQL, MongoDB, and others track wall-clock time (`DateTime.Now`)
- They don't understand ticks, the discrete units of simulation time
- Querying “what was the state at tick 5000?” requires custom temporal extensions
- Time-travel queries are bolted on, not built in
2. No Built-In Time-Travel
- Historical state requires custom versioning tables
- Reconstructing past states means replaying change logs
- Branching timelines ("what if" scenarios) require application-level complexity
- No native snapshot/restore primitives
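Absent native primitives, applications end up hand-rolling the snapshot-plus-replay pattern themselves. A minimal, language-agnostic sketch (illustrative only; integer state and deltas stand in for real simulation state):

```python
# Hand-rolled time travel without native support: restore the newest
# snapshot at or before the target tick, then replay the change log
# forward until the exact tick is reached.
def state_at(snapshots, log, target_tick):
    base_tick = max((t for t in snapshots if t <= target_tick), default=0)
    state = snapshots.get(base_tick, 0)
    for tick, delta in log:
        if base_tick < tick <= target_tick:
            state += delta
    return state

snaps = {0: 0, 100: 42}                    # tick -> snapshotted state
log = [(50, 10), (101, 5), (150, -3)]      # (tick, delta) change log
assert state_at(snaps, log, 120) == 47     # snapshot @100, replay tick 101
assert state_at(snaps, log, 60) == 10      # snapshot @0, replay tick 50
```

Every application that needs history re-implements some variant of this, which is exactly the machinery Chronicle builds in.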
3. CRDT Support Missing
- Conflict-free data types (counters, sets, maps) require custom layers
- Multi-master replication needs manual conflict resolution
- No guarantees about merge convergence
4. Non-Deterministic Query Ordering
- `Dictionary<K,V>` iteration order varies between runs
- `HashSet<T>` enumeration is non-deterministic
- Index scan order can differ based on implementation details
- This breaks simulation determinism
5. Schema Enforcement Gaps
- MongoDB allows arbitrary document shapes
- PostgreSQL requires migrations for schema changes
- No compile-time guarantee that code matches database schema
How Lattice Solves It
Core Principles
- Tick-Aware by Default - Every query understands simulation time
- Chronicle Built-In - Time-travel is a first-class feature, not an extension
- Deterministic Ordering - Query results are stable across runs
- Schema Authority - Axion schemas define storage structures
- CRDT Native - Conflict-free data types for distributed consistency
Technical Approach
Tick-Aware Storage
```csharp
// Storage engine tracks tick metadata
public interface IStorageEngine
{
    void Put(ReadOnlySpan<byte> key, ReadOnlySpan<byte> value);
    bool TryGet(ReadOnlySpan<byte> key, out byte[] value);
    IEnumerable<KeyValuePair<byte[], byte[]>> Scan(ReadOnlySpan<byte> prefix);
}
```
```csharp
// Chronicle wraps storage with temporal versioning
public sealed class ChronicleEngine
{
    Snapshot CreateSnapshot(ulong tick, byte[] stateData, ulong stateHash);
    TravelResult TravelTo(ulong targetTick);
    BranchResult Branch(ulong snapshotId, BranchOptions options);
}
```
Deterministic Collections
All Lattice storage operations return results in sorted order:
- Keys are ordered lexicographically
- Scan operations return deterministic sequences
- CRDT merge operations preserve commutativity
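The ordering guarantee is simple to picture: if keys are always traversed in lexicographic byte order, every scan of the same data yields the same sequence regardless of insertion order. A minimal sketch of the property (Python for brevity; this illustrates the contract, not the engine internals):

```python
# A store whose keys are always traversed in sorted byte order yields
# the same scan sequence on every run, regardless of insertion order.
def scan(store, prefix):
    for key in sorted(store):          # lexicographic byte order
        if key.startswith(prefix):
            yield key, store[key]

store = {b"user:2": b"bob", b"item:1": b"axe", b"user:1": b"alice"}
assert [k for k, _ in scan(store, b"user:")] == [b"user:1", b"user:2"]
```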
Schema-Driven Types
```
// Axion schema defines structure
entity Transaction {
    @key id: uuid;
    state: TransactionState;
    start_tick: int64;
    commit_tick: int64?;
}
```
```csharp
// Generated code implements storage interface
// ILatticeEntity<T> for ORM integration
public interface ILatticeEntity<TSelf> where TSelf : struct, ILatticeEntity<TSelf>
{
    uint Id { get; set; }
    void Serialize(ref BitWriter writer);
    static abstract TSelf Deserialize(ref BitReader reader);
}
```
What Lattice Provides
Architecture Overview
```
┌─────────────────────────────────────────────────────────┐
│                    Application Layer                    │
├─────────────────────────────────────────────────────────┤
│   Lattice.ORM   │    Lattice.QL    │   Lattice.Client   │
│ (Active Record) │ (Query Language) │   (Network API)    │
├─────────────────────────────────────────────────────────┤
│          Lattice.Chronicle (Time-Travel Layer)          │
│    - Snapshots   - Branching   - Version Tracking       │
├─────────────────────────────────────────────────────────┤
│            Lattice.DB (Storage Engine Layer)            │
│    - Memory   - File   - Binary Pages                   │
├─────────────────────────────────────────────────────────┤
│          Lattice.CRDT (Conflict-Free Data Types)        │
│    - Counters   - Sets   - Maps                         │
├─────────────────────────────────────────────────────────┤
│        Lattice.Replication (Multi-Node Sync Layer)      │
│    - Sync Protocol  - Conflict Resolution  - Delta Logs │
└─────────────────────────────────────────────────────────┘
```
Component Status
| Component | Status | Description |
|---|---|---|
| Lattice.DB | ✅ Implemented | Core key-value storage engine |
| Lattice.Chronicle | ✅ Implemented | Time-travel snapshots and branching |
| Lattice.ORM | 🚧 In Progress | Object-relational mapping layer |
| Lattice.QL | 🚧 In Progress | Query language parser (GraphQL-like) |
| Lattice.CRDT | ✅ Implemented | GCounter, PNCounter, OR-Set, G-Set, 2P-Set |
| Lattice.Search | 📋 Planned | Full-text search indexing |
| Lattice.Replication | 🚧 In Progress | Multi-node synchronization |
| Lattice.Server | 🚧 In Progress | HTTP/gRPC/WebSocket transports |
| Lattice.Client | 🚧 In Progress | Client library for remote access |
| Lattice.Crypto | 🚧 In Progress | Capability-based access control |
| Lattice.Optimization | 🚧 In Progress | Query planner and batch executor |
Lattice.DB: Core Storage
Storage Engines
LatticeDB supports multiple backend engines:
```csharp
public enum StorageEngine
{
    Lmdb = 0,    // Lightning Memory-Mapped Database (default)
    Sqlite = 1,  // SQLite for compatibility
    RocksDb = 2, // RocksDB for high-throughput
    Memory = 3   // In-memory for testing
}
```
Key-Value Operations
```csharp
// Open storage
var engine = new MemoryStorageEngine();
engine.Open(":memory:");

// Store data
engine.Put("user:123"u8, userData);

// Retrieve data
if (engine.TryGet("user:123"u8, out var data))
{
    // Process data
}

// Range scan (deterministic order)
foreach (var kvp in engine.Scan("user:"u8))
{
    Console.WriteLine($"Key: {Encoding.UTF8.GetString(kvp.Key)}");
}

// Count keys with prefix
int userCount = engine.Count("user:"u8);

// Delete range
int deleted = engine.DeleteRange("user:"u8);
```
Transaction Support
```csharp
// Use transactional storage for atomic operations
if (engine is ITransactionalStorage txStorage)
{
    using var tx = txStorage.BeginTransaction();
    try
    {
        tx.Put("balance:123"u8, newBalance);
        tx.Put("history:123:5000"u8, historyEntry);
        tx.Commit();
    }
    catch
    {
        tx.Rollback();
        throw;
    }
}
```
Configuration
```
config StorageConfig {
    engine: StorageEngine = Lmdb;
    max_size_bytes: int64 = 0;      // 0 = unlimited
    page_size: int32 = 4096;        // 4KB pages
    mmap: bool = true;              // Memory-mapped I/O
    sync_mode: SyncMode = Normal;   // Write synchronization
    wal_enabled: bool = true;       // Write-ahead logging
}
```
Lattice.Chronicle: Time-Travel Engine
Chronicle enables snapshot-based time travel and timeline branching.
Creating Snapshots
```csharp
var chronicle = new ChronicleEngine(storageEngine);
chronicle.EnsureMainTimeline();

// Capture state at current tick
var snapshot = chronicle.CreateSnapshot(
    tick: 5000,
    stateData: SerializeGameState(),
    stateHash: ComputeStateHash(),
    options: new SnapshotOptions
    {
        Label = "Before boss fight",
        Compress = true
    });

Console.WriteLine($"Snapshot {snapshot.Id} at tick {snapshot.Tick}");
Console.WriteLine($"Size: {snapshot.Data.Length} bytes (compressed from {snapshot.UncompressedSize})");
```
Time-Travel Queries
```csharp
// Travel to a specific tick
var result = chronicle.TravelTo(targetTick: 3500);

if (result.Success)
{
    Console.WriteLine($"Found snapshot at tick {result.Snapshot.Tick}");
    Console.WriteLine($"Need to replay {result.TicksReplayed} ticks");
    Console.WriteLine($"Retrieved in {result.ElapsedMicroseconds} µs");

    // Restore state
    RestoreGameState(result.StateData);
}
```
Timeline Branching
```csharp
// Branch from a snapshot to explore alternate outcomes
var branchResult = chronicle.Branch(
    snapshotId: snapshot.Id,
    options: new BranchOptions
    {
        Name = "What if player chose door B?",
        Description = "Alternate timeline where player made different choice",
        SwitchToNewTimeline = true
    });

if (branchResult.Success)
{
    Console.WriteLine($"Created timeline {branchResult.Timeline.Id}");
    Console.WriteLine($"Branched from tick {branchResult.BranchPoint.Tick}");

    // Continue simulation in alternate timeline
    // Original timeline remains unchanged
}
```
Timeline Management
```csharp
// List all timelines
foreach (var timeline in chronicle.Timelines)
{
    Console.WriteLine($"Timeline {timeline.Id}: {timeline.Name}");
    Console.WriteLine($"  Created at tick: {timeline.CreatedAtTick}");
    Console.WriteLine($"  Current tick: {timeline.CurrentTick}");
    Console.WriteLine($"  Active: {timeline.IsActive}");
}

// Switch between timelines
chronicle.SwitchTimeline(timelineId: 2);

// Compare states across timelines
var comparison = chronicle.Compare(
    timelineA: 1,
    timelineB: 2,
    tick: 5000);

Console.WriteLine($"Timeline 1 hash: {comparison.StateHashA:X16}");
Console.WriteLine($"Timeline 2 hash: {comparison.StateHashB:X16}");
Console.WriteLine($"Diverged: {comparison.StateHashA != comparison.StateHashB}");
```
Snapshot Verification
```csharp
// Verify snapshot integrity
bool valid = chronicle.VerifySnapshot(
    snapshotId: snapshot.Id,
    hashFunction: data => ComputeStateHash(data));

if (!valid)
{
    Console.WriteLine("WARNING: Snapshot integrity check failed!");
}
```
Chronicle Statistics
```csharp
var stats = chronicle.GetStats();
Console.WriteLine($"Timelines: {stats.TimelineCount}");
Console.WriteLine($"Snapshots: {stats.SnapshotCount}");
Console.WriteLine($"Total data: {stats.TotalDataBytes / 1024} KB");
Console.WriteLine($"Uncompressed: {stats.TotalUncompressedBytes / 1024} KB");
Console.WriteLine($"Compression ratio: {stats.CompressionRatio:P1}");
```
Lattice.QL: Query Language
Lattice Query Language (LatticeQL) is a GraphQL-inspired language with temporal extensions.
Query Syntax
```graphql
# Basic query
query GetPlayer {
  player(id: "123") {
    name
    score
    level
  }
}

# Time-travel query
query GetPlayerHistory {
  player(id: "123") @at(tick: 5000) {
    name
    score
  }
}

# Range query
query HighScorers {
  players(where: { score_gt: 1000 }, orderBy: score_DESC) {
    name
    score
  }
}

# Temporal range
query PlayerScoreHistory {
  player(id: "123") @between(start: 1000, end: 5000) {
    tick
    score
  }
}
```
Query Execution (In Progress)
```csharp
var parser = QueryParser.Parse(@"
    query GetPlayer {
        player(id: ""123"") {
            name
            score
        }
    }");

if (!parser.HasErrors)
{
    var document = parser.Document;
    // Execute query (executor not yet implemented)
}
```
CRDTs enable distributed state that converges without coordination.
Counters
GCounter (Grow-Only Counter)
```csharp
// Create counter for this node
var counter = new GCounter(nodeId: "node-1");

// Increment locally
counter.Increment(10);
counter.Increment(5);

Console.WriteLine($"Value: {counter.Value}"); // 15

// Merge from another node
var otherCounter = new GCounter(nodeId: "node-2");
otherCounter.Increment(20);

counter.Merge(otherCounter);
Console.WriteLine($"Merged value: {counter.Value}"); // 35

// Serialize for network transmission
byte[] bytes = counter.ToBytes();
var restored = GCounter.FromBytes(bytes, "node-1");
```
PNCounter (Positive-Negative Counter)
```csharp
// Counter that supports both increment and decrement
var counter = new PnCounter(nodeId: "node-1");

counter.Increment(100);
counter.Decrement(30);

Console.WriteLine($"Value: {counter.Value}"); // 70
Console.WriteLine($"Total increments: {counter.TotalIncrements}"); // 100
Console.WriteLine($"Total decrements: {counter.TotalDecrements}"); // 30

// Merge from another node
var other = new PnCounter(nodeId: "node-2");
other.Increment(50);
counter.Merge(other);

Console.WriteLine($"Merged: {counter.Value}"); // 120
```
BoundedCounter (Lower-Bounded Counter)
```csharp
// Counter that cannot go below zero (useful for inventory)
var inventory = new BoundedCounter(nodeId: "node-1", lowerBound: 0);

inventory.Increment(100);

// Try to decrement
if (inventory.TryDecrement(30))
{
    Console.WriteLine("Decremented successfully");
}

// This would fail (returns false)
bool success = inventory.TryDecrement(200);
```
Sets
GSet (Grow-Only Set)
```csharp
// Set that only supports additions
var achievements = new GSet<string>();

achievements.Add("First Victory");
achievements.Add("Level 10");
achievements.Add("Perfect Score");

Console.WriteLine($"Count: {achievements.Count}"); // 3
Console.WriteLine($"Has 'Level 10': {achievements.Contains("Level 10")}"); // true

// Merge from another node
var otherAchievements = new GSet<string>();
otherAchievements.Add("Speed Run");

achievements.Merge(otherAchievements);
Console.WriteLine($"Merged count: {achievements.Count}"); // 4
```
ORSet (Observed-Remove Set / Add-Wins Set)
```csharp
// Set supporting both add and remove
// Uses a Hybrid Logical Clock for causality tracking
var clock = new HybridLogicalClock("node-1");
var friends = new OrSet<string>("node-1", clock);

friends.Add("Alice");
friends.Add("Bob");
friends.Add("Charlie");

Console.WriteLine($"Friends: {friends.Count}"); // 3

// Remove a friend
friends.Remove("Bob");
Console.WriteLine($"Friends: {friends.Count}"); // 2

// Merge from another node
var otherFriends = new OrSet<string>("node-2", new HybridLogicalClock("node-2"));
otherFriends.Add("Diana");

friends.Merge(otherFriends);
Console.WriteLine($"Merged friends: {friends.Count}"); // 3

// Tombstone garbage collection (remove old deletion markers)
var cutoff = clock.Now();
int removed = friends.CollectGarbage(cutoff);
```
TwoPhaseSet (2P-Set)
```csharp
// Set where removal is permanent (element can never be re-added)
var permissions = new TwoPhaseSet<string>();

permissions.Add("read");
permissions.Add("write");
permissions.Add("execute");

// Revoke permission (permanent)
permissions.Remove("execute");

// Try to re-add (throws exception)
try
{
    permissions.Add("execute");
}
catch (InvalidOperationException)
{
    Console.WriteLine("Cannot re-add removed permission");
}

// Safe version
bool added = permissions.TryAdd("execute"); // Returns false
```
CRDT Characteristics
All Lattice CRDTs satisfy:
- Convergence: All replicas that receive the same updates will have identical state
- Commutativity: Updates can be applied in any order
- Idempotence: Applying the same update twice has no additional effect
- Associativity: Merge(A, Merge(B, C)) = Merge(Merge(A, B), C)
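These laws are easiest to see in a state-based G-Counter, where each node keeps its own count and merge is an element-wise maximum. A minimal sketch (Python for brevity, mirroring the GCounter shown earlier):

```python
# State-based G-Counter: per-node counts, merge = element-wise max.
def increment(state, node, n=1):
    s = dict(state)
    s[node] = s.get(node, 0) + n
    return s

def merge(a, b):
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in set(a) | set(b)}

def value(state):
    return sum(state.values())

a = increment({}, "node-1", 10)
b = increment({}, "node-2", 20)
c = increment({}, "node-3", 5)

assert merge(a, b) == merge(b, a)                      # commutative
assert merge(a, merge(a, b)) == merge(a, b)            # idempotent
assert merge(a, merge(b, c)) == merge(merge(a, b), c)  # associative
assert value(merge(a, b)) == 30                        # converged value
```

Because merge is a join on a lattice of per-node maxima, replicas can exchange state in any order, any number of times, and still converge.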
Serialization
```csharp
// All CRDTs implement ISerializableCrdt
ISerializableCrdt crdt = counter;
byte[] serialized = crdt.ToBytes();

// Transmit over network, store to disk, etc.

// Deserialize
var restored = GCounter.FromBytes(serialized, "node-1");
```
Lattice.Replication: Multi-Node Sync
Status: In Progress
Enables synchronization across multiple Lattice instances.
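A common building block for such sync protocols is the vector clock used for causality tracking: each node counts its own events, and comparing clocks reveals whether one update happened before another or the two are concurrent. A minimal illustration (Python for brevity; this is the general technique, not Lattice's actual wire format):

```python
# Vector clocks: each node counts its own events; comparing clocks
# distinguishes "happened-before" from concurrent updates.
def bump(clock, node):
    c = dict(clock)
    c[node] = c.get(node, 0) + 1
    return c

def dominates(a, b):
    # a has observed at least every event b has observed
    return all(a.get(n, 0) >= v for n, v in b.items())

a = bump(bump({}, "node-1"), "node-1")   # {"node-1": 2}
b = bump({}, "node-2")                   # {"node-2": 1}

concurrent = not dominates(a, b) and not dominates(b, a)
assert concurrent                        # conflict: resolve via CRDT merge

merged = {n: max(a.get(n, 0), b.get(n, 0)) for n in set(a) | set(b)}
assert dominates(merged, a) and dominates(merged, b)
```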
```csharp
// Create replication node
var node = new ReplicationNode(
    nodeId: "node-1",
    storage: storageEngine);

// Connect to peer
node.ConnectToPeer("node-2", "tcp://192.168.1.100:5000");

// Sync protocol handles:
// - Delta transmission (only changes since last sync)
// - Conflict resolution (using CRDT semantics)
// - Vector clocks (for causality tracking)
// - Compression (LZ4 for efficient network usage)
```
Lattice.ORM: Object-Relational Mapping
Status: In Progress
Provides Active Record pattern for entities.
```
// Define entity via Axion schema
entity InventoryItem {
    @key id: uint;
    name: string;
    quantity: int32;
    category: ItemCategory;
}
```

```csharp
// Generated code implements ILatticeEntity<T>
// Use Active Record methods:
var item = new InventoryItem
{
    Name = "Health Potion",
    Quantity = 10,
    Category = ItemCategory.Consumable
};

item.Save(); // Persists to storage

// Find by ID
var loaded = InventoryItem.Find(item.Id);

// Update
loaded.Quantity += 5;
loaded.Save();

// Delete
loaded.Delete();

// Query
var consumables = InventoryItem.Where(i => i.Category == ItemCategory.Consumable);
```
Advantages
1. Time-Travel Built-In
No need for custom temporal tables or application-level versioning:
```csharp
// Traditional DB
var state = await db.ExecuteQuery(
    "SELECT * FROM game_state WHERE tick <= @tick ORDER BY tick DESC LIMIT 1",
    new { tick = 5000 });

// LatticeDB
var state = chronicle.TravelTo(5000);
```
2. CRDT for Distributed Consistency
Built-in conflict-free data types eliminate coordination overhead:
```csharp
// Traditional DB: Manual conflict resolution
try
{
    var current = await db.GetCounter("player:score");
    await db.UpdateCounter("player:score", current + 10);
}
catch (ConcurrencyException)
{
    // Retry or merge manually
}

// LatticeDB: Automatic convergence
counter.Increment(10);
counter.Merge(otherNodeCounter); // Always converges
```
3. Tick-Aware Queries
First-class support for simulation time:
```graphql
# Query state at specific tick
query {
  player(id: "123") @at(tick: 5000) {
    position { x, y }
    health
  }
}

# Query changes over time range
query {
  player(id: "123") @between(start: 1000, end: 2000) {
    events { tick type }
  }
}
```
4. Schema-Driven Development
Axion schemas ensure code and storage are always in sync:
- Compile-time type safety
- No runtime schema mismatches
- Automatic serialization code generation
- Schema versioning and migration support
5. Deterministic Guarantees
All operations produce stable, reproducible results:
- Key iteration is lexicographically ordered
- CRDT merges are commutative and associative
- Query results are deterministic across runs
- No hidden state or ambient time
Disadvantages
1. Learning Curve
New query language and concepts:
- Developers must learn LatticeQL syntax
- CRDT semantics differ from traditional locks
- Tick-based time model requires mindset shift
- Chronicle timeline management adds complexity
2. More Complex Than Key-Value
Not as simple as Redis or basic key-value stores:
- Chronicle adds versioning overhead
- CRDT metadata increases storage size
- Transaction isolation levels to understand
- Multiple storage engines to choose from
3. Storage Overhead for History
Time-travel requires keeping historical state:
- Snapshots consume disk space
- Delta compression helps but adds CPU cost
- Retention policies need careful tuning
- Garbage collection necessary for long-running systems
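To see why retention tuning matters, consider a back-of-envelope budget. All numbers here are hypothetical (60 ticks/s, a snapshot every 100 ticks, roughly 1 MB compressed per snapshot), not measured figures:

```python
# Back-of-envelope snapshot storage budget. All numbers hypothetical.
ticks_per_second = 60
snapshot_interval_ticks = 100
snapshot_size_bytes = 1_000_000     # ~1 MB compressed state

snapshots_per_hour = ticks_per_second * 3600 // snapshot_interval_ticks
bytes_per_hour = snapshots_per_hour * snapshot_size_bytes

assert snapshots_per_hour == 2160
assert bytes_per_hour == 2_160_000_000   # over 2 GB/hour before pruning
```

Even at a modest snapshot cadence, an unpruned history grows by gigabytes per hour, which is why retention policies and garbage collection are not optional for long-running systems.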
4. In-Progress Features
Some components are not yet complete:
- Lattice.QL query executor (parser done, executor in progress)
- Lattice.Search full-text indexing (planned)
- Lattice.Optimization query planner (in progress)
- Network replication protocol (in progress)
5. Specialized Use Case
Not a general-purpose database:
- Optimized for simulation/gaming workloads
- May be overkill for simple CRUD applications
- Tick-based model not suitable for all domains
- Chronicle overhead unnecessary if time-travel not needed
Comparison with Other Databases
vs PostgreSQL with Temporal Extensions
| Feature | PostgreSQL + Extension | LatticeDB |
|---|---|---|
| Time-travel | Add-on (pg_temporal) | Built-in (Chronicle) |
| CRDT support | Manual implementation | Native (Lattice.CRDT) |
| Determinism | Query order may vary | Guaranteed deterministic |
| Tick awareness | Application layer | First-class concept |
| Schema sync | Manual migrations | Axion code generation |
| Branching | Not supported | Native timeline branching |
Winner: LatticeDB for simulation workloads requiring determinism and time-travel
vs MongoDB
| Feature | MongoDB | LatticeDB |
|---|---|---|
| Schema | Flexible (pro/con) | Enforced via Axion |
| Versioning | Application-level | Built-in Chronicle |
| Determinism | Document order varies | Guaranteed stable |
| CRDT | Not supported | Native types |
| Query language | MQL (JavaScript-like) | LatticeQL (GraphQL-like) |
| Replication | Built-in (replica sets) | In progress |
Winner: MongoDB for general web apps, LatticeDB for deterministic simulations
vs FoundationDB
| Feature | FoundationDB | LatticeDB |
|---|---|---|
| Transactions | ACID, serializable | ACID with tick metadata |
| Abstractions | Low-level key-value | Higher-level (ORM, QL) |
| Time-travel | Not built-in | Native Chronicle |
| CRDT | Not supported | Native types |
| Determinism | Guaranteed | Guaranteed |
| Complexity | High (bare metal) | Medium (more abstractions) |
Winner: FoundationDB for maximum control, LatticeDB for faster development
vs Redis
| Feature | Redis | LatticeDB |
|---|---|---|
| Persistence | Optional (RDB/AOF) | Primary focus |
| Data types | String, Hash, Set, etc. | Axion-defined structs + CRDTs |
| Time-travel | Not supported | Native Chronicle |
| Determinism | Not guaranteed | Guaranteed |
| Query language | Commands | LatticeQL (planned) |
| Use case | Cache/pub-sub | Persistent simulation state |
Winner: Redis for caching, LatticeDB for persistent deterministic state
Performance Targets
These are design goals; actual performance depends on workload:
Write Latency
| Operation | Target | Notes |
|---|---|---|
| Single Put | <1ms | Local storage, no sync |
| Transactional Put | <5ms | With durability guarantees |
| Snapshot Creation | <10ms | For typical game state (1-10 MB) |
| CRDT Merge | <1ms | Per operation, amortized |
Read Latency
| Operation | Target | Notes |
|---|---|---|
| Single Get | <0.1ms | Cached in memory |
| Range Scan (1000 items) | <5ms | Sequential read |
| Time-Travel Query | <10ms | Recent history (last 1000 ticks) |
| Snapshot Restore | <50ms | Decompression + load |
Throughput
| Workload | Target | Notes |
|---|---|---|
| Writes/sec | 100K+ | Batch operations, memory engine |
| Reads/sec | 500K+ | Hot data cached |
| Snapshots/sec | 100 | Concurrent snapshot creation |
Storage
| Metric | Target | Notes |
|---|---|---|
| Compression ratio | 2-4x | LZ4 compression on snapshots |
| Overhead | 10-20% | Version metadata, indexes |
| Compaction | <1% | Amortized write amplification |
Usage Recommendations
When to Use LatticeDB
✅ Use LatticeDB if you need:
- Deterministic simulation state
- Time-travel debugging (“go back to tick 5000”)
- Timeline branching (“what if” scenarios)
- Distributed state with CRDTs
- Tick-based time model
- Schema-enforced types
When NOT to Use LatticeDB
❌ Don’t use LatticeDB if you need:
- Simple key-value caching (use Redis)
- General CRUD web app (use PostgreSQL/MongoDB)
- Maximum raw throughput (use FoundationDB)
- No schema enforcement (use MongoDB)
- Real-time analytics (use ClickHouse)
- Graph queries (use Neo4j)
Migration Path
If you’re considering migrating to LatticeDB:
- Prototype Phase: Use Lattice.DB with in-memory storage
- Schema Design: Define Axion schemas for your entities
- Incremental Adoption: Start with Chronicle for snapshots only
- Add CRDTs: Replace coordination with conflict-free types where applicable
- Full Integration: Switch to LatticeQL for querying
Example: Complete Integration
```csharp
using Lattice.DB;
using Lattice.Chronicle;
using Lattice.CRDT;

// Setup storage
var storage = new FileStorageEngine();
storage.Open("game.ldb");

// Setup Chronicle
var chronicle = new ChronicleEngine(storage);
chronicle.EnsureMainTimeline();

// Game state (serialized to bytes)
var gameState = new GameState
{
    Players = new Dictionary<uint, Player>(),
    CurrentTick = 0
};

// Game loop
for (ulong tick = 0; tick < 10000; tick++)
{
    // Update simulation
    UpdateSimulation(gameState, tick);

    // Snapshot every 100 ticks
    if (tick % 100 == 0)
    {
        var stateBytes = SerializeState(gameState);
        var stateHash = ComputeHash(stateBytes);

        chronicle.CreateSnapshot(
            tick: tick,
            stateData: stateBytes,
            stateHash: stateHash);
    }
}

// Time-travel debugging
ulong buggyTick = 5234;
var travel = chronicle.TravelTo(buggyTick);
if (travel.Success)
{
    Console.WriteLine($"Traveled to tick {travel.Snapshot.Tick}");
    Console.WriteLine($"Need to replay {travel.TicksReplayed} ticks");

    // Restore and investigate
    var restoredState = DeserializeState(travel.StateData);
    DebugGameState(restoredState);
}

// Create alternate timeline
var branchPoint = chronicle.FindLatestSnapshotBefore(1, tick: 3000);
if (branchPoint != null)
{
    var branch = chronicle.Branch(branchPoint.Id, new BranchOptions
    {
        Name = "Alternate ending",
        SwitchToNewTimeline = true
    });

    // Continue simulation in alternate timeline
    // Original timeline unchanged
}

// Cleanup
chronicle.Flush();
storage.Close();
```
Future Roadmap
Short-Term (Q1 2026)
- ✅ Complete Lattice.QL query executor
- ✅ Finish Lattice.ORM Active Record implementation
- ✅ Stabilize replication protocol
Medium-Term (Q2 2026)
- 📋 Add Lattice.Search full-text indexing
- 📋 Implement query optimizer
- 📋 Add PostgreSQL compatibility layer
Long-Term (2026+)
- 📋 Distributed query execution
- 📋 Sharding support for horizontal scaling
- 📋 SQL compatibility layer
Related Documents
- Chronicle Time-Travel - Deep dive into temporal features
- Axion Schemas - Schema language specification
- Executive Overview - Platform overview
Document Version: 1.0.0 | Status: Complete | Next Review: 2026-02-01