Compare commits
3 Commits
ffd3eda63c ... 972e6ece5e

| Author | SHA1 | Date |
|---|---|---|
| | 972e6ece5e | |
| | 31e80ac572 | |
| | cd4ee1cbd7 | |
ARCHITECTURE.md
@@ -1,4 +1,5 @@
 ## JormunDB Architecture
+# !!THIS IS NO LONGER ENTIRELY ACCURATE IGNORE OR UPDATE WITH ACCURATE INFO!!

 This document explains the internal architecture of JormunDB, including design decisions, storage formats, and the arena-per-request memory management pattern.
@@ -16,7 +16,7 @@
 JormunDB is a self-hosted DynamoDB replacement that speaks the DynamoDB wire protocol. Point your AWS SDK or CLI at it and use it as a drop-in replacement.

-**Why Odin?** The original Zig implementation suffered from explicit allocator threading, where every function ended up needing an `allocator` parameter and every allocation needed `errdefer` cleanup. Odin's implicit context allocator system eliminates this ceremony: one `context.allocator = arena_allocator` at the request handler entry and everything downstream just works.
+**Why Odin?** The original Zig implementation suffered from explicit allocator threading, where every function ended up needing an `allocator` parameter and every allocation needed `errdefer` cleanup. Odin's implicit context allocator system eliminates this ceremony: one `context.allocator = arena_allocator` at the request handler entry, and it feels more like working with ctx in Go instead of filling out tax forms.

 ## Features
45	TODO.md
@@ -47,22 +47,11 @@ Goal: "aws cli works reliably for CreateTable/ListTables/PutItem/GetItem/DeleteI
- [x] Expand operator coverage: BETWEEN and begins_with are implemented in parser
- [x] **Sort key condition filtering in query** — **DONE**: `query()` now accepts optional `Sort_Key_Condition` and applies it (=, <, <=, >, >=, BETWEEN, begins_with)

---

### 5) Service Features

- [ ] Configuration settings like environment variables for defining users and credentials
- [ ] Configuration settings for setting up master and replica nodes

## Next (feature parity with Zig + API completeness)

### 6) Test coverage / tooling

### 5) UpdateItem / conditional logic groundwork
- [x] `UpdateItem` handler registered in router (currently returns clear "not yet supported" error)
- [x] Implement `UpdateItem` (initially minimal: SET for scalar attrs)
- [ ] `UpdateItem` needs UPDATED_NEW/UPDATED_OLD response filtering for perfect parity with Dynamo
- [x] Add `ConditionExpression` support for Put/Delete/Update (start with simple comparisons)
- [x] Define internal "update plan" representation (parsed ops → applied mutations)

### 6) Response completeness / options
- [x] `ReturnValues` handling where relevant (NONE/ALL_OLD/UPDATED_NEW etc. — even partial support is useful)
- [x] `ProjectionExpression` (return subset of attributes)
- [x] `FilterExpression` (post-query filter for Scan/Query)

### 7) Test coverage / tooling
- [ ] Add integration tests mirroring AWS CLI script flows:
  - create table → put → get → scan → query → delete
- [ ] Add fuzz-ish tests for:
@@ -70,39 +59,19 @@ Goal: "aws cli works reliably for CreateTable/ListTables/PutItem/GetItem/DeleteI
  - expression parsing robustness
  - TLV decode failure cases (corrupt bytes)

---

### 7) Secondary indexes

## Later (big features)

These align with the "Future Enhancements" list in ARCHITECTURE.md.

### 8) Secondary indexes
- [ ] Global Secondary Indexes (GSI)
- [ ] Local Secondary Indexes (LSI)
- [ ] Index backfill + write-path maintenance

### 9) Batch + transactions

### 8) Performance / ops

- [x] BatchWriteItem
- [x] BatchGetItem
- [ ] Transactions (TransactWriteItems / TransactGetItems)

### 10) Performance / ops
- [ ] Connection reuse / keep-alive tuning
- [ ] Bloom filters / RocksDB options tuning for common patterns
- [ ] Optional compression policy (LZ4/Zstd knobs)
- [ ] Parallel scan (segment scanning)

---

### 9) Replication / WAL

## Replication / WAL

(There is a C++ shim stubbed out for WAL iteration and applying write batches.)
- [ ] Implement WAL iterator: `latest_sequence`, `wal_iter_next` returning writebatch blob
- [ ] Implement apply-writebatch on follower
- [ ] Add a minimal replication test harness (leader generates N ops → follower applies → compare)

---

## Housekeeping

- [x] Fix TODO hygiene: keep this file short and "actionable"
  - Added "Bug Fixes Applied" section documenting what changed and why
- [ ] Add a CONTRIBUTING quick checklist (allocator rules, formatting, tests)
- [ ] Add "known limitations" section in README (unsupported DynamoDB features)
481	dynamodb/gsi.odin	Normal file
@@ -0,0 +1,481 @@
// Global Secondary Index (GSI) support
//
// DynamoDB GSI semantics:
// - GSI entries are maintained automatically on every write (put/delete/update)
// - Each GSI has its own key schema (partition key + optional sort key)
// - GSI keys are built from item attributes; if an item doesn't have the GSI
//   key attribute(s), NO GSI entry is written (sparse index)
// - Projection controls which non-key attributes are stored in the GSI entry:
//     ALL       → entire item is copied
//     KEYS_ONLY → only table PK/SK + GSI PK/SK
//     INCLUDE   → table keys + GSI keys + specified non-key attributes
// - Query on a GSI uses IndexName to route to the correct key prefix
//
// Storage layout:
//   GSI key:   [0x03][table_name][index_name][gsi_pk_value][gsi_sk_value?]
//   GSI value: TLV-encoded projected item (same binary format as regular items)
//
// Write path:
//   put_item → for each GSI, extract GSI key attrs from the NEW item, write GSI entry
//   delete   → for each GSI, extract GSI key attrs from the OLD item, delete GSI entry
//   update   → delete OLD GSI entries, write NEW GSI entries
//
package dynamodb

import "core:slice"
import "core:strings"
import "../rocksdb"

// ============================================================================
// GSI Key Extraction
//
// Extracts the GSI partition key (and optional sort key) raw bytes from an item.
// Returns false if the item doesn't have the required GSI PK attribute (sparse).
// ============================================================================

GSI_Key_Values :: struct {
	pk: []byte,
	sk: Maybe([]byte),
}

// Extract GSI key values from an item based on the GSI's key schema.
// Returns ok=false if the required partition key attribute is missing (sparse index).
gsi_extract_key_values :: proc(item: Item, gsi_key_schema: []Key_Schema_Element) -> (GSI_Key_Values, bool) {
	result: GSI_Key_Values

	for ks in gsi_key_schema {
		attr, found := item[ks.attribute_name]
		if !found {
			if ks.key_type == .HASH {
				return {}, false // PK missing → sparse, skip this GSI entry
			}
			continue // SK missing is OK, just no SK segment
		}

		raw, raw_ok := attr_value_to_bytes(attr)
		if !raw_ok {
			if ks.key_type == .HASH {
				return {}, false
			}
			continue
		}

		switch ks.key_type {
		case .HASH:
			result.pk = raw
		case .RANGE:
			result.sk = raw
		}
	}

	return result, true
}

// Convert a scalar attribute value to its raw byte representation (borrowed).
attr_value_to_bytes :: proc(attr: Attribute_Value) -> ([]byte, bool) {
	#partial switch v in attr {
	case String:
		return transmute([]byte)string(v), true
	case Number:
		return transmute([]byte)string(v), true
	case Binary:
		return transmute([]byte)string(v), true
	}
	return nil, false
}

// ============================================================================
// GSI Projection
//
// Build a projected copy of an item for storage in a GSI entry.
// ============================================================================

// Build the projected item for a GSI entry.
// The result is a new Item that the caller owns.
gsi_project_item :: proc(
	item: Item,
	gsi: ^Global_Secondary_Index,
	table_key_schema: []Key_Schema_Element,
) -> Item {
	switch gsi.projection.projection_type {
	case .ALL:
		return item_deep_copy(item)

	case .KEYS_ONLY:
		projected := make(Item)
		// Include table key attributes
		for ks in table_key_schema {
			if val, found := item[ks.attribute_name]; found {
				projected[strings.clone(ks.attribute_name)] = attr_value_deep_copy(val)
			}
		}
		// Include GSI key attributes
		for ks in gsi.key_schema {
			if _, already := projected[ks.attribute_name]; already {
				continue // Already included as table key
			}
			if val, found := item[ks.attribute_name]; found {
				projected[strings.clone(ks.attribute_name)] = attr_value_deep_copy(val)
			}
		}
		return projected

	case .INCLUDE:
		projected := make(Item)
		// Include table key attributes
		for ks in table_key_schema {
			if val, found := item[ks.attribute_name]; found {
				projected[strings.clone(ks.attribute_name)] = attr_value_deep_copy(val)
			}
		}
		// Include GSI key attributes
		for ks in gsi.key_schema {
			if _, already := projected[ks.attribute_name]; already {
				continue
			}
			if val, found := item[ks.attribute_name]; found {
				projected[strings.clone(ks.attribute_name)] = attr_value_deep_copy(val)
			}
		}
		// Include specified non-key attributes
		if nka, has_nka := gsi.projection.non_key_attributes.?; has_nka {
			for attr_name in nka {
				if _, already := projected[attr_name]; already {
					continue
				}
				if val, found := item[attr_name]; found {
					projected[strings.clone(attr_name)] = attr_value_deep_copy(val)
				}
			}
		}
		return projected
	}

	// Fallback: all
	return item_deep_copy(item)
}

// ============================================================================
// GSI Write Maintenance
//
// Called after a successful data write to maintain GSI entries.
// Uses WriteBatch for atomicity (all GSI entries for one item in one batch).
// ============================================================================

// Write GSI entries for an item across all GSIs defined on the table.
// Should be called AFTER the main data key is written.
gsi_write_entries :: proc(
	engine: ^Storage_Engine,
	table_name: string,
	item: Item,
	metadata: ^Table_Metadata,
) -> Storage_Error {
	gsis, has_gsis := metadata.global_secondary_indexes.?
	if !has_gsis || len(gsis) == 0 {
		return .None
	}

	for &gsi in gsis {
		// Extract GSI key from item
		gsi_kv, kv_ok := gsi_extract_key_values(item, gsi.key_schema)
		if !kv_ok {
			continue // Sparse: item doesn't have GSI PK, skip
		}

		// Build GSI storage key
		gsi_storage_key := build_gsi_key(table_name, gsi.index_name, gsi_kv.pk, gsi_kv.sk)
		defer delete(gsi_storage_key)

		// Build projected item
		projected := gsi_project_item(item, &gsi, metadata.key_schema)
		defer item_destroy(&projected)

		// Encode projected item
		encoded, encode_ok := encode(projected)
		if !encode_ok {
			return .Serialization_Error
		}
		defer delete(encoded)

		// Write to RocksDB
		put_err := rocksdb.db_put(&engine.db, gsi_storage_key, encoded)
		if put_err != .None {
			return .RocksDB_Error
		}
	}

	return .None
}

// Delete GSI entries for an item across all GSIs.
// Should be called BEFORE or AFTER the main data key is deleted.
// Needs the OLD item to know which GSI keys to remove.
gsi_delete_entries :: proc(
	engine: ^Storage_Engine,
	table_name: string,
	old_item: Item,
	metadata: ^Table_Metadata,
) -> Storage_Error {
	gsis, has_gsis := metadata.global_secondary_indexes.?
	if !has_gsis || len(gsis) == 0 {
		return .None
	}

	for &gsi in gsis {
		gsi_kv, kv_ok := gsi_extract_key_values(old_item, gsi.key_schema)
		if !kv_ok {
			continue // Item didn't have a GSI entry
		}

		gsi_storage_key := build_gsi_key(table_name, gsi.index_name, gsi_kv.pk, gsi_kv.sk)
		defer delete(gsi_storage_key)

		del_err := rocksdb.db_delete(&engine.db, gsi_storage_key)
		if del_err != .None {
			return .RocksDB_Error
		}
	}

	return .None
}

// ============================================================================
// GSI Query
//
// Queries a GSI by partition key with optional sort key condition.
// Mirrors the main table query() but uses GSI key prefix.
// ============================================================================

gsi_query :: proc(
	engine: ^Storage_Engine,
	table_name: string,
	index_name: string,
	partition_key_value: []byte,
	exclusive_start_key: Maybe([]byte),
	limit: int,
	sk_condition: Maybe(Sort_Key_Condition) = nil,
) -> (Query_Result, Storage_Error) {
	// Build GSI partition prefix
	prefix := build_gsi_partition_prefix(table_name, index_name, partition_key_value)
	defer delete(prefix)

	iter, iter_err := rocksdb.iter_create(&engine.db)
	if iter_err != .None {
		return {}, .RocksDB_Error
	}
	defer rocksdb.iter_destroy(&iter)

	max_items := limit if limit > 0 else 1_000_000

	// Seek to start position
	if start_key, has_start := exclusive_start_key.?; has_start {
		if has_prefix(start_key, prefix) {
			rocksdb.iter_seek(&iter, start_key)
			if rocksdb.iter_valid(&iter) {
				rocksdb.iter_next(&iter)
			}
		} else {
			rocksdb.iter_seek(&iter, prefix)
		}
	} else {
		rocksdb.iter_seek(&iter, prefix)
	}

	items := make([dynamic]Item)
	count := 0
	last_key: Maybe([]byte) = nil
	has_more := false

	for rocksdb.iter_valid(&iter) {
		key := rocksdb.iter_key(&iter)
		if key == nil || !has_prefix(key, prefix) {
			break
		}

		if count >= max_items {
			has_more = true
			break
		}

		value := rocksdb.iter_value(&iter)
		if value == nil {
			rocksdb.iter_next(&iter)
			continue
		}

		item, decode_ok := decode(value)
		if !decode_ok {
			rocksdb.iter_next(&iter)
			continue
		}

		// Sort key condition filtering
		if skc, has_skc := sk_condition.?; has_skc {
			if !evaluate_sort_key_condition(item, &skc) {
				item_copy := item
				item_destroy(&item_copy)
				rocksdb.iter_next(&iter)
				continue
			}
		}

		append(&items, item)
		count += 1

		// Track key of last returned item
		if prev_key, had_prev := last_key.?; had_prev {
			delete(prev_key)
		}
		last_key = slice.clone(key)

		rocksdb.iter_next(&iter)
	}

	// Only emit LastEvaluatedKey if there are more items
	if !has_more {
		if lk, had_lk := last_key.?; had_lk {
			delete(lk)
		}
		last_key = nil
	}

	result_items := make([]Item, len(items))
	copy(result_items, items[:])

	return Query_Result{
		items = result_items,
		last_evaluated_key = last_key,
	}, .None
}

// ============================================================================
// GSI Scan
//
// Scans all entries in a GSI (all partition keys under that index).
// ============================================================================

gsi_scan :: proc(
	engine: ^Storage_Engine,
	table_name: string,
	index_name: string,
	exclusive_start_key: Maybe([]byte),
	limit: int,
) -> (Scan_Result, Storage_Error) {
	prefix := build_gsi_prefix(table_name, index_name)
	defer delete(prefix)

	iter, iter_err := rocksdb.iter_create(&engine.db)
	if iter_err != .None {
		return {}, .RocksDB_Error
	}
	defer rocksdb.iter_destroy(&iter)

	max_items := limit if limit > 0 else 1_000_000

	if start_key, has_start := exclusive_start_key.?; has_start {
		if has_prefix(start_key, prefix) {
			rocksdb.iter_seek(&iter, start_key)
			if rocksdb.iter_valid(&iter) {
				rocksdb.iter_next(&iter)
			}
		} else {
			rocksdb.iter_seek(&iter, prefix)
		}
	} else {
		rocksdb.iter_seek(&iter, prefix)
	}

	items := make([dynamic]Item)
	count := 0
	last_key: Maybe([]byte) = nil
	has_more := false

	for rocksdb.iter_valid(&iter) {
		key := rocksdb.iter_key(&iter)
		if key == nil || !has_prefix(key, prefix) {
			break
		}

		if count >= max_items {
			has_more = true
			break
		}

		value := rocksdb.iter_value(&iter)
		if value == nil {
			rocksdb.iter_next(&iter)
			continue
		}

		item, decode_ok := decode(value)
		if !decode_ok {
			rocksdb.iter_next(&iter)
			continue
		}

		append(&items, item)
		count += 1

		if prev_key, had_prev := last_key.?; had_prev {
			delete(prev_key)
		}
		last_key = slice.clone(key)

		rocksdb.iter_next(&iter)
	}

	if !has_more {
		if lk, had_lk := last_key.?; had_lk {
			delete(lk)
		}
		last_key = nil
	}

	result_items := make([]Item, len(items))
	copy(result_items, items[:])

	return Scan_Result{
		items = result_items,
		last_evaluated_key = last_key,
	}, .None
}

// ============================================================================
// GSI Metadata Lookup Helpers
// ============================================================================

// Find a GSI definition by index name in the table metadata.
find_gsi :: proc(metadata: ^Table_Metadata, index_name: string) -> (^Global_Secondary_Index, bool) {
	gsis, has_gsis := metadata.global_secondary_indexes.?
	if !has_gsis {
		return nil, false
	}

	for &gsi in gsis {
		if gsi.index_name == index_name {
			return &gsi, true
		}
	}

	return nil, false
}

// Get the GSI's sort key attribute name (if any).
gsi_get_sort_key_name :: proc(gsi: ^Global_Secondary_Index) -> Maybe(string) {
	for ks in gsi.key_schema {
		if ks.key_type == .RANGE {
			return ks.attribute_name
		}
	}
	return nil
}

// Get the GSI's partition key attribute name.
gsi_get_partition_key_name :: proc(gsi: ^Global_Secondary_Index) -> Maybe(string) {
	for ks in gsi.key_schema {
		if ks.key_type == .HASH {
			return ks.attribute_name
		}
	}
	return nil
}
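The sparse-index and KEYS_ONLY-projection rules that gsi.odin implements can be illustrated with a small Go sketch (the `Item`, `extractGSIKey`, and `projectKeysOnly` names are invented here, and attributes are flattened to strings for brevity):

```go
package main

import "fmt"

// Item is a flattened stand-in for a DynamoDB item (attribute name → scalar).
type Item map[string]string

// extractGSIKey mirrors gsi_extract_key_values: if the index partition key
// attribute is absent, the item gets no GSI entry at all (sparse index).
// A missing sort key is fine: the entry simply has no SK segment.
func extractGSIKey(item Item, pkAttr, skAttr string) (pk, sk string, ok bool) {
	pk, ok = item[pkAttr]
	if !ok {
		return "", "", false // sparse: skip this index entry
	}
	sk = item[skAttr]
	return pk, sk, true
}

// projectKeysOnly mirrors the KEYS_ONLY projection: copy only the table key
// attributes and the index key attributes into the stored index entry.
func projectKeysOnly(item Item, tableKeys, gsiKeys []string) Item {
	out := Item{}
	for _, k := range append(append([]string{}, tableKeys...), gsiKeys...) {
		if v, found := item[k]; found {
			out[k] = v
		}
	}
	return out
}

func main() {
	item := Item{"id": "u1", "email": "a@b.c", "name": "Ada"}

	pk, _, ok := extractGSIKey(item, "email", "")
	fmt.Println(ok, pk) // → true a@b.c

	// An item without the "email" attribute writes no entry to email-index.
	_, _, ok = extractGSIKey(Item{"id": "u2"}, "email", "")
	fmt.Println(ok) // → false

	proj := projectKeysOnly(item, []string{"id"}, []string{"email"})
	fmt.Println(len(proj)) // → 2 ("name" is dropped)
}
```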
187	dynamodb/gsi_metadata.odin	Normal file
@@ -0,0 +1,187 @@
// gsi_metadata.odin — GSI metadata parsing for serialize/deserialize_table_metadata
//
// Parses GSI definitions from the embedded JSON string stored in table metadata.
// This file lives in the dynamodb/ package.
package dynamodb

import "core:encoding/json"
import "core:mem"
import "core:strings"

// Parse GlobalSecondaryIndexes from a JSON string like:
//   [{"IndexName":"email-index","KeySchema":[{"AttributeName":"email","KeyType":"HASH"}],
//     "Projection":{"ProjectionType":"ALL"}}]
//
// Allocates all strings with the given allocator (engine.allocator for long-lived data).
parse_gsis_json :: proc(json_str: string, allocator: mem.Allocator) -> ([]Global_Secondary_Index, bool) {
	data, parse_err := json.parse(transmute([]byte)json_str, allocator = context.temp_allocator)
	if parse_err != nil {
		return nil, false
	}
	defer json.destroy_value(data)

	arr, ok := data.(json.Array)
	if !ok {
		return nil, false
	}

	if len(arr) == 0 {
		return nil, true // Empty is valid
	}

	result := make([]Global_Secondary_Index, len(arr), allocator)

	for elem, i in arr {
		obj, obj_ok := elem.(json.Object)
		if !obj_ok {
			cleanup_gsis(result[:i], allocator)
			delete(result, allocator)
			return nil, false
		}

		gsi, gsi_ok := parse_single_gsi_json(obj, allocator)
		if !gsi_ok {
			cleanup_gsis(result[:i], allocator)
			delete(result, allocator)
			return nil, false
		}

		result[i] = gsi
	}

	return result, true
}

// Parse a single GSI object from JSON
@(private = "file")
parse_single_gsi_json :: proc(obj: json.Object, allocator: mem.Allocator) -> (Global_Secondary_Index, bool) {
	gsi: Global_Secondary_Index

	// IndexName
	idx_val, idx_found := obj["IndexName"]
	if !idx_found {
		return {}, false
	}
	idx_str, idx_ok := idx_val.(json.String)
	if !idx_ok {
		return {}, false
	}
	gsi.index_name = strings.clone(string(idx_str), allocator)

	// KeySchema
	ks_val, ks_found := obj["KeySchema"]
	if !ks_found {
		delete(gsi.index_name, allocator)
		return {}, false
	}
	ks_arr, ks_ok := ks_val.(json.Array)
	if !ks_ok || len(ks_arr) == 0 || len(ks_arr) > 2 {
		delete(gsi.index_name, allocator)
		return {}, false
	}

	key_schema := make([]Key_Schema_Element, len(ks_arr), allocator)
	for ks_elem, j in ks_arr {
		ks_obj, kobj_ok := ks_elem.(json.Object)
		if !kobj_ok {
			for k in 0..<j {
				delete(key_schema[k].attribute_name, allocator)
			}
			delete(key_schema, allocator)
			delete(gsi.index_name, allocator)
			return {}, false
		}

		an_val, an_found := ks_obj["AttributeName"]
		if !an_found {
			for k in 0..<j { delete(key_schema[k].attribute_name, allocator) }
			delete(key_schema, allocator)
			delete(gsi.index_name, allocator)
			return {}, false
		}
		an_str, an_ok := an_val.(json.String)
		if !an_ok {
			for k in 0..<j { delete(key_schema[k].attribute_name, allocator) }
			delete(key_schema, allocator)
			delete(gsi.index_name, allocator)
			return {}, false
		}

		kt_val, kt_found := ks_obj["KeyType"]
		if !kt_found {
			for k in 0..<j { delete(key_schema[k].attribute_name, allocator) }
			delete(key_schema, allocator)
			delete(gsi.index_name, allocator)
			return {}, false
		}
		kt_str, kt_ok := kt_val.(json.String)
		if !kt_ok {
			for k in 0..<j { delete(key_schema[k].attribute_name, allocator) }
			delete(key_schema, allocator)
			delete(gsi.index_name, allocator)
			return {}, false
		}

		kt, kt_parse_ok := key_type_from_string(string(kt_str))
		if !kt_parse_ok {
			for k in 0..<j { delete(key_schema[k].attribute_name, allocator) }
			delete(key_schema, allocator)
			delete(gsi.index_name, allocator)
			return {}, false
		}

		key_schema[j] = Key_Schema_Element{
			attribute_name = strings.clone(string(an_str), allocator),
			key_type = kt,
		}
	}
	gsi.key_schema = key_schema

	// Projection
	gsi.projection.projection_type = .ALL // default
	if proj_val, proj_found := obj["Projection"]; proj_found {
		if proj_obj, proj_ok := proj_val.(json.Object); proj_ok {
			if pt_val, pt_found := proj_obj["ProjectionType"]; pt_found {
				if pt_str, pt_ok := pt_val.(json.String); pt_ok {
					switch string(pt_str) {
					case "ALL":       gsi.projection.projection_type = .ALL
					case "KEYS_ONLY": gsi.projection.projection_type = .KEYS_ONLY
					case "INCLUDE":   gsi.projection.projection_type = .INCLUDE
					}
				}
			}

			// NonKeyAttributes
			if nka_val, nka_found := proj_obj["NonKeyAttributes"]; nka_found {
				if nka_arr, nka_ok := nka_val.(json.Array); nka_ok && len(nka_arr) > 0 {
					nka := make([]string, len(nka_arr), allocator)
					for attr_val, k in nka_arr {
						if attr_str, attr_ok := attr_val.(json.String); attr_ok {
							nka[k] = strings.clone(string(attr_str), allocator)
						}
					}
					gsi.projection.non_key_attributes = nka
				}
			}
		}
	}

	return gsi, true
}

// Clean up partially-constructed GSI array
cleanup_gsis :: proc(gsis: []Global_Secondary_Index, allocator: mem.Allocator) {
	for gsi in gsis {
		delete(gsi.index_name, allocator)
		for ks in gsi.key_schema {
			delete(ks.attribute_name, allocator)
		}
		delete(gsi.key_schema, allocator)
		if nka, has_nka := gsi.projection.non_key_attributes.?; has_nka {
			for attr in nka {
				delete(attr, allocator)
			}
			delete(nka, allocator)
		}
	}
}
94	dynamodb/key_codec_gsi.odin	Normal file
@@ -0,0 +1,94 @@
// key_codec_gsi.odin — Additional key codec functions for GSI support
//
// These procedures complement key_codec.odin with prefix builders needed
// for GSI scanning and querying. They follow the same encoding conventions:
// [entity_type][varint_len][segment_bytes]...
//
// Add the contents of this file to key_codec.odin (or keep as a separate file
// in the dynamodb/ package).
package dynamodb

import "core:bytes"

// Build GSI index prefix for scanning all entries in a GSI:
// [gsi][table_name][index_name]
build_gsi_prefix :: proc(table_name: string, index_name: string) -> []byte {
	buf: bytes.Buffer
	bytes.buffer_init_allocator(&buf, 0, 256, context.allocator)

	bytes.buffer_write_byte(&buf, u8(Entity_Type.GSI))

	encode_varint(&buf, len(table_name))
	bytes.buffer_write_string(&buf, table_name)

	encode_varint(&buf, len(index_name))
	bytes.buffer_write_string(&buf, index_name)

	return bytes.buffer_to_bytes(&buf)
}

// Build GSI partition prefix for querying within a single partition:
// [gsi][table_name][index_name][pk_value]
build_gsi_partition_prefix :: proc(table_name: string, index_name: string, pk_value: []byte) -> []byte {
	buf: bytes.Buffer
	bytes.buffer_init_allocator(&buf, 0, 512, context.allocator)

	bytes.buffer_write_byte(&buf, u8(Entity_Type.GSI))

	encode_varint(&buf, len(table_name))
	bytes.buffer_write_string(&buf, table_name)

	encode_varint(&buf, len(index_name))
	bytes.buffer_write_string(&buf, index_name)

	encode_varint(&buf, len(pk_value))
	bytes.buffer_write(&buf, pk_value)

	return bytes.buffer_to_bytes(&buf)
}

// Decode a GSI key back into components
Decoded_GSI_Key :: struct {
	table_name: string,
	index_name: string,
	pk_value:   []byte,
	sk_value:   Maybe([]byte),
}

decode_gsi_key :: proc(key: []byte) -> (result: Decoded_GSI_Key, ok: bool) {
	decoder := Key_Decoder{data = key, pos = 0}

	entity_type := decoder_read_entity_type(&decoder) or_return
	if entity_type != .GSI {
		return {}, false
	}

	table_name_bytes := decoder_read_segment(&decoder) or_return
	result.table_name = string(table_name_bytes)

	index_name_bytes := decoder_read_segment(&decoder) or_return
	result.index_name = string(index_name_bytes)

	result.pk_value = decoder_read_segment(&decoder) or_return

	if decoder_has_more(&decoder) {
		sk := decoder_read_segment(&decoder) or_return
		result.sk_value = sk
	}

	return result, true
}

// Build GSI prefix for deleting all GSI entries for a table (used by delete_table)
// [gsi][table_name]
build_gsi_table_prefix :: proc(table_name: string) -> []byte {
	buf: bytes.Buffer
	bytes.buffer_init_allocator(&buf, 0, 256, context.allocator)

	bytes.buffer_write_byte(&buf, u8(Entity_Type.GSI))

	encode_varint(&buf, len(table_name))
	bytes.buffer_write_string(&buf, table_name)

	return bytes.buffer_to_bytes(&buf)
}
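The length-prefixed encoding above keeps keys self-delimiting and makes every narrower prefix a literal byte-prefix of the wider one, which is what lets RocksDB range scans work. A Python sketch of the same convention (the `GSI_TAG` value and function names are illustrative, not JormunDB's actual API):

```python
def encode_varint(n: int) -> bytes:
    # LEB128-style varint: 7 payload bits per byte, high bit = "more follows"
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

GSI_TAG = 0x03  # hypothetical Entity_Type.GSI byte

def build_gsi_prefix(table: str, index: str) -> bytes:
    # [gsi][varint_len][table_name][varint_len][index_name]
    buf = bytearray([GSI_TAG])
    for seg in (table.encode(), index.encode()):
        buf += encode_varint(len(seg)) + seg
    return bytes(buf)

def build_gsi_partition_prefix(table: str, index: str, pk: bytes) -> bytes:
    # Appending one more segment preserves the index prefix as a leading substring,
    # so a partition scan is just a narrower seek under the same index prefix.
    return build_gsi_prefix(table, index) + encode_varint(len(pk)) + pk

assert build_gsi_prefix("t", "i") == b"\x03\x01t\x01i"
assert build_gsi_partition_prefix("t", "i", b"x").startswith(build_gsi_prefix("t", "i"))
```

The varint length prefix (rather than a separator byte) means segment bytes never need escaping, at the cost of prefix ordering not being identical to segment-wise lexicographic ordering for segments of different lengths.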
@@ -84,7 +84,23 @@ table_metadata_destroy :: proc(metadata: ^Table_Metadata, allocator: mem.Allocat
 	}
 	delete(metadata.attribute_definitions, allocator)

-	// TODO: Free GSI/LSI if we implement them
+	// Free GSI definitions
+	if gsis, has_gsis := metadata.global_secondary_indexes.?; has_gsis {
+		for gsi in gsis {
+			delete(gsi.index_name, allocator)
+			for ks in gsi.key_schema {
+				delete(ks.attribute_name, allocator)
+			}
+			delete(gsi.key_schema, allocator)
+			if nka, has_nka := gsi.projection.non_key_attributes.?; has_nka {
+				for attr in nka {
+					delete(attr, allocator)
+				}
+				delete(nka, allocator)
+			}
+		}
+		delete(gsis, allocator)
+	}
 }

 // Get the partition key attribute name
@@ -187,7 +203,6 @@ remove_table_lock :: proc(engine: ^Storage_Engine, table_name: string) {

 // Serialize table metadata to binary format
 serialize_table_metadata :: proc(metadata: ^Table_Metadata) -> ([]byte, bool) {
-	// Create a temporary item to hold metadata
 	meta_item := make(Item, context.temp_allocator)
 	defer delete(meta_item)

@@ -200,7 +215,7 @@ serialize_table_metadata :: proc(metadata: ^Table_Metadata) -> ([]byte, bool) {
 		if i > 0 {
 			strings.write_string(&ks_builder, ",")
 		}
-		fmt.sbprintf(&ks_builder, `{"AttributeName":"%s","KeyType":"%s"}`,
+		fmt.sbprintf(&ks_builder, `{{"AttributeName":"%s","KeyType":"%s"}}`,
 			ks.attribute_name, key_type_to_string(ks.key_type))
 	}
 	strings.write_string(&ks_builder, "]")
@@ -216,7 +231,7 @@ serialize_table_metadata :: proc(metadata: ^Table_Metadata) -> ([]byte, bool) {
 		if i > 0 {
 			strings.write_string(&ad_builder, ",")
 		}
-		fmt.sbprintf(&ad_builder, `{"AttributeName":"%s","AttributeType":"%s"}`,
+		fmt.sbprintf(&ad_builder, `{{"AttributeName":"%s","AttributeType":"%s"}}`,
 			ad.attribute_name, scalar_type_to_string(ad.attribute_type))
 	}
 	strings.write_string(&ad_builder, "]")
@@ -227,6 +242,48 @@ serialize_table_metadata :: proc(metadata: ^Table_Metadata) -> ([]byte, bool) {
 	meta_item["TableStatus"] = String(strings.clone(table_status_to_string(metadata.table_status)))
 	meta_item["CreationDateTime"] = Number(fmt.aprint(metadata.creation_date_time))
+
+	// Encode GSI definitions as JSON string
+	if gsis, has_gsis := metadata.global_secondary_indexes.?; has_gsis && len(gsis) > 0 {
+		gsi_builder := strings.builder_make(context.temp_allocator)
+		defer strings.builder_destroy(&gsi_builder)
+
+		strings.write_string(&gsi_builder, "[")
+		for gsi, i in gsis {
+			if i > 0 {
+				strings.write_string(&gsi_builder, ",")
+			}
+			fmt.sbprintf(&gsi_builder, `{{"IndexName":"%s","KeySchema":[`, gsi.index_name)
+			for ks, j in gsi.key_schema {
+				if j > 0 {
+					strings.write_string(&gsi_builder, ",")
+				}
+				fmt.sbprintf(&gsi_builder, `{{"AttributeName":"%s","KeyType":"%s"}}`,
+					ks.attribute_name, key_type_to_string(ks.key_type))
+			}
+			strings.write_string(&gsi_builder, `],"Projection":{"ProjectionType":"`)
+			switch gsi.projection.projection_type {
+			case .ALL: strings.write_string(&gsi_builder, "ALL")
+			case .KEYS_ONLY: strings.write_string(&gsi_builder, "KEYS_ONLY")
+			case .INCLUDE: strings.write_string(&gsi_builder, "INCLUDE")
+			}
+			strings.write_string(&gsi_builder, `"`)
+			if nka, has_nka := gsi.projection.non_key_attributes.?; has_nka && len(nka) > 0 {
+				strings.write_string(&gsi_builder, `,"NonKeyAttributes":[`)
+				for attr, k in nka {
+					if k > 0 {
+						strings.write_string(&gsi_builder, ",")
+					}
+					fmt.sbprintf(&gsi_builder, `"%s"`, attr)
+				}
+				strings.write_string(&gsi_builder, "]")
+			}
+			strings.write_string(&gsi_builder, "}}")
+		}
+		strings.write_string(&gsi_builder, "]")
+
+		meta_item["GlobalSecondaryIndexes"] = String(strings.clone(strings.to_string(gsi_builder)))
+	}

 	// Encode to binary
 	return encode(meta_item)
 }
@@ -282,6 +339,17 @@ deserialize_table_metadata :: proc(data: []byte, allocator: mem.Allocator) -> (T
 		}
 	}
+
+	// Parse GlobalSecondaryIndexes from embedded JSON string
+	if gsi_val, gsi_found := meta_item["GlobalSecondaryIndexes"]; gsi_found {
+		#partial switch v in gsi_val {
+		case String:
+			gsis, gsi_ok := parse_gsis_json(string(v), allocator)
+			if gsi_ok && len(gsis) > 0 {
+				metadata.global_secondary_indexes = gsis
+			}
+		}
+	}

 	return metadata, true
 }

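The serialize/deserialize pair above round-trips GSI definitions through a single JSON string stored as one attribute on the metadata item. A minimal Python sketch of that round-trip (field names mirror the DynamoDB wire shapes; `serialize_gsis` and the input dict shape are illustrative, not JormunDB internals):

```python
import json

def serialize_gsis(gsis):
    # Embed index definitions as one JSON string attribute on the meta item
    return json.dumps([
        {
            "IndexName": g["name"],
            "KeySchema": [{"AttributeName": a, "KeyType": t} for a, t in g["keys"]],
            "Projection": {"ProjectionType": g["projection"]},
        }
        for g in gsis
    ])

meta_item = {"TableName": "orders"}
meta_item["GlobalSecondaryIndexes"] = serialize_gsis(
    [{"name": "by-user", "keys": [("user_id", "HASH")], "projection": "ALL"}]
)

# On load, the embedded string parses back into structured definitions
parsed = json.loads(meta_item["GlobalSecondaryIndexes"])
assert parsed[0]["IndexName"] == "by-user"
assert parsed[0]["Projection"]["ProjectionType"] == "ALL"
```

Embedding a string keeps the metadata item flat (no nested attribute types needed in the binary item codec), at the cost of a second parse step on load.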
@@ -463,6 +531,7 @@ create_table :: proc(
 	table_name: string,
 	key_schema: []Key_Schema_Element,
 	attribute_definitions: []Attribute_Definition,
+	gsis: Maybe([]Global_Secondary_Index) = nil,
 ) -> (Table_Description, Storage_Error) {
 	table_lock := get_or_create_table_lock(engine, table_name)
 	sync.rw_mutex_lock(table_lock)
@@ -500,6 +569,34 @@ create_table :: proc(
 		ad.attribute_name = strings.clone(ad.attribute_name, engine.allocator)
 	}
+
+	// Deep copy GSI definitions into engine allocator
+	if gsi_defs, has_gsis := gsis.?; has_gsis && len(gsi_defs) > 0 {
+		owned_gsis := make([]Global_Secondary_Index, len(gsi_defs), engine.allocator)
+		for gsi_def, i in gsi_defs {
+			owned_gsis[i] = Global_Secondary_Index{
+				index_name = strings.clone(gsi_def.index_name, engine.allocator),
+				key_schema = make([]Key_Schema_Element, len(gsi_def.key_schema), engine.allocator),
+				projection = Projection{
+					projection_type = gsi_def.projection.projection_type,
+				},
+			}
+			for ks, j in gsi_def.key_schema {
+				owned_gsis[i].key_schema[j] = Key_Schema_Element{
+					attribute_name = strings.clone(ks.attribute_name, engine.allocator),
+					key_type = ks.key_type,
+				}
+			}
+			if nka, has_nka := gsi_def.projection.non_key_attributes.?; has_nka {
+				owned_nka := make([]string, len(nka), engine.allocator)
+				for attr, k in nka {
+					owned_nka[k] = strings.clone(attr, engine.allocator)
+				}
+				owned_gsis[i].projection.non_key_attributes = owned_nka
+			}
+		}
+		metadata.global_secondary_indexes = owned_gsis
+	}

 	// Serialize and store
 	meta_value, serialize_ok := serialize_table_metadata(&metadata)
 	if !serialize_ok {
@@ -522,6 +619,7 @@ create_table :: proc(
 		creation_date_time = now,
 		item_count = 0,
 		table_size_bytes = 0,
+		global_secondary_indexes = gsis,
 	}

 	return desc, .None
@@ -565,7 +663,6 @@ delete_table :: proc(engine: ^Storage_Engine, table_name: string) -> Storage_Err
 			break
 		}

-		// Delete this item
 		err: cstring
 		rocksdb.rocksdb_delete(
 			engine.db.handle,
@@ -582,6 +679,41 @@ delete_table :: proc(engine: ^Storage_Engine, table_name: string) -> Storage_Err
 		}
 	}

+	// Delete all GSI entries for this table
+	gsi_table_prefix := build_gsi_table_prefix(table_name)
+	defer delete(gsi_table_prefix)
+
+	gsi_iter := rocksdb.rocksdb_create_iterator(engine.db.handle, engine.db.read_options)
+	if gsi_iter != nil {
+		defer rocksdb.rocksdb_iter_destroy(gsi_iter)
+
+		rocksdb.rocksdb_iter_seek(gsi_iter, raw_data(gsi_table_prefix), c.size_t(len(gsi_table_prefix)))
+
+		for rocksdb.rocksdb_iter_valid(gsi_iter) != 0 {
+			key_len: c.size_t
+			key_ptr := rocksdb.rocksdb_iter_key(gsi_iter, &key_len)
+			key_bytes := key_ptr[:key_len]
+
+			if !has_prefix(key_bytes, gsi_table_prefix) {
+				break
+			}
+
+			err: cstring
+			rocksdb.rocksdb_delete(
+				engine.db.handle,
+				engine.db.write_options,
+				raw_data(key_bytes),
+				c.size_t(len(key_bytes)),
+				&err,
+			)
+			if err != nil {
+				rocksdb.rocksdb_free(rawptr(err))
+			}
+
+			rocksdb.rocksdb_iter_next(gsi_iter)
+		}
+	}
+
 	// Delete metadata
 	del_err := rocksdb.db_delete(&engine.db, meta_key)
 	if del_err != .None {
@@ -639,6 +771,17 @@ put_item :: proc(engine: ^Storage_Engine, table_name: string, item: Item) -> Sto
 	storage_key := build_data_key(table_name, key_values.pk, key_values.sk)
 	defer delete(storage_key)

+	// --- GSI cleanup: delete OLD GSI entries if item already exists ---
+	existing_value, existing_err := rocksdb.db_get(&engine.db, storage_key)
+	if existing_err == .None && existing_value != nil {
+		defer delete(existing_value)
+		old_item, decode_ok := decode(existing_value)
+		if decode_ok {
+			defer item_destroy(&old_item)
+			gsi_delete_entries(engine, table_name, old_item, &metadata)
+		}
+	}
+
 	// Encode item
 	encoded_item, encode_ok := encode(item)
 	if !encode_ok {
@@ -652,6 +795,12 @@ put_item :: proc(engine: ^Storage_Engine, table_name: string, item: Item) -> Sto
 		return .RocksDB_Error
 	}

+	// --- GSI maintenance: write NEW GSI entries ---
+	gsi_err := gsi_write_entries(engine, table_name, item, &metadata)
+	if gsi_err != .None {
+		return gsi_err
+	}
+
 	return .None
 }

@@ -748,6 +897,17 @@ delete_item :: proc(engine: ^Storage_Engine, table_name: string, key: Item) -> S
 	storage_key := build_data_key(table_name, key_values.pk, key_values.sk)
 	defer delete(storage_key)

+	// --- GSI cleanup: read existing item to know which GSI entries to remove ---
+	existing_value, existing_err := rocksdb.db_get(&engine.db, storage_key)
+	if existing_err == .None && existing_value != nil {
+		defer delete(existing_value)
+		old_item, decode_ok := decode(existing_value)
+		if decode_ok {
+			defer item_destroy(&old_item)
+			gsi_delete_entries(engine, table_name, old_item, &metadata)
+		}
+	}
+
 	// Delete from RocksDB
 	del_err := rocksdb.db_delete(&engine.db, storage_key)
 	if del_err != .None {
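Both put_item and delete_item follow the same index-maintenance pattern: read the old item first, remove the GSI entries derived from its old attribute values, then apply the base write and (for puts) insert fresh entries. A Python sketch over a plain dict standing in for RocksDB (`base_key`/`gsi_key` and the single-attribute index are illustrative simplifications):

```python
store = {}  # stands in for the RocksDB key-value store

def base_key(pk):
    return ("data", pk)

def gsi_key(attr_val, pk):
    return ("gsi", attr_val, pk)

def put_item(pk, item, indexed_attr):
    old = store.get(base_key(pk))
    if old is not None and indexed_attr in old:
        # Remove the stale GSI entry keyed by the OLD attribute value,
        # otherwise re-puts would leave dangling index entries behind.
        store.pop(gsi_key(old[indexed_attr], pk), None)
    store[base_key(pk)] = item
    if indexed_attr in item:
        store[gsi_key(item[indexed_attr], pk)] = pk

def delete_item(pk, indexed_attr):
    old = store.pop(base_key(pk), None)
    if old is not None and indexed_attr in old:
        store.pop(gsi_key(old[indexed_attr], pk), None)

put_item("1", {"user": "a"}, "user")
put_item("1", {"user": "b"}, "user")   # re-put: the ("gsi", "a", "1") entry is cleaned up
assert ("gsi", "a", "1") not in store
assert ("gsi", "b", "1") in store
delete_item("1", "user")
assert store == {}
```

This is why the hunks above fetch and decode the existing value before writing: the index key depends on the old attribute value, which is only recoverable from the old item.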
719	dynamodb/transact.odin	Normal file
@@ -0,0 +1,719 @@
// TransactWriteItems and TransactGetItems storage operations
//
// TransactWriteItems: Atomic write of up to 100 items across multiple tables.
// - Supports Put, Delete, Update, and ConditionCheck actions
// - ALL actions succeed or ALL fail (all-or-nothing)
// - ConditionExpressions are evaluated BEFORE any mutations
// - Uses exclusive locks on all involved tables
//
// TransactGetItems: Atomic read of up to 100 items across multiple tables.
// - Each item specifies TableName + Key + optional ProjectionExpression
// - All reads are consistent (snapshot isolation via table locks)
package dynamodb

import "core:strings"
import "core:sync"
import "../rocksdb"

// ============================================================================
// TransactWriteItems Types
// ============================================================================

Transact_Write_Action_Type :: enum {
	Put,
	Delete,
	Update,
	Condition_Check,
}

Transact_Write_Action :: struct {
	type:       Transact_Write_Action_Type,
	table_name: string,
	// For Put: the full item to write
	item: Maybe(Item),
	// For Delete/Update/ConditionCheck: the key item
	key: Maybe(Item),
	// For Update: the parsed update plan
	update_plan: Maybe(Update_Plan),
	// ConditionExpression components (shared across all action types)
	condition_expr:   Maybe(string),
	expr_attr_names:  Maybe(map[string]string),
	expr_attr_values: map[string]Attribute_Value,
	// For Update: ReturnValuesOnConditionCheckFailure (not implemented yet, placeholder)
}

Transact_Write_Result :: struct {
	// For now, either all succeed (no error) or we return a
	// TransactionCanceledException with reasons per action.
	cancellation_reasons: []Cancellation_Reason,
}

Cancellation_Reason :: struct {
	code:    string, // "None", "ConditionalCheckFailed", "ValidationError", etc.
	message: string,
}

transact_write_action_destroy :: proc(action: ^Transact_Write_Action) {
	if item, has := action.item.?; has {
		item_copy := item
		item_destroy(&item_copy)
	}
	if key, has := action.key.?; has {
		key_copy := key
		item_destroy(&key_copy)
	}
	if plan, has := action.update_plan.?; has {
		plan_copy := plan
		update_plan_destroy(&plan_copy)
	}
	if names, has := action.expr_attr_names.?; has {
		for k, v in names {
			delete(k)
			delete(v)
		}
		names_copy := names
		delete(names_copy)
	}
	for k, v in action.expr_attr_values {
		delete(k)
		v_copy := v
		attr_value_destroy(&v_copy)
	}
	delete(action.expr_attr_values)
}

transact_write_result_destroy :: proc(result: ^Transact_Write_Result) {
	if result.cancellation_reasons != nil {
		delete(result.cancellation_reasons)
	}
}

// ============================================================================
// TransactWriteItems — Execute an atomic batch of write operations
//
// DynamoDB semantics:
// 1. Acquire exclusive locks on all involved tables
// 2. Evaluate ALL ConditionExpressions (pre-flight check)
// 3. If any condition fails → cancel entire transaction
// 4. If all pass → apply all mutations
// 5. Release locks
//
// Returns .None on success, .Cancelled on condition failure.
// ============================================================================

Transaction_Error :: enum {
	None,
	Cancelled,        // One or more conditions failed
	Validation_Error, // Bad request data
	Internal_Error,   // Storage/serialization failure
}

transact_write_items :: proc(
	engine: ^Storage_Engine,
	actions: []Transact_Write_Action,
) -> (Transact_Write_Result, Transaction_Error) {
	result: Transact_Write_Result

	if len(actions) == 0 {
		return result, .Validation_Error
	}

	// ---- Step 1: Collect unique table names and acquire locks ----
	table_set := make(map[string]bool, allocator = context.temp_allocator)
	for action in actions {
		table_set[action.table_name] = true
	}

	// Acquire exclusive locks on all tables in deterministic order
	// to prevent deadlocks
	table_names := make([dynamic]string, allocator = context.temp_allocator)
	for name in table_set {
		append(&table_names, name)
	}
	// Simple sort for deterministic lock ordering
	for i := 0; i < len(table_names); i += 1 {
		for j := i + 1; j < len(table_names); j += 1 {
			if table_names[j] < table_names[i] {
				table_names[i], table_names[j] = table_names[j], table_names[i]
			}
		}
	}

	locks := make([dynamic]^sync.RW_Mutex, allocator = context.temp_allocator)
	for name in table_names {
		lock := get_or_create_table_lock(engine, name)
		sync.rw_mutex_lock(lock)
		append(&locks, lock)
	}
	defer {
		// Release all locks in reverse order
		for i := len(locks) - 1; i >= 0; i -= 1 {
			sync.rw_mutex_unlock(locks[i])
		}
	}

	// ---- Step 2: Pre-flight — fetch metadata and existing items, evaluate conditions ----
	reasons := make([]Cancellation_Reason, len(actions))
	any_failed := false

	// Cache table metadata to avoid redundant lookups
	metadata_cache := make(map[string]Table_Metadata, allocator = context.temp_allocator)
	defer {
		for _, meta in metadata_cache {
			meta_copy := meta
			table_metadata_destroy(&meta_copy, engine.allocator)
		}
	}

	for action, idx in actions {
		// Get table metadata (cached)
		metadata: ^Table_Metadata
		if cached, found := &metadata_cache[action.table_name]; found {
			metadata = cached
		} else {
			meta, meta_err := get_table_metadata(engine, action.table_name)
			if meta_err != .None {
				reasons[idx] = Cancellation_Reason{
					code    = "ValidationError",
					message = "Table not found",
				}
				any_failed = true
				continue
			}
			metadata_cache[action.table_name] = meta
			metadata = &metadata_cache[action.table_name]
		}

		// Determine the key item for this action
		key_item: Item
		switch action.type {
		case .Put:
			if item, has := action.item.?; has {
				key_item = item // For Put, key is extracted from the item
			} else {
				reasons[idx] = Cancellation_Reason{
					code    = "ValidationError",
					message = "Put action missing Item",
				}
				any_failed = true
				continue
			}
		case .Delete, .Update, .Condition_Check:
			if key, has := action.key.?; has {
				key_item = key
			} else {
				reasons[idx] = Cancellation_Reason{
					code    = "ValidationError",
					message = "Action missing Key",
				}
				any_failed = true
				continue
			}
		}

		// Evaluate ConditionExpression if present
		if cond_str, has_cond := action.condition_expr.?; has_cond {
			// Fetch existing item
			existing_item, get_err := get_item_internal(engine, action.table_name, key_item, metadata)
			if get_err != .None && get_err != .Item_Not_Found {
				reasons[idx] = Cancellation_Reason{
					code    = "InternalError",
					message = "Failed to read existing item",
				}
				any_failed = true
				continue
			}
			defer {
				if ex, has_ex := existing_item.?; has_ex {
					ex_copy := ex
					item_destroy(&ex_copy)
				}
			}

			// Parse and evaluate condition
			filter_node, parse_ok := parse_filter_expression(
				cond_str, action.expr_attr_names, action.expr_attr_values,
			)
			if !parse_ok || filter_node == nil {
				reasons[idx] = Cancellation_Reason{
					code    = "ValidationError",
					message = "Invalid ConditionExpression",
				}
				any_failed = true
				continue
			}
			defer {
				filter_node_destroy(filter_node)
				free(filter_node)
			}

			eval_item: Item
			if item, has_item := existing_item.?; has_item {
				eval_item = item
			} else {
				eval_item = Item{}
			}

			if !evaluate_filter(eval_item, filter_node) {
				reasons[idx] = Cancellation_Reason{
					code    = "ConditionalCheckFailed",
					message = "The conditional request failed",
				}
				any_failed = true
				continue
			}
		}

		// ConditionCheck actions only validate — they don't mutate
		if action.type == .Condition_Check {
			reasons[idx] = Cancellation_Reason{code = "None"}
			continue
		}

		// Validate key/item against schema
		switch action.type {
		case .Put:
			if item, has := action.item.?; has {
				validation_err := validate_item_key_types(
					item, metadata.key_schema, metadata.attribute_definitions,
				)
				if validation_err != .None {
					reasons[idx] = Cancellation_Reason{
						code    = "ValidationError",
						message = "Key attribute type mismatch",
					}
					any_failed = true
					continue
				}
			}
		case .Delete, .Update:
			// Key validation happens during execution
		case .Condition_Check:
			// Already handled above
		}

		reasons[idx] = Cancellation_Reason{code = "None"}
	}

	// ---- Step 3: If any condition failed, return cancellation ----
	if any_failed {
		result.cancellation_reasons = reasons
		return result, .Cancelled
	}

	// ---- Step 4: Apply all mutations ----
	for &action, idx in actions {
		metadata := &metadata_cache[action.table_name]

		apply_err := transact_apply_action(engine, &action, metadata)
		if apply_err != .None {
			// This shouldn't happen after pre-validation, but handle gracefully
			reasons[idx] = Cancellation_Reason{
				code    = "InternalError",
				message = "Failed to apply mutation",
			}
			// In a real impl we'd need to rollback. For now, report the failure.
			result.cancellation_reasons = reasons
			return result, .Internal_Error
		}
	}

	delete(reasons)
	return result, .None
}
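Two ideas carry the correctness of transact_write_items: locking tables in sorted name order (so two concurrent transactions can never hold each other's locks), and splitting the work into a validate phase and an apply phase (so nothing mutates until every condition has passed). A Python sketch of both, with an illustrative lock registry and `(table, check_fn, apply_fn)` action tuples that are not JormunDB's actual types:

```python
import threading

locks = {name: threading.Lock() for name in ("orders", "users")}

def transact(actions):
    # actions: list of (table_name, check_fn, apply_fn)
    tables = sorted({t for t, _, _ in actions})  # deterministic order -> no deadlock
    held = []
    try:
        for t in tables:
            locks[t].acquire()
            held.append(t)
        # Phase 1: evaluate every condition before touching anything
        if not all(check() for _, check, _ in actions):
            return False  # cancelled; no mutation has happened
        # Phase 2: all conditions passed, apply every mutation
        for _, _, apply in actions:
            apply()
        return True
    finally:
        for t in reversed(held):  # release in reverse acquisition order
            locks[t].release()

state = {"orders": 0, "users": 0}
ok = transact([
    ("orders", lambda: True, lambda: state.__setitem__("orders", 1)),
    ("users", lambda: False, lambda: state.__setitem__("users", 1)),
])
assert ok is False and state == {"orders": 0, "users": 0}  # all-or-nothing
```

Note the same caveat the Odin code admits in its comment: if an apply step fails after validation, there is no rollback; the two-phase split only guarantees atomicity with respect to condition failures, not storage-layer errors mid-apply.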

// Apply a single transact write action (called after all conditions have passed)
@(private = "file")
transact_apply_action :: proc(
	engine: ^Storage_Engine,
	action: ^Transact_Write_Action,
	metadata: ^Table_Metadata,
) -> Storage_Error {
	switch action.type {
	case .Put:
		if item, has := action.item.?; has {
			return put_item_internal(engine, action.table_name, item, metadata)
		}
		return .Invalid_Key

	case .Delete:
		if key, has := action.key.?; has {
			return delete_item_internal(engine, action.table_name, key, metadata)
		}
		return .Invalid_Key

	case .Update:
		if key, has := action.key.?; has {
			if plan, has_plan := action.update_plan.?; has_plan {
				plan_copy := plan
				_, _, err := update_item_internal(engine, action.table_name, key, &plan_copy, metadata)
				return err
			}
			return .Invalid_Key
		}
		return .Invalid_Key

	case .Condition_Check:
		return .None // No mutation
	}
	return .None
}

// ============================================================================
// Internal storage operations that skip lock acquisition
// (Used by transact_write_items which manages its own locking)
// ============================================================================

get_item_internal :: proc(
	engine: ^Storage_Engine,
	table_name: string,
	key: Item,
	metadata: ^Table_Metadata,
) -> (Maybe(Item), Storage_Error) {
	key_struct, key_ok := key_from_item(key, metadata.key_schema)
	if !key_ok {
		return nil, .Missing_Key_Attribute
	}
	defer key_destroy(&key_struct)

	key_values, kv_ok := key_get_values(&key_struct)
	if !kv_ok {
		return nil, .Invalid_Key
	}

	storage_key := build_data_key(table_name, key_values.pk, key_values.sk)
	defer delete(storage_key)

	value, get_err := rocksdb.db_get(&engine.db, storage_key)
	if get_err == .NotFound {
		return nil, .None
	}
	if get_err != .None {
		return nil, .RocksDB_Error
	}
	defer delete(value)

	item, decode_ok := decode(value)
	if !decode_ok {
		return nil, .Serialization_Error
	}

	return item, .None
}
|
||||||
|
|
||||||
|
put_item_internal :: proc(
    engine:     ^Storage_Engine,
    table_name: string,
    item:       Item,
    metadata:   ^Table_Metadata,
) -> Storage_Error {
    key_struct, key_ok := key_from_item(item, metadata.key_schema)
    if !key_ok {
        return .Missing_Key_Attribute
    }
    defer key_destroy(&key_struct)

    key_values, kv_ok := key_get_values(&key_struct)
    if !kv_ok {
        return .Invalid_Key
    }

    storage_key := build_data_key(table_name, key_values.pk, key_values.sk)
    defer delete(storage_key)

    encoded_item, encode_ok := encode(item)
    if !encode_ok {
        return .Serialization_Error
    }
    defer delete(encoded_item)

    put_err := rocksdb.db_put(&engine.db, storage_key, encoded_item)
    if put_err != .None {
        return .RocksDB_Error
    }

    return .None
}

delete_item_internal :: proc(
    engine:     ^Storage_Engine,
    table_name: string,
    key:        Item,
    metadata:   ^Table_Metadata,
) -> Storage_Error {
    key_struct, key_ok := key_from_item(key, metadata.key_schema)
    if !key_ok {
        return .Missing_Key_Attribute
    }
    defer key_destroy(&key_struct)

    key_values, kv_ok := key_get_values(&key_struct)
    if !kv_ok {
        return .Invalid_Key
    }

    storage_key := build_data_key(table_name, key_values.pk, key_values.sk)
    defer delete(storage_key)

    del_err := rocksdb.db_delete(&engine.db, storage_key)
    if del_err != .None {
        return .RocksDB_Error
    }

    return .None
}

update_item_internal :: proc(
    engine:     ^Storage_Engine,
    table_name: string,
    key_item:   Item,
    plan:       ^Update_Plan,
    metadata:   ^Table_Metadata,
) -> (old_item: Maybe(Item), new_item: Maybe(Item), err: Storage_Error) {
    key_struct, key_ok := key_from_item(key_item, metadata.key_schema)
    if !key_ok {
        return nil, nil, .Missing_Key_Attribute
    }
    defer key_destroy(&key_struct)

    key_values, kv_ok := key_get_values(&key_struct)
    if !kv_ok {
        return nil, nil, .Invalid_Key
    }

    storage_key := build_data_key(table_name, key_values.pk, key_values.sk)
    defer delete(storage_key)

    // Fetch existing item
    existing_encoded, get_err := rocksdb.db_get(&engine.db, storage_key)
    existing_item: Item

    if get_err == .None && existing_encoded != nil {
        defer delete(existing_encoded)
        decoded, decode_ok := decode(existing_encoded)
        if !decode_ok {
            return nil, nil, .Serialization_Error
        }
        existing_item = decoded
        old_item = item_deep_copy(existing_item)
    } else if get_err == .NotFound || existing_encoded == nil {
        existing_item = make(Item)
        for ks in metadata.key_schema {
            if val, found := key_item[ks.attribute_name]; found {
                existing_item[strings.clone(ks.attribute_name)] = attr_value_deep_copy(val)
            }
        }
    } else {
        return nil, nil, .RocksDB_Error
    }

    if !execute_update_plan(&existing_item, plan) {
        item_destroy(&existing_item)
        if old, has := old_item.?; has {
            old_copy := old
            item_destroy(&old_copy)
        }
        return nil, nil, .Invalid_Key
    }

    encoded_item, encode_ok := encode(existing_item)
    if !encode_ok {
        item_destroy(&existing_item)
        if old, has := old_item.?; has {
            old_copy := old
            item_destroy(&old_copy)
        }
        return nil, nil, .Serialization_Error
    }
    defer delete(encoded_item)

    put_err := rocksdb.db_put(&engine.db, storage_key, encoded_item)
    if put_err != .None {
        item_destroy(&existing_item)
        if old, has := old_item.?; has {
            old_copy := old
            item_destroy(&old_copy)
        }
        return nil, nil, .RocksDB_Error
    }

    new_item = existing_item
    return old_item, new_item, .None
}

// ============================================================================
// TransactGetItems Types
// ============================================================================

Transact_Get_Action :: struct {
    table_name: string,
    key:        Item,
    projection: Maybe([]string), // Optional ProjectionExpression paths
}

Transact_Get_Result :: struct {
    items: []Maybe(Item), // One per action, nil if item not found
}

transact_get_action_destroy :: proc(action: ^Transact_Get_Action) {
    item_destroy(&action.key)
    if proj, has := action.projection.?; has {
        delete(proj)
    }
}

transact_get_result_destroy :: proc(result: ^Transact_Get_Result) {
    for &maybe_item in result.items {
        if item, has := maybe_item.?; has {
            item_copy := item
            item_destroy(&item_copy)
        }
    }
    delete(result.items)
}

// ============================================================================
// TransactGetItems — Atomically read up to 100 items
//
// DynamoDB semantics:
// - All reads are performed with a consistent snapshot
// - Missing items are returned as nil (no error)
// - ProjectionExpression is applied per-item
// ============================================================================

transact_get_items :: proc(
    engine:  ^Storage_Engine,
    actions: []Transact_Get_Action,
) -> (Transact_Get_Result, Transaction_Error) {
    result: Transact_Get_Result

    if len(actions) == 0 {
        return result, .Validation_Error
    }

    // Collect unique tables and acquire shared locks in deterministic order
    table_set := make(map[string]bool, allocator = context.temp_allocator)
    for action in actions {
        table_set[action.table_name] = true
    }

    table_names := make([dynamic]string, allocator = context.temp_allocator)
    for name in table_set {
        append(&table_names, name)
    }
    for i := 0; i < len(table_names); i += 1 {
        for j := i + 1; j < len(table_names); j += 1 {
            if table_names[j] < table_names[i] {
                table_names[i], table_names[j] = table_names[j], table_names[i]
            }
        }
    }

    locks := make([dynamic]^sync.RW_Mutex, allocator = context.temp_allocator)
    for name in table_names {
        lock := get_or_create_table_lock(engine, name)
        sync.rw_mutex_shared_lock(lock)
        append(&locks, lock)
    }
    defer {
        for i := len(locks) - 1; i >= 0; i -= 1 {
            sync.rw_mutex_shared_unlock(locks[i])
        }
    }

    // Cache metadata
    metadata_cache := make(map[string]Table_Metadata, allocator = context.temp_allocator)
    defer {
        for _, meta in metadata_cache {
            meta_copy := meta
            table_metadata_destroy(&meta_copy, engine.allocator)
        }
    }

    items := make([]Maybe(Item), len(actions))

    for action, idx in actions {
        // Get metadata (cached)
        metadata: ^Table_Metadata
        if cached, found := &metadata_cache[action.table_name]; found {
            metadata = cached
        } else {
            meta, meta_err := get_table_metadata(engine, action.table_name)
            if meta_err != .None {
                items[idx] = nil
                continue
            }
            metadata_cache[action.table_name] = meta
            metadata = &metadata_cache[action.table_name]
        }

        // Fetch item
        item_result, get_err := get_item_internal(engine, action.table_name, action.key, metadata)
        if get_err != .None {
            items[idx] = nil
            continue
        }

        // Apply projection if specified
        if item, has_item := item_result.?; has_item {
            if proj, has_proj := action.projection.?; has_proj && len(proj) > 0 {
                projected := apply_projection(item, proj)
                item_copy := item
                item_destroy(&item_copy)
                items[idx] = projected
            } else {
                items[idx] = item
            }
        } else {
            items[idx] = nil
        }
    }

    result.items = items
    return result, .None
}

// ============================================================================
// Helper: Extract modified attribute paths from an Update_Plan
//
// Used for UPDATED_NEW / UPDATED_OLD ReturnValues filtering.
// DynamoDB only returns the attributes that were actually modified
// by the UpdateExpression, not the entire item.
// ============================================================================

get_update_plan_modified_paths :: proc(plan: ^Update_Plan) -> []string {
    paths := make(map[string]bool, allocator = context.temp_allocator)

    for action in plan.sets {
        paths[action.path] = true
    }
    for action in plan.removes {
        paths[action.path] = true
    }
    for action in plan.adds {
        paths[action.path] = true
    }
    for action in plan.deletes {
        paths[action.path] = true
    }

    result := make([]string, len(paths))
    i := 0
    for path in paths {
        result[i] = path
        i += 1
    }
    return result
}

// Filter an item to only include the specified attribute paths.
// Returns a new deep-copied item containing only matching attributes.
filter_item_to_paths :: proc(item: Item, paths: []string) -> Item {
    result := make(Item)
    for path in paths {
        if val, found := item[path]; found {
            result[strings.clone(path)] = attr_value_deep_copy(val)
        }
    }
    return result
}
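The UPDATED_NEW / UPDATED_OLD logic above reduces to a set union over every path touched by a SET/REMOVE/ADD/DELETE action, followed by a key filter over the item. A hedged Python sketch of the same two helpers (the plan/item shapes here are illustrative stand-ins for the Odin types):

```python
# Sketch of get_update_plan_modified_paths + filter_item_to_paths.
# An "update plan" here is just a dict of action-kind -> list of {path} dicts.
def modified_paths(plan):
    # Union of every path touched by SET/REMOVE/ADD/DELETE actions.
    paths = set()
    for actions in (plan["sets"], plan["removes"], plan["adds"], plan["deletes"]):
        paths.update(a["path"] for a in actions)
    return paths

def filter_item_to_paths(item, paths):
    # Keep only attributes whose top-level path was modified.
    return {k: v for k, v in item.items() if k in paths}

plan = {"sets": [{"path": "email"}], "removes": [{"path": "age"}],
        "adds": [], "deletes": []}
item = {"pk": "user#1", "email": "a@b.c", "name": "Ada"}
updated_new = filter_item_to_paths(item, modified_paths(plan))  # only "email" survives
```

This mirrors DynamoDB's behavior of returning only attributes the UpdateExpression actually touched, not the whole item.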
@@ -120,6 +120,12 @@ update_item :: proc(
         return nil, nil, .RocksDB_Error
     }
 
+    // --- GSI maintenance: delete old entries, write new entries ---
+    if old, has := old_item.?; has {
+        gsi_delete_entries(engine, table_name, old, &metadata)
+    }
+    gsi_write_entries(engine, table_name, existing_item, &metadata)
+
     new_item = existing_item
     return old_item, new_item, .None
 }
276	gsi_handlers.odin	Normal file
@@ -0,0 +1,276 @@
// gsi_handlers.odin — GSI-related HTTP handler helpers
//
// This file lives in the main package alongside main.odin.
// It provides:
//   1. parse_global_secondary_indexes — parse GSI definitions from CreateTable request
//   2. parse_index_name — extract IndexName from Query/Scan requests
//   3. Projection type helper for response building
package main

import "core:encoding/json"
import "core:strings"
import "dynamodb"

// ============================================================================
// Parse GlobalSecondaryIndexes from CreateTable request body
//
// DynamoDB CreateTable request format for GSIs:
// {
//   "GlobalSecondaryIndexes": [
//     {
//       "IndexName": "email-index",
//       "KeySchema": [
//         { "AttributeName": "email", "KeyType": "HASH" },
//         { "AttributeName": "timestamp", "KeyType": "RANGE" }
//       ],
//       "Projection": {
//         "ProjectionType": "ALL" | "KEYS_ONLY" | "INCLUDE",
//         "NonKeyAttributes": ["attr1", "attr2"] // only for INCLUDE
//       }
//     }
//   ]
// }
//
// Returns nil if no GSI definitions are present (valid — GSIs are optional).
// ============================================================================

parse_global_secondary_indexes :: proc(
    root:      json.Object,
    attr_defs: []dynamodb.Attribute_Definition,
) -> Maybe([]dynamodb.Global_Secondary_Index) {
    gsi_val, found := root["GlobalSecondaryIndexes"]
    if !found {
        return nil
    }

    gsi_arr, ok := gsi_val.(json.Array)
    if !ok || len(gsi_arr) == 0 {
        return nil
    }

    gsis := make([]dynamodb.Global_Secondary_Index, len(gsi_arr))

    for elem, i in gsi_arr {
        elem_obj, elem_ok := elem.(json.Object)
        if !elem_ok {
            cleanup_parsed_gsis(gsis[:i])
            delete(gsis)
            return nil
        }

        gsi, gsi_ok := parse_single_gsi(elem_obj, attr_defs)
        if !gsi_ok {
            cleanup_parsed_gsis(gsis[:i])
            delete(gsis)
            return nil
        }

        gsis[i] = gsi
    }

    return gsis
}

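The validation rules enforced below in `parse_single_gsi` — required `IndexName` and `KeySchema`, one or two key elements, exactly one HASH key, and every key attribute declared in `AttributeDefinitions` — can be summarized in a short Python sketch (a hedged illustration of the rules, not the handler itself):

```python
def validate_gsi(gsi, attribute_definitions):
    # Mirrors the checks in parse_single_gsi: required fields, 1-2 key
    # elements, exactly one HASH key, every key attribute declared.
    defined = {ad["AttributeName"] for ad in attribute_definitions}
    if "IndexName" not in gsi or "KeySchema" not in gsi:
        return False
    schema = gsi["KeySchema"]
    if not 1 <= len(schema) <= 2:
        return False
    if sum(1 for ks in schema if ks["KeyType"] == "HASH") != 1:
        return False
    return all(ks["AttributeName"] in defined for ks in schema)

attrs = [{"AttributeName": "email", "AttributeType": "S"},
         {"AttributeName": "timestamp", "AttributeType": "N"}]
gsi = {"IndexName": "email-index",
       "KeySchema": [{"AttributeName": "email", "KeyType": "HASH"},
                     {"AttributeName": "timestamp", "KeyType": "RANGE"}]}
assert validate_gsi(gsi, attrs)
```

A request whose GSI key schema has no HASH element, or names an attribute missing from `AttributeDefinitions`, is rejected the same way the Odin parser returns `{}, false`.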
@(private = "file")
parse_single_gsi :: proc(
    obj:       json.Object,
    attr_defs: []dynamodb.Attribute_Definition,
) -> (dynamodb.Global_Secondary_Index, bool) {
    gsi: dynamodb.Global_Secondary_Index

    // IndexName (required)
    idx_val, idx_found := obj["IndexName"]
    if !idx_found {
        return {}, false
    }
    idx_str, idx_ok := idx_val.(json.String)
    if !idx_ok {
        return {}, false
    }
    gsi.index_name = strings.clone(string(idx_str))

    // KeySchema (required)
    ks_val, ks_found := obj["KeySchema"]
    if !ks_found {
        delete(gsi.index_name)
        return {}, false
    }
    ks_arr, ks_ok := ks_val.(json.Array)
    if !ks_ok || len(ks_arr) == 0 || len(ks_arr) > 2 {
        delete(gsi.index_name)
        return {}, false
    }

    key_schema := make([]dynamodb.Key_Schema_Element, len(ks_arr))
    hash_count := 0

    for ks_elem, j in ks_arr {
        ks_obj, kobj_ok := ks_elem.(json.Object)
        if !kobj_ok {
            for k in 0..<j { delete(key_schema[k].attribute_name) }
            delete(key_schema)
            delete(gsi.index_name)
            return {}, false
        }

        an_val, an_found := ks_obj["AttributeName"]
        if !an_found {
            for k in 0..<j { delete(key_schema[k].attribute_name) }
            delete(key_schema)
            delete(gsi.index_name)
            return {}, false
        }
        an_str, an_ok := an_val.(json.String)
        if !an_ok {
            for k in 0..<j { delete(key_schema[k].attribute_name) }
            delete(key_schema)
            delete(gsi.index_name)
            return {}, false
        }

        kt_val, kt_found := ks_obj["KeyType"]
        if !kt_found {
            for k in 0..<j { delete(key_schema[k].attribute_name) }
            delete(key_schema)
            delete(gsi.index_name)
            return {}, false
        }
        kt_str, kt_ok := kt_val.(json.String)
        if !kt_ok {
            for k in 0..<j { delete(key_schema[k].attribute_name) }
            delete(key_schema)
            delete(gsi.index_name)
            return {}, false
        }

        kt, kt_parse_ok := dynamodb.key_type_from_string(string(kt_str))
        if !kt_parse_ok {
            for k in 0..<j { delete(key_schema[k].attribute_name) }
            delete(key_schema)
            delete(gsi.index_name)
            return {}, false
        }

        if kt == .HASH {
            hash_count += 1
        }

        // Validate that the GSI key attribute is in AttributeDefinitions
        attr_defined := false
        for ad in attr_defs {
            if ad.attribute_name == string(an_str) {
                attr_defined = true
                break
            }
        }
        if !attr_defined {
            for k in 0..<j { delete(key_schema[k].attribute_name) }
            delete(key_schema)
            delete(gsi.index_name)
            return {}, false
        }

        key_schema[j] = dynamodb.Key_Schema_Element{
            attribute_name = strings.clone(string(an_str)),
            key_type       = kt,
        }
    }

    // Must have exactly one HASH key
    if hash_count != 1 {
        for ks in key_schema { delete(ks.attribute_name) }
        delete(key_schema)
        delete(gsi.index_name)
        return {}, false
    }

    gsi.key_schema = key_schema

    // Projection (optional — defaults to ALL)
    gsi.projection.projection_type = .ALL
    if proj_val, proj_found := obj["Projection"]; proj_found {
        if proj_obj, proj_ok := proj_val.(json.Object); proj_ok {
            if pt_val, pt_found := proj_obj["ProjectionType"]; pt_found {
                if pt_str, pt_ok := pt_val.(json.String); pt_ok {
                    switch string(pt_str) {
                    case "ALL":       gsi.projection.projection_type = .ALL
                    case "KEYS_ONLY": gsi.projection.projection_type = .KEYS_ONLY
                    case "INCLUDE":   gsi.projection.projection_type = .INCLUDE
                    }
                }
            }

            // NonKeyAttributes (only valid for INCLUDE projection)
            if nka_val, nka_found := proj_obj["NonKeyAttributes"]; nka_found {
                if nka_arr, nka_ok := nka_val.(json.Array); nka_ok && len(nka_arr) > 0 {
                    nka := make([]string, len(nka_arr))
                    for attr_val, k in nka_arr {
                        if attr_str, attr_ok := attr_val.(json.String); attr_ok {
                            nka[k] = strings.clone(string(attr_str))
                        }
                    }
                    gsi.projection.non_key_attributes = nka
                }
            }
        }
    }

    return gsi, true
}

@(private = "file")
cleanup_parsed_gsis :: proc(gsis: []dynamodb.Global_Secondary_Index) {
    for gsi in gsis {
        delete(gsi.index_name)
        for ks in gsi.key_schema {
            delete(ks.attribute_name)
        }
        delete(gsi.key_schema)
        if nka, has_nka := gsi.projection.non_key_attributes.?; has_nka {
            for attr in nka { delete(attr) }
            delete(nka)
        }
    }
}

// ============================================================================
// Parse IndexName from Query/Scan request
// ============================================================================

parse_index_name :: proc(request_body: []byte) -> Maybe(string) {
    data, parse_err := json.parse(request_body, allocator = context.temp_allocator)
    if parse_err != nil {
        return nil
    }
    defer json.destroy_value(data)

    root, root_ok := data.(json.Object)
    if !root_ok {
        return nil
    }

    idx_val, found := root["IndexName"]
    if !found {
        return nil
    }

    idx_str, ok := idx_val.(json.String)
    if !ok {
        return nil
    }

    return string(idx_str)
}

// ============================================================================
// Projection type to string for DescribeTable response
// ============================================================================

projection_type_to_string :: proc(pt: dynamodb.Projection_Type) -> string {
    switch pt {
    case .ALL:       return "ALL"
    case .KEYS_ONLY: return "KEYS_ONLY"
    case .INCLUDE:   return "INCLUDE"
    }
    return "ALL"
}
187	main.odin
@@ -106,6 +106,10 @@ handle_dynamodb_request :: proc(ctx: rawptr, request: ^HTTP_Request, request_all
         handle_batch_write_item(engine, request, &response)
     case .BatchGetItem:
         handle_batch_get_item(engine, request, &response)
+    case .TransactWriteItems:
+        handle_transact_write_items(engine, request, &response)
+    case .TransactGetItems:
+        handle_transact_get_items(engine, request, &response)
     case .Unknown:
         return make_error_response(&response, .ValidationException, "Unknown operation")
     case:
@@ -169,8 +173,25 @@ handle_create_table :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_Req
         return
     }
 
+    // Parse GlobalSecondaryIndexes (optional)
+    gsis := parse_global_secondary_indexes(root, attr_defs)
+    defer {
+        if gsi_list, has := gsis.?; has {
+            for &g in gsi_list {
+                delete(g.index_name)
+                for &ks in g.key_schema { delete(ks.attribute_name) }
+                delete(g.key_schema)
+                if nka, has_nka := g.projection.non_key_attributes.?; has_nka {
+                    for a in nka { delete(a) }
+                    delete(nka)
+                }
+            }
+            delete(gsi_list)
+        }
+    }
+
     // Create the table
-    desc, create_err := dynamodb.create_table(engine, string(table_name), key_schema, attr_defs)
+    desc, create_err := dynamodb.create_table(engine, string(table_name), key_schema, attr_defs, gsis)
     if create_err != .None {
         #partial switch create_err {
         case .Table_Already_Exists:
@@ -257,7 +278,30 @@ handle_describe_table :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_R
             ad.attribute_name, dynamodb.scalar_type_to_string(ad.attribute_type))
     }
 
-    strings.write_string(&builder, `]}}`)
+    strings.write_string(&builder, `]`)
+
+    // Include GSI Info — INSIDE the Table object, before the closing braces
+    if gsis, has_gsis := metadata.global_secondary_indexes.?; has_gsis && len(gsis) > 0 {
+        strings.write_string(&builder, `,"GlobalSecondaryIndexes":[`)
+        for gsi, gi in gsis {
+            if gi > 0 do strings.write_string(&builder, ",")
+            strings.write_string(&builder, `{"IndexName":"`)
+            strings.write_string(&builder, gsi.index_name)
+            strings.write_string(&builder, `","KeySchema":[`)
+            for ks, ki in gsi.key_schema {
+                if ki > 0 do strings.write_string(&builder, ",")
+                fmt.sbprintf(&builder, `{"AttributeName":"%s","KeyType":"%s"}`,
+                    ks.attribute_name, dynamodb.key_type_to_string(ks.key_type))
+            }
+            strings.write_string(&builder, `],"Projection":{"ProjectionType":"`)
+            strings.write_string(&builder, projection_type_to_string(gsi.projection.projection_type))
+            strings.write_string(&builder, `"},"IndexStatus":"ACTIVE"}`)
+        }
+        strings.write_string(&builder, "]")
+    }
+
+    // Close Table object and root object
+    strings.write_string(&builder, `}}`)
+
     resp_body := strings.to_string(builder)
     response_set_body(response, transmute([]byte)resp_body)
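The DescribeTable change above fixes a nesting bug: the handler previously closed the `Table` object before it could emit `GlobalSecondaryIndexes`, so the GSI array must now be written inside `Table`, before the final `}}`. The intended response shape, sketched with Python's `json` module (field values are illustrative):

```python
import json

table = {
    "TableName": "users",
    "AttributeDefinitions": [{"AttributeName": "email", "AttributeType": "S"}],
    # GSIs live INSIDE the Table object, as the fixed handler now emits them.
    "GlobalSecondaryIndexes": [{
        "IndexName": "email-index",
        "KeySchema": [{"AttributeName": "email", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
        "IndexStatus": "ACTIVE",
    }],
}
body = json.dumps({"Table": table})
# Round-trips: the GSI array is reachable under "Table", not at the root.
parsed = json.loads(body)
```

With the old `]}}` write, the GSI JSON would have landed after the root object closed, producing an invalid body that SDK clients reject.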
@@ -657,7 +701,9 @@ handle_update_item :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_Requ
 
     case "UPDATED_NEW":
         if new_val, has := new_item.?; has {
-            item_json := dynamodb.serialize_item(new_val)
+            filtered := filter_updated_attributes(new_val, &plan)
+            defer dynamodb.item_destroy(&filtered)
+            item_json := dynamodb.serialize_item(filtered)
             resp := fmt.aprintf(`{"Attributes":%s}`, item_json)
             response_set_body(response, transmute([]byte)resp)
         } else {
@@ -666,7 +712,9 @@ handle_update_item :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_Requ
 
     case "UPDATED_OLD":
         if old, has := old_item.?; has {
-            item_json := dynamodb.serialize_item(old)
+            filtered := filter_updated_attributes(old, &plan)
+            defer dynamodb.item_destroy(&filtered)
+            item_json := dynamodb.serialize_item(filtered)
             resp := fmt.aprintf(`{"Attributes":%s}`, item_json)
             response_set_body(response, transmute([]byte)resp)
         } else {
@@ -1046,6 +1094,9 @@ handle_query :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_Request, r
         return
     }
 
+    // Grab index name from request body
+    index_name := parse_index_name(request.body)
+
     // Fetch table metadata early for ExclusiveStartKey parsing
     metadata, meta_err := dynamodb.get_table_metadata(engine, table_name)
     if meta_err != .None {
@@ -1073,6 +1124,8 @@ handle_query :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_Request, r
     copy(pk_owned, pk_bytes)
     defer delete(pk_owned)
 
+    // ---- Parse shared parameters BEFORE the GSI/table branch ----
+
     // Parse Limit
     limit := dynamodb.parse_limit(request.body)
     if limit == 0 {
@@ -1099,13 +1152,6 @@ handle_query :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_Request, r
         sk_condition = skc
     }
 
-    result, err := dynamodb.query(engine, table_name, pk_owned, exclusive_start_key, limit, sk_condition)
-    if err != .None {
-        handle_storage_error(response, err)
-        return
-    }
-    defer dynamodb.query_result_destroy(&result)
-
     // ---- Parse ExpressionAttributeNames/Values for filter/projection ----
     attr_names := dynamodb.parse_expression_attribute_names(request.body)
     defer {
@@ -1129,6 +1175,62 @@ handle_query :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_Request, r
         delete(attr_values)
     }
 
+    // ---- GSI query path ----
+    if idx_name, has_idx := index_name.?; has_idx {
+        _, gsi_found := dynamodb.find_gsi(&metadata, idx_name)
+        if !gsi_found {
+            make_error_response(response, .ValidationException,
+                fmt.tprintf("The table does not have the specified index: %s", idx_name))
+            return
+        }
+
+        result, err := dynamodb.gsi_query(engine, table_name, idx_name,
+            pk_owned, exclusive_start_key, limit, sk_condition)
+        if err != .None {
+            handle_storage_error(response, err)
+            return
+        }
+        defer dynamodb.query_result_destroy(&result)
+
+        // Apply FilterExpression
+        filtered_items := apply_filter_to_items(request.body, result.items, attr_names, attr_values)
+        scanned_count := len(result.items)
+
+        // Apply ProjectionExpression
+        projection, has_proj := dynamodb.parse_projection_expression(request.body, attr_names)
+        final_items: []dynamodb.Item
+
+        if has_proj && len(projection) > 0 {
+            projected := make([]dynamodb.Item, len(filtered_items))
+            for item, i in filtered_items {
+                projected[i] = dynamodb.apply_projection(item, projection)
+            }
+            final_items = projected
+        } else {
+            final_items = filtered_items
+        }
+
+        write_items_response_with_pagination_ex(
+            response, final_items, result.last_evaluated_key, &metadata, scanned_count,
+        )
+
+        if has_proj && len(projection) > 0 {
+            for &item in final_items {
+                dynamodb.item_destroy(&item)
+            }
+            delete(final_items)
+        }
+        return
+    }
+
+    // ---- Main table query path ----
+    result, err := dynamodb.query(engine, table_name, pk_owned, exclusive_start_key, limit, sk_condition)
+    if err != .None {
+        handle_storage_error(response, err)
+        return
+    }
+    defer dynamodb.query_result_destroy(&result)
+
     // ---- Apply FilterExpression (post-query filter) ----
     filtered_items := apply_filter_to_items(request.body, result.items, attr_names, attr_values)
     scanned_count := len(result.items)
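Both the GSI and main-table query paths apply FilterExpression first and ProjectionExpression second, with ScannedCount reflecting the pre-filter total. A small Python sketch of that ordering (the predicate and path set are illustrative stand-ins):

```python
def query_postprocess(items, predicate, projection_paths):
    # Filter first; ScannedCount is the pre-filter total, matching DynamoDB.
    filtered = [it for it in items if predicate(it)]
    scanned_count = len(items)
    # Then project each surviving item down to the requested paths.
    if projection_paths:
        filtered = [{k: v for k, v in it.items() if k in projection_paths}
                    for it in filtered]
    return filtered, scanned_count

items = [{"pk": "a", "n": 1, "x": "keep"}, {"pk": "b", "n": 5, "x": "drop"}]
final, scanned = query_postprocess(items, lambda it: it["n"] < 3, {"pk"})
```

Note that `scanned` counts both items even though only one passes the filter, which is why the handlers capture `scanned_count := len(result.items)` before filtering.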
@@ -1169,6 +1271,9 @@ handle_scan :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_Request, re
         return
     }
 
+    // Grab index name from request body
+    index_name := parse_index_name(request.body)
+
     metadata, meta_err := dynamodb.get_table_metadata(engine, table_name)
     if meta_err != .None {
         handle_storage_error(response, meta_err)
@@ -1194,13 +1299,6 @@ handle_scan :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_Request, re
         }
     }
 
-    result, err := dynamodb.scan(engine, table_name, exclusive_start_key, limit)
-    if err != .None {
-        handle_storage_error(response, err)
-        return
-    }
-    defer dynamodb.scan_result_destroy(&result)
-
     // ---- Parse ExpressionAttributeNames/Values for filter/projection ----
     attr_names := dynamodb.parse_expression_attribute_names(request.body)
     defer {
@@ -1224,6 +1322,59 @@ handle_scan :: proc(engine: ^dynamodb.Storage_Engine, request: ^HTTP_Request, re
         delete(attr_values)
     }
 
+    // ---- GSI scan path ----
+    if idx_name, has_idx := index_name.?; has_idx {
+        _, gsi_found := dynamodb.find_gsi(&metadata, idx_name)
+        if !gsi_found {
+            make_error_response(response, .ValidationException,
+                fmt.tprintf("The table does not have the specified index: %s", idx_name))
+            return
+        }
+
+        result, err := dynamodb.gsi_scan(engine, table_name, idx_name, exclusive_start_key, limit)
+        if err != .None {
+            handle_storage_error(response, err)
+            return
+        }
+        defer dynamodb.scan_result_destroy(&result)
+
+        filtered_items := apply_filter_to_items(request.body, result.items, attr_names, attr_values)
+        scanned_count := len(result.items)
+
+        projection, has_proj := dynamodb.parse_projection_expression(request.body, attr_names)
+        final_items: []dynamodb.Item
+
+        if has_proj && len(projection) > 0 {
+            projected := make([]dynamodb.Item, len(filtered_items))
+            for item, i in filtered_items {
+                projected[i] = dynamodb.apply_projection(item, projection)
+            }
+            final_items = projected
+        } else {
+            final_items = filtered_items
+        }
+
+        write_items_response_with_pagination_ex(
+            response, final_items, result.last_evaluated_key, &metadata, scanned_count,
+        )
+
+        if has_proj && len(projection) > 0 {
+            for &item in final_items {
+                dynamodb.item_destroy(&item)
+            }
+            delete(final_items)
+        }
+        return
+    }
+
+    // ---- Main table scan path ----
+    result, err := dynamodb.scan(engine, table_name, exclusive_start_key, limit)
+    if err != .None {
+        handle_storage_error(response, err)
+        return
+    }
+    defer dynamodb.scan_result_destroy(&result)
+
     // ---- Apply FilterExpression ----
     filtered_items := apply_filter_to_items(request.body, result.items, attr_names, attr_values)
     scanned_count := len(result.items)
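The projection step in the GSI path can be sketched in Python (hypothetical helper; top-level attribute names only — the real `parse_projection_expression` also resolves `#aliases`, and `apply_projection` may handle nested document paths):

```python
def apply_projection(item, paths):
    # Keep only the attributes named in the projection, like
    # dynamodb.apply_projection in the handler above
    return {k: v for k, v in item.items() if k in paths}

items = [{"pk": "u1", "name": "Ada", "age": 36}]
final_items = [apply_projection(it, {"pk", "name"}) for it in items]
```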
transact_handlers.odin (new file, 595 lines)
@@ -0,0 +1,595 @@
// transact_handlers.odin — HTTP handlers for TransactWriteItems and TransactGetItems
//
// Also contains the UPDATED_NEW / UPDATED_OLD filtering helper for UpdateItem.
package main

import "core:encoding/json"
import "core:fmt"
import "core:strings"
import "dynamodb"

// ============================================================================
// TransactWriteItems Handler
//
// Request format:
// {
//   "TransactItems": [
//     {
//       "Put": {
//         "TableName": "...",
//         "Item": { ... },
//         "ConditionExpression": "...",          // optional
//         "ExpressionAttributeNames": { ... },   // optional
//         "ExpressionAttributeValues": { ... }   // optional
//       }
//     },
//     {
//       "Delete": {
//         "TableName": "...",
//         "Key": { ... },
//         "ConditionExpression": "...",          // optional
//         ...
//       }
//     },
//     {
//       "Update": {
//         "TableName": "...",
//         "Key": { ... },
//         "UpdateExpression": "...",
//         "ConditionExpression": "...",          // optional
//         "ExpressionAttributeNames": { ... },   // optional
//         "ExpressionAttributeValues": { ... }   // optional
//       }
//     },
//     {
//       "ConditionCheck": {
//         "TableName": "...",
//         "Key": { ... },
//         "ConditionExpression": "...",
//         "ExpressionAttributeNames": { ... },   // optional
//         "ExpressionAttributeValues": { ... }   // optional
//       }
//     }
//   ]
// }
// ============================================================================
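A concrete request body matching the documented shape (illustrative table and attribute names only — one conditional Put plus one ConditionCheck) looks like this:

```python
import json

payload = {
    "TransactItems": [
        {"Put": {
            "TableName": "users",
            "Item": {"pk": {"S": "u1"}, "name": {"S": "Ada"}},
            "ConditionExpression": "attribute_not_exists(pk)",
        }},
        {"ConditionCheck": {
            "TableName": "accounts",
            "Key": {"pk": {"S": "acct1"}},
            "ConditionExpression": "balance >= :min",
            "ExpressionAttributeValues": {":min": {"N": "0"}},
        }},
    ],
}

body = json.dumps(payload)
# The handler rejects empty lists and anything over 100 actions
assert 1 <= len(payload["TransactItems"]) <= 100
```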
handle_transact_write_items :: proc(
    engine: ^dynamodb.Storage_Engine,
    request: ^HTTP_Request,
    response: ^HTTP_Response,
) {
    data, parse_err := json.parse(request.body, allocator = context.allocator)
    if parse_err != nil {
        make_error_response(response, .SerializationException, "Invalid JSON")
        return
    }
    defer json.destroy_value(data)

    root, root_ok := data.(json.Object)
    if !root_ok {
        make_error_response(response, .SerializationException, "Request must be an object")
        return
    }

    transact_items_val, found := root["TransactItems"]
    if !found {
        make_error_response(response, .ValidationException, "Missing TransactItems")
        return
    }

    transact_items, ti_ok := transact_items_val.(json.Array)
    if !ti_ok {
        make_error_response(response, .ValidationException, "TransactItems must be an array")
        return
    }

    if len(transact_items) == 0 {
        make_error_response(response, .ValidationException,
            "TransactItems must contain at least one item")
        return
    }

    if len(transact_items) > 100 {
        make_error_response(response, .ValidationException,
            "Member must have length less than or equal to 100")
        return
    }

    // Parse each action
    actions := make([dynamic]dynamodb.Transact_Write_Action)
    defer {
        for &action in actions {
            dynamodb.transact_write_action_destroy(&action)
        }
        delete(actions)
    }

    for elem in transact_items {
        elem_obj, elem_ok := elem.(json.Object)
        if !elem_ok {
            make_error_response(response, .ValidationException,
                "Each TransactItem must be an object")
            return
        }

        action, action_ok := parse_transact_write_action(elem_obj)
        if !action_ok {
            make_error_response(response, .ValidationException,
                "Invalid TransactItem action")
            return
        }
        append(&actions, action)
    }

    // Execute transaction
    result, tx_err := dynamodb.transact_write_items(engine, actions[:])
    defer dynamodb.transact_write_result_destroy(&result)

    switch tx_err {
    case .None:
        response_set_body(response, transmute([]byte)string("{}"))

    case .Cancelled:
        // Build TransactionCanceledException response
        builder := strings.builder_make()
        strings.write_string(&builder, `{"__type":"com.amazonaws.dynamodb.v20120810#TransactionCanceledException","message":"Transaction cancelled, please refer cancellation reasons for specific reasons [`)

        for reason, i in result.cancellation_reasons {
            if i > 0 {
                strings.write_string(&builder, ", ")
            }
            strings.write_string(&builder, reason.code)
        }

        strings.write_string(&builder, `]","CancellationReasons":[`)

        for reason, i in result.cancellation_reasons {
            if i > 0 {
                strings.write_string(&builder, ",")
            }
            fmt.sbprintf(&builder, `{{"Code":"%s","Message":"%s"}}`, reason.code, reason.message)
        }

        strings.write_string(&builder, "]}")

        response_set_status(response, .Bad_Request)
        resp_body := strings.to_string(builder)
        response_set_body(response, transmute([]byte)resp_body)

    case .Validation_Error:
        make_error_response(response, .ValidationException,
            "Transaction validation failed")

    case .Internal_Error:
        make_error_response(response, .InternalServerError,
            "Internal error during transaction")
    }
}
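The shape of the TransactionCanceledException body that the `.Cancelled` branch assembles can be sketched in Python (hypothetical `cancellation_body` helper mirroring the string-builder loops; reason codes are examples), confirming the result is valid JSON:

```python
import json

def cancellation_body(reasons):
    # reasons: list of (code, message) pairs, one per TransactItem
    codes = ", ".join(code for code, _ in reasons)
    entries = ",".join(
        '{"Code":"%s","Message":"%s"}' % (code, msg) for code, msg in reasons
    )
    return (
        '{"__type":"com.amazonaws.dynamodb.v20120810#TransactionCanceledException",'
        '"message":"Transaction cancelled, please refer cancellation reasons '
        'for specific reasons [%s]","CancellationReasons":[%s]}' % (codes, entries)
    )

body = cancellation_body([
    ("ConditionalCheckFailed", "The conditional request failed"),
    ("None", ""),
])
parsed = json.loads(body)  # must round-trip as JSON
```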
// Parse a single TransactItem action from JSON
@(private = "file")
parse_transact_write_action :: proc(obj: json.Object) -> (dynamodb.Transact_Write_Action, bool) {
    action: dynamodb.Transact_Write_Action
    action.expr_attr_values = make(map[string]dynamodb.Attribute_Value)

    // Try Put
    if put_val, has_put := obj["Put"]; has_put {
        put_obj, put_ok := put_val.(json.Object)
        if !put_ok {
            return {}, false
        }
        action.type = .Put
        return parse_transact_put_action(put_obj, &action)
    }

    // Try Delete
    if del_val, has_del := obj["Delete"]; has_del {
        del_obj, del_ok := del_val.(json.Object)
        if !del_ok {
            return {}, false
        }
        action.type = .Delete
        return parse_transact_key_action(del_obj, &action)
    }

    // Try Update
    if upd_val, has_upd := obj["Update"]; has_upd {
        upd_obj, upd_ok := upd_val.(json.Object)
        if !upd_ok {
            return {}, false
        }
        action.type = .Update
        return parse_transact_update_action(upd_obj, &action)
    }

    // Try ConditionCheck
    if cc_val, has_cc := obj["ConditionCheck"]; has_cc {
        cc_obj, cc_ok := cc_val.(json.Object)
        if !cc_ok {
            return {}, false
        }
        action.type = .Condition_Check
        return parse_transact_key_action(cc_obj, &action)
    }

    return {}, false
}
// Parse common expression fields from a transact action object
@(private = "file")
parse_transact_expression_fields :: proc(obj: json.Object, action: ^dynamodb.Transact_Write_Action) {
    // ConditionExpression
    if ce_val, found := obj["ConditionExpression"]; found {
        if ce_str, str_ok := ce_val.(json.String); str_ok {
            action.condition_expr = strings.clone(string(ce_str))
        }
    }

    // ExpressionAttributeNames
    if ean_val, found := obj["ExpressionAttributeNames"]; found {
        if ean_obj, ean_ok := ean_val.(json.Object); ean_ok {
            names := make(map[string]string)
            for key, val in ean_obj {
                if str, str_ok := val.(json.String); str_ok {
                    names[strings.clone(key)] = strings.clone(string(str))
                }
            }
            action.expr_attr_names = names
        }
    }

    // ExpressionAttributeValues
    if eav_val, found := obj["ExpressionAttributeValues"]; found {
        if eav_obj, eav_ok := eav_val.(json.Object); eav_ok {
            for key, val in eav_obj {
                attr, attr_ok := dynamodb.parse_attribute_value(val)
                if attr_ok {
                    action.expr_attr_values[strings.clone(key)] = attr
                }
            }
        }
    }
}
// Parse a Put transact action
@(private = "file")
parse_transact_put_action :: proc(
    obj: json.Object,
    action: ^dynamodb.Transact_Write_Action,
) -> (dynamodb.Transact_Write_Action, bool) {
    // TableName
    tn_val, tn_found := obj["TableName"]
    if !tn_found {
        return {}, false
    }
    tn_str, tn_ok := tn_val.(json.String)
    if !tn_ok {
        return {}, false
    }
    action.table_name = string(tn_str)

    // Item
    item_val, item_found := obj["Item"]
    if !item_found {
        return {}, false
    }
    item, item_ok := dynamodb.parse_item_from_value(item_val)
    if !item_ok {
        return {}, false
    }
    action.item = item

    // Expression fields
    parse_transact_expression_fields(obj, action)

    return action^, true
}
// Parse a Delete or ConditionCheck transact action (both use Key)
@(private = "file")
parse_transact_key_action :: proc(
    obj: json.Object,
    action: ^dynamodb.Transact_Write_Action,
) -> (dynamodb.Transact_Write_Action, bool) {
    // TableName
    tn_val, tn_found := obj["TableName"]
    if !tn_found {
        return {}, false
    }
    tn_str, tn_ok := tn_val.(json.String)
    if !tn_ok {
        return {}, false
    }
    action.table_name = string(tn_str)

    // Key
    key_val, key_found := obj["Key"]
    if !key_found {
        return {}, false
    }
    key, key_ok := dynamodb.parse_item_from_value(key_val)
    if !key_ok {
        return {}, false
    }
    action.key = key

    // Expression fields
    parse_transact_expression_fields(obj, action)

    return action^, true
}
// Parse an Update transact action
@(private = "file")
parse_transact_update_action :: proc(
    obj: json.Object,
    action: ^dynamodb.Transact_Write_Action,
) -> (dynamodb.Transact_Write_Action, bool) {
    // TableName
    tn_val, tn_found := obj["TableName"]
    if !tn_found {
        return {}, false
    }
    tn_str, tn_ok := tn_val.(json.String)
    if !tn_ok {
        return {}, false
    }
    action.table_name = string(tn_str)

    // Key
    key_val, key_found := obj["Key"]
    if !key_found {
        return {}, false
    }
    key, key_ok := dynamodb.parse_item_from_value(key_val)
    if !key_ok {
        return {}, false
    }
    action.key = key

    // Expression fields (must be parsed before UpdateExpression so attr values are available)
    parse_transact_expression_fields(obj, action)

    // UpdateExpression
    ue_val, ue_found := obj["UpdateExpression"]
    if !ue_found {
        return {}, false
    }
    ue_str, ue_ok := ue_val.(json.String)
    if !ue_ok {
        return {}, false
    }

    plan, plan_ok := dynamodb.parse_update_expression(
        string(ue_str), action.expr_attr_names, action.expr_attr_values,
    )
    if !plan_ok {
        return {}, false
    }
    action.update_plan = plan

    return action^, true
}
// ============================================================================
// TransactGetItems Handler
//
// Request format:
// {
//   "TransactItems": [
//     {
//       "Get": {
//         "TableName": "...",
//         "Key": { ... },
//         "ProjectionExpression": "...",        // optional
//         "ExpressionAttributeNames": { ... }   // optional
//       }
//     }
//   ]
// }
// ============================================================================
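On the response side, the handler below emits one entry per Get, in request order, with a missing item serialized as an empty object. A minimal Python sketch of that assembly (hypothetical `build_responses` helper mirroring the builder loop):

```python
def build_responses(maybe_items):
    # One entry per requested Get, preserving order; None -> {}
    return {
        "Responses": [{"Item": it} if it is not None else {} for it in maybe_items]
    }

resp = build_responses([{"pk": {"S": "u1"}}, None])
```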
handle_transact_get_items :: proc(
    engine: ^dynamodb.Storage_Engine,
    request: ^HTTP_Request,
    response: ^HTTP_Response,
) {
    data, parse_err := json.parse(request.body, allocator = context.allocator)
    if parse_err != nil {
        make_error_response(response, .SerializationException, "Invalid JSON")
        return
    }
    defer json.destroy_value(data)

    root, root_ok := data.(json.Object)
    if !root_ok {
        make_error_response(response, .SerializationException, "Request must be an object")
        return
    }

    transact_items_val, found := root["TransactItems"]
    if !found {
        make_error_response(response, .ValidationException, "Missing TransactItems")
        return
    }

    transact_items, ti_ok := transact_items_val.(json.Array)
    if !ti_ok {
        make_error_response(response, .ValidationException, "TransactItems must be an array")
        return
    }

    if len(transact_items) == 0 {
        make_error_response(response, .ValidationException,
            "TransactItems must contain at least one item")
        return
    }

    if len(transact_items) > 100 {
        make_error_response(response, .ValidationException,
            "Member must have length less than or equal to 100")
        return
    }

    // Parse each get action
    actions := make([dynamic]dynamodb.Transact_Get_Action)
    defer {
        for &action in actions {
            dynamodb.transact_get_action_destroy(&action)
        }
        delete(actions)
    }

    for elem in transact_items {
        elem_obj, elem_ok := elem.(json.Object)
        if !elem_ok {
            make_error_response(response, .ValidationException,
                "Each TransactItem must be an object")
            return
        }

        get_val, has_get := elem_obj["Get"]
        if !has_get {
            make_error_response(response, .ValidationException,
                "TransactGetItems only supports Get actions")
            return
        }

        get_obj, get_ok := get_val.(json.Object)
        if !get_ok {
            make_error_response(response, .ValidationException,
                "Get action must be an object")
            return
        }

        action, action_ok := parse_transact_get_action(get_obj)
        if !action_ok {
            make_error_response(response, .ValidationException,
                "Invalid Get action")
            return
        }
        append(&actions, action)
    }

    // Execute transaction get
    result, tx_err := dynamodb.transact_get_items(engine, actions[:])
    defer dynamodb.transact_get_result_destroy(&result)

    if tx_err != .None {
        make_error_response(response, .InternalServerError,
            "Transaction get failed")
        return
    }

    // Build response
    builder := strings.builder_make()
    strings.write_string(&builder, `{"Responses":[`)

    for maybe_item, i in result.items {
        if i > 0 {
            strings.write_string(&builder, ",")
        }

        if item, has_item := maybe_item.?; has_item {
            item_json := dynamodb.serialize_item(item)
            fmt.sbprintf(&builder, `{{"Item":%s}}`, item_json)
        } else {
            strings.write_string(&builder, "{}")
        }
    }

    strings.write_string(&builder, "]}")

    resp_body := strings.to_string(builder)
    response_set_body(response, transmute([]byte)resp_body)
}
// Parse a single TransactGetItems Get action
@(private = "file")
parse_transact_get_action :: proc(obj: json.Object) -> (dynamodb.Transact_Get_Action, bool) {
    action: dynamodb.Transact_Get_Action

    // TableName
    tn_val, tn_found := obj["TableName"]
    if !tn_found {
        return {}, false
    }
    tn_str, tn_ok := tn_val.(json.String)
    if !tn_ok {
        return {}, false
    }
    action.table_name = string(tn_str)

    // Key
    key_val, key_found := obj["Key"]
    if !key_found {
        return {}, false
    }
    key, key_ok := dynamodb.parse_item_from_value(key_val)
    if !key_ok {
        return {}, false
    }
    action.key = key

    // ProjectionExpression (optional)
    if pe_val, pe_found := obj["ProjectionExpression"]; pe_found {
        if pe_str, pe_ok := pe_val.(json.String); pe_ok {
            // Parse ExpressionAttributeNames for projection
            attr_names: Maybe(map[string]string) = nil
            if ean_val, ean_found := obj["ExpressionAttributeNames"]; ean_found {
                if ean_obj, ean_ok := ean_val.(json.Object); ean_ok {
                    names := make(map[string]string, allocator = context.temp_allocator)
                    for key_str, val in ean_obj {
                        if str, str_ok := val.(json.String); str_ok {
                            names[key_str] = string(str)
                        }
                    }
                    attr_names = names
                }
            }

            parts := strings.split(string(pe_str), ",")
            paths := make([dynamic]string)
            for part in parts {
                trimmed := strings.trim_space(part)
                if len(trimmed) == 0 {
                    continue
                }
                resolved, res_ok := dynamodb.resolve_attribute_name(trimmed, attr_names)
                if !res_ok {
                    delete(paths)
                    dynamodb.item_destroy(&action.key)
                    return {}, false
                }
                append(&paths, strings.clone(resolved))
            }
            action.projection = paths[:]
        }
    }

    return action, true
}
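The projection parsing above (comma split, trim, alias resolution) can be sketched in Python. This is a hypothetical `parse_projection` helper; it assumes `resolve_attribute_name` maps `#alias` tokens through ExpressionAttributeNames and fails on unknown aliases, which is the standard DynamoDB behavior:

```python
def parse_projection(expr, names):
    paths = []
    for part in expr.split(","):
        part = part.strip()
        if not part:
            continue
        if part.startswith("#"):
            # Aliases must resolve via ExpressionAttributeNames
            if part not in names:
                raise ValueError("unresolved alias: " + part)
            part = names[part]
        paths.append(part)
    return paths

paths = parse_projection("#n, age", {"#n": "name"})
```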
// ============================================================================
// UPDATED_NEW / UPDATED_OLD Filtering Helper
//
// DynamoDB ReturnValues semantics:
//   ALL_NEW     → all attributes of the item after the update
//   ALL_OLD     → all attributes of the item before the update
//   UPDATED_NEW → only the attributes that were modified, with new values
//   UPDATED_OLD → only the attributes that were modified, with old values
//
// This filters an item to only include the attributes touched by the
// UpdateExpression (the "modified paths").
// ============================================================================

filter_updated_attributes :: proc(
    item: dynamodb.Item,
    plan: ^dynamodb.Update_Plan,
) -> dynamodb.Item {
    modified_paths := dynamodb.get_update_plan_modified_paths(plan)
    defer delete(modified_paths)

    return dynamodb.filter_item_to_paths(item, modified_paths)
}
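A minimal Python sketch of the same filtering semantics (hypothetical `filter_updated` helper; top-level paths only): given the post-update item and the set of paths the UpdateExpression touched, UPDATED_NEW keeps just the touched attributes.

```python
def filter_updated(item, modified_paths):
    # UPDATED_NEW / UPDATED_OLD: keep only attributes touched by the update
    return {k: v for k, v in item.items() if k in modified_paths}

new_item = {"pk": "u1", "name": "Ada", "visits": 4}
updated_new = filter_updated(new_item, {"visits"})  # "SET visits = visits + 1"
```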