Compare commits

...

2 Commits

SHA1 Message Date
d318534782 fix concatenator 2026-02-15 11:45:09 -05:00
ad599a0af7 fix docs and todo 2026-02-15 11:42:43 -05:00
7 changed files with 248 additions and 366 deletions

.gitignore vendored
View File

@@ -1,2 +1,3 @@
 ./build
 ./data
+./project_context.txt

View File

@@ -41,7 +41,7 @@ COMMON_FLAGS := -vet -strict-style
 EXTRA_LINKER_FLAGS := $(LIB_PATH) $(SHIM_LIB) $(ROCKSDB_LIBS)
 # Runtime configuration
-PORT ?= 8000
+PORT ?= 8002
 HOST ?= 0.0.0.0
 DATA_DIR ?= ./data
 VERBOSE ?= 0
@@ -191,7 +191,7 @@ help:
 @echo " make clean - Remove build artifacts"
 @echo ""
 @echo "$(GREEN)Run Commands:$(NC)"
-@echo " make run - Build and run server (default: localhost:8000)"
+@echo " make run - Build and run server (default: localhost:8002)"
 @echo " make run PORT=9000 - Run on custom port"
 @echo " make dev - Clean, build, and run"
 @echo " make quick - Fast rebuild and run"

View File

@@ -101,7 +101,7 @@ export PATH=$PATH:/path/to/odin
 ### Basic Usage
 ```bash
-# Run with defaults (localhost:8000, ./data directory)
+# Run with defaults (localhost:8002, ./data directory)
 make run
 ```
@@ -118,10 +118,10 @@ You should see:
 ║                                               ║
 ╚═══════════════════════════════════════════════╝
-Port: 8000 | Data Dir: ./data
+Port: 8002 | Data Dir: ./data
 Storage engine initialized at ./data
-Starting DynamoDB-compatible server on 0.0.0.0:8000
+Starting DynamoDB-compatible server on 0.0.0.0:8002
 Ready to accept connections!
 ```
@@ -186,7 +186,7 @@ aws configure
 **Create a Table:**
 ```bash
 aws dynamodb create-table \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key-schema \
     AttributeName=id,KeyType=HASH \
@@ -197,13 +197,13 @@ aws dynamodb create-table \
 **List Tables:**
 ```bash
-aws dynamodb list-tables --endpoint-url http://localhost:8000
+aws dynamodb list-tables --endpoint-url http://localhost:8002
 ```
 **Put an Item:**
 ```bash
 aws dynamodb put-item \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --item '{
     "id": {"S": "user123"},
@@ -216,7 +216,7 @@ aws dynamodb put-item \
 **Get an Item:**
 ```bash
 aws dynamodb get-item \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key '{"id": {"S": "user123"}}'
 ```
@@ -224,7 +224,7 @@ aws dynamodb get-item \
 **Query Items:**
 ```bash
 aws dynamodb query \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key-condition-expression "id = :id" \
   --expression-attribute-values '{
@@ -235,14 +235,14 @@ aws dynamodb query \
 **Scan Table:**
 ```bash
 aws dynamodb scan \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users
 ```
 **Delete an Item:**
 ```bash
 aws dynamodb delete-item \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key '{"id": {"S": "user123"}}'
 ```
@@ -250,7 +250,7 @@ aws dynamodb delete-item \
 **Delete a Table:**
 ```bash
 aws dynamodb delete-table \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users
 ```
@@ -262,7 +262,7 @@ aws dynamodb delete-table \
 const { DynamoDBClient, PutItemCommand, GetItemCommand } = require("@aws-sdk/client-dynamodb");
 const client = new DynamoDBClient({
-  endpoint: "http://localhost:8000",
+  endpoint: "http://localhost:8002",
   region: "us-east-1",
   credentials: {
     accessKeyId: "dummy",
@@ -279,13 +279,13 @@ async function test() {
       name: { S: "Alice" }
     }
   }));
   // Get the item
   const result = await client.send(new GetItemCommand({
     TableName: "Users",
     Key: { id: { S: "user123" } }
   }));
   console.log(result.Item);
 }
@@ -299,7 +299,7 @@ import boto3
 dynamodb = boto3.client(
     'dynamodb',
-    endpoint_url='http://localhost:8000',
+    endpoint_url='http://localhost:8002',
     region_name='us-east-1',
     aws_access_key_id='dummy',
     aws_secret_access_key='dummy'
@@ -364,8 +364,8 @@ make fmt
 ### Port Already in Use
 ```bash
-# Check what's using port 8000
-lsof -i :8000
+# Check what's using port 8002
+lsof -i :8002
 # Use a different port
 make run PORT=9000
@@ -426,7 +426,7 @@ make profile
 # Load test
 ab -n 10000 -c 100 -p item.json -T application/json \
-  http://localhost:8000/
+  http://localhost:8002/
 ```
 ## Production Deployment

View File

@@ -3,7 +3,7 @@
 A high-performance, DynamoDB-compatible database server written in Odin, backed by RocksDB.
 ```
 ╦╔═╗╦═╗╔╦╗╦ ╦╔╗╔╔╦╗╔╗
 ║║ ║╠╦╝║║║║ ║║║║ ║║╠╩╗
 ╚╝╚═╝╩╚═╩ ╩╚═╝╝╚╝═╩╝╚═╝
 DynamoDB-Compatible Database
@@ -55,7 +55,7 @@ sudo apt install librocksdb-dev libsnappy-dev liblz4-dev libzstd-dev libbz2-dev
 # Build the server
 make build
-# Run with default settings (localhost:8000, ./data directory)
+# Run with default settings (localhost:8002, ./data directory)
 make run
 # Run with custom port
@@ -70,7 +70,7 @@ make run DATA_DIR=/tmp/jormundb
 ```bash
 # Create a table
 aws dynamodb create-table \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key-schema AttributeName=id,KeyType=HASH \
   --attribute-definitions AttributeName=id,AttributeType=S \
@@ -78,26 +78,26 @@ aws dynamodb create-table \
 # Put an item
 aws dynamodb put-item \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --item '{"id":{"S":"user123"},"name":{"S":"Alice"},"age":{"N":"30"}}'
 # Get an item
 aws dynamodb get-item \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key '{"id":{"S":"user123"}}'
 # Query items
 aws dynamodb query \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key-condition-expression "id = :id" \
   --expression-attribute-values '{":id":{"S":"user123"}}'
 # Scan table
 aws dynamodb scan \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users
 ```
@@ -163,15 +163,15 @@ handle_request :: proc(conn: net.TCP_Socket) {
     arena: mem.Arena
     mem.arena_init(&arena, make([]byte, mem.Megabyte * 4))
     defer mem.arena_destroy(&arena)
     context.allocator = mem.arena_allocator(&arena)
     // Everything below uses the arena automatically
     // No manual frees, no errdefer cleanup needed
     request := parse_request()   // Uses context.allocator
     response := process(request) // Uses context.allocator
     send_response(response)      // Uses context.allocator
     // Arena is freed here automatically
 }
 ```
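The arena-per-request pattern shown in this README hunk can be approximated outside Odin too. A minimal Python sketch (a toy bump allocator, purely illustrative; `Arena` and `handle_request` are made-up names, not the server's code):

```python
# Toy arena: every allocation for one request comes from a single buffer,
# and the whole buffer is dropped at once when the request ends.
class Arena:
    def __init__(self, size: int):
        self.buf = bytearray(size)  # one backing block
        self.offset = 0             # bump pointer

    def alloc(self, n: int) -> memoryview:
        if self.offset + n > len(self.buf):
            raise MemoryError("arena exhausted")
        view = memoryview(self.buf)[self.offset:self.offset + n]
        self.offset += n
        return view

def handle_request() -> int:
    arena = Arena(4 * 1024 * 1024)  # 4 MB, like the Odin example
    key = arena.alloc(16)
    value = arena.alloc(256)
    key[:8] = b"user:123"
    # ... parse, process, respond using arena.alloc ...
    # dropping `arena` releases everything at once; no per-object frees
    return arena.offset

print(handle_request())
```

The point the README's comments make ("no manual frees") falls out of the design: lifetimes are tied to the arena, not to individual objects.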
@@ -243,7 +243,7 @@ Scan (full table) | 5000 ops | 234.56 ms | 21320 ops/sec
 ### Environment Variables
 ```bash
-JORMUN_PORT=8000 # Server port
+JORMUN_PORT=8002 # Server port
 JORMUN_HOST=0.0.0.0 # Bind address
 JORMUN_DATA_DIR=./data # RocksDB data directory
 JORMUN_VERBOSE=1 # Enable verbose logging
@@ -275,7 +275,7 @@ chmod 755 ./data
 Check if the port is already in use:
 ```bash
-lsof -i :8000
+lsof -i :8002
 ```
 ### "Invalid JSON" errors

TODO.md
View File

@@ -12,8 +12,8 @@ This tracks the rewrite from Zig to Odin and remaining features.
 - [x] Core types (dynamodb/types.odin)
 - [x] Key codec with varint encoding (key_codec/key_codec.odin)
 - [x] Main entry point with arena pattern demo
-- [x] LICENSE file
 - [x] .gitignore
+- [x] HTTP Server Scaffolding
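The varint encoding the key codec item above refers to can be sketched as LEB128-style packing. This is only an illustration of the idea; the actual scheme in key_codec/key_codec.odin may differ (e.g. an order-preserving variant for sortable keys):

```python
# LEB128-style varint: 7 payload bits per byte, high bit = continuation.
def encode_varint(n: int) -> bytes:
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)         # final byte, high bit clear
            return bytes(out)

def decode_varint(data: bytes) -> tuple[int, int]:
    result = shift = 0
    for i, b in enumerate(data):
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, i + 1  # (value, bytes consumed)
        shift += 7
    raise ValueError("truncated varint")

assert decode_varint(encode_varint(300)) == (300, 2)
```

Small keys stay small (one byte for values under 128) while arbitrary lengths remain representable.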
 ## 🚧 In Progress (Need to Complete)
@@ -23,7 +23,7 @@ This tracks the rewrite from Zig to Odin and remaining features.
   - Parse `{"S": "value"}` format
   - Serialize AttributeValue to DynamoDB JSON
   - Parse request bodies (PutItem, GetItem, etc.)
 - [ ] **item_codec/item_codec.odin** - Binary TLV encoding for items
   - Encode Item to binary TLV format
   - Decode binary TLV back to Item
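The TLV (type-length-value) item encoding planned above can be sketched as follows. The tags, header layout, and helper names here are assumptions for illustration, not the format item_codec.odin will necessarily use:

```python
import struct

# Hypothetical attribute tags (DynamoDB stores numbers as decimal strings).
TAG_S = 1  # string attribute
TAG_N = 2  # number attribute

def encode_attr(tag: int, value: bytes) -> bytes:
    # 1-byte type tag, 4-byte big-endian length, then the raw value bytes.
    return struct.pack(">BI", tag, len(value)) + value

def decode_attr(buf: bytes, pos: int):
    tag, length = struct.unpack_from(">BI", buf, pos)
    start = pos + 5
    return tag, buf[start:start + length], start + length

encoded = encode_attr(TAG_S, b"Alice") + encode_attr(TAG_N, b"30")
tag, value, pos = decode_attr(encoded, 0)
assert (tag, value) == (TAG_S, b"Alice")
tag, value, pos = decode_attr(encoded, pos)
assert (tag, value) == (TAG_N, b"30")
```

Because each field carries its own length, a decoder can skip unknown tags, which keeps the format forward-compatible as new attribute types are added.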
@@ -50,7 +50,7 @@ This tracks the rewrite from Zig to Odin and remaining features.
   - Read JSON bodies
   - Send HTTP responses with headers
   - Keep-alive support
-  - Options:
+  - Options (why we haven't checked this off yet: we need to make sure we choose the right option as the project grows; it might make more sense to implement a different one):
   - Use `core:net` directly
   - Use C FFI with libmicrohttpd
   - Use Odin's vendor:microui (if suitable)
@@ -69,6 +69,23 @@ This tracks the rewrite from Zig to Odin and remaining features.
   - ADD operations
   - DELETE operations
+### Replication Support (Priority 4)
+- [ ] **Build a C++ shim in order to use RocksDB's WAL replication helpers**
+- [ ] **Add a configurator to set an instance as a master or slave node and point it at the proper Target and Destination IPs**
+- [ ] **Leverage the C++ helpers from the shim**
+### Subscribe To Changes Feature (Priority LAST, but keep in mind: semantics we decide now will make this easier later)
+- [ ] **Best-effort notifications (Postgres-ish LISTEN/NOTIFY: in-memory pub/sub fanout; if you're not connected, you miss it)**
+  - Add an in-process "event bus" with channels: table-wide, partition-key, item-key, "all"
+  - When putItem/deleteItem/updateItem/createTable/... commits successfully, publish {op, table, key, timestamp, item?}
+- [ ] **Durable change streams (Mongo-ish: append every mutation to a persistent log and let consumers read it with resume tokens)**
+  - Create a "changelog" keyspace
+  - Generate a monotonically increasing sequence using a stable per-partition sequence cursor
+  - Expose via an API (I prefer publishing to MQTT or SSE)
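The best-effort "event bus" described in the new TODO items can be sketched as below. Channel naming and the event shape are assumptions, not the final design; disconnected subscribers simply miss events, exactly as the LISTEN/NOTIFY comparison implies:

```python
from collections import defaultdict
import time

class EventBus:
    """In-memory pub/sub fanout; nothing is persisted."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # channel name -> callbacks

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, op, table, key, item=None):
        event = {"op": op, "table": table, "key": key,
                 "timestamp": time.time(), "item": item}
        # Fan out across the channel granularities named in the TODO:
        # everything, table-wide, and item-key.
        for channel in ("all", f"table:{table}", f"key:{table}/{key}"):
            for cb in self.subscribers[channel]:
                cb(event)

bus = EventBus()
seen = []
bus.subscribe("table:Users", seen.append)
bus.publish("PutItem", "Users", "user123", {"name": {"S": "Alice"}})
assert seen[0]["op"] == "PutItem" and seen[0]["key"] == "user123"
```

The durable-stream variant would replace the in-memory fanout with an append to the "changelog" keyspace keyed by the per-partition sequence cursor, so consumers can resume from a token rather than miss events.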
 ## 📋 Testing
 - [ ] Unit tests for key_codec
@@ -182,5 +199,5 @@ make test
 make run
 # Test with AWS CLI
-aws dynamodb list-tables --endpoint-url http://localhost:8000
+aws dynamodb list-tables --endpoint-url http://localhost:8002
 ```

View File

@@ -7,10 +7,10 @@ OUTPUT_FILE="project_context.txt"
 EXCLUDE_DIRS=("build" "data" ".git")
 # File extensions to include (add more as needed)
-INCLUDE_EXTENSIONS=("odin" "Makefile" "md")
+INCLUDE_EXTENSIONS=("odin" "Makefile" "md" "json" "h" "cc")
 # Special files to include (without extension)
-INCLUDE_FILES=("ols.json" "Makefile" "build.odin.zon")
+INCLUDE_FILES=()
 # Clear the output file
 > "$OUTPUT_FILE"

View File

@@ -1,5 +1,5 @@
 # Project: jormun-db
-# Generated: Sun Feb 15 08:31:32 AM EST 2026
+# Generated: Sun Feb 15 11:44:33 AM EST 2026
 ================================================================================
@@ -899,6 +899,7 @@ package main
 import "core:fmt"
 import "core:mem"
+import vmem "core:mem/virtual"
 import "core:net"
 import "core:strings"
 import "core:strconv"
@@ -1097,29 +1098,31 @@ handle_connection :: proc(server: ^Server, conn: net.TCP_Socket, source: net.End
     defer net.close(conn)
     request_count := 0
-    // Keep-alive loop
     for request_count < server.config.max_requests_per_connection {
         request_count += 1
-        // Create arena for this request (4MB)
-        arena: mem.Arena
-        mem.arena_init(&arena, make([]byte, mem.Megabyte * 4))
-        defer mem.arena_destroy(&arena)
-        request_alloc := mem.arena_allocator(&arena)
+        // Growing arena for this request
+        arena: vmem.Arena
+        arena_err := vmem.arena_init_growing(&arena)
+        if arena_err != .None {
+            break
+        }
+        defer vmem.arena_destroy(&arena)
+        request_alloc := vmem.arena_allocator(&arena)
+        // TODO: Double check if we want *all* downstream allocations to use the request arena?
+        old := context.allocator
+        context.allocator = request_alloc
+        defer context.allocator = old
-        // Parse request
         request, parse_ok := parse_request(conn, request_alloc, server.config)
         if !parse_ok {
-            // Connection closed or parse error
             break
         }
-        // Call handler
         response := server.handler(server.handler_ctx, &request, request_alloc)
-        // Send response
         send_ok := send_response(conn, &response, request_alloc)
         if !send_ok {
             break
@@ -1222,7 +1225,7 @@ parse_request :: proc(
         copy(body, existing_body)
         remaining := content_length - len(existing_body)
-        bytes_read := len(existing_body)
+        body_written := len(existing_body)
         for remaining > 0 {
             chunk_size := min(remaining, config.read_buffer_size)
@@ -1233,8 +1236,8 @@ parse_request :: proc(
                 return {}, false
             }
-            copy(body[bytes_read:], chunk[:n])
-            bytes_read += n
+            copy(body[body_written:], chunk[:n])
+            body_written += n
             remaining -= n
         }
     }
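The renamed-variable fix above ("bytes_read" to "body_written") clarifies a standard read-exactly-N-bytes loop: the write offset into the body buffer is tracked separately from each chunk's size. A minimal Python analogue, where `recv` is a stand-in for the socket read and the helper name is made up for the sketch:

```python
def read_body(recv, content_length, existing_body, read_buffer_size=4096):
    # `existing_body` is whatever body data arrived in the same read as the headers.
    body = bytearray(content_length)
    body[:len(existing_body)] = existing_body
    body_written = len(existing_body)   # write offset into `body`
    remaining = content_length - body_written
    while remaining > 0:
        chunk = recv(min(remaining, read_buffer_size))
        if not chunk:                   # connection closed early
            return None
        body[body_written:body_written + len(chunk)] = chunk
        body_written += len(chunk)      # advance by what actually arrived,
        remaining -= len(chunk)         # which may be less than requested
    return bytes(body)

# Simulate a socket that delivers the body in small chunks.
data = iter([b"llo ", b"world"])
assert read_body(lambda n: next(data)[:n], 11, b"he") == b"hello world"
```

Short reads are the reason the loop exists at all: a single `recv` is never guaranteed to return everything that was requested.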
@@ -1592,7 +1595,7 @@ import "core:fmt"
 import "core:mem"
 import "core:os"
 import "core:strconv"
-import "core:strings"
+//import "core:strings" // We'll use this in the future, but since we don't right now the compiler complains
 import "rocksdb"
 Config :: struct {
@@ -1654,7 +1657,7 @@ main :: proc() {
 // Temporary HTTP request handler
 // TODO: Replace with full DynamoDB handler once dynamodb/handler.odin is implemented
 handle_http_request :: proc(ctx: rawptr, request: ^HTTP_Request, request_alloc: mem.Allocator) -> HTTP_Response {
-    db := cast(^rocksdb.DB)ctx
+    //db := cast(^rocksdb.DB)ctx // We'll use this in the future, but since we don't right now the compiler complains
     response := response_init(request_alloc)
     response_add_header(&response, "Content-Type", "application/x-amz-json-1.0")
@@ -1675,59 +1678,17 @@ handle_http_request :: proc(ctx: rawptr, request: ^HTTP_Request, request_alloc:
     return response
 }
-// Demonstrate arena-per-request memory management
-demo_arena_pattern :: proc(db: ^rocksdb.DB) {
-    // Simulate handling a request
-    {
-        // Create arena for this request
-        arena: mem.Arena
-        mem.arena_init(&arena, make([]byte, mem.Megabyte * 4))
-        defer mem.arena_destroy(&arena)
-        // Set context allocator to arena
-        context.allocator = mem.arena_allocator(&arena)
-        // All allocations below use the arena automatically
-        // No individual frees needed!
-        fmt.println("Simulating request handler...")
-        // Example: parse JSON, process, respond
-        table_name := strings.clone("Users")
-        key := make([]byte, 16)
-        value := make([]byte, 256)
-        // Simulate storage operation
-        copy(key, "user:123")
-        copy(value, `{"name":"Alice","age":30}`)
-        err := rocksdb.db_put(db, key, value)
-        if err == .None {
-            fmt.println("✓ Stored item using arena allocator")
-        }
-        // Read it back
-        result, read_err := rocksdb.db_get(db, key)
-        if read_err == .None {
-            fmt.printf("✓ Retrieved item: %s\n", string(result))
-        }
-        // Everything is freed here when arena is destroyed
-        fmt.println("✓ Arena destroyed - all memory freed automatically")
-    }
-}
 parse_config :: proc() -> Config {
     config := Config{
         host = "0.0.0.0",
-        port = 8000,
+        port = 8002,
         data_dir = "./data",
         verbose = false,
     }
     // Environment variables
-    if port_str, ok := os.lookup_env("JORMUN_PORT"); ok {
-        if port, ok := strconv.parse_int(port_str); ok {
+    if port_str, env_ok := os.lookup_env("JORMUN_PORT"); env_ok {
+        if port, parse_ok := strconv.parse_int(port_str); parse_ok {
             config.port = port
         }
     }
@@ -1767,210 +1728,6 @@ print_banner :: proc(config: Config) {
 }
================================================================================
FILE: ./Makefile
================================================================================
.PHONY: all build release run test clean fmt help install
# Project configuration
PROJECT_NAME := jormundb
ODIN := odin
BUILD_DIR := build
SRC_DIR := .
# RocksDB and compression libraries
ROCKSDB_LIBS := -lrocksdb -lstdc++ -lsnappy -llz4 -lzstd -lz -lbz2
# Platform-specific library paths
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Darwin)
# macOS (Homebrew)
LIB_PATH := -L/usr/local/lib -L/opt/homebrew/lib
INCLUDE_PATH := -I/usr/local/include -I/opt/homebrew/include
else ifeq ($(UNAME_S),Linux)
# Linux
LIB_PATH := -L/usr/local/lib -L/usr/lib
INCLUDE_PATH := -I/usr/local/include
endif
# Build flags
DEBUG_FLAGS := -debug -o:none
RELEASE_FLAGS := -o:speed -disable-assert -no-bounds-check
COMMON_FLAGS := -vet -strict-style
# Linker flags
EXTRA_LINKER_FLAGS := $(LIB_PATH) $(ROCKSDB_LIBS)
# Runtime configuration
PORT ?= 8000
HOST ?= 0.0.0.0
DATA_DIR ?= ./data
VERBOSE ?= 0
# Colors for output
BLUE := \033[0;34m
GREEN := \033[0;32m
YELLOW := \033[0;33m
RED := \033[0;31m
NC := \033[0m # No Color
# Default target
all: build
# Build debug version
build:
@echo "$(BLUE)Building $(PROJECT_NAME) (debug)...$(NC)"
@mkdir -p $(BUILD_DIR)
$(ODIN) build $(SRC_DIR) \
$(COMMON_FLAGS) \
$(DEBUG_FLAGS) \
-out:$(BUILD_DIR)/$(PROJECT_NAME) \
-extra-linker-flags:"$(EXTRA_LINKER_FLAGS)"
@echo "$(GREEN)✓ Build complete: $(BUILD_DIR)/$(PROJECT_NAME)$(NC)"
# Build optimized release version
release:
@echo "$(BLUE)Building $(PROJECT_NAME) (release)...$(NC)"
@mkdir -p $(BUILD_DIR)
$(ODIN) build $(SRC_DIR) \
$(COMMON_FLAGS) \
$(RELEASE_FLAGS) \
-out:$(BUILD_DIR)/$(PROJECT_NAME) \
-extra-linker-flags:"$(EXTRA_LINKER_FLAGS)"
@echo "$(GREEN)✓ Release build complete: $(BUILD_DIR)/$(PROJECT_NAME)$(NC)"
# Run the server
run: build
@echo "$(BLUE)Starting $(PROJECT_NAME)...$(NC)"
@mkdir -p $(DATA_DIR)
@JORMUN_PORT=$(PORT) \
JORMUN_HOST=$(HOST) \
JORMUN_DATA_DIR=$(DATA_DIR) \
JORMUN_VERBOSE=$(VERBOSE) \
$(BUILD_DIR)/$(PROJECT_NAME)
# Run with custom port
run-port: build
@echo "$(BLUE)Starting $(PROJECT_NAME) on port $(PORT)...$(NC)"
@mkdir -p $(DATA_DIR)
@JORMUN_PORT=$(PORT) $(BUILD_DIR)/$(PROJECT_NAME)
# Run tests
test:
@echo "$(BLUE)Running tests...$(NC)"
$(ODIN) test $(SRC_DIR) \
$(COMMON_FLAGS) \
$(DEBUG_FLAGS) \
-extra-linker-flags:"$(EXTRA_LINKER_FLAGS)"
@echo "$(GREEN)✓ Tests passed$(NC)"
# Format code
fmt:
@echo "$(BLUE)Formatting code...$(NC)"
@find $(SRC_DIR) -name "*.odin" -exec odin-format -w {} \;
@echo "$(GREEN)✓ Code formatted$(NC)"
# Clean build artifacts
clean:
@echo "$(YELLOW)Cleaning build artifacts...$(NC)"
@rm -rf $(BUILD_DIR)
@rm -rf $(DATA_DIR)
@echo "$(GREEN)✓ Clean complete$(NC)"
# Install to /usr/local/bin (requires sudo)
install: release
@echo "$(BLUE)Installing $(PROJECT_NAME)...$(NC)"
@sudo cp $(BUILD_DIR)/$(PROJECT_NAME) /usr/local/bin/
@sudo chmod +x /usr/local/bin/$(PROJECT_NAME)
@echo "$(GREEN)✓ Installed to /usr/local/bin/$(PROJECT_NAME)$(NC)"
# Uninstall from /usr/local/bin
uninstall:
@echo "$(YELLOW)Uninstalling $(PROJECT_NAME)...$(NC)"
@sudo rm -f /usr/local/bin/$(PROJECT_NAME)
@echo "$(GREEN)✓ Uninstalled$(NC)"
# Check dependencies
check-deps:
@echo "$(BLUE)Checking dependencies...$(NC)"
@which $(ODIN) > /dev/null || (echo "$(RED)✗ Odin compiler not found$(NC)" && exit 1)
@pkg-config --exists rocksdb || (echo "$(RED)✗ RocksDB not found$(NC)" && exit 1)
@echo "$(GREEN)✓ All dependencies found$(NC)"
# AWS CLI test commands
aws-test: run &
@sleep 2
@echo "$(BLUE)Testing with AWS CLI...$(NC)"
@echo "\n$(YELLOW)Creating table...$(NC)"
@aws dynamodb create-table \
--endpoint-url http://localhost:$(PORT) \
--table-name TestTable \
--key-schema AttributeName=pk,KeyType=HASH \
--attribute-definitions AttributeName=pk,AttributeType=S \
--billing-mode PAY_PER_REQUEST || true
@echo "\n$(YELLOW)Listing tables...$(NC)"
@aws dynamodb list-tables --endpoint-url http://localhost:$(PORT)
@echo "\n$(YELLOW)Putting item...$(NC)"
@aws dynamodb put-item \
--endpoint-url http://localhost:$(PORT) \
--table-name TestTable \
--item '{"pk":{"S":"test1"},"data":{"S":"hello world"}}'
@echo "\n$(YELLOW)Getting item...$(NC)"
@aws dynamodb get-item \
--endpoint-url http://localhost:$(PORT) \
--table-name TestTable \
--key '{"pk":{"S":"test1"}}'
@echo "\n$(YELLOW)Scanning table...$(NC)"
@aws dynamodb scan \
--endpoint-url http://localhost:$(PORT) \
--table-name TestTable
@echo "\n$(GREEN)✓ AWS CLI test complete$(NC)"
# Development workflow
dev: clean build run
# Quick rebuild and run
quick:
@$(MAKE) build run
# Show help
help:
@echo "$(BLUE)JormunDB - DynamoDB-compatible database$(NC)"
@echo ""
@echo "$(GREEN)Build Commands:$(NC)"
@echo " make build - Build debug version"
@echo " make release - Build optimized release version"
@echo " make clean - Remove build artifacts"
@echo ""
@echo "$(GREEN)Run Commands:$(NC)"
@echo " make run - Build and run server (default: localhost:8000)"
@echo " make run PORT=9000 - Run on custom port"
@echo " make dev - Clean, build, and run"
@echo " make quick - Fast rebuild and run"
@echo ""
@echo "$(GREEN)Test Commands:$(NC)"
@echo " make test - Run unit tests"
@echo " make aws-test - Test with AWS CLI commands"
@echo ""
@echo "$(GREEN)Utility Commands:$(NC)"
@echo " make fmt - Format source code"
@echo " make check-deps - Check for required dependencies"
@echo " make install - Install to /usr/local/bin (requires sudo)"
@echo " make uninstall - Remove from /usr/local/bin"
@echo ""
@echo "$(GREEN)Configuration:$(NC)"
@echo " PORT=$(PORT) - Server port"
@echo " HOST=$(HOST) - Bind address"
@echo " DATA_DIR=$(DATA_DIR) - RocksDB data directory"
@echo " VERBOSE=$(VERBOSE) - Enable verbose logging (0/1)"
@echo ""
@echo "$(GREEN)Examples:$(NC)"
@echo " make run PORT=9000"
@echo " make run DATA_DIR=/tmp/jormun VERBOSE=1"
@echo " make dev"
 ================================================================================
 FILE: ./ols.json
 ================================================================================
@@ -2089,7 +1846,7 @@ export PATH=$PATH:/path/to/odin
 ### Basic Usage
 ```bash
-# Run with defaults (localhost:8000, ./data directory)
+# Run with defaults (localhost:8002, ./data directory)
 make run
 ```
@@ -2106,10 +1863,10 @@ You should see:
 ║                                               ║
 ╚═══════════════════════════════════════════════╝
-Port: 8000 | Data Dir: ./data
+Port: 8002 | Data Dir: ./data
 Storage engine initialized at ./data
-Starting DynamoDB-compatible server on 0.0.0.0:8000
+Starting DynamoDB-compatible server on 0.0.0.0:8002
 Ready to accept connections!
 ```
@@ -2174,7 +1931,7 @@ aws configure
 **Create a Table:**
 ```bash
 aws dynamodb create-table \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key-schema \
     AttributeName=id,KeyType=HASH \
@@ -2185,13 +1942,13 @@ aws dynamodb create-table \
 **List Tables:**
 ```bash
-aws dynamodb list-tables --endpoint-url http://localhost:8000
+aws dynamodb list-tables --endpoint-url http://localhost:8002
 ```
 **Put an Item:**
 ```bash
 aws dynamodb put-item \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --item '{
     "id": {"S": "user123"},
@@ -2204,7 +1961,7 @@ aws dynamodb put-item \
 **Get an Item:**
 ```bash
 aws dynamodb get-item \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key '{"id": {"S": "user123"}}'
 ```
@@ -2212,7 +1969,7 @@ aws dynamodb get-item \
 **Query Items:**
 ```bash
 aws dynamodb query \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key-condition-expression "id = :id" \
   --expression-attribute-values '{
@@ -2223,14 +1980,14 @@ aws dynamodb query \
 **Scan Table:**
 ```bash
 aws dynamodb scan \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users
 ```
 **Delete an Item:**
 ```bash
 aws dynamodb delete-item \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users \
   --key '{"id": {"S": "user123"}}'
 ```
@@ -2238,7 +1995,7 @@ aws dynamodb delete-item \
 **Delete a Table:**
 ```bash
 aws dynamodb delete-table \
-  --endpoint-url http://localhost:8000 \
+  --endpoint-url http://localhost:8002 \
   --table-name Users
 ```
@@ -2250,7 +2007,7 @@ aws dynamodb delete-table \
 const { DynamoDBClient, PutItemCommand, GetItemCommand } = require("@aws-sdk/client-dynamodb");
 const client = new DynamoDBClient({
-  endpoint: "http://localhost:8000",
+  endpoint: "http://localhost:8002",
   region: "us-east-1",
   credentials: {
     accessKeyId: "dummy",
@@ -2267,13 +2024,13 @@ async function test() {
       name: { S: "Alice" }
     }
   }));
   // Get the item
   const result = await client.send(new GetItemCommand({
     TableName: "Users",
     Key: { id: { S: "user123" } }
   }));
   console.log(result.Item);
 }
@@ -2287,7 +2044,7 @@ import boto3
 dynamodb = boto3.client(
     'dynamodb',
-    endpoint_url='http://localhost:8000',
+    endpoint_url='http://localhost:8002',
     region_name='us-east-1',
     aws_access_key_id='dummy',
     aws_secret_access_key='dummy'
@@ -2352,8 +2109,8 @@ make fmt
 ### Port Already in Use
 ```bash
-# Check what's using port 8000
-lsof -i :8000
+# Check what's using port 8002
+lsof -i :8002
 # Use a different port
 make run PORT=9000
@@ -2414,7 +2171,7 @@ make profile
 # Load test
 ab -n 10000 -c 100 -p item.json -T application/json \
-  http://localhost:8000/
+  http://localhost:8002/
 ```
 ## Production Deployment
@@ -2454,7 +2211,7 @@ FILE: ./README.md
 A high-performance, DynamoDB-compatible database server written in Odin, backed by RocksDB.
 ```
 ╦╔═╗╦═╗╔╦╗╦ ╦╔╗╔╔╦╗╔╗
 ║║ ║╠╦╝║║║║ ║║║║ ║║╠╩╗
 ╚╝╚═╝╩╚═╩ ╩╚═╝╝╚╝═╩╝╚═╝
 DynamoDB-Compatible Database
@@ -2506,7 +2263,7 @@ sudo apt install librocksdb-dev libsnappy-dev liblz4-dev libzstd-dev libbz2-dev
 # Build the server
 make build
-# Run with default settings (localhost:8000, ./data directory)
+# Run with default settings (localhost:8002, ./data directory)
 make run
 # Run with custom port
@@ -2521,7 +2278,7 @@ make run DATA_DIR=/tmp/jormundb
```bash
# Create a table
aws dynamodb create-table \
  --endpoint-url http://localhost:8002 \
  --table-name Users \
  --key-schema AttributeName=id,KeyType=HASH \
  --attribute-definitions AttributeName=id,AttributeType=S \
@@ -2529,26 +2286,26 @@ aws dynamodb create-table \
# Put an item
aws dynamodb put-item \
  --endpoint-url http://localhost:8002 \
  --table-name Users \
  --item '{"id":{"S":"user123"},"name":{"S":"Alice"},"age":{"N":"30"}}'

# Get an item
aws dynamodb get-item \
  --endpoint-url http://localhost:8002 \
  --table-name Users \
  --key '{"id":{"S":"user123"}}'

# Query items
aws dynamodb query \
  --endpoint-url http://localhost:8002 \
  --table-name Users \
  --key-condition-expression "id = :id" \
  --expression-attribute-values '{":id":{"S":"user123"}}'

# Scan table
aws dynamodb scan \
  --endpoint-url http://localhost:8002 \
  --table-name Users
```
@@ -2614,15 +2371,15 @@ handle_request :: proc(conn: net.TCP_Socket) {
    arena: mem.Arena
    mem.arena_init(&arena, make([]byte, mem.Megabyte * 4))
    defer mem.arena_destroy(&arena)

    context.allocator = mem.arena_allocator(&arena)

    // Everything below uses the arena automatically
    // No manual frees, no errdefer cleanup needed
    request := parse_request()   // Uses context.allocator
    response := process(request) // Uses context.allocator
    send_response(response)      // Uses context.allocator

    // Arena is freed here automatically
}
```
@@ -2694,7 +2451,7 @@ Scan (full table) | 5000 ops | 234.56 ms | 21320 ops/sec
### Environment Variables

```bash
JORMUN_PORT=8002        # Server port
JORMUN_HOST=0.0.0.0     # Bind address
JORMUN_DATA_DIR=./data  # RocksDB data directory
JORMUN_VERBOSE=1        # Enable verbose logging
@@ -2726,7 +2483,7 @@ chmod 755 ./data
Check if the port is already in use:

```bash
lsof -i :8002
```

### "Invalid JSON" errors
@@ -2779,6 +2536,9 @@ import "core:fmt"
foreign import rocksdb "system:rocksdb"

// In order to use RocksDB's WAL replication helpers, we need to import the C++ library, so we use this shim.
//foreign import rocksdb_shim "system:jormun_rocksdb_shim" // Not wired up yet; the compiler complains about the unused foreign import, so it stays commented out for now.
// RocksDB C API types
RocksDB_T :: distinct rawptr
RocksDB_Options :: distinct rawptr
@@ -2816,7 +2576,7 @@ foreign rocksdb {
    // Database operations
    rocksdb_open :: proc(options: RocksDB_Options, path: cstring, errptr: ^cstring) -> RocksDB_T ---
    rocksdb_close :: proc(db: RocksDB_T) ---

    // Options
    rocksdb_options_create :: proc() -> RocksDB_Options ---
    rocksdb_options_destroy :: proc(options: RocksDB_Options) ---
@@ -2824,25 +2584,25 @@ foreign rocksdb {
    rocksdb_options_increase_parallelism :: proc(options: RocksDB_Options, total_threads: c.int) ---
    rocksdb_options_optimize_level_style_compaction :: proc(options: RocksDB_Options, memtable_memory_budget: c.uint64_t) ---
    rocksdb_options_set_compression :: proc(options: RocksDB_Options, compression: c.int) ---

    // Write options
    rocksdb_writeoptions_create :: proc() -> RocksDB_WriteOptions ---
    rocksdb_writeoptions_destroy :: proc(options: RocksDB_WriteOptions) ---

    // Read options
    rocksdb_readoptions_create :: proc() -> RocksDB_ReadOptions ---
    rocksdb_readoptions_destroy :: proc(options: RocksDB_ReadOptions) ---

    // Put/Get/Delete
    rocksdb_put :: proc(db: RocksDB_T, options: RocksDB_WriteOptions, key: [^]byte, keylen: c.size_t, val: [^]byte, vallen: c.size_t, errptr: ^cstring) ---
    rocksdb_get :: proc(db: RocksDB_T, options: RocksDB_ReadOptions, key: [^]byte, keylen: c.size_t, vallen: ^c.size_t, errptr: ^cstring) -> [^]byte ---
    rocksdb_delete :: proc(db: RocksDB_T, options: RocksDB_WriteOptions, key: [^]byte, keylen: c.size_t, errptr: ^cstring) ---

    // Flush
    rocksdb_flushoptions_create :: proc() -> RocksDB_FlushOptions ---
    rocksdb_flushoptions_destroy :: proc(options: RocksDB_FlushOptions) ---
    rocksdb_flush :: proc(db: RocksDB_T, options: RocksDB_FlushOptions, errptr: ^cstring) ---

    // Write batch
    rocksdb_writebatch_create :: proc() -> RocksDB_WriteBatch ---
    rocksdb_writebatch_destroy :: proc(batch: RocksDB_WriteBatch) ---
@@ -2850,7 +2610,7 @@ foreign rocksdb {
    rocksdb_writebatch_delete :: proc(batch: RocksDB_WriteBatch, key: [^]byte, keylen: c.size_t) ---
    rocksdb_writebatch_clear :: proc(batch: RocksDB_WriteBatch) ---
    rocksdb_write :: proc(db: RocksDB_T, options: RocksDB_WriteOptions, batch: RocksDB_WriteBatch, errptr: ^cstring) ---

    // Iterator
    rocksdb_create_iterator :: proc(db: RocksDB_T, options: RocksDB_ReadOptions) -> RocksDB_Iterator ---
    rocksdb_iter_destroy :: proc(iter: RocksDB_Iterator) ---
@@ -2863,7 +2623,7 @@ foreign rocksdb {
    rocksdb_iter_prev :: proc(iter: RocksDB_Iterator) ---
    rocksdb_iter_key :: proc(iter: RocksDB_Iterator, klen: ^c.size_t) -> [^]byte ---
    rocksdb_iter_value :: proc(iter: RocksDB_Iterator, vlen: ^c.size_t) -> [^]byte ---

    // Memory management
    rocksdb_free :: proc(ptr: rawptr) ---
}
@@ -2883,41 +2643,41 @@ db_open :: proc(path: string, create_if_missing := true) -> (DB, Error) {
    if options == nil {
        return {}, .Unknown
    }

    // Set create if missing
    rocksdb_options_set_create_if_missing(options, create_if_missing ? 1 : 0)

    // Performance optimizations
    rocksdb_options_increase_parallelism(options, 4)
    rocksdb_options_optimize_level_style_compaction(options, 512 * 1024 * 1024)
    rocksdb_options_set_compression(options, ROCKSDB_LZ4_COMPRESSION)

    // Create write and read options
    write_options := rocksdb_writeoptions_create()
    if write_options == nil {
        rocksdb_options_destroy(options)
        return {}, .Unknown
    }

    read_options := rocksdb_readoptions_create()
    if read_options == nil {
        rocksdb_writeoptions_destroy(write_options)
        rocksdb_options_destroy(options)
        return {}, .Unknown
    }

    // Open database
    err: cstring
    path_cstr := fmt.ctprintf("%s", path)
    handle := rocksdb_open(options, path_cstr, &err)

    if err != nil {
        defer rocksdb_free(rawptr(err)) // Cast it here and now so we don't deal with issues from FFI down the line
        rocksdb_readoptions_destroy(read_options)
        rocksdb_writeoptions_destroy(write_options)
        rocksdb_options_destroy(options)
        return {}, .OpenFailed
    }

    return DB{
        handle = handle,
        options = options,
@@ -2947,7 +2707,7 @@ db_put :: proc(db: ^DB, key: []byte, value: []byte) -> Error {
        &err,
    )

    if err != nil {
        defer rocksdb_free(rawptr(err)) // Cast it here and now so we don't deal with issues from FFI down the line
        return .WriteFailed
    }
    return .None
@@ -2957,7 +2717,7 @@ db_put :: proc(db: ^DB, key: []byte, value: []byte) -> Error {
db_get :: proc(db: ^DB, key: []byte) -> (value: []byte, err: Error) {
    errptr: cstring
    value_len: c.size_t

    value_ptr := rocksdb_get(
        db.handle,
        db.read_options,
@@ -2966,21 +2726,21 @@ db_get :: proc(db: ^DB, key: []byte) -> (value: []byte, err: Error) {
        &value_len,
        &errptr,
    )

    if errptr != nil {
        defer rocksdb_free(rawptr(errptr)) // Cast it here and now so we don't deal with issues from FFI down the line
        return nil, .ReadFailed
    }

    if value_ptr == nil {
        return nil, .NotFound
    }

    // Copy the data and free RocksDB's buffer
    result := make([]byte, value_len, context.allocator)
    copy(result, value_ptr[:value_len])
    rocksdb_free(rawptr(value_ptr)) // Cast it here and now so we don't deal with issues from FFI down the line

    return result, .None
}
@@ -2995,7 +2755,7 @@ db_delete :: proc(db: ^DB, key: []byte) -> Error {
        &err,
    )

    if err != nil {
        defer rocksdb_free(rawptr(err)) // Cast it here and now so we don't deal with issues from FFI down the line
        return .DeleteFailed
    }
    return .None
@@ -3008,11 +2768,11 @@ db_flush :: proc(db: ^DB) -> Error {
        return .Unknown
    }
    defer rocksdb_flushoptions_destroy(flush_opts)

    err: cstring
    rocksdb_flush(db.handle, flush_opts, &err)

    if err != nil {
        defer rocksdb_free(rawptr(err)) // Cast it here and now so we don't deal with issues from FFI down the line
        return .IOError
    }
    return .None
@@ -3067,7 +2827,7 @@ batch_write :: proc(db: ^DB, batch: ^WriteBatch) -> Error {
    err: cstring
    rocksdb_write(db.handle, db.write_options, batch.handle, &err)

    if err != nil {
        defer rocksdb_free(rawptr(err)) // Cast it here and now so we don't deal with issues from FFI down the line
        return .WriteFailed
    }
    return .None
@@ -3143,6 +2903,93 @@ iter_value :: proc(iter: ^Iterator) -> []byte {
}
================================================================================
FILE: ./rocksdb_shim/rocksdb_shim.cc
================================================================================
// TODO: In order to use RocksDB's WAL replication helpers, we need to import the C++ library so we use this shim
/**
C++ shim implementation notes (the important bits)
In this rocksdb_shim.cc we'll need to use:
rocksdb::DB::Open(...)
db->GetLatestSequenceNumber()
db->GetUpdatesSince(seq, &iter)
from each TransactionLogIterator entry:
get WriteBatch and serialize via WriteBatch::Data()
apply via rocksdb::WriteBatch wb(data); db->Write(write_options, &wb);
Also we must configure WAL retention so the followers don't fall off the end. RocksDB warns the iterator can become invalid if the WAL is cleared aggressively; the typical controls are WAL TTL / size limit.
https://github.com/facebook/rocksdb/issues/1565
*/
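The leader/follower semantics the notes above describe can be sketched independently of RocksDB. The sketch below is a Python stand-in under stated assumptions: `LeaderLog` plays the role of the WAL plus `GetUpdatesSince` (including the trimmed-WAL failure mode), `Follower.catch_up` plays the role of replaying serialized write batches via `db->Write`, and all names here are hypothetical, not part of the shim's API.

```python
from collections import deque

class LeaderLog:
    """Stand-in for the leader's WAL: (sequence, serialized batch) pairs."""
    def __init__(self, retain=1000):
        self.entries = deque()  # (seq, batch_bytes), oldest first
        self.next_seq = 1
        self.retain = retain

    def write(self, batch_bytes):
        self.entries.append((self.next_seq, batch_bytes))
        self.next_seq += 1
        # Aggressive trimming is exactly how followers "fall off the end"
        while len(self.entries) > self.retain:
            self.entries.popleft()

    def updates_since(self, seq):
        """Mimics GetUpdatesSince: batches at or after seq, or an error if trimmed."""
        if self.entries and seq < self.entries[0][0]:
            raise LookupError("WAL trimmed past requested sequence; follower must re-sync")
        return [(s, b) for s, b in self.entries if s >= seq]

class Follower:
    def __init__(self):
        self.applied_seq = 0  # the follower's resume point
        self.state = []

    def catch_up(self, leader):
        for seq, batch in leader.updates_since(self.applied_seq + 1):
            self.state.append(batch)  # stands in for db->Write(write_options, &wb)
            self.applied_seq = seq

leader = LeaderLog()
for i in range(5):
    leader.write(f"batch-{i}".encode())

f = Follower()
f.catch_up(leader)
print(f.applied_seq)  # 5
```

The real shim would back `updates_since` with `GetUpdatesSince` and `WriteBatch::Data()`, but the retention/resume contract is the same: a follower only needs its last applied sequence to continue, and loses that guarantee once the leader trims past it.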
================================================================================
FILE: ./rocksdb_shim/rocksdb_shim.h
================================================================================
// In order to use RocksDB's WAL replication helpers, we need to import the C++ library so we use this shim
#pragma once
#include <stdint.h>
#include <stddef.h>
#ifdef __cplusplus
extern "C"
{
#endif
typedef struct jormun_db jormun_db;
typedef struct jormun_wal_iter jormun_wal_iter;
// Open/close (so Odin never touches rocksdb_t directly)
jormun_db *jormun_db_open(const char *path, int create_if_missing, char **err);
void jormun_db_close(jormun_db *db);
// Basic ops (you can mirror what you already have)
void jormun_db_put(jormun_db *db,
const void *key, size_t keylen,
const void *val, size_t vallen,
char **err);
unsigned char *jormun_db_get(jormun_db *db,
const void *key, size_t keylen,
size_t *vallen,
char **err);
// caller frees with this:
void jormun_free(void *p);
// Replication primitives
uint64_t jormun_latest_sequence(jormun_db *db);
// Iterator: start at seq (inclusive-ish; RocksDB positions to batch containing seq or first after)
jormun_wal_iter *jormun_wal_iter_create(jormun_db *db, uint64_t seq, char **err);
void jormun_wal_iter_destroy(jormun_wal_iter *it);
// Next batch -> returns 1 if produced a batch, 0 if no more / not available
// You get a serialized “write batch” blob (rocksdb::WriteBatch::Data()) plus the batch start seq.
int jormun_wal_iter_next(jormun_wal_iter *it,
uint64_t *batch_start_seq,
unsigned char **out_data,
size_t *out_len,
char **err);
// Apply serialized writebatch blob on follower
void jormun_apply_writebatch(jormun_db *db,
const unsigned char *data, size_t len,
char **err);
#ifdef __cplusplus
}
#endif
================================================================================
FILE: ./TODO.md
================================================================================
@@ -3161,8 +3008,8 @@ This tracks the rewrite from Zig to Odin and remaining features.
- [x] Core types (dynamodb/types.odin)
- [x] Key codec with varint encoding (key_codec/key_codec.odin)
- [x] Main entry point with arena pattern demo
- [x] LICENSE file
- [x] .gitignore
- [x] HTTP Server Scaffolding
## 🚧 In Progress (Need to Complete)
@@ -3172,7 +3019,7 @@ This tracks the rewrite from Zig to Odin and remaining features.
  - Parse `{"S": "value"}` format
  - Serialize AttributeValue to DynamoDB JSON
  - Parse request bodies (PutItem, GetItem, etc.)

- [ ] **item_codec/item_codec.odin** - Binary TLV encoding for items
  - Encode Item to binary TLV format
  - Decode binary TLV back to Item
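The encode/decode round trip the item_codec bullets call for can be illustrated with a minimal tag-length-value layout. This is a hedged sketch, not the project's actual wire format (which the TODO leaves undecided): the tag bytes, length widths, and the restriction to `S`/`N` attributes are all assumptions made for the example.

```python
import struct

# Hypothetical tag bytes -- the real item_codec format is still TBD.
TAG_S, TAG_N = 0x01, 0x02

def encode_item(item):
    """Encode {attr_name: ("S"|"N", value_str)} into tag-length-value bytes."""
    out = bytearray()
    for name, (typ, val) in item.items():
        nb, vb = name.encode(), val.encode()
        out.append(TAG_S if typ == "S" else TAG_N)          # tag
        out += struct.pack(">I", len(nb)) + nb              # name length + name
        out += struct.pack(">I", len(vb)) + vb              # value length + value
    return bytes(out)

def decode_item(data):
    item, i = {}, 0
    while i < len(data):
        typ = "S" if data[i] == TAG_S else "N"
        i += 1
        (nlen,) = struct.unpack_from(">I", data, i); i += 4
        name = data[i:i + nlen].decode(); i += nlen
        (vlen,) = struct.unpack_from(">I", data, i); i += 4
        val = data[i:i + vlen].decode(); i += vlen
        item[name] = (typ, val)
    return item

item = {"id": ("S", "user123"), "age": ("N", "30")}
assert decode_item(encode_item(item)) == item
```

The self-describing tags are what make TLV attractive here: the decoder never needs the table schema to walk an item.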
@@ -3199,7 +3046,7 @@ This tracks the rewrite from Zig to Odin and remaining features.
  - Read JSON bodies
  - Send HTTP responses with headers
  - Keep-alive support
  - Options (not checked off yet: we need to make sure we choose the right option as the project grows; it might make more sense to implement a different one):
    - Use `core:net` directly
    - Use C FFI with libmicrohttpd
    - Use Odin's vendor:microui (if suitable)
@@ -3218,6 +3065,23 @@ This tracks the rewrite from Zig to Odin and remaining features.
  - ADD operations
  - DELETE operations
### Replication Support (Priority 4)
- [ ] **Build C++ Shim in order to use RocksDB's WAL replication helpers**
- [ ] **Add a configurator to set an instance as a master or slave node and point it at the proper Target and Destination IPs**
- [ ] **Leverage C++ helpers from shim**
### Subscribe To Changes Feature (Priority LAST [but keep it in mind: the semantics we decide now will make this easier later])

- [ ] **Best-effort notifications (Postgres-ish LISTEN/NOTIFY [in-memory pub/sub fanout. If you're not connected, you miss it.])**
  - Add an in-process “event bus” with channels: table-wide, partition-key, item-key, “all”.
  - When putItem/deleteItem/updateItem/createTable/... commits successfully, publish {op, table, key, timestamp, item?}
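The best-effort fanout described above can be sketched in a few lines. This is an illustrative stand-in, not the planned Odin implementation: the `EventBus` class, channel-key shapes, and callback-based delivery are assumptions chosen to show the LISTEN/NOTIFY-style semantics (present subscribers get the event; nobody replays).

```python
from collections import defaultdict

class EventBus:
    """Best-effort in-process fanout: if you're not subscribed at publish time, you miss it."""
    def __init__(self):
        self.subs = defaultdict(list)  # channel key -> list of callbacks

    def subscribe(self, channel, callback):
        self.subs[channel].append(callback)

    def publish(self, op, table, key, item=None):
        event = {"op": op, "table": table, "key": key, "item": item}
        # Fan out to the item-key, table-wide, and "all" channels
        for channel in ((table, key), table, "all"):
            for cb in self.subs.get(channel, []):
                cb(event)

bus = EventBus()
seen = []
bus.subscribe("Users", seen.append)               # table-wide
bus.subscribe(("Users", "user123"), seen.append)  # item-key
bus.publish("PutItem", "Users", "user123", item={"name": "Alice"})
print(len(seen))  # 2
```

A partition-key channel would slot in as a fourth key shape in the `publish` loop; the point is that each granularity is just another dictionary key.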
- [ ] **Durable change streams (Mongo-ish [append every mutation to a persistent log and let consumers read it with resume tokens.])**
  - Create a “changelog” keyspace
  - Generate a monotonically increasing sequence by using a stable per-partition sequence cursor
  - Expose via an API (I prefer publishing to MQTT or SSE)
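The changelog-keyspace idea above can be sketched with an in-memory map keyed by `(partition, seq)`. Everything here is a hypothetical stand-in (the class name, the dict-backed "keyspace", and the choice of the last-seen sequence as the resume token); in the real store the changelog would live in RocksDB and the cursor would be persisted.

```python
class ChangeStream:
    """Durable-log sketch: changelog keyed by (partition, seq); a resume token is just the last seq seen."""
    def __init__(self):
        self.log = {}      # (partition_key, seq) -> change record
        self.cursors = {}  # partition_key -> last assigned seq

    def record(self, partition_key, change):
        # Per-partition monotonic sequence: independent counters, no global ordering needed
        seq = self.cursors.get(partition_key, 0) + 1
        self.cursors[partition_key] = seq
        self.log[(partition_key, seq)] = change
        return seq

    def read(self, partition_key, resume_token=0):
        """Return (changes, new_resume_token) for everything after resume_token."""
        seqs = sorted(s for (p, s) in self.log if p == partition_key and s > resume_token)
        changes = [self.log[(partition_key, s)] for s in seqs]
        return changes, (seqs[-1] if seqs else resume_token)

stream = ChangeStream()
stream.record("user123", {"op": "PutItem"})
stream.record("user123", {"op": "DeleteItem"})
changes, token = stream.read("user123", resume_token=0)
print(len(changes), token)  # 2 2
```

Unlike the best-effort bus, a consumer that disconnects just re-reads from its saved token, which is why the per-partition sequence has to be stable across restarts.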
## 📋 Testing

- [ ] Unit tests for key_codec
@@ -3331,7 +3195,7 @@ make test
make run

# Test with AWS CLI
aws dynamodb list-tables --endpoint-url http://localhost:8002
```