Compare commits


2 Commits

Author SHA1 Message Date
d318534782 fix concatenator 2026-02-15 11:45:09 -05:00
ad599a0af7 fix docs and todo 2026-02-15 11:42:43 -05:00
7 changed files with 248 additions and 366 deletions

.gitignore vendored
View File

@@ -1,2 +1,3 @@
./build
./data
./project_context.txt

View File

@@ -41,7 +41,7 @@ COMMON_FLAGS := -vet -strict-style
EXTRA_LINKER_FLAGS := $(LIB_PATH) $(SHIM_LIB) $(ROCKSDB_LIBS)
# Runtime configuration
PORT ?= 8000
PORT ?= 8002
HOST ?= 0.0.0.0
DATA_DIR ?= ./data
VERBOSE ?= 0
@@ -191,7 +191,7 @@ help:
@echo " make clean - Remove build artifacts"
@echo ""
@echo "$(GREEN)Run Commands:$(NC)"
@echo " make run - Build and run server (default: localhost:8000)"
@echo " make run - Build and run server (default: localhost:8002)"
@echo " make run PORT=9000 - Run on custom port"
@echo " make dev - Clean, build, and run"
@echo " make quick - Fast rebuild and run"

View File

@@ -101,7 +101,7 @@ export PATH=$PATH:/path/to/odin
### Basic Usage
```bash
# Run with defaults (localhost:8000, ./data directory)
# Run with defaults (localhost:8002, ./data directory)
make run
```
@@ -118,10 +118,10 @@ You should see:
║ ║
╚═══════════════════════════════════════════════╝
Port: 8000 | Data Dir: ./data
Port: 8002 | Data Dir: ./data
Storage engine initialized at ./data
Starting DynamoDB-compatible server on 0.0.0.0:8000
Starting DynamoDB-compatible server on 0.0.0.0:8002
Ready to accept connections!
```
@@ -186,7 +186,7 @@ aws configure
**Create a Table:**
```bash
aws dynamodb create-table \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key-schema \
AttributeName=id,KeyType=HASH \
@@ -197,13 +197,13 @@ aws dynamodb create-table \
**List Tables:**
```bash
aws dynamodb list-tables --endpoint-url http://localhost:8000
aws dynamodb list-tables --endpoint-url http://localhost:8002
```
**Put an Item:**
```bash
aws dynamodb put-item \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--item '{
"id": {"S": "user123"},
@@ -216,7 +216,7 @@ aws dynamodb put-item \
**Get an Item:**
```bash
aws dynamodb get-item \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key '{"id": {"S": "user123"}}'
```
@@ -224,7 +224,7 @@ aws dynamodb get-item \
**Query Items:**
```bash
aws dynamodb query \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key-condition-expression "id = :id" \
--expression-attribute-values '{
@@ -235,14 +235,14 @@ aws dynamodb query \
**Scan Table:**
```bash
aws dynamodb scan \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users
```
**Delete an Item:**
```bash
aws dynamodb delete-item \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key '{"id": {"S": "user123"}}'
```
@@ -250,7 +250,7 @@ aws dynamodb delete-item \
**Delete a Table:**
```bash
aws dynamodb delete-table \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users
```
@@ -262,7 +262,7 @@ aws dynamodb delete-table \
const { DynamoDBClient, PutItemCommand, GetItemCommand } = require("@aws-sdk/client-dynamodb");
const client = new DynamoDBClient({
endpoint: "http://localhost:8000",
endpoint: "http://localhost:8002",
region: "us-east-1",
credentials: {
accessKeyId: "dummy",
@@ -299,7 +299,7 @@ import boto3
dynamodb = boto3.client(
'dynamodb',
endpoint_url='http://localhost:8000',
endpoint_url='http://localhost:8002',
region_name='us-east-1',
aws_access_key_id='dummy',
aws_secret_access_key='dummy'
@@ -364,8 +364,8 @@ make fmt
### Port Already in Use
```bash
# Check what's using port 8000
lsof -i :8000
# Check what's using port 8002
lsof -i :8002
# Use a different port
make run PORT=9000
@@ -426,7 +426,7 @@ make profile
# Load test
ab -n 10000 -c 100 -p item.json -T application/json \
http://localhost:8000/
http://localhost:8002/
```
## Production Deployment

View File

@@ -55,7 +55,7 @@ sudo apt install librocksdb-dev libsnappy-dev liblz4-dev libzstd-dev libbz2-dev
# Build the server
make build
# Run with default settings (localhost:8000, ./data directory)
# Run with default settings (localhost:8002, ./data directory)
make run
# Run with custom port
@@ -70,7 +70,7 @@ make run DATA_DIR=/tmp/jormundb
```bash
# Create a table
aws dynamodb create-table \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key-schema AttributeName=id,KeyType=HASH \
--attribute-definitions AttributeName=id,AttributeType=S \
@@ -78,26 +78,26 @@ aws dynamodb create-table \
# Put an item
aws dynamodb put-item \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--item '{"id":{"S":"user123"},"name":{"S":"Alice"},"age":{"N":"30"}}'
# Get an item
aws dynamodb get-item \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key '{"id":{"S":"user123"}}'
# Query items
aws dynamodb query \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key-condition-expression "id = :id" \
--expression-attribute-values '{":id":{"S":"user123"}}'
# Scan table
aws dynamodb scan \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users
```
@@ -243,7 +243,7 @@ Scan (full table) | 5000 ops | 234.56 ms | 21320 ops/sec
### Environment Variables
```bash
JORMUN_PORT=8000 # Server port
JORMUN_PORT=8002 # Server port
JORMUN_HOST=0.0.0.0 # Bind address
JORMUN_DATA_DIR=./data # RocksDB data directory
JORMUN_VERBOSE=1 # Enable verbose logging
@@ -275,7 +275,7 @@ chmod 755 ./data
Check if the port is already in use:
```bash
lsof -i :8000
lsof -i :8002
```
### "Invalid JSON" errors

TODO.md
View File

@@ -12,8 +12,8 @@ This tracks the rewrite from Zig to Odin and remaining features.
- [x] Core types (dynamodb/types.odin)
- [x] Key codec with varint encoding (key_codec/key_codec.odin)
- [x] Main entry point with arena pattern demo
- [x] LICENSE file
- [x] .gitignore
- [x] HTTP Server Scaffolding
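The key codec item above leans on varint encoding. As a reference point, a generic LEB128-style varint (the usual starting scheme; the actual key_codec.odin may differ) looks like this in C:

```c
#include <stdint.h>
#include <stddef.h>

/* Unsigned LEB128-style varint: 7 payload bits per byte, high bit set on
 * every byte except the last. Small values take one byte; u64 max takes ten.
 * Caveat: plain LEB128 is NOT order-preserving under memcmp, so an ordered
 * key codec typically pairs it with a length prefix or a different scheme. */
static size_t varint_encode(uint64_t v, uint8_t out[10]) {
    size_t n = 0;
    do {
        uint8_t b = v & 0x7F;
        v >>= 7;
        if (v != 0) b |= 0x80;   /* more bytes follow */
        out[n++] = b;
    } while (v != 0);
    return n;
}

static uint64_t varint_decode(const uint8_t *in, size_t *consumed) {
    uint64_t v = 0;
    size_t i = 0;
    for (;;) {
        v |= (uint64_t)(in[i] & 0x7F) << (7 * i);
        if ((in[i++] & 0x80) == 0) break;
    }
    *consumed = i;
    return v;
}
```

Round-tripping 300 yields the two bytes 0xAC 0x02; the decoder consumes both and returns 300.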
## 🚧 In Progress (Need to Complete)
@@ -50,7 +50,7 @@ This tracks the rewrite from Zig to Odin and remaining features.
- Read JSON bodies
- Send HTTP responses with headers
- Keep-alive support
- Options:
- Options (not checked off yet: we need to be sure we choose the right option as the project grows; it might make more sense to implement a different one):
- Use `core:net` directly
- Use C FFI with libmicrohttpd
- Use Odin's vendor:microui (if suitable)
@@ -69,6 +69,23 @@ This tracks the rewrite from Zig to Odin and remaining features.
- ADD operations
- DELETE operations
### Replication Support (Priority 4)
- [ ] **Build C++ Shim in order to use RocksDB's WAL replication helpers**
- [ ] **Add a configurator to set an instance as a master or slave node and point it at the proper target and destination IPs**
- [ ] **Leverage C++ helpers from shim**
### Subscribe To Changes Feature (Priority LAST [but keep it in mind: semantics we decide now will make this easier later])
- [ ] **Best-effort notifications (Postgres-ish LISTEN/NOTIFY [in-memory pub/sub fanout; if you're not connected, you miss it])**
- Add an in-process “event bus” with channels: table-wide, partition-key, item-key, “all”.
- When putItem/deleteItem/updateItem/createTable/... commits successfully, publish {op, table, key, timestamp, item?}
- [ ] **Durable change streams (Mongo-ish [append every mutation to a persistent log and let consumers read it with resume tokens])**
- Create a “changelog” keyspace
- Generate a monotonically increasing sequence by using a stable per-partition sequence cursor
- Expose via an API (I'd prefer publishing to MQTT or SSE)
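For the durable-stream option, key ordering does the heavy lifting: if changelog keys sort by (partition, sequence), a consumer's resume token is just its last-seen key. A minimal sketch in C (the `"cl"` prefix and fixed-width layout are illustrative, not the project's actual keyspace):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Changelog key: 2-byte prefix + big-endian partition id + big-endian
 * sequence number. Fixed-width big-endian integers make memcmp order equal
 * numeric order, which matches RocksDB's default bytewise comparator. */
#define CHANGELOG_KEY_LEN 14

static void changelog_key(uint8_t out[CHANGELOG_KEY_LEN],
                          uint32_t partition, uint64_t seq) {
    out[0] = 'c'; out[1] = 'l';
    for (int i = 0; i < 4; i++)                      /* partition, big-endian */
        out[2 + i] = (uint8_t)(partition >> (8 * (3 - i)));
    for (int i = 0; i < 8; i++)                      /* sequence, big-endian */
        out[6 + i] = (uint8_t)(seq >> (8 * (7 - i)));
}
```

A consumer resumes by seeking to `changelog_key(partition, last_seq + 1)` and scanning forward.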
## 📋 Testing
- [ ] Unit tests for key_codec
@@ -182,5 +199,5 @@ make test
make run
# Test with AWS CLI
aws dynamodb list-tables --endpoint-url http://localhost:8000
aws dynamodb list-tables --endpoint-url http://localhost:8002
```

View File

@@ -7,10 +7,10 @@ OUTPUT_FILE="project_context.txt"
EXCLUDE_DIRS=("build" "data" ".git")
# File extensions to include (add more as needed)
INCLUDE_EXTENSIONS=("odin" "Makefile" "md")
INCLUDE_EXTENSIONS=("odin" "Makefile" "md" "json" "h" "cc")
# Special files to include (without extension)
INCLUDE_FILES=("ols.json" "Makefile" "build.odin.zon")
INCLUDE_FILES=()
# Clear the output file
> "$OUTPUT_FILE"

View File

@@ -1,5 +1,5 @@
# Project: jormun-db
# Generated: Sun Feb 15 08:31:32 AM EST 2026
# Generated: Sun Feb 15 11:44:33 AM EST 2026
================================================================================
@@ -899,6 +899,7 @@ package main
import "core:fmt"
import "core:mem"
import vmem "core:mem/virtual"
import "core:net"
import "core:strings"
import "core:strconv"
@@ -1097,29 +1098,31 @@ handle_connection :: proc(server: ^Server, conn: net.TCP_Socket, source: net.End
defer net.close(conn)
request_count := 0
// Keep-alive loop
for request_count < server.config.max_requests_per_connection {
request_count += 1
// Create arena for this request (4MB)
arena: mem.Arena
mem.arena_init(&arena, make([]byte, mem.Megabyte * 4))
defer mem.arena_destroy(&arena)
// Growing arena for this request
arena: vmem.Arena
arena_err := vmem.arena_init_growing(&arena)
if arena_err != .None {
break
}
defer vmem.arena_destroy(&arena)
request_alloc := mem.arena_allocator(&arena)
request_alloc := vmem.arena_allocator(&arena)
// TODO: Double-check whether we want *all* downstream allocations to use the request arena
old := context.allocator
context.allocator = request_alloc
defer context.allocator = old
// Parse request
request, parse_ok := parse_request(conn, request_alloc, server.config)
if !parse_ok {
// Connection closed or parse error
break
}
// Call handler
response := server.handler(server.handler_ctx, &request, request_alloc)
// Send response
send_ok := send_response(conn, &response, request_alloc)
if !send_ok {
break
@@ -1222,7 +1225,7 @@ parse_request :: proc(
copy(body, existing_body)
remaining := content_length - len(existing_body)
bytes_read := len(existing_body)
body_written := len(existing_body)
for remaining > 0 {
chunk_size := min(remaining, config.read_buffer_size)
@@ -1233,8 +1236,8 @@ parse_request :: proc(
return {}, false
}
copy(body[bytes_read:], chunk[:n])
bytes_read += n
copy(body[body_written:], chunk[:n])
body_written += n
remaining -= n
}
}
@@ -1592,7 +1595,7 @@ import "core:fmt"
import "core:mem"
import "core:os"
import "core:strconv"
import "core:strings"
//import "core:strings" // Needed later; commented out for now because the compiler rejects unused imports
import "rocksdb"
Config :: struct {
@@ -1654,7 +1657,7 @@ main :: proc() {
// Temporary HTTP request handler
// TODO: Replace with full DynamoDB handler once dynamodb/handler.odin is implemented
handle_http_request :: proc(ctx: rawptr, request: ^HTTP_Request, request_alloc: mem.Allocator) -> HTTP_Response {
db := cast(^rocksdb.DB)ctx
//db := cast(^rocksdb.DB)ctx // Needed once the real handler lands; commented out so the compiler doesn't flag it as unused
response := response_init(request_alloc)
response_add_header(&response, "Content-Type", "application/x-amz-json-1.0")
@@ -1675,59 +1678,17 @@ handle_http_request :: proc(ctx: rawptr, request: ^HTTP_Request, request_alloc:
return response
}
// Demonstrate arena-per-request memory management
demo_arena_pattern :: proc(db: ^rocksdb.DB) {
// Simulate handling a request
{
// Create arena for this request
arena: mem.Arena
mem.arena_init(&arena, make([]byte, mem.Megabyte * 4))
defer mem.arena_destroy(&arena)
// Set context allocator to arena
context.allocator = mem.arena_allocator(&arena)
// All allocations below use the arena automatically
// No individual frees needed!
fmt.println("Simulating request handler...")
// Example: parse JSON, process, respond
table_name := strings.clone("Users")
key := make([]byte, 16)
value := make([]byte, 256)
// Simulate storage operation
copy(key, "user:123")
copy(value, `{"name":"Alice","age":30}`)
err := rocksdb.db_put(db, key, value)
if err == .None {
fmt.println("✓ Stored item using arena allocator")
}
// Read it back
result, read_err := rocksdb.db_get(db, key)
if read_err == .None {
fmt.printf("✓ Retrieved item: %s\n", string(result))
}
// Everything is freed here when arena is destroyed
fmt.println("✓ Arena destroyed - all memory freed automatically")
}
}
parse_config :: proc() -> Config {
config := Config{
host = "0.0.0.0",
port = 8000,
port = 8002,
data_dir = "./data",
verbose = false,
}
// Environment variables
if port_str, ok := os.lookup_env("JORMUN_PORT"); ok {
if port, ok := strconv.parse_int(port_str); ok {
if port_str, env_ok := os.lookup_env("JORMUN_PORT"); env_ok {
if port, parse_ok := strconv.parse_int(port_str); parse_ok {
config.port = port
}
}
@@ -1767,210 +1728,6 @@ print_banner :: proc(config: Config) {
}
================================================================================
FILE: ./Makefile
================================================================================
.PHONY: all build release run test clean fmt help install
# Project configuration
PROJECT_NAME := jormundb
ODIN := odin
BUILD_DIR := build
SRC_DIR := .
# RocksDB and compression libraries
ROCKSDB_LIBS := -lrocksdb -lstdc++ -lsnappy -llz4 -lzstd -lz -lbz2
# Platform-specific library paths
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Darwin)
# macOS (Homebrew)
LIB_PATH := -L/usr/local/lib -L/opt/homebrew/lib
INCLUDE_PATH := -I/usr/local/include -I/opt/homebrew/include
else ifeq ($(UNAME_S),Linux)
# Linux
LIB_PATH := -L/usr/local/lib -L/usr/lib
INCLUDE_PATH := -I/usr/local/include
endif
# Build flags
DEBUG_FLAGS := -debug -o:none
RELEASE_FLAGS := -o:speed -disable-assert -no-bounds-check
COMMON_FLAGS := -vet -strict-style
# Linker flags
EXTRA_LINKER_FLAGS := $(LIB_PATH) $(ROCKSDB_LIBS)
# Runtime configuration
PORT ?= 8000
HOST ?= 0.0.0.0
DATA_DIR ?= ./data
VERBOSE ?= 0
# Colors for output
BLUE := \033[0;34m
GREEN := \033[0;32m
YELLOW := \033[0;33m
RED := \033[0;31m
NC := \033[0m # No Color
# Default target
all: build
# Build debug version
build:
@echo "$(BLUE)Building $(PROJECT_NAME) (debug)...$(NC)"
@mkdir -p $(BUILD_DIR)
$(ODIN) build $(SRC_DIR) \
$(COMMON_FLAGS) \
$(DEBUG_FLAGS) \
-out:$(BUILD_DIR)/$(PROJECT_NAME) \
-extra-linker-flags:"$(EXTRA_LINKER_FLAGS)"
@echo "$(GREEN)✓ Build complete: $(BUILD_DIR)/$(PROJECT_NAME)$(NC)"
# Build optimized release version
release:
@echo "$(BLUE)Building $(PROJECT_NAME) (release)...$(NC)"
@mkdir -p $(BUILD_DIR)
$(ODIN) build $(SRC_DIR) \
$(COMMON_FLAGS) \
$(RELEASE_FLAGS) \
-out:$(BUILD_DIR)/$(PROJECT_NAME) \
-extra-linker-flags:"$(EXTRA_LINKER_FLAGS)"
@echo "$(GREEN)✓ Release build complete: $(BUILD_DIR)/$(PROJECT_NAME)$(NC)"
# Run the server
run: build
@echo "$(BLUE)Starting $(PROJECT_NAME)...$(NC)"
@mkdir -p $(DATA_DIR)
@JORMUN_PORT=$(PORT) \
JORMUN_HOST=$(HOST) \
JORMUN_DATA_DIR=$(DATA_DIR) \
JORMUN_VERBOSE=$(VERBOSE) \
$(BUILD_DIR)/$(PROJECT_NAME)
# Run with custom port
run-port: build
@echo "$(BLUE)Starting $(PROJECT_NAME) on port $(PORT)...$(NC)"
@mkdir -p $(DATA_DIR)
@JORMUN_PORT=$(PORT) $(BUILD_DIR)/$(PROJECT_NAME)
# Run tests
test:
@echo "$(BLUE)Running tests...$(NC)"
$(ODIN) test $(SRC_DIR) \
$(COMMON_FLAGS) \
$(DEBUG_FLAGS) \
-extra-linker-flags:"$(EXTRA_LINKER_FLAGS)"
@echo "$(GREEN)✓ Tests passed$(NC)"
# Format code
fmt:
@echo "$(BLUE)Formatting code...$(NC)"
@find $(SRC_DIR) -name "*.odin" -exec odin-format -w {} \;
@echo "$(GREEN)✓ Code formatted$(NC)"
# Clean build artifacts
clean:
@echo "$(YELLOW)Cleaning build artifacts...$(NC)"
@rm -rf $(BUILD_DIR)
@rm -rf $(DATA_DIR)
@echo "$(GREEN)✓ Clean complete$(NC)"
# Install to /usr/local/bin (requires sudo)
install: release
@echo "$(BLUE)Installing $(PROJECT_NAME)...$(NC)"
@sudo cp $(BUILD_DIR)/$(PROJECT_NAME) /usr/local/bin/
@sudo chmod +x /usr/local/bin/$(PROJECT_NAME)
@echo "$(GREEN)✓ Installed to /usr/local/bin/$(PROJECT_NAME)$(NC)"
# Uninstall from /usr/local/bin
uninstall:
@echo "$(YELLOW)Uninstalling $(PROJECT_NAME)...$(NC)"
@sudo rm -f /usr/local/bin/$(PROJECT_NAME)
@echo "$(GREEN)✓ Uninstalled$(NC)"
# Check dependencies
check-deps:
@echo "$(BLUE)Checking dependencies...$(NC)"
@which $(ODIN) > /dev/null || (echo "$(RED)✗ Odin compiler not found$(NC)" && exit 1)
@pkg-config --exists rocksdb || (echo "$(RED)✗ RocksDB not found$(NC)" && exit 1)
@echo "$(GREEN)✓ All dependencies found$(NC)"
# AWS CLI test commands
aws-test: build
@mkdir -p $(DATA_DIR)
@JORMUN_PORT=$(PORT) JORMUN_DATA_DIR=$(DATA_DIR) $(BUILD_DIR)/$(PROJECT_NAME) &
@sleep 2
@echo "$(BLUE)Testing with AWS CLI...$(NC)"
@echo "\n$(YELLOW)Creating table...$(NC)"
@aws dynamodb create-table \
--endpoint-url http://localhost:$(PORT) \
--table-name TestTable \
--key-schema AttributeName=pk,KeyType=HASH \
--attribute-definitions AttributeName=pk,AttributeType=S \
--billing-mode PAY_PER_REQUEST || true
@echo "\n$(YELLOW)Listing tables...$(NC)"
@aws dynamodb list-tables --endpoint-url http://localhost:$(PORT)
@echo "\n$(YELLOW)Putting item...$(NC)"
@aws dynamodb put-item \
--endpoint-url http://localhost:$(PORT) \
--table-name TestTable \
--item '{"pk":{"S":"test1"},"data":{"S":"hello world"}}'
@echo "\n$(YELLOW)Getting item...$(NC)"
@aws dynamodb get-item \
--endpoint-url http://localhost:$(PORT) \
--table-name TestTable \
--key '{"pk":{"S":"test1"}}'
@echo "\n$(YELLOW)Scanning table...$(NC)"
@aws dynamodb scan \
--endpoint-url http://localhost:$(PORT) \
--table-name TestTable
@echo "\n$(GREEN)✓ AWS CLI test complete$(NC)"
# Development workflow
dev: clean build run
# Quick rebuild and run
quick:
@$(MAKE) build run
# Show help
help:
@echo "$(BLUE)JormunDB - DynamoDB-compatible database$(NC)"
@echo ""
@echo "$(GREEN)Build Commands:$(NC)"
@echo " make build - Build debug version"
@echo " make release - Build optimized release version"
@echo " make clean - Remove build artifacts"
@echo ""
@echo "$(GREEN)Run Commands:$(NC)"
@echo " make run - Build and run server (default: localhost:8000)"
@echo " make run PORT=9000 - Run on custom port"
@echo " make dev - Clean, build, and run"
@echo " make quick - Fast rebuild and run"
@echo ""
@echo "$(GREEN)Test Commands:$(NC)"
@echo " make test - Run unit tests"
@echo " make aws-test - Test with AWS CLI commands"
@echo ""
@echo "$(GREEN)Utility Commands:$(NC)"
@echo " make fmt - Format source code"
@echo " make check-deps - Check for required dependencies"
@echo " make install - Install to /usr/local/bin (requires sudo)"
@echo " make uninstall - Remove from /usr/local/bin"
@echo ""
@echo "$(GREEN)Configuration:$(NC)"
@echo " PORT=$(PORT) - Server port"
@echo " HOST=$(HOST) - Bind address"
@echo " DATA_DIR=$(DATA_DIR) - RocksDB data directory"
@echo " VERBOSE=$(VERBOSE) - Enable verbose logging (0/1)"
@echo ""
@echo "$(GREEN)Examples:$(NC)"
@echo " make run PORT=9000"
@echo " make run DATA_DIR=/tmp/jormun VERBOSE=1"
@echo " make dev"
================================================================================
FILE: ./ols.json
================================================================================
@@ -2089,7 +1846,7 @@ export PATH=$PATH:/path/to/odin
### Basic Usage
```bash
# Run with defaults (localhost:8000, ./data directory)
# Run with defaults (localhost:8002, ./data directory)
make run
```
@@ -2106,10 +1863,10 @@ You should see:
║ ║
╚═══════════════════════════════════════════════╝
Port: 8000 | Data Dir: ./data
Port: 8002 | Data Dir: ./data
Storage engine initialized at ./data
Starting DynamoDB-compatible server on 0.0.0.0:8000
Starting DynamoDB-compatible server on 0.0.0.0:8002
Ready to accept connections!
```
@@ -2174,7 +1931,7 @@ aws configure
**Create a Table:**
```bash
aws dynamodb create-table \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key-schema \
AttributeName=id,KeyType=HASH \
@@ -2185,13 +1942,13 @@ aws dynamodb create-table \
**List Tables:**
```bash
aws dynamodb list-tables --endpoint-url http://localhost:8000
aws dynamodb list-tables --endpoint-url http://localhost:8002
```
**Put an Item:**
```bash
aws dynamodb put-item \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--item '{
"id": {"S": "user123"},
@@ -2204,7 +1961,7 @@ aws dynamodb put-item \
**Get an Item:**
```bash
aws dynamodb get-item \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key '{"id": {"S": "user123"}}'
```
@@ -2212,7 +1969,7 @@ aws dynamodb get-item \
**Query Items:**
```bash
aws dynamodb query \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key-condition-expression "id = :id" \
--expression-attribute-values '{
@@ -2223,14 +1980,14 @@ aws dynamodb query \
**Scan Table:**
```bash
aws dynamodb scan \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users
```
**Delete an Item:**
```bash
aws dynamodb delete-item \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key '{"id": {"S": "user123"}}'
```
@@ -2238,7 +1995,7 @@ aws dynamodb delete-item \
**Delete a Table:**
```bash
aws dynamodb delete-table \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users
```
@@ -2250,7 +2007,7 @@ aws dynamodb delete-table \
const { DynamoDBClient, PutItemCommand, GetItemCommand } = require("@aws-sdk/client-dynamodb");
const client = new DynamoDBClient({
endpoint: "http://localhost:8000",
endpoint: "http://localhost:8002",
region: "us-east-1",
credentials: {
accessKeyId: "dummy",
@@ -2287,7 +2044,7 @@ import boto3
dynamodb = boto3.client(
'dynamodb',
endpoint_url='http://localhost:8000',
endpoint_url='http://localhost:8002',
region_name='us-east-1',
aws_access_key_id='dummy',
aws_secret_access_key='dummy'
@@ -2352,8 +2109,8 @@ make fmt
### Port Already in Use
```bash
# Check what's using port 8000
lsof -i :8000
# Check what's using port 8002
lsof -i :8002
# Use a different port
make run PORT=9000
@@ -2414,7 +2171,7 @@ make profile
# Load test
ab -n 10000 -c 100 -p item.json -T application/json \
http://localhost:8000/
http://localhost:8002/
```
## Production Deployment
@@ -2506,7 +2263,7 @@ sudo apt install librocksdb-dev libsnappy-dev liblz4-dev libzstd-dev libbz2-dev
# Build the server
make build
# Run with default settings (localhost:8000, ./data directory)
# Run with default settings (localhost:8002, ./data directory)
make run
# Run with custom port
@@ -2521,7 +2278,7 @@ make run DATA_DIR=/tmp/jormundb
```bash
# Create a table
aws dynamodb create-table \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key-schema AttributeName=id,KeyType=HASH \
--attribute-definitions AttributeName=id,AttributeType=S \
@@ -2529,26 +2286,26 @@ aws dynamodb create-table \
# Put an item
aws dynamodb put-item \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--item '{"id":{"S":"user123"},"name":{"S":"Alice"},"age":{"N":"30"}}'
# Get an item
aws dynamodb get-item \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key '{"id":{"S":"user123"}}'
# Query items
aws dynamodb query \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users \
--key-condition-expression "id = :id" \
--expression-attribute-values '{":id":{"S":"user123"}}'
# Scan table
aws dynamodb scan \
--endpoint-url http://localhost:8000 \
--endpoint-url http://localhost:8002 \
--table-name Users
```
@@ -2694,7 +2451,7 @@ Scan (full table) | 5000 ops | 234.56 ms | 21320 ops/sec
### Environment Variables
```bash
JORMUN_PORT=8000 # Server port
JORMUN_PORT=8002 # Server port
JORMUN_HOST=0.0.0.0 # Bind address
JORMUN_DATA_DIR=./data # RocksDB data directory
JORMUN_VERBOSE=1 # Enable verbose logging
@@ -2726,7 +2483,7 @@ chmod 755 ./data
Check if the port is already in use:
```bash
lsof -i :8000
lsof -i :8002
```
### "Invalid JSON" errors
@@ -2779,6 +2536,9 @@ import "core:fmt"
foreign import rocksdb "system:rocksdb"
// In order to use RocksDB's WAL replication helpers, we need to import the C++ library so we use this shim
//foreign import rocksdb_shim "system:jormun_rocksdb_shim" // Needed later; commented out for now because the compiler rejects unused imports
// RocksDB C API types
RocksDB_T :: distinct rawptr
RocksDB_Options :: distinct rawptr
@@ -2911,7 +2671,7 @@ db_open :: proc(path: string, create_if_missing := true) -> (DB, Error) {
path_cstr := fmt.ctprintf("%s", path)
handle := rocksdb_open(options, path_cstr, &err)
if err != nil {
defer rocksdb_free(err)
defer rocksdb_free(rawptr(err)) // Cast to rawptr here so FFI type mismatches don't bite us later
rocksdb_readoptions_destroy(read_options)
rocksdb_writeoptions_destroy(write_options)
rocksdb_options_destroy(options)
@@ -2947,7 +2707,7 @@ db_put :: proc(db: ^DB, key: []byte, value: []byte) -> Error {
&err,
)
if err != nil {
defer rocksdb_free(err)
defer rocksdb_free(rawptr(err)) // Cast to rawptr here so FFI type mismatches don't bite us later
return .WriteFailed
}
return .None
@@ -2968,7 +2728,7 @@ db_get :: proc(db: ^DB, key: []byte) -> (value: []byte, err: Error) {
)
if errptr != nil {
defer rocksdb_free(errptr)
defer rocksdb_free(rawptr(errptr)) // Cast to rawptr here so FFI type mismatches don't bite us later
return nil, .ReadFailed
}
@@ -2979,7 +2739,7 @@ db_get :: proc(db: ^DB, key: []byte) -> (value: []byte, err: Error) {
// Copy the data and free RocksDB's buffer
result := make([]byte, value_len, context.allocator)
copy(result, value_ptr[:value_len])
rocksdb_free(value_ptr)
rocksdb_free(rawptr(value_ptr)) // Cast to rawptr here so FFI type mismatches don't bite us later
return result, .None
}
@@ -2995,7 +2755,7 @@ db_delete :: proc(db: ^DB, key: []byte) -> Error {
&err,
)
if err != nil {
defer rocksdb_free(err)
defer rocksdb_free(rawptr(err)) // Cast to rawptr here so FFI type mismatches don't bite us later
return .DeleteFailed
}
return .None
@@ -3012,7 +2772,7 @@ db_flush :: proc(db: ^DB) -> Error {
err: cstring
rocksdb_flush(db.handle, flush_opts, &err)
if err != nil {
defer rocksdb_free(err)
defer rocksdb_free(rawptr(err)) // Cast to rawptr here so FFI type mismatches don't bite us later
return .IOError
}
return .None
@@ -3067,7 +2827,7 @@ batch_write :: proc(db: ^DB, batch: ^WriteBatch) -> Error {
err: cstring
rocksdb_write(db.handle, db.write_options, batch.handle, &err)
if err != nil {
defer rocksdb_free(err)
defer rocksdb_free(rawptr(err)) // Cast to rawptr here so FFI type mismatches don't bite us later
return .WriteFailed
}
return .None
@@ -3143,6 +2903,93 @@ iter_value :: proc(iter: ^Iterator) -> []byte {
}
================================================================================
FILE: ./rocksdb_shim/rocksdb_shim.cc
================================================================================
// TODO: In order to use RocksDB's WAL replication helpers, we need to import the C++ library so we use this shim
/**
C++ shim implementation notes (the important bits)
In this rocksdb_shim.cc we'll need to use:
rocksdb::DB::Open(...)
db->GetLatestSequenceNumber()
db->GetUpdatesSince(seq, &iter)
from each TransactionLogIterator entry:
get WriteBatch and serialize via WriteBatch::Data()
apply via rocksdb::WriteBatch wb(data); db->Write(write_options, &wb);
Also we must configure WAL retention so the followers don't fall off the end. RocksDB warns the iterator can become invalid if the WAL is cleared aggressively; typical controls are WAL TTL / size limit.
https://github.com/facebook/rocksdb/issues/1565
*/
================================================================================
FILE: ./rocksdb_shim/rocksdb_shim.h
================================================================================
// In order to use RocksDB's WAL replication helpers, we need to import the C++ library so we use this shim
#pragma once
#include <stdint.h>
#include <stddef.h>
#ifdef __cplusplus
extern "C"
{
#endif
typedef struct jormun_db jormun_db;
typedef struct jormun_wal_iter jormun_wal_iter;
// Open/close (so Odin never touches rocksdb_t directly)
jormun_db *jormun_db_open(const char *path, int create_if_missing, char **err);
void jormun_db_close(jormun_db *db);
// Basic ops (you can mirror what you already have)
void jormun_db_put(jormun_db *db,
const void *key, size_t keylen,
const void *val, size_t vallen,
char **err);
unsigned char *jormun_db_get(jormun_db *db,
const void *key, size_t keylen,
size_t *vallen,
char **err);
// caller frees with this:
void jormun_free(void *p);
// Replication primitives
uint64_t jormun_latest_sequence(jormun_db *db);
// Iterator: start at seq (inclusive-ish; RocksDB positions to batch containing seq or first after)
jormun_wal_iter *jormun_wal_iter_create(jormun_db *db, uint64_t seq, char **err);
void jormun_wal_iter_destroy(jormun_wal_iter *it);
// Next batch -> returns 1 if produced a batch, 0 if no more / not available
// You get a serialized “write batch” blob (rocksdb::WriteBatch::Data()) plus the batch start seq.
int jormun_wal_iter_next(jormun_wal_iter *it,
uint64_t *batch_start_seq,
unsigned char **out_data,
size_t *out_len,
char **err);
// Apply serialized writebatch blob on follower
void jormun_apply_writebatch(jormun_db *db,
const unsigned char *data, size_t len,
char **err);
#ifdef __cplusplus
}
#endif
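To make the intended call sequence concrete — read the leader's latest sequence, open a WAL iterator at the follower's cursor, apply each serialized batch — here is a sketch of the follower's catch-up loop. The shim isn't implemented yet, so the `fake_*` stand-ins below mimic the header's contract in memory; only the call pattern in `follower_catch_up` reflects the real design. (Per the shim notes, WAL TTL / size limits must also be set on the leader so batches survive until followers read them.)

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* In-memory stand-ins for the unimplemented shim (illustrative only). */
typedef struct { uint64_t latest_seq; uint64_t applied; } fake_db;
typedef struct { fake_db *leader; uint64_t next; } fake_wal_iter;

static fake_wal_iter *fake_wal_iter_create(fake_db *db, uint64_t seq) {
    fake_wal_iter *it = malloc(sizeof *it);
    it->leader = db;
    it->next = seq;
    return it;
}

/* Mirrors jormun_wal_iter_next: 1 = produced a batch, 0 = caught up. */
static int fake_wal_iter_next(fake_wal_iter *it, uint64_t *batch_start_seq,
                              unsigned char **out_data, size_t *out_len) {
    if (it->next > it->leader->latest_seq) return 0;
    *batch_start_seq = it->next++;
    *out_len = 21;                          /* a fake WriteBatch::Data() blob */
    *out_data = malloc(*out_len);
    memcpy(*out_data, "serialized-writebatch", *out_len);
    return 1;
}

static void fake_apply_writebatch(fake_db *db, const unsigned char *data, size_t len) {
    (void)data; (void)len;
    db->applied++;                          /* real shim replays the batch */
}

/* The part that matters: the follower's catch-up loop against the shim API.
 * Returns the new cursor (one past the last applied sequence). */
static uint64_t follower_catch_up(fake_db *leader, fake_db *follower, uint64_t cursor) {
    fake_wal_iter *it = fake_wal_iter_create(leader, cursor);
    uint64_t seq;
    unsigned char *data;
    size_t len;
    while (fake_wal_iter_next(it, &seq, &data, &len)) {
        fake_apply_writebatch(follower, data, len);
        cursor = seq + 1;                   /* durable resume point */
        free(data);                         /* real code: jormun_free(data) */
    }
    free(it);                               /* real code: jormun_wal_iter_destroy(it) */
    return cursor;
}
```

In the real shim the follower would persist `cursor` (e.g. in its own RocksDB) so replication survives restarts.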
================================================================================
FILE: ./TODO.md
================================================================================
@@ -3161,8 +3008,8 @@ This tracks the rewrite from Zig to Odin and remaining features.
- [x] Core types (dynamodb/types.odin)
- [x] Key codec with varint encoding (key_codec/key_codec.odin)
- [x] Main entry point with arena pattern demo
- [x] LICENSE file
- [x] .gitignore
- [x] HTTP Server Scaffolding
## 🚧 In Progress (Need to Complete)
@@ -3199,7 +3046,7 @@ This tracks the rewrite from Zig to Odin and remaining features.
- Read JSON bodies
- Send HTTP responses with headers
- Keep-alive support
- Options:
- Options (not checked off yet: we need to be sure we choose the right option as the project grows; it might make more sense to implement a different one):
- Use `core:net` directly
- Use C FFI with libmicrohttpd
- Use Odin's vendor:microui (if suitable)
@@ -3218,6 +3065,23 @@ This tracks the rewrite from Zig to Odin and remaining features.
- ADD operations
- DELETE operations
### Replication Support (Priority 4)
- [ ] **Build C++ Shim in order to use RocksDB's WAL replication helpers**
- [ ] **Add a configurator to set an instance as a master or slave node and point it at the proper target and destination IPs**
- [ ] **Leverage C++ helpers from shim**
### Subscribe To Changes Feature (Priority LAST [but keep it in mind: semantics we decide now will make this easier later])
- [ ] **Best-effort notifications (Postgres-ish LISTEN/NOTIFY [in-memory pub/sub fanout; if you're not connected, you miss it])**
- Add an in-process “event bus” with channels: table-wide, partition-key, item-key, “all”.
- When putItem/deleteItem/updateItem/createTable/... commits successfully, publish {op, table, key, timestamp, item?}
- [ ] **Durable change streams (Mongo-ish [append every mutation to a persistent log and let consumers read it with resume tokens])**
- Create a “changelog” keyspace
- Generate a monotonically increasing sequence by using a stable per-partition sequence cursor
- Expose via an API (I'd prefer publishing to MQTT or SSE)
## 📋 Testing
- [ ] Unit tests for key_codec
@@ -3331,7 +3195,7 @@ make test
make run
# Test with AWS CLI
aws dynamodb list-tables --endpoint-url http://localhost:8000
aws dynamodb list-tables --endpoint-url http://localhost:8002
```