Hacker News | ankuranand's comments

Thanks! The idea is to build a highly replicated KV/column store for the edge: small Raft core for ordering, lots of async replicas. I have it running on a 30-node fanout in test/staging and am focusing on hardening (crash/recovery, backpressure) before production.


UnisonDB is a log-native database that combines storage and streaming into one system — no CDC, no Kafka, no separate message bus.

It uses WAL-based replication with B+Tree storage to fan out writes to 100+ edge nodes with sub-second latency. Every write is durable, queryable, and instantly available as a replication stream.

Built for Edge AI and distributed systems where data needs to live close to computation. Supports:

- Multi-model storage (KV, Wide-Column, LOB)
- Atomic multi-key transactions
- Real-time change notifications
- Namespace isolation for multi-tenancy

We benchmarked it against BadgerDB and BoltDB using redis-benchmark — results in the README show competitive write/read throughput with consistent replication performance even at 100+ concurrent relayers.

Open source (Apache 2.0): https://github.com/ankur-anand/unisondb

Would love feedback on the architecture and use cases!


Hey Folks,

I’ve been experimenting with an idea that combines a database and a message bus into one system — built specifically for Edge AI and real-time applications that need to scale across 100+ nodes.

Most databases write to a WAL (Write-Ahead Log) for recovery.

UnisonDB treats the log as the database itself — making replication, streaming, and durability all part of the same mechanism.

Every write is:

* Stored durably (WAL-first design)
* Streamed instantly (no separate CDC or Kafka)
* Synced globally across replicas

It’s built in Go and uses a B+Tree storage engine on top of a streaming WAL, so edge nodes can read locally while syncing in real time with upstream hubs.

No external brokers, no double-pipeline — just a single source of truth that streams.

Writes on one node replicate like a message bus, yet remain queryable like a database — instantly and durably.
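The pattern described above, a single durable append that is simultaneously queryable and a replication event, can be sketched in Go. All names below are illustrative stand-ins, not UnisonDB's actual API, and durability details like fsync are elided:

```go
package main

import (
	"fmt"
	"sync"
)

// Entry is one durable WAL record; Offset doubles as the replication cursor.
type Entry struct {
	Offset int64
	Key    string
	Value  []byte
}

// Log is a toy log-native store: one append feeds both storage and streaming.
type Log struct {
	mu      sync.Mutex
	entries []Entry           // stands in for the on-disk WAL
	index   map[string][]byte // stands in for the B+Tree read path
	subs    []chan Entry      // live replication streams
}

func NewLog() *Log {
	return &Log{index: make(map[string][]byte)}
}

// Put appends once; the same record is immediately queryable and streamed.
func (l *Log) Put(key string, value []byte) int64 {
	l.mu.Lock()
	defer l.mu.Unlock()
	e := Entry{Offset: int64(len(l.entries)), Key: key, Value: value}
	l.entries = append(l.entries, e) // durability (fsync elided)
	l.index[key] = value             // local queryability
	for _, ch := range l.subs {      // streaming, no separate bus
		ch <- e
	}
	return e.Offset
}

func (l *Log) Get(key string) ([]byte, bool) {
	l.mu.Lock()
	defer l.mu.Unlock()
	v, ok := l.index[key]
	return v, ok
}

// Subscribe returns a buffered channel that receives every future write.
func (l *Log) Subscribe() <-chan Entry {
	l.mu.Lock()
	defer l.mu.Unlock()
	ch := make(chan Entry, 16)
	l.subs = append(l.subs, ch)
	return ch
}

func main() {
	log := NewLog()
	stream := log.Subscribe()
	log.Put("sensor/1", []byte("42"))
	v, _ := log.Get("sensor/1")
	e := <-stream
	fmt.Printf("queryable=%s streamed=%s@%d\n", v, e.Value, e.Offset)
}
```

The key property is that there is exactly one write path: the subscribers and the index are fed by the same append, so they can never disagree about ordering.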

GitHub: github.com/ankur-anand/unisondb

Deployment Topologies

UnisonDB supports multiple replication setups out of the box:

* Hub-and-Spoke – for edge rollouts where a central hub fans out data to 100+ edge nodes

* Peer-to-Peer – for regional datacenters that replicate changes between each other

* Follower/Relay – for read-only replicas that tail logs directly for analytics or caching

Each node maintains its own offset in the WAL, so replicas can catch up from any position without re-syncing the entire dataset.
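That offset-based catch-up can be sketched roughly like this (toy types, not the real wire protocol; a real replica would persist its offset and stream entries over the network):

```go
package main

import "fmt"

// Entry is one WAL record identified by its offset.
type Entry struct {
	Offset int64
	Data   string
}

// WAL is a toy append-only log that replicas tail by offset.
type WAL struct{ entries []Entry }

func (w *WAL) Append(data string) int64 {
	off := int64(len(w.entries))
	w.entries = append(w.entries, Entry{Offset: off, Data: data})
	return off
}

// ReadFrom returns every entry at or after the given offset, so a node
// that was down only fetches what it missed, never the whole dataset.
func (w *WAL) ReadFrom(offset int64) []Entry {
	if offset < 0 || offset > int64(len(w.entries)) {
		offset = int64(len(w.entries))
	}
	return w.entries[offset:]
}

// Replica tracks its own position in the hub's WAL.
type Replica struct {
	offset int64
	state  []string
}

// CatchUp pulls and applies only the suffix the replica hasn't seen yet.
func (r *Replica) CatchUp(w *WAL) int {
	missed := w.ReadFrom(r.offset)
	for _, e := range missed {
		r.state = append(r.state, e.Data)
		r.offset = e.Offset + 1
	}
	return len(missed)
}

func main() {
	hub := &WAL{}
	r := &Replica{}
	hub.Append("a")
	hub.Append("b")
	fmt.Println(r.CatchUp(hub)) // replays the 2 missed entries
	hub.Append("c")
	fmt.Println(r.CatchUp(hub)) // only the 1 new entry
}
```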

UnisonDB’s goal is to make log-native databases practical for both the core and the edge — combining replication, storage, and event propagation in one Go-based system.

I’m still exploring how far this log-native approach can go. Would love to hear your thoughts, feedback, or any edge cases you think might be interesting to test.


I recently wrote about optimizing cache expiration for millions of TTL-based entries without melting the CPU.

The naive approach — scanning every key every second — works fine at small scale but collapses once you hit millions of entries.

So I implemented a Timing Wheel in Go — the same idea used in Kafka, Netty, and the Linux kernel — replacing the O(n) scan loop with an O(1) tick-based expiration model.

Here’s what I found when comparing both approaches at 10 million keys:

Avg Read Latency:

* Naive Scan → 4.68 ms
* Timing Wheel → 3.15 µs

Max Read Stall:

* Naive Scan → 500 ms
* Timing Wheel → ≈ 2 ms

At that scale, the naive loop stalls reads for half a second. The timing wheel glides through them in microseconds.
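For intuition, a minimal single-level timing wheel looks something like the Go sketch below. This is the textbook structure, simplified from what taskwheel actually does (no overflow handling for TTLs longer than one rotation, no concurrency):

```go
package main

import "fmt"

// Wheel is a single-level timing wheel: expirations are bucketed by tick,
// so each tick touches only the keys due right now — O(1) per tick plus
// the entries actually expiring, instead of an O(n) scan over every key.
type Wheel struct {
	slots [][]string // slots[i] holds keys expiring at tick i (mod size)
	tick  int        // current position on the wheel
}

func NewWheel(size int) *Wheel {
	return &Wheel{slots: make([][]string, size)}
}

// Schedule registers key to expire ttlTicks from now.
func (w *Wheel) Schedule(key string, ttlTicks int) {
	slot := (w.tick + ttlTicks) % len(w.slots)
	w.slots[slot] = append(w.slots[slot], key)
}

// Tick advances the wheel one step and returns the keys that just expired.
func (w *Wheel) Tick() []string {
	w.tick = (w.tick + 1) % len(w.slots)
	expired := w.slots[w.tick]
	w.slots[w.tick] = nil
	return expired
}

func main() {
	w := NewWheel(60) // e.g. one slot per second, one-minute rotation
	w.Schedule("session:a", 1)
	w.Schedule("session:b", 2)
	fmt.Println(w.Tick()) // [session:a]
	fmt.Println(w.Tick()) // [session:b]
}
```

Kafka and the Linux kernel layer several of these wheels hierarchically so long TTLs cascade down as they approach expiry.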

GitHub repo: https://github.com/ankur-anand/taskwheel


What is JSON Threat Protection? JSON requests are susceptible to attacks characterized by unusual inflation of elements and nesting levels. Attackers use recursive techniques to consume memory by sending huge JSON documents that overwhelm the parser and eventually crash the service. JSON threat protection is a term for minimizing the risk of such attacks by enforcing a few limits on the JSON structure, such as length and depth validation, which helps protect your applications from these intrusions.

There are situations where you do not want to parse the JSON, but do want to ensure that it is not going to cause a problem, such as in an API gateway. It would be a PITA for the gateway to have to know the JSON schemas of all the services it is protecting. There are XML validators that perform similar functions.




There is a `func (v Verify) Verify(reader io.Reader) (bool, error)` function in the API that will support the streaming part. It's currently a WIP.


JavaScript Object Notation (JSON) is vulnerable to content-level attacks. Such attacks attempt to use huge JSON files to overwhelm the parser and eventually crash the service.

JSON threat protection is a term for minimizing the risk of such attacks by enforcing a few limits on the JSON structure.

Yes, it also validates the JSON.


If the idea is "limit the size of the JSON", you already have http.MaxBytesReader or io.LimitReader.




I see. Is that really useful? Worst case is a 500?


It validates the structure before fully parsing it, so you avoid allocating the many objects a full parse of a hostile payload would.


Thanks for the feedback. While "JSON Threat Protection" is fairly common terminology, yes, I'll add a description in plainer words too. Streaming handling is currently in progress.

