Datomic Cloud Architecture
Datomic's data model - based on immutable facts stored over time - enables a physical design that is fundamentally different from traditional RDBMSs. Instead of processing all requests in a single server component, Datomic distributes transactions, queries, indexing, and caching to provide high availability, horizontal scaling, and elasticity. Datomic also allows for dynamic assignment of compute resources to tasks without any kind of preassignment or sharding.
The durable elements managed by Datomic are called Storage Resources, including:
- the DynamoDB Transaction Log
- S3 storage of Indexes
- an EFS cache layer
- operational logs
- a VPC and subnets in which computational resources will run
These resources are retained even when no computational resources are active, so you can shut down all the active elements of Datomic while maintaining your data.
How Datomic Uses Storage
Datomic leverages the attributes of multiple AWS storage options to satisfy its semantic and performance characteristics. As indicated in the tables below, different AWS storage services provide different latencies, costs, and semantic behaviors.
Datomic utilizes a stratified approach to provide high performance, low cost, and strong reliability guarantees. Specifically:
|Role|Provided by|
|---|---|
|Storage of record|S3|
|Cache|Memory > SSD > EFS|
|Reliability|S3 + DDB + EFS|

|Service|Contribution|
|---|---|
|S3|low cost, high reliability|
|EFS|durable cache that survives restarts|
|Memory & SSD|speed|
This multi-layered persistence architecture ensures high reliability, as data missing from any given layer can be recovered from deeper within the stack, as well as excellent cache locality and latency via the multi-level distributed cache.
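The layered lookup described above can be sketched as a read-through cache chain. This is an illustration only, not Datomic's implementation: the tier names mirror the table, and in-process dicts stand in for the actual memory, SSD, EFS, and S3 layers.

```python
class Tier:
    """One cache layer: an in-process dict standing in for memory, SSD, EFS, or S3."""
    def __init__(self, name):
        self.name = name
        self.data = {}

def read_through(tiers, key):
    """Search tiers fastest-first; on a hit, backfill the faster tiers above it."""
    for i, tier in enumerate(tiers):
        if key in tier.data:
            for faster in tiers[:i]:          # promote into the faster layers
                faster.data[key] = tier.data[key]
            return tier.data[key], tier.name
    raise KeyError(key)

# Memory > SSD > EFS as cache, with S3 as the storage of record.
tiers = [Tier("memory"), Tier("ssd"), Tier("efs"), Tier("s3")]
tiers[3].data["segment-42"] = b"...datoms..."     # initially only S3 holds the segment

value, hit = read_through(tiers, "segment-42")    # first read falls through to S3
value2, hit2 = read_through(tiers, "segment-42")  # subsequent reads hit memory
```

Because segments are immutable, backfilling a faster tier never needs invalidation: a cached copy can only be missing, never stale, which is why data absent from any layer can safely be recovered from a deeper one.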
Datomic indexes are covering indexes: the index actually contains the datoms, rather than pointers to them, so finding datoms in an index yields the datoms themselves. This allows Datomic to access datoms through its indexes very efficiently.
Datomic maintains four indexes that contain ordered sets of datoms, each named for its sort order. E, A, and V always sort in ascending order, while T always sorts in descending order:
- EAVT - all datoms, sorted by entity, attribute, value, and transaction
- AEVT - all datoms, sorted by attribute, entity, value, and transaction
- AVET - all datoms, sorted by attribute, value, entity, and transaction
- VAET - reference datoms, sorted by value, attribute, entity, and transaction
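A small sketch of two of these sort orders in Python, with invented datoms and attribute names. Each index is simply the same set of datoms ordered by a different key, and because the indexes are covering, a range scan yields whole datoms rather than pointers:

```python
# A datom is (entity, attribute, value, transaction, added?).
datoms = [
    (1, ":person/name", "Anna", 1001, True),
    (2, ":person/name", "Bela", 1002, True),
    (1, ":person/email", "anna@example.com", 1001, True),
]

# EAVT: entity-first, answering "what do we know about entity 1?"
# (negating tx mimics Datomic's descending T within each E/A/V group)
eavt = sorted(datoms, key=lambda d: (d[0], d[1], d[2], -d[3]))

# AVET: attribute/value-first, answering "which entities have this value?"
avet = sorted(datoms, key=lambda d: (d[1], d[2], d[0], -d[3]))

# A covering-index scan returns the datoms themselves, not references.
entity_1 = [d for d in eavt if d[0] == 1]
```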
Indexes Accumulate Only
Datomic is accumulate-only. Information accumulates over time, and change is represented by accumulating the new, not by modifying or removing the old. For example, "removing" occurs not by taking something away, but by adding a new retraction.
At the implementation level, this means that index and log segments are immutable, and can be consumed directly without coordination by any processes in a Datomic system. All Datomic processes are peers in that they have equivalent access to the information in the system.
Note that accumulate-only is a semantic property, and is not the same as append-only, which is a structural property describing how data is written. Datomic is not an append-only system, and does not have the performance characteristics associated with append-only systems.
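The accumulate-only model can be sketched by replaying a log of assertions and retractions into a current view. This is a simplified illustration (it assumes cardinality-one attributes and invented data), not Datomic's implementation:

```python
# Facts accumulate; "removal" is a new retraction datom, never a deletion.
log = [
    (1, ":person/email", "anna@old.example", 1001, True),   # assertion
    (1, ":person/email", "anna@old.example", 1005, False),  # retraction at t=1005
    (1, ":person/email", "anna@new.example", 1005, True),   # replacement assertion
]

def as_of(log, t):
    """Build the (entity, attribute) -> value view as of transaction t by replaying the log."""
    view = {}
    for e, a, v, tx, added in log:
        if tx > t:
            break                      # later facts are simply not replayed
        if added:
            view[(e, a)] = v
        elif view.get((e, a)) == v:
            del view[(e, a)]           # a retraction cancels a matching assertion
    return view

as_of(log, 1001)  # the old email is current
as_of(log, 1005)  # the new email is current; nothing was modified in place
```

Because the log only grows, any past point in time remains queryable: `as_of` at an earlier `t` reproduces exactly the view the database had then.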
The Datomic API presents indexes to consumers as sorted sets of datoms or of transactions. However, Datomic is designed for efficient writing at transaction time, and for use with data sets much larger than can fit in memory. To meet these objectives, Datomic:
- Stores indexes as shallow trees of segments, where each segment typically contains thousands of datoms.
- Stores segments, not raw datoms, in storage.
- Updates the datom trees only occasionally, via background indexing jobs.
- Uses an adaptive indexing algorithm that has a sublinear relationship with total database size.
- Merges index trees with an in-memory representation of recent change so that all processes see up-to-date indexes.
- Updates the log for every transaction (the D in ACID).
- Optimizes log writes using additional data structures tuned to allow O(1) storage writes per transaction.
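The split between durable segments and recent in-memory change can be sketched as follows. The segment contents are invented, and `heapq.merge` stands in for Datomic's tree merge; the point is only that readers see one up-to-date sorted index without the segments ever being rewritten:

```python
import heapq

# Durable index: immutable segments, each holding a sorted run of datoms
# (real segments typically contain thousands of datoms).
segments = [
    [(1, ":a", "x", 100, True), (2, ":a", "y", 100, True)],  # written by a past indexing job
    [(3, ":a", "z", 200, True)],
]

# Recent transactions not yet folded into segments live in memory.
novelty = [(2, ":b", "q", 300, True)]

def current_index(segments, novelty):
    """Merge durable segments with in-memory novelty into one sorted view."""
    return list(heapq.merge(*segments, sorted(novelty)))

idx = current_index(segments, novelty)
```

A background indexing job periodically folds novelty into new segments, keeping the in-memory portion small; until then, every reader merges on the fly and sees identical, current data.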
Primary Compute Stack
Every running system has a single primary compute stack, which provides computational resources and a means to access those resources.
Data outlives code, and database systems often serve more than one application. Each application can have its own:
- Computational requirements
- Cacheable working set
A query group is an independent unit of computation and caching that is a distinct application deployment target. Each query group:
- Extends the abilities of an existing production topology system
- Is a deployment target for its own distinct application code
- Has its own clustered nodes
- Manages its own working set cache
- Can elastically autoscale application reads without any up-front planning or sharding
Query groups deliver the entire semantic model of Datomic. In particular:
- Client code does not know or care whether it is talking to the primary compute group or to a query group.
- Query groups read Datomic data at memory speeds, just as the primary compute group does.
You can add, modify, or remove query groups at any time. For example, you might initially release a transactional application that uses only a primary compute group. Later, you might decide to split out multiple query groups:
- an autoscaling query group for transactional load
- a fixed query group with one large instance for analytic queries
- a fixed query group with a smaller instance for support
A Datomic application manages deployments for software that you design to perform a group of related tasks or activities. Every Datomic compute group is associated with an application, which works as follows:
- A Datomic ion is your application code, plus a tiny amount of configuration.
- The push operation creates a revision, packaging your ion so that it is ready for reproducible deployment.
- The deploy operation creates a deployment, installing a revision onto a compute group.
Datomic Cloud is designed to be a complete solution for Clojure application development on AWS. In particular, you can:
- Develop and test with realtime feedback at a local REPL.
- Rapidly deploy to AWS with no downtime.
- Reproducibly deploy across different development stages.
- Deploy multiple applications that share a common Datomic system.
- Elastically scale your entire application instead of many separate elements.
- Automatically generate AWS Lambda entry points without writing any Lambda code.
- Implement web services directly in Datomic behind AWS API Gateway.
Datomic is designed to follow AWS security best practices, including:
- All authorization is performed using AWS HMAC, with key transfer via S3, enabling access control governed by IAM roles.
- All data in Datomic is encrypted at rest using AWS KMS.
- All Datomic resources are isolated in a private VPC, with optional access through a network bastion.
- EC2 instances run in an IAM role configured for least privilege.
For security, Datomic nodes all run inside a dedicated VPC that is not accessible from the internet. To provide access from outside the VPC (e.g., for developers), you can configure a bastion server that is open to a range of IP addresses and forwards traffic to Datomic nodes.