·
Amazon DynamoDB is a fast and flexible NoSQL database
service for all applications that need consistent, single-digit millisecond
latency at any scale. Its flexible data model and reliable performance make it
a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.
·
DynamoDB provides on-demand backup capability.
·
You can create on-demand backups and enable
point-in-time recovery for your Amazon DynamoDB tables.
·
DynamoDB can automatically delete expired items
from tables (a feature called Time to Live, or TTL) to help you reduce storage
usage and the cost of storing data that is no longer relevant.
·
DynamoDB synchronously replicates data across three
facilities in an AWS Region, giving you high availability and data durability.
·
Consistency Models:
·
Eventually consistent reads (the default) –
The eventual consistency option maximizes your read throughput. However, an eventually
consistent read might not reflect the results of a recently completed write.
All copies of data usually reach consistency within a second. Repeating a read
after a short time should return the updated data.
·
Strongly consistent reads – In addition to
eventual consistency, DynamoDB also gives you the flexibility and control to
request a strongly consistent read if your application, or an element of your
application, requires it. A strongly consistent read returns a result that
reflects all writes that received a successful response before the read.
·
ACID transactions – DynamoDB transactions
provide developers atomicity, consistency, isolation, and durability (ACID)
across one or more tables within a single AWS account and region. You can use
transactions when building applications that require coordinated inserts,
deletes, or updates to multiple items as part of a single logical business
operation.
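A sketch of what such a coordinated operation looks like using the low-level TransactWriteItems request shape; the table, key, and attribute names here are illustrative:

```python
# Request body for a funds transfer: debit one item, credit another,
# atomically. Table/attribute names are invented for this sketch.
transfer = {
    "TransactItems": [
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": "alice"}},
                "UpdateExpression": "SET Balance = Balance - :amt",
                # If this condition fails, the whole transaction is rolled back.
                "ConditionExpression": "Balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": "25"}},
            }
        },
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": "bob"}},
                "UpdateExpression": "SET Balance = Balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": "25"}},
            }
        },
    ]
}

# boto3.client("dynamodb").transact_write_items(**transfer) would apply
# both updates, or neither if any condition check fails.
```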
·
DynamoDB supports GET/PUT operations by using a user-defined
primary key. The primary key is the only required attribute for items in a
table. You specify the primary key when you create a table, and it uniquely
identifies each item. DynamoDB also provides flexible querying by letting you
query on nonprimary key attributes using global secondary
indexes and local secondary
indexes.
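A table specification showing a composite primary key plus a global secondary index for querying on a non-key attribute, in boto3's CreateTable parameter shape (the "Music" table and its attributes are made up for this sketch):

```python
table_spec = {
    "TableName": "Music",  # hypothetical table
    "AttributeDefinitions": [
        {"AttributeName": "Artist", "AttributeType": "S"},
        {"AttributeName": "SongTitle", "AttributeType": "S"},
        {"AttributeName": "Genre", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "Artist", "KeyType": "HASH"},      # partition key
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},  # sort key
    ],
    # GSI lets you query by Genre, which is not part of the primary key.
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "GenreIndex",
            "KeySchema": [{"AttributeName": "Genre", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
# boto3.client("dynamodb").create_table(**table_spec) would create it.
```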
·
DynamoDB is a
fully managed cloud service that you access via API. Applications running on
any operating system (such as Linux, Windows, iOS, Android, Solaris, AIX, and
HP-UX) can use DynamoDB.
·
Maximum
throughput per DynamoDB table is practically unlimited.
·
The smallest
provisioned throughput you can request is 1 write capacity unit and 1 read
capacity unit, for both auto scaling and manual throughput provisioning. Such
provisioning falls within the free tier, which allows for 25 units of write
capacity and 25 units of read capacity. The free tier applies at the account
level, not the table level: if the provisioned capacity of all your tables
adds up to no more than 25 units of write capacity and 25 units of read
capacity, your provisioned capacity falls within the free tier.
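The account-level arithmetic can be sketched as a simple sum over tables:

```python
def within_free_tier(tables, free_wcu=25, free_rcu=25):
    """Check account-level provisioned capacity against the free tier
    (25 write capacity units and 25 read capacity units)."""
    total_wcu = sum(t["wcu"] for t in tables)
    total_rcu = sum(t["rcu"] for t in tables)
    return total_wcu <= free_wcu and total_rcu <= free_rcu

# Hypothetical account: 25 WCU / 20 RCU in total, still inside the free tier.
tables = [
    {"name": "Users", "wcu": 10, "rcu": 10},
    {"name": "Orders", "wcu": 15, "rcu": 10},
]
```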
·
The partition key of an item is also known as
its hash attribute. The term hash
attribute derives from the use of an internal hash function in
DynamoDB that evenly distributes data items across partitions, based on their
partition key values.
·
The sort key of an item is also known as
its range attribute. The term range
attribute derives from the way DynamoDB stores items with the
same partition key physically close together, in sorted order by the sort key
value.
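Both ideas can be illustrated with a toy sketch. The MD5-based mapping below is a stand-in to show the concept of hash-based partition placement, not DynamoDB's actual internal hash function:

```python
import hashlib

def partition_for(partition_key, num_partitions):
    """Toy stand-in for DynamoDB's internal hash function: deterministically
    map a partition key value to one of num_partitions partitions."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Items sharing a partition key land on the same partition and are stored
# physically together, ordered by their sort key value.
items = [
    {"Artist": "Adele", "SongTitle": "Skyfall"},
    {"Artist": "Adele", "SongTitle": "Hello"},
]
ordered = sorted(items, key=lambda i: i["SongTitle"])
```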
·
Each primary
key attribute must be a scalar (meaning that it can hold only a single value).
The only data types allowed for primary key attributes are string, number, or
binary. There are no such restrictions for other, non-key attributes.
·
DynamoDB
Streams is an optional feature that captures data modification events in
DynamoDB tables. The data about these events appears in the stream in near-real
time, and in the order that the events occurred.
·
Each event is
represented by a stream
record. If you enable a stream on a table, DynamoDB
Streams writes a stream record whenever one of the following events occurs:
· A new item is added to
the table: The stream captures an image of the entire item, including all of its attributes.
· An item is updated: The
stream captures the "before" and "after" image of any
attributes that were modified in the item.
· An item is deleted from
the table: The stream captures an image of the entire item before it was
deleted.
· Each stream record also contains the name of the table, the
event timestamp, and other metadata. Stream records have a lifetime of 24
hours; after that, they are automatically removed from the stream.
· Amazon
DynamoDB is available in multiple AWS Regions around the world. Each Region is
independent and isolated from other AWS Regions. For example, if you have a
table called People in the us-east-2 Region and another table named People in the us-west-2 Region, these are considered two entirely separate
tables.
When your
application writes data to a DynamoDB table and receives an HTTP 200 response
(OK), the write has occurred and is durable. The data is eventually consistent
across all storage locations, usually within one second or less.
·
When you read data from a DynamoDB table, the response might not
reflect the results of a recently completed write operation. The response might
include some stale data. If you repeat your read request after a short time,
the response should return the latest data.
·
DynamoDB uses eventually consistent reads,
unless you specify otherwise. Read operations (such as GetItem, Query, and
Scan) provide a ConsistentRead parameter. If you set this parameter to true,
DynamoDB uses strongly consistent reads during the operation.
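For example, a GetItem request with the parameter set, in boto3's low-level parameter shape (the table and key names are illustrative):

```python
# Request parameters for a strongly consistent single-item read.
# "People" and "PersonId" are hypothetical names.
get_params = {
    "TableName": "People",
    "Key": {"PersonId": {"N": "42"}},
    "ConsistentRead": True,  # default is False (eventually consistent)
}

# response = boto3.client("dynamodb").get_item(**get_params)
# response["Item"] would reflect all prior successful writes.
```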
·
When you request a strongly consistent read, DynamoDB returns a
response with the most up-to-date data, reflecting the updates from all prior
write operations that were successful. However, this consistency comes with
some disadvantages:
·
A strongly consistent read might not be available if
there is a network delay or outage. In this case, DynamoDB may return a server
error (HTTP 500).
·
Strongly consistent reads may have higher latency
than eventually consistent reads.
· Strongly
consistent reads are not supported on global secondary indexes.
· Strongly
consistent reads use more throughput capacity than eventually consistent reads.
For details, see Read/Write Capacity Mode.
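The capacity arithmetic behind that last point: a strongly consistent read consumes one read capacity unit per 4 KB of item size (rounded up), while an eventually consistent read consumes half as much:

```python
import math

def read_capacity_units(item_size_bytes, strongly_consistent):
    """RCUs consumed per read: one 4 KB unit for a strongly consistent
    read, half as much for an eventually consistent read."""
    units = math.ceil(item_size_bytes / 4096)
    return units if strongly_consistent else units / 2

# An 8 KB item costs 2 RCUs strongly consistent, 1 RCU eventually consistent.
```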