```sh
npm install abstract-level
```
Abstract class for a lexicographically sorted key-value database. Provides state, encodings, sublevels, events and hooks. If you are upgrading, please see UPGRADING.md.

> :pushpin: What happened to `levelup`? Head on over to the Frequently Asked Questions.
**Table of Contents**

- Public API:
  - `db = new Constructor(...[, options])`
  - `db.status`
  - `db.open([options])`
  - `db.close()`
  - `db.supports`
  - `db.get(key[, options])`
  - `db.getMany(keys[, options])`
  - `db.put(key, value[, options])`
  - `db.del(key[, options])`
  - `db.batch(operations[, options])`
  - `chainedBatch = db.batch()`
  - `iterator = db.iterator([options])`
  - `keyIterator = db.keys([options])`
  - `valueIterator = db.values([options])`
  - `db.clear([options])`
  - `sublevel = db.sublevel(name[, options])`
  - `encoding = db.keyEncoding([encoding])`
  - `encoding = db.valueEncoding([encoding])`
  - `key = db.prefixKey(key, keyFormat[, local])`
  - `db.defer(fn[, options])`
  - `db.deferAsync(fn[, options])`
  - `chainedBatch`
  - `iterator`
  - `keyIterator`
  - `valueIterator`
  - `sublevel`
- Error codes:
  - `LEVEL_DATABASE_NOT_OPEN`
  - `LEVEL_DATABASE_NOT_CLOSED`
  - `LEVEL_ITERATOR_NOT_OPEN`
  - `LEVEL_ITERATOR_BUSY`
  - `LEVEL_BATCH_NOT_OPEN`
  - `LEVEL_ABORTED`
  - `LEVEL_ENCODING_NOT_FOUND`
  - `LEVEL_ENCODING_NOT_SUPPORTED`
  - `LEVEL_DECODE_ERROR`
  - `LEVEL_INVALID_KEY`
  - `LEVEL_INVALID_VALUE`
  - `LEVEL_CORRUPTION`
  - `LEVEL_IO_ERROR`
  - `LEVEL_INVALID_PREFIX`
  - `LEVEL_NOT_SUPPORTED`
  - `LEVEL_LEGACY`
  - `LEVEL_LOCKED`
  - `LEVEL_HOOK_ERROR`
  - `LEVEL_STATUS_LOCKED`
  - `LEVEL_READONLY`
  - `LEVEL_CONNECTION_LOST`
  - `LEVEL_REMOTE_ERROR`
- Private API:
  - `db = AbstractLevel(manifest[, options])`
  - `db._open(options)`
  - `db._close()`
  - `db._get(key, options)`
  - `db._getMany(keys, options)`
  - `db._put(key, value, options)`
  - `db._del(key, options)`
  - `db._batch(operations, options)`
  - `db._chainedBatch()`
  - `db._iterator(options)`
  - `db._keys(options)`
  - `db._values(options)`
  - `db._clear(options)`
  - `sublevel = db._sublevel(name, options)`
  - `iterator = AbstractIterator(db, options)`
  - `keyIterator = AbstractKeyIterator(db, options)`
  - `valueIterator = AbstractValueIterator(db, options)`
  - `chainedBatch = AbstractChainedBatch(db, options)`
This module exports an abstract class that should not be instantiated by end users. Instead use modules like `level` that contain a concrete implementation and actual data storage. The purpose of the abstract class is to provide a common interface that looks like this:
```js
// Create a database
const db = new Level('./db', { valueEncoding: 'json' })

// Add an entry with key 'a' and value 1
await db.put('a', 1)

// Add multiple entries
await db.batch([{ type: 'put', key: 'b', value: 2 }])

// Get value of key 'a': 1
const value = await db.get('a')

// Iterate entries with keys that are greater than 'a'
for await (const [key, value] of db.iterator({ gt: 'a' })) {
  console.log(value) // 2
}
```
Usage from TypeScript requires generic type parameters.
```ts
// Specify types of keys and values (any, in the case of json).
// The generic type parameters default to Level<string, string>.
const db = new Level<string, any>('./db', { valueEncoding: 'json' })

// All relevant methods then use those types
await db.put('a', { x: 123 })

// Specify different types when overriding encoding per operation
await db.get<string, string>('a', { valueEncoding: 'utf8' })

// Though in some cases TypeScript can infer them
await db.get('a', { valueEncoding: db.valueEncoding('utf8') })

// It works the same for sublevels
const abc = db.sublevel('abc')
const xyz = db.sublevel<string, any>('xyz', { valueEncoding: 'json' })
```
We aim to support Active LTS and Current Node.js releases, as well as evergreen browsers that are based on Chromium, Firefox or WebKit. Features that the runtime must support include `queueMicrotask`, `Promise.allSettled()`, `globalThis` and async generators. Supported runtimes may differ per implementation.
## Public API For Consumers

This module has a public API for consumers of a database and a private API for concrete implementations. The public API, as documented in this section, offers a simple yet rich interface that is common between all implementations. Implementations may have additional options or methods. TypeScript type declarations are included (and exported for reuse) only for the public API.
An `abstract-level` database is at its core a key-value database. A key-value pair is referred to as an *entry* here and typically returned as an array, comparable to `Object.entries()`.
### `db = new Constructor(...[, options])`

Creating a database is done by calling a class constructor. Implementations export a class that extends the `AbstractLevel` class and has its own constructor with an implementation-specific signature. All constructors should have an `options` argument as the last. Typically, constructors take a `location` as their first argument, pointing to where the data will be stored. That may be a file path, URL, something else or none at all, since not all implementations are disk-based or persistent. Others take another database rather than a location as their first argument.
The optional `options` object may contain:

- `keyEncoding` (string or object, default `'utf8'`): encoding to use for keys
- `valueEncoding` (string or object, default `'utf8'`): encoding to use for values.

See Encodings for a full description of these options. Other `options` (except `passive`) are forwarded to `db.open()` which is automatically called in a next tick after the constructor returns. Any read & write operations are queued internally until the database has finished opening. If opening fails, those queued operations will yield errors.
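As a sketch of that deferred-open behavior, assuming a `level`-like implementation that exports a `Level` constructor:

```js
const db = new Level('./db')

// No need to wait for the database to open: this operation
// is queued internally until opening has finished
await db.put('a', '1')
```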
### `db.status`

Read-only getter that returns a string reflecting the current state of the database:

- `'opening'` - waiting for the database to be opened
- `'open'` - successfully opened the database
- `'closing'` - waiting for the database to be closed
- `'closed'` - database is closed.

### `db.open([options])`
Open the database. Returns a promise. Options passed to `open()` take precedence over options passed to the database constructor. Not all implementations support the `createIfMissing` and `errorIfExists` options (notably `memory-level` and `browser-level`) and will indicate so via `db.supports.createIfMissing` and `db.supports.errorIfExists`.
The optional `options` object may contain:

- `createIfMissing` (boolean, default: `true`): If `true`, create an empty database if one doesn't already exist. If `false` and the database doesn't exist, opening will fail.
- `errorIfExists` (boolean, default: `false`): If `true` and the database already exists, opening will fail.
- `passive` (boolean, default: `false`): Wait for, but do not initiate, opening of the database.

It's generally not necessary to call `open()` because it's automatically called by the database constructor. It may however be useful to capture an error from failure to open, which would otherwise not surface until another method like `db.get()` is called. It's also possible to reopen the database after it has been closed with `close()`. Once `open()` has then been called, any read & write operations will again be queued internally until opening has finished.
The `open()` and `close()` methods are idempotent. If the database is already open, the promise returned by `open()` will resolve without delay. If opening is already in progress, the promise will resolve when that has finished. If closing is in progress, the database will be reopened once closing has finished. Likewise, if `close()` is called after `open()`, the database will be closed once opening has finished.
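For example, a minimal sketch of capturing an open error explicitly rather than on first use:

```js
try {
  await db.open()
} catch (err) {
  console.error('Failed to open the database', err)
}
```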
### `db.close()`

Close the database. Returns a promise.

A database may have associated resources like file handles and locks. When the database is no longer needed (for the remainder of a program) it's recommended to call `db.close()` to free up resources.

After `db.close()` has been called, no further read & write operations are allowed unless and until `db.open()` is called again. For example, `db.get(key)` will yield an error with code `LEVEL_DATABASE_NOT_OPEN`. Any unclosed iterators or chained batches will be closed by `db.close()` and can then no longer be used even when `db.open()` is called again.
### `db.supports`

A manifest describing the features supported by this database. Might be used like so:

```js
if (!db.supports.permanence) {
  throw new Error('Persistent storage is required')
}
```
### `db.get(key[, options])`

Get a value from the database by `key`. The optional `options` object may contain:

- `keyEncoding`: custom key encoding for this operation, used to encode the `key`.
- `valueEncoding`: custom value encoding for this operation, used to decode the value.

Returns a promise for the value. If the `key` was not found then the value will be `undefined`.
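A minimal sketch:

```js
const value = await db.get('a')

if (value === undefined) {
  console.log('Entry "a" was not found')
}
```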
### `db.getMany(keys[, options])`

Get multiple values from the database by an array of `keys`. The optional `options` object may contain:

- `keyEncoding`: custom key encoding for this operation, used to encode the `keys`.
- `valueEncoding`: custom value encoding for this operation, used to decode values.

Returns a promise for an array of values with the same order as `keys`. If a key was not found, the relevant value will be `undefined`.
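For example:

```js
// Values have the same order as the input keys;
// missing keys yield undefined
const [a, b] = await db.getMany(['a', 'b'])
```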
### `db.put(key, value[, options])`

Add a new entry or overwrite an existing entry. The optional `options` object may contain:

- `keyEncoding`: custom key encoding for this operation, used to encode the `key`.
- `valueEncoding`: custom value encoding for this operation, used to encode the `value`.

Returns a promise.
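For example, overriding the value encoding for a single operation:

```js
// Encode the value with JSON for just this operation
await db.put('example', { x: 2 }, { valueEncoding: 'json' })
```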
### `db.del(key[, options])`

Delete an entry by `key`. The optional `options` object may contain:

- `keyEncoding`: custom key encoding for this operation, used to encode the `key`.

Returns a promise.
### `db.batch(operations[, options])`

Perform multiple put and/or del operations in bulk. Returns a promise. The `operations` argument must be an array containing a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation.

Each operation must be an object with at least a `type` property set to either `'put'` or `'del'`. If the `type` is `'put'`, the operation must have `key` and `value` properties. It may optionally have `keyEncoding` and / or `valueEncoding` properties to encode keys or values with a custom encoding for just that operation. If the `type` is `'del'`, the operation must have a `key` property and may optionally have a `keyEncoding` property.
An operation of either type may also have a `sublevel` property, to prefix the key of the operation with the prefix of that sublevel. This allows atomically committing data to multiple sublevels. The given `sublevel` must have the same root (i.e. top-most) database as `db`. Keys and values will be encoded by the sublevel, to the same effect as a `sublevel.batch(..)` call. In the following example, the first `value` will be encoded with `'json'` rather than the default encoding of `db`:
```js
const people = db.sublevel('people', { valueEncoding: 'json' })
const nameIndex = db.sublevel('names')

await db.batch([{
  type: 'put',
  sublevel: people,
  key: '123',
  value: {
    name: 'Alice'
  }
}, {
  type: 'put',
  sublevel: nameIndex,
  key: 'Alice',
  value: '123'
}])
```
The optional `options` object may contain:

- `keyEncoding`: custom key encoding for this batch, used to encode keys.
- `valueEncoding`: custom value encoding for this batch, used to encode values.

Encoding properties on individual operations take precedence. In the following example, the first value will be encoded with the `'utf8'` encoding and the second with `'json'`:
```js
await db.batch([
  { type: 'put', key: 'a', value: 'foo' },
  { type: 'put', key: 'b', value: 123, valueEncoding: 'json' }
], { valueEncoding: 'utf8' })
```
### `chainedBatch = db.batch()`

Create a chained batch, when `batch()` is called with zero arguments. A chained batch can be used to build and eventually commit an atomic batch of operations:

```js
const chainedBatch = db.batch()
  .del('bob')
  .put('alice', 361)
  .put('kim', 220)

// Commit
await chainedBatch.write()
```

Depending on how it's used, it is possible to obtain greater overall performance with this form of `batch()`, mainly because its methods like `put()` can immediately copy the data of that singular operation to the underlying storage, rather than having to block the event loop while copying the data of multiple operations. However, on several `abstract-level` implementations, a chained batch is just sugar and has no performance benefits.

Due to its synchronous nature, it is not possible to create a chained batch before the database has finished opening. Be sure to call `await db.open()` before `chainedBatch = db.batch()`. This does not apply to other database methods.
### `iterator = db.iterator([options])`

Create an iterator. The optional `options` object may contain the following *range options* to control the range of entries to be iterated:

- `gt` (greater than) or `gte` (greater than or equal): define the lower bound of the range to be iterated. Only entries where the key is greater than (or equal to) this option will be included in the range. When `reverse` is true the order will be reversed, but the entries iterated will be the same.
- `lt` (less than) or `lte` (less than or equal): define the higher bound of the range to be iterated. Only entries where the key is less than (or equal to) this option will be included in the range. When `reverse` is true the order will be reversed, but the entries iterated will be the same.
- `reverse` (boolean, default: `false`): iterate entries in reverse order. Beware that a reverse seek can be slower than a forward seek.
- `limit` (number, default: `Infinity`): limit the number of entries yielded. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of `Infinity` or `-1` means there is no limit. When `reverse` is true the entries with the highest keys will be returned instead of the lowest keys.

The `gte` and `lte` range options take precedence over `gt` and `lt` respectively. If no range options are provided, the iterator will visit all entries of the database, starting at the lowest key and ending at the highest key (unless `reverse` is true). In addition to range options, the `options` object may contain:
- `keys` (boolean, default: `true`): whether to return the key of each entry. If set to `false`, the iterator will yield keys that are `undefined`. Prefer to use `db.keys()` instead.
- `values` (boolean, default: `true`): whether to return the value of each entry. If set to `false`, the iterator will yield values that are `undefined`. Prefer to use `db.values()` instead.
- `keyEncoding`: custom key encoding for this iterator, used to encode range options, to encode `seek()` targets and to decode keys.
- `valueEncoding`: custom value encoding for this iterator, used to decode values.
- `signal`: an `AbortSignal` to abort read operations on the iterator.

Lastly, an implementation is free to add its own options.
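For example, a minimal sketch that iterates a bounded range:

```js
for await (const [key, value] of db.iterator({ gte: 'a', lt: 'x', limit: 10 })) {
  console.log(key, value)
}
```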
> :pushpin: To instead consume data using streams, see `level-read-stream` and `level-web-stream`.
### `keyIterator = db.keys([options])`

Create a key iterator, having the same interface as `db.iterator()` except that it yields keys instead of entries. If only keys are needed, using `db.keys()` may increase performance because values won't have to be fetched, copied or decoded. Options are the same as for `db.iterator()` except that `db.keys()` does not take `keys`, `values` and `valueEncoding` options.
```js
// Iterate lazily
for await (const key of db.keys({ gt: 'a' })) {
  console.log(key)
}

// Get all at once. Setting a limit is recommended.
const keys = await db.keys({ gt: 'a', limit: 10 }).all()
```
### `valueIterator = db.values([options])`

Create a value iterator, having the same interface as `db.iterator()` except that it yields values instead of entries. If only values are needed, using `db.values()` may increase performance because keys won't have to be fetched, copied or decoded. Options are the same as for `db.iterator()` except that `db.values()` does not take `keys` and `values` options. Note that it does take a `keyEncoding` option, relevant for the encoding of range options.
```js
// Iterate lazily
for await (const value of db.values({ gt: 'a' })) {
  console.log(value)
}

// Get all at once. Setting a limit is recommended.
const values = await db.values({ gt: 'a', limit: 10 }).all()
```
### `db.clear([options])`

Delete all entries or a range. Not guaranteed to be atomic. Returns a promise. Accepts the following options (with the same rules as on iterators):

- `gt` (greater than) or `gte` (greater than or equal): define the lower bound of the range to be deleted. Only entries where the key is greater than (or equal to) this option will be included in the range. When `reverse` is true the order will be reversed, but the entries deleted will be the same.
- `lt` (less than) or `lte` (less than or equal): define the higher bound of the range to be deleted. Only entries where the key is less than (or equal to) this option will be included in the range. When `reverse` is true the order will be reversed, but the entries deleted will be the same.
- `reverse` (boolean, default: `false`): delete entries in reverse order. Only effective in combination with `limit`, to delete the last N entries.
- `limit` (number, default: `Infinity`): limit the number of entries to be deleted. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of `Infinity` or `-1` means there is no limit. When `reverse` is true the entries with the highest keys will be deleted instead of the lowest keys.
- `keyEncoding`: custom key encoding for this operation, used to encode range options.

The `gte` and `lte` range options take precedence over `gt` and `lt` respectively. If no options are provided, all entries will be deleted.
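For example, a minimal sketch that deletes a range:

```js
// Delete all entries with keys greater than 'a'
await db.clear({ gt: 'a' })
```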
### `sublevel = db.sublevel(name[, options])`

Create a sublevel that has the same interface as `db` (except for additional, implementation-specific methods) and prefixes the keys of operations before passing them on to `db`. The `name` argument is required and must be a string, or an array of strings (explained further below).
```js
const example = db.sublevel('example')

await example.put('hello', 'world')
await db.put('a', '1')

// Prints ['hello', 'world']
for await (const [key, value] of example.iterator()) {
  console.log([key, value])
}
```
Sublevels effectively separate a database into sections. Think SQL tables, but evented, ranged and realtime! Each sublevel is an `AbstractLevel` instance with its own keyspace, encodings, hooks and events. For example, it's possible to have one sublevel with `'buffer'` keys and another with `'utf8'` keys. The same goes for values. Like so:
```js
db.sublevel('one', { valueEncoding: 'json' })
db.sublevel('two', { keyEncoding: 'buffer' })
```
Having its own keyspace means that `sublevel.iterator()` only includes entries of that sublevel, `sublevel.clear()` will only delete entries of that sublevel, and so forth. Range options get prefixed too.
Fully qualified keys (as seen from the parent database) take the form of `prefix + key` where `prefix` is `separator + name + separator`. If `name` is empty, the effective prefix is two separators. Sublevels can be nested: if `db` is itself a sublevel then the effective prefix is a combined prefix, e.g. `'!one!!two!'`. Note that a parent database will see its own keys as well as keys of any nested sublevels:
```js
// Prints ['!example!hello', 'world'] and ['a', '1']
for await (const [key, value] of db.iterator()) {
  console.log([key, value])
}
```
> :pushpin: The key structure is equal to that of `subleveldown` which offered sublevels before they were built-in to `abstract-level`. This means that an `abstract-level` sublevel can read sublevels previously created with (and populated by) `subleveldown`.
Internally, sublevels operate on keys that are either a string, Buffer or Uint8Array, depending on parent database and choice of encoding. Which is to say: binary keys are fully supported. The `name` must however always be a string and can only contain ASCII characters.
The optional `options` object may contain:

- `separator` (string, default: `'!'`): Character for separating sublevel names from user keys and each other. Must sort before characters used in `name`. An error will be thrown if that's not the case.
- `keyEncoding` (string or object, default `'utf8'`): encoding to use for keys
- `valueEncoding` (string or object, default `'utf8'`): encoding to use for values.

The `keyEncoding` and `valueEncoding` options are forwarded to the `AbstractLevel` constructor and work the same, as if a new, separate database was created. They default to `'utf8'` regardless of the encodings configured on `db`. Other options are forwarded too but `abstract-level` has no relevant options at the time of writing. For example, setting the `createIfMissing` option will have no effect. Why is that?
Like regular databases, sublevels open themselves, but they do not affect the state of the parent database. This means a sublevel can be individually closed and (re)opened. If the sublevel is created while the parent database is opening, it will wait for that to finish. If the parent database is closed, then opening the sublevel will fail and subsequent operations on the sublevel will yield errors with code `LEVEL_DATABASE_NOT_OPEN`.
Lastly, the `name` argument can be an array as a shortcut to create nested sublevels. Those are normally created like so:
```js
const indexes = db.sublevel('idx')
const colorIndex = indexes.sublevel('colors')
```
Here, the parent database of `colorIndex` is `indexes`. Operations made on `colorIndex` are thus forwarded from that sublevel to `indexes` and from there to `db`. At each step, hooks and events are available to transform and react to data from a different perspective. This comes at a (typically small) performance cost that increases with further nested sublevels. If the `indexes` sublevel is only used to organize keys and not directly interfaced with, operations on `colorIndex` can be made faster by skipping `indexes`:
```js
const colorIndex = db.sublevel(['idx', 'colors'])
```
In this case, the parent database of `colorIndex` is `db`. Note that it's still possible to separately create the `indexes` sublevel, but it will be disconnected from `colorIndex`, meaning that `indexes` will not see (live) operations made on `colorIndex`.
### `encoding = db.keyEncoding([encoding])`

Returns the given `encoding` argument as a normalized encoding object that follows the `level-transcoder` encoding interface. See Encodings for an introduction. The `encoding` argument may be:

- A string to select a known encoding by its name
- An object that conforms to one of the following encoding interfaces: `level-transcoder`, `level-codec`, `abstract-encoding`, `multiformats`
- A previously normalized encoding, such that `keyEncoding(x)` equals `keyEncoding(keyEncoding(x))`
- Omitted, `null` or `undefined`, in which case the default `keyEncoding` of the database is returned.

Other methods that take `keyEncoding` or `valueEncoding` options accept the same as above. Results are cached. If the `encoding` argument is an object and it has a name then subsequent calls can refer to that encoding by name.
Depending on the encodings supported by a database, this method may return a *transcoder encoding* that translates the desired encoding from / to an encoding supported by the database. Its `encode()` and `decode()` methods will have respectively the same input and output types as a non-transcoded encoding, but its `name` property will differ.
Assume that e.g. `db.keyEncoding().encode(key)` is safe to call at any time including if the database isn't open, because encodings must be stateless. If the given encoding is not found or supported, a `LEVEL_ENCODING_NOT_FOUND` or `LEVEL_ENCODING_NOT_SUPPORTED` error is thrown.
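A small sketch of normalizing an encoding by name, assuming the database supports the `utf8` format so that no transcoder encoding is needed:

```js
const encoding = db.keyEncoding('json')

console.log(encoding.name) // 'json'
console.log(encoding.encode(123)) // '123'
```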
### `encoding = db.valueEncoding([encoding])`

Same as `db.keyEncoding([encoding])` except that it returns the default `valueEncoding` of the database (if the `encoding` argument is omitted, `null` or `undefined`).
### `key = db.prefixKey(key, keyFormat[, local])`

Add sublevel prefix to the given `key`, which must be already-encoded. If this database is not a sublevel, the given `key` is returned as-is. The `keyFormat` must be one of `'utf8'`, `'buffer'` or `'view'`. If `'utf8'` then `key` must be a string and the return value will be a string. If `'buffer'` then Buffer, if `'view'` then Uint8Array.
```js
const sublevel = db.sublevel('example')

console.log(db.prefixKey('a', 'utf8')) // 'a'
console.log(sublevel.prefixKey('a', 'utf8')) // '!example!a'
```
By default, the given `key` will be prefixed to form a fully-qualified key in the context of the root (i.e. top-most) database, as the following example will demonstrate. If `local` is true, the given `key` will instead be prefixed to form a fully-qualified key in the context of the parent database.
```js
const sublevel = db.sublevel('example')
const nested = sublevel.sublevel('nested')

console.log(nested.prefixKey('a', 'utf8')) // '!example!!nested!a'
console.log(nested.prefixKey('a', 'utf8', true)) // '!nested!a'
```
### `db.defer(fn[, options])`

Call the function `fn` at a later time when `db.status` changes to `'open'` or `'closed'`. Known as a *deferred operation*. Used by `abstract-level` itself to implement "deferred open", which is a feature that makes it possible to call methods like `db.put()` before the database has finished opening. The `defer()` method is exposed for implementations and plugins to achieve the same on their custom methods:
```js
db.foo = function (key) {
  if (this.status === 'opening') {
    this.defer(() => this.foo(key))
  } else {
    // ..
  }
}
```
The optional `options` object may contain:

- `signal`: an `AbortSignal` to abort the deferred operation. When aborted (now or later) the `fn` function will not be called.

When deferring a custom operation, do it early: after normalizing optional arguments but before encoding (to avoid double encoding and to emit original input if the operation has events) and before any fast paths (to avoid calling back before the database has finished opening). For example, `db.batch([])` has an internal fast path where it skips work if the array of operations is empty. Resources that can be closed on their own (like iterators) should however first check such state before deferring, in order to reject operations after close (including when the database was reopened).
### `db.deferAsync(fn[, options])`

Similar to `db.defer(fn)` but for asynchronous work. Returns a promise, which waits for `db.status` to change to `'open'` or `'closed'` and then calls `fn`, which itself must return a promise. This allows for recursion:
```js
db.foo = async function (key) {
  if (this.status === 'opening') {
    return this.deferAsync(() => this.foo(key))
  } else {
    // ..
  }
}
```
The optional `options` object may contain:

- `signal`: an `AbortSignal` to abort the deferred operation. When aborted (now or later) the `fn` function will not be called, and the promise returned by `deferAsync()` will be rejected with a `LEVEL_ABORTED` error.

### `chainedBatch`
#### `chainedBatch.put(key, value[, options])`

Add a `put` operation to this chained batch, not committed until `write()` is called. This will throw a `LEVEL_INVALID_KEY` or `LEVEL_INVALID_VALUE` error if `key` or `value` is invalid. The optional `options` object may contain:

- `keyEncoding`: custom key encoding for this operation, used to encode the `key`.
- `valueEncoding`: custom value encoding for this operation, used to encode the `value`.
- `sublevel` (sublevel instance): act as though the `put` operation is performed on the given sublevel, to similar effect as `sublevel.batch().put(key, value)`. This allows atomically committing data to multiple sublevels. The given `sublevel` must have the same root (i.e. top-most) database as `chainedBatch.db`. The `key` will be prefixed with the prefix of the sublevel, and the `key` and `value` will be encoded by the sublevel (using the default encodings of the sublevel unless `keyEncoding` and / or `valueEncoding` are provided).

#### `chainedBatch.del(key[, options])`
Add a `del` operation to this chained batch, not committed until `write()` is called. This will throw a `LEVEL_INVALID_KEY` error if `key` is invalid. The optional `options` object may contain:

- `keyEncoding`: custom key encoding for this operation, used to encode the `key`.
- `sublevel` (sublevel instance): act as though the `del` operation is performed on the given sublevel, to similar effect as `sublevel.batch().del(key)`. This allows atomically committing data to multiple sublevels. The given `sublevel` must have the same root (i.e. top-most) database as `chainedBatch.db`. The `key` will be prefixed with the prefix of the sublevel, and the `key` will be encoded by the sublevel (using the default key encoding of the sublevel unless `keyEncoding` is provided).

#### `chainedBatch.clear()`
Remove all operations from this chained batch, so that they will not be committed.
#### `chainedBatch.write([options])`

Commit the operations. Returns a promise. All operations will be written atomically, that is, they will either all succeed or fail with no partial commits.

There are no `options` by default but implementations may add theirs. Note that `write()` does not take encoding options. Those can only be set on `put()` and `del()` because implementations may synchronously forward such calls to an underlying store and thus need keys and values to be encoded at that point.

After `write()` or `close()` has been called, no further operations are allowed.
#### `chainedBatch.close()`

Free up underlying resources. This should be done even if the chained batch has zero operations. Automatically called by `write()` so normally not necessary to call, unless the intent is to discard a chained batch without committing it. Closing the batch is an idempotent operation, such that calling `close()` more than once is allowed and makes no difference. Returns a promise.
#### `chainedBatch.length`

The number of operations in this chained batch, including operations that were added by `prewrite` hook functions if any.

#### `chainedBatch.db`

A reference to the database that created this chained batch.
### `iterator`

An iterator allows one to lazily read a range of entries stored in the database. The entries will be sorted by keys in lexicographic order (in other words: byte order) which in short means key `'a'` comes before `'b'` and key `'10'` comes before `'2'`.

An iterator reads from a snapshot of the database, created at the time `db.iterator()` was called. This means the iterator will not see the data of simultaneous write operations. Most but not all implementations can offer this guarantee, as indicated by `db.supports.snapshots`.

Iterators can be consumed with `for await...of` and `iterator.all()`, or by manually calling `iterator.next()` or `nextv()` in succession. In the latter case, `iterator.close()` must always be called. In contrast, finishing, throwing, breaking or returning from a `for await...of` loop automatically calls `iterator.close()`, as does `iterator.all()`.
An iterator reaches its natural end in the following situations:

- The end of the database has been reached
- The end of the range has been reached
- The last `iterator.seek()` was out of range.

An iterator keeps track of calls that are in progress. It doesn't allow concurrent `next()`, `nextv()` or `all()` calls (including a combination thereof) and will throw an error with code `LEVEL_ITERATOR_BUSY` if that happens:
```js
// Not awaited
iterator.next()

try {
  // Which means next() is still in progress here
  iterator.all()
} catch (err) {
  console.log(err.code) // 'LEVEL_ITERATOR_BUSY'
}
```
#### `for await...of iterator`

Yields entries, which are arrays containing a `key` and `value`. The type of `key` and `value` depends on the options passed to `db.iterator()`.
```js
try {
  for await (const [key, value] of db.iterator()) {
    console.log(key)
  }
} catch (err) {
  console.error(err)
}
```
Note for implementors: this uses `iterator.next()` and `iterator.close()` under the hood, so no further method implementations are needed to support `for await...of`.
#### `iterator.next()`

Advance to the next entry and yield that entry. Returns a promise for either an entry array (containing a `key` and `value`) or for `undefined` if the iterator reached its natural end. The type of `key` and `value` depends on the options passed to `db.iterator()`.

Note: `iterator.close()` must always be called once there's no intention to call `next()` or `nextv()` again, even if such calls yielded an error and even if the iterator reached its natural end. Not closing the iterator will result in memory leaks and may also affect performance of other operations if many iterators are unclosed and each is holding a snapshot of the database.
#### `iterator.nextv(size[, options])`

Advance repeatedly and get at most `size` entries in a single call. Can be faster than repeated `next()` calls. The `size` argument must be an integer and has a soft minimum of 1. There are no `options` by default but implementations may add theirs.

Returns a promise for an array of entries, where each entry is an array containing a key and value. The natural end of the iterator will be signaled by yielding an empty array.
```js
const iterator = db.iterator()

while (true) {
  const entries = await iterator.nextv(100)

  if (entries.length === 0) {
    break
  }

  for (const [key, value] of entries) {
    // ..
  }
}

await iterator.close()
```
#### `iterator.all([options])`

Advance repeatedly and get all (remaining) entries as an array, automatically closing the iterator. Assumes that those entries fit in memory. If that's not the case, instead use `next()`, `nextv()` or `for await...of`. There are no `options` by default but implementations may add theirs. Returns a promise for an array of entries, where each entry is an array containing a key and value.
```js
const entries = await db.iterator({ limit: 100 }).all()

for (const [key, value] of entries) {
  // ..
}
```
#### `iterator.seek(target[, options])`

Seek to the key closest to `target`. This method is synchronous, but the actual work may happen lazily. Subsequent calls to `iterator.next()`, `nextv()` or `all()` (including implicit calls in a `for await...of` loop) will yield entries with keys equal to or larger than `target`, or equal to or smaller than `target` if the `reverse` option passed to `db.iterator()` was true.
The optional `options` object may contain:

- `keyEncoding`: custom key encoding, used to encode the `target`. By default the `keyEncoding` option of the iterator is used or (if that wasn't set) the `keyEncoding` of the database.

If range options like `gt` were passed to `db.iterator()` and `target` does not fall within that range, the iterator will reach its natural end.

Note: Not all implementations support `seek()`. Consult `db.supports.seek` or the support matrix.
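A minimal sketch, assuming the implementation supports `seek()`:

```js
const iterator = db.iterator()

// Synchronous; the actual work happens on the next read
iterator.seek('foo')

// Yields the first entry with a key >= 'foo', if any
const entry = await iterator.next()
await iterator.close()
```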
#### `iterator.close()`

Free up underlying resources. Returns a promise. Closing the iterator is an idempotent operation, such that calling `close()` more than once is allowed and makes no difference.

If a `next()`, `nextv()` or `all()` call is in progress, closing will wait for that to finish. After `close()` has been called, further calls to `next()`, `nextv()` or `all()` will yield an error with code `LEVEL_ITERATOR_NOT_OPEN`.
#### `iterator.db`

A reference to the database that created this iterator.

#### `iterator.count`

Read-only getter that indicates how many entries have been yielded so far (by any method) excluding calls that errored or yielded `undefined`.
#### `iterator.limit`

Read-only getter that reflects the `limit` that was set in options. Greater than or equal to zero. Equals `Infinity` if no limit, which allows for easy math:

```js
const hasMore = iterator.count < iterator.limit
const remaining = iterator.limit - iterator.count
```
Iterators take an experimental `signal` option that, once signaled, aborts an in-progress read operation (if any) and rejects subsequent reads. The relevant promise will be rejected with a `LEVEL_ABORTED` error. Aborting does not close the iterator, because closing is asynchronous and may result in an error that needs a place to go. This means signals should be used together with a pattern that automatically closes the iterator:
```js
const abortController = new AbortController()
const signal = abortController.signal

// Will result in 'aborted' log
abortController.abort()

try {
  for await (const entry of db.iterator({ signal })) {
    console.log(entry)
  }
} catch (err) {
  if (err.code === 'LEVEL_ABORTED') {
    console.log('aborted')
  }
}
```
Otherwise, close the iterator explicitly:
```js
const iterator = db.iterator({ signal })

try {
  const entries = await iterator.nextv(10)
} catch (err) {
  if (err.code === 'LEVEL_ABORTED') {
    console.log('aborted')
  }
} finally {
  await iterator.close()
}
```
Support of signals is indicated via `db.supports.signals.iterators`.
### `keyIterator`

A key iterator has the same interface as `iterator` except that its methods yield keys instead of entries. Usage is otherwise the same.

### `valueIterator`

A value iterator has the same interface as `iterator` except that its methods yield values instead of entries. Usage is otherwise the same.
### `sublevel`

A sublevel is an instance of the `AbstractSublevel` class, which extends `AbstractLevel` and thus has the same API as documented above. Sublevels have a few additional properties and methods.

#### `sublevel.prefix`

Prefix of the sublevel. A read-only string property.
```js
const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.prefix) // '!example!'
console.log(nested.prefix) // '!example!!nested!'
```
#### `sublevel.parent`

Parent database. A read-only property.
```js
const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.parent === db) // true
console.log(nested.parent === example) // true
```
#### `sublevel.db`

Root database. A read-only property.
```js
const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.db === db) // true
console.log(nested.db === db) // true
```
#### `sublevel.path([local])`

Get the path of this sublevel, which is its prefix without separators. If `local` is true, exclude the path of the parent database. If false (the default) then recurse to form a fully-qualified path that travels from the root database to this sublevel.
```js
const example = db.sublevel('example')
const nested = example.sublevel('nested')
const foo = db.sublevel(['example', 'nested', 'foo'])

// Get global or local path
console.log(nested.path()) // ['example', 'nested']
console.log(nested.path(true)) // ['nested']

// Has no intermediary sublevels, so the local option has no effect
console.log(foo.path()) // ['example', 'nested', 'foo']
console.log(foo.path(true)) // ['example', 'nested', 'foo']
```
## Hooks

> Hooks are experimental and subject to change without notice.

Hooks allow userland *hook functions* to customize behavior of the database. Each hook is a different extension point, accessible via `db.hooks`. Some are shared between database methods to encapsulate common behavior. A hook is either synchronous or asynchronous, and functions added to a hook must respect that trait.
### `hook = db.hooks.prewrite`

A synchronous hook for modifying or adding operations to `db.batch([])`, `db.batch().put()`, `db.batch().del()`, `db.put()` and `db.del()` calls. It does not include `db.clear()` because the entries deleted by such a call are not communicated back to `db`.

Functions added to this hook will receive two arguments: `op` and `batch`.
```js
const charwise = require('charwise-compact')
const books = db.sublevel('books', { valueEncoding: 'json' })
const index = db.sublevel('authors', { keyEncoding: charwise })

books.hooks.prewrite.add(function (op, batch) {
  if (op.type === 'put') {
    batch.add({
      type: 'put',
      key: [op.value.author, op.key],
      value: '',
      sublevel: index
    })
  }
})

// Will atomically commit it to the author index as well
await books.put('12', { title: 'Siddhartha', author: 'Hesse' })
```
#### `op` (object)

The `op` argument reflects the input operation and has the following properties: `type`, `key`, `keyEncoding`, an optional `sublevel`, and if `type` is `'put'` then also `value` and `valueEncoding`. It can also include userland options that were provided either in the input operation object (if it originated from `db.batch([])`) or in the `options` argument of the originating call, for example the `options` in `db.del(key, options)`.
The `key` and `value` have not yet been encoded at this point. The `keyEncoding` and `valueEncoding` properties are always encoding objects (rather than encoding names like `'json'`), which means hook functions can call (for example) `op.keyEncoding.encode(123)`.
Hook functions can modify the `key`, `value`, `keyEncoding` and `valueEncoding` properties, but not `type` or `sublevel`. If a hook function modifies `keyEncoding` or `valueEncoding` it can use either encoding names or encoding objects, which will subsequently be normalized to encoding objects. Hook functions can also add custom properties to `op` which will be visible to other hook functions, the private API of the database and in the `write` event.
#### `batch` (object)

The `batch` argument of the hook function is an interface to add operations, to be committed in the same batch as the input operation(s). This also works if the originating call was a singular operation like `db.put()` because the presence of one or more hook functions will change `db.put()` and `db.del()` to internally use a batch. For originating calls like `db.batch([])` that provide multiple input operations, operations will be added after the last input operation, rather than interleaving. The hook function will not be called for operations that were added by either itself or other hook functions.
##### `batch = batch.add(op)`

Add a batch operation, using the same format as the operations that `db.batch([])` takes. However, it is assumed that `op` can be freely mutated by `abstract-level`. Unlike input operations it will not be cloned before doing so. The `add` method returns `batch`, which allows for chaining, similar to the chained batch API.
For hook functions to be generic, it is recommended to explicitly define `keyEncoding` and `valueEncoding` properties on `op` (instead of relying on database defaults) or to use an isolated sublevel with known defaults.
### `hook = db.hooks.postopen`

An asynchronous hook that runs after the database has successfully opened, but before deferred operations are executed and before events are emitted. It thus allows for additional initialization, including reading and writing data that deferred operations might need. The postopen hook always runs before the prewrite hook.

Functions added to this hook must return a promise and will receive one argument: `options`. If one of the hook functions yields an error then the database will be closed. In the rare event that closing also fails, which means there's no safe state to return to, the database will enter an internal locked state where `db.status` is `'closed'` and subsequent calls to `db.open()` or `db.close()` will be met with a `LEVEL_STATUS_LOCKED` error. This locked state is also used during the postopen hook itself, meaning hook functions are not allowed to call `db.open()` or `db.close()`.
```js
db.hooks.postopen.add(async function (options) {
  // Can read and write like usual
  return db.put('example', 123, {
    valueEncoding: 'json'
  })
})
```
#### `options` (object)

The `options` that were provided in the originating `db.open(options)` call, merged with constructor options and defaults. Equivalent to what the private API received in `db._open(options)`.
### `hook = db.hooks.newsub`

A synchronous hook that runs when an `AbstractSublevel` instance has been created by `db.sublevel(options)`. Functions added to this hook will receive two arguments: `sublevel` and `options`.

This hook can be useful to hook into a database and any sublevels created on that database. Userland modules that act like plugins might like the following pattern:
```js
module.exports = function logger (db, options) {
  // Recurse so that db.sublevel('foo', opts) will call logger(sublevel, opts)
  db.hooks.newsub.add(logger)

  db.hooks.prewrite.add(function (op, batch) {
    console.log('writing', { db, op })
  })
}
```
#### `sublevel` (object)

The `AbstractSublevel` instance that was created.

#### `options` (object)

The `options` that were provided in the originating `db.sublevel(options)` call, merged with defaults. Equivalent to what the private API received in `db._sublevel(options)`.
### `hook`

#### `hook.add(fn)`

Add the given `fn` function to this hook, if it wasn't already added.

#### `hook.delete(fn)`

Remove the given `fn` function from this hook.
If a hook function throws an error, it will be wrapped in an error with code `LEVEL_HOOK_ERROR` and abort the originating call:
```js
try {
  await db.put('abc', 123)
} catch (err) {
  if (err.code === 'LEVEL_HOOK_ERROR') {
    console.log(err.cause)
  }
}
```
As a result, other hook functions will not be called.
On sublevels and their parent database(s), hooks are triggered in bottom-up order. For example, `db.sublevel('a').sublevel('b').batch(..)` will trigger the `prewrite` hook of sublevel `a`, then the `prewrite` hook of sublevel `b` and then of `db`. Only direct operations on a database will trigger hooks, not when a sublevel is provided as an option. This means `db.batch([{ sublevel, ... }])` will trigger the `prewrite` hook of `db` but not of `sublevel`. These behaviors are symmetrical to events: `db.batch([{ sublevel, ... }])` will only emit a `write` event from `db` while `db.sublevel(..).batch([{ ... }])` will emit a `write` event from the sublevel and then another from `db` (this time with fully-qualified keys).
## Encodings

Any method that takes a `key` argument, `value` argument or range options like `gte`, hereby jointly referred to as `data`, runs that `data` through an *encoding*. This means to encode input `data` and decode output `data`.
Several encodings are builtin courtesy of `level-transcoder` and can be selected by a short name like `'utf8'` or `'json'`. The default encoding is `'utf8'`, which ensures you'll always get back a string. Encodings can be specified for keys and values independently with `keyEncoding` and `valueEncoding` options, either in the database constructor or per method to apply an encoding selectively. For example:
```js
const db = level('./db', {
  keyEncoding: 'view',
  valueEncoding: 'json'
})

// Use binary keys
const key = Uint8Array.from([1, 2])

// Encode the value with JSON
await db.put(key, { x: 2 })

// Decode the value with JSON. Yields { x: 2 }
const obj = await db.get(key)

// Decode the value with utf8. Yields '{"x":2}'
const str = await db.get(key, { valueEncoding: 'utf8' })
```
The `keyEncoding` and `valueEncoding` options accept a string to select a known encoding by its name, or an object to use a custom encoding like `charwise`. See `keyEncoding()` for details. If a custom encoding is passed to the database constructor, subsequent method calls can refer to that encoding by name. Supported encodings are exposed in the `db.supports` manifest:
```js
const db = level('./db', {
  keyEncoding: require('charwise'),
  valueEncoding: 'json'
})

// Includes builtin and custom encodings
console.log(db.supports.encodings.utf8) // true
console.log(db.supports.encodings.charwise) // true
```
An encoding can both widen and limit the range of `data` types. The default `'utf8'` encoding can only store strings. Other types, though accepted, are irreversibly stringified before storage. That includes JavaScript primitives, which are converted with `String(x)`, Buffer, which is converted with `x.toString('utf8')`, and Uint8Array, converted with `TextDecoder#decode(x)`. Use other encodings for a richer set of `data` types, as well as binary data without a conversion cost - or loss of non-unicode bytes.
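For instance, a sketch of that lossy conversion:

```js
// With the default 'utf8' value encoding, a number is
// stringified before storage
await db.put('a', 2)
console.log(await db.get('a')) // '2' (a string, not a number)
```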
For binary data two builtin encodings are available: `'buffer'` and `'view'`. They use a Buffer or Uint8Array respectively. To some extent these encodings are interchangeable, as the `'buffer'` encoding also accepts Uint8Array as input `data` (and will convert that to a Buffer without copying the underlying ArrayBuffer), the `'view'` encoding also accepts Buffer as input `data` and so forth. Output `data` will be either a Buffer or Uint8Array respectively and can also be converted:
```js
const db = level('./db', { valueEncoding: 'view' })
const buffer = await db.get('example', { valueEncoding: 'buffer' })
```
In browser environments it may be preferable to only use `'view'`. When bundling JavaScript with Webpack, Browserify or other, you can choose not to use the `'buffer'` encoding and (through configuration of the bundler) exclude the `buffer` shim in order to reduce bundle size.
Regardless of the choice of encoding, a `key` or `value` may not be `null` or `undefined` due to preexisting significance in iterators and streams. No such restriction exists on range options because `null` and `undefined` are significant types in encodings like `charwise` as well as some underlying stores like IndexedDB. Consumers of an `abstract-level` implementation must assume that range options like `{ gt: undefined }` are not the same as `{}`. The abstract test suite does not test these types. Whether they are supported or how they sort may differ per implementation. An implementation can choose to:

- Encode these types to make them meaningful
- Have no defined behavior (moving the concern to a higher level)
- Throw an error (moving the concern to a lower level).
Lastly, one way or another, every implementation must support `data` of type String and should support `data` of type Buffer or Uint8Array.
## Events

An `abstract-level` database is an `EventEmitter` and emits the events listed below.
The `put`, `del` and `batch` events are deprecated in favor of the `write` event and will be removed in a future version of `abstract-level`. If one or more `write` event listeners exist or if the `prewrite` hook is in use, either of which implies opting-in to the `write` event, then the deprecated events will not be emitted.
### `opening`

Emitted when database is opening. Receives 0 arguments:

```js
db.once('opening', function () {
  console.log('Opening...')
})
```

### `open`

Emitted when database has successfully opened. Receives 0 arguments:

```js
db.once('open', function () {
  console.log('Opened!')
})
```
### `closing`

Emitted when database is closing. Receives 0 arguments.

### `closed`

Emitted when database has successfully closed. Receives 0 arguments.
### `write`

Emitted when data was successfully written to the database as the result of `db.batch()`, `db.put()` or `db.del()`. Receives a single `operations` argument, which is an array containing normalized operation objects. The array will contain at least one operation object and reflects modifications made (and operations added) by the `prewrite` hook. Normalized means that every operation object has `keyEncoding` and (if `type` is `'put'`) `valueEncoding` properties, and these are always encoding objects, rather than their string names like `'utf8'` or whatever was given in the input.
Operation objects also include userland options that were provided in the `options` argument of the originating call, for example the `options` in a `db.put(key, value, options)` call:
```js
db.on('write', function (operations) {
  for (const op of operations) {
    if (op.type === 'put') {
      console.log(op.key, op.value, op.foo)
    }
  }
})

// Put with a userland 'foo' option
await db.put('abc', 'xyz', { foo: true })
```
The `key` and `value` of the operation object match the original input, before having encoded it. To provide access to encoded data, the operation object additionally has `encodedKey` and (if `type` is `'put'`) `encodedValue` properties. Event listeners can inspect `keyEncoding.format` and `valueEncoding.format` to determine the data type of `encodedKey` and `encodedValue`.
As an example, given a sublevel created with `users = db.sublevel('users', { valueEncoding: 'json' })`, a call like `users.put('isa', { score: 10 })` will emit a `write` event from the sublevel with an `operations` argument that looks like the following. Note that specifics (in data types and encodings) may differ per database, as it depends on which encodings an implementation supports and uses internally. This example assumes that the database uses `'utf8'`.
```js
[{
  type: 'put',
  key: 'isa',
  value: { score: 10 },
  keyEncoding: users.keyEncoding('utf8'),
  valueEncoding: users.valueEncoding('json'),
  encodedKey: 'isa', // No change (was already utf8)
  encodedValue: '{"score":10}' // JSON-encoded
}]
```
Because sublevels encode and then forward operations to their parent database, a separate `write` event will be emitted from `db` with:
```js
[{
  type: 'put',
  key: '!users!isa', // Prefixed
  value: '{"score":10}', // No change
  keyEncoding: db.keyEncoding('utf8'),
  valueEncoding: db.valueEncoding('utf8'),
  encodedKey: '!users!isa',
  encodedValue: '{"score":10}'
}]
```
Similarly, if a `sublevel` option was provided:
```js
await db.batch()
  .del('isa', { sublevel: users })
  .write()
```
We'll get:
```js
[{
  type: 'del',
  key: '!users!isa', // Prefixed
  keyEncoding: db.keyEncoding('utf8'),
  encodedKey: '!users!isa'
}]
```
Lastly, newly added `write` event listeners are only called for subsequently created batches (including chained batches):
```js
const promise = db.batch([{ type: 'del', key: 'abc' }])
db.on('write', listener) // Too late
await promise
```
For the event listener to be called it must be added earlier:
```js
db.on('write', listener)
await db.batch([{ type: 'del', key: 'abc' }])
```
The same is true for `db.put()` and `db.del()`.
### `clear`

Emitted when a `db.clear()` call completed and entries were thus successfully deleted from the database. Receives a single `options` argument, which is the verbatim `options` argument that was passed to `db.clear(options)` (or an empty object if none) before having encoded range options.
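A minimal sketch of listening for it:

```js
db.on('clear', function (options) {
  console.log('Cleared entries with', options)
})

await db.clear({ gt: 'a' })
```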
### `put` (deprecated)

Emitted when a `db.put()` call completed and an entry was thus successfully written to the database. Receives `key` and `value` arguments, which are the verbatim `key` and `value` that were passed to `db.put(key, value)` before having encoded them.

```js
db.on('put', function (key, value) {
  console.log('Wrote', key, value)
})
```
### `del` (deprecated)

Emitted when a `db.del()` call completed and an entry was thus successfully deleted from the database. Receives a single `key` argument, which is the verbatim `key` that was passed to `db.del(key)` before having encoded it.