diff --git a/CONDUCT.md b/CODE_OF_CONDUCT.md
similarity index 100%
rename from CONDUCT.md
rename to CODE_OF_CONDUCT.md
diff --git a/README.md b/README.md
index d390685708..3c32c667ca 100644
--- a/README.md
+++ b/README.md
@@ -2,858 +2,272 @@
[![codecov.io](https://codecov.io/github/zalando/nakadi/coverage.svg?branch=master)](https://codecov.io/github/zalando/nakadi?branch=master)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/785ccd4ab5e34867b760a8b07c3b62f1)](https://www.codacy.com/app/aruha/nakadi?utm_source=www.github.com&utm_medium=referral&utm_content=zalando/nakadi&utm_campaign=Badge_Grade)
-
-
-**Table of Contents**
-
-- [Nakadi Event Broker](#nakadi-event-broker)
-- [Quickstart](#quickstart)
- - [Running a Server](#running-a-server)
- - [Stopping a Server](#stopping-a-server)
- - [Mac OS Docker Settings](#mac-os-docker-settings)
-- [API Overview and Usage](#api-overview-and-usage)
- - [Events and Event Types](#events-and-event-types)
- - [Creating Event Types](#creating-event-types)
- - [Create an Event Type](#create-an-event-type)
- - [List Event Types](#list-event-types)
- - [View an Event Type](#view-an-event-type)
- - [List Partitions for an Event Type](#list-partitions-for-an-event-type)
- - [View a Partition for an Event Type](#view-a-partition-for-an-event-type)
- - [Publishing Events](#publishing-events)
- - [Posting one or more Events](#posting-one-or-more-events)
- - [Consuming Events](#consuming-events)
- - [Opening an Event Stream](#opening-an-event-stream)
- - [Event Stream Structure](#event-stream-structure)
- - [Cursors, Offsets and Partitions](#cursors-offsets-and-partitions)
- - [Event Stream Keepalives](#event-stream-keepalives)
- - [Subscriptions](#subscriptions)
- - [Creating Subscriptions](#creating-subscriptions)
- - [Consuming Events from a Subscription](#consuming-events-from-a-subscription)
- - [Client Rebalancing](#client-rebalancing)
- - [Subscription Cursors](#subscription-cursors)
- - [Committing Cursors](#committing-cursors)
- - [Checking Current Position](#checking-current-position)
- - [Subscription Statistics](#subscription-statistics)
- - [Deleting a Subscription](#deleting-a-subscription)
- - [Getting and Listing Subscriptions](#getting-and-listing-subscriptions)
-- [Build and Development](#build-and-development)
- - [Building](#building)
- - [Dependencies](#dependencies)
- - [What does the project already implement?](#what-does-the-project-already-implement)
-- [Contributing](#contributing)
-
-
-
-## Nakadi Event Broker
-
-The goal of Nakadi (ნაკადი means "stream" in Georgian) is to provide an event broker infrastructure to:
-
-- Abstract event delivery via a secured [RESTful API](api/nakadi-event-bus-api.yaml). This allows microservices teams to maintain service boundaries, and not directly depend on any specific message broker technology. Access to the API can be managed and secured using OAuth scopes.
-
-- Enable convenient development of event-driven applications and asynchronous microservices. Event types can be defined with schemas and managed via a registry. Nakadi also has optional support for events describing business processes and data changes using standard primitives for identity, timestamps, event types, and causality.
-
-- Efficient low latency event delivery. Once a publisher sends an event using a simple HTTP POST, consumers can be pushed to via a streaming HTTP connection, allowing near real-time event processing. The consumer connection has keepalive controls and support for managing stream offsets.
-
-The project also provides compatability with the [STUPS project](https://stups.io/). Additional features that we plan to cover in the future are:
-
-* Discoverability of the resource structures flowing into the broker.
-
-* A managed API that allows consumers to subscribe and have stream offsets stored by the server.
+## [Nakadi Event Broker](https://zalando.github.io/nakadi/)
-* Filtering of events for subscribing consumers.
+Nakadi is a distributed event bus broker that implements a RESTful API abstraction on top of Kafka-like queues.
-* Role based access control to data.
+![Nakadi Deployment Diagram](docs/img/NakadiDeploymentDiagram.png)
-* Support for different streaming technologies and engines. Nakadi currently uses [Apache Kafka](http://kafka.apache.org/) as its broker, but other providers (such as Kinesis) will be possible.
+More detailed information can be found on our [website](http://zalando.github.io/nakadi/).
-More detailed information can be found on the [manual](http://zalando.github.io/nakadi-manual/).
+### Project goal
+The goal of Nakadi (**ნაკადი** means *stream* in Georgian) is to provide an event broker infrastructure to:
-## Quickstart
+- Abstract event delivery via a secured [RESTful API](https://zalando.github.io/nakadi/manual.html#nakadi-event-bus-api).
+
+ This allows microservices teams to maintain service boundaries, and not directly depend on any specific message broker technology.
+ Access can be managed individually for every queue and secured using *OAuth* and custom authorization plugins.
-You can run the project locally using [Docker](https://www.docker.com/). Note that Nakadi requires very recent versions of docker and docker-compose. See [Dependencies](#dependencies) for more information.
+- Enable convenient development of event-driven applications and asynchronous microservices.
-### Running a Server
+ Event types can be defined with [Event type schemas](https://zalando.github.io/nakadi/manual.html#using_event-types)
+ and managed via a registry. All events will be validated against the schema before publishing.
+ This guarantees data quality and consistency for consumers.
+
+- Efficient low latency event delivery.
+
+  Once a publisher sends an event using a simple [HTTP POST](https://zalando.github.io/nakadi/manual.html#using_producing-events),
+  it is pushed to consumers via a [streaming](https://zalando.github.io/nakadi/manual.html#using_consuming-events-lola)
+  HTTP connection, allowing near real-time event processing.
+ The consumer connection has keepalive controls and support for managing stream offsets using
+ [subscriptions](https://zalando.github.io/nakadi/manual.html#using_consuming-events-hila).
+
+### Links
+
+To understand *the big picture*, read
+[Architecture for data integration](https://pages.github.bus.zalan.do/core-platform/docs/architecture/data_integration.html).
+
+Watch the talk [Data Integration in the World of Microservices](https://clusterhq.com/2016/05/20/microservices-zalando/)
+
+### Development status
+
+Nakadi is high-load production ready.
+Zalando uses Nakadi as its central Event Bus Service.
+Nakadi reliably handles traffic from thousands of event types with
+a throughput of hundreds of gigabytes per second.
+The project is in active development. See [CHANGELOG.md](CHANGELOG.md) for details.
+
+#### Features
+
+* Stream:
+  * REST abstraction over Kafka-like queues.
+  * CRUD for event types.
+  * Event batch publishing.
+  * Low-level interface.
+    * Manual client-side partition management is needed.
+    * No support for commits.
+  * High-level interface (Subscription API).
+    * Automatic redistribution of partitions between consuming clients.
+    * Commits must be issued to move server-side cursors.
+* Schema:
+  * Schema registry.
+  * Several event type categories (Undefined, Business, Data Change).
+  * Several partitioning strategies (Random, Hash, User defined).
+  * Event enrichment strategies.
+  * Schema evolution.
+  * Event validation against the event type schema.
+* Security:
+ * OAuth2 authentication.
+ * Per-event type authorization.
+ * Blacklist of users and applications.
+* Operations:
+  * [STUPS](https://stups.io/) platform compatible.
+  * [ZMON](https://zmon.io/) monitoring compatible.
+  * SLO monitoring.
+  * Timelines.
+    * This allows transparently switching production and consumption to a different cluster (tier, region, AZ)
+      without moving actual data and without any service degradation.
+    * Opens the possibility of implementing other streaming technologies and engines besides Kafka
+      (such as AWS Kinesis, Google Pub/Sub, etc.)
+
+Read more about the latest developments in [CHANGELOG.md](CHANGELOG.md).
-From the project's home directory you can start Nakadi via Gradle:
+#### Additional features planned for the future
-```sh
-./gradlew startNakadi
-```
+* Support for different streaming technologies and engines. Nakadi currently uses [Apache Kafka](http://kafka.apache.org/)
+ as its broker, but other providers (such as Kinesis) will be possible.
+* Filtering of events for subscribing consumers.
+* Store old published events forever, using transparent fallback to backup storage such as AWS S3.
+* Separate the internal schema registry into a standalone service.
+* Use additional schema formats and protocols like Avro, protobuf and [others](https://en.wikipedia.org/wiki/Comparison_of_data_serialization_formats).
-This will build the project and run docker compose with 4 services:
+#### Related projects
-- Nakadi (8080)
-- PostgreSQL (5432)
-- Kafka (9092)
-- Zookeeper (2181)
+The [zalando-nakadi](https://github.com/zalando-nakadi/) organisation contains many useful related projects,
+such as:
-### Stopping a Server
+* Client libraries
+* SDK
+* GUI
+* DevOps tools and more
-To stop the running Nakadi:
+## Quickstart
-```sh
-./gradlew stopNakadi
-```
+You can run the project locally using [Docker](https://www.docker.com/).
-### Mac OS Docker Settings
+### Dependencies
-Since Docker for Mac OS runs inside Virtual Box, you will want to expose
-some ports first to allow Nakadi to access its dependencies:
+The Nakadi server is a Java 8 [Spring Boot](http://projects.spring.io/spring-boot/) application.
+It uses [Kafka 0.10.2](http://kafka.apache.org/0102/documentation.html) as its broker and
+ [PostgreSQL 9.5](http://www.postgresql.org/docs/9.5/static/release-9-5.html) as its supporting database.
-```sh
-docker-machine ssh default \
--L 9092:localhost:9092 \
--L 8080:localhost:8080 \
--L 5432:localhost:5432 \
--L 2181:localhost:2181
-```
+Nakadi requires recent versions of docker and docker-compose. In
+particular, docker-compose >= v1.7.0 is required. See [Install Docker
+Compose](https://docs.docker.com/compose/install/) for information on
+installing the most recent docker-compose version.
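+
+As a quick sanity check of the toolchain before building, you can print the installed versions (standard CLI flags; docker-compose must report at least 1.7.0):
+
+```sh
+# Print tool versions; docker-compose should be >= 1.7.0.
+java -version
+docker --version
+docker-compose --version
+```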
+
+The project is built with [Gradle](http://gradle.org).
+The `./gradlew` [wrapper script](http://www.gradle.org/docs/current/userguide/gradle_wrapper.html) will bootstrap
+the right Gradle version if it's not already installed.
+
+[Mac OS specific configuration](https://zalando.github.io/nakadi/manual.html#macos)
-Alternatively you can set up port forwarding on the "default" machine through
-its network settings in the VirtualBox UI. If you get the message "Is the
-docker daemon running on this host?" but you know Docker and VirtualBox are
-running, you might want to run this command:
+### Install
+To get the source, clone the git repository.
```sh
-eval "$(docker-machine env default)"
+git clone https://github.com/zalando/nakadi.git
```
-Note: Docker for Mac OS (previously in beta) version 1.12 (1.12.0 or 1.12.1) currently is not supported due to the [bug](https://github.com/docker/docker/issues/22753#issuecomment-242711639) in networking host configuration.
+### Building
-## API Overview and Usage
+The Gradle setup is fairly standard; the main tasks are:
-### Events and Event Types
+- `./gradlew build`: run a build and test
+- `./gradlew clean`: clean the build
-The Nakadi API allows the publishing and consuming of _events_ over HTTP.
-To do this the producer must register an _event type_ with the Nakadi schema
-registry.
+Some other useful tasks are:
-The event type contains information such as the name, the owning application,
-strategies for partitioning and enriching data, and a JSON schema. Once the
-event type is created, a publishing resource becomes available that will accept
-events for the type, and consumers can also read from the event stream.
+- `./gradlew acceptanceTest`: run the ATs
+- `./gradlew fullAcceptanceTest`: run the ATs in the context of Docker
+- `./gradlew startNakadi`: build Nakadi and start docker-compose services: nakadi, postgresql, zookeeper and kafka
+- `./gradlew stopNakadi`: shutdown docker-compose services
+- `./gradlew startStorages`: start docker-compose services: postgres, zookeeper and kafka (useful for development purposes)
+- `./gradlew stopStorages`: shutdown docker-compose services
-There are three main _categories_ of event type defined by Nakadi -
+For working with an IDE, the `eclipse` IDE task is available and you'll be able to import the `build.gradle` into IntelliJ IDEA directly.
-- Undefined: A free form category suitable for events that are entirely custom to the producer.
+### Running a Server
-- Data: an event that represents a change to a record or other item, or a new item. Change events are associated with a create, update, delete, or snapshot operation.
+From the project's home directory you can start Nakadi via Gradle:
-- Business: an event that is part of, or drives a business process, such as a state transition in a customer order.
+```sh
+./gradlew startNakadi
+```
-The events for the business and data change helper categories follow a
-generic Nakadi event schema as well as a schema custom to the event data. The generic
-schema pre-defines common fields for an event and the custom schema for the event
-is defined when the event type is created. When a JSON event for one of these
-categories is posted to the server, it is expected to conform to the
-combination of the generic schema for the category and to the custom schema defined
-for the event type. This combination is called the _effective schema_ and is
-validated by Nakadi.
+This will build the project and run docker compose with 4 services:
-The undefined category is also required to have a JSON schema on creation,
-but this can be as simple as `{ "\additionalProperties\": true }` to allow arbitrary
-JSON. Unlike the business and data categories, the schema for an undefined type is
-not checked by Nakadi when an event is posted, but it can be used by a consumer
-to validate data on the stream.
+- Nakadi (8080)
+- PostgreSQL (5432)
+- Kafka (9092)
+- Zookeeper (2181)
-### Creating Event Types
+To stop the running Nakadi:
+
+```sh
+./gradlew stopNakadi
+```
-#### Create an Event Type
+## API Usage Quickstart
-An event type can be created by posting to the `event-types` resource.
+Please read the [manual](https://zalando.github.io/nakadi/manual.html) for the full API usage details.
-Each event type must have a unique `name`. If the event type already exists a
-`409 Conflict` response will be returned. Otherwise a successful request will
-result in a `201 Created` response. The exact required fields depend on the
-event type's category, but `name`, `owning_application` and `schema` are always
-expected.
+### Creating Event Types
-The `schema` value should only declare the custom part of the event - the generic
-schema is implicit and doesn't need to be defined. The combination of the two
-(the "effective schema") will be checked when events are submitted for the event type.
+The Nakadi API allows the publishing and consuming of _events_ over HTTP.
+To do this the producer must register an _event type_ with the Nakadi schema
+registry.
-Each event type can have a `default_statistic` object attached. It controls the
-number of partitions of the underlying topic. If you do not provide any value,
-Nakadi will use a sensible default value which may be just a single partition.
-This will effectively disallow parallel reads of subscriptions of this event
-type. The values provided here can not be changed later, so choose them wisely.
-This example shows a `business` category event type with a simple schema for an
-order number -
+This example shows a minimal `undefined` category event type with a wildcard schema:
```sh
curl -v -XPOST http://localhost:8080/event-types -H "Content-type: application/json" -d '{
"name": "order.ORDER_RECEIVED",
"owning_application": "order-service",
- "category": "business",
- "partition_strategy": "hash",
- "partition_key_fields": ["order_number"],
- "enrichment_strategies": ["metadata_enrichment"],
- "default_statistic": {
- "messages_per_minute": 1000,
- "message_size": 5,
- "read_parallelism": 1,
- "write_parallelism": 1
- },
- "schema": {
- "type": "json_schema",
- "schema": "{ \"properties\": { \"order_number\": { \"type\": \"string\" } } }"
- }
-}'
-```
-
-This example shows an `undefined` category event type with a wilcard schema -
-
-```sh
-curl -v -XPOST http://localhost:8080/event-types -H "Content-type: application/json" -d '{
- "name": "undef",
- "owning_application": "jinteki",
- "category": "undefined",
- "partition_strategy": "random",
+ "category": "undefined",
"schema": {
"type": "json_schema",
"schema": "{ \"additionalProperties\": true }"
}
}'
```
+**Note:** This category and schema are not recommended; they should be used only for testing. A more production-like example follows below.
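+
+For comparison, a `business` category event type with a simple schema for an order number can be created like this (a sketch; `default_statistic` and other options are described in the manual, and posting it alongside the example above with the same `name` would return `409 Conflict`):
+
+```sh
+curl -v -XPOST http://localhost:8080/event-types -H "Content-type: application/json" -d '{
+  "name": "order.ORDER_RECEIVED",
+  "owning_application": "order-service",
+  "category": "business",
+  "partition_strategy": "hash",
+  "partition_key_fields": ["order_number"],
+  "enrichment_strategies": ["metadata_enrichment"],
+  "schema": {
+    "type": "json_schema",
+    "schema": "{ \"properties\": { \"order_number\": { \"type\": \"string\" } } }"
+  }
+}'
+```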
-An undefined event does not accept a value for `enrichment_strategies`.
-
-#### List Event Types
-
-```sh
-curl -v http://localhost:8080/event-types
-
-
-HTTP/1.1 200 OK
-Content-Type: application/json;charset=UTF-8
-
-[
- {
- "category": "business",
- "default_statistic": null,
- "enrichment_strategies": ["metadata_enrichment"],
- "name": "order.ORDER_RECEIVED",
- "owning_application": "order-service",
- "partition_key_fields": ["order_number"],
- "partition_strategy": "hash",
- "schema": {
- "schema": "{ \"properties\": { \"order_number\": { \"type\": \"string\" } } }",
- "type": "json_schema"
- }
- }
-]
-```
-
-#### View an Event Type
-
-Each event type registered with Nakadi has a URI based on its `name` -
-
-```sh
-curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED
-
-
-HTTP/1.1 200 OK
-Content-Type: application/json;charset=UTF-8
-
-{
- "category": "business",
- "default_statistic": null,
- "enrichment_strategies": ["metadata_enrichment"],
- "name": "order.ORDER_RECEIVED",
- "owning_application": "order-service",
- "partition_key_fields": ["order_number"],
- "partition_strategy": "hash",
- "schema": {
- "schema": "{ \"properties\": { \"order_number\": { \"type\": \"string\" } } }",
- "type": "json_schema"
- }
-}
-```
-
-#### List Partitions for an Event Type
-
-The partitions for an event type are available via its `/partitions` resource:
-
-```sh
-curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/partitions
-
-
-HTTP/1.1 200 OK
-Content-Type: application/json;charset=UTF-8
-
-[
- {
- "newest_available_offset": "BEGIN",
- "oldest_available_offset": "0",
- "partition": "0"
- }
-]
-```
-
-#### View a Partition for an Event Type
-
-Each partition for an event type has a URI based on its `partition` value:
-
-```sh
-curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/partitions/0
-
-
-HTTP/1.1 200 OK
-Content-Type: application/json;charset=UTF-8
-
-{
- "newest_available_offset": "BEGIN",
- "oldest_available_offset": "0",
- "partition": "0"
-}
-```
-
-### Publishing Events
-
-#### Posting one or more Events
-
-Events for an event type can be published by posting to its "events" collection:
-
-```sh
-curl -v -XPOST http://localhost:8080/event-types/order.ORDER_RECEIVED/events -H "Content-type: application/json" -d '[
- {
- "order_number": "24873243241",
- "metadata": {
- "eid": "d765de34-09c0-4bbb-8b1e-7160a33a0791",
- "occurred_at": "2016-03-15T23:47:15+01:00"
- }
- }, {
- "order_number": "24873243242",
- "metadata": {
- "eid": "a7671c51-49d1-48e6-bb03-b50dcf14f3d3",
- "occurred_at": "2016-03-15T23:47:16+01:00"
- }
- }]'
-
-
-HTTP/1.1 200 OK
-```
-
-The events collection accepts an array of events. As well as the fields defined
-in the event type's schema, the posted event must also contain a `metadata`
-object with an `eid` and `occurred_at` fields. The `eid` is a UUID that uniquely
-identifies an event and the `occurred_at` field identifies the time of creation
-of the Event defined by the producer.
-
-Note that the order of events in the posted array will be the order they are published
-onto the event stream and seen by consumers. They are not re-ordered based on
-their `occurred_at` or other data values.
+Read more in the [manual](https://zalando.github.io/nakadi/manual.html#using_event-types).
### Consuming Events
-#### Opening an Event Stream
-
You can open a stream for an Event Type via the `events` sub-resource:
-```sh
-curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events
-```
-
-#### Event Stream Structure
-
-The stream response groups events into batches. Batches in the response
-are separated by a newline and each batch will be emitted on a single
-line, but a pretty-printed batch object looks like this -
-
-```json
-{
- "cursor": {
- "partition": "0",
- "offset": "4"
- },
- "events": [{
- "order_number": "24873243241",
- "metadata": {
- "eid": "d765de34-09c0-4bbb-8b1e-7160a33a0791",
- "occurred_at": "2016-03-15T23:47:15+01:00"
- }
- }, {
- "order_number": "24873243242",
- "metadata": {
- "eid": "a7671c51-49d1-48e6-bb03-b50dcf14f3d3",
- "occurred_at": "2016-03-15T23:47:16+01:00"
- }
- }]
-}
-```
-
-The `cursor` object describes the partition and the offset for this batch of
-events. The cursor allow clients to checkpoint which events have already been
-consumed and navigate through the stream - individual events in the stream don't
-have cursors. The `events` array contains a list of events that were published in
-the order they were posted by the producer. Each event will contain a `metadata`
-field as well as the custom data defined by the event type's schema.
-
-The HTTP response then will look something like this -
-
```sh
curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events
HTTP/1.1 200 OK
-{"cursor":{"partition":"0","offset":"4"},"events":[{"order_number": "ORDER_001", "metadata": {"eid": "4ae5011e-eb01-11e5-8b4a-1c6f65464fc6", "occurred_at": "2016-03-15T23:56:11+01:00"}}]}
-{"cursor":{"partition":"0","offset":"5"},"events":[{"order_number": "ORDER_002", "metadata": {"eid": "4bea74a4-eb01-11e5-9efa-1c6f65464fc6", "occurred_at": "2016-03-15T23:57:15+01:00"}}]}
-{"cursor":{"partition":"0","offset":"6"},"events":[{"order_number": "ORDER_003", "metadata": {"eid": "4cc6d2f0-eb01-11e5-b606-1c6f65464fc6", "occurred_at": "2016-03-15T23:58:15+01:00"}}]}
-```
-
-#### Cursors, Offsets and Partitions
-
-By default the `events` resource will consume from all partitions of an event
-type and from the end (or "tail") of the stream. To select only particular
-partitions and a position where in the stream to start, you can supply
-an `X-Nakadi-Cursors` header in the request:
-
-```sh
-curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events \
- -H 'X-Nakadi-Cursors: [{"partition": "0", "offset":"12"}]'
-```
-
-The header value is a JSON array of _cursors_. Each cursor in the array
-describes its partition for the stream and an offset to stream from. Note that
-events within the same partition maintain their overall order.
-
-The `offset` value of the cursor allows you select where the in the stream you
-want to consume from. This can be any known offset value, or the dedicated value
-`BEGIN` which will start the stream from the beginning. For example, to read
-from partition `0` from the beginning:
-
-```sh
-curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events \
- -H 'X-Nakadi-Cursors:[{"partition": "0", "offset":"BEGIN"}]'
-```
-
-The details of the partitions and their offsets for an event type are
-available via its `partitions` resource.
-
-#### Event Stream Keepalives
-
-If there are no events to be delivered Nakadi will keep a streaming connection open by
-periodically sending a batch with no events but which contains a `cursor` pointing to
-the current offset. For example:
-
-```sh
-curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events
-
-
-HTTP/1.1 200 OK
-
-{"cursor":{"partition":"0","offset":"6"},"events":[{"order_number": "ORDER_003", "metadata": {"eid": "4cc6d2f0-eb01-11e5-b606-1c6f65464fc6", "occurred_at": "2016-03-15T23:58:15+01:00"}}]}
-{"cursor":{"partition":"0","offset":"6"}}
-{"cursor":{"partition":"0","offset":"6"}}
-{"cursor":{"partition":"0","offset":"6"}}
-{"cursor":{"partition":"0","offset":"6"}}
-```
-
-This can be treated as a keep-alive control for some load balancers.
-
-### Subscriptions
-
-Subscriptions allow clients to consume events, where the Nakadi server store offsets and
-automatically manages reblancing of partitions across consumer clients. This allows clients
-to avoid managing stream state locally.
-
-The typical workflow when using subscriptions is:
-
-1. Create a Subscription specifying the event-types you want to read.
-
-1. Start reading batches of events from the subscription.
-
-1. Commit the cursors found in the event batches back to Nakadi, which will store the offsets.
-
-
-If the connection is closed, and later restarted, clients will get events from
-the point of your last cursor commit. If you need more than one client for your
-subscription to distribute the load you can read the subscription with multiple
-clients and Nakadi will balance the load across them.
-
-The following sections provide more detail on the Subscription API and basic
-examples of Subscription API creation and usage:
-
- - [Creating Subscriptions](#creating-subscriptions): How to create a new Subscription and select the event types.
- - [Consuming Events from a Subscription](#consuming-events-from-a-subscription): How to connect to and consume batches from a Susbcription stream.
- - [Client Rebalancing](#client-rebalancing): Describes how clients for a Subscription are automatically assigned partitions, and how the API's _at-least-once_ delivery guarantee works.
- - [Subscription Cursors](#subscription-cursors): Describes the structure of a Subscription batch cursor.
- - [Committing Cursors](#committing-cursors): How to send offset positions for a partition to Nakadi for storage.
- - [Checking Current Position](#checking-current-position): How to determine the current offsets for a Subscription.
- - [Subscription Statistics](#subscription-statistics): Viewing metrics for a Subscription.
- - [Deleting a Subscription](#deleting-a-subscription): How to remove a Subscription.
- - [Getting and Listing Subscriptions](#getting-and-listing-subscriptions): How to view individual an subscription and list existing susbcriptions.
-
-For a more detailed description and advanced configuration options please take a look at Nakadi [swagger](api/nakadi-event-bus-api.yaml) file.
-
-#### Creating Subscriptions
-
-A Subscription can be created by posting to the `/subscriptions` collection resource:
-
-```sh
-curl -v -XPOST "http://localhost:8080/subscriptions" -H "Content-type: application/json" -d '{
- "owning_application": "order-service",
- "event_types": ["order.ORDER_RECEIVED"]
- }'
-```
-
-The response returns the whole Subscription object that was created, including the server generated `id` field:
-
-```sh
-HTTP/1.1 201 Created
-Content-Type: application/json;charset=UTF-8
-
-{
- "owning_application": "order-service",
- "event_types": [
- "order.ORDER_RECEIVED"
- ],
- "consumer_group": "default",
- "read_from": "end",
- "id": "038fc871-1d2c-4e2e-aa29-1579e8f2e71f",
- "created_at": "2016-09-23T16:35:13.273Z"
-}
-```
-
-#### Consuming Events from a Subscription
-
-Consuming events is done by sending a GET request to the Subscriptions's event resource (`/subscriptions/{subscription-id}/events`):
-
-```sh
-curl -v -XGET "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/events"
-```
-
-The response is a stream that groups events into JSON batches separated by an endline (`\n`) character. The output looks like this:
-
-```sh
-HTTP/1.1 200 OK
-X-Nakadi-StreamId: 70779f46-950d-4e48-9fca-10c413845e7f
-Transfer-Encoding: chunked
-
-{"cursor":{"partition":"5","offset":"543","event_type":"order.ORDER_RECEIVED","cursor_token":"b75c3102-98a4-4385-a5fd-b96f1d7872f2"},"events":[{"metadata":{"occurred_at":"1996-10-15T16:39:57+07:00","eid":"1f5a76d8-db49-4144-ace7-e683e8ff4ba4","event_type":"aruha-test-hila","partition":"5","received_at":"2016-09-30T09:19:00.525Z","flow_id":"blahbloh"},"data_op":"C","data":{"order_number":"abc","id":"111"},"data_type":"blah"},"info":{"debug":"Stream started"}]}
-{"cursor":{"partition":"5","offset":"544","event_type":"order.ORDER_RECEIVED","cursor_token":"a28568a9-1ca0-4d9f-b519-dd6dd4b7a610"},"events":[{"metadata":{"occurred_at":"1996-10-15T16:39:57+07:00","eid":"1f5a76d8-db49-4144-ace7-e683e8ff4ba4","event_type":"aruha-test-hila","partition":"5","received_at":"2016-09-30T09:19:00.741Z","flow_id":"blahbloh"},"data_op":"C","data":{"order_number":"abc","id":"111"},"data_type":"blah"}]}
-{"cursor":{"partition":"5","offset":"545","event_type":"order.ORDER_RECEIVED","cursor_token":"a241c147-c186-49ad-a96e-f1e8566de738"},"events":[{"metadata":{"occurred_at":"1996-10-15T16:39:57+07:00","eid":"1f5a76d8-db49-4144-ace7-e683e8ff4ba4","event_type":"aruha-test-hila","partition":"5","received_at":"2016-09-30T09:19:00.741Z","flow_id":"blahbloh"},"data_op":"C","data":{"order_number":"abc","id":"111"},"data_type":"blah"}]}
-{"cursor":{"partition":"0","offset":"545","event_type":"order.ORDER_RECEIVED","cursor_token":"bf6ee7a9-0fe5-4946-b6d6-30895baf0599"}}
-{"cursor":{"partition":"1","offset":"545","event_type":"order.ORDER_RECEIVED","cursor_token":"9ed8058a-95be-4611-a33d-f862d6dc4af5"}}
-```
-
-Each batch contains the following fields:
-
-- `cursor`: The cursor of the batch which should be used for committing the batch.
-
-- `events`: The array of events of this batch.
-
-- `info`: An optional field that can hold useful information (e.g. the reason why the stream was closed by Nakadi).
-
-Please also note that when stream is started, the client receives a header `X-Nakadi-StreamId` which must be used when committing cursors.
-
-To see a full list of parameters that can be used to control a stream of events, please see
-an API specification in [swagger](api/nakadi-event-bus-api.yaml) file.
-
-#### Client Rebalancing
-
-If you need more than one client for your subscription to distribute load or increase throughput - you can read the subscription with multiple clients and Nakadi will automatically balance the load across them.
-
-The balancing unit is the partition, so the number of clients of your subscription can't be higher
-than the total number of all partitions of the event-types of your subscription.
-
-For example, suppose you had a subscription for two event-types `A` and `B`, with 2 and 4 partitions respectively. If you start reading events with a single client, then the client will get events from all 6 partitions. If a second client connects, then 3 partitions will be transferred from first client to a second client, resulting in each client consuming 3 partitions. In this case, the maximum possible number of clients for the subscription is 6, where each client will be allocated 1 partition to consume.
-
-The Subscription API provides a guarantee of _at-least-once_ delivery. In practice this means clients can see a duplicate event in the case where there are errors [committing events](#committing-cursors). However the events which were successfully committed will not be resent.
-
-A useful technique to detect and handle duplicate events on consumer side is to be idempotent and to check `eid` field of event metadata. Note: `eid` checking is not possible using the "undefined" category, as it's only supplied in the "business" and "data" categories.
-
-
-#### Subscription Cursors
-
-The cursors in the Subscription API have the following structure:
-
-```json
-{
- "partition": "5",
- "offset": "543",
- "event_type": "order.ORDER_RECEIVED",
- "cursor_token": "b75c3102-98a4-4385-a5fd-b96f1d7872f2"
-}
-```
-
-The fields are:
-
-- `partition`: The partition this batch belongs to. A batch can only have one partition.
-
-- `offset`: The offset of this batch. The offset is server defined and opaque to the client - clients should not try to infer or assume a structure.
-
-- `event_type`: Specifies the event-type of the cursor (as in one stream there can be events of different event-types);
-
-- `cursor_token`: The cursor token generated by Nakadi.
-
-#### Committing Cursors
-
-Cursors can be committed by posting to Subscription's cursor resource (`/subscriptions/{subscriptionId}/cursors`), for example:
-
-```sh
-curl -v -XPOST "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/cursors"\
- -H "X-Nakadi-StreamId: ae1e39c3-219d-49a9-b444-777b4b03e84c" \
- -H "Content-type: application/json" \
- -d '{
- "items": [
- {
- "partition": "0",
- "offset": "543",
- "event_type": "order.ORDER_RECEIVED",
- "cursor_token": "b75c3102-98a4-4385-a5fd-b96f1d7872f2"
- },
- {
- "partition": "1",
- "offset": "923",
- "event_type": "order.ORDER_RECEIVED",
- "cursor_token": "a28568a9-1ca0-4d9f-b519-dd6dd4b7a610"
- }
- ]
- }'
-```
-
-Please be aware that `X-Nakadi-StreamId` header is required when doing a commit. The value should be the same as you get in `X-Nakadi-StreamId` header when opening a stream of events. Also, each client can commit only the batches that were sent to it.
-
-The possible successful responses for a commit are:
-
-- `204`: cursors were successfully committed and offset was increased.
-
-- `200`: cursors were committed but at least one of the cursors didn't increase the offset as it was less or equal to already committed one. In a case of this response code user will get a json in a response body with a list of cursors and the results of their commits.
-
-The timeout for commit is 60 seconds. If you open the stream, read data and don't commit
-anything for 60 seconds - the stream connection will be closed from Nakadi side. Please note
-that if there are no events available to send and you get only empty batches - there is no need
-to commit, Nakadi will close connection only if there is some uncommitted data and no
-commits happened for 60 seconds.
-
-If the connection is closed for some reason then the client still has 60 seconds to commit the events it received from the moment when the events were sent. After that the session
-will be considered closed and it will be not possible to do commits with that `X-Nakadi-StreamId`.
-If the commit was not done - then the next time you start reading from a subscription you
-will get data from the last point of your commit, and you will again receive the events you
-haven't committed.
-
-When a rebalance happens and a partition is transferred to another client - the commit timeout
-of 60 seconds saves the day again. The first client will have 60 seconds to do the commit for that partition, after that the partition is started to stream to a new client. So if the commit wasn't done in 60 seconds then the streaming will start from a point of last successful commit. In other case if the commit was done by the first client - the data from this partition will be immediately streamed to second client (because there is no uncommitted data left and there is no need to wait any more).
-
-It is not necessary to commit each batch. When the cursor is committed, all events that
-are before this cursor in the partition will also be considered committed. For example suppose the offset was at `e0` in the stream below,
-
-```
-partition: [ e0 | e1 | e2 | e3 | e4 | e5 | e6 | e7 | e8 | e9 ]
- offset--^
-```
-
-and the stream sent back three batches to the client, where the client committed batch 3 but not batch 1 or batch 2,
-
-```
-partition: [ e0 | e1 | e2 | e3 | e4 | e5 | e6 | e7 | e8 | e9 ]
- offset--^
- |--- batch1 ---|--- batch2 ---|--- batch3 ---|
- | | |
- v | |
- [ e1 | e2 | e3 ] | |
- v |
- [ e4 | e5 | e6 ] |
- v
- [ e7 | e8 | e9 ]
-
-client: cursor commit --> |--- batch3 ---|
-```
-
-then the offset will be moved all the way up to `e9` implicitly committing all the events that were in the previous batches 1 and 2,
-
-```
-partition: [ e0 | e1 | e2 | e3 | e4 | e5 | e6 | e7 | e8 | e9 ]
- ^-- offset
+{"cursor":{"partition":"0","offset":"82376-000087231"},"events":[{"order_number": "ORDER_001"}]}
+{"cursor":{"partition":"0","offset":"82376-000087232"}}
+{"cursor":{"partition":"0","offset":"82376-000087232"},"events":[{"order_number": "ORDER_002"}]}
+{"cursor":{"partition":"0","offset":"82376-000087233"},"events":[{"order_number": "ORDER_003"}]}
```
-
+You will see the events appear when you publish them from another console, for example.
+Records without an `events` field are keep-alive messages.
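+
+By default the stream is consumed from the tail of all partitions. To select particular partitions and a starting position, you can supply an `X-Nakadi-Cursors` header (a sketch; the dedicated offset `BEGIN` reads a partition from its beginning):
+
+```sh
+# Read partition 0 from the very beginning of the stream.
+curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events \
+    -H 'X-Nakadi-Cursors: [{"partition": "0", "offset": "BEGIN"}]'
+```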
-#### Checking Current Position
-
-You can also check the current position of your subscription:
-
-```sh
-curl -v -XGET "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/cursors"
-```
-
-The response will be a list of current cursors that reflect the last committed offsets:
-
-```
-HTTP/1.1 200 OK
-{
- "items": [
- {
- "partition": "0",
- "offset": "8361",
- "event_type": "order.ORDER_RECEIVED",
- "cursor_token": "35e7480a-ecd3-488a-8973-3aecd3b678ad"
- },
- {
- "partition": "1",
- "offset": "6214",
- "event_type": "order.ORDER_RECEIVED",
- "cursor_token": "d1e5d85e-1d8d-4a22-815d-1be1c8c65c84"
- }
- ]
-}
-```
-
-#### Subscription Statistics
+**Note:** This is the [low-level API](https://zalando.github.io/nakadi/manual.html#using_consuming-events-lola) and should be
+used only for debugging. It is not recommended for production systems.
+For production systems, please use the [Subscriptions API](https://zalando.github.io/nakadi/manual.html#using_consuming-events-hila), sketched below.
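+
+A minimal Subscription API round trip looks like this (a sketch; the subscription `id`, `X-Nakadi-StreamId` and cursor values below are illustrative):
+
+```sh
+# 1. Create a subscription; the response contains a server-generated "id".
+curl -v -XPOST "http://localhost:8080/subscriptions" -H "Content-type: application/json" -d '{
+    "owning_application": "order-service",
+    "event_types": ["order.ORDER_RECEIVED"]
+  }'
+
+# 2. Stream batches from the subscription; note the X-Nakadi-StreamId response header.
+curl -v -XGET "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/events"
+
+# 3. Commit the cursor of a received batch, echoing back the X-Nakadi-StreamId header.
+curl -v -XPOST "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/cursors" \
+    -H "X-Nakadi-StreamId: ae1e39c3-219d-49a9-b444-777b4b03e84c" \
+    -H "Content-type: application/json" \
+    -d '{
+      "items": [{
+        "partition": "0",
+        "offset": "543",
+        "event_type": "order.ORDER_RECEIVED",
+        "cursor_token": "b75c3102-98a4-4385-a5fd-b96f1d7872f2"
+      }]
+    }'
+```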
-The API also provides statistics on your subscription:
-
-```sh
-curl -v -XGET "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/stats"
-```
-
-The output will contain the statistics for all partitions of the stream:
-
-```
-HTTP/1.1 200 OK
-{
- "items": [
- {
- "event_type": "order.ORDER_RECEIVED",
- "partitions": [
- {
- "partition": "0",
- "state": "reassigning",
- "unconsumed_events": 2115,
- "stream_id": "b75c3102-98a4-4385-a5fd-b96f1d7872f2"
- },
- {
- "partition": "1",
- "state": "assigned",
- "unconsumed_events": 1029,
- "stream_id": "ae1e39c3-219d-49a9-b444-777b4b03e84c"
- }
- ]
- }
- ]
-}
-```
-
-#### Deleting a Subscription
-
-To delete a Subscription, send a DELETE request to the Subscription resource using its `id` field (`/subscriptions/{id}`):
-
-```sh
-curl -v -X DELETE "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f"
-```
-
-Successful response:
-
-```
-HTTP/1.1 204 No Content
-```
-
-#### Getting and Listing Subscriptions
-
-To view a Subscription send a GET request to the Subscription resource resource using its `id` field (`/subscriptions/{id}`): :
-
-```sh
-curl -v -XGET "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f"
-```
-
-Successful response:
-
-```
-HTTP/1.1 200 OK
-{
- "owning_application": "order-service",
- "event_types": [
- "order.ORDER_RECEIVED"
- ],
- "consumer_group": "default",
- "read_from": "end",
- "id": "038fc871-1d2c-4e2e-aa29-1579e8f2e71f",
- "created_at": "2016-09-23T16:35:13.273Z"
-}
-```
+### Publishing Events
-To get a list of subscriptions send a GET request to the Subscription collection resource:
+Events for an event type can be published by posting to its "events" collection:
```sh
-curl -v -XGET "http://localhost:8080/subscriptions"
-```
+curl -v -XPOST http://localhost:8080/event-types/order.ORDER_RECEIVED/events \
+  -H "Content-type: application/json" \
+  -d '[{
+    "order_number": "24873243241"
+  }, {
+    "order_number": "24873243242"
+  }]'
-Example answer:
+HTTP/1.1 200 OK
```
-HTTP/1.1 200 OK
-{
- "items": [
- {
- "owning_application": "order-service",
- "event_types": [
- "order.ORDER_RECEIVED"
- ],
- "consumer_group": "default",
- "read_from": "end",
- "id": "038fc871-1d2c-4e2e-aa29-1579e8f2e71f",
- "created_at": "2016-09-23T16:35:13.273Z"
- }
- ],
- "_links": {
- "next": {
- "href": "/subscriptions?offset=20&limit=20"
- }
- }
-}
-```
-
-It's possible to filter the list with the following parameters: `event_type`, `owning_application`.
-Also, the following pagination parameters are available: `offset`, `limit`.
-
-
-## Build and Development
-### Building
+Read more in the [manual](https://zalando.github.io/nakadi/manual.html#using_producing-events)
-The project is built with [Gradle](http://gradle.org). The `./gradlew`
-[wrapper script](http://www.gradle.org/docs/current/userguide/gradle_wrapper.html) will bootstrap the right Gradle version if it's not already installed.
+## Contributing
-The gradle setup is fairly standard, the main tasks are:
+Nakadi accepts contributions from the open-source community.
-- `./gradlew build`: run a build and test
-- `./gradlew clean`: clean down the build
+Please read [CONTRIBUTING.md](CONTRIBUTING.md).
-Some other useful tasks are:
+Please also note our [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md).
-- `./gradlew acceptanceTest`: run the ATs
-- `./gradlew fullAcceptanceTest`: run the ATs in the context of Docker
-- `./gradlew startNakadi`: build Nakadi and start docker-compose services: nakadi, postgresql, zookeeper and kafka
-- `./gradlew stopNakadi`: shutdown docker-compose services
-- `./gradlew startStorages`: start docker-compose services: postgres, zookeeper and kafka (useful for development purposes)
-- `./gradlew stopStorages`: shutdown docker-compose services
+## Contact
-For working with an IDE, the `eclipse` IDE task is available and you'll be able to import the `build.gradle` into Intellij IDEA directly.
+This [email address](MAINTAINERS) serves as the main contact address for this project.
-### Dependencies
+Bug reports and feature requests are more likely to be addressed
+if posted as [issues](https://github.com/zalando/nakadi/issues) here on GitHub.
-The Nakadi server is a Java 8 [Spring Boot](http://projects.spring.io/spring-boot/) application. It uses [Kafka 0.9](http://kafka.apache.org/090/documentation.html) as its broker and [PostgreSQL 9.5](http://www.postgresql.org/docs/9.5/static/release-9-5.html) as its supporting database.
+## License
-Nakadi requires recent versions of docker and docker-compose. In
-particular, docker-compose >= v1.7.0 is required. See [Install Docker
-Compose](https://docs.docker.com/compose/install/) for information on
-installing the most recent docker-compose version.
+Please read the full [LICENSE](LICENSE).
-### What does the project already implement?
+The MIT License (MIT) Copyright © 2015 Zalando SE, https://tech.zalando.com
-* [x] REST abstraction over Kafka-like queues
-* [x] creation of event types
-* [x] low-level interface
- * manual client side partition management is needed
- * no support of commits
-* [x] high-level interface (Subscription API)
- * automatic redistribution of partitions between consuming clients
- * commits should be issued to move server-side cursors
-* [ ] Support of event filtering per subscriptions
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
+documentation files (the “Software”), to deal in the Software without restriction, including without limitation
+the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
+to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-## Contributing
+The above copyright notice and this permission notice shall be included in all copies or substantial portions
+of the Software.
-Nakadi accepts contributions from the open-source community. Please see the [issue tracker](https://github.com/zalando/nakadi/issues) for things to work on.
+THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
+TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-Before making a contribution, please let us know by posting a comment to the relevant issue. And if you would like to propose a new feature, do start a new issue explaining the feature you’d like to contribute.
diff --git a/docs/README.md b/docs/README.md
index 9830945085..2ad504dae4 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -64,7 +64,6 @@ Then you can see the result in the browser on `http://localhost:4000`
Every changes in the source will automatically rebuild documentation
so you only need to refresh the browser page.
-
## Acknowledgments
The template based on [swaggyll](https://github.com/hauptrolle/swaggyll) but heavily modified.
diff --git a/docs/_documentation/architecture_event-schema.md b/docs/_documentation/architecture_event-schema.md
deleted file mode 100644
index 73752a54d5..0000000000
--- a/docs/_documentation/architecture_event-schema.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: Event schema
-position: 100
----
-
-Event schema
-============
-
-`Event` is the core entity of the event processing system. The main goal of the standartization
-of that format is to have a transparent way to exchange events between distributed applications.
-
-DRAFT JSON-Schema definitions:
-```yaml
-definitions:
- Event:
- type: object
- description: |
- This is the most general representation of an event, that can be processed by Nakadi.
- It should be used as a base definition for all events, that flow through Nakadi by extending attributes of this object type.
- required:
- - event
- - partitioning_key
- - meta_data
- properties:
- event:
- type: string
- example: "https://resource-events.zalando.com/ResourceCreated"
- partitioning_key:
- type: string
- example: "ARTICLE:ABC123XXX-001"
- meta_data:
- $ref: '#/definitions/EventMetaData'
-
- EventMetaData:
- type: object
- required: [ id, created ]
- properties:
- id: { type: string, format: uuid }
- created: { type: string, format: data-time }
- root_id: { type: string, format: uuid }
- parent_id: { type: string, format: uuid }
- scopes:
- type: array
- items:
- type: string
-```
diff --git a/docs/_documentation/architecture_timelines.md b/docs/_documentation/architecture_timelines.md
index d30d1268d7..72462595c1 100644
--- a/docs/_documentation/architecture_timelines.md
+++ b/docs/_documentation/architecture_timelines.md
@@ -3,14 +3,14 @@ title: Timelines
position: 101
---
-Timelines
----------
+## Timelines
+
This document covers Timelines internals. It's meant to explain how
timelines work, to help you understand the code and what each part of
it contributes to the overall picture.
-# Fake timeline: your first timeline
+### Fake timeline: your first timeline
Before timelines, Nakadi would connect to a single Kafka cluster,
which used to be specified in the application yaml file. This was a
@@ -32,66 +32,66 @@ migration process is as follow:
storage and different topic is going to be created by Nakadi for
this event type.
-# Timeline creation
+### Timeline creation
Timeline creation is coordinated through a series of locks and
barriers using Zookeeper. Following we depict an example of what the
ZK datastructure looks like at each step.
-## Initial state
+#### Initial state
Every time a Nakadi application is launched, it tries to create the
following ZK structure:
-```
-- timelines
- + - lock lock for timeline versions synchronization
- + - version: {version} monotonically incremented long value (version of timelines configuration)
- + - locked_et
- + - nodes nakadi nodes
- + - {node1}: {version} Each nakadi node exposes the version used on this node
- + - {node2}: {version}
+```yaml
+timelines:
+  lock:                # lock for timeline versions synchronization
+  version: {version}   # monotonically incremented long value (version of the timelines configuration)
+  locked_et:
+  nodes:               # Nakadi nodes
+    node1: {version}   # each Nakadi node exposes the version used on this node
+    node2: {version}
```
In order to not override the initial structure, due to concurrency,
each instance needs to take the lock `/nakadi/timelines/lock` before
executing.
-## Start timeline creation for et_1
+#### Start timeline creation for et_1
When a new timeline creation is initiated, the first step is to
acquire a lock to update timelines for et_1 by creating an ephemeral
node at `/timelines/locked_et/et_1`.
-```
-- timelines
- + - lock
- + - version: 0
- + - locked_et:
- + - et_1
- + - nodes
- + - node1: 0
- + - node2: 0
+```yaml
+timelines:
+  lock:
+  version: 0
+  locked_et:
+    et_1:
+  nodes:
+    node1: 0
+    node2: 0
```
-## Notify all Nakadi nodes about change: the version barrier
+#### Notify all Nakadi nodes about change: the version barrier
Next, the instance coordinating the timeline creation bumps the
version node, which all Nakadi instances are listening to changes, so
they are notified when something changes.
-```
-- timelines
- + - lock
- + - version: 1 # this is incremented by 1
- + - locked_et:
- + - et_1
- + - nodes
- + - node1: 0
- + - node2: 0
+```yaml
+timelines:
+  lock:
+  version: 1           # this is incremented by 1
+  locked_et:
+    et_1:
+  nodes:
+    node1: 0
+    node2: 0
```
-## Wait for all nodes to react to the new version
+#### Wait for all nodes to react to the new version
Each Nakadi instance watches the value of the
`/nakadi/timelines/version/` node. When it changes, each instance
@@ -102,18 +102,19 @@ Once each instance has updated its local list of locked event types,
it bumps its own version, to let the timeline creator initiator know
that it can proceed.
-```
-- timelines
- + - lock
- + - version: 1
- + - locked_et:
- + - et_1
- + - nodes
- + - node1: 1 # each instance updates its own version
- + - node2: 1
+
+```yaml
+timelines:
+  lock:
+  version: 1
+  locked_et:
+    et_1:
+  nodes:
+    node1: 1           # each instance updates its own version
+    node2: 1
```
-## Proceed with timeline creation
+#### Proceed with timeline creation
Once all instances reacted, the creation proceeds with the initiator
inserting the necessary database entries in the timelines table, and
@@ -122,40 +123,43 @@ storage. It also creates a topic in the new storage. Be aware that if
a timeline partition has never been used, the offset stored is -1. If
it has a single event, the offset is zero and so on.
-## Remove lock and notify all instances again
+#### Remove lock and notify all instances again
Following the same logic for initiating the creation of a timeline,
locks are deleted and version is bumped. All Nakadi instances react by
removing their local locks and switching timeline if necessary.
-```
-- timelines
- + - lock
- + - version: 2
- + - locked_et:
- + - nodes
- + - node1: 1
- + - node2: 1
+```yaml
+timelines:
+  lock:
+  version: 2
+  locked_et:
+  nodes:
+    node1: 1
+    node2: 1
```
+
After every instance reacted, it should look like:
-```
-- timelines
- + - lock
- + - version: 2
- + - locked_et:
- + - nodes
- + - node1: 2
- + - node2: 2
+```yaml
+timelines:
+  lock:
+  version: 2
+  locked_et:
+  nodes:
+    node1: 2           # each instance updates its own version
+    node2: 2
```
-## Done
+#### Done
All done here. A new timeline has been created successfully. All
operations are logged so in case you need to debug things, just take a
look at INFO level logs.
-# Cursors
+
diff --git a/docs/_documentation/developing.md b/docs/_documentation/developing.md
index 3a7ee5f522..cda9ba38d8 100644
--- a/docs/_documentation/developing.md
+++ b/docs/_documentation/developing.md
@@ -9,7 +9,6 @@ position: 13
Nakadi is hosted on Github - [zalando/nakadi](https://github.com/zalando/nakadi/) and you can clone or fork it from there.
-
## Building
The project is built with [Gradle](https://gradle.org).
@@ -23,7 +22,6 @@ The gradle setup is fairly standard, the main dev tasks are:
Pull requests and master are built using Travis CI and you can see the build history [here](https://travis-ci.org/zalando/nakadi).
-
## Running Tests
There are a few build commands for testing -
@@ -32,7 +30,6 @@ There are a few build commands for testing -
- `./gradlew acceptanceTest`: will run the acceptance tests
- `./gradlew fullAcceptanceTest`: will run the ATs in the context of Docker
-
## Running Containers
There are a few build commands for running Docker -
@@ -41,7 +38,6 @@ There are a few build commands for running Docker -
- `./gradlew stopAndRemoveDockerContainer`: shutdown the docker processes
- `./gradlew startStoragesInDocker`: start the storage container that runs Kafka and PostgreSQL. This is handy for running Nakadi directly or in your IDE.
-
## IDE Setup
For working with an IDE, the `./gradlew eclipse` IDE task is available and you'll be able to import the `build.gradle` into Intellij IDEA directly.
diff --git a/docs/_documentation/faq.md b/docs/_documentation/faq.md
index 0ab60d6f26..a1a3449a4b 100644
--- a/docs/_documentation/faq.md
+++ b/docs/_documentation/faq.md
@@ -7,22 +7,22 @@ position: 14
## Table of Contents
-- [How long will events be persisted for?](#how-long-will-events-be-persisted-for)
-- [How do I define how long will events be persisted for?](#how-do-i-define-how-long-will-events-be-persisted-for)
-- [How many partitions will an event type be given?](#how-many-partitions-will-an-event-type-be-given)
-- [How do I configure the number of partitions?](#how-do-i-configure-the-number-of-partitions)
-- [Which partitioning strategy should I use?](#which-partitioning-strategy-should-i-use)
-- [How can I keep track of a position in a stream?](#how-can-i-keep-track-of-a-position-in-a-stream)
-- [What's an effective schema?](#whats-an-effective-schema)
-- [Nakadi isn't validating metadata and/or event identifiers, what's going on?](#nakadi-isnt-validating-metadata-andor-event-identifiers-whats-going-on)
-- [What clients are available?](#what-clients-are-available)
-- [How do I disable OAuth for local development?](#how-do-i-disable-oauth-for-local-development)
-- [I want to send arbitrary JSON, how do I avoid defining a JSON Schema?](#i-want-to-send-arbitrary-json-how-do-i-avoid-defining-a-json-schema)
-- [Can I post something other than JSON as an event?](#can-i-post-something-other-than-json-as-an-event)
-- [I get the message "Is the docker daemon running on this host?" - Help!](#i-get-the-message-is-the-docker-daemon-running-on-this-host---help)
-- [What's the reason for newest available offset being bigger than oldest offset?](#whats-the-reason-for-newest-available-offset-being-bigger-than-oldest-offset)
-- [Does Nakadi support compression?](#does-nakadi-support-compression)
-- [How do I contribute to the project?](#how-do-i-contribute-to-the-project)
+- [How long will events be persisted for?](#how-long-will-events-be-persisted-for-)
+- [How do I define how long will events be persisted for?](#how-do-i-define-how-long-will-events-be-persisted-for-)
+- [How many partitions will an event type be given?](#how-many-partitions-will-an-event-type-be-given-)
+- [How do I configure the number of partitions?](#how-do-i-configure-the-number-of-partitions-)
+- [Which partitioning strategy should I use?](#which-partitioning-strategy-should-i-use-)
+- [How can I keep track of a position in a stream?](#how-can-i-keep-track-of-a-position-in-a-stream-)
+- [What's an effective schema?](#whats-an-effective-schema-)
+- [Nakadi isn't validating metadata and/or event identifiers, what's going on?](#nakadi-isnt-validating-metadata-andor-event-identifiers-whats-going-on-)
+- [What clients are available?](#what-clients-are-available-)
+- [How do I disable OAuth for local development?](#how-do-i-disable-oauth-for-local-development-)
+- [I want to send arbitrary JSON, how do I avoid defining a JSON Schema?](#i-want-to-send-arbitrary-json-how-do-i-avoid-defining-a-json-schema-)
+- [Can I post something other than JSON as an event?](#can-i-post-something-other-than-json-as-an-event-)
+- [I get the message "Is the docker daemon running on this host?" - Help!](#i-get-the-message--is-the-docker-daemon-running-on-this-host-----help-)
+- [What's the reason for newest available offset being bigger than oldest offset?](#whats-the-reason-for-newest-available-offset-being-bigger-than-oldest-offset-)
+- [Does Nakadi support compression?](#does-nakadi-support-compression-)
+- [How do I contribute to the project?](#how-do-i-contribute-to-the-project-)
----
@@ -122,4 +122,4 @@ The server will accept gzip encoded events when posted. On the consumer side, if
#### How do I contribute to the project?
-Nakadi accepts contributions from the open-source community. Please see the [project issue tracker](https://github.com/zalando/nakadi/issues) for things to work on. Before making a contribution, please let us know by posting a comment to the relevant issue. And if you would like to propose a new feature, do start a new issue explaining the feature you’d like to contribute.
+Nakadi accepts contributions from the open-source community. Please see [CONTRIBUTING.md](https://github.com/zalando/nakadi/blob/master/CONTRIBUTING.md).
diff --git a/docs/_documentation/getting-started.md b/docs/_documentation/getting-started.md
index 3dcc471886..0fffe2488a 100644
--- a/docs/_documentation/getting-started.md
+++ b/docs/_documentation/getting-started.md
@@ -39,7 +39,7 @@ To stop the running Nakadi:
### Notes
If you're having trouble getting started, you might find an answer in the
-[Frequently Asked Questions (FAQ)](#faq) section of the documentation.
+[Frequently Asked Questions (FAQ)](#f-a-q) section of the documentation.
#### Ports
@@ -53,10 +53,11 @@ Some ports need to be available to run the service:
They allow the services to communicate with each other and should not be used
by other applications.
-#### Mac OS and Docker
+
+### Mac OS Docker Settings
Since Docker for Mac OS runs inside VirtualBox, you will want to expose
-some ports first to allow Nakadi to access its dependencies -
+some ports first to allow Nakadi to access its dependencies:
```sh
docker-machine ssh default \
@@ -67,6 +68,17 @@ docker-machine ssh default \
```
Alternatively you can set up port forwarding on the "default" machine through
-its network settings in the VirtualBox UI, which look like this -
+its network settings in the VirtualBox UI.
![vbox](./img/vbox.png)
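+
+If you prefer the command line, the same forwarding can be set up with
+`VBoxManage` (a sketch; the rule name and the port pair are just examples):
+
+```sh
+# Forward host port 8080 to port 8080 of the "default" machine (NAT adapter 1)
+VBoxManage controlvm "default" natpf1 "nakadi,tcp,127.0.0.1,8080,,8080"
+```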
+
+If you get the message "Is the
+docker daemon running on this host?" but you know Docker and VirtualBox are
+running, you might want to run this command:
+
+```sh
+eval "$(docker-machine env default)"
+```
+
+**Note:** Docker for Mac OS (previously in beta) version 1.12 (1.12.0 or 1.12.1) is currently not supported due to a [bug](https://github.com/docker/docker/issues/22753#issuecomment-242711639) in the networking host configuration.
+
diff --git a/docs/_documentation/intro.md b/docs/_documentation/intro.md
index 35c7409e9d..1bde999a99 100644
--- a/docs/_documentation/intro.md
+++ b/docs/_documentation/intro.md
@@ -5,27 +5,101 @@ position: 1
## Nakadi Event Broker
-The goal of Nakadi (ნაკადი means "stream" in Georgian) is to provide an event broker infrastructure to:
+The goal of Nakadi (**ნაკადი** means "stream" in Georgian) is to provide an event broker infrastructure to:
-#### RESTful
+- Abstract event delivery via a secured [RESTful API](https://zalando.github.io/nakadi/manual.html#nakadi-event-bus-api).
+
+ This allows microservices teams to maintain service boundaries, and not directly depend on any specific message broker technology.
+ Access can be managed individually for every Event Type and secured using *OAuth* and custom authorization plugins.
-Abstract event delivery via a secured [RESTful API](#nakadi-event-bus-api). This allows microservices teams to maintain service boundaries, and not directly depend on any specific message broker technology. Access to the API can be managed and secured using OAuth scopes.
+- Enable convenient development of event-driven applications and asynchronous microservices.
-#### JSON Schema
-
-Enable convenient development of event-driven applications and asynchronous microservices. Event types can be defined with schemas and managed via a registry. Nakadi also has optional support for events describing business processes and data changes using standard primitives for identity, timestamps, event types, and causality.
+ Event types can be defined with [Event type schemas](https://zalando.github.io/nakadi/manual.html#using_event-types)
+ and managed via a registry. All events are validated against their event type's schema before publishing.
+ This helps guarantee data quality and data consistency for the data consumers.
+
+- Efficient low latency event delivery.
+
+ Once a publisher sends an event using a simple [HTTP POST](https://zalando.github.io/nakadi/manual.html#using_producing-events),
+ consumers can be pushed to via a [streaming](https://zalando.github.io/nakadi/manual.html#using_consuming-events-lola)
+ HTTP connection, allowing near real-time event processing.
+ The consumer connection has keepalive controls and support for managing stream offsets using
+ [subscriptions](https://zalando.github.io/nakadi/manual.html#using_consuming-events-hila).
+
+More detailed information can be found in the [manual](http://zalando.github.io/nakadi-manual/).
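+
+As a quick illustration of the flow above, here is a minimal sketch (it assumes
+a local Nakadi on port 8080 and a business-category event type named
+`order.ORDER_RECEIVED`, as used in the examples throughout this manual):
+
+```sh
+# Publish a batch containing a single event (HTTP POST)
+curl -v -XPOST "http://localhost:8080/event-types/order.ORDER_RECEIVED/events" \
+  -H "Content-type: application/json" \
+  -d '[{
+    "order_number": "ORDER_001",
+    "metadata": {
+      "eid": "4cc6d2f0-eb01-11e5-b606-1c6f65464fc6",
+      "occurred_at": "2016-03-15T23:56:11+01:00"
+    }
+  }]'
+
+# Consume events over a streaming HTTP connection
+curl -v "http://localhost:8080/event-types/order.ORDER_RECEIVED/events"
+```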
+
+
+
+### Links
+
+Read more to understand *the big picture*:
+[Architecture for data integration](https://pages.github.bus.zalan.do/core-platform/docs/architecture/data_integration.html)
+
+Watch the talk [Data Integration in the World of Microservices](https://clusterhq.com/2016/05/20/microservices-zalando/)
+
+### Development status
+
+Nakadi is production ready under high load.
+Zalando uses Nakadi as its central Event Bus Service.
+Nakadi reliably handles the traffic from thousands of event types with
+a throughput of hundreds of gigabytes per second.
+The project is in active development. See the [changelog](https://github.com/zalando/nakadi/blob/master/CHANGELOG.md).
+
+#### Features
+
+* Stream:
+ * REST abstraction over Kafka-like queues.
+ * CRUD for event types.
+ * Event batch publishing.
+ * Low-level interface.
+ * requires manual client-side partition management
+ * no commit support
+ * High-level interface (Subscription API).
+ * automatic redistribution of partitions between consuming clients
+ * commits must be issued to move the server-side cursors
+* Schema:
+ * Schema registry.
+ * Several event type categories (Undefined, Business, Data Change).
+ * Several partitioning strategies (Random, Hash, User defined).
+ * Event enrichment strategies.
+ * Schema evolution.
+ * Events validation using an event type schema.
+* Security:
+ * OAuth2 authentication.
+ * Per-event type authorization.
+ * Blacklist of users and applications.
+* Operations:
+ * [STUPS](https://stups.io/) platform compatible.
+ * [ZMON](https://zmon.io/) monitoring compatible.
+ * SLO monitoring.
+ * Timelines.
+ * This allows transparently switching production and consumption to a different cluster (tier, region, AZ) without
+ moving the actual data and without any service degradation.
+ * Opens the possibility of implementing other streaming technologies and engines besides Kafka
+ (such as AWS Kinesis, Google Pub/Sub, etc.)
+
+ Read more about the latest developments in our [Changelog](https://github.com/zalando/nakadi/blob/master/CHANGELOG.md).
+
-#### Performance
+#### Additional features that we plan to cover in the future
-Efficient low latency event delivery. Once a publisher sends an event using a simple HTTP POST, consumers can be pushed to via a streaming HTTP connection, allowing near real-time event processing. The consumer connection has keepalive controls and support for managing stream offsets.
+* Support for different streaming technologies and engines. Nakadi currently uses [Apache Kafka](http://kafka.apache.org/)
+ as its broker, but other providers (such as Kinesis) will be possible.
+* Filtering of events for subscribing consumers.
+* Store old published events forever, using a transparent fallback to backup storage such as AWS S3.
+* Separate the internal schema registry into a standalone service.
+* Use additional schema formats and protocols like Avro, protobuf and [others](https://en.wikipedia.org/wiki/Comparison_of_data_serialization_formats).
-#### Scalability
+#### Related projects
-Nakadi instances are stateless. They can be run on AWS with auto-scaling.
+The [zalando-nakadi](https://github.com/zalando-nakadi/) organisation contains many useful related projects,
+such as:
-#### Flexibility
+* Client libraries
+* SDK
+* GUI
+* DevOps tools and more
-Using Timelines it is easy to move the traffic to another cluster without moving the data and any service degradation.
## Examples
diff --git a/docs/_documentation/using_authorization.md b/docs/_documentation/using_authorization.md
index 080923ee27..5d87bdf4da 100644
--- a/docs/_documentation/using_authorization.md
+++ b/docs/_documentation/using_authorization.md
@@ -109,6 +109,7 @@ When updating an event type, users should keep in mind the following caveats:
- If the event type already has an authorization section, then it cannot be removed in an update;
- If the update changes the list of readers, then all consumers will be disconnected. It is expected that they will
-try to reconnect, which will only work for those that are still authorized. **WARNING**: this *also* applies to consumers
-using subscriptions; if a subscription includes multiple event types, and as a result of the update, a consumer loses
+try to reconnect, which will only work for those that are still authorized.
+
+**WARNING**: this *also* applies to consumers using subscriptions; if a subscription includes multiple event types, and as a result of the update, a consumer loses
read access to one of them, then the consumer will not be able to consume from the subscription anymore.
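+
+For reference, a hedged sketch of such an update (the event type name and the
+attribute values are placeholders): changing the list of readers is done with a
+regular event type update, i.e. a PUT of the complete event type definition with
+a modified `authorization` section:
+
+```sh
+# Sketch only: the request body must contain the full event type definition;
+# it is abbreviated here to the authorization section for readability.
+curl -v -XPUT "http://localhost:8080/event-types/order.ORDER_RECEIVED" \
+  -H "Content-type: application/json" \
+  -d '{
+    "name": "order.ORDER_RECEIVED",
+    "authorization": {
+      "admins":  [{"data_type": "service", "value": "stups_order-service"}],
+      "writers": [{"data_type": "service", "value": "stups_order-service"}],
+      "readers": [{"data_type": "service", "value": "stups_shipment-service"}]
+    }
+  }'
+```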
diff --git a/docs/_documentation/using_clients.md b/docs/_documentation/using_clients.md
index 3e041a7da5..38bbd77f3c 100644
--- a/docs/_documentation/using_clients.md
+++ b/docs/_documentation/using_clients.md
@@ -7,11 +7,15 @@ position: 10
Nakadi does not ship with a client, but there are some open source clients available that you can try:
-| Name | Language/Framework | GitHub |
-|-----------------|--------------|------------|
-| Nakadi Klients | Scala & Java | https://github.com/zalando/nakadi-klients |
-| Reactive Nakadi | Scala/Akka | https://github.com/zalando/reactive-nakadi |
-| Fahrschein | Java | https://github.com/zalando-incubator/fahrschein |
-| Straw | Java | https://github.com/zalando-incubator/straw |
+| Name | Language/Framework | GitHub |
+|-----------------|--------------------|---------------------------------------------------|
+| Nakadi Java | Java | https://github.com/dehora/nakadi-java |
+| Nakadi Klients | Scala & Java | https://github.com/kanuku/nakadi-klients |
+| Reactive Nakadi | Scala/Akka | https://github.com/zalando-nakadi/reactive-nakadi |
+| Fahrschein | Java | https://github.com/zalando-nakadi/fahrschein |
+| Nakadion | Rust | https://github.com/chridou/nakadion |
+| Peek | Java/CLI tool | https://github.com/bocytko/peek |
+
+More Nakadi-related projects can be found at [https://github.com/zalando-nakadi](https://github.com/zalando-nakadi).
We'll add more clients to this section as they appear. Nakadi doesn't support these clients; issues and pull requests should be filed with the client project.
diff --git a/docs/_documentation/using_comparison.md b/docs/_documentation/using_comparison.md
index cd0ae1b07d..93a0314bf2 100644
--- a/docs/_documentation/using_comparison.md
+++ b/docs/_documentation/using_comparison.md
@@ -7,15 +7,12 @@ position: 11
In this section, we'll look at how Nakadi fits in with the stream broker/processing ecosystems. Notably we'll compare it to Apache Kafka, as that's a common question, but also look briefly at some of the main cloud offerings in this area.
- - [Apache Kafka](#kafka)
- - [Google Pub/Sub](#pubsub)
- - [AWS Kinesis](#kinesis)
- - [AWS Simple Queue Service (SQS)](#sqs)
- - [Allegro Hermes](#hermes)
- - [Azure EventHub](#eventhub)
- - [Confluent Platform](#confluent)
-
-
+ - [Apache Kafka](#apache-kafka--version-0-9-)
+ - [Google Pub/Sub](#google-pub-sub)
+ - [AWS Kinesis](#aws-kinesis)
+ - [AWS Simple Queue Service (SQS)](#aws-simple-queue-service--sqs-)
+ - [Allegro Hermes](#allegro-hermes)
+
### Apache Kafka (version 0.9)
Relative to Apache Kafka, Nakadi provides a number of benefits while still leveraging the raw power of Kafka as its internal broker.
@@ -34,7 +31,6 @@ Relative to Apache Kafka, Nakadi provides a number of benefits while still lever
In short, Nakadi is best seen as a complement to Kafka. It allows teams to use Kafka within their own boundaries but not be forced into sharing it as a global dependency.
-
### Google Pub/Sub
Like Nakadi, Pub/Sub has a HTTP API which hides details from producers and consumers and makes it suitable for use as a microservices backplane. There are some differences worth noting:
@@ -47,7 +43,6 @@ Like Nakadi, Pub/Sub has a HTTP API which hides details from producers and consu
- Pub/Sub uses a common envelope structure for producing and consuming messages, and does not define any higher level structures beyond that.
-
### AWS Kinesis
Like Nakadi and Pub/Sub, AWS Kinesis has a HTTP API to hide its details. Kinesis and Nakadi are more similar to each other than Pub/Sub, but there are some differences.
@@ -63,7 +58,6 @@ Like Nakadi and Pub/Sub, AWS Kinesis has a HTTP API to hide its details. Kinesis
- Kinesis supports resizing the number of shards in a stream, whereas partition counts in Nakadi are fixed once set for an event type.
-
### AWS Simple Queue Service (SQS)
The basic abstraction in SQS is a queue, which is quite different from a Nakadi / Kafka stream.
@@ -76,7 +70,6 @@ The basic abstraction in SQS is a queue, which is quite different from a Nakadi
- In contrast to moving a single cursor in the data stream (as in Nakadi, Kinesis or Kafka), the SQS semantics of confirming individual messages has advantages if a single message is unprocessable (i.e. its format is not parseable). In SQS only the problematic message is delayed. With cursor semantics the client has to decide: either stop all further message processing until the problem is fixed, or skip the message and move the cursor.
-
### Allegro Hermes
[Hermes](https://github.com/allegro/hermes), like Nakadi, is an API-based broker built on Apache Kafka. There are some differences worth noting:
@@ -90,13 +83,3 @@ The basic abstraction in SQS is a queue, which is quite different from a Nakadi
- The Hermes project supports a Java client driver for publishing messages. Nakadi does not ship with a client.
- Hermes claims resilience when it comes to issues with its internal Kafka broker, such that it will continue to accept messages when Kafka is down. It does this by buffering messages in memory with an optional means to spill to local disk; this will help with crashing brokers or hermes nodes, but not with loss of an instance (eg an ec2 instance). Nakadi does not accept messages if its Kafka brokers are down or unavailable.
-
-
-### Azure Event Hub
-
-_@@@ todo_
-
-
-### Confluent Platform
-
-_@@@ todo_
diff --git a/docs/_documentation/using_concepts.md b/docs/_documentation/using_concepts.md
index ef13d5cbf3..b86a6cbe33 100644
--- a/docs/_documentation/using_concepts.md
+++ b/docs/_documentation/using_concepts.md
@@ -30,3 +30,53 @@ In summary, applications using Nakadi can be grouped as follows:
[1] For more detail on partitions and the design of streams see ["The Log"](https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying) by Jay Kreps.
+#### Cursors, Offsets and Partitions
+
+By default the `events` resource will consume from all partitions of an event
+type and from the end (or "tail") of the stream. To select only particular
+partitions and a position in the stream to start from, you can supply
+an `X-Nakadi-Cursors` header in the request:
+
+```sh
+curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events \
+ -H 'X-Nakadi-Cursors: [{"partition": "0", "offset":"12"}]'
+```
+
+The header value is a JSON array of _cursors_. Each cursor in the array
+describes its partition for the stream and an offset to stream from. Note that
+events within the same partition maintain their overall order.
+
+The `offset` value of the cursor allows you to select where in the stream you
+want to consume from. This can be any known offset value, or the dedicated value
+`BEGIN` which will start the stream from the beginning. For example, to read
+from partition `0` from the beginning:
+
+```sh
+curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events \
+ -H 'X-Nakadi-Cursors:[{"partition": "0", "offset":"BEGIN"}]'
+```
+
+The details of the partitions and their offsets for an event type are
+available via its `partitions` resource.
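+
+For example (`order.ORDER_RECEIVED` as above):
+
+```sh
+# List all partitions of the event type together with their
+# oldest and newest available offsets
+curl -v "http://localhost:8080/event-types/order.ORDER_RECEIVED/partitions"
+```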
+
+#### Event Stream Keepalives
+
+If there are no events to be delivered, Nakadi will keep a streaming connection open by
+periodically sending an empty batch that contains only a `cursor` pointing to
+the current offset. For example:
+
+```sh
+curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events
+
+
+HTTP/1.1 200 OK
+
+{"cursor":{"partition":"0","offset":"6"},"events":[{"order_number": "ORDER_003", "metadata": {"eid": "4cc6d2f0-eb01-11e5-b606-1c6f65464fc6", "occurred_at": "2016-03-15T23:58:15+01:00"}}]}
+{"cursor":{"partition":"0","offset":"6"}}
+{"cursor":{"partition":"0","offset":"6"}}
+{"cursor":{"partition":"0","offset":"6"}}
+{"cursor":{"partition":"0","offset":"6"}}
+```
+
+This can be treated as a keep-alive control for some load balancers.
+
diff --git a/docs/_documentation/using_consuming-events-hila.md b/docs/_documentation/using_consuming-events-hila.md
index d3caeae31d..38c1b8a19d 100644
--- a/docs/_documentation/using_consuming-events-hila.md
+++ b/docs/_documentation/using_consuming-events-hila.md
@@ -2,10 +2,9 @@
title: Subscriptions
position: 9
---
+## Subscriptions
-## Consuming events with the High-level API (Subscriptions)
-
-Subscriptions (also knows as the high-level API) allow clients to consume events, where the Nakadi server store offsets and
+Subscriptions allow clients to consume events, where the Nakadi server stores offsets and
automatically manages rebalancing of partitions across consumer clients. This allows clients
to avoid managing stream state locally.
@@ -43,7 +42,7 @@ For a more detailed description and advanced configuration options please take a
A Subscription can be created by posting to the `/subscriptions` collection resource:
```sh
-curl -v -XPOST "https://localhost:8080/subscriptions" -H "Content-type: application/json" -d '{
+curl -v -XPOST "http://localhost:8080/subscriptions" -H "Content-type: application/json" -d '{
"owning_application": "order-service",
"event_types": ["order.ORDER_RECEIVED"]
}'
@@ -72,7 +71,7 @@ Content-Type: application/json;charset=UTF-8
Consuming events is done by sending a GET request to the Subscription's event resource (`/subscriptions/{subscription-id}/events`):
```sh
-curl -v -XGET "https://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/events"
+curl -v -XGET "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/events"
```
The response is a stream that groups events into JSON batches separated by a newline (`\n`) character. The output looks like this:
@@ -115,7 +114,6 @@ The Subscription API provides a guarantee of _at-least-once_ delivery. In practi
A useful technique to detect and handle duplicate events on the consumer side is to be idempotent and to check the `eid` field of the event metadata. Note: `eid` checking is not possible using the "undefined" category, as it's only supplied in the "business" and "data" categories.
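+
+A minimal consumer-side sketch of that check (an illustration only; it assumes
+`jq` is installed and uses a plain text file to remember the eids already seen):
+
+```sh
+curl -s "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/events" |
+while IFS= read -r batch; do
+  # Extract the eid of every event in the batch (keepalive batches have none)
+  for eid in $(printf '%s\n' "$batch" | jq -r '.events[]?.metadata.eid'); do
+    if grep -qxF "$eid" seen-eids.txt 2>/dev/null; then
+      echo "skipping duplicate event $eid"
+    else
+      echo "$eid" >> seen-eids.txt
+      # ... process the event here ...
+    fi
+  done
+done
+```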
-
### Subscription Cursors
The cursors in the Subscription API have the following structure:
@@ -144,7 +142,7 @@ The fields are:
Cursors can be committed by posting to Subscription's cursor resource (`/subscriptions/{subscriptionId}/cursors`), for example:
```sh
-curl -v -XPOST "https://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/cursors"\
+curl -v -XPOST "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/cursors"\
-H "X-Nakadi-StreamId: ae1e39c3-219d-49a9-b444-777b4b03e84c" \
-H "Content-type: application/json" \
-d '{
@@ -191,14 +189,14 @@ of 60 seconds saves the day again. The first client will have 60 seconds to do t
It is not necessary to commit each batch. When the cursor is committed, all events that
are before this cursor in the partition will also be considered committed. For example, suppose the offset was at `e0` in the stream below,
-```
+```text
partition: [ e0 | e1 | e2 | e3 | e4 | e5 | e6 | e7 | e8 | e9 ]
offset--^
```
and the stream sent back three batches to the client, where the client committed batch 3 but not batch 1 or batch 2,
-```
+```text
partition: [ e0 | e1 | e2 | e3 | e4 | e5 | e6 | e7 | e8 | e9 ]
offset--^
|--- batch1 ---|--- batch2 ---|--- batch3 ---|
@@ -215,7 +213,7 @@ client: cursor commit --> |--- batch3 ---|
then the offset will be moved all the way up to `e9`, implicitly committing all the events that were in the previous batches 1 and 2,
-```
+```text
partition: [ e0 | e1 | e2 | e3 | e4 | e5 | e6 | e7 | e8 | e9 ]
^-- offset
```
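+
+In the situation above it is therefore enough to commit only the cursor from
+batch 3. A sketch, reusing the commit call shown earlier (the offset and
+`cursor_token` values are illustrative):
+
+```sh
+curl -v -XPOST "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/cursors" \
+  -H "X-Nakadi-StreamId: ae1e39c3-219d-49a9-b444-777b4b03e84c" \
+  -H "Content-type: application/json" \
+  -d '{
+    "items": [{
+      "partition": "0",
+      "offset": "9",
+      "event_type": "order.ORDER_RECEIVED",
+      "cursor_token": "b75c3102-98a4-4385-a5fd-b96f1d7872f2"
+    }]
+  }'
+```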
@@ -226,12 +224,12 @@ partition: [ e0 | e1 | e2 | e3 | e4 | e5 | e6 | e7 | e8 | e9 ]
You can also check the current position of your subscription:
```sh
-curl -v -XGET "https://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/cursors"
+curl -v -XGET "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/cursors"
```
The response will be a list of current cursors that reflect the last committed offsets:
-```
+```json
HTTP/1.1 200 OK
{
"items": [
@@ -256,12 +254,12 @@ HTTP/1.1 200 OK
The API also provides statistics on your subscription:
```sh
-curl -v -XGET "https://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/stats"
+curl -v -XGET "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f/stats"
```
The output will contain the statistics for all partitions of the stream:
-```
+```json
HTTP/1.1 200 OK
{
"items": [
@@ -291,12 +289,12 @@ HTTP/1.1 200 OK
To delete a Subscription, send a DELETE request to the Subscription resource using its `id` field (`/subscriptions/{id}`):
```sh
-curl -v -X DELETE "https://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f"
+curl -v -X DELETE "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f"
```
Successful response:
-```
+```text
HTTP/1.1 204 No Content
```
@@ -305,7 +303,7 @@ HTTP/1.1 204 No Content
To view a Subscription, send a GET request to the Subscription resource using its `id` field (`/subscriptions/{id}`):
```sh
-curl -v -XGET "https://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f"
+curl -v -XGET "http://localhost:8080/subscriptions/038fc871-1d2c-4e2e-aa29-1579e8f2e71f"
```
Successful response:
@@ -327,12 +325,12 @@ HTTP/1.1 200 OK
To get a list of subscriptions send a GET request to the Subscription collection resource:
```sh
-curl -v -XGET "https://localhost:8080/subscriptions"
+curl -v -XGET "http://localhost:8080/subscriptions"
```
Example answer:
-```
+```json
HTTP/1.1 200 OK
{
"items": [
diff --git a/docs/_includes/definition.html b/docs/_includes/definition.html
index 07dc6b4c57..f97b7c5fbb 100644
--- a/docs/_includes/definition.html
+++ b/docs/_includes/definition.html
@@ -1,10 +1,9 @@
{% assign definition = include.definition %}
-
+
-
{{ definition[1].summary }}
{{ definition[1].description | markdownify }}
diff --git a/docs/_includes/endpoint.html b/docs/_includes/endpoint.html
index 87ce5d38a2..fae259347d 100644
--- a/docs/_includes/endpoint.html
+++ b/docs/_includes/endpoint.html
@@ -1,11 +1,14 @@
+{% assign id = include.path | replace: '{', '' | replace: '}', '' %}
+
+
+
+
-
-
-
- {{ include.path }}
-
+
+ {{ include.path }}
+
{{ include.method.summary }}
diff --git a/docs/_includes/sidebar.html b/docs/_includes/sidebar.html
index dc1c80d22f..66d1d188ec 100644
--- a/docs/_includes/sidebar.html
+++ b/docs/_includes/sidebar.html
@@ -45,6 +45,7 @@
{% for path in singleData %}
-
+ {% assign id = path[0] | replace: '{', '' | replace: '}', '' %}
{% assign methods = path[1] %}
{%capture links %}
@@ -54,13 +55,14 @@
{% if first_method == '' %}
{% assign first_method = method_name %}
{% endif %}
-
+
+
{% endif %}
{% endfor %}
{% endcapture %}
-