Refactor HCD Agents (#609)
* Update CC 4 version in HCD Agent.
* Refactor HCD Agent modules
emerkle826 authored Feb 20, 2025
1 parent 903f024 commit cb2591e
Showing 32 changed files with 222 additions and 70 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -11,6 +11,7 @@ Changelog for Management API, new PRs should update the `main / unreleased` section

## unreleased
* [CHANGE] Remove Cassandra 4.0.16 from the build matrix due to a regression (https://issues.apache.org/jira/browse/CASSANDRA-20090)
* [CHANGE] [#608](https://github.com/k8ssandra/management-api-for-apache-cassandra/issues/608) Refactor HCD Agents (cc4 vs cc5)
* [FEATURE] [#601](https://github.com/k8ssandra/management-api-for-apache-cassandra/issues/602) Add Cassandra 4.0.17 to the build matrix
* [FEATURE] [#603](https://github.com/k8ssandra/management-api-for-apache-cassandra/issues/603) Add DSE 6.8.54 to the build matrix
* [FEATURE] [#604](https://github.com/k8ssandra/management-api-for-apache-cassandra/issues/604) Add DSE 6.9.7 to the build matrix
16 changes: 8 additions & 8 deletions README.md
@@ -196,29 +196,29 @@ Example for DSE 6.9.0

** NOTE: The docker repo is not a typo, it really is `datastax/dse-mgmtapi-6_8` for 6.9 images

### Docker coordinates for HCD 1.0.x/1.2.x images
### Docker coordinates for HCD 1.1.x/1.2.x images

#### Ubuntu based images (HCD 1.0/1.2)
#### Ubuntu based images (HCD 1.1/1.2)

For all JDK 11 Ubuntu based HCD 1.0.x/1.2.x images, the Docker coordinates are as follows:
For all JDK 11 Ubuntu based HCD 1.1.x/1.2.x images, the Docker coordinates are as follows:

datastax/hcd:<version>

Example for HCD 1.0.0
Example for HCD 1.1.0

datastax/hcd:1.0.0
datastax/hcd:1.1.0

Example for HCD 1.2.0

datastax/hcd:1.2.0

#### RedHat UBI images (HCD 1.0/1.2)
#### RedHat UBI images (HCD 1.1/1.2)

For all RedHat UBI based HCD 1.0.x/1.2.x images, the Docker coordinates are as follows:
For all RedHat UBI based HCD 1.1.x/1.2.x images, the Docker coordinates are as follows:

datastax/hcd:<version>-ubi

Example for HCD 1.0.0
Example for HCD 1.1.0

datastax/hcd:1.0.0-ubi

@@ -3,13 +3,14 @@
It is important to note that all HCD dependencies should only be specified in the HCD agent modules. No HCD dependencies
can be added to any other projects/modules, as users without access to HCD artifacts won't be able to build the OSS Management API.

## HCD versions
## HCD versions (hcd-cc4 vs hcd-cc5)

As of this document edit, there are 2 versions of HCD in development. Version 1.0.x is currently maintained on the `hcd-1.0` branch
of the HCD repository. Version 1.2.x is maintained on the `main` branch of the repository. The major difference between the two
versions is the Converged Cassandra Core that is used. HCD 1.0.x uses Converged Core 4, while HCD 1.2.x uses Converged Core 5. As
with Cassandra versions, the HCD agent has to be broken into 2 sub-modules for compiling compatibility. The version in this
sub-module is for HCD 1.0.x. For HCD 1.2.x, use the agent in sub-module `management-api-agent-hcd-1.2.x`.
As of this writing, there are two versions of HCD in development. Version 1.1.x is maintained on the `hcd-1.1` branch
of the HCD repository, and version 1.2.x is maintained on the `main` branch. Until recently, HCD 1.2 was based on
Converged Cassandra (Converged Core/CC) 5, while HCD 1.1 is based on CC 4. HCD 1.2 will soon switch to CC 4, and a future
HCD 2.x release will be based on CC 5. To make this easier to follow from this project's point of view, as of v0.1.97 the
Management API Agents for HCD will be organized by CC version rather than by HCD version. This README is in the `hcd-cc4`
Agent; there is an equivalent one in the `hcd-cc5` Agent. Pick the Agent that matches the CC version your HCD build is based on.

## Maven Settings

@@ -37,18 +38,12 @@ OUT OF SCOPE: At the moment, no HCD images are being built as part of this project

OUT OF SCOPE: At the moment, no HCD images are being built as part of this project. They are built from the HCD repo currently.

If you have access to the HCD repository, you can build an image from the `hcd-1.0` branch. Use the following from the HCD repository root:
If you have access to the HCD repository, you can build an image from the `hcd-1.1` branch. Use the following from the HCD repository root:

```sh
./mvnw clean verify
./mvnw clean package
```

### Building a specific version of HCD

HCD versions are maintained in branches named with the format `hcd-<major>.<minor>` (for example `hcd-1.1`). The latest/current version
of HCD will be in the `main` branch (version 1.2.x as of this edit). Building a specific version of HCD simply requires you to check out
the version branch (or `main` if you want to build the latest version) and build as above.

## Running a locally built image

To run an image you built locally with Management API enabled, run the following:
@@ -14,7 +14,7 @@
<version>${revision}</version>
</parent>
<version>${revision}</version>
<artifactId>datastax-mgmtapi-agent-hcd</artifactId>
<artifactId>datastax-mgmtapi-agent-hcd-cc4</artifactId>
<repositories>
<repository>
<id>artifactory</id>
@@ -79,7 +79,7 @@
<dependency>
<groupId>com.datastax.dse</groupId>
<artifactId>dse-db-all</artifactId>
<version>4.0.11-3b5d38811943</version>
<version>4.0.11-21b99d7386fd</version>
<exclusions>
<exclusion>
<groupId>commons-codec</groupId>
@@ -17,10 +17,12 @@
import io.netty.channel.VoidChannelPromise;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.util.Attribute;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import org.apache.cassandra.auth.IAuthenticator;
import org.apache.cassandra.cql3.QueryProcessor;
import org.apache.cassandra.service.ClientState;
@@ -62,7 +64,7 @@ protected void initChannel(Channel channel) throws Exception {
INITIAL_HANDLER,
new PipelineChannelInitializer(
new Envelope.Decoder(),
(channel1, version) ->
(Channel channel1, ProtocolVersion version) ->
new UnixSocketConnection(channel1, version, connectionTracker)));
/**
* The exceptionHandler will take care of handling exceptionCaught(...) events while still
@@ -82,7 +84,6 @@ static class UnixSockMessage extends SimpleChannelInboundHandler<Message.Request>
@Override
protected void channelRead0(ChannelHandlerContext ctx, Message.Request request)
throws Exception {
final Message.Response response;
final UnixSocketConnection connection;
long queryStartNanoTime = System.nanoTime();

@@ -98,15 +99,35 @@ protected void channelRead0(ChannelHandlerContext ctx, Message.Request request)
// logger.info("Executing {} {} {}", request, connection.getVersion(),
// request.getStreamId());

Message.Response r = request.execute(qstate, queryStartNanoTime);

// UnixSocket has no auth
response = r instanceof AuthenticateMessage ? new ReadyMessage() : r;
// Converged Cassandra/Core 4 added Async processing as part of CNDB-10759. See if we have
// the method that returns a CompletableFuture.
try {
Method requestExecute =
Message.Request.class.getDeclaredMethod("execute", QueryState.class, long.class);
// get CompletableFuture type
if (CompletableFuture.class.equals(requestExecute.getReturnType())) {
// newer Async processing
CompletableFuture<Message.Response> future =
(CompletableFuture<Message.Response>)
requestExecute.invoke(request, qstate, queryStartNanoTime);
future.whenComplete(
(Message.Response response, Throwable ignore) -> {
processMessageResponse(response, request, connection, ctx);
});
} else if (Message.Response.class.equals(requestExecute.getReturnType())) {
// older non-async processing
Message.Response response =
(Message.Response) requestExecute.invoke(request, qstate, queryStartNanoTime);

response.setStreamId(request.getStreamId());
response.setWarnings(ClientWarn.instance.getWarnings());
response.attach(connection);
connection.applyStateTransition(request.type, response.type);
processMessageResponse(response, request, connection, ctx);
}
} catch (NoSuchMethodException ex) {
// Unexpected missing method; throw an error so we can figure out what method signature we have
logger.error(
"Expected Cassandra Message.Request.execute() method signature not found. Management API agent will not be able to start Cassandra.",
ex);
throw ex;
}
} catch (Throwable t) {
// logger.warn("Exception encountered", t);
JVMStabilityInspector.inspectThrowable(t);
@@ -119,7 +140,21 @@ protected void channelRead0(ChannelHandlerContext ctx, Message.Request request)
} finally {
ClientWarn.instance.resetWarnings();
}
}

private void processMessageResponse(
Message.Response response,
Message.Request request,
final UnixSocketConnection connection,
ChannelHandlerContext ctx) {
if (response instanceof AuthenticateMessage) {
// UnixSocket has no auth
response = new ReadyMessage();
}
response.setStreamId(request.getStreamId());
response.setWarnings(ClientWarn.instance.getWarnings());
response.attach(connection);
connection.applyStateTransition(request.type, response.type);
ctx.writeAndFlush(response);
request.getSource().release();
}
@@ -284,18 +319,56 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> out)

promise = new VoidChannelPromise(ctx.channel(), false);

Message.Response response =
Dispatcher.processRequest(
(ServerConnection) connection, startup, ClientResourceLimits.Overload.NONE);

if (response.type.equals(Message.Type.AUTHENTICATE))
// bypass authentication
response = new ReadyMessage();

outbound = response.encode(inbound.header.version);
ctx.writeAndFlush(outbound, promise);
logger.debug("Configured pipeline: {}", ctx.pipeline());
break;
// More Converged Cassandra/Core 4 changes for Async processing. This is generally a
// copy of upstream's InitConnectionHandler.

// Try to get the newer processInit static method
try {
Method processInit =
Dispatcher.class.getDeclaredMethod(
"processInit", ServerConnection.class, StartupMessage.class);
((CompletableFuture<Message.Response>)
processInit.invoke(null, (ServerConnection) connection, startup))
.whenComplete(
(Message.Response response, Throwable error) -> {
if (error == null) {
processStartupResponse(response, inbound, ctx, promise);
} else {
ErrorMessage message =
ErrorMessage.fromException(
new ProtocolException(
String.format("Unexpected error %s", error.getMessage())));
Envelope encoded = message.encode(inbound.header.version);
ctx.writeAndFlush(encoded);
}
});
break;
} catch (NoSuchMethodException nsme) {
// try the older processRequest method
try {
Method processRequest =
Dispatcher.class.getDeclaredMethod(
"processRequest",
ServerConnection.class,
StartupMessage.class,
ClientResourceLimits.Overload.class);
Message.Response response =
(Message.Response)
processRequest.invoke(
null,
(ServerConnection) connection,
startup,
ClientResourceLimits.Overload.NONE);
processStartupResponse(response, inbound, ctx, promise);
break;
} catch (NoSuchMethodException nsme2) {
// Expected method not found. Log an error and figure out what signature we need
logger.error(
"Expected Cassandra Dispatcher.processRequest() method signature not found. Management API agent will not be able to start Cassandra.",
nsme2);
throw nsme2;
}
}

default:
ErrorMessage error =
@@ -311,5 +384,19 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> out)
inbound.release();
}
}

private void processStartupResponse(
Message.Response response,
Envelope inbound,
ChannelHandlerContext ctx,
ChannelPromise promise) {
if (response.type.equals(Message.Type.AUTHENTICATE)) {
// bypass authentication
response = new ReadyMessage();
}
Envelope encoded = response.encode(inbound.header.version);
ctx.writeAndFlush(encoded, promise);
logger.debug("Configured pipeline: {}", ctx.pipeline());
}
}
}
55 changes: 55 additions & 0 deletions management-api-agent-hcd-cc5/README.md
@@ -0,0 +1,55 @@
# Management API with HCD (Hyper-Converged Database)

It is important to note that all HCD dependencies should only be specified in the HCD agent modules. No HCD dependencies
can be added to any other projects/modules, as users without access to HCD artifacts won't be able to build the OSS Management API.

## HCD versions (hcd-cc4 vs hcd-cc5)

As of this writing, there are two versions of HCD in development. Version 1.1.x is maintained on the `hcd-1.1` branch
of the HCD repository, and version 1.2.x is maintained on the `main` branch. Until recently, HCD 1.2 was based on
Converged Cassandra (Converged Core/CC) 5, while HCD 1.1 is based on CC 4. HCD 1.2 will soon switch to CC 4, and a future
HCD 2.x release will be based on CC 5. To make this easier to follow from this project's point of view, as of v0.1.97 the
Management API Agents for HCD will be organized by CC version rather than by HCD version. This README is in the `hcd-cc5`
Agent; there is an equivalent one in the `hcd-cc4` Agent. Pick the Agent that matches the CC version your HCD build is based on.

## Maven Settings

In order to build Management API artifacts for HCD (jarfiles and/or Docker images), you will need to have access to the DSE Maven
Artifactory. This will require credentials that should be stored in your `${HOME}/.m2/settings.xml` file.
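
The exact entries depend on how your credentials were issued, but a minimal sketch of such a `settings.xml` might look like the following. The server `id` of `artifactory` is an assumption based on the repository id declared in the agent `pom.xml` files, and the username/password values are placeholders:

```xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              https://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <server>
      <!-- assumed to match the <repository> id used by the HCD agent poms -->
      <id>artifactory</id>
      <!-- placeholder credentials; use the ones issued for the DSE Maven Artifactory -->
      <username>YOUR_ARTIFACTORY_USERNAME</username>
      <password>YOUR_ARTIFACTORY_TOKEN</password>
    </server>
  </servers>
</settings>
```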

## Building the Management API with HCD

A special `hcd` profile is used to build the Management API with HCD dependencies. The required Maven command is as follows:

```sh
mvn package -P hcd
```
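
If you only need one of the HCD agent jars, the standard Maven `-pl`/`-am` flags should also work, e.g. `mvn package -P hcd -pl management-api-agent-hcd-cc5 -am` (the module directory name here is assumed from the path of this README).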

## Running tests for the HCD Agent

TODO: The tests have not yet been adapted to run against HCD as this would require copying the HCD Docker image build from DSE repos,
which is an ongoing effort.

## Docker image builds

OUT OF SCOPE: At the moment, no HCD images are being built as part of this project. They are built from the HCD repo currently.

### Building HCD images locally

OUT OF SCOPE: At the moment, no HCD images are being built as part of this project. They are built from the HCD repo currently.

If you have access to the HCD repository, you can build an image from the `main` branch. Use the following from the HCD repository root:

```sh
./mvnw clean package
```

## Running a locally built image

To run an image you built locally with Management API enabled, run the following:

```sh
docker run -e DS_LICENSE=accept -e USE_MGMT_API=true -p 8080:8080 --name hcd my-hcd
```

where `my-hcd` is the tag of the image you built (you must have access to the BDP repo to build an image).
@@ -14,7 +14,7 @@
<version>${revision}</version>
</parent>
<version>${revision}</version>
<artifactId>datastax-mgmtapi-agent-hcd-1.2.x</artifactId>
<artifactId>datastax-mgmtapi-agent-hcd-cc5</artifactId>
<repositories>
<repository>
<id>artifactory</id>