SDK, the Go version.
- Prerequisites
- SDK functionality
- APM & Instrumentation
- Using the `go-sdk` in isolation
- Developing the SDK
- Release
- Maintainers
The SDK provides the following out-of-the-box functionality:
- Application Configuration & Environment-awareness
- Logger
The SDK provides the framework for managing your application's configuration.
As it uses the excellent Viper under the hood, `go-sdk` knows how to read
configuration files (YAML only) and ENV variables. `go-sdk` facilitates usage
of predefined static configurations and custom dynamic configurations. Lastly,
`go-sdk` is environment-aware: it supports environment-scoped configuration in
the configuration files out-of-the-box.
The predefined configurations have an associated type and expect their respective configuration to be present in a file corresponding to their name.
The list of predefined top-level configurations:

- `Server`, containing the application server configuration, expects a `config/server.yml` configuration file.
- `Logger`, containing the application logger configuration, expects a `config/logger.yml` configuration file.
- `Database`, containing the application database configuration, expects a `config/database.yml` configuration file.
- `Cache`, containing the application cache configuration, expects a `config/cache.yml` configuration file.
- `PubSub`, containing the application pubsub configuration, expects a `config/pubsub.yml` configuration file.
- `AWS`, containing the application AWS configuration, expects a `config/aws.yml` configuration file.
For example, to get the host and the port on which the HTTP server listens in an application:

```go
host := sdk.Config.Server.Host
httpPort := sdk.Config.Server.HTTPPort
```
Additionally, `go-sdk` allows developers to add custom configuration data in a
`go-chassis`-powered application through the `config/settings.yml` file. As
these configurations are custom and their type cannot (read: shouldn't) be
inferred by the SDK, it exposes a familiar Viper-like interface for interacting
with them.
All of the application-specific configurations can be accessed through the
`sdk.Config.App` object. For example, given a `config/settings.yml` with the
following contents:

```yaml
# config/settings.yml
common: &common
  name: "my-awesome-app"

development:
  <<: *common

staging:
  <<: *common

production:
  <<: *common
```
To get the application name from the configuration one would use the following statement:

```go
applicationName := sdk.Config.App.GetString("name")
```
The configuration variables can be overridden by a corresponding environment variable; these variables must have the following format:

```
APP_SETTINGS_NAME=my-really-awesome-app
^^^ ^^^^^^^^ ^^^^ ^^^^^^^^^^^^^^^^^^^^^
 |     |       |            +----------- variable_value
 |     |       +------------------------ variable_name
 |     +-------------------------------- config_file
 +-------------------------------------- prefix
```
The environment variable takes precedence over the configuration file. Since
YAML's data type support is much richer than that of environment variables, we
have to take extra care when overriding complex YAML data types such as lists
and dictionaries.

To represent a list we can use space-separated values:

```
APP_SETTINGS_NAME="value1 value2"
```
And then, on the application side, we have to convert it to a string slice manually:

```go
// if the value is part of `settings.yml`
stringSlice := sdk.Config.App.GetStringSlice("name")

// or calling Viper's API directly
stringSlice := viper.GetStringSlice("app.settings.name")
```
To represent a dictionary we can use JSON:

```
APP_SETTINGS_NAME="{\"key\":\"value\"}"
```

And then, on the application side, we have to convert it to a string map manually:

```go
// if the value is part of `settings.yml`
stringMap := sdk.Config.App.GetStringMapString("name")

// or calling Viper's API directly
stringMap := viper.GetStringMapString("app.settings.name")
```
Application configurations vary between environments, therefore `go-sdk` is
environment-aware. This means that any application that uses `go-sdk` should
have an `APP_ENV` environment variable set. Applications built using the
`go-chassis` have this out of the box. In the absence of the `APP_ENV`
variable, it defaults to `development`.
This means that any configuration files, placed in the `config` directory,
should contain environment namespacing. For example:

```yaml
# config/logger.yml
common: &common
  console_enabled: true
  console_json_format: true
  console_level: "info"
  file_enabled: false
  file_json_format: true
  file_level: "trace"

development:
  <<: *common
  console_level: "debug"
  file_enabled: true
  file_level: "debug"

staging:
  <<: *common

production:
  <<: *common
```
When run in a particular environment, set by `APP_ENV`, the application will
load the respective section from the YAML file. For example, when the
application is loaded in the `development` environment, `go-sdk` will
automatically load the values from the `development` section. This convention
is applied to all configurations supported by `go-sdk`.
When an application is using the SDK to load configurations, that includes
loading application configurations in the test environment as well. This means
that, due to its environment-aware nature, the SDK will by default load the
configuration from the YAML files in the `test` namespace. This provides two
ways to configure your application for testing:

- By adding a `test` namespace to the respective YAML file and adding the test configuration there, or
- Using the provided structs and their setters/getters from the test files themselves.
Adding a `test` namespace to any configuration file looks like:

```yaml
# config/logger.yml
common: &common
  console_enabled: true
  console_json_format: true
  console_level: "info"
  file_enabled: false
  file_json_format: true
  file_level: "trace"

development:
  <<: *common

test:
  <<: *common
  console_level: "debug"
  file_enabled: true
  file_level: "debug"

staging:
  <<: *common

production:
  <<: *common
```
Given the configuration above, the SDK will load the `test` configuration
out-of-the-box and apply it to the logger. In cases where we want to modify the
configuration of an application in a test file, we can simply use the
constructor that `go-sdk` provides:
```go
package main

import (
	"log"
	"testing"

	sdkconfig "github.com/scribd/go-sdk/pkg/configuration"
)

var (
	// Config is SDK-powered application configuration.
	Config *sdkconfig.Config
	err    error
)

func TestSomething(t *testing.T) {
	if Config, err = sdkconfig.NewConfig(); err != nil {
		log.Fatalf("Failed to load SDK config: %s", err.Error())
	}

	// Change application settings.
	Config.App.Set("key", "value")

	// Continue with testing...
	setting := Config.App.GetString("key") // returns "value"
	_ = setting
}
```
`go-sdk` ships with a logger, configured with sane defaults out-of-the-box.
Under the hood, it uses the popular logrus as a logger, in combination with
lumberjack for log rolling. It supports log levels, formatting (plain or JSON),
structured logging, and storing the output in a file and/or `STDOUT`.

`go-sdk` comes with a built-in configuration for the logger. That means that
initializing a new logger is as easy as:
```go
package main

import (
	"log"

	sdklogger "github.com/scribd/go-sdk/pkg/logger"
)

var (
	// Logger is SDK-powered application logger.
	Logger sdklogger.Logger
)

func main() {
	loggerConfig, err := sdklogger.NewConfig()
	if err != nil {
		log.Fatalf("Failed to initialize SDK logger configuration: %s", err.Error())
	}

	if Logger, err = sdklogger.NewBuilder(loggerConfig).Build(); err != nil {
		log.Fatalf("Failed to load SDK logger: %s", err.Error())
	}
}
```
The logger initialized with the default configuration will use the `log/`
directory in the root of the project to save the log files, named after the
current application environment with a `.log` extension.

Much like the application configuration, the logger follows the convention of
loading a YAML file placed at `config/logger.yml`. This means that any project
which imports the `logger` package from `go-sdk` can place its logger
configuration in the `config/logger.yml` file, and `go-sdk` will load that
configuration when initializing the logger.
Since the logger is also environment-aware, it will assume the presence of the
`APP_ENV` environment variable and use it to set the name of the log file to
the environment name. For example, an application running with
`APP_ENV=development` will have its log entries in `log/development.log` by
default.
Also, it expects the respective `logger.yml` file to be environment-namespaced.
For example:

```yaml
# config/logger.yml
common: &common
  console_enabled: true
  console_json_format: false
  console_level: "debug"
  file_enabled: true
  file_json_format: false
  file_level: "debug"

development:
  <<: *common
  file_enabled: false

test:
  <<: *common
  console_enabled: false

staging:
  <<: *common
  console_json_format: true
  console_level: "debug"
  file_enabled: false

production:
  <<: *common
  console_json_format: true
  console_level: "info"
  file_enabled: false
```
Given the configuration above, the logger package will load the configuration for the respective environment out-of-the-box and apply it to the logger instance.
The SDK's logger follows best practices when it comes to logging levels. It exposes multiple log levels, ordered from lowest to highest:

- Trace, invoked with `logger.Tracef`
- Debug, invoked with `logger.Debugf`
- Info, invoked with `logger.Infof`
- Warn, invoked with `logger.Warnf`
- Error, invoked with `logger.Errorf`
- Fatal, invoked with `logger.Fatalf`
- Panic, invoked with `logger.Panicf`
Each of these log levels will produce log entries, while two of them have additional functionality:

- `logger.Fatalf` will add a log entry and exit the program with error code 1 (i.e. `exit(1)`)
- `logger.Panicf` will add a log entry and invoke `panic` with all of the arguments passed to the `Panicf` call
Loggers created using the `go-sdk` logger package define a list of hardcoded
fields that every log entry will consist of. This is done by design, with the
goal of a uniform log line structure across all Go services that use `go-sdk`.
Adding more fields is made possible by the `logger.WithFields` function and by
the `Builder` API:
```go
fields := map[string]string{"role": "server"}

if Logger, err = sdklogger.NewBuilder(loggerConfig).SetFields(fields).Build(); err != nil {
	log.Fatalf("Failed to load SDK logger: %s", err.Error())
}
```
While adding more fields is easy to do, removing the three default fields from the log lines is, by design, very hard to do and highly discouraged.
The list of fields is:

- `level`, indicating the log level of the log line
- `message`, representing the actual log message
- `timestamp`, the date & time of the log entry in ISO 8601 UTC format
`go-sdk` ships with a `Logger` middleware. When used, it tries to retrieve the `RequestID`, `TraceID` and `SpanID` from the incoming context. The middleware then assigns those values to the log entries for further correlation.
```go
func main() {
	loggingMiddleware := sdkmiddleware.NewLoggingMiddleware(sdk.Logger)

	httpServer := server.
		NewHTTPServer(host, httpPort, applicationEnv, applicationName).
		MountMiddleware(loggingMiddleware.Handler).
		MountRoutes(routes)
}
```
```go
func main() {
	rpcServer, err := server.NewGrpcServer(
		host,
		grpcPort,
		[]grpc.ServerOption{
			grpc.ChainUnaryInterceptor(
				sdkinterceptors.TracingUnaryServerInterceptor(applicationName),
				sdkinterceptors.RequestIDUnaryServerInterceptor(),
				sdkinterceptors.LoggerUnaryServerInterceptor(logger),
			),
			grpc.ChainStreamInterceptor(
				sdkinterceptors.TracingStreamServerInterceptor(applicationName),
				sdkinterceptors.RequestIDStreamServerInterceptor(),
				sdkinterceptors.LoggerStreamServerInterceptor(logger),
			),
		}...)
}
```
The logger ships with two different formats: plaintext and JSON. This means
that the log entries can have a simple plaintext format or a JSON format. These
options are configurable using the `ConsoleJSONFormat` and `FileJSONFormat`
attributes of the logger `Config`.

An example of the plaintext format:

```
timestamp="2019-10-23T15:28:54Z" level=info message="GET HTTP/1.1 200"
```

An example of the JSON format:

```json
{"level":"info","message":"GET HTTP/1.1 200","timestamp":"2019-10-23T15:29:26Z"}
```
The logger handles the log entries and can store them in a file or send them to
`STDOUT`. These options are configurable using the `ConsoleEnabled` and
`FileEnabled` attributes of the logger `Config`.
The logger can be further instrumented to report error messages to Sentry. The
following instructions assume that a project has been set up in Sentry and that
the corresponding DSN (Data Source Name) is available. The respective
configuration file is `sentry.yml` and it should include the following content:

```yaml
# config/sentry.yml
common: &common
  dsn: ""

development:
  <<: *common

staging:
  <<: *common

production:
  <<: *common
  dsn: "https://<key>@sentry.io/<project>"
```
The tracking can be enabled from the logger `Builder` with the `SetTracking` function. To trigger tracking, use the logger with `WithError(err)` as follows:
```go
package main

import (
	"errors"
	"log"

	sdklogger "github.com/scribd/go-sdk/pkg/logger"
	sdktracking "github.com/scribd/go-sdk/pkg/tracking"
)

var (
	// Logger is SDK-powered application logger.
	Logger sdklogger.Logger
)

func main() {
	loggerConfig, err := sdklogger.NewConfig()
	if err != nil {
		log.Fatalf("Failed to initialize SDK logger configuration: %s", err.Error())
	}

	trackingConfig, err := sdktracking.NewConfig()
	if err != nil {
		log.Fatalf("Failed to initialize SDK tracking configuration: %s", err.Error())
	}

	if Logger, err = sdklogger.NewBuilder(loggerConfig).SetTracking(trackingConfig).Build(); err != nil {
		log.Fatalf("Failed to load SDK logger: %s", err.Error())
	}

	err = errors.New("new error")
	Logger.WithError(err).Errorf("trying sentry error functionality")
}
```
A logger built with a valid tracking configuration will automatically report to Sentry any errors emitted from the following log levels:

- `Error`
- `Fatal`
- `Panic`
The following environment variables are automatically used in the configuration of the Sentry client to enrich the error data:

- environment: `APP_ENV`
- release: `APP_VERSION`
- serverName: `APP_SERVER_NAME`

More about the "environment", "server name" and "release" configurations can be found in the Sentry documentation.
`go-sdk` ships with a default setup for a database connection, built on top of
the built-in `database` configuration. The configuration read from the
`config/database.yml` file is used to create the connection details, which are
then composed into a data source name (DSN), for example:

```
username:password@tcp(192.168.1.1:8080)/dbname?timeout=10s&charset=utf8&parseTime=True&loc=Local
```
At the moment, a database connection established using `go-sdk` can only be to
a MySQL database. This is subject to change as `go-sdk` evolves and becomes
more feature-complete.

The database connection can also be configured through a YAML file. `go-sdk`
expects this file to be placed at the `config/database.yml` path within the
root of the project. Each of the connection details can be overridden by an
`ENV` variable.
| Setting | Description | YAML variable | Environment variable (ENV) | Default |
|---|---|---|---|---|
| Host | The database host | `host` | `APP_DATABASE_HOST` | `localhost` |
| Port | The database port | `port` | `APP_DATABASE_PORT` | `3306` |
| Database | The database name | `database` | `APP_DATABASE_DATABASE` | |
| Username | App user name | `username` | `APP_DATABASE_USERNAME` | |
| Password | App user password | `password` | `APP_DATABASE_PASSWORD` | |
| Pool | Connection pool size | `pool` | `APP_DATABASE_POOL` | `5` |
| Timeout | Connection timeout (in seconds) | `timeout` | `APP_DATABASE_TIMEOUT` | `1s` |
An example `database.yml`:

```yaml
common: &common
  host: db
  port: 3306
  username: username
  password: password
  pool: 5
  timeout: 1s

development:
  <<: *common
  database: application_development

test:
  <<: *common
  database: application_test

production:
  <<: *common
```
`go-sdk` provides a convenient way to create a basic server configuration.

| Setting | Description | YAML variable | Environment variable (ENV) |
|---|---|---|---|
| Host | Server host | `host` | `APP_SERVER_HOST` |
| HTTPPort | HTTP port | `http_port` | `APP_SERVER_HTTP_PORT` |
| GRPCPort | gRPC port | `grpc_port` | `APP_SERVER_GRPC_PORT` |
| HTTPTimeout | HTTP related timeouts | `http_timeout` | |
| CORS | CORS settings | `cors` | |
An example `server.yml`:

```yaml
common: &common
  http_port: 8080
  http_timeout:
    write: 2s
    read: 1s
    idle: 90s
  cors:
    enabled: true
    settings:
      - path: "*"
        allowed_origins: ["*"]
        allowed_methods: ["GET"]
        allowed_headers: ["Allowed-Header"]
        exposed_headers: ["Exposed-Header"]
        allow_credentials: true
        max_age: 600
```
CORS stands for Cross-Origin Resource Sharing. `go-sdk` provides a basic
optional configuration for the CORS settings that are passed to the HTTP middleware.

| Setting | Description | YAML variable | Environment variable (ENV) | Default |
|---|---|---|---|---|
| Enabled | Whether CORS is enabled or not | `enabled` | `APP_SERVER_CORS_ENABLED` | `false` |
| Settings | List of CORS settings | `settings` | | |

PLEASE NOTE: there is no way to specify `Settings` via environment variables, as it is represented as a nested structure. To configure CORS, use the `server.yml` file.
For the full list of the CORS settings, please refer to the inline documentation of the server package. Also, consider looking into the documentation of the cors library which currently lies under the hood of the CORS middleware.

The CORS middleware is a thin wrapper around the cors library. It aims to provide an extensive way to configure CORS while not binding services to a particular implementation. Below is an example of the CORS middleware initialization:
```go
package main

import (
	"log"

	middleware "github.com/scribd/go-sdk/pkg/middleware"
	"github.com/scribd/go-sdk/pkg/server"
)

func main() {
	config, err := server.NewConfig()
	if err != nil {
		log.Fatal(err)
	}

	corsMiddleware := middleware.NewCorsMiddleware(config.Cors.Settings[0])

	// possible Server implementation
	httpServer := server.
		NewHTTPServer(host, httpPort, applicationEnv, applicationName).
		MountMiddleware(corsMiddleware.Handler).
		MountRoutes(routes)
}
```
`go-sdk` comes with an integration with the popular gorm as an
object-relational mapper (ORM). Using the configuration details, namely the
data source name (DSN) composed from them, gorm is able to open a connection
and give `go-sdk` users a preconfigured, ready-to-use database connection with
an ORM attached. This can be done as follows:
```go
package main

import (
	"log"

	sdkdb "github.com/scribd/go-sdk/pkg/database"
)

func main() {
	// Loads the database configuration.
	dbConfig, err := sdkdb.NewConfig()
	if err != nil {
		log.Fatalf("Failed to load database configuration: %s", err.Error())
	}

	// Establishes a gorm database connection using the connection details.
	dbConn, err := sdkdb.NewConnection(dbConfig)
	if err != nil {
		log.Fatalf("Failed to establish database connection: %s", err.Error())
	}

	_ = dbConn
}
```
The connection details are handled internally by the gorm integration, in other
words the `NewConnection` function, so they remain opaque to the user.
Invoking the constructor for a database connection, `go-sdk` returns a
gorm-powered database connection. It can be used right away to query the
database:
```go
package main

import (
	"fmt"
	"log"

	sdkdb "github.com/scribd/go-sdk/pkg/database"
	"gorm.io/gorm"
)

type User struct {
	gorm.Model
	Name string `gorm:"type:varchar(255)"`
	Age  uint   `gorm:"type:int"`
}

func main() {
	dbConfig, err := sdkdb.NewConfig()
	if err != nil {
		log.Fatal(err)
	}

	dbConn, err := sdkdb.NewConnection(dbConfig)
	if err != nil {
		log.Fatal(err)
	}

	// Example values for the new record.
	user := User{Name: "Jane Doe", Age: 42}
	if err := dbConn.Create(&user).Error; err != nil {
		fmt.Println(err)
	}

	fmt.Println(user)
}
```
To learn more about Gorm, you can start with its official documentation.
`go-sdk` provides a convenient way to create a basic PubSub configuration. The top-level configuration is split into sections specific to each vendor. Let's look into a sample `pubsub.yml` file:
```yaml
common: &common
  kafka:
    # Set by APP_PUBSUB_KAFKA_BROKER_URLS env variable
    broker_urls:
      - "localhost:9092"
    # Set by APP_PUBSUB_KAFKA_CLIENT_ID env variable
    client_id: "test-app"
    # Set by APP_PUBSUB_KAFKA_CERT_PEM env variable
    cert_pem: "pem string"
    # Set by APP_PUBSUB_KAFKA_CERT_PEM_KEY env variable
    cert_pem_key: "pem key"
    security_protocol: "ssl"
    ssl_verification_enabled: true
    publisher:
      # Set by APP_PUBSUB_KAFKA_PUBLISHER_MAX_ATTEMPTS env variable
      max_attempts: 3
      write_timeout: "10s"
      topic: "test-topic"
    subscriber:
      topic: "test-topic"
      group_id: ""
  sqs:
    subscriber:
      # Set by APP_PUBSUB_SQS_SUBSCRIBER_ENABLED env variable
      enabled: true
      # Set by APP_PUBSUB_SQS_SUBSCRIBER_QUEUE_URL env variable
      queue_url: "https://sqs.us-east-2.amazonaws.com/123456789012/test-queue"
      # Set by APP_PUBSUB_SQS_SUBSCRIBER_MAX_MESSAGES env variable
      max_messages: 10
      # Set by APP_PUBSUB_SQS_SUBSCRIBER_WORKERS env variable
      workers: 1
      # Set by APP_PUBSUB_SQS_SUBSCRIBER_WAIT_TIME env variable
      wait_time: "10s"
    publisher:
      # Set by APP_PUBSUB_SQS_PUBLISHER_ENABLED env variable
      enabled: true
      # Set by APP_PUBSUB_SQS_PUBLISHER_QUEUE_URL env variable
      queue_url: "https://sqs.us-east-2.amazonaws.com/123456789012/test-queue"
```
Kafka top-level configuration contains generic options that fit both publisher and subscriber:

| Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
|---|---|---|---|---|---|
| Broker URLs | Broker URLs to connect to | `broker_urls` | `APP_PUBSUB_KAFKA_BROKER_URLS` | list(string) | `localhost:9093` |
| Client ID | Client identifier | `client_id` | `APP_PUBSUB_KAFKA_CLIENT_ID` | string | `go-chassis-app` |
| Cert PEM | Client's public key string (PEM format) used for authentication | `cert_pem` | `APP_PUBSUB_KAFKA_CERT_PEM` | string | long PEM string (deprecated) |
| Cert PEM Key | Client's private key string (PEM format) used for authentication | `cert_pem_key` | `APP_PUBSUB_KAFKA_CERT_PEM_KEY` | string | long PEM key string (deprecated) |
| Security Protocol | Protocol used to communicate with brokers | `security_protocol` | `APP_PUBSUB_KAFKA_SECURITY_PROTOCOL` | string | plaintext, ssl, sasl_plaintext, sasl_ssl (deprecated) |
| SSL verification enabled | Enable OpenSSL's builtin broker (server) certificate verification | `ssl_verification_enabled` | `APP_PUBSUB_KAFKA_SSL_VERIFICATION_ENABLED` | bool | true, false (deprecated) |
| Enable Metrics | Enable publishing metrics for Kafka client broker connection | `metrics_enabled` | `APP_PUBSUB_KAFKA_METRICS_ENABLED` | bool | true, false |
To break it down further, `Publisher` and `Subscriber` have their own sets of configuration options.

Publisher:

| Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
|---|---|---|---|---|---|
| Max attempts | Maximum amount of times to retry sending a failing Message | `max_attempts` | `APP_PUBSUB_KAFKA_PUBLISHER_MAX_ATTEMPTS` | int | 3 |
| Write timeout | Local message timeout. This value is only enforced locally and limits the time a produced message waits for successful delivery | `write_timeout` | `APP_PUBSUB_KAFKA_PUBLISHER_WRITE_TIMEOUT` | string | 10s |
| Topic | Topic name | `topic` | `APP_PUBSUB_KAFKA_PUBLISHER_TOPIC` | string | topic |
| Enable Metrics | Enable publishing metrics for Kafka producer | `metrics_enabled` | `APP_PUBSUB_KAFKA_PUBLISHER_METRICS_ENABLED` | bool | true, false |
Subscriber:

| Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
|---|---|---|---|---|---|
| Topic | Topic name | `topic` | `APP_PUBSUB_KAFKA_SUBSCRIBER_TOPIC` | string | topic |
| Group ID | Client group id string. All clients sharing the same group id belong to the same group | `group_id` | `APP_PUBSUB_KAFKA_SUBSCRIBER_GROUP_ID` | string | service-name |
| Enable Metrics | Enable publishing metrics for Kafka consumer | `metrics_enabled` | `APP_PUBSUB_KAFKA_SUBSCRIBER_METRICS_ENABLED` | bool | true, false |
| Enable AutoCommit | Enable AutoCommit option for Kafka consumer (default true) | `auto_commit.enabled` | `APP_PUBSUB_KAFKA_SUBSCRIBER_AUTO_COMMIT_ENABLED` | bool | true, false |
| Number of workers | Number of workers processing incoming messages | `workers` | `APP_PUBSUB_KAFKA_SUBSCRIBER_WORKERS` | number | 1, 2... |
| Block rebalance | Indicates if the subscriber will block the rebalance event when polling. For more information refer to the franz-go documentation | `block_rebalance` | `APP_PUBSUB_KAFKA_SUBSCRIBER_BLOCK_REBALANCE` | bool | true, false |
| Max records | Number of messages to retrieve from the Kafka topic per polling iteration | `max_records` | `APP_PUBSUB_KAFKA_SUBSCRIBER_MAX_RECORDS` | number | 1, 2... |
IMPORTANT NOTES:

- `APP_PUBSUB_KAFKA_SUBSCRIBER_WORKERS` is a subscriber setting used to increase throughput. If the number is more than 1, messages will be processed concurrently in separate goroutines. If the order of messages is important, this setting should be set to 1.
- `APP_PUBSUB_KAFKA_SUBSCRIBER_BLOCK_REBALANCE` is a subscriber setting to block the rebalance event when polling. If set to `true`, the subscriber will block the rebalance event while polling. This option helps reduce the amount of duplicate messages, but it is important to make sure that message processing is fast enough to avoid blocking the rebalance event for a long time. For more information refer to the franz-go documentation.
To authenticate the requests to Kafka, Go SDK provides a configuration set for TLS and SASL.

TLS:

| Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
|---|---|---|---|---|---|
| Enabled | Whether TLS authentication is enabled or not | `enabled` | `APP_PUBSUB_KAFKA_TLS_ENABLED` | bool | true, false |
| Root certificate | Root CA certificate | `ca` | `APP_PUBSUB_KAFKA_TLS_CA` | string | Root CA |
| Cert PEM | Client's public key string (PEM format) used for authentication | `cert_pem` | `APP_PUBSUB_KAFKA_TLS_CERT_PEM` | string | long PEM string (deprecated) |
| Cert PEM Key | Client's private key string (PEM format) used for authentication | `cert_pem_key` | `APP_PUBSUB_KAFKA_TLS_CERT_PEM_KEY` | string | long PEM key string (deprecated) |
| Passphrase | Used in case the private key needs to be decrypted | `passphrase` | `APP_PUBSUB_KAFKA_TLS_PASSPHRASE` | string | pass phrase |
| Skip TLS verification | Turn on / off TLS verification | `insecure_skip_verify` | `APP_PUBSUB_KAFKA_TLS_INSECURE_SKIP_VERIFY` | bool | true, false |
SASL:

| Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
|---|---|---|---|---|---|
| Enabled | Whether SASL authentication is enabled or not | `enabled` | `APP_PUBSUB_KAFKA_SASL_ENABLED` | bool | true, false |
| Mechanism | String representation of the SASL mechanism. Only "plain" and "aws_msk_iam" are supported | `mechanism` | `APP_PUBSUB_KAFKA_SASL_MECHANISM` | string | plain, aws_msk_iam |
| Username | The username to authenticate Kafka requests | `username` | `APP_PUBSUB_KAFKA_SASL_USERNAME` | string | username |
| Password | The password to authenticate Kafka requests | `password` | `APP_PUBSUB_KAFKA_SASL_PASSWORD` | string | password |
| AWS MSK IAM access key | AWS MSK IAM access key to authenticate AWS MSK requests | `access_key` | `APP_PUBSUB_KAFKA_SASL_AWS_MSK_IAM_ACCESS_KEY` | string | access key |
| AWS MSK IAM secret key | AWS MSK IAM secret key to authenticate AWS MSK requests | `secret_key` | `APP_PUBSUB_KAFKA_SASL_AWS_MSK_IAM_SECRET_KEY` | string | secret key |
| AWS STS Session Token | Used to authenticate AWS MSK requests via the AWS STS service. For more information please check the documentation | `session_token` | `APP_PUBSUB_KAFKA_SASL_AWS_MSK_IAM_SESSION_TOKEN` | string | token |
| User agent | The client's user agent string | `user_agent` | `APP_PUBSUB_KAFKA_SASL_AWS_MSK_IAM_USER_AGENT` | string | user agent |
| Assumable role | This role will be used to establish the connection to AWS MSK, ignoring the static credentials | `role` | `APP_PUBSUB_KAFKA_SASL_AWS_MSK_IAM_ROLE` | string | AWS ARN string |
| Session name | Will be passed to AWS STS when assuming the role | `session_name` | `APP_PUBSUB_KAFKA_SASL_AWS_MSK_IAM_SESSION_NAME` | string | application |
Under the hood, the SQS integration relies on the provided SQS client. To learn more about the AWS services initialization, check the AWS service configuration.

Subscriber:

| Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
|---|---|---|---|---|---|
| Queue URL | URL of the SQS queue | `queue_url` | `APP_PUBSUB_SQS_SUBSCRIBER_QUEUE_URL` | string | queue URL |
| Max messages | Maximum number of messages to retrieve from the queue per polling iteration | `max_messages` | `APP_PUBSUB_SQS_SUBSCRIBER_MAX_MESSAGES` | number | 1, 2... |
| Number of workers | Number of workers processing incoming messages | `workers` | `APP_PUBSUB_SQS_SUBSCRIBER_WORKERS` | number | 1, 2... |
| Wait time | The maximum amount of time in `time.Duration` to wait for messages to be available for retrieval | `wait_time` | `APP_PUBSUB_SQS_SUBSCRIBER_WAIT_TIME` | string | 10s |

Publisher:

| Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
|---|---|---|---|---|---|
| Queue URL | URL of the SQS queue | `queue_url` | `APP_PUBSUB_SQS_PUBLISHER_QUEUE_URL` | string | queue URL |
`go-sdk` provides a convenient way to create an application Cache configuration. The top-level configuration is split into sections specific to each vendor. Let's look into a sample `cache.yml` file:

```yaml
common: &common
  store: redis
  redis:
    url: ""
    # Set by APP_CACHE_REDIS_ADDRS env variable
    addrs:
      - "localhost:6379"
    # Set by APP_CACHE_REDIS_USERNAME env variable
    username: "username"
    # Set by APP_CACHE_REDIS_PASSWORD env variable
    password: "password"
```
Redis top-level configuration is based on the official go-redis `UniversalOptions` library options. The following table describes the configuration options:

| Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
|---|---|---|---|---|---|
| URL | URL into Redis ClusterOptions that can be used to connect to Redis | `url` | `APP_CACHE_REDIS_URL` | string | `redis://:password@localhost:6379/0` |
| Addrs | Either a single address or a seed list of host:port addresses of cluster/sentinel nodes | `addrs` | `APP_CACHE_REDIS_ADDRS` | list(string) | `localhost:6379` |
| ClientName | Will execute the `CLIENT SETNAME ClientName` command for each conn | `client_name` | `APP_CACHE_REDIS_CLIENT_NAME` | string | client-name |
| DB | Database to be selected after connecting to the server. Only single-node and failover clients | `db` | `APP_CACHE_REDIS_DB` | int | 0 |
| Protocol | Protocol 2 or 3. Use the version to negotiate RESP version with redis-server | `protocol` | `APP_CACHE_REDIS_PROTOCOL` | int | 2, 3 |
| Username | Username for Redis authentication | `username` | `APP_CACHE_REDIS_USERNAME` | string | username |
| Password | Password for Redis authentication | `password` | `APP_CACHE_REDIS_PASSWORD` | string | password |
| SentinelUsername | Username for Sentinel authentication | `sentinel_username` | `APP_CACHE_REDIS_SENTINEL_USERNAME` | string | sentinel-username |
| SentinelPassword | Password for Sentinel authentication | `sentinel_password` | `APP_CACHE_REDIS_SENTINEL_PASSWORD` | string | sentinel-password |
| MaxRetries | Maximum number of retries before giving up | `max_retries` | `APP_CACHE_REDIS_MAX_RETRIES` | int | 0 |
| MinRetryBackoff | Minimum backoff between each retry | `min_retry_backoff` | `APP_CACHE_REDIS_MIN_RETRY_BACKOFF` | time.Duration | 8ms |
| MaxRetryBackoff | Maximum backoff between each retry | `max_retry_backoff` | `APP_CACHE_REDIS_MAX_RETRY_BACKOFF` | time.Duration | 512ms |
| DialTimeout | The timeout for establishing a connection | `dial_timeout` | `APP_CACHE_REDIS_DIAL_TIMEOUT` | time.Duration | 5s |
| ReadTimeout | The timeout for socket reads | `read_timeout` | `APP_CACHE_REDIS_READ_TIMEOUT` | time.Duration | 3s |
| WriteTimeout | The timeout for socket writes | `write_timeout` | `APP_CACHE_REDIS_WRITE_TIMEOUT` | time.Duration | 3s |
| ContextTimeoutEnabled | Controls whether the client respects context timeouts and deadlines | `context_timeout_enabled` | `APP_CACHE_REDIS_CONTEXT_TIMEOUT_ENABLED` | bool | true, false |
| PoolSize | Base number of socket connections | `pool_size` | `APP_CACHE_REDIS_POOL_SIZE` | int | 10 |
| PoolTimeout | Amount of time the client waits for a connection if all connections are busy before returning an error | `pool_timeout` | `APP_CACHE_REDIS_POOL_TIMEOUT` | time.Duration | 4s |
| MaxIdleConns | Maximum number of idle connections | `max_idle_conns` | `APP_CACHE_REDIS_MAX_IDLE_CONNS` | int | 10 |
| MinIdleConns | Minimum number of idle connections | `min_idle_conns` | `APP_CACHE_REDIS_MIN_IDLE_CONNS` | int | 5 |
| MaxActiveConns | Maximum number of connections allocated by the pool at a given time | `max_active_conns` | `APP_CACHE_REDIS_MAX_ACTIVE_CONNS` | int | 100 |
| ConnMaxIdleTime | Maximum amount of time a connection may be idle. Should be less than the server's timeout | `conn_max_idle_time` | `APP_CACHE_REDIS_CONN_MAX_IDLE_TIME` | time.Duration | 5m |
| ConnMaxLifetime | Maximum amount of time a connection may be reused | `conn_max_lifetime` | `APP_CACHE_REDIS_CONN_MAX_LIFETIME` | time.Duration | 5m |
| MaxRedirects | The maximum number of retries before giving up. Commands are retried on network errors and MOVED/ASK redirects. Only cluster clients | `max_redirects` | `APP_CACHE_REDIS_MAX_REDIRECTS` | int | 8 |
| ReadOnly | Enable read-only commands on slave nodes. Only cluster clients | `read_only` | `APP_CACHE_REDIS_READ_ONLY` | bool | true, false |
| RouteByLatency | Route read-only commands to the closest master or slave node. Only cluster clients | `route_by_latency` | `APP_CACHE_REDIS_ROUTE_BY_LATENCY` | bool | true, false |
| RouteRandomly | Route read-only commands to a random master or slave node. Only cluster clients | `route_randomly` | `APP_CACHE_REDIS_ROUTE_RANDOMLY` | bool | true, false |
| MasterName | Name of the master to use for Sentinel failover | `master_name` | `APP_CACHE_REDIS_MASTER_NAME` | string | master-name |
| DisableIndentity | Disable set-lib on connect | `disable_indentity` | `APP_CACHE_REDIS_DISABLE_INDENTITY` | bool | true, false |
| IdentitySuffix | Add suffix to client name | `identity_suffix` | `APP_CACHE_REDIS_IDENTITY_SUFFIX` | string | suffix |
To secure the requests to Redis, Go SDK provides a configuration set for TLS:

Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
---|---|---|---|---|---|
Enabled | Whether TLS is enabled or not | enabled | APP_CACHE_REDIS_TLS_ENABLED | bool | true, false |
Root certificate | Root CA certificate | ca | APP_CACHE_REDIS_TLS_CA | string | Root CA |
Cert PEM | Client's public key string (PEM format) used for authentication | cert_pem | APP_CACHE_REDIS_TLS_CERT_PEM | string | long PEM string |
Cert PEM Key | Client's private key string (PEM format) used for authentication | cert_pem_key | APP_CACHE_REDIS_TLS_CERT_PEM_KEY | string | long PEM key string |
Passphrase | Passphrase used in case the private key needs to be decrypted | passphrase | APP_CACHE_REDIS_TLS_PASSPHRASE | string | pass phrase |
Skip TLS verification | Turn on / off TLS verification | insecure_skip_verify | APP_CACHE_REDIS_TLS_INSECURE_SKIP_VERIFY | bool | true, false |
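Putting a handful of these settings together, a `config/cache.yml` could look like the following sketch. This is a hypothetical example: only a subset of the settings above is shown, the values are placeholders, and the nesting mirrors the `APP_CACHE_REDIS_*` variable names and the environment scoping used by the other configuration files.

```yaml
# config/cache.yml (illustrative sketch)
common: &common
  redis:
    username: ""
    password: ""
    dial_timeout: 5s
    read_timeout: 3s
    pool_size: 10
    tls:
      enabled: false

development:
  <<: *common
test:
  <<: *common
staging:
  <<: *common
production:
  <<: *common
```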
`go-sdk` comes with an integration with Statsig to leverage its experimentation and feature flag functionality. The respective configuration file is `statsig.yml` and it should include the following content:
```yaml
# config/statsig.yml
common: &common
  secret_key: ""
  local_mode: false
  config_sync_interval: 10s
  id_list_sync_interval: 60s

development:
  <<: *common
test:
  <<: *common
staging:
  <<: *common
production:
  <<: *common
```
For more details on the configuration, check the Statsig Go Server SDK documentation.
Using the configuration details, the statsig client can be initialized as follows:
```go
package main

import (
	"log"

	sdkstatsig "github.com/scribd/go-sdk/pkg/statsig"
)

func main() {
	// Load the statsig configuration.
	statsigConfig, err := sdkstatsig.NewConfig()
	if err != nil {
		log.Fatalf("Failed to load statsig config: %s", err)
	}

	// Initialize the statsig connection using the configuration.
	sdkstatsig.Initialize(statsigConfig)
}
```
After initializing the statsig client, one can start using experiments or feature flags using the following code:
```go
import (
	sdkstatsig "github.com/scribd/go-sdk/pkg/statsig"
	statsig "github.com/statsig-io/go-sdk"
)

func main() {
	u := statsig.User{}

	experiment := sdkstatsig.GetExperiment(u, "experiment_name")
	featureFlag := sdkstatsig.GetFeatureFlag(u, "feature_flag_name")
}
```
`go-sdk` provides a convenient way to create an AWS configuration. The configuration contains the common AWS settings as well as service-specific settings. The service configuration is split by service name, to facilitate configuring multiple services per service type.
```yaml
# config/aws.yml
common: &common
  config:
    region: "us-east-2"
  s3:
    default:
      # APP_AWS_S3_DEFAULT_REGION
      region: "us-east-1"
      credentials:
        assume_role:
          # APP_AWS_S3_DEFAULT_CREDENTIALS_ASSUME_ROLE_ARN
          arn: ""
    example:
      # APP_AWS_S3_EXAMPLE_REGION
      region: "us-west-2"
      credentials:
        static:
          # APP_AWS_S3_EXAMPLE_CREDENTIALS_STATIC_ACCESS_KEY_ID
          access_key_id: ""
          # APP_AWS_S3_EXAMPLE_CREDENTIALS_STATIC_SECRET_ACCESS_KEY
          secret_access_key: ""
          # APP_AWS_S3_EXAMPLE_CREDENTIALS_STATIC_SESSION_TOKEN
          session_token: ""
```
The common configuration contains the following options:

Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
---|---|---|---|---|---|
Region | AWS region to use | region | APP_AWS_REGION | string | us-west-2 |
HTTP client max idle conns | Maximum number of idle connections in the HTTP client. | max_idle_conns | APP_AWS_HTTP_CLIENT_MAX_IDLE_CONNS | int | 100 |
Currently, `go-sdk` supports the following AWS service types: `s3`, `sagemakerruntime`, `sfn` and `sqs`.

To facilitate the configuration of multiple services per service type, the configuration is split by the service name. To override the settings specified in the YAML via environment variables, the following naming convention is used:

```
APP_AWS_<SERVICE_TYPE>_<SERVICE_NAME>_*
```

where `SERVICE_TYPE` is the service type (`s3`, `sqs`, etc.) and `SERVICE_NAME` is the service name (`default`, `example`, etc.).
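The mapping itself is simple enough to sketch in plain Go. `envVarName` below is a hypothetical helper (not part of `go-sdk`) that reproduces the documented naming convention:

```go
package main

import (
	"fmt"
	"strings"
)

// envVarName builds an override variable following the documented
// convention: APP_AWS_<SERVICE_TYPE>_<SERVICE_NAME>_<SETTING>.
func envVarName(serviceType, serviceName, setting string) string {
	parts := []string{"APP_AWS", serviceType, serviceName, setting}
	return strings.ToUpper(strings.Join(parts, "_"))
}

func main() {
	fmt.Println(envVarName("s3", "default", "region"))
	// Output: APP_AWS_S3_DEFAULT_REGION
}
```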
It is possible to specify region and HTTP client settings for each service, which will override the common configuration.

For each service, it is possible to specify credentials. Currently, `go-sdk` supports two types of credentials: `assume_role` and `static`.
`assume_role` credentials are used to assume a role to get credentials using the AWS STS service. The following options are available:

Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
---|---|---|---|---|---|
ARN | The Amazon Resource Name (ARN) of the role to assume | arn | APP_AWS_S3_DEFAULT_CREDENTIALS_ASSUME_ROLE_ARN | string | arn:aws:iam::123456789012:role/role-name |
`static` credentials are used to specify the access key ID, secret access key and session token. The following options are available:

Setting | Description | YAML variable | Environment variable (ENV) | Type | Possible Values |
---|---|---|---|---|---|
Access Key ID | The access key ID | access_key_id | APP_AWS_S3_EXAMPLE_CREDENTIALS_STATIC_ACCESS_KEY_ID | string | access-key-id |
Secret Access Key | The secret access key | secret_access_key | APP_AWS_S3_EXAMPLE_CREDENTIALS_STATIC_SECRET_ACCESS_KEY | string | secret-access-key |
Session Token | The session token | session_token | APP_AWS_S3_EXAMPLE_CREDENTIALS_STATIC_SESSION_TOKEN | string | session-token |
The `go-sdk` provides an easy way to add application performance monitoring (APM) & instrumentation to a service. It provides DataDog APM using the `dd-trace-go` library. `dd-trace-go` provides gRPC server & client interceptors, HTTP router instrumentation, database connection & ORM instrumentation and AWS client instrumentation. All of the traces and data are opaquely sent to DataDog.
For easier identification of requests and their tracing within components of a single service, and across services, `go-sdk` has a `RequestID` middleware. As an HTTP middleware, it checks every incoming request for an `X-Request-Id` HTTP header and sets the value of the header as a field in the request `Context`. In case there's no `RequestID` present, it will generate a UUID and assign it to the request `Context`.
Example usage of the middleware:
```go
func main() {
	requestIDMiddleware := sdkmiddleware.NewRequestIDMiddleware()

	httpServer := server.
		NewHTTPServer(host, httpPort, applicationEnv, applicationName).
		MountMiddleware(requestIDMiddleware.Handler).
		MountRoutes(routes)
}
```
For the gRPC server, the SDK provides unary and stream interceptors. They check every incoming request for an `x-request-id` gRPC metadata header and set the value of the header as a field in the request `Context`. In case there's no `RequestID` present, they will generate a UUID and assign it to the request `Context`.
Example usage of interceptors:
```go
func main() {
	grpcServer, err := server.NewGrpcServer(
		host,
		grpcPort,
		[]grpc.ServerOption{
			grpc.ChainUnaryInterceptor(
				sdkinterceptors.TracingUnaryServerInterceptor(applicationName),
				sdkinterceptors.RequestIDUnaryServerInterceptor(),
				sdkinterceptors.LoggerUnaryServerInterceptor(logger),
			),
			grpc.ChainStreamInterceptor(
				sdkinterceptors.TracingStreamServerInterceptor(applicationName),
				sdkinterceptors.RequestIDStreamServerInterceptor(),
				sdkinterceptors.LoggerStreamServerInterceptor(logger),
			),
		}...)
}
```
`go-sdk` ships with HTTP router instrumentation, based on DataDog's `dd-trace-go` library. It spawns a new instrumented router where each of the requests will create traces that will be sent opaquely to the DataDog agent.
Example usage of the instrumentation:
```go
// Example taken from go-chassis
func NewHTTPServer(host, port, applicationEnv, applicationName string) *HTTPServer {
	router := sdk.Tracer.Router(applicationName)

	srv := &http.Server{
		Addr:    fmt.Sprintf("%s:%s", host, port),
		Handler: router,
	}

	return &HTTPServer{
		srv:            srv,
		applicationEnv: applicationEnv,
	}
}
```
`go-sdk` ships with gRPC instrumentation, based on DataDog's `dd-trace-go` library. It creates new gRPC interceptors for the server & client, so requests will create traces that will be sent opaquely to the DataDog agent.
Example usage of the instrumentation:
```go
// gRPC client example
si := grpctrace.StreamClientInterceptor(grpctrace.WithServiceName("client-application"))
ui := grpctrace.UnaryClientInterceptor(grpctrace.WithServiceName("client-application"))

conn, err := grpc.Dial("<gRPC host>", grpc.WithStreamInterceptor(si), grpc.WithUnaryInterceptor(ui))
if err != nil {
	log.Fatalf("Failed to init gRPC connection: %s", err)
}
defer conn.Close()
```
```go
// gRPC server example
grpcServer, err := server.NewGrpcServer(
	host,
	grpcPort,
	[]grpc.ServerOption{
		grpc.ChainUnaryInterceptor(
			sdkinterceptors.TracingUnaryServerInterceptor(applicationName),
			sdkinterceptors.LoggerUnaryServerInterceptor(logger),
			sdkinterceptors.MetricsUnaryServerInterceptor(metrics),
			sdkinterceptors.DatabaseUnaryServerInterceptor(database),
			kitgrpc.Interceptor,
		),
		grpc.ChainStreamInterceptor(
			sdkinterceptors.TracingStreamServerInterceptor(applicationName),
			sdkinterceptors.LoggerStreamServerInterceptor(logger),
			sdkinterceptors.MetricsStreamServerInterceptor(metrics),
			sdkinterceptors.DatabaseStreamServerInterceptor(database),
		),
	}...)
if err != nil {
	logger.Fatalf("Failed to create gRPC Server: %s", err)
}
```
`go-sdk` ships with two database-related middlewares for both HTTP and gRPC servers: `Database` & `DatabaseLogging`.

The `Database` middleware instruments the Gorm-powered database connection. It utilizes Gorm-specific callbacks that report spans and traces to Datadog. The instrumented Gorm database connection is injected into the request `Context` and is always scoped within the request.

The `DatabaseLogging` middleware checks for a logger injected in the request `Context`. If found, the logger is passed to the Gorm database connection, which in turn uses the logger to produce database query logs. A nice side-effect of this approach is that, if the logger is tagged with a `request_id`, there's a log correlation between the HTTP requests and the database queries. Also, if the logger is tagged with a `trace_id`, we can easily correlate logs with traces and see the corresponding database queries. Keep in mind that ORM logging happens at the same level as the base logger. The additional database fields (`duration`, `affected_rows` and `sql`) are available only when using the `trace` level.
```go
func main() {
	databaseMiddleware := sdkmiddleware.NewDatabaseMiddleware(sdk.Database)
	databaseLoggingMiddleware := sdkmiddleware.NewDatabaseLoggingMiddleware()

	httpServer := server.
		NewHTTPServer(host, httpPort, applicationEnv, applicationName).
		MountMiddleware(databaseMiddleware.Handler).
		MountMiddleware(databaseLoggingMiddleware.Handler).
		MountRoutes(routes)
}
```
```go
func main() {
	grpcServer, err := server.NewGrpcServer(
		host,
		grpcPort,
		[]grpc.ServerOption{
			grpc.ChainUnaryInterceptor(
				sdkinterceptors.TracingUnaryServerInterceptor(applicationName),
				sdkinterceptors.LoggerUnaryServerInterceptor(logger),
				sdkinterceptors.MetricsUnaryServerInterceptor(metrics),
				sdkinterceptors.DatabaseUnaryServerInterceptor(database),
				sdkinterceptors.DatabaseLoggingUnaryServerInterceptor(),
				kitgrpc.Interceptor,
			),
			grpc.ChainStreamInterceptor(
				sdkinterceptors.TracingStreamServerInterceptor(applicationName),
				sdkinterceptors.LoggerStreamServerInterceptor(logger),
				sdkinterceptors.MetricsStreamServerInterceptor(metrics),
				sdkinterceptors.DatabaseStreamServerInterceptor(database),
				sdkinterceptors.DatabaseLoggingStreamServerInterceptor(),
			),
		}...)
}
```
`go-sdk` instruments the AWS clients by wrapping them with a DataDog trace and tagging them with the service name. In addition, this registers AWS as a separate service in DataDog.
Example usage of the instrumentation:
```go
import (
	"context"
	"log"

	sdkaws "github.com/scribd/go-sdk/pkg/aws"
	sdkconfig "github.com/scribd/go-sdk/pkg/configuration"
	instrumentation "github.com/scribd/go-sdk/pkg/instrumentation"
)

func main() {
	config, err := sdkconfig.NewConfig()
	if err != nil {
		log.Fatalf("Failed to load SDK config: %s", err)
	}

	awsBuilder := sdkaws.NewBuilder(config.AWS)

	cfg, err := awsBuilder.LoadConfig(context.Background())
	if err != nil {
		log.Fatalf("error: %v", err)
	}

	instrumentation.InstrumentAWSClient(cfg, instrumentation.Settings{
		AppName: applicationName,
	})

	// Create an S3 service client with the service name "default".
	s3client := awsBuilder.NewS3Service(cfg, "default")
}
```
To correlate the traces that are registered with DataDog with the corresponding requests in other DataDog services, the `aws-sdk-go` provides functions with the `WithContext` suffix. These functions expect the request `Context` as the first argument, which allows the tracing chain to be continued inside the SDK call stack. To learn more about these functions, you can start by reading about them on the AWS developer blog.
`go-sdk` provides a flexible way to instrument the Kafka client with logging, tracing and metrics capabilities. The franz-go Kafka client library is wrapped by the `go-sdk`, which traces calls to Kafka.

For logging, `go-sdk` provides a franz-go-compatible logger which is basically a plugin around the SDK's `logger.Logger` interface. This logger will print franz-go log stubs depending on the configured log level.
```go
import (
	"github.com/twmb/franz-go/pkg/kgo"

	sdklogger "github.com/scribd/go-sdk/pkg/logger"
	sdkkafkalogger "github.com/scribd/go-sdk/pkg/logger/kafka"
)

func main() {
	kafkaClientOpts := []kgo.Opt{
		kgo.WithLogger(
			sdkkafkalogger.NewKafkaLogger(
				logger,
				sdkkafkalogger.WithLevel(sdklogger.Level(config.Logger.ConsoleLevel)),
			),
		),
	}
}
```
For metrics, `go-sdk` provides a `franz-go` plugin to publish metrics to DataDog via DogStatsd. It is implemented as a hook. Metrics are split into three categories: `broker`, `producer` and `consumer`.
Broker metrics represent a set of metrics around the broker connection. If enabled, the following metrics will be published:

Count metrics:

```
#{service_name}.kafka_client.broker.read_errors_total{node="#{node}"}
#{service_name}.kafka_client.broker.read_bytes_total{node="#{node}"}
#{service_name}.kafka_client.broker.write_errors_total{node="#{node}"}
#{service_name}.kafka_client.broker.write_bytes_total{node="#{node}"}
#{service_name}.kafka_client.broker.disconnects_total{node="#{node}"}
#{service_name}.kafka_client.broker.connect_errors_total{node="#{node}"}
#{service_name}.kafka_client.broker.connects_total{node="#{node}"}
```

Time metrics:

```
#{service_name}.kafka_client.broker.write_latency{node="#{node}"}
#{service_name}.kafka_client.broker.write_wait_latency{node="#{node}"}
#{service_name}.kafka_client.broker.read_latency{node="#{node}"}
#{service_name}.kafka_client.broker.read_wait_latency{node="#{node}"}
```

Histogram metrics:

```
#{service_name}.kafka_client.broker.throttle_latency{node="#{node}"}
```

Producer metrics represent a set of metrics around the producer. If enabled, the following metrics will be published:

Count metrics:

```
#{service_name}.kafka_client.producer.records_error
#{service_name}.kafka_client.producer.records_unbuffered
#{service_name}.kafka_client.producer.records_buffered
#{service_name}.kafka_client.producer.produce_bytes_uncompressed_total{node="#{node}",topic="#{topic}",partition="#{partition}"}
#{service_name}.kafka_client.producer.produce_bytes_compressed_total{node="#{node}",topic="#{topic}",partition="#{partition}"}
```

Consumer metrics represent a set of metrics around the consumer. If enabled, the following metrics will be published:

Count metrics:

```
#{service_name}.kafka_client.consumer.records_unbuffered
#{service_name}.kafka_client.consumer.records_buffered
#{service_name}.kafka_client.consumer.fetch_bytes_uncompressed_total{node="#{node}",topic="#{topic}",partition="#{partition}"}
#{service_name}.kafka_client.consumer.fetch_bytes_compressed_total{node="#{node}",topic="#{topic}",partition="#{partition}"}
```
By default, all metrics will be published without sampling (rate `1.0`). The client can specify the sampling rate per metric type and/or per metric name:
```go
import (
	"github.com/twmb/franz-go/pkg/kgo"

	sdkkafkametrics "github.com/scribd/go-sdk/pkg/metrics/kafka"
)

func main() {
	kafkaClientOpts := []kgo.Opt{
		// No sampling rate specified.
		kgo.WithHooks(sdkkafkametrics.NewBrokerMetrics(metrics)),
		// A sampling rate of 0.5 will be applied to all producer metrics.
		kgo.WithHooks(sdkkafkametrics.NewProducerMetrics(metrics, sdkkafkametrics.WithSampleRate(0.5))),
		// A sampling rate of 0.3 will be applied to the provided consumer metric name only;
		// other metrics will be published with the default sample rate.
		kgo.WithHooks(sdkkafkametrics.NewConsumerMetrics(
			metrics,
			sdkkafkametrics.WithSampleRatesPerMetric(map[string]float64{
				"kafka_client.consumer.records_buffered": 0.3,
			}),
		)),
	}
}
```
`go-sdk` provides a flexible way to instrument the Redis client with logging and tracing capabilities. The go-redis Redis client library is wrapped by the `go-sdk` using the `dd-trace-go` library, which traces calls to Redis.

For logging, `go-sdk` sets the go-redis logger to the SDK logger. This logger will print go-redis log stubs as error log entries.
```go
import (
	"log"

	sdkredis "github.com/scribd/go-sdk/pkg/cache/redis"
	sdkconfig "github.com/scribd/go-sdk/pkg/configuration"
	instrumentation "github.com/scribd/go-sdk/pkg/instrumentation"
	sdklogger "github.com/scribd/go-sdk/pkg/logger"
)

func main() {
	config, err := sdkconfig.NewConfig()
	if err != nil {
		log.Fatalf("Failed to load SDK config: %s", err)
	}

	logger, err := sdklogger.NewBuilder(config.Logger).SetTracking(config.Tracking).Build()
	if err != nil {
		log.Fatalf("Failed to load SDK logger: %s", err)
	}

	redisClient, err := sdkredis.New(&config.Cache.Redis)
	if err != nil {
		logger.WithError(err).Fatalf("Failed to create Redis client: %s", err)
	}

	// Instrument the Redis client.
	instrumentation.InstrumentRedis(redisClient, applicationName)

	// Set the logger for the Redis client.
	sdklogger.SetRedisLogger(logger)
}
```
You can send `pprof` samples to DataDog by enabling the profiler. Under the hood, the DataDog profiler will continuously take heap, CPU and mutex profiles, every 1 minute by default. The default CPU profile duration is 15 seconds. Keep in mind that the profiler introduces overhead while it is executing. The default DataDog configuration, which go-sdk uses out of the box, follows good practices.
```go
func main() {
	// Build the profiler.
	sdkProfiler := instrumentation.NewProfiler(
		config.Instrumentation,
		profiler.WithService(appName),
		profiler.WithVersion(version),
	)

	if err := sdkProfiler.Start(); err != nil {
		log.Fatalf("Failed to start profiler: %s", err)
	}
	defer sdkProfiler.Stop()
}
```
`go-sdk` provides a way to send custom metrics to Datadog. The `metrics.Configuration` configuration can be used to define the set of tags to attach to every metric emitted by the client. This client is configured by default with:

- `service:$APP_NAME`
- `env:$ENVIRONMENT`

Datadog tags documentation is available here. See the metric submission documentation on how to submit custom metrics.

Metric names must only contain ASCII alphanumerics, underscores, and periods. The client will not replace nor check for invalid characters.

Some options are supported when submitting metrics, like applying a sample rate to your metrics or tagging your metrics with your custom tags. Find all the available functions to report metrics in the Datadog Go client GoDoc documentation.
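Since the client performs no validation, a caller can guard metric names itself. `validMetricName` below is a hypothetical pre-submission check (not part of `go-sdk`) implementing the documented constraint:

```go
package main

import "fmt"

// validMetricName reports whether a metric name contains only ASCII
// alphanumerics, underscores, and periods, as the documentation
// requires. The SDK client itself performs no such check.
func validMetricName(name string) bool {
	for _, r := range name {
		switch {
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9', r == '_', r == '.':
			// Allowed character; keep scanning.
		default:
			return false
		}
	}
	return len(name) > 0
}

func main() {
	fmt.Println(validMetricName("example.increment")) // allowed characters only
	fmt.Println(validMetricName("example-increment")) // hyphen is not allowed
}
```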
Example usage of the custom metrics:
```go
package main

import (
	"log"

	"github.com/scribd/go-sdk/pkg/metrics"
)

func main() {
	applicationEnv := "development"
	applicationName := "go-sdk-example"

	metricsConfig := &metrics.Config{
		Environment: applicationEnv,
		App:         applicationName,
	}

	client, err := metrics.NewBuilder(metricsConfig).Build()
	if err != nil {
		log.Fatalf("Could not initialize Metrics client: %s", err)
	}

	_ = client.Incr("example.increment", []string{""}, 1)
	_ = client.Decr("example.decrement", []string{""}, 1)
	_ = client.Count("example.count", 2, []string{""}, 1)
}
```
The `go-sdk` is a standalone Go module. This means that it can be imported and used in virtually any Go project. Still, there are four conventions that the `go-sdk` enforces which must be present in the host application:

- The presence of the `APP_ENV` environment variable, used for environment-awareness,
- Support of a single file format (YAML) for storing configurations,
- Support of only a single path to store the configuration, `config/` in the application root, and
- The presence of the `APP_ROOT` environment variable, set to the absolute path of the application that it's used in. This is used to locate the `config` directory on disk and load the enclosed YAML files.
A good way to approach initialization of the `go-sdk` in your application can be seen in the `go-chassis` itself:
```go
// internal/pkg/sdk/sdk.go
package sdk

import (
	"log"

	sdkconfig "github.com/scribd/go-sdk/pkg/configuration"
	sdklogger "github.com/scribd/go-sdk/pkg/logger"
)

var (
	// Config is the SDK-powered application configuration.
	Config *sdkconfig.Config
	// Logger is the SDK-powered application logger.
	Logger sdklogger.Logger

	err error
)

func init() {
	if Config, err = sdkconfig.NewConfig(); err != nil {
		log.Fatalf("Failed to load SDK config: %s", err.Error())
	}

	if Logger, err = sdklogger.NewBuilder(Config.Logger).Build(); err != nil {
		log.Fatalf("Failed to load SDK logger: %s", err.Error())
	}
}
```
Please note that while using the `go-sdk` in isolation is possible, it is highly recommended to use it in combination with the `go-chassis` for the best development, debugging and maintenance experience.
- The SDK provides a Docker Compose development environment to run, develop and test a service (version `1.24.1`).
- See `docker-compose.yml` for the service definitions.

```
$ docker-compose build [--no-cache|--pull|--parallel]
```

Refer to the Compose CLI reference for the full list of options and the synopsis of the command.
Compose provides a way to create and destroy isolated testing environments:

```
$ docker-compose run --rm sdk mage test:run
```

You can enter the docker environment to build, run and debug your service:

```
$ docker-compose run --rm sdk /bin/bash
root@1f31fa8e5c49:/sdk# go version
go version go1.23.0 linux/amd64
```

Refer to the Compose CLI reference for the full list of options and the synopsis of the command.
When developing a project that uses `go-sdk` as a dependency, you might need to introduce changes to the `go-sdk` codebase. In this case, perform the following steps:

1. Create a branch in the `go-sdk` repository with the changes you want, and push the branch to the repository remote:

   ```
   git push -u origin <username/branch-name>
   ```

2. Add or change the following line in the `go.mod` file of the project that uses `go-sdk` as a dependency:

   ```
   replace github.com/scribd/go-sdk => github.com/scribd/go-sdk.git <username/branch-name>
   ```

3. From the project root, fetch the new branch by running:

   ```
   go mod tidy
   ```

4. Note that running `go mod tidy` will tie `go-sdk` to the specific git commit, so after running it, the `replace` statement will look like this:

   ```
   replace github.com/scribd/go-sdk => github.com/scribd/go-sdk.git <pseudo-version>
   ```

   Therefore, you will need to repeat steps 1, 2 and 3 each time you add new changes to your branch in `go-sdk`.

5. After you are done with the required changes, create a merge request to the `go-sdk` repository. After the merge request is merged and a release is done, you need to, once again, alter the `replace` statement in your `go.mod` file:

   ```
   replace github.com/scribd/go-sdk => github.com/scribd/go-sdk.git <tag-name>
   ```
Please follow Conventional Commits and Angular commit message guidelines when writing commit messages.

This project is using `semantic-release` and conventional-commits, with the `angular` preset. Releases are done from the `origin/main` branch using a manual step at the end of the CI/CD pipeline.

In order to create a new release:

- Merge / push changes to `origin/main`
- Open the `origin/main` GitHub Action Release pipeline
- Press ▶️ on the "Run workflow" step

A version bump will happen automatically, and the type of version bump (patch, minor, major) depends on the commits introduced since the last release.
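For illustration, under conventional-commits semantics with the angular preset, commit prefixes map to bump types roughly as follows (the scopes shown are made-up examples):

```
fix(cache): handle missing redis password        -> patch release
feat(aws): add sqs service configuration         -> minor release
feat(server)!: change HTTPServer constructor     -> major release
   (or any commit with a "BREAKING CHANGE:" footer)
```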
The `semantic-release` configuration is in `.releaserc.yml`.
Made with ❤️ by the Core Services team.