Releases: gofr-dev/gofr
v1.26.0
Release v1.26.0
✨ Features
1. Support for NATS as External Pub-Sub
We have added support for NATS as an external pub/sub service in GoFr. Users can now easily set up NATS for message publishing and subscribing by configuring it via app.AddPubSub. Here’s a quick example:
app.AddPubSub(nats.New(&nats.Config{
Server: "nats://localhost:4222",
CredsFile: "",
Stream: nats.StreamConfig{
Stream: "my-stream",
Subjects: []string{"order-logs", "products"},
},
MaxWait: 5 * time.Second,
MaxPullWait: 500,
Consumer: "my-consumer",
}, app.Logger()))
To inject NATS, import it using the following command:
go get gofr.dev/pkg/gofr/datasources/pubsub/nats
Refer to our documentation for detailed setup instructions.
2. New Environment Variable DB_CHARSET
Introduced a new environment variable DB_CHARSET to make the MySQL character set configurable. By default, DB_CHARSET is set to utf8. However, setting it to utf8mb4 is recommended for full Unicode support, including emojis and special characters.
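To show where the charset setting ends up, here is a hedged sketch of a go-sql-driver/mysql style DSN carrying the charset parameter. GoFr assembles its DSN internally from the DB_* configs; the helper and all values below are illustrative placeholders, not GoFr's actual code.

```go
package main

import "fmt"

// buildDSN is an illustrative helper showing where a charset value such as
// DB_CHARSET lands in a go-sql-driver/mysql style DSN. GoFr builds its own
// DSN internally; the names and values here are placeholders.
func buildDSN(user, pass, host, port, db, charset string) string {
	return fmt.Sprintf("%s:%s@tcp(%s:%s)/%s?charset=%s",
		user, pass, host, port, db, charset)
}

func main() {
	// utf8mb4 is recommended for full Unicode support, including emojis.
	dsn := buildDSN("root", "password", "localhost", "3306", "test_db", "utf8mb4")
	fmt.Println(dsn) // root:password@tcp(localhost:3306)/test_db?charset=utf8mb4
}
```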
3. Enhanced CLI with TUI Elements (Spinners and Progress Bar)
This release enhances the GoFr command-line experience by introducing Text User Interface (TUI) elements, including spinners and a progress bar for visual feedback during command execution.
Check out an example in examples/sample-cmd
to see this in action.
4. New Examples for Remote File Server Interaction
We’ve added new examples in the examples/using-add-filestore
directory, demonstrating how to interact with remote file servers using GoFr. This addition provides a convenient reference for developers working with remote file management.
🛠️ Fixes
1. Fix: Invalid ClickHouse Documentation
Updated the ClickHouse package documentation to highlight only the relevant parts, improving readability and maintainability.
v1.25.0
Release v1.25.0
✨ Features
BadgerDB Tracing
- Added context support and tracing for BadgerDB datasource operations, enhancing observability and tracking within the BadgerDB storage layer.(Released pkg/gofr/datasource/kv-store/badger - v0.2.0)
Redis Authentication Support
- Users can now connect to Redis instances requiring a username, password, or both. Configure these credentials in the .env file located in the configs folder:
  - REDIS_USER: User credential for connecting to the Redis server. Multiple users with different permissions can be configured within a single Redis instance. For more details, refer to the official Redis documentation.
  - REDIS_PASSWORD: Password credential, required only if authentication is enabled on the Redis instance.
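A minimal .env sketch under the assumption that the usual Redis connection configs sit alongside the new credentials; the values below are placeholders:

```
# configs/.env — placeholder values
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_USER=appuser
REDIS_PASSWORD=s3cret
```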
Enhanced Authentication Retrieval in Context
- The GetAuthInfo method on the context provides easy access to various authentication methods, allowing developers to retrieve:
  - JWT Claims: Use GetAuthInfo().GetClaims() when OAuth is enabled, returning a jwt.MapClaims response.
  - Username: Use GetAuthInfo().GetUsername() for basic authentication, returning the username or an empty string if basic auth is not enabled.
  - API Key: Use GetAuthInfo().GetAPIKey() for API key-based authentication, returning the API key or an empty string if API key auth is not enabled.
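As a self-contained illustration of the retrieval pattern, the sketch below mirrors the accessor shape described above. The AuthInfo interface and basicAuthInfo type here are hypothetical stand-ins, not GoFr's real types; in GoFr the methods hang off the request context's GetAuthInfo().

```go
package main

import "fmt"

// AuthInfo is a hypothetical stand-in mirroring the accessors described
// above. In GoFr, GetClaims returns jwt.MapClaims, which is a
// map[string]interface{}.
type AuthInfo interface {
	GetClaims() map[string]interface{}
	GetUsername() string
	GetAPIKey() string
}

// basicAuthInfo models a request authenticated with basic auth only, so
// the other accessors fall back to their zero values.
type basicAuthInfo struct{ user string }

func (b basicAuthInfo) GetClaims() map[string]interface{} { return nil } // OAuth not enabled
func (b basicAuthInfo) GetUsername() string               { return b.user }
func (b basicAuthInfo) GetAPIKey() string                 { return "" } // API key auth not enabled

func main() {
	var info AuthInfo = basicAuthInfo{user: "alice"}

	fmt.Println(info.GetUsername())     // alice
	fmt.Println(info.GetAPIKey() == "") // true: empty when API key auth is off
}
```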
🛠️ Fixes
Static File Route Issue
- Resolved a bug in the AddStaticFiles method where static files could not be accessed via the specified routes. Now, given the following code:

func main() {
    app := gofr.New()
    app.AddStaticFiles("/", "./static")
    app.Run()
}

files within the ./static directory can be accessed properly, e.g., http://localhost:9000/abc.jpeg, without encountering a "route not registered" error.
Controlled Access to OpenAPI Files
- Prevented direct access to the openapi.json file via static routes. The file is now restricted to access through explicitly defined routes such as /.well-known/swagger or /.well-known/openapi.json for secure and controlled API documentation access.
v1.24.2
Release v1.24.2
🛠️ Fixes
- Tracing in Solr data source
  Fixed missing tracing for the Solr data source. Spans for Solr operations are now visible on the trace graph with important attributes to better debug the user application.
- Cassandra logs misaligned
  The logs emitted while running migrations for the Cassandra database were misaligned and did not contain the exact operation being performed. Logs are now aligned with the other logs and include the operation being performed.
- EventHub messages not getting consumed
  The Azure EventHub subscriber was unable to consume messages. The flow has been fixed to consume messages correctly, and a proper shutdown was added so the subscriber closes on application closure.
- Application shutdown not working when subscribers are registered
  GoFr applications now shut down gracefully without getting stuck in the forever subscription loop for topics.
Full Changelog: v1.24.1...v1.24.2
v1.24.1
Release v1.24.1
🛠️ Fixes:
- Google Subscriptions not calling user handlers
Fixed a bug in the Google subscriber where messages were consumed in the callback handler, but the SubscribeFunc registered by the user was not being called.
v1.24.0
Release v1.24.0
✨ Features:
- Cassandra Tracing & Context Support:
  We’ve added context support and tracing for Cassandra operations, improving flexibility and observability. The following methods are now available:

  Single Operations:
  - QueryWithCtx: Executes queries with context, binding results to the specified destination.
  - ExecWithCtx: Executes non-query operations with context.
  - ExecCASWithCtx: Executes lightweight (CAS) transactions with context.
  - NewBatchWithCtx: Initializes a new batch operation with context.

  Batch Operations:
  - BatchQueryWithCtx: Adds queries to batch operations with context.
  - ExecuteBatchWithCtx: Executes batch operations with context.
  - ExecuteBatchCASWithCtx: Executes batch operations with context and returns the result.

  Note: The following methods in Cassandra have been deprecated:

type Cassandra interface {
    Query(dest interface{}, stmt string, values ...any) error
    Exec(stmt string, values ...any) error
    ExecCAS(dest any, stmt string, values ...any) (bool, error)
    BatchQuery(stmt string, values ...any) error
    NewBatch(name string, batchType int) error

    CassandraBatch
}

type CassandraBatch interface {
    BatchQuery(name, stmt string, values ...any)
    ExecuteBatch(name string) error
    ExecuteBatchCAS(name string, dest ...any) (bool, error)
}
- JWT Claims Retrieval:
  OAuth-enabled applications can now retrieve JWT claims directly within handlers. Here’s an example:

func HelloHandler(c *gofr.Context) (interface{}, error) {
    // Retrieve the JWT claims from the context
    claimData := c.Context.Value(middleware.JWTClaim)

    // Assert that the claim data is of type jwt.MapClaims
    claims, ok := claimData.(jwt.MapClaims)
    if !ok {
        return nil, fmt.Errorf("invalid claim data type")
    }

    // Return the claims as a response
    return claims, nil
}
🛠️ Fixes:
- Redis Panic Handling:
  Resolved an issue where calling Redis.Ping() without an active connection caused the application to panic. This is now handled gracefully.
- Docker Example Enhancement:
  The http-server example has been enhanced to include Prometheus and Grafana containers in its Docker setup, allowing users to fully explore GoFr's observability features.
v1.23.0
Release v1.23.0
✨ Features:
- Tracing support added for MongoDB database:
  Added tracing capabilities for MongoDB database interactions, extending built-in tracing support across various MongoDB methods.
- Support for binding encoded forms:
  Added functionality for binding multipart-form data and URL-encoded form data.
  - You can use the Bind method to map form fields to struct fields by tagging them appropriately.
  - For more details, visit the documentation.
🛠️ Fixes:
- Resolved nil correlationID due to uninitialized exporter:
  Addressed an issue introduced in release v1.22.0, where the trace exporter and provider were not initialized when no configurations were specified. The issue has been fixed: the trace provider is now initialized by default, regardless of the provided configuration.
v1.22.1
Release v1.22.1
🛠️ Fixes
- Fix ClickHouse import
  Importing the ClickHouse package was failing in version 1.22.0 because the otel tracer package was present only as an indirect dependency.
v1.22.0
Release v1.22.0
✨ Features
- Support for tracing in ClickHouse.
  ClickHouse traces are now added and sent along with the respective request traces.
- Support for sampling traces.
  Traces can now be sampled based on the env config TRACER_RATIO, which refers to the proportion of traces that are exported through sampling. It ranges between 0 and 1. By default, this ratio is set to 1, meaning all traces are exported.
- Support Azure Eventhub as an external pub-sub datasource.
  - Eventhub can be used similarly to how messages are published and subscribed with Kafka, MQTT, and Google Pub/Sub.
  - To inject Eventhub, import it using the following command:
    go get gofr.dev/pkg/gofr/datasources/pubsub/eventhub
  - Set up Eventhub by calling the AddPubSub method of gofr:

app.AddPubSub(eventhub.New(eventhub.Config{
    ConnectionString:          "",
    ContainerConnectionString: "",
    StorageServiceURL:         "",
    StorageContainerName:      "",
    EventhubName:              "",
    ConsumerGroup:             "",
}))

  Refer to the documentation to learn how to obtain these values.
- Support to enable HTTPS in the HTTP server
  You can now secure your servers with SSL/TLS certificates by adding the certificates through the following configs: CERT_FILE and KEY_FILE.
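A minimal .env sketch, assuming both configs take file paths; the paths below are placeholders for your actual certificate and key:

```
# .env — placeholder paths; point these at your real certificate and key
CERT_FILE=/etc/ssl/certs/server.crt
KEY_FILE=/etc/ssl/private/server.key
```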
🛠️ Fixes
- Fix SQLite logs.
  Empty strings were appearing in logs due to a difference in the configuration parameters required by SQLite versus other SQL datasources when connecting. This has been fixed.
v1.21.0
Release v1.21.0
✨ Features
- Support for Dgraph
  Dgraph can be added using the AddDgraph method on gofrApp. The following methods are supported:
// Dgraph defines the methods for interacting with a Dgraph database.
type Dgraph interface {
// Query executes a read-only query in the Dgraph database and returns the result.
Query(ctx context.Context, query string) (interface{}, error)
// QueryWithVars executes a read-only query with variables in the Dgraph database.
QueryWithVars(ctx context.Context, query string, vars map[string]string) (interface{}, error)
// Mutate executes a write operation (mutation) in the Dgraph database and returns the result.
Mutate(ctx context.Context, mu interface{}) (interface{}, error)
// Alter applies schema or other changes to the Dgraph database.
Alter(ctx context.Context, op interface{}) error
// NewTxn creates a new transaction (read-write) for interacting with the Dgraph database.
NewTxn() interface{}
// NewReadOnlyTxn creates a new read-only transaction for querying the Dgraph database.
NewReadOnlyTxn() interface{}
// HealthChecker checks the health of the Dgraph instance.
HealthChecker
}
To use Dgraph in your GoFr application, follow the steps given below:
Step 1
go get gofr.dev/pkg/gofr/datasource/dgraph
Step 2
app.AddDgraph(dgraph.New(dgraph.Config{
Host: "localhost",
Port: "8080",
}))
GoFr supports both queries and mutations in Dgraph. To know more: Read the Docs
🛠 Enhancements
- Migrations in Cassandra
  Users can now add migrations while using Cassandra as the datasource. This enhancement assumes that the user has already created the KEYSPACE in Cassandra. A KEYSPACE in Cassandra is a container for tables that defines data replication settings across the cluster. Visit the Docs to know more.
type Cassandra interface {
Exec(query string, args ...interface{}) error
NewBatch(name string, batchType int) error
BatchQuery(name, stmt string, values ...any) error
ExecuteBatch(name string) error
HealthCheck(ctx context.Context) (any, error)
}
To achieve atomicity during migrations, users can leverage batch operations using the NewBatch, BatchQuery, and ExecuteBatch methods. These methods allow multiple queries to be executed as a single atomic operation.
When using batch operations, consider using batchType LoggedBatch (i.e., 0) for atomicity, or UnloggedBatch (i.e., 1) for improved performance where atomicity isn't required. This approach provides a way to maintain data consistency during complex migrations.
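The batch flow above can be sketched end to end. The interface below is a local copy of a subset of the migration interface listed earlier so the example runs without a cluster; the fake implementation, table name, and seed values are all hypothetical:

```go
package main

import "fmt"

// Cassandra is a local copy of a subset of the migration interface shown
// above, so this sketch is self-contained; in a real app GoFr supplies
// the datasource.
type Cassandra interface {
	NewBatch(name string, batchType int) error
	BatchQuery(name, stmt string, values ...any) error
	ExecuteBatch(name string) error
}

// Batch types as described above: LoggedBatch (0) is atomic,
// UnloggedBatch (1) trades atomicity for performance.
const (
	LoggedBatch   = 0
	UnloggedBatch = 1
)

// migrate groups two statements into one logged batch so they apply
// atomically; the table and values are hypothetical seed data.
func migrate(c Cassandra) error {
	if err := c.NewBatch("seed", LoggedBatch); err != nil {
		return err
	}
	if err := c.BatchQuery("seed",
		"INSERT INTO users (id, name) VALUES (?, ?)", 1, "alice"); err != nil {
		return err
	}
	if err := c.BatchQuery("seed",
		"INSERT INTO users (id, name) VALUES (?, ?)", 2, "bob"); err != nil {
		return err
	}
	return c.ExecuteBatch("seed")
}

// fakeCassandra records queued statements so the example runs anywhere.
type fakeCassandra struct{ batches map[string][]string }

func (f *fakeCassandra) NewBatch(name string, _ int) error { f.batches[name] = nil; return nil }
func (f *fakeCassandra) BatchQuery(name, stmt string, _ ...any) error {
	f.batches[name] = append(f.batches[name], stmt)
	return nil
}
func (f *fakeCassandra) ExecuteBatch(name string) error { return nil }

func main() {
	fake := &fakeCassandra{batches: map[string][]string{}}
	if err := migrate(fake); err != nil {
		panic(err)
	}
	fmt.Println(len(fake.batches["seed"])) // 2: both statements queued in one atomic batch
}
```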
- Added mocks for Metrics
  MockContainer can be used to set expectations for metrics in the application while writing tests.
  Usage:

// GoFr's mockContainer
_, mock := NewMockContainer(t)

// Set mock expectations using the mocks from NewMockContainer
mock.Metrics.EXPECT().IncrementCounter(context.Background(), "name")

// Call to your function where metrics has to be mocked
...
v1.20.0
Release v1.20.0
✨ Features
- Support for Solr
  Solr can now be used as a datasource. To add Solr, use the AddSolr(cfg solr.Config) method of gofrApp.
  Refer to the documentation for detailed info.
  Supported functionalities are:

Search(ctx context.Context, collection string, params map[string]any) (any, error)
Create(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error)
Update(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error)
Delete(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error)
Retrieve(ctx context.Context, collection string, params map[string]any) (any, error)
ListFields(ctx context.Context, collection string, params map[string]any) (any, error)
AddField(ctx context.Context, collection string, document *bytes.Buffer) (any, error)
UpdateField(ctx context.Context, collection string, document *bytes.Buffer) (any, error)
DeleteField(ctx context.Context, collection string, document *bytes.Buffer) (any, error)
🛠 Enhancements
- Added mocks for HTTP Service
  Mocks to test GoFr's HTTP client previously had to be generated manually. Now, mocks for the HTTP service have been added to GoFr's MockContainer.
  Usage:

// register HTTP services to be mocked
httpservices := []string{"cat-facts", "cat-facts1", "cat-facts2"}

// pass the httpservices in NewMockContainer
_, mock := NewMockContainer(t, WithMockHTTPService(httpservices...))

// Set mock expectations using the mocks from NewMockContainer
mock.HTTPService.EXPECT().Get(context.Background(), "fact", map[string]interface{}{
    "max_length": 20,
}).Return(result, nil)

// Call to your function where HTTPService has to be mocked
...