Serving SRV targets and getting all nodes in single request #835

Merged · 39 commits · Oct 4, 2024

Commits
8e38fe4
overall handler + label wrapper
adnull Apr 3, 2024
6d6094c
srv seed list + splitCluster option
adnull Apr 3, 2024
1ca1f5f
added request opts to overall handler
adnull Apr 3, 2024
7bd64f1
added splitcluster+srv tests
adnull Apr 3, 2024
0d121cc
added overall handler test
adnull Apr 3, 2024
3693794
readme update
adnull Apr 3, 2024
af94725
Merge branch 'main' into host-labels
adnull Apr 3, 2024
7bc5aa8
fixed most linters
adnull Apr 3, 2024
3362493
liners
adnull Apr 3, 2024
cf051cc
added licence header
adnull Apr 4, 2024
0187a77
Merge branch 'main' into host-labels
adnull Apr 15, 2024
9d405eb
Merge branch 'main' into host-labels
JiriCtvrtka Apr 18, 2024
ae4704a
Merge branch 'main' into host-labels
adnull Apr 25, 2024
64c263f
updated sync package
adnull Apr 25, 2024
9dc8456
formatted
adnull Apr 25, 2024
bf1c60b
Merge branch 'main' into host-labels
adnull May 21, 2024
998624e
Merge branch 'main' into host-labels
BupycHuk May 30, 2024
8a02ebe
Update v1_compatibility_test.go
BupycHuk May 30, 2024
e9eb2ec
Merge branch 'main' into host-labels
BupycHuk May 31, 2024
def470f
Merge branch 'main' into host-labels
adnull Jul 29, 2024
90b843f
Merge branch 'main' into host-labels
adnull Jul 30, 2024
fcf57eb
fixes due to idoqo review
adnull Aug 3, 2024
28c22ee
overall handler + label wrapper
adnull Apr 3, 2024
1c775b1
srv seed list + splitCluster option
adnull Apr 3, 2024
0b059b1
added request opts to overall handler
adnull Apr 3, 2024
59c9b73
added splitcluster+srv tests
adnull Apr 3, 2024
9fe2c7b
added overall handler test
adnull Apr 3, 2024
b2a4502
readme update
adnull Apr 3, 2024
679755b
fixed most linters
adnull Apr 3, 2024
be87952
liners
adnull Apr 3, 2024
7c4b9a1
added licence header
adnull Apr 4, 2024
55d00bf
fixes due to idoqo review
adnull Aug 3, 2024
95ff992
Merge branch 'host-labels' of github.com:adnull/mongodb_exporter_mt i…
adnull Aug 30, 2024
b236ccb
fix redeclaration
adnull Aug 30, 2024
e332f24
Merge branch 'main' into host-labels
BupycHuk Sep 19, 2024
c28e054
missing collector.pbm
adnull Sep 20, 2024
a0d9d8f
fix prev merge
adnull Sep 20, 2024
cad1d49
Merge branch 'main' into host-labels
adnull Sep 28, 2024
74977e5
fix enable fcv
adnull Oct 4, 2024
13 changes: 13 additions & 0 deletions README.md
@@ -104,6 +104,19 @@ If your URI is prefixed by mongodb:// or mongodb+srv:// schema, any host not pre
--mongodb.uri=mongodb+srv://user:pass@host1:27017,host2:27017,host3:27017/admin,mongodb://user2:pass2@host4:27018/admin
```

You can use the `--split-cluster` option to split all cluster nodes into separate targets. This mode is useful when the cluster nodes are defined as SRV records and mongodb_exporter is started with a mongodb+srv URI. In that case the SRV records are resolved when mongodb_exporter starts, and each cluster node can then be queried via the **target** parameter of the multi-target endpoint.
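
For example, a minimal sketch with a hypothetical SRV domain (the hostnames depend on your DNS records; the :9216 port and the /scrape multi-target path assume the defaults):
```
mongodb_exporter --mongodb.uri="mongodb+srv://user:pass@cluster.example.com/admin" --split-cluster=true

# each node resolved from the SRV records can then be scraped individually:
curl "http://localhost:9216/scrape?target=node1.example.com:27017"
```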

#### Overall targets request endpoint

There is an overall targets endpoint **/scrapeall** that queries all the targets in one request. It can be used to collect metrics from multiple nodes without making a separate request per target. Each node's metrics carry an **instance** label containing the node name as a host:port pair (or just the host if no port was specified). For example, for mongodb_exporter running with the options:
```
--mongodb.uri="mongodb://host1:27015,host2:27016" --split-cluster=true
```
we get metrics like this:
```
mongodb_up{instance="host1:27015"} 1
mongodb_up{instance="host2:27016"} 1
```
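
All targets can then be collected in one request (the port assumes the default `--web.listen-address`):
```
curl http://localhost:9216/scrapeall
```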

#### Enabling collstats metrics gathering
`--mongodb.collstats-colls` receives a list of databases and collections to monitor using collstats.
1 change: 1 addition & 0 deletions REFERENCE.md
@@ -11,6 +11,7 @@
|--[no-]mongodb.direct-connect|Whether or not a direct connect should be made. Direct connections are not valid if multiple hosts are specified or an SRV URI is used||
|--[no-]mongodb.global-conn-pool|Use global connection pool instead of creating new pool for each http request||
|--mongodb.uri|MongoDB connection URI ($MONGODB_URI)|--mongodb.uri=mongodb://user:[email protected]:27017/admin?ssl=true|
|--split-cluster|Whether to treat cluster members from the connection URI as separate targets||
|--web.listen-address|Address to listen on for web interface and telemetry|--web.listen-address=":9216"|
|--web.telemetry-path|Metrics expose path|--web.telemetry-path="/metrics"|
|--web.config|Path to the file having Prometheus TLS config for basic auth|--web.config=STRING|
70 changes: 38 additions & 32 deletions exporter/exporter.go
@@ -76,7 +76,8 @@
IndexStatsCollections []string
Logger *logrus.Logger

URI string
URI string
NodeName string
}

var (
@@ -272,7 +273,7 @@
func (e *Exporter) Handler() http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
seconds, err := strconv.Atoi(r.Header.Get("X-Prometheus-Scrape-Timeout-Seconds"))
// To support also older ones vmagents.
// To support older vmagent versions as well.
if err != nil {
seconds = 10
}
@@ -282,36 +283,7 @@
ctx, cancel := context.WithTimeout(r.Context(), time.Duration(seconds)*time.Second)
defer cancel()

filters := r.URL.Query()["collect[]"]

requestOpts := Opts{}

if len(filters) == 0 {
requestOpts = *e.opts
}

for _, filter := range filters {
switch filter {
case "diagnosticdata":
requestOpts.EnableDiagnosticData = true
case "replicasetstatus":
requestOpts.EnableReplicasetStatus = true
case "dbstats":
requestOpts.EnableDBStats = true
case "topmetrics":
requestOpts.EnableTopMetrics = true
case "currentopmetrics":
requestOpts.EnableCurrentopMetrics = true
case "indexstats":
requestOpts.EnableIndexStats = true
case "collstats":
requestOpts.EnableCollStats = true
case "profile":
requestOpts.EnableProfile = true
case "shards":
requestOpts.EnableShards = true
}
}
requestOpts := GetRequestOpts(r.URL.Query()["collect[]"], e.opts)

client, err = e.getClient(ctx)
if err != nil {
@@ -364,6 +336,40 @@
})
}

// GetRequestOpts builds an exporter.Opts structure from the request's collect[] filters and the default options.
func GetRequestOpts(filters []string, defaultOpts *Opts) Opts {

[GitHub Actions / Lint Check, exporter/exporter.go:340] calculated cyclomatic complexity for function GetRequestOpts is 12, max is 10 (cyclop)
requestOpts := Opts{}

[GitHub Actions / Lint Check, exporter/exporter.go:341] exporter.Opts is missing fields CollStatsNamespaces, CollStatsLimit, CompatibleMode, DirectConnect, ConnectTimeoutMS, DisableDefaultRegistry, DiscoveringMode, GlobalConnPool, ProfileTimeTS, TimeoutOffset, CurrentOpSlowTime, CollectAll, EnableDBStats, EnableDBStatsFreeStorage, EnableDiagnosticData, EnableReplicasetStatus, EnableCurrentopMetrics, EnableTopMetrics, EnableIndexStats, EnableCollStats, EnableProfile, EnableShards, EnableOverrideDescendingIndex, IndexStatsCollections, Logger, URI, NodeName (exhaustruct)

if len(filters) == 0 {
requestOpts = *defaultOpts
}

for _, filter := range filters {
switch filter {
case "diagnosticdata":
requestOpts.EnableDiagnosticData = true
case "replicasetstatus":
requestOpts.EnableReplicasetStatus = true
case "dbstats":
requestOpts.EnableDBStats = true
case "topmetrics":
requestOpts.EnableTopMetrics = true
case "currentopmetrics":
requestOpts.EnableCurrentopMetrics = true
case "indexstats":
requestOpts.EnableIndexStats = true
case "collstats":
requestOpts.EnableCollStats = true
case "profile":
requestOpts.EnableProfile = true
case "shards":
requestOpts.EnableShards = true
}
}

return requestOpts
}
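
As a usage sketch (not part of this diff; the function and variable names are illustrative and assume the code sits in the exporter package), the helper either returns the default options untouched or enables only the requested collectors:
```
// exampleGetRequestOpts is an illustration only; it is not part of the PR.
func exampleGetRequestOpts() {
	defaults := &Opts{EnableDiagnosticData: true, EnableReplicasetStatus: true}

	// No collect[] filters: the full default options are returned unchanged.
	all := GetRequestOpts(nil, defaults)
	_ = all.EnableDiagnosticData // true

	// Explicit filters: only the requested collectors are enabled.
	some := GetRequestOpts([]string{"dbstats", "indexstats"}, defaults)
	_ = some.EnableDBStats        // true
	_ = some.EnableDiagnosticData // false
}
```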

func connect(ctx context.Context, opts *Opts) (*mongo.Client, error) {
clientOpts, err := dsn_fix.ClientOptionsForDSN(opts.URI)
if err != nil {
59 changes: 59 additions & 0 deletions exporter/gatherer_wrapper.go
@@ -0,0 +1,59 @@
// mongodb_exporter
// Copyright (C) 2017 Percona LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package exporter

import (
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
io_prometheus_client "github.com/prometheus/client_model/go"
)

// GathererWrapped is a wrapper for prometheus.Gatherer that adds labels to all metrics.
type GathererWrapped struct {
originalGatherer prometheus.Gatherer
labels prometheus.Labels
}

// NewGathererWrapper creates a new GathererWrapped with the given Gatherer and additional labels.
func NewGathererWrapper(gs prometheus.Gatherer, labels prometheus.Labels) *GathererWrapped {
return &GathererWrapped{
originalGatherer: gs,
labels: labels,
}
}

// Gather implements prometheus.Gatherer interface.
func (g *GathererWrapped) Gather() ([]*io_prometheus_client.MetricFamily, error) {
metrics, err := g.originalGatherer.Gather()
if err != nil {
return nil, errors.Wrap(err, "failed to gather metrics")
}

for _, metric := range metrics {
for _, m := range metric.GetMetric() {
for k, v := range g.labels {
v := v

[GitHub Actions / Lint Check, exporter/gatherer_wrapper.go:48] The copy of the 'for' variable "v" can be deleted (Go 1.22+) (copyloopvar)
k := k

[GitHub Actions / Lint Check, exporter/gatherer_wrapper.go:49] The copy of the 'for' variable "k" can be deleted (Go 1.22+) (copyloopvar)
m.Label = append(m.Label, &io_prometheus_client.LabelPair{
Name: &k,
Value: &v,
})
}
}
}

return metrics, nil
}
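
A minimal usage sketch (not part of the diff) of how the wrapper decorates a gatherer so that every exposed metric carries an extra label; the registry, the label value, and the promhttp wiring here are illustrative:
```
package exporter

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// exampleGathererWrapper is an illustration only; it is not part of the PR.
func exampleGathererWrapper() {
	reg := prometheus.NewRegistry()

	// Every metric gathered from reg is exposed with instance="host1:27015".
	wrapped := NewGathererWrapper(reg, prometheus.Labels{"instance": "host1:27015"})

	http.Handle("/metrics", promhttp.HandlerFor(wrapped, promhttp.HandlerOpts{}))
}
```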
62 changes: 62 additions & 0 deletions exporter/multi_target_test.go
@@ -17,7 +17,11 @@ package exporter

import (
"fmt"
"io"
"net"
"net/http"
"net/http/httptest"
"regexp"
"testing"

"github.com/sirupsen/logrus"
@@ -70,3 +74,61 @@ func TestMultiTarget(t *testing.T) {
assert.HTTPBodyContains(t, multiTargetHandler(serverMap), "GET", fmt.Sprintf("?target=%s", opt.URI), nil, expected[sn])
}
}

func TestOverallHandler(t *testing.T) {
t.Parallel()

opts := []*Opts{
{
NodeName: "standalone",
URI: fmt.Sprintf("mongodb://127.0.0.1:%s", tu.GetenvDefault("TEST_MONGODB_STANDALONE_PORT", "27017")),
DirectConnect: true,
ConnectTimeoutMS: 1000,
},
{
NodeName: "s1",
URI: fmt.Sprintf("mongodb://127.0.0.1:%s", tu.GetenvDefault("TEST_MONGODB_S1_PRIMARY_PORT", "17001")),
DirectConnect: true,
ConnectTimeoutMS: 1000,
},
{
NodeName: "s2",
URI: fmt.Sprintf("mongodb://127.0.0.1:%s", tu.GetenvDefault("TEST_MONGODB_S2_PRIMARY_PORT", "17004")),
DirectConnect: true,
ConnectTimeoutMS: 1000,
},
{
NodeName: "s3",
URI: "mongodb://127.0.0.1:12345",
DirectConnect: true,
ConnectTimeoutMS: 1000,
},
}
expected := []*regexp.Regexp{
regexp.MustCompile(`mongodb_up{[^\}]*instance="standalone"[^\}]*} 1\n`),
regexp.MustCompile(`mongodb_up{[^\}]*instance="s1"[^\}]*} 1\n`),
regexp.MustCompile(`mongodb_up{[^\}]*instance="s2"[^\}]*} 1\n`),
regexp.MustCompile(`mongodb_up{[^\}]*instance="s3"[^\}]*} 0\n`),
}
exporters := make([]*Exporter, len(opts))

logger := logrus.New()

for i, opt := range opts {
exporters[i] = New(opt)
}

rr := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodGet, "/", nil)
OverallTargetsHandler(exporters, logger)(rr, req)
res := rr.Result()
resBody, _ := io.ReadAll(res.Body)
err := res.Body.Close()
assert.NoError(t, err)

assert.Equal(t, http.StatusOK, res.StatusCode)

for _, expected := range expected {
assert.Regexp(t, expected, string(resBody))
}
}
88 changes: 88 additions & 0 deletions exporter/seedlist.go
@@ -0,0 +1,88 @@
// mongodb_exporter
// Copyright (C) 2017 Percona LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package exporter

import (
"net"
"net/url"
"strconv"
"strings"

"github.com/sirupsen/logrus"
)

// GetSeedListFromSRV converts a mongodb+srv URI to a flat connection string.
func GetSeedListFromSRV(uri string, log *logrus.Logger) string {

[GitHub Actions / Lint Check, exporter/seedlist.go:28] calculated cyclomatic complexity for function GetSeedListFromSRV is 13, max is 10 (cyclop)
uriParsed, err := url.Parse(uri)
if err != nil {
log.Fatalf("Failed to parse URI %s: %v", uri, err)
}

cname, srvRecords, err := net.LookupSRV("mongodb", "tcp", uriParsed.Hostname())
if err != nil {
log.Errorf("Failed to lookup SRV records for %s: %v", uri, err)
return uri
}

if len(srvRecords) == 0 {
log.Errorf("No SRV records found for %s", uri)
return uri
}

queryString := uriParsed.RawQuery

txtRecords, err := net.LookupTXT(uriParsed.Hostname())
if err != nil {
log.Errorf("Failed to lookup TXT records for %s: %v", cname, err)
}
if len(txtRecords) > 1 {
log.Errorf("Multiple TXT records found for %s, thus were not applied", cname)
}
if len(txtRecords) == 1 {
// We take connection parameters from the TXT record
uriParams, err := url.ParseQuery(txtRecords[0])
if err != nil {
log.Errorf("Failed to parse TXT record %s: %v", txtRecords[0], err)
} else {
// Override connection parameters with ones from URI query string
for p, v := range uriParsed.Query() {
uriParams[p] = v
}
queryString = uriParams.Encode()
}
}

// Build final connection URI
servers := make([]string, len(srvRecords))
for i, srv := range srvRecords {
servers[i] = net.JoinHostPort(strings.TrimSuffix(srv.Target, "."), strconv.FormatUint(uint64(srv.Port), 10))
}
uri = "mongodb://"
if uriParsed.User != nil {
uri += uriParsed.User.String() + "@"
}
uri += strings.Join(servers, ",")
if uriParsed.Path != "" {
uri += uriParsed.Path
} else {
uri += "/"
}
if queryString != "" {
uri += "?" + queryString
}

return uri
}
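
As an illustration (not part of the diff), the helper resolves a mongodb+srv URI into a flat mongodb:// seed list; the domain and hostnames below are hypothetical, and the real output depends on the SRV and TXT records published in DNS:
```
package exporter

import "github.com/sirupsen/logrus"

// exampleSeedList is an illustration only; it is not part of the PR.
// With SRV records pointing to node1/node2 and a TXT record of
// "replicaSet=rs0", the input below would resolve to something like
// mongodb://user:pass@node1.example.com:27017,node2.example.com:27017/admin?replicaSet=rs0
func exampleSeedList() string {
	log := logrus.New()
	return GetSeedListFromSRV("mongodb+srv://user:pass@cluster.example.com/admin", log)
}
```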