Compare commits

..

113 Commits

Author SHA1 Message Date
Ben Reedy
d9f4264fc4 Merge pull request #898 from breed808/github_actions
Migrate CI/CD to GitHub Actions
2022-01-02 19:15:40 +10:00
Ben Reedy
27ceeecff3 Merge pull request #902 from breed808/textfile
Move textfile mtime metric from loop
2022-01-02 08:32:08 +10:00
Ben Reedy
1ba5835af6 Move textfile mtime metric from loop
Loop was erroneously creating duplicate `windows_textfile_mtime_seconds`
metrics, causing the exporter to return an HTTP 500 error and no metrics
from any collector.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2022-01-01 11:48:19 +10:00
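For reference, the failure mode here is the Prometheus client's duplicate-series check: emitting the same descriptor with the same label values more than once in a single scrape makes the handler reject the whole scrape with HTTP 500. A minimal sketch of the bug shape and the fix, with illustrative names rather than the exporter's actual code:
```
package collector

import "github.com/prometheus/client_golang/prometheus"

// Stand-in for the exporter's windows_textfile_mtime_seconds descriptor;
// the "file" label is assumed here for illustration.
var mtimeDesc = prometheus.NewDesc(
	"windows_textfile_mtime_seconds",
	"Unix mtime of successfully read textfiles.",
	[]string{"file"}, nil,
)

// Buggy shape: the mtime metric is emitted inside a loop over something
// unrelated (here, the parsed metric families), so every (descriptor,
// label) pair is sent multiple times per scrape and the registry rejects it.
func emitMtimesBuggy(ch chan<- prometheus.Metric, families []string, mtimes map[string]float64) {
	for range families {
		for file, mtime := range mtimes {
			ch <- prometheus.MustNewConstMetric(mtimeDesc, prometheus.GaugeValue, mtime, file)
		}
	}
}

// Fixed shape: emit each file's mtime exactly once, outside that loop.
func emitMtimes(ch chan<- prometheus.Metric, mtimes map[string]float64) {
	for file, mtime := range mtimes {
		ch <- prometheus.MustNewConstMetric(mtimeDesc, prometheus.GaugeValue, mtime, file)
	}
}
```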
Ben Reedy
0db956aa4d Migrate CI/CD to GitHub Actions
Signed-off-by: Ben Reedy <breed808@breed808.com>
2022-01-01 10:04:33 +10:00
Ben Reedy
b6f88cbbdd Use pwsh to run e2e-test target
PowerShell >= 5 is required for the `New-Guid` command in the e2e script.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-12-30 20:49:46 +10:00
Calle Pettersson
4b9b9e97cb Merge pull request #893 from prometheus-community/new-appveyor-token
Update CI token
2021-12-28 22:00:26 +01:00
Calle Pettersson
3ebe0e937e Update CI token
Signed-off-by: Calle Pettersson <calle@cape.nu>
2021-12-28 21:44:22 +01:00
Ben Reedy
4d771d2bce Merge pull request #892 from mjtrangoni/fix-golanci-lint
Fix and update golangci-lint reported issues
2021-12-25 10:34:02 +10:00
Mario Trangoni
919f90a571 golangci-lint: Acknowledge all remaining checks and update golangci-lint to v1.43.0
Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-24 11:19:05 +01:00
Ben Reedy
c7d07a37ea Merge pull request #883 from breed808/msi_listen_port
Remove explicit LISTEN_PORT from MSI installer
2021-12-19 08:30:21 +10:00
Ben Reedy
87c21bfa50 Merge pull request #891 from breed808/update_perflib
Update Perflib dependency
2021-12-19 08:27:14 +10:00
Mario Trangoni
df4f6b206b revive: make type exportable and remove unnecessary log word
See,
```
log/gokit_adapter.go:9:26: unexported-return: exported func NewToolkitAdapter returns unexported type *log.logAdapter, which can be annoying to use (revive)
func NewToolkitAdapter() *logAdapter {
                         ^
```

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-18 19:54:31 +01:00
Mario Trangoni
9e3c585a28 revive: Remove unnecessary = 0 from var declaration.
See,
```
$ GOOS=windows GOARCH=amd64 golangci-lint run ./... 2>&1 | grep var-declaration
collector/os.go:205:22: var-declaration: should drop = 0 from declaration of var fsipf; it is the zero value (revive)
collector/os.go:226:23: var-declaration: should drop = 0 from declaration of var pfbRaw; it is the zero value (revive)
```

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-18 19:30:47 +01:00
Mario Trangoni
e4a43c539b codespell: Fix word spelling issues
See,
```
$ codespell --skip=".git,./vendor" --ignore-words-list=calle
./exporter.go:262: overriden ==> overridden
./collector/dfsr.go:132: receieved ==> received
./collector/dns.go:140: reponses ==> responses
./collector/exchange.go:238: occational ==> occasional
./collector/mssql.go:1961: shoud ==> should
./collector/process.go:137: sharable ==> shareable
./collector/remote_fx.go:64: seccond ==> second
./docs/collector.dfsr.md:47: fils ==> fills, files, file
./docs/collector.exchange.md:39: lengt ==> length
./docs/collector.fsrmquota.md:3: Ressource ==> Resource
./docs/collector.fsrmquota.md:51: Ressource ==> Resource
./docs/collector.os.md:20: sotred ==> sorted, stored
./docs/collector.process.md:56: sharable ==> shareable
./docs/collector.smtp.md:27: mailformed ==> malformed
```

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-18 19:19:06 +01:00
Mario Trangoni
03e15a0f80 unconvert: Remove unnecessary conversion
See,
```
collector/os.go:306:10: unnecessary conversion (unconvert)
		float64(fsipf),
		       ^
```

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-18 19:05:31 +01:00
Mario Trangoni
b98a956d51 gofmt: Fix File is not gofmt-ed with -s for go1.17
Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-18 19:01:29 +01:00
Calle Pettersson
524bfde5a3 Merge pull request #887 from SouenMazouin/fix/request-error-total-iis
fix: add missing metrics for IIS version >= 8
2021-12-18 15:28:17 +01:00
Ben Reedy
963cee0a13 Update Perflib dependency
Dependabot has likely passed over this as there has been no tagged
release.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-12-18 19:31:08 +10:00
Ben Reedy
45e9357ad9 Remove explicit LISTEN_PORT from MSI installer
Explicitly setting the listening port in the service definition causes the port
setting in the configuration file to be ignored.

The exporter already defines a default port (9182) if one is not specified,
so no impact from this change is anticipated.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-12-18 18:34:47 +10:00
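If a fixed port is still wanted, it can be passed to the installer as an MSI property instead of being baked into the service definition; the command below is only illustrative (MSI filename and port are placeholders):
```
msiexec /i windows_exporter-<version>-amd64.msi LISTEN_PORT=9182
```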
Souen Mazouin
6120ea9be1 fix: add missing metrics for IIS version >= 8
Allows the following metrics to be exposed again; they had disappeared after the migration to perflib:
- worker_request_errors_total
- worker_current_websocket_requests
- worker_websocket_connection_accepted_total
- worker_websocket_connection_rejected_total

Signed-off-by: Souen Mazouin <souen.mazouin@cdiscount.com>
2021-12-14 17:44:08 +01:00
Ben Reedy
376060b053 Merge pull request #884 from prometheus-community/dependabot/go_modules/github.com/prometheus/exporter-toolkit-0.7.1
Bump github.com/prometheus/exporter-toolkit from 0.7.0 to 0.7.1
2021-12-14 10:45:31 +10:00
dependabot[bot]
e04c4aab29 Bump github.com/prometheus/exporter-toolkit from 0.7.0 to 0.7.1
Bumps [github.com/prometheus/exporter-toolkit](https://github.com/prometheus/exporter-toolkit) from 0.7.0 to 0.7.1.
- [Release notes](https://github.com/prometheus/exporter-toolkit/releases)
- [Changelog](https://github.com/prometheus/exporter-toolkit/blob/master/CHANGELOG.md)
- [Commits](https://github.com/prometheus/exporter-toolkit/compare/v0.7.0...v0.7.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/exporter-toolkit
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-12-06 11:35:00 +00:00
Ben Reedy
479e6b1381 Merge pull request #882 from geraudster/fix/textfile_default_path
Fix default path for textfile collector
2021-12-02 13:13:13 +10:00
Géraud Duge de bernonville
f6f7dc96e9 Get EXE directory
Signed-off-by: Géraud Duge de bernonville <geraud.dugedebernonville@ext.cdiscount.com>
2021-12-01 10:41:46 +01:00
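The usual way to make such a default relative to the binary rather than to the service's working directory is os.Executable plus filepath.Dir; a sketch under that assumption, with an assumed textfile_inputs directory name:
```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// exeRelative resolves a path against the directory holding the running
// binary. Illustrative sketch; the exporter's actual helper may differ.
func exeRelative(rel string) (string, error) {
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	return filepath.Join(filepath.Dir(exe), rel), nil
}

func main() {
	dir, err := exeRelative("textfile_inputs")
	if err != nil {
		panic(err)
	}
	fmt.Println(dir) // e.g. C:\Program Files\windows_exporter\textfile_inputs
}
```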
Ben Reedy
f84f54afda Merge pull request #875 from prometheus-community/dependabot/go_modules/github.com/Microsoft/hcsshim-0.9.1
Bump github.com/Microsoft/hcsshim from 0.8.6 to 0.9.1
2021-11-15 08:27:59 +10:00
dependabot[bot]
e22ef6e3cc Bump github.com/Microsoft/hcsshim from 0.8.6 to 0.9.1
Bumps [github.com/Microsoft/hcsshim](https://github.com/Microsoft/hcsshim) from 0.8.6 to 0.9.1.
- [Release notes](https://github.com/Microsoft/hcsshim/releases)
- [Commits](https://github.com/Microsoft/hcsshim/compare/v0.8.6...v0.9.1)

---
updated-dependencies:
- dependency-name: github.com/Microsoft/hcsshim
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-14 21:57:35 +00:00
Ben Reedy
02b69afe8b Merge pull request #874 from prometheus-community/dependabot/go_modules/github.com/sirupsen/logrus-1.8.1
Bump github.com/sirupsen/logrus from 1.6.0 to 1.8.1
2021-11-15 07:42:52 +10:00
dependabot[bot]
b7a0a09e58 Bump github.com/sirupsen/logrus from 1.6.0 to 1.8.1
Bumps [github.com/sirupsen/logrus](https://github.com/sirupsen/logrus) from 1.6.0 to 1.8.1.
- [Release notes](https://github.com/sirupsen/logrus/releases)
- [Changelog](https://github.com/sirupsen/logrus/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sirupsen/logrus/compare/v1.6.0...v1.8.1)

---
updated-dependencies:
- dependency-name: github.com/sirupsen/logrus
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-14 21:29:14 +00:00
Ben Reedy
6105792f29 Merge pull request #876 from prometheus-community/dependabot/go_modules/github.com/dimchansky/utfbom-1.1.1
Bump github.com/dimchansky/utfbom from 1.1.0 to 1.1.1
2021-11-15 07:23:25 +10:00
Ben Reedy
1fbc626ee2 Merge pull request #873 from prometheus-community/dependabot/go_modules/github.com/prometheus/common-0.32.1
Bump github.com/prometheus/common from 0.32.0 to 0.32.1
2021-11-15 07:21:13 +10:00
dependabot[bot]
ca07abc1cd Bump github.com/dimchansky/utfbom from 1.1.0 to 1.1.1
Bumps [github.com/dimchansky/utfbom](https://github.com/dimchansky/utfbom) from 1.1.0 to 1.1.1.
- [Release notes](https://github.com/dimchansky/utfbom/releases)
- [Commits](https://github.com/dimchansky/utfbom/compare/v1.1.0...v1.1.1)

---
updated-dependencies:
- dependency-name: github.com/dimchansky/utfbom
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-14 11:50:42 +00:00
dependabot[bot]
60583c3366 Bump github.com/prometheus/common from 0.32.0 to 0.32.1
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.32.0 to 0.32.1.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.32.0...v0.32.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-14 11:42:09 +00:00
Ben Reedy
a7dcf5896c Merge pull request #871 from breed808/dependabot
Add Dependabot dependency tracking
2021-11-14 21:38:36 +10:00
Ben Reedy
438cb87fc7 Add Dependabot dependency tracking
Bot will submit PRs when new dependency versions are detected,
preventing dependencies from becoming out-of-date.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-11-14 21:34:26 +10:00
Ben Reedy
f8b6260ab5 Merge pull request #862 from breed808/dependencies
Update dependencies
2021-11-14 11:11:43 +10:00
Calle Pettersson
d2b3f0f94b Merge pull request #869 from rnjstjdgh/master
Update collector.net.md
2021-11-11 09:14:54 +01:00
rnjstjdgh
d6b4466bc3 Update collector.net.md
Signed-off-by: rnjstjdgh <gshgsh0831@gmail.com>
2021-11-11 14:52:32 +09:00
Calle Pettersson
ce3d517cb3 Merge pull request #863 from jsturtevant/fix-service-identification
use IsWindowsService to detect if running as service
2021-11-05 18:47:18 +01:00
James Sturtevant
a6ea021468 use IsWindowsService to detect if running as service
Signed-off-by: James Sturtevant <jstur@microsoft.com>
2021-11-05 10:15:39 -07:00
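A minimal sketch of that detection call, using golang.org/x/sys/windows/svc; the surrounding main is illustrative:
```
package main

import (
	"log"

	"golang.org/x/sys/windows/svc"
)

func main() {
	// IsWindowsService asks the OS whether this process was started by the
	// service control manager, replacing the old IsAnInteractiveSession heuristic.
	isService, err := svc.IsWindowsService()
	if err != nil {
		log.Fatalf("failed to detect service context: %v", err)
	}
	if isService {
		log.Print("running as a Windows service")
		// the real exporter would hand control to the service handler here
	} else {
		log.Print("running interactively")
	}
}
```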
Ben Reedy
b58dfdf4f3 Update perflib_exporter dependency
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-11-05 18:30:03 +10:00
Ben Reedy
676eb55f99 Update Prometheus dependencies
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-11-05 18:30:01 +10:00
Ben Reedy
121d9980c1 Replace go-kit/kit with go-kit/log
The log package has been extracted into a separate external repository and
module.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-11-05 18:29:59 +10:00
Calle Pettersson
947d8473e0 Merge pull request #861 from prometheus-community/maintainers-contacts
Update MAINTAINERS with security contacts
2021-10-29 10:36:43 +02:00
Calle Pettersson
c1569686f7 Update MAINTAINERS with security contacts
Signed-off-by: Calle Pettersson <calle@cape.nu>
2021-10-27 20:46:46 +02:00
Ben Reedy
75966fd37c Merge pull request #848 from ArtamonovEvgenii/master
Set relative default path for textfile collector
2021-10-23 14:27:00 +10:00
eartamonov
d0cfc14af9 Set relative default path for textfile collector
Signed-off-by: Artamonov Evgenii <evgenyi.artamonov@gmail.com>
2021-10-19 14:23:11 +03:00
Ben Reedy
941b66d342 Merge pull request #846 from JDA88/patch-1
Document expected delays in the size metrics
2021-10-01 08:13:58 +10:00
Ben Reedy
388195be97 Update e2e output to match help text changes
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-10-01 08:09:03 +10:00
JDA88
bbefd8ac97 Document expected delays in the size metrics
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-10-01 07:58:04 +10:00
Ben Reedy
5b92e1bd3d Merge pull request #841 from breed808/thermal_empty
Thermalzone: return error on empty result
2021-10-01 05:45:09 +10:00
Dave Owen
82f17fd607 Collect IIS metrics using Perflib (#832)
Rewrite IIS collector to use Perflib

Signed-off-by: David Owen <dowen@meddbase.com>
2021-09-25 17:00:39 +02:00
Ben Reedy
3e37b7b6f0 Merge pull request #840 from newrelic-forks/fix_service_memory_leak
Service API collection: close service handler to avoid memory leak
2021-09-25 18:22:21 +10:00
Ben Reedy
5d29ff6497 Thermalzone: return error on empty result
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-09-25 15:35:45 +10:00
Alvaro Cabanas
f4f5aaf146 Service API collection: close service handler to avoid memory leak
Signed-off-by: Alvaro Cabanas <acabanas@newrelic.com>
2021-09-23 17:45:31 +02:00
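The underlying rule is that every handle obtained from the service control manager must be released again; a sketch of the pattern with golang.org/x/sys/windows/svc/mgr (the helper name is illustrative):
```
package main

import (
	"fmt"

	"golang.org/x/sys/windows/svc/mgr"
)

// queryServiceState opens a service, reads its state, and guarantees both
// the SCM handle and the service handle are closed again.
func queryServiceState(name string) (uint32, error) {
	m, err := mgr.Connect()
	if err != nil {
		return 0, err
	}
	defer m.Disconnect() // release the SCM handle

	s, err := m.OpenService(name)
	if err != nil {
		return 0, err
	}
	defer s.Close() // omitting this leaks a handle on every scrape

	status, err := s.Query()
	if err != nil {
		return 0, err
	}
	return uint32(status.State), nil
}

func main() {
	state, err := queryServiceState("Spooler")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("service state:", state)
}
```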
Ben Reedy
5931604b58 Merge pull request #812 from carlossscastro/master
Services collection using API (no WMI)
2021-08-26 08:26:07 +10:00
Carlos Castro
67ca5e5ef2 Update service.go
Signed-off-by: Carlos Castro <ccastro@newrelic.com>
2021-08-25 17:19:41 +01:00
Carlos Castro
384183120f Update service.go
Signed-off-by: Carlos Castro <ccastro@newrelic.com>
2021-08-25 17:19:41 +01:00
Carlos Castro
a9ac2d4672 Collect services using windows api
Signed-off-by: Carlos Castro <ccastro@newrelic.com>
2021-08-25 17:19:41 +01:00
Benjamin Blattberg
1b96bb6d08 Add MSSQL Wait Statistics (#793)
Signed-off-by: benjaminjb <benjamin.blattberg@gmail.com>
2021-06-29 21:32:08 +02:00
Ben Reedy
cc45eeb90b Merge pull request #809 from breed808/process_working_set_private
Add missing Process Collector metrics
2021-06-25 08:36:43 +10:00
Ben Reedy
4b2cd0a024 Merge pull request #759 from breed808/textfile
Fix textfile crashes with duplicate metrics
2021-06-25 08:36:21 +10:00
Ben Reedy
ad447a6b08 Add unit suffix to process working_set metric
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-06-19 09:02:30 +10:00
Ben Reedy
e4d7604193 Move process metric documentation to markdown file
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-06-19 09:02:28 +10:00
Ben Reedy
757f88be04 Add missing process counters
Working Set Private and Working Set Peak were being collected, but not
exposed by the exporter.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-06-19 09:02:26 +10:00
Calle Pettersson
cff484b5e1 Merge pull request #804 from max-len/bandwidth-bytes
Export CurrentBandwidth as bytes
2021-06-16 20:16:45 +02:00
Calle Pettersson
2dc568b5cd Merge pull request #805 from max-len/typo
Fix typo: process_memory_limit_bytes
2021-06-16 20:14:55 +02:00
Calle Pettersson
448f505729 Merge pull request #807 from max-len/doc-cpu
Fix doc: collector.cpu.md
2021-06-16 20:12:59 +02:00
Max Lendrich
6d1ba11a8e Fix doc: collector.cpu.md
Signed-off-by: Max Lendrich <maximilian.lendrich@sap.com>
2021-06-16 15:18:29 +02:00
Max Lendrich
0f5a232142 Fix typo
Signed-off-by: Max Lendrich <maximilian.lendrich@sap.com>
2021-06-15 12:38:23 +02:00
Max Lendrich
bbab591570 Export CurrentBandwidth as bytes
From https://prometheus.io/docs/practices/naming/:
To avoid confusion combining different metrics, always use bytes, even
where bits appear more common.

Fixes #800

Signed-off-by: Max Lendrich <maximilian.lendrich@sap.com>
2021-06-14 17:33:27 +02:00
Ben Reedy
2bc3c1859a Merge pull request #802 from breed808/log_dependency
Replace deprecated log lib in remaining collectors
2021-06-12 19:52:29 +10:00
Ben Reedy
7c61a4dc25 Run "go mod tidy" on project
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-06-12 11:57:46 +10:00
Ben Reedy
5a57da53be Replace deprecated log lib in remaining collectors
Some collectors were missed when migrating to the local
github.com/prometheus-community/windows_exporter/log library.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-06-12 11:57:40 +10:00
Calle Pettersson
72c46664db Merge pull request #789 from Wittionary/issue-776
Fixes #776
2021-05-25 07:35:49 +02:00
Witt Allen
8689c41c68 Added a 'data source' field to specify whether hcsshim or Host Compute Services in Hyper-V is used
Signed-off-by: Witt Allen <qwert59@gmail.com>
2021-05-24 00:57:20 -05:00
Calle Pettersson
74eac8f29b Merge pull request #788 from benridley/bugfix_sysinfo_layout
Correct layout of SystemInfo structs
2021-05-21 09:41:34 +02:00
Ben Ridley
bb48f1caac Correct layout of SystemInfo structs to prevent incorrect fields being read
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-05-20 16:30:52 -07:00
Ben Reedy
068d03bd01 Merge pull request #783 from breed808/msmq_remove_hardcoded_queue
Remove hard-coded "Computer Queues" filter
2021-05-17 16:58:50 +10:00
Ben Reedy
5072879dca Check duplicates across entire textfile set
All textfile metrics are now checked for duplicates. If duplicates are
detected, all metrics are dropped and an error is logged.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-05-17 16:54:28 +10:00
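A rough sketch of such a check, assuming the textfile metrics have already been parsed into client_model MetricFamily values (the helper name is illustrative):
```
package collector

import (
	"fmt"
	"sort"

	dto "github.com/prometheus/client_model/go"
)

// hasDuplicateSeries reports whether the combined metric families from all
// textfiles contain the same series (name plus sorted label set) twice.
func hasDuplicateSeries(families []*dto.MetricFamily) bool {
	seen := map[string]struct{}{}
	for _, mf := range families {
		for _, m := range mf.GetMetric() {
			labels := make([]string, 0, len(m.GetLabel()))
			for _, lp := range m.GetLabel() {
				labels = append(labels, lp.GetName()+"="+lp.GetValue())
			}
			sort.Strings(labels)
			key := fmt.Sprintf("%s{%v}", mf.GetName(), labels)
			if _, ok := seen[key]; ok {
				return true
			}
			seen[key] = struct{}{}
		}
	}
	return false
}
```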
Ben Reedy
0fb7eec670 Remove hard-coded "Computer Queues" filter
The msmq collector would only collect from a hard-coded "Computer Queues"
queue.
Removing the filter allows other queues to be queried with
the collector.msmq.msmq-where flag.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-05-16 14:53:54 +10:00
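A hypothetical invocation using that flag; the WQL filter shown is only an example:
```
.\windows_exporter.exe --collectors.enabled "[defaults],msmq" --collector.msmq.msmq-where "Name LIKE '%private%'"
```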
Ben Reedy
4293497b29 Fix textfile crashes with duplicate metrics
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-05-12 20:57:17 +10:00
Ben Reedy
95f10f19cb Merge pull request #778 from Wittionary/fix-issue-777
Fixes #777
2021-05-03 14:23:03 +10:00
Witt
288f2a60e7 Changed 'Yes' to 'No' to reflect current state of collectors enabled by default
Signed-off-by: Witt Allen <qwert59@gmail.com>
2021-05-02 19:40:33 -05:00
Ben Reedy
2e32b0e2b1 Merge pull request #767 from louij2/patch-1
Update collector.service.md
2021-05-01 13:14:26 +10:00
Calle Pettersson
09759a4e8c Merge pull request #698 from ramonsmits/patch-1
Example - Using [defaults] with `--collectors.enabled` argument
2021-04-25 19:53:42 +02:00
louij2
dfd42a6c0c Update collector.service.md
Added more details for monitoring multiple services.

Signed-off-by: Luca Chana <clubdog123@gmail.com>
2021-04-24 21:05:36 +01:00
Ramon Smits
576c3bf918 Example - Using [defaults] with --collectors.enabled argument
Signed-off-by: Ramon Smits <ramon.smits@gmail.com>
2021-04-23 18:52:52 +02:00
Ben Reedy
19fee044bf Merge pull request #765 from breed808/checksums
CI: Output artifacts in single, flat directory.
2021-04-20 19:00:35 +10:00
Ben Reedy
45a74fdb7f CI: Output artifacts in single, flat directory.
Nested directories caused issues with promu checksum output: user checks of
the sha256sums.txt file failed because the filenames did not match.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-04-19 19:38:17 +10:00
Ben Reedy
db00553ca6 Merge pull request #744 from breed808/tests
Add benchmark for each collector
2021-04-01 22:35:08 +10:00
Ben Reedy
a2c4bf6a2d Add benchmark for each collector
Benchmarks will allow for easier identification of slow collectors.
Additionally, they increase test coverage of the collectors, with some
collectors now reaching 80-95% coverage with this change.

Collector benchmarks have been structured so that common functionality is
present in `collector/collector_test.go`, as is done with non-test
functionality in `collector/collector.go`.
Test logic that is specific to individual collectors is present in the
collector's test file (e.g. `collector/process_test.go` for the Process
collector).

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-04-01 22:28:54 +10:00
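To exercise a single benchmark on a Windows host (the collector package only builds there), something like the following works; the regexp matches one of the new Benchmark functions:
```
go test -v -bench='BenchmarkCPUCollector' -run='^$' ./collector/
```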
Calle Pettersson
7adcac8f39 Merge pull request #702 from benridley/dev_cs_collector
Replace WMI in cs and os collectors
2021-03-30 21:26:23 +02:00
Ben Ridley
863b7d8ab4 Merge branch 'dev_cs_collector' of https://github.com/benridley/windows_exporter into dev_cs_collector
2021-03-29 10:14:26 -07:00
Ben Ridley
33c6b2c6a5 Address GitHub feedback
- Defer registry close calls
- Ensure size parameter in GetComputerName is properly specified
- Clean up some comments to ensure correctness

Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-29 10:13:36 -07:00
Calle Pettersson
6dee2422e1 Merge pull request #753 from prometheus-community/fix-ci
Update CI to install tools with go install rather than go get
2021-03-28 10:41:25 +02:00
Calle Pettersson
5d224b43ca Update CI to install tools with go install rather than go get
Signed-off-by: Calle Pettersson <calle@cape.nu>
2021-03-27 15:30:50 +01:00
Calle Pettersson
3f2a143104 Merge pull request #748 from majerus1223/remote_interactive
Fix typo on remote_interactive
2021-03-19 11:34:25 +01:00
Ben Ridley
ee3848141c Simplify struct usage and comments
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
df2a7a9ec0 Remove temporary uintptr values, as the garbage collector can move addresses from under them.
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
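The hazard being removed is the standard unsafe.Pointer rule for syscalls: a uintptr held in an intermediate variable does not keep its referent alive or in place, so the conversion has to happen inside the call expression. A minimal sketch (procGetSystemInfo and the buffer size are illustrative):
```
package main

import (
	"unsafe"

	"golang.org/x/sys/windows"
)

var (
	kernel32          = windows.NewLazySystemDLL("kernel32.dll")
	procGetSystemInfo = kernel32.NewProc("GetSystemInfo")
)

func main() {
	// Opaque buffer standing in for a SYSTEM_INFO struct (48 bytes on 64-bit Windows).
	var buf [48]byte

	// Risky: storing the address as a plain uintptr first means the runtime
	// no longer sees a reference to buf, so it can move out from under the call.
	//   addr := uintptr(unsafe.Pointer(&buf[0]))
	//   procGetSystemInfo.Call(addr)

	// Correct: convert inside the call expression, as required by the
	// unsafe.Pointer rules (and checked by go vet's unsafeptr analysis).
	_, _, _ = procGetSystemInfo.Call(uintptr(unsafe.Pointer(&buf[0])))
}
```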
Ben Ridley
05f0f6f688 Add idiomatic wrappers to be exposed publicly, and hide low-level
WinAPI operations

Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
d947d0f6db Refactor remaining sysinfoapi calls into header package
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
d063bc0842 Add correct scrape context to OS benchmark
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
dd473c4807 Fixed paging free bytes
moved

Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
7bd58abd27 Converted PagingFreeBytes to use perflib
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
6f941044c7 Change Sprintf interpolation to use explicit types
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
3da11645cf added os_test.go and removed wmi for testing
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
048bff919e Converted most metrics to non-wmi
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
f76334213d Convert os time and timezone from WMI to native go
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
71054ac429 Replace the CS collector with native WinAPI calls to sysinfoapi
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
248b7214e3 Move netapi free back to a defer statement
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-19 10:13:04 +11:00
majerus
094558b1f1 Fix typo
Signed-off-by: majerus <james_majerus@msn.com>
2021-03-16 09:12:56 -05:00
Ben Reedy
18495abb69 Merge pull request #736 from basroovers/master
Typo in tcp doc
2021-03-07 11:04:18 +10:00
Bas Roovers
cc709ac380 Update collector.tcp.md
Changed windows_tcp_connections_established to gauge in tcp doc

Signed-off-by: Bas Roovers <basroovers@icloud.com>
2021-02-24 14:39:07 +01:00
114 changed files with 3384 additions and 1378 deletions

6
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "weekly"

129
.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,129 @@
name: windows_exporter CI/CD
# Trigger on pull requests and releases
# Deployments will only occur for releases (see `if` clauses in the build job).
on:
pull_request:
branches:
- master
release:
types:
- published
- edited
jobs:
test:
runs-on: windows-2019
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: '^1.17.5'
- name: Test
run: make test
- name: Install e2e deps
run: |
go get -u github.com/prometheus/promu@v0.11.1
go get -u github.com/josephspurrier/goversioninfo/cmd/goversioninfo@v1.2.0
# GOPATH\bin dir must be appended to PATH else the `promu` command won't be found
echo "$(go env GOPATH)\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: e2e Test
run: make e2e-test
lint:
runs-on: windows-2019
steps:
# `gofmt` linter run by golangci-lint fails on CRLF line endings (the default for Windows)
- name: Set git to use LF
run: |
git config --global core.autocrlf false
git config --global core.eol lf
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: '^1.17.5'
- name: golangci-lint
uses: golangci/golangci-lint-action@v2
with:
version: v1.43
args: "--timeout=5m"
# golangci-lint action doesn't always provide helpful output, so re-run without the action for
# better output of the problem.
# The cache from the golangci-lint step is re-used here, so this step should finish quickly.
- name: errors
if: ${{ failure() }}
run: golangci-lint run --timeout=5m -c .golangci.yaml
build:
runs-on: windows-2019
needs:
- test
- lint
steps:
- uses: actions/checkout@v2
with:
# fetch-depth required for gitversion in `Build` step
fetch-depth: 0
- uses: actions/setup-go@v2
with:
go-version: '^1.17.5'
- name: Install Build deps
run: |
go get -u github.com/prometheus/promu@v0.11.1
go get -u github.com/josephspurrier/goversioninfo/cmd/goversioninfo@v1.2.0
# GOPATH\bin dir must be added to PATH else the `promu` and `goversioninfo` commands won't be found
echo "$(go env GOPATH)\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: Build
run: |
$ErrorActionPreference = "Stop"
gitversion /output json /showvariable FullSemVer | Set-Content VERSION -PassThru
$Version = Get-Content VERSION
# Windows versioninfo resources need the file version by parts (but product version is free text)
$VersionParts = ($Version -replace '^v?([0-9\.]+).*$','$1').Split(".")
goversioninfo.exe -ver-major $VersionParts[0] -ver-minor $VersionParts[1] -ver-patch $VersionParts[2] -product-version $Version -platform-specific
make crossbuild
# GH requires all files to have different names, so add version/arch to differentiate
foreach($Arch in "amd64","386") {
Move-Item output\$Arch\windows_exporter.exe output\windows_exporter-$Version-$Arch.exe
}
- name: Upload Artifacts
uses: actions/upload-artifact@v2
with:
name: windows_exporter_binaries
path: output\windows_exporter-*.exe
- name: Build Release Artifacts
if: startsWith(github.ref, 'refs/tags/')
run: |
$ErrorActionPreference = "Stop"
$BuildVersion = Get-Content VERSION
$TagName = $env:GITHUB_REF -replace 'refs/tags/', ''
# The MSI version is not semver compliant, so just take the numerical parts
$MSIVersion = $TagName -replace '^v?([0-9\.]+).*$','$1'
foreach($Arch in "amd64","386") {
Write-Verbose "Building windows_exporter $MSIVersion msi for $Arch"
.\installer\build.ps1 -PathToExecutable .\output\windows_exporter-$BuildVersion-$Arch.exe -Version $MSIVersion -Arch "$Arch"
Move-Item installer\Output\windows_exporter-$MSIVersion-$Arch.msi output\
}
promu checksum output\
- name: Release
if: startsWith(github.ref, 'refs/tags/')
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
$TagName = $env:GITHUB_REF -replace 'refs/tags/', ''
Get-ChildItem -Path output\* -Include @('windows_exporter*.msi', 'windows_exporter*.exe', 'sha256sums.txt') | Foreach-Object {gh release upload $TagName $_}


@@ -3,11 +3,10 @@ linters:
enable:
- deadcode
- errcheck
- golint
- revive
- govet
- gofmt
- ineffassign
- interfacer
- structcheck
- unconvert
- varcheck
@@ -20,4 +19,7 @@ issues:
- # Golint has many capitalisation complaints on WMI class names
text: "`?\\w+`? should be `?\\w+`?"
linters:
- golint
- revive
- text: "don't use ALL_CAPS in Go names; use CamelCase"
linters:
- revive


@@ -1,6 +1,9 @@
Contributors in alphabetical order
Maintainers in alphabetical order
* [Ben Reedy](https://github.com/breed808) - breed808@breed808.com
* [Calle Pettersson](https://github.com/carlpett) - calle@cape.nu
Alumni
* [Ben Reedy](https://github.com/breed808)
* [Brian Brazil](https://github.com/brian-brazil)
* [Martin Lindhe](https://github.com/martinlindhe)
* [Calle Pettersson](https://github.com/carlpett)


@@ -8,12 +8,15 @@ windows_exporter.exe: **/*.go
test:
go test -v ./...
bench:
go test -v -bench='benchmark(cpu|logicaldisk|logon|memory|net|process|service|system|tcp|time)collector' ./...
lint:
golangci-lint -c .golangci.yaml run
.PHONY: e2e-test
e2e-test: windows_exporter.exe
powershell -NonInteractive -ExecutionPolicy Bypass -File .\tools\end-to-end-test.ps1
pwsh -NonInteractive -ExecutionPolicy Bypass -File .\tools\end-to-end-test.ps1
fmt:
gofmt -l -w -s .


@@ -76,7 +76,7 @@ Flag | Description | Default value
`--telemetry.addr` | host:port for exporter. | `:9182`
`--telemetry.path` | URL path for surfacing collected metrics. | `/metrics`
`--telemetry.max-requests` | Maximum number of concurrent requests. 0 to disable. | `5`
`--collectors.enabled` | Comma-separated list of collectors to use. Use `[defaults]` as a placeholder for all the collectors enabled by default." | `[defaults]`
`--collectors.enabled` | Comma-separated list of collectors to use. Use `[defaults]` as a placeholder which gets expanded containing all the collectors enabled by default." | `[defaults]`
`--collectors.print` | If true, print available collectors and exit. |
`--scrape.timeout-margin` | Seconds to subtract from the timeout allowed by the client. Tune to allow for overhead or high loads. | `0.5`
`--web.config.file` | A [web config][web_config] for setting up TLS and Auth | None
@@ -140,6 +140,14 @@ The prometheus metrics will be exposed on [localhost:9182](http://localhost:9182
When there are multiple processes with the same name, WMI represents those after the first instance as `process-name#index`. So to get them all, rather than just the first one, the [regular expression](https://en.wikipedia.org/wiki/Regular_expression) must use `.+`. See [process](docs/collector.process.md) for more information.
### Using [defaults] with `--collectors.enabled` argument
Using `[defaults]` with `--collectors.enabled` argument which gets expanded with all default collectors.
.\windows_exporter.exe --collectors.enabled "[defaults],process,container"
This enables the additional process and container collectors on top of the defaults.
### Using a configuration file
YAML configuration files can be specified with the `--config.file` flag. E.G. `.\windows_exporter.exe --config.file=config.yml`


@@ -1,84 +0,0 @@
version: "{build}"
os: Visual Studio 2019
build: off
environment:
GOPATH: c:\gopath
GO111MODULE: on
clone_folder: c:\gopath\src\github.com\prometheus-community\windows_exporter
install:
- mkdir %GOPATH%\bin
- set PATH=%GOPATH%\bin;%PATH%
- set PATH=%PATH%;C:\msys64\mingw64\bin
- choco install gitversion.portable make -y
- ps: |
appveyor DownloadFile https://github.com/golangci/golangci-lint/releases/download/v1.21.0/golangci-lint-1.21.0-windows-amd64.zip
Expand-Archive golangci-lint-1.21.0-windows-amd64.zip
Move-Item golangci-lint-1.21.0-windows-amd64\golangci-lint-1.21.0-windows-amd64\golangci-lint.exe $env:GOPATH\bin\golangci-lint.exe
- ps: |
$env:GO111MODULE="off"
go get -u github.com/prometheus/promu
go get -u github.com/josephspurrier/goversioninfo/cmd/goversioninfo
$env:GO111MODULE="on"
test_script:
- make test
after_test:
- make lint
- make e2e-test
build_script:
- ps: |
# go mod download (or, if we don't call it, go build) will write every dependent package name to
# stderr, which will be interpreted as an error and abort the build if ErrorActionPreference is Stop,
# so we need to run it before setting the preference.
go mod download
$ErrorActionPreference = "Stop"
gitversion /output json /showvariable FullSemVer | Set-Content VERSION -PassThru
$Version = Get-Content VERSION
# Windows versioninfo resources need the file version by parts (but product version is free text)
$VersionParts = ($Version -replace '^v?([0-9\.]+).*$','$1').Split(".")
goversioninfo.exe -ver-major $VersionParts[0] -ver-minor $VersionParts[1] -ver-patch $VersionParts[2] -product-version $Version -platform-specific
make crossbuild
# GH requires all files to have different names, so add version/arch to differentiate
foreach($Arch in "amd64","386") {
Rename-Item output\$Arch\windows_exporter.exe -NewName windows_exporter-$Version-$Arch.exe
}
after_build:
- ps: |
# Build installer packages only on tagged releases
if($env:APPVEYOR_REPO_TAG -ne "True") {
return
}
$ErrorActionPreference = "Stop"
$BuildVersion = Get-Content VERSION
# The MSI version is not semver compliant, so just take the numerical parts
$MSIVersion = $env:APPVEYOR_REPO_TAG_NAME -replace '^v?([0-9\.]+).*$','$1'
foreach($Arch in "amd64","386") {
Write-Verbose "Building windows_exporter $MSIVersion msi for $Arch"
.\installer\build.ps1 -PathToExecutable .\output\$Arch\windows_exporter-$BuildVersion-$Arch.exe -Version $MSIVersion -Arch "$Arch"
Move-Item installer\Output\windows_exporter-$MSIVersion-$Arch.msi output\$Arch\
}
- promu checksum output\
artifacts:
- name: Artifacts
path: output\**\*
deploy:
- provider: GitHub
description: windows_exporter version $(appveyor_build_version)
artifact: Artifacts
auth_token:
secure: 'hFR7Ymxt/Rb25p4BweFvMNhX03lHD9kXJXrRlC/KbThazHuLD5NTx2ibMI6LYRsr'
draft: false
prerelease: false
on:
appveyor_repo_tag: true


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

9
collector/ad_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkADCollector(b *testing.B) {
benchmarkCollector(b, "ad", NewADCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

9
collector/adfs_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkADFSCollector(b *testing.B) {
benchmarkCollector(b, "adfs", newADFSCollector)
}


@@ -1,10 +1,11 @@
//go:build windows
// +build windows
package collector
import (
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {


@@ -3,6 +3,8 @@ package collector
import (
"reflect"
"testing"
"github.com/prometheus/client_golang/prometheus"
)
func TestExpandChildCollectors(t *testing.T) {
@@ -32,3 +34,27 @@ func TestExpandChildCollectors(t *testing.T) {
})
}
}
func benchmarkCollector(b *testing.B, name string, collectFunc func() (Collector, error)) {
// Create perflib scrape context. Some perflib collectors required a correct context,
// or will fail during benchmark.
scrapeContext, err := PrepareScrapeContext([]string{name})
if err != nil {
b.Error(err)
}
c, err := collectFunc()
if err != nil {
b.Error(err)
}
metrics := make(chan prometheus.Metric)
go func() {
for {
<-metrics
}
}()
for i := 0; i < b.N; i++ {
c.Collect(scrapeContext, metrics) //nolint:errcheck
}
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkContainerCollector(b *testing.B) {
benchmarkCollector(b, "container", NewContainerMetricsCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -8,8 +9,8 @@ import (
"strings"
"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {

9
collector/cpu_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkCPUCollector(b *testing.B) {
benchmarkCollector(b, "cpu", newCPUCollector)
}


@@ -1,12 +1,12 @@
//go:build windows
// +build windows
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/headers/sysinfoapi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
)
@@ -60,51 +60,47 @@ func (c *CSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) e
return nil
}
// Win32_ComputerSystem docs:
// - https://msdn.microsoft.com/en-us/library/aa394102
type Win32_ComputerSystem struct {
NumberOfLogicalProcessors uint32
TotalPhysicalMemory uint64
DNSHostname string
Domain string
Workgroup *string
}
func (c *CSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_ComputerSystem
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
// Get systeminfo for number of processors
systemInfo := sysinfoapi.GetSystemInfo()
// Get memory status for physical memory
mem, err := sysinfoapi.GlobalMemoryStatusEx()
if err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
ch <- prometheus.MustNewConstMetric(
c.LogicalProcessors,
prometheus.GaugeValue,
float64(dst[0].NumberOfLogicalProcessors),
float64(systemInfo.NumberOfProcessors),
)
ch <- prometheus.MustNewConstMetric(
c.PhysicalMemoryBytes,
prometheus.GaugeValue,
float64(dst[0].TotalPhysicalMemory),
float64(mem.TotalPhys),
)
var fqdn string
if dst[0].Workgroup == nil || dst[0].Domain != *dst[0].Workgroup {
fqdn = dst[0].DNSHostname + "." + dst[0].Domain
} else {
fqdn = dst[0].DNSHostname
hostname, err := sysinfoapi.GetComputerName(sysinfoapi.ComputerNameDNSHostname)
if err != nil {
return nil, err
}
domain, err := sysinfoapi.GetComputerName(sysinfoapi.ComputerNameDNSDomain)
if err != nil {
return nil, err
}
fqdn, err := sysinfoapi.GetComputerName(sysinfoapi.ComputerNameDNSFullyQualified)
if err != nil {
return nil, err
}
ch <- prometheus.MustNewConstMetric(
c.Hostname,
prometheus.GaugeValue,
1.0,
dst[0].DNSHostname,
dst[0].Domain,
hostname,
domain,
fqdn,
)

9
collector/cs_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkCsCollector(b *testing.B) {
benchmarkCollector(b, "cs", NewCSCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -128,7 +129,7 @@ func NewDFSRCollector() (Collector, error) {
ConnectionFilesReceivedTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connection_received_files_total"),
"Total number of files receieved for connection",
"Total number of files received for connection",
[]string{"name"},
nil,
),

9
collector/dfsr_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkDFSRCollector(b *testing.B) {
benchmarkCollector(b, "dfsr", NewDFSRCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

9
collector/dhcp_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkDHCPCollector(b *testing.B) {
benchmarkCollector(b, "dhcp", NewDhcpCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -136,7 +137,7 @@ func NewDNSCollector() (Collector, error) {
),
Responses: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "responses_total"),
"Number of reponses sent by DNS server",
"Number of responses sent by DNS server",
[]string{"protocol"},
nil,
),

9
collector/dns_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkDNSCollector(b *testing.B) {
benchmarkCollector(b, "dns", NewDNSCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -234,7 +235,7 @@ func (c *exchangeCollector) collectADAccessProcesses(ctx *ScrapeContext, ch chan
}
// since we're not including the PID suffix from the instance names in the label names,
// we get an occational duplicate. This seems to affect about 4 instances only on this object.
// we get an occasional duplicate. This seems to affect about 4 instances only on this object.
labelUseCount[labelName]++
if labelUseCount[labelName] > 1 {
labelName = fmt.Sprintf("%s_%d", labelName, labelUseCount[labelName])


@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkExchangeCollector(b *testing.B) {
benchmarkCollector(b, "exchange", newExchangeCollector)
}


@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkFsrmQuotaCollector(b *testing.B) {
benchmarkCollector(b, "fsrmquota", newFSRMQuotaCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

9
collector/hyperv_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkHypervCollector(b *testing.B) {
benchmarkCollector(b, "hyperv", NewHyperVCollector)
}

File diff suppressed because it is too large.

9
collector/iis_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkIISCollector(b *testing.B) {
benchmarkCollector(b, "iis", NewIISCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -103,14 +104,14 @@ func NewLogicalDiskCollector() (Collector, error) {
FreeSpace: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "free_bytes"),
"Free space in bytes (LogicalDisk.PercentFreeSpace)",
"Free space in bytes, updates every 10-15 min (LogicalDisk.PercentFreeSpace)",
[]string{"volume"},
nil,
),
TotalSpace: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "size_bytes"),
"Total space in bytes (LogicalDisk.PercentFreeSpace_Base)",
"Total space in bytes, updates every 10-15 min (LogicalDisk.PercentFreeSpace_Base)",
[]string{"volume"},
nil,
),


@@ -0,0 +1,13 @@
package collector
import (
"testing"
)
func BenchmarkLogicalDiskCollector(b *testing.B) {
// Whitelist is not set in testing context (kingpin flags not parsed), causing the collector to skip all disks.
localVolumeWhitelist := ".+"
volumeWhitelist = &localVolumeWhitelist
benchmarkCollector(b, "logical_disk", NewLogicalDiskCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

10
collector/logon_test.go Normal file

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkLogonCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewLogonCollector)
}


@@ -1,6 +1,7 @@
// returns data points from Win32_PerfRawData_PerfOS_Memory
// <add link to documentation here> - Win32_PerfRawData_PerfOS_Memory class
//go:build windows
// +build windows
package collector

9
collector/memory_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkMemoryCollector(b *testing.B) {
benchmarkCollector(b, "memory", NewMemoryCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -93,29 +94,27 @@ func (c *Win32_PerfRawData_MSMQ_MSMQQueueCollector) collect(ch chan<- prometheus
}
for _, msmq := range dst {
if msmq.Name == "Computer Queues" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.BytesinJournalQueue,
prometheus.GaugeValue,
float64(msmq.BytesinJournalQueue),
strings.ToLower(msmq.Name),
)
ch <- prometheus.MustNewConstMetric(
c.BytesinQueue,
prometheus.GaugeValue,
float64(msmq.BytesinQueue),
strings.ToLower(msmq.Name),
)
ch <- prometheus.MustNewConstMetric(
c.MessagesinJournalQueue,
prometheus.GaugeValue,
float64(msmq.MessagesinJournalQueue),
strings.ToLower(msmq.Name),
)
ch <- prometheus.MustNewConstMetric(
c.MessagesinQueue,
prometheus.GaugeValue,

10
collector/msmq_test.go Normal file

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkMsmqCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewMSMQCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -70,7 +71,7 @@ func getMSSQLInstances() mssqlInstancesType {
type mssqlCollectorsMap map[string]mssqlCollectorFunc
func mssqlAvailableClassCollectors() string {
return "accessmethods,availreplica,bufman,databases,dbreplica,genstats,locks,memmgr,sqlstats,sqlerrors,transactions"
return "accessmethods,availreplica,bufman,databases,dbreplica,genstats,locks,memmgr,sqlstats,sqlerrors,transactions,waitstats"
}
func (c *MSSQLCollector) getMSSQLCollectors() mssqlCollectorsMap {
@@ -86,6 +87,7 @@ func (c *MSSQLCollector) getMSSQLCollectors() mssqlCollectorsMap {
mssqlCollectors["sqlstats"] = c.collectSQLStats
mssqlCollectors["sqlerrors"] = c.collectSQLErrors
mssqlCollectors["transactions"] = c.collectTransactions
mssqlCollectors["waitstats"] = c.collectWaitStats
return mssqlCollectors
}
@@ -121,6 +123,8 @@ func mssqlGetPerfObjectName(sqlInstance string, collector string) string {
suffix = "SQL Statistics"
case "transactions":
suffix = "Transactions"
case "waitstats":
suffix = "Wait Statistics"
}
return (prefix + suffix)
}
@@ -382,6 +386,20 @@ type MSSQLCollector struct {
TransactionsVersionStoreCreationUnits *prometheus.Desc
TransactionsVersionStoreTruncationUnits *prometheus.Desc
// Win32_PerfRawData_{instance}_SQLServerWaitStatistics
WaitStatsLockWaits *prometheus.Desc
WaitStatsMemoryGrantQueueWaits *prometheus.Desc
WaitStatsThreadSafeMemoryObjectsWaits *prometheus.Desc
WaitStatsLogWriteWaits *prometheus.Desc
WaitStatsLogBufferWaits *prometheus.Desc
WaitStatsNetworkIOWaits *prometheus.Desc
WaitStatsPageIOLatchWaits *prometheus.Desc
WaitStatsPageLatchWaits *prometheus.Desc
WaitStatsNonpageLatchWaits *prometheus.Desc
WaitStatsWaitForTheWorkerWaits *prometheus.Desc
WaitStatsWorkspaceSynchronizationWaits *prometheus.Desc
WaitStatsTransactionOwnershipWaits *prometheus.Desc
mssqlInstances mssqlInstancesType
mssqlCollectors mssqlCollectorsMap
mssqlChildCollectorFailure int
@@ -1789,6 +1807,91 @@ func NewMSSQLCollector() (Collector, error) {
nil,
),
// Win32_PerfRawData_{instance}_SQLServerWaitStatistics
WaitStatsLockWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_lock_waits"),
"(WaitStats.LockWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsMemoryGrantQueueWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_memory_grant_queue_waits"),
"(WaitStats.MemoryGrantQueueWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsThreadSafeMemoryObjectsWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_thread_safe_memory_objects_waits"),
"(WaitStats.ThreadSafeMemoryObjectsWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsLogWriteWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_log_write_waits"),
"(WaitStats.LogWriteWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsLogBufferWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_log_buffer_waits"),
"(WaitStats.LogBufferWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsNetworkIOWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_network_io_waits"),
"(WaitStats.NetworkIOWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsPageIOLatchWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_page_io_latch_waits"),
"(WaitStats.PageIOLatchWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsPageLatchWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_page_latch_waits"),
"(WaitStats.PageLatchWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsNonpageLatchWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_nonpage_latch_waits"),
"(WaitStats.NonpageLatchWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsWaitForTheWorkerWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_wait_for_the_worker_waits"),
"(WaitStats.WaitForTheWorkerWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsWorkspaceSynchronizationWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_workspace_synchronization_waits"),
"(WaitStats.WorkspaceSynchronizationWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsTransactionOwnershipWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_transaction_ownership_waits"),
"(WaitStats.TransactionOwnershipWaits)",
[]string{"mssql_instance", "item"},
nil,
),
mssqlInstances: mssqlInstances,
}
@@ -1855,7 +1958,7 @@ func (c *MSSQLCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric
}
wg.Wait()
// this shoud return an error if any? some? children errord.
// this should return an error if any? some? children errord.
if c.mssqlChildCollectorFailure > 0 {
return errors.New("at least one child collector failed")
}
@@ -3731,6 +3834,123 @@ func (c *MSSQLCollector) collectSQLStats(ctx *ScrapeContext, ch chan<- prometheu
return nil, nil
}
// Win32_PerfRawData_MSSQLSERVER_SQLServerWaitStatistics docs:
// - https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-wait-statistics-object
type mssqlWaitStatistics struct {
Name string
WaitStatsLockWaits float64 `perflib:"Lock waits"`
WaitStatsMemoryGrantQueueWaits float64 `perflib:"Memory grant queue waits"`
WaitStatsThreadSafeMemoryObjectsWaits float64 `perflib:"Thread-safe memory objects waits"`
WaitStatsLogWriteWaits float64 `perflib:"Log write waits"`
WaitStatsLogBufferWaits float64 `perflib:"Log buffer waits"`
WaitStatsNetworkIOWaits float64 `perflib:"Network IO waits"`
WaitStatsPageIOLatchWaits float64 `perflib:"Page IO latch waits"`
WaitStatsPageLatchWaits float64 `perflib:"Page latch waits"`
WaitStatsNonpageLatchWaits float64 `perflib:"Non-Page latch waits"`
WaitStatsWaitForTheWorkerWaits float64 `perflib:"Wait for the worker"`
WaitStatsWorkspaceSynchronizationWaits float64 `perflib:"Workspace synchronization waits"`
WaitStatsTransactionOwnershipWaits float64 `perflib:"Transaction ownership waits"`
}
func (c *MSSQLCollector) collectWaitStats(ctx *ScrapeContext, ch chan<- prometheus.Metric, sqlInstance string) (*prometheus.Desc, error) {
var dst []mssqlWaitStatistics
log.Debugf("mssql_waitstats collector iterating sql instance %s.", sqlInstance)
if err := unmarshalObject(ctx.perfObjects[mssqlGetPerfObjectName(sqlInstance, "waitstats")], &dst); err != nil {
return nil, err
}
for _, v := range dst {
item := v.Name
ch <- prometheus.MustNewConstMetric(
c.WaitStatsLockWaits,
prometheus.CounterValue,
v.WaitStatsLockWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsMemoryGrantQueueWaits,
prometheus.CounterValue,
v.WaitStatsMemoryGrantQueueWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsThreadSafeMemoryObjectsWaits,
prometheus.CounterValue,
v.WaitStatsThreadSafeMemoryObjectsWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsLogWriteWaits,
prometheus.CounterValue,
v.WaitStatsLogWriteWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsLogBufferWaits,
prometheus.CounterValue,
v.WaitStatsLogBufferWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsNetworkIOWaits,
prometheus.CounterValue,
v.WaitStatsNetworkIOWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsPageIOLatchWaits,
prometheus.CounterValue,
v.WaitStatsPageIOLatchWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsPageLatchWaits,
prometheus.CounterValue,
v.WaitStatsPageLatchWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsNonpageLatchWaits,
prometheus.CounterValue,
v.WaitStatsNonpageLatchWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsWaitForTheWorkerWaits,
prometheus.CounterValue,
v.WaitStatsWaitForTheWorkerWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsWorkspaceSynchronizationWaits,
prometheus.CounterValue,
v.WaitStatsWorkspaceSynchronizationWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsTransactionOwnershipWaits,
prometheus.CounterValue,
v.WaitStatsTransactionOwnershipWaits,
sqlInstance, item,
)
}
return nil, nil
}
type mssqlSQLErrors struct {
Name string
ErrorsPersec float64 `perflib:"Errors/sec"`

9
collector/mssql_test.go Normal file

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkMSSQLCollector(b *testing.B) {
benchmarkCollector(b, "mssql", NewMSSQLCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -118,7 +119,7 @@ func NewNetworkCollector() (Collector, error) {
nil,
),
CurrentBandwidth: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "current_bandwidth"),
prometheus.BuildFQName(Namespace, subsystem, "current_bandwidth_bytes"),
"(Network.CurrentBandwidth)",
[]string{"nic"},
nil,
@@ -251,7 +252,7 @@ func (c *NetworkCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metr
ch <- prometheus.MustNewConstMetric(
c.CurrentBandwidth,
prometheus.GaugeValue,
nic.CurrentBandwidth,
nic.CurrentBandwidth/8,
name,
)
}


@@ -1,8 +1,11 @@
//go:build windows
// +build windows
package collector
import "testing"
import (
"testing"
)
func TestNetworkToInstanceName(t *testing.T) {
data := map[string]string{
@@ -15,3 +18,10 @@ func TestNetworkToInstanceName(t *testing.T) {
}
}
}
func BenchmarkNetCollector(b *testing.B) {
// Whitelist is not set in testing context (kingpin flags not parsed), causing the collector to skip all interfaces.
localNicWhitelist := ".+"
nicWhitelist = &localNicWhitelist
benchmarkCollector(b, "net", NewNetworkCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNetFrameworkNETCLRExceptionsCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRExceptionsCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRInteropCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRInteropCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRJitCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRJitCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRLoadingCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRLoadingCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRLocksAndThreadsCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRLocksAndThreadsCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRMemoryCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRMemoryCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRRemotingCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRRemotingCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRSecurityCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRSecurityCollector)
}


@@ -1,18 +1,24 @@
//go:build windows
// +build windows
package collector
import (
"errors"
"fmt"
"os"
"strings"
"time"
"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/headers/netapi32"
"github.com/prometheus-community/windows_exporter/headers/psapi"
"github.com/prometheus-community/windows_exporter/headers/sysinfoapi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"golang.org/x/sys/windows/registry"
)
func init() {
registerCollector("os", NewOSCollector)
registerCollector("os", NewOSCollector, "Paging File")
}
// A OSCollector is a Prometheus collector for WMI metrics
@@ -32,6 +38,12 @@ type OSCollector struct {
Timezone *prometheus.Desc
}
type pagingFileCounter struct {
Name string
Usage float64 `perflib:"% Usage"`
UsagePeak float64 `perflib:"% Usage Peak"`
}
// NewOSCollector ...
func NewOSCollector() (Collector, error) {
const subsystem = "os"
@@ -86,7 +98,7 @@ func NewOSCollector() (Collector, error) {
nil,
),
ProcessMemoryLimitBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "process_memory_limix_bytes"),
prometheus.BuildFQName(Namespace, subsystem, "process_memory_limit_bytes"),
"OperatingSystem.MaxProcessMemorySize",
nil,
nil,
@@ -121,7 +133,7 @@ func NewOSCollector() (Collector, error) {
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *OSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting os metrics:", desc, err)
return err
}
@@ -146,41 +158,102 @@ type Win32_OperatingSystem struct {
Version string
}
func (c *OSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_OperatingSystem
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
func (c *OSCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
nwgi, err := netapi32.GetWorkstationInfo()
if err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
gmse, err := sysinfoapi.GlobalMemoryStatusEx()
if err != nil {
return nil, err
}
currentTime := time.Now()
timezoneName, _ := currentTime.Zone()
// Get total allocation of paging files across all disks.
memManKey, err := registry.OpenKey(registry.LOCAL_MACHINE, `SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management`, registry.QUERY_VALUE)
defer memManKey.Close()
if err != nil {
return nil, err
}
pagingFiles, _, err := memManKey.GetStringsValue("ExistingPageFiles")
if err != nil {
return nil, err
}
// Get build number and product name from registry
ntKey, err := registry.OpenKey(registry.LOCAL_MACHINE, `SOFTWARE\Microsoft\Windows NT\CurrentVersion`, registry.QUERY_VALUE)
defer ntKey.Close()
if err != nil {
return nil, err
}
pn, _, err := ntKey.GetStringValue("ProductName")
if err != nil {
return nil, err
}
bn, _, err := ntKey.GetStringValue("CurrentBuildNumber")
if err != nil {
return nil, err
}
var fsipf float64
for _, pagingFile := range pagingFiles {
fileString := strings.ReplaceAll(pagingFile, `\??\`, "")
file, err := os.Stat(fileString)
if err != nil {
return nil, err
}
fsipf += float64(file.Size())
}
gpi, err := psapi.GetPerformanceInfo()
if err != nil {
return nil, err
}
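// ctx.perfObjects["Paging File"] is available because the "Paging File" perflib
// object is requested when the collector registers itself in init above.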
var pfc = make([]pagingFileCounter, 0)
if err := unmarshalObject(ctx.perfObjects["Paging File"], &pfc); err != nil {
return nil, err
}
// Get current page file usage.
var pfbRaw float64
for _, pageFile := range pfc {
if strings.Contains(strings.ToLower(pageFile.Name), "_total") {
continue
}
pfbRaw += pageFile.Usage
}
// Subtract from total page file allocation on disk.
pfb := fsipf - (pfbRaw * float64(gpi.PageSize))
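// Illustrative arithmetic, with assumed values: a single 4 GiB page file on disk
// (fsipf = 4294967296), 2048 pages in use and a 4 KiB page size gives
// pfb = 4294967296 - 2048*4096 = 4286578688 bytes of free paging space.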
ch <- prometheus.MustNewConstMetric(
c.OSInformation,
prometheus.GaugeValue,
1.0,
dst[0].Caption,
dst[0].Version,
fmt.Sprintf("Microsoft %s", pn), // Caption
fmt.Sprintf("%d.%d.%s", nwgi.VersionMajor, nwgi.VersionMinor, bn), // Version
)
ch <- prometheus.MustNewConstMetric(
c.PhysicalMemoryFreeBytes,
prometheus.GaugeValue,
float64(dst[0].FreePhysicalMemory*1024), // KiB -> bytes
float64(gmse.AvailPhys),
)
time := dst[0].LocalDateTime
ch <- prometheus.MustNewConstMetric(
c.Time,
prometheus.GaugeValue,
float64(time.Unix()),
float64(currentTime.Unix()),
)
timezoneName, _ := time.Zone()
ch <- prometheus.MustNewConstMetric(
c.Timezone,
prometheus.GaugeValue,
@@ -191,55 +264,58 @@ func (c *OSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, er
ch <- prometheus.MustNewConstMetric(
c.PagingFreeBytes,
prometheus.GaugeValue,
float64(dst[0].FreeSpaceInPagingFiles*1024), // KiB -> bytes
pfb,
)
ch <- prometheus.MustNewConstMetric(
c.VirtualMemoryFreeBytes,
prometheus.GaugeValue,
float64(dst[0].FreeVirtualMemory*1024), // KiB -> bytes
float64(gmse.AvailPageFile),
)
// Windows has no defined limit, and is based off available resources. This currently isn't calculated by WMI and is set to default value.
// https://techcommunity.microsoft.com/t5/windows-blog-archive/pushing-the-limits-of-windows-processes-and-threads/ba-p/723824
// https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/win32-operatingsystem
ch <- prometheus.MustNewConstMetric(
c.ProcessesLimit,
prometheus.GaugeValue,
float64(dst[0].MaxNumberOfProcesses),
float64(4294967295),
)
ch <- prometheus.MustNewConstMetric(
c.ProcessMemoryLimitBytes,
prometheus.GaugeValue,
float64(dst[0].MaxProcessMemorySize*1024), // KiB -> bytes
float64(gmse.TotalVirtual),
)
ch <- prometheus.MustNewConstMetric(
c.Processes,
prometheus.GaugeValue,
float64(dst[0].NumberOfProcesses),
float64(gpi.ProcessCount),
)
ch <- prometheus.MustNewConstMetric(
c.Users,
prometheus.GaugeValue,
float64(dst[0].NumberOfUsers),
float64(nwgi.LoggedOnUsers),
)
ch <- prometheus.MustNewConstMetric(
c.PagingLimitBytes,
prometheus.GaugeValue,
float64(dst[0].SizeStoredInPagingFiles*1024), // KiB -> bytes
fsipf,
)
ch <- prometheus.MustNewConstMetric(
c.VirtualMemoryBytes,
prometheus.GaugeValue,
float64(dst[0].TotalVirtualMemorySize*1024), // KiB -> bytes
float64(gmse.TotalPageFile),
)
ch <- prometheus.MustNewConstMetric(
c.VisibleMemoryBytes,
prometheus.GaugeValue,
float64(dst[0].TotalVisibleMemorySize*1024), // KiB -> bytes
float64(gmse.TotalPhys),
)
return nil, nil

collector/os_test.go (new file, 9 lines added)

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkOSCollector(b *testing.B) {
benchmarkCollector(b, "os", NewOSCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -42,6 +43,8 @@ type processCollector struct {
PrivateBytes *prometheus.Desc
ThreadCount *prometheus.Desc
VirtualBytes *prometheus.Desc
WorkingSetPrivate *prometheus.Desc
WorkingSetPeak *prometheus.Desc
WorkingSet *prometheus.Desc
processWhitelistPattern *regexp.Regexp
@@ -65,7 +68,7 @@ func newProcessCollector() (Collector, error) {
),
CPUTimeTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_time_total"),
"Returns elapsed time that all of the threads of this process used the processor to execute instructions by mode (privileged, user). An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count.",
"Returns elapsed time that all of the threads of this process used the processor to execute instructions by mode (privileged, user).",
[]string{"process", "process_id", "creating_process_id", "mode"},
nil,
),
@@ -77,31 +80,31 @@ func newProcessCollector() (Collector, error) {
),
IOBytesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "io_bytes_total"),
"Bytes issued to I/O operations in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations. ",
"Bytes issued to I/O operations in different modes (read, write, other).",
[]string{"process", "process_id", "creating_process_id", "mode"},
nil,
),
IOOperationsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "io_operations_total"),
"I/O operations issued in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations. ",
"I/O operations issued in different modes (read, write, other).",
[]string{"process", "process_id", "creating_process_id", "mode"},
nil,
),
PageFaultsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "page_faults_total"),
"Page faults by the threads executing in this process. A page fault occurs when a thread refers to a virtual memory page that is not in its working set in main memory. This can cause the page not to be fetched from disk if it is on the standby list and hence already in main memory, or if it is in use by another process with which the page is shared.",
"Page faults by the threads executing in this process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
PageFileBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "page_file_bytes"),
"Current number of bytes this process has used in the paging file(s). Paging files are used to store pages of memory used by the process that are not contained in other files. Paging files are shared by all processes, and lack of space in paging files can prevent other processes from allocating memory.",
"Current number of bytes this process has used in the paging file(s).",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
PoolBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "pool_bytes"),
"Pool Bytes is the last observed number of bytes in the paged or nonpaged pool. The nonpaged pool is an area of system memory (physical memory used by the operating system) for objects that cannot be written to disk, but must remain in physical memory as long as they are allocated. The paged pool is an area of system memory (physical memory used by the operating system) for objects that can be written to disk when they are not being used. Nonpaged pool bytes is calculated differently than paged pool bytes, so it might not equal the total of paged pool bytes.",
"Pool Bytes is the last observed number of bytes in the paged or nonpaged pool.",
[]string{"process", "process_id", "creating_process_id", "pool"},
nil,
),
@@ -119,19 +122,31 @@ func newProcessCollector() (Collector, error) {
),
ThreadCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "thread_count"),
"Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread.",
"Number of threads currently active in this process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
VirtualBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "virtual_bytes"),
"Current size, in bytes, of the virtual address space that the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite and, by using too much, the process can limit its ability to load libraries.",
"Current size, in bytes, of the virtual address space that the process is using.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
WorkingSetPrivate: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "working_set_private_bytes"),
"Size of the working set, in bytes, that is use for this process only and not shared nor shareable by other processes.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
WorkingSetPeak: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "working_set_peak_bytes"),
"Maximum size, in bytes, of the Working Set of this process at any point in time. The Working Set is the set of memory pages touched recently by the threads in the process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
WorkingSet: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "working_set"),
"Maximum number of bytes in the working set of this process at any point in time. The working set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from working sets. If they are needed, they are then soft-faulted back into the working set before they leave main memory.",
prometheus.BuildFQName(Namespace, subsystem, "working_set_bytes"),
"Maximum number of bytes in the working set of this process at any point in time. The working set is the set of memory pages touched recently by the threads in the process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
@@ -380,6 +395,24 @@ func (c *processCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metr
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.WorkingSetPrivate,
prometheus.GaugeValue,
process.WorkingSetPrivate,
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.WorkingSetPeak,
prometheus.GaugeValue,
process.WorkingSetPeak,
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.WorkingSet,
prometheus.GaugeValue,

collector/process_test.go (new file, 14 lines added)

@@ -0,0 +1,14 @@
package collector
import (
"testing"
)
func BenchmarkProcessCollector(b *testing.B) {
// Whitelist is not set in testing context (kingpin flags not parsed), causing the collector to skip all processes.
localProcessWhitelist := ".+"
processWhitelist = &localProcessWhitelist
// No context name required as collector source is WMI
benchmarkCollector(b, "", newProcessCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -60,7 +61,7 @@ func NewRemoteFx() (Collector, error) {
),
CurrentTCPBandwidth: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "net_current_tcp_bandwidth"),
"TCP Bandwidth detected in bytes per seccond.",
"TCP Bandwidth detected in bytes per second.",
[]string{"session_name"},
nil,
),


@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkRemoteFXCollector(b *testing.B) {
benchmarkCollector(b, "remote_fx", NewRemoteFx)
}


@@ -1,14 +1,17 @@
//go:build windows
// +build windows
package collector
import (
"strconv"
"fmt"
"strings"
"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"golang.org/x/sys/windows"
"golang.org/x/sys/windows/svc/mgr"
"gopkg.in/alecthomas/kingpin.v2"
)
@@ -21,6 +24,10 @@ var (
"collector.service.services-where",
"WQL 'where' clause to use in WMI metrics query. Limits the response to the services you specify and reduces the size of the response.",
).Default("").String()
useAPI = kingpin.Flag(
"collector.service.use-api",
"Use API calls to collect service data instead of WMI. Flag 'collector.service.services-where' won't be effective.",
).Default("false").Bool()
)
// A serviceCollector is a Prometheus collector for WMI Win32_Service metrics
@@ -40,6 +47,9 @@ func NewserviceCollector() (Collector, error) {
if *serviceWhereClause == "" {
log.Warn("No where-clause specified for service collector. This will generate a very large number of metrics!")
}
if *useAPI {
log.Warn("API collection is enabled.")
}
return &serviceCollector{
Information: prometheus.NewDesc(
@@ -73,9 +83,16 @@ func NewserviceCollector() (Collector, error) {
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *serviceCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting service metrics:", desc, err)
return err
if *useAPI {
if err := c.collectAPI(ch); err != nil {
log.Error("failed collecting API service metrics:", err)
return err
}
} else {
if err := c.collectWMI(ch); err != nil {
log.Error("failed collecting WMI service metrics:", err)
return err
}
}
return nil
}
@@ -103,6 +120,15 @@ var (
"paused",
"unknown",
}
apiStateValues = map[uint]string{
windows.SERVICE_CONTINUE_PENDING: "continue pending",
windows.SERVICE_PAUSE_PENDING: "pause pending",
windows.SERVICE_PAUSED: "paused",
windows.SERVICE_RUNNING: "running",
windows.SERVICE_START_PENDING: "start pending",
windows.SERVICE_STOP_PENDING: "stop pending",
windows.SERVICE_STOPPED: "stopped",
}
allStartModes = []string{
"boot",
"system",
@@ -110,6 +136,13 @@ var (
"manual",
"disabled",
}
apiStartModeValues = map[uint32]string{
windows.SERVICE_AUTO_START: "auto",
windows.SERVICE_BOOT_START: "boot",
windows.SERVICE_DEMAND_START: "manual",
windows.SERVICE_DISABLED: "disabled",
windows.SERVICE_SYSTEM_START: "system",
}
allStatuses = []string{
"ok",
"error",
@@ -126,14 +159,14 @@ var (
}
)
func (c *serviceCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
func (c *serviceCollector) collectWMI(ch chan<- prometheus.Metric) error {
var dst []Win32_Service
q := queryAllWhere(&dst, c.queryWhereClause)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
return err
}
for _, service := range dst {
pid := strconv.FormatUint(uint64(service.ProcessId), 10)
pid := fmt.Sprintf("%d", uint64(service.ProcessId))
runAs := ""
if service.StartName != nil {
@@ -191,5 +224,82 @@ func (c *serviceCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
)
}
}
return nil, nil
return nil
}
func (c *serviceCollector) collectAPI(ch chan<- prometheus.Metric) error {
svcmgrConnection, err := mgr.Connect()
if err != nil {
return err
}
defer svcmgrConnection.Disconnect() //nolint:errcheck
// List All Services from the Services Manager
serviceList, err := svcmgrConnection.ListServices()
if err != nil {
return err
}
// Iterate through the Services List
for _, service := range serviceList {
// Retrieve handle for each service
serviceHandle, err := svcmgrConnection.OpenService(service)
if err != nil {
continue
}
defer serviceHandle.Close()
// Get Service Configuration
serviceConfig, err := serviceHandle.Config()
if err != nil {
continue
}
// Get Service Current Status
serviceStatus, err := serviceHandle.Query()
if err != nil {
continue
}
pid := fmt.Sprintf("%d", uint64(serviceStatus.ProcessId))
ch <- prometheus.MustNewConstMetric(
c.Information,
prometheus.GaugeValue,
1.0,
strings.ToLower(service),
serviceConfig.DisplayName,
pid,
serviceConfig.ServiceStartName,
)
for _, state := range apiStateValues {
isCurrentState := 0.0
if state == apiStateValues[uint(serviceStatus.State)] {
isCurrentState = 1.0
}
ch <- prometheus.MustNewConstMetric(
c.State,
prometheus.GaugeValue,
isCurrentState,
strings.ToLower(service),
state,
)
}
for _, startMode := range apiStartModeValues {
isCurrentStartMode := 0.0
if startMode == apiStartModeValues[serviceConfig.StartType] {
isCurrentStartMode = 1.0
}
ch <- prometheus.MustNewConstMetric(
c.StartMode,
prometheus.GaugeValue,
isCurrentStartMode,
strings.ToLower(service),
startMode,
)
}
}
return nil
}


@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkServiceCollector(b *testing.B) {
benchmarkCollector(b, "service", NewserviceCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

collector/smtp_test.go (new file, 9 lines added)

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkSmtpCollector(b *testing.B) {
benchmarkCollector(b, "smtp", NewSMTPCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

collector/system_test.go (new file, 9 lines added)

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkSystemCollector(b *testing.B) {
benchmarkCollector(b, "system", NewSystemCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

collector/tcp_test.go (new file, 9 lines added)

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkTCPCollector(b *testing.B) {
benchmarkCollector(b, "tcp", NewTCPCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector


@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkTerminalServicesCollector(b *testing.B) {
benchmarkCollector(b, "terminal_services", NewTerminalServicesCollector)
}


@@ -11,6 +11,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build !notextfile
// +build !notextfile
package collector
@@ -21,6 +22,7 @@ import (
"io/ioutil"
"os"
"path/filepath"
"reflect"
"sort"
"strings"
"time"
@@ -37,7 +39,7 @@ var (
textFileDirectory = kingpin.Flag(
"collector.textfile.directory",
"Directory to read text files with metrics from.",
).Default("C:\\Program Files\\windows_exporter\\textfile_inputs").String()
).Default(getDefaultPath()).String()
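// getDefaultPath (defined at the end of this file) resolves the default to a
// textfile_inputs directory next to the windows_exporter binary.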
mtimeDesc = prometheus.NewDesc(
prometheus.BuildFQName(Namespace, "textfile", "mtime_seconds"),
@@ -65,6 +67,31 @@ func NewTextFileCollector() (Collector, error) {
}, nil
}
// Given a slice of metric families, determine if any two entries are duplicates.
// Duplicates will be detected where the metric name, labels and label values are identical.
func duplicateMetricEntry(metricFamilies []*dto.MetricFamily) bool {
uniqueMetrics := make(map[string]map[string]string)
for _, metricFamily := range metricFamilies {
metric_name := *metricFamily.Name
for _, metric := range metricFamily.Metric {
metric_labels := metric.GetLabel()
labels := make(map[string]string)
for _, label := range metric_labels {
labels[label.GetName()] = label.GetValue()
}
// Check if key is present before appending
_, mapContainsKey := uniqueMetrics[metric_name]
// Duplicate metric found with identical labels & label values
if mapContainsKey == true && reflect.DeepEqual(uniqueMetrics[metric_name], labels) {
return true
}
uniqueMetrics[metric_name] = labels
}
}
return false
}
func convertMetricFamily(metricFamily *dto.MetricFamily, ch chan<- prometheus.Metric) {
var valType prometheus.ValueType
var val float64
@@ -223,6 +250,10 @@ func (c *textFileCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Met
error = 1.0
}
// Create empty metricFamily slice here and append parsedFamilies to it inside the loop.
// Once loop is complete, raise error if any duplicates are present.
// This will ensure that duplicate metrics are correctly detected between multiple .prom files.
var metricFamilies = []*dto.MetricFamily{}
fileLoop:
for _, f := range files {
if !strings.HasSuffix(f.Name(), ".prom") {
@@ -271,7 +302,16 @@ fileLoop:
// a failure does not appear fresh.
mtimes[f.Name()] = f.ModTime()
for _, mf := range parsedFamilies {
for _, metricFamily := range parsedFamilies {
metricFamilies = append(metricFamilies, metricFamily)
}
}
if duplicateMetricEntry(metricFamilies) {
log.Errorf("Duplicate metrics detected in files")
error = 1.0
} else {
for _, mf := range metricFamilies {
convertMetricFamily(mf, ch)
}
}
@@ -297,3 +337,8 @@ func checkBOM(encoding utfbom.Encoding) error {
return fmt.Errorf(encoding.String())
}
func getDefaultPath() string {
execPath, _ := os.Executable()
return filepath.Join(filepath.Dir(execPath), "textfile_inputs")
}


@@ -5,6 +5,8 @@ import (
"io/ioutil"
"strings"
"testing"
dto "github.com/prometheus/client_model/go"
)
func TestCRFilter(t *testing.T) {
@@ -45,3 +47,108 @@ func TestCheckBOM(t *testing.T) {
}
}
}
func TestDuplicateMetricEntry(t *testing.T) {
metric_name := "windows_sometest"
metric_help := "This is a Test."
metric_type := dto.MetricType_GAUGE
gauge_value := 1.0
gauge := dto.Gauge{
Value: &gauge_value,
}
label1_name := "display_name"
label1_value := "foobar"
label1 := dto.LabelPair{
Name: &label1_name,
Value: &label1_value,
}
label2_name := "display_version"
label2_value := "13.4.0"
label2 := dto.LabelPair{
Name: &label2_name,
Value: &label2_value,
}
metric1 := dto.Metric{
Label: []*dto.LabelPair{&label1, &label2},
Gauge: &gauge,
}
metric2 := dto.Metric{
Label: []*dto.LabelPair{&label1, &label2},
Gauge: &gauge,
}
duplicate := dto.MetricFamily{
Name: &metric_name,
Help: &metric_help,
Type: &metric_type,
Metric: []*dto.Metric{&metric1, &metric2},
}
duplicateFamily := []*dto.MetricFamily{}
duplicateFamily = append(duplicateFamily, &duplicate)
// Ensure detection for duplicate metrics
if !duplicateMetricEntry(duplicateFamily) {
t.Errorf("Duplicate not found in duplicateFamily")
}
label3_name := "test"
label3_value := "1.0"
label3 := dto.LabelPair{
Name: &label3_name,
Value: &label3_value,
}
metric3 := dto.Metric{
Label: []*dto.LabelPair{&label1, &label2, &label3},
Gauge: &gauge,
}
differentLabels := dto.MetricFamily{
Name: &metric_name,
Help: &metric_help,
Type: &metric_type,
Metric: []*dto.Metric{&metric1, &metric3},
}
duplicateFamily = []*dto.MetricFamily{}
duplicateFamily = append(duplicateFamily, &differentLabels)
// Additional label on second metric should not be cause for duplicate detection
if duplicateMetricEntry(duplicateFamily) {
t.Errorf("Unexpected duplicate found in differentLabels")
}
label4_value := "2.0"
label4 := dto.LabelPair{
Name: &label3_name,
Value: &label4_value,
}
metric4 := dto.Metric{
Label: []*dto.LabelPair{&label1, &label2, &label4},
Gauge: &gauge,
}
differentValues := dto.MetricFamily{
Name: &metric_name,
Help: &metric_help,
Type: &metric_type,
Metric: []*dto.Metric{&metric3, &metric4},
}
duplicateFamily = []*dto.MetricFamily{}
duplicateFamily = append(duplicateFamily, &differentValues)
// Additional label with different values metric should not be cause for duplicate detection
if duplicateMetricEntry(duplicateFamily) {
t.Errorf("Unexpected duplicate found in differentValues")
}
}


@@ -1,6 +1,8 @@
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
@@ -75,6 +77,11 @@ func (c *thermalZoneCollector) collect(ch chan<- prometheus.Metric) (*prometheus
return nil, err
}
// ThermalZone collector has been known to 'successfully' return an empty result.
if len(dst) == 0 {
return nil, errors.New("Empty results set for collector")
}
for _, info := range dst {
//Divide by 10 and subtract 273.15 to convert decikelvin to celsius
ch <- prometheus.MustNewConstMetric(


@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkThermalZoneCollector(b *testing.B) {
benchmarkCollector(b, "thermalzone", NewThermalZoneCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

collector/time_test.go (new file, 9 lines added)

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkTimeCollector(b *testing.B) {
benchmarkCollector(b, "time", newTimeCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

collector/vmware_test.go (new file, 9 lines added)

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkVmwareCollector(b *testing.B) {
benchmarkCollector(b, "vmware", NewVmwareCollector)
}


@@ -1,10 +1,11 @@
# container collector
The container collector exposes metrics about containers running on system
The container collector exposes metrics about containers running on a Hyper-V system
|||
-|-
Metric name prefix | `container`
Data source | [hcsshim](https://github.com/Microsoft/hcsshim)
Enabled by default? | No
## Flags


@@ -27,11 +27,11 @@ These metrics are only exposed on Windows Server 2008R2 and later:
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_cpu_clock_interrupts_total` | Total number of received and serviced clock tick interrupts | `core`
`windows_cpu_idle_break_events_total` | Total number of time processor was woken from idle | `core`
`windows_cpu_parking_status` | Parking Status represents whether a processor is parked or not | `gauge`
`windows_cpu_core_frequency_mhz` | Core frequency in megahertz | `gauge`
`windows_cpu_processor_performance` | Processor Performance is the average performance of the processor while it is executing instructions, as a percentage of the nominal performance of the processor. On some processors, Processor Performance may exceed 100% | `gauge`
`windows_cpu_clock_interrupts_total` | Total number of received and serviced clock tick interrupts | counter | `core`
`windows_cpu_idle_break_events_total` | Total number of time processor was woken from idle | counter | `core`
`windows_cpu_parking_status` | Parking Status represents whether a processor is parked or not | gauge | `core`
`windows_cpu_core_frequency_mhz` | Core frequency in megahertz | gauge | `core`
`windows_cpu_processor_performance` | Processor Performance is the average performance of the processor while it is executing instructions, as a percentage of the nominal performance of the processor. On some processors, Processor Performance may exceed 100% | gauge | `core`
### Example metric
Show frequency of host CPU cores


@@ -44,7 +44,7 @@ Name | Description | Type | Labels
`windows_dfsr_folder_deleted_bytes_cleaned_up_total` | Total size (in bytes) of replicating deleted files and folders that were cleaned up from the Conflict and Deleted folder. | gauge | name
`windows_dfsr_folder_deleted_bytes_generated_total` | Total size (in bytes) of replicated deleted files and folders that were moved to the Conflict and Deleted folder after they were deleted from a replicated folder on a sending member. | counter | name
`windows_dfsr_folder_deleted_files_cleaned_up_total` | Number of files and folders that were cleaned up from the Conflict and Deleted folder. | counter | name
`windows_dfsr_folder_deleted_files_generated_total` | Number of deleted fils and folders that were moved to the Conflict and Deleted folder. | counter | name
`windows_dfsr_folder_deleted_files_generated_total` | Number of deleted files and folders that were moved to the Conflict and Deleted folder. | counter | name
`windows_dfsr_folder_file_installs_retried_total` | Total number of file installs that are being retried due to sharing violations or other errors encountered when installing the files. The DFS Replication service replicates staged files into a staging folder, uncompresses them in the Installing folder, and renames them to the target location. The second and third steps of this process are known as installing the file. | counter | name
`windows_dfsr_folder_file_installs_succeeded_total` | Total number of files that were successfully received from sending members and installed locally on this server. The DFS Replication service replicates staged files into a staging folder, uncompresses them in the Installing folder, and renames them to the target location. The second and third steps of this process are known as installing the file. | counter | name
`windows_dfsr_folder_files_received_total` | Total number of files received. | counter | name


@@ -36,7 +36,7 @@ Name | Description
`windows_exchange_transport_queues_internal_active_remote_delivery` | Internal Active Remote Delivery Queue length
`windows_exchange_transport_queues_active_mailbox_delivery` | Active Mailbox Delivery Queue length
`windows_exchange_transport_queues_retry_mailbox_delivery` | Retry Mailbox Delivery Queue length
`windows_exchange_transport_queues_unreachable` | Unreachable Queue lengt
`windows_exchange_transport_queues_unreachable` | Unreachable Queue length
`windows_exchange_transport_queues_external_largest_delivery` | External Largest Delivery Queue length
`windows_exchange_transport_queues_internal_largest_delivery` | Internal Largest Delivery Queue length
`windows_exchange_transport_queues_poison` | Poison Queue length


@@ -1,6 +1,6 @@
# Microsoft File Server Resource Manager (FSRM) Quotas collector
The fsrmquota collector exposes metrics about File Server Ressource Manager Quotas. Note that this collector has only been tested against Windows server 2012R2.
The fsrmquota collector exposes metrics about File Server Resource Manager Quotas. Note that this collector has only been tested against Windows server 2012R2.
Other FSRM versions may work but are not tested.
|||
@@ -48,5 +48,5 @@ rate(windows_fsrmquota_usage_bytes)[1d]
severity: "high"
annotations:
summary: "High Quotas Usage"
description: "High use of File Ressource.\n Quotas: {{ $labels.path }}\n Current use : {{ $value }}"
description: "High use of File Resource.\n Quotas: {{ $labels.path }}\n Current use : {{ $value }}"
```


@@ -5,7 +5,7 @@ The iis collector exposes metrics about the IIS server
|||
-|-
Metric name prefix | `iis`
Classes | `Win32_PerfRawData_W3SVC_WebService`<br/>`Win32_PerfRawData_APPPOOLCountersProvider_APPPOOLWAS`<br/>`Win32_PerfRawData_W3SVCW3WPCounterProvider_W3SVCW3WP`<br/>`Win32_PerfRawData_W3SVC_WebServiceCache`
Data source | Perflib
Enabled by default? | No
## Flags


@@ -30,11 +30,15 @@ Name | Description | Type | Labels
`writes_total` | Rate of write operations on the disk | counter | `volume`
`read_seconds_total` | Seconds the disk was busy servicing read requests | counter | `volume`
`write_seconds_total` | Seconds the disk was busy servicing write requests | counter | `volume`
`free_bytes` | Unused space of the disk in bytes | gauge | `volume`
`size_bytes` | Total size of the disk in bytes | gauge | `volume`
`free_bytes` | Unused space of the disk in bytes (not real time, updates every 10-15 min) | gauge | `volume`
`size_bytes` | Total size of the disk in bytes (not real time, updates every 10-15 min) | gauge | `volume`
`idle_seconds_total` | Seconds the disk was idle (not servicing read/write requests) | counter | `volume`
`split_ios_total` | Number of I/Os to the disk split into multiple I/Os | counter | `volume`
### Warning about size metrics
The `free_bytes` and `size_bytes` metrics are not updated in real time and might be delayed by 10-15 minutes.
This is the same behavior as the Windows performance counters.
### Example metric
Query the rate of write operations to a disk
```


@@ -27,7 +27,7 @@ windows_logon_logon_type{status="interactive"}
## Useful queries
Query the total number of local and remote (i.e. Terminal Services) interactive sessions.
```
windows_logon_logon_type{status=~"interactive|remoteinteractive"}
windows_logon_logon_type{status=~"interactive|remote_interactive"}
```
## Alerting examples


@@ -7,7 +7,7 @@ The memory collector exposes metrics about system memory usage
Metric name prefix | `memory`
Data source | Perflib
Classes | `Win32_PerfRawData_PerfOS_Memory`
Enabled by default? | Yes
Enabled by default? | No
## Flags


@@ -5,14 +5,14 @@ The mssql collector exposes metrics about the MSSQL server
|||
-|-
Metric name prefix | `mssql`
Classes | [`Win32_PerfRawData_MSSQLSERVER_SQLServerAccessMethods`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-access-methods-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerAvailabilityReplica`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-availability-replica)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerBufferManager`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerDatabaseReplica`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-database-replica)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerDatabases`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-databases-object?view=sql-server-2017)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerGeneralStatistics`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-general-statistics-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerLocks`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-locks-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerMemoryManager`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-memory-manager-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerSQLStatistics`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-sql-statistics-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerSQLErrors`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-sql-errors-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerTransactions`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-transactions-object)
Classes | [`Win32_PerfRawData_MSSQLSERVER_SQLServerAccessMethods`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-access-methods-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerAvailabilityReplica`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-availability-replica)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerBufferManager`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerDatabaseReplica`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-database-replica)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerDatabases`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-databases-object?view=sql-server-2017)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerGeneralStatistics`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-general-statistics-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerLocks`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-locks-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerMemoryManager`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-memory-manager-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerSQLStatistics`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-sql-statistics-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerSQLErrors`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-sql-errors-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerTransactions`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-transactions-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerWaitStatistics`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-wait-statistics-object)
Enabled by default? | No
## Flags
### `--collectors.mssql.classes-enabled`
Comma-separated list of MSSQL WMI classes to use. Supported values are `accessmethods`, `availreplica`, `bufman`, `databases`, `dbreplica`, `genstats`, `locks`, `memmgr`, `sqlstats`, `sqlerrors` and `transactions`.
Comma-separated list of MSSQL WMI classes to use. Supported values are `accessmethods`, `availreplica`, `bufman`, `databases`, `dbreplica`, `genstats`, `locks`, `memmgr`, `sqlstats`, `sqlerrors`, `transactions`, and `waitstats`.
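For example, one could restrict collection to the buffer manager and wait statistics counters with `--collectors.mssql.classes-enabled="bufman,waitstats"`.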
### `--collectors.mssql.class-print`
@@ -127,7 +127,7 @@ Name | Description | Type | Labels
`windows_mssql_databases_bulk_copy_rows` | Number of rows bulk copied per second | counter | `mssql_instance`, `database`
`windows_mssql_databases_bulk_copy_bytes` | Amount of data bulk copied (in kilobytes) per second | counter | `mssql_instance`, `database`
`windows_mssql_databases_commit_table_entries` | The size (row count) of the in-memory portion of the commit table for the database | counter | `mssql_instance`, `database`
`windows_mssql_databases_data_files_size_bytes` | Cumulative size (in kilobytes) of all the data files in the database including any automatic growth. Monitoring this counter is useful, for example, for determining the correct size of tempdb | counter | `mssql_instance`, `database`
`windows_mssql_databases_data_files_size_bytes` | Cumulative size (in kilobytes) of all the data files in the database including any automatic growth. Monitoring this counter is useful, for example, for determining the correct size of tempdb | gauge | `mssql_instance`, `database`
`windows_mssql_databases_dbcc_logical_scan_bytes` | Number of logical read scan bytes per second for database console commands (DBCC) | counter | `mssql_instance`, `database`
`windows_mssql_databases_group_commit_stall_seconds` | Group stall time (microseconds) per second | counter | `mssql_instance`, `database`
`windows_mssql_databases_log_flushed_bytes` | Total number of log bytes flushed | counter | `mssql_instance`, `database`
@@ -244,6 +244,18 @@ Name | Description | Type | Labels
`windows_mssql_transactions_version_store_units` | The number of active allocation units in the snapshot isolation version store in tempdb | counter | `mssql_instance`
`windows_mssql_transactions_version_store_creation_units` | The number of allocation units that have been created in the snapshot isolation store since the instance of the Database Engine was started | counter | `mssql_instance`
`windows_mssql_transactions_version_store_truncation_units` | The number of allocation units that have been removed from the snapshot isolation store since the instance of the Database Engine was started | counter | `mssql_instance`
`windows_mssql_waitstats_lock_waits` | Statistics for processes waiting on a lock | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_memory_grant_queue_waits` | Statistics for processes waiting for memory grant to become available | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_thread_safe_memory_objects_waits` | Statistics for processes waiting on thread-safe memory allocators | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_log_write_waits` | Statistics for processes waiting for log buffer to be written | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_log_buffer_waits` | Statistics for processes waiting for log buffer to be available | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_network_io_waits` | Statistics relevant to wait on network I/O | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_page_io_latch_waits` | Statistics relevant to page I/O latches | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_page_latch_waits` | Statistics relevant to page latches, not including I/O latches | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_nonpage_latch_waits` | Statistics relevant to non-page latches | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_wait_for_the_worker_waits` | Statistics relevant to processes waiting for worker to become available | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_workspace_synchronization_waits` | Statistics relevant to processes synchronizing access to workspace | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_transaction_ownership_waits` | Statistics relevant to processes synchronizing access to transaction | gauge | `mssql_instance`, `item`
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_


@@ -30,11 +30,11 @@ Name | Description | Type | Labels
`windows_net_packets_outbound_errors_total` | Total packets that could not be transmitted due to errors | counter | `nic`
`windows_net_packets_received_discarded_total` | Total inbound packets that were chosen to be discarded even though no errors had been detected to prevent delivery | counter | `nic`
`windows_net_packets_received_errors_total` | Total packets that could not be received due to errors | counter | `nic`
`windows_net_packets_received_total_total` | Total packets received by interface | counter | `nic`
`windows_net_packets_received_total` | Total packets received by interface | counter | `nic`
`windows_net_packets_received_unknown_total` | Total packets received by interface that were discarded because of an unknown or unsupported protocol | counter | `nic`
`windows_net_packets_total` | Total packets received and transmitted by interface | counter | `nic`
`windows_net_packets_sent_total` | Total packets transmitted by interface | counter | `nic`
`windows_net_current_bandwidth` | Estimate of the interface's current bandwidth in bits per second (bps) | gauge | `nic`
`windows_net_current_bandwidth_bytes` | Estimate of the interface's current bandwidth in bytes per second | gauge | `nic`
### Example metric
Query the rate of transmitted network traffic
@@ -45,14 +45,14 @@ rate(windows_net_bytes_sent_total{instance="localhost"}[2m])
## Useful queries
Get total utilisation of network interface as a percentage
```
rate(windows_net_bytes_total{instance="localhost", nic="Microsoft_Hyper_V_Network_Adapter__1"}[2m]) * 8 / windows_net_current_bandwidth{instance="locahost", nic="Microsoft_Hyper_V_Network_Adapter__1"} * 100
rate(windows_net_bytes_total{instance="localhost", nic="Microsoft_Hyper_V_Network_Adapter__1"}[2m]) / windows_net_current_bandwidth_bytes{instance="localhost", nic="Microsoft_Hyper_V_Network_Adapter__1"} * 100
```
## Alerting examples
**prometheus.rules**
```yaml
- alert: NetInterfaceUsage
expr: rate(windows_net_bytes_total[2m]) * 8 / windows_net_current_bandwidth * 100 > 95
expr: rate(windows_net_bytes_total[2m]) / windows_net_current_bandwidth_bytes * 100 > 95
for: 10m
labels:
severity: high


@@ -17,7 +17,7 @@ None
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_os_info` | Contains full product name & version in labels | gauge | `product`, `version`
`windows_os_paging_limit_bytes` | Total number of bytes that can be sotred in the operating system paging files. 0 (zero) indicates that there are no paging files | gauge | None
`windows_os_paging_limit_bytes` | Total number of bytes that can be stored in the operating system paging files. 0 (zero) indicates that there are no paging files | gauge | None
`windows_os_paging_free_bytes` | Number of bytes that can be mapped into the operating system paging files without causing any other pages to be swapped out | gauge | None
`windows_os_physical_memory_free_bytes` | Bytes of physical memory currently unused and available | gauge | None
`windows_os_time` | Current time as reported by the operating system, in [Unix time](https://en.wikipedia.org/wiki/Unix_time). See [time.Unix()](https://golang.org/pkg/time/#Unix) for details | gauge | None


@@ -41,19 +41,21 @@ This will match all processes named `firefox`, `FIREFOX` or `chrome` .
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_process_start_time` | _Not yet documented_ | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_cpu_time_total` | _Not yet documented_ | counter | `process`, `process_id`, `creating_process_id`
`windows_process_handle_count` | _Not yet documented_ | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_io_bytes_total` | _Not yet documented_ | counter | `process`, `process_id`, `creating_process_id`
`windows_process_io_operations_total` | _Not yet documented_ | counter | `process`, `process_id`, `creating_process_id`
`windows_process_page_faults_total` | _Not yet documented_ | counter | `process`, `process_id`, `creating_process_id`
`windows_process_page_file_bytes` | _Not yet documented_ | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_pool_bytes` | _Not yet documented_ | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_priority_base` | _Not yet documented_ | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_private_bytes` | _Not yet documented_ | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_thread_count` | _Not yet documented_ | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_virtual_bytes` | _Not yet documented_ | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_working_set` | _Not yet documented_ | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_start_time` | Time of process start | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_cpu_time_total` | Returns elapsed time that all of the threads of this process used the processor to execute instructions by mode (privileged, user). An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. | counter | `process`, `process_id`, `creating_process_id`
`windows_process_handle_count` | Total number of handles the process has open. This number is the sum of the handles currently open by each thread in the process. | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_io_bytes_total` | Bytes issued to I/O operations in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations. | counter | `process`, `process_id`, `creating_process_id`
`windows_process_io_operations_total` | I/O operations issued in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations. | counter | `process`, `process_id`, `creating_process_id`
`windows_process_page_faults_total` | Page faults by the threads executing in this process. A page fault occurs when a thread refers to a virtual memory page that is not in its working set in main memory. This can cause the page not to be fetched from disk if it is on the standby list and hence already in main memory, or if it is in use by another process with which the page is shared. | counter | `process`, `process_id`, `creating_process_id`
`windows_process_page_file_bytes` | Current number of bytes this process has used in the paging file(s). Paging files are used to store pages of memory used by the process that are not contained in other files. Paging files are shared by all processes, and lack of space in paging files can prevent other processes from allocating memory. | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_pool_bytes` | Pool Bytes is the last observed number of bytes in the paged or nonpaged pool. The nonpaged pool is an area of system memory (physical memory used by the operating system) for objects that cannot be written to disk, but must remain in physical memory as long as they are allocated. The paged pool is an area of system memory (physical memory used by the operating system) for objects that can be written to disk when they are not being used. Nonpaged pool bytes is calculated differently than paged pool bytes, so it might not equal the total of paged pool bytes. | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_priority_base` | Current base priority of this process. Threads within a process can raise and lower their own base priority relative to the process base priority of the process. | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_private_bytes` | Current number of bytes this process has allocated that cannot be shared with other processes. | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_thread_count` | Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread. | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_virtual_bytes` | Current size, in bytes, of the virtual address space that the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite and, by using too much, the process can limit its ability to load libraries. | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_working_set_private_bytes` | Size of the working set, in bytes, that is in use for this process only and not shared nor shareable by other processes. | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_working_set_peak_bytes` | Maximum size, in bytes, of the Working Set of this process at any point in time. The Working Set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the Working Set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from Working Sets. If they are needed they will then be soft-faulted back into the Working Set before they leave main memory. | gauge | `process`, `process_id`, `creating_process_id`
`windows_process_working_set_bytes` | Maximum number of bytes in the working set of this process at any point in time. The working set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from working sets. If they are needed, they are then soft-faulted back into the working set before they leave main memory. | gauge | `process`, `process_id`, `creating_process_id`
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_


@@ -16,6 +16,12 @@ A WMI filter on which services to include. Recommended to keep down number of re
Example: `--collector.service.services-where="Name='windows_exporter'"`
Example config win_exporter.yml for multiple services: `services-where: Name='SQLServer' OR Name='Couchbase' OR Name='Spooler' OR Name='ActiveMQ'`
### `--collector.service.use-api`
Uses Windows API calls instead of WMI to collect service data, which can improve performance. **Note:** the `--collector.service.services-where` flag has no effect in this mode.
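As a boolean flag it can be enabled without a value, for example: `--collector.service.use-api`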
## Metrics
Name | Description | Type | Labels
@@ -48,7 +54,7 @@ A service can have the following start modes:
- `manual`
- `disabled`
### Status
### Status (not available in API mode)
A service can have any of the following statuses:
- `ok`


@@ -24,7 +24,7 @@ If given, a virtual SMTP server needs to *not* match the blacklist regexp in ord
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_smtp_badmailed_messages_bad_pickup_file_total` | Total number of mailformed pickup messages sent to badmail | counter | `server`
`windows_smtp_badmailed_messages_bad_pickup_file_total` | Total number of malformed pickup messages sent to badmail | counter | `server`
`windows_smtp_badmailed_messages_general_failure_total` | Total number of messages sent to badmail for reasons not associated with a specific counter | counter | `server`
`windows_smtp_badmailed_messages_hop_count_exceeded_total` | Total number of messages sent to badmail because they had exceeded the maximum hop count | counter | `server`
`windows_smtp_badmailed_messages_ndr_of_dns_total` | Total number of Delivery Status Notifications sent to badmail because they could not be delivered | counter | `server`

Some files were not shown because too many files have changed in this diff.