Compare commits


141 Commits

Author SHA1 Message Date
Ben Reedy
e07b2053af Merge pull request #937 from breed808/iis_flags
Add missing whitelist/blacklist checks for IIS
2022-02-02 18:18:13 +10:00
Ben Reedy
7d3c0d3b76 Add missing whitelist/blacklist checks for IIS
Checks were removed in 82f17fd despite flags still being present.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2022-02-02 08:56:50 +10:00
Ben Reedy
27b2ca0b76 Merge pull request #921 from aymericDD/fix/903
fix: iis metrics greater than IIS v7
2022-01-31 18:53:59 +10:00
Aymeric Daurelle
803a0a9a70 fix: iis metrics greater than IIS v7
The IIS >= 8 metrics were updated twice per application, causing a fatal error. This fix
updates the metrics once per application.

Signed-off-by: Aymeric Daurelle <aymeric.daurelle@cdiscount.com>
2022-01-31 09:24:52 +01:00
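A minimal sketch of the idea behind this fix, with purely illustrative names (the exporter's real IIS collector types are not reproduced here): skip applications that have already been processed so each one is updated exactly once.

```
package iis

// seenOnce filters a perflib-style result set so that each application name
// appears only once, mirroring the "update metrics once per application" fix
// described above. Names here are illustrative assumptions.
func seenOnce(appNames []string) []string {
	seen := make(map[string]bool, len(appNames))
	unique := make([]string, 0, len(appNames))
	for _, name := range appNames {
		if seen[name] {
			continue // metrics for this application were already updated
		}
		seen[name] = true
		unique = append(unique, name)
	}
	return unique
}
```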
Ben Reedy
4891acba2d Merge pull request #934 from prometheus-community/dependabot/go_modules/github.com/prometheus/client_golang-1.12.1
Bump github.com/prometheus/client_golang from 1.11.0 to 1.12.1
2022-01-30 15:46:21 +10:00
Ben Reedy
fa51270218 Add new client_golang metrics to e2e output
Introduced in github.com/prometheus/client_golang v1.12.0

Signed-off-by: Ben Reedy <breed808@breed808.com>
2022-01-30 15:31:53 +10:00
dependabot[bot]
a68e6af15a Bump github.com/prometheus/client_golang from 1.11.0 to 1.12.1
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.11.0 to 1.12.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.11.0...v1.12.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-01-30 04:49:07 +00:00
Ben Reedy
7ad9b6d74a Merge pull request #927 from prometheus-community/dependabot/go_modules/github.com/Microsoft/hcsshim-0.9.2
Bump github.com/Microsoft/hcsshim from 0.9.1 to 0.9.2
2022-01-30 14:48:21 +10:00
dependabot[bot]
9acd5e695e Bump github.com/Microsoft/hcsshim from 0.9.1 to 0.9.2
Bumps [github.com/Microsoft/hcsshim](https://github.com/Microsoft/hcsshim) from 0.9.1 to 0.9.2.
- [Release notes](https://github.com/Microsoft/hcsshim/releases)
- [Commits](https://github.com/Microsoft/hcsshim/compare/v0.9.1...v0.9.2)

---
updated-dependencies:
- dependency-name: github.com/Microsoft/hcsshim
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-01-30 04:38:23 +00:00
Ben Reedy
277f141587 Merge pull request #924 from breed808/e2e_fix
Don't upgrade dependencies when installing tools
2022-01-30 12:12:45 +10:00
Ben Reedy
2a5c51a236 Don't upgrade dependencies when installing tools
Dependency upgrade has resulted in github.com/prometheus/client_golang
being upgraded from v1.11.0 to v1.12.0 prior to end-to-end test.
This new release introduces new `go_*` metrics, causing the test to
fail on the unexpected output.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2022-01-23 09:19:06 +10:00
Ben Reedy
ce205d4c4d Merge pull request #909 from akrauza/more-adfs-metrics-again
Add more ADFS metrics from `AD FS` CounterSet
2022-01-11 09:10:28 +10:00
Austin D. Krauza
2ed0ae837c Add more ADFS metrics from AD FS CounterSet
Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>

Reformat adfsCollector struct

Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>

Add metrics to ADFS collector documentation

Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>

Update ADFS collector with useful queries and links to documentation

Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>

Remove bad table formatter

Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>

Reformat ADFS collector using gofmt

Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>

Fix ADFS Config and Artifact DB Query time metrics

Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>

Update ADFS collector for Config and Artifact DB Query time from gauge to counter

Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>

Update ADFS collector for Config and Artifact DB Query time from gauge to counter

Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>
2022-01-10 17:27:34 -05:00
Ben Reedy
a56ec9166b Merge pull request #912 from breed808/installer_port
Port should default to 9182 if not defined
2022-01-06 18:18:19 +10:00
Calle Pettersson
e03432a22d Merge pull request #901 from mjtrangoni/fix-some-promtool-warnings
Fix some promtool warnings
2022-01-06 09:12:55 +01:00
Ben Reedy
be004b8423 Port should default to 9182 if not defined
Resolves #911 which was introduced by 45e9357a.

This is because the exporter only used the default port if neither LISTEN_ADDR
nor LISTEN_PORT was defined.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2022-01-06 07:57:31 +10:00
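The intended defaulting behaviour, sketched in Go for illustration only (the actual change was made in the MSI installer's service definition, not in this form): fall back to port 9182 whenever LISTEN_PORT is unset, even when LISTEN_ADDR is set.

```
package main

import (
	"fmt"
	"os"
)

// listenAddress sketches the intended behaviour: default the port to 9182
// whenever LISTEN_PORT is missing, independently of LISTEN_ADDR.
func listenAddress() string {
	addr := os.Getenv("LISTEN_ADDR")
	port := os.Getenv("LISTEN_PORT")
	if port == "" {
		port = "9182"
	}
	return fmt.Sprintf("%s:%s", addr, port)
}

func main() {
	fmt.Println(listenAddress())
}
```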
Ben Reedy
e08a0411d6 Merge pull request #908 from akrauza/ADCS
Update repository readme with ADCS documentation link
2022-01-05 07:33:25 +10:00
Austin D. Krauza
3d7894049f Update repository readme with ADCS documentation link
Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>
2022-01-04 10:51:49 -05:00
Ben Reedy
de664d4b93 Merge pull request #905 from mjtrangoni/fix-counter-promtool-warnings
Fix counter metrics should have "_total" suffix issue
2022-01-03 08:06:01 +10:00
Ben Reedy
78e026b6ee Merge pull request #906 from mjtrangoni/fix-badge-gh-actions
README.md: Replace the AppVeyor badge with the GH Actions one
2022-01-03 08:04:26 +10:00
Ben Reedy
9eba8dd024 Merge pull request #896 from mjtrangoni/add-codespell
codespell: add GH Action for checking spelling issues
2022-01-03 08:03:05 +10:00
Mario Trangoni
01100d3e6e README.md: Replace the AppVeyor badge with the GH Actions one
Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2022-01-02 18:59:32 +01:00
Mario Trangoni
0f1eb4a936 Fix counter metrics should have "_total" suffix issue
Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2022-01-02 18:47:17 +01:00
Mario Trangoni
a8eefae123 codespell: add GH Action for checking spelling issues
After fixing all spelling issues in #892, this will prevent us from
adding new ones.

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2022-01-02 18:21:43 +01:00
Ben Reedy
746158d354 Merge pull request #895 from akrauza/ADCS
Add Collector for Active Directory Certificate Services (ADCS)
2022-01-02 19:43:47 +10:00
Ben Reedy
d9f4264fc4 Merge pull request #898 from breed808/github_actions
Migrate CI/CD to Github Actions
2022-01-02 19:15:40 +10:00
Austin D. Krauza
a89b53779d Initial commit for ADCS collector
Signed-off-by: Austin D. Krauza <krauza.austin@gmail.com>
2022-01-02 01:24:11 -05:00
Ben Reedy
27ceeecff3 Merge pull request #902 from breed808/textfile
Move textfile mtime metric from loop
2022-01-02 08:32:08 +10:00
Ben Reedy
1ba5835af6 Move textfile mtime metric from loop
The loop was erroneously creating duplicate `windows_textfile_mtime_seconds`
metrics, causing the exporter to return an HTTP 500 error and no metrics
from any collector.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2022-01-01 11:48:19 +10:00
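A hedged sketch of the fix: emit the windows_textfile_mtime_seconds sample once per file, outside the per-metric loop. Names are illustrative, not the exporter's exact code.

```
package textfile

import "github.com/prometheus/client_golang/prometheus"

var mtimeDesc = prometheus.NewDesc(
	"windows_textfile_mtime_seconds",
	"Unixtime mtime of textfiles successfully read.",
	[]string{"file"},
	nil,
)

// exportMTimes sends exactly one mtime sample per parsed file, so the metric
// can no longer be duplicated by the loop over each file's metric families.
func exportMTimes(mtimes map[string]float64, ch chan<- prometheus.Metric) {
	for file, mtime := range mtimes {
		ch <- prometheus.MustNewConstMetric(mtimeDesc, prometheus.GaugeValue, mtime, file)
	}
}
```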
Ben Reedy
0db956aa4d Migrate CI/CD to Github Actions
Signed-off-by: Ben Reedy <breed808@breed808.com>
2022-01-01 10:04:33 +10:00
Mario Trangoni
9d1628a329 promtool: Fix windows_time_ntp_client_time_source_count
Related to #659, this is a breaking change!

Fixes
```
windows_time_ntp_client_time_source_count non-histogram and non-summary metrics should not have "_count" suffix
```
for the time collector.

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-31 13:51:07 +01:00
Mario Trangoni
fc33fa320b promtool: Fix *_handle_count and *_thread_count
Related to #659, this is a breaking change!

Fixes

```
windows_process_handle_count non-histogram and non-summary metrics should not have "_count" suffix
windows_process_thread_count non-histogram and non-summary metrics should not have "_count" suffix
```

for process and terminal_services collectors.

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-31 13:51:07 +01:00
Ben Reedy
b6f88cbbdd Use pwsh to run e2e-test target
PowerShell >= 5 is required for the `New-Guid` command in the e2e script.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-12-30 20:49:46 +10:00
Calle Pettersson
4b9b9e97cb Merge pull request #893 from prometheus-community/new-appveyor-token
Update CI token
2021-12-28 22:00:26 +01:00
Calle Pettersson
3ebe0e937e Update CI token
Signed-off-by: Calle Pettersson <calle@cape.nu>
2021-12-28 21:44:22 +01:00
Ben Reedy
4d771d2bce Merge pull request #892 from mjtrangoni/fix-golanci-lint
Fix and update golangci-lint reported issues
2021-12-25 10:34:02 +10:00
Mario Trangoni
919f90a571 golangci-lint: Acknowledge all remaining checks and update golangci-lint to v1.43.0
Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-24 11:19:05 +01:00
Ben Reedy
c7d07a37ea Merge pull request #883 from breed808/msi_listen_port
Remove explicit LISTEN_PORT from MSI installer
2021-12-19 08:30:21 +10:00
Ben Reedy
87c21bfa50 Merge pull request #891 from breed808/update_perflib
Update Perflib dependency
2021-12-19 08:27:14 +10:00
Mario Trangoni
df4f6b206b revive: make type exportable and remove unnecessary log word
See,
```
log/gokit_adapter.go:9:26: unexported-return: exported func NewToolkitAdapter returns unexported type *log.logAdapter, which can be annoying to use (revive)
func NewToolkitAdapter() *logAdapter {
                         ^
```

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-18 19:54:31 +01:00
Mario Trangoni
9e3c585a28 revive: Remove unnecessary = 0 from var declaration.
See,
```
$ GOOS=windows GOARCH=amd64 golangci-lint run  ./... 2>1 | grep var-declaration
collector/os.go:205:22: var-declaration: should drop = 0 from declaration of var fsipf; it is the zero value (revive)
collector/os.go:226:23: var-declaration: should drop = 0 from declaration of var pfbRaw; it is the zero value (revive)
```

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-18 19:30:47 +01:00
Mario Trangoni
e4a43c539b codespell: Fix word spelling issues
See,
```
$ codespell --skip=".git,./vendor" --ignore-words-list=calle
./exporter.go:262: overriden ==> overridden
./collector/dfsr.go:132: receieved ==> received
./collector/dns.go:140: reponses ==> responses
./collector/exchange.go:238: occational ==> occasional
./collector/mssql.go:1961: shoud ==> should
./collector/process.go:137: sharable ==> shareable
./collector/remote_fx.go:64: seccond ==> second
./docs/collector.dfsr.md:47: fils ==> fills, files, file
./docs/collector.exchange.md:39: lengt ==> length
./docs/collector.fsrmquota.md:3: Ressource ==> Resource
./docs/collector.fsrmquota.md:51: Ressource ==> Resource
./docs/collector.os.md:20: sotred ==> sorted, stored
./docs/collector.process.md:56: sharable ==> shareable
./docs/collector.smtp.md:27: mailformed ==> malformed
```

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-18 19:19:06 +01:00
Mario Trangoni
03e15a0f80 unconvert: Remove unnecessary conversion
See,
```
collector/os.go:306:10: unnecessary conversion (unconvert)
		float64(fsipf),
		       ^
```

Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-18 19:05:31 +01:00
Mario Trangoni
b98a956d51 gofmt: Fix File is not gofmt-ed with -s for go1.17
Signed-off-by: Mario Trangoni <mjtrangoni@gmail.com>
2021-12-18 19:01:29 +01:00
Calle Pettersson
524bfde5a3 Merge pull request #887 from SouenMazouin/fix/request-error-total-iis
fix: add missing metrics for IIS version >= 8
2021-12-18 15:28:17 +01:00
Ben Reedy
963cee0a13 Update Perflib dependency
Dependabot has likely passed over this as there has been no tagged
release.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-12-18 19:31:08 +10:00
Ben Reedy
45e9357ad9 Remove explicit LISTEN_PORT from MSI installer
Explicitly setting the listening port in the service definition causes the port
setting in the configuration file to be ignored.

The exporter already defines a default port (9182) if one is not specified,
so no impact from this change is anticipated.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-12-18 18:34:47 +10:00
Souen Mazouin
6120ea9be1 fix: add missing metrics for IIS version >= 8
Allows the following metrics to be exposed again; they had disappeared after the migration to Perflib:
- worker_request_errors_total
- worker_current_websocket_requests
- worker_websocket_connection_accepted_total
- worker_websocket_connection_rejected_total

Signed-off-by: Souen Mazouin <souen.mazouin@cdiscount.com>
2021-12-14 17:44:08 +01:00
Ben Reedy
376060b053 Merge pull request #884 from prometheus-community/dependabot/go_modules/github.com/prometheus/exporter-toolkit-0.7.1
Bump github.com/prometheus/exporter-toolkit from 0.7.0 to 0.7.1
2021-12-14 10:45:31 +10:00
dependabot[bot]
e04c4aab29 Bump github.com/prometheus/exporter-toolkit from 0.7.0 to 0.7.1
Bumps [github.com/prometheus/exporter-toolkit](https://github.com/prometheus/exporter-toolkit) from 0.7.0 to 0.7.1.
- [Release notes](https://github.com/prometheus/exporter-toolkit/releases)
- [Changelog](https://github.com/prometheus/exporter-toolkit/blob/master/CHANGELOG.md)
- [Commits](https://github.com/prometheus/exporter-toolkit/compare/v0.7.0...v0.7.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/exporter-toolkit
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-12-06 11:35:00 +00:00
Ben Reedy
479e6b1381 Merge pull request #882 from geraudster/fix/textfile_default_path
Fix default path for textfile collector
2021-12-02 13:13:13 +10:00
Géraud Duge de bernonville
f6f7dc96e9 Get EXE directory
Signed-off-by: Géraud Duge de bernonville <geraud.dugedebernonville@ext.cdiscount.com>
2021-12-01 10:41:46 +01:00
Ben Reedy
f84f54afda Merge pull request #875 from prometheus-community/dependabot/go_modules/github.com/Microsoft/hcsshim-0.9.1
Bump github.com/Microsoft/hcsshim from 0.8.6 to 0.9.1
2021-11-15 08:27:59 +10:00
dependabot[bot]
e22ef6e3cc Bump github.com/Microsoft/hcsshim from 0.8.6 to 0.9.1
Bumps [github.com/Microsoft/hcsshim](https://github.com/Microsoft/hcsshim) from 0.8.6 to 0.9.1.
- [Release notes](https://github.com/Microsoft/hcsshim/releases)
- [Commits](https://github.com/Microsoft/hcsshim/compare/v0.8.6...v0.9.1)

---
updated-dependencies:
- dependency-name: github.com/Microsoft/hcsshim
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-14 21:57:35 +00:00
Ben Reedy
02b69afe8b Merge pull request #874 from prometheus-community/dependabot/go_modules/github.com/sirupsen/logrus-1.8.1
Bump github.com/sirupsen/logrus from 1.6.0 to 1.8.1
2021-11-15 07:42:52 +10:00
dependabot[bot]
b7a0a09e58 Bump github.com/sirupsen/logrus from 1.6.0 to 1.8.1
Bumps [github.com/sirupsen/logrus](https://github.com/sirupsen/logrus) from 1.6.0 to 1.8.1.
- [Release notes](https://github.com/sirupsen/logrus/releases)
- [Changelog](https://github.com/sirupsen/logrus/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sirupsen/logrus/compare/v1.6.0...v1.8.1)

---
updated-dependencies:
- dependency-name: github.com/sirupsen/logrus
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-14 21:29:14 +00:00
Ben Reedy
6105792f29 Merge pull request #876 from prometheus-community/dependabot/go_modules/github.com/dimchansky/utfbom-1.1.1
Bump github.com/dimchansky/utfbom from 1.1.0 to 1.1.1
2021-11-15 07:23:25 +10:00
Ben Reedy
1fbc626ee2 Merge pull request #873 from prometheus-community/dependabot/go_modules/github.com/prometheus/common-0.32.1
Bump github.com/prometheus/common from 0.32.0 to 0.32.1
2021-11-15 07:21:13 +10:00
dependabot[bot]
ca07abc1cd Bump github.com/dimchansky/utfbom from 1.1.0 to 1.1.1
Bumps [github.com/dimchansky/utfbom](https://github.com/dimchansky/utfbom) from 1.1.0 to 1.1.1.
- [Release notes](https://github.com/dimchansky/utfbom/releases)
- [Commits](https://github.com/dimchansky/utfbom/compare/v1.1.0...v1.1.1)

---
updated-dependencies:
- dependency-name: github.com/dimchansky/utfbom
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-14 11:50:42 +00:00
dependabot[bot]
60583c3366 Bump github.com/prometheus/common from 0.32.0 to 0.32.1
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.32.0 to 0.32.1.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.32.0...v0.32.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-14 11:42:09 +00:00
Ben Reedy
a7dcf5896c Merge pull request #871 from breed808/dependabot
Add Dependabot dependency tracking
2021-11-14 21:38:36 +10:00
Ben Reedy
438cb87fc7 Add Dependabot dependency tracking
Bot will submit PRs when new dependency versions are detected,
preventing dependencies from becoming out-of-date.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-11-14 21:34:26 +10:00
Ben Reedy
f8b6260ab5 Merge pull request #862 from breed808/dependencies
Update dependencies
2021-11-14 11:11:43 +10:00
Calle Pettersson
d2b3f0f94b Merge pull request #869 from rnjstjdgh/master
Update collector.net.md
2021-11-11 09:14:54 +01:00
rnjstjdgh
d6b4466bc3 Update collector.net.md
Signed-off-by: rnjstjdgh <gshgsh0831@gmail.com>
2021-11-11 14:52:32 +09:00
Calle Pettersson
ce3d517cb3 Merge pull request #863 from jsturtevant/fix-service-identification
use IsWindowsService to detect if running as service
2021-11-05 18:47:18 +01:00
James Sturtevant
a6ea021468 use IsWindowsService to detect if running as service
Signed-off-by: James Sturtevant <jstur@microsoft.com>
2021-11-05 10:15:39 -07:00
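A small sketch of the detection change, assuming the golang.org/x/sys/windows/svc package, whose IsWindowsService replaces the older interactive-session check:

```
package main

import (
	"log"

	"golang.org/x/sys/windows/svc"
)

func main() {
	// IsWindowsService reports whether the process is running under the
	// Windows service control manager rather than interactively.
	isService, err := svc.IsWindowsService()
	if err != nil {
		log.Fatalf("failed to detect session type: %v", err)
	}
	if isService {
		log.Println("running as a Windows service")
	} else {
		log.Println("running interactively")
	}
}
```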
Ben Reedy
b58dfdf4f3 Update perflib_exporter dependency
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-11-05 18:30:03 +10:00
Ben Reedy
676eb55f99 Update Prometheus dependencies
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-11-05 18:30:01 +10:00
Ben Reedy
121d9980c1 Replace go-kit/kit with go-kit/log
The log package has been extracted from go-kit/kit into a separate external
repository and module.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-11-05 18:29:59 +10:00
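A brief sketch of the swap, assuming the standalone go-kit/log module keeps the same API surface as the old github.com/go-kit/kit/log package, so only the import paths change:

```
package main

import (
	"os"

	"github.com/go-kit/log"       // previously github.com/go-kit/kit/log
	"github.com/go-kit/log/level" // previously github.com/go-kit/kit/log/level
)

func main() {
	logger := log.NewLogfmtLogger(os.Stderr)
	level.Info(logger).Log("msg", "logger constructed from the standalone go-kit/log module")
}
```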
Calle Pettersson
947d8473e0 Merge pull request #861 from prometheus-community/maintainers-contacts
Update MAINTAINERS with security contacts
2021-10-29 10:36:43 +02:00
Calle Pettersson
c1569686f7 Update MAINTAINERS with security contacts
Signed-off-by: Calle Pettersson <calle@cape.nu>
2021-10-27 20:46:46 +02:00
Ben Reedy
75966fd37c Merge pull request #848 from ArtamonovEvgenii/master
Set relative default path for textfile collector
2021-10-23 14:27:00 +10:00
eartamonov
d0cfc14af9 Set relative default path for textfile collector
Signed-off-by: Artamonov Evgenii <evgenyi.artamonov@gmail.com>
2021-10-19 14:23:11 +03:00
Ben Reedy
941b66d342 Merge pull request #846 from JDA88/patch-1
Document expected delays in the size metrics
2021-10-01 08:13:58 +10:00
Ben Reedy
388195be97 Update e2e output to match help text changes
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-10-01 08:09:03 +10:00
JDA88
bbefd8ac97 Document expected delays in the size metrics
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-10-01 07:58:04 +10:00
Ben Reedy
5b92e1bd3d Merge pull request #841 from breed808/thermal_empty
Thermalzone: return error on empty result
2021-10-01 05:45:09 +10:00
Dave Owen
82f17fd607 Collect IIS metrics using Perflib (#832)
Rewrite IIS collector to use Perflib

Signed-off-by: David Owen <dowen@meddbase.com>
2021-09-25 17:00:39 +02:00
Ben Reedy
3e37b7b6f0 Merge pull request #840 from newrelic-forks/fix_service_memory_leak
Service API collection: close service handler to avoid memory leak
2021-09-25 18:22:21 +10:00
Ben Reedy
5d29ff6497 Thermalzone: return error on empty result
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-09-25 15:35:45 +10:00
Alvaro Cabanas
f4f5aaf146 Service API collection: close service handler to avoid memory leak
Signed-off-by: Alvaro Cabanas <acabanas@newrelic.com>
2021-09-23 17:45:31 +02:00
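A hedged sketch of the handle-lifetime rule behind this fix, using the golang.org/x/sys/windows/svc/mgr API for illustration (the exporter's service collector may use lower-level calls): any handle opened during a scrape must be closed again, or each collection leaks one.

```
package services

import (
	"fmt"

	"golang.org/x/sys/windows/svc/mgr"
)

// serviceStartType opens a service, reads its configuration and, crucially,
// closes both handles again so repeated scrapes do not leak them.
func serviceStartType(name string) (uint32, error) {
	m, err := mgr.Connect()
	if err != nil {
		return 0, err
	}
	defer m.Disconnect()

	s, err := m.OpenService(name)
	if err != nil {
		return 0, fmt.Errorf("open %s: %w", name, err)
	}
	defer s.Close() // the close that avoids the leak described above

	cfg, err := s.Config()
	if err != nil {
		return 0, err
	}
	return cfg.StartType, nil
}
```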
Ben Reedy
5931604b58 Merge pull request #812 from carlossscastro/master
Services collection using API (no WMI)
2021-08-26 08:26:07 +10:00
Carlos Castro
67ca5e5ef2 Update service.go
Signed-off-by: Carlos Castro <ccastro@newrelic.com>
2021-08-25 17:19:41 +01:00
Carlos Castro
384183120f Update service.go
Signed-off-by: Carlos Castro <ccastro@newrelic.com>
2021-08-25 17:19:41 +01:00
Carlos Castro
a9ac2d4672 Collect services using windows api
Signed-off-by: Carlos Castro <ccastro@newrelic.com>
2021-08-25 17:19:41 +01:00
Benjamin Blattberg
1b96bb6d08 Add MSSQL Wait Statistics (#793)
Signed-off-by: benjaminjb <benjamin.blattberg@gmail.com>
2021-06-29 21:32:08 +02:00
Ben Reedy
cc45eeb90b Merge pull request #809 from breed808/process_working_set_private
Add missing Process Collector metrics
2021-06-25 08:36:43 +10:00
Ben Reedy
4b2cd0a024 Merge pull request #759 from breed808/textfile
Fix textfile crashes with duplicate metrics
2021-06-25 08:36:21 +10:00
Ben Reedy
ad447a6b08 Add unit suffix to process working_set metric
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-06-19 09:02:30 +10:00
Ben Reedy
e4d7604193 Move process metric documentation to markdown file
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-06-19 09:02:28 +10:00
Ben Reedy
757f88be04 Add missing process counters
Working Set Private and Working Set Peak were being collected, but not
exposed by the exporter.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-06-19 09:02:26 +10:00
Calle Pettersson
cff484b5e1 Merge pull request #804 from max-len/bandwidth-bytes
Export CurrentBandwidth as bytes
2021-06-16 20:16:45 +02:00
Calle Pettersson
2dc568b5cd Merge pull request #805 from max-len/typo
Fix typo: process_memory_limit_bytes
2021-06-16 20:14:55 +02:00
Calle Pettersson
448f505729 Merge pull request #807 from max-len/doc-cpu
Fix doc: collector.cpu.md
2021-06-16 20:12:59 +02:00
Max Lendrich
6d1ba11a8e Fix doc: collector.cpu.md
Signed-off-by: Max Lendrich <maximilian.lendrich@sap.com>
2021-06-16 15:18:29 +02:00
Max Lendrich
0f5a232142 Fix typo
Signed-off-by: Max Lendrich <maximilian.lendrich@sap.com>
2021-06-15 12:38:23 +02:00
Max Lendrich
bbab591570 Export CurrentBandwidth as bytes
From https://prometheus.io/docs/practices/naming/:
To avoid confusion combining different metrics, always use bytes, even
where bits appear more common.

Fixes #800

Signed-off-by: Max Lendrich <maximilian.lendrich@sap.com>
2021-06-14 17:33:27 +02:00
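The conversion itself is a one-liner; a sketch for clarity, with an illustrative helper name:

```
package collector

// currentBandwidthBytes converts perflib's CurrentBandwidth reading, reported
// in bits per second, into bytes per second before it is exposed, following
// the Prometheus base-unit guidance quoted above.
func currentBandwidthBytes(bitsPerSecond float64) float64 {
	return bitsPerSecond / 8
}
```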
Ben Reedy
2bc3c1859a Merge pull request #802 from breed808/log_dependency
Replace deprecated log lib in remaining collectors
2021-06-12 19:52:29 +10:00
Ben Reedy
7c61a4dc25 Run "go mod tidy" on project
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-06-12 11:57:46 +10:00
Ben Reedy
5a57da53be Replace deprecated log lib in remaining collectors
Some collectors were missed when migrating to the local
github.com/prometheus-community/windows_exporter/log library.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-06-12 11:57:40 +10:00
Calle Pettersson
72c46664db Merge pull request #789 from Wittionary/issue-776
Fixes #776
2021-05-25 07:35:49 +02:00
Witt Allen
8689c41c68 Added a 'data source' field to specify that hcsshim or Host Compute Services in Hyper-V is used
Signed-off-by: Witt Allen <qwert59@gmail.com>
2021-05-24 00:57:20 -05:00
Calle Pettersson
74eac8f29b Merge pull request #788 from benridley/bugfix_sysinfo_layout
Correct layout of SystemInfo structs
2021-05-21 09:41:34 +02:00
Ben Ridley
bb48f1caac Correct layout of SystemInfo structs to prevent incorrect fields being read
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-05-20 16:30:52 -07:00
Ben Reedy
068d03bd01 Merge pull request #783 from breed808/msmq_remove_hardcoded_queue
Remove hard-coded "Computer Queues" filter
2021-05-17 16:58:50 +10:00
Ben Reedy
5072879dca Check duplicates across entire textfile set
All textfile metrics are now checked for duplicates. If duplicates
are detected, all metrics are dropped and an error is logged.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-05-17 16:54:28 +10:00
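A hedged sketch of such a cross-file check, assuming the textfiles have already been parsed into client_model metric families (as the text parser in prometheus/common produces); the helper name is illustrative:

```
package textfile

import (
	"sort"
	"strings"

	dto "github.com/prometheus/client_model/go"
)

// duplicateMetricsPresent reports whether the same metric name and label set
// occurs more than once across all parsed textfiles. On a hit the caller can
// drop every textfile metric and log an error, as described above.
func duplicateMetricsPresent(files []map[string]*dto.MetricFamily) bool {
	seen := map[string]bool{}
	for _, families := range files {
		for name, mf := range families {
			for _, m := range mf.GetMetric() {
				labels := make([]string, 0, len(m.GetLabel()))
				for _, lp := range m.GetLabel() {
					labels = append(labels, lp.GetName()+"="+lp.GetValue())
				}
				sort.Strings(labels)
				key := name + "{" + strings.Join(labels, ",") + "}"
				if seen[key] {
					return true
				}
				seen[key] = true
			}
		}
	}
	return false
}
```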
Ben Reedy
0fb7eec670 Remove hard-coded "Computer Queues" filter
The msmq collector would only collect from a hard-coded "Computer Queues"
queue. Removing the filter allows other queues to be queried with
the collector.msmq.msmq-where flag.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-05-16 14:53:54 +10:00
Ben Reedy
4293497b29 Fix textfile crashes with duplicate metrics
Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-05-12 20:57:17 +10:00
Ben Reedy
95f10f19cb Merge pull request #778 from Wittionary/fix-issue-777
Fixes #777
2021-05-03 14:23:03 +10:00
Witt
288f2a60e7 Changed 'Yes' to 'No' to reflect current state of collectors enabled by default
Signed-off-by: Witt Allen <qwert59@gmail.com>
2021-05-02 19:40:33 -05:00
Ben Reedy
2e32b0e2b1 Merge pull request #767 from louij2/patch-1
Update collector.service.md
2021-05-01 13:14:26 +10:00
Calle Pettersson
09759a4e8c Merge pull request #698 from ramonsmits/patch-1
Example - Using [defaults] with `--collectors.enabled` argument
2021-04-25 19:53:42 +02:00
louij2
dfd42a6c0c Update collector.service.md
Added more details for monitoring multiple services.

Signed-off-by: Luca Chana <clubdog123@gmail.com>
2021-04-24 21:05:36 +01:00
Ramon Smits
576c3bf918 Example - Using [defaults] with --collectors.enabled argument
Signed-off-by: Ramon Smits <ramon.smits@gmail.com>
2021-04-23 18:52:52 +02:00
Ben Reedy
19fee044bf Merge pull request #765 from breed808/checksums
CI: Output artifacts in single, flat directory.
2021-04-20 19:00:35 +10:00
Ben Reedy
45a74fdb7f CI: Output artifacts in single, flat directory.
Nested directories caused issues with promu checksum output, causing
user checks of the sha256sums.txt file to fail as the filenames did not
match.

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-04-19 19:38:17 +10:00
Ben Reedy
db00553ca6 Merge pull request #744 from breed808/tests
Add benchmark for each collector
2021-04-01 22:35:08 +10:00
Ben Reedy
a2c4bf6a2d Add benchmark for each collector
Benchmarks will allow for easier identification of slow collectors.
Additionally, they increase test coverage of the collectors, with some
collectors now reaching 80-95% coverage with this change.

Collector benchmarks have been structured so that common functionality is
present in `collector/collector_test.go`, as is done with non-test
functionality in `collector/collector.go`.
Test logic that is specific to individual collectors is present in the
collector test file (e.g. `collector/process_test.go` for the Process
collector).

Signed-off-by: Ben Reedy <breed808@breed808.com>
2021-04-01 22:28:54 +10:00
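A hedged sketch of what the shared benchmark helper in collector/collector_test.go could look like; Collector and ScrapeContext are assumed from the surrounding package, and prepareScrapeContext is a placeholder name, not necessarily the exporter's real constructor:

```
package collector

import (
	"testing"

	"github.com/prometheus/client_golang/prometheus"
)

// benchmarkCollectorSketch builds the named collector once, then measures
// repeated Collect calls while a goroutine drains the metric channel.
func benchmarkCollectorSketch(b *testing.B, name string, build func() (Collector, error)) {
	c, err := build()
	if err != nil {
		b.Fatalf("building %s collector: %v", name, err)
	}

	metrics := make(chan prometheus.Metric)
	go func() {
		for range metrics {
			// drain so Collect never blocks on the channel
		}
	}()

	for i := 0; i < b.N; i++ {
		ctx, err := prepareScrapeContext([]string{name}) // placeholder constructor
		if err != nil {
			b.Fatal(err)
		}
		if err := c.Collect(ctx, metrics); err != nil {
			b.Fatal(err)
		}
	}
}
```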
Calle Pettersson
7adcac8f39 Merge pull request #702 from benridley/dev_cs_collector
Replace WMI in cs and os collectors
2021-03-30 21:26:23 +02:00
Ben Ridley
863b7d8ab4 Merge branch 'dev_cs_collector' of https://github.com/benridley/windows_exporter into dev_cs_collector
2021-03-29 10:14:26 -07:00
Ben Ridley
33c6b2c6a5 Address GitHub feedback
- Defer registry close calls
- Ensure size parameter in GetComputerName is properly specified
- Clean up some comments to ensure correctness

Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-29 10:13:36 -07:00
Calle Pettersson
6dee2422e1 Merge pull request #753 from prometheus-community/fix-ci
Update CI to install tools with go install rather than go get
2021-03-28 10:41:25 +02:00
Calle Pettersson
5d224b43ca Update CI to install tools with go install rather than go get
Signed-off-by: Calle Pettersson <calle@cape.nu>
2021-03-27 15:30:50 +01:00
Calle Pettersson
3f2a143104 Merge pull request #748 from majerus1223/remote_interactive
Fix typo on remote_interactive
2021-03-19 11:34:25 +01:00
Ben Ridley
ee3848141c Simplify struct usage and comments
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
df2a7a9ec0 Remove temporary uintptr values, as the garbage collector can move addresses from under them.
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
05f0f6f688 Add idiomatic wrappers to be exposed publicly, and hide low-level
WinAPI operations

Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
d947d0f6db Refactor remaining sysinfoapi calls into header package
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
d063bc0842 Add correct scrape context to OS benchmark
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
dd473c4807 Fixed paging free bytes
moved

Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
7bd58abd27 Converted PagingFreeBytes to use perflib
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
6f941044c7 Change Sprintf interpolation to use explicit types
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
3da11645cf added os_test.go and removed wmi for testing
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
048bff919e Converted most metrics to non-wmi
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
retryW
f76334213d Convert os time and timezone from WMI to native go
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
71054ac429 Replace the CS collector with native WinAPI calls to sysinfoapi
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-18 16:18:47 -07:00
Ben Ridley
248b7214e3 Move netapi free back to a defer statement
Signed-off-by: Ben Ridley <benridley29@gmail.com>
2021-03-19 10:13:04 +11:00
majerus
094558b1f1 Fix typo
Signed-off-by: majerus <james_majerus@msn.com>
2021-03-16 09:12:56 -05:00
Ben Reedy
18495abb69 Merge pull request #736 from basroovers/master
Typo in tcp doc
2021-03-07 11:04:18 +10:00
Bas Roovers
cc709ac380 Update collector.tcp.md
Changed windows_tcp_connections_established to gauge in tcp doc

Signed-off-by: Bas Roovers <basroovers@icloud.com>
2021-02-24 14:39:07 +01:00
121 changed files with 4251 additions and 1351 deletions

.github/dependabot.yml (new file, 6 lines)

@@ -0,0 +1,6 @@
version: 2
updates:
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "weekly"

.github/workflows/ci.yml (new file, 142 lines)

@@ -0,0 +1,142 @@
name: windows_exporter CI/CD
# Trigger on pull requests and releases
# Deployments will only occur for releases (see `if` clauses in the build job).
on:
  pull_request:
    branches:
      - master
  release:
    types:
      - published
      - edited
jobs:
  test:
    runs-on: windows-2019
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
        with:
          go-version: '^1.17.5'
      - name: Test
        run: make test
      - name: Install e2e deps
        run: |
          go get github.com/prometheus/promu@v0.11.1
          go get github.com/josephspurrier/goversioninfo/cmd/goversioninfo@v1.2.0
          # GOPATH\bin dir must be appended to PATH else the `promu` command won't be found
          echo "$(go env GOPATH)\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
      - name: e2e Test
        run: make e2e-test
  lint:
    runs-on: windows-2019
    steps:
      # `gofmt` linter run by golangci-lint fails on CRLF line endings (the default for Windows)
      - name: Set git to use LF
        run: |
          git config --global core.autocrlf false
          git config --global core.eol lf
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
        with:
          go-version: '^1.17.5'
      - name: golangci-lint
        uses: golangci/golangci-lint-action@v2
        with:
          version: v1.43
          args: "--timeout=5m"
      # golangci-lint action doesn't always provide helpful output, so re-run without the action for
      # better output of the problem.
      # The cache from the golangci-lint step is re-used here, so this step should finish quickly.
      - name: errors
        if: ${{ failure() }}
        run: golangci-lint run --timeout=5m -c .golangci.yaml
  codespell:
    name: Check for spelling errors
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: codespell-project/actions-codespell@master
        with:
          check_filenames: true
          # When using this Action in other repos, the --skip option below can be removed
          skip: ./.git
          ignore_words_list: calle
  build:
    runs-on: windows-2019
    needs:
      - test
      - lint
      - codespell
    steps:
      - uses: actions/checkout@v2
        with:
          # fetch-depth required for gitversion in `Build` step
          fetch-depth: 0
      - uses: actions/setup-go@v2
        with:
          go-version: '^1.17.5'
      - name: Install Build deps
        run: |
          go get github.com/prometheus/promu@v0.11.1
          go get github.com/josephspurrier/goversioninfo/cmd/goversioninfo@v1.2.0
          # GOPATH\bin dir must be added to PATH else the `promu` and `goversioninfo` commands won't be found
          echo "$(go env GOPATH)\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
      - name: Build
        run: |
          $ErrorActionPreference = "Stop"
          gitversion /output json /showvariable FullSemVer | Set-Content VERSION -PassThru
          $Version = Get-Content VERSION
          # Windows versioninfo resources need the file version by parts (but product version is free text)
          $VersionParts = ($Version -replace '^v?([0-9\.]+).*$','$1').Split(".")
          goversioninfo.exe -ver-major $VersionParts[0] -ver-minor $VersionParts[1] -ver-patch $VersionParts[2] -product-version $Version -platform-specific
          make crossbuild
          # GH requires all files to have different names, so add version/arch to differentiate
          foreach($Arch in "amd64","386") {
            Move-Item output\$Arch\windows_exporter.exe output\windows_exporter-$Version-$Arch.exe
          }
      - name: Upload Artifacts
        uses: actions/upload-artifact@v2
        with:
          name: windows_exporter_binaries
          path: output\windows_exporter-*.exe
      - name: Build Release Artifacts
        if: startsWith(github.ref, 'refs/tags/')
        run: |
          $ErrorActionPreference = "Stop"
          $BuildVersion = Get-Content VERSION
          $TagName = $env:GITHUB_REF -replace 'refs/tags/', ''
          # The MSI version is not semver compliant, so just take the numerical parts
          $MSIVersion = $TagName -replace '^v?([0-9\.]+).*$','$1'
          foreach($Arch in "amd64","386") {
            Write-Verbose "Building windows_exporter $MSIVersion msi for $Arch"
            .\installer\build.ps1 -PathToExecutable .\output\windows_exporter-$BuildVersion-$Arch.exe -Version $MSIVersion -Arch "$Arch"
            Move-Item installer\Output\windows_exporter-$MSIVersion-$Arch.msi output\
          }
          promu checksum output\
      - name: Release
        if: startsWith(github.ref, 'refs/tags/')
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          $TagName = $env:GITHUB_REF -replace 'refs/tags/', ''
          Get-ChildItem -Path output\* -Include @('windows_exporter*.msi', 'windows_exporter*.exe', 'sha256sums.txt') | Foreach-Object {gh release upload $TagName $_}

.golangci.yaml

@@ -3,11 +3,10 @@ linters:
  enable:
    - deadcode
    - errcheck
    - golint
    - revive
    - govet
    - gofmt
    - ineffassign
    - interfacer
    - structcheck
    - unconvert
    - varcheck
@@ -20,4 +19,7 @@ issues:
    - # Golint has many capitalisation complaints on WMI class names
      text: "`?\\w+`? should be `?\\w+`?"
      linters:
        - golint
        - revive
    - text: "don't use ALL_CAPS in Go names; use CamelCase"
      linters:
        - revive

MAINTAINERS.md

@@ -1,6 +1,9 @@
Contributors in alphabetical order
Maintainers in alphabetical order
* [Ben Reedy](https://github.com/breed808) - breed808@breed808.com
* [Calle Pettersson](https://github.com/carlpett) - calle@cape.nu
Alumni
* [Ben Reedy](https://github.com/breed808)
* [Brian Brazil](https://github.com/brian-brazil)
* [Martin Lindhe](https://github.com/martinlindhe)
* [Calle Pettersson](https://github.com/carlpett)

Makefile

@@ -8,12 +8,15 @@ windows_exporter.exe: **/*.go
test:
	go test -v ./...
bench:
	go test -v -bench='benchmark(cpu|logicaldisk|logon|memory|net|process|service|system|tcp|time)collector' ./...
lint:
	golangci-lint -c .golangci.yaml run
.PHONY: e2e-test
e2e-test: windows_exporter.exe
	powershell -NonInteractive -ExecutionPolicy Bypass -File .\tools\end-to-end-test.ps1
	pwsh -NonInteractive -ExecutionPolicy Bypass -File .\tools\end-to-end-test.ps1
fmt:
	gofmt -l -w -s .

README.md

@@ -1,6 +1,6 @@
# windows_exporter
[![Build status](https://ci.appveyor.com/api/projects/status/xoym3fftr7giasiw/branch/master?svg=true)](https://ci.appveyor.com/project/prometheus-community/windows-exporter)
![Build Status](https://github.com/prometheus-community/windows_exporter/workflows/windows_exporter%20CI/CD/badge.svg)
A Prometheus exporter for Windows machines.
@@ -10,6 +10,7 @@ A Prometheus exporter for Windows machines.
Name | Description | Enabled by default
---------|-------------|--------------------
[ad](docs/collector.ad.md) | Active Directory Domain Services |
[adcs](docs/collector.adcs.md) | Active Directory Certificate Services |
[adfs](docs/collector.adfs.md) | Active Directory Federation Services |
[cache](docs/collector.cache.md) | Cache metrics |
[cpu](docs/collector.cpu.md) | CPU usage | &#10003;
@@ -76,7 +77,7 @@ Flag | Description | Default value
`--telemetry.addr` | host:port for exporter. | `:9182`
`--telemetry.path` | URL path for surfacing collected metrics. | `/metrics`
`--telemetry.max-requests` | Maximum number of concurrent requests. 0 to disable. | `5`
`--collectors.enabled` | Comma-separated list of collectors to use. Use `[defaults]` as a placeholder for all the collectors enabled by default." | `[defaults]`
`--collectors.enabled` | Comma-separated list of collectors to use. Use `[defaults]` as a placeholder which gets expanded containing all the collectors enabled by default." | `[defaults]`
`--collectors.print` | If true, print available collectors and exit. |
`--scrape.timeout-margin` | Seconds to subtract from the timeout allowed by the client. Tune to allow for overhead or high loads. | `0.5`
`--web.config.file` | A [web config][web_config] for setting up TLS and Auth | None
@@ -140,6 +141,14 @@ The prometheus metrics will be exposed on [localhost:9182](http://localhost:9182
When there are multiple processes with the same name, WMI represents those after the first instance as `process-name#index`. So to get them all, rather than just the first one, the [regular expression](https://en.wikipedia.org/wiki/Regular_expression) must use `.+`. See [process](docs/collector.process.md) for more information.
### Using [defaults] with `--collectors.enabled` argument
Using `[defaults]` with `--collectors.enabled` argument which gets expanded with all default collectors.
.\windows_exporter.exe --collectors.enabled "[defaults],process,container"
This enables the additional process and container collectors on top of the defaults.
### Using a configuration file
YAML configuration files can be specified with the `--config.file` flag. E.G. `.\windows_exporter.exe --config.file=config.yml`

appveyor.yml (deleted file, 84 lines)

@@ -1,84 +0,0 @@
version: "{build}"
os: Visual Studio 2019
build: off
environment:
  GOPATH: c:\gopath
  GO111MODULE: on
clone_folder: c:\gopath\src\github.com\prometheus-community\windows_exporter
install:
  - mkdir %GOPATH%\bin
  - set PATH=%GOPATH%\bin;%PATH%
  - set PATH=%PATH%;C:\msys64\mingw64\bin
  - choco install gitversion.portable make -y
  - ps: |
      appveyor DownloadFile https://github.com/golangci/golangci-lint/releases/download/v1.21.0/golangci-lint-1.21.0-windows-amd64.zip
      Expand-Archive golangci-lint-1.21.0-windows-amd64.zip
      Move-Item golangci-lint-1.21.0-windows-amd64\golangci-lint-1.21.0-windows-amd64\golangci-lint.exe $env:GOPATH\bin\golangci-lint.exe
  - ps: |
      $env:GO111MODULE="off"
      go get -u github.com/prometheus/promu
      go get -u github.com/josephspurrier/goversioninfo/cmd/goversioninfo
      $env:GO111MODULE="on"
test_script:
  - make test
after_test:
  - make lint
  - make e2e-test
build_script:
  - ps: |
      # go mod download (or, if we don't call it, go build) will write every dependent package name to
      # stderr, which will be interpreted as an error and abort the build if ErrorActionPreference is Stop,
      # so we need to run it before setting the preference.
      go mod download
      $ErrorActionPreference = "Stop"
      gitversion /output json /showvariable FullSemVer | Set-Content VERSION -PassThru
      $Version = Get-Content VERSION
      # Windows versioninfo resources need the file version by parts (but product version is free text)
      $VersionParts = ($Version -replace '^v?([0-9\.]+).*$','$1').Split(".")
      goversioninfo.exe -ver-major $VersionParts[0] -ver-minor $VersionParts[1] -ver-patch $VersionParts[2] -product-version $Version -platform-specific
      make crossbuild
      # GH requires all files to have different names, so add version/arch to differentiate
      foreach($Arch in "amd64","386") {
        Rename-Item output\$Arch\windows_exporter.exe -NewName windows_exporter-$Version-$Arch.exe
      }
after_build:
  - ps: |
      # Build installer packages only on tagged releases
      if($env:APPVEYOR_REPO_TAG -ne "True") {
        return
      }
      $ErrorActionPreference = "Stop"
      $BuildVersion = Get-Content VERSION
      # The MSI version is not semver compliant, so just take the numerical parts
      $MSIVersion = $env:APPVEYOR_REPO_TAG_NAME -replace '^v?([0-9\.]+).*$','$1'
      foreach($Arch in "amd64","386") {
        Write-Verbose "Building windows_exporter $MSIVersion msi for $Arch"
        .\installer\build.ps1 -PathToExecutable .\output\$Arch\windows_exporter-$BuildVersion-$Arch.exe -Version $MSIVersion -Arch "$Arch"
        Move-Item installer\Output\windows_exporter-$MSIVersion-$Arch.msi output\$Arch\
      }
  - promu checksum output\
artifacts:
  - name: Artifacts
    path: output\**\*
deploy:
  - provider: GitHub
    description: windows_exporter version $(appveyor_build_version)
    artifact: Artifacts
    auth_token:
      secure: 'hFR7Ymxt/Rb25p4BweFvMNhX03lHD9kXJXrRlC/KbThazHuLD5NTx2ibMI6LYRsr'
    draft: false
    prerelease: false
    on:
      appveyor_repo_tag: true

collector/ad.go

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

collector/ad_test.go (new file, 9 lines)

@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkADCollector(b *testing.B) {
	benchmarkCollector(b, "ad", NewADCollector)
}

collector/adcs.go (new file, 242 lines)

@@ -0,0 +1,242 @@
//go:build windows
// +build windows
package collector
import (
"errors"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"strings"
)
func init() {
registerCollector("adcs", adcsCollectorMethod, "Certification Authority")
}
type adcsCollector struct {
RequestsPerSecond *prometheus.Desc
RequestProcessingTime *prometheus.Desc
RetrievalsPerSecond *prometheus.Desc
RetrievalProcessingTime *prometheus.Desc
FailedRequestsPerSecond *prometheus.Desc
IssuedRequestsPerSecond *prometheus.Desc
PendingRequestsPerSecond *prometheus.Desc
RequestCryptographicSigningTime *prometheus.Desc
RequestPolicyModuleProcessingTime *prometheus.Desc
ChallengeResponsesPerSecond *prometheus.Desc
ChallengeResponseProcessingTime *prometheus.Desc
SignedCertificateTimestampListsPerSecond *prometheus.Desc
SignedCertificateTimestampListProcessingTime *prometheus.Desc
}
// ADCSCollectorMethod ...
func adcsCollectorMethod() (Collector, error) {
const subsystem = "adcs"
return &adcsCollector{
RequestsPerSecond: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "requests_total"),
"Total certificate requests processed",
[]string{"cert_template"},
nil,
),
RequestProcessingTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "request_processing_time_seconds"),
"Last time elapsed for certificate requests",
[]string{"cert_template"},
nil,
),
RetrievalsPerSecond: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "retrievals_total"),
"Total certificate retrieval requests processed",
[]string{"cert_template"},
nil,
),
RetrievalProcessingTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "retrievals_processing_time_seconds"),
"Last time elapsed for certificate retrieval request",
[]string{"cert_template"},
nil,
),
FailedRequestsPerSecond: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "failed_requests_total"),
"Total failed certificate requests processed",
[]string{"cert_template"},
nil,
),
IssuedRequestsPerSecond: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "issued_requests_total"),
"Total issued certificate requests processed",
[]string{"cert_template"},
nil,
),
PendingRequestsPerSecond: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "pending_requests_total"),
"Total pending certificate requests processed",
[]string{"cert_template"},
nil,
),
RequestCryptographicSigningTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "request_cryptographic_signing_time_seconds"),
"Last time elapsed for signing operation request",
[]string{"cert_template"},
nil,
),
RequestPolicyModuleProcessingTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "request_policy_module_processing_time_seconds"),
"Last time elapsed for policy module processing request",
[]string{"cert_template"},
nil,
),
ChallengeResponsesPerSecond: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "challenge_responses_total"),
"Total certificate challenge responses processed",
[]string{"cert_template"},
nil,
),
ChallengeResponseProcessingTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "challenge_response_processing_time_seconds"),
"Last time elapsed for challenge response",
[]string{"cert_template"},
nil,
),
SignedCertificateTimestampListsPerSecond: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "signed_certificate_timestamp_lists_total"),
"Total Signed Certificate Timestamp Lists processed",
[]string{"cert_template"},
nil,
),
SignedCertificateTimestampListProcessingTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "signed_certificate_timestamp_list_processing_time_seconds"),
"Last time elapsed for Signed Certificate Timestamp List",
[]string{"cert_template"},
nil,
),
}, nil
}
func (c *adcsCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collectADCSCounters(ctx, ch); err != nil {
log.Error("Failed collecting ADCS Metrics:", desc, err)
return err
}
return nil
}
type perflibADCS struct {
Name string
RequestsPerSecond float64 `perflib:"Requests/sec"`
RequestProcessingTime float64 `perflib:"Request processing time (ms)"`
RetrievalsPerSecond float64 `perflib:"Retrievals/sec"`
RetrievalProcessingTime float64 `perflib:"Retrieval processing time (ms)"`
FailedRequestsPerSecond float64 `perflib:"Failed Requests/sec"`
IssuedRequestsPerSecond float64 `perflib:"Issued Requests/sec"`
PendingRequestsPerSecond float64 `perflib:"Pending Requests/sec"`
RequestCryptographicSigningTime float64 `perflib:"Request cryptographic signing time (ms)"`
RequestPolicyModuleProcessingTime float64 `perflib:"Request policy module processing time (ms)"`
ChallengeResponsesPerSecond float64 `perflib:"Challenge Responses/sec"`
ChallengeResponseProcessingTime float64 `perflib:"Challenge Response processing time (ms)"`
SignedCertificateTimestampListsPerSecond float64 `perflib:"Signed Certificate Timestamp Lists/sec"`
SignedCertificateTimestampListProcessingTime float64 `perflib:"Signed Certificate Timestamp List processing time (ms)"`
}
func (c *adcsCollector) collectADCSCounters(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
dst := make([]perflibADCS, 0)
if _, ok := ctx.perfObjects["Certification Authority"]; !ok {
return nil, errors.New("Perflib did not contain an entry for Certification Authority")
}
err := unmarshalObject(ctx.perfObjects["Certification Authority"], &dst)
if err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("Perflib query for Certification Authority (ADCS) returned empty result set")
}
for _, d := range dst {
n := strings.ToLower(d.Name)
if n == "" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.RequestsPerSecond,
prometheus.CounterValue,
d.RequestsPerSecond,
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.RequestProcessingTime,
prometheus.GaugeValue,
milliSecToSec(d.RequestProcessingTime),
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.RetrievalsPerSecond,
prometheus.CounterValue,
d.RetrievalsPerSecond,
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.RetrievalProcessingTime,
prometheus.GaugeValue,
milliSecToSec(d.RetrievalProcessingTime),
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.FailedRequestsPerSecond,
prometheus.CounterValue,
d.FailedRequestsPerSecond,
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.IssuedRequestsPerSecond,
prometheus.CounterValue,
d.IssuedRequestsPerSecond,
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.PendingRequestsPerSecond,
prometheus.CounterValue,
d.PendingRequestsPerSecond,
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.RequestCryptographicSigningTime,
prometheus.GaugeValue,
milliSecToSec(d.RequestCryptographicSigningTime),
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.RequestPolicyModuleProcessingTime,
prometheus.GaugeValue,
milliSecToSec(d.RequestPolicyModuleProcessingTime),
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ChallengeResponsesPerSecond,
prometheus.CounterValue,
d.ChallengeResponsesPerSecond,
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ChallengeResponseProcessingTime,
prometheus.GaugeValue,
milliSecToSec(d.ChallengeResponseProcessingTime),
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.SignedCertificateTimestampListsPerSecond,
prometheus.CounterValue,
d.SignedCertificateTimestampListsPerSecond,
d.Name,
)
ch <- prometheus.MustNewConstMetric(
c.SignedCertificateTimestampListProcessingTime,
prometheus.GaugeValue,
milliSecToSec(d.SignedCertificateTimestampListProcessingTime),
d.Name,
)
}
return nil, nil
}

collector/adcs_test.go (new file, 9 lines)

@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkADCSCollector(b *testing.B) {
	benchmarkCollector(b, "adcs", adcsCollectorMethod)
}

collector/adfs.go

@@ -1,9 +1,11 @@
//go:build windows
// +build windows
package collector
import (
"github.com/prometheus/client_golang/prometheus"
"math"
)
func init() {
@@ -11,17 +13,49 @@ func init() {
}
type adfsCollector struct {
adLoginConnectionFailures *prometheus.Desc
certificateAuthentications *prometheus.Desc
deviceAuthentications *prometheus.Desc
extranetAccountLockouts *prometheus.Desc
federatedAuthentications *prometheus.Desc
passportAuthentications *prometheus.Desc
passiveRequests *prometheus.Desc
passwordChangeFailed *prometheus.Desc
passwordChangeSucceeded *prometheus.Desc
tokenRequests *prometheus.Desc
windowsIntegratedAuthentications *prometheus.Desc
adLoginConnectionFailures *prometheus.Desc
certificateAuthentications *prometheus.Desc
deviceAuthentications *prometheus.Desc
extranetAccountLockouts *prometheus.Desc
federatedAuthentications *prometheus.Desc
passportAuthentications *prometheus.Desc
passiveRequests *prometheus.Desc
passwordChangeFailed *prometheus.Desc
passwordChangeSucceeded *prometheus.Desc
tokenRequests *prometheus.Desc
windowsIntegratedAuthentications *prometheus.Desc
oAuthAuthZRequests *prometheus.Desc
oAuthClientAuthentications *prometheus.Desc
oAuthClientAuthenticationsFailures *prometheus.Desc
oAuthClientCredentialsRequestFailures *prometheus.Desc
oAuthClientCredentialsRequests *prometheus.Desc
oAuthClientPrivateKeyJwtAuthenticationFailures *prometheus.Desc
oAuthClientPrivateKeyJwtAuthentications *prometheus.Desc
oAuthClientSecretBasicAuthenticationFailures *prometheus.Desc
oAuthClientSecretBasicAuthentications *prometheus.Desc
oAuthClientSecretPostAuthenticationFailures *prometheus.Desc
oAuthClientSecretPostAuthentications *prometheus.Desc
oAuthClientWindowsIntegratedAuthenticationFailures *prometheus.Desc
oAuthClientWindowsIntegratedAuthentications *prometheus.Desc
oAuthLogonCertificateRequestFailures *prometheus.Desc
oAuthLogonCertificateTokenRequests *prometheus.Desc
oAuthPasswordGrantRequestFailures *prometheus.Desc
oAuthPasswordGrantRequests *prometheus.Desc
oAuthTokenRequests *prometheus.Desc
samlPTokenRequests *prometheus.Desc
ssoAuthenticationFailures *prometheus.Desc
ssoAuthentications *prometheus.Desc
wsfedTokenRequests *prometheus.Desc
wstrustTokenRequests *prometheus.Desc
upAuthenticationFailures *prometheus.Desc
upAuthentications *prometheus.Desc
externalAuthenticationFailures *prometheus.Desc
externalAuthentications *prometheus.Desc
artifactDBFailures *prometheus.Desc
avgArtifactDBQueryTime *prometheus.Desc
configDBFailures *prometheus.Desc
avgConfigDBQueryTime *prometheus.Desc
federationMetadataRequests *prometheus.Desc
}
// newADFSCollector constructs a new adfsCollector
@@ -95,21 +129,245 @@ func newADFSCollector() (Collector, error) {
nil,
nil,
),
oAuthAuthZRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_authorization_requests_total"),
"Total number of incoming requests to the OAuth Authorization endpoint",
nil,
nil,
),
oAuthClientAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_authentication_success_total"),
"Total number of successful OAuth client Authentications",
nil,
nil,
),
oAuthClientAuthenticationsFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_authentication_failure_total"),
"Total number of failed OAuth client Authentications",
nil,
nil,
),
oAuthClientCredentialsRequestFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_credentials_failure_total"),
"Total number of failed OAuth Client Credentials Requests",
nil,
nil,
),
oAuthClientCredentialsRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_credentials_success_total"),
"Total number of successful RP tokens issued for OAuth Client Credentials Requests",
nil,
nil,
),
oAuthClientPrivateKeyJwtAuthenticationFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_privkey_jtw_authentication_failure_total"),
"Total number of failed OAuth Client Private Key Jwt Authentications",
nil,
nil,
),
oAuthClientPrivateKeyJwtAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_privkey_jwt_authentications_success_total"),
"Total number of successful OAuth Client Private Key Jwt Authentications",
nil,
nil,
),
oAuthClientSecretBasicAuthenticationFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_secret_basic_authentications_failure_total"),
"Total number of failed OAuth Client Secret Basic Authentications",
nil,
nil,
),
oAuthClientSecretBasicAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_secret_basic_authentications_success_total"),
"Total number of successful OAuth Client Secret Basic Authentications",
nil,
nil,
),
oAuthClientSecretPostAuthenticationFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_secret_post_authentications_failure_total"),
"Total number of failed OAuth Client Secret Post Authentications",
nil,
nil,
),
oAuthClientSecretPostAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_secret_post_authentications_success_total"),
"Total number of successful OAuth Client Secret Post Authentications",
nil,
nil,
),
oAuthClientWindowsIntegratedAuthenticationFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_windows_authentications_failure_total"),
"Total number of failed OAuth Client Windows Integrated Authentications",
nil,
nil,
),
oAuthClientWindowsIntegratedAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_client_windows_authentications_success_total"),
"Total number of successful OAuth Client Windows Integrated Authentications",
nil,
nil,
),
oAuthLogonCertificateRequestFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_logon_certificate_requests_failure_total"),
"Total number of failed OAuth Logon Certificate Requests",
nil,
nil,
),
oAuthLogonCertificateTokenRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_logon_certificate_token_requests_success_total"),
"Total number of successful RP tokens issued for OAuth Logon Certificate Requests",
nil,
nil,
),
oAuthPasswordGrantRequestFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_password_grant_requests_failure_total"),
"Total number of failed OAuth Password Grant Requests",
nil,
nil,
),
oAuthPasswordGrantRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_password_grant_requests_success_total"),
"Total number of successful OAuth Password Grant Requests",
nil,
nil,
),
oAuthTokenRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "oauth_token_requests_success_total"),
"Total number of successful RP tokens issued over OAuth protocol",
nil,
nil,
),
samlPTokenRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "samlp_token_requests_success_total"),
"Total number of successful RP tokens issued over SAML-P protocol",
nil,
nil,
),
ssoAuthenticationFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "sso_authentications_failure_total"),
"Total number of failed SSO authentications",
nil,
nil,
),
ssoAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "sso_authentications_success_total"),
"Total number of successful SSO authentications",
nil,
nil,
),
wsfedTokenRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "wsfed_token_requests_success_total"),
"Total number of successful RP tokens issued over WS-Fed protocol",
nil,
nil,
),
wstrustTokenRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "wstrust_token_requests_success_total"),
"Total number of successful RP tokens issued over WS-Trust protocol",
nil,
nil,
),
upAuthenticationFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "userpassword_authentications_failure_total"),
"Total number of failed AD U/P authentications",
nil,
nil,
),
upAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "userpassword_authentications_success_total"),
"Total number of successful AD U/P authentications",
nil,
nil,
),
externalAuthenticationFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "external_authentications_failure_total"),
"Total number of failed authentications from external MFA providers",
nil,
nil,
),
externalAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "external_authentications_success_total"),
"Total number of successful authentications from external MFA providers",
nil,
nil,
),
artifactDBFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "db_artifact_failure_total"),
"Total number of failures connecting to the artifact database",
nil,
nil,
),
avgArtifactDBQueryTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "db_artifact_query_time_seconds_total"),
"Accumulator of time taken for an artifact database query",
nil,
nil,
),
configDBFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "db_config_failure_total"),
"Total number of failures connecting to the configuration database",
nil,
nil,
),
avgConfigDBQueryTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "db_config_query_time_seconds_total"),
"Accumulator of time taken for a configuration database query",
nil,
nil,
),
federationMetadataRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "federation_metadata_requests_total"),
"Total number of Federation Metadata requests",
nil,
nil,
),
}, nil
}
type perflibADFS struct {
AdLoginConnectionFailures float64 `perflib:"AD login Connection Failures"`
CertificateAuthentications float64 `perflib:"Certificate Authentications"`
DeviceAuthentications float64 `perflib:"Device Authentications"`
ExtranetAccountLockouts float64 `perflib:"Extranet Account Lockouts"`
FederatedAuthentications float64 `perflib:"Federated Authentications"`
PassportAuthentications float64 `perflib:"Microsoft Passport Authentications"`
PassiveRequests float64 `perflib:"Passive Requests"`
PasswordChangeFailed float64 `perflib:"Password Change Failed Requests"`
PasswordChangeSucceeded float64 `perflib:"Password Change Successful Requests"`
TokenRequests float64 `perflib:"Token Requests"`
WindowsIntegratedAuthentications float64 `perflib:"Windows Integrated Authentications"`
AdLoginConnectionFailures float64 `perflib:"AD Login Connection Failures"`
CertificateAuthentications float64 `perflib:"Certificate Authentications"`
DeviceAuthentications float64 `perflib:"Device Authentications"`
ExtranetAccountLockouts float64 `perflib:"Extranet Account Lockouts"`
FederatedAuthentications float64 `perflib:"Federated Authentications"`
PassportAuthentications float64 `perflib:"Microsoft Passport Authentications"`
PassiveRequests float64 `perflib:"Passive Requests"`
PasswordChangeFailed float64 `perflib:"Password Change Failed Requests"`
PasswordChangeSucceeded float64 `perflib:"Password Change Successful Requests"`
TokenRequests float64 `perflib:"Token Requests"`
WindowsIntegratedAuthentications float64 `perflib:"Windows Integrated Authentications"`
OAuthAuthZRequests float64 `perflib:"OAuth AuthZ Requests"`
OAuthClientAuthentications float64 `perflib:"OAuth Client Authentications"`
OAuthClientAuthenticationFailures float64 `perflib:"OAuth Client Authentications Failures"`
OAuthClientCredentialRequestFailures float64 `perflib:"OAuth Client Credentials Request Failures"`
OAuthClientCredentialRequests float64 `perflib:"OAuth Client Credentials Requests"`
OAuthClientPrivKeyJWTAuthnFailures float64 `perflib:"OAuth Client Private Key Jwt Authentication Failures"`
OAuthClientPrivKeyJWTAuthentications float64 `perflib:"OAuth Client Private Key Jwt Authentications"`
OAuthClientBasicAuthnFailures float64 `perflib:"OAuth Client Secret Basic Authentication Failures"`
OAuthClientBasicAuthentications float64 `perflib:"OAuth Client Secret Basic Authentication Requests"`
OAuthClientSecretPostAuthnFailures float64 `perflib:"OAuth Client Secret Post Authentication Failures"`
OAuthClientSecretPostAuthentications float64 `perflib:"OAuth Client Secret Post Authentications"`
OAuthClientWindowsAuthnFailures float64 `perflib:"OAuth Client Windows Integrated Authentication Failures"`
OAuthClientWindowsAuthentications float64 `perflib:"OAuth Client Windows Integrated Authentications"`
OAuthLogonCertRequestFailures float64 `perflib:"OAuth Logon Certificate Request Failures"`
OAuthLogonCertTokenRequests float64 `perflib:"OAuth Logon Certificate Token Requests"`
OAuthPasswordGrantRequestFailures float64 `perflib:"OAuth Password Grant Request Failures"`
OAuthPasswordGrantRequests float64 `perflib:"OAuth Password Grant Requests"`
OAuthTokenRequests float64 `perflib:"OAuth Token Requests"`
SAMLPTokenRequests float64 `perflib:"SAML-P Token Requests"`
SSOAuthenticationFailures float64 `perflib:"SSO Authentication Failures"`
SSOAuthentications float64 `perflib:"SSO Authentications"`
WSFedTokenRequests float64 `perflib:"WS-Fed Token Requests"`
WSTrustTokenRequests float64 `perflib:"WS-Trust Token Requests"`
UsernamePasswordAuthnFailures float64 `perflib:"U/P Authentication Failures"`
UsernamePasswordAuthentications float64 `perflib:"U/P Authentications"`
ExternalAuthentications float64 `perflib:"External Authentications"`
ExternalAuthNFailures float64 `perflib:"External Authentication Failures"`
ArtifactDBFailures float64 `perflib:"Artifact Database Connection Failures"`
AvgArtifactDBQueryTime float64 `perflib:"Average Artifact Database Query Time"`
ConfigDBFailures float64 `perflib:"Configuration Database Connection Failures"`
AvgConfigDBQueryTime float64 `perflib:"Average Config Database Query Time"`
FederationMetadataRequests float64 `perflib:"Federation Metadata Requests"`
}
func (c *adfsCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
@@ -184,5 +442,197 @@ func (c *adfsCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric)
prometheus.CounterValue,
adfsData[0].WindowsIntegratedAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthAuthZRequests,
prometheus.CounterValue,
adfsData[0].OAuthAuthZRequests,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientAuthentications,
prometheus.CounterValue,
adfsData[0].OAuthClientAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientAuthenticationsFailures,
prometheus.CounterValue,
adfsData[0].OAuthClientAuthenticationFailures,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientCredentialsRequestFailures,
prometheus.CounterValue,
adfsData[0].OAuthClientCredentialRequestFailures,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientCredentialsRequests,
prometheus.CounterValue,
adfsData[0].OAuthClientCredentialRequests,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientPrivateKeyJwtAuthenticationFailures,
prometheus.CounterValue,
adfsData[0].OAuthClientPrivKeyJWTAuthnFailures,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientPrivateKeyJwtAuthentications,
prometheus.CounterValue,
adfsData[0].OAuthClientPrivKeyJWTAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientSecretBasicAuthenticationFailures,
prometheus.CounterValue,
adfsData[0].OAuthClientBasicAuthnFailures,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientSecretBasicAuthentications,
prometheus.CounterValue,
adfsData[0].OAuthClientBasicAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientSecretPostAuthenticationFailures,
prometheus.CounterValue,
adfsData[0].OAuthClientSecretPostAuthnFailures,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientSecretPostAuthentications,
prometheus.CounterValue,
adfsData[0].OAuthClientSecretPostAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientWindowsIntegratedAuthenticationFailures,
prometheus.CounterValue,
adfsData[0].OAuthClientWindowsAuthnFailures,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthClientWindowsIntegratedAuthentications,
prometheus.CounterValue,
adfsData[0].OAuthClientWindowsAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthLogonCertificateRequestFailures,
prometheus.CounterValue,
adfsData[0].OAuthLogonCertRequestFailures,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthLogonCertificateTokenRequests,
prometheus.CounterValue,
adfsData[0].OAuthLogonCertTokenRequests,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthPasswordGrantRequestFailures,
prometheus.CounterValue,
adfsData[0].OAuthPasswordGrantRequestFailures,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthPasswordGrantRequests,
prometheus.CounterValue,
adfsData[0].OAuthPasswordGrantRequests,
)
ch <- prometheus.MustNewConstMetric(
c.oAuthTokenRequests,
prometheus.CounterValue,
adfsData[0].OAuthTokenRequests,
)
ch <- prometheus.MustNewConstMetric(
c.samlPTokenRequests,
prometheus.CounterValue,
adfsData[0].SAMLPTokenRequests,
)
ch <- prometheus.MustNewConstMetric(
c.ssoAuthenticationFailures,
prometheus.CounterValue,
adfsData[0].SSOAuthenticationFailures,
)
ch <- prometheus.MustNewConstMetric(
c.ssoAuthentications,
prometheus.CounterValue,
adfsData[0].SSOAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.wsfedTokenRequests,
prometheus.CounterValue,
adfsData[0].WSFedTokenRequests,
)
ch <- prometheus.MustNewConstMetric(
c.wstrustTokenRequests,
prometheus.CounterValue,
adfsData[0].WSTrustTokenRequests,
)
ch <- prometheus.MustNewConstMetric(
c.upAuthenticationFailures,
prometheus.CounterValue,
adfsData[0].UsernamePasswordAuthnFailures,
)
ch <- prometheus.MustNewConstMetric(
c.upAuthentications,
prometheus.CounterValue,
adfsData[0].UsernamePasswordAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.externalAuthenticationFailures,
prometheus.CounterValue,
adfsData[0].ExternalAuthNFailures,
)
ch <- prometheus.MustNewConstMetric(
c.externalAuthentications,
prometheus.CounterValue,
adfsData[0].ExternalAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.artifactDBFailures,
prometheus.CounterValue,
adfsData[0].ArtifactDBFailures,
)
ch <- prometheus.MustNewConstMetric(
c.avgArtifactDBQueryTime,
prometheus.CounterValue,
adfsData[0].AvgArtifactDBQueryTime*math.Pow(10, -8),
)
ch <- prometheus.MustNewConstMetric(
c.configDBFailures,
prometheus.CounterValue,
adfsData[0].ConfigDBFailures,
)
ch <- prometheus.MustNewConstMetric(
c.avgConfigDBQueryTime,
prometheus.CounterValue,
adfsData[0].AvgConfigDBQueryTime*math.Pow(10, -8),
)
ch <- prometheus.MustNewConstMetric(
c.federationMetadataRequests,
prometheus.CounterValue,
adfsData[0].FederationMetadataRequests,
)
return nil
}

9
collector/adfs_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkADFSCollector(b *testing.B) {
benchmarkCollector(b, "adfs", newADFSCollector)
}

View File

@@ -1,10 +1,11 @@
//go:build windows
// +build windows
package collector
import (
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {

View File

@@ -148,3 +148,7 @@ func expandEnabledChildCollectors(enabled string) []string {
sort.Strings(result)
return result
}
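// milliSecToSec converts a perflib millisecond value into seconds.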
func milliSecToSec(t float64) float64 {
return t / 1000
}

View File

@@ -3,6 +3,8 @@ package collector
import (
"reflect"
"testing"
"github.com/prometheus/client_golang/prometheus"
)
func TestExpandChildCollectors(t *testing.T) {
@@ -32,3 +34,27 @@ func TestExpandChildCollectors(t *testing.T) {
})
}
}
func benchmarkCollector(b *testing.B, name string, collectFunc func() (Collector, error)) {
// Create perflib scrape context. Some perflib collectors required a correct context,
// or will fail during benchmark.
scrapeContext, err := PrepareScrapeContext([]string{name})
if err != nil {
b.Error(err)
}
c, err := collectFunc()
if err != nil {
b.Error(err)
}
metrics := make(chan prometheus.Metric)
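// Drain the channel in the background so the benchmarked Collect call never blocks on an unread metric.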
go func() {
for {
<-metrics
}
}()
for i := 0; i < b.N; i++ {
c.Collect(scrapeContext, metrics) //nolint:errcheck
}
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkContainerCollector(b *testing.B) {
benchmarkCollector(b, "container", NewContainerMetricsCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -8,8 +9,8 @@ import (
"strings"
"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {

9
collector/cpu_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkCPUCollector(b *testing.B) {
benchmarkCollector(b, "cpu", newCPUCollector)
}

View File

@@ -1,12 +1,12 @@
//go:build windows
// +build windows
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/headers/sysinfoapi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
)
@@ -60,51 +60,47 @@ func (c *CSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) e
return nil
}
// Win32_ComputerSystem docs:
// - https://msdn.microsoft.com/en-us/library/aa394102
type Win32_ComputerSystem struct {
NumberOfLogicalProcessors uint32
TotalPhysicalMemory uint64
DNSHostname string
Domain string
Workgroup *string
}
func (c *CSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_ComputerSystem
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
// Get systeminfo for number of processors
systemInfo := sysinfoapi.GetSystemInfo()
// Get memory status for physical memory
mem, err := sysinfoapi.GlobalMemoryStatusEx()
if err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
ch <- prometheus.MustNewConstMetric(
c.LogicalProcessors,
prometheus.GaugeValue,
float64(dst[0].NumberOfLogicalProcessors),
float64(systemInfo.NumberOfProcessors),
)
ch <- prometheus.MustNewConstMetric(
c.PhysicalMemoryBytes,
prometheus.GaugeValue,
float64(dst[0].TotalPhysicalMemory),
float64(mem.TotalPhys),
)
var fqdn string
if dst[0].Workgroup == nil || dst[0].Domain != *dst[0].Workgroup {
fqdn = dst[0].DNSHostname + "." + dst[0].Domain
} else {
fqdn = dst[0].DNSHostname
hostname, err := sysinfoapi.GetComputerName(sysinfoapi.ComputerNameDNSHostname)
if err != nil {
return nil, err
}
domain, err := sysinfoapi.GetComputerName(sysinfoapi.ComputerNameDNSDomain)
if err != nil {
return nil, err
}
fqdn, err := sysinfoapi.GetComputerName(sysinfoapi.ComputerNameDNSFullyQualified)
if err != nil {
return nil, err
}
ch <- prometheus.MustNewConstMetric(
c.Hostname,
prometheus.GaugeValue,
1.0,
dst[0].DNSHostname,
dst[0].Domain,
hostname,
domain,
fqdn,
)

9
collector/cs_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkCsCollector(b *testing.B) {
benchmarkCollector(b, "cs", NewCSCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -128,7 +129,7 @@ func NewDFSRCollector() (Collector, error) {
ConnectionFilesReceivedTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connection_received_files_total"),
"Total number of files receieved for connection",
"Total number of files received for connection",
[]string{"name"},
nil,
),

9
collector/dfsr_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkDFSRCollector(b *testing.B) {
benchmarkCollector(b, "dfsr", NewDFSRCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

9
collector/dhcp_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkDHCPCollector(b *testing.B) {
benchmarkCollector(b, "dhcp", NewDhcpCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -136,7 +137,7 @@ func NewDNSCollector() (Collector, error) {
),
Responses: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "responses_total"),
"Number of reponses sent by DNS server",
"Number of responses sent by DNS server",
[]string{"protocol"},
nil,
),

9
collector/dns_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkDNSCollector(b *testing.B) {
benchmarkCollector(b, "dns", NewDNSCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -234,7 +235,7 @@ func (c *exchangeCollector) collectADAccessProcesses(ctx *ScrapeContext, ch chan
}
// since we're not including the PID suffix from the instance names in the label names,
// we get an occational duplicate. This seems to affect about 4 instances only on this object.
// we get an occasional duplicate. This seems to affect about 4 instances only on this object.
labelUseCount[labelName]++
if labelUseCount[labelName] > 1 {
labelName = fmt.Sprintf("%s_%d", labelName, labelUseCount[labelName])

View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkExchangeCollector(b *testing.B) {
benchmarkCollector(b, "exchange", newExchangeCollector)
}

View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkFsrmQuotaCollector(b *testing.B) {
benchmarkCollector(b, "fsrmquota", newFSRMQuotaCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

9
collector/hyperv_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkHypervCollector(b *testing.B) {
benchmarkCollector(b, "hyperv", NewHyperVCollector)
}

File diff suppressed because it is too large

9
collector/iis_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkIISCollector(b *testing.B) {
benchmarkCollector(b, "iis", NewIISCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -103,14 +104,14 @@ func NewLogicalDiskCollector() (Collector, error) {
FreeSpace: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "free_bytes"),
"Free space in bytes (LogicalDisk.PercentFreeSpace)",
"Free space in bytes, updates every 10-15 min (LogicalDisk.PercentFreeSpace)",
[]string{"volume"},
nil,
),
TotalSpace: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "size_bytes"),
"Total space in bytes (LogicalDisk.PercentFreeSpace_Base)",
"Total space in bytes, updates every 10-15 min (LogicalDisk.PercentFreeSpace_Base)",
[]string{"volume"},
nil,
),

View File

@@ -0,0 +1,13 @@
package collector
import (
"testing"
)
func BenchmarkLogicalDiskCollector(b *testing.B) {
// Whitelist is not set in testing context (kingpin flags not parsed), causing the collector to skip all disks.
localVolumeWhitelist := ".+"
volumeWhitelist = &localVolumeWhitelist
benchmarkCollector(b, "logical_disk", NewLogicalDiskCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

10
collector/logon_test.go Normal file
View File

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkLogonCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewLogonCollector)
}

View File

@@ -1,6 +1,7 @@
// returns data points from Win32_PerfRawData_PerfOS_Memory
// <add link to documentation here> - Win32_PerfRawData_PerfOS_Memory class
//go:build windows
// +build windows
package collector

9
collector/memory_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkMemoryCollector(b *testing.B) {
benchmarkCollector(b, "memory", NewMemoryCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -93,29 +94,27 @@ func (c *Win32_PerfRawData_MSMQ_MSMQQueueCollector) collect(ch chan<- prometheus
}
for _, msmq := range dst {
if msmq.Name == "Computer Queues" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.BytesinJournalQueue,
prometheus.GaugeValue,
float64(msmq.BytesinJournalQueue),
strings.ToLower(msmq.Name),
)
ch <- prometheus.MustNewConstMetric(
c.BytesinQueue,
prometheus.GaugeValue,
float64(msmq.BytesinQueue),
strings.ToLower(msmq.Name),
)
ch <- prometheus.MustNewConstMetric(
c.MessagesinJournalQueue,
prometheus.GaugeValue,
float64(msmq.MessagesinJournalQueue),
strings.ToLower(msmq.Name),
)
ch <- prometheus.MustNewConstMetric(
c.MessagesinQueue,
prometheus.GaugeValue,

10
collector/msmq_test.go Normal file
View File

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkMsmqCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewMSMQCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -70,7 +71,7 @@ func getMSSQLInstances() mssqlInstancesType {
type mssqlCollectorsMap map[string]mssqlCollectorFunc
func mssqlAvailableClassCollectors() string {
return "accessmethods,availreplica,bufman,databases,dbreplica,genstats,locks,memmgr,sqlstats,sqlerrors,transactions"
return "accessmethods,availreplica,bufman,databases,dbreplica,genstats,locks,memmgr,sqlstats,sqlerrors,transactions,waitstats"
}
func (c *MSSQLCollector) getMSSQLCollectors() mssqlCollectorsMap {
@@ -86,6 +87,7 @@ func (c *MSSQLCollector) getMSSQLCollectors() mssqlCollectorsMap {
mssqlCollectors["sqlstats"] = c.collectSQLStats
mssqlCollectors["sqlerrors"] = c.collectSQLErrors
mssqlCollectors["transactions"] = c.collectTransactions
mssqlCollectors["waitstats"] = c.collectWaitStats
return mssqlCollectors
}
@@ -121,6 +123,8 @@ func mssqlGetPerfObjectName(sqlInstance string, collector string) string {
suffix = "SQL Statistics"
case "transactions":
suffix = "Transactions"
case "waitstats":
suffix = "Wait Statistics"
}
return (prefix + suffix)
}
@@ -382,6 +386,20 @@ type MSSQLCollector struct {
TransactionsVersionStoreCreationUnits *prometheus.Desc
TransactionsVersionStoreTruncationUnits *prometheus.Desc
// Win32_PerfRawData_{instance}_SQLServerWaitStatistics
WaitStatsLockWaits *prometheus.Desc
WaitStatsMemoryGrantQueueWaits *prometheus.Desc
WaitStatsThreadSafeMemoryObjectsWaits *prometheus.Desc
WaitStatsLogWriteWaits *prometheus.Desc
WaitStatsLogBufferWaits *prometheus.Desc
WaitStatsNetworkIOWaits *prometheus.Desc
WaitStatsPageIOLatchWaits *prometheus.Desc
WaitStatsPageLatchWaits *prometheus.Desc
WaitStatsNonpageLatchWaits *prometheus.Desc
WaitStatsWaitForTheWorkerWaits *prometheus.Desc
WaitStatsWorkspaceSynchronizationWaits *prometheus.Desc
WaitStatsTransactionOwnershipWaits *prometheus.Desc
mssqlInstances mssqlInstancesType
mssqlCollectors mssqlCollectorsMap
mssqlChildCollectorFailure int
@@ -1789,6 +1807,91 @@ func NewMSSQLCollector() (Collector, error) {
nil,
),
// Win32_PerfRawData_{instance}_SQLServerWaitStatistics
WaitStatsLockWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_lock_waits"),
"(WaitStats.LockWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsMemoryGrantQueueWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_memory_grant_queue_waits"),
"(WaitStats.MemoryGrantQueueWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsThreadSafeMemoryObjectsWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_thread_safe_memory_objects_waits"),
"(WaitStats.ThreadSafeMemoryObjectsWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsLogWriteWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_log_write_waits"),
"(WaitStats.LogWriteWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsLogBufferWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_log_buffer_waits"),
"(WaitStats.LogBufferWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsNetworkIOWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_network_io_waits"),
"(WaitStats.NetworkIOWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsPageIOLatchWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_page_io_latch_waits"),
"(WaitStats.PageIOLatchWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsPageLatchWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_page_latch_waits"),
"(WaitStats.PageLatchWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsNonpageLatchWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_nonpage_latch_waits"),
"(WaitStats.NonpageLatchWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsWaitForTheWorkerWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_wait_for_the_worker_waits"),
"(WaitStats.WaitForTheWorkerWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsWorkspaceSynchronizationWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_workspace_synchronization_waits"),
"(WaitStats.WorkspaceSynchronizationWaits)",
[]string{"mssql_instance", "item"},
nil,
),
WaitStatsTransactionOwnershipWaits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "waitstats_transaction_ownership_waits"),
"(WaitStats.TransactionOwnershipWaits)",
[]string{"mssql_instance", "item"},
nil,
),
mssqlInstances: mssqlInstances,
}
@@ -1855,7 +1958,7 @@ func (c *MSSQLCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric
}
wg.Wait()
// this shoud return an error if any? some? children errord.
// this should return an error if any? some? children errored.
if c.mssqlChildCollectorFailure > 0 {
return errors.New("at least one child collector failed")
}
@@ -3731,6 +3834,123 @@ func (c *MSSQLCollector) collectSQLStats(ctx *ScrapeContext, ch chan<- prometheu
return nil, nil
}
// Win32_PerfRawData_MSSQLSERVER_SQLServerWaitStatistics docs:
// - https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-wait-statistics-object
type mssqlWaitStatistics struct {
Name string
WaitStatsLockWaits float64 `perflib:"Lock waits"`
WaitStatsMemoryGrantQueueWaits float64 `perflib:"Memory grant queue waits"`
WaitStatsThreadSafeMemoryObjectsWaits float64 `perflib:"Thread-safe memory objects waits"`
WaitStatsLogWriteWaits float64 `perflib:"Log write waits"`
WaitStatsLogBufferWaits float64 `perflib:"Log buffer waits"`
WaitStatsNetworkIOWaits float64 `perflib:"Network IO waits"`
WaitStatsPageIOLatchWaits float64 `perflib:"Page IO latch waits"`
WaitStatsPageLatchWaits float64 `perflib:"Page latch waits"`
WaitStatsNonpageLatchWaits float64 `perflib:"Non-Page latch waits"`
WaitStatsWaitForTheWorkerWaits float64 `perflib:"Wait for the worker"`
WaitStatsWorkspaceSynchronizationWaits float64 `perflib:"Workspace synchronization waits"`
WaitStatsTransactionOwnershipWaits float64 `perflib:"Transaction ownership waits"`
}
func (c *MSSQLCollector) collectWaitStats(ctx *ScrapeContext, ch chan<- prometheus.Metric, sqlInstance string) (*prometheus.Desc, error) {
var dst []mssqlWaitStatistics
log.Debugf("mssql_waitstats collector iterating sql instance %s.", sqlInstance)
if err := unmarshalObject(ctx.perfObjects[mssqlGetPerfObjectName(sqlInstance, "waitstats")], &dst); err != nil {
return nil, err
}
for _, v := range dst {
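// Each perflib instance of the Wait Statistics object yields one set of samples; its instance name is exported as the "item" label below.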
item := v.Name
ch <- prometheus.MustNewConstMetric(
c.WaitStatsLockWaits,
prometheus.CounterValue,
v.WaitStatsLockWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsMemoryGrantQueueWaits,
prometheus.CounterValue,
v.WaitStatsMemoryGrantQueueWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsThreadSafeMemoryObjectsWaits,
prometheus.CounterValue,
v.WaitStatsThreadSafeMemoryObjectsWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsLogWriteWaits,
prometheus.CounterValue,
v.WaitStatsLogWriteWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsLogBufferWaits,
prometheus.CounterValue,
v.WaitStatsLogBufferWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsNetworkIOWaits,
prometheus.CounterValue,
v.WaitStatsNetworkIOWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsPageIOLatchWaits,
prometheus.CounterValue,
v.WaitStatsPageIOLatchWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsPageLatchWaits,
prometheus.CounterValue,
v.WaitStatsPageLatchWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsNonpageLatchWaits,
prometheus.CounterValue,
v.WaitStatsNonpageLatchWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsWaitForTheWorkerWaits,
prometheus.CounterValue,
v.WaitStatsWaitForTheWorkerWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsWorkspaceSynchronizationWaits,
prometheus.CounterValue,
v.WaitStatsWorkspaceSynchronizationWaits,
sqlInstance, item,
)
ch <- prometheus.MustNewConstMetric(
c.WaitStatsTransactionOwnershipWaits,
prometheus.CounterValue,
v.WaitStatsTransactionOwnershipWaits,
sqlInstance, item,
)
}
return nil, nil
}
type mssqlSQLErrors struct {
Name string
ErrorsPersec float64 `perflib:"Errors/sec"`

9
collector/mssql_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkMSSQLCollector(b *testing.B) {
benchmarkCollector(b, "mssql", NewMSSQLCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -118,7 +119,7 @@ func NewNetworkCollector() (Collector, error) {
nil,
),
CurrentBandwidth: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "current_bandwidth"),
prometheus.BuildFQName(Namespace, subsystem, "current_bandwidth_bytes"),
"(Network.CurrentBandwidth)",
[]string{"nic"},
nil,
@@ -251,7 +252,7 @@ func (c *NetworkCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metr
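// Perflib reports CurrentBandwidth in bits per second; it is divided by 8 below so the metric is exposed in bytes per second.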
ch <- prometheus.MustNewConstMetric(
c.CurrentBandwidth,
prometheus.GaugeValue,
nic.CurrentBandwidth,
nic.CurrentBandwidth/8,
name,
)
}

View File

@@ -1,8 +1,11 @@
//go:build windows
// +build windows
package collector
import "testing"
import (
"testing"
)
func TestNetworkToInstanceName(t *testing.T) {
data := map[string]string{
@@ -15,3 +18,10 @@ func TestNetworkToInstanceName(t *testing.T) {
}
}
}
func BenchmarkNetCollector(b *testing.B) {
// Whitelist is not set in testing context (kingpin flags not parsed), causing the collector to skip all interfaces.
localNicWhitelist := ".+"
nicWhitelist = &localNicWhitelist
benchmarkCollector(b, "net", NewNetworkCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

View File

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNetFrameworkNETCLRExceptionsCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRExceptionsCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

View File

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRInteropCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRInteropCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

View File

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRJitCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRJitCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

View File

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRLoadingCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRLoadingCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

View File

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRLocksAndThreadsCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRLocksAndThreadsCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

View File

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRMemoryCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRMemoryCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

View File

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRRemotingCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRRemotingCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

View File

@@ -0,0 +1,10 @@
package collector
import (
"testing"
)
func BenchmarkNETFrameworkNETCLRSecurityCollector(b *testing.B) {
// No context name required as collector source is WMI
benchmarkCollector(b, "", NewNETFramework_NETCLRSecurityCollector)
}

View File

@@ -1,18 +1,24 @@
//go:build windows
// +build windows
package collector
import (
"errors"
"fmt"
"os"
"strings"
"time"
"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/headers/netapi32"
"github.com/prometheus-community/windows_exporter/headers/psapi"
"github.com/prometheus-community/windows_exporter/headers/sysinfoapi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"golang.org/x/sys/windows/registry"
)
func init() {
registerCollector("os", NewOSCollector)
registerCollector("os", NewOSCollector, "Paging File")
}
// A OSCollector is a Prometheus collector for WMI metrics
@@ -32,6 +38,12 @@ type OSCollector struct {
Timezone *prometheus.Desc
}
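// pagingFileCounter maps the "Paging File" perflib object requested via registerCollector above.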
type pagingFileCounter struct {
Name string
Usage float64 `perflib:"% Usage"`
UsagePeak float64 `perflib:"% Usage Peak"`
}
// NewOSCollector ...
func NewOSCollector() (Collector, error) {
const subsystem = "os"
@@ -86,7 +98,7 @@ func NewOSCollector() (Collector, error) {
nil,
),
ProcessMemoryLimitBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "process_memory_limix_bytes"),
prometheus.BuildFQName(Namespace, subsystem, "process_memory_limit_bytes"),
"OperatingSystem.MaxProcessMemorySize",
nil,
nil,
@@ -121,7 +133,7 @@ func NewOSCollector() (Collector, error) {
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *OSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting os metrics:", desc, err)
return err
}
@@ -146,41 +158,102 @@ type Win32_OperatingSystem struct {
Version string
}
func (c *OSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_OperatingSystem
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
func (c *OSCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
nwgi, err := netapi32.GetWorkstationInfo()
if err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
gmse, err := sysinfoapi.GlobalMemoryStatusEx()
if err != nil {
return nil, err
}
currentTime := time.Now()
timezoneName, _ := currentTime.Zone()
// Get total allocation of paging files across all disks.
memManKey, err := registry.OpenKey(registry.LOCAL_MACHINE, `SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management`, registry.QUERY_VALUE)
if err != nil {
return nil, err
}
defer memManKey.Close()
pagingFiles, _, err := memManKey.GetStringsValue("ExistingPageFiles")
if err != nil {
return nil, err
}
// Get build number and product name from registry
ntKey, err := registry.OpenKey(registry.LOCAL_MACHINE, `SOFTWARE\Microsoft\Windows NT\CurrentVersion`, registry.QUERY_VALUE)
if err != nil {
return nil, err
}
defer ntKey.Close()
pn, _, err := ntKey.GetStringValue("ProductName")
if err != nil {
return nil, err
}
bn, _, err := ntKey.GetStringValue("CurrentBuildNumber")
if err != nil {
return nil, err
}
var fsipf float64
for _, pagingFile := range pagingFiles {
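// ExistingPageFiles stores NT object-manager paths such as \??\C:\pagefile.sys; the prefix is stripped so os.Stat can resolve the file.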
fileString := strings.ReplaceAll(pagingFile, `\??\`, "")
file, err := os.Stat(fileString)
if err != nil {
return nil, err
}
fsipf += float64(file.Size())
}
gpi, err := psapi.GetPerformanceInfo()
if err != nil {
return nil, err
}
var pfc = make([]pagingFileCounter, 0)
if err := unmarshalObject(ctx.perfObjects["Paging File"], &pfc); err != nil {
return nil, err
}
// Get current page file usage.
var pfbRaw float64
for _, pageFile := range pfc {
if strings.Contains(strings.ToLower(pageFile.Name), "_total") {
continue
}
pfbRaw += pageFile.Usage
}
// Subtract from total page file allocation on disk.
pfb := fsipf - (pfbRaw * float64(gpi.PageSize))
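// pfb is the free paging-file space in bytes: total paging-file size on disk minus the in-use pages scaled by the system page size.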
ch <- prometheus.MustNewConstMetric(
c.OSInformation,
prometheus.GaugeValue,
1.0,
dst[0].Caption,
dst[0].Version,
fmt.Sprintf("Microsoft %s", pn), // Caption
fmt.Sprintf("%d.%d.%s", nwgi.VersionMajor, nwgi.VersionMinor, bn), // Version
)
ch <- prometheus.MustNewConstMetric(
c.PhysicalMemoryFreeBytes,
prometheus.GaugeValue,
float64(dst[0].FreePhysicalMemory*1024), // KiB -> bytes
float64(gmse.AvailPhys),
)
time := dst[0].LocalDateTime
ch <- prometheus.MustNewConstMetric(
c.Time,
prometheus.GaugeValue,
float64(time.Unix()),
float64(currentTime.Unix()),
)
timezoneName, _ := time.Zone()
ch <- prometheus.MustNewConstMetric(
c.Timezone,
prometheus.GaugeValue,
@@ -191,55 +264,58 @@ func (c *OSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, er
ch <- prometheus.MustNewConstMetric(
c.PagingFreeBytes,
prometheus.GaugeValue,
float64(dst[0].FreeSpaceInPagingFiles*1024), // KiB -> bytes
pfb,
)
ch <- prometheus.MustNewConstMetric(
c.VirtualMemoryFreeBytes,
prometheus.GaugeValue,
float64(dst[0].FreeVirtualMemory*1024), // KiB -> bytes
float64(gmse.AvailPageFile),
)
// Windows has no defined process limit; the number of processes depends on available resources. This value isn't calculated by WMI and is reported as the default here.
// https://techcommunity.microsoft.com/t5/windows-blog-archive/pushing-the-limits-of-windows-processes-and-threads/ba-p/723824
// https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/win32-operatingsystem
ch <- prometheus.MustNewConstMetric(
c.ProcessesLimit,
prometheus.GaugeValue,
float64(dst[0].MaxNumberOfProcesses),
float64(4294967295),
)
ch <- prometheus.MustNewConstMetric(
c.ProcessMemoryLimitBytes,
prometheus.GaugeValue,
float64(dst[0].MaxProcessMemorySize*1024), // KiB -> bytes
float64(gmse.TotalVirtual),
)
ch <- prometheus.MustNewConstMetric(
c.Processes,
prometheus.GaugeValue,
float64(dst[0].NumberOfProcesses),
float64(gpi.ProcessCount),
)
ch <- prometheus.MustNewConstMetric(
c.Users,
prometheus.GaugeValue,
float64(dst[0].NumberOfUsers),
float64(nwgi.LoggedOnUsers),
)
ch <- prometheus.MustNewConstMetric(
c.PagingLimitBytes,
prometheus.GaugeValue,
float64(dst[0].SizeStoredInPagingFiles*1024), // KiB -> bytes
fsipf,
)
ch <- prometheus.MustNewConstMetric(
c.VirtualMemoryBytes,
prometheus.GaugeValue,
float64(dst[0].TotalVirtualMemorySize*1024), // KiB -> bytes
float64(gmse.TotalPageFile),
)
ch <- prometheus.MustNewConstMetric(
c.VisibleMemoryBytes,
prometheus.GaugeValue,
float64(dst[0].TotalVisibleMemorySize*1024), // KiB -> bytes
float64(gmse.TotalPhys),
)
return nil, nil

9
collector/os_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkOSCollector(b *testing.B) {
benchmarkCollector(b, "os", NewOSCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -42,6 +43,8 @@ type processCollector struct {
PrivateBytes *prometheus.Desc
ThreadCount *prometheus.Desc
VirtualBytes *prometheus.Desc
WorkingSetPrivate *prometheus.Desc
WorkingSetPeak *prometheus.Desc
WorkingSet *prometheus.Desc
processWhitelistPattern *regexp.Regexp
@@ -65,43 +68,43 @@ func newProcessCollector() (Collector, error) {
),
CPUTimeTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_time_total"),
"Returns elapsed time that all of the threads of this process used the processor to execute instructions by mode (privileged, user). An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count.",
"Returns elapsed time that all of the threads of this process used the processor to execute instructions by mode (privileged, user).",
[]string{"process", "process_id", "creating_process_id", "mode"},
nil,
),
HandleCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "handle_count"),
prometheus.BuildFQName(Namespace, subsystem, "handles"),
"Total number of handles the process has open. This number is the sum of the handles currently open by each thread in the process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
IOBytesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "io_bytes_total"),
"Bytes issued to I/O operations in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations. ",
"Bytes issued to I/O operations in different modes (read, write, other).",
[]string{"process", "process_id", "creating_process_id", "mode"},
nil,
),
IOOperationsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "io_operations_total"),
"I/O operations issued in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations. ",
"I/O operations issued in different modes (read, write, other).",
[]string{"process", "process_id", "creating_process_id", "mode"},
nil,
),
PageFaultsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "page_faults_total"),
"Page faults by the threads executing in this process. A page fault occurs when a thread refers to a virtual memory page that is not in its working set in main memory. This can cause the page not to be fetched from disk if it is on the standby list and hence already in main memory, or if it is in use by another process with which the page is shared.",
"Page faults by the threads executing in this process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
PageFileBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "page_file_bytes"),
"Current number of bytes this process has used in the paging file(s). Paging files are used to store pages of memory used by the process that are not contained in other files. Paging files are shared by all processes, and lack of space in paging files can prevent other processes from allocating memory.",
"Current number of bytes this process has used in the paging file(s).",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
PoolBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "pool_bytes"),
"Pool Bytes is the last observed number of bytes in the paged or nonpaged pool. The nonpaged pool is an area of system memory (physical memory used by the operating system) for objects that cannot be written to disk, but must remain in physical memory as long as they are allocated. The paged pool is an area of system memory (physical memory used by the operating system) for objects that can be written to disk when they are not being used. Nonpaged pool bytes is calculated differently than paged pool bytes, so it might not equal the total of paged pool bytes.",
"Pool Bytes is the last observed number of bytes in the paged or nonpaged pool.",
[]string{"process", "process_id", "creating_process_id", "pool"},
nil,
),
@@ -118,20 +121,32 @@ func newProcessCollector() (Collector, error) {
nil,
),
ThreadCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "thread_count"),
"Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread.",
prometheus.BuildFQName(Namespace, subsystem, "threads"),
"Number of threads currently active in this process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
VirtualBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "virtual_bytes"),
"Current size, in bytes, of the virtual address space that the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite and, by using too much, the process can limit its ability to load libraries.",
"Current size, in bytes, of the virtual address space that the process is using.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
WorkingSetPrivate: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "working_set_private_bytes"),
"Size of the working set, in bytes, that is use for this process only and not shared nor shareable by other processes.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
WorkingSetPeak: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "working_set_peak_bytes"),
"Maximum size, in bytes, of the Working Set of this process at any point in time. The Working Set is the set of memory pages touched recently by the threads in the process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
WorkingSet: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "working_set"),
"Maximum number of bytes in the working set of this process at any point in time. The working set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from working sets. If they are needed, they are then soft-faulted back into the working set before they leave main memory.",
prometheus.BuildFQName(Namespace, subsystem, "working_set_bytes"),
"Maximum number of bytes in the working set of this process at any point in time. The working set is the set of memory pages touched recently by the threads in the process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
@@ -380,6 +395,24 @@ func (c *processCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metr
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.WorkingSetPrivate,
prometheus.GaugeValue,
process.WorkingSetPrivate,
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.WorkingSetPeak,
prometheus.GaugeValue,
process.WorkingSetPeak,
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.WorkingSet,
prometheus.GaugeValue,

14
collector/process_test.go Normal file
View File

@@ -0,0 +1,14 @@
package collector
import (
"testing"
)
func BenchmarkProcessCollector(b *testing.B) {
// Whitelist is not set in testing context (kingpin flags not parsed), causing the collector to skip all processes.
localProcessWhitelist := ".+"
processWhitelist = &localProcessWhitelist
// No context name required as collector source is WMI
benchmarkCollector(b, "", newProcessCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -60,7 +61,7 @@ func NewRemoteFx() (Collector, error) {
),
CurrentTCPBandwidth: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "net_current_tcp_bandwidth"),
"TCP Bandwidth detected in bytes per seccond.",
"TCP Bandwidth detected in bytes per second.",
[]string{"session_name"},
nil,
),
@@ -345,7 +346,3 @@ func (c *RemoteFxCollector) collectRemoteFXGraphicsCounters(ctx *ScrapeContext,
return nil, nil
}
func milliSecToSec(t float64) float64 {
return t / 1000
}

View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkRemoteFXCollector(b *testing.B) {
benchmarkCollector(b, "remote_fx", NewRemoteFx)
}

View File

@@ -1,14 +1,17 @@
//go:build windows
// +build windows
package collector
import (
"strconv"
"fmt"
"strings"
"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"golang.org/x/sys/windows"
"golang.org/x/sys/windows/svc/mgr"
"gopkg.in/alecthomas/kingpin.v2"
)
@@ -21,6 +24,10 @@ var (
"collector.service.services-where",
"WQL 'where' clause to use in WMI metrics query. Limits the response to the services you specify and reduces the size of the response.",
).Default("").String()
useAPI = kingpin.Flag(
"collector.service.use-api",
"Use API calls to collect service data instead of WMI. Flag 'collector.service.services-where' won't be effective.",
).Default("false").Bool()
)
// A serviceCollector is a Prometheus collector for WMI Win32_Service metrics
@@ -40,6 +47,9 @@ func NewserviceCollector() (Collector, error) {
if *serviceWhereClause == "" {
log.Warn("No where-clause specified for service collector. This will generate a very large number of metrics!")
}
if *useAPI {
log.Warn("API collection is enabled.")
}
return &serviceCollector{
Information: prometheus.NewDesc(
@@ -73,9 +83,16 @@ func NewserviceCollector() (Collector, error) {
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *serviceCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting service metrics:", desc, err)
return err
if *useAPI {
if err := c.collectAPI(ch); err != nil {
log.Error("failed collecting API service metrics:", err)
return err
}
} else {
if err := c.collectWMI(ch); err != nil {
log.Error("failed collecting WMI service metrics:", err)
return err
}
}
return nil
}
@@ -103,6 +120,15 @@ var (
"paused",
"unknown",
}
apiStateValues = map[uint]string{
windows.SERVICE_CONTINUE_PENDING: "continue pending",
windows.SERVICE_PAUSE_PENDING: "pause pending",
windows.SERVICE_PAUSED: "paused",
windows.SERVICE_RUNNING: "running",
windows.SERVICE_START_PENDING: "start pending",
windows.SERVICE_STOP_PENDING: "stop pending",
windows.SERVICE_STOPPED: "stopped",
}
allStartModes = []string{
"boot",
"system",
@@ -110,6 +136,13 @@ var (
"manual",
"disabled",
}
apiStartModeValues = map[uint32]string{
windows.SERVICE_AUTO_START: "auto",
windows.SERVICE_BOOT_START: "boot",
windows.SERVICE_DEMAND_START: "manual",
windows.SERVICE_DISABLED: "disabled",
windows.SERVICE_SYSTEM_START: "system",
}
allStatuses = []string{
"ok",
"error",
@@ -126,14 +159,14 @@ var (
}
)
func (c *serviceCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
func (c *serviceCollector) collectWMI(ch chan<- prometheus.Metric) error {
var dst []Win32_Service
q := queryAllWhere(&dst, c.queryWhereClause)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
return err
}
for _, service := range dst {
pid := strconv.FormatUint(uint64(service.ProcessId), 10)
pid := fmt.Sprintf("%d", uint64(service.ProcessId))
runAs := ""
if service.StartName != nil {
@@ -191,5 +224,82 @@ func (c *serviceCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
)
}
}
return nil, nil
return nil
}
func (c *serviceCollector) collectAPI(ch chan<- prometheus.Metric) error {
svcmgrConnection, err := mgr.Connect()
if err != nil {
return err
}
defer svcmgrConnection.Disconnect() //nolint:errcheck
// List All Services from the Services Manager
serviceList, err := svcmgrConnection.ListServices()
if err != nil {
return err
}
// Iterate through the Services List
for _, service := range serviceList {
// Retrieve handle for each service
serviceHandle, err := svcmgrConnection.OpenService(service)
if err != nil {
continue
}
defer serviceHandle.Close()
// Get Service Configuration
serviceConfig, err := serviceHandle.Config()
if err != nil {
continue
}
// Get Service Current Status
serviceStatus, err := serviceHandle.Query()
if err != nil {
continue
}
pid := fmt.Sprintf("%d", uint64(serviceStatus.ProcessId))
ch <- prometheus.MustNewConstMetric(
c.Information,
prometheus.GaugeValue,
1.0,
strings.ToLower(service),
serviceConfig.DisplayName,
pid,
serviceConfig.ServiceStartName,
)
for _, state := range apiStateValues {
isCurrentState := 0.0
if state == apiStateValues[uint(serviceStatus.State)] {
isCurrentState = 1.0
}
ch <- prometheus.MustNewConstMetric(
c.State,
prometheus.GaugeValue,
isCurrentState,
strings.ToLower(service),
state,
)
}
for _, startMode := range apiStartModeValues {
isCurrentStartMode := 0.0
if startMode == apiStartModeValues[serviceConfig.StartType] {
isCurrentStartMode = 1.0
}
ch <- prometheus.MustNewConstMetric(
c.StartMode,
prometheus.GaugeValue,
isCurrentStartMode,
strings.ToLower(service),
startMode,
)
}
}
return nil
}
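The API-based path added above is opt-in. As a usage sketch (the collector-selection flag name is assumed from the exporter's existing CLI; only --collector.service.use-api is introduced by this change):

windows_exporter.exe --collectors.enabled "service" --collector.service.use-api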

View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkServiceCollector(b *testing.B) {
benchmarkCollector(b, "service", NewserviceCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

9
collector/smtp_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkSmtpCollector(b *testing.B) {
benchmarkCollector(b, "smtp", NewSMTPCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

9
collector/system_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkSystemCollector(b *testing.B) {
benchmarkCollector(b, "system", NewSystemCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -30,13 +31,13 @@ func NewTCPCollector() (Collector, error) {
return &TCPCollector{
ConnectionFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connection_failures"),
prometheus.BuildFQName(Namespace, subsystem, "connection_failures_total"),
"(TCP.ConnectionFailures)",
[]string{"af"},
nil,
),
ConnectionsActive: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connections_active"),
prometheus.BuildFQName(Namespace, subsystem, "connections_active_total"),
"(TCP.ConnectionsActive)",
[]string{"af"},
nil,
@@ -48,13 +49,13 @@ func NewTCPCollector() (Collector, error) {
nil,
),
ConnectionsPassive: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connections_passive"),
prometheus.BuildFQName(Namespace, subsystem, "connections_passive_total"),
"(TCP.ConnectionsPassive)",
[]string{"af"},
nil,
),
ConnectionsReset: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connections_reset"),
prometheus.BuildFQName(Namespace, subsystem, "connections_reset_total"),
"(TCP.ConnectionsReset)",
[]string{"af"},
nil,

9
collector/tcp_test.go Normal file
View File

@@ -0,0 +1,9 @@
package collector
import (
"testing"
)
func BenchmarkTCPCollector(b *testing.B) {
benchmarkCollector(b, "tcp", NewTCPCollector)
}

View File

@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -81,7 +82,7 @@ func NewTerminalServicesCollector() (Collector, error) {
nil,
),
HandleCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "handle_count"),
prometheus.BuildFQName(Namespace, subsystem, "handles"),
"Total number of handles currently opened by this process. This number is the sum of the handles currently opened by each thread in this process.",
[]string{"session_name"},
nil,
@@ -141,7 +142,7 @@ func NewTerminalServicesCollector() (Collector, error) {
nil,
),
ThreadCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "thread_count"),
prometheus.BuildFQName(Namespace, subsystem, "threads"),
"Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread.",
[]string{"session_name"},
nil,


@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkTerminalServicesCollector(b *testing.B) {
	benchmarkCollector(b, "terminal_services", NewTerminalServicesCollector)
}


@@ -11,6 +11,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build !notextfile
// +build !notextfile
package collector
@@ -21,6 +22,7 @@ import (
"io/ioutil"
"os"
"path/filepath"
"reflect"
"sort"
"strings"
"time"
@@ -37,7 +39,7 @@ var (
textFileDirectory = kingpin.Flag(
"collector.textfile.directory",
"Directory to read text files with metrics from.",
).Default("C:\\Program Files\\windows_exporter\\textfile_inputs").String()
).Default(getDefaultPath()).String()
mtimeDesc = prometheus.NewDesc(
prometheus.BuildFQName(Namespace, "textfile", "mtime_seconds"),
@@ -65,6 +67,31 @@ func NewTextFileCollector() (Collector, error) {
}, nil
}
// Given a slice of metric families, determine if any two entries are duplicates.
// Duplicates will be detected where the metric name, labels and label values are identical.
func duplicateMetricEntry(metricFamilies []*dto.MetricFamily) bool {
	uniqueMetrics := make(map[string]map[string]string)
	for _, metricFamily := range metricFamilies {
		metric_name := *metricFamily.Name
		for _, metric := range metricFamily.Metric {
			metric_labels := metric.GetLabel()
			labels := make(map[string]string)
			for _, label := range metric_labels {
				labels[label.GetName()] = label.GetValue()
			}
			// Check if key is present before appending
			_, mapContainsKey := uniqueMetrics[metric_name]
			// Duplicate metric found with identical labels & label values
			if mapContainsKey == true && reflect.DeepEqual(uniqueMetrics[metric_name], labels) {
				return true
			}
			uniqueMetrics[metric_name] = labels
		}
	}
	return false
}
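As a rough illustration of what this check catches, the sketch below parses two text-format payloads the same way the textfile collector parses `.prom` files and runs the duplicate check over the combined families. It assumes it lives in the `collector` package (so it can reach the unexported `duplicateMetricEntry`); `wouldCollide` is a hypothetical helper, not part of the exporter.

```
package collector

import (
	"strings"

	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/common/expfmt"
)

// wouldCollide parses two .prom-style payloads and reports whether their
// combined metric families would be rejected by duplicateMetricEntry.
func wouldCollide(fileA, fileB string) (bool, error) {
	var parser expfmt.TextParser
	combined := []*dto.MetricFamily{}
	for _, payload := range []string{fileA, fileB} {
		families, err := parser.TextToMetricFamilies(strings.NewReader(payload))
		if err != nil {
			return false, err
		}
		for _, mf := range families {
			combined = append(combined, mf)
		}
	}
	return duplicateMetricEntry(combined), nil
}
```

With this helper, `wouldCollide("my_metric{job=\"a\"} 1\n", "my_metric{job=\"a\"} 2\n")` returns true (same name with identical labels in two files), while `wouldCollide("my_metric{job=\"a\"} 1\n", "my_metric{job=\"b\"} 2\n")` returns false, since the differing label value keeps the series distinct.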
func convertMetricFamily(metricFamily *dto.MetricFamily, ch chan<- prometheus.Metric) {
var valType prometheus.ValueType
var val float64
@@ -223,6 +250,10 @@ func (c *textFileCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Met
error = 1.0
}
// Create empty metricFamily slice here and append parsedFamilies to it inside the loop.
// Once loop is complete, raise error if any duplicates are present.
// This will ensure that duplicate metrics are correctly detected between multiple .prom files.
var metricFamilies = []*dto.MetricFamily{}
fileLoop:
for _, f := range files {
if !strings.HasSuffix(f.Name(), ".prom") {
@@ -271,7 +302,16 @@ fileLoop:
// a failure does not appear fresh.
mtimes[f.Name()] = f.ModTime()
for _, mf := range parsedFamilies {
for _, metricFamily := range parsedFamilies {
metricFamilies = append(metricFamilies, metricFamily)
}
}
if duplicateMetricEntry(metricFamilies) {
log.Errorf("Duplicate metrics detected in files")
error = 1.0
} else {
for _, mf := range metricFamilies {
convertMetricFamily(mf, ch)
}
}
@@ -297,3 +337,8 @@ func checkBOM(encoding utfbom.Encoding) error {
return fmt.Errorf(encoding.String())
}
func getDefaultPath() string {
	execPath, _ := os.Executable()
	return filepath.Join(filepath.Dir(execPath), "textfile_inputs")
}
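`getDefaultPath` replaces the hard-coded `C:\Program Files\windows_exporter\textfile_inputs` default with a directory resolved next to the running binary. The standalone sketch below (a hypothetical throwaway program, not part of the exporter) prints what that default expands to for whichever binary runs it.

```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Mirror getDefaultPath: take the directory of the running executable
	// and append the textfile_inputs subdirectory.
	execPath, err := os.Executable()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot resolve executable path:", err)
		os.Exit(1)
	}
	fmt.Println(filepath.Join(filepath.Dir(execPath), "textfile_inputs"))
}
```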


@@ -5,6 +5,8 @@ import (
"io/ioutil"
"strings"
"testing"
dto "github.com/prometheus/client_model/go"
)
func TestCRFilter(t *testing.T) {
@@ -45,3 +47,108 @@ func TestCheckBOM(t *testing.T) {
}
}
}
func TestDuplicateMetricEntry(t *testing.T) {
	metric_name := "windows_sometest"
	metric_help := "This is a Test."
	metric_type := dto.MetricType_GAUGE

	gauge_value := 1.0
	gauge := dto.Gauge{
		Value: &gauge_value,
	}

	label1_name := "display_name"
	label1_value := "foobar"
	label1 := dto.LabelPair{
		Name:  &label1_name,
		Value: &label1_value,
	}

	label2_name := "display_version"
	label2_value := "13.4.0"
	label2 := dto.LabelPair{
		Name:  &label2_name,
		Value: &label2_value,
	}

	metric1 := dto.Metric{
		Label: []*dto.LabelPair{&label1, &label2},
		Gauge: &gauge,
	}

	metric2 := dto.Metric{
		Label: []*dto.LabelPair{&label1, &label2},
		Gauge: &gauge,
	}

	duplicate := dto.MetricFamily{
		Name:   &metric_name,
		Help:   &metric_help,
		Type:   &metric_type,
		Metric: []*dto.Metric{&metric1, &metric2},
	}

	duplicateFamily := []*dto.MetricFamily{}
	duplicateFamily = append(duplicateFamily, &duplicate)

	// Ensure detection for duplicate metrics
	if !duplicateMetricEntry(duplicateFamily) {
		t.Errorf("Duplicate not found in duplicateFamily")
	}

	label3_name := "test"
	label3_value := "1.0"
	label3 := dto.LabelPair{
		Name:  &label3_name,
		Value: &label3_value,
	}

	metric3 := dto.Metric{
		Label: []*dto.LabelPair{&label1, &label2, &label3},
		Gauge: &gauge,
	}

	differentLabels := dto.MetricFamily{
		Name:   &metric_name,
		Help:   &metric_help,
		Type:   &metric_type,
		Metric: []*dto.Metric{&metric1, &metric3},
	}

	duplicateFamily = []*dto.MetricFamily{}
	duplicateFamily = append(duplicateFamily, &differentLabels)

	// Additional label on second metric should not be cause for duplicate detection
	if duplicateMetricEntry(duplicateFamily) {
		t.Errorf("Unexpected duplicate found in differentLabels")
	}

	label4_value := "2.0"
	label4 := dto.LabelPair{
		Name:  &label3_name,
		Value: &label4_value,
	}

	metric4 := dto.Metric{
		Label: []*dto.LabelPair{&label1, &label2, &label4},
		Gauge: &gauge,
	}

	differentValues := dto.MetricFamily{
		Name:   &metric_name,
		Help:   &metric_help,
		Type:   &metric_type,
		Metric: []*dto.Metric{&metric3, &metric4},
	}

	duplicateFamily = []*dto.MetricFamily{}
	duplicateFamily = append(duplicateFamily, &differentValues)

	// Additional label with different values metric should not be cause for duplicate detection
	if duplicateMetricEntry(duplicateFamily) {
		t.Errorf("Unexpected duplicate found in differentValues")
	}
}


@@ -1,6 +1,8 @@
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
@@ -75,6 +77,11 @@ func (c *thermalZoneCollector) collect(ch chan<- prometheus.Metric) (*prometheus
return nil, err
}
// ThermalZone collector has been known to 'successfully' return an empty result.
if len(dst) == 0 {
return nil, errors.New("Empty results set for collector")
}
for _, info := range dst {
//Divide by 10 and subtract 273.15 to convert decikelvin to celsius
ch <- prometheus.MustNewConstMetric(
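The thermal zone counters report temperature in tenths of a kelvin, which is what the "Divide by 10 and subtract 273.15" comment above refers to. A tiny standalone sketch of that conversion (the sample value is made up):

```
package main

import "fmt"

// decikelvinToCelsius converts a temperature in tenths of a kelvin, as the
// thermal zone counters expose it, to degrees Celsius.
func decikelvinToCelsius(dk uint32) float64 {
	return float64(dk)/10.0 - 273.15
}

func main() {
	fmt.Printf("%.2f\n", decikelvinToCelsius(3031)) // 303.1 K -> 29.95 °C
}
```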


@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkThermalZoneCollector(b *testing.B) {
	benchmarkCollector(b, "thermalzone", NewThermalZoneCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector
@@ -44,7 +45,7 @@ func newTimeCollector() (Collector, error) {
nil,
),
NTPClientTimeSourceCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "ntp_client_time_source_count"),
prometheus.BuildFQName(Namespace, subsystem, "ntp_client_time_sources"),
"Active number of NTP Time sources being used by the client",
nil,
nil,

collector/time_test.go (new file, 9 lines)

@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkTimeCollector(b *testing.B) {
	benchmarkCollector(b, "time", newTimeCollector)
}


@@ -1,3 +1,4 @@
//go:build windows
// +build windows
package collector

collector/vmware_test.go (new file, 9 lines)

@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkVmwareCollector(b *testing.B) {
	benchmarkCollector(b, "vmware", NewVmwareCollector)
}

docs/collector.adcs.md (new file, 55 lines)

@@ -0,0 +1,55 @@
# adcs collector
The adcs collector exposes metrics about Active Directory Certificate Services. Note that this collector has only been tested against Windows Server 2019.
Other Windows Server versions may work but are not tested.
|||
-|-
Metric name prefix | `adcs`
Data source | Perflib
Counters | `Certification Authority`
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
|requests_total|Total certificate requests processed|counter|`cert_template`|
|request_processing_time_seconds|Last time elapsed for certificate requests|gauge|`cert_template`|
|retrievals_total|Total certificate retrieval requests processed|counter|`cert_template`|
|retrievals_processing_time_seconds|Last time elapsed for certificate retrieval request|gauge|`cert_template`|
|failed_requests_total|Total failed certificate requests processed|counter|`cert_template`|
|issued_requests_total|Total issued certificate requests processed|counter|`cert_template`|
|pending_requests_total|Total pending certificate requests processed|counter|`cert_template`|
|request_cryptographic_signing_time_seconds|Last time elapsed for signing operation request|gauge|`cert_template`|
|request_policy_module_processing_time_seconds|Last time elapsed for policy module processing request|gauge|`cert_template`|
|challenge_responses_total|Total certificate challenge responses processed|counter|`cert_template`|
|challenge_response_processing_time_seconds|Last time elapsed for challenge response|gauge|`cert_template`|
|signed_certificate_timestamp_lists_total|Total Signed Certificate Timestamp Lists processed|counter|`cert_template`|
|signed_certificate_timestamp_list_processing_time_seconds|Last time elapsed for Signed Certificate Timestamp List|gauge|`cert_template`|
### Example metric
```
windows_adcs_issued_requests_total{cert_template="Administrator"} 0
windows_adcs_issued_requests_total{cert_template="DirectoryEmailReplication"} 0
windows_adcs_issued_requests_total{cert_template="DomainController"} 1
windows_adcs_issued_requests_total{cert_template="DomainControllerAuthentication"} 0
windows_adcs_issued_requests_total{cert_template="EFS"} 0
windows_adcs_issued_requests_total{cert_template="EFSRecovery"} 0
windows_adcs_issued_requests_total{cert_template="KerberosAuthentication"} 0
windows_adcs_issued_requests_total{cert_template="Machine"} 0
windows_adcs_issued_requests_total{cert_template="SubCA"} 0
windows_adcs_issued_requests_total{cert_template="User"} 0
windows_adcs_issued_requests_total{cert_template="WebServer"} 0
windows_adcs_issued_requests_total{cert_template="_Total"} 1
```
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_


@@ -1,6 +1,6 @@
# adfs collector
The adfs collector exposes metrics about Active Directory Federation Services. Note that this collector has only been tested against ADFS 4.0 (2016).
The ADFS collector exposes metrics about Active Directory Federation Services. Note that this collector has only been tested against ADFS 4.0 / [Farm Behavior Level (FBL) 3](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server#ad-fs-farm-behavior-levels-fbl) (Server 2016).
Other ADFS versions may work but are not tested.
|||
@@ -28,6 +28,49 @@ Name | Description | Type | Labels
`windows_adfs_password_change_succeeded_total` | Total number of succeeded password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`windows_adfs_token_requests_total` | Total number of requested access tokens | counter | None
`windows_adfs_windows_integrated_authentications_total` | Total number of Windows integrated authentications using Kerberos or NTLM | counter | None
`ad_login_connection_failures_total` | Total number of connection failures to an Active Directory domain controller | counter | None
`certificate_authentications_total` | Total number of User Certificate authentications | counter | None
`device_authentications_total` | Total number of Device authentications | counter | None
`extranet_account_lockouts_total` | Total number of Extranet Account Lockouts | counter | None
`federated_authentications_total` | Total number of authentications from a federated source | counter | None
`passport_authentications_total` | Total number of Microsoft Passport SSO authentications | counter | None
`passive_requests_total` | Total number of passive (browser-based) requests | counter | None
`password_change_failed_total` | Total number of failed password changes | counter | None
`password_change_succeeded_total` | Total number of successful password changes | counter | None
`token_requests_total` | Total number of token requests | counter | None
`windows_integrated_authentications_total` | Total number of Windows integrated authentications (Kerberos/NTLM) | counter | None
`oauth_authorization_requests_total` | Total number of incoming requests to the OAuth Authorization endpoint | counter | None
`oauth_client_authentication_success_total` | Total number of successful OAuth client Authentications | counter | None
`oauth_client_authentication_failure_total` | Total number of failed OAuth client Authentications | counter | None
`oauth_client_credentials_failure_total` | Total number of failed OAuth Client Credentials Requests | counter | None
`oauth_client_credentials_success_total` | Total number of successful RP tokens issued for OAuth Client Credentials Requests | counter | None
`oauth_client_privkey_jtw_authentication_failure_total` | Total number of failed OAuth Client Private Key Jwt Authentications | counter | None
`oauth_client_privkey_jwt_authentications_success_total` | Total number of successful OAuth Client Private Key Jwt Authentications | counter | None
`oauth_client_secret_basic_authentications_failure_total` | Total number of failed OAuth Client Secret Basic Authentications | counter | None
`oauth_client_secret_basic_authentications_success_total` | Total number of successful OAuth Client Secret Basic Authentications | counter | None
`oauth_client_secret_post_authentications_failure_total` | Total number of failed OAuth Client Secret Post Authentications | counter | None
`oauth_client_secret_post_authentications_success_total` | Total number of successful OAuth Client Secret Post Authentications | counter | None
`oauth_client_windows_authentications_failure_total` | Total number of failed OAuth Client Windows Integrated Authentications | counter | None
`oauth_client_windows_authentications_success_total` | Total number of successful OAuth Client Windows Integrated Authentications | counter | None
`oauth_logon_certificate_requests_failure_total` | Total number of failed OAuth Logon Certificate Requests | counter | None
`oauth_logon_certificate_token_requests_success_total` | Total number of successful RP tokens issued for OAuth Logon Certificate Requests | counter | None
`oauth_password_grant_requests_failure_total` | Total number of failed OAuth Password Grant Requests | counter | None
`oauth_password_grant_requests_success_total` | Total number of successful OAuth Password Grant Requests | counter | None
`oauth_token_requests_success_total` | Total number of successful RP tokens issued over OAuth protocol | counter | None
`samlp_token_requests_success_total` | Total number of successful RP tokens issued over SAML-P protocol | counter | None
`sso_authentications_failure_total` | Total number of failed SSO authentications | counter | None
`sso_authentications_success_total` | Total number of successful SSO authentications | counter | None
`wsfed_token_requests_success_total` | Total number of successful RP tokens issued over WS-Fed protocol | counter | None
`wstrust_token_requests_success_total` | Total number of successful RP tokens issued over WS-Trust protocol | counter | None
`userpassword_authentications_failure_total` | Total number of failed AD U/P authentications | counter | None
`userpassword_authentications_success_total` | Total number of successful AD U/P authentications | counter | None
`external_authentications_failure_total` | Total number of failed authentications from external MFA providers | counter | None
`external_authentications_success_total` | Total number of successful authentications from external MFA providers | counter | None
`db_artifact_failure_total` | Total number of failures connecting to the artifact database | counter | None
`db_artifact_query_time_seconds_total` | Accumulator of time taken for an artifact database query | counter | None
`db_config_failure_total` | Total number of failures connecting to the configuration database | counter | None
`db_config_query_time_seconds_total` | Accumulator of time taken for a configuration database query | counter | None
`federation_metadata_requests_total` | Total number of Federation Metadata requests | counter | None
### Example metric
Show rate of device authentications in AD FS:
@@ -37,6 +80,11 @@ rate(windows_adfs_device_authentications)[2m]
## Useful queries
|Query|Description|
|---|----|
|`rate(windows_adfs_oauth_password_grant_requests_failure_total[5m])`| Rate of OAuth requests failing due to bad client/resource values|
|`rate(windows_adfs_userpassword_authentications_failure_total[5m])`| Rate of `/adfs/oauth2/token/` requests failing due to bad username/password values (possible credential spraying)|
## Alerting examples
**prometheus.rules**
```yaml


@@ -1,10 +1,11 @@
# container collector
The container collector exposes metrics about containers running on system
The container collector exposes metrics about containers running on a Hyper-V system
|||
-|-
Metric name prefix | `container`
Data source | [hcsshim](https://github.com/Microsoft/hcsshim)
Enabled by default? | No
## Flags


@@ -27,11 +27,11 @@ These metrics are only exposed on Windows Server 2008R2 and later:
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_cpu_clock_interrupts_total` | Total number of received and serviced clock tick interrupts | `core`
`windows_cpu_idle_break_events_total` | Total number of time processor was woken from idle | `core`
`windows_cpu_parking_status` | Parking Status represents whether a processor is parked or not | `gauge`
`windows_cpu_core_frequency_mhz` | Core frequency in megahertz | `gauge`
`windows_cpu_processor_performance` | Processor Performance is the average performance of the processor while it is executing instructions, as a percentage of the nominal performance of the processor. On some processors, Processor Performance may exceed 100% | `gauge`
`windows_cpu_clock_interrupts_total` | Total number of received and serviced clock tick interrupts | counter | `core`
`windows_cpu_idle_break_events_total` | Total number of time processor was woken from idle | counter | `core`
`windows_cpu_parking_status` | Parking Status represents whether a processor is parked or not | gauge | `core`
`windows_cpu_core_frequency_mhz` | Core frequency in megahertz | gauge | `core`
`windows_cpu_processor_performance` | Processor Performance is the average performance of the processor while it is executing instructions, as a percentage of the nominal performance of the processor. On some processors, Processor Performance may exceed 100% | gauge | `core`
### Example metric
Show frequency of host CPU cores


@@ -44,7 +44,7 @@ Name | Description | Type | Labels
`windows_dfsr_folder_deleted_bytes_cleaned_up_total` | Total size (in bytes) of replicating deleted files and folders that were cleaned up from the Conflict and Deleted folder. | gauge | name
`windows_dfsr_folder_deleted_bytes_generated_total` | Total size (in bytes) of replicated deleted files and folders that were moved to the Conflict and Deleted folder after they were deleted from a replicated folder on a sending member. | counter | name
`windows_dfsr_folder_deleted_files_cleaned_up_total` | Number of files and folders that were cleaned up from the Conflict and Deleted folder. | counter | name
`windows_dfsr_folder_deleted_files_generated_total` | Number of deleted fils and folders that were moved to the Conflict and Deleted folder. | counter | name
`windows_dfsr_folder_deleted_files_generated_total` | Number of deleted files and folders that were moved to the Conflict and Deleted folder. | counter | name
`windows_dfsr_folder_file_installs_retried_total` | Total number of file installs that are being retried due to sharing violations or other errors encountered when installing the files. The DFS Replication service replicates staged files into a staging folder, uncompresses them in the Installing folder, and renames them to the target location. The second and third steps of this process are known as installing the file. | counter | name
`windows_dfsr_folder_file_installs_succeeded_total` | Total number of files that were successfully received from sending members and installed locally on this server. The DFS Replication service replicates staged files into a staging folder, uncompresses them in the Installing folder, and renames them to the target location. The second and third steps of this process are known as installing the file. | counter | name
`windows_dfsr_folder_files_received_total` | Total number of files received. | counter | name


@@ -36,7 +36,7 @@ Name | Description
`windows_exchange_transport_queues_internal_active_remote_delivery` | Internal Active Remote Delivery Queue length
`windows_exchange_transport_queues_active_mailbox_delivery` | Active Mailbox Delivery Queue length
`windows_exchange_transport_queues_retry_mailbox_delivery` | Retry Mailbox Delivery Queue length
`windows_exchange_transport_queues_unreachable` | Unreachable Queue lengt
`windows_exchange_transport_queues_unreachable` | Unreachable Queue length
`windows_exchange_transport_queues_external_largest_delivery` | External Largest Delivery Queue length
`windows_exchange_transport_queues_internal_largest_delivery` | Internal Largest Delivery Queue length
`windows_exchange_transport_queues_poison` | Poison Queue length


@@ -1,6 +1,6 @@
# Microsoft File Server Resource Manager (FSRM) Quotas collector
The fsrmquota collector exposes metrics about File Server Ressource Manager Quotas. Note that this collector has only been tested against Windows server 2012R2.
The fsrmquota collector exposes metrics about File Server Resource Manager Quotas. Note that this collector has only been tested against Windows server 2012R2.
Other FSRM versions may work but are not tested.
|||
@@ -48,5 +48,5 @@ rate(windows_fsrmquota_usage_bytes)[1d]
severity: "high"
annotations:
summary: "High Quotas Usage"
description: "High use of File Ressource.\n Quotas: {{ $labels.path }}\n Current use : {{ $value }}"
description: "High use of File Resource.\n Quotas: {{ $labels.path }}\n Current use : {{ $value }}"
```


@@ -5,7 +5,7 @@ The iis collector exposes metrics about the IIS server
|||
-|-
Metric name prefix | `iis`
Classes | `Win32_PerfRawData_W3SVC_WebService`<br/>`Win32_PerfRawData_APPPOOLCountersProvider_APPPOOLWAS`<br/>`Win32_PerfRawData_W3SVCW3WPCounterProvider_W3SVCW3WP`<br/>`Win32_PerfRawData_W3SVC_WebServiceCache`
Data source | Perflib
Enabled by default? | No
## Flags


@@ -30,11 +30,15 @@ Name | Description | Type | Labels
`writes_total` | Rate of write operations on the disk | counter | `volume`
`read_seconds_total` | Seconds the disk was busy servicing read requests | counter | `volume`
`write_seconds_total` | Seconds the disk was busy servicing write requests | counter | `volume`
`free_bytes` | Unused space of the disk in bytes | gauge | `volume`
`size_bytes` | Total size of the disk in bytes | gauge | `volume`
`free_bytes` | Unused space of the disk in bytes (not real time, updates every 10-15 min) | gauge | `volume`
`size_bytes` | Total size of the disk in bytes (not real time, updates every 10-15 min) | gauge | `volume`
`idle_seconds_total` | Seconds the disk was idle (not servicing read/write requests) | counter | `volume`
`split_ios_total` | Number of I/Os to the disk split into multiple I/Os | counter | `volume`
### Warning about size metrics
The `free_bytes` and `size_bytes` metrics are not updated in real time and might have a delay of 10-15min.
This is the same behavior as the underlying Windows performance counters.
### Example metric
Query the rate of write operations to a disk
```


@@ -27,7 +27,7 @@ windows_logon_logon_type{status="interactive"}
## Useful queries
Query the total number of local and remote (I.E. Terminal Services) interactive sessions.
```
windows_logon_logon_type{status=~"interactive|remoteinteractive"}
windows_logon_logon_type{status=~"interactive|remote_interactive"}
```
## Alerting examples


@@ -7,7 +7,7 @@ The memory collector exposes metrics about system memory usage
Metric name prefix | `memory`
Data source | Perflib
Classes | `Win32_PerfRawData_PerfOS_Memory`
Enabled by default? | Yes
Enabled by default? | No
## Flags


@@ -5,14 +5,14 @@ The mssql collector exposes metrics about the MSSQL server
|||
-|-
Metric name prefix | `mssql`
Classes | [`Win32_PerfRawData_MSSQLSERVER_SQLServerAccessMethods`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-access-methods-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerAvailabilityReplica`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-availability-replica)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerBufferManager`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerDatabaseReplica`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-database-replica)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerDatabases`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-databases-object?view=sql-server-2017)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerGeneralStatistics`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-general-statistics-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerLocks`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-locks-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerMemoryManager`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-memory-manager-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerSQLStatistics`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-sql-statistics-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerSQLErrors`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-sql-errors-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerTransactions`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-transactions-object)
Classes | [`Win32_PerfRawData_MSSQLSERVER_SQLServerAccessMethods`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-access-methods-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerAvailabilityReplica`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-availability-replica)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerBufferManager`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-buffer-manager-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerDatabaseReplica`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-database-replica)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerDatabases`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-databases-object?view=sql-server-2017)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerGeneralStatistics`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-general-statistics-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerLocks`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-locks-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerMemoryManager`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-memory-manager-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerSQLStatistics`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-sql-statistics-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerSQLErrors`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-sql-errors-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerTransactions`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-transactions-object)<br/>[`Win32_PerfRawData_MSSQLSERVER_SQLServerWaitStatistics`](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/sql-server-wait-statistics-object)
Enabled by default? | No
## Flags
### `--collectors.mssql.classes-enabled`
Comma-separated list of MSSQL WMI classes to use. Supported values are `accessmethods`, `availreplica`, `bufman`, `databases`, `dbreplica`, `genstats`, `locks`, `memmgr`, `sqlstats`, `sqlerrors` and `transactions`.
Comma-separated list of MSSQL WMI classes to use. Supported values are `accessmethods`, `availreplica`, `bufman`, `databases`, `dbreplica`, `genstats`, `locks`, `memmgr`, `sqlstats`, `sqlerrors`, `transactions`, and `waitstats`.
### `--collectors.mssql.class-print`
@@ -127,7 +127,7 @@ Name | Description | Type | Labels
`windows_mssql_databases_bulk_copy_rows` | Number of rows bulk copied per second | counter | `mssql_instance`, `database`
`windows_mssql_databases_bulk_copy_bytes` | Amount of data bulk copied (in kilobytes) per second | counter | `mssql_instance`, `database`
`windows_mssql_databases_commit_table_entries` | The size (row count) of the in-memory portion of the commit table for the database | counter | `mssql_instance`, `database`
`windows_mssql_databases_data_files_size_bytes` | Cumulative size (in kilobytes) of all the data files in the database including any automatic growth. Monitoring this counter is useful, for example, for determining the correct size of tempdb | counter | `mssql_instance`, `database`
`windows_mssql_databases_data_files_size_bytes` | Cumulative size (in kilobytes) of all the data files in the database including any automatic growth. Monitoring this counter is useful, for example, for determining the correct size of tempdb | gauge | `mssql_instance`, `database`
`windows_mssql_databases_dbcc_logical_scan_bytes` | Number of logical read scan bytes per second for database console commands (DBCC) | counter | `mssql_instance`, `database`
`windows_mssql_databases_group_commit_stall_seconds` | Group stall time (microseconds) per second | counter | `mssql_instance`, `database`
`windows_mssql_databases_log_flushed_bytes` | Total number of log bytes flushed | counter | `mssql_instance`, `database`
@@ -244,6 +244,18 @@ Name | Description | Type | Labels
`windows_mssql_transactions_version_store_units` | The number of active allocation units in the snapshot isolation version store in tempdb | counter | `mssql_instance`
`windows_mssql_transactions_version_store_creation_units` | The number of allocation units that have been created in the snapshot isolation store since the instance of the Database Engine was started | counter | `mssql_instance`
`windows_mssql_transactions_version_store_truncation_units` | The number of allocation units that have been removed from the snapshot isolation store since the instance of the Database Engine was started | counter | `mssql_instance`
`windows_mssql_waitstats_lock_waits` | Statistics for processes waiting on a lock | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_memory_grant_queue_waits` | Statistics for processes waiting for memory grant to become available | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_thread_safe_memory_objects_waits` | Statistics for processes waiting on thread-safe memory allocators | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_log_write_waits` | Statistics for processes waiting for log buffer to be written | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_log_buffer_waits` | Statistics for processes waiting for log buffer to be available | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_network_io_waits` | Statistics relevant to wait on network I/O | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_page_io_latch_waits` | Statistics relevant to page I/O latches | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_page_latch_waits` | Statistics relevant to page latches, not including I/O latches | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_nonpage_latch_waits` | Statistics relevant to non-page latches | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_wait_for_the_worker_waits` | Statistics relevant to processes waiting for worker to become available | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_workspace_synchronization_waits` | Statistics relevant to processes synchronizing access to workspace | gauge | `mssql_instance`, `item`
`windows_mssql_waitstats_transaction_ownership_waits` | Statistics relevant to processes synchronizing access to transaction | gauge | `mssql_instance`, `item`
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_

Some files were not shown because too many files have changed in this diff.