Mirror of https://github.com/prometheus-community/windows_exporter.git, synced 2026-02-08 05:56:37 +00:00.

Compare commits: 196 commits
1  .github/CODEOWNERS  vendored  Normal file

@@ -0,0 +1 @@
* @prometheus-community/windows_exporter-reviewers
6  .github/dependabot.yml  vendored  Normal file

@@ -0,0 +1,6 @@
version: 2
updates:
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "weekly"
3  .gitignore  vendored

@@ -4,4 +4,5 @@ VERSION
*.un~
output/
.vscode
.idea
.idea
*.syso
@@ -3,11 +3,10 @@ linters:
  enable:
    - deadcode
    - errcheck
    - golint
    - revive
    - govet
    - gofmt
    - ineffassign
    - interfacer
    - structcheck
    - unconvert
    - varcheck

@@ -20,4 +19,7 @@ issues:
    - # Golint has many capitalisation complaints on WMI class names
      text: "`?\\w+`? should be `?\\w+`?"
      linters:
        - golint
        - revive
    - text: "don't use ALL_CAPS in Go names; use CamelCase"
      linters:
        - revive
@@ -1,5 +1,9 @@
Contributors in alphabetical order
Maintainers in alphabetical order

* [Ben Reedy](https://github.com/breed808) - breed808@breed808.com
* [Calle Pettersson](https://github.com/carlpett) - calle@cape.nu

Alumni

* [Brian Brazil](https://github.com/brian-brazil)
* [Martin Lindhe](https://github.com/martinlindhe)
* [Calle Pettersson](https://github.com/carlpett)
11  Makefile

@@ -1,14 +1,23 @@
export GOOS=windows

build:
.PHONY: build
build: windows_exporter.exe
windows_exporter.exe: **/*.go
	promu build -v

test:
	go test -v ./...

bench:
	go test -v -bench='benchmark(cpu|logicaldisk|logon|memory|net|process|service|system|tcp|time)collector' ./...

lint:
	golangci-lint -c .golangci.yaml run

.PHONY: e2e-test
e2e-test: windows_exporter.exe
	powershell -NonInteractive -ExecutionPolicy Bypass -File .\tools\end-to-end-test.ps1

fmt:
	gofmt -l -w -s .
67  README.md

@@ -11,9 +11,12 @@ Name | Description | Enabled by default
---------|-------------|--------------------
[ad](docs/collector.ad.md) | Active Directory Domain Services |
[adfs](docs/collector.adfs.md) | Active Directory Federation Services |
[cache](docs/collector.cache.md) | Cache metrics |
[cpu](docs/collector.cpu.md) | CPU usage | ✓
[cpu_info](docs/collector.cpu_info.md) | CPU Information |
[cs](docs/collector.cs.md) | "Computer System" metrics (system properties, num cpus/total memory) | ✓
[container](docs/collector.container.md) | Container metrics |
[dfsr](docs/collector.dfsr.md) | DFSR metrics |
[dhcp](docs/collector.dhcp.md) | DHCP Server |
[dns](docs/collector.dns.md) | DNS Server |
[exchange](docs/collector.exchange.md) | Exchange metrics |

@@ -38,8 +41,10 @@ Name | Description | Enabled by default
[process](docs/collector.process.md) | Per-process metrics |
[remote_fx](docs/collector.remote_fx.md) | RemoteFX protocol (RDP) metrics |
[service](docs/collector.service.md) | Service state metrics | ✓
[smtp](docs/collector.smtp.md) | IIS SMTP Server |
[system](docs/collector.system.md) | System calls | ✓
[tcp](docs/collector.tcp.md) | TCP connections |
[time](docs/collector.time.md) | Windows Time Service |
[thermalzone](docs/collector.thermalzone.md) | Thermal information |
[terminal_services](docs/collector.terminal_services.md) | Terminal services (RDS) |
[textfile](docs/collector.textfile.md) | Read prometheus metrics from a text file | ✓
@@ -47,6 +52,21 @@ Name | Description | Enabled by default

See the linked documentation on each collector for more information on reported metrics, configuration settings and usage examples.

### Filtering enabled collectors

The `windows_exporter` will expose all metrics from enabled collectors by default. This is the recommended way to collect metrics, as it avoids errors when comparing metrics of different families.

For advanced use, the `windows_exporter` can be passed an optional list of collectors to filter metrics. The `collect[]` parameter may be used multiple times. In Prometheus configuration you can use this syntax under the [scrape config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#<scrape_config>).

```
params:
    collect[]:
        - foo
        - bar
```

This can be useful for having different Prometheus servers collect specific metrics from nodes.
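As a concrete illustration, a Prometheus scrape job using this parameter could look like the sketch below. This is a minimal sketch, not taken from this repository: the job name, the target address and the choice of the `cpu` and `memory` collectors are placeholder assumptions.

```yaml
# Hypothetical scrape job that only pulls the cpu and memory collectors
# from a single windows_exporter instance.
scrape_configs:
  - job_name: "windows"
    metrics_path: /metrics
    params:
      collect[]:
        - cpu
        - memory
    static_configs:
      - targets: ["192.0.2.10:9182"]   # placeholder target
```

A second Prometheus server could use a different `collect[]` list against the same exporter, which is what makes this split useful.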
## Flags

windows_exporter accepts flags to configure certain behaviours. The ones configuring the global behaviour of the exporter are listed below, while collector-specific ones are documented in the respective collector documentation above.

@@ -56,9 +76,10 @@ Flag | Description | Default value
`--telemetry.addr` | host:port for exporter. | `:9182`
`--telemetry.path` | URL path for surfacing collected metrics. | `/metrics`
`--telemetry.max-requests` | Maximum number of concurrent requests. 0 to disable. | `5`
`--collectors.enabled` | Comma-separated list of collectors to use. Use `[defaults]` as a placeholder for all the collectors enabled by default." | `[defaults]`
`--collectors.enabled` | Comma-separated list of collectors to use. Use `[defaults]` as a placeholder which is expanded to all the collectors enabled by default. | `[defaults]`
`--collectors.print` | If true, print available collectors and exit. |
`--scrape.timeout-margin` | Seconds to subtract from the timeout allowed by the client. Tune to allow for overhead or high loads. | `0.5`
`--web.config.file` | A [web config][web_config] for setting up TLS and Auth | None

## Installation
The latest release can be downloaded from the [releases page](https://github.com/prometheus-community/windows_exporter/releases).
@@ -74,6 +95,7 @@ Name | Description
`LISTEN_PORT` | The port to bind to. Defaults to 9182.
`METRICS_PATH` | The path at which to serve metrics. Defaults to `/metrics`
`TEXTFILE_DIR` | As the `--collector.textfile.directory` flag, provide a directory to read text files with metrics from
`REMOTE_ADDR` | Allows setting comma separated remote IP addresses for the Windows Firewall exception (whitelist). Defaults to an empty string (any remote address).
`EXTRA_FLAGS` | Allows passing full CLI flags. Defaults to an empty string.

Parameters are sent to the installer via `msiexec`. Example invocations:

@@ -92,10 +114,9 @@ On some older versions of Windows you may need to surround parameter values with
msiexec /i C:\Users\Administrator\Downloads\windows_exporter.msi ENABLED_COLLECTORS="ad,iis,logon,memory,process,tcp,thermalzone" TEXTFILE_DIR="C:\custom_metrics\"
```

## Roadmap

See [open issues](https://github.com/prometheus-community/windows_exporter/issues)

## Supported versions

windows_exporter supports Windows Server versions 2008R2 and later, and desktop Windows version 7 and later.

## Usage
@@ -119,7 +140,45 @@ The prometheus metrics will be exposed on [localhost:9182](http://localhost:9182

When there are multiple processes with the same name, WMI represents those after the first instance as `process-name#index`. So to get them all, rather than just the first one, the [regular expression](https://en.wikipedia.org/wiki/Regular_expression) must use `.+`. See [process](docs/collector.process.md) for more information.
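To make the `#index` naming concrete, here is a small, self-contained Go sketch (the process name `firefox` is a made-up example, not something taken from this page) showing why a pattern that only matches the bare name misses the numbered instances, while one ending in a wildcard such as `.+` or `.*` catches them:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// WMI names the second and later instances of a process "name#index",
	// e.g. "firefox", "firefox#1", "firefox#2" (names here are illustrative).
	instances := []string{"firefox", "firefox#1", "firefox#2"}

	exact := regexp.MustCompile(`^firefox$`)  // matches only the first, un-numbered instance
	wild := regexp.MustCompile(`^firefox.*$`) // also matches "#1", "#2", ...

	for _, name := range instances {
		fmt.Printf("%-10s exact=%v wildcard=%v\n", name, exact.MatchString(name), wild.MatchString(name))
	}
}
```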
### Using [defaults] with `--collectors.enabled` argument

Using `[defaults]` with the `--collectors.enabled` argument expands the placeholder to all collectors enabled by default.

`.\windows_exporter.exe --collectors.enabled "[defaults],process,container"`

This enables the additional process and container collectors on top of the defaults.

### Using a configuration file

YAML configuration files can be specified with the `--config.file` flag, e.g. `.\windows_exporter.exe --config.file=config.yml`.

```yaml
collectors:
  enabled: cpu,cs,net,service
collector:
  service:
    services-where: "Name='windows_exporter'"
log:
  level: warn
```

An example configuration file can be found [here](docs/example_config.yml).

#### Configuration file notes

Configuration file values can be mixed with CLI flags, e.g.

`.\windows_exporter.exe --collectors.enabled=cpu,logon`

```yaml
log:
  level: debug
```

CLI flags take priority over values specified in the configuration file. In this example the enabled collectors come from the CLI flag and the log level from the file; if both set the same option, the flag wins.
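Putting the two mechanisms together, a minimal sketch of a mixed setup might look like this; the file name and collector choices are illustrative assumptions, not values taken from this page.

```yaml
# config.yml (hypothetical example)
collectors:
  enabled: cpu,cs,service
log:
  level: warn
```

If the exporter is then started with `.\windows_exporter.exe --config.file=config.yml --collectors.enabled=cpu`, only the cpu collector runs, because the CLI flag overrides the `collectors.enabled` value from the file, while the log level still comes from the file.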
## License

Under [MIT](LICENSE)

[web_config]: https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md
6  SECURITY.md  Normal file

@@ -0,0 +1,6 @@
# Reporting a security issue

The Prometheus security policy, including how to report vulnerabilities, can be
found here:

https://prometheus.io/docs/operating/security/
31  appveyor.yml

@@ -1,8 +1,7 @@
version: "{build}"

os: Visual Studio 2017
os: Visual Studio 2019
build: off
stack: go 1.13

environment:
  GOPATH: c:\gopath

@@ -13,22 +12,21 @@ clone_folder: c:\gopath\src\github.com\prometheus-community\windows_exporter
install:
  - mkdir %GOPATH%\bin
  - set PATH=%GOPATH%\bin;%PATH%
  - set PATH=%PATH%;C:\mingw-w64\x86_64-7.2.0-posix-seh-rt_v5-rev1\mingw64\bin
  - set PATH=%PATH%;C:\msys64\mingw64\bin
  - choco install gitversion.portable make -y
  - ps: |
      appveyor DownloadFile https://github.com/golangci/golangci-lint/releases/download/v1.21.0/golangci-lint-1.21.0-windows-amd64.zip
      Expand-Archive golangci-lint-1.21.0-windows-amd64.zip
      Move-Item golangci-lint-1.21.0-windows-amd64\golangci-lint-1.21.0-windows-amd64\golangci-lint.exe $env:GOPATH\bin\golangci-lint.exe
  - ps: |
      $env:GO111MODULE="off"
      go get -u github.com/prometheus/promu
      $env:GO111MODULE="on"
      appveyor DownloadFile https://github.com/golangci/golangci-lint/releases/download/v1.43.0/golangci-lint-1.43.0-windows-amd64.zip
      Expand-Archive golangci-lint-1.43.0-windows-amd64.zip
      Move-Item golangci-lint-1.43.0-windows-amd64\golangci-lint-1.43.0-windows-amd64\golangci-lint.exe $env:GOPATH\bin\golangci-lint.exe
  - go install github.com/prometheus/promu@v0.11.1
  - go install github.com/josephspurrier/goversioninfo/cmd/goversioninfo@v1.2.0

test_script:
  - make test

after_test:
  - make lint
  - make e2e-test

build_script:
  - ps: |

@@ -37,12 +35,17 @@ build_script:
      # so we need to run it before setting the preference.
      go mod download
      $ErrorActionPreference = "Stop"

      gitversion /output json /showvariable FullSemVer | Set-Content VERSION -PassThru
      $Version = Get-Content VERSION
      # Windows versioninfo resources need the file version by parts (but product version is free text)
      $VersionParts = ($Version -replace '^v?([0-9\.]+).*$','$1').Split(".")
      goversioninfo.exe -ver-major $VersionParts[0] -ver-minor $VersionParts[1] -ver-patch $VersionParts[2] -product-version $Version -platform-specific

      make crossbuild
      # GH requires all files to have different names, so add version/arch to differentiate
      foreach($Arch in "amd64","386") {
        Rename-Item output\$Arch\windows_exporter.exe -NewName windows_exporter-$Version-$Arch.exe
        Move-Item output\$Arch\windows_exporter.exe output\windows_exporter-$Version-$Arch.exe
      }

after_build:

@@ -57,14 +60,14 @@ after_build:
      $MSIVersion = $env:APPVEYOR_REPO_TAG_NAME -replace '^v?([0-9\.]+).*$','$1'
      foreach($Arch in "amd64","386") {
        Write-Verbose "Building windows_exporter $MSIVersion msi for $Arch"
        .\installer\build.ps1 -PathToExecutable .\output\$Arch\windows_exporter-$BuildVersion-$Arch.exe -Version $MSIVersion -Arch "$Arch"
        Move-Item installer\Output\windows_exporter-$MSIVersion-$Arch.msi output\$Arch\
        .\installer\build.ps1 -PathToExecutable .\output\windows_exporter-$BuildVersion-$Arch.exe -Version $MSIVersion -Arch "$Arch"
        Move-Item installer\Output\windows_exporter-$MSIVersion-$Arch.msi output\
      }
  - promu checksum output\

artifacts:
  - name: Artifacts
    path: output\**\*
    path: output\*

deploy:
  - provider: GitHub
@@ -1,3 +1,4 @@
//go:build windows
// +build windows

package collector

@@ -6,8 +7,8 @@ import (
	"errors"

	"github.com/StackExchange/wmi"
	"github.com/prometheus-community/windows_exporter/log"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/common/log"
)

func init() {
9  collector/ad_test.go  Normal file

@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkADCollector(b *testing.B) {
	benchmarkCollector(b, "ad", NewADCollector)
}
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -30,67 +31,67 @@ func newADFSCollector() (Collector, error) {
|
||||
|
||||
return &adfsCollector{
|
||||
adLoginConnectionFailures: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "ad_login_connection_failures"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "ad_login_connection_failures_total"),
|
||||
"Total number of connection failures to an Active Directory domain controller",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
certificateAuthentications: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "certificate_authentications"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "certificate_authentications_total"),
|
||||
"Total number of User Certificate authentications",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
deviceAuthentications: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "device_authentications"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "device_authentications_total"),
|
||||
"Total number of Device authentications",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
extranetAccountLockouts: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "extranet_account_lockouts"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "extranet_account_lockouts_total"),
|
||||
"Total number of Extranet Account Lockouts",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
federatedAuthentications: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "federated_authentications"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "federated_authentications_total"),
|
||||
"Total number of authentications from a federated source",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
passportAuthentications: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "passport_authentications"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "passport_authentications_total"),
|
||||
"Total number of Microsoft Passport SSO authentications",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
passiveRequests: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "passive_requests"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "passive_requests_total"),
|
||||
"Total number of passive (browser-based) requests",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
passwordChangeFailed: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "password_change_failed"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "password_change_failed_total"),
|
||||
"Total number of failed password changes",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
passwordChangeSucceeded: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "password_change_succeeded"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "password_change_succeeded_total"),
|
||||
"Total number of successful password changes",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
tokenRequests: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "token_requests"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "token_requests_total"),
|
||||
"Total number of token requests",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
windowsIntegratedAuthentications: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "windows_integrated_authentications"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "windows_integrated_authentications_total"),
|
||||
"Total number of Windows integrated authentications (Kerberos/NTLM)",
|
||||
nil,
|
||||
nil,
|
||||
|
||||
9  collector/adfs_test.go  Normal file

@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkADFSCollector(b *testing.B) {
	benchmarkCollector(b, "adfs", newADFSCollector)
}
453  collector/cache.go  Normal file

@@ -0,0 +1,453 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
)
|
||||
|
||||
func init() {
|
||||
registerCollector("cache", newCacheCollector, "Cache")
|
||||
}
|
||||
|
||||
// A CacheCollector is a Prometheus collector for Perflib Cache metrics
|
||||
type CacheCollector struct {
|
||||
AsyncCopyReadsTotal *prometheus.Desc
|
||||
AsyncDataMapsTotal *prometheus.Desc
|
||||
AsyncFastReadsTotal *prometheus.Desc
|
||||
AsyncMDLReadsTotal *prometheus.Desc
|
||||
AsyncPinReadsTotal *prometheus.Desc
|
||||
CopyReadHitsTotal *prometheus.Desc
|
||||
CopyReadsTotal *prometheus.Desc
|
||||
DataFlushesTotal *prometheus.Desc
|
||||
DataFlushPagesTotal *prometheus.Desc
|
||||
DataMapHitsPercent *prometheus.Desc
|
||||
DataMapPinsTotal *prometheus.Desc
|
||||
DataMapsTotal *prometheus.Desc
|
||||
DirtyPages *prometheus.Desc
|
||||
DirtyPageThreshold *prometheus.Desc
|
||||
FastReadNotPossiblesTotal *prometheus.Desc
|
||||
FastReadResourceMissesTotal *prometheus.Desc
|
||||
FastReadsTotal *prometheus.Desc
|
||||
LazyWriteFlushesTotal *prometheus.Desc
|
||||
LazyWritePagesTotal *prometheus.Desc
|
||||
MDLReadHitsTotal *prometheus.Desc
|
||||
MDLReadsTotal *prometheus.Desc
|
||||
PinReadHitsTotal *prometheus.Desc
|
||||
PinReadsTotal *prometheus.Desc
|
||||
ReadAheadsTotal *prometheus.Desc
|
||||
SyncCopyReadsTotal *prometheus.Desc
|
||||
SyncDataMapsTotal *prometheus.Desc
|
||||
SyncFastReadsTotal *prometheus.Desc
|
||||
SyncMDLReadsTotal *prometheus.Desc
|
||||
SyncPinReadsTotal *prometheus.Desc
|
||||
}
|
||||
|
||||
// NewCacheCollector ...
|
||||
func newCacheCollector() (Collector, error) {
|
||||
const subsystem = "cache"
|
||||
return &CacheCollector{
|
||||
AsyncCopyReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "async_copy_reads_total"),
|
||||
"(AsyncCopyReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
AsyncDataMapsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "async_data_maps_total"),
|
||||
"(AsyncDataMapsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
AsyncFastReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "async_fast_reads_total"),
|
||||
"(AsyncFastReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
AsyncMDLReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "async_mdl_reads_total"),
|
||||
"(AsyncMDLReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
AsyncPinReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "async_pin_reads_total"),
|
||||
"(AsyncPinReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
CopyReadHitsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "copy_read_hits_total"),
|
||||
"(CopyReadHitsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
CopyReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "copy_reads_total"),
|
||||
"(CopyReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
DataFlushesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "data_flushes_total"),
|
||||
"(DataFlushesTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
DataFlushPagesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "data_flush_pages_total"),
|
||||
"(DataFlushPagesTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
DataMapHitsPercent: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "data_map_hits_percent"),
|
||||
"(DataMapHitsPercent)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
DataMapPinsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "data_map_pins_total"),
|
||||
"(DataMapPinsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
DataMapsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "data_maps_total"),
|
||||
"(DataMapsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
DirtyPages: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "dirty_pages"),
|
||||
"(DirtyPages)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
DirtyPageThreshold: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "dirty_page_threshold"),
|
||||
"(DirtyPageThreshold)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
FastReadNotPossiblesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "fast_read_not_possibles_total"),
|
||||
"(FastReadNotPossiblesTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
FastReadResourceMissesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "fast_read_resource_misses_total"),
|
||||
"(FastReadResourceMissesTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
FastReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "fast_reads_total"),
|
||||
"(FastReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
LazyWriteFlushesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "lazy_write_flushes_total"),
|
||||
"(LazyWriteFlushesTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
LazyWritePagesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "lazy_write_pages_total"),
|
||||
"(LazyWritePagesTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
MDLReadHitsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "mdl_read_hits_total"),
|
||||
"(MDLReadHitsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
MDLReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "mdl_reads_total"),
|
||||
"(MDLReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
PinReadHitsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "pin_read_hits_total"),
|
||||
"(PinReadHitsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
PinReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "pin_reads_total"),
|
||||
"(PinReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
ReadAheadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "read_aheads_total"),
|
||||
"(ReadAheadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
SyncCopyReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "sync_copy_reads_total"),
|
||||
"(SyncCopyReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
SyncDataMapsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "sync_data_maps_total"),
|
||||
"(SyncDataMapsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
SyncFastReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "sync_fast_reads_total"),
|
||||
"(SyncFastReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
SyncMDLReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "sync_mdl_reads_total"),
|
||||
"(SyncMDLReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
SyncPinReadsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "sync_pin_reads_total"),
|
||||
"(SyncPinReadsTotal)",
|
||||
nil,
|
||||
nil,
|
||||
),
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Collect implements the Collector interface
|
||||
func (c *CacheCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
|
||||
if desc, err := c.collect(ctx, ch); err != nil {
|
||||
log.Error("failed collecting cache metrics:", desc, err)
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Perflib "Cache":
|
||||
// - https://docs.microsoft.com/en-us/previous-versions/aa394267(v=vs.85)
|
||||
type perflibCache struct {
|
||||
AsyncCopyReadsTotal float64 `perflib:"Async Copy Reads/sec"`
|
||||
AsyncDataMapsTotal float64 `perflib:"Async Data Maps/sec"`
|
||||
AsyncFastReadsTotal float64 `perflib:"Async Fast Reads/sec"`
|
||||
AsyncMDLReadsTotal float64 `perflib:"Async MDL Reads/sec"`
|
||||
AsyncPinReadsTotal float64 `perflib:"Async Pin Reads/sec"`
|
||||
CopyReadHitsTotal float64 `perflib:"Copy Read Hits %"`
|
||||
CopyReadsTotal float64 `perflib:"Copy Reads/sec"`
|
||||
DataFlushesTotal float64 `perflib:"Data Flushes/sec"`
|
||||
DataFlushPagesTotal float64 `perflib:"Data Flush Pages/sec"`
|
||||
DataMapHitsPercent float64 `perflib:"Data Map Hits %"`
|
||||
DataMapPinsTotal float64 `perflib:"Data Map Pins/sec"`
|
||||
DataMapsTotal float64 `perflib:"Data Maps/sec"`
|
||||
DirtyPages float64 `perflib:"Dirty Pages"`
|
||||
DirtyPageThreshold float64 `perflib:"Dirty Page Threshold"`
|
||||
FastReadNotPossiblesTotal float64 `perflib:"Fast Read Not Possibles/sec"`
|
||||
FastReadResourceMissesTotal float64 `perflib:"Fast Read Resource Misses/sec"`
|
||||
FastReadsTotal float64 `perflib:"Fast Reads/sec"`
|
||||
LazyWriteFlushesTotal float64 `perflib:"Lazy Write Flushes/sec"`
|
||||
LazyWritePagesTotal float64 `perflib:"Lazy Write Pages/sec"`
|
||||
MDLReadHitsTotal float64 `perflib:"MDL Read Hits %"`
|
||||
MDLReadsTotal float64 `perflib:"MDL Reads/sec"`
|
||||
PinReadHitsTotal float64 `perflib:"Pin Read Hits %"`
|
||||
PinReadsTotal float64 `perflib:"Pin Reads/sec"`
|
||||
ReadAheadsTotal float64 `perflib:"Read Aheads/sec"`
|
||||
SyncCopyReadsTotal float64 `perflib:"Sync Copy Reads/sec"`
|
||||
SyncDataMapsTotal float64 `perflib:"Sync Data Maps/sec"`
|
||||
SyncFastReadsTotal float64 `perflib:"Sync Fast Reads/sec"`
|
||||
SyncMDLReadsTotal float64 `perflib:"Sync MDL Reads/sec"`
|
||||
SyncPinReadsTotal float64 `perflib:"Sync Pin Reads/sec"`
|
||||
}
|
||||
|
||||
func (c *CacheCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
|
||||
var dst []perflibCache // Single-instance class, array is required but will have single entry.
|
||||
if err := unmarshalObject(ctx.perfObjects["Cache"], &dst); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.AsyncCopyReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].AsyncCopyReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.AsyncDataMapsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].AsyncDataMapsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.AsyncFastReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].AsyncFastReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.AsyncMDLReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].AsyncMDLReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.AsyncPinReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].AsyncPinReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.CopyReadHitsTotal,
|
||||
prometheus.GaugeValue,
|
||||
dst[0].CopyReadHitsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.CopyReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].CopyReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DataFlushesTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].DataFlushesTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DataFlushPagesTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].DataFlushPagesTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DataMapHitsPercent,
|
||||
prometheus.GaugeValue,
|
||||
dst[0].DataMapHitsPercent,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DataMapPinsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].DataMapPinsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DataMapsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].DataMapsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DirtyPages,
|
||||
prometheus.GaugeValue,
|
||||
dst[0].DirtyPages,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DirtyPageThreshold,
|
||||
prometheus.GaugeValue,
|
||||
dst[0].DirtyPageThreshold,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FastReadNotPossiblesTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].FastReadNotPossiblesTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FastReadResourceMissesTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].FastReadResourceMissesTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FastReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].FastReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.LazyWriteFlushesTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].LazyWriteFlushesTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.LazyWritePagesTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].LazyWritePagesTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MDLReadHitsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].MDLReadHitsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MDLReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].MDLReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.PinReadHitsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].PinReadHitsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.PinReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].PinReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ReadAheadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].ReadAheadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.SyncCopyReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].SyncCopyReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.SyncDataMapsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].SyncDataMapsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.SyncFastReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].SyncFastReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.SyncMDLReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].SyncMDLReadsTotal,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.SyncPinReadsTotal,
|
||||
prometheus.CounterValue,
|
||||
dst[0].SyncPinReadsTotal,
|
||||
)
|
||||
|
||||
return nil, nil
|
||||
}
|
||||
@@ -2,12 +2,13 @@ package collector

import (
	"fmt"
	"sort"
	"strconv"
	"strings"

	"github.com/leoluk/perflib_exporter/perflib"
	"github.com/prometheus-community/windows_exporter/log"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/common/log"
	"golang.org/x/sys/windows/registry"
)

@@ -119,3 +120,31 @@ func boolToFloat(b bool) float64 {
	}
	return 0.0
}

func find(slice []string, val string) bool {
	for _, item := range slice {
		if item == val {
			return true
		}
	}
	return false
}

// Used by more complex collectors where user input specifies enabled child collectors.
// Splits the provided child collectors and deduplicates them.
func expandEnabledChildCollectors(enabled string) []string {
	separated := strings.Split(enabled, ",")
	unique := map[string]bool{}
	for _, s := range separated {
		if s != "" {
			unique[s] = true
		}
	}
	result := make([]string, 0, len(unique))
	for s := range unique {
		result = append(result, s)
	}
	// Ensure result is ordered, to prevent test failure
	sort.Strings(result)
	return result
}
60  collector/collector_test.go  Normal file

@@ -0,0 +1,60 @@
package collector

import (
	"reflect"
	"testing"

	"github.com/prometheus/client_golang/prometheus"
)

func TestExpandChildCollectors(t *testing.T) {
	cases := []struct {
		name           string
		input          string
		expectedOutput []string
	}{
		{
			name:           "simple",
			input:          "testing1,testing2,testing3",
			expectedOutput: []string{"testing1", "testing2", "testing3"},
		},
		{
			name:           "duplicate",
			input:          "testing1,testing2,testing2,testing3",
			expectedOutput: []string{"testing1", "testing2", "testing3"},
		},
	}

	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			output := expandEnabledChildCollectors(c.input)
			if !reflect.DeepEqual(output, c.expectedOutput) {
				t.Errorf("Output mismatch, expected %+v, got %+v", c.expectedOutput, output)
			}
		})
	}
}

func benchmarkCollector(b *testing.B, name string, collectFunc func() (Collector, error)) {
	// Create a perflib scrape context. Some perflib collectors require a correct context,
	// or they will fail during the benchmark.
	scrapeContext, err := PrepareScrapeContext([]string{name})
	if err != nil {
		b.Error(err)
	}
	c, err := collectFunc()
	if err != nil {
		b.Error(err)
	}

	metrics := make(chan prometheus.Metric)
	go func() {
		for {
			<-metrics
		}
	}()

	for i := 0; i < b.N; i++ {
		c.Collect(scrapeContext, metrics) //nolint:errcheck
	}
}
@@ -1,11 +1,12 @@
//go:build windows
// +build windows

package collector

import (
	"github.com/Microsoft/hcsshim"
	"github.com/prometheus-community/windows_exporter/log"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/common/log"
)

func init() {
@@ -168,70 +169,67 @@ func (c *ContainerMetricsCollector) collect(ch chan<- prometheus.Metric) (*prome
|
||||
}
|
||||
|
||||
for _, containerDetails := range containers {
|
||||
containerId := containerDetails.ID
|
||||
|
||||
container, err := hcsshim.OpenContainer(containerId)
|
||||
container, err := hcsshim.OpenContainer(containerDetails.ID)
|
||||
if container != nil {
|
||||
defer containerClose(container)
|
||||
}
|
||||
if err != nil {
|
||||
log.Error("err in opening container: ", containerId, err)
|
||||
log.Error("err in opening container: ", containerDetails.ID, err)
|
||||
continue
|
||||
}
|
||||
|
||||
cstats, err := container.Statistics()
|
||||
if err != nil {
|
||||
log.Error("err in fetching container Statistics: ", containerId, err)
|
||||
log.Error("err in fetching container Statistics: ", containerDetails.ID, err)
|
||||
continue
|
||||
}
|
||||
// HCS V1 is for docker runtime. Add the docker:// prefix on container_id
|
||||
containerId = "docker://" + containerId
|
||||
containerIdWithPrefix := getContainerIdWithPrefix(containerDetails)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ContainerAvailable,
|
||||
prometheus.CounterValue,
|
||||
1,
|
||||
containerId,
|
||||
containerIdWithPrefix,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.UsageCommitBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(cstats.Memory.UsageCommitBytes),
|
||||
containerId,
|
||||
containerIdWithPrefix,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.UsageCommitPeakBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(cstats.Memory.UsageCommitPeakBytes),
|
||||
containerId,
|
||||
containerIdWithPrefix,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.UsagePrivateWorkingSetBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(cstats.Memory.UsagePrivateWorkingSetBytes),
|
||||
containerId,
|
||||
containerIdWithPrefix,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.RuntimeTotal,
|
||||
prometheus.CounterValue,
|
||||
float64(cstats.Processor.TotalRuntime100ns)*ticksToSecondsScaleFactor,
|
||||
containerId,
|
||||
containerIdWithPrefix,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.RuntimeUser,
|
||||
prometheus.CounterValue,
|
||||
float64(cstats.Processor.RuntimeUser100ns)*ticksToSecondsScaleFactor,
|
||||
containerId,
|
||||
containerIdWithPrefix,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.RuntimeKernel,
|
||||
prometheus.CounterValue,
|
||||
float64(cstats.Processor.RuntimeKernel100ns)*ticksToSecondsScaleFactor,
|
||||
containerId,
|
||||
containerIdWithPrefix,
|
||||
)
|
||||
|
||||
if len(cstats.Network) == 0 {
|
||||
log.Info("No Network Stats for container: ", containerId)
|
||||
log.Info("No Network Stats for container: ", containerDetails.ID)
|
||||
continue
|
||||
}
|
||||
|
||||
@@ -242,37 +240,37 @@ func (c *ContainerMetricsCollector) collect(ch chan<- prometheus.Metric) (*prome
|
||||
c.BytesReceived,
|
||||
prometheus.CounterValue,
|
||||
float64(networkInterface.BytesReceived),
|
||||
containerId, networkInterface.EndpointId,
|
||||
containerIdWithPrefix, networkInterface.EndpointId,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.BytesSent,
|
||||
prometheus.CounterValue,
|
||||
float64(networkInterface.BytesSent),
|
||||
containerId, networkInterface.EndpointId,
|
||||
containerIdWithPrefix, networkInterface.EndpointId,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.PacketsReceived,
|
||||
prometheus.CounterValue,
|
||||
float64(networkInterface.PacketsReceived),
|
||||
containerId, networkInterface.EndpointId,
|
||||
containerIdWithPrefix, networkInterface.EndpointId,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.PacketsSent,
|
||||
prometheus.CounterValue,
|
||||
float64(networkInterface.PacketsSent),
|
||||
containerId, networkInterface.EndpointId,
|
||||
containerIdWithPrefix, networkInterface.EndpointId,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DroppedPacketsIncoming,
|
||||
prometheus.CounterValue,
|
||||
float64(networkInterface.DroppedPacketsIncoming),
|
||||
containerId, networkInterface.EndpointId,
|
||||
containerIdWithPrefix, networkInterface.EndpointId,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DroppedPacketsOutgoing,
|
||||
prometheus.CounterValue,
|
||||
float64(networkInterface.DroppedPacketsOutgoing),
|
||||
containerId, networkInterface.EndpointId,
|
||||
containerIdWithPrefix, networkInterface.EndpointId,
|
||||
)
|
||||
break
|
||||
}
|
||||
@@ -280,3 +278,13 @@ func (c *ContainerMetricsCollector) collect(ch chan<- prometheus.Metric) (*prome

	return nil, nil
}

func getContainerIdWithPrefix(containerDetails hcsshim.ContainerProperties) string {
	switch containerDetails.Owner {
	case "containerd-shim-runhcs-v1.exe":
		return "containerd://" + containerDetails.ID
	default:
		// default to docker, which also covers the case where the owner is not set
		return "docker://" + containerDetails.ID
	}
}
9  collector/container_test.go  Normal file

@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkContainerCollector(b *testing.B) {
	benchmarkCollector(b, "container", NewContainerMetricsCollector)
}
@@ -1,3 +1,4 @@
//go:build windows
// +build windows

package collector
97  collector/cpu_info.go  Normal file

@@ -0,0 +1,97 @@
//go:build windows
// +build windows

package collector

import (
	"errors"
	"strconv"
	"strings"

	"github.com/StackExchange/wmi"
	"github.com/prometheus-community/windows_exporter/log"
	"github.com/prometheus/client_golang/prometheus"
)

func init() {
	registerCollector("cpu_info", newCpuInfoCollector)
}

// If you are adding additional labels to the metric, make sure that they get added in here as well. See below for explanation.
const (
	win32ProcessorQuery = "SELECT Architecture, DeviceId, Description, Family, L2CacheSize, L3CacheSize, Name FROM Win32_Processor"
)

// A CpuInfoCollector is a Prometheus collector for a few WMI metrics in Win32_Processor
type CpuInfoCollector struct {
	CpuInfo *prometheus.Desc
}

func newCpuInfoCollector() (Collector, error) {
	return &CpuInfoCollector{
		CpuInfo: prometheus.NewDesc(
			prometheus.BuildFQName(Namespace, "", "cpu_info"),
			"Labeled CPU information as provided by Win32_Processor",
			[]string{
				"architecture",
				"device_id",
				"description",
				"family",
				"l2_cache_size",
				"l3_cache_size",
				"name"},
			nil,
		),
	}, nil
}

type win32_Processor struct {
	Architecture uint32
	DeviceID     string
	Description  string
	Family       uint16
	L2CacheSize  uint32
	L3CacheSize  uint32
	Name         string
}

// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *CpuInfoCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
	if desc, err := c.collect(ch); err != nil {
		log.Error("failed collecting cpu_info metrics:", desc, err)
		return err
	}
	return nil
}

func (c *CpuInfoCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
	var dst []win32_Processor
	// We use a static query here because the provided methods in wmi.go all issue a SELECT *;
	// this results in the time-consuming LoadPercentage field being read, which seems to measure each CPU
	// serially over a 1 second interval, so the scrape time is at least 1s * num_sockets
	if err := wmi.Query(win32ProcessorQuery, &dst); err != nil {
		return nil, err
	}
	if len(dst) == 0 {
		return nil, errors.New("WMI query returned empty result set")
	}

	// Some CPUs end up exposing trailing spaces for certain strings, so clean them up
	for _, processor := range dst {
		ch <- prometheus.MustNewConstMetric(
			c.CpuInfo,
			prometheus.GaugeValue,
			1.0,
			strconv.Itoa(int(processor.Architecture)),
			strings.TrimRight(processor.DeviceID, " "),
			strings.TrimRight(processor.Description, " "),
			strconv.Itoa(int(processor.Family)),
			strconv.Itoa(int(processor.L2CacheSize)),
			strconv.Itoa(int(processor.L3CacheSize)),
			strings.TrimRight(processor.Name, " "),
		)
	}

	return nil, nil
}
9  collector/cpu_test.go  Normal file

@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkCPUCollector(b *testing.B) {
	benchmarkCollector(b, "cpu", newCPUCollector)
}
@@ -1,13 +1,13 @@
//go:build windows
// +build windows

package collector

import (
	"errors"
	"github.com/prometheus-community/windows_exporter/headers/sysinfoapi"
	"github.com/prometheus-community/windows_exporter/log"

	"github.com/StackExchange/wmi"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/common/log"
)

func init() {
@@ -60,51 +60,47 @@ func (c *CSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) e
|
||||
return nil
|
||||
}
|
||||
|
||||
// Win32_ComputerSystem docs:
|
||||
// - https://msdn.microsoft.com/en-us/library/aa394102
|
||||
type Win32_ComputerSystem struct {
|
||||
NumberOfLogicalProcessors uint32
|
||||
TotalPhysicalMemory uint64
|
||||
DNSHostname string
|
||||
Domain string
|
||||
Workgroup *string
|
||||
}
|
||||
|
||||
func (c *CSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
|
||||
var dst []Win32_ComputerSystem
|
||||
q := queryAll(&dst)
|
||||
if err := wmi.Query(q, &dst); err != nil {
|
||||
// Get systeminfo for number of processors
|
||||
systemInfo := sysinfoapi.GetSystemInfo()
|
||||
|
||||
// Get memory status for physical memory
|
||||
mem, err := sysinfoapi.GlobalMemoryStatusEx()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if len(dst) == 0 {
|
||||
return nil, errors.New("WMI query returned empty result set")
|
||||
}
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.LogicalProcessors,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].NumberOfLogicalProcessors),
|
||||
float64(systemInfo.NumberOfProcessors),
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.PhysicalMemoryBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].TotalPhysicalMemory),
|
||||
float64(mem.TotalPhys),
|
||||
)
|
||||
|
||||
var fqdn string
|
||||
if dst[0].Workgroup == nil || dst[0].Domain != *dst[0].Workgroup {
|
||||
fqdn = dst[0].DNSHostname + "." + dst[0].Domain
|
||||
} else {
|
||||
fqdn = dst[0].DNSHostname
|
||||
hostname, err := sysinfoapi.GetComputerName(sysinfoapi.ComputerNameDNSHostname)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
domain, err := sysinfoapi.GetComputerName(sysinfoapi.ComputerNameDNSDomain)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
fqdn, err := sysinfoapi.GetComputerName(sysinfoapi.ComputerNameDNSFullyQualified)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.Hostname,
|
||||
prometheus.GaugeValue,
|
||||
1.0,
|
||||
dst[0].DNSHostname,
|
||||
dst[0].Domain,
|
||||
hostname,
|
||||
domain,
|
||||
fqdn,
|
||||
)
|
||||
|
||||
|
||||
9  collector/cs_test.go  Normal file

@@ -0,0 +1,9 @@
package collector

import (
	"testing"
)

func BenchmarkCsCollector(b *testing.B) {
	benchmarkCollector(b, "cs", NewCSCollector)
}
810  collector/dfsr.go  Normal file

@@ -0,0 +1,810 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"gopkg.in/alecthomas/kingpin.v2"
|
||||
)
|
||||
|
||||
var dfsrEnabledCollectors = kingpin.Flag("collectors.dfsr.sources-enabled", "Comma-seperated list of DFSR Perflib sources to use.").Default("connection,folder,volume").String()
|
||||
|
||||
func init() {
|
||||
// Perflib sources are dynamic, depending on the enabled child collectors
|
||||
var perflibDependencies []string
|
||||
for _, source := range expandEnabledChildCollectors(*dfsrEnabledCollectors) {
|
||||
perflibDependencies = append(perflibDependencies, dfsrGetPerfObjectName(source))
|
||||
}
|
||||
|
||||
registerCollector("dfsr", NewDFSRCollector, perflibDependencies...)
|
||||
}
|
||||
|
||||
// DFSRCollector contains the metric and state data of the DFSR collectors.
|
||||
type DFSRCollector struct {
|
||||
// Connection source
|
||||
ConnectionBandwidthSavingsUsingDFSReplicationTotal *prometheus.Desc
|
||||
ConnectionBytesReceivedTotal *prometheus.Desc
|
||||
ConnectionCompressedSizeOfFilesReceivedTotal *prometheus.Desc
|
||||
ConnectionFilesReceivedTotal *prometheus.Desc
|
||||
ConnectionRDCBytesReceivedTotal *prometheus.Desc
|
||||
ConnectionRDCCompressedSizeOfFilesReceivedTotal *prometheus.Desc
|
||||
ConnectionRDCSizeOfFilesReceivedTotal *prometheus.Desc
|
||||
ConnectionRDCNumberofFilesReceivedTotal *prometheus.Desc
|
||||
ConnectionSizeOfFilesReceivedTotal *prometheus.Desc
|
||||
|
||||
// Folder source
|
||||
FolderBandwidthSavingsUsingDFSReplicationTotal *prometheus.Desc
|
||||
FolderCompressedSizeOfFilesReceivedTotal *prometheus.Desc
|
||||
FolderConflictBytesCleanedupTotal *prometheus.Desc
|
||||
FolderConflictBytesGeneratedTotal *prometheus.Desc
|
||||
FolderConflictFilesCleanedUpTotal *prometheus.Desc
|
||||
FolderConflictFilesGeneratedTotal *prometheus.Desc
|
||||
FolderConflictFolderCleanupsCompletedTotal *prometheus.Desc
|
||||
FolderConflictSpaceInUse *prometheus.Desc
|
||||
FolderDeletedSpaceInUse *prometheus.Desc
|
||||
FolderDeletedBytesCleanedUpTotal *prometheus.Desc
|
||||
FolderDeletedBytesGeneratedTotal *prometheus.Desc
|
||||
FolderDeletedFilesCleanedUpTotal *prometheus.Desc
|
||||
FolderDeletedFilesGeneratedTotal *prometheus.Desc
|
||||
FolderFileInstallsRetriedTotal *prometheus.Desc
|
||||
FolderFileInstallsSucceededTotal *prometheus.Desc
|
||||
FolderFilesReceivedTotal *prometheus.Desc
|
||||
FolderRDCBytesReceivedTotal *prometheus.Desc
|
||||
FolderRDCCompressedSizeOfFilesReceivedTotal *prometheus.Desc
|
||||
FolderRDCNumberofFilesReceivedTotal *prometheus.Desc
|
||||
FolderRDCSizeOfFilesReceivedTotal *prometheus.Desc
|
||||
FolderSizeOfFilesReceivedTotal *prometheus.Desc
|
||||
FolderStagingSpaceInUse *prometheus.Desc
|
||||
FolderStagingBytesCleanedUpTotal *prometheus.Desc
|
||||
FolderStagingBytesGeneratedTotal *prometheus.Desc
|
||||
FolderStagingFilesCleanedUpTotal *prometheus.Desc
|
||||
FolderStagingFilesGeneratedTotal *prometheus.Desc
|
||||
FolderUpdatesDroppedTotal *prometheus.Desc
|
||||
|
||||
// Volume source
|
||||
VolumeDatabaseLookupsTotal *prometheus.Desc
|
||||
VolumeDatabaseCommitsTotal *prometheus.Desc
|
||||
VolumeUSNJournalUnreadPercentage *prometheus.Desc
|
||||
VolumeUSNJournalRecordsAcceptedTotal *prometheus.Desc
|
||||
VolumeUSNJournalRecordsReadTotal *prometheus.Desc
|
||||
|
||||
// Map of child collector functions used during collection
|
||||
dfsrChildCollectors []dfsrCollectorFunc
|
||||
}
|
||||
|
||||
type dfsrCollectorFunc func(ctx *ScrapeContext, ch chan<- prometheus.Metric) error
|
||||
|
||||
// Map Perflib sources to DFSR collector names
|
||||
// E.G. volume -> DFS Replication Service Volumes
|
||||
func dfsrGetPerfObjectName(collector string) string {
|
||||
prefix := "DFS "
|
||||
suffix := ""
|
||||
switch collector {
|
||||
case "connection":
|
||||
suffix = "Replication Connections"
|
||||
case "folder":
|
||||
suffix = "Replicated Folders"
|
||||
case "volume":
|
||||
suffix = "Replication Service Volumes"
|
||||
}
|
||||
return (prefix + suffix)
|
||||
}
|
||||
|
||||
// NewDFSRCollector constructs a new DFSRCollector.
func NewDFSRCollector() (Collector, error) {
	log.Info("dfsr collector is in an experimental state! Metrics for this collector have not been tested.")
	const subsystem = "dfsr"

	enabled := expandEnabledChildCollectors(*dfsrEnabledCollectors)
	perfCounters := make([]string, 0, len(enabled))
	for _, c := range enabled {
		perfCounters = append(perfCounters, dfsrGetPerfObjectName(c))
	}
	addPerfCounterDependencies(subsystem, perfCounters)

	dfsrCollector := DFSRCollector{
// Connection
|
||||
ConnectionBandwidthSavingsUsingDFSReplicationTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_bandwidth_savings_using_dfs_replication_bytes_total"),
|
||||
"Total bytes of bandwidth saved using DFS Replication for this connection",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
ConnectionBytesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_bytes_received_total"),
|
||||
"Total bytes received for connection",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
ConnectionCompressedSizeOfFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_compressed_size_of_files_received_bytes_total"),
|
||||
"Total compressed size of files received on the connection, in bytes",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
ConnectionFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_received_files_total"),
|
||||
"Total number of files received for connection",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
ConnectionRDCBytesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_rdc_received_bytes_total"),
|
||||
"Total bytes received on the connection while replicating files using Remote Differential Compression",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
ConnectionRDCCompressedSizeOfFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_rdc_compressed_size_of_received_files_bytes_total"),
|
||||
"Total uncompressed size of files received with Remote Differential Compression for connection",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
ConnectionRDCNumberofFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_rdc_received_files_total"),
|
||||
"Total number of files received using remote differential compression",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
ConnectionRDCSizeOfFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_rdc_size_of_received_files_bytes_total"),
|
||||
"Total size of received Remote Differential Compression files, in bytes.",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
ConnectionSizeOfFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_files_received_bytes_total"),
|
||||
"Total size of files received, in bytes",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
// Folder
|
||||
FolderBandwidthSavingsUsingDFSReplicationTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_bandwidth_savings_using_dfs_replication_bytes_total"),
|
||||
"Total bytes of bandwidth saved using DFS Replication for this folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderCompressedSizeOfFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_compressed_size_of_received_files_bytes_total"),
|
||||
"Total compressed size of files received on the folder, in bytes",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderConflictBytesCleanedupTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_conflict_cleaned_up_bytes_total"),
|
||||
"Total size of conflict loser files and folders deleted from the Conflict and Deleted folder, in bytes",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderConflictBytesGeneratedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_conflict_generated_bytes_total"),
|
||||
"Total size of conflict loser files and folders moved to the Conflict and Deleted folder, in bytes",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderConflictFilesCleanedUpTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_conflict_cleaned_up_files_total"),
|
||||
"Number of conflict loser files deleted from the Conflict and Deleted folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderConflictFilesGeneratedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_conflict_generated_files_total"),
|
||||
"Number of files and folders moved to the Conflict and Deleted folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderConflictFolderCleanupsCompletedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_conflict_folder_cleanups_total"),
|
||||
"Number of deletions of conflict loser files and folders in the Conflict and Deleted",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderConflictSpaceInUse: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_conflict_space_in_use_bytes"),
|
||||
"Total size of the conflict loser files and folders currently in the Conflict and Deleted folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderDeletedSpaceInUse: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_deleted_space_in_use_bytes"),
|
||||
"Total size (in bytes) of the deleted files and folders currently in the Conflict and Deleted folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderDeletedBytesCleanedUpTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_deleted_cleaned_up_bytes_total"),
|
||||
"Total size (in bytes) of replicating deleted files and folders that were cleaned up from the Conflict and Deleted folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderDeletedBytesGeneratedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_deleted_generated_bytes_total"),
|
||||
"Total size (in bytes) of replicated deleted files and folders that were moved to the Conflict and Deleted folder after they were deleted from a replicated folder on a sending member",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderDeletedFilesCleanedUpTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_deleted_cleaned_up_files_total"),
|
||||
"Number of files and folders that were cleaned up from the Conflict and Deleted folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderDeletedFilesGeneratedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_deleted_generated_files_total"),
|
||||
"Number of deleted files and folders that were moved to the Conflict and Deleted folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderFileInstallsRetriedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_file_installs_retried_total"),
|
||||
"Total number of file installs that are being retried due to sharing violations or other errors encountered when installing the files",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderFileInstallsSucceededTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_file_installs_succeeded_total"),
|
||||
"Total number of files that were successfully received from sending members and installed locally on this server",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_received_files_total"),
|
||||
"Total number of files received",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderRDCBytesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_rdc_received_bytes_total"),
|
||||
"Total number of bytes received in replicating files using Remote Differential Compression",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderRDCCompressedSizeOfFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_rdc_compressed_size_of_received_files_bytes_total"),
|
||||
"Total compressed size (in bytes) of the files received with Remote Differential Compression",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderRDCNumberofFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_rdc_received_files_total"),
|
||||
"Total number of files received with Remote Differential Compression",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderRDCSizeOfFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_rdc_files_received_bytes_total"),
|
||||
"Total uncompressed size (in bytes) of the files received with Remote Differential Compression",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderSizeOfFilesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_files_received_bytes_total"),
|
||||
"Total uncompressed size (in bytes) of the files received",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderStagingSpaceInUse: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_staging_space_in_use_bytes"),
|
||||
"Total size of files and folders currently in the staging folder.",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderStagingBytesCleanedUpTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_staging_cleaned_up_bytes_total"),
|
||||
"Total size (in bytes) of the files and folders that have been cleaned up from the staging folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderStagingBytesGeneratedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_staging_generated_bytes_total"),
|
||||
"Total size (in bytes) of replicated files and folders in the staging folder created by the DFS Replication service since last restart",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderStagingFilesCleanedUpTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_staging_cleaned_up_files_total"),
|
||||
"Total number of files and folders that have been cleaned up from the staging folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderStagingFilesGeneratedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_staging_generated_files_total"),
|
||||
"Total number of times replicated files and folders have been staged by the DFS Replication service",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
FolderUpdatesDroppedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "folder_dropped_updates_total"),
|
||||
"Total number of redundant file replication update records that have been ignored by the DFS Replication service because they did not change the replicated file or folder",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
// Volume
|
||||
VolumeDatabaseCommitsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "volume_database_commits_total"),
|
||||
"Total number of DFSR Volume database commits",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
VolumeDatabaseLookupsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "volume_database_lookups_total"),
|
||||
"Total number of DFSR Volume database lookups",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
VolumeUSNJournalUnreadPercentage: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "volume_usn_journal_unread_percentage"),
|
||||
"Percentage of DFSR Volume USN journal records that are unread",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
VolumeUSNJournalRecordsAcceptedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "volume_usn_journal_accepted_records_total"),
|
||||
"Total number of USN journal records accepted",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
VolumeUSNJournalRecordsReadTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "volume_usn_journal_read_records_total"),
|
||||
"Total number of DFSR Volume USN journal records read",
|
||||
[]string{"name"},
|
||||
nil,
|
||||
),
|
||||
}
|
||||
|
||||
dfsrCollector.dfsrChildCollectors = dfsrCollector.getDFSRChildCollectors(enabled)
|
||||
|
||||
return &dfsrCollector, nil
|
||||
}
|
||||
|
||||
// getDFSRChildCollectors maps enabled child collector names to their
// collection functions, for use in DFSRCollector.Collect().
|
||||
func (c *DFSRCollector) getDFSRChildCollectors(enabledCollectors []string) []dfsrCollectorFunc {
|
||||
var dfsrCollectors []dfsrCollectorFunc
|
||||
for _, collector := range enabledCollectors {
|
||||
switch collector {
|
||||
case "connection":
|
||||
dfsrCollectors = append(dfsrCollectors, c.collectConnection)
|
||||
case "folder":
|
||||
dfsrCollectors = append(dfsrCollectors, c.collectFolder)
|
||||
case "volume":
|
||||
dfsrCollectors = append(dfsrCollectors, c.collectVolume)
|
||||
}
|
||||
}
|
||||
|
||||
return dfsrCollectors
|
||||
}
|
||||
|
||||
// Collect implements the Collector interface.
|
||||
// Sends metric values for each metric to the provided prometheus Metric channel.
|
||||
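// An error from any child collector aborts the scrape for the remaining sources.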
func (c *DFSRCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
|
||||
for _, fn := range c.dfsrChildCollectors {
|
||||
err := fn(ctx, ch)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Perflib: "DFS Replication Service Connections"
|
||||
type PerflibDFSRConnection struct {
|
||||
Name string
|
||||
|
||||
BandwidthSavingsUsingDFSReplicationTotal float64 `perflib:"Bandwidth Savings Using DFS Replication"`
|
||||
BytesReceivedTotal float64 `perflib:"Total Bytes Received"`
|
||||
CompressedSizeOfFilesReceivedTotal float64 `perflib:"Compressed Size of Files Received"`
|
||||
FilesReceivedTotal float64 `perflib:"Total Files Received"`
|
||||
RDCBytesReceivedTotal float64 `perflib:"RDC Bytes Received"`
|
||||
RDCCompressedSizeOfFilesReceivedTotal float64 `perflib:"RDC Compressed Size of Files Received"`
|
||||
RDCNumberofFilesReceivedTotal float64 `perflib:"RDC Number of Files Received"`
|
||||
RDCSizeOfFilesReceivedTotal float64 `perflib:"RDC Size of Files Received"`
|
||||
SizeOfFilesReceivedTotal float64 `perflib:"Size of Files Received"`
|
||||
}
|
||||
|
||||
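// collectConnection unmarshals the "DFS Replication Connections" perflib
// object and emits one set of connection counters per instance.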
func (c *DFSRCollector) collectConnection(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
|
||||
var dst []PerflibDFSRConnection
|
||||
if err := unmarshalObject(ctx.perfObjects["DFS Replication Connections"], &dst); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for _, connection := range dst {
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionBandwidthSavingsUsingDFSReplicationTotal,
|
||||
prometheus.CounterValue,
|
||||
connection.BandwidthSavingsUsingDFSReplicationTotal,
|
||||
connection.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionBytesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
connection.BytesReceivedTotal,
|
||||
connection.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionCompressedSizeOfFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
connection.CompressedSizeOfFilesReceivedTotal,
|
||||
connection.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
connection.FilesReceivedTotal,
|
||||
connection.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionRDCBytesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
connection.RDCBytesReceivedTotal,
|
||||
connection.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionRDCCompressedSizeOfFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
connection.RDCCompressedSizeOfFilesReceivedTotal,
|
||||
connection.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionRDCSizeOfFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
connection.RDCSizeOfFilesReceivedTotal,
|
||||
connection.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionRDCNumberofFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
connection.RDCNumberofFilesReceivedTotal,
|
||||
connection.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionSizeOfFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
connection.SizeOfFilesReceivedTotal,
|
||||
connection.Name,
|
||||
)
|
||||
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Perflib: "DFS Replicated Folder"
|
||||
type PerflibDFSRFolder struct {
|
||||
Name string
|
||||
|
||||
BandwidthSavingsUsingDFSReplicationTotal float64 `perflib:"Bandwidth Savings Using DFS Replication"`
|
||||
CompressedSizeOfFilesReceivedTotal float64 `perflib:"Compressed Size of Files Received"`
|
||||
ConflictBytesCleanedupTotal float64 `perflib:"Conflict Bytes Cleaned Up"`
|
||||
ConflictBytesGeneratedTotal float64 `perflib:"Conflict Bytes Generated"`
|
||||
ConflictFilesCleanedUpTotal float64 `perflib:"Conflict Files Cleaned Up"`
|
||||
ConflictFilesGeneratedTotal float64 `perflib:"Conflict Files Generated"`
|
||||
ConflictFolderCleanupsCompletedTotal float64 `perflib:"Conflict Folder Cleanups Completed"`
|
||||
ConflictSpaceInUse float64 `perflib:"Conflict Space In Use"`
|
||||
DeletedSpaceInUse float64 `perflib:"Deleted Space In Use"`
|
||||
DeletedBytesCleanedUpTotal float64 `perflib:"Deleted Bytes Cleaned Up"`
|
||||
DeletedBytesGeneratedTotal float64 `perflib:"Deleted Bytes Generated"`
|
||||
DeletedFilesCleanedUpTotal float64 `perflib:"Deleted Files Cleaned Up"`
|
||||
DeletedFilesGeneratedTotal float64 `perflib:"Deleted Files Generated"`
|
||||
FileInstallsRetriedTotal float64 `perflib:"File Installs Retried"`
|
||||
FileInstallsSucceededTotal float64 `perflib:"File Installs Succeeded"`
|
||||
FilesReceivedTotal float64 `perflib:"Total Files Received"`
|
||||
RDCBytesReceivedTotal float64 `perflib:"RDC Bytes Received"`
|
||||
RDCCompressedSizeOfFilesReceivedTotal float64 `perflib:"RDC Compressed Size of Files Received"`
|
||||
RDCNumberofFilesReceivedTotal float64 `perflib:"RDC Number of Files Received"`
|
||||
RDCSizeOfFilesReceivedTotal float64 `perflib:"RDC Size of Files Received"`
|
||||
SizeOfFilesReceivedTotal float64 `perflib:"Size of Files Received"`
|
||||
StagingSpaceInUse float64 `perflib:"Staging Space In Use"`
|
||||
StagingBytesCleanedUpTotal float64 `perflib:"Staging Bytes Cleaned Up"`
|
||||
StagingBytesGeneratedTotal float64 `perflib:"Staging Bytes Generated"`
|
||||
StagingFilesCleanedUpTotal float64 `perflib:"Staging Files Cleaned Up"`
|
||||
StagingFilesGeneratedTotal float64 `perflib:"Staging Files Generated"`
|
||||
UpdatesDroppedTotal float64 `perflib:"Updates Dropped"`
|
||||
}
|
||||
|
||||
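// collectFolder unmarshals the "DFS Replicated Folders" perflib object and
// emits the per-replicated-folder counters and space-in-use gauges.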
func (c *DFSRCollector) collectFolder(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
|
||||
var dst []PerflibDFSRFolder
|
||||
if err := unmarshalObject(ctx.perfObjects["DFS Replicated Folders"], &dst); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for _, folder := range dst {
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderBandwidthSavingsUsingDFSReplicationTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.BandwidthSavingsUsingDFSReplicationTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderCompressedSizeOfFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.CompressedSizeOfFilesReceivedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderConflictBytesCleanedupTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.ConflictBytesCleanedupTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderConflictBytesGeneratedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.ConflictBytesGeneratedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderConflictFilesCleanedUpTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.ConflictFilesCleanedUpTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderConflictFilesGeneratedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.ConflictFilesGeneratedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderConflictFolderCleanupsCompletedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.ConflictFolderCleanupsCompletedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderConflictSpaceInUse,
|
||||
prometheus.GaugeValue,
|
||||
folder.ConflictSpaceInUse,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderDeletedSpaceInUse,
|
||||
prometheus.GaugeValue,
|
||||
folder.DeletedSpaceInUse,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderDeletedBytesCleanedUpTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.DeletedBytesCleanedUpTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderDeletedBytesGeneratedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.DeletedBytesGeneratedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderDeletedFilesCleanedUpTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.DeletedFilesCleanedUpTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderDeletedFilesGeneratedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.DeletedFilesGeneratedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderFileInstallsRetriedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.FileInstallsRetriedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderFileInstallsSucceededTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.FileInstallsSucceededTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.FilesReceivedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderRDCBytesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.RDCBytesReceivedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderRDCCompressedSizeOfFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.RDCCompressedSizeOfFilesReceivedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderRDCNumberofFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.RDCNumberofFilesReceivedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderRDCSizeOfFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.RDCSizeOfFilesReceivedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderSizeOfFilesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.SizeOfFilesReceivedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderStagingSpaceInUse,
|
||||
prometheus.GaugeValue,
|
||||
folder.StagingSpaceInUse,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderStagingBytesCleanedUpTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.StagingBytesCleanedUpTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderStagingBytesGeneratedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.StagingBytesGeneratedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderStagingFilesCleanedUpTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.StagingFilesCleanedUpTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderStagingFilesGeneratedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.StagingFilesGeneratedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.FolderUpdatesDroppedTotal,
|
||||
prometheus.CounterValue,
|
||||
folder.UpdatesDroppedTotal,
|
||||
folder.Name,
|
||||
)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Perflib: "DFS Replication Service Volumes"
|
||||
type PerflibDFSRVolume struct {
|
||||
Name string
|
||||
|
||||
DatabaseCommitsTotal float64 `perflib:"Database Commits"`
|
||||
DatabaseLookupsTotal float64 `perflib:"Database Lookups"`
|
||||
USNJournalRecordsReadTotal float64 `perflib:"USN Journal Records Read"`
|
||||
USNJournalRecordsAcceptedTotal float64 `perflib:"USN Journal Records Accepted"`
|
||||
USNJournalUnreadPercentage float64 `perflib:"USN Journal Records Unread Percentage"`
|
||||
}
|
||||
|
||||
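// collectVolume unmarshals the "DFS Replication Service Volumes" perflib
// object and emits the per-volume database and USN journal metrics.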
func (c *DFSRCollector) collectVolume(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
|
||||
var dst []PerflibDFSRVolume
|
||||
if err := unmarshalObject(ctx.perfObjects["DFS Replication Service Volumes"], &dst); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for _, volume := range dst {
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.VolumeDatabaseLookupsTotal,
|
||||
prometheus.CounterValue,
|
||||
volume.DatabaseLookupsTotal,
|
||||
volume.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.VolumeDatabaseCommitsTotal,
|
||||
prometheus.CounterValue,
|
||||
volume.DatabaseCommitsTotal,
|
||||
volume.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.VolumeUSNJournalRecordsAcceptedTotal,
|
||||
prometheus.CounterValue,
|
||||
volume.USNJournalRecordsAcceptedTotal,
|
||||
volume.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.VolumeUSNJournalRecordsReadTotal,
|
||||
prometheus.CounterValue,
|
||||
volume.USNJournalRecordsReadTotal,
|
||||
volume.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.VolumeUSNJournalUnreadPercentage,
|
||||
prometheus.GaugeValue,
|
||||
volume.USNJournalUnreadPercentage,
|
||||
volume.Name,
|
||||
)
|
||||
|
||||
}
|
||||
return nil
|
||||
}
|
||||
collector/dfsr_test.go (new file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkDFSRCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "dfsr", NewDFSRCollector)
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
collector/dhcp_test.go (new file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkDHCPCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "dhcp", NewDhcpCollector)
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -6,8 +7,8 @@ import (
|
||||
"errors"
|
||||
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
@@ -81,8 +82,8 @@ func NewDNSCollector() (Collector, error) {
|
||||
nil,
|
||||
),
|
||||
MemoryUsedBytes: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "memory_used_bytes_total"),
|
||||
"Total memory used by DNS server",
|
||||
prometheus.BuildFQName(Namespace, subsystem, "memory_used_bytes"),
|
||||
"Current memory used by DNS server",
|
||||
[]string{"area"},
|
||||
nil,
|
||||
),
|
||||
@@ -136,7 +137,7 @@ func NewDNSCollector() (Collector, error) {
|
||||
),
|
||||
Responses: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "responses_total"),
|
||||
"Number of reponses sent by DNS server",
|
||||
"Number of responses sent by DNS server",
|
||||
[]string{"protocol"},
|
||||
nil,
|
||||
),
|
||||
|
||||
collector/dns_test.go (new file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkDNSCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "dns", NewDNSCollector)
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -7,8 +8,8 @@ import (
|
||||
"os"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
"gopkg.in/alecthomas/kingpin.v2"
|
||||
)
|
||||
|
||||
@@ -65,7 +66,7 @@ type exchangeCollector struct {
|
||||
RPCOperationsPerSec *prometheus.Desc
|
||||
UserCount *prometheus.Desc
|
||||
|
||||
ActiveCollFuncs []func(ctx *ScrapeContext, ch chan<- prometheus.Metric) error
|
||||
enabledCollectors []string
|
||||
}
|
||||
|
||||
var (
|
||||
@@ -86,6 +87,11 @@ var (
|
||||
"collectors.exchange.list",
|
||||
"List the collectors along with their perflib object name/ids",
|
||||
).Bool()
|
||||
|
||||
argExchangeCollectorsEnabled = kingpin.Flag(
|
||||
"collectors.exchange.enabled",
|
||||
"Comma-separated list of collectors to use. Defaults to all, if not specified.",
|
||||
).Default("").String()
|
||||
)
|
||||
|
||||
// newExchangeCollector returns a new Collector
|
||||
@@ -139,6 +145,8 @@ func newExchangeCollector() (Collector, error) {
|
||||
MailboxServerProxyFailureRate: desc("http_proxy_mailbox_proxy_failure_rate", "% of failures between this CAS and MBX servers over the last 200 samples", "name"),
|
||||
PingCommandsPending: desc("activesync_ping_cmds_pending", "Number of ping commands currently pending in the queue"),
|
||||
SyncCommandsPerSec: desc("activesync_sync_cmds_total", "Number of sync commands processed per second. Clients use this command to synchronize items within a folder"),
|
||||
|
||||
enabledCollectors: make([]string, 0, len(exchangeAllCollectorNames)),
|
||||
}
|
||||
|
||||
collectorDesc := map[string]string{
|
||||
@@ -161,12 +169,27 @@ func newExchangeCollector() (Collector, error) {
|
||||
os.Exit(0)
|
||||
}
|
||||
|
||||
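// For example, --collectors.exchange.enabled="TransportQueues,HttpProxy"
// restricts the scrape to those two child collectors; leaving the flag
// empty enables every exchange child collector.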
if *argExchangeCollectorsEnabled == "" {
|
||||
for _, collectorName := range exchangeAllCollectorNames {
|
||||
c.enabledCollectors = append(c.enabledCollectors, collectorName)
|
||||
}
|
||||
} else {
|
||||
for _, collectorName := range strings.Split(*argExchangeCollectorsEnabled, ",") {
|
||||
if find(exchangeAllCollectorNames, collectorName) {
|
||||
c.enabledCollectors = append(c.enabledCollectors, collectorName)
|
||||
} else {
|
||||
return nil, fmt.Errorf("Unknown exchange collector: %s", collectorName)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return &c, nil
|
||||
}
|
||||
|
||||
// Collect collects exchange metrics and sends them to prometheus
|
||||
func (c *exchangeCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
|
||||
for collectorName, collectorFunc := range map[string]func(ctx *ScrapeContext, ch chan<- prometheus.Metric) error{
|
||||
|
||||
collectorFuncs := map[string]func(ctx *ScrapeContext, ch chan<- prometheus.Metric) error{
|
||||
"ADAccessProcesses": c.collectADAccessProcesses,
|
||||
"TransportQueues": c.collectTransportQueues,
|
||||
"HttpProxy": c.collectHTTPProxy,
|
||||
@@ -176,8 +199,10 @@ func (c *exchangeCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Met
|
||||
"Autodiscover": c.collectAutoDiscover,
|
||||
"WorkloadManagement": c.collectWorkloadManagementWorkloads,
|
||||
"RpcClientAccess": c.collectRPC,
|
||||
} {
|
||||
if err := collectorFunc(ctx, ch); err != nil {
|
||||
}
|
||||
|
||||
for _, collectorName := range c.enabledCollectors {
|
||||
if err := collectorFuncs[collectorName](ctx, ch); err != nil {
|
||||
log.Errorf("Error in %s: %s", collectorName, err)
|
||||
return err
|
||||
}
|
||||
@@ -210,7 +235,7 @@ func (c *exchangeCollector) collectADAccessProcesses(ctx *ScrapeContext, ch chan
|
||||
}
|
||||
|
||||
// since we're not including the PID suffix from the instance names in the label names,
|
||||
// we get an occational duplicate. This seems to affect about 4 instances only on this object.
|
||||
// we get an occasional duplicate. This seems to affect about 4 instances only on this object.
|
||||
labelUseCount[labelName]++
|
||||
if labelUseCount[labelName] > 1 {
|
||||
labelName = fmt.Sprintf("%s_%d", labelName, labelUseCount[labelName])
|
||||
|
||||
collector/exchange_test.go (new file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkExchangeCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "exchange", newExchangeCollector)
|
||||
}
|
||||
@@ -2,8 +2,8 @@ package collector
|
||||
|
||||
import (
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/fsrmquota_test.go (new file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkFsrmQuotaCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "fsrmquota", newFSRMQuotaCollector)
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -6,8 +7,8 @@ import (
|
||||
"strings"
|
||||
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
@@ -1001,8 +1002,17 @@ func (c *HyperVCollector) collectVmCpuUsage(ch chan<- prometheus.Metric) (*prome
|
||||
}
|
||||
// The name format is <VM Name>:Hv VP <vcore id>
|
||||
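// e.g. an instance named "vm01:Hv VP 2" yields vmName "vm01" and coreId "2"
// (the VM name is hypothetical; only the ":Hv VP <id>" layout matters here).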
parts := strings.Split(obj.Name, ":")
|
||||
if len(parts) != 2 {
|
||||
log.Warnf("Unexpected format of Name in collectVmCpuUsage: %q, expected %q. Skipping.", obj.Name, "<VM Name>:Hv VP <vcore id>")
|
||||
continue
|
||||
}
|
||||
coreParts := strings.Split(parts[1], " ")
|
||||
if len(coreParts) != 3 {
|
||||
log.Warnf("Unexpected format of core identifier in collectVmCpuUsage: %q, expected %q. Skipping.", parts[1], "Hv VP <vcore id>")
|
||||
continue
|
||||
}
|
||||
vmName := parts[0]
|
||||
coreId := strings.Split(parts[1], " ")[2]
|
||||
coreId := coreParts[2]
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.VMGuestRunTime,
|
||||
|
||||
collector/hyperv_test.go (new file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkHypervCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "hyperv", NewHyperVCollector)
|
||||
}
|
||||
collector/iis.go (file diff suppressed because it is too large)
collector/iis_test.go (new file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkIISCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "iis", NewIISCollector)
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -6,8 +7,8 @@ import (
|
||||
"fmt"
|
||||
"regexp"
|
||||
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
"gopkg.in/alecthomas/kingpin.v2"
|
||||
)
|
||||
|
||||
@@ -103,14 +104,14 @@ func NewLogicalDiskCollector() (Collector, error) {
|
||||
|
||||
FreeSpace: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "free_bytes"),
|
||||
"Free space in bytes (LogicalDisk.PercentFreeSpace)",
|
||||
"Free space in bytes, updates every 10-15 min (LogicalDisk.PercentFreeSpace)",
|
||||
[]string{"volume"},
|
||||
nil,
|
||||
),
|
||||
|
||||
TotalSpace: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "size_bytes"),
|
||||
"Total space in bytes (LogicalDisk.PercentFreeSpace_Base)",
|
||||
"Total space in bytes, updates every 10-15 min (LogicalDisk.PercentFreeSpace_Base)",
|
||||
[]string{"volume"},
|
||||
nil,
|
||||
),
|
||||
|
||||
collector/logical_disk_test.go (new file)
@@ -0,0 +1,13 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkLogicalDiskCollector(b *testing.B) {
|
||||
// Whitelist is not set in testing context (kingpin flags not parsed), causing the collector to skip all disks.
|
||||
localVolumeWhitelist := ".+"
|
||||
volumeWhitelist = &localVolumeWhitelist
|
||||
|
||||
benchmarkCollector(b, "logical_disk", NewLogicalDiskCollector)
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -6,8 +7,8 @@ import (
|
||||
"errors"
|
||||
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/logon_test.go (new file)
@@ -0,0 +1,10 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkLogonCollector(b *testing.B) {
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", NewLogonCollector)
|
||||
}
|
||||
@@ -1,13 +1,14 @@
|
||||
// returns data points from Win32_PerfRawData_PerfOS_Memory
|
||||
// <add link to documentation here> - Win32_PerfRawData_PerfOS_Memory class
|
||||
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/memory_test.go (new file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkMemoryCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "memory", NewMemoryCollector)
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -6,8 +7,8 @@ import (
|
||||
"strings"
|
||||
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
"gopkg.in/alecthomas/kingpin.v2"
|
||||
)
|
||||
|
||||
@@ -93,29 +94,27 @@ func (c *Win32_PerfRawData_MSMQ_MSMQQueueCollector) collect(ch chan<- prometheus
|
||||
}
|
||||
|
||||
for _, msmq := range dst {
|
||||
|
||||
if msmq.Name == "Computer Queues" {
|
||||
continue
|
||||
}
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.BytesinJournalQueue,
|
||||
prometheus.GaugeValue,
|
||||
float64(msmq.BytesinJournalQueue),
|
||||
strings.ToLower(msmq.Name),
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.BytesinQueue,
|
||||
prometheus.GaugeValue,
|
||||
float64(msmq.BytesinQueue),
|
||||
strings.ToLower(msmq.Name),
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesinJournalQueue,
|
||||
prometheus.GaugeValue,
|
||||
float64(msmq.MessagesinJournalQueue),
|
||||
strings.ToLower(msmq.Name),
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesinQueue,
|
||||
prometheus.GaugeValue,
|
||||
|
||||
collector/msmq_test.go (new file)
@@ -0,0 +1,10 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkMsmqCollector(b *testing.B) {
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", NewMSMQCollector)
|
||||
}
|
||||
File diff suppressed because it is too large
collector/mssql_test.go (new file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkMSSQLCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "mssql", NewMSSQLCollector)
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -6,8 +7,8 @@ import (
|
||||
"fmt"
|
||||
"regexp"
|
||||
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
"gopkg.in/alecthomas/kingpin.v2"
|
||||
)
|
||||
|
||||
@@ -70,25 +71,25 @@ func NewNetworkCollector() (Collector, error) {
|
||||
nil,
|
||||
),
|
||||
PacketsOutboundDiscarded: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "packets_outbound_discarded"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "packets_outbound_discarded_total"),
|
||||
"(Network.PacketsOutboundDiscarded)",
|
||||
[]string{"nic"},
|
||||
nil,
|
||||
),
|
||||
PacketsOutboundErrors: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "packets_outbound_errors"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "packets_outbound_errors_total"),
|
||||
"(Network.PacketsOutboundErrors)",
|
||||
[]string{"nic"},
|
||||
nil,
|
||||
),
|
||||
PacketsReceivedDiscarded: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "packets_received_discarded"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "packets_received_discarded_total"),
|
||||
"(Network.PacketsReceivedDiscarded)",
|
||||
[]string{"nic"},
|
||||
nil,
|
||||
),
|
||||
PacketsReceivedErrors: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "packets_received_errors"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "packets_received_errors_total"),
|
||||
"(Network.PacketsReceivedErrors)",
|
||||
[]string{"nic"},
|
||||
nil,
|
||||
@@ -100,7 +101,7 @@ func NewNetworkCollector() (Collector, error) {
|
||||
nil,
|
||||
),
|
||||
PacketsReceivedUnknown: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "packets_received_unknown"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "packets_received_unknown_total"),
|
||||
"(Network.PacketsReceivedUnknown)",
|
||||
[]string{"nic"},
|
||||
nil,
|
||||
@@ -118,7 +119,7 @@ func NewNetworkCollector() (Collector, error) {
|
||||
nil,
|
||||
),
|
||||
CurrentBandwidth: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "current_bandwidth"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "current_bandwidth_bytes"),
|
||||
"(Network.CurrentBandwidth)",
|
||||
[]string{"nic"},
|
||||
nil,
|
||||
@@ -251,7 +252,7 @@ func (c *NetworkCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metr
|
||||
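// Current Bandwidth is reported by perflib in bits per second; dividing the
// value by 8 exports bytes, matching the current_bandwidth_bytes metric name.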
ch <- prometheus.MustNewConstMetric(
|
||||
c.CurrentBandwidth,
|
||||
prometheus.GaugeValue,
|
||||
nic.CurrentBandwidth,
|
||||
nic.CurrentBandwidth/8,
|
||||
name,
|
||||
)
|
||||
}
|
||||
|
||||
@@ -1,8 +1,11 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import "testing"
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestNetworkToInstanceName(t *testing.T) {
|
||||
data := map[string]string{
|
||||
@@ -15,3 +18,10 @@ func TestNetworkToInstanceName(t *testing.T) {
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkNetCollector(b *testing.B) {
|
||||
// Whitelist is not set in testing context (kingpin flags not parsed), causing the collector to skip all interfaces.
|
||||
localNicWhitelist := ".+"
|
||||
nicWhitelist = &localNicWhitelist
|
||||
benchmarkCollector(b, "net", NewNetworkCollector)
|
||||
}
|
||||
|
||||
@@ -1,11 +1,12 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/netframework_clrexceptions_test.go (new file)
@@ -0,0 +1,10 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkNetFrameworkNETCLRExceptionsCollector(b *testing.B) {
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", NewNETFramework_NETCLRExceptionsCollector)
|
||||
}
|
||||
@@ -1,11 +1,12 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/netframework_clrinterop_test.go (new file)
@@ -0,0 +1,10 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkNETFrameworkNETCLRInteropCollector(b *testing.B) {
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", NewNETFramework_NETCLRInteropCollector)
|
||||
}
|
||||
@@ -1,11 +1,12 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/netframework_clrjit_test.go (new file)
@@ -0,0 +1,10 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkNETFrameworkNETCLRJitCollector(b *testing.B) {
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", NewNETFramework_NETCLRJitCollector)
|
||||
}
|
||||
@@ -1,11 +1,12 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/netframework_clrloading_test.go (new file)
@@ -0,0 +1,10 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkNETFrameworkNETCLRLoadingCollector(b *testing.B) {
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", NewNETFramework_NETCLRLoadingCollector)
|
||||
}
|
||||
@@ -1,11 +1,12 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/netframework_clrlocksandthreads_test.go (new file)
@@ -0,0 +1,10 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkNETFrameworkNETCLRLocksAndThreadsCollector(b *testing.B) {
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", NewNETFramework_NETCLRLocksAndThreadsCollector)
|
||||
}
|
||||
@@ -1,11 +1,12 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/netframework_clrmemory_test.go (new file)
@@ -0,0 +1,10 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkNETFrameworkNETCLRMemoryCollector(b *testing.B) {
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", NewNETFramework_NETCLRMemoryCollector)
|
||||
}
|
||||
@@ -1,11 +1,12 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/netframework_clrremoting_test.go (new file)
@@ -0,0 +1,10 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkNETFrameworkNETCLRRemotingCollector(b *testing.B) {
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", NewNETFramework_NETCLRRemotingCollector)
|
||||
}
|
||||
@@ -1,11 +1,12 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/netframework_clrsecurity_test.go (new file)
@@ -0,0 +1,10 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkNETFrameworkNETCLRSecurityCollector(b *testing.B) {
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", NewNETFramework_NETCLRSecurityCollector)
|
||||
}
|
||||
collector/os.go
@@ -1,18 +1,24 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"os"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/headers/netapi32"
|
||||
"github.com/prometheus-community/windows_exporter/headers/psapi"
|
||||
"github.com/prometheus-community/windows_exporter/headers/sysinfoapi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
"golang.org/x/sys/windows/registry"
|
||||
)
|
||||
|
||||
func init() {
|
||||
registerCollector("os", NewOSCollector)
|
||||
registerCollector("os", NewOSCollector, "Paging File")
|
||||
}
|
||||
|
||||
// A OSCollector is a Prometheus collector for WMI metrics
|
||||
@@ -32,6 +38,12 @@ type OSCollector struct {
|
||||
Timezone *prometheus.Desc
|
||||
}
|
||||
|
||||
type pagingFileCounter struct {
|
||||
Name string
|
||||
Usage float64 `perflib:"% Usage"`
|
||||
UsagePeak float64 `perflib:"% Usage Peak"`
|
||||
}
|
||||
|
||||
// NewOSCollector ...
|
||||
func NewOSCollector() (Collector, error) {
|
||||
const subsystem = "os"
|
||||
@@ -86,7 +98,7 @@ func NewOSCollector() (Collector, error) {
|
||||
nil,
|
||||
),
|
||||
ProcessMemoryLimitBytes: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "process_memory_limix_bytes"),
|
||||
prometheus.BuildFQName(Namespace, subsystem, "process_memory_limit_bytes"),
|
||||
"OperatingSystem.MaxProcessMemorySize",
|
||||
nil,
|
||||
nil,
|
||||
@@ -121,7 +133,7 @@ func NewOSCollector() (Collector, error) {
|
||||
// Collect sends the metric values for each metric
|
||||
// to the provided prometheus Metric channel.
|
||||
func (c *OSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
|
||||
if desc, err := c.collect(ch); err != nil {
|
||||
if desc, err := c.collect(ctx, ch); err != nil {
|
||||
log.Error("failed collecting os metrics:", desc, err)
|
||||
return err
|
||||
}
|
||||
@@ -146,41 +158,102 @@ type Win32_OperatingSystem struct {
|
||||
Version string
|
||||
}
|
||||
|
||||
func (c *OSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
|
||||
var dst []Win32_OperatingSystem
|
||||
q := queryAll(&dst)
|
||||
if err := wmi.Query(q, &dst); err != nil {
|
||||
func (c *OSCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
|
||||
nwgi, err := netapi32.GetWorkstationInfo()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if len(dst) == 0 {
|
||||
return nil, errors.New("WMI query returned empty result set")
|
||||
gmse, err := sysinfoapi.GlobalMemoryStatusEx()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
currentTime := time.Now()
|
||||
timezoneName, _ := currentTime.Zone()
|
||||
|
||||
// Get total allocation of paging files across all disks.
|
||||
memManKey, err := registry.OpenKey(registry.LOCAL_MACHINE, `SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management`, registry.QUERY_VALUE)
|
||||
defer memManKey.Close()
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pagingFiles, _, err := memManKey.GetStringsValue("ExistingPageFiles")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Get build number and product name from registry
|
||||
ntKey, err := registry.OpenKey(registry.LOCAL_MACHINE, `SOFTWARE\Microsoft\Windows NT\CurrentVersion`, registry.QUERY_VALUE)
|
||||
defer ntKey.Close()
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
pn, _, err := ntKey.GetStringValue("ProductName")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
bn, _, err := ntKey.GetStringValue("CurrentBuildNumber")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var fsipf float64
|
||||
for _, pagingFile := range pagingFiles {
|
||||
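// Registry entries use NT device paths such as `\??\C:\pagefile.sys`;
// stripping the `\??\` prefix leaves a path that os.Stat can resolve.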
fileString := strings.ReplaceAll(pagingFile, `\??\`, "")
|
||||
file, err := os.Stat(fileString)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
fsipf += float64(file.Size())
|
||||
}
|
||||
|
||||
gpi, err := psapi.GetPerformanceInfo()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var pfc = make([]pagingFileCounter, 0)
|
||||
if err := unmarshalObject(ctx.perfObjects["Paging File"], &pfc); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Get current page file usage.
|
||||
var pfbRaw float64
|
||||
for _, pageFile := range pfc {
|
||||
if strings.Contains(strings.ToLower(pageFile.Name), "_total") {
|
||||
continue
|
||||
}
|
||||
pfbRaw += pageFile.Usage
|
||||
}
|
||||
|
||||
// Subtract from total page file allocation on disk.
|
||||
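// Illustrative example (hypothetical numbers): with an 8 GiB page file on
// disk (fsipf) and 262144 in-use pages of 4 KiB each (pfbRaw*PageSize = 1 GiB),
// pfb comes out to 7 GiB of free paging space.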
pfb := fsipf - (pfbRaw * float64(gpi.PageSize))
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.OSInformation,
|
||||
prometheus.GaugeValue,
|
||||
1.0,
|
||||
dst[0].Caption,
|
||||
dst[0].Version,
|
||||
fmt.Sprintf("Microsoft %s", pn), // Caption
|
||||
fmt.Sprintf("%d.%d.%s", nwgi.VersionMajor, nwgi.VersionMinor, bn), // Version
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.PhysicalMemoryFreeBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].FreePhysicalMemory*1024), // KiB -> bytes
|
||||
float64(gmse.AvailPhys),
|
||||
)
|
||||
|
||||
time := dst[0].LocalDateTime
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.Time,
|
||||
prometheus.GaugeValue,
|
||||
float64(time.Unix()),
|
||||
float64(currentTime.Unix()),
|
||||
)
|
||||
|
||||
timezoneName, _ := time.Zone()
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.Timezone,
|
||||
prometheus.GaugeValue,
|
||||
@@ -191,55 +264,58 @@ func (c *OSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, er
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.PagingFreeBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].FreeSpaceInPagingFiles*1024), // KiB -> bytes
|
||||
pfb,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.VirtualMemoryFreeBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].FreeVirtualMemory*1024), // KiB -> bytes
|
||||
float64(gmse.AvailPageFile),
|
||||
)
|
||||
|
||||
// Windows has no defined limit, and is based off available resources. This currently isn't calculated by WMI and is set to default value.
|
||||
// https://techcommunity.microsoft.com/t5/windows-blog-archive/pushing-the-limits-of-windows-processes-and-threads/ba-p/723824
|
||||
// https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/win32-operatingsystem
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ProcessesLimit,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].MaxNumberOfProcesses),
|
||||
float64(4294967295),
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ProcessMemoryLimitBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].MaxProcessMemorySize*1024), // KiB -> bytes
|
||||
float64(gmse.TotalVirtual),
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.Processes,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].NumberOfProcesses),
|
||||
float64(gpi.ProcessCount),
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.Users,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].NumberOfUsers),
|
||||
float64(nwgi.LoggedOnUsers),
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.PagingLimitBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].SizeStoredInPagingFiles*1024), // KiB -> bytes
|
||||
fsipf,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.VirtualMemoryBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].TotalVirtualMemorySize*1024), // KiB -> bytes
|
||||
float64(gmse.TotalPageFile),
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.VisibleMemoryBytes,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].TotalVisibleMemorySize*1024), // KiB -> bytes
|
||||
float64(gmse.TotalPhys),
|
||||
)
|
||||
|
||||
return nil, nil
|
||||
|
||||
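Note: the free-paging-bytes figure emitted above is derived as total on-disk page-file allocation minus the pages currently in use, converted to bytes. A minimal standalone sketch of that arithmetic follows; `pageFileSizesBytes`, `usagePages`, and `pageSizeBytes` are hypothetical stand-ins for the values the collector reads from the `ExistingPageFiles` registry value, the "Paging File" perflib object, and `GetPerformanceInfo`.

```go
package main

import "fmt"

// freePagingBytes mirrors the collector's calculation: total on-disk page-file
// allocation minus the in-use pages converted to bytes.
func freePagingBytes(pageFileSizesBytes, usagePages []float64, pageSizeBytes float64) float64 {
	var total, used float64
	for _, s := range pageFileSizesBytes {
		total += s
	}
	for _, u := range usagePages {
		used += u * pageSizeBytes
	}
	return total - used
}

func main() {
	// Example: one 4 GiB pagefile.sys, 1024 pages in use, 4 KiB pages.
	fmt.Println(freePagingBytes([]float64{4 << 30}, []float64{1024}, 4096))
}
```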
collector/os_test.go | 9 (Normal file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkOSCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "os", NewOSCollector)
|
||||
}
|
||||
@@ -7,7 +7,7 @@ import (
|
||||
|
||||
perflibCollector "github.com/leoluk/perflib_exporter/collector"
|
||||
"github.com/leoluk/perflib_exporter/perflib"
|
||||
"github.com/prometheus/common/log"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
)
|
||||
|
||||
var nametable = perflib.QueryNameTable("Counter 009") // Reads the names in English. TODO: validate that the English names are always present.
|
||||
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -9,8 +10,8 @@ import (
|
||||
"strings"
|
||||
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
"gopkg.in/alecthomas/kingpin.v2"
|
||||
)
|
||||
|
||||
@@ -42,6 +43,8 @@ type processCollector struct {
|
||||
PrivateBytes *prometheus.Desc
|
||||
ThreadCount *prometheus.Desc
|
||||
VirtualBytes *prometheus.Desc
|
||||
WorkingSetPrivate *prometheus.Desc
|
||||
WorkingSetPeak *prometheus.Desc
|
||||
WorkingSet *prometheus.Desc
|
||||
|
||||
processWhitelistPattern *regexp.Regexp
|
||||
@@ -65,7 +68,7 @@ func newProcessCollector() (Collector, error) {
|
||||
),
|
||||
CPUTimeTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "cpu_time_total"),
|
||||
"Returns elapsed time that all of the threads of this process used the processor to execute instructions by mode (privileged, user). An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count.",
|
||||
"Returns elapsed time that all of the threads of this process used the processor to execute instructions by mode (privileged, user).",
|
||||
[]string{"process", "process_id", "creating_process_id", "mode"},
|
||||
nil,
|
||||
),
|
||||
@@ -77,31 +80,31 @@ func newProcessCollector() (Collector, error) {
|
||||
),
|
||||
IOBytesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "io_bytes_total"),
|
||||
"Bytes issued to I/O operations in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations. ",
|
||||
"Bytes issued to I/O operations in different modes (read, write, other).",
|
||||
[]string{"process", "process_id", "creating_process_id", "mode"},
|
||||
nil,
|
||||
),
|
||||
IOOperationsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "io_operations_total"),
|
||||
"I/O operations issued in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations. ",
|
||||
"I/O operations issued in different modes (read, write, other).",
|
||||
[]string{"process", "process_id", "creating_process_id", "mode"},
|
||||
nil,
|
||||
),
|
||||
PageFaultsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "page_faults_total"),
|
||||
"Page faults by the threads executing in this process. A page fault occurs when a thread refers to a virtual memory page that is not in its working set in main memory. This can cause the page not to be fetched from disk if it is on the standby list and hence already in main memory, or if it is in use by another process with which the page is shared.",
|
||||
"Page faults by the threads executing in this process.",
|
||||
[]string{"process", "process_id", "creating_process_id"},
|
||||
nil,
|
||||
),
|
||||
PageFileBytes: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "page_file_bytes"),
|
||||
"Current number of bytes this process has used in the paging file(s). Paging files are used to store pages of memory used by the process that are not contained in other files. Paging files are shared by all processes, and lack of space in paging files can prevent other processes from allocating memory.",
|
||||
"Current number of bytes this process has used in the paging file(s).",
|
||||
[]string{"process", "process_id", "creating_process_id"},
|
||||
nil,
|
||||
),
|
||||
PoolBytes: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "pool_bytes"),
|
||||
"Pool Bytes is the last observed number of bytes in the paged or nonpaged pool. The nonpaged pool is an area of system memory (physical memory used by the operating system) for objects that cannot be written to disk, but must remain in physical memory as long as they are allocated. The paged pool is an area of system memory (physical memory used by the operating system) for objects that can be written to disk when they are not being used. Nonpaged pool bytes is calculated differently than paged pool bytes, so it might not equal the total of paged pool bytes.",
|
||||
"Pool Bytes is the last observed number of bytes in the paged or nonpaged pool.",
|
||||
[]string{"process", "process_id", "creating_process_id", "pool"},
|
||||
nil,
|
||||
),
|
||||
@@ -119,19 +122,31 @@ func newProcessCollector() (Collector, error) {
|
||||
),
|
||||
ThreadCount: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "thread_count"),
|
||||
"Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread.",
|
||||
"Number of threads currently active in this process.",
|
||||
[]string{"process", "process_id", "creating_process_id"},
|
||||
nil,
|
||||
),
|
||||
VirtualBytes: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "virtual_bytes"),
|
||||
"Current size, in bytes, of the virtual address space that the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite and, by using too much, the process can limit its ability to load libraries.",
|
||||
"Current size, in bytes, of the virtual address space that the process is using.",
|
||||
[]string{"process", "process_id", "creating_process_id"},
|
||||
nil,
|
||||
),
|
||||
WorkingSetPrivate: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "working_set_private_bytes"),
|
||||
"Size of the working set, in bytes, that is use for this process only and not shared nor shareable by other processes.",
|
||||
[]string{"process", "process_id", "creating_process_id"},
|
||||
nil,
|
||||
),
|
||||
WorkingSetPeak: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "working_set_peak_bytes"),
|
||||
"Maximum size, in bytes, of the Working Set of this process at any point in time. The Working Set is the set of memory pages touched recently by the threads in the process.",
|
||||
[]string{"process", "process_id", "creating_process_id"},
|
||||
nil,
|
||||
),
|
||||
WorkingSet: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "working_set"),
|
||||
"Maximum number of bytes in the working set of this process at any point in time. The working set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from working sets. If they are needed, they are then soft-faulted back into the working set before they leave main memory.",
|
||||
prometheus.BuildFQName(Namespace, subsystem, "working_set_bytes"),
|
||||
"Maximum number of bytes in the working set of this process at any point in time. The working set is the set of memory pages touched recently by the threads in the process.",
|
||||
[]string{"process", "process_id", "creating_process_id"},
|
||||
nil,
|
||||
),
|
||||
@@ -380,6 +395,24 @@ func (c *processCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metr
|
||||
cpid,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.WorkingSetPrivate,
|
||||
prometheus.GaugeValue,
|
||||
process.WorkingSetPrivate,
|
||||
processName,
|
||||
pid,
|
||||
cpid,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.WorkingSetPeak,
|
||||
prometheus.GaugeValue,
|
||||
process.WorkingSetPeak,
|
||||
processName,
|
||||
pid,
|
||||
cpid,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.WorkingSet,
|
||||
prometheus.GaugeValue,
|
||||
|
||||
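Note: the rename above changes the exposed series from `windows_process_working_set` to `windows_process_working_set_bytes` (assuming the exporter's `Namespace` constant is `windows`), since `prometheus.BuildFQName` simply joins the non-empty parts with underscores. A quick illustration:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// BuildFQName joins namespace, subsystem and name with underscores.
	fmt.Println(prometheus.BuildFQName("windows", "process", "working_set_bytes"))
	// Output: windows_process_working_set_bytes
}
```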
collector/process_test.go | 14 (Normal file)
@@ -0,0 +1,14 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkProcessCollector(b *testing.B) {
|
||||
// Whitelist is not set in testing context (kingpin flags not parsed), causing the collector to skip all processes.
|
||||
localProcessWhitelist := ".+"
|
||||
processWhitelist = &localProcessWhitelist
|
||||
|
||||
// No context name required as collector source is WMI
|
||||
benchmarkCollector(b, "", newProcessCollector)
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -5,8 +6,8 @@ package collector
|
||||
import (
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
@@ -60,7 +61,7 @@ func NewRemoteFx() (Collector, error) {
|
||||
),
|
||||
CurrentTCPBandwidth: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "net_current_tcp_bandwidth"),
|
||||
"TCP Bandwidth detected in bytes per seccond.",
|
||||
"TCP Bandwidth detected in bytes per second.",
|
||||
[]string{"session_name"},
|
||||
nil,
|
||||
),
|
||||
|
||||
collector/remote_fx_test.go | 9 (Normal file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkRemoteFXCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "remote_fx", NewRemoteFx)
|
||||
}
|
||||
@@ -1,14 +1,17 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"strconv"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
"golang.org/x/sys/windows"
|
||||
"golang.org/x/sys/windows/svc/mgr"
|
||||
"gopkg.in/alecthomas/kingpin.v2"
|
||||
)
|
||||
|
||||
@@ -21,6 +24,10 @@ var (
|
||||
"collector.service.services-where",
|
||||
"WQL 'where' clause to use in WMI metrics query. Limits the response to the services you specify and reduces the size of the response.",
|
||||
).Default("").String()
|
||||
useAPI = kingpin.Flag(
|
||||
"collector.service.use-api",
|
||||
"Use API calls to collect service data instead of WMI. Flag 'collector.service.services-where' won't be effective.",
|
||||
).Default("false").Bool()
|
||||
)
|
||||
|
||||
// A serviceCollector is a Prometheus collector for WMI Win32_Service metrics
|
||||
@@ -40,6 +47,9 @@ func NewserviceCollector() (Collector, error) {
|
||||
if *serviceWhereClause == "" {
|
||||
log.Warn("No where-clause specified for service collector. This will generate a very large number of metrics!")
|
||||
}
|
||||
if *useAPI {
|
||||
log.Warn("API collection is enabled.")
|
||||
}
|
||||
|
||||
return &serviceCollector{
|
||||
Information: prometheus.NewDesc(
|
||||
@@ -73,9 +83,16 @@ func NewserviceCollector() (Collector, error) {
|
||||
// Collect sends the metric values for each metric
|
||||
// to the provided prometheus Metric channel.
|
||||
func (c *serviceCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
|
||||
if desc, err := c.collect(ch); err != nil {
|
||||
log.Error("failed collecting service metrics:", desc, err)
|
||||
return err
|
||||
if *useAPI {
|
||||
if err := c.collectAPI(ch); err != nil {
|
||||
log.Error("failed collecting API service metrics:", err)
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
if err := c.collectWMI(ch); err != nil {
|
||||
log.Error("failed collecting WMI service metrics:", err)
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@@ -103,6 +120,15 @@ var (
|
||||
"paused",
|
||||
"unknown",
|
||||
}
|
||||
apiStateValues = map[uint]string{
|
||||
windows.SERVICE_CONTINUE_PENDING: "continue pending",
|
||||
windows.SERVICE_PAUSE_PENDING: "pause pending",
|
||||
windows.SERVICE_PAUSED: "paused",
|
||||
windows.SERVICE_RUNNING: "running",
|
||||
windows.SERVICE_START_PENDING: "start pending",
|
||||
windows.SERVICE_STOP_PENDING: "stop pending",
|
||||
windows.SERVICE_STOPPED: "stopped",
|
||||
}
|
||||
allStartModes = []string{
|
||||
"boot",
|
||||
"system",
|
||||
@@ -110,6 +136,13 @@ var (
|
||||
"manual",
|
||||
"disabled",
|
||||
}
|
||||
apiStartModeValues = map[uint32]string{
|
||||
windows.SERVICE_AUTO_START: "auto",
|
||||
windows.SERVICE_BOOT_START: "boot",
|
||||
windows.SERVICE_DEMAND_START: "manual",
|
||||
windows.SERVICE_DISABLED: "disabled",
|
||||
windows.SERVICE_SYSTEM_START: "system",
|
||||
}
|
||||
allStatuses = []string{
|
||||
"ok",
|
||||
"error",
|
||||
@@ -126,14 +159,14 @@ var (
|
||||
}
|
||||
)
|
||||
|
||||
func (c *serviceCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
|
||||
func (c *serviceCollector) collectWMI(ch chan<- prometheus.Metric) error {
|
||||
var dst []Win32_Service
|
||||
q := queryAllWhere(&dst, c.queryWhereClause)
|
||||
if err := wmi.Query(q, &dst); err != nil {
|
||||
return nil, err
|
||||
return err
|
||||
}
|
||||
for _, service := range dst {
|
||||
pid := strconv.FormatUint(uint64(service.ProcessId), 10)
|
||||
pid := fmt.Sprintf("%d", uint64(service.ProcessId))
|
||||
|
||||
runAs := ""
|
||||
if service.StartName != nil {
|
||||
@@ -191,5 +224,82 @@ func (c *serviceCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
|
||||
)
|
||||
}
|
||||
}
|
||||
return nil, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *serviceCollector) collectAPI(ch chan<- prometheus.Metric) error {
|
||||
svcmgrConnection, err := mgr.Connect()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer svcmgrConnection.Disconnect() //nolint:errcheck
|
||||
|
||||
// List All Services from the Services Manager
|
||||
serviceList, err := svcmgrConnection.ListServices()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Iterate through the Services List
|
||||
for _, service := range serviceList {
|
||||
// Retrieve handle for each service
|
||||
serviceHandle, err := svcmgrConnection.OpenService(service)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
defer serviceHandle.Close()
|
||||
|
||||
// Get Service Configuration
|
||||
serviceConfig, err := serviceHandle.Config()
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
// Get Service Current Status
|
||||
serviceStatus, err := serviceHandle.Query()
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
pid := fmt.Sprintf("%d", uint64(serviceStatus.ProcessId))
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.Information,
|
||||
prometheus.GaugeValue,
|
||||
1.0,
|
||||
strings.ToLower(service),
|
||||
serviceConfig.DisplayName,
|
||||
pid,
|
||||
serviceConfig.ServiceStartName,
|
||||
)
|
||||
|
||||
for _, state := range apiStateValues {
|
||||
isCurrentState := 0.0
|
||||
if state == apiStateValues[uint(serviceStatus.State)] {
|
||||
isCurrentState = 1.0
|
||||
}
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.State,
|
||||
prometheus.GaugeValue,
|
||||
isCurrentState,
|
||||
strings.ToLower(service),
|
||||
state,
|
||||
)
|
||||
}
|
||||
|
||||
for _, startMode := range apiStartModeValues {
|
||||
isCurrentStartMode := 0.0
|
||||
if startMode == apiStartModeValues[serviceConfig.StartType] {
|
||||
isCurrentStartMode = 1.0
|
||||
}
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.StartMode,
|
||||
prometheus.GaugeValue,
|
||||
isCurrentStartMode,
|
||||
strings.ToLower(service),
|
||||
startMode,
|
||||
)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
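Note: the API path above exports service state and start mode as one-hot gauges: one series per known value, with the series for the current value set to 1. A rough self-contained sketch of that encoding (the state names mirror `apiStateValues`; the helper itself is hypothetical):

```go
package main

import "fmt"

// oneHot returns the 0/1 gauge values emitted for each possible value,
// mirroring the loops over apiStateValues and apiStartModeValues above.
func oneHot(current string, all []string) map[string]float64 {
	out := make(map[string]float64, len(all))
	for _, s := range all {
		if s == current {
			out[s] = 1.0
		} else {
			out[s] = 0.0
		}
	}
	return out
}

func main() {
	states := []string{"stopped", "start pending", "stop pending", "running",
		"continue pending", "pause pending", "paused"}
	fmt.Println(oneHot("running", states))
}
```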
collector/service_test.go | 9 (Normal file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkServiceCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "service", NewserviceCollector)
|
||||
}
|
||||
collector/smtp.go | 694 (Normal file)
@@ -0,0 +1,694 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"gopkg.in/alecthomas/kingpin.v2"
|
||||
"regexp"
|
||||
)
|
||||
|
||||
func init() {
|
||||
registerCollector("smtp", NewSMTPCollector, "SMTP Server")
|
||||
}
|
||||
|
||||
var (
|
||||
serverWhitelist = kingpin.Flag("collector.smtp.server-whitelist", "Regexp of virtual servers to whitelist. Server name must both match whitelist and not match blacklist to be included.").Default(".+").String()
|
||||
serverBlacklist = kingpin.Flag("collector.smtp.server-blacklist", "Regexp of virtual servers to blacklist. Server name must both match whitelist and not match blacklist to be included.").String()
|
||||
)
|
||||
|
||||
type SMTPCollector struct {
|
||||
BadmailedMessagesBadPickupFileTotal *prometheus.Desc
|
||||
BadmailedMessagesGeneralFailureTotal *prometheus.Desc
|
||||
BadmailedMessagesHopCountExceededTotal *prometheus.Desc
|
||||
BadmailedMessagesNDROfDSNTotal *prometheus.Desc
|
||||
BadmailedMessagesNoRecipientsTotal *prometheus.Desc
|
||||
BadmailedMessagesTriggeredViaEventTotal *prometheus.Desc
|
||||
BytesSentTotal *prometheus.Desc
|
||||
BytesReceivedTotal *prometheus.Desc
|
||||
CategorizerQueueLength *prometheus.Desc
|
||||
ConnectionErrorsTotal *prometheus.Desc
|
||||
CurrentMessagesInLocalDelivery *prometheus.Desc
|
||||
DirectoryDropsTotal *prometheus.Desc
|
||||
DNSQueriesTotal *prometheus.Desc
|
||||
DSNFailuresTotal *prometheus.Desc
|
||||
ETRNMessagesTotal *prometheus.Desc
|
||||
InboundConnectionsCurrent *prometheus.Desc
|
||||
InboundConnectionsTotal *prometheus.Desc
|
||||
LocalQueueLength *prometheus.Desc
|
||||
LocalRetryQueueLength *prometheus.Desc
|
||||
MailFilesOpen *prometheus.Desc
|
||||
MessageBytesReceivedTotal *prometheus.Desc
|
||||
MessageBytesSentTotal *prometheus.Desc
|
||||
MessageDeliveryRetriesTotal *prometheus.Desc
|
||||
MessageSendRetriesTotal *prometheus.Desc
|
||||
MessagesCurrentlyUndeliverable *prometheus.Desc
|
||||
MessagesDeliveredTotal *prometheus.Desc
|
||||
MessagesPendingRouting *prometheus.Desc
|
||||
MessagesReceivedTotal *prometheus.Desc
|
||||
MessagesRefusedForAddressObjectsTotal *prometheus.Desc
|
||||
MessagesRefusedForMailObjectsTotal *prometheus.Desc
|
||||
MessagesRefusedForSizeTotal *prometheus.Desc
|
||||
MessagesSentTotal *prometheus.Desc
|
||||
MessagesSubmittedTotal *prometheus.Desc
|
||||
NDRsGeneratedTotal *prometheus.Desc
|
||||
OutboundConnectionsCurrent *prometheus.Desc
|
||||
OutboundConnectionsRefusedTotal *prometheus.Desc
|
||||
OutboundConnectionsTotal *prometheus.Desc
|
||||
QueueFilesOpen *prometheus.Desc
|
||||
PickupDirectoryMessagesRetrievedTotal *prometheus.Desc
|
||||
RemoteQueueLength *prometheus.Desc
|
||||
RemoteRetryQueueLength *prometheus.Desc
|
||||
RoutingTableLookupsTotal *prometheus.Desc
|
||||
|
||||
serverWhitelistPattern *regexp.Regexp
|
||||
serverBlacklistPattern *regexp.Regexp
|
||||
}
|
||||
|
||||
func NewSMTPCollector() (Collector, error) {
|
||||
log.Info("smtp collector is in an experimental state! Metrics for this collector have not been tested.")
|
||||
const subsystem = "smtp"
|
||||
|
||||
return &SMTPCollector{
|
||||
BadmailedMessagesBadPickupFileTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "badmailed_messages_bad_pickup_file_total"),
|
||||
"Total number of malformed pickup messages sent to badmail",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
BadmailedMessagesGeneralFailureTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "badmailed_messages_general_failure_total"),
|
||||
"Total number of messages sent to badmail for reasons not associated with a specific counter",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
BadmailedMessagesHopCountExceededTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "badmailed_messages_hop_count_exceeded_total"),
|
||||
"Total number of messages sent to badmail because they had exceeded the maximum hop count",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
BadmailedMessagesNDROfDSNTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "badmailed_messages_ndr_of_dns_total"),
|
||||
"Total number of Delivery Status Notifications sent to badmail because they could not be delivered",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
BadmailedMessagesNoRecipientsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "badmailed_messages_no_recipients_total"),
|
||||
"Total number of messages sent to badmail because they had no recipients",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
BadmailedMessagesTriggeredViaEventTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "badmailed_messages_triggered_via_event_total"),
|
||||
"Total number of messages sent to badmail at the request of a server event sink",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
BytesSentTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "bytes_sent_total"),
|
||||
"Total number of bytes sent",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
BytesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "bytes_received_total"),
|
||||
"Total number of bytes received",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
CategorizerQueueLength: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "categorizer_queue_length"),
|
||||
"Number of messages in the categorizer queue",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
ConnectionErrorsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_errors_total"),
|
||||
"Total number of connection errors",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
CurrentMessagesInLocalDelivery: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "current_messages_in_local_delivery"),
|
||||
"Number of messages that are currently being processed by a server event sink for local delivery",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
DirectoryDropsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "directory_drops_total"),
|
||||
"Total number of messages placed in a drop directory",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
DSNFailuresTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "dsn_failures_total"),
|
||||
"Total number of failed DSN generation attempts",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
DNSQueriesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "dns_queries_total"),
|
||||
"Total number of DNS lookups",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
ETRNMessagesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "etrn_messages_total"),
|
||||
"Total number of ETRN messages received by the server",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
InboundConnectionsCurrent: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "inbound_connections_current"),
|
||||
"Total number of connections currently inbound",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
InboundConnectionsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "inbound_connections_total"),
|
||||
"Total number of inbound connections received",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
LocalQueueLength: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "local_queue_length"),
|
||||
"Number of messages in the local queue",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
LocalRetryQueueLength: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "local_retry_queue_length"),
|
||||
"Number of messages in the local retry queue",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MailFilesOpen: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "mail_files_open"),
|
||||
"Number of handles to open mail files",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessageBytesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "message_bytes_received_total"),
|
||||
"Total number of bytes received in messages",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessageBytesSentTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "message_bytes_sent_total"),
|
||||
"Total number of bytes sent in messages",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessageDeliveryRetriesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "message_delivery_retries_total"),
|
||||
"Total number of local deliveries that were retried",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessageSendRetriesTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "message_send_retries_total"),
|
||||
"Total number of outbound message sends that were retried",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessagesCurrentlyUndeliverable: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "messages_currently_undeliverable"),
|
||||
"Number of messages that have been reported as currently undeliverable by routing",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessagesDeliveredTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "messages_delivered_total"),
|
||||
"Total number of messages delivered to local mailboxes",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessagesPendingRouting: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "messages_pending_routing"),
|
||||
"Number of messages that have been categorized but not routed",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessagesReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "messages_received_total"),
|
||||
"Total number of inbound messages accepted",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessagesRefusedForAddressObjectsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "messages_refused_for_address_objects_total"),
|
||||
"Total number of messages refused due to no address objects",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessagesRefusedForMailObjectsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "messages_refused_for_mail_objects_total"),
|
||||
"Total number of messages refused due to no mail objects",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessagesRefusedForSizeTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "messages_refused_for_size_total"),
|
||||
"Total number of messages rejected because they were too big",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessagesSentTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "messages_sent_total"),
|
||||
"Total number of outbound messages sent",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
MessagesSubmittedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "messages_submitted_total"),
|
||||
"Total number of messages submitted to queuing for delivery",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
NDRsGeneratedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "ndrs_generated_total"),
|
||||
"Total number of non-delivery reports that have been generated",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
OutboundConnectionsCurrent: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "outbound_connections_current"),
|
||||
"Number of connections currently outbound",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
OutboundConnectionsRefusedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "outbound_connections_refused_total"),
|
||||
"Total number of connection attempts refused by remote sites",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
OutboundConnectionsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "outbound_connections_total"),
|
||||
"Total number of outbound connections attempted",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
PickupDirectoryMessagesRetrievedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "pickup_directory_messages_retrieved_total"),
|
||||
"Total number of messages retrieved from the mail pick-up directory",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
QueueFilesOpen: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "queue_files_open"),
|
||||
"Number of handles to open queue files",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
RemoteQueueLength: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "remote_queue_length"),
|
||||
"Number of messages in the remote queue",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
RemoteRetryQueueLength: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "remote_retry_queue_length"),
|
||||
"Number of messages in the retry queue for remote delivery",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
RoutingTableLookupsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "routing_table_lookups_total"),
|
||||
"Total number of routing table lookups",
|
||||
[]string{"site"},
|
||||
nil,
|
||||
),
|
||||
|
||||
serverWhitelistPattern: regexp.MustCompile(fmt.Sprintf("^(?:%s)$", *serverWhitelist)),
|
||||
serverBlacklistPattern: regexp.MustCompile(fmt.Sprintf("^(?:%s)$", *serverBlacklist)),
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Collect sends the metric values for each metric
|
||||
// to the provided prometheus Metric channel.
|
||||
func (c *SMTPCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
|
||||
if desc, err := c.collect(ctx, ch); err != nil {
|
||||
log.Error("failed collecting smtp metrics:", desc, err)
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Perflib: "SMTP Server"
|
||||
type PerflibSMTPServer struct {
|
||||
Name string
|
||||
|
||||
BadmailedMessagesBadPickupFileTotal float64 `perflib:"Badmailed Messages (Bad Pickup File)"`
|
||||
BadmailedMessagesGeneralFailureTotal float64 `perflib:"Badmailed Messages (General Failure)"`
|
||||
BadmailedMessagesHopCountExceededTotal float64 `perflib:"Badmailed Messages (Hop Count Exceeded)"`
|
||||
BadmailedMessagesNDROfDSNTotal float64 `perflib:"Badmailed Messages (NDR of DSN)"`
|
||||
BadmailedMessagesNoRecipientsTotal float64 `perflib:"Badmailed Messages (No Recipients)"`
|
||||
BadmailedMessagesTriggeredViaEventTotal float64 `perflib:"Badmailed Messages (Triggered via Event)"`
|
||||
BytesSentTotal float64 `perflib:"Bytes Sent Total"`
|
||||
BytesReceivedTotal float64 `perflib:"Bytes Received Total"`
|
||||
CategorizerQueueLength float64 `perflib:"Categorizer Queue Length"`
|
||||
ConnectionErrorsTotal float64 `perflib:"Total Connection Errors"`
|
||||
CurrentMessagesInLocalDelivery float64 `perflib:"Current Messages in Local Delivery"`
|
||||
DirectoryDropsTotal float64 `perflib:"Directory Drops Total"`
|
||||
DNSQueriesTotal float64 `perflib:"DNS Queries Total"`
|
||||
DSNFailuresTotal float64 `perflib:"Total DSN Failures"`
|
||||
ETRNMessagesTotal float64 `perflib:"ETRN Messages Total"`
|
||||
InboundConnectionsCurrent float64 `perflib:"Inbound Connections Current"`
|
||||
InboundConnectionsTotal float64 `perflib:"Inbound Connections Total"`
|
||||
LocalQueueLength float64 `perflib:"Local Queue Length"`
|
||||
LocalRetryQueueLength float64 `perflib:"Local Retry Queue Length"`
|
||||
MailFilesOpen float64 `perflib:"Number of MailFiles Open"`
|
||||
MessageBytesReceivedTotal float64 `perflib:"Message Bytes Received Total"`
|
||||
MessageBytesSentTotal float64 `perflib:"Message Bytes Sent Total"`
|
||||
MessageDeliveryRetriesTotal float64 `perflib:"Message Delivery Retries"`
|
||||
MessageSendRetriesTotal float64 `perflib:"Message Send Retries"`
|
||||
MessagesCurrentlyUndeliverable float64 `perflib:"Messages Currently Undeliverable"`
|
||||
MessagesDeliveredTotal float64 `perflib:"Messages Delivered Total"`
|
||||
MessagesPendingRouting float64 `perflib:"Messages Pending Routing"`
|
||||
MessagesReceivedTotal float64 `perflib:"Messages Received Total"`
|
||||
MessagesRefusedForAddressObjectsTotal float64 `perflib:"Messages Refused for Address Objects"`
|
||||
MessagesRefusedForMailObjectsTotal float64 `perflib:"Messages Refused for Mail Objects"`
|
||||
MessagesRefusedForSizeTotal float64 `perflib:"Messages Refused for Size"`
|
||||
MessagesSentTotal float64 `perflib:"Messages Sent Total"`
|
||||
MessagesSubmittedTotal float64 `perflib:"Total messages submitted"`
|
||||
NDRsGeneratedTotal float64 `perflib:"NDRs Generated"`
|
||||
OutboundConnectionsCurrent float64 `perflib:"Outbound Connections Current"`
|
||||
OutboundConnectionsRefusedTotal float64 `perflib:"Outbound Connections Refused"`
|
||||
OutboundConnectionsTotal float64 `perflib:"Outbound Connections Total"`
|
||||
QueueFilesOpen float64 `perflib:"Number of QueueFiles Open"`
|
||||
PickupDirectoryMessagesRetrievedTotal float64 `perflib:"Pickup Directory Messages Retrieved Total"`
|
||||
RemoteQueueLength float64 `perflib:"Remote Queue Length"`
|
||||
RemoteRetryQueueLength float64 `perflib:"Remote Retry Queue Length"`
|
||||
RoutingTableLookupsTotal float64 `perflib:"Routing Table Lookups Total"`
|
||||
}
|
||||
|
||||
func (c *SMTPCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
|
||||
var dst []PerflibSMTPServer
|
||||
if err := unmarshalObject(ctx.perfObjects["SMTP Server"], &dst); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
for _, server := range dst {
|
||||
if server.Name == "_Total" ||
|
||||
c.serverBlacklistPattern.MatchString(server.Name) ||
|
||||
!c.serverWhitelistPattern.MatchString(server.Name) {
|
||||
continue
|
||||
}
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.BadmailedMessagesBadPickupFileTotal,
|
||||
prometheus.CounterValue,
|
||||
server.BadmailedMessagesBadPickupFileTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.BadmailedMessagesHopCountExceededTotal,
|
||||
prometheus.CounterValue,
|
||||
server.BadmailedMessagesHopCountExceededTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.BadmailedMessagesNDROfDSNTotal,
|
||||
prometheus.CounterValue,
|
||||
server.BadmailedMessagesNDROfDSNTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.BadmailedMessagesNoRecipientsTotal,
|
||||
prometheus.CounterValue,
|
||||
server.BadmailedMessagesNoRecipientsTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.BadmailedMessagesTriggeredViaEventTotal,
|
||||
prometheus.CounterValue,
|
||||
server.BadmailedMessagesTriggeredViaEventTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.BytesSentTotal,
|
||||
prometheus.CounterValue,
|
||||
server.BytesSentTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.BytesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
server.BytesReceivedTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.CategorizerQueueLength,
|
||||
prometheus.GaugeValue,
|
||||
server.CategorizerQueueLength,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionErrorsTotal,
|
||||
prometheus.CounterValue,
|
||||
server.ConnectionErrorsTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.CurrentMessagesInLocalDelivery,
|
||||
prometheus.GaugeValue,
|
||||
server.CurrentMessagesInLocalDelivery,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DirectoryDropsTotal,
|
||||
prometheus.CounterValue,
|
||||
server.DirectoryDropsTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DSNFailuresTotal,
|
||||
prometheus.CounterValue,
|
||||
server.DSNFailuresTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.DNSQueriesTotal,
|
||||
prometheus.CounterValue,
|
||||
server.DNSQueriesTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ETRNMessagesTotal,
|
||||
prometheus.CounterValue,
|
||||
server.ETRNMessagesTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.InboundConnectionsTotal,
|
||||
prometheus.CounterValue,
|
||||
server.InboundConnectionsTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.InboundConnectionsCurrent,
|
||||
prometheus.GaugeValue,
|
||||
server.InboundConnectionsCurrent,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.LocalQueueLength,
|
||||
prometheus.GaugeValue,
|
||||
server.LocalQueueLength,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.LocalRetryQueueLength,
|
||||
prometheus.GaugeValue,
|
||||
server.LocalRetryQueueLength,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MailFilesOpen,
|
||||
prometheus.GaugeValue,
|
||||
server.MailFilesOpen,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessageBytesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessageBytesReceivedTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessageBytesSentTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessageBytesSentTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessageDeliveryRetriesTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessageDeliveryRetriesTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessageSendRetriesTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessageSendRetriesTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesCurrentlyUndeliverable,
|
||||
prometheus.GaugeValue,
|
||||
server.MessagesCurrentlyUndeliverable,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesDeliveredTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessagesDeliveredTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesPendingRouting,
|
||||
prometheus.GaugeValue,
|
||||
server.MessagesPendingRouting,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessagesReceivedTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesRefusedForAddressObjectsTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessagesRefusedForAddressObjectsTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesRefusedForMailObjectsTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessagesRefusedForMailObjectsTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesRefusedForSizeTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessagesRefusedForSizeTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesSentTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessagesSentTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.MessagesSubmittedTotal,
|
||||
prometheus.CounterValue,
|
||||
server.MessagesSubmittedTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.NDRsGeneratedTotal,
|
||||
prometheus.CounterValue,
|
||||
server.NDRsGeneratedTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.OutboundConnectionsCurrent,
|
||||
prometheus.GaugeValue,
|
||||
server.OutboundConnectionsCurrent,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.OutboundConnectionsRefusedTotal,
|
||||
prometheus.CounterValue,
|
||||
server.OutboundConnectionsRefusedTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.OutboundConnectionsTotal,
|
||||
prometheus.CounterValue,
|
||||
server.OutboundConnectionsTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.QueueFilesOpen,
|
||||
prometheus.GaugeValue,
|
||||
server.QueueFilesOpen,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.PickupDirectoryMessagesRetrievedTotal,
|
||||
prometheus.CounterValue,
|
||||
server.PickupDirectoryMessagesRetrievedTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.RemoteQueueLength,
|
||||
prometheus.GaugeValue,
|
||||
server.RemoteQueueLength,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.RemoteRetryQueueLength,
|
||||
prometheus.GaugeValue,
|
||||
server.RemoteRetryQueueLength,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.RoutingTableLookupsTotal,
|
||||
prometheus.CounterValue,
|
||||
server.RoutingTableLookupsTotal,
|
||||
server.Name,
|
||||
)
|
||||
|
||||
}
|
||||
return nil, nil
|
||||
}
|
||||
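Note: the whitelist/blacklist handling above anchors both patterns with `^(?:...)$`, so a virtual-server name must match the whole whitelist expression and must not match the blacklist (the `_Total` instance is always skipped). A small sketch of that filter decision, with illustrative pattern strings:

```go
package main

import (
	"fmt"
	"regexp"
)

// keepServer mirrors the SMTP collector's filter: drop _Total, require a
// whole-string whitelist match, and exclude any whole-string blacklist match.
func keepServer(name, whitelist, blacklist string) bool {
	wl := regexp.MustCompile(fmt.Sprintf("^(?:%s)$", whitelist))
	bl := regexp.MustCompile(fmt.Sprintf("^(?:%s)$", blacklist))
	return name != "_Total" && wl.MatchString(name) && !bl.MatchString(name)
}

func main() {
	fmt.Println(keepServer("SMTP Virtual Server #1", ".+", "backup.*")) // true
	fmt.Println(keepServer("backup-relay", ".+", "backup.*"))           // false
}
```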
collector/smtp_test.go | 9 (Normal file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkSmtpCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "smtp", NewSMTPCollector)
|
||||
}
|
||||
@@ -1,10 +1,11 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
||||
collector/system_test.go | 9 (Normal file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkSystemCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "system", NewSystemCollector)
|
||||
}
|
||||
collector/tcp.go | 110
@@ -1,19 +1,18 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
func init() {
|
||||
registerCollector("tcp", NewTCPCollector)
|
||||
registerCollector("tcp", NewTCPCollector, "TCPv4", "TCPv6")
|
||||
}
|
||||
|
||||
// A TCPCollector is a Prometheus collector for WMI Win32_PerfRawData_Tcpip_TCPv4 metrics
|
||||
// A TCPCollector is a Prometheus collector for WMI Win32_PerfRawData_Tcpip_TCPv{4,6} metrics
|
||||
type TCPCollector struct {
|
||||
ConnectionFailures *prometheus.Desc
|
||||
ConnectionsActive *prometheus.Desc
|
||||
@@ -34,55 +33,55 @@ func NewTCPCollector() (Collector, error) {
|
||||
ConnectionFailures: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connection_failures"),
|
||||
"(TCP.ConnectionFailures)",
|
||||
nil,
|
||||
[]string{"af"},
|
||||
nil,
|
||||
),
|
||||
ConnectionsActive: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connections_active"),
|
||||
"(TCP.ConnectionsActive)",
|
||||
nil,
|
||||
[]string{"af"},
|
||||
nil,
|
||||
),
|
||||
ConnectionsEstablished: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connections_established"),
|
||||
"(TCP.ConnectionsEstablished)",
|
||||
nil,
|
||||
[]string{"af"},
|
||||
nil,
|
||||
),
|
||||
ConnectionsPassive: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connections_passive"),
|
||||
"(TCP.ConnectionsPassive)",
|
||||
nil,
|
||||
[]string{"af"},
|
||||
nil,
|
||||
),
|
||||
ConnectionsReset: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "connections_reset"),
|
||||
"(TCP.ConnectionsReset)",
|
||||
nil,
|
||||
[]string{"af"},
|
||||
nil,
|
||||
),
|
||||
SegmentsTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "segments_total"),
|
||||
"(TCP.SegmentsTotal)",
|
||||
nil,
|
||||
[]string{"af"},
|
||||
nil,
|
||||
),
|
||||
SegmentsReceivedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "segments_received_total"),
|
||||
"(TCP.SegmentsReceivedTotal)",
|
||||
nil,
|
||||
[]string{"af"},
|
||||
nil,
|
||||
),
|
||||
SegmentsRetransmittedTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "segments_retransmitted_total"),
|
||||
"(TCP.SegmentsRetransmittedTotal)",
|
||||
nil,
|
||||
[]string{"af"},
|
||||
nil,
|
||||
),
|
||||
SegmentsSentTotal: prometheus.NewDesc(
|
||||
prometheus.BuildFQName(Namespace, subsystem, "segments_sent_total"),
|
||||
"(TCP.SegmentsSentTotal)",
|
||||
nil,
|
||||
[]string{"af"},
|
||||
nil,
|
||||
),
|
||||
}, nil
|
||||
@@ -91,7 +90,7 @@ func NewTCPCollector() (Collector, error) {
|
||||
// Collect sends the metric values for each metric
|
||||
// to the provided prometheus Metric channel.
|
||||
func (c *TCPCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
|
||||
if desc, err := c.collect(ch); err != nil {
|
||||
if desc, err := c.collect(ctx, ch); err != nil {
|
||||
log.Error("failed collecting tcp metrics:", desc, err)
|
||||
return err
|
||||
}
|
||||
@@ -100,75 +99,94 @@ func (c *TCPCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric)
|
||||
|
||||
// Win32_PerfRawData_Tcpip_TCPv4 docs
|
||||
// - https://msdn.microsoft.com/en-us/library/aa394341(v=vs.85).aspx
|
||||
type Win32_PerfRawData_Tcpip_TCPv4 struct {
|
||||
ConnectionFailures uint64
|
||||
ConnectionsActive uint64
|
||||
ConnectionsEstablished uint64
|
||||
ConnectionsPassive uint64
|
||||
ConnectionsReset uint64
|
||||
SegmentsPersec uint64
|
||||
SegmentsReceivedPersec uint64
|
||||
SegmentsRetransmittedPersec uint64
|
||||
SegmentsSentPersec uint64
|
||||
// The TCPv6 performance object uses the same fields.
|
||||
type tcp struct {
|
||||
ConnectionFailures float64 `perflib:"Connection Failures"`
|
||||
ConnectionsActive float64 `perflib:"Connections Active"`
|
||||
ConnectionsEstablished float64 `perflib:"Connections Established"`
|
||||
ConnectionsPassive float64 `perflib:"Connections Passive"`
|
||||
ConnectionsReset float64 `perflib:"Connections Reset"`
|
||||
SegmentsPersec float64 `perflib:"Segments/sec"`
|
||||
SegmentsReceivedPersec float64 `perflib:"Segments Received/sec"`
|
||||
SegmentsRetransmittedPersec float64 `perflib:"Segments Retransmitted/sec"`
|
||||
SegmentsSentPersec float64 `perflib:"Segments Sent/sec"`
|
||||
}
|
||||
|
||||
func (c *TCPCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
|
||||
var dst []Win32_PerfRawData_Tcpip_TCPv4
|
||||
|
||||
q := queryAll(&dst)
|
||||
if err := wmi.Query(q, &dst); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if len(dst) == 0 {
|
||||
return nil, errors.New("WMI query returned empty result set")
|
||||
}
|
||||
|
||||
// Counters
|
||||
func writeTCPCounters(metrics tcp, labels []string, c *TCPCollector, ch chan<- prometheus.Metric) {
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionFailures,
|
||||
prometheus.CounterValue,
|
||||
float64(dst[0].ConnectionFailures),
|
||||
metrics.ConnectionFailures,
|
||||
labels...,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionsActive,
|
||||
prometheus.CounterValue,
|
||||
float64(dst[0].ConnectionsActive),
|
||||
metrics.ConnectionsActive,
|
||||
labels...,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionsEstablished,
|
||||
prometheus.GaugeValue,
|
||||
float64(dst[0].ConnectionsEstablished),
|
||||
metrics.ConnectionsEstablished,
|
||||
labels...,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionsPassive,
|
||||
prometheus.CounterValue,
|
||||
float64(dst[0].ConnectionsPassive),
|
||||
metrics.ConnectionsPassive,
|
||||
labels...,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.ConnectionsReset,
|
||||
prometheus.CounterValue,
|
||||
float64(dst[0].ConnectionsReset),
|
||||
metrics.ConnectionsReset,
|
||||
labels...,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.SegmentsTotal,
|
||||
prometheus.CounterValue,
|
||||
float64(dst[0].SegmentsPersec),
|
||||
metrics.SegmentsPersec,
|
||||
labels...,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.SegmentsReceivedTotal,
|
||||
prometheus.CounterValue,
|
||||
float64(dst[0].SegmentsReceivedPersec),
|
||||
metrics.SegmentsReceivedPersec,
|
||||
labels...,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.SegmentsRetransmittedTotal,
|
||||
prometheus.CounterValue,
|
||||
float64(dst[0].SegmentsRetransmittedPersec),
|
||||
metrics.SegmentsRetransmittedPersec,
|
||||
labels...,
|
||||
)
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
c.SegmentsSentTotal,
|
||||
prometheus.CounterValue,
|
||||
float64(dst[0].SegmentsSentPersec),
|
||||
metrics.SegmentsSentPersec,
|
||||
labels...,
|
||||
)
|
||||
}
|
||||
|
||||
func (c *TCPCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
|
||||
var dst []tcp
|
||||
|
||||
// TCPv4 counters
|
||||
if err := unmarshalObject(ctx.perfObjects["TCPv4"], &dst); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if len(dst) != 0 {
|
||||
writeTCPCounters(dst[0], []string{"ipv4"}, c, ch)
|
||||
}
|
||||
|
||||
// TCPv6 counters
|
||||
if err := unmarshalObject(ctx.perfObjects["TCPv6"], &dst); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if len(dst) != 0 {
|
||||
writeTCPCounters(dst[0], []string{"ipv6"}, c, ch)
|
||||
}
|
||||
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
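Note: after this change each TCP counter is exported once per address family, distinguished by an `af` label instead of separate WMI classes. A minimal sketch of emitting one such labelled sample (the fully qualified metric name is assumed from Namespace `windows` and subsystem `tcp`; the values are made up):

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
)

func main() {
	desc := prometheus.NewDesc(
		"windows_tcp_connections_established", // assumed final name
		"(TCP.ConnectionsEstablished)",
		[]string{"af"}, nil,
	)
	for af, v := range map[string]float64{"ipv4": 42, "ipv6": 7} {
		m := prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, v, af)
		var out dto.Metric
		_ = m.Write(&out) // serialize to inspect the label pair and value
		fmt.Println(out.GetLabel(), out.GetGauge().GetValue())
	}
}
```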
collector/tcp_test.go | 9 (Normal file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkTCPCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "tcp", NewTCPCollector)
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package collector
|
||||
@@ -7,8 +8,8 @@ import (
|
||||
"strings"
|
||||
|
||||
"github.com/StackExchange/wmi"
|
||||
"github.com/prometheus-community/windows_exporter/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/common/log"
|
||||
)
|
||||
|
||||
const ConnectionBrokerFeatureID uint32 = 133
|
||||
|
||||
collector/terminal_services_test.go | 9 (Normal file)
@@ -0,0 +1,9 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func BenchmarkTerminalServicesCollector(b *testing.B) {
|
||||
benchmarkCollector(b, "terminal_services", NewTerminalServicesCollector)
|
||||
}
|
||||

collector/textfile.go
@@ -11,6 +11,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.

//go:build !notextfile
// +build !notextfile

package collector
@@ -21,15 +22,16 @@ import (
"io/ioutil"
"os"
"path/filepath"
"reflect"
"sort"
"strings"
"time"

"github.com/dimchansky/utfbom"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/common/log"
kingpin "gopkg.in/alecthomas/kingpin.v2"
)

@@ -37,7 +39,7 @@ var (
textFileDirectory = kingpin.Flag(
"collector.textfile.directory",
"Directory to read text files with metrics from.",
).Default("C:\\Program Files\\windows_exporter\\textfile_inputs").String()
).Default(getDefaultPath()).String()

mtimeDesc = prometheus.NewDesc(
prometheus.BuildFQName(Namespace, "textfile", "mtime_seconds"),
@@ -65,6 +67,31 @@ func NewTextFileCollector() (Collector, error) {
}, nil
}

// Given a slice of metric families, determine if any two entries are duplicates.
// Duplicates will be detected where the metric name, labels and label values are identical.
func duplicateMetricEntry(metricFamilies []*dto.MetricFamily) bool {
uniqueMetrics := make(map[string]map[string]string)
for _, metricFamily := range metricFamilies {
metric_name := *metricFamily.Name
for _, metric := range metricFamily.Metric {
metric_labels := metric.GetLabel()
labels := make(map[string]string)
for _, label := range metric_labels {
labels[label.GetName()] = label.GetValue()
}
// Check if key is present before appending
_, mapContainsKey := uniqueMetrics[metric_name]

// Duplicate metric found with identical labels & label values
if mapContainsKey == true && reflect.DeepEqual(uniqueMetrics[metric_name], labels) {
return true
}
uniqueMetrics[metric_name] = labels
}
}
return false
}

func convertMetricFamily(metricFamily *dto.MetricFamily, ch chan<- prometheus.Metric) {
var valType prometheus.ValueType
var val float64
@@ -223,6 +250,10 @@ func (c *textFileCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Met
error = 1.0
}

// Create empty metricFamily slice here and append parsedFamilies to it inside the loop.
// Once loop is complete, raise error if any duplicates are present.
// This will ensure that duplicate metrics are correctly detected between multiple .prom files.
var metricFamilies = []*dto.MetricFamily{}
fileLoop:
for _, f := range files {
if !strings.HasSuffix(f.Name(), ".prom") {
@@ -271,12 +302,20 @@ fileLoop:
// a failure does not appear fresh.
mtimes[f.Name()] = f.ModTime()

for _, mf := range parsedFamilies {
convertMetricFamily(mf, ch)
for _, metricFamily := range parsedFamilies {
metricFamilies = append(metricFamilies, metricFamily)
}
}

c.exportMTimes(mtimes, ch)
if duplicateMetricEntry(metricFamilies) {
log.Errorf("Duplicate metrics detected in files")
error = 1.0
} else {
for _, mf := range metricFamilies {
convertMetricFamily(mf, ch)
c.exportMTimes(mtimes, ch)
}
}

// Export if there were errors.
ch <- prometheus.MustNewConstMetric(
@@ -297,3 +336,8 @@ func checkBOM(encoding utfbom.Encoding) error {

return fmt.Errorf(encoding.String())
}

func getDefaultPath() string {
execPath, _ := os.Executable()
return filepath.Join(filepath.Dir(execPath), "textfile_inputs")
}
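
Editor's note: for illustration only, here is a small sketch of how the duplicate detection above behaves when metric families parsed from two separate .prom payloads are combined. The payload strings and the helper name `detectAcrossPayloads` are invented for this sketch; `duplicateMetricEntry`, `expfmt.TextParser`, the `dto` types and `strings` are the ones already imported by this collector file.

```go
// Sketch only: parse two hypothetical .prom payloads and run the collector's
// duplicate check across the combined set of metric families.
func detectAcrossPayloads() (bool, error) {
	var parser expfmt.TextParser

	first, err := parser.TextToMetricFamilies(strings.NewReader("my_metric{job=\"a\"} 1\n"))
	if err != nil {
		return false, err
	}
	second, err := parser.TextToMetricFamilies(strings.NewReader("my_metric{job=\"a\"} 2\n"))
	if err != nil {
		return false, err
	}

	families := []*dto.MetricFamily{}
	for _, mf := range first {
		families = append(families, mf)
	}
	for _, mf := range second {
		families = append(families, mf)
	}

	// Same metric name and identical label set appear in both payloads,
	// so duplicateMetricEntry reports true and the collector flags an error.
	return duplicateMetricEntry(families), nil
}
```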

collector/textfile_test.go
@@ -5,6 +5,8 @@ import (
"io/ioutil"
"strings"
"testing"

dto "github.com/prometheus/client_model/go"
)

func TestCRFilter(t *testing.T) {
@@ -45,3 +47,108 @@ func TestCheckBOM(t *testing.T) {
}
}
}

func TestDuplicateMetricEntry(t *testing.T) {
metric_name := "windows_sometest"
metric_help := "This is a Test."
metric_type := dto.MetricType_GAUGE

gauge_value := 1.0

gauge := dto.Gauge{
Value: &gauge_value,
}

label1_name := "display_name"
label1_value := "foobar"

label1 := dto.LabelPair{
Name: &label1_name,
Value: &label1_value,
}

label2_name := "display_version"
label2_value := "13.4.0"

label2 := dto.LabelPair{
Name: &label2_name,
Value: &label2_value,
}

metric1 := dto.Metric{
Label: []*dto.LabelPair{&label1, &label2},
Gauge: &gauge,
}

metric2 := dto.Metric{
Label: []*dto.LabelPair{&label1, &label2},
Gauge: &gauge,
}

duplicate := dto.MetricFamily{
Name: &metric_name,
Help: &metric_help,
Type: &metric_type,
Metric: []*dto.Metric{&metric1, &metric2},
}

duplicateFamily := []*dto.MetricFamily{}
duplicateFamily = append(duplicateFamily, &duplicate)

// Ensure detection for duplicate metrics
if !duplicateMetricEntry(duplicateFamily) {
t.Errorf("Duplicate not found in duplicateFamily")
}

label3_name := "test"
label3_value := "1.0"

label3 := dto.LabelPair{
Name: &label3_name,
Value: &label3_value,
}
metric3 := dto.Metric{
Label: []*dto.LabelPair{&label1, &label2, &label3},
Gauge: &gauge,
}

differentLabels := dto.MetricFamily{
Name: &metric_name,
Help: &metric_help,
Type: &metric_type,
Metric: []*dto.Metric{&metric1, &metric3},
}

duplicateFamily = []*dto.MetricFamily{}
duplicateFamily = append(duplicateFamily, &differentLabels)

// Additional label on second metric should not be cause for duplicate detection
if duplicateMetricEntry(duplicateFamily) {
t.Errorf("Unexpected duplicate found in differentLabels")
}

label4_value := "2.0"

label4 := dto.LabelPair{
Name: &label3_name,
Value: &label4_value,
}
metric4 := dto.Metric{
Label: []*dto.LabelPair{&label1, &label2, &label4},
Gauge: &gauge,
}

differentValues := dto.MetricFamily{
Name: &metric_name,
Help: &metric_help,
Type: &metric_type,
Metric: []*dto.Metric{&metric3, &metric4},
}
duplicateFamily = []*dto.MetricFamily{}
duplicateFamily = append(duplicateFamily, &differentValues)

// Additional label with different values metric should not be cause for duplicate detection
if duplicateMetricEntry(duplicateFamily) {
t.Errorf("Unexpected duplicate found in differentValues")
}
}

collector/thermalzone.go
@@ -1,9 +1,11 @@
package collector

import (
"errors"

"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)

func init() {
@@ -75,6 +77,11 @@ func (c *thermalZoneCollector) collect(ch chan<- prometheus.Metric) (*prometheus
return nil, err
}

// ThermalZone collector has been known to 'successfully' return an empty result.
if len(dst) == 0 {
return nil, errors.New("Empty results set for collector")
}

for _, info := range dst {
//Divide by 10 and subtract 273.15 to convert decikelvin to celsius
ch <- prometheus.MustNewConstMetric(

9 collector/thermalzone_test.go Normal file
@@ -0,0 +1,9 @@
package collector

import (
"testing"
)

func BenchmarkThermalZoneCollector(b *testing.B) {
benchmarkCollector(b, "thermalzone", NewThermalZoneCollector)
}

131 collector/time.go Normal file
@@ -0,0 +1,131 @@
//go:build windows
// +build windows

package collector

import (
"errors"

"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
)

func init() {
registerCollector("time", newTimeCollector, "Windows Time Service")
}

// TimeCollector is a Prometheus collector for Perflib counter metrics
type TimeCollector struct {
ClockFrequencyAdjustmentPPBTotal *prometheus.Desc
ComputedTimeOffset *prometheus.Desc
NTPClientTimeSourceCount *prometheus.Desc
NTPRoundtripDelay *prometheus.Desc
NTPServerIncomingRequestsTotal *prometheus.Desc
NTPServerOutgoingResponsesTotal *prometheus.Desc
}

func newTimeCollector() (Collector, error) {
if getWindowsVersion() <= 6.1 {
return nil, errors.New("Windows version older than Server 2016 detected. The time collector will not run and should be disabled via CLI flags or configuration file")

}
const subsystem = "time"

return &TimeCollector{
ClockFrequencyAdjustmentPPBTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "clock_frequency_adjustment_ppb_total"),
"Total adjustment made to the local system clock frequency by W32Time in Parts Per Billion (PPB) units.",
nil,
nil,
),
ComputedTimeOffset: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "computed_time_offset_seconds"),
"Absolute time offset between the system clock and the chosen time source, in seconds",
nil,
nil,
),
NTPClientTimeSourceCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "ntp_client_time_source_count"),
"Active number of NTP Time sources being used by the client",
nil,
nil,
),
NTPRoundtripDelay: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "ntp_round_trip_delay_seconds"),
"Roundtrip delay experienced by the NTP client in receiving a response from the server for the most recent request, in seconds",
nil,
nil,
),
NTPServerOutgoingResponsesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "ntp_server_outgoing_responses_total"),
"Total number of requests responded to by NTP server",
nil,
nil,
),
NTPServerIncomingRequestsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "ntp_server_incoming_requests_total"),
"Total number of requests received by NTP server",
nil,
nil,
),
}, nil
}
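
Editor's note: `getWindowsVersion()` is defined elsewhere in the collector package and is not part of this diff. As an assumed sketch of what such a helper can look like (the repository's actual implementation may differ), it reads the NT version string from the registry; 6.1 is the NT version of Windows 7 / Server 2008 R2, so the guard above rejects those releases and anything older.

```go
// Assumed sketch of a version helper like getWindowsVersion; not the
// repository's implementation.
package collector

import (
	"strconv"

	"golang.org/x/sys/windows/registry"
)

// windowsVersionSketch reads the NT version string (e.g. "6.1", "6.3", "10.0")
// from the registry and returns it as a float for comparisons such as <= 6.1.
func windowsVersionSketch() float64 {
	k, err := registry.OpenKey(registry.LOCAL_MACHINE,
		`SOFTWARE\Microsoft\Windows NT\CurrentVersion`, registry.QUERY_VALUE)
	if err != nil {
		return 0
	}
	defer k.Close()

	v, _, err := k.GetStringValue("CurrentVersion")
	if err != nil {
		return 0
	}
	f, err := strconv.ParseFloat(v, 64)
	if err != nil {
		return 0
	}
	return f
}
```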

// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *TimeCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting time metrics:", desc, err)
return err
}
return nil
}

// Perflib "Windows Time Service"
type windowsTime struct {
ClockFrequencyAdjustmentPPBTotal float64 `perflib:"Clock Frequency Adjustment (ppb)"`
ComputedTimeOffset float64 `perflib:"Computed Time Offset"`
NTPClientTimeSourceCount float64 `perflib:"NTP Client Time Source Count"`
NTPRoundtripDelay float64 `perflib:"NTP Roundtrip Delay"`
NTPServerIncomingRequestsTotal float64 `perflib:"NTP Server Incoming Requests"`
NTPServerOutgoingResponsesTotal float64 `perflib:"NTP Server Outgoing Responses"`
}

func (c *TimeCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []windowsTime // Single-instance class, array is required but will have single entry.
if err := unmarshalObject(ctx.perfObjects["Windows Time Service"], &dst); err != nil {
return nil, err
}

ch <- prometheus.MustNewConstMetric(
c.ClockFrequencyAdjustmentPPBTotal,
prometheus.CounterValue,
dst[0].ClockFrequencyAdjustmentPPBTotal,
)
ch <- prometheus.MustNewConstMetric(
c.ComputedTimeOffset,
prometheus.GaugeValue,
dst[0].ComputedTimeOffset/1000000, // microseconds -> seconds
)
ch <- prometheus.MustNewConstMetric(
c.NTPClientTimeSourceCount,
prometheus.GaugeValue,
dst[0].NTPClientTimeSourceCount,
)
ch <- prometheus.MustNewConstMetric(
c.NTPRoundtripDelay,
prometheus.GaugeValue,
dst[0].NTPRoundtripDelay/1000000, // microseconds -> seconds
)
ch <- prometheus.MustNewConstMetric(
c.NTPServerIncomingRequestsTotal,
prometheus.CounterValue,
dst[0].NTPServerIncomingRequestsTotal,
)
ch <- prometheus.MustNewConstMetric(
c.NTPServerOutgoingResponsesTotal,
prometheus.CounterValue,
dst[0].NTPServerOutgoingResponsesTotal,
)
return nil, nil
}

9 collector/time_test.go Normal file
@@ -0,0 +1,9 @@
package collector

import (
"testing"
)

func BenchmarkTimeCollector(b *testing.B) {
benchmarkCollector(b, "time", newTimeCollector)
}

collector/vmware.go
@@ -1,3 +1,4 @@
//go:build windows
// +build windows

package collector
@@ -6,8 +7,8 @@ import (
"errors"

"github.com/StackExchange/wmi"
"github.com/prometheus-community/windows_exporter/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)

func init() {

9 collector/vmware_test.go Normal file
@@ -0,0 +1,9 @@
package collector

import (
"testing"
)

func BenchmarkVmwareCollector(b *testing.B) {
benchmarkCollector(b, "vmware", NewVmwareCollector)
}

collector/wmi.go
@@ -4,7 +4,7 @@ import (
"bytes"
"reflect"

"github.com/prometheus/common/log"
"github.com/prometheus-community/windows_exporter/log"
)

func className(src interface{}) string {

84 config/config.go Normal file
@@ -0,0 +1,84 @@
// Copyright 2018 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package config

import (
"io/ioutil"
"os"

"github.com/prometheus-community/windows_exporter/log"
"gopkg.in/alecthomas/kingpin.v2"
"gopkg.in/yaml.v2"
)

type getFlagger interface {
GetFlag(name string) *kingpin.FlagClause
}

// Resolver represents a configuration file resolver for kingpin.
type Resolver struct {
flags map[string]string
}

// NewResolver returns a Resolver structure.
func NewResolver(file string) (*Resolver, error) {
flags := map[string]string{}
log.Infof("Loading configuration file: %v", file)
if _, err := os.Stat(file); err != nil {
return nil, err
}
b, err := ioutil.ReadFile(file)
if err != nil {
return nil, err
}

var rawValues map[string]interface{}
err = yaml.Unmarshal(b, &rawValues)
if err != nil {
return nil, err
}
// Flatten nested YAML values
flattenedValues := flatten(rawValues)
for k, v := range flattenedValues {
if _, ok := flags[k]; !ok {
flags[k] = v
}
}
return &Resolver{flags: flags}, nil
}

func (c *Resolver) setDefault(v getFlagger) {
for name, value := range c.flags {
f := v.GetFlag(name)
if f != nil {
f.Default(value)
}
}
}

// Bind sets active flags with their default values from the configuration file(s).
func (c *Resolver) Bind(app *kingpin.Application, args []string) error {
// Parse the command line arguments to get the selected command.
pc, err := app.ParseContext(args)
if err != nil {
return err
}

c.setDefault(app)
if pc.SelectedCommand != nil {
c.setDefault(pc.SelectedCommand)
}

return nil
}
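
Editor's note: a short usage sketch of the resolver above. The application name, flag, and file name are illustrative and not taken from the exporter's main; `config.NewResolver` and `Bind` are the functions defined in this file.

```go
// Sketch only: resolve flag defaults from a YAML file before parsing the CLI.
// Flags passed on the command line still win, because Bind only rewrites
// flag defaults prior to parsing.
package main

import (
	"fmt"
	"os"

	"github.com/prometheus-community/windows_exporter/config"
	kingpin "gopkg.in/alecthomas/kingpin.v2"
)

func main() {
	app := kingpin.New("example", "Resolver usage sketch.")
	level := app.Flag("log.level", "Logging level.").Default("info").String()

	resolver, err := config.NewResolver("config.yml") // e.g. contains "log:\n  level: debug"
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := resolver.Bind(app, os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if _, err := app.Parse(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("log.level =", *level) // "debug" from the file unless --log.level was passed
}
```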

61 config/flatten.go Normal file
@@ -0,0 +1,61 @@
package config

import "fmt"

// flatten flattens the nested struct.
//
// All keys will be joined by dot
// e.g. {"a": {"b":"c"}} => {"a.b":"c"}
// or {"a": {"b":[1,2]}} => {"a.b.0":1, "a.b.1": 2}
func flatten(data map[string]interface{}) map[string]string {
ret := make(map[string]string)
for k, v := range data {
switch typed := v.(type) {
case map[interface{}]interface{}:
for fk, fv := range flatten(convertMap(typed)) {
ret[fmt.Sprintf("%s.%s", k, fk)] = fv
}
case map[string]interface{}:
for fk, fv := range flatten(typed) {
ret[fmt.Sprintf("%s.%s", k, fk)] = fv
}
case []interface{}:
for fk, fv := range flattenSlice(typed) {
ret[fmt.Sprintf("%s.%s", k, fk)] = fv
}
default:
ret[k] = fmt.Sprint(typed)
}
}
return ret
}
func flattenSlice(data []interface{}) map[string]string {
ret := make(map[string]string)
for idx, v := range data {
switch typed := v.(type) {
case map[interface{}]interface{}:
for fk, fv := range flatten(convertMap(typed)) {
ret[fmt.Sprintf("%d,%s", idx, fk)] = fv
}
case map[string]interface{}:
for fk, fv := range flatten(typed) {
ret[fmt.Sprintf("%d,%s", idx, fk)] = fv
}
case []interface{}:
for fk, fv := range flattenSlice(typed) {
ret[fmt.Sprintf("%d,%s", idx, fk)] = fv
}
default:
ret[fmt.Sprint(idx)] = fmt.Sprint(typed)
}
}
return ret
}

func convertMap(originalMap map[interface{}]interface{}) map[string]interface{} {
convertedMap := map[string]interface{}{}
for key, value := range originalMap {
convertedMap[key.(string)] = value
}
return convertedMap
}
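
Editor's note: to make the joining rule concrete, here is a hypothetical input and the keys it flattens to. The variable names are illustrative; `flatten` is the function above, called from inside the same package.

```go
// Sketch: nested maps are joined with dots, scalar list entries get their
// index appended, and leaf values are stringified with fmt.Sprint.
func flattenExample() map[string]string {
	input := map[string]interface{}{
		"log": map[string]interface{}{
			"level": "debug",
		},
		"collectors": map[string]interface{}{
			"enabled": []interface{}{"cpu", "net"},
		},
	}
	return flatten(input)
	// Result:
	//   "log.level":            "debug"
	//   "collectors.enabled.0": "cpu"
	//   "collectors.enabled.1": "net"
}
```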

33 config/flatten_test.go Normal file
@@ -0,0 +1,33 @@
package config

import (
"gopkg.in/yaml.v2"
"reflect"
"testing"
)

// Unmarshal good configuration file and confirm data is flattened correctly
func TestConfigFlattening(t *testing.T) {
goodYamlConfig := []byte(`---

collectors:
  enabled: cpu,net,service

log:
  level: debug`)
var data map[string]interface{}
err := yaml.Unmarshal(goodYamlConfig, &data)
if err != nil {
t.Error(err)
}

expectedResult := map[string]string{
"collectors.enabled": "cpu,net,service",
"log.level": "debug",
}
flattenedValues := flatten(data)

if !reflect.DeepEqual(expectedResult, flattenedValues) {
t.Errorf("Flattened values do not match!\nExpected result: %s\nActual result: %s", expectedResult, flattenedValues)
}
}

docs/README.md
@@ -6,6 +6,7 @@ This directory contains documentation of the collectors in the windows_exporter,
- [`adfs`](collector.adfs.md)
- [`cpu`](collector.cpu.md)
- [`cs`](collector.cs.md)
- [`dfsr`](collector.dfsr.md)
- [`dhcp`](collector.dhcp.md)
- [`dns`](collector.dns.md)
- [`hyperv`](collector.hyperv.md)
@@ -28,8 +29,10 @@ This directory contains documentation of the collectors in the windows_exporter,
- [`process`](collector.process.md)
- [`remote_fx`](collector.remote_fx.md)
- [`service`](collector.service.md)
- [`smtp`](collector.smtp.md)
- [`system`](collector.system.md)
- [`tcp`](collector.tcp.md)
- [`terminal_services`](collector.terminal_services.md)
- [`textfile`](collector.textfile.md)
- [`time`](collector.time.md)
- [`vmware`](collector.vmware.md)

docs/collector.adfs.md
@@ -18,16 +18,16 @@ None

Name | Description | Type | Labels
-----|-------------|------|-------
`windows_adfs_ad_login_connection_failures` | Total number of connection failures between the ADFS server and the Active Directory domain controller(s) | counter | None
`windows_adfs_certificate_authentications` | Total number of [User Certificate](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) authentications. I.E. smart cards or mobile devices with provisioned client certificates | counter | None
`windows_adfs_device_authentications` | Total number of [device authentications](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/device-authentication-controls-in-ad-fs) (SignedToken, clientTLS, PkeyAuth). Device authentication is only available on ADFS 2016 or later | counter | None
`windows_adfs_extranet_account_lockouts` | Total number of [extranet lockouts](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection). Requires the Extranet Lockout feature to be enabled | counter | None
`windows_adfs_federated_authentications` | Total number of authentications from federated sources. E.G. Office365 | counter | None
`windows_adfs_passport_authentications` | Total number of authentications from [Microsoft Passport](https://en.wikipedia.org/wiki/Microsoft_account) (now named Microsoft Account) | counter | None
`windows_adfs_password_change_failed` | Total number of failed password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`windows_adfs_password_change_succeeded` | Total number of succeeded password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`windows_adfs_token_requests` | Total number of requested access tokens | counter | None
`windows_adfs_windows_integrated_authentications` | Total number of Windows integrated authentications using Kerberos or NTLM | counter | None
`windows_adfs_ad_login_connection_failures_total` | Total number of connection failures between the ADFS server and the Active Directory domain controller(s) | counter | None
`windows_adfs_certificate_authentications_total` | Total number of [User Certificate](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) authentications. I.E. smart cards or mobile devices with provisioned client certificates | counter | None
`windows_adfs_device_authentications_total` | Total number of [device authentications](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/device-authentication-controls-in-ad-fs) (SignedToken, clientTLS, PkeyAuth). Device authentication is only available on ADFS 2016 or later | counter | None
`windows_adfs_extranet_account_lockouts_total` | Total number of [extranet lockouts](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection). Requires the Extranet Lockout feature to be enabled | counter | None
`windows_adfs_federated_authentications_total` | Total number of authentications from federated sources. E.G. Office365 | counter | None
`windows_adfs_passport_authentications_total` | Total number of authentications from [Microsoft Passport](https://en.wikipedia.org/wiki/Microsoft_account) (now named Microsoft Account) | counter | None
`windows_adfs_password_change_failed_total` | Total number of failed password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`windows_adfs_password_change_succeeded_total` | Total number of succeeded password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`windows_adfs_token_requests_total` | Total number of requested access tokens | counter | None
`windows_adfs_windows_integrated_authentications_total` | Total number of Windows integrated authentications using Kerberos or NTLM | counter | None

### Example metric
Show rate of device authentications in AD FS:

60 docs/collector.cache.md Normal file
@@ -0,0 +1,60 @@
# cache collector

The cache collector exposes metrics about the file system cache.

|||
-|-
Metric name prefix | `cache`
Data source | Perflib
Classes | [`Win32_PerfFormattedData_PerfOS_Cache`](https://docs.microsoft.com/en-us/previous-versions/aa394267(v=vs.85))
Enabled by default? | No

## Flags

None

## Metrics

Name | Description | Type | Labels
-----|-------------|------|-------
`windows_cache_async_copy_reads_total` | Number of times that a filesystem, such as NTFS, maps a page of a file into the file system cache to read a page. | counter | None
`windows_cache_async_data_maps_total` | Number of times that a filesystem, such as NTFS, maps a page of a file into the file system cache to read the page, and wishes to wait for the page to be retrieved if it is not in main memory. | counter | None
`windows_cache_async_fast_reads_total` | Number of reads from the file system cache that bypass the installed file system and retrieve the data directly from the cache. | counter | None
`windows_cache_async_mdl_reads_total` | Number of reads from the file system cache that use a Memory Descriptor List (MDL) to access the pages. | counter | None
`windows_cache_async_pin_reads_total` | Number of reads from the file system cache preparatory to writing the data back to disk. Pages read in this fashion are pinned in memory at the completion of the read. | counter | None
`windows_cache_copy_read_hits_total` | Number of copy read requests that hit the cache, that is, they did not require a disk read in order to provide access to the page in the cache. | counter | None
`windows_cache_copy_reads_total` | Number of reads from pages of the file system cache that involve a memory copy of the data from the cache to the application's buffer. | counter | None
`windows_cache_data_flushes_total` | Number of times the file system cache has flushed its contents to disk as the result of a request to flush or to satisfy a write-through file write request. | counter | None
`windows_cache_data_flush_pages_total` | Number of pages the file system cache has flushed to disk as a result of a request to flush or to satisfy a write-through file write request. | counter | None
`windows_cache_data_map_hits_total` | Number of data maps in the file system cache that could be resolved without having to retrieve a page from the disk, because the page was already in physical memory. | counter | None
`windows_cache_data_map_pins_total` | Number of data maps in the file system cache that resulted in pinning a page in main memory, an action usually preparatory to writing to the file on disk. | counter | None
`windows_cache_data_maps_total` | Number of times that a file system such as NTFS, maps a page of a file into the file system cache to read the page. | counter | None
`windows_cache_dirty_pages` | Number of dirty pages on the system cache. | gauge | None
`windows_cache_dirty_page_threshold` | Threshold for number of dirty pages on system cache. | gauge | None
`windows_cache_fast_read_not_possibles_total` | Number of attempts by an Application Program Interface (API) function call to bypass the file system to get to data in the file system cache that could not be honored without invoking the file system. | counter | None
`windows_cache_fast_read_resource_misses_total` | Number of cache misses necessitated by the lack of available resources to satisfy the request. | counter | None
`windows_cache_fast_reads_total` | Number of reads from the file system cache that bypass the installed file system and retrieve the data directly from the cache. | counter | None
`windows_cache_lazy_write_flushes_total` | Number of Lazy Write flushes the Lazy Writer thread has written to disk. Lazy Writing is the process of updating the disk after the page has been changed in memory, so that the application that changed the file does not have to wait for the disk write to be complete before proceeding. | counter | None
`windows_cache_lazy_write_pages_total` | Number of Lazy Write pages the Lazy Writer thread has written to disk. Lazy Writing is the process of updating the disk after the page has been changed in memory, so that the application that changed the file does not have to wait for the disk write to be complete before proceeding. | counter | None
`windows_cache_mdl_read_hits_total` | Number of Memory Descriptor List (MDL) Read requests to the file system cache that hit the cache, i.e., did not require disk accesses in order to provide memory access to the page(s) in the cache. | counter | None
`windows_cache_mdl_reads_total` | Number of reads from the file system cache that use a Memory Descriptor List (MDL) to access the data. | counter | None
`windows_cache_pin_read_hits_total` | Number of pin read requests that hit the file system cache, i.e., did not require a disk read in order to provide access to the page in the file system cache. While pinned, a page's physical address in the file system cache will not be altered. | counter | None
`windows_cache_pin_reads_total` | Number of reads into the file system cache preparatory to writing the data back to disk. Pages read in this fashion are pinned in memory at the completion of the read. While pinned, a page's physical address in the file system cache will not be altered. | counter | None
`windows_cache_read_aheads_total` | Number of reads from the file system cache in which the Cache detects sequential access to a file. The read aheads permit the data to be transferred in larger blocks than those being requested by the application, reducing the overhead per access. | counter | None
`windows_cache_sync_copy_reads_total` | Number of reads from pages of the file system cache that involve a memory copy of the data from the cache to the application's buffer. The file system will not regain control until the copy operation is complete, even if the disk must be accessed to retrieve the page. | counter | None
`windows_cache_sync_data_maps_total` | Number of times that a file system such as NTFS maps a page of a file into the file system cache to read the page. | counter | None
`windows_cache_sync_fast_reads_total` | Number of reads from the file system cache that bypass the installed file system and retrieve the data directly from the cache. If the data is not in the cache, the request (application program call) will wait until the data has been retrieved from disk. | counter | None
`windows_cache_sync_mdl_reads_total` | Number of reads from the file system cache that use a Memory Descriptor List (MDL) to access the pages. If the accessed page(s) are not in main memory, the caller will wait for the pages to fault in from the disk. | counter | None
`windows_cache_sync_pin_reads_total` | Number of reads into the file system cache preparatory to writing the data back to disk. The file system will not regain control until the page is pinned in the file system cache, in particular if the disk must be accessed to retrieve the page. | counter | None

### Example metric
Percentage of copy reads that hit the cache
```
windows_cache_copy_read_hits_total / windows_cache_copy_reads_total * 100
```

## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_

## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_

docs/collector.container.md
@@ -1,10 +1,11 @@
# container collector

The container collector exposes metrics about containers running on system
The container collector exposes metrics about containers running on a Hyper-V system

|||
-|-
Metric name prefix | `container`
Data source | [hcsshim](https://github.com/Microsoft/hcsshim)
Enabled by default? | No

## Flags

docs/collector.cpu.md
@@ -27,11 +27,11 @@ These metrics are only exposed on Windows Server 2008R2 and later:

Name | Description | Type | Labels
-----|-------------|------|-------
`windows_cpu_clock_interrupts_total` | Total number of received and serviced clock tick interrupts | `core`
`windows_cpu_idle_break_events_total` | Total number of time processor was woken from idle | `core`
`windows_cpu_parking_status` | Parking Status represents whether a processor is parked or not | `gauge`
`windows_cpu_core_frequency_mhz` | Core frequency in megahertz | `gauge`
`windows_cpu_processor_performance` | Processor Performance is the average performance of the processor while it is executing instructions, as a percentage of the nominal performance of the processor. On some processors, Processor Performance may exceed 100% | `gauge`
`windows_cpu_clock_interrupts_total` | Total number of received and serviced clock tick interrupts | counter | `core`
`windows_cpu_idle_break_events_total` | Total number of time processor was woken from idle | counter | `core`
`windows_cpu_parking_status` | Parking Status represents whether a processor is parked or not | gauge | `core`
`windows_cpu_core_frequency_mhz` | Core frequency in megahertz | gauge | `core`
`windows_cpu_processor_performance` | Processor Performance is the average performance of the processor while it is executing instructions, as a percentage of the nominal performance of the processor. On some processors, Processor Performance may exceed 100% | gauge | `core`

### Example metric
Show frequency of host CPU cores

32 docs/collector.cpu_info.md Normal file
@@ -0,0 +1,32 @@
# cpu_info collector

The cpu_info collector exposes metrics detailing a per-socket breakdown of the processors in the system.

|||
-|-
Metric name prefix | `cpu_info`
Data source | wmi
Classes | [`Win32_Processor`](https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/win32-processor)
Enabled by default? | No

## Flags

None

## Metrics

Name | Description | Type | Labels
-----|-------------|------|-------
`windows_cpu_info` | Labeled CPU information | gauge | `architecture`, `device_id`, `description`, `family`, `l2_cache_size`, `l3_cache_size`, `name`

### Example metric
```
windows_cpu_info{architecture="9",description="AMD64 Family 23 Model 49 Stepping 0",device_id="CPU0",family="107",l2_cache_size="32768",l3_cache_size="262144",name="AMD EPYC 7702P 64-Core Processor"} 1
```
The value of the metric is irrelevant, but the labels expose some useful information on the CPU installed in each socket.

## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_

## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
Some files were not shown because too many files have changed in this diff.