Compare commits

..

1 Commits

Author SHA1 Message Date
Ashok Siyani
d9404c6cc2 added terminal services collector for number of session count 2020-03-23 09:40:40 +00:00
445 changed files with 18140 additions and 57931 deletions

View File

@@ -1,19 +0,0 @@
root = true
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
tab_width = 4
[*.{json,wxs,xml}]
indent_style = space
indent_size = 4
[renovate.json]
indent_size = 2
[*.{yml,yaml}]
indent_style = space
indent_size = 2

3
.gitattributes vendored
View File

@@ -1,3 +0,0 @@
*.go text eol=lf
*.sh text eol=lf
Makefile text eol=lf

1
.github/CODEOWNERS vendored
View File

@@ -1 +0,0 @@
* @prometheus-community/windows_exporter-reviewers

View File

@@ -1,96 +0,0 @@
name: 🐞 Bug
description: Something is not working as intended.
labels: [ 🐞 bug ]
body:
- type: markdown
attributes:
value: |-
> [!NOTE]
> Windows Server 2012 and Windows Server 2012 R2 are no longer supported by the windows_exporter project.
Thanks for taking the time to fill out this bug report!
- type: markdown
attributes:
value: |-
> [!NOTE]
> If you encounter "Counter not found" issues, try to re-build the performance counter first.
```
PS C:\WINDOWS\system32> cd c:\windows\system32
PS C:\windows\system32> lodctr /R
Error: Unable to rebuild performance counter setting from system backup store, error code is 2
PS C:\windows\system32> cd ..
PS C:\windows> cd syswow64
PS C:\windows\syswow64> lodctr /R
Info: Successfully rebuilt performance counter setting from system backup store
PS C:\windows\syswow64> winmgmt.exe /RESYNCPERF
```
----
- type: textarea
attributes:
label: Current Behavior
description: A concise description of what you're experiencing.
placeholder: |
When I do <X>, <Y> happens and I see the error message attached below:
```...```
validations:
required: true
- type: textarea
attributes:
label: Expected Behavior
description: A concise description of what you expected to happen.
placeholder: When I do <X>, <Z> should happen instead.
validations:
required: true
- type: textarea
attributes:
label: Steps To Reproduce
description: Steps to reproduce the behavior.
placeholder: |
1. In this environment...
2. With this config...
3. Run '...'
4. See error...
render: Markdown
validations:
required: false
- type: textarea
attributes:
label: Environment
description: |
examples:
- **windows_exporter Version**: 0.26
- **Windows Server Version**: 2019
value: |
- windows_exporter Version:
- Windows Server Version:
validations:
required: true
- type: textarea
attributes:
label: windows_exporter logs
description: |
Log of windows_exporter.
⚠️ Without providing logs, we are unable to assist here. ⚠️
render: shell
validations:
required: true
- type: textarea
attributes:
label: Anything else?
description: |
Links? References? Anything that will give us more context about the issue you are encountering!
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
validations:
required: false

View File

@@ -1 +0,0 @@
blank_issues_enabled: false

View File

@@ -1,41 +0,0 @@
name: ✨ Enhancement / Feature / Task
description: Some feature is missing or incomplete.
labels: [ ✨ enhancement ]
body:
- type: textarea
attributes:
label: Problem Statement
description: Without specifying a solution, describe what the project is missing today.
placeholder: |
The rotating project logo has a fixed size and color.
There is no way to make it larger and more shiny.
validations:
required: false
- type: textarea
attributes:
label: Proposed Solution
description: Describe the proposed solution to the problem above.
placeholder: |
- Implement 2 new CLI flags: ```--logo-color=FFD700``` and ```--logo-size=100```
- Let these flags control the size of the rotating project logo.
validations:
required: false
- type: textarea
attributes:
label: Additional information
placeholder: |
We considered adjusting the logo size to the phase of the moon, but there was no
reliable data source in air-gapped environments.
validations:
required: false
- type: textarea
attributes:
label: Acceptance Criteria
placeholder: |
- [ ] As a user, I can control the size of the rotating logo using a CLI flag.
- [ ] As a user, I can control the color of the rotating logo using a CLI flag.
- [ ] Defaults are reasonably set.
- [ ] New settings are appropriately documented.
- [ ] No breaking change for current users of the rotating logo feature.
validations:
required: false

View File

@@ -1,49 +0,0 @@
name: ❓ Question
description: Something is not clear.
labels: [ ❓ question ]
body:
- type: markdown
attributes:
value: |-
> [!NOTE]
> If you encounter "Counter not found" issues, try to re-build the performance counter first.
```
PS C:\WINDOWS\system32> cd c:\windows\system32
PS C:\windows\system32> lodctr /R
Error: Unable to rebuild performance counter setting from system backup store, error code is 2
PS C:\windows\system32> cd ..
PS C:\windows> cd syswow64
PS C:\windows\syswow64> lodctr /R
Info: Successfully rebuilt performance counter setting from system backup store
PS C:\windows\syswow64> winmgmt.exe /RESYNCPERF
```
----
- type: textarea
attributes:
label: Problem Statement
description: Without specifying a solution, describe what the project is missing today.
placeholder: |
The rotating project logo has a fixed size and color.
There is no way to make it larger and more shiny.
validations:
required: false
- type: textarea
attributes:
label: Environment
description: |
examples:
- **windows_exporter Version**: 0.26
- **Windows Server Version**: 2019
value: |
- windows_exporter Version:
- Windows Server Version:
validations:
required: true

View File

@@ -1,40 +0,0 @@
<!--
Please give your PR a title in the form "area: short description". For example "cpu: reduce usage by 95%" or "docs: fix typo in installation.md".
If your PR is to fix an issue, put "Fixes #issue-number" in the description.
Don't forget!
- Please sign CNCF's Developer Certificate of Origin and sign-off your commits by adding the -s / --signoff flag to `git commit`. See https://github.com/apps/dco for more information.
- If the PR adds or changes behaviour, or fixes a bug in an exported API, it needs a unit/e2e test.
- Performance improvements need a benchmark test to prove them.
- All comments should start with a capital letter and end with a full stop.
-->
#### What this PR does / why we need it
#### Which issue this PR fixes
*(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
- fixes #
#### Special notes for your reviewer
#### Particularly user-facing changes
#### Checklist
Complete these before marking the PR as `ready to review`:
<!-- [Place an '[x]' (no spaces) in all applicable fields.] -->
- [ ] [DCO](https://github.com/prometheus-community/helm-charts/blob/main/CONTRIBUTING.md#sign-off-your-work) signed
- [ ] The PR title has a summary of the changes and the area they affect
- [ ] The PR body has a summary to reflect any significant (and particularly user-facing) changes introduced by this PR

23
.github/release.yml vendored
View File

@@ -1,23 +0,0 @@
changelog:
exclude:
labels:
- chore
categories:
- title: 💥 Breaking Changes
labels:
- 💥 breaking-change
- title: ✨ Exciting New Features
labels:
- ✨ enhancement
- title: 🐞 Bug Fixes
labels:
- 🐞 bug
- title: 🛠️ Dependencies
labels:
- 🛠️ dependencies
- title: 📖 Documentation
labels:
- 📖 docs
- title: Other Changes
labels:
- "*"

View File

@@ -1,61 +0,0 @@
---
name: Push README to Docker Hub
on:
push:
paths:
- "README.md"
- "README-containers.md"
- ".github/workflows/container_description.yml"
branches: [ main, master ]
permissions:
contents: read
jobs:
PushDockerHubReadme:
runs-on: ubuntu-latest
name: Push README to Docker Hub
if: github.repository_owner == 'prometheus' || github.repository_owner == 'prometheus-community' # Don't run this workflow on forks.
steps:
- name: git checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Set docker hub repo name
run: echo "DOCKER_REPO_NAME=$(make docker-repo-name)" >> $GITHUB_ENV
- name: Push README to Dockerhub
uses: christian-korneck/update-container-description-action@d36005551adeaba9698d8d67a296bd16fa91f8e8 # v1
env:
DOCKER_USER: ${{ secrets.DOCKER_HUB_LOGIN }}
DOCKER_PASS: ${{ secrets.DOCKER_HUB_PASSWORD }}
with:
destination_container_repo: ${{ env.DOCKER_REPO_NAME }}
provider: dockerhub
short_description: ${{ env.DOCKER_REPO_NAME }}
# Empty string results in README-containers.md being pushed if it
# exists. Otherwise, README.md is pushed.
readme_file: ''
PushQuayIoReadme:
runs-on: ubuntu-latest
name: Push README to quay.io
if: github.repository_owner == 'prometheus' || github.repository_owner == 'prometheus-community' # Don't run this workflow on forks.
steps:
- name: git checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Set quay.io org name
run: echo "DOCKER_REPO=$(echo quay.io/${GITHUB_REPOSITORY_OWNER} | tr -d '-')" >> $GITHUB_ENV
- name: Set quay.io repo name
run: echo "DOCKER_REPO_NAME=$(make docker-repo-name)" >> $GITHUB_ENV
- name: Push README to quay.io
uses: christian-korneck/update-container-description-action@d36005551adeaba9698d8d67a296bd16fa91f8e8 # v1
env:
DOCKER_APIKEY: ${{ secrets.QUAY_IO_API_TOKEN }}
with:
destination_container_repo: ${{ env.DOCKER_REPO_NAME }}
provider: quay
# Empty string results in README-containers.md being pushed if it
# exists. Otherwise, README.md is pushed.
readme_file: ''

View File

@@ -1,95 +0,0 @@
name: Linting
# Trigger on pull requests and pushes to master branch where Go-related files
# have been changed.
on:
push:
branches:
- master
- next
- main
- "0.*"
- "1.*"
pull_request:
env:
VERSION_PROMU: '0.14.0'
PROMTOOL_VER: '2.43.0'
jobs:
test:
runs-on: windows-2025
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version-file: 'go.mod'
- name: Test
run: make test
- name: Install e2e deps
run: |
Invoke-WebRequest -Uri https://github.com/prometheus/promu/releases/download/v$($Env:VERSION_PROMU)/promu-$($Env:VERSION_PROMU).windows-amd64.zip -OutFile promu-$($Env:VERSION_PROMU).windows-amd64.zip
Expand-Archive -Path promu-$($Env:VERSION_PROMU).windows-amd64.zip -DestinationPath .
Copy-Item -Path promu-$($Env:VERSION_PROMU).windows-amd64\promu.exe -Destination "$(go env GOPATH)\bin"
# GOPATH\bin dir must be appended to PATH else the `promu` command won't be found
echo "$(go env GOPATH)\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: e2e Test
run: make e2e-test
promtool:
runs-on: windows-2025
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version-file: 'go.mod'
- name: Install promtool
run: |
Invoke-WebRequest -Uri https://github.com/prometheus/prometheus/releases/download/v$($Env:PROMTOOL_VER)/prometheus-$($Env:PROMTOOL_VER).windows-amd64.zip -OutFile prometheus-$($Env:PROMTOOL_VER).windows-amd64.zip
Expand-Archive -Path prometheus-$($Env:PROMTOOL_VER).windows-amd64.zip -DestinationPath .
Copy-Item -Path prometheus-$($Env:PROMTOOL_VER).windows-amd64\promtool.exe -Destination "$(go env GOPATH)\bin"
Invoke-WebRequest -Uri https://github.com/prometheus/promu/releases/download/v$($Env:VERSION_PROMU)/promu-$($Env:VERSION_PROMU).windows-amd64.zip -OutFile promu-$($Env:VERSION_PROMU).windows-amd64.zip
Expand-Archive -Path promu-$($Env:VERSION_PROMU).windows-amd64.zip -DestinationPath .
Copy-Item -Path promu-$($Env:VERSION_PROMU).windows-amd64\promu.exe -Destination "$(go env GOPATH)\bin"
# GOPATH\bin dir must be appended to PATH else the `promu` command won't be found
echo "$(go env GOPATH)\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: Promtool
run: make promtool
- name: Upload windows_exporter.exe
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: always()
with:
name: windows_exporter.amd64.exe
path: 'windows_exporter.exe'
retention-days: 7
if-no-files-found: error
lint:
runs-on: windows-2025
steps:
# `gofmt` linter run by golangci-lint fails on CRLF line endings (the default for Windows)
- name: Set git to use LF
run: |
git config --global core.autocrlf false
git config --global core.eol lf
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version-file: 'go.mod'
- name: golangci-lint
uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
with:
# renovate: github=golangci/golangci-lint
version: v2.4.0
args: "--max-same-issues=0"

View File

@@ -1,48 +0,0 @@
name: Validate Pull Request
on:
pull_request:
types:
- opened
- reopened
- synchronize
- labeled
- unlabeled
jobs:
required-labels-missing:
name: required labels missing
runs-on: ubuntu-latest
steps:
- name: check
if: >-
!contains(github.event.pull_request.labels.*.name, '💥 breaking-change')
&& !contains(github.event.pull_request.labels.*.name, '✨ enhancement')
&& !contains(github.event.pull_request.labels.*.name, '🐞 bug')
&& !contains(github.event.pull_request.labels.*.name, '📖 docs')
&& !contains(github.event.pull_request.labels.*.name, 'chore')
&& !contains(github.event.pull_request.labels.*.name, '🛠️ dependencies')
run: >-
echo One of the following labels is missing on this PR:
breaking-change
enhancement
bug
docs
chore
dependencies
&& exit 1
title:
name: check title prefix
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- name: check
run: |
PR_TITLE_PREFIX=$(echo "$PR_TITLE" | cut -d':' -f1)
if [[ -d "internal/collector/$PR_TITLE_PREFIX" ]] || [[ -d "internal/$PR_TITLE_PREFIX" ]] || [[ -d "pkg/$PR_TITLE_PREFIX" ]] || [[ -d "$PR_TITLE_PREFIX" ]] || [[ "$PR_TITLE_PREFIX" == "docs" ]] || [[ "$PR_TITLE_PREFIX" == "ci" ]] || [[ "$PR_TITLE_PREFIX" == "revert" ]] || [[ "$PR_TITLE_PREFIX" == "fix" ]] || [[ "$PR_TITLE_PREFIX" == "fix(deps)" ]] || [[ "$PR_TITLE_PREFIX" == "feat" ]] || [[ "$PR_TITLE_PREFIX" == "chore" ]] || [[ "$PR_TITLE_PREFIX" == "chore(docs)" ]] || [[ "$PR_TITLE_PREFIX" == "chore(deps)" ]] || [[ "$PR_TITLE_PREFIX" == "*" ]] || [[ "$PR_TITLE_PREFIX" == "Release"* ]] || [[ "$PR_TITLE_PREFIX" == "Synchronize common files from prometheus/prometheus" ]] || [[ "$PR_TITLE_PREFIX" == "[0."* ]] || [[ "$PR_TITLE_PREFIX" == "[1."* ]]; then
exit 0
fi
echo "PR title must start with an name of an collector package"
echo "Example: 'logical_disk: description'"
exit 1
env:
PR_TITLE: ${{ github.event.pull_request.title }}

View File

@@ -1,244 +0,0 @@
name: Releases
# Trigger on releases.
on:
push:
branches:
- master
pull_request:
workflow_dispatch:
release:
types:
- published
- edited
permissions:
contents: write
packages: write
env:
VERSION_PROMU: '0.17.0'
jobs:
build:
runs-on: windows-2025
environment: build
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
with:
fetch-depth: '0'
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version-file: 'go.mod'
- name: Install WiX
run: |
dotnet tool install --global wix --version 5.0.2
- name: Install WiX extensions
run: |
wix extension add -g WixToolset.Util.wixext/5.0.2
wix extension add -g WixToolset.Ui.wixext/5.0.2
wix extension add -g WixToolset.Firewall.wixext/5.0.2
- name: Install Build deps
run: |
Invoke-WebRequest -Uri https://github.com/prometheus/promu/releases/download/v$($Env:VERSION_PROMU)/promu-$($Env:VERSION_PROMU).windows-amd64.zip -OutFile promu-$($Env:VERSION_PROMU).windows-amd64.zip
Expand-Archive -Path promu-$($Env:VERSION_PROMU).windows-amd64.zip -DestinationPath .
Copy-Item -Path promu-$($Env:VERSION_PROMU).windows-amd64\promu.exe -Destination "$(go env GOPATH)\bin"
# GOPATH\bin dir must be added to PATH else the `promu` commands won't be found
echo "$(go env GOPATH)\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: Build
run: |
$ErrorActionPreference = "Stop"
$Version = git describe --tags --always
$Version = $Version -replace 'v', ''
# '+' symbols are invalid characters in image tags
$Version = $Version -replace '\+', '_'
$Version | Set-Content VERSION -PassThru
make build-all
# GH requires all files to have different names, so add version/arch to differentiate
foreach($Arch in "amd64", "arm64") {
Move-Item output\$Arch\windows_exporter.exe output\windows_exporter-$Version-$Arch.exe
}
Get-ChildItem -Path output
- name: Sign build artifacts
if: ${{ (github.event_name != 'pull_request' && github.repository == 'prometheus-community/windows_exporter') || (github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == 'prometheus-community/windows_exporter') }}
run: |
$ErrorActionPreference = "Stop"
$Version = Get-Content VERSION
$b64 = $env:CODE_SIGN_KEY
$filename = 'windows_exporter_CodeSign.pfx'
$bytes = [Convert]::FromBase64String($b64)
[IO.File]::WriteAllBytes($filename, $bytes)
$basePath = "C:\Program Files (x86)\Windows Kits\10\bin"
$latestSigntool = Get-ChildItem -Path $basePath -Directory |
Where-Object { $_.Name -match "^\d+\.\d+\.\d+\.\d+$" } |
Sort-Object { [Version]$_.Name } -Descending |
Select-Object -First 1 |
ForEach-Object { Join-Path $_.FullName "x64\signtool.exe" }
if (Test-Path $latestSigntool) {
Write-Output $latestSigntool
} else {
Write-Output "signtool.exe not found"
}
foreach($Arch in "amd64", "arm64") {
& $latestSigntool sign /v /tr "http://timestamp.digicert.com" /d "Prometheus exporter for Windows machines" /td SHA256 /fd SHA256 /a /f "windows_exporter_CodeSign.pfx" /p $env:CODE_SIGN_PASSWORD "output\windows_exporter-$Version-$Arch.exe"
}
rm windows_exporter_CodeSign.pfx
env:
CODE_SIGN_KEY: ${{ secrets.CODE_SIGN_KEY }}
CODE_SIGN_PASSWORD: ${{ secrets.CODE_SIGN_PASSWORD }}
- name: Build Release Artifacts
run: |
$ErrorActionPreference = "Stop"
$Version = Get-Content VERSION
foreach($Arch in "amd64", "arm64") {
Write-Host "Building windows_exporter $Version msi for $Arch"
.\installer\build.ps1 -PathToExecutable .\output\windows_exporter-$Version-$Arch.exe -Version $Version -Arch "$Arch"
}
Move-Item installer\*.msi output\
Get-ChildItem -Path output\
- name: Sign installer artifacts
if: ${{ (github.event_name != 'pull_request' && github.repository == 'prometheus-community/windows_exporter') || (github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == 'prometheus-community/windows_exporter') }}
run: |
$ErrorActionPreference = "Stop"
$Version = Get-Content VERSION
$b64 = $env:CODE_SIGN_KEY
$filename = 'windows_exporter_CodeSign.pfx'
$bytes = [Convert]::FromBase64String($b64)
[IO.File]::WriteAllBytes($filename, $bytes)
$basePath = "C:\Program Files (x86)\Windows Kits\10\bin"
$latestSigntool = Get-ChildItem -Path $basePath -Directory |
Where-Object { $_.Name -match "^\d+\.\d+\.\d+\.\d+$" } |
Sort-Object { [Version]$_.Name } -Descending |
Select-Object -First 1 |
ForEach-Object { Join-Path $_.FullName "x64\signtool.exe" }
if (Test-Path $latestSigntool) {
Write-Output $latestSigntool
} else {
Write-Output "signtool.exe not found"
}
foreach($Arch in "amd64", "arm64") {
& $latestSigntool sign /v /tr "http://timestamp.digicert.com" /d "Prometheus exporter for Windows machines" /td SHA256 /fd SHA256 /a /f "windows_exporter_CodeSign.pfx" /p $env:CODE_SIGN_PASSWORD "output\windows_exporter-$Version-$Arch.msi"
}
rm windows_exporter_CodeSign.pfx
env:
CODE_SIGN_KEY: ${{ secrets.CODE_SIGN_KEY }}
CODE_SIGN_PASSWORD: ${{ secrets.CODE_SIGN_PASSWORD }}
- name: Generate checksums
run: |
promu checksum output
cat output\sha256sums.txt
- name: Upload Artifacts
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: windows_exporter_binaries
path: |
output\windows_exporter-*.exe
output\windows_exporter-*.msi
- name: Release
if: startsWith(github.ref, 'refs/tags/')
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
$TagName = $env:GITHUB_REF -replace 'refs/tags/', ''
Get-ChildItem -Path output\* -Include @('windows_exporter*.msi', 'windows_exporter*.exe', 'sha256sums.txt') | Foreach-Object {gh release upload $TagName $_}
docker:
name: Build docker images
runs-on: ubuntu-latest
needs:
- build
env:
DOCKER_BUILD_SUMMARY: false
DOCKER_BUILD_RECORD_UPLOAD: false
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
with:
fetch-depth: '0'
- name: Download Artifacts
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
name: windows_exporter_binaries
- name: Login to Docker Hub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
username: ${{ secrets.DOCKER_HUB_LOGIN }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Login to quay.io
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: quay.io
username: ${{ secrets.QUAY_IO_LOGIN }}
password: ${{ secrets.QUAY_IO_PASSWORD }}
- name: Login to GitHub container registry
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Docker meta
id: meta
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0
with:
images: |
ghcr.io/prometheus-community/windows-exporter
docker.io/prometheuscommunity/windows-exporter
quay.io/prometheuscommunity/windows-exporter
tags: |
type=semver,pattern={{version}}
type=ref,event=branch
type=ref,event=pr
labels: |
org.opencontainers.image.title=windows_exporter
org.opencontainers.image.description=A Prometheus exporter for Windows machines.
org.opencontainers.image.vendor=The Prometheus Community
org.opencontainers.image.licenses=MIT
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build and push
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
with:
context: .
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: windows/amd64
annotations: ${{ steps.meta.outputs.labels }}

View File

@@ -1,26 +0,0 @@
name: Spell checking
# Trigger on pull requests, and pushes to master branch.
on:
push:
branches:
- master
pull_request:
branches:
- master
env:
VERSION_PROMU: 'v0.14.0'
jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: codespell-project/actions-codespell@master
with:
check_filenames: true
# When using this Action in other repos, the --skip option below can be removed
skip: ./.git,go.mod,go.sum
ignore_words_list: calle,Entires

View File

@@ -1,29 +0,0 @@
name: 'Close stale issues and PRs'
on:
workflow_dispatch: {}
schedule:
- cron: '30 1 * * *'
permissions:
issues: write
pull-requests: write
jobs:
stale:
if: github.repository_owner == 'prometheus' || github.repository_owner == 'prometheus-community' # Don't run this workflow on forks.
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
# opt out of defaults to avoid marking issues as stale and closing them
# https://github.com/actions/stale#days-before-close
# https://github.com/actions/stale#days-before-stale
days-before-stale: -1
days-before-close: -1
stale-pr-message: ''
stale-issue-message: 'This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.'
operations-per-run: 30
# override days-before-stale, for only marking issues as stale
days-before-issue-stale: 90
days-before-issue-close: 30
stale-pr-label: stale
exempt-pr-labels: keepalive

View File

@@ -1,31 +0,0 @@
name: Stale Check
on:
workflow_dispatch: {}
schedule:
- cron: '16 22 * * *'
permissions:
issues: write
pull-requests: write
jobs:
stale:
if: github.repository_owner == 'prometheus' || github.repository_owner == 'prometheus-community' # Don't run this workflow on forks.
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
# opt out of defaults to avoid marking issues as stale and closing them
# https://github.com/actions/stale#days-before-close
# https://github.com/actions/stale#days-before-stale
days-before-stale: -1
days-before-close: -1
# Setting it to empty string to skip comments.
# https://github.com/actions/stale#stale-pr-message
# https://github.com/actions/stale#stale-issue-message
stale-pr-message: ''
stale-issue-message: ''
operations-per-run: 30
# override days-before-stale, for only marking the pull requests as stale
days-before-pr-stale: 60
stale-pr-label: stale
exempt-pr-labels: keepalive

19
.gitignore vendored
View File

@@ -4,21 +4,4 @@ VERSION
*.un~
output/
.vscode
*.syso
installer/*.msi
installer/*.log
installer/*.wixpdb
local/
/.idea/*
!/.idea/inspectionProfiles/
/.idea/inspectionProfiles/*
!/.idea/inspectionProfiles/Project_Default.xml
!/.idea/dictionaries/
/.idea/dictionaries/*
!/.idea/dictionaries/project.xml
/.idea/copyright/*
!/.idea/copyright/profiles_settings.xml
!/.idea/copyright/windows_exporter.xml
!/.idea/vcs.xml
!/.idea/go.imports.xml
.idea

View File

@@ -1,144 +1,23 @@
version: "2"
linters:
default: all
disable:
- cyclop
- depguard
- dogsled
- dupl
- err113
- exhaustive
- exhaustruct
- fatcontext
- funcorder
- funlen
- gocognit
- goconst
- gocyclo
- godot
- lll
- maintidx
- mnd
- noinlineerr
- paralleltest
- tagliatelle
- testpackage
- varnamelen
- wrapcheck
- wsl
settings:
forbidigo:
forbid:
- pattern: ^(fmt\.Print(|f|ln)|print|println)$
- pattern: ^syscall\.(.{1,7}|.{7}[^N]|.{9,})$
msg: use golang.org/x/sys/windows instead of syscall
- pattern: ^windows\.NewLazyDLL$
msg: use NewLazySystemDLL instead of NewLazyDLL
goheader:
values:
const:
COMPANY: The Prometheus Authors
template: |-
SPDX-License-Identifier: Apache-2.0
Copyright {{ COMPANY }}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
gomoddirectives:
toolchain-forbidden: true
gosec:
excludes:
- G101
- G115
govet:
enable-all: true
disable:
- fieldalignment
- shadow
revive:
rules:
- name: var-naming
arguments:
- [ ] # AllowList - do not remove as args for the rule are positional and won't work without lists first
- [ ] # DenyList
- - skip-package-name-checks: true
sloglint:
no-mixed-args: true
kv-only: false
attr-only: true
no-global: all
context: scope
static-msg: false
no-raw-keys: false
key-naming-case: snake
forbidden-keys:
- time
- level
- msg
- source
args-on-sep-lines: true
staticcheck:
checks:
- -ST1003
- all
tagliatelle:
case:
rules:
json: camel
yaml: snake
use-field-name: true
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
rules:
- linters:
- revive
text: '`?\w+`? should be `?\w+`?'
- linters:
- revive
text: don't use ALL_CAPS in Go names; use CamelCase
- path: .+\.go$
text: don't use underscores in Go names
- path: .+\.go$
text: don't use an underscore in package name
- path: .+\.go$
text: exported type .+ should have comment or be unexported
- linters:
- staticcheck
text: "ST1003:"
paths:
- third_party$
- builtin$
- examples$
formatters:
disable-all: true
enable:
- gci
- gofmt
- gofumpt
- goimports
settings:
gci:
sections:
- prefix(github.com/prometheus-community/windows_exporter/internal/windowsservice)
- standard
- default
custom-order: true
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
- deadcode
- errcheck
- golint
- govet
- gofmt
- ineffassign
- interfacer
- structcheck
- unconvert
- varcheck
issues:
exclude:
- don't use underscores in Go names
- exported type .+ should have comment or be unexported
exclude-rules:
- # Golint has many capitalisation complaints on WMI class names
text: "`?\\w+`? should be `?\\w+`?"
linters:
- golint

View File

@@ -1,11 +0,0 @@
<component name="CopyrightManager">
<settings default="windows_exporter">
<module2copyright>
<element module="All Changed Files" copyright="windows_exporter" />
</module2copyright>
<LanguageOptions name="Go">
<option name="fileTypeOverride" value="3" />
<option name="block" value="false" />
</LanguageOptions>
</settings>
</component>

View File

@@ -1,7 +0,0 @@
<component name="CopyrightManager">
<copyright>
<option name="keyword" value="The Prometheus Authors" />
<option name="notice" value="SPDX-License-Identifier: Apache-2.0&#10;&#10;Copyright The Prometheus Authors&#10;Licensed under the Apache License, Version 2.0 (the &quot;License&quot;);&#10;you may not use this file except in compliance with the License.&#10;You may obtain a copy of the License at&#10;&#10;http://www.apache.org/licenses/LICENSE-2.0&#10;&#10;Unless required by applicable law or agreed to in writing, software&#10;distributed under the License is distributed on an &quot;AS IS&quot; BASIS,&#10;WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.&#10;See the License for the specific language governing permissions and&#10;limitations under the License." />
<option name="myName" value="windows_exporter" />
</copyright>
</component>

View File

@@ -1,16 +0,0 @@
<component name="ProjectDictionaryState">
<dictionary name="project">
<words>
<w>containerd</w>
<w>endpointstats</w>
<w>gochecknoglobals</w>
<w>lpwstr</w>
<w>luid</w>
<w>operationoptions</w>
<w>setupapi</w>
<w>spdx</w>
<w>textfile</w>
<w>vmcompute</w>
</words>
</dictionary>
</component>

11
.idea/go.imports.xml generated
View File

@@ -1,11 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="GoImports">
<option name="excludedPackages">
<array>
<option value="github.com/pkg/errors" />
<option value="golang.org/x/net/context" />
</array>
</option>
</component>
</project>

View File

@@ -1,6 +0,0 @@
<component name="InspectionProjectProfileManager">
<profile version="1.0">
<option name="myName" value="Project Default" />
<inspection_tool class="GoLinter" enabled="true" level="WARNING" enabled_by_default="true" />
</profile>
</component>

6
.idea/vcs.xml generated
View File

@@ -1,6 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="VcsDirectoryMappings">
<mapping directory="" vcs="Git" />
</component>
</project>

View File

@@ -1,27 +1,18 @@
go:
# Whenever the Go version is updated here,
# .github/workflows should also be updated.
version: 1.23
cgo: false
repository:
path: github.com/prometheus-community/windows_exporter
path: github.com/martinlindhe/wmi_exporter
build:
binaries:
- name: windows_exporter
path: ./cmd/windows_exporter
tags:
all:
- trimpath
ldflags: |
-X github.com/prometheus/common/version.Version={{.Version}}
-X github.com/prometheus/common/version.Revision={{.Revision}}
-X github.com/prometheus/common/version.Branch={{.Branch}}
-X github.com/prometheus/common/version.BuildUser={{user}}@{{host}}
-X github.com/prometheus/common/version.BuildDate={{date "20060102-15:04:05"}}
binaries:
- name: wmi_exporter
ldflags: |
-X github.com/prometheus/common/version.Version={{.Version}}
-X github.com/prometheus/common/version.Revision={{.Revision}}
-X github.com/prometheus/common/version.Branch={{.Branch}}
-X github.com/prometheus/common/version.BuildUser={{user}}@{{host}}
-X github.com/prometheus/common/version.BuildDate={{date "20060102-15:04:05"}}
tarball:
files:
- LICENSE
files:
- LICENSE
crossbuild:
platforms:
- windows/amd64
- windows/arm64
platforms:
- windows/amd64
- windows/386

View File

@@ -1,30 +0,0 @@
<!--
~ SPDX-License-Identifier: Apache-2.0
~
~ Copyright The Prometheus Authors
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="all" type="GoApplicationRunConfiguration" factoryName="Go Application" folderName="run">
<module name="windows_exporter" />
<working_directory value="$PROJECT_DIR$" />
<parameters value="--web.listen-address=127.0.0.1:9182 --log.level=info --collectors.enabled=ad,adcs,adfs,cache,container,cpu,cpu_info,dfsr,dhcp,diskdrive,dns,exchange,filetime,fsrmquota,hyperv,iis,license,logical_disk,memory,mscluster,msmq,mssql,net,netframework,nps,os,pagefile,performancecounter,physical_disk,printer,process,remote_fx,scheduled_task,service,smb,smbclient,smtp,system,tcp,terminal_services,thermalzone,time,udp,update,vmware,performancecounter --debug.enabled --collector.performancecounter.objects='[{ &quot;name&quot;: &quot;memory&quot;, &quot;type&quot;: &quot;formatted&quot;, &quot;object&quot;: &quot;Memory&quot;, &quot;counters&quot;: [{ &quot;name&quot;:&quot;Cache Faults/sec&quot;, &quot;type&quot;:&quot;counter&quot; }]}]'" />
<sudo value="true" />
<kind value="PACKAGE" />
<package value="github.com/prometheus-community/windows_exporter/cmd/windows_exporter" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$/exporter.go" />
<method v="2" />
</configuration>
</component>

4
AUTHORS.md Normal file
View File

@@ -0,0 +1,4 @@
Contributors in alphabetical order
* [Martin Lindhe](https://github.com/martinlindhe)
* [Calle Pettersson](https://github.com/carlpett)

View File

@@ -1,3 +0,0 @@
# Prometheus Community Code of Conduct
Prometheus follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).

View File

@@ -1,82 +0,0 @@
# Contributing
windows_exporter uses GitHub to manage reviews of pull requests.
* If you are a new contributor see: [Steps to Contribute](#steps-to-contribute)
* If you have a trivial fix or improvement, go ahead and create a pull request,
addressing (with `@...`) a suitable maintainer of this repository (see
[MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request.
* If you plan to do something more involved, first discuss your ideas
as a [GitHub issue](https://github.com/prometheus-community/windows_exporter/issues).
This will avoid unnecessary work and surely give you and us a good deal
of inspiration. New collectors are unlikely to be accepted, since the
`performancecounter` collector is the preferred way to collect metrics (see the example after this list).
* Relevant coding style guidelines are the [Go Code Review
Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments)
and the _Formatting and style_ section of Peter Bourgon's [Go: Best
Practices for Production
Environments](https://peter.bourgon.org/go-in-production/#formatting-and-style).
gofmt and [golangci-lint](https://github.com/golangci/golangci-lint) are your friends.
* Be sure to sign off on the [DCO](https://github.com/probot/dco#how-it-works).
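As referenced in the list above, a hedged sketch of collecting a custom counter through the existing `performancecounter` collector instead of adding a new one; the object definition mirrors the run configuration elsewhere in this diff, and the chosen counter is only an example:
```powershell
# Illustrative only: scrape a single Memory counter via the performancecounter collector.
.\windows_exporter.exe --collectors.enabled "performancecounter" `
  --collector.performancecounter.objects='[{"name":"memory","type":"formatted","object":"Memory","counters":[{"name":"Cache Faults/sec","type":"counter"}]}]'
```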
## Steps to Contribute
Should you wish to work on an issue, please claim it first by commenting on the GitHub issue that you want to work on it. This is to prevent duplicated efforts from contributors on the same issue.
For quickly compiling and testing your changes do:
```bash
# For building.
go build -o windows_exporter.exe ./cmd/windows_exporter/
./windows_exporter.exe
# For testing.
make test # Make sure all the tests pass before you commit and push :)
```
To run a collection of Go linters through [`golangci-lint`](https://github.com/golangci/golangci-lint), do:
```bash
make lint
```
If it reports an issue and you think that the warning needs to be disregarded or is a false-positive, you can add a special comment `//nolint:linter1[,linter2,...]` before the offending line. Use this sparingly though, fixing the code to comply with the linter's recommendation is in general the preferred course of action. See [this section of the golangci-lint documentation](https://golangci-lint.run/usage/false-positives/#nolint-directive) for more information.
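As a hedged illustration (package, function, and linter names below are hypothetical), such a directive is placed at the end of the offending line:
```go
package collector

// ReadWMIClass_V2 deliberately keeps the underscore to match a WMI class name.
// The trailing directive suppresses only the named linter for this single line.
func ReadWMIClass_V2() string { return "Win32_OperatingSystem" } //nolint:revive // WMI class names use underscores
```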
All our issues are regularly tagged so that you can also filter down the issues involving the components you want to work on. For our labeling policy refer to [the wiki page](https://github.com/prometheus/prometheus/wiki/Label-Names-and-Descriptions).
## Pull Request Checklist
* Branch from the main branch and, if needed, rebase to the current main branch before submitting your pull request. If it doesn't merge cleanly with main you may be asked to rebase your changes.
* Commits should be as small as possible, while ensuring that each commit is correct independently (i.e., each commit should compile and pass tests).
* The PR title should be of the format: `subsystem: what this PR does` (for example, `cpu: Add support for thing` or `docs: fix typo`).
* If your patch is not getting reviewed or you need a specific person to review it, you can @-reply a reviewer asking for a review in the pull request or a comment.
* Add tests relevant to the fixed bug or new feature.
## Dependency management
The Prometheus project uses [Go modules](https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more) to manage dependencies on external packages.
To add or update a new dependency, use the `go get` command:
```bash
# Pick the latest tagged release.
go get example.com/some/module/pkg@latest
# Pick a specific version.
go get example.com/some/module/pkg@vX.Y.Z
```
Tidy up the `go.mod` and `go.sum` files:
```bash
# The GO111MODULE variable can be omitted when the code isn't located in GOPATH.
GO111MODULE=on go mod tidy
```
You have to commit the changes to `go.mod` and `go.sum` before submitting the pull request.

View File

@@ -1,13 +0,0 @@
# mcr.microsoft.com/oss/kubernetes/windows-host-process-containers-base-image:v1.0.0
# Using this image as a base for HostProcess containers has a few advantages over using other base images for Windows containers including:
# - Smaller image size
# - OS compatibility (works on any Windows version that supports containers)
# This image MUST be built with docker buildx build (buildx) command on a Linux system.
# Ref: https://github.com/microsoft/windows-host-process-containers-base-image
ARG BASE="mcr.microsoft.com/oss/kubernetes/windows-host-process-containers-base-image:v1.0.0"
FROM $BASE
COPY windows_exporter*-amd64.exe /windows_exporter.exe
ENTRYPOINT ["windows_exporter.exe"]

View File

@@ -1,7 +1,6 @@
The MIT License (MIT)
Copyright (c) 2016 Martin Lindhe
Copyright (c) 2021 The Prometheus Authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,10 +0,0 @@
Maintainers in alphabetical order
* [Ben Reedy](https://github.com/breed808) - breed808@breed808.com
* [Calle Pettersson](https://github.com/carlpett) - calle@cape.nu
* [Jan-Otto Kröpke](https://github.com/jkroepke) - github@jkroepke.de
Alumni
* [Brian Brazil](https://github.com/brian-brazil)
* [Martin Lindhe](https://github.com/martinlindhe)

View File

@@ -1,84 +1,19 @@
GOOS ?= windows
VERSION ?= $(shell cat VERSION)
DOCKER ?= docker
export GOOS=windows
# DOCKER_REPO is the official image repository name at docker.io, quay.io.
DOCKER_REPO ?= prometheuscommunity
DOCKER_IMAGE_NAME ?= windows-exporter
# ALL_DOCKER_REPOS is the list of repositories to push the image to. ghcr.io requires that org name be the same as the image repo name.
ALL_DOCKER_REPOS ?= docker.io/$(DOCKER_REPO) ghcr.io/prometheus-community # quay.io/$(DOCKER_REPO)
# Image Variables for host process Container
# Windows image build is heavily influenced by https://github.com/kubernetes/kubernetes/blob/master/cluster/images/etcd/Makefile
OS ?= ltsc2019
ALL_OS ?= ltsc2019 ltsc2022
BASE_IMAGE ?= mcr.microsoft.com/windows/nanoserver
.PHONY: build
build: generate windows_exporter.exe
windows_exporter.exe: pkg/**/*.go
build:
promu build -v
.PHONY: generate
generate:
go generate ./...
test:
go test -v ./...
bench:
go test -v -bench='benchmarkcollector' ./internal/collectors/{cpu,logical_disk,physical_disk,memory,net,printer,process,service,system,tcp,time}
lint:
golangci-lint -c .golangci.yaml run
.PHONY: e2e-test
e2e-test: windows_exporter.exe
powershell -NonInteractive -ExecutionPolicy Bypass -File .\tools\end-to-end-test.ps1
.PHONY: promtool
promtool: windows_exporter.exe
pwsh -NonInteractive -ExecutionPolicy Bypass -File .\tools\promtool.ps1
fmt:
gofmt -l -w -s .
crossbuild: generate
crossbuild:
# The prometheus/golang-builder image for promu crossbuild doesn't exist
# on Windows, so for now, we'll just build twice
GOARCH=amd64 promu build --prefix=output/amd64
GOARCH=arm64 promu build --prefix=output/arm64
.PHONY: package
package: crossbuild
powershell -NonInteractive -ExecutionPolicy Bypass -File .\installer\build.ps1 -PathToExecutable .\output\amd64\windows_exporter.exe -Version $(shell git describe --tags --abbrev=0)
build-image: crossbuild
$(DOCKER) build --build-arg=BASE=$(BASE_IMAGE):$(OS) -f Dockerfile -t local/$(DOCKER_IMAGE_NAME):$(VERSION)-$(OS) .
build-hostprocess:
$(DOCKER) buildx build --build-arg=BASE=mcr.microsoft.com/oss/kubernetes/windows-host-process-containers-base-image:v1.0.0 -f Dockerfile -t local/$(DOCKER_IMAGE_NAME):$(VERSION)-hostprocess .
sub-build-%:
$(MAKE) OS=$* build-image
build-all: crossbuild
@for docker_repo in ${DOCKER_REPO}; do \
echo $(DOCKER) buildx build -f Dockerfile -t $${docker_repo}/$(DOCKER_IMAGE_NAME):$(VERSION) .; \
done
push:
@for docker_repo in ${DOCKER_REPO}; do \
echo $(DOCKER) buildx build --push -f Dockerfile -t $${docker_repo}/$(DOCKER_IMAGE_NAME):$(VERSION) .; \
done
.PHONY: push-all
push-all: build-all
$(MAKE) DOCKER_REPO="$(ALL_DOCKER_REPOS)" push
# Mandatory target for container description sync action
.PHONY: docker-repo-name
docker-repo-name:
@echo "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)"
GOARCH=386 promu build --prefix=output/386

273
README.md
View File

@@ -1,246 +1,109 @@
# windows_exporter
# WMI exporter
[![CI](https://github.com/prometheus-community/windows_exporter/actions/workflows/release.yml/badge.svg)](https://github.com/prometheus-community/windows_exporter)
[![Linting](https://github.com/prometheus-community/windows_exporter/actions/workflows/lint.yml/badge.svg)](https://github.com/prometheus-community/windows_exporter)
[![GitHub license](https://img.shields.io/github/license/prometheus-community/windows_exporter)](https://github.com/prometheus-community/windows_exporter/blob/master/LICENSE.txt)
[![Current Release](https://img.shields.io/github/release/prometheus-community/windows_exporter.svg?logo=github)](https://github.com/prometheus-community/windows_exporter/releases/latest)
[![GitHub Repo stars](https://img.shields.io/github/stars/prometheus-community/windows_exporter?style=flat&logo=github)](https://github.com/prometheus-community/windows_exporter/stargazers)
[![GitHub all releases](https://img.shields.io/github/downloads/prometheus-community/windows_exporter/total?logo=github)](https://github.com/prometheus-community/windows_exporter/releases/latest)
[![Go Report Card](https://goreportcard.com/badge/github.com/prometheus-community/windows_exporter)](https://goreportcard.com/report/github.com/prometheus-community/windows_exporter)
[![Build status](https://ci.appveyor.com/api/projects/status/ljwan71as6pf2joe/branch/master?svg=true)](https://ci.appveyor.com/project/martinlindhe/wmi-exporter)
Prometheus exporter for Windows machines, using WMI (Windows Management Instrumentation).
A Prometheus exporter for Windows machines.
## Collectors
| Name | Description | Enabled by default |
|------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------|
| [ad](docs/collector.ad.md) | Active Directory Domain Services | |
| [adcs](docs/collector.adcs.md) | Active Directory Certificate Services | |
| [adfs](docs/collector.adfs.md) | Active Directory Federation Services | |
| [cache](docs/collector.cache.md) | Cache metrics | |
| [cpu](docs/collector.cpu.md) | CPU usage | &#10003; |
| [cpu_info](docs/collector.cpu_info.md) | CPU Information | |
| [container](docs/collector.container.md) | Container metrics | |
| [diskdrive](docs/collector.diskdrive.md) | Diskdrive metrics | |
| [dfsr](docs/collector.dfsr.md) | DFSR metrics | |
| [dhcp](docs/collector.dhcp.md) | DHCP Server | |
| [dns](docs/collector.dns.md) | DNS Server | |
| [exchange](docs/collector.exchange.md) | Exchange metrics | |
| [filetime](docs/collector.filetime.md) | FileTime metrics | |
| [fsrmquota](docs/collector.fsrmquota.md) | Microsoft File Server Resource Manager (FSRM) Quotas collector | |
| [gpu](docs/collector.gpu.md) | GPU metrics | |
| [hyperv](docs/collector.hyperv.md) | Hyper-V hosts | |
| [iis](docs/collector.iis.md) | IIS sites and applications | |
| [license](docs/collector.license.md) | Windows license status | |
| [logical_disk](docs/collector.logical_disk.md) | Logical disks, disk I/O | &#10003; |
| [memory](docs/collector.memory.md) | Memory usage metrics | &#10003; |
| [mscluster](docs/collector.mscluster.md) | MSCluster metrics | |
| [msmq](docs/collector.msmq.md) | MSMQ queues | |
| [mssql](docs/collector.mssql.md) | [SQL Server Performance Objects](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/use-sql-server-objects#SQLServerPOs) metrics | |
| [netframework](docs/collector.netframework.md) | .NET Framework metrics | |
| [net](docs/collector.net.md) | Network interface I/O | &#10003; |
| [os](docs/collector.os.md) | OS metrics (memory, processes, users) | &#10003; |
| [pagefile](docs/collector.pagefile.md) | pagefile metrics | |
| [performancecounter](docs/collector.performancecounter.md) | Custom performance counter metrics | |
| [physical_disk](docs/collector.physical_disk.md) | physical disk metrics | &#10003; |
| [printer](docs/collector.printer.md) | Printer metrics | |
| [process](docs/collector.process.md) | Per-process metrics | |
| [remote_fx](docs/collector.remote_fx.md) | RemoteFX protocol (RDP) metrics | |
| [scheduled_task](docs/collector.scheduled_task.md) | Scheduled Tasks metrics | |
| [service](docs/collector.service.md) | Service state metrics | &#10003; |
| [smb](docs/collector.smb.md) | SMB Server | |
| [smbclient](docs/collector.smbclient.md) | SMB Client | |
| [smtp](docs/collector.smtp.md) | IIS SMTP Server | |
| [system](docs/collector.system.md) | System calls | &#10003; |
| [tcp](docs/collector.tcp.md) | TCP connections | |
| [terminal_services](docs/collector.terminal_services.md) | Terminal services (RDS) | |
| [textfile](docs/collector.textfile.md) | Read prometheus metrics from a text file | |
| [thermalzone](docs/collector.thermalzone.md) | Thermal information | |
| [time](docs/collector.time.md) | Windows Time Service | |
| [udp](docs/collector.udp.md) | UDP connections | |
| [update](docs/collector.update.md) | Windows Update Service | |
| [vmware](docs/collector.vmware.md) | Performance counters installed by the Vmware Guest agent | |
Name | Description | Enabled by default
---------|-------------|--------------------
[ad](docs/collector.ad.md) | Active Directory Domain Services |
[adfs](docs/collector.adfs.md) | Active Directory Federation Services |
[cpu](docs/collector.cpu.md) | CPU usage | &#10003;
[cs](docs/collector.cs.md) | "Computer System" metrics (system properties, num cpus/total memory) | &#10003;
[container](docs/collector.container.md) | Container metrics |
[dns](docs/collector.dns.md) | DNS Server |
[hyperv](docs/collector.hyperv.md) | Hyper-V hosts |
[iis](docs/collector.iis.md) | IIS sites and applications |
[logical_disk](docs/collector.logical_disk.md) | Logical disks, disk I/O | &#10003;
[logon](docs/collector.logon.md) | User logon sessions |
[memory](docs/collector.memory.md) | Memory usage metrics |
[msmq](docs/collector.msmq.md) | MSMQ queues |
[mssql](docs/collector.mssql.md) | [SQL Server Performance Objects](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/use-sql-server-objects#SQLServerPOs) metrics |
[netframework_clrexceptions](docs/collector.netframework_clrexceptions.md) | .NET Framework CLR Exceptions |
[netframework_clrinterop](docs/collector.netframework_clrinterop.md) | .NET Framework Interop Metrics |
[netframework_clrjit](docs/collector.netframework_clrjit.md) | .NET Framework JIT metrics |
[netframework_clrloading](docs/collector.netframework_clrloading.md) | .NET Framework CLR Loading metrics |
[netframework_clrlocksandthreads](docs/collector.netframework_clrlocksandthreads.md) | .NET Framework locks and threads metrics |
[netframework_clrmemory](docs/collector.netframework_clrmemory.md) | .NET Framework Memory metrics |
[netframework_clrremoting](docs/collector.netframework_clrremoting.md) | .NET Framework Remoting metrics |
[netframework_clrsecurity](docs/collector.netframework_clrsecurity.md) | .NET Framework Security Check metrics |
[net](docs/collector.net.md) | Network interface I/O | &#10003;
[os](docs/collector.os.md) | OS metrics (memory, processes, users) | &#10003;
[process](docs/collector.process.md) | Per-process metrics |
[service](docs/collector.service.md) | Service state metrics | &#10003;
[system](docs/collector.system.md) | System calls | &#10003;
[tcp](docs/collector.tcp.md) | TCP connections |
[thermalzone](docs/collector.thermalzone.md) | Thermal information
[textfile](docs/collector.textfile.md) | Read prometheus metrics from a text file | &#10003;
[vmware](docs/collector.vmware.md) | Performance counters installed by the Vmware Guest agent |
See the linked documentation on each collector for more information on reported metrics, configuration settings and usage examples.
### Filtering enabled collectors
The `windows_exporter` will expose all metrics from enabled collectors by default. This is the recommended way to collect metrics to avoid errors when comparing metrics of different families.
For advanced use the `windows_exporter` can be passed an optional list of collectors to filter metrics. The `collect[]` parameter may be used multiple times. In Prometheus configuration you can use this syntax under the [scrape config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#<scrape_config>).
```
params:
  collect[]:
    - foo
    - bar
```
This can be useful for having different Prometheus servers collect specific metrics from nodes.
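For context, here is a hedged sketch of how that filter might appear in a complete `prometheus.yml` scrape config; the job name, collector names, and target address are placeholders, not part of this repository:
```yaml
scrape_configs:
  - job_name: windows              # placeholder job name
    params:
      collect[]:                   # same parameter as described above
        - cpu
        - memory
    static_configs:
      - targets: ['192.0.2.10:9182']   # placeholder exporter address
```
Since `params` are sent as URL query parameters, the same filter can presumably be tested ad hoc against `http://<host>:9182/metrics?collect[]=cpu&collect[]=memory`.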
## Flags
windows_exporter accepts flags to configure certain behaviours. The ones configuring the global behaviour of the exporter are listed below, while collector-specific ones are documented in the respective collector documentation above.
| Flag | Description | Default value |
|---------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|
| `--web.listen-address` | host:port for exporter. | `:9182` |
| `--telemetry.path` | URL path for surfacing collected metrics. | `/metrics` |
| `--collectors.enabled` | Comma-separated list of collectors to use. Use `[defaults]` as a placeholder which gets expanded containing all the collectors enabled by default. | `[defaults]` |
| `--scrape.timeout-margin` | Seconds to subtract from the timeout allowed by the client. Tune to allow for overhead or high loads. | `0.5` |
| `--web.config.file` | A [web config][web_config] for setting up TLS and Auth | None |
| `--config.file` | [Using a config file](#using-a-configuration-file) from path | None |
| `--log.file` | Output file of log messages. One of [stdout, stderr, eventlog, \<path to log file>]<br>**NOTE:** The MSI installer will add a default argument to the installed service setting this to eventlog | stderr |
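As an illustrative (non-authoritative) example of combining several of these flags on the command line, where the listen address and collector list are arbitrary choices:
```powershell
# Illustrative invocation; all values are placeholders close to the documented defaults.
.\windows_exporter.exe `
  --web.listen-address=":9182" `
  --telemetry.path="/metrics" `
  --collectors.enabled="[defaults],tcp,udp" `
  --scrape.timeout-margin=0.5
```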
## Installation
The latest release can be downloaded from the [releases page](https://github.com/martinlindhe/wmi_exporter/releases).
The latest release can be downloaded from the [releases page](https://github.com/prometheus-community/windows_exporter/releases).
Each release provides a .msi installer. The installer will set up the WMI Exporter as a Windows service, as well as create an exception in the Windows Firewall.
All binaries and installation packages are signed with a self-signed certificate. The public key can be found [here](https://github.com/prometheus-community/windows_exporter/blob/master/installer/codesign.cer).
Once imported into the trusted root certificate store, the binaries and installation packages will be trusted.
If the installer is run without any parameters, the exporter will run with default settings for enabled collectors, ports, etc. The following parameters are available:
Each release provides a .msi installer. The installer will set up the windows_exporter as a Windows service, as well as create an exception in the Windows Firewall.
Name | Description
-----|------------
`ENABLED_COLLECTORS` | As the `--collectors.enabled` flag, provide a comma-separated list of enabled collectors
`LISTEN_ADDR` | The IP address to bind to. Defaults to 0.0.0.0
`LISTEN_PORT` | The port to bind to. Defaults to 9182.
`METRICS_PATH` | The path at which to serve metrics. Defaults to `/metrics`
`TEXTFILE_DIR` | As the `--collector.textfile.directory` flag, provide a directory to read text files with metrics from
`EXTRA_FLAGS` | Allows passing full CLI flags. Defaults to an empty string.
If the installer is run without any parameters, the exporter will run with default settings for enabled collectors, ports, etc.
The installer provides a configuration file to customize the exporter.
The configuration file
* is located in the same directory as the exporter executable.
* has the YAML format and is provided with the `--config.file` parameter.
* can be used to enable or disable collectors, set collector-specific parameters, and set global parameters.
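A minimal sketch of what such a YAML config file might look like, assuming the keys mirror the CLI flag names (the collector list and log level below are illustrative):
```yaml
# Hypothetical example; keys are assumed to mirror the CLI flags.
collectors:
  enabled: "[defaults],tcp"
log:
  level: warn
scrape:
  timeout-margin: 0.5
```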
The following parameters are available:
| Name | Description |
|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `ENABLED_COLLECTORS` | As the `--collectors.enabled` flag, provide a comma-separated list of enabled collectors |
| `CONFIG_FILE` | Use the `--config.file` flag to specify a config file. If empty, the default config file in the install dir will be used. If set, the config file must exist before the installation is started. |
| `LISTEN_ADDR` | The IP address to bind to. Defaults to an empty string. (any local address) |
| `LISTEN_PORT` | The port to bind to. Defaults to `9182`. |
| `METRICS_PATH` | The path at which to serve metrics. Defaults to `/metrics` |
| `TEXTFILE_DIRS` | Use the `--collector.textfile.directories` flag to specify one or more directories, separated by commas, where the collector should read text files containing metrics |
| `REMOTE_ADDR` | Allows setting comma-separated remote IP addresses for the Windows Firewall exception (allow list). Defaults to an empty string (any remote address). |
| `EXTRA_FLAGS` | Allows passing full CLI flags. Defaults to an empty string. For `--collectors.enabled` and `--config.file`, use the specialized properties `ENABLED_COLLECTORS` and `CONFIG_FILE` |
| `ADDLOCAL` | Enables features within the windows_exporter installer. Supported values: `FirewallException` |
| `REMOVE` | Disables features within the windows_exporter installer. Supported values: `FirewallException` |
| `APPLICATIONFOLDER` | Directory to install windows_exporter. Defaults to `C:\Program Files\windows_exporter` |
> [!NOTE]
> The installer properties are always preferred over the values defined in the config file. If you prefer to configure via the config file, avoid using any of the properties listed above.
Parameters are sent to the installer via `msiexec`.
On PowerShell, the `--%` stop-parsing token should be passed before defining properties.
Example invocations:
```powershell
msiexec /i <path-to-msi-file> --% ENABLED_COLLECTORS=os,iis LISTEN_PORT=5000
```
Example service collector with a custom query.
```powershell
msiexec /i <path-to-msi-file> ENABLED_COLLECTORS=os,service --% EXTRA_FLAGS="--collector.service.services-where ""Name LIKE 'sql%'"""
```
Define a config file.
```powershell
msiexec /i <path-to-msi-file> --% CONFIG_FILE="D:\config.yaml"
```
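Note that installer properties still take precedence over the file's contents. For instance (a sketch reusing the hypothetical `D:\config.yaml` from above), the `LISTEN_PORT` property below wins over any listen address defined in the config file:
```powershell
msiexec /i <path-to-msi-file> --% CONFIG_FILE="D:\config.yaml" LISTEN_PORT=9200
```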
Alternative install directory
```powershell
msiexec /i <path-to-msi-file> --% ADDLOCAL=FirewallException APPLICATIONFOLDER="F:\Program Files\windows_exporter"
```
On some older versions of Windows, you may need to surround parameter values with double quotes to get the installation command parsing properly:
```powershell
msiexec /i C:\Users\Administrator\Downloads\windows_exporter.msi --% ENABLED_COLLECTORS="ad,iis,memory,process,tcp,textfile,thermalzone" TEXTFILE_DIRS="C:\custom_metrics\"
```
To install the exporter and create a Windows Firewall exception, use the following command:
```powershell
msiexec /i <path-to-msi-file> --% ADDLOCAL=FirewallException
```
PowerShell versions 7.3 and above require [PSNativeCommandArgumentPassing](https://learn.microsoft.com/en-us/powershell/scripting/learn/experimental-features?view=powershell-7.3) to be set to `Legacy` when using `--% EXTRA_FLAGS`:
```powershell
$PSNativeCommandArgumentPassing = 'Legacy'
msiexec /i <path-to-msi-file> ENABLED_COLLECTORS=os,service --% EXTRA_FLAGS="--collectors.exchange.enabled=""ADAccessProcesses"""
```
## Roadmap
See [open issues](https://github.com/martinlindhe/wmi_exporter/issues)
## Usage
Build and run the exporter from source:
```powershell
go get -u github.com/prometheus/promu
go get -u github.com/martinlindhe/wmi_exporter
cd $env:GOPATH/src/github.com/martinlindhe/wmi_exporter
promu build -v .
.\wmi_exporter.exe
```
The Prometheus metrics will be exposed on [localhost:9182](http://localhost:9182).
## Docker Implementation
The windows_exporter can be run as a Docker container. The Docker image is available on
* [Docker Hub](https://hub.docker.com/r/prometheuscommunity/windows-exporter): `docker.io/prometheuscommunity/windows-exporter`
* [GitHub Container Registry](https://github.com/prometheus-community/windows_exporter/pkgs/container/windows-exporter): `ghcr.io/prometheus-community/windows-exporter`
* [quay.io Registry](https://quay.io/repository/prometheuscommunity/windows-exporter): `quay.io/prometheuscommunity/windows-exporter`
### Tags
The Docker image is tagged with the version of the exporter. The `latest` tag is also available and points to the latest release.
## Kubernetes Implementation
See detailed steps to install on Windows Kubernetes [here](./kubernetes/kubernetes.md).
## Examples
### Enable only service collector and specify a custom query
```powershell
.\wmi_exporter.exe --collectors.enabled "service" --collector.service.services-where "Name='wmi_exporter'"
```
### Enable only process collector and specify a custom query
```powershell
.\wmi_exporter.exe --collectors.enabled "process" --collector.process.processes-where "Name LIKE 'firefox%'"
```
When there are multiple processes with the same name, WMI represents those after the first instance as `process-name#index`. So to get them all, rather than just the first one, the query needs to be a wildcard search using a `%` character.
Please note that in Windows batch scripts (and when using the `cmd` command prompt), the `%` character is reserved, so it has to be escaped with another `%`. For example, the wildcard syntax for searching for all firefox processes is `firefox%%`.
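A sketch of the same process query as it would be typed at a `cmd` prompt, with the wildcard `%` doubled:
```
.\wmi_exporter.exe --collectors.enabled "process" --collector.process.processes-where "Name LIKE 'firefox%%'"
```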
## Supported versions
`windows_exporter` supports Windows Server versions 2016 and later, and desktop Windows versions 10 and 11 (21H2 or later).
There are known compatibility issues with Windows Server 2012 R2 and earlier versions.
### HTTP Endpoints
windows_exporter provides the following HTTP endpoints:
* `/metrics`: Exposes metrics in the [Prometheus text format](https://prometheus.io/docs/instrumenting/exposition_formats/).
* `/health`: Returns 200 OK when the exporter is running.
* `/debug/pprof/`: Exposes the [pprof](https://golang.org/pkg/net/http/pprof/) endpoints. Only available if `--debug.enabled` is set.
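A quick sanity check against a running exporter (a minimal sketch, assuming the default listen port 9182):
```powershell
# should return HTTP 200 when the exporter is up
Invoke-WebRequest -UseBasicParsing http://localhost:9182/health
```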
### Using [defaults] with `--collectors.enabled` argument
Using `[defaults]` with the `--collectors.enabled` argument expands the placeholder to all collectors enabled by default:
```powershell
.\windows_exporter.exe --collectors.enabled "[defaults],process,container"
```
This enables the additional process and container collectors on top of the defaults.
### Using a configuration file
YAML configuration files can be specified with the `--config.file` flag, e.g. `.\windows_exporter.exe --config.file=config.yml`. If you are using an absolute path, make sure to quote it, e.g. `.\windows_exporter.exe --config.file="C:\Program Files\windows_exporter\config.yml"`
```yaml
collectors:
enabled: cpu,net,service
collector:
service:
include: windows_exporter
log:
level: warn
```
An example configuration file can be found [here](docs/example_config.yml).
#### Configuration file notes
Configuration file values can be mixed with CLI flags, e.g. running
`.\windows_exporter.exe --collectors.enabled=cpu`
together with a configuration file containing:
```yaml
log:
level: debug
```
CLI flags take priority over values specified in the configuration file.
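A minimal sketch of that precedence (hypothetical file name; assumes the standard `--log.level` flag):
```powershell
# config.yml sets "level: debug" as above, but the CLI flag wins,
# so the exporter logs at level warn
.\windows_exporter.exe --config.file=config.yml --log.level=warn
```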
## License
Under [MIT](LICENSE)
[web_config]: https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md

View File

@@ -1,6 +0,0 @@
# Reporting a security issue
The Prometheus security policy, including how to report vulnerabilities, can be
found here:
<https://prometheus.io/docs/operating/security/>

78
appveyor.yml Normal file
View File

@@ -0,0 +1,78 @@
version: "{build}"
os: Visual Studio 2017
build: off
stack: go 1.13
environment:
GOPATH: c:\gopath
GO111MODULE: on
clone_folder: c:\gopath\src\github.com\martinlindhe\wmi_exporter
install:
- mkdir %GOPATH%\bin
- set PATH=%GOPATH%\bin;%PATH%
- set PATH=%PATH%;C:\mingw-w64\x86_64-7.2.0-posix-seh-rt_v5-rev1\mingw64\bin
- choco install gitversion.portable make -y
- ps: |
appveyor DownloadFile https://github.com/golangci/golangci-lint/releases/download/v1.21.0/golangci-lint-1.21.0-windows-amd64.zip
Expand-Archive golangci-lint-1.21.0-windows-amd64.zip
Move-Item golangci-lint-1.21.0-windows-amd64\golangci-lint-1.21.0-windows-amd64\golangci-lint.exe $env:GOPATH\bin\golangci-lint.exe
- ps: |
$env:GO111MODULE="off"
go get -u github.com/prometheus/promu
$env:GO111MODULE="on"
test_script:
- make test
after_test:
- make lint
build_script:
- ps: |
# go mod download (or, if we don't call it, go build) will write every dependent package name to
# stderr, which will be interpreted as an error and abort the build if ErrorActionPreference is Stop,
# so we need to run it before setting the preference.
go mod download
$ErrorActionPreference = "Stop"
gitversion /output json /showvariable FullSemVer | Set-Content VERSION -PassThru
$Version = Get-Content VERSION
make crossbuild
# GH requires all files to have different names, so add version/arch to differentiate
foreach($Arch in "amd64","386") {
Rename-Item output\$Arch\wmi_exporter.exe -NewName wmi_exporter-$Version-$Arch.exe
}
after_build:
- ps: |
# Build installer packages only on tagged releases
if($env:APPVEYOR_REPO_TAG -ne "True") {
return
}
$ErrorActionPreference = "Stop"
$BuildVersion = Get-Content VERSION
# The MSI version is not semver compliant, so just take the numerical parts
$MSIVersion = $env:APPVEYOR_REPO_TAG_NAME -replace '^v?([0-9\.]+).*$','$1'
foreach($Arch in "amd64","386") {
Write-Verbose "Building wmi_exporter $MSIVersion msi for $Arch"
.\installer\build.ps1 -PathToExecutable .\output\$Arch\wmi_exporter-$BuildVersion-$Arch.exe -Version $MSIVersion -Arch "$Arch"
Move-Item installer\Output\wmi_exporter-$MSIVersion-$Arch.msi output\$Arch\
}
- promu checksum output\
artifacts:
- name: Artifacts
path: output\**\*
deploy:
- provider: GitHub
description: WMI Exporter version $(appveyor_build_version)
artifact: Artifacts
auth_token:
secure: 'CrXWeTf7qONUOEki5olFfGEUPMLDeHj61koDXV3OVEaLgtACmnVHsKUub9POflda'
draft: false
prerelease: false
on:
appveyor_repo_tag: true

View File

@@ -1,215 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
//
// Copyright The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build windows
package main
import (
"errors"
"fmt"
"os"
"strings"
"unsafe"
"golang.org/x/sys/windows"
"golang.org/x/sys/windows/svc"
"golang.org/x/sys/windows/svc/eventlog"
)
const serviceName = "windows_exporter"
//nolint:gochecknoglobals
var (
// exitCodeCh is a channel to send an exit code from the main function to the service manager.
// Additionally, if there is an error in the IsService var declaration,
// the exit code is sent to the service manager as well.
exitCodeCh = make(chan int, 1)
// stopCh is a channel to send a signal to the service manager that the service is stopping.
stopCh = make(chan struct{})
// serviceManagerFinishedCh is a channel to send a signal to the main function that the service manager has stopped the service.
serviceManagerFinishedCh = make(chan struct{}, 1)
)
// IsService variable declaration allows initiating time-sensitive components like registering the Windows service
// as early as possible in the startup process.
// init functions are called in the order they are declared, so this package should be imported first.
//
// Ref: https://github.com/prometheus-community/windows_exporter/issues/551#issuecomment-1220774835
//
// Declare imports on this package should be avoided where possible.
// var declaration run before init function, so it guarantees that windows_exporter respond to service manager early
// and avoid timeout.
// The order of the var declaration and init functions depends on the filename as well. The filename should be 0_service.go
// Ref: https://medium.com/@markbates/go-init-order-dafa89fcef22
//
//nolint:gochecknoglobals
var IsService = func() bool {
var err error
isService, err := isWindowsService()
if err != nil {
logToFile(fmt.Sprintf("failed to detect service: %v", err))
return false
}
if !isService {
return false
}
defer func() {
go func() {
err := svc.Run(serviceName, &windowsExporterService{})
if err != nil {
// https://github.com/open-telemetry/opentelemetry-collector/pull/9042
if !errors.Is(err, windows.ERROR_FAILED_SERVICE_CONTROLLER_CONNECT) {
if logErr := logToEventToLog(windows.EVENTLOG_ERROR_TYPE, fmt.Sprintf("failed to start service: %v", err)); logErr != nil {
logToFile(fmt.Sprintf("failed to start service: %v", err))
}
}
}
serviceManagerFinishedCh <- struct{}{}
}()
}()
if err := logToEventToLog(windows.EVENTLOG_INFORMATION_TYPE, "attempting to start exporter service"); err != nil {
logToFile(fmt.Sprintf("failed sent log to event log: %v", err))
exitCodeCh <- 2
}
return true
}()
type windowsExporterService struct{}
// Execute is the entry point for the Windows service manager.
func (s *windowsExporterService) Execute(_ []string, r <-chan svc.ChangeRequest, changes chan<- svc.Status) (bool, uint32) {
changes <- svc.Status{State: svc.StartPending}
// Send a signal to the main function that the service is running.
changes <- svc.Status{State: svc.Running, Accepts: svc.AcceptStop | svc.AcceptShutdown}
for {
select {
case exitCodeCh := <-exitCodeCh:
// Stop the service if an exit code from the main function is received.
changes <- svc.Status{State: svc.StopPending}
return true, uint32(exitCodeCh)
case c := <-r:
// Handle the service control request.
switch c.Cmd {
case svc.Interrogate:
changes <- c.CurrentStatus
case svc.Stop, svc.Shutdown:
// Stop the service if a stop or shutdown request is received.
_ = logToEventToLog(windows.EVENTLOG_INFORMATION_TYPE, "service stop received")
changes <- svc.Status{State: svc.StopPending}
// Send a signal to the main function to stop the service.
stopCh <- struct{}{}
// Wait for the main function to stop the service.
return false, uint32(<-exitCodeCh)
default:
_ = logToEventToLog(windows.EVENTLOG_ERROR_TYPE, fmt.Sprintf("unexpected control request #%d", c))
}
}
}
}
// logToEventToLog logs a message to the Windows event log.
func logToEventToLog(eType uint16, msg string) error {
eventLog, err := eventlog.Open(serviceName)
if err != nil {
return fmt.Errorf("failed to open event log: %w", err)
}
defer func(eventLog *eventlog.Log) {
_ = eventLog.Close()
}(eventLog)
switch eType {
case windows.EVENTLOG_ERROR_TYPE:
err = eventLog.Error(102, msg)
case windows.EVENTLOG_WARNING_TYPE:
err = eventLog.Warning(101, msg)
case windows.EVENTLOG_INFORMATION_TYPE:
err = eventLog.Info(100, msg)
}
if err != nil {
return fmt.Errorf("error report event: %w", err)
}
return nil
}
func logToFile(msg string) {
if file, err := os.CreateTemp("", "windows_exporter.service.error.log"); err == nil {
_, _ = file.WriteString(msg)
_ = file.Close()
}
}
// isWindowsService is a clone of "golang.org/x/sys/windows/svc:IsWindowsService", but with a fix
// for Windows containers.
// Go cloned the .NET implementation of this function, which has since
// been patched to support Windows containers, which don't use Session ID 0 for services.
// https://github.com/dotnet/runtime/pull/74188
// This function can be replaced with go's once go brings in the fix.
//
// Copyright 2023-present Datadog, Inc.
// Licensed under the Apache License, Version 2.0 (the "License");
// https://github.com/DataDog/datadog-agent/blob/46740e82ef40a04c4be545ed8c16a4b0d1f046cf/pkg/util/winutil/servicemain/servicemain.go#L128
func isWindowsService() (bool, error) {
var currentProcess windows.PROCESS_BASIC_INFORMATION
infoSize := uint32(unsafe.Sizeof(currentProcess))
err := windows.NtQueryInformationProcess(windows.CurrentProcess(), windows.ProcessBasicInformation, unsafe.Pointer(&currentProcess), infoSize, &infoSize)
if err != nil {
return false, err
}
var parentProcess *windows.SYSTEM_PROCESS_INFORMATION
for infoSize = uint32((unsafe.Sizeof(*parentProcess) + unsafe.Sizeof(uintptr(0))) * 1024); ; {
parentProcess = (*windows.SYSTEM_PROCESS_INFORMATION)(unsafe.Pointer(&make([]byte, infoSize)[0]))
err = windows.NtQuerySystemInformation(windows.SystemProcessInformation, unsafe.Pointer(parentProcess), infoSize, &infoSize)
if err == nil {
break
} else if !errors.Is(err, windows.STATUS_INFO_LENGTH_MISMATCH) {
return false, err
}
}
for ; ; parentProcess = (*windows.SYSTEM_PROCESS_INFORMATION)(unsafe.Pointer(uintptr(unsafe.Pointer(parentProcess)) + uintptr(parentProcess.NextEntryOffset))) {
if parentProcess.UniqueProcessID == currentProcess.InheritedFromUniqueProcessId {
return strings.EqualFold("services.exe", parentProcess.ImageName.String()), nil
}
if parentProcess.NextEntryOffset == 0 {
break
}
}
return false, nil
}

View File

@@ -1,23 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
//
// Copyright The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
The main package for the windows_exporter executable.
usage: windows_exporter [<flags>]
A metrics collector for Windows.
*/
package main

View File

@@ -1,329 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
//
// Copyright The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build windows
//go:generate go run github.com/tc-hib/go-winres@v0.3.3 make --product-version=git-tag --file-version=git-tag --arch=amd64,arm64
package main
import (
"context"
"errors"
"fmt"
"log/slog"
"net/http"
"net/http/pprof"
"os"
"os/signal"
"os/user"
"runtime"
"runtime/debug"
"slices"
"strings"
"time"
"github.com/alecthomas/kingpin/v2"
"github.com/prometheus-community/windows_exporter/internal/config"
"github.com/prometheus-community/windows_exporter/internal/httphandler"
"github.com/prometheus-community/windows_exporter/internal/log"
"github.com/prometheus-community/windows_exporter/internal/log/flag"
"github.com/prometheus-community/windows_exporter/internal/utils"
"github.com/prometheus-community/windows_exporter/pkg/collector"
"github.com/prometheus/common/version"
"github.com/prometheus/exporter-toolkit/web"
webflag "github.com/prometheus/exporter-toolkit/web/kingpinflag"
"golang.org/x/sys/windows"
)
func main() {
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, os.Kill)
exitCode := run(ctx, os.Args[1:])
stop()
// If we are running as a service, we need to signal the service control manager that we are done.
if !IsService {
os.Exit(exitCode)
}
exitCodeCh <- exitCode
// Wait for the service control manager to signal that we are done.
<-serviceManagerFinishedCh
}
func run(ctx context.Context, args []string) int {
startTime := time.Now()
app := kingpin.New("windows_exporter", "A metrics collector for Windows.")
var (
configFile = app.Flag(
"config.file",
"YAML configuration file to use. Values set in this file will be overridden by CLI flags.",
).String()
webConfig = webflag.AddFlags(app, ":9182")
metricsPath = app.Flag(
"telemetry.path",
"URL path for surfacing collected metrics.",
).Default("/metrics").String()
disableExporterMetrics = app.Flag(
"web.disable-exporter-metrics",
"Exclude metrics about the exporter itself (promhttp_*, process_*, go_*).",
).Bool()
enabledCollectors = app.Flag(
"collectors.enabled",
"Comma-separated list of collectors to use. Use '[defaults]' as a placeholder for all the collectors enabled by default.").
Default(collector.DefaultCollectors).String()
disabledCollectors = app.Flag(
"collectors.disabled",
"Comma-separated list of collectors to exclude. Can be used to disable collector from the defaults.").
Default("").String()
timeoutMargin = app.Flag(
"scrape.timeout-margin",
"Seconds to subtract from the timeout allowed by the client. Tune to allow for overhead or high loads.",
).Default("0.5").Float64()
debugEnabled = app.Flag(
"debug.enabled",
"If true, windows_exporter will expose debug endpoints under /debug/pprof.",
).Default("false").Bool()
processPriority = app.Flag(
"process.priority",
"Priority of the exporter process. Higher priorities may improve exporter responsiveness during periods of system load. Can be one of [\"realtime\", \"high\", \"abovenormal\", \"normal\", \"belownormal\", \"low\"]",
).Default("normal").String()
memoryLimit = app.Flag(
"process.memory-limit",
"Limit memory usage in bytes. This is a soft-limit and not guaranteed. 0 means no limit. Read more at https://pkg.go.dev/runtime/debug#SetMemoryLimit .",
).Default("200000000").Int64()
)
logFile := &log.AllowedFile{}
_ = logFile.Set("stdout")
if IsService {
_ = logFile.Set("eventlog")
}
logConfig := &log.Config{File: logFile}
flag.AddFlags(app, logConfig)
app.Version(version.Print("windows_exporter"))
app.HelpFlag.Short('h')
// Initialize collectors before loading and parsing CLI arguments
collectors := collector.NewWithFlags(app)
if err := config.Parse(app, args); err != nil {
//nolint:sloglint // we do not have an logger yet
slog.LogAttrs(ctx, slog.LevelError, "Failed to load configuration",
slog.Any("err", err),
)
return 1
}
debug.SetMemoryLimit(*memoryLimit)
logger, err := log.New(logConfig)
if err != nil {
logger.LogAttrs(ctx, slog.LevelError, "failed to create logger",
slog.Any("err", err),
)
return 1
}
logger.LogAttrs(ctx, slog.LevelDebug, "logging has Started")
if configFile != nil && *configFile != "" {
logger.LogAttrs(ctx, slog.LevelInfo, "using configuration file: "+*configFile)
}
if err = setPriorityWindows(ctx, logger, os.Getpid(), *processPriority); err != nil {
logger.LogAttrs(ctx, slog.LevelError, "failed to set process priority",
slog.Any("err", err),
)
return 1
}
enabledCollectorList := expandEnabledCollectors(*enabledCollectors)
if err := collectors.Enable(enabledCollectorList); err != nil {
logger.LogAttrs(ctx, slog.LevelError, "couldn't enable collectors",
slog.Any("err", err),
)
return 1
}
if *disabledCollectors != "" {
collectors.Disable(slices.Compact(strings.Split(*disabledCollectors, ",")))
}
// Initialize collectors before loading
if err = collectors.Build(ctx, logger); err != nil {
for _, err := range utils.SplitError(err) {
logger.LogAttrs(ctx, slog.LevelError, "couldn't initialize collector",
slog.Any("err", err),
)
return 1
}
}
logCurrentUser(ctx, logger)
logger.InfoContext(ctx, "Enabled collectors: "+strings.Join(enabledCollectorList, ", "))
mux := http.NewServeMux()
mux.Handle("GET /health", httphandler.NewHealthHandler())
mux.Handle("GET /version", httphandler.NewVersionHandler())
mux.Handle("GET "+*metricsPath, httphandler.New(logger, collectors, &httphandler.Options{
DisableExporterMetrics: *disableExporterMetrics,
TimeoutMargin: *timeoutMargin,
}))
if *debugEnabled {
mux.HandleFunc("GET /debug/pprof/", pprof.Index)
mux.HandleFunc("GET /debug/pprof/cmdline", pprof.Cmdline)
mux.HandleFunc("GET /debug/pprof/profile", pprof.Profile)
mux.HandleFunc("GET /debug/pprof/symbol", pprof.Symbol)
mux.HandleFunc("GET /debug/pprof/trace", pprof.Trace)
}
logger.LogAttrs(ctx, slog.LevelInfo, fmt.Sprintf("starting windows_exporter in %s", time.Since(startTime)),
slog.String("version", version.Version),
slog.String("branch", version.Branch),
slog.String("revision", version.GetRevision()),
slog.String("goversion", version.GoVersion),
slog.String("builddate", version.BuildDate),
slog.Int("maxprocs", runtime.GOMAXPROCS(0)),
)
server := &http.Server{
ReadHeaderTimeout: 5 * time.Second,
IdleTimeout: 60 * time.Second,
ReadTimeout: 5 * time.Second,
WriteTimeout: 5 * time.Minute,
Handler: mux,
}
errCh := make(chan error, 1)
go func() {
if err := web.ListenAndServe(server, webConfig, logger); err != nil && !errors.Is(err, http.ErrServerClosed) {
errCh <- err
}
close(errCh)
}()
select {
case <-ctx.Done():
logger.LogAttrs(ctx, slog.LevelInfo, "Shutting down windows_exporter via kill signal")
case <-stopCh:
logger.LogAttrs(ctx, slog.LevelInfo, "Shutting down windows_exporter via service control")
case err := <-errCh:
if err != nil {
logger.LogAttrs(ctx, slog.LevelError, "Failed to start windows_exporter",
slog.Any("err", err),
)
return 1
}
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
//nolint:contextcheck // create a new context for server shutdown
if err = server.Shutdown(ctx); err != nil {
//nolint:contextcheck
logger.LogAttrs(ctx, slog.LevelError, "Failed to shutdown windows_exporter",
slog.Any("err", err),
)
} else {
//nolint:contextcheck
logger.LogAttrs(ctx, slog.LevelInfo, "windows_exporter has shut down")
}
return 0
}
func logCurrentUser(ctx context.Context, logger *slog.Logger) {
u, err := user.Current()
if err != nil {
logger.LogAttrs(ctx, slog.LevelWarn, "Unable to determine which user is running this exporter. More info: https://github.com/golang/go/issues/37348",
slog.Any("err", err),
)
return
}
logger.LogAttrs(ctx, slog.LevelInfo, "Running as "+u.Username)
if strings.Contains(u.Username, "ContainerAdministrator") || strings.Contains(u.Username, "ContainerUser") {
logger.LogAttrs(ctx, slog.LevelWarn, "Running as a preconfigured Windows Container user. This may mean you do not have Windows HostProcess containers configured correctly and some functionality will not work as expected.")
}
}
// setPriorityWindows sets the priority of the current process to the specified value.
func setPriorityWindows(ctx context.Context, logger *slog.Logger, pid int, priority string) error {
// Mapping of priority names to uint32 values required by windows.SetPriorityClass.
priorityStringToInt := map[string]uint32{
"realtime": windows.REALTIME_PRIORITY_CLASS,
"high": windows.HIGH_PRIORITY_CLASS,
"abovenormal": windows.ABOVE_NORMAL_PRIORITY_CLASS,
"normal": windows.NORMAL_PRIORITY_CLASS,
"belownormal": windows.BELOW_NORMAL_PRIORITY_CLASS,
"low": windows.IDLE_PRIORITY_CLASS,
}
winPriority, ok := priorityStringToInt[priority]
// Only set process priority if a non-default and valid value has been set
if !ok || winPriority == windows.NORMAL_PRIORITY_CLASS {
return nil
}
logger.LogAttrs(ctx, slog.LevelDebug, "setting process priority to "+priority)
// https://learn.microsoft.com/en-us/windows/win32/procthread/process-security-and-access-rights
handle, err := windows.OpenProcess(
windows.STANDARD_RIGHTS_REQUIRED|windows.SYNCHRONIZE|windows.SPECIFIC_RIGHTS_ALL,
false, uint32(pid),
)
if err != nil {
return fmt.Errorf("failed to open own process: %w", err)
}
if err = windows.SetPriorityClass(handle, winPriority); err != nil {
return fmt.Errorf("failed to set priority class: %w", err)
}
if err = windows.CloseHandle(handle); err != nil {
return fmt.Errorf("failed to close handle: %w", err)
}
return nil
}
func expandEnabledCollectors(enabled string) []string {
expanded := strings.ReplaceAll(enabled, "[defaults]", collector.DefaultCollectors)
return slices.Compact(strings.Split(expanded, ","))
}

View File

@@ -1,199 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
//
// Copyright The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build windows
package main
import (
"context"
"errors"
"fmt"
"io"
"net"
"net/http"
"net/url"
"os"
"strings"
"testing"
"time"
"github.com/stretchr/testify/require"
"golang.org/x/sys/windows"
)
//nolint:tparallel
func TestRun(t *testing.T) {
t.Parallel()
for _, tc := range []struct {
name string
args []string
config string
metricsEndpoint string
exitCode int
}{
{
name: "default",
args: []string{},
metricsEndpoint: "http://127.0.0.1:9182/metrics",
},
{
name: "web.listen-address",
args: []string{"--web.listen-address=127.0.0.1:8080"},
metricsEndpoint: "http://127.0.0.1:8080/metrics",
},
{
name: "web.listen-address",
args: []string{"--web.listen-address=127.0.0.1:8081", "--web.listen-address=[::1]:8081"},
metricsEndpoint: "http://[::1]:8081/metrics",
},
{
name: "config",
args: []string{"--config.file=config.yaml"},
config: `{"web":{"listen-address":"127.0.0.1:8082"}}`,
metricsEndpoint: "http://127.0.0.1:8082/metrics",
},
{
name: "web.listen-address with config",
args: []string{"--config.file=config.yaml", "--web.listen-address=127.0.0.1:8084"},
config: `{"web":{"listen-address":"127.0.0.1:8083"}}`,
metricsEndpoint: "http://127.0.0.1:8084/metrics",
},
} {
t.Run(tc.name, func(t *testing.T) {
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
if tc.config != "" {
// Create a temporary config file.
tmpfile, err := os.CreateTemp(t.TempDir(), "config-*.yaml")
require.NoError(t, err)
t.Cleanup(func() {
require.NoError(t, tmpfile.Close())
})
_, err = tmpfile.WriteString(tc.config)
require.NoError(t, err)
for i, arg := range tc.args {
tc.args[i] = strings.ReplaceAll(arg, "config.yaml", tmpfile.Name())
}
}
exitCodeCh := make(chan int)
var stdout string
go func() {
stdout = captureOutput(t, func() {
// Simulate the service control manager signaling that we are done.
exitCodeCh <- run(ctx, tc.args)
})
}()
t.Cleanup(func() {
select {
case exitCode := <-exitCodeCh:
require.Equal(t, tc.exitCode, exitCode)
case <-time.After(2 * time.Second):
t.Fatalf("timed out waiting for exit code, want %d", tc.exitCode)
}
})
if tc.exitCode != 0 {
return
}
uri, err := url.Parse(tc.metricsEndpoint)
require.NoError(t, err)
err = waitUntilListening(t, "tcp", uri.Host)
require.NoError(t, err, "LOGS:\n%s", stdout)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, tc.metricsEndpoint, nil)
require.NoError(t, err)
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err, "LOGS:\n%s", stdout)
require.Equal(t, http.StatusOK, resp.StatusCode)
body, err := io.ReadAll(resp.Body)
require.NoError(t, err)
err = resp.Body.Close()
require.NoError(t, err)
require.NotEmpty(t, body)
require.Contains(t, string(body), "# HELP windows_exporter_build_info")
cancel()
})
}
}
func captureOutput(tb testing.TB, f func()) string {
tb.Helper()
orig := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w
f()
os.Stdout = orig
_ = w.Close()
out, _ := io.ReadAll(r)
return string(out)
}
func waitUntilListening(tb testing.TB, network, address string) error {
tb.Helper()
var (
conn net.Conn
err error
)
dialer := &net.Dialer{Timeout: 100 * time.Millisecond}
for range 20 {
conn, err = dialer.DialContext(tb.Context(), network, address)
if err == nil {
_ = conn.Close()
return nil
}
if errors.Is(err, windows.Errno(10061)) {
time.Sleep(50 * time.Millisecond)
continue
}
break
}
var winErr windows.Errno
if errors.As(err, &winErr) {
return fmt.Errorf("listener not listening: %w (#%d)", winErr, uint32(winErr))
}
return fmt.Errorf("listener not listening: %w", err)
}

Binary file not shown (image, 11 KiB)

View File

@@ -1,56 +0,0 @@
{
"RT_GROUP_ICON": {
"APP": {
"0000": [
"icon.png"
]
}
},
"RT_MANIFEST": {
"#1": {
"0409": {
"description": "A Prometheus exporter for Windows machines.",
"minimum-os": "win7",
"execution-level": "as invoker",
"ui-access": false,
"auto-elevate": false,
"dpi-awareness": "system",
"disable-theming": false,
"disable-window-filtering": false,
"high-resolution-scrolling-aware": false,
"ultra-high-resolution-scrolling-aware": false,
"long-path-aware": false,
"printer-driver-isolation": false,
"gdi-scaling": false,
"segment-heap": false,
"use-common-controls-v6": false
}
}
},
"RT_VERSION": {
"#1": {
"0000": {
"fixed": {
"file_version": "0.0.0.0",
"product_version": "0.0.0.0"
},
"info": {
"0409": {
"Comments": "",
"CompanyName": "Prometheus Community",
"FileDescription": "A Prometheus exporter for Windows machines.",
"FileVersion": "",
"InternalName": "windows_exporter",
"LegalCopyright": "",
"LegalTrademarks": "",
"OriginalFilename": "",
"PrivateBuild": "",
"ProductName": "windows_exporter",
"ProductVersion": "",
"SpecialBuild": ""
}
}
}
}
}
}

1314
collector/ad.go Normal file

File diff suppressed because it is too large

188
collector/adfs.go Normal file
View File

@@ -0,0 +1,188 @@
// +build windows
package collector
import (
"errors"
"github.com/prometheus/client_golang/prometheus"
)
func init() {
registerCollector("adfs", newADFSCollector)
}
type adfsCollector struct {
adLoginConnectionFailures *prometheus.Desc
certificateAuthentications *prometheus.Desc
deviceAuthentications *prometheus.Desc
extranetAccountLockouts *prometheus.Desc
federatedAuthentications *prometheus.Desc
passportAuthentications *prometheus.Desc
passiveRequests *prometheus.Desc
passwordChangeFailed *prometheus.Desc
passwordChangeSucceeded *prometheus.Desc
tokenRequests *prometheus.Desc
windowsIntegratedAuthentications *prometheus.Desc
}
// newADFSCollector constructs a new adfsCollector
func newADFSCollector() (Collector, error) {
const subsystem = "adfs"
return &adfsCollector{
adLoginConnectionFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "ad_login_connection_failures"),
"Total number of connection failures to an Active Directory domain controller",
nil,
nil,
),
certificateAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "certificate_authentications"),
"Total number of User Certificate authentications",
nil,
nil,
),
deviceAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "device_authentications"),
"Total number of Device authentications",
nil,
nil,
),
extranetAccountLockouts: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "extranet_account_lockouts"),
"Total number of Extranet Account Lockouts",
nil,
nil,
),
federatedAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "federated_authentications"),
"Total number of authentications from a federated source",
nil,
nil,
),
passportAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "passport_authentications"),
"Total number of Microsoft Passport SSO authentications",
nil,
nil,
),
passiveRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "passive_requests"),
"Total number of passive (browser-based) requests",
nil,
nil,
),
passwordChangeFailed: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "password_change_failed"),
"Total number of failed password changes",
nil,
nil,
),
passwordChangeSucceeded: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "password_change_succeeded"),
"Total number of successful password changes",
nil,
nil,
),
tokenRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "token_requests"),
"Total number of token requests",
nil,
nil,
),
windowsIntegratedAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "windows_integrated_authentications"),
"Total number of Windows integrated authentications (Kerberos/NTLM)",
nil,
nil,
),
}, nil
}
type perflibADFS struct {
AdLoginConnectionFailures float64 `perflib:"AD login Connection Failures"`
CertificateAuthentications float64 `perflib:"Certificate Authentications"`
DeviceAuthentications float64 `perflib:"Device Authentications"`
ExtranetAccountLockouts float64 `perflib:"Extranet Account Lockouts"`
FederatedAuthentications float64 `perflib:"Federated Authentications"`
PassportAuthentications float64 `perflib:"Microsoft Passport Authentications"`
PassiveRequests float64 `perflib:"Passive Requests"`
PasswordChangeFailed float64 `perflib:"Password Change Failed Requests"`
PasswordChangeSucceeded float64 `perflib:"Password Change Successful Requests"`
TokenRequests float64 `perflib:"Token Requests"`
WindowsIntegratedAuthentications float64 `perflib:"Windows Integrated Authentications"`
}
func (c *adfsCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
var adfsData []perflibADFS
err := unmarshalObject(ctx.perfObjects["AD FS"], &adfsData)
if err != nil {
return err
}
if len(adfsData) == 0 {
return errors.New("perflib query for AD FS returned empty result set")
}
ch <- prometheus.MustNewConstMetric(
c.adLoginConnectionFailures,
prometheus.CounterValue,
adfsData[0].AdLoginConnectionFailures,
)
ch <- prometheus.MustNewConstMetric(
c.certificateAuthentications,
prometheus.CounterValue,
adfsData[0].CertificateAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.deviceAuthentications,
prometheus.CounterValue,
adfsData[0].DeviceAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.extranetAccountLockouts,
prometheus.CounterValue,
adfsData[0].ExtranetAccountLockouts,
)
ch <- prometheus.MustNewConstMetric(
c.federatedAuthentications,
prometheus.CounterValue,
adfsData[0].FederatedAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.passportAuthentications,
prometheus.CounterValue,
adfsData[0].PassportAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.passiveRequests,
prometheus.CounterValue,
adfsData[0].PassiveRequests,
)
ch <- prometheus.MustNewConstMetric(
c.passwordChangeFailed,
prometheus.CounterValue,
adfsData[0].PasswordChangeFailed,
)
ch <- prometheus.MustNewConstMetric(
c.passwordChangeSucceeded,
prometheus.CounterValue,
adfsData[0].PasswordChangeSucceeded,
)
ch <- prometheus.MustNewConstMetric(
c.tokenRequests,
prometheus.CounterValue,
adfsData[0].TokenRequests,
)
ch <- prometheus.MustNewConstMetric(
c.windowsIntegratedAuthentications,
prometheus.CounterValue,
adfsData[0].WindowsIntegratedAuthentications,
)
return nil
}

106
collector/collector.go Normal file
View File

@@ -0,0 +1,106 @@
package collector
import (
"strconv"
"strings"
"github.com/leoluk/perflib_exporter/perflib"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"golang.org/x/sys/windows/registry"
)
// ...
const (
// TODO: Make package-local
Namespace = "wmi"
// Conversion factors
ticksToSecondsScaleFactor = 1 / 1e7 // a performance-counter tick is 100 ns
windowsEpoch = 116444736000000000 // 100-ns intervals between 1601-01-01 (FILETIME epoch) and 1970-01-01 (Unix epoch)
)
// getWindowsVersion reads the version number of the OS from the Registry
// See https://docs.microsoft.com/en-us/windows/desktop/sysinfo/operating-system-version
func getWindowsVersion() float64 {
k, err := registry.OpenKey(registry.LOCAL_MACHINE, `SOFTWARE\Microsoft\Windows NT\CurrentVersion`, registry.QUERY_VALUE)
if err != nil {
log.Warn("Couldn't open registry", err)
return 0
}
defer func() {
err = k.Close()
if err != nil {
log.Warnf("Failed to close registry key: %v", err)
}
}()
currentv, _, err := k.GetStringValue("CurrentVersion")
if err != nil {
log.Warn("Couldn't open registry to determine current Windows version:", err)
return 0
}
currentv_flt, err := strconv.ParseFloat(currentv, 64)
if err != nil {
log.Warn("Couldn't parse Windows version number:", err)
return 0
}
log.Debugf("Detected Windows version %f\n", currentv_flt)
return currentv_flt
}
type collectorBuilder func() (Collector, error)
var (
builders = make(map[string]collectorBuilder)
perfCounterDependencies = make(map[string]string)
)
func registerCollector(name string, builder collectorBuilder, perfCounterNames ...string) {
builders[name] = builder
perfIndicies := make([]string, 0, len(perfCounterNames))
for _, cn := range perfCounterNames {
perfIndicies = append(perfIndicies, MapCounterToIndex(cn))
}
perfCounterDependencies[name] = strings.Join(perfIndicies, " ")
}
func Available() []string {
cs := make([]string, 0, len(builders))
for c := range builders {
cs = append(cs, c)
}
return cs
}
func Build(collector string) (Collector, error) {
return builders[collector]()
}
func getPerfQuery(collectors []string) string {
parts := make([]string, 0, len(collectors))
for _, c := range collectors {
if p := perfCounterDependencies[c]; p != "" {
parts = append(parts, p)
}
}
return strings.Join(parts, " ")
}
// Collector is the interface a collector has to implement.
type Collector interface {
// Get new metrics and expose them via prometheus registry.
Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (err error)
}
type ScrapeContext struct {
perfObjects map[string]*perflib.PerfObject
}
// PrepareScrapeContext creates a ScrapeContext to be used during a single scrape
func PrepareScrapeContext(collectors []string) (*ScrapeContext, error) {
q := getPerfQuery(collectors) // TODO: Memoize
objs, err := getPerflibSnapshot(q)
if err != nil {
return nil, err
}
return &ScrapeContext{objs}, nil
}

282
collector/container.go Normal file
View File

@@ -0,0 +1,282 @@
// +build windows
package collector
import (
"github.com/Microsoft/hcsshim"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("container", NewContainerMetricsCollector)
}
// A ContainerMetricsCollector is a Prometheus collector for containers metrics
type ContainerMetricsCollector struct {
// Presence
ContainerAvailable *prometheus.Desc
// Number of containers
ContainersCount *prometheus.Desc
// memory
UsageCommitBytes *prometheus.Desc
UsageCommitPeakBytes *prometheus.Desc
UsagePrivateWorkingSetBytes *prometheus.Desc
// CPU
RuntimeTotal *prometheus.Desc
RuntimeUser *prometheus.Desc
RuntimeKernel *prometheus.Desc
// Network
BytesReceived *prometheus.Desc
BytesSent *prometheus.Desc
PacketsReceived *prometheus.Desc
PacketsSent *prometheus.Desc
DroppedPacketsIncoming *prometheus.Desc
DroppedPacketsOutgoing *prometheus.Desc
}
// NewContainerMetricsCollector constructs a new ContainerMetricsCollector
func NewContainerMetricsCollector() (Collector, error) {
const subsystem = "container"
return &ContainerMetricsCollector{
ContainerAvailable: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "available"),
"Available",
[]string{"container_id"},
nil,
),
ContainersCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "count"),
"Number of containers",
nil,
nil,
),
UsageCommitBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "memory_usage_commit_bytes"),
"Memory Usage Commit Bytes",
[]string{"container_id"},
nil,
),
UsageCommitPeakBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "memory_usage_commit_peak_bytes"),
"Memory Usage Commit Peak Bytes",
[]string{"container_id"},
nil,
),
UsagePrivateWorkingSetBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "memory_usage_private_working_set_bytes"),
"Memory Usage Private Working Set Bytes",
[]string{"container_id"},
nil,
),
RuntimeTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_usage_seconds_total"),
"Total Run time in Seconds",
[]string{"container_id"},
nil,
),
RuntimeUser: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_usage_seconds_usermode"),
"Run Time in User mode in Seconds",
[]string{"container_id"},
nil,
),
RuntimeKernel: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_usage_seconds_kernelmode"),
"Run time in Kernel mode in Seconds",
[]string{"container_id"},
nil,
),
BytesReceived: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "network_receive_bytes_total"),
"Bytes Received on Interface",
[]string{"container_id", "interface"},
nil,
),
BytesSent: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "network_transmit_bytes_total"),
"Bytes Sent on Interface",
[]string{"container_id", "interface"},
nil,
),
PacketsReceived: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "network_receive_packets_total"),
"Packets Received on Interface",
[]string{"container_id", "interface"},
nil,
),
PacketsSent: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "network_transmit_packets_total"),
"Packets Sent on Interface",
[]string{"container_id", "interface"},
nil,
),
DroppedPacketsIncoming: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "network_receive_packets_dropped_total"),
"Dropped Incoming Packets on Interface",
[]string{"container_id", "interface"},
nil,
),
DroppedPacketsOutgoing: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "network_transmit_packets_dropped_total"),
"Dropped Outgoing Packets on Interface",
[]string{"container_id", "interface"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *ContainerMetricsCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting ContainerMetricsCollector metrics:", desc, err)
return err
}
return nil
}
// containerClose closes the container resource
func containerClose(c hcsshim.Container) {
err := c.Close()
if err != nil {
log.Error(err)
}
}
func (c *ContainerMetricsCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
// Types Container is passed to get the containers compute systems only
containers, err := hcsshim.GetContainers(hcsshim.ComputeSystemQuery{Types: []string{"Container"}})
if err != nil {
log.Error("Err in Getting containers:", err)
return nil, err
}
count := len(containers)
ch <- prometheus.MustNewConstMetric(
c.ContainersCount,
prometheus.GaugeValue,
float64(count),
)
if count == 0 {
return nil, nil
}
for _, containerDetails := range containers {
containerId := containerDetails.ID
container, err := hcsshim.OpenContainer(containerId)
if container != nil {
defer containerClose(container)
}
if err != nil {
log.Error("err in opening container: ", containerId, err)
continue
}
cstats, err := container.Statistics()
if err != nil {
log.Error("err in fetching container Statistics: ", containerId, err)
continue
}
// HCS V1 is for docker runtime. Add the docker:// prefix on container_id
containerId = "docker://" + containerId
ch <- prometheus.MustNewConstMetric(
c.ContainerAvailable,
prometheus.CounterValue,
1,
containerId,
)
ch <- prometheus.MustNewConstMetric(
c.UsageCommitBytes,
prometheus.GaugeValue,
float64(cstats.Memory.UsageCommitBytes),
containerId,
)
ch <- prometheus.MustNewConstMetric(
c.UsageCommitPeakBytes,
prometheus.GaugeValue,
float64(cstats.Memory.UsageCommitPeakBytes),
containerId,
)
ch <- prometheus.MustNewConstMetric(
c.UsagePrivateWorkingSetBytes,
prometheus.GaugeValue,
float64(cstats.Memory.UsagePrivateWorkingSetBytes),
containerId,
)
ch <- prometheus.MustNewConstMetric(
c.RuntimeTotal,
prometheus.CounterValue,
float64(cstats.Processor.TotalRuntime100ns)*ticksToSecondsScaleFactor,
containerId,
)
ch <- prometheus.MustNewConstMetric(
c.RuntimeUser,
prometheus.CounterValue,
float64(cstats.Processor.RuntimeUser100ns)*ticksToSecondsScaleFactor,
containerId,
)
ch <- prometheus.MustNewConstMetric(
c.RuntimeKernel,
prometheus.CounterValue,
float64(cstats.Processor.RuntimeKernel100ns)*ticksToSecondsScaleFactor,
containerId,
)
if len(cstats.Network) == 0 {
log.Info("No Network Stats for container: ", containerId)
continue
}
networkStats := cstats.Network
for _, networkInterface := range networkStats {
ch <- prometheus.MustNewConstMetric(
c.BytesReceived,
prometheus.CounterValue,
float64(networkInterface.BytesReceived),
containerId, networkInterface.EndpointId,
)
ch <- prometheus.MustNewConstMetric(
c.BytesSent,
prometheus.CounterValue,
float64(networkInterface.BytesSent),
containerId, networkInterface.EndpointId,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsReceived,
prometheus.CounterValue,
float64(networkInterface.PacketsReceived),
containerId, networkInterface.EndpointId,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsSent,
prometheus.CounterValue,
float64(networkInterface.PacketsSent),
containerId, networkInterface.EndpointId,
)
ch <- prometheus.MustNewConstMetric(
c.DroppedPacketsIncoming,
prometheus.CounterValue,
float64(networkInterface.DroppedPacketsIncoming),
containerId, networkInterface.EndpointId,
)
ch <- prometheus.MustNewConstMetric(
c.DroppedPacketsOutgoing,
prometheus.CounterValue,
float64(networkInterface.DroppedPacketsOutgoing),
containerId, networkInterface.EndpointId,
)
break
}
}
return nil, nil
}

375
collector/cpu.go Normal file
View File

@@ -0,0 +1,375 @@
// +build windows
package collector
import (
"strings"
"github.com/prometheus/client_golang/prometheus"
)
func init() {
var deps string
// See below for 6.05 magic value
if getWindowsVersion() > 6.05 {
deps = "Processor Information"
} else {
deps = "Processor"
}
registerCollector("cpu", newCPUCollector, deps)
}
type cpuCollectorBasic struct {
CStateSecondsTotal *prometheus.Desc
TimeTotal *prometheus.Desc
InterruptsTotal *prometheus.Desc
DPCsTotal *prometheus.Desc
}
type cpuCollectorFull struct {
CStateSecondsTotal *prometheus.Desc
TimeTotal *prometheus.Desc
InterruptsTotal *prometheus.Desc
DPCsTotal *prometheus.Desc
ClockInterruptsTotal *prometheus.Desc
IdleBreakEventsTotal *prometheus.Desc
ParkingStatus *prometheus.Desc
ProcessorFrequencyMHz *prometheus.Desc
ProcessorMaxFrequencyMHz *prometheus.Desc
ProcessorPerformance *prometheus.Desc
}
// newCPUCollector constructs a new cpuCollector, appropriate for the running OS
func newCPUCollector() (Collector, error) {
const subsystem = "cpu"
version := getWindowsVersion()
// For Windows 2008 (version 6.0) or earlier we only have the "Processor"
// class. As of Windows 2008 R2 (version 6.1) the more detailed
// "Processor Information" set is available (although some of the counters
// are added in later versions, so we aren't guaranteed to get all of
// them).
// Value 6.05 was selected to split between Windows versions.
if version < 6.05 {
return &cpuCollectorBasic{
CStateSecondsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cstate_seconds_total"),
"Time spent in low-power idle state",
[]string{"core", "state"},
nil,
),
TimeTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "time_total"),
"Time that processor spent in different modes (idle, user, system, ...)",
[]string{"core", "mode"},
nil,
),
InterruptsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "interrupts_total"),
"Total number of received and serviced hardware interrupts",
[]string{"core"},
nil,
),
DPCsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "dpcs_total"),
"Total number of received and serviced deferred procedure calls (DPCs)",
[]string{"core"},
nil,
),
}, nil
}
return &cpuCollectorFull{
CStateSecondsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cstate_seconds_total"),
"Time spent in low-power idle state",
[]string{"core", "state"},
nil,
),
TimeTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "time_total"),
"Time that processor spent in different modes (idle, user, system, ...)",
[]string{"core", "mode"},
nil,
),
InterruptsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "interrupts_total"),
"Total number of received and serviced hardware interrupts",
[]string{"core"},
nil,
),
DPCsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "dpcs_total"),
"Total number of received and serviced deferred procedure calls (DPCs)",
[]string{"core"},
nil,
),
ClockInterruptsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "clock_interrupts_total"),
"Total number of received and serviced clock tick interrupts",
[]string{"core"},
nil,
),
IdleBreakEventsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "idle_break_events_total"),
"Total number of time processor was woken from idle",
[]string{"core"},
nil,
),
ParkingStatus: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "parking_status"),
"Parking Status represents whether a processor is parked or not",
[]string{"core"},
nil,
),
ProcessorFrequencyMHz: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "core_frequency_mhz"),
"Core frequency in megahertz",
[]string{"core"},
nil,
),
ProcessorPerformance: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "processor_performance"),
"Processor Performance is the average performance of the processor while it is executing instructions, as a percentage of the nominal performance of the processor. On some processors, Processor Performance may exceed 100%",
[]string{"core"},
nil,
),
}, nil
}
type perflibProcessor struct {
Name string
C1Transitions float64 `perflib:"C1 Transitions/sec"`
C2Transitions float64 `perflib:"C2 Transitions/sec"`
C3Transitions float64 `perflib:"C3 Transitions/sec"`
DPCRate float64 `perflib:"DPC Rate"`
DPCsQueued float64 `perflib:"DPCs Queued/sec"`
Interrupts float64 `perflib:"Interrupts/sec"`
PercentC1Time float64 `perflib:"% C1 Time"`
PercentC2Time float64 `perflib:"% C2 Time"`
PercentC3Time float64 `perflib:"% C3 Time"`
PercentDPCTime float64 `perflib:"% DPC Time"`
PercentIdleTime float64 `perflib:"% Idle Time"`
PercentInterruptTime float64 `perflib:"% Interrupt Time"`
PercentPrivilegedTime float64 `perflib:"% Privileged Time"`
PercentProcessorTime float64 `perflib:"% Processor Time"`
PercentUserTime float64 `perflib:"% User Time"`
}
func (c *cpuCollectorBasic) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
data := make([]perflibProcessor, 0)
err := unmarshalObject(ctx.perfObjects["Processor"], &data)
if err != nil {
return err
}
for _, cpu := range data {
if strings.Contains(strings.ToLower(cpu.Name), "_total") {
continue
}
core := cpu.Name
ch <- prometheus.MustNewConstMetric(
c.CStateSecondsTotal,
prometheus.CounterValue,
cpu.PercentC1Time,
core, "c1",
)
ch <- prometheus.MustNewConstMetric(
c.CStateSecondsTotal,
prometheus.CounterValue,
cpu.PercentC2Time,
core, "c2",
)
ch <- prometheus.MustNewConstMetric(
c.CStateSecondsTotal,
prometheus.CounterValue,
cpu.PercentC3Time,
core, "c3",
)
ch <- prometheus.MustNewConstMetric(
c.TimeTotal,
prometheus.CounterValue,
cpu.PercentIdleTime,
core, "idle",
)
ch <- prometheus.MustNewConstMetric(
c.TimeTotal,
prometheus.CounterValue,
cpu.PercentInterruptTime,
core, "interrupt",
)
ch <- prometheus.MustNewConstMetric(
c.TimeTotal,
prometheus.CounterValue,
cpu.PercentDPCTime,
core, "dpc",
)
ch <- prometheus.MustNewConstMetric(
c.TimeTotal,
prometheus.CounterValue,
cpu.PercentPrivilegedTime,
core, "privileged",
)
ch <- prometheus.MustNewConstMetric(
c.TimeTotal,
prometheus.CounterValue,
cpu.PercentUserTime,
core, "user",
)
ch <- prometheus.MustNewConstMetric(
c.InterruptsTotal,
prometheus.CounterValue,
cpu.Interrupts,
core,
)
ch <- prometheus.MustNewConstMetric(
c.DPCsTotal,
prometheus.CounterValue,
cpu.DPCsQueued,
core,
)
}
return nil
}
type perflibProcessorInformation struct {
Name string
C1TimeSeconds float64 `perflib:"% C1 Time"`
C2TimeSeconds float64 `perflib:"% C2 Time"`
C3TimeSeconds float64 `perflib:"% C3 Time"`
C1TransitionsTotal float64 `perflib:"C1 Transitions/sec"`
C2TransitionsTotal float64 `perflib:"C2 Transitions/sec"`
C3TransitionsTotal float64 `perflib:"C3 Transitions/sec"`
ClockInterruptsTotal float64 `perflib:"Clock Interrupts/sec"`
DPCsQueuedTotal float64 `perflib:"DPCs Queued/sec"`
DPCTimeSeconds float64 `perflib:"% DPC Time"`
IdleBreakEventsTotal float64 `perflib:"Idle Break Events/sec"`
IdleTimeSeconds float64 `perflib:"% Idle Time"`
InterruptsTotal float64 `perflib:"Interrupts/sec"`
InterruptTimeSeconds float64 `perflib:"% Interrupt Time"`
ParkingStatus float64 `perflib:"Parking Status"`
PerformanceLimitPercent float64 `perflib:"% Performance Limit"`
PriorityTimeSeconds float64 `perflib:"% Priority Time"`
PrivilegedTimeSeconds float64 `perflib:"% Privileged Time"`
PrivilegedUtilitySeconds float64 `perflib:"% Privileged Utility"`
ProcessorFrequencyMHz float64 `perflib:"Processor Frequency"`
ProcessorPerformance float64 `perflib:"% Processor Performance"`
ProcessorTimeSeconds float64 `perflib:"% Processor Time"`
ProcessorUtilityRate float64 `perflib:"% Processor Utility"`
UserTimeSeconds float64 `perflib:"% User Time"`
}
func (c *cpuCollectorFull) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
data := make([]perflibProcessorInformation, 0)
err := unmarshalObject(ctx.perfObjects["Processor Information"], &data)
if err != nil {
return err
}
for _, cpu := range data {
if strings.Contains(strings.ToLower(cpu.Name), "_total") {
continue
}
core := cpu.Name
ch <- prometheus.MustNewConstMetric(
c.CStateSecondsTotal,
prometheus.CounterValue,
cpu.C1TimeSeconds,
core, "c1",
)
ch <- prometheus.MustNewConstMetric(
c.CStateSecondsTotal,
prometheus.CounterValue,
cpu.C2TimeSeconds,
core, "c2",
)
ch <- prometheus.MustNewConstMetric(
c.CStateSecondsTotal,
prometheus.CounterValue,
cpu.C3TimeSeconds,
core, "c3",
)
ch <- prometheus.MustNewConstMetric(
c.TimeTotal,
prometheus.CounterValue,
cpu.IdleTimeSeconds,
core, "idle",
)
ch <- prometheus.MustNewConstMetric(
c.TimeTotal,
prometheus.CounterValue,
cpu.InterruptTimeSeconds,
core, "interrupt",
)
ch <- prometheus.MustNewConstMetric(
c.TimeTotal,
prometheus.CounterValue,
cpu.DPCTimeSeconds,
core, "dpc",
)
ch <- prometheus.MustNewConstMetric(
c.TimeTotal,
prometheus.CounterValue,
cpu.PrivilegedTimeSeconds,
core, "privileged",
)
ch <- prometheus.MustNewConstMetric(
c.TimeTotal,
prometheus.CounterValue,
cpu.UserTimeSeconds,
core, "user",
)
ch <- prometheus.MustNewConstMetric(
c.InterruptsTotal,
prometheus.CounterValue,
cpu.InterruptsTotal,
core,
)
ch <- prometheus.MustNewConstMetric(
c.DPCsTotal,
prometheus.CounterValue,
cpu.DPCsQueuedTotal,
core,
)
ch <- prometheus.MustNewConstMetric(
c.ClockInterruptsTotal,
prometheus.CounterValue,
cpu.ClockInterruptsTotal,
core,
)
ch <- prometheus.MustNewConstMetric(
c.IdleBreakEventsTotal,
prometheus.CounterValue,
cpu.IdleBreakEventsTotal,
core,
)
ch <- prometheus.MustNewConstMetric(
c.ParkingStatus,
prometheus.GaugeValue,
cpu.ParkingStatus,
core,
)
ch <- prometheus.MustNewConstMetric(
c.ProcessorFrequencyMHz,
prometheus.GaugeValue,
cpu.ProcessorFrequencyMHz,
core,
)
ch <- prometheus.MustNewConstMetric(
c.ProcessorPerformance,
prometheus.GaugeValue,
cpu.ProcessorPerformance,
core,
)
}
return nil
}
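Note: the perflib struct tags above bind fields to counter names verbatim, and unmarshalObject fills one struct per instance (one per logical processor plus the _Total aggregate, which the collector skips). A minimal sketch of that binding, within package collector and assuming the same unmarshalObject/perfObjects helpers; processorSketch and readProcessorSketch are illustrative names, not part of this changeset:
// Sketch only: the perflib tag names the counter exactly as it appears in
// the "Processor Information" object.
type processorSketch struct {
	Name            string
	IdleTimeSeconds float64 `perflib:"% Idle Time"`
	UserTimeSeconds float64 `perflib:"% User Time"`
}
func readProcessorSketch(ctx *ScrapeContext) ([]processorSketch, error) {
	var dst []processorSketch
	if err := unmarshalObject(ctx.perfObjects["Processor Information"], &dst); err != nil {
		return nil, err
	}
	return dst, nil
}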

collector/cs.go Normal file
@@ -0,0 +1,112 @@
// +build windows
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("cs", NewCSCollector)
}
// A CSCollector is a Prometheus collector for WMI metrics
type CSCollector struct {
PhysicalMemoryBytes *prometheus.Desc
LogicalProcessors *prometheus.Desc
Hostname *prometheus.Desc
}
// NewCSCollector ...
func NewCSCollector() (Collector, error) {
const subsystem = "cs"
return &CSCollector{
LogicalProcessors: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "logical_processors"),
"ComputerSystem.NumberOfLogicalProcessors",
nil,
nil,
),
PhysicalMemoryBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "physical_memory_bytes"),
"ComputerSystem.TotalPhysicalMemory",
nil,
nil,
),
Hostname: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "hostname"),
"Labeled system hostname information as provided by ComputerSystem.DNSHostName and ComputerSystem.Domain",
[]string{
"hostname",
"domain",
"fqdn"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *CSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting cs metrics:", desc, err)
return err
}
return nil
}
// Win32_ComputerSystem docs:
// - https://msdn.microsoft.com/en-us/library/aa394102
type Win32_ComputerSystem struct {
NumberOfLogicalProcessors uint32
TotalPhysicalMemory uint64
DNSHostname string
Domain string
Workgroup *string
}
func (c *CSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_ComputerSystem
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
ch <- prometheus.MustNewConstMetric(
c.LogicalProcessors,
prometheus.GaugeValue,
float64(dst[0].NumberOfLogicalProcessors),
)
ch <- prometheus.MustNewConstMetric(
c.PhysicalMemoryBytes,
prometheus.GaugeValue,
float64(dst[0].TotalPhysicalMemory),
)
var fqdn string
if dst[0].Workgroup == nil || dst[0].Domain != *dst[0].Workgroup {
fqdn = dst[0].DNSHostname + "." + dst[0].Domain
} else {
fqdn = dst[0].DNSHostname
}
ch <- prometheus.MustNewConstMetric(
c.Hostname,
prometheus.GaugeValue,
1.0,
dst[0].DNSHostname,
dst[0].Domain,
fqdn,
)
return nil, nil
}
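A short, self-contained illustration of the FQDN branch above; the host, domain and workgroup values are made up:
package main
import "fmt"
// fqdnFor mirrors the branch in CSCollector.collect: domain-joined machines
// get host.domain, workgroup members fall back to the bare hostname.
func fqdnFor(host, domain string, workgroup *string) string {
	if workgroup == nil || domain != *workgroup {
		return host + "." + domain
	}
	return host
}
func main() {
	wg := "WORKGROUP"
	fmt.Println(fqdnFor("web01", "corp.example.com", nil)) // web01.corp.example.com
	fmt.Println(fqdnFor("web01", "WORKGROUP", &wg))        // web01
}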

collector/dns.go Normal file
@@ -0,0 +1,498 @@
// +build windows
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("dns", NewDNSCollector)
}
// A DNSCollector is a Prometheus collector for WMI Win32_PerfRawData_DNS_DNS metrics
type DNSCollector struct {
ZoneTransferRequestsReceived *prometheus.Desc
ZoneTransferRequestsSent *prometheus.Desc
ZoneTransferResponsesReceived *prometheus.Desc
ZoneTransferSuccessReceived *prometheus.Desc
ZoneTransferSuccessSent *prometheus.Desc
ZoneTransferFailures *prometheus.Desc
MemoryUsedBytes *prometheus.Desc
DynamicUpdatesQueued *prometheus.Desc
DynamicUpdatesReceived *prometheus.Desc
DynamicUpdatesFailures *prometheus.Desc
NotifyReceived *prometheus.Desc
NotifySent *prometheus.Desc
SecureUpdateFailures *prometheus.Desc
SecureUpdateReceived *prometheus.Desc
Queries *prometheus.Desc
Responses *prometheus.Desc
RecursiveQueries *prometheus.Desc
RecursiveQueryFailures *prometheus.Desc
RecursiveQuerySendTimeouts *prometheus.Desc
WinsQueries *prometheus.Desc
WinsResponses *prometheus.Desc
UnmatchedResponsesReceived *prometheus.Desc
}
// NewDNSCollector ...
func NewDNSCollector() (Collector, error) {
const subsystem = "dns"
return &DNSCollector{
ZoneTransferRequestsReceived: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "zone_transfer_requests_received_total"),
"Number of zone transfer requests (AXFR/IXFR) received by the master DNS server",
[]string{"qtype"},
nil,
),
ZoneTransferRequestsSent: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "zone_transfer_requests_sent_total"),
"Number of zone transfer requests (AXFR/IXFR) sent by the secondary DNS server",
[]string{"qtype"},
nil,
),
ZoneTransferResponsesReceived: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "zone_transfer_response_received_total"),
"Number of zone transfer responses (AXFR/IXFR) received by the secondary DNS server",
[]string{"qtype"},
nil,
),
ZoneTransferSuccessReceived: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "zone_transfer_success_received_total"),
"Number of successful zone transfers (AXFR/IXFR) received by the secondary DNS server",
[]string{"qtype", "protocol"},
nil,
),
ZoneTransferSuccessSent: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "zone_transfer_success_sent_total"),
"Number of successful zone transfers (AXFR/IXFR) of the master DNS server",
[]string{"qtype"},
nil,
),
ZoneTransferFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "zone_transfer_failures_total"),
"Number of failed zone transfers of the master DNS server",
nil,
nil,
),
MemoryUsedBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "memory_used_bytes_total"),
"Total memory used by DNS server",
[]string{"area"},
nil,
),
DynamicUpdatesQueued: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "dynamic_updates_queued"),
"Number of dynamic updates queued by the DNS server",
nil,
nil,
),
DynamicUpdatesReceived: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "dynamic_updates_received_total"),
"Number of secure update requests received by the DNS server",
[]string{"operation"},
nil,
),
DynamicUpdatesFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "dynamic_updates_failures_total"),
"Number of dynamic updates which timed out or were rejected by the DNS server",
[]string{"reason"},
nil,
),
NotifyReceived: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "notify_received_total"),
"Number of notifies received by the secondary DNS server",
nil,
nil,
),
NotifySent: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "notify_sent_total"),
"Number of notifies sent by the master DNS server",
nil,
nil,
),
SecureUpdateFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "secure_update_failures_total"),
"Number of secure updates that failed on the DNS server",
nil,
nil,
),
SecureUpdateReceived: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "secure_update_received_total"),
"Number of secure update requests received by the DNS server",
nil,
nil,
),
Queries: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "queries_total"),
"Number of queries received by DNS server",
[]string{"protocol"},
nil,
),
Responses: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "responses_total"),
"Number of reponses sent by DNS server",
[]string{"protocol"},
nil,
),
RecursiveQueries: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "recursive_queries_total"),
"Number of recursive queries received by DNS server",
nil,
nil,
),
RecursiveQueryFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "recursive_query_failures_total"),
"Number of recursive query failures",
nil,
nil,
),
RecursiveQuerySendTimeouts: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "recursive_query_send_timeouts_total"),
"Number of recursive query sending timeouts",
nil,
nil,
),
WinsQueries: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "wins_queries_total"),
"Number of WINS lookup requests received by the server",
[]string{"direction"},
nil,
),
WinsResponses: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "wins_responses_total"),
"Number of WINS lookup responses sent by the server",
[]string{"direction"},
nil,
),
UnmatchedResponsesReceived: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "unmatched_responses_total"),
"Number of response packets received by the DNS server that do not match any outstanding remote query",
nil,
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *DNSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting dns metrics:", desc, err)
return err
}
return nil
}
// Win32_PerfRawData_DNS_DNS docs:
// - https://msdn.microsoft.com/en-us/library/ms803992.aspx?f=255&MSPPError=-2147217396
// - https://technet.microsoft.com/en-us/library/cc977686.aspx
type Win32_PerfRawData_DNS_DNS struct {
AXFRRequestReceived uint32
AXFRRequestSent uint32
AXFRResponseReceived uint32
AXFRSuccessReceived uint32
AXFRSuccessSent uint32
CachingMemory uint32
DatabaseNodeMemory uint32
DynamicUpdateNoOperation uint32
DynamicUpdateQueued uint32
DynamicUpdateRejected uint32
DynamicUpdateTimeOuts uint32
DynamicUpdateWrittentoDatabase uint32
IXFRRequestReceived uint32
IXFRRequestSent uint32
IXFRResponseReceived uint32
IXFRSuccessSent uint32
IXFRTCPSuccessReceived uint32
IXFRUDPSuccessReceived uint32
NbstatMemory uint32
NotifyReceived uint32
NotifySent uint32
RecordFlowMemory uint32
RecursiveQueries uint32
RecursiveQueryFailure uint32
RecursiveSendTimeOuts uint32
SecureUpdateFailure uint32
SecureUpdateReceived uint32
TCPMessageMemory uint32
TCPQueryReceived uint32
TCPResponseSent uint32
UDPMessageMemory uint32
UDPQueryReceived uint32
UDPResponseSent uint32
UnmatchedResponsesReceived uint32
WINSLookupReceived uint32
WINSResponseSent uint32
WINSReverseLookupReceived uint32
WINSReverseResponseSent uint32
ZoneTransferFailure uint32
ZoneTransferSOARequestSent uint32
}
func (c *DNSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_DNS_DNS
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferRequestsReceived,
prometheus.CounterValue,
float64(dst[0].AXFRRequestReceived),
"full",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferRequestsReceived,
prometheus.CounterValue,
float64(dst[0].IXFRRequestReceived),
"incremental",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferRequestsSent,
prometheus.CounterValue,
float64(dst[0].AXFRRequestSent),
"full",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferRequestsSent,
prometheus.CounterValue,
float64(dst[0].IXFRRequestSent),
"incremental",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferRequestsSent,
prometheus.CounterValue,
float64(dst[0].ZoneTransferSOARequestSent),
"soa",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferResponsesReceived,
prometheus.CounterValue,
float64(dst[0].AXFRResponseReceived),
"full",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferResponsesReceived,
prometheus.CounterValue,
float64(dst[0].IXFRResponseReceived),
"incremental",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferSuccessReceived,
prometheus.CounterValue,
float64(dst[0].AXFRSuccessReceived),
"full",
"tcp",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferSuccessReceived,
prometheus.CounterValue,
float64(dst[0].IXFRTCPSuccessReceived),
"incremental",
"tcp",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferSuccessReceived,
prometheus.CounterValue,
float64(dst[0].IXFRUDPSuccessReceived),
"incremental",
"udp",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferSuccessSent,
prometheus.CounterValue,
float64(dst[0].AXFRSuccessSent),
"full",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferSuccessSent,
prometheus.CounterValue,
float64(dst[0].IXFRSuccessSent),
"incremental",
)
ch <- prometheus.MustNewConstMetric(
c.ZoneTransferFailures,
prometheus.CounterValue,
float64(dst[0].ZoneTransferFailure),
)
ch <- prometheus.MustNewConstMetric(
c.MemoryUsedBytes,
prometheus.GaugeValue,
float64(dst[0].CachingMemory),
"caching",
)
ch <- prometheus.MustNewConstMetric(
c.MemoryUsedBytes,
prometheus.GaugeValue,
float64(dst[0].DatabaseNodeMemory),
"database_node",
)
ch <- prometheus.MustNewConstMetric(
c.MemoryUsedBytes,
prometheus.GaugeValue,
float64(dst[0].NbstatMemory),
"nbstat",
)
ch <- prometheus.MustNewConstMetric(
c.MemoryUsedBytes,
prometheus.GaugeValue,
float64(dst[0].RecordFlowMemory),
"record_flow",
)
ch <- prometheus.MustNewConstMetric(
c.MemoryUsedBytes,
prometheus.GaugeValue,
float64(dst[0].TCPMessageMemory),
"tcp_message",
)
ch <- prometheus.MustNewConstMetric(
c.MemoryUsedBytes,
prometheus.GaugeValue,
float64(dst[0].UDPMessageMemory),
"udp_message",
)
ch <- prometheus.MustNewConstMetric(
c.DynamicUpdatesReceived,
prometheus.CounterValue,
float64(dst[0].DynamicUpdateNoOperation),
"noop",
)
ch <- prometheus.MustNewConstMetric(
c.DynamicUpdatesReceived,
prometheus.CounterValue,
float64(dst[0].DynamicUpdateWrittentoDatabase),
"written",
)
ch <- prometheus.MustNewConstMetric(
c.DynamicUpdatesQueued,
prometheus.GaugeValue,
float64(dst[0].DynamicUpdateQueued),
)
ch <- prometheus.MustNewConstMetric(
c.DynamicUpdatesFailures,
prometheus.CounterValue,
float64(dst[0].DynamicUpdateRejected),
"rejected",
)
ch <- prometheus.MustNewConstMetric(
c.DynamicUpdatesFailures,
prometheus.CounterValue,
float64(dst[0].DynamicUpdateTimeOuts),
"timeout",
)
ch <- prometheus.MustNewConstMetric(
c.NotifyReceived,
prometheus.CounterValue,
float64(dst[0].NotifyReceived),
)
ch <- prometheus.MustNewConstMetric(
c.NotifySent,
prometheus.CounterValue,
float64(dst[0].NotifySent),
)
ch <- prometheus.MustNewConstMetric(
c.RecursiveQueries,
prometheus.CounterValue,
float64(dst[0].RecursiveQueries),
)
ch <- prometheus.MustNewConstMetric(
c.RecursiveQueryFailures,
prometheus.CounterValue,
float64(dst[0].RecursiveQueryFailure),
)
ch <- prometheus.MustNewConstMetric(
c.RecursiveQuerySendTimeouts,
prometheus.CounterValue,
float64(dst[0].RecursiveSendTimeOuts),
)
ch <- prometheus.MustNewConstMetric(
c.Queries,
prometheus.CounterValue,
float64(dst[0].TCPQueryReceived),
"tcp",
)
ch <- prometheus.MustNewConstMetric(
c.Queries,
prometheus.CounterValue,
float64(dst[0].UDPQueryReceived),
"udp",
)
ch <- prometheus.MustNewConstMetric(
c.Responses,
prometheus.CounterValue,
float64(dst[0].TCPResponseSent),
"tcp",
)
ch <- prometheus.MustNewConstMetric(
c.Responses,
prometheus.CounterValue,
float64(dst[0].UDPResponseSent),
"udp",
)
ch <- prometheus.MustNewConstMetric(
c.UnmatchedResponsesReceived,
prometheus.CounterValue,
float64(dst[0].UnmatchedResponsesReceived),
)
ch <- prometheus.MustNewConstMetric(
c.WinsQueries,
prometheus.CounterValue,
float64(dst[0].WINSLookupReceived),
"forward",
)
ch <- prometheus.MustNewConstMetric(
c.WinsQueries,
prometheus.CounterValue,
float64(dst[0].WINSReverseLookupReceived),
"reverse",
)
ch <- prometheus.MustNewConstMetric(
c.WinsResponses,
prometheus.CounterValue,
float64(dst[0].WINSResponseSent),
"forward",
)
ch <- prometheus.MustNewConstMetric(
c.WinsResponses,
prometheus.CounterValue,
float64(dst[0].WINSReverseResponseSent),
"reverse",
)
ch <- prometheus.MustNewConstMetric(
c.SecureUpdateFailures,
prometheus.CounterValue,
float64(dst[0].SecureUpdateFailure),
)
ch <- prometheus.MustNewConstMetric(
c.SecureUpdateReceived,
prometheus.CounterValue,
float64(dst[0].SecureUpdateReceived),
)
return nil, nil
}

collector/hyperv.go Normal file
File diff suppressed because it is too large
collector/iis.go Normal file
File diff suppressed because it is too large
collector/logical_disk.go Normal file
@@ -0,0 +1,302 @@
// +build windows
package collector
import (
"fmt"
"regexp"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"gopkg.in/alecthomas/kingpin.v2"
)
func init() {
registerCollector("logical_disk", NewLogicalDiskCollector, "LogicalDisk")
}
var (
volumeWhitelist = kingpin.Flag(
"collector.logical_disk.volume-whitelist",
"Regexp of volumes to whitelist. Volume name must both match whitelist and not match blacklist to be included.",
).Default(".+").String()
volumeBlacklist = kingpin.Flag(
"collector.logical_disk.volume-blacklist",
"Regexp of volumes to blacklist. Volume name must both match whitelist and not match blacklist to be included.",
).Default("").String()
)
// A LogicalDiskCollector is a Prometheus collector for perflib logicalDisk metrics
type LogicalDiskCollector struct {
RequestsQueued *prometheus.Desc
ReadBytesTotal *prometheus.Desc
ReadsTotal *prometheus.Desc
WriteBytesTotal *prometheus.Desc
WritesTotal *prometheus.Desc
ReadTime *prometheus.Desc
WriteTime *prometheus.Desc
TotalSpace *prometheus.Desc
FreeSpace *prometheus.Desc
IdleTime *prometheus.Desc
SplitIOs *prometheus.Desc
ReadLatency *prometheus.Desc
WriteLatency *prometheus.Desc
ReadWriteLatency *prometheus.Desc
volumeWhitelistPattern *regexp.Regexp
volumeBlacklistPattern *regexp.Regexp
}
// NewLogicalDiskCollector ...
func NewLogicalDiskCollector() (Collector, error) {
const subsystem = "logical_disk"
return &LogicalDiskCollector{
RequestsQueued: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "requests_queued"),
"The number of requests queued to the disk (LogicalDisk.CurrentDiskQueueLength)",
[]string{"volume"},
nil,
),
ReadBytesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "read_bytes_total"),
"The number of bytes transferred from the disk during read operations (LogicalDisk.DiskReadBytesPerSec)",
[]string{"volume"},
nil,
),
ReadsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "reads_total"),
"The number of read operations on the disk (LogicalDisk.DiskReadsPerSec)",
[]string{"volume"},
nil,
),
WriteBytesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "write_bytes_total"),
"The number of bytes transferred to the disk during write operations (LogicalDisk.DiskWriteBytesPerSec)",
[]string{"volume"},
nil,
),
WritesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "writes_total"),
"The number of write operations on the disk (LogicalDisk.DiskWritesPerSec)",
[]string{"volume"},
nil,
),
ReadTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "read_seconds_total"),
"Seconds that the disk was busy servicing read requests (LogicalDisk.PercentDiskReadTime)",
[]string{"volume"},
nil,
),
WriteTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "write_seconds_total"),
"Seconds that the disk was busy servicing write requests (LogicalDisk.PercentDiskWriteTime)",
[]string{"volume"},
nil,
),
FreeSpace: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "free_bytes"),
"Free space in bytes (LogicalDisk.PercentFreeSpace)",
[]string{"volume"},
nil,
),
TotalSpace: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "size_bytes"),
"Total space in bytes (LogicalDisk.PercentFreeSpace_Base)",
[]string{"volume"},
nil,
),
IdleTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "idle_seconds_total"),
"Seconds that the disk was idle (LogicalDisk.PercentIdleTime)",
[]string{"volume"},
nil,
),
SplitIOs: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "split_ios_total"),
"The number of I/Os to the disk were split into multiple I/Os (LogicalDisk.SplitIOPerSec)",
[]string{"volume"},
nil,
),
ReadLatency: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "read_latency_seconds_total"),
"Shows the average time, in seconds, of a read operation from the disk (LogicalDisk.AvgDiskSecPerRead)",
[]string{"volume"},
nil,
),
WriteLatency: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "write_latency_seconds_total"),
"Shows the average time, in seconds, of a write operation to the disk (LogicalDisk.AvgDiskSecPerWrite)",
[]string{"volume"},
nil,
),
ReadWriteLatency: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "read_write_latency_seconds_total"),
"Shows the time, in seconds, of the average disk transfer (LogicalDisk.AvgDiskSecPerTransfer)",
[]string{"volume"},
nil,
),
volumeWhitelistPattern: regexp.MustCompile(fmt.Sprintf("^(?:%s)$", *volumeWhitelist)),
volumeBlacklistPattern: regexp.MustCompile(fmt.Sprintf("^(?:%s)$", *volumeBlacklist)),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *LogicalDiskCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting logical_disk metrics:", desc, err)
return err
}
return nil
}
// Win32_PerfRawData_PerfDisk_LogicalDisk docs:
// - https://msdn.microsoft.com/en-us/windows/hardware/aa394307(v=vs.71) - Win32_PerfRawData_PerfDisk_LogicalDisk class
// - https://msdn.microsoft.com/en-us/library/ms803973.aspx - LogicalDisk object reference
type logicalDisk struct {
Name string
CurrentDiskQueueLength float64 `perflib:"Current Disk Queue Length"`
DiskReadBytesPerSec float64 `perflib:"Disk Read Bytes/sec"`
DiskReadsPerSec float64 `perflib:"Disk Reads/sec"`
DiskWriteBytesPerSec float64 `perflib:"Disk Write Bytes/sec"`
DiskWritesPerSec float64 `perflib:"Disk Writes/sec"`
PercentDiskReadTime float64 `perflib:"% Disk Read Time"`
PercentDiskWriteTime float64 `perflib:"% Disk Write Time"`
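// Note on the two perflib tags below: in the raw perflib data, "% Free Space"
// is a fraction counter whose base holds the volume's total size in megabytes,
// while "Free Megabytes" holds the free space directly. That is why collect()
// derives the free_bytes metric from PercentFreeSpace_Base and the size_bytes
// metric from PercentFreeSpace.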
PercentFreeSpace float64 `perflib:"% Free Space_Base"`
PercentFreeSpace_Base float64 `perflib:"Free Megabytes"`
PercentIdleTime float64 `perflib:"% Idle Time"`
SplitIOPerSec float64 `perflib:"Split IO/Sec"`
AvgDiskSecPerRead float64 `perflib:"Avg. Disk sec/Read"`
AvgDiskSecPerWrite float64 `perflib:"Avg. Disk sec/Write"`
AvgDiskSecPerTransfer float64 `perflib:"Avg. Disk sec/Transfer"`
}
func (c *LogicalDiskCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []logicalDisk
if err := unmarshalObject(ctx.perfObjects["LogicalDisk"], &dst); err != nil {
return nil, err
}
for _, volume := range dst {
if volume.Name == "_Total" ||
c.volumeBlacklistPattern.MatchString(volume.Name) ||
!c.volumeWhitelistPattern.MatchString(volume.Name) {
continue
}
ch <- prometheus.MustNewConstMetric(
c.RequestsQueued,
prometheus.GaugeValue,
volume.CurrentDiskQueueLength,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ReadBytesTotal,
prometheus.CounterValue,
volume.DiskReadBytesPerSec,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ReadsTotal,
prometheus.CounterValue,
volume.DiskReadsPerSec,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.WriteBytesTotal,
prometheus.CounterValue,
volume.DiskWriteBytesPerSec,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.WritesTotal,
prometheus.CounterValue,
volume.DiskWritesPerSec,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ReadTime,
prometheus.CounterValue,
volume.PercentDiskReadTime,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.WriteTime,
prometheus.CounterValue,
volume.PercentDiskWriteTime,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.FreeSpace,
prometheus.GaugeValue,
volume.PercentFreeSpace_Base*1024*1024,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TotalSpace,
prometheus.GaugeValue,
volume.PercentFreeSpace*1024*1024,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.IdleTime,
prometheus.CounterValue,
volume.PercentIdleTime,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.SplitIOs,
prometheus.CounterValue,
volume.SplitIOPerSec,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ReadLatency,
prometheus.CounterValue,
volume.AvgDiskSecPerRead*ticksToSecondsScaleFactor,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.WriteLatency,
prometheus.CounterValue,
volume.AvgDiskSecPerWrite*ticksToSecondsScaleFactor,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ReadWriteLatency,
prometheus.CounterValue,
volume.AvgDiskSecPerTransfer*ticksToSecondsScaleFactor,
volume.Name,
)
}
return nil, nil
}
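The volume whitelist/blacklist flags are wrapped in ^(?:...)$ before compilation, so a pattern must match the whole instance name; there are no partial matches. A runnable sketch with made-up volume names:
package main
import (
	"fmt"
	"regexp"
)
func main() {
	// Illustrative value only: the exporter anchors the flag value the same way.
	whitelist := "C:|D:"
	pattern := regexp.MustCompile(fmt.Sprintf("^(?:%s)$", whitelist))
	fmt.Println(pattern.MatchString("C:"))              // true
	fmt.Println(pattern.MatchString("HarddiskVolume1")) // false: whole-name match required
}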

collector/logon.go Normal file
@@ -0,0 +1,199 @@
// +build windows
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("logon", NewLogonCollector)
}
// A LogonCollector is a Prometheus collector for WMI metrics
type LogonCollector struct {
LogonType *prometheus.Desc
}
// NewLogonCollector ...
func NewLogonCollector() (Collector, error) {
const subsystem = "logon"
return &LogonCollector{
LogonType: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "logon_type"),
"Number of active logon sessions (LogonSession.LogonType)",
[]string{"status"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *LogonCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting user metrics:", desc, err)
return err
}
return nil
}
// Win32_LogonSession docs:
// - https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/win32-logonsession
type Win32_LogonSession struct {
LogonType uint32
}
func (c *LogonCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_LogonSession
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
// Init counters
system := 0
interactive := 0
network := 0
batch := 0
service := 0
proxy := 0
unlock := 0
networkcleartext := 0
newcredentials := 0
remoteinteractive := 0
cachedinteractive := 0
cachedremoteinteractive := 0
cachedunlock := 0
for _, entry := range dst {
switch entry.LogonType {
case 0:
system++
case 2:
interactive++
case 3:
network++
case 4:
batch++
case 5:
service++
case 6:
proxy++
case 7:
unlock++
case 8:
networkcleartext++
case 9:
newcredentials++
case 10:
remoteinteractive++
case 11:
cachedinteractive++
case 12:
cachedremoteinteractive++
case 13:
cachedunlock++
}
}
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(system),
"system",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(interactive),
"interactive",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(network),
"network",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(batch),
"batch",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(service),
"service",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(proxy),
"proxy",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(unlock),
"unlock",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(networkcleartext),
"network_clear_text",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(newcredentials),
"new_credentials",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(remoteinteractive),
"remote_interactive",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(cachedinteractive),
"cached_interactive",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(cachedremoteinteractive),
"cached_remote_interactive",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(cachedunlock),
"cached_unlock",
)
return nil, nil
}
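A hedged sketch of an alternative shape for the per-type counting, within package collector; countLogonTypes and logonTypeNames are illustrative names, not part of this changeset. The numeric values follow the Win32_LogonSession documentation referenced above:
// Hypothetical alternative to the switch: drive the counting from a lookup
// table keyed on the documented Win32_LogonSession.LogonType values.
var logonTypeNames = map[uint32]string{
	0: "system", 2: "interactive", 3: "network", 4: "batch", 5: "service",
	6: "proxy", 7: "unlock", 8: "network_clear_text", 9: "new_credentials",
	10: "remote_interactive", 11: "cached_interactive",
	12: "cached_remote_interactive", 13: "cached_unlock",
}
func countLogonTypes(sessions []Win32_LogonSession) map[string]int {
	counts := make(map[string]int, len(logonTypeNames))
	for _, s := range sessions {
		if name, ok := logonTypeNames[s.LogonType]; ok {
			counts[name]++
		}
	}
	return counts
}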

collector/memory.go Normal file
@@ -0,0 +1,502 @@
// returns data points from Win32_PerfRawData_PerfOS_Memory
// <add link to documentation here> - Win32_PerfRawData_PerfOS_Memory class
// +build windows
package collector
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("memory", NewMemoryCollector, "Memory")
}
// A MemoryCollector is a Prometheus collector for perflib Memory metrics
type MemoryCollector struct {
AvailableBytes *prometheus.Desc
CacheBytes *prometheus.Desc
CacheBytesPeak *prometheus.Desc
CacheFaultsTotal *prometheus.Desc
CommitLimit *prometheus.Desc
CommittedBytes *prometheus.Desc
DemandZeroFaultsTotal *prometheus.Desc
FreeAndZeroPageListBytes *prometheus.Desc
FreeSystemPageTableEntries *prometheus.Desc
ModifiedPageListBytes *prometheus.Desc
PageFaultsTotal *prometheus.Desc
SwapPageReadsTotal *prometheus.Desc
SwapPagesReadTotal *prometheus.Desc
SwapPagesWrittenTotal *prometheus.Desc
SwapPageOperationsTotal *prometheus.Desc
SwapPageWritesTotal *prometheus.Desc
PoolNonpagedAllocsTotal *prometheus.Desc
PoolNonpagedBytes *prometheus.Desc
PoolPagedAllocsTotal *prometheus.Desc
PoolPagedBytes *prometheus.Desc
PoolPagedResidentBytes *prometheus.Desc
StandbyCacheCoreBytes *prometheus.Desc
StandbyCacheNormalPriorityBytes *prometheus.Desc
StandbyCacheReserveBytes *prometheus.Desc
SystemCacheResidentBytes *prometheus.Desc
SystemCodeResidentBytes *prometheus.Desc
SystemCodeTotalBytes *prometheus.Desc
SystemDriverResidentBytes *prometheus.Desc
SystemDriverTotalBytes *prometheus.Desc
TransitionFaultsTotal *prometheus.Desc
TransitionPagesRepurposedTotal *prometheus.Desc
WriteCopiesTotal *prometheus.Desc
}
// NewMemoryCollector ...
func NewMemoryCollector() (Collector, error) {
const subsystem = "memory"
return &MemoryCollector{
AvailableBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "available_bytes"),
"The amount of physical memory immediately available for allocation to a process or for system use. It is equal to the sum of memory assigned to"+
" the standby (cached), free and zero page lists (AvailableBytes)",
nil,
nil,
),
CacheBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cache_bytes"),
"(CacheBytes)",
nil,
nil,
),
CacheBytesPeak: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cache_bytes_peak"),
"(CacheBytesPeak)",
nil,
nil,
),
CacheFaultsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cache_faults_total"),
"(CacheFaultsPersec)",
nil,
nil,
),
CommitLimit: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "commit_limit"),
"(CommitLimit)",
nil,
nil,
),
CommittedBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "committed_bytes"),
"(CommittedBytes)",
nil,
nil,
),
DemandZeroFaultsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "demand_zero_faults_total"),
"The number of zeroed pages required to satisfy faults. Zeroed pages, pages emptied of previously stored data and filled with zeros, are a security"+
" feature of Windows that prevent processes from seeing data stored by earlier processes that used the memory space (DemandZeroFaults)",
nil,
nil,
),
FreeAndZeroPageListBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "free_and_zero_page_list_bytes"),
"(FreeAndZeroPageListBytes)",
nil,
nil,
),
FreeSystemPageTableEntries: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "free_system_page_table_entries"),
"(FreeSystemPageTableEntries)",
nil,
nil,
),
ModifiedPageListBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "modified_page_list_bytes"),
"(ModifiedPageListBytes)",
nil,
nil,
),
PageFaultsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "page_faults_total"),
"(PageFaultsPersec)",
nil,
nil,
),
SwapPageReadsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "swap_page_reads_total"),
"Number of disk page reads (a single read operation reading several pages is still only counted once) (PageReadsPersec)",
nil,
nil,
),
SwapPagesReadTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "swap_pages_read_total"),
"Number of pages read across all page reads (ie counting all pages read even if they are read in a single operation) (PagesInputPersec)",
nil,
nil,
),
SwapPagesWrittenTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "swap_pages_written_total"),
"Number of pages written across all page writes (ie counting all pages written even if they are written in a single operation) (PagesOutputPersec)",
nil,
nil,
),
SwapPageOperationsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "swap_page_operations_total"),
"Total number of swap page read and writes (PagesPersec)",
nil,
nil,
),
SwapPageWritesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "swap_page_writes_total"),
"Number of disk page writes (a single write operation writing several pages is still only counted once) (PageWritesPersec)",
nil,
nil,
),
PoolNonpagedAllocsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "pool_nonpaged_allocs_total"),
"The number of calls to allocate space in the nonpaged pool. The nonpaged pool is an area of system memory area for objects that cannot be written"+
" to disk, and must remain in physical memory as long as they are allocated (PoolNonpagedAllocs)",
nil,
nil,
),
PoolNonpagedBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "pool_nonpaged_bytes_total"),
"(PoolNonpagedBytes)",
nil,
nil,
),
PoolPagedAllocsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "pool_paged_allocs_total"),
"(PoolPagedAllocs)",
nil,
nil,
),
PoolPagedBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "pool_paged_bytes"),
"(PoolPagedBytes)",
nil,
nil,
),
PoolPagedResidentBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "pool_paged_resident_bytes"),
"(PoolPagedResidentBytes)",
nil,
nil,
),
StandbyCacheCoreBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "standby_cache_core_bytes"),
"(StandbyCacheCoreBytes)",
nil,
nil,
),
StandbyCacheNormalPriorityBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "standby_cache_normal_priority_bytes"),
"(StandbyCacheNormalPriorityBytes)",
nil,
nil,
),
StandbyCacheReserveBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "standby_cache_reserve_bytes"),
"(StandbyCacheReserveBytes)",
nil,
nil,
),
SystemCacheResidentBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "system_cache_resident_bytes"),
"(SystemCacheResidentBytes)",
nil,
nil,
),
SystemCodeResidentBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "system_code_resident_bytes"),
"(SystemCodeResidentBytes)",
nil,
nil,
),
SystemCodeTotalBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "system_code_total_bytes"),
"(SystemCodeTotalBytes)",
nil,
nil,
),
SystemDriverResidentBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "system_driver_resident_bytes"),
"(SystemDriverResidentBytes)",
nil,
nil,
),
SystemDriverTotalBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "system_driver_total_bytes"),
"(SystemDriverTotalBytes)",
nil,
nil,
),
TransitionFaultsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "transition_faults_total"),
"(TransitionFaultsPersec)",
nil,
nil,
),
TransitionPagesRepurposedTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "transition_pages_repurposed_total"),
"(TransitionPagesRePurposedPersec)",
nil,
nil,
),
WriteCopiesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "write_copies_total"),
"The number of page faults caused by attempting to write that were satisfied by copying the page from elsewhere in physical memory (WriteCopiesPersec)",
nil,
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *MemoryCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting memory metrics:", desc, err)
return err
}
return nil
}
type memory struct {
AvailableBytes float64 `perflib:"Available Bytes"`
AvailableKBytes float64 `perflib:"Available KBytes"`
AvailableMBytes float64 `perflib:"Available MBytes"`
CacheBytes float64 `perflib:"Cache Bytes"`
CacheBytesPeak float64 `perflib:"Cache Bytes Peak"`
CacheFaultsPersec float64 `perflib:"Cache Faults/sec"`
CommitLimit float64 `perflib:"Commit Limit"`
CommittedBytes float64 `perflib:"Committed Bytes"`
DemandZeroFaultsPersec float64 `perflib:"Demand Zero Faults/sec"`
FreeAndZeroPageListBytes float64 `perflib:"Free & Zero Page List Bytes"`
FreeSystemPageTableEntries float64 `perflib:"Free System Page Table Entries"`
ModifiedPageListBytes float64 `perflib:"Modified Page List Bytes"`
PageFaultsPersec float64 `perflib:"Page Faults/sec"`
PageReadsPersec float64 `perflib:"Page Reads/sec"`
PagesInputPersec float64 `perflib:"Pages Input/sec"`
PagesOutputPersec float64 `perflib:"Pages Output/sec"`
PagesPersec float64 `perflib:"Pages/sec"`
PageWritesPersec float64 `perflib:"Page Writes/sec"`
PoolNonpagedAllocs float64 `perflib:"Pool Nonpaged Allocs"`
PoolNonpagedBytes float64 `perflib:"Pool Nonpaged Bytes"`
PoolPagedAllocs float64 `perflib:"Pool Paged Allocs"`
PoolPagedBytes float64 `perflib:"Pool Paged Bytes"`
PoolPagedResidentBytes float64 `perflib:"Pool Paged Resident Bytes"`
StandbyCacheCoreBytes float64 `perflib:"Standby Cache Core Bytes"`
StandbyCacheNormalPriorityBytes float64 `perflib:"Standby Cache Normal Priority Bytes"`
StandbyCacheReserveBytes float64 `perflib:"Standby Cache Reserve Bytes"`
SystemCacheResidentBytes float64 `perflib:"System Cache Resident Bytes"`
SystemCodeResidentBytes float64 `perflib:"System Code Resident Bytes"`
SystemCodeTotalBytes float64 `perflib:"System Code Total Bytes"`
SystemDriverResidentBytes float64 `perflib:"System Driver Resident Bytes"`
SystemDriverTotalBytes float64 `perflib:"System Driver Total Bytes"`
TransitionFaultsPersec float64 `perflib:"Transition Faults/sec"`
TransitionPagesRePurposedPersec float64 `perflib:"Transition Pages RePurposed/sec"`
WriteCopiesPersec float64 `perflib:"Write Copies/sec"`
}
func (c *MemoryCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []memory
if err := unmarshalObject(ctx.perfObjects["Memory"], &dst); err != nil {
return nil, err
}
ch <- prometheus.MustNewConstMetric(
c.AvailableBytes,
prometheus.GaugeValue,
dst[0].AvailableBytes,
)
ch <- prometheus.MustNewConstMetric(
c.CacheBytes,
prometheus.GaugeValue,
dst[0].CacheBytes,
)
ch <- prometheus.MustNewConstMetric(
c.CacheBytesPeak,
prometheus.GaugeValue,
dst[0].CacheBytesPeak,
)
ch <- prometheus.MustNewConstMetric(
c.CacheFaultsTotal,
prometheus.GaugeValue,
dst[0].CacheFaultsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.CommitLimit,
prometheus.GaugeValue,
dst[0].CommitLimit,
)
ch <- prometheus.MustNewConstMetric(
c.CommittedBytes,
prometheus.GaugeValue,
dst[0].CommittedBytes,
)
ch <- prometheus.MustNewConstMetric(
c.DemandZeroFaultsTotal,
prometheus.GaugeValue,
dst[0].DemandZeroFaultsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.FreeAndZeroPageListBytes,
prometheus.GaugeValue,
dst[0].FreeAndZeroPageListBytes,
)
ch <- prometheus.MustNewConstMetric(
c.FreeSystemPageTableEntries,
prometheus.GaugeValue,
dst[0].FreeSystemPageTableEntries,
)
ch <- prometheus.MustNewConstMetric(
c.ModifiedPageListBytes,
prometheus.GaugeValue,
dst[0].ModifiedPageListBytes,
)
ch <- prometheus.MustNewConstMetric(
c.PageFaultsTotal,
prometheus.GaugeValue,
dst[0].PageFaultsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SwapPageReadsTotal,
prometheus.GaugeValue,
dst[0].PageReadsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SwapPagesReadTotal,
prometheus.GaugeValue,
dst[0].PagesInputPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SwapPagesWrittenTotal,
prometheus.GaugeValue,
dst[0].PagesOutputPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SwapPageOperationsTotal,
prometheus.GaugeValue,
dst[0].PagesPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SwapPageWritesTotal,
prometheus.GaugeValue,
dst[0].PageWritesPersec,
)
ch <- prometheus.MustNewConstMetric(
c.PoolNonpagedAllocsTotal,
prometheus.GaugeValue,
dst[0].PoolNonpagedAllocs,
)
ch <- prometheus.MustNewConstMetric(
c.PoolNonpagedBytes,
prometheus.GaugeValue,
dst[0].PoolNonpagedBytes,
)
ch <- prometheus.MustNewConstMetric(
c.PoolPagedAllocsTotal,
prometheus.GaugeValue,
dst[0].PoolPagedAllocs,
)
ch <- prometheus.MustNewConstMetric(
c.PoolPagedBytes,
prometheus.GaugeValue,
dst[0].PoolPagedBytes,
)
ch <- prometheus.MustNewConstMetric(
c.PoolPagedResidentBytes,
prometheus.GaugeValue,
dst[0].PoolPagedResidentBytes,
)
ch <- prometheus.MustNewConstMetric(
c.StandbyCacheCoreBytes,
prometheus.GaugeValue,
dst[0].StandbyCacheCoreBytes,
)
ch <- prometheus.MustNewConstMetric(
c.StandbyCacheNormalPriorityBytes,
prometheus.GaugeValue,
dst[0].StandbyCacheNormalPriorityBytes,
)
ch <- prometheus.MustNewConstMetric(
c.StandbyCacheReserveBytes,
prometheus.GaugeValue,
dst[0].StandbyCacheReserveBytes,
)
ch <- prometheus.MustNewConstMetric(
c.SystemCacheResidentBytes,
prometheus.GaugeValue,
dst[0].SystemCacheResidentBytes,
)
ch <- prometheus.MustNewConstMetric(
c.SystemCodeResidentBytes,
prometheus.GaugeValue,
dst[0].SystemCodeResidentBytes,
)
ch <- prometheus.MustNewConstMetric(
c.SystemCodeTotalBytes,
prometheus.GaugeValue,
dst[0].SystemCodeTotalBytes,
)
ch <- prometheus.MustNewConstMetric(
c.SystemDriverResidentBytes,
prometheus.GaugeValue,
dst[0].SystemDriverResidentBytes,
)
ch <- prometheus.MustNewConstMetric(
c.SystemDriverTotalBytes,
prometheus.GaugeValue,
dst[0].SystemDriverTotalBytes,
)
ch <- prometheus.MustNewConstMetric(
c.TransitionFaultsTotal,
prometheus.GaugeValue,
dst[0].TransitionFaultsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.TransitionPagesRepurposedTotal,
prometheus.GaugeValue,
dst[0].TransitionPagesRePurposedPersec,
)
ch <- prometheus.MustNewConstMetric(
c.WriteCopiesTotal,
prometheus.GaugeValue,
dst[0].WriteCopiesPersec,
)
return nil, nil
}
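Unlike the WMI-backed collectors, this perflib collector indexes dst[0] without checking that the scrape returned any instances. A hedged fragment showing a defensive guard that mirrors the empty-result check used in the WMI collectors (assumes an errors import; the error text is illustrative and not part of this changeset):
var dst []memory
if err := unmarshalObject(ctx.perfObjects["Memory"], &dst); err != nil {
	return nil, err
}
if len(dst) == 0 {
	return nil, errors.New("perflib query for Memory returned an empty result set")
}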

collector/msmq.go Normal file
@@ -0,0 +1,127 @@
// +build windows
package collector
import (
"strings"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"gopkg.in/alecthomas/kingpin.v2"
)
func init() {
registerCollector("msmq", NewMSMQCollector)
}
var (
msmqWhereClause = kingpin.Flag("collector.msmq.msmq-where", "WQL 'where' clause to use in WMI metrics query. Limits the response to the msmqs you specify and reduces the size of the response.").String()
)
// A Win32_PerfRawData_MSMQ_MSMQQueueCollector is a Prometheus collector for WMI Win32_PerfRawData_MSMQ_MSMQQueue metrics
type Win32_PerfRawData_MSMQ_MSMQQueueCollector struct {
BytesinJournalQueue *prometheus.Desc
BytesinQueue *prometheus.Desc
MessagesinJournalQueue *prometheus.Desc
MessagesinQueue *prometheus.Desc
queryWhereClause string
}
// NewMSMQCollector ...
func NewMSMQCollector() (Collector, error) {
const subsystem = "msmq"
if *msmqWhereClause == "" {
log.Warn("No where-clause specified for msmq collector. This will generate a very large number of metrics!")
}
return &Win32_PerfRawData_MSMQ_MSMQQueueCollector{
BytesinJournalQueue: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "bytes_in_journal_queue"),
"Size of queue journal in bytes",
[]string{"name"},
nil,
),
BytesinQueue: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "bytes_in_queue"),
"Size of queue in bytes",
[]string{"name"},
nil,
),
MessagesinJournalQueue: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "messages_in_journal_queue"),
"Count messages in queue journal",
[]string{"name"},
nil,
),
MessagesinQueue: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "messages_in_queue"),
"Count messages in queue",
[]string{"name"},
nil,
),
queryWhereClause: *msmqWhereClause,
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *Win32_PerfRawData_MSMQ_MSMQQueueCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting msmq metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_MSMQ_MSMQQueue struct {
Name string
BytesinJournalQueue uint64
BytesinQueue uint64
MessagesinJournalQueue uint64
MessagesinQueue uint64
}
func (c *Win32_PerfRawData_MSMQ_MSMQQueueCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_MSMQ_MSMQQueue
q := queryAllWhere(&dst, c.queryWhereClause)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
for _, msmq := range dst {
if msmq.Name == "Computer Queues" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.BytesinJournalQueue,
prometheus.GaugeValue,
float64(msmq.BytesinJournalQueue),
strings.ToLower(msmq.Name),
)
ch <- prometheus.MustNewConstMetric(
c.BytesinQueue,
prometheus.GaugeValue,
float64(msmq.BytesinQueue),
strings.ToLower(msmq.Name),
)
ch <- prometheus.MustNewConstMetric(
c.MessagesinJournalQueue,
prometheus.GaugeValue,
float64(msmq.MessagesinJournalQueue),
strings.ToLower(msmq.Name),
)
ch <- prometheus.MustNewConstMetric(
c.MessagesinQueue,
prometheus.GaugeValue,
float64(msmq.MessagesinQueue),
strings.ToLower(msmq.Name),
)
}
return nil, nil
}
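A hedged sketch of what the where-clause narrowing amounts to: queryAllWhere is an exporter-internal helper, but wmi.CreateQuery shows the same shape when given an explicit WHERE string. The clause and the exampleMSMQQuery name are illustrative only, not part of this changeset:
// Sketch only, within package collector.
func exampleMSMQQuery() string {
	var dst []Win32_PerfRawData_MSMQ_MSMQQueue
	// Produces a SELECT over the struct's fields FROM
	// Win32_PerfRawData_MSMQ_MSMQQueue with the clause appended, e.g.
	// restricting the scrape to private queues.
	return wmi.CreateQuery(&dst, "WHERE Name LIKE '%private$%'")
}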

collector/mssql.go Normal file
File diff suppressed because it is too large
collector/net.go Normal file
@@ -0,0 +1,259 @@
// +build windows
package collector
import (
"fmt"
"regexp"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"gopkg.in/alecthomas/kingpin.v2"
)
func init() {
registerCollector("net", NewNetworkCollector, "Network Interface")
}
var (
nicWhitelist = kingpin.Flag(
"collector.net.nic-whitelist",
"Regexp of NIC:s to whitelist. NIC name must both match whitelist and not match blacklist to be included.",
).Default(".+").String()
nicBlacklist = kingpin.Flag(
"collector.net.nic-blacklist",
"Regexp of NIC:s to blacklist. NIC name must both match whitelist and not match blacklist to be included.",
).Default("").String()
nicNameToUnderscore = regexp.MustCompile("[^a-zA-Z0-9]")
)
// A NetworkCollector is a Prometheus collector for Perflib Network Interface metrics
type NetworkCollector struct {
BytesReceivedTotal *prometheus.Desc
BytesSentTotal *prometheus.Desc
BytesTotal *prometheus.Desc
PacketsOutboundDiscarded *prometheus.Desc
PacketsOutboundErrors *prometheus.Desc
PacketsTotal *prometheus.Desc
PacketsReceivedDiscarded *prometheus.Desc
PacketsReceivedErrors *prometheus.Desc
PacketsReceivedTotal *prometheus.Desc
PacketsReceivedUnknown *prometheus.Desc
PacketsSentTotal *prometheus.Desc
CurrentBandwidth *prometheus.Desc
nicWhitelistPattern *regexp.Regexp
nicBlacklistPattern *regexp.Regexp
}
// NewNetworkCollector ...
func NewNetworkCollector() (Collector, error) {
const subsystem = "net"
return &NetworkCollector{
BytesReceivedTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "bytes_received_total"),
"(Network.BytesReceivedPerSec)",
[]string{"nic"},
nil,
),
BytesSentTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "bytes_sent_total"),
"(Network.BytesSentPerSec)",
[]string{"nic"},
nil,
),
BytesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "bytes_total"),
"(Network.BytesTotalPerSec)",
[]string{"nic"},
nil,
),
PacketsOutboundDiscarded: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "packets_outbound_discarded"),
"(Network.PacketsOutboundDiscarded)",
[]string{"nic"},
nil,
),
PacketsOutboundErrors: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "packets_outbound_errors"),
"(Network.PacketsOutboundErrors)",
[]string{"nic"},
nil,
),
PacketsReceivedDiscarded: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "packets_received_discarded"),
"(Network.PacketsReceivedDiscarded)",
[]string{"nic"},
nil,
),
PacketsReceivedErrors: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "packets_received_errors"),
"(Network.PacketsReceivedErrors)",
[]string{"nic"},
nil,
),
PacketsReceivedTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "packets_received_total"),
"(Network.PacketsReceivedPerSec)",
[]string{"nic"},
nil,
),
PacketsReceivedUnknown: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "packets_received_unknown"),
"(Network.PacketsReceivedUnknown)",
[]string{"nic"},
nil,
),
PacketsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "packets_total"),
"(Network.PacketsPerSec)",
[]string{"nic"},
nil,
),
PacketsSentTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "packets_sent_total"),
"(Network.PacketsSentPerSec)",
[]string{"nic"},
nil,
),
CurrentBandwidth: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "current_bandwidth"),
"(Network.CurrentBandwidth)",
[]string{"nic"},
nil,
),
nicWhitelistPattern: regexp.MustCompile(fmt.Sprintf("^(?:%s)$", *nicWhitelist)),
nicBlacklistPattern: regexp.MustCompile(fmt.Sprintf("^(?:%s)$", *nicBlacklist)),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *NetworkCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting net metrics:", desc, err)
return err
}
return nil
}
// mangleNetworkName mangles Network Adapter name (non-alphanumeric to _)
// that is used in networkInterface.
func mangleNetworkName(name string) string {
return nicNameToUnderscore.ReplaceAllString(name, "_")
}
// Win32_PerfRawData_Tcpip_NetworkInterface docs:
// - https://technet.microsoft.com/en-us/security/aa394340(v=vs.80)
type networkInterface struct {
BytesReceivedPerSec float64 `perflib:"Bytes Received/sec"`
BytesSentPerSec float64 `perflib:"Bytes Sent/sec"`
BytesTotalPerSec float64 `perflib:"Bytes Total/sec"`
Name string
PacketsOutboundDiscarded float64 `perflib:"Packets Outbound Discarded"`
PacketsOutboundErrors float64 `perflib:"Packets Outbound Errors"`
PacketsPerSec float64 `perflib:"Packets/sec"`
PacketsReceivedDiscarded float64 `perflib:"Packets Received Discarded"`
PacketsReceivedErrors float64 `perflib:"Packets Received Errors"`
PacketsReceivedPerSec float64 `perflib:"Packets Received/sec"`
PacketsReceivedUnknown float64 `perflib:"Packets Received Unknown"`
PacketsSentPerSec float64 `perflib:"Packets Sent/sec"`
CurrentBandwidth float64 `perflib:"Current Bandwidth"`
}
func (c *NetworkCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []networkInterface
if err := unmarshalObject(ctx.perfObjects["Network Interface"], &dst); err != nil {
return nil, err
}
for _, nic := range dst {
if c.nicBlacklistPattern.MatchString(nic.Name) ||
!c.nicWhitelistPattern.MatchString(nic.Name) {
continue
}
name := mangleNetworkName(nic.Name)
if name == "" {
continue
}
// Counters
ch <- prometheus.MustNewConstMetric(
c.BytesReceivedTotal,
prometheus.CounterValue,
nic.BytesReceivedPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.BytesSentTotal,
prometheus.CounterValue,
nic.BytesSentPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.BytesTotal,
prometheus.CounterValue,
nic.BytesTotalPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsOutboundDiscarded,
prometheus.CounterValue,
nic.PacketsOutboundDiscarded,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsOutboundErrors,
prometheus.CounterValue,
nic.PacketsOutboundErrors,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsTotal,
prometheus.CounterValue,
nic.PacketsPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsReceivedDiscarded,
prometheus.CounterValue,
nic.PacketsReceivedDiscarded,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsReceivedErrors,
prometheus.CounterValue,
nic.PacketsReceivedErrors,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsReceivedTotal,
prometheus.CounterValue,
nic.PacketsReceivedPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsReceivedUnknown,
prometheus.CounterValue,
nic.PacketsReceivedUnknown,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsSentTotal,
prometheus.CounterValue,
nic.PacketsSentPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.CurrentBandwidth,
prometheus.GaugeValue,
nic.CurrentBandwidth,
name,
)
}
return nil, nil
}

collector/net_test.go Normal file
@@ -0,0 +1,17 @@
// +build windows
package collector
import "testing"
func TestNetworkToInstanceName(t *testing.T) {
data := map[string]string{
"Intel[R] Dual Band Wireless-AC 8260": "Intel_R__Dual_Band_Wireless_AC_8260",
}
for in, out := range data {
got := mangleNetworkName(in)
if got != out {
t.Error("expected", out, "got", got)
}
}
}
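A couple of additional, made-up adapter names and their expected manglings, should the table above ever be extended; sketch only, not part of this changeset:
// +build windows
package collector
import "testing"
// Illustrative extra cases for the name-mangling behaviour; every
// non-alphanumeric character becomes an underscore.
func TestNetworkToInstanceNameExtra(t *testing.T) {
	data := map[string]string{
		"vEthernet (Default Switch)": "vEthernet__Default_Switch_",
		"Ethernet 2":                 "Ethernet_2",
	}
	for in, out := range data {
		if got := mangleNetworkName(in); got != out {
			t.Error("expected", out, "got", got)
		}
	}
}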


@@ -0,0 +1,117 @@
// +build windows
package collector
import (
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("netframework_clrexceptions", NewNETFramework_NETCLRExceptionsCollector)
}
// A NETFramework_NETCLRExceptionsCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRExceptions metrics
type NETFramework_NETCLRExceptionsCollector struct {
NumberofExcepsThrown *prometheus.Desc
NumberofFilters *prometheus.Desc
NumberofFinallys *prometheus.Desc
ThrowToCatchDepth *prometheus.Desc
}
// NewNETFramework_NETCLRExceptionsCollector ...
func NewNETFramework_NETCLRExceptionsCollector() (Collector, error) {
const subsystem = "netframework_clrexceptions"
return &NETFramework_NETCLRExceptionsCollector{
NumberofExcepsThrown: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "exceptions_thrown_total"),
"Displays the total number of exceptions thrown since the application started. This includes both .NET exceptions and unmanaged exceptions that are converted into .NET exceptions.",
[]string{"process"},
nil,
),
NumberofFilters: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "exceptions_filters_total"),
"Displays the total number of .NET exception filters executed. An exception filter evaluates regardless of whether an exception is handled.",
[]string{"process"},
nil,
),
NumberofFinallys: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "exceptions_finallys_total"),
"Displays the total number of finally blocks executed. Only the finally blocks executed for an exception are counted; finally blocks on normal code paths are not counted by this counter.",
[]string{"process"},
nil,
),
ThrowToCatchDepth: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "throw_to_catch_depth_total"),
"Displays the total number of stack frames traversed, from the frame that threw the exception to the frame that handled the exception.",
[]string{"process"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *NETFramework_NETCLRExceptionsCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting win32_perfrawdata_netframework_netclrexceptions metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_NETFramework_NETCLRExceptions struct {
Name string
NumberofExcepsThrown uint32
NumberofExcepsThrownPersec uint32
NumberofFiltersPersec uint32
NumberofFinallysPersec uint32
ThrowToCatchDepthPersec uint32
}
func (c *NETFramework_NETCLRExceptionsCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_NETFramework_NETCLRExceptions
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
for _, process := range dst {
if process.Name == "_Global_" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.NumberofExcepsThrown,
prometheus.CounterValue,
float64(process.NumberofExcepsThrown),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.NumberofFilters,
prometheus.CounterValue,
float64(process.NumberofFiltersPersec),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.NumberofFinallys,
prometheus.CounterValue,
float64(process.NumberofFinallysPersec),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ThrowToCatchDepth,
prometheus.CounterValue,
float64(process.ThrowToCatchDepthPersec),
process.Name,
)
}
return nil, nil
}


@@ -0,0 +1,103 @@
// +build windows
package collector
import (
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("netframework_clrinterop", NewNETFramework_NETCLRInteropCollector)
}
// A NETFramework_NETCLRInteropCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRInterop metrics
type NETFramework_NETCLRInteropCollector struct {
NumberofCCWs *prometheus.Desc
Numberofmarshalling *prometheus.Desc
NumberofStubs *prometheus.Desc
}
// NewNETFramework_NETCLRInteropCollector ...
func NewNETFramework_NETCLRInteropCollector() (Collector, error) {
const subsystem = "netframework_clrinterop"
return &NETFramework_NETCLRInteropCollector{
NumberofCCWs: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "com_callable_wrappers_total"),
"Displays the current number of COM callable wrappers (CCWs). A CCW is a proxy for a managed object being referenced from an unmanaged COM client.",
[]string{"process"},
nil,
),
Numberofmarshalling: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "interop_marshalling_total"),
"Displays the total number of times arguments and return values have been marshaled from managed to unmanaged code, and vice versa, since the application started.",
[]string{"process"},
nil,
),
NumberofStubs: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "interop_stubs_created_total"),
"Displays the current number of stubs created by the common language runtime. Stubs are responsible for marshaling arguments and return values from managed to unmanaged code, and vice versa, during a COM interop call or a platform invoke call.",
[]string{"process"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *NETFramework_NETCLRInteropCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting win32_perfrawdata_netframework_netclrinterop metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_NETFramework_NETCLRInterop struct {
Name string
NumberofCCWs uint32
Numberofmarshalling uint32
NumberofStubs uint32
NumberofTLBexportsPersec uint32
NumberofTLBimportsPersec uint32
}
func (c *NETFramework_NETCLRInteropCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_NETFramework_NETCLRInterop
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
for _, process := range dst {
if process.Name == "_Global_" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.NumberofCCWs,
prometheus.CounterValue,
float64(process.NumberofCCWs),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.Numberofmarshalling,
prometheus.CounterValue,
float64(process.Numberofmarshalling),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.NumberofStubs,
prometheus.CounterValue,
float64(process.NumberofStubs),
process.Name,
)
}
return nil, nil
}

View File

@@ -0,0 +1,119 @@
// +build windows
package collector
import (
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("netframework_clrjit", NewNETFramework_NETCLRJitCollector)
}
// A NETFramework_NETCLRJitCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRJit metrics
type NETFramework_NETCLRJitCollector struct {
NumberofMethodsJitted *prometheus.Desc
TimeinJit *prometheus.Desc
StandardJitFailures *prometheus.Desc
TotalNumberofILBytesJitted *prometheus.Desc
}
// NewNETFramework_NETCLRJitCollector ...
func NewNETFramework_NETCLRJitCollector() (Collector, error) {
const subsystem = "netframework_clrjit"
return &NETFramework_NETCLRJitCollector{
NumberofMethodsJitted: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "jit_methods_total"),
"Displays the total number of methods JIT-compiled since the application started. This counter does not include pre-JIT-compiled methods.",
[]string{"process"},
nil,
),
TimeinJit: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "jit_time_percent"),
"Displays the percentage of time spent in JIT compilation. This counter is updated at the end of every JIT compilation phase. A JIT compilation phase occurs when a method and its dependencies are compiled.",
[]string{"process"},
nil,
),
StandardJitFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "jit_standard_failures_total"),
"Displays the peak number of methods the JIT compiler has failed to compile since the application started. This failure can occur if the MSIL cannot be verified or if there is an internal error in the JIT compiler.",
[]string{"process"},
nil,
),
TotalNumberofILBytesJitted: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "jit_il_bytes_total"),
"Displays the total number of Microsoft intermediate language (MSIL) bytes compiled by the just-in-time (JIT) compiler since the application started",
[]string{"process"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *NETFramework_NETCLRJitCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting win32_perfrawdata_netframework_netclrjit metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_NETFramework_NETCLRJit struct {
Name string
Frequency_PerfTime uint32
ILBytesJittedPersec uint32
NumberofILBytesJitted uint32
NumberofMethodsJitted uint32
PercentTimeinJit uint32
StandardJitFailures uint32
TotalNumberofILBytesJitted uint32
}
func (c *NETFramework_NETCLRJitCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_NETFramework_NETCLRJit
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
for _, process := range dst {
if process.Name == "_Global_" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.NumberofMethodsJitted,
prometheus.CounterValue,
float64(process.NumberofMethodsJitted),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TimeinJit,
prometheus.GaugeValue,
float64(process.PercentTimeinJit)/float64(process.Frequency_PerfTime),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.StandardJitFailures,
prometheus.GaugeValue,
float64(process.StandardJitFailures),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TotalNumberofILBytesJitted,
prometheus.CounterValue,
float64(process.TotalNumberofILBytesJitted),
process.Name,
)
}
return nil, nil
}

View File

@@ -0,0 +1,198 @@
// +build windows
package collector
import (
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("netframework_clrloading", NewNETFramework_NETCLRLoadingCollector)
}
// A NETFramework_NETCLRLoadingCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRLoading metrics
type NETFramework_NETCLRLoadingCollector struct {
BytesinLoaderHeap *prometheus.Desc
Currentappdomains *prometheus.Desc
CurrentAssemblies *prometheus.Desc
CurrentClassesLoaded *prometheus.Desc
TotalAppdomains *prometheus.Desc
Totalappdomainsunloaded *prometheus.Desc
TotalAssemblies *prometheus.Desc
TotalClassesLoaded *prometheus.Desc
TotalNumberofLoadFailures *prometheus.Desc
}
// NewNETFramework_NETCLRLoadingCollector ...
func NewNETFramework_NETCLRLoadingCollector() (Collector, error) {
const subsystem = "netframework_clrloading"
return &NETFramework_NETCLRLoadingCollector{
BytesinLoaderHeap: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "loader_heap_size_bytes"),
"Displays the current size, in bytes, of the memory committed by the class loader across all application domains. Committed memory is the physical space reserved in the disk paging file.",
[]string{"process"},
nil,
),
Currentappdomains: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "appdomains_loaded_current"),
"Displays the current number of application domains loaded in this application.",
[]string{"process"},
nil,
),
CurrentAssemblies: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "assemblies_loaded_current"),
"Displays the current number of assemblies loaded across all application domains in the currently running application. If the assembly is loaded as domain-neutral from multiple application domains, this counter is incremented only once.",
[]string{"process"},
nil,
),
CurrentClassesLoaded: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "classes_loaded_current"),
"Displays the current number of classes loaded in all assemblies.",
[]string{"process"},
nil,
),
TotalAppdomains: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "appdomains_loaded_total"),
"Displays the peak number of application domains loaded since the application started.",
[]string{"process"},
nil,
),
Totalappdomainsunloaded: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "appdomains_unloaded_total"),
"Displays the total number of application domains unloaded since the application started. If an application domain is loaded and unloaded multiple times, this counter increments each time the application domain is unloaded.",
[]string{"process"},
nil,
),
TotalAssemblies: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "assemblies_loaded_total"),
"Displays the total number of assemblies loaded since the application started. If the assembly is loaded as domain-neutral from multiple application domains, this counter is incremented only once.",
[]string{"process"},
nil,
),
TotalClassesLoaded: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "classes_loaded_total"),
"Displays the cumulative number of classes loaded in all assemblies since the application started.",
[]string{"process"},
nil,
),
TotalNumberofLoadFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "class_load_failures_total"),
"Displays the peak number of classes that have failed to load since the application started.",
[]string{"process"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *NETFramework_NETCLRLoadingCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting win32_perfrawdata_netframework_netclrloading metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_NETFramework_NETCLRLoading struct {
Name string
AssemblySearchLength uint32
BytesinLoaderHeap uint64
Currentappdomains uint32
CurrentAssemblies uint32
CurrentClassesLoaded uint32
PercentTimeLoading uint64
Rateofappdomains uint32
Rateofappdomainsunloaded uint32
RateofAssemblies uint32
RateofClassesLoaded uint32
RateofLoadFailures uint32
TotalAppdomains uint32
Totalappdomainsunloaded uint32
TotalAssemblies uint32
TotalClassesLoaded uint32
TotalNumberofLoadFailures uint32
}
func (c *NETFramework_NETCLRLoadingCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_NETFramework_NETCLRLoading
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
for _, process := range dst {
if process.Name == "_Global_" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.BytesinLoaderHeap,
prometheus.GaugeValue,
float64(process.BytesinLoaderHeap),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.Currentappdomains,
prometheus.GaugeValue,
float64(process.Currentappdomains),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.CurrentAssemblies,
prometheus.GaugeValue,
float64(process.CurrentAssemblies),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.CurrentClassesLoaded,
prometheus.GaugeValue,
float64(process.CurrentClassesLoaded),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TotalAppdomains,
prometheus.CounterValue,
float64(process.TotalAppdomains),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.Totalappdomainsunloaded,
prometheus.CounterValue,
float64(process.Totalappdomainsunloaded),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TotalAssemblies,
prometheus.CounterValue,
float64(process.TotalAssemblies),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TotalClassesLoaded,
prometheus.CounterValue,
float64(process.TotalClassesLoaded),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TotalNumberofLoadFailures,
prometheus.CounterValue,
float64(process.TotalNumberofLoadFailures),
process.Name,
)
}
return nil, nil
}

View File

@@ -0,0 +1,164 @@
// +build windows
package collector
import (
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("netframework_clrlocksandthreads", NewNETFramework_NETCLRLocksAndThreadsCollector)
}
// A NETFramework_NETCLRLocksAndThreadsCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRLocksAndThreads metrics
type NETFramework_NETCLRLocksAndThreadsCollector struct {
CurrentQueueLength *prometheus.Desc
NumberofcurrentlogicalThreads *prometheus.Desc
NumberofcurrentphysicalThreads *prometheus.Desc
Numberofcurrentrecognizedthreads *prometheus.Desc
Numberoftotalrecognizedthreads *prometheus.Desc
QueueLengthPeak *prometheus.Desc
TotalNumberofContentions *prometheus.Desc
}
// NewNETFramework_NETCLRLocksAndThreadsCollector ...
func NewNETFramework_NETCLRLocksAndThreadsCollector() (Collector, error) {
const subsystem = "netframework_clrlocksandthreads"
return &NETFramework_NETCLRLocksAndThreadsCollector{
CurrentQueueLength: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "current_queue_length"),
"Displays the total number of threads that are currently waiting to acquire a managed lock in the application.",
[]string{"process"},
nil,
),
NumberofcurrentlogicalThreads: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "current_logical_threads"),
"Displays the number of current managed thread objects in the application. This counter maintains the count of both running and stopped threads. ",
[]string{"process"},
nil,
),
NumberofcurrentphysicalThreads: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "physical_threads_current"),
"Displays the number of native operating system threads created and owned by the common language runtime to act as underlying threads for managed thread objects. This counter's value does not include the threads used by the runtime in its internal operations; it is a subset of the threads in the operating system process.",
[]string{"process"},
nil,
),
Numberofcurrentrecognizedthreads: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "recognized_threads_current"),
"Displays the number of threads that are currently recognized by the runtime. These threads are associated with a corresponding managed thread object. The runtime does not create these threads, but they have run inside the runtime at least once.",
[]string{"process"},
nil,
),
Numberoftotalrecognizedthreads: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "recognized_threads_total"),
"Displays the total number of threads that have been recognized by the runtime since the application started. These threads are associated with a corresponding managed thread object. The runtime does not create these threads, but they have run inside the runtime at least once.",
[]string{"process"},
nil,
),
QueueLengthPeak: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "queue_length_total"),
"Displays the total number of threads that waited to acquire a managed lock since the application started.",
[]string{"process"},
nil,
),
TotalNumberofContentions: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "contentions_total"),
"Displays the total number of times that threads in the runtime have attempted to acquire a managed lock unsuccessfully.",
[]string{"process"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *NETFramework_NETCLRLocksAndThreadsCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting win32_perfrawdata_netframework_netclrlocksandthreads metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_NETFramework_NETCLRLocksAndThreads struct {
Name string
ContentionRatePersec uint32
CurrentQueueLength uint32
NumberofcurrentlogicalThreads uint32
NumberofcurrentphysicalThreads uint32
Numberofcurrentrecognizedthreads uint32
Numberoftotalrecognizedthreads uint32
QueueLengthPeak uint32
QueueLengthPersec uint32
RateOfRecognizedThreadsPersec uint32
TotalNumberofContentions uint32
}
func (c *NETFramework_NETCLRLocksAndThreadsCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_NETFramework_NETCLRLocksAndThreads
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
for _, process := range dst {
if process.Name == "_Global_" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.CurrentQueueLength,
prometheus.GaugeValue,
float64(process.CurrentQueueLength),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.NumberofcurrentlogicalThreads,
prometheus.GaugeValue,
float64(process.NumberofcurrentlogicalThreads),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.NumberofcurrentphysicalThreads,
prometheus.GaugeValue,
float64(process.NumberofcurrentphysicalThreads),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.Numberofcurrentrecognizedthreads,
prometheus.GaugeValue,
float64(process.Numberofcurrentrecognizedthreads),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.Numberoftotalrecognizedthreads,
prometheus.CounterValue,
float64(process.Numberoftotalrecognizedthreads),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.QueueLengthPeak,
prometheus.CounterValue,
float64(process.QueueLengthPeak),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TotalNumberofContentions,
prometheus.CounterValue,
float64(process.TotalNumberofContentions),
process.Name,
)
}
return nil, nil
}

View File

@@ -0,0 +1,302 @@
// +build windows
package collector
import (
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("netframework_clrmemory", NewNETFramework_NETCLRMemoryCollector)
}
// A NETFramework_NETCLRMemoryCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRMemory metrics
type NETFramework_NETCLRMemoryCollector struct {
AllocatedBytes *prometheus.Desc
FinalizationSurvivors *prometheus.Desc
HeapSize *prometheus.Desc
PromotedBytes *prometheus.Desc
NumberGCHandles *prometheus.Desc
NumberCollections *prometheus.Desc
NumberInducedGC *prometheus.Desc
NumberofPinnedObjects *prometheus.Desc
NumberofSinkBlocksinuse *prometheus.Desc
NumberTotalCommittedBytes *prometheus.Desc
NumberTotalreservedBytes *prometheus.Desc
TimeinGC *prometheus.Desc
PromotedFinalizationMemoryfromGen0 *prometheus.Desc
PromotedMemoryfromGen0 *prometheus.Desc
PromotedMemoryfromGen1 *prometheus.Desc
}
// NewNETFramework_NETCLRMemoryCollector ...
func NewNETFramework_NETCLRMemoryCollector() (Collector, error) {
const subsystem = "netframework_clrmemory"
return &NETFramework_NETCLRMemoryCollector{
AllocatedBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "allocated_bytes_total"),
"Displays the total number of bytes allocated on the garbage collection heap.",
[]string{"process"},
nil,
),
FinalizationSurvivors: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "finalization_survivors"),
"Displays the number of garbage-collected objects that survive a collection because they are waiting to be finalized.",
[]string{"process"},
nil,
),
HeapSize: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "heap_size_bytes"),
"Displays the maximum bytes that can be allocated; it does not indicate the current number of bytes allocated.",
[]string{"process", "area"},
nil,
),
PromotedBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "promoted_bytes"),
"Displays the bytes that were promoted from the generation to the next one during the last GC. Memory is promoted when it survives a garbage collection.",
[]string{"process", "area"},
nil,
),
NumberGCHandles: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "number_gc_handles"),
"Displays the current number of garbage collection handles in use. Garbage collection handles are handles to resources external to the common language runtime and the managed environment.",
[]string{"process"},
nil,
),
NumberCollections: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "collections_total"),
"Displays the number of times the generation objects are garbage collected since the application started.",
[]string{"process", "area"},
nil,
),
NumberInducedGC: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "induced_gc_total"),
"Displays the peak number of times garbage collection was performed because of an explicit call to GC.Collect.",
[]string{"process"},
nil,
),
NumberofPinnedObjects: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "number_pinned_objects"),
"Displays the number of pinned objects encountered in the last garbage collection.",
[]string{"process"},
nil,
),
NumberofSinkBlocksinuse: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "number_sink_blocksinuse"),
"Displays the current number of synchronization blocks in use. Synchronization blocks are per-object data structures allocated for storing synchronization information. They hold weak references to managed objects and must be scanned by the garbage collector.",
[]string{"process"},
nil,
),
NumberTotalCommittedBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "committed_bytes"),
"Displays the amount of virtual memory, in bytes, currently committed by the garbage collector. Committed memory is the physical memory for which space has been reserved in the disk paging file.",
[]string{"process"},
nil,
),
NumberTotalreservedBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "reserved_bytes"),
"Displays the amount of virtual memory, in bytes, currently reserved by the garbage collector. Reserved memory is the virtual memory space reserved for the application when no disk or main memory pages have been used.",
[]string{"process"},
nil,
),
TimeinGC: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "gc_time_percent"),
"Displays the percentage of time that was spent performing a garbage collection in the last sample.",
[]string{"process"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *NETFramework_NETCLRMemoryCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting win32_perfrawdata_netframework_netclrmemory metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_NETFramework_NETCLRMemory struct {
Name string
AllocatedBytesPersec uint64
FinalizationSurvivors uint64
Frequency_PerfTime uint64
Gen0heapsize uint64
Gen0PromotedBytesPerSec uint64
Gen1heapsize uint64
Gen1PromotedBytesPerSec uint64
Gen2heapsize uint64
LargeObjectHeapsize uint64
NumberBytesinallHeaps uint64
NumberGCHandles uint64
NumberGen0Collections uint64
NumberGen1Collections uint64
NumberGen2Collections uint64
NumberInducedGC uint64
NumberofPinnedObjects uint64
NumberofSinkBlocksinuse uint64
NumberTotalcommittedBytes uint64
NumberTotalreservedBytes uint64
PercentTimeinGC uint32
ProcessID uint64
PromotedFinalizationMemoryfromGen0 uint64
PromotedMemoryfromGen0 uint64
PromotedMemoryfromGen1 uint64
}
func (c *NETFramework_NETCLRMemoryCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_NETFramework_NETCLRMemory
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
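// Per-generation values (heap sizes, promoted bytes and collection counts) are
// emitted as separate series distinguished by the "area" label (Gen0, Gen1,
// Gen2, LOH).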
for _, process := range dst {
if process.Name == "_Global_" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.AllocatedBytes,
prometheus.CounterValue,
float64(process.AllocatedBytesPersec),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.FinalizationSurvivors,
prometheus.GaugeValue,
float64(process.FinalizationSurvivors),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.HeapSize,
prometheus.GaugeValue,
float64(process.Gen0heapsize),
process.Name,
"Gen0",
)
ch <- prometheus.MustNewConstMetric(
c.PromotedBytes,
prometheus.GaugeValue,
float64(process.Gen0PromotedBytesPerSec),
process.Name,
"Gen0",
)
ch <- prometheus.MustNewConstMetric(
c.HeapSize,
prometheus.GaugeValue,
float64(process.Gen1heapsize),
process.Name,
"Gen1",
)
ch <- prometheus.MustNewConstMetric(
c.PromotedBytes,
prometheus.GaugeValue,
float64(process.Gen1PromotedBytesPerSec),
process.Name,
"Gen1",
)
ch <- prometheus.MustNewConstMetric(
c.HeapSize,
prometheus.GaugeValue,
float64(process.Gen2heapsize),
process.Name,
"Gen2",
)
ch <- prometheus.MustNewConstMetric(
c.HeapSize,
prometheus.GaugeValue,
float64(process.LargeObjectHeapsize),
process.Name,
"LOH",
)
ch <- prometheus.MustNewConstMetric(
c.NumberGCHandles,
prometheus.GaugeValue,
float64(process.NumberGCHandles),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.NumberCollections,
prometheus.CounterValue,
float64(process.NumberGen0Collections),
process.Name,
"Gen0",
)
ch <- prometheus.MustNewConstMetric(
c.NumberCollections,
prometheus.CounterValue,
float64(process.NumberGen1Collections),
process.Name,
"Gen1",
)
ch <- prometheus.MustNewConstMetric(
c.NumberCollections,
prometheus.CounterValue,
float64(process.NumberGen2Collections),
process.Name,
"Gen2",
)
ch <- prometheus.MustNewConstMetric(
c.NumberInducedGC,
prometheus.CounterValue,
float64(process.NumberInducedGC),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.NumberofPinnedObjects,
prometheus.GaugeValue,
float64(process.NumberofPinnedObjects),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.NumberofSinkBlocksinuse,
prometheus.GaugeValue,
float64(process.NumberofSinkBlocksinuse),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.NumberTotalCommittedBytes,
prometheus.GaugeValue,
float64(process.NumberTotalcommittedBytes),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.NumberTotalreservedBytes,
prometheus.GaugeValue,
float64(process.NumberTotalreservedBytes),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TimeinGC,
prometheus.GaugeValue,
float64(process.PercentTimeinGC)/float64(process.Frequency_PerfTime),
process.Name,
)
}
return nil, nil
}

View File

@@ -0,0 +1,147 @@
// +build windows
package collector
import (
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("netframework_clrremoting", NewNETFramework_NETCLRRemotingCollector)
}
// A NETFramework_NETCLRRemotingCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRRemoting metrics
type NETFramework_NETCLRRemotingCollector struct {
Channels *prometheus.Desc
ContextBoundClassesLoaded *prometheus.Desc
ContextBoundObjects *prometheus.Desc
ContextProxies *prometheus.Desc
Contexts *prometheus.Desc
TotalRemoteCalls *prometheus.Desc
}
// NewNETFramework_NETCLRRemotingCollector ...
func NewNETFramework_NETCLRRemotingCollector() (Collector, error) {
const subsystem = "netframework_clrremoting"
return &NETFramework_NETCLRRemotingCollector{
Channels: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "channels_total"),
"Displays the total number of remoting channels registered across all application domains since application started.",
[]string{"process"},
nil,
),
ContextBoundClassesLoaded: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "context_bound_classes_loaded"),
"Displays the current number of context-bound classes that are loaded.",
[]string{"process"},
nil,
),
ContextBoundObjects: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "context_bound_objects_total"),
"Displays the total number of context-bound objects allocated.",
[]string{"process"},
nil,
),
ContextProxies: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "context_proxies_total"),
"Displays the total number of remoting proxy objects in this process since it started.",
[]string{"process"},
nil,
),
Contexts: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "contexts"),
"Displays the current number of remoting contexts in the application.",
[]string{"process"},
nil,
),
TotalRemoteCalls: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "remote_calls_total"),
"Displays the total number of remote procedure calls invoked since the application started.",
[]string{"process"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *NETFramework_NETCLRRemotingCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting win32_perfrawdata_netframework_netclrremoting metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_NETFramework_NETCLRRemoting struct {
Name string
Channels uint32
ContextBoundClassesLoaded uint32
ContextBoundObjectsAllocPersec uint32
ContextProxies uint32
Contexts uint32
RemoteCallsPersec uint32
TotalRemoteCalls uint32
}
func (c *NETFramework_NETCLRRemotingCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_NETFramework_NETCLRRemoting
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
for _, process := range dst {
if process.Name == "_Global_" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.Channels,
prometheus.CounterValue,
float64(process.Channels),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ContextBoundClassesLoaded,
prometheus.GaugeValue,
float64(process.ContextBoundClassesLoaded),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ContextBoundObjects,
prometheus.CounterValue,
float64(process.ContextBoundObjectsAllocPersec),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ContextProxies,
prometheus.CounterValue,
float64(process.ContextProxies),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.Contexts,
prometheus.GaugeValue,
float64(process.Contexts),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TotalRemoteCalls,
prometheus.CounterValue,
float64(process.TotalRemoteCalls),
process.Name,
)
}
return nil, nil
}

View File

@@ -0,0 +1,118 @@
// +build windows
package collector
import (
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("netframework_clrsecurity", NewNETFramework_NETCLRSecurityCollector)
}
// A NETFramework_NETCLRSecurityCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRSecurity metrics
type NETFramework_NETCLRSecurityCollector struct {
NumberLinkTimeChecks *prometheus.Desc
TimeinRTchecks *prometheus.Desc
StackWalkDepth *prometheus.Desc
TotalRuntimeChecks *prometheus.Desc
}
// NewNETFramework_NETCLRSecurityCollector ...
func NewNETFramework_NETCLRSecurityCollector() (Collector, error) {
const subsystem = "netframework_clrsecurity"
return &NETFramework_NETCLRSecurityCollector{
NumberLinkTimeChecks: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "link_time_checks_total"),
"Displays the total number of link-time code access security checks since the application started.",
[]string{"process"},
nil,
),
TimeinRTchecks: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "rt_checks_time_percent"),
"Displays the percentage of time spent performing runtime code access security checks in the last sample.",
[]string{"process"},
nil,
),
StackWalkDepth: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "stack_walk_depth"),
"Displays the depth of the stack during that last runtime code access security check.",
[]string{"process"},
nil,
),
TotalRuntimeChecks: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "runtime_checks_total"),
"Displays the total number of runtime code access security checks performed since the application started.",
[]string{"process"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *NETFramework_NETCLRSecurityCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting win32_perfrawdata_netframework_netclrsecurity metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_NETFramework_NETCLRSecurity struct {
Name string
Frequency_PerfTime uint32
NumberLinkTimeChecks uint32
PercentTimeinRTchecks uint32
PercentTimeSigAuthenticating uint64
StackWalkDepth uint32
TotalRuntimeChecks uint32
}
func (c *NETFramework_NETCLRSecurityCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_NETFramework_NETCLRSecurity
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
for _, process := range dst {
if process.Name == "_Global_" {
continue
}
ch <- prometheus.MustNewConstMetric(
c.NumberLinkTimeChecks,
prometheus.CounterValue,
float64(process.NumberLinkTimeChecks),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TimeinRTchecks,
prometheus.GaugeValue,
float64(process.PercentTimeinRTchecks)/float64(process.Frequency_PerfTime),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.StackWalkDepth,
prometheus.GaugeValue,
float64(process.StackWalkDepth),
process.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TotalRuntimeChecks,
prometheus.CounterValue,
float64(process.TotalRuntimeChecks),
process.Name,
)
}
return nil, nil
}

246
collector/os.go Normal file
View File

@@ -0,0 +1,246 @@
// +build windows
package collector
import (
"errors"
"time"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("os", NewOSCollector)
}
// A OSCollector is a Prometheus collector for WMI metrics
type OSCollector struct {
OSInformation *prometheus.Desc
PhysicalMemoryFreeBytes *prometheus.Desc
PagingFreeBytes *prometheus.Desc
VirtualMemoryFreeBytes *prometheus.Desc
ProcessesLimit *prometheus.Desc
ProcessMemoryLimitBytes *prometheus.Desc
Processes *prometheus.Desc
Users *prometheus.Desc
PagingLimitBytes *prometheus.Desc
VirtualMemoryBytes *prometheus.Desc
VisibleMemoryBytes *prometheus.Desc
Time *prometheus.Desc
Timezone *prometheus.Desc
}
// NewOSCollector ...
func NewOSCollector() (Collector, error) {
const subsystem = "os"
return &OSCollector{
OSInformation: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "info"),
"OperatingSystem.Caption, OperatingSystem.Version",
[]string{"product", "version"},
nil,
),
PagingLimitBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "paging_limit_bytes"),
"OperatingSystem.SizeStoredInPagingFiles",
nil,
nil,
),
PagingFreeBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "paging_free_bytes"),
"OperatingSystem.FreeSpaceInPagingFiles",
nil,
nil,
),
PhysicalMemoryFreeBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "physical_memory_free_bytes"),
"OperatingSystem.FreePhysicalMemory",
nil,
nil,
),
Time: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "time"),
"OperatingSystem.LocalDateTime",
nil,
nil,
),
Timezone: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "timezone"),
"OperatingSystem.LocalDateTime",
[]string{"timezone"},
nil,
),
Processes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "processes"),
"OperatingSystem.NumberOfProcesses",
nil,
nil,
),
ProcessesLimit: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "processes_limit"),
"OperatingSystem.MaxNumberOfProcesses",
nil,
nil,
),
ProcessMemoryLimitBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "process_memory_limix_bytes"),
"OperatingSystem.MaxProcessMemorySize",
nil,
nil,
),
Users: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "users"),
"OperatingSystem.NumberOfUsers",
nil,
nil,
),
VirtualMemoryBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "virtual_memory_bytes"),
"OperatingSystem.TotalVirtualMemorySize",
nil,
nil,
),
VisibleMemoryBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "visible_memory_bytes"),
"OperatingSystem.TotalVisibleMemorySize",
nil,
nil,
),
VirtualMemoryFreeBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "virtual_memory_free_bytes"),
"OperatingSystem.FreeVirtualMemory",
nil,
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *OSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting os metrics:", desc, err)
return err
}
return nil
}
// Win32_OperatingSystem docs:
// - https://msdn.microsoft.com/en-us/library/aa394239 - Win32_OperatingSystem class
type Win32_OperatingSystem struct {
Caption string
FreePhysicalMemory uint64
FreeSpaceInPagingFiles uint64
FreeVirtualMemory uint64
LocalDateTime time.Time
MaxNumberOfProcesses uint32
MaxProcessMemorySize uint64
NumberOfProcesses uint32
NumberOfUsers uint32
SizeStoredInPagingFiles uint64
TotalVirtualMemorySize uint64
TotalVisibleMemorySize uint64
Version string
}
func (c *OSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_OperatingSystem
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
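// Win32_OperatingSystem has exactly one instance, so only dst[0] is used.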
ch <- prometheus.MustNewConstMetric(
c.OSInformation,
prometheus.GaugeValue,
1.0,
dst[0].Caption,
dst[0].Version,
)
ch <- prometheus.MustNewConstMetric(
c.PhysicalMemoryFreeBytes,
prometheus.GaugeValue,
float64(dst[0].FreePhysicalMemory*1024), // KiB -> bytes
)
localTime := dst[0].LocalDateTime // avoid shadowing the "time" package
ch <- prometheus.MustNewConstMetric(
c.Time,
prometheus.GaugeValue,
float64(localTime.Unix()),
)
timezoneName, _ := localTime.Zone()
ch <- prometheus.MustNewConstMetric(
c.Timezone,
prometheus.GaugeValue,
1.0,
timezoneName,
)
ch <- prometheus.MustNewConstMetric(
c.PagingFreeBytes,
prometheus.GaugeValue,
float64(dst[0].FreeSpaceInPagingFiles*1024), // KiB -> bytes
)
ch <- prometheus.MustNewConstMetric(
c.VirtualMemoryFreeBytes,
prometheus.GaugeValue,
float64(dst[0].FreeVirtualMemory*1024), // KiB -> bytes
)
ch <- prometheus.MustNewConstMetric(
c.ProcessesLimit,
prometheus.GaugeValue,
float64(dst[0].MaxNumberOfProcesses),
)
ch <- prometheus.MustNewConstMetric(
c.ProcessMemoryLimitBytes,
prometheus.GaugeValue,
float64(dst[0].MaxProcessMemorySize*1024), // KiB -> bytes
)
ch <- prometheus.MustNewConstMetric(
c.Processes,
prometheus.GaugeValue,
float64(dst[0].NumberOfProcesses),
)
ch <- prometheus.MustNewConstMetric(
c.Users,
prometheus.GaugeValue,
float64(dst[0].NumberOfUsers),
)
ch <- prometheus.MustNewConstMetric(
c.PagingLimitBytes,
prometheus.GaugeValue,
float64(dst[0].SizeStoredInPagingFiles*1024), // KiB -> bytes
)
ch <- prometheus.MustNewConstMetric(
c.VirtualMemoryBytes,
prometheus.GaugeValue,
float64(dst[0].TotalVirtualMemorySize*1024), // KiB -> bytes
)
ch <- prometheus.MustNewConstMetric(
c.VisibleMemoryBytes,
prometheus.GaugeValue,
float64(dst[0].TotalVisibleMemorySize*1024), // KiB -> bytes
)
return nil, nil
}

107
collector/perflib.go Normal file
View File

@@ -0,0 +1,107 @@
package collector
import (
"fmt"
"reflect"
"strconv"
perflibCollector "github.com/leoluk/perflib_exporter/collector"
"github.com/leoluk/perflib_exporter/perflib"
"github.com/prometheus/common/log"
)
var nametable = perflib.QueryNameTable("Counter 009") // Reads the names in English TODO: validate that the English names are always present
func MapCounterToIndex(name string) string {
return strconv.Itoa(int(nametable.LookupIndex(name)))
}
func getPerflibSnapshot(objNames string) (map[string]*perflib.PerfObject, error) {
objects, err := perflib.QueryPerformanceData(objNames)
if err != nil {
return nil, err
}
indexed := make(map[string]*perflib.PerfObject)
for _, obj := range objects {
indexed[obj.Name] = obj
}
return indexed, nil
}
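// unmarshalObject copies the counters of a perflib object into the slice that
// vs points to, matching counter names against `perflib:"..."` struct tags.
// All tagged fields must be float64; a settable "Name" field receives the
// instance name. A minimal sketch of a target type and its use (illustrative
// only, names are not part of this changeset):
//
//   type diskStats struct {
//       Name        string
//       ReadsPerSec float64 `perflib:"Disk Reads/sec"`
//   }
//
//   objects, err := getPerflibSnapshot(MapCounterToIndex("LogicalDisk"))
//   if err == nil {
//       var disks []diskStats
//       err = unmarshalObject(objects["LogicalDisk"], &disks)
//   }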
func unmarshalObject(obj *perflib.PerfObject, vs interface{}) error {
if obj == nil {
return fmt.Errorf("counter not found")
}
rv := reflect.ValueOf(vs)
if rv.Kind() != reflect.Ptr || rv.IsNil() {
return fmt.Errorf("%v is nil or not a pointer to slice", reflect.TypeOf(vs))
}
ev := rv.Elem()
if ev.Kind() != reflect.Slice {
return fmt.Errorf("%v is not slice", reflect.TypeOf(vs))
}
// Ensure sufficient length
if ev.Cap() < len(obj.Instances) {
nvs := reflect.MakeSlice(ev.Type(), len(obj.Instances), len(obj.Instances))
ev.Set(nvs)
}
for idx, instance := range obj.Instances {
target := ev.Index(idx)
rt := target.Type()
counters := make(map[string]*perflib.PerfCounter, len(instance.Counters))
for _, ctr := range instance.Counters {
if ctr.Def.IsBaseValue && !ctr.Def.IsNanosecondCounter {
counters[ctr.Def.Name+"_Base"] = ctr
} else {
counters[ctr.Def.Name] = ctr
}
}
for i := 0; i < target.NumField(); i++ {
f := rt.Field(i)
tag := f.Tag.Get("perflib")
if tag == "" {
continue
}
ctr, found := counters[tag]
if !found {
log.Debugf("missing counter %q, have %v", tag, counterMapKeys(counters))
continue
}
if !target.Field(i).CanSet() {
return fmt.Errorf("tagged field %v cannot be written to", f.Name)
}
if fieldType := target.Field(i).Type(); fieldType != reflect.TypeOf((*float64)(nil)).Elem() {
return fmt.Errorf("tagged field %v has wrong type %v, must be float64", f.Name, fieldType)
}
switch ctr.Def.CounterType {
case perflibCollector.PERF_ELAPSED_TIME:
target.Field(i).SetFloat(float64(ctr.Value-windowsEpoch) / float64(obj.Frequency))
case perflibCollector.PERF_100NSEC_TIMER, perflibCollector.PERF_PRECISION_100NS_TIMER:
target.Field(i).SetFloat(float64(ctr.Value) * ticksToSecondsScaleFactor)
default:
target.Field(i).SetFloat(float64(ctr.Value))
}
}
if instance.Name != "" && target.FieldByName("Name").CanSet() {
target.FieldByName("Name").SetString(instance.Name)
}
}
return nil
}
func counterMapKeys(m map[string]*perflib.PerfCounter) []string {
keys := make([]string, 0, len(m))
for k := range m {
keys = append(keys, k)
}
return keys
}

125
collector/perflib_test.go Normal file
View File

@@ -0,0 +1,125 @@
package collector
import (
"reflect"
"testing"
perflibCollector "github.com/leoluk/perflib_exporter/collector"
"github.com/leoluk/perflib_exporter/perflib"
)
type simple struct {
ValA float64 `perflib:"Something"`
ValB float64 `perflib:"Something Else"`
}
func TestUnmarshalPerflib(t *testing.T) {
cases := []struct {
name string
obj *perflib.PerfObject
expectedOutput []simple
expectError bool
}{
{
name: "nil check",
obj: nil,
expectedOutput: []simple{},
expectError: true,
},
{
name: "Simple",
obj: &perflib.PerfObject{
Instances: []*perflib.PerfInstance{
{
Counters: []*perflib.PerfCounter{
{
Def: &perflib.PerfCounterDef{
Name: "Something",
CounterType: perflibCollector.PERF_COUNTER_COUNTER,
},
Value: 123,
},
},
},
},
},
expectedOutput: []simple{{ValA: 123}},
expectError: false,
},
{
name: "Multiple properties",
obj: &perflib.PerfObject{
Instances: []*perflib.PerfInstance{
{
Counters: []*perflib.PerfCounter{
{
Def: &perflib.PerfCounterDef{
Name: "Something",
CounterType: perflibCollector.PERF_COUNTER_COUNTER,
},
Value: 123,
},
{
Def: &perflib.PerfCounterDef{
Name: "Something Else",
CounterType: perflibCollector.PERF_COUNTER_COUNTER,
},
Value: 256,
},
},
},
},
},
expectedOutput: []simple{{ValA: 123, ValB: 256}},
expectError: false,
},
{
name: "Multiple instances",
obj: &perflib.PerfObject{
Instances: []*perflib.PerfInstance{
{
Counters: []*perflib.PerfCounter{
{
Def: &perflib.PerfCounterDef{
Name: "Something",
CounterType: perflibCollector.PERF_COUNTER_COUNTER,
},
Value: 321,
},
},
},
{
Counters: []*perflib.PerfCounter{
{
Def: &perflib.PerfCounterDef{
Name: "Something",
CounterType: perflibCollector.PERF_COUNTER_COUNTER,
},
Value: 231,
},
},
},
},
},
expectedOutput: []simple{{ValA: 321}, {ValA: 231}},
expectError: false,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
output := make([]simple, 0)
err := unmarshalObject(c.obj, &output)
if err != nil && !c.expectError {
t.Errorf("Did not expect error, got %q", err)
}
if err == nil && c.expectError {
t.Errorf("Expected an error, but got ok")
}
if err == nil && !reflect.DeepEqual(output, c.expectedOutput) {
t.Errorf("Output mismatch, expected %+v, got %+v", c.expectedOutput, output)
}
})
}
}

401
collector/process.go Normal file
View File

@@ -0,0 +1,401 @@
// +build windows
package collector
import (
"strconv"
"strings"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"gopkg.in/alecthomas/kingpin.v2"
)
func init() {
registerCollector("process", NewProcessCollector)
}
var (
processWhereClause = kingpin.Flag(
"collector.process.processes-where",
"WQL 'where' clause to use in WMI metrics query. Limits the response to the processes you specify and reduces the size of the response.",
).Default("").String()
)
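// Example (hypothetical) usage narrowing the WMI query to one process family:
//
//   windows_exporter.exe --collector.process.processes-where="Name LIKE 'firefox%'"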
// A ProcessCollector is a Prometheus collector for WMI Win32_PerfRawData_PerfProc_Process metrics
type ProcessCollector struct {
StartTime *prometheus.Desc
CPUTimeTotal *prometheus.Desc
HandleCount *prometheus.Desc
IOBytesTotal *prometheus.Desc
IOOperationsTotal *prometheus.Desc
PageFaultsTotal *prometheus.Desc
PageFileBytes *prometheus.Desc
PoolBytes *prometheus.Desc
PriorityBase *prometheus.Desc
PrivateBytes *prometheus.Desc
ThreadCount *prometheus.Desc
VirtualBytes *prometheus.Desc
WorkingSet *prometheus.Desc
queryWhereClause string
}
// NewProcessCollector ...
func NewProcessCollector() (Collector, error) {
const subsystem = "process"
if *processWhereClause == "" {
log.Warn("No where-clause specified for process collector. This will generate a very large number of metrics!")
}
return &ProcessCollector{
StartTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "start_time"),
"Time of process start.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
CPUTimeTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_time_total"),
"Returns elapsed time that all of the threads of this process used the processor to execute instructions by mode (privileged, user). An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count.",
[]string{"process", "process_id", "creating_process_id", "mode"},
nil,
),
HandleCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "handle_count"),
"Total number of handles the process has open. This number is the sum of the handles currently open by each thread in the process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
IOBytesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "io_bytes_total"),
"Bytes issued to I/O operations in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations. ",
[]string{"process", "process_id", "creating_process_id", "mode"},
nil,
),
IOOperationsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "io_operations_total"),
"I/O operations issued in different modes (read, write, other). This property counts all I/O activity generated by the process to include file, network, and device I/Os. Read and write mode includes data operations; other mode includes those that do not involve data, such as control operations. ",
[]string{"process", "process_id", "creating_process_id", "mode"},
nil,
),
PageFaultsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "page_faults_total"),
"Page faults by the threads executing in this process. A page fault occurs when a thread refers to a virtual memory page that is not in its working set in main memory. This can cause the page not to be fetched from disk if it is on the standby list and hence already in main memory, or if it is in use by another process with which the page is shared.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
PageFileBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "page_file_bytes"),
"Current number of bytes this process has used in the paging file(s). Paging files are used to store pages of memory used by the process that are not contained in other files. Paging files are shared by all processes, and lack of space in paging files can prevent other processes from allocating memory.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
PoolBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "pool_bytes"),
"Pool Bytes is the last observed number of bytes in the paged or nonpaged pool. The nonpaged pool is an area of system memory (physical memory used by the operating system) for objects that cannot be written to disk, but must remain in physical memory as long as they are allocated. The paged pool is an area of system memory (physical memory used by the operating system) for objects that can be written to disk when they are not being used. Nonpaged pool bytes is calculated differently than paged pool bytes, so it might not equal the total of paged pool bytes.",
[]string{"process", "process_id", "creating_process_id", "pool"},
nil,
),
PriorityBase: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "priority_base"),
"Current base priority of this process. Threads within a process can raise and lower their own base priority relative to the process base priority of the process.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
PrivateBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "private_bytes"),
"Current number of bytes this process has allocated that cannot be shared with other processes.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
ThreadCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "thread_count"),
"Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
VirtualBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "virtual_bytes"),
"Current size, in bytes, of the virtual address space that the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite and, by using too much, the process can limit its ability to load libraries.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
WorkingSet: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "working_set"),
"Maximum number of bytes in the working set of this process at any point in time. The working set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from working sets. If they are needed, they are then soft-faulted back into the working set before they leave main memory.",
[]string{"process", "process_id", "creating_process_id"},
nil,
),
queryWhereClause: *processWhereClause,
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *ProcessCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting process metrics:", desc, err)
return err
}
return nil
}
// Win32_PerfRawData_PerfProc_Process docs:
// - https://msdn.microsoft.com/en-us/library/aa394323(v=vs.85).aspx
type Win32_PerfRawData_PerfProc_Process struct {
Name string
CreatingProcessID uint32
ElapsedTime uint64
Frequency_Object uint64
HandleCount uint32
IDProcess uint32
IODataBytesPersec uint64
IODataOperationsPersec uint64
IOOtherBytesPersec uint64
IOOtherOperationsPersec uint64
IOReadBytesPersec uint64
IOReadOperationsPersec uint64
IOWriteBytesPersec uint64
IOWriteOperationsPersec uint64
PageFaultsPersec uint32
PageFileBytes uint64
PageFileBytesPeak uint64
PercentPrivilegedTime uint64
PercentProcessorTime uint64
PercentUserTime uint64
PoolNonpagedBytes uint32
PoolPagedBytes uint32
PriorityBase uint32
PrivateBytes uint64
ThreadCount uint32
Timestamp_Object uint64
VirtualBytes uint64
VirtualBytesPeak uint64
WorkingSet uint64
WorkingSetPeak uint64
WorkingSetPrivate uint64
}
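// WorkerProcess is read from the root\WebAdministration WMI namespace (IIS);
// when a process ID matches an IIS worker process, the app pool name is
// appended to the process name below.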
type WorkerProcess struct {
AppPoolName string
ProcessId uint32
}
func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_PerfProc_Process
q := queryAllWhere(&dst, c.queryWhereClause)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
var dst_wp []WorkerProcess
q_wp := queryAll(&dst_wp)
if err := wmi.QueryNamespace(q_wp, &dst_wp, "root\\WebAdministration"); err != nil {
log.Debugf("Could not query WebAdministration namespace for IIS worker processes: %v. Skipping", err)
}
for _, process := range dst {
if process.Name == "_Total" {
continue
}
// Duplicate process names are suffixed with "#" and an index number; strip that suffix.
processName := strings.Split(process.Name, "#")[0]
pid := strconv.FormatUint(uint64(process.IDProcess), 10)
cpid := strconv.FormatUint(uint64(process.CreatingProcessID), 10)
for _, wp := range dst_wp {
if wp.ProcessId == process.IDProcess {
processName = strings.Join([]string{processName, wp.AppPoolName}, "_")
break
}
}
ch <- prometheus.MustNewConstMetric(
c.StartTime,
prometheus.GaugeValue,
// convert from Windows timestamp (1 jan 1601) to unix timestamp (1 jan 1970)
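// 116444736000000000 is the number of 100-nanosecond intervals between
// 1601-01-01 (Windows/FILETIME epoch) and 1970-01-01 (Unix epoch).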
float64(process.ElapsedTime-116444736000000000)/float64(process.Frequency_Object),
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.HandleCount,
prometheus.GaugeValue,
float64(process.HandleCount),
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.CPUTimeTotal,
prometheus.CounterValue,
float64(process.PercentPrivilegedTime)*ticksToSecondsScaleFactor,
processName,
pid,
cpid,
"privileged",
)
ch <- prometheus.MustNewConstMetric(
c.CPUTimeTotal,
prometheus.CounterValue,
float64(process.PercentUserTime)*ticksToSecondsScaleFactor,
processName,
pid,
cpid,
"user",
)
ch <- prometheus.MustNewConstMetric(
c.IOBytesTotal,
prometheus.CounterValue,
float64(process.IOOtherBytesPersec),
processName,
pid,
cpid,
"other",
)
ch <- prometheus.MustNewConstMetric(
c.IOOperationsTotal,
prometheus.CounterValue,
float64(process.IOOtherOperationsPersec),
processName,
pid,
cpid,
"other",
)
ch <- prometheus.MustNewConstMetric(
c.IOBytesTotal,
prometheus.CounterValue,
float64(process.IOReadBytesPersec),
processName,
pid,
cpid,
"read",
)
ch <- prometheus.MustNewConstMetric(
c.IOOperationsTotal,
prometheus.CounterValue,
float64(process.IOReadOperationsPersec),
processName,
pid,
cpid,
"read",
)
ch <- prometheus.MustNewConstMetric(
c.IOBytesTotal,
prometheus.CounterValue,
float64(process.IOWriteBytesPersec),
processName,
pid,
cpid,
"write",
)
ch <- prometheus.MustNewConstMetric(
c.IOOperationsTotal,
prometheus.CounterValue,
float64(process.IOWriteOperationsPersec),
processName,
pid,
cpid,
"write",
)
ch <- prometheus.MustNewConstMetric(
c.PageFaultsTotal,
prometheus.CounterValue,
float64(process.PageFaultsPersec),
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.PageFileBytes,
prometheus.GaugeValue,
float64(process.PageFileBytes),
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.PoolBytes,
prometheus.GaugeValue,
float64(process.PoolNonpagedBytes),
processName,
pid,
cpid,
"nonpaged",
)
ch <- prometheus.MustNewConstMetric(
c.PoolBytes,
prometheus.GaugeValue,
float64(process.PoolPagedBytes),
processName,
pid,
cpid,
"paged",
)
ch <- prometheus.MustNewConstMetric(
c.PriorityBase,
prometheus.GaugeValue,
float64(process.PriorityBase),
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.PrivateBytes,
prometheus.GaugeValue,
float64(process.PrivateBytes),
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.ThreadCount,
prometheus.GaugeValue,
float64(process.ThreadCount),
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.VirtualBytes,
prometheus.GaugeValue,
float64(process.VirtualBytes),
processName,
pid,
cpid,
)
ch <- prometheus.MustNewConstMetric(
c.WorkingSet,
prometheus.GaugeValue,
float64(process.WorkingSet),
processName,
pid,
cpid,
)
}
return nil, nil
}

169
collector/service.go Normal file
View File

@@ -0,0 +1,169 @@
// +build windows
package collector
import (
"strings"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"gopkg.in/alecthomas/kingpin.v2"
)
func init() {
registerCollector("service", NewserviceCollector)
}
var (
serviceWhereClause = kingpin.Flag(
"collector.service.services-where",
"WQL 'where' clause to use in WMI metrics query. Limits the response to the services you specify and reduces the size of the response.",
).Default("").String()
)
// A serviceCollector is a Prometheus collector for WMI Win32_Service metrics
type serviceCollector struct {
State *prometheus.Desc
StartMode *prometheus.Desc
Status *prometheus.Desc
queryWhereClause string
}
// NewserviceCollector ...
func NewserviceCollector() (Collector, error) {
const subsystem = "service"
if *serviceWhereClause == "" {
log.Warn("No where-clause specified for service collector. This will generate a very large number of metrics!")
}
return &serviceCollector{
State: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "state"),
"The state of the service (State)",
[]string{"name", "state"},
nil,
),
StartMode: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "start_mode"),
"The start mode of the service (StartMode)",
[]string{"name", "start_mode"},
nil,
),
Status: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "status"),
"The status of the service (Status)",
[]string{"name", "status"},
nil,
),
queryWhereClause: *serviceWhereClause,
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *serviceCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting service metrics:", desc, err)
return err
}
return nil
}
// Win32_Service docs:
// - https://msdn.microsoft.com/en-us/library/aa394418(v=vs.85).aspx
type Win32_Service struct {
Name string
State string
Status string
StartMode string
}
var (
allStates = []string{
"stopped",
"start pending",
"stop pending",
"running",
"continue pending",
"pause pending",
"paused",
"unknown",
}
allStartModes = []string{
"boot",
"system",
"auto",
"manual",
"disabled",
}
allStatuses = []string{
"ok",
"error",
"degraded",
"unknown",
"pred fail",
"starting",
"stopping",
"service",
"stressed",
"nonrecover",
"no contact",
"lost comm",
}
)
func (c *serviceCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_Service
q := queryAllWhere(&dst, c.queryWhereClause)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
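// Emit a one-hot gauge per possible state, start mode and status: 1 for the value the service currently has, 0 for all others.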
for _, service := range dst {
for _, state := range allStates {
isCurrentState := 0.0
if state == strings.ToLower(service.State) {
isCurrentState = 1.0
}
ch <- prometheus.MustNewConstMetric(
c.State,
prometheus.GaugeValue,
isCurrentState,
strings.ToLower(service.Name),
state,
)
}
for _, startMode := range allStartModes {
isCurrentStartMode := 0.0
if startMode == strings.ToLower(service.StartMode) {
isCurrentStartMode = 1.0
}
ch <- prometheus.MustNewConstMetric(
c.StartMode,
prometheus.GaugeValue,
isCurrentStartMode,
strings.ToLower(service.Name),
startMode,
)
}
for _, status := range allStatuses {
isCurrentStatus := 0.0
if status == strings.ToLower(service.Status) {
isCurrentStatus = 1.0
}
ch <- prometheus.MustNewConstMetric(
c.Status,
prometheus.GaugeValue,
isCurrentStatus,
strings.ToLower(service.Name),
status,
)
}
}
return nil, nil
}

126
collector/system.go Normal file
View File

@@ -0,0 +1,126 @@
// +build windows
package collector
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("system", NewSystemCollector, "System")
}
// A SystemCollector is a Prometheus collector for WMI metrics
type SystemCollector struct {
ContextSwitchesTotal *prometheus.Desc
ExceptionDispatchesTotal *prometheus.Desc
ProcessorQueueLength *prometheus.Desc
SystemCallsTotal *prometheus.Desc
SystemUpTime *prometheus.Desc
Threads *prometheus.Desc
}
// NewSystemCollector ...
func NewSystemCollector() (Collector, error) {
const subsystem = "system"
return &SystemCollector{
ContextSwitchesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "context_switches_total"),
"Total number of context switches (WMI source: PerfOS_System.ContextSwitchesPersec)",
nil,
nil,
),
ExceptionDispatchesTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "exception_dispatches_total"),
"Total number of exceptions dispatched (WMI source: PerfOS_System.ExceptionDispatchesPersec)",
nil,
nil,
),
ProcessorQueueLength: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "processor_queue_length"),
"Length of processor queue (WMI source: PerfOS_System.ProcessorQueueLength)",
nil,
nil,
),
SystemCallsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "system_calls_total"),
"Total number of system calls (WMI source: PerfOS_System.SystemCallsPersec)",
nil,
nil,
),
SystemUpTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "system_up_time"),
"System boot time (WMI source: PerfOS_System.SystemUpTime)",
nil,
nil,
),
Threads: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "threads"),
"Current number of threads (WMI source: PerfOS_System.Threads)",
nil,
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *SystemCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting system metrics:", desc, err)
return err
}
return nil
}
// Win32_PerfRawData_PerfOS_System docs:
// - https://web.archive.org/web/20050830140516/http://msdn.microsoft.com/library/en-us/wmisdk/wmi/win32_perfrawdata_perfos_system.asp
type system struct {
ContextSwitchesPersec float64 `perflib:"Context Switches/sec"`
ExceptionDispatchesPersec float64 `perflib:"Exception Dispatches/sec"`
ProcessorQueueLength float64 `perflib:"Processor Queue Length"`
SystemCallsPersec float64 `perflib:"System Calls/sec"`
SystemUpTime float64 `perflib:"System Up Time"`
Threads float64 `perflib:"Threads"`
}
func (c *SystemCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []system
if err := unmarshalObject(ctx.perfObjects["System"], &dst); err != nil {
return nil, err
}
ch <- prometheus.MustNewConstMetric(
c.ContextSwitchesTotal,
prometheus.CounterValue,
dst[0].ContextSwitchesPersec,
)
ch <- prometheus.MustNewConstMetric(
c.ExceptionDispatchesTotal,
prometheus.CounterValue,
dst[0].ExceptionDispatchesPersec,
)
ch <- prometheus.MustNewConstMetric(
c.ProcessorQueueLength,
prometheus.GaugeValue,
dst[0].ProcessorQueueLength,
)
ch <- prometheus.MustNewConstMetric(
c.SystemCallsTotal,
prometheus.CounterValue,
dst[0].SystemCallsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SystemUpTime,
prometheus.GaugeValue,
dst[0].SystemUpTime,
)
ch <- prometheus.MustNewConstMetric(
c.Threads,
prometheus.GaugeValue,
dst[0].Threads,
)
return nil, nil
}

174
collector/tcp.go Normal file
View File

@@ -0,0 +1,174 @@
// +build windows
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("tcp", NewTCPCollector)
}
// A TCPCollector is a Prometheus collector for WMI Win32_PerfRawData_Tcpip_TCPv4 metrics
type TCPCollector struct {
ConnectionFailures *prometheus.Desc
ConnectionsActive *prometheus.Desc
ConnectionsEstablished *prometheus.Desc
ConnectionsPassive *prometheus.Desc
ConnectionsReset *prometheus.Desc
SegmentsTotal *prometheus.Desc
SegmentsReceivedTotal *prometheus.Desc
SegmentsRetransmittedTotal *prometheus.Desc
SegmentsSentTotal *prometheus.Desc
}
// NewTCPCollector ...
func NewTCPCollector() (Collector, error) {
const subsystem = "tcp"
return &TCPCollector{
ConnectionFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connection_failures"),
"(TCP.ConnectionFailures)",
nil,
nil,
),
ConnectionsActive: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connections_active"),
"(TCP.ConnectionsActive)",
nil,
nil,
),
ConnectionsEstablished: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connections_established"),
"(TCP.ConnectionsEstablished)",
nil,
nil,
),
ConnectionsPassive: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connections_passive"),
"(TCP.ConnectionsPassive)",
nil,
nil,
),
ConnectionsReset: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "connections_reset"),
"(TCP.ConnectionsReset)",
nil,
nil,
),
SegmentsTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "segments_total"),
"(TCP.SegmentsTotal)",
nil,
nil,
),
SegmentsReceivedTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "segments_received_total"),
"(TCP.SegmentsReceivedTotal)",
nil,
nil,
),
SegmentsRetransmittedTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "segments_retransmitted_total"),
"(TCP.SegmentsRetransmittedTotal)",
nil,
nil,
),
SegmentsSentTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "segments_sent_total"),
"(TCP.SegmentsSentTotal)",
nil,
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *TCPCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting tcp metrics:", desc, err)
return err
}
return nil
}
// Win32_PerfRawData_Tcpip_TCPv4 docs
// - https://msdn.microsoft.com/en-us/library/aa394341(v=vs.85).aspx
type Win32_PerfRawData_Tcpip_TCPv4 struct {
ConnectionFailures uint64
ConnectionsActive uint64
ConnectionsEstablished uint64
ConnectionsPassive uint64
ConnectionsReset uint64
SegmentsPersec uint64
SegmentsReceivedPersec uint64
SegmentsRetransmittedPersec uint64
SegmentsSentPersec uint64
}
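// Despite the *Persec field names, Win32_PerfRawData_* classes expose raw cumulative values, which is why they are exported here as counters and rated in PromQL.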
func (c *TCPCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_Tcpip_TCPv4
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
// Counters
ch <- prometheus.MustNewConstMetric(
c.ConnectionFailures,
prometheus.CounterValue,
float64(dst[0].ConnectionFailures),
)
ch <- prometheus.MustNewConstMetric(
c.ConnectionsActive,
prometheus.CounterValue,
float64(dst[0].ConnectionsActive),
)
ch <- prometheus.MustNewConstMetric(
c.ConnectionsEstablished,
prometheus.GaugeValue,
float64(dst[0].ConnectionsEstablished),
)
ch <- prometheus.MustNewConstMetric(
c.ConnectionsPassive,
prometheus.CounterValue,
float64(dst[0].ConnectionsPassive),
)
ch <- prometheus.MustNewConstMetric(
c.ConnectionsReset,
prometheus.CounterValue,
float64(dst[0].ConnectionsReset),
)
ch <- prometheus.MustNewConstMetric(
c.SegmentsTotal,
prometheus.CounterValue,
float64(dst[0].SegmentsPersec),
)
ch <- prometheus.MustNewConstMetric(
c.SegmentsReceivedTotal,
prometheus.CounterValue,
float64(dst[0].SegmentsReceivedPersec),
)
ch <- prometheus.MustNewConstMetric(
c.SegmentsRetransmittedTotal,
prometheus.CounterValue,
float64(dst[0].SegmentsRetransmittedPersec),
)
ch <- prometheus.MustNewConstMetric(
c.SegmentsSentTotal,
prometheus.CounterValue,
float64(dst[0].SegmentsSentPersec),
)
return nil, nil
}

View File

@@ -0,0 +1,84 @@
// +build windows
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("terminal_services", NewTerminalServicesCollector)
}
// A TerminalServicesCollector is a Prometheus collector for WMI
// Win32_PerfRawData_LocalSessionManager_TerminalServices & Win32_PerfRawData_TermService_TerminalServicesSession metrics
type TerminalServicesCollector struct {
Local_session_count *prometheus.Desc
}
// NewTerminalServicesCollector ...
func NewTerminalServicesCollector() (Collector, error) {
const subsystem = "terminal_services"
return &TerminalServicesCollector{
Local_session_count: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "local_session_count"),
"Number of Terminal Services sessions",
[]string{"session"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *TerminalServicesCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collectTSSessionCount(ch); err != nil {
log.Error("failed collecting terminal services session count metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_LocalSessionManager_TerminalServices struct {
ActiveSessions uint32
InactiveSessions uint32
TotalSessions uint32
}
func (c *TerminalServicesCollector) collectTSSessionCount(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_LocalSessionManager_TerminalServices
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
ch <- prometheus.MustNewConstMetric(
c.Local_session_count,
prometheus.GaugeValue,
float64(dst[0].ActiveSessions),
"active",
)
ch <- prometheus.MustNewConstMetric(
c.Local_session_count,
prometheus.GaugeValue,
float64(dst[0].InactiveSessions),
"inactive",
)
ch <- prometheus.MustNewConstMetric(
c.Local_session_count,
prometheus.GaugeValue,
float64(dst[0].TotalSessions),
"total",
)
return nil, nil
}

299
collector/textfile.go Normal file
View File

@@ -0,0 +1,299 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !notextfile
package collector
import (
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"sort"
"strings"
"time"
"github.com/dimchansky/utfbom"
"github.com/prometheus/client_golang/prometheus"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/common/log"
kingpin "gopkg.in/alecthomas/kingpin.v2"
)
var (
textFileDirectory = kingpin.Flag(
"collector.textfile.directory",
"Directory to read text files with metrics from.",
).Default("C:\\Program Files\\wmi_exporter\\textfile_inputs").String()
mtimeDesc = prometheus.NewDesc(
"wmi_textfile_mtime_seconds",
"Unixtime mtime of textfiles successfully read.",
[]string{"file"},
nil,
)
)
type textFileCollector struct {
path string
// Only set for testing to get predictable output.
mtime *float64
}
func init() {
registerCollector("textfile", NewTextFileCollector)
}
// NewTextFileCollector returns a new Collector exposing metrics read from files
// in the given textfile directory.
func NewTextFileCollector() (Collector, error) {
return &textFileCollector{
path: *textFileDirectory,
}, nil
}
func convertMetricFamily(metricFamily *dto.MetricFamily, ch chan<- prometheus.Metric) {
var valType prometheus.ValueType
var val float64
allLabelNames := map[string]struct{}{}
for _, metric := range metricFamily.Metric {
labels := metric.GetLabel()
for _, label := range labels {
if _, ok := allLabelNames[label.GetName()]; !ok {
allLabelNames[label.GetName()] = struct{}{}
}
}
}
for _, metric := range metricFamily.Metric {
if metric.TimestampMs != nil {
log.Warnf("Ignoring unsupported custom timestamp on textfile collector metric %v", metric)
}
labels := metric.GetLabel()
var names []string
var values []string
for _, label := range labels {
names = append(names, label.GetName())
values = append(values, label.GetValue())
}
for k := range allLabelNames {
present := false
for _, name := range names {
if k == name {
present = true
break
}
}
if !present {
names = append(names, k)
values = append(values, "")
}
}
metricType := metricFamily.GetType()
switch metricType {
case dto.MetricType_COUNTER:
valType = prometheus.CounterValue
val = metric.Counter.GetValue()
case dto.MetricType_GAUGE:
valType = prometheus.GaugeValue
val = metric.Gauge.GetValue()
case dto.MetricType_UNTYPED:
valType = prometheus.UntypedValue
val = metric.Untyped.GetValue()
case dto.MetricType_SUMMARY:
quantiles := map[float64]float64{}
for _, q := range metric.Summary.Quantile {
quantiles[q.GetQuantile()] = q.GetValue()
}
ch <- prometheus.MustNewConstSummary(
prometheus.NewDesc(
*metricFamily.Name,
metricFamily.GetHelp(),
names, nil,
),
metric.Summary.GetSampleCount(),
metric.Summary.GetSampleSum(),
quantiles, values...,
)
case dto.MetricType_HISTOGRAM:
buckets := map[float64]uint64{}
for _, b := range metric.Histogram.Bucket {
buckets[b.GetUpperBound()] = b.GetCumulativeCount()
}
ch <- prometheus.MustNewConstHistogram(
prometheus.NewDesc(
*metricFamily.Name,
metricFamily.GetHelp(),
names, nil,
),
metric.Histogram.GetSampleCount(),
metric.Histogram.GetSampleSum(),
buckets, values...,
)
default:
log.Errorf("unknown metric type for file")
continue
}
if metricType == dto.MetricType_GAUGE || metricType == dto.MetricType_COUNTER || metricType == dto.MetricType_UNTYPED {
ch <- prometheus.MustNewConstMetric(
prometheus.NewDesc(
*metricFamily.Name,
metricFamily.GetHelp(),
names, nil,
),
valType, val, values...,
)
}
}
}
func (c *textFileCollector) exportMTimes(mtimes map[string]time.Time, ch chan<- prometheus.Metric) {
// Export the mtimes of the successful files.
if len(mtimes) > 0 {
// Sorting is needed for predictable output comparison in tests.
filenames := make([]string, 0, len(mtimes))
for filename := range mtimes {
filenames = append(filenames, filename)
}
sort.Strings(filenames)
for _, filename := range filenames {
mtime := float64(mtimes[filename].UnixNano() / 1e9)
if c.mtime != nil {
mtime = *c.mtime
}
ch <- prometheus.MustNewConstMetric(mtimeDesc, prometheus.GaugeValue, mtime, filename)
}
}
}
type carriageReturnFilteringReader struct {
r io.Reader
}
// Read returns data from the underlying io.Reader, but with \r filtered out
func (cr carriageReturnFilteringReader) Read(p []byte) (int, error) {
buf := make([]byte, len(p))
n, err := cr.r.Read(buf)
if err != nil && err != io.EOF {
return n, err
}
pi := 0
for i := 0; i < n; i++ {
if buf[i] != '\r' {
p[pi] = buf[i]
pi++
}
}
return pi, err
}
// Update implements the Collector interface.
func (c *textFileCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
error := 0.0
mtimes := map[string]time.Time{}
// Iterate over files and accumulate their metrics.
files, err := ioutil.ReadDir(c.path)
if err != nil && c.path != "" {
log.Errorf("Error reading textfile collector directory %q: %s", c.path, err)
error = 1.0
}
fileLoop:
for _, f := range files {
if !strings.HasSuffix(f.Name(), ".prom") {
continue
}
path := filepath.Join(c.path, f.Name())
log.Debugf("Processing file %q", path)
file, err := os.Open(path)
if err != nil {
log.Errorf("Error opening %q: %v", path, err)
error = 1.0
continue
}
var parser expfmt.TextParser
r, encoding := utfbom.Skip(carriageReturnFilteringReader{r: file})
if err = checkBOM(encoding); err != nil {
log.Errorf("Invalid file encoding detected in %s: %s - file must be UTF8", path, err.Error())
error = 1.0
continue
}
parsedFamilies, err := parser.TextToMetricFamilies(r)
closeErr := file.Close()
if closeErr != nil {
log.Warnf("Error closing file: %v", err)
}
if err != nil {
log.Errorf("Error parsing %q: %v", path, err)
error = 1.0
continue
}
for _, mf := range parsedFamilies {
for _, m := range mf.Metric {
if m.TimestampMs != nil {
log.Errorf("Textfile %q contains unsupported client-side timestamps, skipping entire file", path)
error = 1.0
continue fileLoop
}
}
if mf.Help == nil {
help := fmt.Sprintf("Metric read from %s", path)
mf.Help = &help
}
}
// Only set this once it has been parsed and validated, so that
// a failure does not appear fresh.
mtimes[f.Name()] = f.ModTime()
for _, mf := range parsedFamilies {
convertMetricFamily(mf, ch)
}
}
c.exportMTimes(mtimes, ch)
// Export if there were errors.
ch <- prometheus.MustNewConstMetric(
prometheus.NewDesc(
"wmi_textfile_scrape_error",
"1 if there was an error opening or reading a file, 0 otherwise",
nil, nil,
),
prometheus.GaugeValue, error,
)
return nil
}
func checkBOM(encoding utfbom.Encoding) error {
if encoding == utfbom.Unknown || encoding == utfbom.UTF8 {
return nil
}
return fmt.Errorf("%s", encoding.String())
}

View File

@@ -0,0 +1,47 @@
package collector
import (
"github.com/dimchansky/utfbom"
"io/ioutil"
"strings"
"testing"
)
func TestCRFilter(t *testing.T) {
sr := strings.NewReader("line 1\r\nline 2")
cr := carriageReturnFilteringReader{r: sr}
b, err := ioutil.ReadAll(cr)
if err != nil {
t.Error(err)
}
if string(b) != "line 1\nline 2" {
t.Errorf("Unexpected output %q", b)
}
}
func TestCheckBOM(t *testing.T) {
testdata := []struct {
encoding utfbom.Encoding
err string
}{
{utfbom.Unknown, ""},
{utfbom.UTF8, ""},
{utfbom.UTF16BigEndian, "UTF16BigEndian"},
{utfbom.UTF16LittleEndian, "UTF16LittleEndian"},
{utfbom.UTF32BigEndian, "UTF32BigEndian"},
{utfbom.UTF32LittleEndian, "UTF32LittleEndian"},
}
for _, d := range testdata {
err := checkBOM(d.encoding)
if d.err == "" && err != nil {
t.Error(err)
}
if d.err != "" && err == nil {
t.Errorf("Missing expected error %s", d.err)
}
if err != nil && !strings.Contains(err.Error(), d.err) {
t.Error(err)
}
}
}

103
collector/thermalzone.go Normal file
View File

@@ -0,0 +1,103 @@
package collector
import (
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("thermalzone", NewThermalZoneCollector)
}
// A thermalZoneCollector is a Prometheus collector for WMI Win32_PerfRawData_Counters_ThermalZoneInformation metrics
type thermalZoneCollector struct {
PercentPassiveLimit *prometheus.Desc
Temperature *prometheus.Desc
ThrottleReasons *prometheus.Desc
}
// NewThermalZoneCollector ...
func NewThermalZoneCollector() (Collector, error) {
const subsystem = "thermalzone"
return &thermalZoneCollector{
Temperature: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "temperature_celsius"),
"(Temperature)",
[]string{
"name",
},
nil,
),
PercentPassiveLimit: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "percent_passive_limit"),
"(PercentPassiveLimit)",
[]string{
"name",
},
nil,
),
ThrottleReasons: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "throttle_reasons"),
"(ThrottleReasons)",
[]string{
"name",
},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *thermalZoneCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting thermalzone metrics:", desc, err)
return err
}
return nil
}
// Win32_PerfRawData_Counters_ThermalZoneInformation docs:
// https://wutils.com/wmi/root/cimv2/win32_perfrawdata_counters_thermalzoneinformation/
type Win32_PerfRawData_Counters_ThermalZoneInformation struct {
Name string
HighPrecisionTemperature uint32
PercentPassiveLimit uint32
ThrottleReasons uint32
}
func (c *thermalZoneCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_Counters_ThermalZoneInformation
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
for _, info := range dst {
// Divide by 10 and subtract 273.15 to convert decikelvin (tenths of a kelvin) to degrees Celsius, e.g. 3032 -> 30.05.
ch <- prometheus.MustNewConstMetric(
c.Temperature,
prometheus.GaugeValue,
(float64(info.HighPrecisionTemperature)/10.0)-273.15,
info.Name,
)
ch <- prometheus.MustNewConstMetric(
c.PercentPassiveLimit,
prometheus.GaugeValue,
float64(info.PercentPassiveLimit),
info.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ThrottleReasons,
prometheus.GaugeValue,
float64(info.ThrottleReasons),
info.Name,
)
}
return nil, nil
}

344
collector/vmware.go Normal file
View File

@@ -0,0 +1,344 @@
// +build windows
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("vmware", NewVmwareCollector)
}
// A VmwareCollector is a Prometheus collector for WMI Win32_PerfRawData_vmGuestLib_VMem/Win32_PerfRawData_vmGuestLib_VCPU metrics
type VmwareCollector struct {
MemActive *prometheus.Desc
MemBallooned *prometheus.Desc
MemLimit *prometheus.Desc
MemMapped *prometheus.Desc
MemOverhead *prometheus.Desc
MemReservation *prometheus.Desc
MemShared *prometheus.Desc
MemSharedSaved *prometheus.Desc
MemShares *prometheus.Desc
MemSwapped *prometheus.Desc
MemTargetSize *prometheus.Desc
MemUsed *prometheus.Desc
CpuLimitMHz *prometheus.Desc
CpuReservationMHz *prometheus.Desc
CpuShares *prometheus.Desc
CpuStolenTotal *prometheus.Desc
CpuTimeTotal *prometheus.Desc
EffectiveVMSpeedMHz *prometheus.Desc
HostProcessorSpeedMHz *prometheus.Desc
}
// NewVmwareCollector constructs a new VmwareCollector
func NewVmwareCollector() (Collector, error) {
const subsystem = "vmware"
return &VmwareCollector{
MemActive: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_active_bytes"),
"(MemActiveMB)",
nil,
nil,
),
MemBallooned: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_ballooned_bytes"),
"(MemBalloonedMB)",
nil,
nil,
),
MemLimit: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_limit_bytes"),
"(MemLimitMB)",
nil,
nil,
),
MemMapped: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_mapped_bytes"),
"(MemMappedMB)",
nil,
nil,
),
MemOverhead: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_overhead_bytes"),
"(MemOverheadMB)",
nil,
nil,
),
MemReservation: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_reservation_bytes"),
"(MemReservationMB)",
nil,
nil,
),
MemShared: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_shared_bytes"),
"(MemSharedMB)",
nil,
nil,
),
MemSharedSaved: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_shared_saved_bytes"),
"(MemSharedSavedMB)",
nil,
nil,
),
MemShares: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_shares"),
"(MemShares)",
nil,
nil,
),
MemSwapped: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_swapped_bytes"),
"(MemSwappedMB)",
nil,
nil,
),
MemTargetSize: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_target_size_bytes"),
"(MemTargetSizeMB)",
nil,
nil,
),
MemUsed: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "mem_used_bytes"),
"(MemUsedMB)",
nil,
nil,
),
CpuLimitMHz: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_limit_mhz"),
"(CpuLimitMHz)",
nil,
nil,
),
CpuReservationMHz: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_reservation_mhz"),
"(CpuReservationMHz)",
nil,
nil,
),
CpuShares: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_shares"),
"(CpuShares)",
nil,
nil,
),
CpuStolenTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_stolen_seconds_total"),
"(CpuStolenMs)",
nil,
nil,
),
CpuTimeTotal: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "cpu_time_seconds_total"),
"(CpuTimePercents)",
nil,
nil,
),
EffectiveVMSpeedMHz: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "effective_vm_speed_mhz"),
"(EffectiveVMSpeedMHz)",
nil,
nil,
),
HostProcessorSpeedMHz: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "host_processor_speed_mhz"),
"(HostProcessorSpeedMHz)",
nil,
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *VmwareCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collectMem(ch); err != nil {
log.Error("failed collecting vmware memory metrics:", desc, err)
return err
}
if desc, err := c.collectCpu(ch); err != nil {
log.Error("failed collecting vmware cpu metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_vmGuestLib_VMem struct {
MemActiveMB uint64
MemBalloonedMB uint64
MemLimitMB uint64
MemMappedMB uint64
MemOverheadMB uint64
MemReservationMB uint64
MemSharedMB uint64
MemSharedSavedMB uint64
MemShares uint64
MemSwappedMB uint64
MemTargetSizeMB uint64
MemUsedMB uint64
}
type Win32_PerfRawData_vmGuestLib_VCPU struct {
CpuLimitMHz uint64
CpuReservationMHz uint64
CpuShares uint64
CpuStolenMs uint64
CpuTimePercents uint64
EffectiveVMSpeedMHz uint64
HostProcessorSpeedMHz uint64
}
func (c *VmwareCollector) collectMem(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_vmGuestLib_VMem
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
ch <- prometheus.MustNewConstMetric(
c.MemActive,
prometheus.GaugeValue,
mbToBytes(dst[0].MemActiveMB),
)
ch <- prometheus.MustNewConstMetric(
c.MemBallooned,
prometheus.GaugeValue,
mbToBytes(dst[0].MemBalloonedMB),
)
ch <- prometheus.MustNewConstMetric(
c.MemLimit,
prometheus.GaugeValue,
mbToBytes(dst[0].MemLimitMB),
)
ch <- prometheus.MustNewConstMetric(
c.MemMapped,
prometheus.GaugeValue,
mbToBytes(dst[0].MemMappedMB),
)
ch <- prometheus.MustNewConstMetric(
c.MemOverhead,
prometheus.GaugeValue,
mbToBytes(dst[0].MemOverheadMB),
)
ch <- prometheus.MustNewConstMetric(
c.MemReservation,
prometheus.GaugeValue,
mbToBytes(dst[0].MemReservationMB),
)
ch <- prometheus.MustNewConstMetric(
c.MemShared,
prometheus.GaugeValue,
mbToBytes(dst[0].MemSharedMB),
)
ch <- prometheus.MustNewConstMetric(
c.MemSharedSaved,
prometheus.GaugeValue,
mbToBytes(dst[0].MemSharedSavedMB),
)
ch <- prometheus.MustNewConstMetric(
c.MemShares,
prometheus.GaugeValue,
float64(dst[0].MemShares),
)
ch <- prometheus.MustNewConstMetric(
c.MemSwapped,
prometheus.GaugeValue,
mbToBytes(dst[0].MemSwappedMB),
)
ch <- prometheus.MustNewConstMetric(
c.MemTargetSize,
prometheus.GaugeValue,
mbToBytes(dst[0].MemTargetSizeMB),
)
ch <- prometheus.MustNewConstMetric(
c.MemUsed,
prometheus.GaugeValue,
mbToBytes(dst[0].MemUsedMB),
)
return nil, nil
}
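// mbToBytes converts the MB values reported by the VMware guest counters to bytes (x1024x1024).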
func mbToBytes(mb uint64) float64 {
return float64(mb) * 1024 * 1024
}
func (c *VmwareCollector) collectCpu(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_vmGuestLib_VCPU
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
ch <- prometheus.MustNewConstMetric(
c.CpuLimitMHz,
prometheus.GaugeValue,
float64(dst[0].CpuLimitMHz),
)
ch <- prometheus.MustNewConstMetric(
c.CpuReservationMHz,
prometheus.GaugeValue,
float64(dst[0].CpuReservationMHz),
)
ch <- prometheus.MustNewConstMetric(
c.CpuShares,
prometheus.GaugeValue,
float64(dst[0].CpuShares),
)
ch <- prometheus.MustNewConstMetric(
c.CpuStolenTotal,
prometheus.CounterValue,
float64(dst[0].CpuStolenMs)*ticksToSecondsScaleFactor,
)
ch <- prometheus.MustNewConstMetric(
c.CpuTimeTotal,
prometheus.CounterValue,
float64(dst[0].CpuTimePercents)*ticksToSecondsScaleFactor,
)
ch <- prometheus.MustNewConstMetric(
c.EffectiveVMSpeedMHz,
prometheus.GaugeValue,
float64(dst[0].EffectiveVMSpeedMHz),
)
ch <- prometheus.MustNewConstMetric(
c.HostProcessorSpeedMHz,
prometheus.GaugeValue,
float64(dst[0].HostProcessorSpeedMHz),
)
return nil, nil
}

63
collector/wmi.go Normal file
View File

@@ -0,0 +1,63 @@
package collector
import (
"bytes"
"reflect"
"github.com/prometheus/common/log"
)
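// className derives the WMI class name from the dynamic type of src; if src is a slice, the element type's name is used.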
func className(src interface{}) string {
s := reflect.Indirect(reflect.ValueOf(src))
t := s.Type()
if s.Kind() == reflect.Slice {
t = t.Elem()
}
return t.Name()
}
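// queryAll builds a "SELECT * FROM <class>" WQL query, deriving the class name from the type of src.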
func queryAll(src interface{}) string {
var b bytes.Buffer
b.WriteString("SELECT * FROM ")
b.WriteString(className(src))
log.Debugf("Generated WMI query %s", b.String())
return b.String()
}
func queryAllForClass(src interface{}, class string) string {
var b bytes.Buffer
b.WriteString("SELECT * FROM ")
b.WriteString(class)
log.Debugf("Generated WMI query %s", b.String())
return b.String()
}
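// queryAllWhere is like queryAll but appends an optional WHERE clause when one is given.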
func queryAllWhere(src interface{}, where string) string {
var b bytes.Buffer
b.WriteString("SELECT * FROM ")
b.WriteString(className(src))
if where != "" {
b.WriteString(" WHERE ")
b.WriteString(where)
}
log.Debugf("Generated WMI query %s", b.String())
return b.String()
}
func queryAllForClassWhere(src interface{}, class string, where string) string {
var b bytes.Buffer
b.WriteString("SELECT * FROM ")
b.WriteString(class)
if where != "" {
b.WriteString(" WHERE ")
b.WriteString(where)
}
log.Debugf("Generated WMI query %s", b.String())
return b.String()
}

115
collector/wmi_test.go Normal file
View File

@@ -0,0 +1,115 @@
package collector
import (
"testing"
)
type fakeWmiClass struct {
Name string
SomeProperty int
}
var (
mapQueryAll = func(src interface{}, class string, where string) string {
return queryAll(src)
}
mapQueryAllWhere = func(src interface{}, class string, where string) string {
return queryAllWhere(src, where)
}
mapQueryAllForClass = func(src interface{}, class string, where string) string {
return queryAllForClass(src, class)
}
mapQueryAllForClassWhere = func(src interface{}, class string, where string) string {
return queryAllForClassWhere(src, class, where)
}
)
type queryFunc func(src interface{}, class string, where string) string
func TestCreateQuery(t *testing.T) {
cases := []struct {
desc string
dst interface{}
class string
where string
queryFunc queryFunc
expected string
}{
{
desc: "queryAll on single instance",
dst: fakeWmiClass{},
queryFunc: mapQueryAll,
expected: "SELECT * FROM fakeWmiClass",
},
{
desc: "queryAll on slice",
dst: []fakeWmiClass{},
queryFunc: mapQueryAll,
expected: "SELECT * FROM fakeWmiClass",
},
{
desc: "queryAllWhere on single instance",
dst: fakeWmiClass{},
where: "foo = bar",
queryFunc: mapQueryAllWhere,
expected: "SELECT * FROM fakeWmiClass WHERE foo = bar",
},
{
desc: "queryAllWhere on slice",
dst: []fakeWmiClass{},
where: "foo = bar",
queryFunc: mapQueryAllWhere,
expected: "SELECT * FROM fakeWmiClass WHERE foo = bar",
},
{
desc: "queryAllWhere on single instance with empty where",
dst: fakeWmiClass{},
queryFunc: mapQueryAllWhere,
expected: "SELECT * FROM fakeWmiClass",
},
{
desc: "queryAllForClass on single instance",
dst: fakeWmiClass{},
class: "someClass",
queryFunc: mapQueryAllForClass,
expected: "SELECT * FROM someClass",
},
{
desc: "queryAllForClass on slice",
dst: []fakeWmiClass{},
class: "someClass",
queryFunc: mapQueryAllForClass,
expected: "SELECT * FROM someClass",
},
{
desc: "queryAllForClassWhere on single instance",
dst: fakeWmiClass{},
class: "someClass",
where: "foo = bar",
queryFunc: mapQueryAllForClassWhere,
expected: "SELECT * FROM someClass WHERE foo = bar",
},
{
desc: "queryAllForClassWhere on slice",
dst: []fakeWmiClass{},
class: "someClass",
where: "foo = bar",
queryFunc: mapQueryAllForClassWhere,
expected: "SELECT * FROM someClass WHERE foo = bar",
},
{
desc: "queryAllForClassWhere on single instance with empty where",
dst: fakeWmiClass{},
class: "someClass",
queryFunc: mapQueryAllForClassWhere,
expected: "SELECT * FROM someClass",
},
}
for _, c := range cases {
t.Run(c.desc, func(t *testing.T) {
if q := c.queryFunc(c.dst, c.class, c.where); q != c.expected {
t.Errorf("Case %q failed: Expected %q, got %q", c.desc, c.expected, q)
}
})
}
}

View File

@@ -1,27 +0,0 @@
# example configuration file for windows_exporter
collectors:
enabled: cpu,cpu_info,exchange,iis,logical_disk,memory,net,os,performancecounter,process,remote_fx,service,system,tcp,time,terminal_services,textfile
collector:
textfile:
directories:
- 'C:\MyDir1'
- 'C:\MyDir2'
service:
include: "windows_exporter"
performancecounter:
objects: |-
- name: photon_udp
object: "Photon Socket Server: UDP"
instances: ["*"]
counters:
- name: "UDP: Datagrams in"
metric: "photon_udp_datagrams"
labels:
direction: "in"
- name: "UDP: Datagrams out"
metric: "photon_udp_datagrams"
labels:
direction: "out"
log:
level: warn

View File

@@ -0,0 +1,140 @@
{{ template "head" . }}
{{ template "prom_content_head" . }}
<h1>Node Overview - {{ reReplaceAll "(.*?://)([^:/]+?)(:\\d+)?/.*" "$2" .Params.instance }}</h1>
<h3>CPU Usage</h3>
<div id="cpuGraph"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#cpuGraph"),
expr: "sum by (mode)(irate(wmi_cpu_time_total{job='node',instance='{{ .Params.instance }}',mode!='idle'}[5m]))",
renderer: 'area',
max: {{ with printf "count(count by (cpu)(wmi_cpu_time_total{job='node',instance='%s'}))" .Params.instance | query }}{{ . | first | value }}{{ else}}undefined{{end}},
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yTitle: 'Cores'
})
</script>
<h3>Network Utilization</h3>
<div id="networkioGraph"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#networkioGraph"),
expr: [
"irate(wmi_net_bytes_sent_total{job='node',instance='{{ .Params.instance }}',nic!~'^isatap_ec2_internal'}[5m])",
"irate(wmi_net_bytes_received_total{job='node',instance='{{ .Params.instance }}',nic!~'^isatap_ec2_internal'}[5m])",
],
min: 0,
name: [ 'sent', 'received' ],
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "B",
yTitle: 'Network IO'
})
</script>
<h3>Disk I/O Utilization</h3>
<div id="diskioGraph"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#diskioGraph"),
expr: [
"100 - irate(wmi_logical_disk_idle_seconds_total{job='node',instance='{{ .Params.instance }}',volume!~'^HarddiskVolume.*$'}[5m]) * 100",
],
min: 0,
name: '[[ volume ]]',
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "%",
yTitle: 'Disk I/O Utilization'
})
</script>
<h3>Memory</h3>
<div id="memoryGraph"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#memoryGraph"),
renderer: 'area',
expr: [
"wmi_cs_physical_memory_bytes{job='node',instance='{{ .Params.instance }}'}",
"wmi_os_physical_memory_free_bytes{job='node',instance='{{ .Params.instance }}'}",
"wmi_cs_physical_memory__bytes{job='node',instance='{{ .Params.instance }}'} - wmi_os_physical_memory_free_bytes{job='node',instance='{{.Params.instance}}'}",
"wmi_os_virtual_memory_bytes{job='node',instance='{{ .Params.instance }}'}",
],
name: ["Physical", "Free", "Used", "Virtual"],
min: 0,
yUnits: "B",
yAxisFormatter: PromConsole.NumberFormatter.humanize1024,
yHoverFormatter: PromConsole.NumberFormatter.humanize1024,
yTitle: 'Memory'
})
</script>
{{ template "prom_right_table_head" }}
<tr><th colspan="2">Overview</th></tr>
<tr>
<td>User CPU</td>
<td>{{ template "prom_query_drilldown" (args (printf "sum(irate(wmi_cpu_time_total{job='node',instance='%s',mode='user'}[5m])) * 100 / count(count by (cpu)(wmi_cpu_time_total{job='node',instance='%s'}))" .Params.instance .Params.instance) "%" "printf.1f") }}</td>
</tr>
<tr>
<td>Privileged CPU</td>
<td>{{ template "prom_query_drilldown" (args (printf "sum(irate(wmi_cpu_time_total{job='node',instance='%s',mode='privileged'}[5m])) * 100 / count(count by (cpu)(wmi_cpu_time_total{job='node',instance='%s'}))" .Params.instance .Params.instance) "%" "printf.1f") }}</td>
</tr>
<tr>
<td>Memory Total</td>
<td>{{ template "prom_query_drilldown" (args (printf "wmi_cs_physical_memory_bytes{job='node',instance='%s'}" .Params.instance) "B" "humanize1024") }}</td>
</tr>
<tr>
<td>Memory Free</td>
<td>{{ template "prom_query_drilldown" (args (printf "wmi_os_physical_memory_free_bytes{job='node',instance='%s'}" .Params.instance) "B" "humanize1024") }}</td>
</tr>
<tr>
<th colspan="2">Network</th>
</tr>
{{ range printf "wmi_net_bytes_received_total{job='node',instance='%s',nic!='isatap_ec2_internal'}" .Params.instance | query | sortByLabel "nic" }}
<tr>
<td>{{ .Labels.nic }} Received</td>
<td>{{ template "prom_query_drilldown" (args (printf "irate(wmi_net_bytes_received_total{job='node',instance='%s',nic='%s'}[5m])" .Labels.instance .Labels.nic) "B/s" "humanize") }}</td>
</tr>
<tr>
<td>{{ .Labels.nic }} Transmitted</td>
<td>{{ template "prom_query_drilldown" (args (printf "irate(wmi_net_bytes_sent_total{job='node',instance='%s',nic='%s'}[5m])" .Labels.instance .Labels.nic) "B/s" "humanize") }}</td>
</tr>
{{ end }}
</tr>
<tr>
<th colspan="2">Disks</th>
</tr>
{{ range printf "wmi_logical_disk_size_bytes{job='node',instance='%s',volume!~'^HarddiskVolume.*$'}" .Params.instance | query | sortByLabel "volume" }}
<tr>
<td>{{ .Labels.volume }} Utilization</td>
<td>{{ template "prom_query_drilldown" (args (printf "100 - irate(wmi_logical_disk_idle_seconds_total{job='node',instance='%s',volume='%s'}[5m]) * 100" .Labels.instance .Labels.volume) "%" "printf.1f") }}</td>
</tr>
{{ end }}
{{ range printf "wmi_logical_disk_size_bytes{job='node',instance='%s',volume!~'^HarddiskVolume.*$'}" .Params.instance | query | sortByLabel "volume" }}
<tr>
<td>{{ .Labels.volume }} Throughput</td>
<td>{{ template "prom_query_drilldown" (args (printf "irate(wmi_logical_disk_read_bytes_total{job='node',instance='%s',volume='%s'}[5m]) + irate(wmi_logical_disk_write_bytes_total{job='node',instance='%s',volume='%s'}[5m])" .Labels.instance .Labels.volume .Labels.instance .Labels.volume) "B/s" "humanize") }}</td>
</tr>
{{ end }}
<tr>
<th colspan="2">Filesystem Fullness</th>
</tr>
{{ define "roughlyNearZero" }}
{{ if gt .1 . }}~0{{ else }}{{ printf "%.1f" . }}{{ end }}
{{ end }}
{{ range printf "wmi_logical_disk_size_bytes{job='node',instance='%s',volume!~'^HarddiskVolume.*$'}" .Params.instance | query | sortByLabel "volume" }}
<tr>
<td>{{ .Labels.volume }}</td>
<td>{{ template "prom_query_drilldown" (args (printf "100 - wmi_logical_disk_free_bytes{job='node',instance='%s',volume='%s'} / wmi_logical_disk_size_bytes{job='node'} * 100" .Labels.instance .Labels.volume) "%" "roughlyNearZero") }}</td>
</tr>
{{ end }}
</tr>
{{ template "prom_right_table_tail" }}
{{ template "prom_content_tail" . }}
{{ template "tail" }}

View File

@@ -1,49 +1,32 @@
# Documentation
This directory contains documentation of the collectors in the windows_exporter, with information such as what metrics are exported, any flags for additional configuration, and some example usage in alerts and queries.
This directory contains documentation of the collectors in the WMI exporter, with information such as what metrics are exported, any flags for additional configuration, and some example usage in alerts and queries.
# Collectors
- [`ad`](collector.ad.md)
- [`adcs`](collector.adcs.md)
- [`adfs`](collector.adfs.md)
- [`cache`](collector.cache.md)
- [`container`](collector.container.md)
- [`cpu`](collector.cpu.md)
- [`cpu_info`](collector.cpu_info.md)
- [`dfsr`](collector.dfsr.md)
- [`dhcp`](collector.dhcp.md)
- [`diskdrive`](collector.diskdrive.md)
- [`cs`](collector.cs.md)
- [`dns`](collector.dns.md)
- [`exchange`](collector.exchange.md)
- [`fsrmquota`](collector.fsrmquota.md)
- [`hyperv`](collector.hyperv.md)
- [`iis`](collector.iis.md)
- [`license`](collector.license.md)
- [`logical_disk`](collector.logical_disk.md)
- [`logon`](collector.logon.md)
- [`memory`](collector.memory.md)
- [`mscluster`](collector.mscluster.md)
- [`msmq`](collector.msmq.md)
- [`mssql`](collector.mssql.md)
- [`netframework_clrexceptions`](collector.netframework_clrexceptions.md)
- [`netframework_clrinterop`](collector.netframework_clrinterop.md)
- [`netframework_clrjit`](collector.netframework_clrjit.md)
- [`netframework_clrloading`](collector.netframework_clrloading.md)
- [`netframework_clrlocksandthreads`](collector.netframework_clrlocksandthreads.md)
- [`netframework_clrmemory`](collector.netframework_clrmemory.md)
- [`netframework_clrremoting`](collector.netframework_clrremoting.md)
- [`netframework_clrsecurity`](collector.netframework_clrsecurity.md)
- [`net`](collector.net.md)
- [`netframework`](collector.netframework.md)
- [`nps`](collector.nps.md)
- [`os`](collector.os.md)
- [`pagefile`](collector.pagefile.md)
- [`performancecounter`](collector.performancecounter.md)
- [`physical_disk`](collector.physical_disk.md)
- [`printer`](collector.printer.md)
- [`process`](collector.process.md)
- [`remote_fx`](collector.remote_fx.md)
- [`scheduled_task`](collector.scheduled_task.md)
- [`service`](collector.service.md)
- [`smb`](collector.smb.md)
- [`smbclient`](collector.smbclient.md)
- [`smtp`](collector.smtp.md)
- [`system`](collector.system.md)
- [`tcp`](collector.tcp.md)
- [`terminal_services`](collector.terminal_services.md)
- [`textfile`](collector.textfile.md)
- [`thermalzone`](collector.thermalzone.md)
- [`time`](collector.time.md)
- [`udp`](collector.udp.md)
- [`update`](collector.update.md)
- [`vmware`](collector.vmware.md)

View File

@@ -1,47 +1,47 @@
# %name% collector
The %name% collector exposes metrics about ...
|||
-|-
Metric name prefix | `%name%`
Classes | [`...`](https://msdn.microsoft.com/en-us/library/...)
Enabled by default? | Yes/No
## Flags
### `--collector....`
Add description...
Example: `--collector....`
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_...` | ... | counter/gauge/histogram/summary | ...
### Example metric
...:
`windows_...{...} 1`
## Useful queries
### Add queries...
`...`
## Alerting examples
### Add alerts...
```yaml
- alert: ""
expr: ""
for: ""
labels:
urgency: ""
annotations:
summary: ""
```
# %name% collector
The %name% collector exposes metrics about ...
|||
-|-
Metric name prefix | `%name%`
Classes | [`...`](https://msdn.microsoft.com/en-us/library/...)
Enabled by default? | Yes/No
## Flags
### `--collector....`
Add description...
Example: `--collector....`
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_...` | ... | counter/gauge/histogram/summary | ...
### Example metric
...:
`wmi_...{...} 1`
## Useful queries
### Add queries...
`...`
## Alerting examples
### Add alerts...
```yaml
- alert: ""
expr: ""
for: ""
labels:
urgency: ""
annotations:
summary: ""
```

View File

@@ -1,89 +1,88 @@
# ad collector
The ad collector exposes metrics about an Active Directory Domain Services domain controller
|||
-|-
Metric name prefix | `ad`
Classes | [`Win32_PerfRawData_DirectoryServices_DirectoryServices`](https://msdn.microsoft.com/en-us/library/ms803980.aspx)
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_ad_address_book_operations_total` | _Not yet documented_ | counter | `operation`
`windows_ad_address_book_client_sessions` | _Not yet documented_ | gauge | None
`windows_ad_approximate_highest_distinguished_name_tag` | _Not yet documented_ | gauge | None
`windows_ad_atq_estimated_delay_seconds` | _Not yet documented_ | gauge | None
`windows_ad_atq_outstanding_requests` | _Not yet documented_ | gauge | None
`windows_ad_atq_average_request_latency` | _Not yet documented_ | gauge | None
`windows_ad_atq_current_threads` | _Not yet documented_ | gauge | `service`
`windows_ad_searches_total` | _Not yet documented_ | counter | `scope`
`windows_ad_database_operations_total` | _Not yet documented_ | counter | `operation`
`windows_ad_binds_total` | _Not yet documented_ | counter | `bind_method`
`windows_ad_replication_highest_usn` | _Not yet documented_ | counter | `state`
`windows_ad_replication_data_intrasite_bytes_total` | _Not yet documented_ | counter | `direction`
`windows_ad_replication_data_intersite_bytes_total` | _Not yet documented_ | counter | `direction`
`windows_ad_replication_inbound_sync_objects_remaining` | _Not yet documented_ | gauge | None
`windows_ad_replication_inbound_link_value_updates_remaining` | _Not yet documented_ | gauge | None
`windows_ad_replication_inbound_objects_updated_total` | _Not yet documented_ | counter | None
`windows_ad_replication_inbound_objects_filtered_total` | _Not yet documented_ | counter | None
`windows_ad_replication_inbound_properties_updated_total` | _Not yet documented_ | counter | None
`windows_ad_replication_inbound_properties_filtered_total` | _Not yet documented_ | counter | None
`windows_ad_replication_pending_operations` | _Not yet documented_ | gauge | None
`windows_ad_replication_pending_synchronizations` | _Not yet documented_ | gauge | None
`windows_ad_replication_sync_requests_total` | _Not yet documented_ | counter | None
`windows_ad_replication_sync_requests_success_total` | _Not yet documented_ | counter | None
`windows_ad_replication_sync_requests_schema_mismatch_failure_total` | _Not yet documented_ | counter | None
`windows_ad_name_translations_total` | _Not yet documented_ | counter | `target_name`
`windows_ad_change_monitors_registered` | _Not yet documented_ | gauge | None
`windows_ad_change_monitor_updates_pending` | _Not yet documented_ | gauge | None
`windows_ad_name_cache_hits_total` | _Not yet documented_ | counter | None
`windows_ad_name_cache_lookups_total` | _Not yet documented_ | counter | None
`windows_ad_directory_operations_total` | _Not yet documented_ | counter | `operation`, `origin`
`windows_ad_directory_search_suboperations_total` | _Not yet documented_ | counter | None
`windows_ad_security_descriptor_propagation_events_total` | _Not yet documented_ | counter | None
`windows_ad_security_descriptor_propagation_events_queued` | _Not yet documented_ | gauge | None
`windows_ad_security_descriptor_propagation_access_wait_total_seconds` | _Not yet documented_ | gauge | None
`windows_ad_security_descriptor_propagation_items_queued_total` | _Not yet documented_ | counter | None
`windows_ad_directory_service_threads` | _Not yet documented_ | gauge | None
`windows_ad_ldap_closed_connections_total` | _Not yet documented_ | counter | None
`windows_ad_ldap_opened_connections_total` | _Not yet documented_ | counter | `type`
`windows_ad_ldap_active_threads` | _Not yet documented_ | gauge | None
`windows_ad_ldap_last_bind_time_seconds` | _Not yet documented_ | gauge | None
`windows_ad_ldap_searches_total` | _Not yet documented_ | counter | None
`windows_ad_ldap_udp_operations_total` | _Not yet documented_ | counter | None
`windows_ad_ldap_writes_total` | _Not yet documented_ | counter | None
`windows_ad_ldap_client_sessions` | This is the number of sessions opened by LDAP clients at the time the data is taken. This is helpful in determining LDAP client activity and if the DC is able to handle the load. Of course, spikes during normal periods of authentication — such as first thing in the morning — are not necessarily a problem, but long sustained periods of high values indicate an overworked DC | gauge | None
`windows_ad_link_values_cleaned_total` | _Not yet documented_ | counter | None
`windows_ad_phantom_objects_cleaned_total` | _Not yet documented_ | counter | None
`windows_ad_phantom_objects_visited_total` | _Not yet documented_ | counter | None
`windows_ad_sam_group_membership_evaluations_total` | _Not yet documented_ | counter | `group_type`
`windows_ad_sam_group_membership_global_catalog_evaluations_total` | _Not yet documented_ | counter | None
`windows_ad_sam_group_membership_evaluations_nontransitive_total` | _Not yet documented_ | counter | None
`windows_ad_sam_group_membership_evaluations_transitive_total` | _Not yet documented_ | counter | None
`windows_ad_sam_group_evaluation_latency` | _Not yet documented_ | gauge | `evaluation_type`
`windows_ad_sam_computer_creation_requests_total` | _Not yet documented_ | counter | None
`windows_ad_sam_computer_creation_successful_requests_total` | _Not yet documented_ | counter | None
`windows_ad_sam_user_creation_requests_total` | _Not yet documented_ | counter | None
`windows_ad_sam_user_creation_successful_requests_total` | _Not yet documented_ | counter | None
`windows_ad_sam_query_display_requests_total` | _Not yet documented_ | counter | None
`windows_ad_sam_enumerations_total` | _Not yet documented_ | counter | None
`windows_ad_sam_membership_changes_total` | _Not yet documented_ | counter | None
`windows_ad_sam_password_changes_total` | _Not yet documented_ | counter | None
`windows_ad_tombstoned_objects_collected_total` | _Not yet documented_ | counter | None
`windows_ad_tombstoned_objects_visited_total` | _Not yet documented_ | counter | None
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
# ad collector
The ad collector exposes metrics about an Active Directory Domain Services domain controller
|||
-|-
Metric name prefix | `ad`
Classes | [`Win32_PerfRawData_DirectoryServices_DirectoryServices`](https://msdn.microsoft.com/en-us/library/ms803980.aspx)
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_ad_address_book_operations_total` | _Not yet documented_ | counter | `operation`
`wmi_ad_address_book_client_sessions` | _Not yet documented_ | gauge | None
`wmi_ad_approximate_highest_distinguished_name_tag` | _Not yet documented_ | gauge | None
`wmi_ad_atq_estimated_delay_seconds` | _Not yet documented_ | gauge | None
`wmi_ad_atq_outstanding_requests` | _Not yet documented_ | gauge | None
`wmi_ad_atq_average_request_latency` | _Not yet documented_ | gauge | None
`wmi_ad_atq_current_threads` | _Not yet documented_ | gauge | `service`
`wmi_ad_searches_total` | _Not yet documented_ | counter | `scope`
`wmi_ad_database_operations_total` | _Not yet documented_ | counter | `operation`
`wmi_ad_binds_total` | _Not yet documented_ | counter | `bind_method`
`wmi_ad_replication_highest_usn` | _Not yet documented_ | counter | `state`
`wmi_ad_replication_data_intrasite_bytes_total` | _Not yet documented_ | counter | `direction`
`wmi_ad_replication_data_intersite_bytes_total` | _Not yet documented_ | counter | `direction`
`wmi_ad_replication_inbound_sync_objects_remaining` | _Not yet documented_ | gauge | None
`wmi_ad_replication_inbound_link_value_updates_remaining` | _Not yet documented_ | gauge | None
`wmi_ad_replication_inbound_objects_updated_total` | _Not yet documented_ | counter | None
`wmi_ad_replication_inbound_objects_filtered_total` | _Not yet documented_ | counter | None
`wmi_ad_replication_inbound_properties_updated_total` | _Not yet documented_ | counter | None
`wmi_ad_replication_inbound_properties_filtered_total` | _Not yet documented_ | counter | None
`wmi_ad_replication_pending_operations` | _Not yet documented_ | gauge | None
`wmi_ad_replication_pending_synchronizations` | _Not yet documented_ | gauge | None
`wmi_ad_replication_sync_requests_total` | _Not yet documented_ | counter | None
`wmi_ad_replication_sync_requests_success_total` | _Not yet documented_ | counter | None
`wmi_ad_replication_sync_requests_schema_mismatch_failure_total` | _Not yet documented_ | counter | None
`wmi_ad_name_translations_total` | _Not yet documented_ | counter | `target_name`
`wmi_ad_change_monitors_registered` | _Not yet documented_ | gauge | None
`wmi_ad_change_monitor_updates_pending` | _Not yet documented_ | gauge | None
`wmi_ad_name_cache_hits_total` | _Not yet documented_ | counter | None
`wmi_ad_name_cache_lookups_total` | _Not yet documented_ | counter | None
`wmi_ad_directory_operations_total` | _Not yet documented_ | counter | `operation`, `origin`
`wmi_ad_directory_search_suboperations_total` | _Not yet documented_ | counter | None
`wmi_ad_security_descriptor_propagation_events_total` | _Not yet documented_ | counter | None
`wmi_ad_security_descriptor_propagation_events_queued` | _Not yet documented_ | gauge | None
`wmi_ad_security_descriptor_propagation_access_wait_total_seconds` | _Not yet documented_ | gauge | None
`wmi_ad_security_descriptor_propagation_items_queued_total` | _Not yet documented_ | counter | None
`wmi_ad_directory_service_threads` | _Not yet documented_ | gauge | None
`wmi_ad_ldap_closed_connections_total` | _Not yet documented_ | counter | None
`wmi_ad_ldap_opened_connections_total` | _Not yet documented_ | counter | `type`
`wmi_ad_ldap_active_threads` | _Not yet documented_ | gauge | None
`wmi_ad_ldap_last_bind_time_seconds` | _Not yet documented_ | gauge | None
`wmi_ad_ldap_searches_total` | _Not yet documented_ | counter | None
`wmi_ad_ldap_udp_operations_total` | _Not yet documented_ | counter | None
`wmi_ad_ldap_writes_total` | _Not yet documented_ | counter | None
`wmi_ad_link_values_cleaned_total` | _Not yet documented_ | counter | None
`wmi_ad_phantom_objects_cleaned_total` | _Not yet documented_ | counter | None
`wmi_ad_phantom_objects_visited_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_group_membership_evaluations_total` | _Not yet documented_ | counter | `group_type`
`wmi_ad_sam_group_membership_global_catalog_evaluations_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_group_membership_evaluations_nontransitive_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_group_membership_evaluations_transitive_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_group_evaluation_latency` | _Not yet documented_ | gauge | `evaluation_type`
`wmi_ad_sam_computer_creation_requests_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_computer_creation_successful_requests_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_user_creation_requests_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_user_creation_successful_requests_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_query_display_requests_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_enumerations_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_membership_changes_total` | _Not yet documented_ | counter | None
`wmi_ad_sam_password_changes_total` | _Not yet documented_ | counter | None
`wmi_ad_tombstoned_objects_collected_total` | _Not yet documented_ | counter | None
`wmi_ad_tombstoned_objects_visited_total` | _Not yet documented_ | counter | None
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
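In the meantime, a hedged, untested starting point is to break directory operations down by type and origin using the counter documented above:
```
sum by (operation, origin) (rate(wmi_ad_directory_operations_total[5m]))
```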
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
View File
@@ -1,55 +0,0 @@
# adcs collector
The adcs collector exposes metrics about Active Directory Certificate Services. Note that this collector has only been tested against Windows Server 2019.
Other Windows Server versions may work but are not tested.
|||
-|-
Metric name prefix | `adcs`
Data source | Perflib
Counters | `Certification Authority`
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
|requests_total|Total certificate requests processed|counter|`cert_template`|
|request_processing_time_seconds|Last time elapsed for certificate requests|gauge|`cert_template`|
|retrievals_total|Total certificate retrieval requests processed|counter|`cert_template`|
|retrievals_processing_time_seconds|Last time elapsed for certificate retrieval request|gauge|`cert_template`|
|failed_requests_total|Total failed certificate requests processed|counter|`cert_template`|
|issued_requests_total|Total issued certificate requests processed|counter|`cert_template`|
|pending_requests_total|Total pending certificate requests processed|counter|`cert_template`|
|request_cryptographic_signing_time_seconds|Last time elapsed for signing operation request|gauge|`cert_template`|
|request_policy_module_processing_time_seconds|Last time elapsed for policy module processing request|gauge|`cert_template`|
|challenge_responses_total|Total certificate challenge responses processed|counter|`cert_template`|
|challenge_response_processing_time_seconds|Last time elapsed for challenge response|gauge|`cert_template`|
|signed_certificate_timestamp_lists_total|Total Signed Certificate Timestamp Lists processed|counter|`cert_template`|
|signed_certificate_timestamp_list_processing_time_seconds|Last time elapsed for Signed Certificate Timestamp List|gauge|`cert_template`|
### Example metric
```
windows_adcs_issued_requests_total{cert_template="Administrator"} 0
windows_adcs_issued_requests_total{cert_template="DirectoryEmailReplication"} 0
windows_adcs_issued_requests_total{cert_template="DomainController"} 1
windows_adcs_issued_requests_total{cert_template="DomainControllerAuthentication"} 0
windows_adcs_issued_requests_total{cert_template="EFS"} 0
windows_adcs_issued_requests_total{cert_template="EFSRecovery"} 0
windows_adcs_issued_requests_total{cert_template="KerberosAuthentication"} 0
windows_adcs_issued_requests_total{cert_template="Machine"} 0
windows_adcs_issued_requests_total{cert_template="SubCA"} 0
windows_adcs_issued_requests_total{cert_template="User"} 0
windows_adcs_issued_requests_total{cert_template="WebServer"} 0
windows_adcs_issued_requests_total{cert_template="_Total"} 1
```
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
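As a hedged, untested starting point (assuming the `failed_requests_total` counter is exposed with the same `windows_adcs_` prefix shown in the example above), failing certificate requests can be broken down per template:
```
sum by (cert_template) (rate(windows_adcs_failed_requests_total{cert_template!="_Total"}[5m]))
```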
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
View File
@@ -1,89 +1,51 @@
# adfs collector
The ADFS collector exposes metrics about Active Directory Federation Services. Note that this collector has only been tested against ADFS 4.0 / [Farm Behavior Level (FBL) 3](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server#ad-fs-farm-behavior-levels-fbl) (Server 2016).
Other ADFS versions may work but are not tested.
|||
-|-
Metric name prefix | `adfs`
Data source | Perflib
Counters | `AD FS`
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_adfs_ad_login_connection_failures_total` | Total number of connection failures between the ADFS server and the Active Directory domain controller(s) | counter | None
`windows_adfs_certificate_authentications_total` | Total number of [User Certificate](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) authentications. I.E. smart cards or mobile devices with provisioned client certificates | counter | None
`windows_adfs_device_authentications_total` | Total number of [device authentications](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/device-authentication-controls-in-ad-fs) (SignedToken, clientTLS, PkeyAuth). Device authentication is only available on ADFS 2016 or later | counter | None
`windows_adfs_extranet_account_lockouts_total` | Total number of [extranet lockouts](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection). Requires the Extranet Lockout feature to be enabled | counter | None
`windows_adfs_federated_authentications_total` | Total number of authentications from federated sources. E.G. Office365 | counter | None
`windows_adfs_passport_authentications_total` | Total number of authentications from [Microsoft Passport](https://en.wikipedia.org/wiki/Microsoft_account) (now named Microsoft Account) | counter | None
`windows_adfs_password_change_failed_total` | Total number of failed password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`windows_adfs_password_change_succeeded_total` | Total number of succeeded password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`windows_adfs_token_requests_total` | Total number of requested access tokens | counter | None
`windows_adfs_windows_integrated_authentications_total` | Total number of Windows integrated authentications using Kerberos or NTLM | counter | None
`windows_adfs_passive_requests_total` | Total number of passive (browser-based) requests | counter | None
`windows_adfs_oauth_authorization_requests_total` | Total number of incoming requests to the OAuth Authorization endpoint | counter | None
`windows_adfs_oauth_client_authentication_success_total` | Total number of successful OAuth client Authentications | counter | None
`windows_adfs_oauth_client_authentication_failure_total` | Total number of failed OAuth client Authentications | counter | None
`windows_adfs_oauth_client_credentials_failure_total` | Total number of failed OAuth Client Credentials Requests | counter | None
`windows_adfs_oauth_client_credentials_success_total` | Total number of successful RP tokens issued for OAuth Client Credentials Requests | counter | None
`windows_adfs_oauth_client_privkey_jwt_authentication_failure_total` | Total number of failed OAuth Client Private Key Jwt Authentications | counter | None
`windows_adfs_oauth_client_privkey_jwt_authentications_success_total` | Total number of successful OAuth Client Private Key Jwt Authentications | counter | None
`windows_adfs_oauth_client_secret_basic_authentications_failure_total` | Total number of failed OAuth Client Secret Basic Authentications | counter | None
`windows_adfs_oauth_client_secret_basic_authentications_success_total` | Total number of successful OAuth Client Secret Basic Authentications | counter | None
`windows_adfs_oauth_client_secret_post_authentications_failure_total` | Total number of failed OAuth Client Secret Post Authentications | counter | None
`windows_adfs_oauth_client_secret_post_authentications_success_total` | Total number of successful OAuth Client Secret Post Authentications | counter | None
`windows_adfs_oauth_client_windows_authentications_failure_total` | Total number of failed OAuth Client Windows Integrated Authentications | counter | None
`windows_adfs_oauth_client_windows_authentications_success_total` | Total number of successful OAuth Client Windows Integrated Authentications | counter | None
`windows_adfs_oauth_logon_certificate_requests_failure_total` | Total number of failed OAuth Logon Certificate Requests | counter | None
`windows_adfs_oauth_logon_certificate_token_requests_success_total` | Total number of successful RP tokens issued for OAuth Logon Certificate Requests | counter | None
`windows_adfs_oauth_password_grant_requests_failure_total` | Total number of failed OAuth Password Grant Requests | counter | None
`windows_adfs_oauth_password_grant_requests_success_total` | Total number of successful OAuth Password Grant Requests | counter | None
`windows_adfs_oauth_token_requests_success_total` | Total number of successful RP tokens issued over OAuth protocol | counter | None
`windows_adfs_samlp_token_requests_success_total` | Total number of successful RP tokens issued over SAML-P protocol | counter | None
`windows_adfs_sso_authentications_failure_total` | Total number of failed SSO authentications | counter | None
`windows_adfs_sso_authentications_success_total` | Total number of successful SSO authentications | counter | None
`windows_adfs_wsfed_token_requests_success_total` | Total number of successful RP tokens issued over WS-Fed protocol | counter | None
`windows_adfs_wstrust_token_requests_success_total` | Total number of successful RP tokens issued over WS-Trust protocol | counter | None
`windows_adfs_userpassword_authentications_failure_total` | Total number of failed AD U/P authentications | counter | None
`windows_adfs_userpassword_authentications_success_total` | Total number of successful AD U/P authentications | counter | None
`windows_adfs_external_authentications_failure_total` | Total number of failed authentications from external MFA providers | counter | None
`windows_adfs_external_authentications_success_total` | Total number of successful authentications from external MFA providers | counter | None
`windows_adfs_db_artifact_failure_total` | Total number of failures connecting to the artifact database | counter | None
`windows_adfs_db_artifact_query_time_seconds_total` | Accumulator of time taken for an artifact database query | counter | None
`windows_adfs_db_config_failure_total` | Total number of failures connecting to the configuration database | counter | None
`windows_adfs_db_config_query_time_seconds_total` | Accumulator of time taken for a configuration database query | counter | None
`windows_adfs_federation_metadata_requests_total` | Total number of Federation Metadata requests | counter | None
### Example metric
Show rate of device authentications in AD FS:
```
rate(windows_adfs_device_authentications_total[2m])
```
## Useful queries
|Query|Description|
|---|----|
|`rate(windows_adfs_oauth_password_grant_requests_failure_total[5m])`| Rate of OAuth requests failing due to bad client/resource values|
|`rate(windows_adfs_userpassword_authentications_failure_total[5m])`| Rate of `/adfs/oauth2/token/` requests failing due to bad username/password values (possible credential spraying)|
## Alerting examples
**prometheus.rules**
```yaml
- alert: "HighExtranetLockouts"
expr: "rate(windows_adfs_extranet_account_lockouts_total[2m]) > 100"
for: "10m"
labels:
severity: "high"
annotations:
summary: "High number of AD FS extranet lockouts"
description: "High number of AD FS extranet lockouts may indicate a password spray attack.\n Server: {{ $labels.instance }}\n Number of lockouts: {{ $value }}"
```
# adfs collector
The adfs collector exposes metrics about Active Directory Federation Services. Note that this collector has only been tested against ADFS 4.0 (2016).
Other ADFS versions may work but are not tested.
|||
-|-
Metric name prefix | `adfs`
Data source | Perflib
Counters | `AD FS`
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_adfs_ad_login_connection_failures` | Total number of connection failures between the ADFS server and the Active Directory domain controller(s) | counter | None
`wmi_adfs_certificate_authentications` | Total number of [User Certificate](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) authentications. I.E. smart cards or mobile devices with provisioned client certificates | counter | None
`wmi_adfs_device_authentications` | Total number of [device authentications](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/device-authentication-controls-in-ad-fs) (SignedToken, clientTLS, PkeyAuth). Device authentication is only available on ADFS 2016 or later | counter | None
`wmi_adfs_extranet_account_lockouts` | Total number of [extranet lockouts](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection). Requires the Extranet Lockout feature to be enabled | counter | None
`wmi_adfs_federated_authentications` | Total number of authentications from federated sources. E.G. Office365 | counter | None
`wmi_adfs_passport_authentications` | Total number of authentications from [Microsoft Passport](https://en.wikipedia.org/wiki/Microsoft_account) (now named Microsoft Account) | counter | None
`wmi_adfs_password_change_failed` | Total number of failed password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`wmi_adfs_password_change_succeeded` | Total number of succeeded password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`wmi_adfs_token_requests` | Total number of requested access tokens | counter | None
`wmi_adfs_windows_integrated_authentications` | Total number of Windows integrated authentications using Kerberos or NTLM | counter | None
### Example metric
Show rate of device authentications in AD FS:
```
rate(wmi_adfs_device_authentications[2m])
```
## Useful queries
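A hedged, untested example that tracks the overall rate of issued access tokens using `wmi_adfs_token_requests` from the table above:
```
rate(wmi_adfs_token_requests[5m])
```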
## Alerting examples
**prometheus.rules**
```yaml
- alert: "HighExtranetLockouts"
expr: "rate(wmi_adfs_extranet_account_lockouts[2m]) > 100"
for: "10m"
labels:
severity: "high"
annotations:
summary: "High number of AD FS extranet lockouts"
description: "High number of AD FS extranet lockouts may indicate a password spray attack.\n Server: {{ $labels.instance }}\n Number of lockouts: {{ $value }}"
```
View File
@@ -1,60 +0,0 @@
# cache collector
The cache collector exposes metrics about file system cache
|||
-|-
Metric name prefix | `cache`
Data Source | Perflib
Classes | [`Win32_PerfFormattedData_PerfOS_Cache`](https://docs.microsoft.com/en-us/previous-versions/aa394267(v=vs.85))
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_cache_async_copy_reads_total` | Number of times that a filesystem, such as NTFS, maps a page of a file into the file system cache to read a page. | counter | None
`windows_cache_async_data_maps_total` | Number of times that a filesystem, such as NTFS, maps a page of a file into the file system cache to read the page, and wishes to wait for the page to be retrieved if it is not in main memory. | counter | None
`windows_cache_async_fast_reads_total` | Number of reads from the file system cache that bypass the installed file system and retrieve the data directly from the cache. | counter | None
`windows_cache_async_mdl_reads_total` | Number of reads from the file system cache that use a Memory Descriptor List (MDL) to access the pages. | counter | None
`windows_cache_async_pin_reads_total` | Number of reads from the file system cache preparatory to writing the data back to disk. Pages read in this fashion are pinned in memory at the completion of the read. | counter | None
`windows_cache_copy_read_hits_total` | Number of copy read requests that hit the cache, that is, they did not require a disk read in order to provide access to the page in the cache. | counter | None
`windows_cache_copy_reads_total` | Number of reads from pages of the file system cache that involve a memory copy of the data from the cache to the application's buffer. | counter | None
`windows_cache_data_flushes_total` | Number of times the file system cache has flushed its contents to disk as the result of a request to flush or to satisfy a write-through file write request. | counter | None
`windows_cache_data_flush_pages_total` | Number of pages the file system cache has flushed to disk as a result of a request to flush or to satisfy a write-through file write request. | counter | None
`windows_cache_data_map_hits_total` | Number of data maps in the file system cache that could be resolved without having to retrieve a page from the disk, because the page was already in physical memory. | counter | None
`windows_cache_data_map_pins_total` | Number of data maps in the file system cache that resulted in pinning a page in main memory, an action usually preparatory to writing to the file on disk. | counter | None
`windows_cache_data_maps_total` | Number of times that a file system such as NTFS, maps a page of a file into the file system cache to read the page. | counter | None
`windows_cache_dirty_pages` | Number of dirty pages on the system cache. | gauge | None
`windows_cache_dirty_page_threshold` | Threshold for number of dirty pages on system cache. | gauge | None
`windows_cache_fast_read_not_possibles_total` | Number of attempts by an Application Program Interface (API) function call to bypass the file system to get to data in the file system cache that could not be honored without invoking the file system. | counter | None
`windows_cache_fast_read_resource_misses_total` | Number of cache misses necessitated by the lack of available resources to satisfy the request. | counter | None
`windows_cache_fast_reads_total` | Number of reads from the file system cache that bypass the installed file system and retrieve the data directly from the cache. | counter | None
`windows_cache_lazy_write_flushes_total` | Number of Lazy Write flushes the Lazy Writer thread has written to disk. Lazy Writing is the process of updating the disk after the page has been changed in memory, so that the application that changed the file does not have to wait for the disk write to be complete before proceeding. | counter | None
`windows_cache_lazy_write_pages_total` | Number of Lazy Write pages the Lazy Writer thread has written to disk. Lazy Writing is the process of updating the disk after the page has been changed in memory, so that the application that changed the file does not have to wait for the disk write to be complete before proceeding. | counter | None
`windows_cache_mdl_read_hits_total` | Number of Memory Descriptor List (MDL) Read requests to the file system cache that hit the cache, i.e., did not require disk accesses in order to provide memory access to the page(s) in the cache. | counter | None
`windows_cache_mdl_reads_total` | Number of reads from the file system cache that use a Memory Descriptor List (MDL) to access the data. | counter | None
`windows_cache_pin_read_hits_total` | Number of pin read requests that hit the file system cache, i.e., did not require a disk read in order to provide access to the page in the file system cache. While pinned, a page's physical address in the file system cache will not be altered. | counter | None
`windows_cache_pin_reads_total` | Number of reads into the file system cache preparatory to writing the data back to disk. Pages read in this fashion are pinned in memory at the completion of the read. While pinned, a page's physical address in the file system cache will not be altered. | counter | None
`windows_cache_read_aheads_total` | Number of reads from the file system cache in which the Cache detects sequential access to a file. The read aheads permit the data to be transferred in larger blocks than those being requested by the application, reducing the overhead per access. | counter | None
`windows_cache_sync_copy_reads_total` | Number of reads from pages of the file system cache that involve a memory copy of the data from the cache to the application's buffer. The file system will not regain control until the copy operation is complete, even if the disk must be accessed to retrieve the page. | counter | None
`windows_cache_sync_data_maps_total` | Number of times that a file system such as NTFS maps a page of a file into the file system cache to read the page. | counter | None
`windows_cache_sync_fast_reads_total` | Number of reads from the file system cache that bypass the installed file system and retrieve the data directly from the cache. If the data is not in the cache, the request (application program call) will wait until the data has been retrieved from disk. | counter | None
`windows_cache_sync_mdl_reads_total` | Number of reads from the file system cache that use a Memory Descriptor List (MDL) to access the pages. If the accessed page(s) are not in main memory, the caller will wait for the pages to fault in from the disk. | counter | None
`windows_cache_sync_pin_reads_total` | Number of reads into the file system cache preparatory to writing the data back to disk. The file system will not regain control until the page is pinned in the file system cache, in particular if the disk must be accessed to retrieve the page. | counter | None
### Example metric
Percentage of copy reads that hit the cache
```
windows_cache_copy_read_hits_total / windows_cache_copy_reads_total * 100
```
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
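A hedged, untested example that shows how much of the dirty-page budget is in use, based on the two gauges above:
```
windows_cache_dirty_pages / windows_cache_dirty_page_threshold
```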
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
View File
@@ -1,51 +1,42 @@
# container collector
The container collector exposes metrics about containers running on a Hyper-V system
|||
-|-
Metric name prefix | `container`
Data source | [HCS](https://learn.microsoft.com/en-us/virtualization/api/hcs/overview)
Enabled by default? | No
## Flags
None
## Metrics
| Name | Description | Type | Labels |
|------------------------------------------------------------|----------------------------------------|---------|----------------------------------------------------------|
| `windows_container_available` | Available | counter | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_count` | Number of containers | gauge | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_cpu_usage_seconds_kernelmode` | Run time in Kernel mode in Seconds | counter | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_cpu_usage_seconds_usermode` | Run Time in User mode in Seconds | counter | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_cpu_usage_seconds_total` | Total Run time in Seconds | counter | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_memory_usage_commit_bytes` | Memory Usage Commit Bytes | gauge | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_memory_usage_commit_peak_bytes` | Memory Usage Commit Peak Bytes | gauge | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_memory_usage_private_working_set_bytes` | Memory Usage Private Working Set Bytes | gauge | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_network_receive_bytes_total` | Bytes Received on Interface | counter | `container_id`,`namespace`,`pod`,`container`,`interface` |
| `windows_container_network_receive_packets_total` | Packets Received on Interface | counter | `container_id`,`namespace`,`pod`,`container`,`interface` |
| `windows_container_network_receive_packets_dropped_total` | Dropped Incoming Packets on Interface | counter | `container_id`,`namespace`,`pod`,`container`,`interface` |
| `windows_container_network_transmit_bytes_total` | Bytes Sent on Interface | counter | `container_id`,`namespace`,`pod`,`container`,`interface` |
| `windows_container_network_transmit_packets_total` | Packets Sent on Interface | counter | `container_id`,`namespace`,`pod`,`container`,`interface` |
| `windows_container_network_transmit_packets_dropped_total` | Dropped Outgoing Packets on Interface | counter | `container_id`,`namespace`,`pod`,`container`,`interface` |
| `windows_container_storage_read_count_normalized_total` | Read Count Normalized | counter | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_storage_read_size_bytes_total` | Read Size Bytes | counter | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_storage_write_count_normalized_total` | Write Count Normalized | counter | `container_id`,`namespace`,`pod`,`container` |
| `windows_container_storage_write_size_bytes_total` | Write Size Bytes | counter | `container_id`,`namespace`,`pod`,`container` |
### Example metric
_windows_container_network_receive_bytes_total{container_id="docker://1bd30e8b8ac28cbd76a9b697b4d7bb9d760267b0733d1bc55c60024e98d1e43e",interface="822179E7-002C-4280-ABBA-28BCFE401826"} 9.3305343e+07_
This metric means that a total of _9.3305343e+07_ bytes were received on interface _822179E7-002C-4280-ABBA-28BCFE401826_ for container _docker://1bd30e8b8ac28cbd76a9b697b4d7bb9d760267b0733d1bc55c60024e98d1e43e_
## Useful queries
Attach labels namespace/pod/container for windows container metrics.
```
# kube_pod_container_info(a metric of kube-state-metrics) has labels namespace/pod/container/container_id for a container, while windows container metrics only have container_id.
# Attaching labels namespace/pod/container for windows container metrics, is useful to query for windows pods.
windows_container_network_receive_bytes_total * on(container_id) group_left(namespace, pod, container) kube_pod_container_info{container_id!=""}
```
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
# container collector
The container collector exposes metrics about containers running on the system
|||
-|-
Metric name prefix | `container`
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_container_available` | Available | counter | `container_id`
`wmi_container_count` | Number of containers | gauge | `container_id`
`wmi_container_cpu_usage_seconds_kernelmode` | Run time in Kernel mode in Seconds | counter | `container_id`
`wmi_container_cpu_usage_seconds_usermode` | Run Time in User mode in Seconds | counter | `container_id`
`wmi_container_cpu_usage_seconds_total` | Total Run time in Seconds | counter | `container_id`
`wmi_container_memory_usage_commit_bytes` | Memory Usage Commit Bytes | gauge | `container_id`
`wmi_container_memory_usage_commit_peak_bytes` | Memory Usage Commit Peak Bytes | gauge | `container_id`
`wmi_container_memory_usage_private_working_set_bytes` | Memory Usage Private Working Set Bytes | gauge | `container_id`
`wmi_container_network_receive_bytes_total` | Bytes Received on Interface | counter | `container_id`, `interface`
`wmi_container_network_receive_packets_total` | Packets Received on Interface | counter | `container_id`, `interface`
`wmi_container_network_receive_packets_dropped_total` | Dropped Incoming Packets on Interface | counter | `container_id`, `interface`
`wmi_container_network_transmit_bytes_total` | Bytes Sent on Interface | counter | `container_id`, `interface`
`wmi_container_network_transmit_packets_total` | Packets Sent on Interface | counter | `container_id`, `interface`
`wmi_container_network_transmit_packets_dropped_total` | Dropped Outgoing Packets on Interface | counter | `container_id`, `interface`
### Example metric
_wmi_container_network_receive_bytes_total{container_id="docker://1bd30e8b8ac28cbd76a9b697b4d7bb9d760267b0733d1bc55c60024e98d1e43e",interface="822179E7-002C-4280-ABBA-28BCFE401826"} 9.3305343e+07_
This metric means that a total of _9.3305343e+07_ bytes were received on interface _822179E7-002C-4280-ABBA-28BCFE401826_ for container _docker://1bd30e8b8ac28cbd76a9b697b4d7bb9d760267b0733d1bc55c60024e98d1e43e_
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
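A hedged, untested example showing per-container, per-interface receive throughput in bytes per second:
```
rate(wmi_container_network_receive_bytes_total[5m])
```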
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
View File
@@ -16,72 +16,45 @@ None
## Metrics
These metrics are available on all versions of Windows:
| Name | Description | Type | Labels |
|--------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|-----------------|
| `windows_cpu_logical_processor` | Number of installed logical processors | counter | `core`, `state` |
| `windows_cpu_cstate_seconds_total` | Time spent in low-power idle states | counter | `core`, `state` |
| `windows_cpu_time_total` | Time that processor spent in different modes (dpc, idle, interrupt, privileged, user) | counter | `core`, `mode` |
| `windows_cpu_interrupts_total` | Total number of received and serviced hardware interrupts | counter | `core` |
| `windows_cpu_dpcs_total` | Total number of received and serviced deferred procedure calls (DPCs) | counter | `core` |
| `windows_cpu_clock_interrupts_total` | Total number of received and serviced clock tick interrupts | counter | `core` |
| `windows_cpu_idle_break_events_total` | Total number of times the processor was woken from idle | counter | `core` |
| `windows_cpu_parking_status` | Parking Status represents whether a processor is parked or not | gauge | `core` |
| `windows_cpu_core_frequency_mhz` | Core frequency in megahertz | gauge | `core` |
| `windows_cpu_processor_performance_total` | Processor Performance is the number of CPU cycles executing instructions by each core; it is believed to be similar to the value that the APERF MSR would show, were it exposed | counter | `core` |
| `windows_cpu_processor_mperf_total` | Processor MPerf Total is proportional to the number of TSC ticks each core has accumulated while executing instructions. Due to the manner in which it is presented, it should be scaled by 1e2 to properly line up with Processor Performance Total. As above, it is believed to be closely related to the MPERF MSR. | counter | `core` |
| `windows_cpu_processor_rtc_total` | RTC total is assumed to represent the 64Hz tick rate in Windows. It is not by itself useful, but can be used with `windows_cpu_processor_utility_total` to more accurately measure CPU utilisation than with `windows_cpu_time_total` | counter | `core` |
| `windows_cpu_processor_utility_total` | Processor Utility Total is a newer, more accurate measure of CPU utilization, in particular handling modern CPUs with variant CPU frequencies. The rate of this counter divided by the rate of `windows_cpu_processor_rtc_total` should provide an accurate view of CPU utilisation on modern systems, as observed in Task Manager. | counter | `core` |
| `windows_cpu_processor_privileged_utility_total` | Processor Privileged Utility Total, when used in a similar fashion to `windows_cpu_processor_utility_total` will show the portion of CPU utilization which is happening in privileged mode. | counter | `core` |
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_cpu_cstate_seconds_total` | Time spent in low-power idle states | counter | `core`, `state`
`wmi_cpu_time_total` | Time that processor spent in different modes (idle, user, system, ...) | counter | `core`, `mode`
`wmi_cpu_interrupts_total` | Total number of received and serviced hardware interrupts | counter | `core`
`wmi_cpu_dpcs_total` | Total number of received and serviced deferred procedure calls (DPCs) | counter | `core`
These metrics are only exposed on Windows Server 2008R2 and later:
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_cpu_clock_interrupts_total` | Total number of received and serviced clock tick interrupts | counter | `core`
`wmi_cpu_idle_break_events_total` | Total number of times the processor was woken from idle | counter | `core`
`wmi_cpu_parking_status` | Parking Status represents whether a processor is parked or not | gauge | `core`
`wmi_cpu_core_frequency_mhz` | Core frequency in megahertz | gauge | `core`
`wmi_cpu_processor_performance` | Processor Performance is the average performance of the processor while it is executing instructions, as a percentage of the nominal performance of the processor. On some processors, Processor Performance may exceed 100% | gauge | `core`
### Example metric
Show frequency of host CPU cores
```
windows_cpu_core_frequency_mhz{instance="localhost"}
wmi_cpu_core_frequency_mhz{instance="localhost"}
```
## Useful queries
Show cpu usage by mode.
```
sum by (mode) (irate(windows_cpu_time_total{instance="localhost"}[5m]))
sum by (mode) (irate(wmi_cpu_time_total{instance="localhost"}[5m]))
```
Show per-cpu utilisation using the processor utility metrics
```
rate(windows_cpu_processor_utility_total{instance="localhost"}[5m]) / rate(windows_cpu_processor_rtc_total{instance="localhost"}[5m])
```
Show actual average CPU frequency in Hz
```
avg by(instance) (
1e4 * windows_cpu_core_frequency_mhz{}
* rate(windows_cpu_processor_performance_total{}[5m])
/ rate(windows_cpu_processor_mperf_total{}[5m])
)
```
## Alerting examples
**prometheus.rules**
```yaml
# Alert on hosts with more than 80% CPU usage over a 10 minute period
- alert: CpuUsage
expr: 100 - (avg by (instance) (irate(windows_cpu_time_total{mode="idle"}[2m])) * 100) > 80
expr: 100 - (avg by (instance) (irate(wmi_cpu_time_total{mode="idle"}[2m])) * 100) > 80
for: 10m
labels:
severity: warning
annotations:
summary: "CPU Usage (instance {{ $labels.instance }})"
description: "CPU Usage is more than 80%\n VALUE = {{ $value }}\n LABELS: {{ $labels }}"
# Alert on hosts which are not boosting their CPU frequencies
- alert: NoCpuTurbo
expr: |
avg by(instance) (
1e4 * windows_cpu_core_frequency_mhz{}
* rate(windows_cpu_processor_performance_total{}[5m])
/ rate(windows_cpu_processor_mperf_total{}[5m])
)
/
(1e6 * avg by (instance) (windows_cpu_core_frequency_mhz))
< 1.1
for: 1h
annotations:
summary: "CPU Frequency on {{ $labels.instance }} is less than 110% of base frequency, suggesting it is not able to boost."
```
View File
@@ -1,58 +0,0 @@
# cpu_info collector
The cpu_info collector exposes metrics detailing a per-socket breakdown of the Processors in the system
|||
-|-
Metric name prefix | `cpu_info`
Data source | wmi
Classes | [`Win32_Processor`](https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/win32-processor)
Enabled by default? | No
## Flags
None
## Metrics
| Name | Description | Type | Labels |
|--------------------------------------------|--------------------------------------|-------|--------------------------------------------------------------|
| `windows_cpu_info` | Labelled CPU information | gauge | `architecture`, `description`, `device_id`, `family`, `name` |
| `windows_cpu_info_core` | Number of cores per CPU | gauge | `device_id` |
| `windows_cpu_info_enabled_core` | Number of enabled cores per CPU | gauge | `device_id` |
| `windows_cpu_info_l2_cache_size` | Size of L2 cache per CPU | gauge | `device_id` |
| `windows_cpu_info_l3_cache_size` | Size of L3 cache per CPU | gauge | `device_id` |
| `windows_cpu_info_logical_processor` | Number of logical processors per CPU | gauge | `device_id` |
| `windows_cpu_info_thread` | Number of threads per CPU | gauge | `device_id` |
### Example metric
```
# HELP windows_cpu_info Labelled CPU information as provided by Win32_Processor
# TYPE windows_cpu_info gauge
windows_cpu_info{architecture="9",description="AMD64 Family 25 Model 33 Stepping 2",device_id="CPU0",family="107",name="AMD Ryzen 9 5900X 12-Core Processor"} 1
# HELP windows_cpu_info_core Number of cores per CPU
# TYPE windows_cpu_info_core gauge
windows_cpu_info_core{device_id="CPU0"} 12
# HELP windows_cpu_info_enabled_core Number of enabled cores per CPU
# TYPE windows_cpu_info_enabled_core gauge
windows_cpu_info_enabled_core{device_id="CPU0"} 12
# HELP windows_cpu_info_l2_cache_size Size of L2 cache per CPU
# TYPE windows_cpu_info_l2_cache_size gauge
windows_cpu_info_l2_cache_size{device_id="CPU0"} 6144
# HELP windows_cpu_info_l3_cache_size Size of L3 cache per CPU
# TYPE windows_cpu_info_l3_cache_size gauge
windows_cpu_info_l3_cache_size{device_id="CPU0"} 65536
# HELP windows_cpu_info_logical_processor Number of logical processors per CPU
# TYPE windows_cpu_info_logical_processor gauge
windows_cpu_info_logical_processor{device_id="CPU0"} 24
# HELP windows_cpu_info_thread Number of threads per CPU
# TYPE windows_cpu_info_thread gauge
windows_cpu_info_thread{device_id="CPU0"} 24
```
The value of the metric is irrelevant, but the labels expose some useful information on the CPU installed in each socket.
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
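As a hedged, untested starting point, the labelled info metric and per-socket gauges above can be aggregated:
```
# Number of sockets per CPU model (the value of windows_cpu_info is always 1)
count by (name) (windows_cpu_info)

# Total physical cores across all sockets
sum(windows_cpu_info_core)
```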
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
30
docs/collector.cs.md Normal file
View File
@@ -0,0 +1,30 @@
# cs collector
The cs collector exposes metrics detailing the hardware of the computer system
|||
-|-
Metric name prefix | `cs`
Classes | [`Win32_ComputerSystem`](https://msdn.microsoft.com/en-us/library/aa394102)
Enabled by default? | Yes
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_cs_logical_processors` | Number of installed logical processors | gauge | None
`wmi_cs_physical_memory_bytes` | Total installed physical memory | gauge | None
`wmi_cs_hostname` | Labeled system hostname information | gauge | `hostname`, `domain`, `fqdn`
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
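A hedged, untested example that converts the installed physical memory gauge above into gibibytes:
```
wmi_cs_physical_memory_bytes / 1024 / 1024 / 1024
```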
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
View File
@@ -1,75 +0,0 @@
# dfsr collector
The dfsr collector exposes metrics for [DFSR](https://docs.microsoft.com/en-us/windows-server/storage/dfs-replication/dfsr-overview).
**Collector is currently in an experimental state and testing of metrics has not been undertaken.** Feedback on this collector is welcome.
|||
-|-
Metric name prefix | `dfsr`
Data source | Perflib
Enabled by default? | No
## Flags
### `--collectors.dfsr.sources-enabled`
Comma-separated list of DFSR Perflib sources to use. Supported values are `connection`, `folder` and `volume`.
All sources are enabled by default.
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_dfsr_collector_duration_seconds` | The time taken for each sub-collector to return | gauge | collector
`windows_dfsr_collector_success` | 1 if sub-collector succeeded, 0 otherwise | gauge | collector
`windows_dfsr_connection_bandwidth_savings_using_dfs_replication_bytes_total` | Total bandwidth (in bytes) saved by the DFS Replication service for this connection, using a combination of remote differential compression (RDC) and other compression technologies that minimize network bandwidth use. | counter | name
`windows_dfsr_connection_bytes_received_total` | Total bytes received for connection | counter | name
`windows_dfsr_connection_compressed_size_of_files_received_bytes_total` | Total compressed size of files received on the connection, in bytes | counter | name
`windows_dfsr_connection_received_files_total` | Total number of files received for connection | counter | name
`windows_dfsr_connection_rdc_received_bytes_total` | Total bytes received on the connection while replicating files using Remote Differential Compression. This is the actual bytes received over the network without the networking protocol overhead | counter | name
`windows_dfsr_connection_rdc_compressed_size_of_received_files_bytes_total` | Total compressed size of files received with Remote Differential Compression. This is the number of bytes that would have been received had RDC not been used. This is not the actual number of bytes received over the network. | counter | name
`windows_dfsr_connection_rdc_received_files_total` | Total number of files received using remote differential compression | counter | name
`windows_dfsr_connection_rdc_size_of_files_received_total` | Total uncompressed size of files received with remote differential compression, in bytes. This is the number of bytes that would have been received had neither compression nor RDC been used. This is not the actual number of bytes received over the network. | counter | name
`windows_dfsr_connection_size_of_files_received_total` | Total uncompressed size of files received on the connection, in bytes. This is the number of bytes that would have been received had DFS Replication compression not been used. | counter | name
`windows_dfsr_folder_bandwidth_savings_using_dfs_replication_bytes_total` | Total bandwidth (in bytes) saved by the DFS Replication service for this folder, using a combination of remote differential compression (RDC) and other compression technologies that minimize network bandwidth use. | counter | name
`windows_dfsr_folder_compressed_size_of_received_files_bytes_total` | Total compressed size of files received for this folder, in bytes | counter | name
`windows_dfsr_folder_conflict_cleaned_up_bytes_total` | Total size of conflict loser files and folders deleted from the Conflict and Deleted folder, in bytes. | counter | name
`windows_dfsr_folder_conflict_generated_bytes_total` | Total size of conflict loser files and folders moved to the Conflict and Deleted folder, in bytes. | counter | name
`windows_dfsr_folder_conflict_cleaned_up_files_total` | Number of conflict loser files deleted from the Conflict and Deleted folder. | counter | name
`windows_dfsr_folder_conflict_generated_files_total` | Number of files and folders moved to the Conflict and Deleted folder | counter | name
`windows_dfsr_folder_conflict_folder_cleanups_total` | Number of deletions of conflict loser files and folders in the Conflict and Deleted folder | counter | name
`windows_dfsr_folder_conflict_space_in_use` | Total size of the conflict loser files and folders currently in the Conflict and Deleted folder | gauge | name
`windows_dfsr_folder_deleted_space_in_use` | Total size (in bytes) of the deleted files and folders currently in the Conflict and Deleted folder. | gauge | name
`windows_dfsr_folder_deleted_bytes_cleaned_up_total` | Total size (in bytes) of replicated deleted files and folders that were cleaned up from the Conflict and Deleted folder. | gauge | name
`windows_dfsr_folder_deleted_bytes_generated_total` | Total size (in bytes) of replicated deleted files and folders that were moved to the Conflict and Deleted folder after they were deleted from a replicated folder on a sending member. | counter | name
`windows_dfsr_folder_deleted_files_cleaned_up_total` | Number of files and folders that were cleaned up from the Conflict and Deleted folder. | counter | name
`windows_dfsr_folder_deleted_files_generated_total` | Number of deleted files and folders that were moved to the Conflict and Deleted folder. | counter | name
`windows_dfsr_folder_file_installs_retried_total` | Total number of file installs that are being retried due to sharing violations or other errors encountered when installing the files. The DFS Replication service replicates staged files into a staging folder, uncompresses them in the Installing folder, and renames them to the target location. The second and third steps of this process are known as installing the file. | counter | name
`windows_dfsr_folder_file_installs_succeeded_total` | Total number of files that were successfully received from sending members and installed locally on this server. The DFS Replication service replicates staged files into a staging folder, uncompresses them in the Installing folder, and renames them to the target location. The second and third steps of this process are known as installing the file. | counter | name
`windows_dfsr_folder_files_received_total` | Total number of files received. | counter | name
`windows_dfsr_folder_rdc_bytes_received_total` | Total number of bytes received in replicating files using Remote Differential Compression. This is the actual bytes received over the network without the networking protocol overhead. | counter | name
`windows_dfsr_folder_rdc_compressed_size_of_files_received_total` | Total compressed size (in bytes) of the files received with Remote Differential Compression. This is the number of bytes that would have been received had RDC not been used. This is not the actual bytes received over the network. | counter | name
`windows_dfsr_folder_rdc_number_of_files_received_total` | Total number of files received with Remote Differential Compression. | counter | name
`windows_dfsr_folder_rdc_size_of_files_received_total` | Total uncompressed size (in bytes) of the files received with Remote Differential Compression. This is the number of bytes that would have been received had neither compression nor RDC been used. This is not the actual bytes received over the network. | counter | name
`windows_dfsr_folder_size_of_files_received_total` | Total uncompressed size (in bytes) of the files received. | counter | name
`windows_dfsr_folder_staging_space_in_use` | Total size of files and folders currently in the staging folder. This metric will fluctuate as staging space is reclaimed. | gauge | name
`windows_dfsr_folder_staging_bytes_cleaned_up_total` | Total size (in bytes) of the files and folders that have been cleaned up from the staging folder. | counter | name
`windows_dfsr_folder_staging_bytes_generated_total` | Total size (in bytes) of replicated files and folders in the staging folder created by the DFS Replication service since last restart. | counter | name
`windows_dfsr_folder_staging_files_cleaned_up_total` | Total number of files and folders that have been cleaned up from the staging folder. | counter | name
`windows_dfsr_folder_staging_files_generated_total` | Total number of times replicated files and folders have been staged by the DFS Replication service. | counter | name
`windows_dfsr_folder_updates_dropped_total` | Total number of redundant file replication update records that have been ignored by the DFS Replication service because they did not change the replicated file or folder. | counter | name
`windows_dfsr_volume_database_commits_total` | Total number of DFSR Volume database commits. | counter | name
`windows_dfsr_volume_database_lookups_total` | Total number of DFSR Volume database lookups. | counter | name
`windows_dfsr_volume_usn_journal_unread_percentage` | Percentage of DFSR Volume USN journal records that are unread. | gauge | name
`windows_dfsr_volume_usn_journal_accepted_records_total` | Total number of USN journal records accepted. | counter | name
`windows_dfsr_volume_usn_journal_read_records_total` | Total number of DFSR Volume USN journal records read. | counter | name
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
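A hedged, untested example showing the per-folder rate at which files are received through replication:
```
rate(windows_dfsr_folder_files_received_total[5m])
```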
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
View File
@@ -1,181 +0,0 @@
# dhcp collector
The dhcp collector exposes DHCP Server metrics
| | |
|---------------------|---------------|
| Metric name prefix | `dhcp` |
| Data source | Perflib |
| Classes | `DHCP Server` |
| Enabled by default? | No |
## Flags
### `--collector.dhcp.enabled`
Comma-separated list of collectors to use. Defaults to all, if not specified.
## Metrics
| Name | Description | Type | Labels |
|--------------------------------------------------------------------------|--------------------------------------------------------------------------------|---------|-----------------------------------------------------|
| `windows_dhcp_acks_total` | Total DHCP Acks sent by the DHCP server | counter | None |
| `windows_dhcp_denied_due_to_match_total` | Total number of DHCP requests denied, based on matches from the Deny List | counter | None |
| `windows_dhcp_denied_due_to_nonmatch_total` | Total number of DHCP requests denied, based on non-matches from the Allow List | counter | None |
| `windows_dhcp_declines_total` | Total DHCP Declines received by the DHCP server | counter | None |
| `windows_dhcp_discovers_total` | Total DHCP Discovers received by the DHCP server | counter | None |
| `windows_dhcp_failover_bndack_received_total` | Number of DHCP failover Binding Ack messages received | counter | None |
| `windows_dhcp_failover_bndack_sent_total` | Number of DHCP failover Binding Ack messages sent | counter | None |
| `windows_dhcp_failover_bndupd_dropped_total` | Total number of DHCP failover Binding Updates dropped | counter | None |
| `windows_dhcp_failover_bndupd_received_total` | Number of DHCP failover Binding Update messages received | counter | None |
| `windows_dhcp_failover_bndupd_sent_total` | Number of DHCP failover Binding Update messages sent | counter | None |
| `windows_dhcp_failover_bndupd_pending_in_outbound_queue` | Number of pending outbound DHCP failover Binding Update messages | gauge | None |
| `windows_dhcp_failover_transitions_communicationinterrupted_state_total` | Total number of transitions into COMMUNICATION INTERRUPTED state | counter | None |
| `windows_dhcp_failover_transitions_partnerdown_state_total` | Total number of transitions into PARTNER DOWN state | counter | None |
| `windows_dhcp_failover_transitions_recover_total` | Total number of transitions into RECOVER state | counter | None |
| `windows_dhcp_informs_total` | Total DHCP Informs received by the DHCP server | counter | None |
| `windows_dhcp_nacks_total` | Total DHCP Nacks sent by the DHCP server | counter | None |
| `windows_dhcp_offers_total` | Total DHCP Offers sent by the DHCP server | counter | None |
| `windows_dhcp_packets_expired_total` | Total number of packets expired in the DHCP server message queue | counter | None |
| `windows_dhcp_packets_received_total` | Total number of packets received by the DHCP server | counter | None |
| `windows_dhcp_pending_offers_total` | Total number of pending offers in the DHCP server | counter | None |
| `windows_dhcp_releases_total` | Total DHCP Releases received by the DHCP server | counter | None |
| `windows_dhcp_requests_total` | Total DHCP Requests received by the DHCP server | counter | None |
| `windows_dhcp_scope_addresses_free_on_this_server` | DHCP Scope free addresses on this server | gauge | `scope` |
| `windows_dhcp_scope_addresses_free_on_partner_server` | DHCP Scope free addresses on partner server | gauge | `scope` |
| `windows_dhcp_scope_addresses_free` | DHCP Scope free addresses | gauge | `scope` |
| `windows_dhcp_scope_addresses_in_use_on_this_server` | DHCP Scope addresses in use on this server | gauge | `scope` |
| `windows_dhcp_scope_addresses_in_use_on_partner_server` | DHCP Scope addresses in use on partner server | gauge | `scope` |
| `windows_dhcp_scope_addresses_in_use` | DHCP Scope addresses in use | gauge | `scope` |
| `windows_dhcp_scope_info` | DHCP Scope information | gauge | `name`, `superscope_name`, `superscope_id`, `scope` |
| `windows_dhcp_scope_pending_offers` | DHCP Scope pending offers | gauge | `scope` |
| `windows_dhcp_scope_reserved_address` | DHCP Scope reserved addresses | gauge | `scope` |
| `windows_dhcp_scope_state` | DHCP Scope state | gauge | `scope`, `state` |
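The per-scope gauges above can be combined into a utilisation ratio; a hedged, untested example (note that some exporter versions expose these gauges with a `_total` suffix, as in the sample output below):
```
windows_dhcp_scope_addresses_in_use / (windows_dhcp_scope_addresses_in_use + windows_dhcp_scope_addresses_free)
```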
### Example metric
```
# HELP windows_dhcp_acks_total Total DHCP Acks sent by the DHCP server (AcksTotal)
# TYPE windows_dhcp_acks_total counter
windows_dhcp_acks_total 0
# HELP windows_dhcp_active_queue_length Number of packets in the processing queue of the DHCP server (ActiveQueueLength)
# TYPE windows_dhcp_active_queue_length gauge
windows_dhcp_active_queue_length 0
# HELP windows_dhcp_conflict_check_queue_length Number of packets in the DHCP server queue waiting on conflict detection (ping). (ConflictCheckQueueLength)
# TYPE windows_dhcp_conflict_check_queue_length gauge
windows_dhcp_conflict_check_queue_length 0
# HELP windows_dhcp_declines_total Total DHCP Declines received by the DHCP server (DeclinesTotal)
# TYPE windows_dhcp_declines_total counter
windows_dhcp_declines_total 0
# HELP windows_dhcp_denied_due_to_match_total Total number of DHCP requests denied, based on matches from the Deny list (DeniedDueToMatch)
# TYPE windows_dhcp_denied_due_to_match_total counter
windows_dhcp_denied_due_to_match_total 0
# HELP windows_dhcp_denied_due_to_nonmatch_total Total number of DHCP requests denied, based on non-matches from the Allow list (DeniedDueToNonMatch)
# TYPE windows_dhcp_denied_due_to_nonmatch_total counter
windows_dhcp_denied_due_to_nonmatch_total 0
# HELP windows_dhcp_discovers_total Total DHCP Discovers received by the DHCP server (DiscoversTotal)
# TYPE windows_dhcp_discovers_total counter
windows_dhcp_discovers_total 0
# HELP windows_dhcp_duplicates_dropped_total Total number of duplicate packets received by the DHCP server (DuplicatesDroppedTotal)
# TYPE windows_dhcp_duplicates_dropped_total counter
windows_dhcp_duplicates_dropped_total 0
# HELP windows_dhcp_failover_bndack_received_total Number of DHCP fail over Binding Ack messages received (FailoverBndackReceivedTotal)
# TYPE windows_dhcp_failover_bndack_received_total counter
windows_dhcp_failover_bndack_received_total 0
# HELP windows_dhcp_failover_bndack_sent_total Number of DHCP fail over Binding Ack messages sent (FailoverBndackSentTotal)
# TYPE windows_dhcp_failover_bndack_sent_total counter
windows_dhcp_failover_bndack_sent_total 0
# HELP windows_dhcp_failover_bndupd_dropped_total Total number of DHCP fail over Binding Updates dropped (FailoverBndupdDropped)
# TYPE windows_dhcp_failover_bndupd_dropped_total counter
windows_dhcp_failover_bndupd_dropped_total 0
# HELP windows_dhcp_failover_bndupd_pending_in_outbound_queue Number of pending outbound DHCP fail over Binding Update messages (FailoverBndupdPendingOutboundQueue)
# TYPE windows_dhcp_failover_bndupd_pending_in_outbound_queue gauge
windows_dhcp_failover_bndupd_pending_in_outbound_queue 0
# HELP windows_dhcp_failover_bndupd_received_total Number of DHCP fail over Binding Update messages received (FailoverBndupdReceivedTotal)
# TYPE windows_dhcp_failover_bndupd_received_total counter
windows_dhcp_failover_bndupd_received_total 0
# HELP windows_dhcp_failover_bndupd_sent_total Number of DHCP fail over Binding Update messages sent (FailoverBndupdSentTotal)
# TYPE windows_dhcp_failover_bndupd_sent_total counter
windows_dhcp_failover_bndupd_sent_total 0
# HELP windows_dhcp_failover_transitions_communicationinterrupted_state_total Total number of transitions into COMMUNICATION INTERRUPTED state (FailoverTransitionsCommunicationinterruptedState)
# TYPE windows_dhcp_failover_transitions_communicationinterrupted_state_total counter
windows_dhcp_failover_transitions_communicationinterrupted_state_total 0
# HELP windows_dhcp_failover_transitions_partnerdown_state_total Total number of transitions into PARTNER DOWN state (FailoverTransitionsPartnerdownState)
# TYPE windows_dhcp_failover_transitions_partnerdown_state_total counter
windows_dhcp_failover_transitions_partnerdown_state_total 0
# HELP windows_dhcp_failover_transitions_recover_total Total number of transitions into RECOVER state (FailoverTransitionsRecoverState)
# TYPE windows_dhcp_failover_transitions_recover_total counter
windows_dhcp_failover_transitions_recover_total 0
# HELP windows_dhcp_informs_total Total DHCP Informs received by the DHCP server (InformsTotal)
# TYPE windows_dhcp_informs_total counter
windows_dhcp_informs_total 0
# HELP windows_dhcp_nacks_total Total DHCP Nacks sent by the DHCP server (NacksTotal)
# TYPE windows_dhcp_nacks_total counter
windows_dhcp_nacks_total 0
# HELP windows_dhcp_offer_queue_length Number of packets in the offer queue of the DHCP server (OfferQueueLength)
# TYPE windows_dhcp_offer_queue_length gauge
windows_dhcp_offer_queue_length 0
# HELP windows_dhcp_offers_total Total DHCP Offers sent by the DHCP server (OffersTotal)
# TYPE windows_dhcp_offers_total counter
windows_dhcp_offers_total 0
# HELP windows_dhcp_packets_expired_total Total number of packets expired in the DHCP server message queue (PacketsExpiredTotal)
# TYPE windows_dhcp_packets_expired_total counter
windows_dhcp_packets_expired_total 0
# HELP windows_dhcp_packets_received_total Total number of packets received by the DHCP server (PacketsReceivedTotal)
# TYPE windows_dhcp_packets_received_total counter
windows_dhcp_packets_received_total 0
# HELP windows_dhcp_releases_total Total DHCP Releases received by the DHCP server (ReleasesTotal)
# TYPE windows_dhcp_releases_total counter
windows_dhcp_releases_total 0
# HELP windows_dhcp_requests_total Total DHCP Requests received by the DHCP server (RequestsTotal)
# TYPE windows_dhcp_requests_total counter
windows_dhcp_requests_total 0
# HELP windows_dhcp_scope_addresses_free_total DHCP Scope free addresses
# TYPE windows_dhcp_scope_addresses_free_total gauge
windows_dhcp_scope_addresses_free_total{scope="10.11.12.0/25"} 0
windows_dhcp_scope_addresses_free_total{scope="172.16.0.0/24"} 0
windows_dhcp_scope_addresses_free_total{scope="192.168.0.0/24"} 231
# HELP windows_dhcp_scope_addresses_in_use_total DHCP Scope addresses in use
# TYPE windows_dhcp_scope_addresses_in_use_total gauge
windows_dhcp_scope_addresses_in_use_total{scope="10.11.12.0/25"} 0
windows_dhcp_scope_addresses_in_use_total{scope="172.16.0.0/24"} 0
windows_dhcp_scope_addresses_in_use_total{scope="192.168.0.0/24"} 0
# HELP windows_dhcp_scope_info DHCP Scope information
# TYPE windows_dhcp_scope_info gauge
windows_dhcp_scope_info{name="SUBSUPERSCOPE",scope="172.16.0.0/24",superscope_id="2",superscope_name="SUPERSCOPE"} 1
windows_dhcp_scope_info{name="TEST",scope="192.168.0.0/24",superscope_id="0",superscope_name=""} 1
windows_dhcp_scope_info{name="TEST2",scope="10.11.12.0/25",superscope_id="2",superscope_name="SUPERSCOPE"} 1
# HELP windows_dhcp_scope_pending_offers_total DHCP Scope pending offers
# TYPE windows_dhcp_scope_pending_offers_total gauge
windows_dhcp_scope_pending_offers_total{scope="10.11.12.0/25"} 0
windows_dhcp_scope_pending_offers_total{scope="172.16.0.0/24"} 0
windows_dhcp_scope_pending_offers_total{scope="192.168.0.0/24"} 0
# HELP windows_dhcp_scope_reserved_address_total DHCP Scope reserved addresses
# TYPE windows_dhcp_scope_reserved_address_total gauge
windows_dhcp_scope_reserved_address_total{scope="10.11.12.0/25"} 0
windows_dhcp_scope_reserved_address_total{scope="172.16.0.0/24"} 0
windows_dhcp_scope_reserved_address_total{scope="192.168.0.0/24"} 2
# HELP windows_dhcp_scope_state DHCP Scope state
# TYPE windows_dhcp_scope_state gauge
windows_dhcp_scope_state{scope="10.11.12.0/25",state="Disabled"} 1
windows_dhcp_scope_state{scope="10.11.12.0/25",state="DisabledSwitched"} 0
windows_dhcp_scope_state{scope="10.11.12.0/25",state="Enabled"} 0
windows_dhcp_scope_state{scope="10.11.12.0/25",state="EnabledSwitched"} 0
windows_dhcp_scope_state{scope="10.11.12.0/25",state="InvalidState"} 0
windows_dhcp_scope_state{scope="172.16.0.0/24",state="Disabled"} 1
windows_dhcp_scope_state{scope="172.16.0.0/24",state="DisabledSwitched"} 0
windows_dhcp_scope_state{scope="172.16.0.0/24",state="Enabled"} 0
windows_dhcp_scope_state{scope="172.16.0.0/24",state="EnabledSwitched"} 0
windows_dhcp_scope_state{scope="172.16.0.0/24",state="InvalidState"} 0
windows_dhcp_scope_state{scope="192.168.0.0/24",state="Disabled"} 0
windows_dhcp_scope_state{scope="192.168.0.0/24",state="DisabledSwitched"} 0
windows_dhcp_scope_state{scope="192.168.0.0/24",state="Enabled"} 1
windows_dhcp_scope_state{scope="192.168.0.0/24",state="EnabledSwitched"} 0
windows_dhcp_scope_state{scope="192.168.0.0/24",state="InvalidState"} 0
```
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
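In the meantime, one possible starting point is a per-scope address utilization ratio built from the scope metrics shown in the example above (a sketch; adjust the metric names if your exporter version exposes them without the `_total` suffix):
```promql
windows_dhcp_scope_addresses_in_use_total
  / (windows_dhcp_scope_addresses_in_use_total + windows_dhcp_scope_addresses_free_total)
```
Values close to 1 indicate a scope that is about to run out of leases.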
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_

View File

@@ -1,40 +0,0 @@
# diskdrive collector
The diskdrive collector exposes metrics about physical disks
| | |
| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Metric name prefix | `diskdrive` |
| Classes | [`Win32_DiskDrive`](https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/win32-diskdrive) |
| Enabled by default? | No |
## Flags
None
## Metrics
| Name | Description | Type | Labels |
| ------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | ------ |
| `diskdrive_info`          | General identifiable information about the disk drive | gauge | `name`, `caption`, `device_id`, `model` |
| `diskdrive_availability`  | The disk drive's current availability | gauge | `name`, `availability` |
| `diskdrive_partitions`    | Number of partitions on the drive | gauge | `name` |
| `diskdrive_size`          | Size of the disk drive. It is calculated by multiplying the total number of cylinders, tracks in each cylinder, sectors in each track, and bytes in each sector. | gauge | `name` |
| `diskdrive_status`        | Operational status of the drive | gauge | `name`, `status` |
## Alerting examples
**prometheus.rules**
```yaml
groups:
- name: Windows Disk Alerts
rules:
- alert: Drive_Status
expr: windows_disk_drive_status{status="OK"} != 1
for: 10m
labels:
severity: high
annotations:
summary: "Instance: {{ $labels.instance }} has drive status: {{ $labels.status }} on disk {{ $labels.name }}"
description: "Drive Status Unhealthy"
```

View File

@@ -1,98 +1,49 @@
# dns collector
The dns collector exposes metrics about the DNS server
|||
-|-
Metric name prefix | `dns`
Classes | [`Win32_PerfRawData_DNS_DNS`](https://technet.microsoft.com/en-us/library/cc977686.aspx)
Enabled by default? | Yes
Metric name prefix (error stats) | `windows_dns`
Classes (error stats) | [`MicrosoftDNS_Statistic`](https://learn.microsoft.com/en-us/windows/win32/dns/dns-wmi-provider-overview)
Enabled by default (error stats)? | Yes
## Flags
Name | Description
-----|------------
`collector.dns.enabled` | Comma-separated list of collectors to use. Available collectors: `metrics`, `wmi_stats`. Defaults to all collectors if not specified.
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_dns_zone_transfer_requests_received_total` | _Not yet documented_ | counter | `qtype`
`windows_dns_zone_transfer_requests_sent_total` | _Not yet documented_ | counter | `qtype`
`windows_dns_zone_transfer_response_received_total` | _Not yet documented_ | counter | `qtype`
`windows_dns_zone_transfer_success_received_total` | _Not yet documented_ | counter | `qtype`, `protocol`
`windows_dns_zone_transfer_success_sent_total` | _Not yet documented_ | counter | `qtype`
`windows_dns_zone_transfer_failures_total` | _Not yet documented_ | counter | None
`windows_dns_memory_used_bytes_total` | _Not yet documented_ | gauge | `area`
`windows_dns_dynamic_updates_queued` | _Not yet documented_ | gauge | None
`windows_dns_dynamic_updates_received_total` | _Not yet documented_ | counter | `operation`
`windows_dns_dynamic_updates_failures_total` | _Not yet documented_ | counter | `reason`
`windows_dns_notify_received_total` | _Not yet documented_ | counter | None
`windows_dns_notify_sent_total` | _Not yet documented_ | counter | None
`windows_dns_secure_update_failures_total` | _Not yet documented_ | counter | None
`windows_dns_secure_update_received_total` | _Not yet documented_ | counter | None
`windows_dns_queries_total` | _Not yet documented_ | counter | `protocol`
`windows_dns_responses_total` | _Not yet documented_ | counter | `protocol`
`windows_dns_recursive_queries_total` | _Not yet documented_ | counter | None
`windows_dns_recursive_query_failures_total` | _Not yet documented_ | counter | None
`windows_dns_recursive_query_send_timeouts_total` | _Not yet documented_ | counter | None
`windows_dns_wins_queries_total` | _Not yet documented_ | counter | `direction`
`windows_dns_wins_responses_total` | _Not yet documented_ | counter | `direction`
`windows_dns_unmatched_responses_total` | _Not yet documented_ | counter | None
`windows_dns_error_stats_total` | DNS error statistics from MicrosoftDNS_Statistic | counter | `name`, `collection_name`, `dns_server`
### Sub-collectors
The DNS collector is split into two sub-collectors:
1. `metrics` - Collects standard DNS performance metrics using PDH (Performance Data Helper)
2. `wmi_stats` - Collects DNS error statistics from the MicrosoftDNS_Statistic WMI class
By default, both sub-collectors are enabled. You can enable specific sub-collectors using the `collector.dns.enabled` flag.
### Example Usage
To enable only DNS error statistics collection:
```powershell
windows_exporter.exe --collector.dns.enabled=wmi_stats
```
To enable only standard DNS metrics:
```powershell
windows_exporter.exe --collector.dns.enabled=metrics
```
To enable both (default behavior):
```powershell
windows_exporter.exe --collector.dns.enabled=metrics,wmi_stats
```
### Example metric
```
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="BadKey"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="BadSig"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="BadTime"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="FormError"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="Max"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="NoError"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="NotAuth"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="NotImpl"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="NotZone"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="NxDomain"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="NxRRSet"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="Refused"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="ServFail"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="UnknownError"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="YxDomain"} 0
windows_dns_wmi_stats_total{collection_name="Error Stats",dns_server="EC2AMAZ-5NNM8M1",name="YxRRSet"} 0
```
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
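As a starting point, the per-protocol query rate can be derived from `windows_dns_queries_total` listed above (a sketch, not an officially curated query):
```promql
sum by (protocol) (rate(windows_dns_queries_total[5m]))
```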
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
# dns collector
The dns collector exposes metrics about the DNS server
|||
-|-
Metric name prefix | `dns`
Classes | [`Win32_PerfRawData_DNS_DNS`](https://technet.microsoft.com/en-us/library/cc977686.aspx)
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_dns_zone_transfer_requests_received_total` | _Not yet documented_ | counter | `qtype`
`wmi_dns_zone_transfer_requests_sent_total` | _Not yet documented_ | counter | `qtype`
`wmi_dns_zone_transfer_response_received_total` | _Not yet documented_ | counter | `qtype`
`wmi_dns_zone_transfer_success_received_total` | _Not yet documented_ | counter | `qtype`, `protocol`
`wmi_dns_zone_transfer_success_sent_total` | _Not yet documented_ | counter | `qtype`
`wmi_dns_zone_transfer_failures_total` | _Not yet documented_ | counter | None
`wmi_dns_memory_used_bytes_total` | _Not yet documented_ | gauge | `area`
`wmi_dns_dynamic_updates_queued` | _Not yet documented_ | gauge | None
`wmi_dns_dynamic_updates_received_total` | _Not yet documented_ | counter | `operation`
`wmi_dns_dynamic_updates_failures_total` | _Not yet documented_ | counter | `reason`
`wmi_dns_notify_received_total` | _Not yet documented_ | counter | None
`wmi_dns_notify_sent_total` | _Not yet documented_ | counter | None
`wmi_dns_secure_update_failures_total` | _Not yet documented_ | counter | None
`wmi_dns_secure_update_received_total` | _Not yet documented_ | counter | None
`wmi_dns_queries_total` | _Not yet documented_ | counter | `protocol`
`wmi_dns_responses_total` | _Not yet documented_ | counter | `protocol`
`wmi_dns_recursive_queries_total` | _Not yet documented_ | counter | None
`wmi_dns_recursive_query_failures_total` | _Not yet documented_ | counter | None
`wmi_dns_recursive_query_send_timeouts_total` | _Not yet documented_ | counter | None
`wmi_dns_wins_queries_total` | _Not yet documented_ | counter | `direction`
`wmi_dns_wins_responses_total` | _Not yet documented_ | counter | `direction`
`wmi_dns_unmatched_responses_total` | _Not yet documented_ | counter | None
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_

View File

@@ -1,79 +0,0 @@
# exchange collector
The exchange collector collects metrics from MS Exchange hosts through Performance Counter
| | |
|---------------------|---------------------|
| Metric name prefix | `exchange` |
| Source | Performance Counter |
| Enabled by default? | No |
## Flags
### `--collectors.exchange.list`
Lists the Perflib objects that are queried for data, along with the Perflib object ID
### `--collectors.exchange.enabled`
Comma-separated list of collectors to use, for example: `--collectors.exchange.enabled=AvailabilityService,OutlookWebAccess`. Matching is case-sensitive. Depending on the Exchange installation, not all performance counters are available. Use `--collectors.exchange.list` to obtain a list of supported collectors.
## Metrics
| Name | Description |
|-----------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|
| `windows_exchange_rpc_avg_latency_sec` | The latency (sec), averaged for the past 1024 packets |
| `windows_exchange_rpc_requests` | Number of client requests currently being processed by the RPC Client Access service |
| `windows_exchange_rpc_active_user_count` | Number of unique users that have shown some kind of activity in the last 2 minutes |
| `windows_exchange_rpc_connection_count` | Total number of client connections maintained |
| `windows_exchange_rpc_operations_total` | The rate at which RPC operations occur |
| `windows_exchange_rpc_user_count` | Number of users |
| `windows_exchange_ldap_read_time_sec` | Time (sec) to send an LDAP read request and receive a response |
| `windows_exchange_ldap_search_time_sec` | Time (sec) to send an LDAP search request and receive a response |
| `windows_exchange_ldap_write_time_sec` | Time (sec) to send an LDAP Add/Modify/Delete request and receive a response |
| `windows_exchange_ldap_timeout_errors_total` | Total number of LDAP timeout errors |
| `windows_exchange_ldap_long_running_ops_per_sec` | Long Running LDAP operations per second |
| `windows_exchange_transport_queues_external_active_remote_delivery` | External Active Remote Delivery Queue length |
| `windows_exchange_transport_queues_internal_active_remote_delivery` | Internal Active Remote Delivery Queue length |
| `windows_exchange_transport_queues_active_mailbox_delivery` | Active Mailbox Delivery Queue length |
| `windows_exchange_transport_queues_retry_mailbox_delivery` | Retry Mailbox Delivery Queue length |
| `windows_exchange_transport_queues_unreachable` | Unreachable Queue length |
| `windows_exchange_transport_queues_external_largest_delivery` | External Largest Delivery Queue length |
| `windows_exchange_transport_queues_internal_largest_delivery` | Internal Largest Delivery Queue length |
| `windows_exchange_transport_queues_poison` | Poison Queue length |
| `windows_exchange_transport_queues_messages_queued_for_delivery_total` | Messages Queued For Delivery Total |
| `windows_exchange_transport_queues_messages_submitted_total` | Messages Submitted Total |
| `windows_exchange_transport_queues_messages_delayed_total` | Messages Delayed Total |
| `windows_exchange_transport_queues_messages_completed_delivery_total` | Messages Completed Delivery Total |
| `windows_exchange_transport_queues_aggregate_shadow_queue_length` | The current number of messages in shadow queues |
| `windows_exchange_transport_queues_submission_queue_length` | Submission Queue Length |
| `windows_exchange_transport_queues_delay_queue_length` | Delay Queue Length |
| `windows_exchange_transport_queues_items_completed_delivery_total` | Items Completed Delivery Total |
| `windows_exchange_transport_queues_items_queued_for_delivery_expired_total` | Items Queued For Delivery Expired Total |
| `windows_exchange_transport_queues_items_queued_for_delivery_total` | Items Queued For Delivery Total |
| `windows_exchange_transport_queues_items_resubmitted_total` | Items Resubmitted Total |
| `windows_exchange_http_proxy_mailbox_server_locator_avg_latency_sec` | Average latency (sec) of MailboxServerLocator web service calls |
| `windows_exchange_http_proxy_avg_auth_latency` | Average time spent authenticating CAS requests over the last 200 samples |
| `windows_exchange_http_proxy_outstanding_proxy_requests` | Number of concurrent outstanding proxy requests |
| `windows_exchange_http_proxy_requests_total` | Number of proxy requests processed each second |
| `windows_exchange_availability_service_requests_per_sec` | Number of requests serviced per second |
| `windows_exchange_owa_current_unique_users` | Number of unique users currently logged on to Outlook Web App |
| `windows_exchange_owa_requests_total` | Number of requests handled by Outlook Web App per second |
| `windows_exchange_autodiscover_requests_total` | Number of autodiscover service requests processed each second |
| `windows_exchange_workload_active_tasks` | Number of active tasks currently running in the background for workload management |
| `windows_exchange_workload_completed_tasks` | Number of workload management tasks that have been completed |
| `windows_exchange_workload_queued_tasks` | Number of workload management tasks that are currently queued up waiting to be processed |
| `windows_exchange_workload_yielded_tasks` | The total number of tasks that have been yielded by a workload |
| `windows_exchange_workload_is_active` | Active indicates whether the workload is in an active (1) or paused (0) state |
| `windows_exchange_activesync_requests_total`                                 | Number of HTTP requests received from the client via ASP.NET per second. Shows current user load            |
| `windows_exchange_http_proxy_avg_cas_proccessing_latency_sec` | Average latency (sec) of CAS processing time over the last 200 reqs |
| `windows_exchange_http_proxy_mailbox_proxy_failure_rate`                     | % of failures between this CAS and MBX servers over the last 200 samples                                     |
| `windows_exchange_activesync_ping_cmds_pending` | Number of ping commands currently pending in the queue |
| `windows_exchange_activesync_sync_cmds_total` | Number of sync commands processed per second. Clients use this command to synchronize items within a folder |
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
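As a starting point, the RPC operation rate can be tracked from the metrics listed above (a sketch, not an officially curated query):
```promql
rate(windows_exchange_rpc_operations_total[5m])
```
Combined with `windows_exchange_rpc_avg_latency_sec`, this gives a first view of client-facing RPC load and latency.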
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_

View File

@@ -1,36 +0,0 @@
# filetime collector
The filetime collector exposes modified timestamps of files in the filesystem.
|||
-|-
Metric name prefix | `filetime`
Enabled by default? | No
## Flags
### `--collectors.filetime.file-patterns`
Comma-separated list of file patterns. Each pattern is a glob pattern that can contain `*`, `?`, and `**` (recursive).
See https://github.com/bmatcuk/doublestar#patterns for an extended description of the pattern syntax.
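For example, to watch all `.log` files below `C:\logs` and a single config file (illustrative paths; any globs matching the syntax above work), the exporter could be started as:
```powershell
windows_exporter.exe --collectors.filetime.file-patterns="C:\logs\**\*.log,C:\app\config.yml"
```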
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_filetime_mtime_timestamp_seconds` | File modification time | gauge | `file`
### Example metric
```
# HELP windows_filetime_mtime_timestamp_seconds File modification time
# TYPE windows_filetime_mtime_timestamp_seconds gauge
windows_filetime_mtime_timestamp_seconds{file="C:\\Users\\admin\\Desktop\\Dashboard.lnk"} 1.726434517e+09
```
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
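In the meantime, the age of each watched file (in seconds) can be derived from the modification timestamp above (a sketch, not an officially curated query):
```promql
time() - windows_filetime_mtime_timestamp_seconds
```
This is handy for spotting files that have not been updated recently.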
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_

View File

@@ -1,51 +0,0 @@
# Microsoft File Server Resource Manager (FSRM) Quotas collector
The fsrmquota collector exposes metrics about File Server Resource Manager Quotas. Note that this collector has only been tested against Windows Server 2012 R2.
Other FSRM versions may work but are not tested.
|||
-|-
Metric name prefix | `fsrmquota`
Data source | wmi
Counters | `FSRMQUOTA`
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`windows_fsrmquota_count` | Number of Quotas | counter | None
`windows_fsrmquota_description` | A string up to 1KB in size. Optional. The default value is an empty string. (Description) | counter | `path`, `template`, `description`
`windows_fsrmquota_disabled` | If 1, the quota is disabled. The default value is 0. (Disabled) | counter | `path`, `template`
`windows_fsrmquota_matchestemplate` | If 1, the property values of this quota match those values of the template from which it was derived. (MatchesTemplate) | counter | `path`, `template`
`windows_fsrmquota_peak_usage_bytes` | The highest amount of disk space usage charged to this quota. (PeakUsage) | counter | `path`, `template`
`windows_fsrmquota_size_bytes` | The size of the quota. If the Template property is not provided then the Size property must be provided (Size) | counter | `path`, `template`
`windows_fsrmquota_softlimit` | If 1, the quota is a soft limit. If 0, the quota is a hard limit. The default value is 0. Optional (SoftLimit) | counter | `path`, `template`
`windows_fsrmquota_template` | A valid quota template name. Up to 1KB in size. Optional (Template) | counter | `path`, `template`
`windows_fsrmquota_usage_bytes` | The current amount of disk space usage charged to this quota. (Usage) | counter | `path`, `template`
### Example metric
Show the rate of quota usage:
```
rate(windows_fsrmquota_usage_bytes[1d])
```
## Useful queries
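One possible query, based on the metrics listed above, is the utilization of each quota (a sketch, not an officially curated query):
```promql
windows_fsrmquota_usage_bytes / windows_fsrmquota_size_bytes
```
Values above 0.85 correspond to the threshold used in the alerting example below.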
## Alerting examples
**prometheus.rules**
```yaml
- alert: "HighQuotasUsage"
expr: "windows_fsrmquota_usage_bytes{instance="SERVER1.COM:9182"} / windows_fsrmquota_size{instance="SERVER1.COM:9182"} >0.85"
for: "10m"
labels:
severity: "high"
annotations:
summary: "High Quotas Usage"
description: "High use of File Resource.\n Quotas: {{ $labels.path }}\n Current use : {{ $value }}"
```

View File

@@ -1,149 +0,0 @@
# gpu collector
The gpu collector exposes metrics about GPU usage and memory consumption, both at the adapter (physical GPU) and
per-process level.
| | |
|---------------------|--------------------------------------|
| Metric name prefix | `gpu` |
| Data source | Perflib |
| Counters | GPU Engine, GPU Adapter, GPU Process |
| Enabled by default? | No |
## Flags
None
## Metrics
These metrics are available on supported versions of Windows with compatible GPUs and drivers:
### Adapter-level Metrics
| Name | Description | Type | Labels |
|--------------------------------------------------|------------------------------------------------------------------------------------|-------|-----------------------------------------------------------------|
| `windows_gpu_info` | A metric with a constant '1' value labeled with gpu device information. | gauge | `bus_number`,`device_id`,`function_number`,`luid`,`name`,`phys` |
| `windows_gpu_dedicated_system_memory_size_bytes` | The size, in bytes, of memory that is dedicated from system memory. | gauge | `device_id`,`luid` |
| `windows_gpu_dedicated_video_memory_size_bytes` | The size, in bytes, of memory that is dedicated from video memory. | gauge | `device_id`,`luid` |
| `windows_gpu_shared_system_memory_size_bytes` | The size, in bytes, of memory from system memory that can be shared by many users. | gauge | `device_id`,`luid` |
| `windows_gpu_adapter_memory_committed_bytes` | Total committed GPU memory in bytes per physical GPU | gauge | `device_id`,`luid`,`phys` |
| `windows_gpu_adapter_memory_dedicated_bytes` | Dedicated GPU memory usage in bytes per physical GPU | gauge | `device_id`,`luid`,`phys` |
| `windows_gpu_adapter_memory_shared_bytes` | Shared GPU memory usage in bytes per physical GPU | gauge | `device_id`,`luid`,`phys` |
| `windows_gpu_local_adapter_memory_bytes` | Local adapter memory usage in bytes per physical GPU | gauge | `device_id`,`luid`,`phys`,`part` |
| `windows_gpu_non_local_adapter_memory_bytes` | Non-local adapter memory usage in bytes per physical GPU | gauge | `device_id`,`luid`,`phys`,`part` |
### Per-process Metrics
| Name | Description | Type | Labels |
|----------------------------------------------|-------------------------------------------------|---------|-----------------------------------------------------------|
| `windows_gpu_engine_time_seconds` | Total running time of the GPU engine in seconds | counter | `device_id`,`luid`,`phys`, `eng`, `engtype`, `process_id` |
| `windows_gpu_process_memory_committed_bytes` | Total committed GPU memory in bytes per process | gauge | `device_id`,`luid`,`phys`,`process_id` |
| `windows_gpu_process_memory_dedicated_bytes` | Dedicated GPU memory usage in bytes per process | gauge | `device_id`,`luid`,`phys`,`process_id` |
| `windows_gpu_process_memory_local_bytes` | Local GPU memory usage in bytes per process | gauge | `device_id`,`luid`,`phys`,`process_id` |
| `windows_gpu_process_memory_non_local_bytes` | Non-local GPU memory usage in bytes per process | gauge | `device_id`,`luid`,`phys`,`process_id` |
| `windows_gpu_process_memory_shared_bytes` | Shared GPU memory usage in bytes per process | gauge | `device_id`,`luid`,`phys`,`process_id` |
## Metric Labels
* `luid`: Locally unique identifier (LUID) of the GPU adapter (e.g., "0x00000000_0x00010F8A")
* `phys`: Physical GPU index (e.g., "0")
* `eng`: GPU engine index (e.g., "0", "1", ...)
* `engtype`: GPU engine type (e.g., "3D", "Copy", "VideoDecode", etc.)
* `process_id`: Process ID
## Example Metric
These are basic queries to help you get started with GPU monitoring on Windows using Prometheus.
**Show GPU information for a specific physical GPU (0):**
```promql
windows_gpu_info{bus_number="8",device_id="PCI\\VEN_10DE&DEV_1B81&SUBSYS_61733842&REV_A1",function_number="0",luid="0x00000000_0x00010F8A",name="NVIDIA GeForce GTX 1070",phys="0"} 1
```
**Show total dedicated GPU memory (in bytes) usage on GPU 0:**
```promql
windows_gpu_adapter_memory_dedicated_bytes{phys="0"}
```
**Aggregate GPU utilization across all processes for a physical GPU (3D engine):**
```promql
sum by (phys) (
rate(windows_gpu_engine_time_seconds{phys="0", engtype="3D"}[1m])
) * 100
```
**Show GPU utilization for a specific process (3D engine):**
```promql
sum by (phys, process_id) (
rate(windows_gpu_engine_time_seconds{process_id="1234", engtype="3D"}[1m])
) * 100
```
**Show dedicated GPU memory per process:**
```promql
windows_gpu_process_memory_dedicated_bytes
```
## Useful Queries
**Show top 5 processes by GPU utilization (all engines):**
```promql
topk(5, sum by (process_id) (
rate(windows_gpu_engine_time_seconds[1m])
) * 100)
```
**Show GPU memory usage per physical GPU:**
```promql
sum by (phys) (
windows_gpu_adapter_memory_dedicated_bytes
)
```
**Show GPU engine time with process owner and command line:**
```promql
windows_gpu_engine_time_seconds * on(process_id) group_left(owner, cmdline) windows_process_info
```
## Alerting Examples
**prometheus.rules**
```yaml
# Alert on processes using more than 80% of a GPU's capacity over 10 minutes
- alert: HighGpuUtilization
expr: |
sum by (process_id) (
rate(windows_gpu_engine_time_seconds[1m])
) * 100 > 80
for: 10m
labels:
severity: warning
annotations:
summary: "High GPU Utilization (process {{ $labels.process_id }})"
description: "Process is using more than 80% of GPU resources\n VALUE = {{ $value }}\n LABELS: {{ $labels }}"
```
## Notes
* Per-process metrics allow you to identify which processes are consuming GPU resources.
* Adapter-level metrics provide an overview of total GPU memory usage.
* For overall GPU utilization, aggregate per-process metrics in Prometheus using queries such as `sum()`.
* The collector relies on Windows performance counters; ensure your system and drivers support these counters.
## Enabling the Collector
To enable the GPU collector, add `gpu` to the list of enabled collectors in your windows_exporter configuration.
Example (command line):
```shell
windows_exporter.exe --collectors.enabled=gpu
```

Some files were not shown because too many files have changed in this diff.