Compare commits


80 Commits

Author SHA1 Message Date
Calle Pettersson
7890c9ce91 Merge pull request #506 from martinlindhe/fix-adfs-dependencies
adfs collector missing dependency
2020-04-19 21:51:47 +02:00
Calle Pettersson
bcb6f2b218 adfs collector missing dependency 2020-04-19 21:44:39 +02:00
Calle Pettersson
91a64fecb8 Merge pull request #498 from Mario-Hofstaetter/master
Fix README for process whitelist and expand docs
2020-04-04 15:15:54 +02:00
Mario Hofstätter
9148728b87 Expand process collector docs to show more regexp examples (#497) 2020-04-03 21:05:05 +02:00
Mario Hofstätter
2290969596 Fix README to use new --collector.process.whitelist (#497)
With PR #489, `--collector.process.processes-where` no longer works, so the example now uses `--collector.process.whitelist` with a regexp
2020-04-03 20:49:11 +02:00
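
For readers following along, here is a minimal Go sketch of how a whitelist flag like this is typically applied: the pattern is anchored and matched against the full process instance name. The helper name and the anchoring are modeled on the volume whitelist compilation visible in the logical_disk diff further down; they are illustrative, not the process collector's exact code.

```go
package main

import (
	"fmt"
	"regexp"
)

// buildWhitelist anchors the user-supplied pattern so it must match the whole
// process instance name (e.g. "firefox#1"), not just a substring.
func buildWhitelist(pattern string) *regexp.Regexp {
	return regexp.MustCompile(fmt.Sprintf("^(?:%s)$", pattern))
}

func main() {
	wl := buildWhitelist("firefox.+")
	for _, name := range []string{"firefox#1", "firefox#2", "chrome"} {
		fmt.Printf("%-10s matched=%v\n", name, wl.MatchString(name))
	}
}
```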
Calle Pettersson
1d7747b4d1 Merge pull request #473 from martinlindhe/remove-redirect
BREAKING: Remove redirect from unknown paths to /metrics
2020-03-28 13:35:46 +01:00
Calle Pettersson
cba42d24c1 Merge pull request #474 from martinlindhe/concurrency-limit
Add option to limit concurrent requests
2020-03-28 13:35:34 +01:00
Calle Pettersson
58d259a2b6 Merge pull request #489 from martinlindhe/process-perflib
BREAKING: Convert the process collector to use perflib
2020-03-27 20:15:10 +01:00
Calle Pettersson
4f89133893 Convert the process collector to use perflib 2020-03-24 22:46:24 +01:00
Calle Pettersson
af250824f7 Merge pull request #480 from martinlindhe/fix-versioning
Fix versioning in binary
2020-03-05 21:30:25 +01:00
Calle Pettersson
7f57491fac Fix versioning in binary 2020-03-05 21:15:26 +01:00
Calle Pettersson
890fdc2996 Merge pull request #476 from sll552/fix_domain_hostname
Fix cs collector crashing when running on a domain joined machine
2020-03-04 14:54:49 +01:00
Stefan Lengauer
d1a807840c Fix cs collector crashing when running on a domain joined machine
The wmi lib does some type checking for nil values.
Use a pointer as a workaround for that.
2020-03-04 14:49:59 +01:00
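
To illustrate the workaround described above: a WMI property that can be NULL is mapped to a pointer field so a nil value unmarshals cleanly, and the code checks the pointer before dereferencing. This self-contained sketch mirrors the `Workgroup *string` change to the cs collector shown later in this diff; the sample values are made up.

```go
package main

import "fmt"

// Win32_ComputerSystem maps only the properties we need. Workgroup can be NULL
// on domain-joined machines, so it is declared as a pointer: the wmi library's
// nil-value type check rejects a plain string there.
type Win32_ComputerSystem struct {
	DNSHostname string
	Domain      string
	Workgroup   *string
}

// fqdn mirrors the logic added to the cs collector: only append Domain when it
// is a real DNS domain rather than the workgroup name.
func fqdn(cs Win32_ComputerSystem) string {
	if cs.Workgroup == nil || cs.Domain != *cs.Workgroup {
		return cs.DNSHostname + "." + cs.Domain
	}
	return cs.DNSHostname
}

func main() {
	wg := "WORKGROUP"
	fmt.Println(fqdn(Win32_ComputerSystem{DNSHostname: "host1", Domain: "example.local"}))             // host1.example.local
	fmt.Println(fqdn(Win32_ComputerSystem{DNSHostname: "host2", Domain: "WORKGROUP", Workgroup: &wg})) // host2
}
```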
Calle Pettersson
74d7332b47 Merge pull request #463 from secustor/implement-mssql-base-counters
WIP: Implement mssql base counters
2020-03-03 19:40:10 +01:00
sebastian.poxhofer
22d4f50c83 fixing missing values for cache metrics 2020-03-03 17:57:55 +01:00
Calle Pettersson
df954ddf9d Remove redirect from unknown paths to /metrics 2020-03-02 22:46:50 +01:00
Calle Pettersson
34996b206a Add option to limit concurrent requests 2020-03-02 22:43:29 +01:00
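
As a rough illustration of what such an option can look like, here is a hedged sketch using a buffered-channel semaphore around the metrics handler; this shows one common approach only and is not claimed to be the PR's actual implementation (port and flag handling are simplified).

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// withConcurrencyLimit wraps a handler with a buffered-channel semaphore so at
// most maxConcurrent scrapes run at once; extra requests get 503 immediately.
func withConcurrencyLimit(maxConcurrent int, next http.Handler) http.Handler {
	sem := make(chan struct{}, maxConcurrent)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case sem <- struct{}{}:
			defer func() { <-sem }()
			next.ServeHTTP(w, r)
		default:
			http.Error(w, "too many concurrent scrape requests", http.StatusServiceUnavailable)
		}
	})
}

func main() {
	metrics := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "# collected metrics would be rendered here")
	})
	http.Handle("/metrics", withConcurrencyLimit(5, metrics))
	log.Fatal(http.ListenAndServe(":9182", nil))
}
```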
sebastian.poxhofer
6dad58fc8f rework mssql cache metrics 2020-03-02 22:34:17 +01:00
Calle Pettersson
8231bc4395 Merge pull request #470 from sll552/add_hostname
Add collector for hostname information
2020-03-02 07:40:11 +01:00
Stefan Lengauer
baba51bc6a Add collector for hostname information
This can be useful for building Grafana dashboards with dropdowns for multiple hosts, or for managed Prometheus instances where the user is not able to add labels via config
2020-03-01 23:16:53 +01:00
Calle Pettersson
b64ccbe683 Merge pull request #461 from martinlindhe/specific-perflib-objects
Only query the perflib objects we need
2020-03-01 12:55:03 +01:00
Calle Pettersson
21a02c4fbe Only query the perflib objects we need 2020-02-29 10:40:53 +01:00
Calle Pettersson
089bc3b2d4 Merge pull request #468 from shubhamgoel4aug/patch-1
Fixed bug in script
2020-02-24 20:45:29 +01:00
Shubham Goel
285a165eba Fixed bug in script
There was an extra parenthesis at the end of line 23.
2020-02-24 10:21:58 +05:30
basift
90b197450e Update collector.mssql.md (#436)
Update collector.mssql.md
2020-02-16 13:29:24 +01:00
Calle Pettersson
0865061210 Merge pull request #413 from der-eismann/os-info
Add product name & version to os collector
2020-01-10 13:03:03 +01:00
Calle Pettersson
2e50f515d8 Merge pull request #420 from martinlindhe/go-modules
Switch to go modules
2019-12-29 16:47:23 +01:00
Calle Pettersson
8be7dc7e83 Remove vendor dir 2019-12-28 16:28:09 +01:00
Calle Pettersson
0d4f747f8f Switch to go modules 2019-12-28 16:28:10 +01:00
Calle Pettersson
de285e1043 Check gofmt on lint 2019-12-27 12:53:34 +01:00
Calle Pettersson
7fde426e88 Merge pull request #426 from tan9/markdown-syntax-highlight
Specify YAML formatting to all rules config.
2019-10-30 09:43:47 +01:00
Pei-Tang Huang
fa12d1476f Specify YAML formatting to all rules config. 2019-10-30 16:08:03 +08:00
Calle Pettersson
92d0a1d8f0 Merge pull request #425 from tan9/patch-1
Fix typo and add yaml format.
2019-10-30 08:55:11 +01:00
Pei-Tang Huang
2f46a088de Fix typo and add yaml format. 2019-10-30 15:52:52 +08:00
Calle Pettersson
1cc4df2bd7 Merge pull request #421 from martinlindhe/fix-build-badge
Build badge should only reflect master
2019-10-19 17:40:51 +02:00
Calle Pettersson
feb2b18e6a Build badge should only reflect master 2019-10-19 17:39:01 +02:00
Calle Pettersson
012b938b54 Merge pull request #402 from breed808/perf_mem
Use perflib for memory collector
2019-10-09 21:16:46 +02:00
Calle Pettersson
a0e5baa171 Merge pull request #403 from breed808/perf_net
Use perflib for net collector
2019-10-09 21:15:55 +02:00
Calle Pettersson
7611e33bc7 Merge pull request #405 from breed808/perf_system
Use perflib for system collector
2019-10-09 21:15:06 +02:00
Ben Reedy
2aafa9ebf3 Use perflib for system collector 2019-10-08 20:59:50 +10:00
Ben Reedy
f9f27b0b97 Use perflib for net collector 2019-10-08 20:57:09 +10:00
Ben Reedy
18128f48f5 Use perflib for memory collector 2019-10-08 20:52:44 +10:00
Calle Pettersson
2688847c2e Merge pull request #401 from breed808/adfs_loop
ADFS: explicitly use first perflib result
2019-10-07 17:47:51 +02:00
Calle Pettersson
1c605adb5e Merge pull request #400 from breed808/perf
Use Perflib for logical_disk exporter
2019-10-07 17:47:05 +02:00
Ben Reedy
d0877d0dc0 Update logical_disk docs to Perflib counter 2019-10-04 21:05:17 +10:00
Ben Reedy
2cd630fb2f Use ticks to seconds scale for latency metrics
Latency metrics were previously exposed as ticks
2019-10-04 21:05:17 +10:00
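
For reference, a small sketch of the conversion this commit introduces, assuming the raw perflib latency counters are reported in 100 ns ticks; the constant name follows the exporter's convention, but the example itself is illustrative.

```go
package main

import "fmt"

// Windows performance counters report these time values in 100 ns ticks, so
// multiplying by this factor (i.e. dividing by 1e7) yields seconds.
const ticksToSecondsScaleFactor = 1.0 / 1e7

func main() {
	avgDiskSecPerRead := 25000.0 // raw counter value in ticks
	fmt.Printf("%.4f seconds\n", avgDiskSecPerRead*ticksToSecondsScaleFactor) // 0.0025 seconds
}
```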
Ben Reedy
b210986181 Use perflib for logical_disk collector 2019-10-04 21:05:17 +10:00
Philipp Trulson
375a74f1e8 Add product name & version to os collector 2019-10-01 18:54:50 +02:00
Calle Pettersson
abd5a53045 Merge pull request #406 from floptical/master
Another msiexec install example
2019-10-01 17:53:51 +02:00
Calle Pettersson
aa394d1d8e Merge pull request #411 from breed808/established_gauge
Set tcp_connections_established to gauge type
2019-10-01 17:53:13 +02:00
Calle Pettersson
bdcc7b0913 Merge pull request #412 from Schlump/patch-1
Update collector.hyperv.md
2019-10-01 17:52:17 +02:00
Schlump
d7a908e6c0 Update collector.hyperv.md
Typo in the metric names: metrics exposed by the Hyper-V collector are actually prefixed "wmi_hyperv", not "wmi_hyper".
2019-10-01 11:16:46 +02:00
Ben Reedy
c23a98ae90 Set tcp_connections_established to gauge type
While the ConnectionsEstablished property in the
Win32_PerfRawData_Tcpip_TCP class is listed as a counter, real-world
metric values have been shown to increase *and* decrease.

Documentation for the property states "Number of TCP connections for
which the *current* state is either ESTABLISHED or CLOSE-WAIT" which
would imply the metric is a gauge.
2019-09-27 19:48:16 +10:00
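
In client_golang terms, the change amounts to emitting the value with `prometheus.GaugeValue` instead of `prometheus.CounterValue`. A minimal sketch, with the metric name following the exporter's `wmi_<subsystem>_<name>` convention and the surrounding scaffolding invented for illustration:

```go
package collector

import "github.com/prometheus/client_golang/prometheus"

var connectionsEstablished = prometheus.NewDesc(
	"wmi_tcp_connections_established",
	"Number of TCP connections whose current state is ESTABLISHED or CLOSE-WAIT",
	nil, nil,
)

// emit publishes the raw counter value as a gauge, since the number of
// established connections can go down as well as up.
func emit(ch chan<- prometheus.Metric, established float64) {
	ch <- prometheus.MustNewConstMetric(
		connectionsEstablished,
		prometheus.GaugeValue, // was prometheus.CounterValue before this change
		established,
	)
}
```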
floptical
f8a7c99092 Another msiexec install example
msiexec install example for older windows versions
2019-09-25 15:48:36 -04:00
Ben Reedy
29b020999d Explicitly use first ADFS result
Perflib ADFS only returns a single data result, so looping over data is
unnecessary
2019-09-23 19:24:48 +10:00
Calle Pettersson
2f0a57898f Merge pull request #399 from breed808/adfs
Add adfs collector
2019-09-17 18:41:21 +02:00
Ben Reedy
1ad20d6eb8 Add adfs collector
Perflib is used to collect base AD FS performance counters.
A subset of the total performance counters has been added, but more will
likely be added in the future.

Documentation for the AD FS counters is poor. As such, some counters
have been omitted until their nature can be interpreted.
2019-09-17 21:45:53 +10:00
Calle Pettersson
de000b74c8 Merge pull request #396 from charlesmorin/patch-1
Added a required detail for the .prom file to work properly
2019-09-04 14:46:45 +02:00
Charles Morin
d860d92dc8 Added a required detail for the .prom file to work properly
After adding the `role.prom` file on around 15 Windows virtual machines, we discovered that if the newline at the end of the file is omitted, Prometheus won't get metrics from the virtual machine. Adding the trailing newline fixes the issue, and metric gathering starts immediately.
2019-09-04 08:44:26 -04:00
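
To make the point concrete, a small sketch that writes a `.prom` file and guarantees the trailing newline the commit message describes; the path and metric are placeholders, assuming a textfile-style collector reads files from the configured TEXTFILE_DIR.

```go
package main

import (
	"log"
	"os"
	"strings"
)

// writeProm writes a .prom file, making sure the content ends with a newline;
// per the report above, the file may otherwise be ignored entirely.
func writeProm(path, content string) error {
	if !strings.HasSuffix(content, "\n") {
		content += "\n"
	}
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	if err := writeProm(`C:\custom_metrics\role.prom`, `role{name="webserver"} 1`); err != nil {
		log.Fatal(err)
	}
}
```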
Calle Pettersson
3a19fe4e7d Merge pull request #393 from breed808/net_gauge
Set current_bandwidth to gauge type
2019-08-28 22:18:47 +02:00
Calle Pettersson
26a468f17a Merge pull request #392 from breed808/doc
Additional collector documentation
2019-08-28 22:18:17 +02:00
Calle Pettersson
a6f3b33928 Merge pull request #394 from martinlindhe/fix-cgo-build-tag
Remove cgo build tag from container collector
2019-08-28 22:13:23 +02:00
Calle Pettersson
8ef215cc7e Remove cgo build tag from container collector 2019-08-28 22:06:20 +02:00
Ben Reedy
2c155a12bd Set current_bandwidth to gauge type
Bandwidth estimate for an interface may decrease or increase
2019-08-28 21:12:14 +10:00
Ben Reedy
e1141c3ec0 Add documentation for tcp collector 2019-08-28 21:06:23 +10:00
Ben Reedy
b635ecc6c1 Add documentation for os collector 2019-08-28 20:57:34 +10:00
Calle Pettersson
a7b5cf7aa6 Merge pull request #389 from breed808/doc
Add collector documentation
2019-08-27 07:57:45 +02:00
Ben Reedy
719ccd4f7f Add documentation for system collector 2019-08-26 21:07:55 +10:00
Ben Reedy
7ab8c7dde4 Add documentation for net collector 2019-08-26 21:01:09 +10:00
Ben Reedy
eb002eb667 Add documentation for memory collector 2019-08-22 22:39:15 +10:00
Ben Reedy
a1638cdf4c Add query examples to cpu collector documentation 2019-08-22 22:06:34 +10:00
Ben Reedy
091406877a Add documentation for logical_disk collector 2019-08-22 21:53:44 +10:00
Ben Reedy
84970ac086 Add logon entry to collectors README 2019-08-22 21:53:41 +10:00
Calle Pettersson
d86f318010 Merge pull request #387 from breed808/logical-disk-new-counters
Add logical_disk latency metrics
2019-08-22 08:53:42 +02:00
Ben Reedy
853d615673 Add logical_disk latency metrics 2019-08-22 03:30:40 +00:00
Calle Pettersson
cd9a740e2b Merge pull request #384 from breed808/logon
Export logon sessions metric
2019-08-21 11:05:25 +02:00
Ben Reedy
c70e7674a5 Add logon collector documentation 2019-08-21 18:35:10 +10:00
Ben Reedy
d3e3835c29 Export user logon sessions
Use Win32_LogonSession class to provide user logon sessions by type.
2019-08-20 22:27:03 +10:00
Calle Pettersson
592c8a8d69 Merge pull request #376 from martinlindhe/fix-goroutine-leak
Fix goroutine leak
2019-08-09 10:24:36 +02:00
Calle Pettersson
6f6a479535 Fix goroutine leak 2019-08-08 21:09:21 +02:00
918 changed files with 1747 additions and 241990 deletions

.golangci.yaml (new file, 23 lines)

@@ -0,0 +1,23 @@
linters:
  disable-all: true
  enable:
    - deadcode
    - errcheck
    - golint
    - govet
    - gofmt
    - ineffassign
    - interfacer
    - structcheck
    - unconvert
    - varcheck
issues:
  exclude:
    - don't use underscores in Go names
    - exported type .+ should have comment or be unexported
  exclude-rules:
    - # Golint has many capitalisation complaints on WMI class names
      text: "`?\\w+`? should be `?\\w+`?"
      linters:
        - golint


@@ -4,11 +4,11 @@ build:
binaries:
- name: wmi_exporter
ldflags: |
-X {{repoPath}}/vendor/github.com/prometheus/common/version.Version={{.Version}}
-X {{repoPath}}/vendor/github.com/prometheus/common/version.Revision={{.Revision}}
-X {{repoPath}}/vendor/github.com/prometheus/common/version.Branch={{.Branch}}
-X {{repoPath}}/vendor/github.com/prometheus/common/version.BuildUser={{user}}@{{host}}
-X {{repoPath}}/vendor/github.com/prometheus/common/version.BuildDate={{date "20060102-15:04:05"}}
-X github.com/prometheus/common/version.Version={{.Version}}
-X github.com/prometheus/common/version.Revision={{.Revision}}
-X github.com/prometheus/common/version.Branch={{.Branch}}
-X github.com/prometheus/common/version.BuildUser={{user}}@{{host}}
-X github.com/prometheus/common/version.BuildDate={{date "20060102-15:04:05"}}
tarball:
files:
- LICENSE

Gopkg.lock (generated file, deleted: 175 lines)

@@ -1,175 +0,0 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
digest = "1:3ccf8ba7afe02fd470c4f07d6eea4d0e6875da3d129f95b925f2003ce5dd2024"
name = "github.com/StackExchange/wmi"
packages = ["."]
pruneopts = "NUT"
revision = "5d049714c4a64225c3c79a7cf7d02f7fb5b96338"
version = "1.0.0"
[[projects]]
branch = "master"
digest = "1:f3793f8a708522400cef1dba23385e901aede5519f68971fd69938ef330b07a1"
name = "github.com/alecthomas/template"
packages = [
".",
"parse",
]
pruneopts = "NUT"
revision = "a0175ee3bccc567396460bf5acd36800cb10c49c"
[[projects]]
branch = "master"
digest = "1:fdd419e104ec26bb5bd63cc62637c640453ed2929a7453f3afadbd9a0223da66"
name = "github.com/alecthomas/units"
packages = ["."]
pruneopts = "NUT"
revision = "2efee857e7cfd4f3d0138cc3cbb1b4966962b93a"
[[projects]]
branch = "master"
digest = "1:cb0535f5823b47df7dcb9768ebb6c000b79ad115472910c70efe93c9ed9b2315"
name = "github.com/beorn7/perks"
packages = ["quantile"]
pruneopts = "NUT"
revision = "4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9"
[[projects]]
digest = "1:f9adc21a937e5da643ea14a3488cb7506788876737a5e205394e508627a6eec8"
name = "github.com/dimchansky/utfbom"
packages = ["."]
pruneopts = "NUT"
revision = "d2133a1ce379ef6fa992b0514a77146c60db9d1c"
version = "v1.1.0"
[[projects]]
digest = "1:cb4e216bd9f58866f42dc65893455b24f879b026fdaa1ecc3aafff625fdb5a66"
name = "github.com/go-ole/go-ole"
packages = [
".",
"oleutil",
]
pruneopts = "NUT"
revision = "a41e3c4b706f6ae8dfbff342b06e40fa4d2d0506"
version = "v1.2.1"
[[projects]]
digest = "1:9f35c1344b56e5868d511d231f215edd0650aa572664f856444affdd256e43e4"
name = "github.com/golang/protobuf"
packages = ["proto"]
pruneopts = "NUT"
revision = "925541529c1fa6821df4e44ce2723319eb2be768"
version = "v1.0.0"
[[projects]]
digest = "1:5985ef4caf91ece5d54817c11ea25f182697534f8ae6521eadcd628c142ac4b6"
name = "github.com/matttproud/golang_protobuf_extensions"
packages = ["pbutil"]
pruneopts = "NUT"
revision = "3247c84500bff8d9fb6d579d800f20b3e091582c"
version = "v1.0.0"
[[projects]]
digest = "1:03bca087b180bf24c4f9060775f137775550a0834e18f0bca0520a868679dbd7"
name = "github.com/prometheus/client_golang"
packages = [
"prometheus",
"prometheus/promhttp",
]
pruneopts = "NUT"
revision = "c5b7fccd204277076155f10851dad72b76a49317"
version = "v0.8.0"
[[projects]]
branch = "master"
digest = "1:32d10bdfa8f09ecf13598324dba86ab891f11db3c538b6a34d1c3b5b99d7c36b"
name = "github.com/prometheus/client_model"
packages = ["go"]
pruneopts = "NUT"
revision = "99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c"
[[projects]]
branch = "master"
digest = "1:ce98e83b2b9486b6a9ce5e44fd4097c64e8f2f0eaa6c5041a8f12d3aaa5c17b3"
name = "github.com/prometheus/common"
packages = [
"expfmt",
"internal/bitbucket.org/ww/goautoneg",
"log",
"model",
"version",
]
pruneopts = "NUT"
revision = "e4aa40a9169a88835b849a6efb71e05dc04b88f0"
[[projects]]
branch = "master"
digest = "1:61a95e8d3e39e94207fba1b56d3c2182a356a1e41017aa647f523ae964b6bb0c"
name = "github.com/prometheus/procfs"
packages = [
".",
"internal/util",
"nfs",
"xfs",
]
pruneopts = "NUT"
revision = "54d17b57dd7d4a3aa092476596b3f8a933bde349"
[[projects]]
digest = "1:6989062eb7ccf25cf38bf4fe3dba097ee209f896cda42cefdca3927047bef7b6"
name = "github.com/sirupsen/logrus"
packages = ["."]
pruneopts = "NUT"
revision = "c155da19408a8799da419ed3eeb0cb5db0ad5dbc"
version = "v1.0.5"
[[projects]]
branch = "master"
digest = "1:3f3a05ae0b95893d90b9b3b5afdb79a9b3d96e4e36e099d841ae602e4aca0da8"
name = "golang.org/x/crypto"
packages = ["ssh/terminal"]
pruneopts = "NUT"
revision = "182114d582623c1caa54f73de9c7224e23a48487"
[[projects]]
branch = "master"
digest = "1:ea69008276e11262595a1f9a279ffd51d93e21c32c13b0f81856e962c6f607dd"
name = "golang.org/x/sys"
packages = [
"unix",
"windows",
"windows/registry",
"windows/svc",
"windows/svc/eventlog",
]
pruneopts = "NUT"
revision = "8c0ece68c28377f4c326d85b94f8df0dace46f80"
[[projects]]
digest = "1:22b2dee6f30bc8601f087449a2a819df8388e54e9547349c658f14d8f8c590f2"
name = "gopkg.in/alecthomas/kingpin.v2"
packages = ["."]
pruneopts = "NUT"
revision = "947dcec5ba9c011838740e680966fd7087a71d0d"
version = "v2.2.6"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"github.com/StackExchange/wmi",
"github.com/dimchansky/utfbom",
"github.com/prometheus/client_golang/prometheus",
"github.com/prometheus/client_golang/prometheus/promhttp",
"github.com/prometheus/client_model/go",
"github.com/prometheus/common/expfmt",
"github.com/prometheus/common/log",
"github.com/prometheus/common/version",
"golang.org/x/sys/windows/registry",
"golang.org/x/sys/windows/svc",
"gopkg.in/alecthomas/kingpin.v2",
]
solver-name = "gps-cdcl"
solver-version = 1


@@ -1,4 +0,0 @@
[prune]
non-go = true
go-tests = true
unused-packages = true


@@ -7,7 +7,7 @@ test:
go test -v ./...
lint:
gometalinter --vendor --config gometalinter.config ./...
golangci-lint -c .golangci.yaml run
fmt:
gofmt -l -w -s .


@@ -1,6 +1,6 @@
# WMI exporter
[![Build status](https://ci.appveyor.com/api/projects/status/ljwan71as6pf2joe?svg=true)](https://ci.appveyor.com/project/martinlindhe/wmi-exporter)
[![Build status](https://ci.appveyor.com/api/projects/status/ljwan71as6pf2joe/branch/master?svg=true)](https://ci.appveyor.com/project/martinlindhe/wmi-exporter)
Prometheus exporter for Windows machines, using the WMI (Windows Management Instrumentation).
@@ -10,6 +10,7 @@ Prometheus exporter for Windows machines, using the WMI (Windows Management Inst
Name | Description | Enabled by default
---------|-------------|--------------------
[ad](docs/collector.ad.md) | Active Directory Domain Services |
[adfs](docs/collector.adfs.md) | Active Directory Federation Services |
[cpu](docs/collector.cpu.md) | CPU usage | ✓
[cs](docs/collector.cs.md) | "Computer System" metrics (system properties, num cpus/total memory) | ✓
[container](docs/collector.container.md) | Container metrics |
@@ -17,6 +18,7 @@ Name | Description | Enabled by default
[hyperv](docs/collector.hyperv.md) | Hyper-V hosts |
[iis](docs/collector.iis.md) | IIS sites and applications |
[logical_disk](docs/collector.logical_disk.md) | Logical disks, disk I/O | ✓
[logon](docs/collector.logon.md) | User logon sessions |
[memory](docs/collector.memory.md) | Memory usage metrics |
[msmq](docs/collector.msmq.md) | MSMQ queues |
[mssql](docs/collector.mssql.md) | [SQL Server Performance Objects](https://docs.microsoft.com/en-us/sql/relational-databases/performance-monitor/use-sql-server-objects#SQLServerPOs) metrics |
@@ -67,6 +69,11 @@ Example service collector with a custom query.
msiexec /i <path-to-msi-file> ENABLED_COLLECTORS=os,service --% EXTRA_FLAGS="--collector.service.services-where ""Name LIKE 'sql%'"""
```
On some older versions of Windows you may need to surround parameter values with double quotes to get the install command to parse properly:
```powershell
msiexec /i C:\Users\Administrator\Downloads\wmi_exporter.msi ENABLED_COLLECTORS="ad,iis,logon,memory,process,tcp,thermalzone" TEXTFILE_DIR="C:\custom_metrics\"
```
## Roadmap
See [open issues](https://github.com/martinlindhe/wmi_exporter/issues)
@@ -74,7 +81,6 @@ See [open issues](https://github.com/martinlindhe/wmi_exporter/issues)
## Usage
go get -u github.com/golang/dep
go get -u github.com/prometheus/promu
go get -u github.com/martinlindhe/wmi_exporter
cd $env:GOPATH/src/github.com/martinlindhe/wmi_exporter
@@ -91,11 +97,9 @@ The prometheus metrics will be exposed on [localhost:9182](http://localhost:9182
### Enable only process collector and specify a custom query
.\wmi_exporter.exe --collectors.enabled "process" --collector.process.processes-where "Name LIKE 'firefox%'"
.\wmi_exporter.exe --collectors.enabled "process" --collector.process.whitelist="firefox.+"
When there are multiple processes with the same name, WMI represents those after the first instance as `process-name#index`. So to get them all, rather than just the first one, the query needs to be a wildcard search using a `%` character.
Please note that in Windows batch scripts (and when using the `cmd` command prompt), the `%` character is reserved, so it has to be escaped with another `%`. For example, the wildcard syntax for searching for all firefox processes is `firefox%%`.
When there are multiple processes with the same name, WMI represents those after the first instance as `process-name#index`. So to get them all, rather than just the first one, the [regular expression](https://en.wikipedia.org/wiki/Regular_expression) must use `.+`. See [process](docs/collector.process.md) for more information.
## License


@@ -2,19 +2,27 @@ version: "{build}"
os: Visual Studio 2017
build: off
stack: go 1.10
stack: go 1.13
environment:
GOPATH: c:\gopath
GO111MODULE: on
clone_folder: c:\gopath\src\github.com\martinlindhe\wmi_exporter
install:
- mkdir %GOPATH%\bin
- set PATH=%GOPATH%\bin;%PATH%
- set PATH=%PATH%;C:\mingw-w64\x86_64-7.2.0-posix-seh-rt_v5-rev1\mingw64\bin
- go get -u github.com/prometheus/promu
- go get -u github.com/alecthomas/gometalinter && gometalinter --install
- choco install gitversion.portable make -y
- ps: |
appveyor DownloadFile https://github.com/golangci/golangci-lint/releases/download/v1.21.0/golangci-lint-1.21.0-windows-amd64.zip
Expand-Archive golangci-lint-1.21.0-windows-amd64.zip
Move-Item golangci-lint-1.21.0-windows-amd64\golangci-lint-1.21.0-windows-amd64\golangci-lint.exe $env:GOPATH\bin\golangci-lint.exe
- ps: |
$env:GO111MODULE="off"
go get -u github.com/prometheus/promu
$env:GO111MODULE="on"
test_script:
- make test
@@ -24,6 +32,10 @@ after_test:
build_script:
- ps: |
# go mod download (or, if we don't call it, go build) will write every dependent package name to
# stderr, which will be interpreted as an error and abort the build if ErrorActionPreference is Stop,
# so we need to run it before setting the preference.
go mod download
$ErrorActionPreference = "Stop"
gitversion /output json /showvariable FullSemVer | Set-Content VERSION -PassThru
$Version = Get-Content VERSION


@@ -11,7 +11,7 @@ import (
)
func init() {
Factories["ad"] = NewADCollector
registerCollector("ad", NewADCollector)
}
// A ADCollector is a Prometheus collector for WMI Win32_PerfRawData_DirectoryServices_DirectoryServices metrics

collector/adfs.go (new file, 188 lines)

@@ -0,0 +1,188 @@
// +build windows
package collector
import (
"github.com/prometheus/client_golang/prometheus"
)
func init() {
registerCollector("adfs", newADFSCollector, "AD FS")
}
type adfsCollector struct {
adLoginConnectionFailures *prometheus.Desc
certificateAuthentications *prometheus.Desc
deviceAuthentications *prometheus.Desc
extranetAccountLockouts *prometheus.Desc
federatedAuthentications *prometheus.Desc
passportAuthentications *prometheus.Desc
passiveRequests *prometheus.Desc
passwordChangeFailed *prometheus.Desc
passwordChangeSucceeded *prometheus.Desc
tokenRequests *prometheus.Desc
windowsIntegratedAuthentications *prometheus.Desc
}
// newADFSCollector constructs a new adfsCollector
func newADFSCollector() (Collector, error) {
const subsystem = "adfs"
return &adfsCollector{
adLoginConnectionFailures: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "ad_login_connection_failures"),
"Total number of connection failures to an Active Directory domain controller",
nil,
nil,
),
certificateAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "certificate_authentications"),
"Total number of User Certificate authentications",
nil,
nil,
),
deviceAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "device_authentications"),
"Total number of Device authentications",
nil,
nil,
),
extranetAccountLockouts: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "extranet_account_lockouts"),
"Total number of Extranet Account Lockouts",
nil,
nil,
),
federatedAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "federated_authentications"),
"Total number of authentications from a federated source",
nil,
nil,
),
passportAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "passport_authentications"),
"Total number of Microsoft Passport SSO authentications",
nil,
nil,
),
passiveRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "passive_requests"),
"Total number of passive (browser-based) requests",
nil,
nil,
),
passwordChangeFailed: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "password_change_failed"),
"Total number of failed password changes",
nil,
nil,
),
passwordChangeSucceeded: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "password_change_succeeded"),
"Total number of successful password changes",
nil,
nil,
),
tokenRequests: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "token_requests"),
"Total number of token requests",
nil,
nil,
),
windowsIntegratedAuthentications: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "windows_integrated_authentications"),
"Total number of Windows integrated authentications (Kerberos/NTLM)",
nil,
nil,
),
}, nil
}
type perflibADFS struct {
AdLoginConnectionFailures float64 `perflib:"AD login Connection Failures"`
CertificateAuthentications float64 `perflib:"Certificate Authentications"`
DeviceAuthentications float64 `perflib:"Device Authentications"`
ExtranetAccountLockouts float64 `perflib:"Extranet Account Lockouts"`
FederatedAuthentications float64 `perflib:"Federated Authentications"`
PassportAuthentications float64 `perflib:"Microsoft Passport Authentications"`
PassiveRequests float64 `perflib:"Passive Requests"`
PasswordChangeFailed float64 `perflib:"Password Change Failed Requests"`
PasswordChangeSucceeded float64 `perflib:"Password Change Successful Requests"`
TokenRequests float64 `perflib:"Token Requests"`
WindowsIntegratedAuthentications float64 `perflib:"Windows Integrated Authentications"`
}
func (c *adfsCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
var adfsData []perflibADFS
err := unmarshalObject(ctx.perfObjects["AD FS"], &adfsData)
if err != nil {
return err
}
ch <- prometheus.MustNewConstMetric(
c.adLoginConnectionFailures,
prometheus.CounterValue,
adfsData[0].AdLoginConnectionFailures,
)
ch <- prometheus.MustNewConstMetric(
c.certificateAuthentications,
prometheus.CounterValue,
adfsData[0].CertificateAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.deviceAuthentications,
prometheus.CounterValue,
adfsData[0].DeviceAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.extranetAccountLockouts,
prometheus.CounterValue,
adfsData[0].ExtranetAccountLockouts,
)
ch <- prometheus.MustNewConstMetric(
c.federatedAuthentications,
prometheus.CounterValue,
adfsData[0].FederatedAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.passportAuthentications,
prometheus.CounterValue,
adfsData[0].PassportAuthentications,
)
ch <- prometheus.MustNewConstMetric(
c.passiveRequests,
prometheus.CounterValue,
adfsData[0].PassiveRequests,
)
ch <- prometheus.MustNewConstMetric(
c.passwordChangeFailed,
prometheus.CounterValue,
adfsData[0].PasswordChangeFailed,
)
ch <- prometheus.MustNewConstMetric(
c.passwordChangeSucceeded,
prometheus.CounterValue,
adfsData[0].PasswordChangeSucceeded,
)
ch <- prometheus.MustNewConstMetric(
c.tokenRequests,
prometheus.CounterValue,
adfsData[0].TokenRequests,
)
ch <- prometheus.MustNewConstMetric(
c.windowsIntegratedAuthentications,
prometheus.CounterValue,
adfsData[0].WindowsIntegratedAuthentications,
)
return nil
}


@@ -2,6 +2,7 @@ package collector
import (
"strconv"
"strings"
"github.com/leoluk/perflib_exporter/perflib"
"github.com/prometheus/client_golang/prometheus"
@@ -47,8 +48,41 @@ func getWindowsVersion() float64 {
return currentv_flt
}
// Factories ...
var Factories = make(map[string]func() (Collector, error))
type collectorBuilder func() (Collector, error)
var (
builders = make(map[string]collectorBuilder)
perfCounterDependencies = make(map[string]string)
)
func registerCollector(name string, builder collectorBuilder, perfCounterNames ...string) {
builders[name] = builder
perfIndicies := make([]string, 0, len(perfCounterNames))
for _, cn := range perfCounterNames {
perfIndicies = append(perfIndicies, MapCounterToIndex(cn))
}
perfCounterDependencies[name] = strings.Join(perfIndicies, " ")
}
func Available() []string {
cs := make([]string, 0, len(builders))
for c := range builders {
cs = append(cs, c)
}
return cs
}
func Build(collector string) (Collector, error) {
return builders[collector]()
}
func getPerfQuery(collectors []string) string {
parts := make([]string, 0, len(collectors))
for _, c := range collectors {
if p := perfCounterDependencies[c]; p != "" {
parts = append(parts, p)
}
}
return strings.Join(parts, " ")
}
// Collector is the interface a collector has to implement.
type Collector interface {
@@ -61,8 +95,9 @@ type ScrapeContext struct {
}
// PrepareScrapeContext creates a ScrapeContext to be used during a single scrape
func PrepareScrapeContext() (*ScrapeContext, error) {
objs, err := getPerflibSnapshot()
func PrepareScrapeContext(collectors []string) (*ScrapeContext, error) {
q := getPerfQuery(collectors) // TODO: Memoize
objs, err := getPerflibSnapshot(q)
if err != nil {
return nil, err
}


@@ -1,4 +1,4 @@
// +build windows,cgo
// +build windows
package collector
@@ -9,7 +9,7 @@ import (
)
func init() {
Factories["container"] = NewContainerMetricsCollector
registerCollector("container", NewContainerMetricsCollector)
}
// A ContainerMetricsCollector is a Prometheus collector for containers metrics


@@ -9,7 +9,14 @@ import (
)
func init() {
Factories["cpu"] = newCPUCollector
var deps string
// See below for 6.05 magic value
if getWindowsVersion() > 6.05 {
deps = "Processor Information"
} else {
deps = "Processor"
}
registerCollector("cpu", newCPUCollector, deps)
}
type cpuCollectorBasic struct {
@@ -38,7 +45,7 @@ func newCPUCollector() (Collector, error) {
version := getWindowsVersion()
// For Windows 2008 (version 6.0) or earlier we only have the "Processor"
// class. As of Windows 2008 R2 (version 6.1) the more detailed
// "ProcessorInformation" set is available (although some of the counters
// "Processor Information" set is available (although some of the counters
// are added in later versions, so we aren't guaranteed to get all of
// them).
// Value 6.05 was selected to split between Windows versions.


@@ -11,13 +11,14 @@ import (
)
func init() {
Factories["cs"] = NewCSCollector
registerCollector("cs", NewCSCollector)
}
// A CSCollector is a Prometheus collector for WMI metrics
type CSCollector struct {
PhysicalMemoryBytes *prometheus.Desc
LogicalProcessors *prometheus.Desc
Hostname *prometheus.Desc
}
// NewCSCollector ...
@@ -37,6 +38,15 @@ func NewCSCollector() (Collector, error) {
nil,
nil,
),
Hostname: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "hostname"),
"Labeled system hostname information as provided by ComputerSystem.DNSHostName and ComputerSystem.Domain",
[]string{
"hostname",
"domain",
"fqdn"},
nil,
),
}, nil
}
@@ -55,6 +65,9 @@ func (c *CSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) e
type Win32_ComputerSystem struct {
NumberOfLogicalProcessors uint32
TotalPhysicalMemory uint64
DNSHostname string
Domain string
Workgroup *string
}
func (c *CSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
@@ -79,5 +92,21 @@ func (c *CSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, er
float64(dst[0].TotalPhysicalMemory),
)
var fqdn string
if dst[0].Workgroup == nil || dst[0].Domain != *dst[0].Workgroup {
fqdn = dst[0].DNSHostname + "." + dst[0].Domain
} else {
fqdn = dst[0].DNSHostname
}
ch <- prometheus.MustNewConstMetric(
c.Hostname,
prometheus.GaugeValue,
1.0,
dst[0].DNSHostname,
dst[0].Domain,
fqdn,
)
return nil, nil
}


@@ -11,7 +11,7 @@ import (
)
func init() {
Factories["dns"] = NewDNSCollector
registerCollector("dns", NewDNSCollector)
}
// A DNSCollector is a Prometheus collector for WMI Win32_PerfRawData_DNS_DNS metrics


@@ -11,7 +11,7 @@ import (
)
func init() {
Factories["hyperv"] = NewHyperVCollector
registerCollector("hyperv", NewHyperVCollector)
}
// HyperVCollector is a Prometheus collector for hyper-v


@@ -16,7 +16,7 @@ import (
)
func init() {
Factories["iis"] = NewIISCollector
registerCollector("iis", NewIISCollector)
}
type simple_version struct {


@@ -6,14 +6,13 @@ import (
"fmt"
"regexp"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"gopkg.in/alecthomas/kingpin.v2"
)
func init() {
Factories["logical_disk"] = NewLogicalDiskCollector
registerCollector("logical_disk", NewLogicalDiskCollector, "LogicalDisk")
}
var (
@@ -27,19 +26,22 @@ var (
).Default("").String()
)
// A LogicalDiskCollector is a Prometheus collector for WMI Win32_PerfRawData_PerfDisk_LogicalDisk metrics
// A LogicalDiskCollector is a Prometheus collector for perflib logicalDisk metrics
type LogicalDiskCollector struct {
RequestsQueued *prometheus.Desc
ReadBytesTotal *prometheus.Desc
ReadsTotal *prometheus.Desc
WriteBytesTotal *prometheus.Desc
WritesTotal *prometheus.Desc
ReadTime *prometheus.Desc
WriteTime *prometheus.Desc
TotalSpace *prometheus.Desc
FreeSpace *prometheus.Desc
IdleTime *prometheus.Desc
SplitIOs *prometheus.Desc
RequestsQueued *prometheus.Desc
ReadBytesTotal *prometheus.Desc
ReadsTotal *prometheus.Desc
WriteBytesTotal *prometheus.Desc
WritesTotal *prometheus.Desc
ReadTime *prometheus.Desc
WriteTime *prometheus.Desc
TotalSpace *prometheus.Desc
FreeSpace *prometheus.Desc
IdleTime *prometheus.Desc
SplitIOs *prometheus.Desc
ReadLatency *prometheus.Desc
WriteLatency *prometheus.Desc
ReadWriteLatency *prometheus.Desc
volumeWhitelistPattern *regexp.Regexp
volumeBlacklistPattern *regexp.Regexp
@@ -127,6 +129,27 @@ func NewLogicalDiskCollector() (Collector, error) {
nil,
),
ReadLatency: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "read_latency_seconds_total"),
"Shows the average time, in seconds, of a read operation from the disk (LogicalDisk.AvgDiskSecPerRead)",
[]string{"volume"},
nil,
),
WriteLatency: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "write_latency_seconds_total"),
"Shows the average time, in seconds, of a write operation to the disk (LogicalDisk.AvgDiskSecPerWrite)",
[]string{"volume"},
nil,
),
ReadWriteLatency: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "read_write_latency_seconds_total"),
"Shows the time, in seconds, of the average disk transfer (LogicalDisk.AvgDiskSecPerTransfer)",
[]string{"volume"},
nil,
),
volumeWhitelistPattern: regexp.MustCompile(fmt.Sprintf("^(?:%s)$", *volumeWhitelist)),
volumeBlacklistPattern: regexp.MustCompile(fmt.Sprintf("^(?:%s)$", *volumeBlacklist)),
}, nil
@@ -135,7 +158,7 @@ func NewLogicalDiskCollector() (Collector, error) {
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *LogicalDiskCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting logical_disk metrics:", desc, err)
return err
}
@@ -145,25 +168,27 @@ func (c *LogicalDiskCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.
// Win32_PerfRawData_PerfDisk_LogicalDisk docs:
// - https://msdn.microsoft.com/en-us/windows/hardware/aa394307(v=vs.71) - Win32_PerfRawData_PerfDisk_LogicalDisk class
// - https://msdn.microsoft.com/en-us/library/ms803973.aspx - LogicalDisk object reference
type Win32_PerfRawData_PerfDisk_LogicalDisk struct {
type logicalDisk struct {
Name string
CurrentDiskQueueLength uint32
DiskReadBytesPerSec uint64
DiskReadsPerSec uint32
DiskWriteBytesPerSec uint64
DiskWritesPerSec uint32
PercentDiskReadTime uint64
PercentDiskWriteTime uint64
PercentFreeSpace uint32
PercentFreeSpace_Base uint32
PercentIdleTime uint64
SplitIOPerSec uint32
CurrentDiskQueueLength float64 `perflib:"Current Disk Queue Length"`
DiskReadBytesPerSec float64 `perflib:"Disk Read Bytes/sec"`
DiskReadsPerSec float64 `perflib:"Disk Reads/sec"`
DiskWriteBytesPerSec float64 `perflib:"Disk Write Bytes/sec"`
DiskWritesPerSec float64 `perflib:"Disk Writes/sec"`
PercentDiskReadTime float64 `perflib:"% Disk Read Time"`
PercentDiskWriteTime float64 `perflib:"% Disk Write Time"`
PercentFreeSpace float64 `perflib:"% Free Space_Base"`
PercentFreeSpace_Base float64 `perflib:"Free Megabytes"`
PercentIdleTime float64 `perflib:"% Idle Time"`
SplitIOPerSec float64 `perflib:"Split IO/Sec"`
AvgDiskSecPerRead float64 `perflib:"Avg. Disk sec/Read"`
AvgDiskSecPerWrite float64 `perflib:"Avg. Disk sec/Write"`
AvgDiskSecPerTransfer float64 `perflib:"Avg. Disk sec/Transfer"`
}
func (c *LogicalDiskCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_PerfDisk_LogicalDisk
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
func (c *LogicalDiskCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []logicalDisk
if err := unmarshalObject(ctx.perfObjects["LogicalDisk"], &dst); err != nil {
return nil, err
}
@@ -177,77 +202,98 @@ func (c *LogicalDiskCollector) collect(ch chan<- prometheus.Metric) (*prometheus
ch <- prometheus.MustNewConstMetric(
c.RequestsQueued,
prometheus.GaugeValue,
float64(volume.CurrentDiskQueueLength),
volume.CurrentDiskQueueLength,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ReadBytesTotal,
prometheus.CounterValue,
float64(volume.DiskReadBytesPerSec),
volume.DiskReadBytesPerSec,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ReadsTotal,
prometheus.CounterValue,
float64(volume.DiskReadsPerSec),
volume.DiskReadsPerSec,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.WriteBytesTotal,
prometheus.CounterValue,
float64(volume.DiskWriteBytesPerSec),
volume.DiskWriteBytesPerSec,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.WritesTotal,
prometheus.CounterValue,
float64(volume.DiskWritesPerSec),
volume.DiskWritesPerSec,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ReadTime,
prometheus.CounterValue,
float64(volume.PercentDiskReadTime)*ticksToSecondsScaleFactor,
volume.PercentDiskReadTime,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.WriteTime,
prometheus.CounterValue,
float64(volume.PercentDiskWriteTime)*ticksToSecondsScaleFactor,
volume.PercentDiskWriteTime,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.FreeSpace,
prometheus.GaugeValue,
float64(volume.PercentFreeSpace)*1024*1024,
volume.PercentFreeSpace_Base*1024*1024,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.TotalSpace,
prometheus.GaugeValue,
float64(volume.PercentFreeSpace_Base)*1024*1024,
volume.PercentFreeSpace*1024*1024,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.IdleTime,
prometheus.CounterValue,
float64(volume.PercentIdleTime)*ticksToSecondsScaleFactor,
volume.PercentIdleTime,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.SplitIOs,
prometheus.CounterValue,
float64(volume.SplitIOPerSec),
volume.SplitIOPerSec,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ReadLatency,
prometheus.CounterValue,
volume.AvgDiskSecPerRead*ticksToSecondsScaleFactor,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.WriteLatency,
prometheus.CounterValue,
volume.AvgDiskSecPerWrite*ticksToSecondsScaleFactor,
volume.Name,
)
ch <- prometheus.MustNewConstMetric(
c.ReadWriteLatency,
prometheus.CounterValue,
volume.AvgDiskSecPerTransfer*ticksToSecondsScaleFactor,
volume.Name,
)
}

collector/logon.go (new file, 199 lines)

@@ -0,0 +1,199 @@
// +build windows
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
registerCollector("logon", NewLogonCollector)
}
// A LogonCollector is a Prometheus collector for WMI metrics
type LogonCollector struct {
LogonType *prometheus.Desc
}
// NewLogonCollector ...
func NewLogonCollector() (Collector, error) {
const subsystem = "logon"
return &LogonCollector{
LogonType: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "logon_type"),
"Number of active logon sessions (LogonSession.LogonType)",
[]string{"status"},
nil,
),
}, nil
}
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *LogonCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting user metrics:", desc, err)
return err
}
return nil
}
// Win32_LogonSession docs:
// - https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/win32-logonsession
type Win32_LogonSession struct {
LogonType uint32
}
func (c *LogonCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_LogonSession
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
// Init counters
system := 0
interactive := 0
network := 0
batch := 0
service := 0
proxy := 0
unlock := 0
networkcleartext := 0
newcredentials := 0
remoteinteractive := 0
cachedinteractive := 0
cachedremoteinteractive := 0
cachedunlock := 0
for _, entry := range dst {
switch entry.LogonType {
case 0:
system++
case 2:
interactive++
case 3:
network++
case 4:
batch++
case 5:
service++
case 6:
proxy++
case 7:
unlock++
case 8:
networkcleartext++
case 9:
newcredentials++
case 10:
remoteinteractive++
case 11:
cachedinteractive++
case 12:
cachedremoteinteractive++
case 13:
cachedunlock++
}
}
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(system),
"system",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(interactive),
"interactive",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(network),
"network",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(batch),
"batch",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(service),
"service",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(proxy),
"proxy",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(unlock),
"unlock",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(networkcleartext),
"network_clear_text",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(newcredentials),
"new_credentials",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(remoteinteractive),
"remote_interactive",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(cachedinteractive),
"cached_interactive",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(cachedremoteinteractive),
"cached_remote_interactive",
)
ch <- prometheus.MustNewConstMetric(
c.LogonType,
prometheus.GaugeValue,
float64(cachedunlock),
"cached_unlock",
)
return nil, nil
}


@@ -6,16 +6,15 @@
package collector
import (
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
Factories["memory"] = NewMemoryCollector
registerCollector("memory", NewMemoryCollector, "Memory")
}
// A MemoryCollector is a Prometheus collector for WMI Win32_PerfRawData_PerfOS_Memory metrics
// A MemoryCollector is a Prometheus collector for perflib Memory metrics
type MemoryCollector struct {
AvailableBytes *prometheus.Desc
CacheBytes *prometheus.Desc
@@ -257,247 +256,246 @@ func NewMemoryCollector() (Collector, error) {
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *MemoryCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting memory metrics:", desc, err)
return err
}
return nil
}
type Win32_PerfRawData_PerfOS_Memory struct {
AvailableBytes uint64
AvailableKBytes uint64
AvailableMBytes uint64
CacheBytes uint64
CacheBytesPeak uint64
CacheFaultsPersec uint32
CommitLimit uint64
CommittedBytes uint64
DemandZeroFaultsPersec uint32
FreeAndZeroPageListBytes uint64
FreeSystemPageTableEntries uint32
ModifiedPageListBytes uint64
PageFaultsPersec uint32
PageReadsPersec uint32
PagesInputPersec uint32
PagesOutputPersec uint32
PagesPersec uint32
PageWritesPersec uint32
PoolNonpagedAllocs uint32
PoolNonpagedBytes uint64
PoolPagedAllocs uint32
PoolPagedBytes uint64
PoolPagedResidentBytes uint64
StandbyCacheCoreBytes uint64
StandbyCacheNormalPriorityBytes uint64
StandbyCacheReserveBytes uint64
SystemCacheResidentBytes uint64
SystemCodeResidentBytes uint64
SystemCodeTotalBytes uint64
SystemDriverResidentBytes uint64
SystemDriverTotalBytes uint64
TransitionFaultsPersec uint32
TransitionPagesRePurposedPersec uint32
WriteCopiesPersec uint32
type memory struct {
AvailableBytes float64 `perflib:"Available Bytes"`
AvailableKBytes float64 `perflib:"Available KBytes"`
AvailableMBytes float64 `perflib:"Available MBytes"`
CacheBytes float64 `perflib:"Cache Bytes"`
CacheBytesPeak float64 `perflib:"Cache Bytes Peak"`
CacheFaultsPersec float64 `perflib:"Cache Faults/sec"`
CommitLimit float64 `perflib:"Commit Limit"`
CommittedBytes float64 `perflib:"Committed Bytes"`
DemandZeroFaultsPersec float64 `perflib:"Demand Zero Faults/sec"`
FreeAndZeroPageListBytes float64 `perflib:"Free & Zero Page List Bytes"`
FreeSystemPageTableEntries float64 `perflib:"Free System Page Table Entries"`
ModifiedPageListBytes float64 `perflib:"Modified Page List Bytes"`
PageFaultsPersec float64 `perflib:"Page Faults/sec"`
PageReadsPersec float64 `perflib:"Page Reads/sec"`
PagesInputPersec float64 `perflib:"Pages Input/sec"`
PagesOutputPersec float64 `perflib:"Pages Output/sec"`
PagesPersec float64 `perflib:"Pages/sec"`
PageWritesPersec float64 `perflib:"Page Writes/sec"`
PoolNonpagedAllocs float64 `perflib:"Pool Nonpaged Allocs"`
PoolNonpagedBytes float64 `perflib:"Pool Nonpaged Bytes"`
PoolPagedAllocs float64 `perflib:"Pool Paged Allocs"`
PoolPagedBytes float64 `perflib:"Pool Paged Bytes"`
PoolPagedResidentBytes float64 `perflib:"Pool Paged Resident Bytes"`
StandbyCacheCoreBytes float64 `perflib:"Standby Cache Core Bytes"`
StandbyCacheNormalPriorityBytes float64 `perflib:"Standby Cache Normal Priority Bytes"`
StandbyCacheReserveBytes float64 `perflib:"Standby Cache Reserve Bytes"`
SystemCacheResidentBytes float64 `perflib:"System Cache Resident Bytes"`
SystemCodeResidentBytes float64 `perflib:"System Code Resident Bytes"`
SystemCodeTotalBytes float64 `perflib:"System Code Total Bytes"`
SystemDriverResidentBytes float64 `perflib:"System Driver Resident Bytes"`
SystemDriverTotalBytes float64 `perflib:"System Driver Total Bytes"`
TransitionFaultsPersec float64 `perflib:"Transition Faults/sec"`
TransitionPagesRePurposedPersec float64 `perflib:"Transition Pages RePurposed/sec"`
WriteCopiesPersec float64 `perflib:"Write Copies/sec"`
}
func (c *MemoryCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_PerfOS_Memory
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
func (c *MemoryCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []memory
if err := unmarshalObject(ctx.perfObjects["Memory"], &dst); err != nil {
return nil, err
}
ch <- prometheus.MustNewConstMetric(
c.AvailableBytes,
prometheus.GaugeValue,
float64(dst[0].AvailableBytes),
dst[0].AvailableBytes,
)
ch <- prometheus.MustNewConstMetric(
c.CacheBytes,
prometheus.GaugeValue,
float64(dst[0].CacheBytes),
dst[0].CacheBytes,
)
ch <- prometheus.MustNewConstMetric(
c.CacheBytesPeak,
prometheus.GaugeValue,
float64(dst[0].CacheBytesPeak),
dst[0].CacheBytesPeak,
)
ch <- prometheus.MustNewConstMetric(
c.CacheFaultsTotal,
prometheus.GaugeValue,
float64(dst[0].CacheFaultsPersec),
dst[0].CacheFaultsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.CommitLimit,
prometheus.GaugeValue,
float64(dst[0].CommitLimit),
dst[0].CommitLimit,
)
ch <- prometheus.MustNewConstMetric(
c.CommittedBytes,
prometheus.GaugeValue,
float64(dst[0].CommittedBytes),
dst[0].CommittedBytes,
)
ch <- prometheus.MustNewConstMetric(
c.DemandZeroFaultsTotal,
prometheus.GaugeValue,
float64(dst[0].DemandZeroFaultsPersec),
dst[0].DemandZeroFaultsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.FreeAndZeroPageListBytes,
prometheus.GaugeValue,
float64(dst[0].FreeAndZeroPageListBytes),
dst[0].FreeAndZeroPageListBytes,
)
ch <- prometheus.MustNewConstMetric(
c.FreeSystemPageTableEntries,
prometheus.GaugeValue,
float64(dst[0].FreeSystemPageTableEntries),
dst[0].FreeSystemPageTableEntries,
)
ch <- prometheus.MustNewConstMetric(
c.ModifiedPageListBytes,
prometheus.GaugeValue,
float64(dst[0].ModifiedPageListBytes),
dst[0].ModifiedPageListBytes,
)
ch <- prometheus.MustNewConstMetric(
c.PageFaultsTotal,
prometheus.GaugeValue,
float64(dst[0].PageFaultsPersec),
dst[0].PageFaultsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SwapPageReadsTotal,
prometheus.GaugeValue,
float64(dst[0].PageReadsPersec),
dst[0].PageReadsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SwapPagesReadTotal,
prometheus.GaugeValue,
float64(dst[0].PagesInputPersec),
dst[0].PagesInputPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SwapPagesWrittenTotal,
prometheus.GaugeValue,
float64(dst[0].PagesOutputPersec),
dst[0].PagesOutputPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SwapPageOperationsTotal,
prometheus.GaugeValue,
float64(dst[0].PagesPersec),
dst[0].PagesPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SwapPageWritesTotal,
prometheus.GaugeValue,
float64(dst[0].PageWritesPersec),
dst[0].PageWritesPersec,
)
ch <- prometheus.MustNewConstMetric(
c.PoolNonpagedAllocsTotal,
prometheus.GaugeValue,
float64(dst[0].PoolNonpagedAllocs),
dst[0].PoolNonpagedAllocs,
)
ch <- prometheus.MustNewConstMetric(
c.PoolNonpagedBytes,
prometheus.GaugeValue,
float64(dst[0].PoolNonpagedBytes),
dst[0].PoolNonpagedBytes,
)
ch <- prometheus.MustNewConstMetric(
c.PoolPagedAllocsTotal,
prometheus.GaugeValue,
float64(dst[0].PoolPagedAllocs),
dst[0].PoolPagedAllocs,
)
ch <- prometheus.MustNewConstMetric(
c.PoolPagedBytes,
prometheus.GaugeValue,
float64(dst[0].PoolPagedBytes),
dst[0].PoolPagedBytes,
)
ch <- prometheus.MustNewConstMetric(
c.PoolPagedResidentBytes,
prometheus.GaugeValue,
float64(dst[0].PoolPagedResidentBytes),
dst[0].PoolPagedResidentBytes,
)
ch <- prometheus.MustNewConstMetric(
c.StandbyCacheCoreBytes,
prometheus.GaugeValue,
float64(dst[0].StandbyCacheCoreBytes),
dst[0].StandbyCacheCoreBytes,
)
ch <- prometheus.MustNewConstMetric(
c.StandbyCacheNormalPriorityBytes,
prometheus.GaugeValue,
float64(dst[0].StandbyCacheNormalPriorityBytes),
dst[0].StandbyCacheNormalPriorityBytes,
)
ch <- prometheus.MustNewConstMetric(
c.StandbyCacheReserveBytes,
prometheus.GaugeValue,
float64(dst[0].StandbyCacheReserveBytes),
dst[0].StandbyCacheReserveBytes,
)
ch <- prometheus.MustNewConstMetric(
c.SystemCacheResidentBytes,
prometheus.GaugeValue,
float64(dst[0].SystemCacheResidentBytes),
dst[0].SystemCacheResidentBytes,
)
ch <- prometheus.MustNewConstMetric(
c.SystemCodeResidentBytes,
prometheus.GaugeValue,
float64(dst[0].SystemCodeResidentBytes),
dst[0].SystemCodeResidentBytes,
)
ch <- prometheus.MustNewConstMetric(
c.SystemCodeTotalBytes,
prometheus.GaugeValue,
float64(dst[0].SystemCodeTotalBytes),
dst[0].SystemCodeTotalBytes,
)
ch <- prometheus.MustNewConstMetric(
c.SystemDriverResidentBytes,
prometheus.GaugeValue,
float64(dst[0].SystemDriverResidentBytes),
dst[0].SystemDriverResidentBytes,
)
ch <- prometheus.MustNewConstMetric(
c.SystemDriverTotalBytes,
prometheus.GaugeValue,
float64(dst[0].SystemDriverTotalBytes),
dst[0].SystemDriverTotalBytes,
)
ch <- prometheus.MustNewConstMetric(
c.TransitionFaultsTotal,
prometheus.GaugeValue,
float64(dst[0].TransitionFaultsPersec),
dst[0].TransitionFaultsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.TransitionPagesRepurposedTotal,
prometheus.GaugeValue,
float64(dst[0].TransitionPagesRePurposedPersec),
dst[0].TransitionPagesRePurposedPersec,
)
ch <- prometheus.MustNewConstMetric(
c.WriteCopiesTotal,
prometheus.GaugeValue,
float64(dst[0].WriteCopiesPersec),
dst[0].WriteCopiesPersec,
)
return nil, nil


@@ -12,7 +12,7 @@ import (
)
func init() {
Factories["msmq"] = NewMSMQCollector
registerCollector("msmq", NewMSMQCollector)
}
var (


@@ -127,7 +127,7 @@ func mssqlExpandEnabledCollectors(enabled string) []string {
}
func init() {
Factories["mssql"] = NewMSSQLCollector
registerCollector("mssql", NewMSSQLCollector)
}
// A MSSQLCollector is a Prometheus collector for various WMI Win32_PerfRawData_MSSQLSERVER_* metrics
@@ -179,7 +179,8 @@ type MSSQLCollector struct {
AccessMethodsUsedtreepagecookie *prometheus.Desc
AccessMethodsWorkfilesCreated *prometheus.Desc
AccessMethodsWorktablesCreated *prometheus.Desc
AccessMethodsWorktablesFromCacheRatio *prometheus.Desc
AccessMethodsWorktablesFromCacheHits *prometheus.Desc
AccessMethodsWorktablesFromCacheLookups *prometheus.Desc
// Win32_PerfRawData_{instance}_SQLServerAvailabilityReplica
AvailReplicaBytesReceivedfromReplica *prometheus.Desc
@@ -194,7 +195,8 @@ type MSSQLCollector struct {
// Win32_PerfRawData_{instance}_SQLServerBufferManager
BufManBackgroundwriterpages *prometheus.Desc
BufManBuffercachehitratio *prometheus.Desc
BufManBuffercachehits *prometheus.Desc
BufManBuffercachelookups *prometheus.Desc
BufManCheckpointpages *prometheus.Desc
BufManDatabasepages *prometheus.Desc
BufManExtensionallocatedpages *prometheus.Desc
@@ -252,7 +254,8 @@ type MSSQLCollector struct {
DatabasesDBCCLogicalScanBytes *prometheus.Desc
DatabasesGroupCommitTime *prometheus.Desc
DatabasesLogBytesFlushed *prometheus.Desc
DatabasesLogCacheHitRatio *prometheus.Desc
DatabasesLogCacheHits *prometheus.Desc
DatabasesLogCacheLookups *prometheus.Desc
DatabasesLogCacheReads *prometheus.Desc
DatabasesLogFilesSizeKB *prometheus.Desc
DatabasesLogFilesUsedSizeKB *prometheus.Desc
@@ -317,7 +320,8 @@ type MSSQLCollector struct {
GenStatsUserConnections *prometheus.Desc
// Win32_PerfRawData_{instance}_SQLServerLocks
LocksAverageWaitTimems *prometheus.Desc
LocksWaitTime *prometheus.Desc
LocksCount *prometheus.Desc
LocksLockRequests *prometheus.Desc
LocksLockTimeouts *prometheus.Desc
LocksLockTimeoutstimeout0 *prometheus.Desc
@@ -656,12 +660,18 @@ func NewMSSQLCollector() (Collector, error) {
[]string{"instance"},
nil,
),
AccessMethodsWorktablesFromCacheRatio: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "accessmethods_worktables_from_cache_ratio"),
AccessMethodsWorktablesFromCacheHits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "accessmethods_worktables_from_cache_hits"),
"(AccessMethods.WorktablesFromCacheRatio)",
[]string{"instance"},
nil,
),
AccessMethodsWorktablesFromCacheLookups: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "accessmethods_worktables_from_cache_lookups"),
"(AccessMethods.WorktablesFromCacheRatio_Base)",
[]string{"instance"},
nil,
),
// Win32_PerfRawData_{instance}_SQLServerAvailabilityReplica
AvailReplicaBytesReceivedfromReplica: prometheus.NewDesc(
@@ -726,12 +736,18 @@ func NewMSSQLCollector() (Collector, error) {
[]string{"instance"},
nil,
),
BufManBuffercachehitratio: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "bufman_buffer_cache_hit_ratio"),
BufManBuffercachehits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "bufman_buffer_cache_hits"),
"(BufferManager.Buffercachehitratio)",
[]string{"instance"},
nil,
),
BufManBuffercachelookups: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "bufman_buffer_cache_lookups"),
"(BufferManager.Buffercachehitratio_Base)",
[]string{"instance"},
nil,
),
BufManCheckpointpages: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "bufman_checkpoint_pages"),
"(BufferManager.Checkpointpages)",
@@ -1054,12 +1070,18 @@ func NewMSSQLCollector() (Collector, error) {
[]string{"instance", "database"},
nil,
),
DatabasesLogCacheHitRatio: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "databases_log_cache_hit_ratio"),
DatabasesLogCacheHits: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "databases_log_cache_hits"),
"(Databases.LogCacheHitRatio)",
[]string{"instance", "database"},
nil,
),
DatabasesLogCacheLookups: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "databases_log_cache_lookups"),
"(Databases.LogCacheHitRatio_Base)",
[]string{"instance", "database"},
nil,
),
DatabasesLogCacheReads: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "databases_log_cache_reads"),
"(Databases.LogCacheReads)",
@@ -1424,9 +1446,15 @@ func NewMSSQLCollector() (Collector, error) {
),
// Win32_PerfRawData_{instance}_SQLServerLocks
LocksAverageWaitTimems: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "locks_average_wait_seconds"),
"(Locks.AverageWaitTimems)",
LocksWaitTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "locks_wait_time_seconds"),
"(Locks.AverageWaitTimems Total time in seconds which locks have been holding resources)",
[]string{"instance", "resource"},
nil,
),
LocksCount: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "locks_count"),
"(Locks.AverageWaitTimems_Base count of how often requests have run into locks)",
[]string{"instance", "resource"},
nil,
),
@@ -1863,6 +1891,7 @@ type win32PerfRawDataSQLServerAccessMethods struct {
WorkfilesCreatedPersec uint64
WorktablesCreatedPersec uint64
WorktablesFromCacheRatio uint64
WorktablesFromCacheRatio_Base uint64
}
func (c *MSSQLCollector) collectAccessMethods(ch chan<- prometheus.Metric, sqlInstance string) (*prometheus.Desc, error) {
@@ -2175,11 +2204,18 @@ func (c *MSSQLCollector) collectAccessMethods(ch chan<- prometheus.Metric, sqlIn
)
ch <- prometheus.MustNewConstMetric(
c.AccessMethodsWorktablesFromCacheRatio,
c.AccessMethodsWorktablesFromCacheHits,
prometheus.CounterValue,
float64(v.WorktablesFromCacheRatio),
sqlInstance,
)
ch <- prometheus.MustNewConstMetric(
c.AccessMethodsWorktablesFromCacheLookups,
prometheus.CounterValue,
float64(v.WorktablesFromCacheRatio_Base),
sqlInstance,
)
return nil, nil
}
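The pattern repeated throughout these MSSQL changes is that a perflib ratio counter and its hidden `_Base` denominator are now exported as two raw counters (for example hits and lookups) instead of one precomputed ratio, leaving the ratio itself to be derived at query time. Below is a minimal, self-contained sketch of that pattern using client_golang; the `demo_cache_*` metric names, the `rawCache` struct and the hard-coded instance label are illustrative assumptions, not part of the exporter.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// rawCache mimics a perflib fraction counter pair: the numerator and the
// matching _Base denominator, both monotonically increasing raw values.
type rawCache struct {
	Hits    uint64
	Lookups uint64
}

// cacheCollector exports both halves as plain counters instead of a
// precomputed ratio, mirroring e.g. bufman_buffer_cache_hits/_lookups.
type cacheCollector struct {
	hits    *prometheus.Desc
	lookups *prometheus.Desc
	read    func() rawCache
}

func (c *cacheCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.hits
	ch <- c.lookups
}

func (c *cacheCollector) Collect(ch chan<- prometheus.Metric) {
	v := c.read()
	ch <- prometheus.MustNewConstMetric(c.hits, prometheus.CounterValue, float64(v.Hits), "SQLEXPRESS")
	ch <- prometheus.MustNewConstMetric(c.lookups, prometheus.CounterValue, float64(v.Lookups), "SQLEXPRESS")
}

func main() {
	c := &cacheCollector{
		hits:    prometheus.NewDesc("demo_cache_hits", "Raw fraction counter", []string{"instance"}, nil),
		lookups: prometheus.NewDesc("demo_cache_lookups", "Matching _Base counter", []string{"instance"}, nil),
		read:    func() rawCache { return rawCache{Hits: 42, Lookups: 64} },
	}
	reg := prometheus.NewRegistry()
	reg.MustRegister(c)
	families, _ := reg.Gather()
	for _, mf := range families {
		fmt.Println(mf.GetName(), mf.GetMetric()[0].GetCounter().GetValue())
	}
}
```

With both series exposed, a dashboard can compute, say, the buffer cache hit ratio as the increase in hits divided by the increase in lookups over whatever window it needs.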
@@ -2282,6 +2318,7 @@ func (c *MSSQLCollector) collectAvailabilityReplica(ch chan<- prometheus.Metric,
type win32PerfRawDataSQLServerBufferManager struct {
BackgroundwriterpagesPersec uint64
Buffercachehitratio uint64
Buffercachehitratio_Base uint64
CheckpointpagesPersec uint64
Databasepages uint64
Extensionallocatedpages uint64
@@ -2327,12 +2364,19 @@ func (c *MSSQLCollector) collectBufferManager(ch chan<- prometheus.Metric, sqlIn
)
ch <- prometheus.MustNewConstMetric(
c.BufManBuffercachehitratio,
c.BufManBuffercachehits,
prometheus.GaugeValue,
float64(v.Buffercachehitratio),
sqlInstance,
)
ch <- prometheus.MustNewConstMetric(
c.BufManBuffercachelookups,
prometheus.GaugeValue,
float64(v.Buffercachehitratio_Base),
sqlInstance,
)
ch <- prometheus.MustNewConstMetric(
c.BufManCheckpointpages,
prometheus.CounterValue,
@@ -2704,6 +2748,7 @@ type win32PerfRawDataSQLServerDatabases struct {
GroupCommitTimePersec uint64
LogBytesFlushedPersec uint64
LogCacheHitRatio uint64
LogCacheHitRatio_Base uint64
LogCacheReadsPersec uint64
LogFilesSizeKB uint64
LogFilesUsedSizeKB uint64
@@ -2819,12 +2864,19 @@ func (c *MSSQLCollector) collectDatabases(ch chan<- prometheus.Metric, sqlInstan
)
ch <- prometheus.MustNewConstMetric(
c.DatabasesLogCacheHitRatio,
c.DatabasesLogCacheHits,
prometheus.GaugeValue,
float64(v.LogCacheHitRatio),
sqlInstance, dbName,
)
ch <- prometheus.MustNewConstMetric(
c.DatabasesLogCacheLookups,
prometheus.GaugeValue,
float64(v.LogCacheHitRatio_Base),
sqlInstance, dbName,
)
ch <- prometheus.MustNewConstMetric(
c.DatabasesLogCacheReads,
prometheus.CounterValue,
@@ -3299,6 +3351,7 @@ func (c *MSSQLCollector) collectGeneralStatistics(ch chan<- prometheus.Metric, s
type win32PerfRawDataSQLServerLocks struct {
Name string
AverageWaitTimems uint64
AverageWaitTimems_Base uint64
LockRequestsPersec uint64
LockTimeoutsPersec uint64
LockTimeoutstimeout0Persec uint64
@@ -3321,12 +3374,19 @@ func (c *MSSQLCollector) collectLocks(ch chan<- prometheus.Metric, sqlInstance s
lockResourceName := v.Name
ch <- prometheus.MustNewConstMetric(
c.LocksAverageWaitTimems,
c.LocksWaitTime,
prometheus.GaugeValue,
float64(v.AverageWaitTimems)/1000.0,
sqlInstance, lockResourceName,
)
ch <- prometheus.MustNewConstMetric(
c.LocksCount,
prometheus.GaugeValue,
float64(v.AverageWaitTimems_Base),
sqlInstance, lockResourceName,
)
ch <- prometheus.MustNewConstMetric(
c.LocksLockRequests,
prometheus.CounterValue,

View File

@@ -6,14 +6,13 @@ import (
"fmt"
"regexp"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
"gopkg.in/alecthomas/kingpin.v2"
)
func init() {
Factories["net"] = NewNetworkCollector
registerCollector("net", NewNetworkCollector, "Network Interface")
}
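Collectors no longer write themselves into the `Factories` map; they call `registerCollector`, and the optional trailing strings name the perflib objects the collector depends on (here "Network Interface"), so a scrape only has to query those objects. The sketch below is an assumption about what such a registry could look like internally, not the exporter's actual implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// Collector is a stand-in for the exporter's collector interface.
type Collector interface{}

type collectorBuilder func() (Collector, error)

var (
	builders    = map[string]collectorBuilder{}
	perflibDeps = map[string]struct{}{}
)

// registerCollector records the builder under its name and remembers which
// perflib objects it needs ("Process", "System", "Network Interface", ...).
func registerCollector(name string, builder collectorBuilder, perfCounterNames ...string) {
	builders[name] = builder
	for _, n := range perfCounterNames {
		perflibDeps[n] = struct{}{}
	}
}

// perflibObjects returns the deduplicated object list a scrape would resolve
// to counter indices and hand to getPerflibSnapshot.
func perflibObjects() []string {
	names := make([]string, 0, len(perflibDeps))
	for n := range perflibDeps {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	registerCollector("net", func() (Collector, error) { return nil, nil }, "Network Interface")
	registerCollector("process", func() (Collector, error) { return nil, nil }, "Process")
	fmt.Println(perflibObjects()) // [Network Interface Process]
}
```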
var (
@@ -28,7 +27,7 @@ var (
nicNameToUnderscore = regexp.MustCompile("[^a-zA-Z0-9]")
)
// A NetworkCollector is a Prometheus collector for WMI Win32_PerfRawData_Tcpip_NetworkInterface metrics
// A NetworkCollector is a Prometheus collector for Perflib Network Interface metrics
type NetworkCollector struct {
BytesReceivedTotal *prometheus.Desc
BytesSentTotal *prometheus.Desc
@@ -133,7 +132,7 @@ func NewNetworkCollector() (Collector, error) {
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *NetworkCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting net metrics:", desc, err)
return err
}
@@ -141,34 +140,33 @@ func (c *NetworkCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metr
}
// mangleNetworkName mangles Network Adapter name (non-alphanumeric to _)
// that is used in Win32_PerfRawData_Tcpip_NetworkInterface.
// that is used in networkInterface.
func mangleNetworkName(name string) string {
return nicNameToUnderscore.ReplaceAllString(name, "_")
}
// Win32_PerfRawData_Tcpip_NetworkInterface docs:
// - https://technet.microsoft.com/en-us/security/aa394340(v=vs.80)
type Win32_PerfRawData_Tcpip_NetworkInterface struct {
BytesReceivedPerSec uint64
BytesSentPerSec uint64
BytesTotalPerSec uint64
type networkInterface struct {
BytesReceivedPerSec float64 `perflib:"Bytes Received/sec"`
BytesSentPerSec float64 `perflib:"Bytes Sent/sec"`
BytesTotalPerSec float64 `perflib:"Bytes Total/sec"`
Name string
PacketsOutboundDiscarded uint64
PacketsOutboundErrors uint64
PacketsPerSec uint64
PacketsReceivedDiscarded uint64
PacketsReceivedErrors uint64
PacketsReceivedPerSec uint64
PacketsReceivedUnknown uint64
PacketsSentPerSec uint64
CurrentBandwidth uint64
PacketsOutboundDiscarded float64 `perflib:"Packets Outbound Discarded"`
PacketsOutboundErrors float64 `perflib:"Packets Outbound Errors"`
PacketsPerSec float64 `perflib:"Packets/sec"`
PacketsReceivedDiscarded float64 `perflib:"Packets Received Discarded"`
PacketsReceivedErrors float64 `perflib:"Packets Received Errors"`
PacketsReceivedPerSec float64 `perflib:"Packets Received/sec"`
PacketsReceivedUnknown float64 `perflib:"Packets Received Unknown"`
PacketsSentPerSec float64 `perflib:"Packets Sent/sec"`
CurrentBandwidth float64 `perflib:"Current Bandwidth"`
}
func (c *NetworkCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_Tcpip_NetworkInterface
func (c *NetworkCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []networkInterface
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
if err := unmarshalObject(ctx.perfObjects["Network Interface"], &dst); err != nil {
return nil, err
}
@@ -187,76 +185,75 @@ func (c *NetworkCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.BytesReceivedTotal,
prometheus.CounterValue,
float64(nic.BytesReceivedPerSec),
nic.BytesReceivedPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.BytesSentTotal,
prometheus.CounterValue,
float64(nic.BytesSentPerSec),
nic.BytesSentPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.BytesTotal,
prometheus.CounterValue,
float64(nic.BytesTotalPerSec),
nic.BytesTotalPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsOutboundDiscarded,
prometheus.CounterValue,
float64(nic.PacketsOutboundDiscarded),
nic.PacketsOutboundDiscarded,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsOutboundErrors,
prometheus.CounterValue,
float64(nic.PacketsOutboundErrors),
nic.PacketsOutboundErrors,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsTotal,
prometheus.CounterValue,
float64(nic.PacketsPerSec),
nic.PacketsPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsReceivedDiscarded,
prometheus.CounterValue,
float64(nic.PacketsReceivedDiscarded),
nic.PacketsReceivedDiscarded,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsReceivedErrors,
prometheus.CounterValue,
float64(nic.PacketsReceivedErrors),
nic.PacketsReceivedErrors,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsReceivedTotal,
prometheus.CounterValue,
float64(nic.PacketsReceivedPerSec),
nic.PacketsReceivedPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsReceivedUnknown,
prometheus.CounterValue,
float64(nic.PacketsReceivedUnknown),
nic.PacketsReceivedUnknown,
name,
)
ch <- prometheus.MustNewConstMetric(
c.PacketsSentTotal,
prometheus.CounterValue,
float64(nic.PacketsSentPerSec),
nic.PacketsSentPerSec,
name,
)
ch <- prometheus.MustNewConstMetric(
c.CurrentBandwidth,
prometheus.CounterValue,
float64(nic.CurrentBandwidth),
prometheus.GaugeValue,
nic.CurrentBandwidth,
name,
)
}
return nil, nil
}
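Instead of a WMI query, the collector now unmarshals the "Network Interface" object captured in the ScrapeContext into a struct whose `perflib:"..."` tags name the counters. The real `unmarshalObject` operates on `perflib.PerfObject` values; the simplified sketch below only illustrates the tag-driven field mapping, substituting a plain map of counter name to value for the perflib types:

```go
package main

import (
	"fmt"
	"reflect"
)

// iface is a cut-down stand-in for the networkInterface struct: each field's
// perflib tag names the counter it should be filled from.
type iface struct {
	BytesReceivedPerSec float64 `perflib:"Bytes Received/sec"`
	BytesSentPerSec     float64 `perflib:"Bytes Sent/sec"`
	CurrentBandwidth    float64 `perflib:"Current Bandwidth"`
}

// unmarshalCounters copies values from the counter map into the struct,
// matching fields on their perflib tag (roughly what unmarshalObject does
// with a real perflib object).
func unmarshalCounters(counters map[string]float64, dst interface{}) {
	v := reflect.ValueOf(dst).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("perflib")
		if val, ok := counters[tag]; ok {
			v.Field(i).SetFloat(val)
		}
	}
}

func main() {
	counters := map[string]float64{
		"Bytes Received/sec": 1.2e6,
		"Bytes Sent/sec":     3.4e5,
		"Current Bandwidth":  1e9,
	}
	var nic iface
	unmarshalCounters(counters, &nic)
	fmt.Printf("%+v\n", nic)
}
```

Because the unmarshalled values already arrive as float64, the `float64(...)` conversions around each metric can be dropped, as the diff above shows.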

View File

@@ -9,7 +9,7 @@ import (
)
func init() {
Factories["netframework_clrexceptions"] = NewNETFramework_NETCLRExceptionsCollector
registerCollector("netframework_clrexceptions", NewNETFramework_NETCLRExceptionsCollector)
}
// A NETFramework_NETCLRExceptionsCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRExceptions metrics

View File

@@ -9,7 +9,7 @@ import (
)
func init() {
Factories["netframework_clrinterop"] = NewNETFramework_NETCLRInteropCollector
registerCollector("netframework_clrinterop", NewNETFramework_NETCLRInteropCollector)
}
// A NETFramework_NETCLRInteropCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRInterop metrics

View File

@@ -9,7 +9,7 @@ import (
)
func init() {
Factories["netframework_clrjit"] = NewNETFramework_NETCLRJitCollector
registerCollector("netframework_clrjit", NewNETFramework_NETCLRJitCollector)
}
// A NETFramework_NETCLRJitCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRJit metrics

View File

@@ -9,7 +9,7 @@ import (
)
func init() {
Factories["netframework_clrloading"] = NewNETFramework_NETCLRLoadingCollector
registerCollector("netframework_clrloading", NewNETFramework_NETCLRLoadingCollector)
}
// A NETFramework_NETCLRLoadingCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRLoading metrics

View File

@@ -9,7 +9,7 @@ import (
)
func init() {
Factories["netframework_clrlocksandthreads"] = NewNETFramework_NETCLRLocksAndThreadsCollector
registerCollector("netframework_clrlocksandthreads", NewNETFramework_NETCLRLocksAndThreadsCollector)
}
// A NETFramework_NETCLRLocksAndThreadsCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRLocksAndThreads metrics

View File

@@ -9,7 +9,7 @@ import (
)
func init() {
Factories["netframework_clrmemory"] = NewNETFramework_NETCLRMemoryCollector
registerCollector("netframework_clrmemory", NewNETFramework_NETCLRMemoryCollector)
}
// A NETFramework_NETCLRMemoryCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRMemory metrics

View File

@@ -9,7 +9,7 @@ import (
)
func init() {
Factories["netframework_clrremoting"] = NewNETFramework_NETCLRRemotingCollector
registerCollector("netframework_clrremoting", NewNETFramework_NETCLRRemotingCollector)
}
// A NETFramework_NETCLRRemotingCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRRemoting metrics

View File

@@ -9,7 +9,7 @@ import (
)
func init() {
Factories["netframework_clrsecurity"] = NewNETFramework_NETCLRSecurityCollector
registerCollector("netframework_clrsecurity", NewNETFramework_NETCLRSecurityCollector)
}
// A NETFramework_NETCLRSecurityCollector is a Prometheus collector for WMI Win32_PerfRawData_NETFramework_NETCLRSecurity metrics

View File

@@ -12,11 +12,12 @@ import (
)
func init() {
Factories["os"] = NewOSCollector
registerCollector("os", NewOSCollector)
}
// A OSCollector is a Prometheus collector for WMI metrics
type OSCollector struct {
OSInformation *prometheus.Desc
PhysicalMemoryFreeBytes *prometheus.Desc
PagingFreeBytes *prometheus.Desc
VirtualMemoryFreeBytes *prometheus.Desc
@@ -36,6 +37,12 @@ func NewOSCollector() (Collector, error) {
const subsystem = "os"
return &OSCollector{
OSInformation: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "info"),
"OperatingSystem.Caption, OperatingSystem.Version",
[]string{"product", "version"},
nil,
),
PagingLimitBytes: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "paging_limit_bytes"),
"OperatingSystem.SizeStoredInPagingFiles",
@@ -124,9 +131,11 @@ func (c *OSCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) e
// Win32_OperatingSystem docs:
// - https://msdn.microsoft.com/en-us/library/aa394239 - Win32_OperatingSystem class
type Win32_OperatingSystem struct {
Caption string
FreePhysicalMemory uint64
FreeSpaceInPagingFiles uint64
FreeVirtualMemory uint64
LocalDateTime time.Time
MaxNumberOfProcesses uint32
MaxProcessMemorySize uint64
NumberOfProcesses uint32
@@ -134,7 +143,7 @@ type Win32_OperatingSystem struct {
SizeStoredInPagingFiles uint64
TotalVirtualMemorySize uint64
TotalVisibleMemorySize uint64
LocalDateTime time.Time
Version string
}
func (c *OSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
@@ -148,6 +157,14 @@ func (c *OSCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, er
return nil, errors.New("WMI query returned empty result set")
}
ch <- prometheus.MustNewConstMetric(
c.OSInformation,
prometheus.GaugeValue,
1.0,
dst[0].Caption,
dst[0].Version,
)
ch <- prometheus.MustNewConstMetric(
c.PhysicalMemoryFreeBytes,
prometheus.GaugeValue,

View File

@@ -3,14 +3,21 @@ package collector
import (
"fmt"
"reflect"
"strconv"
perflibCollector "github.com/leoluk/perflib_exporter/collector"
"github.com/leoluk/perflib_exporter/perflib"
"github.com/prometheus/common/log"
)
func getPerflibSnapshot() (map[string]*perflib.PerfObject, error) {
objects, err := perflib.QueryPerformanceData("Global")
var nametable = perflib.QueryNameTable("Counter 009") // Reads the names in English TODO: validate that the English names are always present
func MapCounterToIndex(name string) string {
return strconv.Itoa(int(nametable.LookupIndex(name)))
}
func getPerflibSnapshot(objNames string) (map[string]*perflib.PerfObject, error) {
objects, err := perflib.QueryPerformanceData(objNames)
if err != nil {
return nil, err
}
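`perflib.QueryPerformanceData` expects a space-separated list of object indices rather than the English object names the collectors declare, which is why `MapCounterToIndex` resolves each name through the "Counter 009" name table. A hedged sketch of how the per-scrape query string might be assembled is below; the index values in the map are only placeholders for what the name table would return:

```go
package main

import (
	"fmt"
	"strings"
)

// mapCounterToIndex stands in for MapCounterToIndex: the real function looks
// the English name up in the "Counter 009" name table and returns its index
// as a string. The values below are illustrative placeholders.
func mapCounterToIndex(name string) string {
	fake := map[string]string{
		"Process":           "230",
		"System":            "2",
		"Network Interface": "510",
	}
	return fake[name]
}

func main() {
	// Perflib objects declared by the enabled collectors.
	objects := []string{"Process", "System", "Network Interface"}

	indices := make([]string, 0, len(objects))
	for _, o := range objects {
		indices = append(indices, mapCounterToIndex(o))
	}

	// The space-separated index list is what getPerflibSnapshot would hand to
	// perflib.QueryPerformanceData instead of the old catch-all "Global".
	query := strings.Join(indices, " ")
	fmt.Println(query) // e.g. "230 2 510"
}
```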

View File

@@ -3,6 +3,8 @@
package collector
import (
"fmt"
"regexp"
"strconv"
"strings"
@@ -13,18 +15,21 @@ import (
)
func init() {
Factories["process"] = NewProcessCollector
registerCollector("process", newProcessCollector, "Process")
}
var (
processWhereClause = kingpin.Flag(
"collector.process.processes-where",
"WQL 'where' clause to use in WMI metrics query. Limits the response to the processes you specify and reduces the size of the response.",
processWhitelist = kingpin.Flag(
"collector.process.whitelist",
"Regexp of processes to include. Process name must both match whitelist and not match blacklist to be included.",
).Default(".*").String()
processBlacklist = kingpin.Flag(
"collector.process.blacklist",
"Regexp of processes to exclude. Process name must both match whitelist and not match blacklist to be included.",
).Default("").String()
)
// A ProcessCollector is a Prometheus collector for WMI Win32_PerfRawData_PerfProc_Process metrics
type ProcessCollector struct {
type processCollector struct {
StartTime *prometheus.Desc
CPUTimeTotal *prometheus.Desc
HandleCount *prometheus.Desc
@@ -39,18 +44,19 @@ type ProcessCollector struct {
VirtualBytes *prometheus.Desc
WorkingSet *prometheus.Desc
queryWhereClause string
processWhitelistPattern *regexp.Regexp
processBlacklistPattern *regexp.Regexp
}
// NewProcessCollector ...
func NewProcessCollector() (Collector, error) {
func newProcessCollector() (Collector, error) {
const subsystem = "process"
if *processWhereClause == "" {
log.Warn("No where-clause specified for process collector. This will generate a very large number of metrics!")
if *processWhitelist == ".*" && *processBlacklist == "" {
log.Warn("No filters specified for process collector. This will generate a very large number of metrics!")
}
return &ProcessCollector{
return &processCollector{
StartTime: prometheus.NewDesc(
prometheus.BuildFQName(Namespace, subsystem, "start_time"),
"Time of process start.",
@@ -129,66 +135,53 @@ func NewProcessCollector() (Collector, error) {
[]string{"process", "process_id", "creating_process_id"},
nil,
),
queryWhereClause: *processWhereClause,
processWhitelistPattern: regexp.MustCompile(fmt.Sprintf("^(?:%s)$", *processWhitelist)),
processBlacklistPattern: regexp.MustCompile(fmt.Sprintf("^(?:%s)$", *processBlacklist)),
}, nil
}
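Note that the whitelist and blacklist values are wrapped in `^(?:...)$` before compilation, so they must match the entire process name rather than a substring. A small sketch of the effect (the process names and patterns are arbitrary examples):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same anchoring as newProcessCollector applies to the flag values.
	whitelist := regexp.MustCompile(fmt.Sprintf("^(?:%s)$", "chrome|firefox.*"))
	blacklist := regexp.MustCompile(fmt.Sprintf("^(?:%s)$", "firefox_helper"))

	for _, name := range []string{"chrome", "chrome_proxy", "firefox", "firefox_helper"} {
		included := whitelist.MatchString(name) && !blacklist.MatchString(name)
		fmt.Printf("%-15s included=%v\n", name, included)
	}
}
```

With the defaults (whitelist `.*`, blacklist empty) every process passes: the empty blacklist compiles to `^(?:)$`, which only matches an empty name, so nothing is excluded.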
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *ProcessCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
log.Error("failed collecting process metrics:", desc, err)
return err
}
return nil
}
// Win32_PerfRawData_PerfProc_Process docs:
// - https://msdn.microsoft.com/en-us/library/aa394323(v=vs.85).aspx
type Win32_PerfRawData_PerfProc_Process struct {
type perflibProcess struct {
Name string
CreatingProcessID uint32
ElapsedTime uint64
Frequency_Object uint64
HandleCount uint32
IDProcess uint32
IODataBytesPersec uint64
IODataOperationsPersec uint64
IOOtherBytesPersec uint64
IOOtherOperationsPersec uint64
IOReadBytesPersec uint64
IOReadOperationsPersec uint64
IOWriteBytesPersec uint64
IOWriteOperationsPersec uint64
PageFaultsPersec uint32
PageFileBytes uint64
PageFileBytesPeak uint64
PercentPrivilegedTime uint64
PercentProcessorTime uint64
PercentUserTime uint64
PoolNonpagedBytes uint32
PoolPagedBytes uint32
PriorityBase uint32
PrivateBytes uint64
ThreadCount uint32
Timestamp_Object uint64
VirtualBytes uint64
VirtualBytesPeak uint64
WorkingSet uint64
WorkingSetPeak uint64
WorkingSetPrivate uint64
PercentProcessorTime float64 `perflib:"% Processor Time"`
PercentPrivilegedTime float64 `perflib:"% Privileged Time"`
PercentUserTime float64 `perflib:"% User Time"`
CreatingProcessID float64 `perflib:"Creating Process ID"`
ElapsedTime float64 `perflib:"Elapsed Time"`
HandleCount float64 `perflib:"Handle Count"`
IDProcess float64 `perflib:"ID Process"`
IODataBytesPerSec float64 `perflib:"IO Data Bytes/sec"`
IODataOperationsPerSec float64 `perflib:"IO Data Operations/sec"`
IOOtherBytesPerSec float64 `perflib:"IO Other Bytes/sec"`
IOOtherOperationsPerSec float64 `perflib:"IO Other Operations/sec"`
IOReadBytesPerSec float64 `perflib:"IO Read Bytes/sec"`
IOReadOperationsPerSec float64 `perflib:"IO Read Operations/sec"`
IOWriteBytesPerSec float64 `perflib:"IO Write Bytes/sec"`
IOWriteOperationsPerSec float64 `perflib:"IO Write Operations/sec"`
PageFaultsPerSec float64 `perflib:"Page Faults/sec"`
PageFileBytesPeak float64 `perflib:"Page File Bytes Peak"`
PageFileBytes float64 `perflib:"Page File Bytes"`
PoolNonpagedBytes float64 `perflib:"Pool Nonpaged Bytes"`
PoolPagedBytes float64 `perflib:"Pool Paged Bytes"`
PriorityBase float64 `perflib:"Priority Base"`
PrivateBytes float64 `perflib:"Private Bytes"`
ThreadCount float64 `perflib:"Thread Count"`
VirtualBytesPeak float64 `perflib:"Virtual Bytes Peak"`
VirtualBytes float64 `perflib:"Virtual Bytes"`
WorkingSetPrivate float64 `perflib:"Working Set - Private"`
WorkingSetPeak float64 `perflib:"Working Set Peak"`
WorkingSet float64 `perflib:"Working Set"`
}
type WorkerProcess struct {
AppPoolName string
ProcessId uint32
ProcessId uint64
}
func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_PerfProc_Process
q := queryAllWhere(&dst, c.queryWhereClause)
if err := wmi.Query(q, &dst); err != nil {
return nil, err
func (c *processCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
data := make([]perflibProcess, 0)
err := unmarshalObject(ctx.perfObjects["Process"], &data)
if err != nil {
return err
}
var dst_wp []WorkerProcess
@@ -197,9 +190,10 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
log.Debugf("Could not query WebAdministration namespace for IIS worker processes: %v. Skipping", err)
}
for _, process := range dst {
if process.Name == "_Total" {
for _, process := range data {
if process.Name == "_Total" ||
c.processBlacklistPattern.MatchString(process.Name) ||
!c.processWhitelistPattern.MatchString(process.Name) {
continue
}
// Duplicate processes are suffixed # and an index number. Remove those.
@@ -208,7 +202,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
cpid := strconv.FormatUint(uint64(process.CreatingProcessID), 10)
for _, wp := range dst_wp {
if wp.ProcessId == process.IDProcess {
if wp.ProcessId == uint64(process.IDProcess) {
processName = strings.Join([]string{processName, wp.AppPoolName}, "_")
break
}
@@ -217,8 +211,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.StartTime,
prometheus.GaugeValue,
// convert from Windows timestamp (1 jan 1601) to unix timestamp (1 jan 1970)
float64(process.ElapsedTime-116444736000000000)/float64(process.Frequency_Object),
process.ElapsedTime,
processName,
pid,
cpid,
@@ -227,7 +220,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.HandleCount,
prometheus.GaugeValue,
float64(process.HandleCount),
process.HandleCount,
processName,
pid,
cpid,
@@ -236,7 +229,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.CPUTimeTotal,
prometheus.CounterValue,
float64(process.PercentPrivilegedTime)*ticksToSecondsScaleFactor,
process.PercentPrivilegedTime,
processName,
pid,
cpid,
@@ -246,7 +239,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.CPUTimeTotal,
prometheus.CounterValue,
float64(process.PercentUserTime)*ticksToSecondsScaleFactor,
process.PercentUserTime,
processName,
pid,
cpid,
@@ -256,7 +249,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.IOBytesTotal,
prometheus.CounterValue,
float64(process.IOOtherBytesPersec),
process.IOOtherBytesPerSec,
processName,
pid,
cpid,
@@ -266,7 +259,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.IOOperationsTotal,
prometheus.CounterValue,
float64(process.IOOtherOperationsPersec),
process.IOOtherOperationsPerSec,
processName,
pid,
cpid,
@@ -276,7 +269,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.IOBytesTotal,
prometheus.CounterValue,
float64(process.IOReadBytesPersec),
process.IOReadBytesPerSec,
processName,
pid,
cpid,
@@ -286,7 +279,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.IOOperationsTotal,
prometheus.CounterValue,
float64(process.IOReadOperationsPersec),
process.IOReadOperationsPerSec,
processName,
pid,
cpid,
@@ -296,7 +289,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.IOBytesTotal,
prometheus.CounterValue,
float64(process.IOWriteBytesPersec),
process.IOWriteBytesPerSec,
processName,
pid,
cpid,
@@ -306,7 +299,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.IOOperationsTotal,
prometheus.CounterValue,
float64(process.IOWriteOperationsPersec),
process.IOWriteOperationsPerSec,
processName,
pid,
cpid,
@@ -316,7 +309,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.PageFaultsTotal,
prometheus.CounterValue,
float64(process.PageFaultsPersec),
process.PageFaultsPerSec,
processName,
pid,
cpid,
@@ -325,7 +318,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.PageFileBytes,
prometheus.GaugeValue,
float64(process.PageFileBytes),
process.PageFileBytes,
processName,
pid,
cpid,
@@ -334,7 +327,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.PoolBytes,
prometheus.GaugeValue,
float64(process.PoolNonpagedBytes),
process.PoolNonpagedBytes,
processName,
pid,
cpid,
@@ -344,7 +337,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.PoolBytes,
prometheus.GaugeValue,
float64(process.PoolPagedBytes),
process.PoolPagedBytes,
processName,
pid,
cpid,
@@ -354,7 +347,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.PriorityBase,
prometheus.GaugeValue,
float64(process.PriorityBase),
process.PriorityBase,
processName,
pid,
cpid,
@@ -363,7 +356,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.PrivateBytes,
prometheus.GaugeValue,
float64(process.PrivateBytes),
process.PrivateBytes,
processName,
pid,
cpid,
@@ -372,7 +365,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.ThreadCount,
prometheus.GaugeValue,
float64(process.ThreadCount),
process.ThreadCount,
processName,
pid,
cpid,
@@ -381,7 +374,7 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.VirtualBytes,
prometheus.GaugeValue,
float64(process.VirtualBytes),
process.VirtualBytes,
processName,
pid,
cpid,
@@ -390,12 +383,12 @@ func (c *ProcessCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Des
ch <- prometheus.MustNewConstMetric(
c.WorkingSet,
prometheus.GaugeValue,
float64(process.WorkingSet),
process.WorkingSet,
processName,
pid,
cpid,
)
}
return nil, nil
return nil
}

View File

@@ -12,7 +12,7 @@ import (
)
func init() {
Factories["service"] = NewserviceCollector
registerCollector("service", NewserviceCollector)
}
var (

View File

@@ -3,14 +3,12 @@
package collector
import (
"errors"
"github.com/StackExchange/wmi"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/log"
)
func init() {
Factories["system"] = NewSystemCollector
registerCollector("system", NewSystemCollector, "System")
}
// A SystemCollector is a Prometheus collector for WMI metrics
@@ -70,7 +68,7 @@ func NewSystemCollector() (Collector, error) {
// Collect sends the metric values for each metric
// to the provided prometheus Metric channel.
func (c *SystemCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) error {
if desc, err := c.collect(ch); err != nil {
if desc, err := c.collect(ctx, ch); err != nil {
log.Error("failed collecting system metrics:", desc, err)
return err
}
@@ -79,57 +77,50 @@ func (c *SystemCollector) Collect(ctx *ScrapeContext, ch chan<- prometheus.Metri
// Win32_PerfRawData_PerfOS_System docs:
// - https://web.archive.org/web/20050830140516/http://msdn.microsoft.com/library/en-us/wmisdk/wmi/win32_perfrawdata_perfos_system.asp
type Win32_PerfRawData_PerfOS_System struct {
ContextSwitchesPersec uint32
ExceptionDispatchesPersec uint32
Frequency_Object uint64
ProcessorQueueLength uint32
SystemCallsPersec uint32
SystemUpTime uint64
Threads uint32
Timestamp_Object uint64
type system struct {
ContextSwitchesPersec float64 `perflib:"Context Switches/sec"`
ExceptionDispatchesPersec float64 `perflib:"Exception Dispatches/sec"`
ProcessorQueueLength float64 `perflib:"Processor Queue Length"`
SystemCallsPersec float64 `perflib:"System Calls/sec"`
SystemUpTime float64 `perflib:"System Up Time"`
Threads float64 `perflib:"Threads"`
}
func (c *SystemCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []Win32_PerfRawData_PerfOS_System
q := queryAll(&dst)
if err := wmi.Query(q, &dst); err != nil {
func (c *SystemCollector) collect(ctx *ScrapeContext, ch chan<- prometheus.Metric) (*prometheus.Desc, error) {
var dst []system
if err := unmarshalObject(ctx.perfObjects["System"], &dst); err != nil {
return nil, err
}
if len(dst) == 0 {
return nil, errors.New("WMI query returned empty result set")
}
ch <- prometheus.MustNewConstMetric(
c.ContextSwitchesTotal,
prometheus.CounterValue,
float64(dst[0].ContextSwitchesPersec),
dst[0].ContextSwitchesPersec,
)
ch <- prometheus.MustNewConstMetric(
c.ExceptionDispatchesTotal,
prometheus.CounterValue,
float64(dst[0].ExceptionDispatchesPersec),
dst[0].ExceptionDispatchesPersec,
)
ch <- prometheus.MustNewConstMetric(
c.ProcessorQueueLength,
prometheus.GaugeValue,
float64(dst[0].ProcessorQueueLength),
dst[0].ProcessorQueueLength,
)
ch <- prometheus.MustNewConstMetric(
c.SystemCallsTotal,
prometheus.CounterValue,
float64(dst[0].SystemCallsPersec),
dst[0].SystemCallsPersec,
)
ch <- prometheus.MustNewConstMetric(
c.SystemUpTime,
prometheus.GaugeValue,
// convert from Windows timestamp (1 jan 1601) to unix timestamp (1 jan 1970)
float64(dst[0].SystemUpTime-116444736000000000)/float64(dst[0].Frequency_Object),
dst[0].SystemUpTime,
)
ch <- prometheus.MustNewConstMetric(
c.Threads,
prometheus.GaugeValue,
float64(dst[0].Threads),
dst[0].Threads,
)
return nil, nil
}
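The removed lines above show the old conversion: `SystemUpTime` arrived as a Windows timestamp counted in 100 ns intervals since 1601-01-01 and was turned into a Unix timestamp by subtracting 116444736000000000 ticks and dividing by `Frequency_Object`. The perflib path is assumed to hand back an already-usable value, so that arithmetic disappears. For reference, a small sketch of the old-style conversion, assuming the usual 10 MHz object frequency (the sample tick count is made up):

```go
package main

import (
	"fmt"
	"time"
)

// windowsTicksToUnix converts a timestamp counted in 100 ns ticks since
// 1601-01-01 into a Unix time, the same arithmetic the removed code did with
// Frequency_Object as the tick rate.
func windowsTicksToUnix(ticks uint64) time.Time {
	const (
		epochDelta  = 116444736000000000 // ticks between 1601-01-01 and 1970-01-01
		ticksPerSec = 10_000_000         // 100 ns ticks per second
	)
	secs := int64(ticks-epochDelta) / ticksPerSec
	return time.Unix(secs, 0)
}

func main() {
	// Illustrative only: a tick count somewhere in early 2020.
	fmt.Println(windowsTicksToUnix(132280000000000000).UTC())
}
```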

View File

@@ -10,7 +10,7 @@ import (
)
func init() {
Factories["tcp"] = NewTCPCollector
registerCollector("tcp", NewTCPCollector)
}
// A TCPCollector is a Prometheus collector for WMI Win32_PerfRawData_Tcpip_TCPv4 metrics
@@ -136,7 +136,7 @@ func (c *TCPCollector) collect(ch chan<- prometheus.Metric) (*prometheus.Desc, e
)
ch <- prometheus.MustNewConstMetric(
c.ConnectionsEstablished,
prometheus.CounterValue,
prometheus.GaugeValue,
float64(dst[0].ConnectionsEstablished),
)
ch <- prometheus.MustNewConstMetric(

View File

@@ -54,7 +54,7 @@ type textFileCollector struct {
}
func init() {
Factories["textfile"] = NewTextFileCollector
registerCollector("textfile", NewTextFileCollector)
}
// NewTextFileCollector returns a new Collector exposing metrics read from files

View File

@@ -7,7 +7,7 @@ import (
)
func init() {
Factories["thermalzone"] = NewThermalZoneCollector
registerCollector("thermalzone", NewThermalZoneCollector)
}
// A thermalZoneCollector is a Prometheus collector for WMI Win32_PerfRawData_Counters_ThermalZoneInformation metrics

View File

@@ -11,7 +11,7 @@ import (
)
func init() {
Factories["vmware"] = NewVmwareCollector
registerCollector("vmware", NewVmwareCollector)
}
// A VmwareCollector is a Prometheus collector for WMI Win32_PerfRawData_vmGuestLib_VMem/Win32_PerfRawData_vmGuestLib_VCPU metrics

View File

@@ -3,12 +3,14 @@ This directory contains documentation of the collectors in the WMI exporter, wit
# Collectors
- [`ad`](collector.ad.md)
- [`adfs`](collector.adfs.md)
- [`cpu`](collector.cpu.md)
- [`cs`](collector.cs.md)
- [`dns`](collector.dns.md)
- [`hyperv`](collector.hyperv.md)
- [`iis`](collector.iis.md)
- [`logical_disk`](collector.logical_disk.md)
- [`logon`](collector.logon.md)
- [`memory`](collector.memory.md)
- [`msmq`](collector.msmq.md)
- [`mssql`](collector.mssql.md)
@@ -27,4 +29,4 @@ This directory contains documentation of the collectors in the WMI exporter, wit
- [`system`](collector.system.md)
- [`tcp`](collector.tcp.md)
- [`textfile`](collector.textfile.md)
- [`vmware`](collector.vmware.md)
- [`vmware`](collector.vmware.md)

51
docs/collector.adfs.md Normal file
View File

@@ -0,0 +1,51 @@
# adfs collector
The adfs collector exposes metrics about Active Directory Federation Services. Note that this collector has only been tested against ADFS 4.0 (2016).
Other ADFS versions may work but are not tested.
|||
-|-
Metric name prefix | `adfs`
Data source | Perflib
Counters | `AD FS`
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_adfs_ad_login_connection_failures` | Total number of connection failures between the ADFS server and the Active Directory domain controller(s) | counter | None
`wmi_adfs_certificate_authentications` | Total number of [User Certificate](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) authentications, i.e. smart cards or mobile devices with provisioned client certificates | counter | None
`wmi_adfs_device_authentications` | Total number of [device authentications](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/device-authentication-controls-in-ad-fs) (SignedToken, clientTLS, PkeyAuth). Device authentication is only available on ADFS 2016 or later | counter | None
`wmi_adfs_extranet_account_lockouts` | Total number of [extranet lockouts](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection). Requires the Extranet Lockout feature to be enabled | counter | None
`wmi_adfs_federated_authentications` | Total number of authentications from federated sources, e.g. Office 365 | counter | None
`wmi_adfs_passport_authentications` | Total number of authentications from [Microsoft Passport](https://en.wikipedia.org/wiki/Microsoft_account) (now named Microsoft Account) | counter | None
`wmi_adfs_password_change_failed` | Total number of failed password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`wmi_adfs_password_change_succeeded` | Total number of succeeded password changes. The Password Change Portal must be enabled in the AD FS Management tool in order to allow user password changes | counter | None
`wmi_adfs_token_requests` | Total number of requested access tokens | counter | None
`wmi_adfs_windows_integrated_authentications` | Total number of Windows integrated authentications using Kerberos or NTLM | counter | None
### Example metric
Show rate of device authentications in AD FS:
```
rate(wmi_adfs_device_authentications[2m])
```
## Useful queries
## Alerting examples
**prometheus.rules**
```yaml
- alert: "HighExtranetLockouts"
expr: "rate(wmi_adfs_extranet_account_lockouts)[2m] > 100"
for: "10m"
labels:
severity: "high"
annotations:
summary: "High number of AD FS extranet lockouts"
description: "High number of AD FS extranet lockouts may indicate a password spray attack.\n Server: {{ $labels.instance }}\n Number of lockouts: {{ $value }}"
```

View File

@@ -1,43 +1,60 @@
# cpu collector
The cpu collector exposes metrics about CPU usage
|||
-|-
Metric name prefix | `cpu`
Data source | Perflib
Counters | `ProcessorInformation` (Windows Server 2008R2 and later) `Processor` (older versions)
Enabled by default? | Yes
## Flags
None
## Metrics
These metrics are available on all versions of Windows:
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_cpu_cstate_seconds_total` | Time spent in low-power idle states | counter | `core`, `state`
`wmi_cpu_time_total` | Time that processor spent in different modes (idle, user, system, ...) | counter | `core`, `mode`
`wmi_cpu_interrupts_total` | Total number of received and serviced hardware interrupts | counter | `core`
`wmi_cpu_dpcs_total` | Total number of received and serviced deferred procedure calls (DPCs) | counter | `core`
These metrics are only exposed on Windows Server 2008R2 and later:
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_cpu_clock_interrupts_total` | Total number of received and serviced clock tick interrupts | `core`
`wmi_cpu_idle_break_events_total` | Total number of time processor was woken from idle | `core`
`wmi_cpu_parking_status` | Parking Status represents whether a processor is parked or not | `gauge`
`wmi_cpu_core_frequency_mhz` | Core frequency in megahertz | `gauge`
`wmi_cpu_processor_performance` | Processor Performance is the average performance of the processor while it is executing instructions, as a percentage of the nominal performance of the processor. On some processors, Processor Performance may exceed 100% | `gauge`
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
# cpu collector
The cpu collector exposes metrics about CPU usage
|||
-|-
Metric name prefix | `cpu`
Data source | Perflib
Counters | `ProcessorInformation` (Windows Server 2008R2 and later) `Processor` (older versions)
Enabled by default? | Yes
## Flags
None
## Metrics
These metrics are available on all versions of Windows:
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_cpu_cstate_seconds_total` | Time spent in low-power idle states | counter | `core`, `state`
`wmi_cpu_time_total` | Time that processor spent in different modes (idle, user, system, ...) | counter | `core`, `mode`
`wmi_cpu_interrupts_total` | Total number of received and serviced hardware interrupts | counter | `core`
`wmi_cpu_dpcs_total` | Total number of received and serviced deferred procedure calls (DPCs) | counter | `core`
These metrics are only exposed on Windows Server 2008R2 and later:
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_cpu_clock_interrupts_total` | Total number of received and serviced clock tick interrupts | counter | `core`
`wmi_cpu_idle_break_events_total` | Total number of times the processor was woken from idle | counter | `core`
`wmi_cpu_parking_status` | Parking Status represents whether a processor is parked or not | gauge | `core`
`wmi_cpu_core_frequency_mhz` | Core frequency in megahertz | gauge | `core`
`wmi_cpu_processor_performance` | Processor Performance is the average performance of the processor while it is executing instructions, as a percentage of the nominal performance of the processor. On some processors, Processor Performance may exceed 100% | gauge | `core`
### Example metric
Show frequency of host CPU cores
```
wmi_cpu_core_frequency_mhz{instance="localhost"}
```
## Useful queries
Show cpu usage by mode.
```
sum by (mode) (irate(wmi_cpu_time_total{instance="localhost"}[5m]))
```
## Alerting examples
**prometheus.rules**
```yaml
# Alert on hosts with more than 80% CPU usage over a 10 minute period
- alert: CpuUsage
expr: 100 - (avg by (instance) (irate(wmi_cpu_time_total{mode="idle"}[2m])) * 100) > 80
for: 10m
labels:
severity: warning
annotations:
summary: "CPU Usage (instance {{ $labels.instance }})"
description: "CPU Usage is more than 80%\n VALUE = {{ $value }}\n LABELS: {{ $labels }}"
```

View File

@@ -18,6 +18,7 @@ Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_cs_logical_processors` | Number of installed logical processors | gauge | None
`wmi_cs_physical_memory_bytes` | Total installed physical memory | gauge | None
`wmi_cs_hostname` | Labeled system hostname information | gauge | `hostname`, `domain`, `fqdn`
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_

View File

@@ -1,4 +1,4 @@
# hyperv collector
# hyperv collector
The hyperv collector exposes metrics about the Hyper-V hypervisor
@@ -16,81 +16,81 @@ None
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_hyper_health_critical` | _Not yet documented_ | counter | None
`wmi_hyper_health_ok` | _Not yet documented_ | counter | None
`wmi_hyper_vid_physical_pages_allocated` | _Not yet documented_ | counter | `vm`
`wmi_hyper_vid_preferred_numa_node_index` | _Not yet documented_ | counter | `vm`
`wmi_hyper_vid_remote_physical_pages` | _Not yet documented_ | counter | `vm`
`wmi_hyper_root_partition_address_spaces` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_attached_devices` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_deposited_pages` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_device_dma_errors` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_device_interrupt_errors` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_device_interrupt_mappings` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_device_interrupt_throttle_events` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_preferred_numa_node_index` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_gpa_space_modifications` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_io_tlb_flush_cost` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_io_tlb_flush` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_recommended_virtual_tlb_size` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_physical_pages_allocated` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_1G_device_pages` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_1G_gpa_pages` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_2M_device_pages` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_2M_gpa_pages` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_4K_device_pages` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_4K_gpa_pages` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_virtual_tlb_flush_entires` | _Not yet documented_ | counter | None
`wmi_hyper_root_partition_virtual_tlb_pages` | _Not yet documented_ | counter | None
`wmi_hyper_hypervisor_virtual_processors` | _Not yet documented_ | counter | None
`wmi_hyper_hypervisor_logical_processors` | _Not yet documented_ | counter | None
`wmi_hyper_host_cpu_guest_run_time` | _Not yet documented_ | counter | `core`
`wmi_hyper_host_cpu_hypervisor_run_time` | _Not yet documented_ | counter | `core`
`wmi_hyper_host_cpu_remote_run_time` | _Not yet documented_ | counter | `core`
`wmi_hyper_host_cpu_total_run_time` | _Not yet documented_ | counter | `core`
`wmi_hyper_vm_cpu_guest_run_time` | _Not yet documented_ | counter | `vm`, `core`
`wmi_hyper_vm_cpu_hypervisor_run_time` | _Not yet documented_ | counter | `vm`, `core`
`wmi_hyper_vm_cpu_remote_run_time` | _Not yet documented_ | counter | `vm`, `core`
`wmi_hyper_vm_cpu_total_run_time` | _Not yet documented_ | counter | `vm`, `core`
`wmi_hyper_vswitch_broadcast_packets_received_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_broadcast_packets_sent_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_bytes_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_bytes_received_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_bytes_sent_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_directed_packets_received_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_directed_packets_send_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_dropped_packets_incoming_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_dropped_packets_outcoming_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_extensions_dropped_packets_incoming_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_extensions_dropped_packets_outcoming_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_learned_mac_addresses_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_multicast_packets_received_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_multicast_packets_sent_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_number_of_send_channel_moves_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_number_of_vmq_moves_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_packets_flooded_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_packets_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_packets_received_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_packets_sent_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_vswitch_purged_mac_addresses_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyper_ethernet_bytes_dropped` | _Not yet documented_ | counter | `adapter`
`wmi_hyper_ethernet_bytes_received` | _Not yet documented_ | counter | `adapter`
`wmi_hyper_ethernet_bytes_sent` | _Not yet documented_ | counter | `adapter`
`wmi_hyper_ethernet_frames_dropped` | _Not yet documented_ | counter | `adapter`
`wmi_hyper_ethernet_frames_received` | _Not yet documented_ | counter | `adapter`
`wmi_hyper_ethernet_frames_sent` | _Not yet documented_ | counter | `adapter`
`wmi_hyper_vm_device_error_count` | _Not yet documented_ | counter | `vm_device`
`wmi_hyper_vm_device_queue_length` | _Not yet documented_ | counter | `vm_device`
`wmi_hyper_vm_device_bytes_read` | _Not yet documented_ | counter | `vm_device`
`wmi_hyper_vm_device_operations_read` | _Not yet documented_ | counter | `vm_device`
`wmi_hyper_vm_device_bytes_written` | _Not yet documented_ | counter | `vm_device`
`wmi_hyper_vm_device_operations_written` | _Not yet documented_ | counter | `vm_device`
`wmi_hyper_vm_interface_bytes_received` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyper_vm_interface_bytes_sent` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyper_vm_interface_packets_incoming_dropped` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyper_vm_interface_packets_outgoing_dropped` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyper_vm_interface_packets_received` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyper_vm_interface_packets_sent` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyperv_health_critical` | _Not yet documented_ | counter | None
`wmi_hyperv_health_ok` | _Not yet documented_ | counter | None
`wmi_hyperv_vid_physical_pages_allocated` | _Not yet documented_ | counter | `vm`
`wmi_hyperv_vid_preferred_numa_node_index` | _Not yet documented_ | counter | `vm`
`wmi_hyperv_vid_remote_physical_pages` | _Not yet documented_ | counter | `vm`
`wmi_hyperv_root_partition_address_spaces` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_attached_devices` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_deposited_pages` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_device_dma_errors` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_device_interrupt_errors` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_device_interrupt_mappings` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_device_interrupt_throttle_events` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_preferred_numa_node_index` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_gpa_space_modifications` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_io_tlb_flush_cost` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_io_tlb_flush` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_recommended_virtual_tlb_size` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_physical_pages_allocated` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_1G_device_pages` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_1G_gpa_pages` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_2M_device_pages` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_2M_gpa_pages` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_4K_device_pages` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_4K_gpa_pages` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_virtual_tlb_flush_entires` | _Not yet documented_ | counter | None
`wmi_hyperv_root_partition_virtual_tlb_pages` | _Not yet documented_ | counter | None
`wmi_hyperv_hypervisor_virtual_processors` | _Not yet documented_ | counter | None
`wmi_hyperv_hypervisor_logical_processors` | _Not yet documented_ | counter | None
`wmi_hyperv_host_cpu_guest_run_time` | _Not yet documented_ | counter | `core`
`wmi_hyperv_host_cpu_hypervisor_run_time` | _Not yet documented_ | counter | `core`
`wmi_hyperv_host_cpu_remote_run_time` | _Not yet documented_ | counter | `core`
`wmi_hyperv_host_cpu_total_run_time` | _Not yet documented_ | counter | `core`
`wmi_hyperv_vm_cpu_guest_run_time` | _Not yet documented_ | counter | `vm`, `core`
`wmi_hyperv_vm_cpu_hypervisor_run_time` | _Not yet documented_ | counter | `vm`, `core`
`wmi_hyperv_vm_cpu_remote_run_time` | _Not yet documented_ | counter | `vm`, `core`
`wmi_hyperv_vm_cpu_total_run_time` | _Not yet documented_ | counter | `vm`, `core`
`wmi_hyperv_vswitch_broadcast_packets_received_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_broadcast_packets_sent_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_bytes_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_bytes_received_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_bytes_sent_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_directed_packets_received_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_directed_packets_send_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_dropped_packets_incoming_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_dropped_packets_outcoming_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_extensions_dropped_packets_incoming_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_extensions_dropped_packets_outcoming_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_learned_mac_addresses_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_multicast_packets_received_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_multicast_packets_sent_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_number_of_send_channel_moves_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_number_of_vmq_moves_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_packets_flooded_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_packets_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_packets_received_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_packets_sent_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_vswitch_purged_mac_addresses_total` | _Not yet documented_ | counter | `vswitch`
`wmi_hyperv_ethernet_bytes_dropped` | _Not yet documented_ | counter | `adapter`
`wmi_hyperv_ethernet_bytes_received` | _Not yet documented_ | counter | `adapter`
`wmi_hyperv_ethernet_bytes_sent` | _Not yet documented_ | counter | `adapter`
`wmi_hyperv_ethernet_frames_dropped` | _Not yet documented_ | counter | `adapter`
`wmi_hyperv_ethernet_frames_received` | _Not yet documented_ | counter | `adapter`
`wmi_hyperv_ethernet_frames_sent` | _Not yet documented_ | counter | `adapter`
`wmi_hyperv_vm_device_error_count` | _Not yet documented_ | counter | `vm_device`
`wmi_hyperv_vm_device_queue_length` | _Not yet documented_ | counter | `vm_device`
`wmi_hyperv_vm_device_bytes_read` | _Not yet documented_ | counter | `vm_device`
`wmi_hyperv_vm_device_operations_read` | _Not yet documented_ | counter | `vm_device`
`wmi_hyperv_vm_device_bytes_written` | _Not yet documented_ | counter | `vm_device`
`wmi_hyperv_vm_device_operations_written` | _Not yet documented_ | counter | `vm_device`
`wmi_hyperv_vm_interface_bytes_received` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyperv_vm_interface_bytes_sent` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyperv_vm_interface_packets_incoming_dropped` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyperv_vm_interface_packets_outgoing_dropped` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyperv_vm_interface_packets_received` | _Not yet documented_ | counter | `vm_interface`
`wmi_hyperv_vm_interface_packets_sent` | _Not yet documented_ | counter | `vm_interface`
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_

View File

@@ -1,44 +1,76 @@
# logical_disk collector
The logical_disk collector exposes metrics about logical disks (in contrast to physical disks)
|||
-|-
Metric name prefix | `logical_disk`
Classes | [`Win32_PerfRawData_PerfDisk_LogicalDisk`](https://msdn.microsoft.com/en-us/windows/hardware/aa394307(v=vs.71))
Enabled by default? | Yes
## Flags
### `--collector.logical_disk.volume-whitelist`
If given, a disk needs to match the whitelist regexp in order for the corresponding disk metrics to be reported
### `--collector.logical_disk.volume-blacklist`
If given, a disk needs to *not* match the blacklist regexp in order for the corresponding disk metrics to be reported
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`requests_queued` | _Not yet documented_ | gauge | `volume`
`read_bytes_total` | _Not yet documented_ | counter | `volume`
`reads_total` | _Not yet documented_ | counter | `volume`
`write_bytes_total` | _Not yet documented_ | counter | `volume`
`writes_total` | _Not yet documented_ | counter | `volume`
`read_seconds_total` | _Not yet documented_ | counter | `volume`
`write_seconds_total` | _Not yet documented_ | counter | `volume`
`free_bytes` | _Not yet documented_ | gauge | `volume`
`size_bytes` | _Not yet documented_ | gauge | `volume`
`idle_seconds_total` | _Not yet documented_ | counter | `volume`
`split_ios_total` | _Not yet documented_ | counter | `volume`
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
# logical_disk collector
The logical_disk collector exposes metrics about logical disks (in contrast to physical disks)
|||
-|-
Metric name prefix | `logical_disk`
Data source | Perflib
Counters | `LogicalDisk` ([`Win32_PerfRawData_PerfDisk_LogicalDisk`](https://msdn.microsoft.com/en-us/windows/hardware/aa394307(v=vs.71)))
Enabled by default? | Yes
## Flags
### `--collector.logical_disk.volume-whitelist`
If given, a disk needs to match the whitelist regexp in order for the corresponding disk metrics to be reported
### `--collector.logical_disk.volume-blacklist`
If given, a disk needs to *not* match the blacklist regexp in order for the corresponding disk metrics to be reported
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`requests_queued` | Number of requests outstanding on the disk at the time the performance data is collected | gauge | `volume`
`read_bytes_total` | Rate at which bytes are transferred from the disk during read operations | counter | `volume`
`reads_total` | Rate of read operations on the disk | counter | `volume`
`write_bytes_total` | Rate at which bytes are transferred to the disk during write operations | counter | `volume`
`writes_total` | Rate of write operations on the disk | counter | `volume`
`read_seconds_total` | Seconds the disk was busy servicing read requests | counter | `volume`
`write_seconds_total` | Seconds the disk was busy servicing write requests | counter | `volume`
`free_bytes` | Unused space of the disk in bytes | gauge | `volume`
`size_bytes` | Total size of the disk in bytes | gauge | `volume`
`idle_seconds_total` | Seconds the disk was idle (not servicing read/write requests) | counter | `volume`
`split_ios_total` | Number of I/Os to the disk split into multiple I/Os | counter | `volume`
### Example metric
Query the rate at which bytes are read from a disk
```
rate(wmi_logical_disk_read_bytes_total{instance="localhost", volume=~"C:"}[2m])
```
## Useful queries
Calculate rate of total IOPS for disk
```
rate(wmi_logical_disk_reads_total{instance="localhost", volume="C:"}[2m]) + rate(wmi_logical_disk_writes_total{instance="localhost", volume="C:"}[2m])
```
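The free/size gauges can also be combined into a space-usage percentage (a sketch; the volume label is an assumption), which is the same expression the alerting rules below build on:
```
100.0 - 100 * (wmi_logical_disk_free_bytes{volume="C:"} / wmi_logical_disk_size_bytes{volume="C:"})
```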
## Alerting examples
**prometheus.rules**
```yaml
groups:
- name: Windows Disk Alerts
  rules:
  # Sends an alert when disk space usage is above 95%
  - alert: DiskSpaceUsage
    expr: 100.0 - 100 * (wmi_logical_disk_free_bytes / wmi_logical_disk_size_bytes) > 95
    for: 10m
    labels:
      severity: high
    annotations:
      summary: "Disk Space Usage (instance {{ $labels.instance }})"
      description: "Disk Space on Drive is used more than 95%\n VALUE = {{ $value }}\n LABELS: {{ $labels }}"
  # Alerts on disks with over 85% space usage predicted to fill within the next four days
  - alert: DiskFilling
    expr: 100 * (wmi_logical_disk_free_bytes / wmi_logical_disk_size_bytes) < 15 and predict_linear(wmi_logical_disk_free_bytes[6h], 4 * 24 * 3600) < 0
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Disk full in four days (instance {{ $labels.instance }})"
      description: "{{ $labels.volume }} is expected to fill up within four days. Currently {{ $value | humanize }}% is available.\n VALUE = {{ $value }}\n LABELS: {{ $labels }}"
```

docs/collector.logon.md

@@ -0,0 +1,34 @@
# logon collector
The logon collector exposes metrics detailing the active user logon sessions.
|||
-|-
Metric name prefix | `logon`
Classes | [`Win32_LogonSession`](https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/win32-logonsession)
Enabled by default? | No
## Flags
None
## Metrics
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_logon_logon_type` | Number of active user logon sessions | gauge | `status`
### Example metric
Query the total number of interactive logon sessions
```
wmi_logon_logon_type{status="interactive"}
```
## Useful queries
Query the total number of local and remote (i.e. Terminal Services) interactive sessions.
```
wmi_logon_logon_type{status=~"interactive|remoteinteractive"}
```
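To count active sessions across all logon types (a sketch using the metric above), the per-status series can be summed:
```
sum without (status) (wmi_logon_logon_type)
```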
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_


@@ -5,6 +5,7 @@ The memory collector exposes metrics about system memory usage
|||
-|-
Metric name prefix | `memory`
Data source | Perflib
Classes | `Win32_PerfRawData_PerfOS_Memory`
Enabled by default? | Yes
@@ -19,25 +20,25 @@ Name | Description | Type | Labels
`wmi_cs_logical_processors` | Number of installed logical processors | gauge | None
`wmi_cs_physical_memory_bytes` | Total installed physical memory | gauge | None
`wmi_memory_available_bytes` | The amount of physical memory immediately available for allocation to a process or for system use. It is equal to the sum of memory assigned to the standby (cached), free and zero page lists | gauge | None
`wmi_memory_cache_bytes` | _Not yet documented_ | gauge | None
`wmi_memory_cache_bytes_peak` | _Not yet documented_ | gauge | None
`wmi_memory_cache_faults_total` | _Not yet documented_ | gauge | None
`wmi_memory_commit_limit` | _Not yet documented_ | gauge | None
`wmi_memory_committed_bytes` | _Not yet documented_ | gauge | None
`wmi_memory_cache_bytes` | Number of bytes currently being used by the file system cache | gauge | None
`wmi_memory_cache_bytes_peak` | Maximum number of CacheBytes after the system was last restarted | gauge | None
`wmi_memory_cache_faults_total` | Number of faults which occur when a page sought in the file system cache is not found there and must be retrieved from elsewhere in memory (soft fault) or from disk (hard fault) | gauge | None
`wmi_memory_commit_limit` | Amount of virtual memory, in bytes, that can be committed without having to extend the paging file(s) | gauge | None
`wmi_memory_committed_bytes` | Amount of committed virtual memory, in bytes | gauge | None
`wmi_memory_demand_zero_faults_total` | The number of zeroed pages required to satisfy faults. Zeroed pages, pages emptied of previously stored data and filled with zeros, are a security feature of Windows that prevent processes from seeing data stored by earlier processes that used the memory space | gauge | None
`wmi_memory_free_and_zero_page_list_bytes` | _Not yet documented_ | gauge | None
`wmi_memory_free_system_page_table_entries` | _Not yet documented_ | gauge | None
`wmi_memory_free_system_page_table_entries` | Number of page table entries not being used by the system | gauge | None
`wmi_memory_modified_page_list_bytes` | _Not yet documented_ | gauge | None
`wmi_memory_page_faults_total` | _Not yet documented_ | gauge | None
`wmi_memory_page_faults_total` | Overall rate at which faulted pages are handled by the processor | gauge | None
`wmi_memory_swap_page_reads_total` | Number of disk page reads (a single read operation reading several pages is still only counted once) | gauge | None
`wmi_memory_swap_pages_read_total` | Number of pages read across all page reads (ie counting all pages read even if they are read in a single operation) | gauge | None
`wmi_memory_swap_pages_written_total` | Number of pages written across all page writes (ie counting all pages written even if they are written in a single operation) | gauge | None
`wmi_memory_swap_page_operations_total` | Total number of swap page read and writes (PagesPersec) | gauge | None
`wmi_memory_swap_page_writes_total` | Number of disk page writes (a single write operation writing several pages is still only counted once) | gauge | None
`wmi_memory_pool_nonpaged_allocs_total` | The number of calls to allocate space in the nonpaged pool. The nonpaged pool is an area of system memory area for objects that cannot be written to disk, and must remain in physical memory as long as they are allocated | gauge | None
`wmi_memory_pool_nonpaged_bytes_total` | _Not yet documented_ | gauge | None
`wmi_memory_pool_paged_allocs_total` | _Not yet documented_ | gauge | None
`wmi_memory_pool_paged_bytes` | _Not yet documented_ | gauge | None
`wmi_memory_pool_nonpaged_bytes_total` | Number of bytes in the non-paged pool | gauge | None
`wmi_memory_pool_paged_allocs_total` | Number of calls to allocate space in the paged pool, regardless of the amount of space allocated in each call | gauge | None
`wmi_memory_pool_paged_bytes` | Number of bytes in the paged pool | gauge | None
`wmi_memory_pool_paged_resident_bytes` | _Not yet documented_ | gauge | None
`wmi_memory_standby_cache_core_bytes` | _Not yet documented_ | gauge | None
`wmi_memory_standby_cache_normal_priority_bytes` | _Not yet documented_ | gauge | None
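
As a rough illustration of how these gauges combine (a sketch, assuming both the memory and cs collectors are enabled), the percentage of installed physical memory that is immediately available can be expressed as:
```
100 * wmi_memory_available_bytes / wmi_cs_physical_memory_bytes
```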


@@ -24,80 +24,80 @@ Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_mssql_collector_duration_seconds` | The time taken for each sub-collector to return | counter | `collector`, `instance`
`wmi_mssql_collector_success` | 1 if sub-collector succeeded, 0 otherwise | counter | `collector`, `instance`
`wmi_mssql_accessmethods_au_batch_cleanups` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_au_cleanups` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_by_reference_lob_creates` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_by_reference_lob_uses` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_lob_read_aheads` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_column_value_pulls` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_column_value_pushes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_deferred_dropped_aus` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_deferred_dropped_rowsets` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_dropped_rowset_cleanups` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_dropped_rowset_skips` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_extent_deallocations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_extent_allocations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_au_batch_cleanup_failures` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_leaf_page_cookie_failures` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_tree_page_cookie_failures` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_forwarded_records` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_free_space_page_fetches` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_free_space_scans` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_full_scans` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_index_searches` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_insysxact_waits` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_lob_handle_creates` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_lob_handle_destroys` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_lob_ss_provider_creates` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_lob_ss_provider_destroys` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_lob_ss_provider_truncations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_mixed_page_allocations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_page_compression_attempts` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_page_deallocations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_page_allocations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_page_compressions` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_page_splits` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_probe_scans` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_range_scans` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_scan_point_revalidations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_ghost_record_skips` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_table_lock_escalations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_leaf_page_cookie_uses` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_tree_page_cookie_uses` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_workfile_creates` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_worktables_creates` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_worktables_from_cache_ratio` | _Not yet documented_ | counter | `instance`
`wmi_mssql_availreplica_received_from_replica_bytes` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_availreplica_sent_to_replica_bytes` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_availreplica_sent_to_transport_bytes` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_availreplica_initiated_flow_controls` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_availreplica_flow_control_wait_seconds` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_availreplica_receives_from_replica` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_availreplica_resent_messages` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_availreplica_sends_to_replica` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_availreplica_sends_to_transport` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_bufman_background_writer_pages` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_buffer_cache_hit_ratio` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_checkpoint_pages` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_database_pages` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_extension_allocated_pages` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_extension_free_pages` | _Not yet documented_ | counter | `instance`
`wmi_mssql_accessmethods_au_batch_cleanups` | The total number of batches that were completed successfully by the background task that cleans up deferred dropped allocation units | counter | `instance`
`wmi_mssql_accessmethods_au_cleanups` | The total number of allocation units that were successfully dropped by the background task that cleans up deferred dropped allocation units. Each allocation unit drop requires multiple batches | counter | `instance`
`wmi_mssql_accessmethods_by_reference_lob_creates` | The total count of large object (lob) values that were passed by reference. By-reference lobs are used in certain bulk operations to avoid the cost of passing them by value | counter | `instance`
`wmi_mssql_accessmethods_by_reference_lob_uses` | The total count of by-reference lob values that were used. By-reference lobs are used in certain bulk operations to avoid the cost of passing them by-value | counter | `instance`
`wmi_mssql_accessmethods_lob_read_aheads` | The total count of lob pages on which readahead was issued | counter | `instance`
`wmi_mssql_accessmethods_column_value_pulls` | The total count of column values that were pulled in-row from off-row | counter | `instance`
`wmi_mssql_accessmethods_column_value_pushes` | The total count of column values that were pushed from in-row to off-row | counter | `instance`
`wmi_mssql_accessmethods_deferred_dropped_aus` | The total number of allocation units waiting to be dropped by the background task that cleans up deferred dropped allocation units | counter | `instance`
`wmi_mssql_accessmethods_deferred_dropped_rowsets` | The number of rowsets created as a result of aborted online index build operations that are waiting to be dropped by the background task that cleans up deferred dropped rowsets | counter | `instance`
`wmi_mssql_accessmethods_dropped_rowset_cleanups` | The number of rowsets per second created as a result of aborted online index build operations that were successfully dropped by the background task that cleans up deferred dropped rowsets | counter | `instance`
`wmi_mssql_accessmethods_dropped_rowset_skips` | The number of rowsets per second created as a result of aborted online index build operations that were skipped by the background task that cleans up deferred dropped rowsets | counter | `instance`
`wmi_mssql_accessmethods_extent_deallocations` | Number of extents deallocated per second in all databases in this instance of SQL Server | counter | `instance`
`wmi_mssql_accessmethods_extent_allocations` | Number of extents allocated per second in all databases in this instance of SQL Server | counter | `instance`
`wmi_mssql_accessmethods_au_batch_cleanup_failures` | The number of batches per second that failed and required retry, by the background task that cleans up deferred dropped allocation units. Failure could be due to lack of memory or disk space, hardware failure and other reasons | counter | `instance`
`wmi_mssql_accessmethods_leaf_page_cookie_failures` | The number of times that a leaf page cookie could not be used during an index search since changes happened on the leaf page. The cookie is used to speed up index search | counter | `instance`
`wmi_mssql_accessmethods_tree_page_cookie_failures` | The number of times that a tree page cookie could not be used during an index search since changes happened on the parent pages of those tree pages. The cookie is used to speed up index search | counter | `instance`
`wmi_mssql_accessmethods_forwarded_records` | Number of records per second fetched through forwarded record pointers | counter | `instance`
`wmi_mssql_accessmethods_free_space_page_fetches` | Number of pages fetched per second by free space scans. These scans search for free space within pages already allocated to an allocation unit, to satisfy requests to insert or modify record fragments | counter | `instance`
`wmi_mssql_accessmethods_free_space_scans` | Number of scans per second that were initiated to search for free space within pages already allocated to an allocation unit to insert or modify record fragment. Each scan may find multiple pages | counter | `instance`
`wmi_mssql_accessmethods_full_scans` | Number of unrestricted full scans per second. These can be either base-table or full-index scans | counter | `instance`
`wmi_mssql_accessmethods_index_searches` | Number of index searches per second. These are used to start a range scan, reposition a range scan, revalidate a scan point, fetch a single index record, and search down the index to locate where to insert a new row | counter | `instance`
`wmi_mssql_accessmethods_insysxact_waits` | Number of times a reader needs to wait for a page because the InSysXact bit is set | counter | `instance`
`wmi_mssql_accessmethods_lob_handle_creates` | Count of temporary lobs created | counter | `instance`
`wmi_mssql_accessmethods_lob_handle_destroys` | Count of temporary lobs destroyed | counter | `instance`
`wmi_mssql_accessmethods_lob_ss_provider_creates` | Count of LOB Storage Service Providers (LobSSP) created. One worktable created per LobSSP | counter | `instance`
`wmi_mssql_accessmethods_lob_ss_provider_destroys` | Count of LobSSP destroyed | counter | `instance`
`wmi_mssql_accessmethods_lob_ss_provider_truncations` | Count of LobSSP truncated | counter | `instance`
`wmi_mssql_accessmethods_mixed_page_allocations` | Number of pages allocated per second from mixed extents. These could be used for storing the IAM pages and the first eight pages that are allocated to an allocation unit | counter | `instance`
`wmi_mssql_accessmethods_page_compression_attempts` | Number of pages evaluated for page-level compression. Includes pages that were not compressed because significant savings could be achieved. Includes all objects in the instance of SQL Server | counter | `instance`
`wmi_mssql_accessmethods_page_deallocations` | Number of pages deallocated per second in all databases in this instance of SQL Server. These include pages from mixed extents and uniform extents | counter | `instance`
`wmi_mssql_accessmethods_page_allocations` | Number of pages allocated per second in all databases in this instance of SQL Server. These include pages allocations from both mixed extents and uniform extents | counter | `instance`
`wmi_mssql_accessmethods_page_compressions` | Number of data pages that are compressed by using PAGE compression. Includes all objects in the instance of SQL Server | counter | `instance`
`wmi_mssql_accessmethods_page_splits` | Number of page splits per second that occur as the result of overflowing index pages | counter | `instance`
`wmi_mssql_accessmethods_probe_scans` | Number of probe scans per second that are used to find at most one single qualified row in an index or base table directly | counter | `instance`
`wmi_mssql_accessmethods_range_scans` | Number of qualified range scans through indexes per second | counter | `instance`
`wmi_mssql_accessmethods_scan_point_revalidations` | Number of times per second that the scan point had to be revalidated to continue the scan | counter | `instance`
`wmi_mssql_accessmethods_ghost_record_skips` | Number of ghosted records per second skipped during scans | counter | `instance`
`wmi_mssql_accessmethods_table_lock_escalations` | Number of times locks on a table were escalated to the TABLE or HoBT granularity | counter | `instance`
`wmi_mssql_accessmethods_leaf_page_cookie_uses` | Number of times a leaf page cookie is used successfully during an index search since no change happened on the leaf page. The cookie is used to speed up index search | counter | `instance`
`wmi_mssql_accessmethods_tree_page_cookie_uses` | Number of times a tree page cookie is used successfully during an index search since no change happened on the parent page of the tree page. The cookie is used to speed up index search | counter | `instance`
`wmi_mssql_accessmethods_workfile_creates` | Number of work files created per second. For example, work files could be used to store temporary results for hash joins and hash aggregates | counter | `instance`
`wmi_mssql_accessmethods_worktables_creates` | Number of work tables created per second. For example, work tables could be used to store temporary results for query spool, lob variables, XML variables, and cursors | counter | `instance`
`wmi_mssql_accessmethods_worktables_from_cache_ratio` | Percentage of work tables created where the initial two pages of the work table were not allocated but were immediately available from the work table cache | counter | `instance`
`wmi_mssql_availreplica_received_from_replica_bytes` | Number of bytes received from the availability replica per second. Pings and status updates will generate network traffic even on databases with no user updates | counter | `instance`, `replica`
`wmi_mssql_availreplica_sent_to_replica_bytes` | Number of bytes sent to the remote availability replica per second. On the primary replica this is the number of bytes sent to the secondary replica. On the secondary replica this is the number of bytes sent to the primary replica | counter | `instance`, `replica`
`wmi_mssql_availreplica_sent_to_transport_bytes` | Actual number of bytes sent per second over the network to the remote availability replica. On the primary replica this is the number of bytes sent to the secondary replica. On the secondary replica this is the number of bytes sent to the primary replica | counter | `instance`, `replica`
`wmi_mssql_availreplica_initiated_flow_controls` | Number of times flow-control initiated in the last second. Flow Control Time (ms/sec) divided by Flow Control/sec is the average time per wait | counter | `instance`, `replica`
`wmi_mssql_availreplica_flow_control_wait_seconds` | Time in milliseconds that log stream messages waited for send flow control, in the last second | counter | `instance`, `replica`
`wmi_mssql_availreplica_receives_from_replica` | Number of Always On messages received from the replica per second | counter | `instance`, `replica`
`wmi_mssql_availreplica_resent_messages` | Number of Always On messages resent in the last second | counter | `instance`, `replica`
`wmi_mssql_availreplica_sends_to_replica` | Number of Always On messages sent to this availability replica per second | counter | `instance`, `replica`
`wmi_mssql_availreplica_sends_to_transport` | Actual number of Always On messages sent per second over the network to the remote availability replica | counter | `instance`, `replica`
`wmi_mssql_bufman_background_writer_pages` | Number of pages flushed to enforce the recovery interval settings | counter | `instance`
`wmi_mssql_bufman_buffer_cache_hit_ratio` | Indicates the percentage of pages found in the buffer cache without having to read from disk. The ratio is the total number of cache hits divided by the total number of cache lookups over the last few thousand page accesses | counter | `instance`
`wmi_mssql_bufman_checkpoint_pages` | Indicates the number of pages flushed to disk per second by a checkpoint or other operation that require all dirty pages to be flushed | counter | `instance`
`wmi_mssql_bufman_database_pages` | Indicates the number of pages in the buffer pool with database content | counter | `instance`
`wmi_mssql_bufman_extension_allocated_pages` | Total number of non-free cache pages in the buffer pool extension file | counter | `instance`
`wmi_mssql_bufman_extension_free_pages` | Total number of free cache pages in the buffer pool extension file | counter | `instance`
`wmi_mssql_bufman_extension_in_use_as_percentage` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_extension_outstanding_io` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_extension_page_evictions` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_extension_page_reads` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_extension_page_unreferenced_seconds` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_extension_page_writes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_free_list_stalls` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_integral_controller_slope` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_lazywrites` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_page_life_expectancy_seconds` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_page_lookups` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_page_reads` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_page_writes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_read_ahead_pages` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_read_ahead_issuing_seconds` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_target_pages` | _Not yet documented_ | counter | `instance`
`wmi_mssql_bufman_extension_outstanding_io` | Percentage of the buffer pool extension paging file occupied by buffer manager pages | counter | `instance`
`wmi_mssql_bufman_extension_page_evictions` | Number of pages evicted from the buffer pool extension file per second | counter | `instance`
`wmi_mssql_bufman_extension_page_reads` | Number of pages read from the buffer pool extension file per second | counter | `instance`
`wmi_mssql_bufman_extension_page_unreferenced_seconds` | Average seconds a page will stay in the buffer pool extension without references to it | counter | `instance`
`wmi_mssql_bufman_extension_page_writes` | Number of pages written to the buffer pool extension file per second | counter | `instance`
`wmi_mssql_bufman_free_list_stalls` | Indicates the number of requests per second that had to wait for a free page | counter | `instance`
`wmi_mssql_bufman_integral_controller_slope` | The slope that integral controller for the buffer pool last used, times -10 billion | counter | `instance`
`wmi_mssql_bufman_lazywrites` | Indicates the number of buffers written per second by the buffer manager's lazy writer | counter | `instance`
`wmi_mssql_bufman_page_life_expectancy_seconds` | Indicates the number of seconds a page will stay in the buffer pool without references | counter | `instance`
`wmi_mssql_bufman_page_lookups` | Indicates the number of requests per second to find a page in the buffer pool | counter | `instance`
`wmi_mssql_bufman_page_reads` | Indicates the number of physical database page reads that are issued per second | counter | `instance`
`wmi_mssql_bufman_page_writes` | Indicates the number of physical database page writes that are issued per second | counter | `instance`
`wmi_mssql_bufman_read_ahead_pages` | Indicates the number of pages read per second in anticipation of use | counter | `instance`
`wmi_mssql_bufman_read_ahead_issuing_seconds` | Time (microseconds) spent issuing readahead | counter | `instance`
`wmi_mssql_bufman_target_pages` | Ideal number of pages in the buffer pool | counter | `instance`
`wmi_mssql_dbreplica_database_flow_control_wait_seconds` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_database_initiated_flow_controls` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_received_file_bytes` | _Not yet documented_ | counter | `instance`, `replica`
@@ -112,144 +112,164 @@ Name | Description | Type | Labels
`wmi_mssql_dbreplica_log_compression_cachemisses` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_log_compressions` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_log_decompressions` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_log_remaining_for_undo` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_log_send_queue` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_mirrored_write_transactions` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_recovery_queue_records` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_redo_blocks` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_redo_remaining_bytes` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_redone_bytes` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_log_remaining_for_undo` | The amount of log, in bytes, remaining to complete the undo phase | counter | `instance`, `replica`
`wmi_mssql_dbreplica_log_send_queue` | Amount of log records in the log files of the primary database, in kilobytes, that haven't been sent to the secondary replica | counter | `instance`, `replica`
`wmi_mssql_dbreplica_mirrored_write_transactions` | Number of transactions that were written to the primary database and then waited to commit until the log was sent to the secondary database, in the last second | counter | `instance`, `replica`
`wmi_mssql_dbreplica_recovery_queue_records` | Amount of log records in the log files of the secondary replica that have not been redone | counter | `instance`, `replica`
`wmi_mssql_dbreplica_redo_blocks` | Number of times the redo thread was blocked on locks held by readers of the database | counter | `instance`, `replica`
`wmi_mssql_dbreplica_redo_remaining_bytes` | The amount of log, in kilobytes, remaining to be redone to finish the reverting phase | counter | `instance`, `replica`
`wmi_mssql_dbreplica_redone_bytes` | Amount of log records redone on the secondary database in the last second | counter | `instance`, `replica`
`wmi_mssql_dbreplica_redones` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_total_log_requiring_undo` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_dbreplica_transaction_delay_seconds` | _Not yet documented_ | counter | `instance`, `replica`
`wmi_mssql_databases_active_transactions` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_backup_restore_operations` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_bulk_copy_rows` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_bulk_copy_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_commit_table_entries` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_data_files_size_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_dbcc_logical_scan_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_group_commit_stall_seconds` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_flushed_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_cache_hit_ratio` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_cache_reads` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_files_size_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_files_used_size_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_flushes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_flush_waits` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_flush_wait_seconds` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_flush_write_seconds` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_growths` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_cache_misses` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_disk_reads` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_hash_deletes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_hash_inserts` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_invalid_hash_entries` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_log_scan_pushes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_log_writer_pushes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_empty_free_pool_pushes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_low_memory_pushes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_no_free_buffer_pushes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_req_behind_trunc` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_requests_old_vlf` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_requests` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_total_active_log_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_total_shared_pool_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_shrinks` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_truncations` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_log_used_percent` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_pending_repl_transactions` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_repl_transactions` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_shrink_data_movement_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_tracked_transactions` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_transactions` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_write_transactions` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_xtp_controller_dlc_fetch_latency_seconds` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_xtp_controller_dlc_peak_latency_seconds` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_xtp_controller_log_processed_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_databases_xtp_memory_used_bytes` | _Not yet documented_ | counter | `instance`, `database`
`wmi_mssql_genstats_active_temp_tables` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_connection_resets` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_event_notifications_delayed_drop` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_http_authenticated_requests` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_logical_connections` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_logins` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_logouts` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_mars_deadlocks` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_non_atomic_yields` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_blocked_processes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_soap_empty_requests` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_soap_method_invocations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_soap_session_initiate_requests` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_soap_session_terminate_requests` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_soapsql_requests` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_soapwsdl_requests` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_sql_trace_io_provider_lock_waits` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_tempdb_recovery_unit_ids_generated` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_tempdb_rowset_ids_generated` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_temp_tables_creations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_temp_tables_awaiting_destruction` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_trace_event_notification_queue_size` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_transactions` | _Not yet documented_ | counter | `instance`
`wmi_mssql_genstats_user_connections` | _Not yet documented_ | counter | `instance`
`wmi_mssql_locks_average_wait_seconds` | _Not yet documented_ | counter | `instance`, `resource`
`wmi_mssql_locks_lock_requests` | _Not yet documented_ | counter | `instance`, `resource`
`wmi_mssql_locks_lock_timeouts` | _Not yet documented_ | counter | `instance`, `resource`
`wmi_mssql_locks_lock_timeouts_excluding_NOWAIT` | _Not yet documented_ | counter | `instance`, `resource`
`wmi_mssql_locks_lock_waits` | _Not yet documented_ | counter | `instance`, `resource`
`wmi_mssql_locks_lock_wait_seconds` | _Not yet documented_ | counter | `instance`, `resource`
`wmi_mssql_locks_deadlocks` | _Not yet documented_ | counter | `instance`, `resource`
`wmi_mssql_memmgr_connection_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_database_cache_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_external_benefit_of_memory` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_free_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_granted_workspace_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_lock_blocks` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_allocated_lock_blocks` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_lock_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_lock_owner_blocks` | _Not yet documented_ | counter | `instance`
`wmi_mssql_dbreplica_total_log_requiring_undo` | Total kilobytes of log that must be undone | counter | `instance`, `replica`
`wmi_mssql_dbreplica_transaction_delay_seconds` | Delay in waiting for unterminated commit acknowledgment for all the current transactions | counter | `instance`, `replica`
`wmi_mssql_databases_active_transactions` | Number of active transactions for the database | counter | `instance`, `database`
`wmi_mssql_databases_backup_restore_operations` | Read/write throughput for backup and restore operations of a database per second | counter | `instance`, `database`
`wmi_mssql_databases_bulk_copy_rows` | Number of rows bulk copied per second | counter | `instance`, `database`
`wmi_mssql_databases_bulk_copy_bytes` | Amount of data bulk copied (in kilobytes) per second | counter | `instance`, `database`
`wmi_mssql_databases_commit_table_entries` | The size (row count) of the in-memory portion of the commit table for the database | counter | `instance`, `database`
`wmi_mssql_databases_data_files_size_bytes` | Cumulative size (in kilobytes) of all the data files in the database including any automatic growth. Monitoring this counter is useful, for example, for determining the correct size of tempdb | counter | `instance`, `database`
`wmi_mssql_databases_dbcc_logical_scan_bytes` | Number of logical read scan bytes per second for database console commands (DBCC) | counter | `instance`, `database`
`wmi_mssql_databases_group_commit_stall_seconds` | Group stall time (microseconds) per second | counter | `instance`, `database`
`wmi_mssql_databases_log_flushed_bytes` | Total number of log bytes flushed | counter | `instance`, `database`
`wmi_mssql_databases_log_cache_hit_ratio` | Percentage of log cache reads satisfied from the log cache | counter | `instance`, `database`
`wmi_mssql_databases_log_cache_reads` | Reads performed per second through the log manager cache | counter | `instance`, `database`
`wmi_mssql_databases_log_files_size_bytes` | Cumulative size (in kilobytes) of all the transaction log files in the database | counter | `instance`, `database`
`wmi_mssql_databases_log_files_used_size_bytes` | The cumulative used size of all the log files in the database | counter | `instance`, `database`
`wmi_mssql_databases_log_flushes` | Number of log flushes per second | counter | `instance`, `database`
`wmi_mssql_databases_log_flush_waits` | Number of commits per second waiting for the log flush | counter | `instance`, `database`
`wmi_mssql_databases_log_flush_wait_seconds` | Total wait time (in milliseconds) to flush the log. On an Always On secondary database, this value indicates the wait time for log records to be hardened to disk | counter | `instance`, `database`
`wmi_mssql_databases_log_flush_write_seconds` | Time in milliseconds for performing writes of log flushes that were completed in the last second | counter | `instance`, `database`
`wmi_mssql_databases_log_growths` | Total number of times the transaction log for the database has been expanded | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_cache_misses` | Number of requests for which the log block was not available in the log pool | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_disk_reads` | Number of disk reads that the log pool issued to fetch log blocks | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_hash_deletes` | Rate of raw hash entry deletes from the Log Pool | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_hash_inserts` | Rate of raw hash entry inserts into the Log Pool | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_invalid_hash_entries` | Rate of hash lookups failing due to being invalid | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_log_scan_pushes` | Rate of Log block pushes by log scans, which may come from disk or memory | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_log_writer_pushes` | Rate of Log block pushes by log writer thread | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_empty_free_pool_pushes` | Rate of Log block push fails due to empty free pool | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_low_memory_pushes` | Rate of Log block push fails due to being low on memory | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_no_free_buffer_pushes` | Rate of Log block push fails due to free buffer unavailable | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_req_behind_trunc` | Log pool cache misses due to block requested being behind truncation LSN | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_requests_old_vlf` | Log Pool requests that were not in the last VLF of the log | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_requests` | The number of log-block requests processed by the log pool | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_total_active_log_bytes` | Current total active log stored in the shared cache buffer manager in bytes | counter | `instance`, `database`
`wmi_mssql_databases_log_pool_total_shared_pool_bytes` | Current total memory usage of the shared cache buffer manager in bytes | counter | `instance`, `database`
`wmi_mssql_databases_log_shrinks` | Total number of log shrinks for this database | counter | `instance`, `database`
`wmi_mssql_databases_log_truncations` | The number of times the transaction log has been truncated (in Simple Recovery Model) | counter | `instance`, `database`
`wmi_mssql_databases_log_used_percent` | Percentage of space in the log that is in use | counter | `instance`, `database`
`wmi_mssql_databases_pending_repl_transactions` | Number of transactions in the transaction log of the publication database marked for replication, but not yet delivered to the distribution database | counter | `instance`, `database`
`wmi_mssql_databases_repl_transactions` | Number of transactions per second read out of the transaction log of the publication database and delivered to the distribution database | counter | `instance`, `database`
`wmi_mssql_databases_shrink_data_movement_bytes` | Amount of data being moved per second by autoshrink operations, or DBCC SHRINKDATABASE or DBCC SHRINKFILE statements | counter | `instance`, `database`
`wmi_mssql_databases_tracked_transactions` | Number of committed transactions recorded in the commit table for the database | counter | `instance`, `database`
`wmi_mssql_databases_transactions` | Number of transactions started for the database per second | counter | `instance`, `database`
`wmi_mssql_databases_write_transactions` | Number of transactions that wrote to the database and committed, in the last second | counter | `instance`, `database`
`wmi_mssql_databases_xtp_controller_dlc_fetch_latency_seconds` | Average latency in microseconds between log blocks entering the Direct Log Consumer and being retrieved by the XTP controller, per second | counter | `instance`, `database`
`wmi_mssql_databases_xtp_controller_dlc_peak_latency_seconds` | The largest recorded latency, in microseconds, of a fetch from the Direct Log Consumer by the XTP controller | counter | `instance`, `database`
`wmi_mssql_databases_xtp_controller_log_processed_bytes` | The amount of log bytes processed by the XTP controller thread, per second | counter | `instance`, `database`
`wmi_mssql_databases_xtp_memory_used_bytes` | The amount of memory used by XTP in the database | counter | `instance`, `database`
`wmi_mssql_genstats_active_temp_tables` | Number of temporary tables/table variables in use | counter | `instance`
`wmi_mssql_genstats_connection_resets` | Total number of logins started from the connection pool | counter | `instance`
`wmi_mssql_genstats_event_notifications_delayed_drop` | Number of event notifications waiting to be dropped by a system thread | counter | `instance`
`wmi_mssql_genstats_http_authenticated_requests` | Number of authenticated HTTP requests started per second | counter | `instance`
`wmi_mssql_genstats_logical_connections` | Number of logical connections to the system | counter | `instance`
`wmi_mssql_genstats_logins` | Total number of logins started per second. This does not include pooled connections | counter | `instance`
`wmi_mssql_genstats_logouts` | Total number of logout operations started per second | counter | `instance`
`wmi_mssql_genstats_mars_deadlocks` | Number of MARS deadlocks detected | counter | `instance`
`wmi_mssql_genstats_non_atomic_yields` | Number of non-atomic yields per second | counter | `instance`
`wmi_mssql_genstats_blocked_processes` | Number of currently blocked processes | counter | `instance`
`wmi_mssql_genstats_soap_empty_requests` | Number of empty SOAP requests started per second | counter | `instance`
`wmi_mssql_genstats_soap_method_invocations` | Number of SOAP method invocations started per second | counter | `instance`
`wmi_mssql_genstats_soap_session_initiate_requests` | Number of SOAP Session initiate requests started per second | counter | `instance`
`wmi_mssql_genstats_soap_session_terminate_requests` | Number of SOAP Session terminate requests started per second | counter | `instance`
`wmi_mssql_genstats_soapsql_requests` | Number of SOAP SQL requests started per second | counter | `instance`
`wmi_mssql_genstats_soapwsdl_requests` | Number of SOAP Web Service Description Language requests started per second | counter | `instance`
`wmi_mssql_genstats_sql_trace_io_provider_lock_waits` | Number of waits for the File IO Provider lock per second | counter | `instance`
`wmi_mssql_genstats_tempdb_recovery_unit_ids_generated` | Number of duplicate tempdb recovery unit id generated | counter | `instance`
`wmi_mssql_genstats_tempdb_rowset_ids_generated` | Number of duplicate tempdb rowset id generated | counter | `instance`
`wmi_mssql_genstats_temp_tables_creations` | Number of temporary tables/table variables created per second | counter | `instance`
`wmi_mssql_genstats_temp_tables_awaiting_destruction` | Number of temporary tables/table variables waiting to be destroyed by the cleanup system thread | counter | `instance`
`wmi_mssql_genstats_trace_event_notification_queue_size` | Number of trace event notification instances waiting in the internal queue to be sent through Service Broker | counter | `instance`
`wmi_mssql_genstats_transactions` | Number of transaction enlistments (local, DTC, bound all combined) | counter | `instance`
`wmi_mssql_genstats_user_connections` | Counts the number of users currently connected to SQL Server | counter | `instance`
`wmi_mssql_locks_average_wait_seconds` | Average amount of wait time (in milliseconds) for each lock request that resulted in a wait | counter | `instance`, `resource`
`wmi_mssql_locks_lock_requests` | Number of new locks and lock conversions per second requested from the lock manager | counter | `instance`, `resource`
`wmi_mssql_locks_lock_timeouts` | Number of lock requests per second that timed out, including requests for NOWAIT locks | counter | `instance`, `resource`
`wmi_mssql_locks_lock_timeouts_excluding_NOWAIT` | Number of lock requests per second that timed out, but excluding requests for NOWAIT locks | counter | `instance`, `resource`
`wmi_mssql_locks_lock_waits` | Number of lock requests per second that required the caller to wait | counter | `instance`, `resource`
`wmi_mssql_locks_lock_wait_seconds` | Total wait time (in milliseconds) for locks in the last second | counter | `instance`, `resource`
`wmi_mssql_locks_deadlocks` | Number of lock requests per second that resulted in a deadlock | counter | `instance`, `resource`
`wmi_mssql_memmgr_connection_memory_bytes` | Specifies the total amount of dynamic memory the server is using for maintaining connections | counter | `instance`
`wmi_mssql_memmgr_database_cache_memory_bytes` | Specifies the amount of memory the server is currently using for the database pages cache | counter | `instance`
`wmi_mssql_memmgr_external_benefit_of_memory` | An internal estimation of the performance benefit from adding memory to a specific cache | counter | `instance`
`wmi_mssql_memmgr_free_memory_bytes` | Specifies the amount of committed memory currently not used by the server | counter | `instance`
`wmi_mssql_memmgr_granted_workspace_memory_bytes` | Specifies the total amount of memory currently granted to executing processes, such as hash, sort, bulk copy, and index creation operations | counter | `instance`
`wmi_mssql_memmgr_lock_blocks` | Specifies the current number of lock blocks in use on the server (refreshed periodically). A lock block represents an individual locked resource, such as a table, page, or row | counter | `instance`
`wmi_mssql_memmgr_allocated_lock_blocks` | Specifies the current number of allocated lock blocks. At server startup, the number of allocated lock blocks plus the number of allocated lock owner blocks depends on the SQL Server Locks configuration option. If more lock blocks are needed, the value increases | counter | `instance`
`wmi_mssql_memmgr_lock_memory_bytes` | Specifies the total amount of dynamic memory the server is using for locks | counter | `instance`
`wmi_mssql_memmgr_lock_owner_blocks` | Specifies the current number of allocated lock owner blocks. At server startup, the number of allocated lock owner blocks and the number of allocated lock blocks depend on the SQL Server Locks configuration option. If more lock owner blocks are needed, the value increases dynamically | counter | `instance`
`wmi_mssql_memmgr_allocated_lock_owner_blocks` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_log_pool_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_maximum_workspace_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_outstanding_memory_grants` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_pending_memory_grants` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_optimizer_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_reserved_server_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_sql_cache_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_stolen_server_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_target_server_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_total_server_memory_bytes` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_auto_parameterization_attempts` | _Not yet documented_ | counter | `instance`
`wmi_mssql_memmgr_log_pool_memory_bytes` | Total amount of dynamic memory the server is using for Log Pool | counter | `instance`
`wmi_mssql_memmgr_maximum_workspace_memory_bytes` | Indicates the maximum amount of memory available for executing processes, such as hash, sort, bulk copy, and index creation operations | counter | `instance`
`wmi_mssql_memmgr_outstanding_memory_grants` | Specifies the total number of processes that have successfully acquired a workspace memory grant | counter | `instance`
`wmi_mssql_memmgr_pending_memory_grants` | Specifies the total number of processes waiting for a workspace memory grant | counter | `instance`
`wmi_mssql_memmgr_optimizer_memory_bytes` | Specifies the total amount of dynamic memory the server is using for query optimization | counter | `instance`
`wmi_mssql_memmgr_reserved_server_memory_bytes` | Indicates the amount of memory the server has reserved for future usage. This counter shows the current unused amount of memory initially granted that is shown in Granted Workspace Memory | counter | `instance`
`wmi_mssql_memmgr_sql_cache_memory_bytes` | Specifies the total amount of dynamic memory the server is using for the dynamic SQL cache | counter | `instance`
`wmi_mssql_memmgr_stolen_server_memory_bytes` | Specifies the amount of memory the server is using for purposes other than database pages | counter | `instance`
`wmi_mssql_memmgr_target_server_memory_bytes` | Indicates the ideal amount of memory the server can consume | counter | `instance`
`wmi_mssql_memmgr_total_server_memory_bytes` | Specifies the amount of memory the server has committed using the memory manager | counter | `instance`
`wmi_mssql_sqlstats_auto_parameterization_attempts` | Number of auto-parameterization attempts per second. Note that auto-parameterizations are also known as simple parameterizations in later versions of SQL Server | counter | `instance`
`wmi_mssql_sqlstats_batch_requests` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_failed_auto_parameterization_attempts` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_forced_parameterizations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_guided_plan_executions` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_misguided_plan_executions` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_safe_auto_parameterization_attempts` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_sql_attentions` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_sql_compilations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_sql_recompilations` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_unsafe_auto_parameterization_attempts` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sql_errors_total` | _Not yet documented_ | counter | `instance`, `resource`
`wmi_mssql_transactions_tempdb_free_space_bytes` | _Not yet documented_ | gauge | `instance`
`wmi_mssql_transactions_longest_transaction_running_seconds` | _Not yet documented_ | gauge | `instance`
`wmi_mssql_transactions_nonsnapshot_version_active_total` | _Not yet documented_ | counter | `instance`
`wmi_mssql_transactions_snapshot_active_total` | _Not yet documented_ | counter | `instance`
`wmi_mssql_transactions_active_total` | _Not yet documented_ | counter | `instance`
`wmi_mssql_transactions_update_conflicts_total` | _Not yet documented_ | counter | `instance`
`wmi_mssql_transactions_update_snapshot_active_total` | _Not yet documented_ | counter | `instance`
`wmi_mssql_transactions_version_cleanup_rate_bytes` | _Not yet documented_ | gauge | `instance`
`wmi_mssql_transactions_version_generation_rate_bytes` | _Not yet documented_ | gauge | `instance`
`wmi_mssql_transactions_version_store_size_bytes` | _Not yet documented_ | gauge | `instance`
`wmi_mssql_transactions_version_store_units` | _Not yet documented_ | counter | `instance`
`wmi_mssql_transactions_version_store_creation_units` | _Not yet documented_ | counter | `instance`
`wmi_mssql_transactions_version_store_truncation_units` | _Not yet documented_ | counter | `instance`
`wmi_mssql_sqlstats_forced_parameterizations` | Number of successful forced parameterizations per second | counter | `instance`
`wmi_mssql_sqlstats_guided_plan_executions` | Number of plan executions per second in which the query plan has been generated by using a plan guide | counter | `instance`
`wmi_mssql_sqlstats_misguided_plan_executions` | Number of plan executions per second in which a plan guide could not be honored during plan generation | counter | `instance`
`wmi_mssql_sqlstats_safe_auto_parameterization_attempts` | Number of safe auto-parameterization attempts per second | counter | `instance`
`wmi_mssql_sqlstats_sql_attentions` | Number of attentions per second | counter | `instance`
`wmi_mssql_sqlstats_sql_compilations` | Number of SQL compilations per second | counter | `instance`
`wmi_mssql_sqlstats_sql_recompilations` | Number of statement recompiles per second | counter | `instance`
`wmi_mssql_sqlstats_unsafe_auto_parameterization_attempts` | Number of unsafe auto-parameterization attempts per second. | counter | `instance`
`wmi_mssql_sql_errors_total` | Information for all errors | counter | `instance`, `resource`
`wmi_mssql_transactions_tempdb_free_space_bytes` | The amount of space (in kilobytes) available in tempdb | gauge | `instance`
`wmi_mssql_transactions_longest_transaction_running_seconds` | The length of time (in seconds) since the start of the transaction that has been active longer than any other current transaction | gauge | `instance`
`wmi_mssql_transactions_nonsnapshot_version_active_total` | The number of currently active transactions that are not using snapshot isolation level and have made data modifications that have generated row versions in the tempdb version store | counter | `instance`
`wmi_mssql_transactions_snapshot_active_total` | The number of currently active transactions using the snapshot isolation level | counter | `instance`
`wmi_mssql_transactions_active_total` | The number of currently active transactions of all types | counter | `instance`
`wmi_mssql_transactions_update_conflicts_total` | The percentage of those transactions using the snapshot isolation level that have encountered update conflicts within the last second | counter | `instance`
`wmi_mssql_transactions_update_snapshot_active_total` | The number of currently active transactions that are using the snapshot isolation level and have modified data | counter | `instance`
`wmi_mssql_transactions_version_cleanup_rate_bytes` | The rate (in kilobytes per second) at which row versions are removed from the snapshot isolation version store in tempdb | gauge | `instance`
`wmi_mssql_transactions_version_generation_rate_bytes` | The rate (in kilobytes per second) at which new row versions are added to the snapshot isolation version store in tempdb | gauge | `instance`
`wmi_mssql_transactions_version_store_size_bytes` | The amount of space (in kilobytes) in tempdb being used to store snapshot isolation level row versions | gauge | `instance`
`wmi_mssql_transactions_version_store_units` | The number of active allocation units in the snapshot isolation version store in tempdb | counter | `instance`
`wmi_mssql_transactions_version_store_creation_units` | The number of allocation units that have been created in the snapshot isolation store since the instance of the Database Engine was started | counter | `instance`
`wmi_mssql_transactions_version_store_truncation_units` | The number of allocation units that have been removed from the snapshot isolation store since the instance of the Database Engine was started | counter | `instance`
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
### Buffer Cache Hit Ratio
When you read the counter in perfmon you will get the percentage of pages found in the buffer cache. This percentage is calculated internally based on the total number of cache hits divided by the total number of cache lookups over the last few thousand page accesses.
This collector exposes the two internal values separately, so the Buffer Cache Hit Ratio can be calculated in PromQL:
```
wmi_mssql_bufman_buffer_cache_hits{instance="host:9182", exported_instance="MSSQLSERVER"} /
wmi_mssql_bufman_buffer_cache_lookups{instance="host:9182", exported_instance="MSSQLSERVER"}
```
This principle can be applied to the following metrics too (see the example after this list):
- AccessMethodsWorktablesFromCacheHitRatio
- accessmethods_worktables_from_cache_hits
- accessmethods_worktables_from_cache_lookups
- LogCacheHitRatio
- databases_log_cache_hits
- databases_log_cache_lookups
- AverageLockWaitTime
- locks_wait_time_seconds
- locks_count
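For example, the average lock wait time can be derived the same way. A minimal sketch, assuming the `locks` metrics follow the same naming and labelling pattern as the table above (they may also carry a `resource` label, in which case the labels still match on both sides of the division):
```
wmi_mssql_locks_wait_time_seconds{instance="host:9182", exported_instance="MSSQLSERVER"} /
wmi_mssql_locks_count{instance="host:9182", exported_instance="MSSQLSERVER"}
```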
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_

View File

@@ -5,6 +5,7 @@ The net collector exposes metrics about network interfaces
|||
-|-
Metric name prefix | `net`
Data source | Perflib
Classes | [`Win32_PerfRawData_Tcpip_NetworkInterface`](https://technet.microsoft.com/en-us/security/aa394340(v=vs.80))
Enabled by default? | Yes
@@ -22,24 +23,40 @@ If given, an interface name needs to *not* match the blacklist regexp in order f
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_net_bytes_received_total` | _Not yet documented_ | counter | `nic`
`wmi_net_bytes_sent_total` | _Not yet documented_ | counter | `nic`
`wmi_net_bytes_total` | _Not yet documented_ | counter | `nic`
`wmi_net_packets_outbound_discarded` | _Not yet documented_ | counter | `nic`
`wmi_net_packets_outbound_errors` | _Not yet documented_ | counter | `nic`
`wmi_net_packets_received_discarded` | _Not yet documented_ | counter | `nic`
`wmi_net_packets_received_errors` | _Not yet documented_ | counter | `nic`
`wmi_net_packets_received_total` | _Not yet documented_ | counter | `nic`
`wmi_net_packets_received_unknown` | _Not yet documented_ | counter | `nic`
`wmi_net_packets_total` | _Not yet documented_ | counter | `nic`
`wmi_net_packets_sent_total` | _Not yet documented_ | counter | `nic`
`wmi_net_current_bandwidth` | _Not yet documented_ | counter | `nic`
`wmi_net_bytes_received_total` | Total bytes received by interface | counter | `nic`
`wmi_net_bytes_sent_total` | Total bytes transmitted by interface | counter | `nic`
`wmi_net_bytes_total` | Total bytes received and transmitted by interface | counter | `nic`
`wmi_net_packets_outbound_discarded` | Total outbound packets that were chosen to be discarded even though no errors had been detected to prevent transmission | counter | `nic`
`wmi_net_packets_outbound_errors` | Total packets that could not be transmitted due to errors | counter | `nic`
`wmi_net_packets_received_discarded` | Total inbound packets that were chosen to be discarded even though no errors had been detected to prevent delivery | counter | `nic`
`wmi_net_packets_received_errors` | Total packets that could not be received due to errors | counter | `nic`
`wmi_net_packets_received_total` | Total packets received by interface | counter | `nic`
`wmi_net_packets_received_unknown` | Total packets received by interface that were discarded because of an unknown or unsupported protocol | counter | `nic`
`wmi_net_packets_total` | Total packets received and transmitted by interface | counter | `nic`
`wmi_net_packets_sent_total` | Total packets transmitted by interface | counter | `nic`
`wmi_net_current_bandwidth` | Estimate of the interface's current bandwidth in bits per second (bps) | gauge | `nic`
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
Query the rate of transmitted network traffic
```
rate(wmi_net_bytes_sent_total{instance="localhost"}[2m])
```
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
Get total utilisation of network interface as a percentage
```
rate(wmi_net_bytes_total{instance="localhost", nic="Microsoft_Hyper_V_Network_Adapter__1"}[2m]) * 8 / wmi_net_current_bandwidth{instance="localhost", nic="Microsoft_Hyper_V_Network_Adapter__1"} * 100
```
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
**prometheus.rules**
```yaml
- alert: NetInterfaceUsage
expr: rate(wmi_net_bytes_total[2m]) * 8 / wmi_net_current_bandwidth * 100 > 95
for: 10m
labels:
severity: high
annotations:
summary: "Network Interface Usage (instance {{ $labels.instance }})"
description: "Network traffic usage is greater than 95% for interface {{ $labels.nic }}\n VALUE = {{ $value }}\n LABELS: {{ $labels }}"
```

View File

@@ -16,24 +16,51 @@ None
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_os_paging_limit_bytes` | _Not yet documented_ | gauge | None
`wmi_os_paging_free_bytes` | _Not yet documented_ | gauge | None
`wmi_os_physical_memory_free_bytes` | _Not yet documented_ | gauge | None
`wmi_os_time` | _Not yet documented_ | gauge | None
`wmi_os_timezone` | _Not yet documented_ | gauge | `timezone`
`wmi_os_processes` | _Not yet documented_ | gauge | None
`wmi_os_processes_limit` | _Not yet documented_ | gauge | None
`wmi_os_process_memory_limix_bytes` | _Not yet documented_ | gauge | None
`wmi_os_users` | _Not yet documented_ | gauge | None
`wmi_os_virtual_memory_bytes` | _Not yet documented_ | gauge | None
`wmi_os_visible_memory_bytes` | _Not yet documented_ | gauge | None
`wmi_os_virtual_memory_free_bytes` | _Not yet documented_ | gauge | None
`wmi_os_info` | Contains full product name & version in labels | gauge | `product`, `version`
`wmi_os_paging_limit_bytes` | Total number of bytes that can be stored in the operating system paging files. 0 (zero) indicates that there are no paging files | gauge | None
`wmi_os_paging_free_bytes` | Number of bytes that can be mapped into the operating system paging files without causing any other pages to be swapped out | gauge | None
`wmi_os_physical_memory_free_bytes` | Bytes of physical memory currently unused and available | gauge | None
`wmi_os_time` | Current time as reported by the operating system, in [Unix time](https://en.wikipedia.org/wiki/Unix_time). See [time.Unix()](https://golang.org/pkg/time/#Unix) for details | gauge | None
`wmi_os_timezone` | Current timezone as reported by the operating system. See [time.Zone()](https://golang.org/pkg/time/#Time.Zone) for details | gauge | `timezone`
`wmi_os_processes` | Number of process contexts currently loaded or running on the operating system | gauge | None
`wmi_os_processes_limit` | Maximum number of process contexts the operating system can support. The default value set by the provider is 4294967295 (0xFFFFFFFF) | gauge | None
`wmi_os_process_memory_limit_bytes` | Maximum number of bytes of memory that can be allocated to a process | gauge | None
`wmi_os_users` | Number of user sessions for which the operating system is storing state information currently. For a list of current active logon sessions, see [`logon`](collector.logon.md) | gauge | None
`wmi_os_virtual_memory_bytes` | Bytes of virtual memory | gauge | None
`wmi_os_visible_memory_bytes` | Total bytes of physical memory available to the operating system. This value does not necessarily indicate the true amount of physical memory, but what is reported to the operating system as available to it | gauge | None
`wmi_os_virtual_memory_free_bytes` | Bytes of virtual memory currently unused and available | gauge | None
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
Show current number of processes
```
wmi_os_processes{instance="localhost"}
```
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
Find all devices not set to UTC timezone
```
wmi_os_timezone{timezone != "UTC"}
```
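Show memory usage as a percentage (this mirrors the expression used in the `MemoryLow` alert below and relies on `wmi_cs_physical_memory_bytes` from the `cs` collector):
```
100 - 100 * wmi_os_physical_memory_free_bytes{instance="localhost"} / wmi_cs_physical_memory_bytes{instance="localhost"}
```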
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_
**prometheus.rules**
```yaml
# Alert on hosts that have exhausted all available physical memory
- alert: MemoryExhausted
expr: wmi_os_physical_memory_free_bytes == 0
for: 10m
labels:
severity: high
annotations:
summary: "Host {{ $labels.instance }} is out of memory"
description: "{{ $labels.instance }} has exhausted all available physical memory"
# Alert on hosts with greater than 90% memory usage
- alert: MemoryLow
expr: 100 - 100 * wmi_os_physical_memory_free_bytes / wmi_cs_physical_memory_bytes > 90
for: 10m
labels:
severity: warning
annotations:
summary: "Memory usage for host {{ $labels.instance }} is greater than 90%"
```

View File

@@ -5,18 +5,37 @@ The process collector exposes metrics about processes
|||
-|-
Metric name prefix | `process`
Classes | [`Win32_PerfRawData_PerfProc_Process`](https://msdn.microsoft.com/en-us/library/aa394323(v=vs.85).aspx)
Data source | Perflib
Counters | `Process`
Enabled by default? | No
## Flags
### `--collector.process.processes-where`
### `--collector.process.whitelist`
A WMI filter on which processes to include. Recommended to keep down number of returned metrics.
Regexp of processes to include. Process name must both match whitelist and not
match blacklist to be included. Recommended to keep down number of returned
metrics.
`%` is a wildcard, and can be used to match on substrings.
### `--collector.process.blacklist`
Example: `--collector.process.processes-where="Name LIKE 'firefox%'`
Regexp of processes to exclude. Process name must both match whitelist and not
match blacklist to be included. Recommended to keep down number of returned
metrics.
### Example
To match all firefox processes: `--collector.process.whitelist="firefox.+"`.
Note that multiple processes with the same name will be disambiguated by
Windows by adding a number suffix, such as `firefox#2`. Your [regexp](https://en.wikipedia.org/wiki/Regular_expression) must take
these suffixes into consideration.
:warning: The regexp is case-sensitive, so `--collector.process.whitelist="FIREFOX.+"` will **NOT** match a process named `firefox`.
To specify multiple names, use the pipe `|` character:
```
--collector.process.whitelist="firefox.+|FIREFOX.+|chrome.+"
```
This will match all processes named `firefox`, `FIREFOX` or `chrome`.
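If the exporter evaluates these patterns with Go's regexp syntax (which supports inline flags), a case-insensitive match can alternatively be written with the `(?i)` flag instead of listing both spellings; treat this as a sketch and verify it against your exporter version:
```
--collector.process.whitelist="(?i)firefox.+"
```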
## Metrics

View File

@@ -79,7 +79,7 @@ count(wmi_service_state{exported_name=~"(sqlserveragent|mssqlserver)",state="run
## Alerting examples
**prometheus.rules**
```
```yaml
groups:
- name: Microsoft SQL Server Alerts
rules:

View File

@@ -5,6 +5,7 @@ The system collector exposes metrics about ...
|||
-|-
Metric name prefix | `system`
Data source | Perflib
Classes | [`Win32_PerfRawData_PerfOS_System`](https://web.archive.org/web/20050830140516/http://msdn.microsoft.com/library/en-us/wmisdk/wmi/win32_perfrawdata_perfos_system.asp)
Enabled by default? | Yes
@@ -16,18 +17,24 @@ None
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_system_context_switches_total` | _Not yet documented_ | counter | None
`wmi_system_exception_dispatches_total` | _Not yet documented_ | counter | None
`wmi_system_processor_queue_length` | _Not yet documented_ | gauge | None
`wmi_system_system_calls_total` | _Not yet documented_ | counter | None
`wmi_system_system_up_time` | _Not yet documented_ | gauge | None
`wmi_system_threads` | _Not yet documented_ | gauge | None
`wmi_system_context_switches_total` | Total number of [context switches](https://en.wikipedia.org/wiki/Context_switch) | counter | None
`wmi_system_exception_dispatches_total` | Total exceptions dispatched by the system | counter | None
`wmi_system_processor_queue_length` | Number of threads in the processor queue. There is a single queue for processor time even on computers with multiple processors. | gauge | None
`wmi_system_system_calls_total` | Total combined calls to Windows NT system service routines by all processes running on the computer | counter | None
`wmi_system_system_up_time` | Time of last boot of system | gauge | None
`wmi_system_threads` | Number of Windows system [threads](https://en.wikipedia.org/wiki/Thread_(computing)) | gauge | None
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
Show current number of system threads
```
wmi_system_threads{instance="localhost"}
```
## Useful queries
_This collector does not yet have any useful queries added, we would appreciate your help adding them!_
Find hosts that have rebooted in the last 24 hours
```
time() - wmi_system_system_up_time < 86400
```
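Another useful saturation signal is the rate of context switches, using the counter from the table above:
```
rate(wmi_system_context_switches_total{instance="localhost"}[5m])
```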
## Alerting examples
_This collector does not yet have alerting examples, we would appreciate your help adding them!_

View File

@@ -16,15 +16,15 @@ None
Name | Description | Type | Labels
-----|-------------|------|-------
`wmi_tcp_connection_failures` | _Not yet documented_ | counter | None
`wmi_tcp_connections_active` | _Not yet documented_ | counter | None
`wmi_tcp_connections_established` | _Not yet documented_ | counter | None
`wmi_tcp_connections_passive` | _Not yet documented_ | counter | None
`wmi_tcp_connections_reset` | _Not yet documented_ | counter | None
`wmi_tcp_segments_total` | _Not yet documented_ | counter | None
`wmi_tcp_segments_received_total` | _Not yet documented_ | counter | None
`wmi_tcp_segments_retransmitted_total` | _Not yet documented_ | counter | None
`wmi_tcp_segments_sent_total` | _Not yet documented_ | counter | None
`wmi_tcp_connection_failures` | Number of times TCP connections have made a direct transition to the CLOSED state from the SYN-SENT state or the SYN-RCVD state, plus the number of times TCP connections have made a direct transition from the SYN-RCVD state to the LISTEN state | counter | None
`wmi_tcp_connections_active` | Number of times TCP connections have made a direct transition from the CLOSED state to the SYN-SENT state. | counter | None
`wmi_tcp_connections_established` | Number of TCP connections for which the current state is either ESTABLISHED or CLOSE-WAIT. | counter | None
`wmi_tcp_connections_passive` | Number of times TCP connections have made a direct transition from the LISTEN state to the SYN-RCVD state. | counter | None
`wmi_tcp_connections_reset` | Number of times TCP connections have made a direct transition to the CLOSED state from either the ESTABLISHED state or the CLOSE-WAIT state. | counter | None
`wmi_tcp_segments_total` | Total segments sent or received using the TCP protocol | counter | None
`wmi_tcp_segments_received_total` | Total segments received, including those received in error. This count includes segments received on currently established connections | counter | None
`wmi_tcp_segments_retransmitted_total` | Total segments retransmitted. That is, segments transmitted that contain one or more previously transmitted bytes | counter | None
`wmi_tcp_segments_sent_total` | Total segments sent, including those on current connections, but excluding those containing *only* retransmitted bytes | counter | None
### Example metric
_This collector does not yet have explained examples, we would appreciate your help adding them!_
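As a starting point, the segment retransmission rate can be graphed with the counter from the table above:
```
rate(wmi_tcp_segments_retransmitted_total{instance="localhost"}[2m])
```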

View File

@@ -12,7 +12,7 @@ Enabled by default? | Yes
### `--collector.textfile.directory`
The directory containing the files to be ingested. Only files with the extension `.prom` are read.
The directory containing the files to be ingested. Only files with the extension `.prom` are read. The `.prom` file must end with a trailing line feed (newline) to work properly.
Default value: `C:\Program Files\wmi_exporter\textfile_inputs`
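A minimal sketch of a `.prom` file in the Prometheus text exposition format (the metric name is purely illustrative; remember the trailing newline mentioned above):
```
# HELP example_custom_metric A hypothetical metric ingested from a textfile.
# TYPE example_custom_metric gauge
example_custom_metric{source="textfile"} 42
```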

View File

@@ -5,7 +5,7 @@ package main
import (
"fmt"
"net/http"
"os"
_ "net/http/pprof"
"sort"
"strconv"
"strings"
@@ -97,7 +97,11 @@ func (coll WmiCollector) Collect(ch chan<- prometheus.Metric) {
)
t := time.Now()
scrapeContext, err := collector.PrepareScrapeContext()
cs := make([]string, 0, len(coll.collectors))
for name := range coll.collectors {
cs = append(cs, name)
}
scrapeContext, err := collector.PrepareScrapeContext(cs)
ch <- prometheus.MustNewConstMetric(
snapshotDuration,
prometheus.GaugeValue,
@@ -144,6 +148,7 @@ func (coll WmiCollector) Collect(ch chan<- prometheus.Metric) {
go func() {
wg.Wait()
close(allDone)
close(metricsBuffer)
}()
// Wait until either all collectors finish, or timeout expires
@@ -187,17 +192,6 @@ func (coll WmiCollector) Collect(ch chan<- prometheus.Metric) {
l.Unlock()
}
func filterAvailableCollectors(collectors string) string {
var availableCollectors []string
for _, c := range strings.Split(collectors, ",") {
_, ok := collector.Factories[c]
if ok {
availableCollectors = append(availableCollectors, c)
}
}
return strings.Join(availableCollectors, ",")
}
func execute(name string, c collector.Collector, ctx *collector.ScrapeContext, ch chan<- prometheus.Metric) collectorOutcome {
t := time.Now()
err := c.Collect(ctx, ch)
@@ -238,16 +232,13 @@ func loadCollectors(list string) (map[string]collector.Collector, error) {
enabled := expandEnabledCollectors(list)
for _, name := range enabled {
fn, ok := collector.Factories[name]
if !ok {
return nil, fmt.Errorf("collector '%s' not available", name)
}
c, err := fn()
c, err := collector.Build(name)
if err != nil {
return nil, err
}
collectors[name] = c
}
return collectors, nil
}
@@ -274,10 +265,14 @@ func main() {
"telemetry.path",
"URL path for surfacing collected metrics.",
).Default("/metrics").String()
maxRequests = kingpin.Flag(
"telemetry.max-requests",
"Maximum number of concurrent requests. 0 to disable.",
).Default("5").Int()
enabledCollectors = kingpin.Flag(
"collectors.enabled",
"Comma-separated list of collectors to use. Use '[defaults]' as a placeholder for all the collectors enabled by default.").
Default(filterAvailableCollectors(defaultCollectors)).String()
Default(defaultCollectors).String()
printCollectors = kingpin.Flag(
"collectors.print",
"If true, print available collectors and exit.",
@@ -294,8 +289,9 @@ func main() {
kingpin.Parse()
if *printCollectors {
collectorNames := make(sort.StringSlice, 0, len(collector.Factories))
for n := range collector.Factories {
collectors := collector.Available()
collectorNames := make(sort.StringSlice, 0, len(collectors))
for _, n := range collectors {
collectorNames = append(collectorNames, n)
}
collectorNames.Sort()
@@ -340,10 +336,16 @@ func main() {
},
}
http.Handle(*metricsPath, h)
http.HandleFunc(*metricsPath, withConcurrencyLimit(*maxRequests, h.ServeHTTP))
http.HandleFunc("/health", healthCheck)
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, *metricsPath, http.StatusMovedPermanently)
_, _ = w.Write([]byte(`<html>
<head><title>WMI Exporter</title></head>
<body>
<h1>WMI Exporter</h1>
<p><a href="` + *metricsPath + `">Metrics</a></p>
</body>
</html>`))
})
log.Infoln("Starting WMI exporter", version.Info())
@@ -378,6 +380,25 @@ func keys(m map[string]collector.Collector) []string {
return ret
}
func withConcurrencyLimit(n int, next http.HandlerFunc) http.HandlerFunc {
if n <= 0 {
return next
}
sem := make(chan struct{}, n)
return func(w http.ResponseWriter, r *http.Request) {
select {
case sem <- struct{}{}:
defer func() { <-sem }()
default:
w.WriteHeader(http.StatusServiceUnavailable)
_, _ = w.Write([]byte("Too many concurrent requests"))
return
}
next(w, r)
}
}
type wmiExporterService struct {
stopCh chan<- bool
}
@@ -429,7 +450,7 @@ func (mh *metricsHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
reg := prometheus.NewRegistry()
reg.MustRegister(mh.collectorFactory(time.Duration(timeoutSeconds * float64(time.Second))))
reg.MustRegister(
prometheus.NewProcessCollector(os.Getpid(), ""),
prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{}),
prometheus.NewGoCollector(),
version.NewCollector("wmi_exporter"),
)

17
go.mod Normal file
View File

@@ -0,0 +1,17 @@
module github.com/martinlindhe/wmi_exporter
go 1.13
require (
github.com/Microsoft/go-winio v0.4.14 // indirect
github.com/Microsoft/hcsshim v0.8.6
github.com/StackExchange/wmi v0.0.0-20180116203802-5d049714c4a6
github.com/dimchansky/utfbom v1.1.0
github.com/go-ole/go-ole v1.2.1 // indirect
github.com/leoluk/perflib_exporter v0.1.0
github.com/prometheus/client_golang v0.9.2
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
github.com/prometheus/common v0.2.0
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b
gopkg.in/alecthomas/kingpin.v2 v2.2.6
)

89
go.sum Normal file
View File

@@ -0,0 +1,89 @@
github.com/Microsoft/go-winio v0.4.14 h1:+hMXMk01us9KgxGb7ftKQt2Xpf5hH/yky+TDA+qxleU=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
github.com/Microsoft/hcsshim v0.8.6 h1:ZfF0+zZeYdzMIVMZHKtDKJvLHj76XCuVae/jNkjj0IA=
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/StackExchange/wmi v0.0.0-20180116203802-5d049714c4a6 h1:fLjPD/aNc3UIOA6tDi6QXUemppXK3P9BI7mr2hd6gx8=
github.com/StackExchange/wmi v0.0.0-20180116203802-5d049714c4a6/go.mod h1:3eOhrUMpNV+6aFIbp5/iudMxNCF27Vw2OZgy4xEx0Fg=
github.com/alecthomas/kingpin v2.2.6+incompatible/go.mod h1:59OFYbFVLKQKq+mqrL6Rw5bR0c3ACQaawgXx0QYndlE=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc h1:cAKDfWh5VpdgMhJosfJnn5/FoN2SRZ4p7fJNX58YPaU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf h1:qet1QNfXsQxTZqLG4oE62mJzwPIB8+Tee4RNCL9ulrY=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a h1:BtpsbiV638WQZwhA98cEZw2BsbnQJrbd0BI7tsy0W1c=
github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dimchansky/utfbom v1.1.0 h1:FcM3g+nofKgUteL8dm/UpdRXNC9KmADgTpLKsu0TRo4=
github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-ole/go-ole v1.2.1 h1:2lOsA72HgjxAuMlKpFiCbHTvu44PIVkZ5hqm3RSdI/E=
github.com/go-ole/go-ole v1.2.1/go.mod h1:7FAglXiTm7HKlQRDeOQ6ZNUHidzCWXuZWq/1dTyBNF8=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/protobuf v1.0.0 h1:lsek0oXi8iFE9L+EXARyHIjU5rlWIhhTkjDz3vHhWWQ=
github.com/golang/protobuf v1.0.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/leoluk/perflib_exporter v0.0.1 h1:MRwP1Ohh/mVevUy4ZUzvSlxnJtm9/NWHeM3aROxnRiQ=
github.com/leoluk/perflib_exporter v0.0.1/go.mod h1:4APOQriqHobMzovXV7guPQv0ynKH6vZD3XNmT2MBc6w=
github.com/leoluk/perflib_exporter v0.1.0 h1:fXe/mDaf9jR+Zk8FjFlcCSksACuIj2VNN4GyKHmQqtA=
github.com/leoluk/perflib_exporter v0.1.0/go.mod h1:rpV0lYj7lemdTm31t7zpCqYqPnw7xs86f+BaaNBVYFM=
github.com/matttproud/golang_protobuf_extensions v1.0.0 h1:YNOwxxSJzSUARoD9KRZLzM9Y858MNGCOACTvCW9TSAc=
github.com/matttproud/golang_protobuf_extensions v1.0.0/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.8.0 h1:1921Yw9Gc3iSc4VQh3PIoOqgPCZS7G/4xQNVUp8Mda8=
github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.2 h1:awm861/B8OKDd2I/6o1dy3ra4BamzKhYOiGItCeZ740=
github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5 h1:cLL6NowurKLMfCeQy4tIeph12XNQWgANCNvdyrOYKV4=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910 h1:idejC8f05m9MGOsuEi1ATq9shN03HrxNkD/luQvxCv8=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/common v0.0.0-20180312112859-e4aa40a9169a h1:JLXgXKi9RCmLk8DMn8+PCvN++iwpD3KptUbVvHBsKtU=
github.com/prometheus/common v0.0.0-20180312112859-e4aa40a9169a/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.2.0 h1:kUZDBDTdBVBYBj5Tmh2NZLlF60mfjA27rM34b+cVwNU=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20180310141954-54d17b57dd7d h1:iF+U2tTdys559fmqt0MNaC8QLIJh1twxIIOylDGhswM=
github.com/prometheus/procfs v0.0.0-20180310141954-54d17b57dd7d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a h1:9a8MnZMP0X2nLJdBg+pBmGgkJlSaKC2KaQmTCk1XDtE=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/sirupsen/logrus v1.0.5 h1:8c8b5uO0zS4X6RPl/sd1ENwSkIc0/H2PaHxE3udaE8I=
github.com/sirupsen/logrus v1.0.5/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0 h1:juTguoYk5qI21pwyTXY3B3Y5cOTH3ZUyZCg1v/mihuo=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1 h1:GL2rEmy6nsikmW0r8opw9JIRScdMF5hA8cOYLH7In1k=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
golang.org/x/crypto v0.0.0-20180312195533-182114d58262 h1:1NLVUmR8SQ7cNNA5Vo7ronpXbR+5A+9IwIC/bLE7D8Y=
golang.org/x/crypto v0.0.0-20180312195533-182114d58262/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180313075820-8c0ece68c283 h1:DE/w7won1Ns86VoWjUZ4cJS6//TObJntGkxuZ63asRc=
golang.org/x/sys v0.0.0-20180313075820-8c0ece68c283/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190405154228-4b34438f7a67 h1:1Fzlr8kkDLQwqMP8GxrhptBLqZG/EDpiATneiZHY998=
golang.org/x/sys v0.0.0-20190405154228-4b34438f7a67/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b h1:ag/x1USPSsqHud38I9BAC88qdNLDHHtQ4mlgQIZPPNA=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
gopkg.in/alecthomas/kingpin.v2 v2.2.6 h1:jMFz6MfLP0/4fUyZle81rXUoxOBFi19VUFKVDOQfozc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=

View File

@@ -1,28 +0,0 @@
{
"Disable": [
"goconst",
"gocyclo",
"gosec",
"maligned",
"megacheck"
],
"Enable": [
"deadcode",
"errcheck",
"golint",
"gotype",
"gotypex",
"ineffassign",
"interfacer",
"structcheck",
"unconvert",
"varcheck",
"vet",
"vetshadow"
],
"Exclude": [
"don't use underscores in Go names",
"exported type .+ should have comment or be unexported",
"should be"
]
}

View File

@@ -20,7 +20,7 @@ else {
$members = $wmiObject `
| Get-Member -MemberType Properties `
| Where-Object { $_.Definition -Match '^u?int' -and $_.Name -NotMatch '_' } `
| Select-Object Name, @{Name="Type";Expression={$_.Definition.Split(" ")[0]}})
| Select-Object Name, @{Name="Type";Expression={$_.Definition.Split(" ")[0]}}
$input = @{
"Class"=$Class;
"CollectorName"=$CollectorName;

View File

@@ -1 +0,0 @@
*.exe

View File

@@ -1,22 +0,0 @@
The MIT License (MIT)
Copyright (c) 2015 Microsoft
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,22 +0,0 @@
# go-winio
This repository contains utilities for efficiently performing Win32 IO operations in
Go. Currently, this is focused on accessing named pipes and other file handles, and
for using named pipes as a net transport.
This code relies on IO completion ports to avoid blocking IO on system threads, allowing Go
to reuse the thread to schedule another goroutine. This limits support to Windows Vista and
newer operating systems. This is similar to the implementation of network sockets in Go's net
package.
Please see the LICENSE file for licensing information.
This project has adopted the [Microsoft Open Source Code of
Conduct](https://opensource.microsoft.com/codeofconduct/). For more information
see the [Code of Conduct
FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact
[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional
questions or comments.
Thanks to natefinch for the inspiration for this library. See https://github.com/natefinch/npipe
for another named pipe implementation.

View File

@@ -1,27 +0,0 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,344 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package tar implements access to tar archives.
// It aims to cover most of the variations, including those produced
// by GNU and BSD tars.
//
// References:
// http://www.freebsd.org/cgi/man.cgi?query=tar&sektion=5
// http://www.gnu.org/software/tar/manual/html_node/Standard.html
// http://pubs.opengroup.org/onlinepubs/9699919799/utilities/pax.html
package tar
import (
"bytes"
"errors"
"fmt"
"os"
"path"
"time"
)
const (
blockSize = 512
// Types
TypeReg = '0' // regular file
TypeRegA = '\x00' // regular file
TypeLink = '1' // hard link
TypeSymlink = '2' // symbolic link
TypeChar = '3' // character device node
TypeBlock = '4' // block device node
TypeDir = '5' // directory
TypeFifo = '6' // fifo node
TypeCont = '7' // reserved
TypeXHeader = 'x' // extended header
TypeXGlobalHeader = 'g' // global extended header
TypeGNULongName = 'L' // Next file has a long name
TypeGNULongLink = 'K' // Next file symlinks to a file w/ a long name
TypeGNUSparse = 'S' // sparse file
)
// A Header represents a single header in a tar archive.
// Some fields may not be populated.
type Header struct {
Name string // name of header file entry
Mode int64 // permission and mode bits
Uid int // user id of owner
Gid int // group id of owner
Size int64 // length in bytes
ModTime time.Time // modified time
Typeflag byte // type of header entry
Linkname string // target name of link
Uname string // user name of owner
Gname string // group name of owner
Devmajor int64 // major number of character or block device
Devminor int64 // minor number of character or block device
AccessTime time.Time // access time
ChangeTime time.Time // status change time
CreationTime time.Time // creation time
Xattrs map[string]string
Winheaders map[string]string
}
// File name constants from the tar spec.
const (
fileNameSize = 100 // Maximum number of bytes in a standard tar name.
fileNamePrefixSize = 155 // Maximum number of ustar extension bytes.
)
// FileInfo returns an os.FileInfo for the Header.
func (h *Header) FileInfo() os.FileInfo {
return headerFileInfo{h}
}
// headerFileInfo implements os.FileInfo.
type headerFileInfo struct {
h *Header
}
func (fi headerFileInfo) Size() int64 { return fi.h.Size }
func (fi headerFileInfo) IsDir() bool { return fi.Mode().IsDir() }
func (fi headerFileInfo) ModTime() time.Time { return fi.h.ModTime }
func (fi headerFileInfo) Sys() interface{} { return fi.h }
// Name returns the base name of the file.
func (fi headerFileInfo) Name() string {
if fi.IsDir() {
return path.Base(path.Clean(fi.h.Name))
}
return path.Base(fi.h.Name)
}
// Mode returns the permission and mode bits for the headerFileInfo.
func (fi headerFileInfo) Mode() (mode os.FileMode) {
// Set file permission bits.
mode = os.FileMode(fi.h.Mode).Perm()
// Set setuid, setgid and sticky bits.
if fi.h.Mode&c_ISUID != 0 {
// setuid
mode |= os.ModeSetuid
}
if fi.h.Mode&c_ISGID != 0 {
// setgid
mode |= os.ModeSetgid
}
if fi.h.Mode&c_ISVTX != 0 {
// sticky
mode |= os.ModeSticky
}
// Set file mode bits.
// clear perm, setuid, setgid and sticky bits.
m := os.FileMode(fi.h.Mode) &^ 07777
if m == c_ISDIR {
// directory
mode |= os.ModeDir
}
if m == c_ISFIFO {
// named pipe (FIFO)
mode |= os.ModeNamedPipe
}
if m == c_ISLNK {
// symbolic link
mode |= os.ModeSymlink
}
if m == c_ISBLK {
// device file
mode |= os.ModeDevice
}
if m == c_ISCHR {
// Unix character device
mode |= os.ModeDevice
mode |= os.ModeCharDevice
}
if m == c_ISSOCK {
// Unix domain socket
mode |= os.ModeSocket
}
switch fi.h.Typeflag {
case TypeSymlink:
// symbolic link
mode |= os.ModeSymlink
case TypeChar:
// character device node
mode |= os.ModeDevice
mode |= os.ModeCharDevice
case TypeBlock:
// block device node
mode |= os.ModeDevice
case TypeDir:
// directory
mode |= os.ModeDir
case TypeFifo:
// fifo node
mode |= os.ModeNamedPipe
}
return mode
}
// sysStat, if non-nil, populates h from system-dependent fields of fi.
var sysStat func(fi os.FileInfo, h *Header) error
// Mode constants from the tar spec.
const (
c_ISUID = 04000 // Set uid
c_ISGID = 02000 // Set gid
c_ISVTX = 01000 // Save text (sticky bit)
c_ISDIR = 040000 // Directory
c_ISFIFO = 010000 // FIFO
c_ISREG = 0100000 // Regular file
c_ISLNK = 0120000 // Symbolic link
c_ISBLK = 060000 // Block special file
c_ISCHR = 020000 // Character special file
c_ISSOCK = 0140000 // Socket
)
// Keywords for the PAX Extended Header
const (
paxAtime = "atime"
paxCharset = "charset"
paxComment = "comment"
paxCtime = "ctime" // please note that ctime is not a valid pax header.
paxCreationTime = "LIBARCHIVE.creationtime"
paxGid = "gid"
paxGname = "gname"
paxLinkpath = "linkpath"
paxMtime = "mtime"
paxPath = "path"
paxSize = "size"
paxUid = "uid"
paxUname = "uname"
paxXattr = "SCHILY.xattr."
paxWindows = "MSWINDOWS."
paxNone = ""
)
// FileInfoHeader creates a partially-populated Header from fi.
// If fi describes a symlink, FileInfoHeader records link as the link target.
// If fi describes a directory, a slash is appended to the name.
// Because os.FileInfo's Name method returns only the base name of
// the file it describes, it may be necessary to modify the Name field
// of the returned header to provide the full path name of the file.
func FileInfoHeader(fi os.FileInfo, link string) (*Header, error) {
if fi == nil {
return nil, errors.New("tar: FileInfo is nil")
}
fm := fi.Mode()
h := &Header{
Name: fi.Name(),
ModTime: fi.ModTime(),
Mode: int64(fm.Perm()), // or'd with c_IS* constants later
}
switch {
case fm.IsRegular():
h.Mode |= c_ISREG
h.Typeflag = TypeReg
h.Size = fi.Size()
case fi.IsDir():
h.Typeflag = TypeDir
h.Mode |= c_ISDIR
h.Name += "/"
case fm&os.ModeSymlink != 0:
h.Typeflag = TypeSymlink
h.Mode |= c_ISLNK
h.Linkname = link
case fm&os.ModeDevice != 0:
if fm&os.ModeCharDevice != 0 {
h.Mode |= c_ISCHR
h.Typeflag = TypeChar
} else {
h.Mode |= c_ISBLK
h.Typeflag = TypeBlock
}
case fm&os.ModeNamedPipe != 0:
h.Typeflag = TypeFifo
h.Mode |= c_ISFIFO
case fm&os.ModeSocket != 0:
h.Mode |= c_ISSOCK
default:
return nil, fmt.Errorf("archive/tar: unknown file mode %v", fm)
}
if fm&os.ModeSetuid != 0 {
h.Mode |= c_ISUID
}
if fm&os.ModeSetgid != 0 {
h.Mode |= c_ISGID
}
if fm&os.ModeSticky != 0 {
h.Mode |= c_ISVTX
}
// If possible, populate additional fields from OS-specific
// FileInfo fields.
if sys, ok := fi.Sys().(*Header); ok {
// This FileInfo came from a Header (not the OS). Use the
// original Header to populate all remaining fields.
h.Uid = sys.Uid
h.Gid = sys.Gid
h.Uname = sys.Uname
h.Gname = sys.Gname
h.AccessTime = sys.AccessTime
h.ChangeTime = sys.ChangeTime
if sys.Xattrs != nil {
h.Xattrs = make(map[string]string)
for k, v := range sys.Xattrs {
h.Xattrs[k] = v
}
}
if sys.Typeflag == TypeLink {
// hard link
h.Typeflag = TypeLink
h.Size = 0
h.Linkname = sys.Linkname
}
}
if sysStat != nil {
return h, sysStat(fi, h)
}
return h, nil
}
var zeroBlock = make([]byte, blockSize)
// POSIX specifies a sum of the unsigned byte values, but the Sun tar uses signed byte values.
// We compute and return both.
func checksum(header []byte) (unsigned int64, signed int64) {
for i := 0; i < len(header); i++ {
if i == 148 {
// The chksum field (header[148:156]) is special: it should be treated as space bytes.
unsigned += ' ' * 8
signed += ' ' * 8
i += 7
continue
}
unsigned += int64(header[i])
signed += int64(int8(header[i]))
}
return
}
type slicer []byte
func (sp *slicer) next(n int) (b []byte) {
s := *sp
b, *sp = s[0:n], s[n:]
return
}
func isASCII(s string) bool {
for _, c := range s {
if c >= 0x80 {
return false
}
}
return true
}
func toASCII(s string) string {
if isASCII(s) {
return s
}
var buf bytes.Buffer
for _, c := range s {
if c < 0x80 {
buf.WriteByte(byte(c))
}
}
return buf.String()
}
// isHeaderOnlyType checks if the given type flag is of the type that has no
// data section even if a size is specified.
func isHeaderOnlyType(flag byte) bool {
switch flag {
case TypeLink, TypeSymlink, TypeChar, TypeBlock, TypeDir, TypeFifo:
return true
default:
return false
}
}

View File

@@ -1,80 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package tar_test
import (
"archive/tar"
"bytes"
"fmt"
"io"
"log"
"os"
)
func Example() {
// Create a buffer to write our archive to.
buf := new(bytes.Buffer)
// Create a new tar archive.
tw := tar.NewWriter(buf)
// Add some files to the archive.
var files = []struct {
Name, Body string
}{
{"readme.txt", "This archive contains some text files."},
{"gopher.txt", "Gopher names:\nGeorge\nGeoffrey\nGonzo"},
{"todo.txt", "Get animal handling license."},
}
for _, file := range files {
hdr := &tar.Header{
Name: file.Name,
Mode: 0600,
Size: int64(len(file.Body)),
}
if err := tw.WriteHeader(hdr); err != nil {
log.Fatalln(err)
}
if _, err := tw.Write([]byte(file.Body)); err != nil {
log.Fatalln(err)
}
}
// Make sure to check the error on Close.
if err := tw.Close(); err != nil {
log.Fatalln(err)
}
// Open the tar archive for reading.
r := bytes.NewReader(buf.Bytes())
tr := tar.NewReader(r)
// Iterate through the files in the archive.
for {
hdr, err := tr.Next()
if err == io.EOF {
// end of tar archive
break
}
if err != nil {
log.Fatalln(err)
}
fmt.Printf("Contents of %s:\n", hdr.Name)
if _, err := io.Copy(os.Stdout, tr); err != nil {
log.Fatalln(err)
}
fmt.Println()
}
// Output:
// Contents of readme.txt:
// This archive contains some text files.
// Contents of gopher.txt:
// Gopher names:
// George
// Geoffrey
// Gonzo
// Contents of todo.txt:
// Get animal handling license.
}

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -1,20 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build linux dragonfly openbsd solaris
package tar
import (
"syscall"
"time"
)
func statAtime(st *syscall.Stat_t) time.Time {
return time.Unix(st.Atim.Unix())
}
func statCtime(st *syscall.Stat_t) time.Time {
return time.Unix(st.Ctim.Unix())
}

View File

@@ -1,20 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin freebsd netbsd
package tar
import (
"syscall"
"time"
)
func statAtime(st *syscall.Stat_t) time.Time {
return time.Unix(st.Atimespec.Unix())
}
func statCtime(st *syscall.Stat_t) time.Time {
return time.Unix(st.Ctimespec.Unix())
}

View File

@@ -1,32 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build linux darwin dragonfly freebsd openbsd netbsd solaris
package tar
import (
"os"
"syscall"
)
func init() {
sysStat = statUnix
}
func statUnix(fi os.FileInfo, h *Header) error {
sys, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
return nil
}
h.Uid = int(sys.Uid)
h.Gid = int(sys.Gid)
// TODO(bradfitz): populate username & group. os/user
// doesn't cache LookupId lookups, and lacks group
// lookup functions.
h.AccessTime = statAtime(sys)
h.ChangeTime = statCtime(sys)
// TODO(bradfitz): major/minor device numbers?
return nil
}

View File

@@ -1,325 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package tar
import (
"bytes"
"io/ioutil"
"os"
"path"
"reflect"
"strings"
"testing"
"time"
)
func TestFileInfoHeader(t *testing.T) {
fi, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
h, err := FileInfoHeader(fi, "")
if err != nil {
t.Fatalf("FileInfoHeader: %v", err)
}
if g, e := h.Name, "small.txt"; g != e {
t.Errorf("Name = %q; want %q", g, e)
}
if g, e := h.Mode, int64(fi.Mode().Perm())|c_ISREG; g != e {
t.Errorf("Mode = %#o; want %#o", g, e)
}
if g, e := h.Size, int64(5); g != e {
t.Errorf("Size = %v; want %v", g, e)
}
if g, e := h.ModTime, fi.ModTime(); !g.Equal(e) {
t.Errorf("ModTime = %v; want %v", g, e)
}
// FileInfoHeader should error when passing nil FileInfo
if _, err := FileInfoHeader(nil, ""); err == nil {
t.Fatalf("Expected error when passing nil to FileInfoHeader")
}
}
func TestFileInfoHeaderDir(t *testing.T) {
fi, err := os.Stat("testdata")
if err != nil {
t.Fatal(err)
}
h, err := FileInfoHeader(fi, "")
if err != nil {
t.Fatalf("FileInfoHeader: %v", err)
}
if g, e := h.Name, "testdata/"; g != e {
t.Errorf("Name = %q; want %q", g, e)
}
// Ignoring c_ISGID for golang.org/issue/4867
if g, e := h.Mode&^c_ISGID, int64(fi.Mode().Perm())|c_ISDIR; g != e {
t.Errorf("Mode = %#o; want %#o", g, e)
}
if g, e := h.Size, int64(0); g != e {
t.Errorf("Size = %v; want %v", g, e)
}
if g, e := h.ModTime, fi.ModTime(); !g.Equal(e) {
t.Errorf("ModTime = %v; want %v", g, e)
}
}
func TestFileInfoHeaderSymlink(t *testing.T) {
h, err := FileInfoHeader(symlink{}, "some-target")
if err != nil {
t.Fatal(err)
}
if g, e := h.Name, "some-symlink"; g != e {
t.Errorf("Name = %q; want %q", g, e)
}
if g, e := h.Linkname, "some-target"; g != e {
t.Errorf("Linkname = %q; want %q", g, e)
}
}
type symlink struct{}
func (symlink) Name() string { return "some-symlink" }
func (symlink) Size() int64 { return 0 }
func (symlink) Mode() os.FileMode { return os.ModeSymlink }
func (symlink) ModTime() time.Time { return time.Time{} }
func (symlink) IsDir() bool { return false }
func (symlink) Sys() interface{} { return nil }
func TestRoundTrip(t *testing.T) {
data := []byte("some file contents")
var b bytes.Buffer
tw := NewWriter(&b)
hdr := &Header{
Name: "file.txt",
Uid: 1 << 21, // too big for 8 octal digits
Size: int64(len(data)),
ModTime: time.Now(),
}
// tar only supports second precision.
hdr.ModTime = hdr.ModTime.Add(-time.Duration(hdr.ModTime.Nanosecond()) * time.Nanosecond)
if err := tw.WriteHeader(hdr); err != nil {
t.Fatalf("tw.WriteHeader: %v", err)
}
if _, err := tw.Write(data); err != nil {
t.Fatalf("tw.Write: %v", err)
}
if err := tw.Close(); err != nil {
t.Fatalf("tw.Close: %v", err)
}
// Read it back.
tr := NewReader(&b)
rHdr, err := tr.Next()
if err != nil {
t.Fatalf("tr.Next: %v", err)
}
if !reflect.DeepEqual(rHdr, hdr) {
t.Errorf("Header mismatch.\n got %+v\nwant %+v", rHdr, hdr)
}
rData, err := ioutil.ReadAll(tr)
if err != nil {
t.Fatalf("Read: %v", err)
}
if !bytes.Equal(rData, data) {
t.Errorf("Data mismatch.\n got %q\nwant %q", rData, data)
}
}
type headerRoundTripTest struct {
h *Header
fm os.FileMode
}
func TestHeaderRoundTrip(t *testing.T) {
golden := []headerRoundTripTest{
// regular file.
{
h: &Header{
Name: "test.txt",
Mode: 0644 | c_ISREG,
Size: 12,
ModTime: time.Unix(1360600916, 0),
Typeflag: TypeReg,
},
fm: 0644,
},
// symbolic link.
{
h: &Header{
Name: "link.txt",
Mode: 0777 | c_ISLNK,
Size: 0,
ModTime: time.Unix(1360600852, 0),
Typeflag: TypeSymlink,
},
fm: 0777 | os.ModeSymlink,
},
// character device node.
{
h: &Header{
Name: "dev/null",
Mode: 0666 | c_ISCHR,
Size: 0,
ModTime: time.Unix(1360578951, 0),
Typeflag: TypeChar,
},
fm: 0666 | os.ModeDevice | os.ModeCharDevice,
},
// block device node.
{
h: &Header{
Name: "dev/sda",
Mode: 0660 | c_ISBLK,
Size: 0,
ModTime: time.Unix(1360578954, 0),
Typeflag: TypeBlock,
},
fm: 0660 | os.ModeDevice,
},
// directory.
{
h: &Header{
Name: "dir/",
Mode: 0755 | c_ISDIR,
Size: 0,
ModTime: time.Unix(1360601116, 0),
Typeflag: TypeDir,
},
fm: 0755 | os.ModeDir,
},
// fifo node.
{
h: &Header{
Name: "dev/initctl",
Mode: 0600 | c_ISFIFO,
Size: 0,
ModTime: time.Unix(1360578949, 0),
Typeflag: TypeFifo,
},
fm: 0600 | os.ModeNamedPipe,
},
// setuid.
{
h: &Header{
Name: "bin/su",
Mode: 0755 | c_ISREG | c_ISUID,
Size: 23232,
ModTime: time.Unix(1355405093, 0),
Typeflag: TypeReg,
},
fm: 0755 | os.ModeSetuid,
},
// setguid.
{
h: &Header{
Name: "group.txt",
Mode: 0750 | c_ISREG | c_ISGID,
Size: 0,
ModTime: time.Unix(1360602346, 0),
Typeflag: TypeReg,
},
fm: 0750 | os.ModeSetgid,
},
// sticky.
{
h: &Header{
Name: "sticky.txt",
Mode: 0600 | c_ISREG | c_ISVTX,
Size: 7,
ModTime: time.Unix(1360602540, 0),
Typeflag: TypeReg,
},
fm: 0600 | os.ModeSticky,
},
// hard link.
{
h: &Header{
Name: "hard.txt",
Mode: 0644 | c_ISREG,
Size: 0,
Linkname: "file.txt",
ModTime: time.Unix(1360600916, 0),
Typeflag: TypeLink,
},
fm: 0644,
},
// More information.
{
h: &Header{
Name: "info.txt",
Mode: 0600 | c_ISREG,
Size: 0,
Uid: 1000,
Gid: 1000,
ModTime: time.Unix(1360602540, 0),
Uname: "slartibartfast",
Gname: "users",
Typeflag: TypeReg,
},
fm: 0600,
},
}
for i, g := range golden {
fi := g.h.FileInfo()
h2, err := FileInfoHeader(fi, "")
if err != nil {
t.Error(err)
continue
}
if strings.Contains(fi.Name(), "/") {
t.Errorf("FileInfo of %q contains slash: %q", g.h.Name, fi.Name())
}
name := path.Base(g.h.Name)
if fi.IsDir() {
name += "/"
}
if got, want := h2.Name, name; got != want {
t.Errorf("i=%d: Name: got %v, want %v", i, got, want)
}
if got, want := h2.Size, g.h.Size; got != want {
t.Errorf("i=%d: Size: got %v, want %v", i, got, want)
}
if got, want := h2.Uid, g.h.Uid; got != want {
t.Errorf("i=%d: Uid: got %d, want %d", i, got, want)
}
if got, want := h2.Gid, g.h.Gid; got != want {
t.Errorf("i=%d: Gid: got %d, want %d", i, got, want)
}
if got, want := h2.Uname, g.h.Uname; got != want {
t.Errorf("i=%d: Uname: got %q, want %q", i, got, want)
}
if got, want := h2.Gname, g.h.Gname; got != want {
t.Errorf("i=%d: Gname: got %q, want %q", i, got, want)
}
if got, want := h2.Linkname, g.h.Linkname; got != want {
t.Errorf("i=%d: Linkname: got %v, want %v", i, got, want)
}
if got, want := h2.Typeflag, g.h.Typeflag; got != want {
t.Logf("%#v %#v", g.h, fi.Sys())
t.Errorf("i=%d: Typeflag: got %q, want %q", i, got, want)
}
if got, want := h2.Mode, g.h.Mode; got != want {
t.Errorf("i=%d: Mode: got %o, want %o", i, got, want)
}
if got, want := fi.Mode(), g.fm; got != want {
t.Errorf("i=%d: fi.Mode: got %o, want %o", i, got, want)
}
if got, want := h2.AccessTime, g.h.AccessTime; got != want {
t.Errorf("i=%d: AccessTime: got %v, want %v", i, got, want)
}
if got, want := h2.ChangeTime, g.h.ChangeTime; got != want {
t.Errorf("i=%d: ChangeTime: got %v, want %v", i, got, want)
}
if got, want := h2.ModTime, g.h.ModTime; got != want {
t.Errorf("i=%d: ModTime: got %v, want %v", i, got, want)
}
if sysh, ok := fi.Sys().(*Header); !ok || sysh != g.h {
t.Errorf("i=%d: Sys didn't return original *Header", i)
}
}
}

View File

@@ -1,444 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package tar
// TODO(dsymonds):
// - catch more errors (no first header, etc.)
import (
"bytes"
"errors"
"fmt"
"io"
"path"
"sort"
"strconv"
"strings"
"time"
)
var (
ErrWriteTooLong = errors.New("archive/tar: write too long")
ErrFieldTooLong = errors.New("archive/tar: header field too long")
ErrWriteAfterClose = errors.New("archive/tar: write after close")
errInvalidHeader = errors.New("archive/tar: header field too long or contains invalid values")
)
// A Writer provides sequential writing of a tar archive in POSIX.1 format.
// A tar archive consists of a sequence of files.
// Call WriteHeader to begin a new file, and then call Write to supply that file's data,
// writing at most hdr.Size bytes in total.
type Writer struct {
w io.Writer
err error
nb int64 // number of unwritten bytes for current file entry
pad int64 // amount of padding to write after current file entry
closed bool
usedBinary bool // whether the binary numeric field extension was used
preferPax bool // use pax header instead of binary numeric header
hdrBuff [blockSize]byte // buffer to use in writeHeader when writing a regular header
paxHdrBuff [blockSize]byte // buffer to use in writeHeader when writing a pax header
}
type formatter struct {
err error // Last error seen
}
// NewWriter creates a new Writer writing to w.
func NewWriter(w io.Writer) *Writer { return &Writer{w: w, preferPax: true} }
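As a quick illustration of the WriteHeader/Write/Close sequence described in the comment above, here is a minimal, self-contained sketch (not part of this diff) that writes a single small entry to an in-memory buffer, assuming the vendored import path used elsewhere in this change.

package main

import (
    "bytes"
    "log"
    "time"

    "github.com/Microsoft/go-winio/archive/tar"
)

func main() {
    var buf bytes.Buffer
    tw := tar.NewWriter(&buf)
    data := []byte("hello\n")
    hdr := &tar.Header{
        Name:     "hello.txt",
        Mode:     0644,
        Size:     int64(len(data)), // Write may supply at most this many bytes
        ModTime:  time.Now(),
        Typeflag: tar.TypeReg,
    }
    if err := tw.WriteHeader(hdr); err != nil { // begin a new file entry
        log.Fatal(err)
    }
    if _, err := tw.Write(data); err != nil { // the file's contents
        log.Fatal(err)
    }
    if err := tw.Close(); err != nil { // flush padding and write the two-block trailer
        log.Fatal(err)
    }
    log.Printf("archive is %d bytes", buf.Len())
}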
// Flush finishes writing the current file (optional).
func (tw *Writer) Flush() error {
if tw.nb > 0 {
tw.err = fmt.Errorf("archive/tar: missed writing %d bytes", tw.nb)
return tw.err
}
n := tw.nb + tw.pad
for n > 0 && tw.err == nil {
nr := n
if nr > blockSize {
nr = blockSize
}
var nw int
nw, tw.err = tw.w.Write(zeroBlock[0:nr])
n -= int64(nw)
}
tw.nb = 0
tw.pad = 0
return tw.err
}
// Write s into b, terminating it with a NUL if there is room.
func (f *formatter) formatString(b []byte, s string) {
if len(s) > len(b) {
f.err = ErrFieldTooLong
return
}
ascii := toASCII(s)
copy(b, ascii)
if len(ascii) < len(b) {
b[len(ascii)] = 0
}
}
// Encode x as an octal ASCII string and write it into b with leading zeros.
func (f *formatter) formatOctal(b []byte, x int64) {
s := strconv.FormatInt(x, 8)
// leading zeros, but leave room for a NUL.
for len(s)+1 < len(b) {
s = "0" + s
}
f.formatString(b, s)
}
// fitsInBase256 reports whether x can be encoded into n bytes using base-256
// encoding. Unlike octal encoding, base-256 encoding does not require that the
// string ends with a NUL character. Thus, all n bytes are available for output.
//
// If operating in binary mode, this assumes strict GNU binary mode; which means
// that the first byte can only be either 0x80 or 0xff. Thus, the first byte is
// equivalent to the sign bit in two's complement form.
func fitsInBase256(n int, x int64) bool {
var binBits = uint(n-1) * 8
return n >= 9 || (x >= -1<<binBits && x < 1<<binBits)
}
// Write x into b, as binary (GNUtar/star extension).
func (f *formatter) formatNumeric(b []byte, x int64) {
if fitsInBase256(len(b), x) {
for i := len(b) - 1; i >= 0; i-- {
b[i] = byte(x)
x >>= 8
}
b[0] |= 0x80 // Highest bit indicates binary format
return
}
f.formatOctal(b, 0) // Last resort, just write zero
f.err = ErrFieldTooLong
}
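To make the octal-versus-base-256 trade-off above concrete, here is a small standalone sketch (not part of the package) mirroring the binary branch of formatNumeric. 16 << 30, the size used by the writer-big test later in this diff, needs twelve octal digits and therefore no longer fits a 12-byte octal size field once the terminating NUL is accounted for, while base-256 encodes it easily.

package main

import "fmt"

// encodeBase256 mirrors formatNumeric's binary branch above: big-endian
// two's complement with the top bit of the first byte set to flag the
// GNU base-256 encoding.
func encodeBase256(n int, x int64) []byte {
    b := make([]byte, n)
    for i := n - 1; i >= 0; i-- {
        b[i] = byte(x)
        x >>= 8
    }
    b[0] |= 0x80
    return b
}

func main() {
    // 16 << 30 in a 12-byte field: 80 00 00 00 00 00 00 04 00 00 00 00
    fmt.Printf("% x\n", encodeBase256(12, 16<<30))
}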
var (
minTime = time.Unix(0, 0)
// There is room for 11 octal digits (33 bits) of mtime.
maxTime = minTime.Add((1<<33 - 1) * time.Second)
)
// WriteHeader writes hdr and prepares to accept the file's contents.
// WriteHeader calls Flush if it is not the first header.
// Calling after a Close will return ErrWriteAfterClose.
func (tw *Writer) WriteHeader(hdr *Header) error {
return tw.writeHeader(hdr, true)
}
// WriteHeader writes hdr and prepares to accept the file's contents.
// WriteHeader calls Flush if it is not the first header.
// Calling after a Close will return ErrWriteAfterClose.
// writeHeader is also called internally by writePAXHeader with allowPax set
// to false, which suppresses writing a nested pax header.
func (tw *Writer) writeHeader(hdr *Header, allowPax bool) error {
if tw.closed {
return ErrWriteAfterClose
}
if tw.err == nil {
tw.Flush()
}
if tw.err != nil {
return tw.err
}
// a map to hold pax header records, if any are needed
paxHeaders := make(map[string]string)
// TODO(shanemhansen): we might want to use PAX headers for
// subsecond time resolution, but for now let's just capture
// too long fields or non ascii characters
var f formatter
var header []byte
// We need to select which scratch buffer to use carefully,
// since this method is called recursively to write PAX headers.
// If allowPax is true, this is the non-recursive call, and we will use hdrBuff.
// If allowPax is false, we are being called by writePAXHeader, and hdrBuff is
// already being used by the non-recursive call, so we must use paxHdrBuff.
header = tw.hdrBuff[:]
if !allowPax {
header = tw.paxHdrBuff[:]
}
copy(header, zeroBlock)
s := slicer(header)
// Wrappers around formatter that automatically sets paxHeaders if the
// argument extends beyond the capacity of the input byte slice.
var formatString = func(b []byte, s string, paxKeyword string) {
needsPaxHeader := paxKeyword != paxNone && len(s) > len(b) || !isASCII(s)
if needsPaxHeader {
paxHeaders[paxKeyword] = s
return
}
f.formatString(b, s)
}
var formatNumeric = func(b []byte, x int64, paxKeyword string) {
// Try octal first.
s := strconv.FormatInt(x, 8)
if len(s) < len(b) {
f.formatOctal(b, x)
return
}
// If it is too long for octal, and PAX is preferred, use a PAX header.
if paxKeyword != paxNone && tw.preferPax {
f.formatOctal(b, 0)
s := strconv.FormatInt(x, 10)
paxHeaders[paxKeyword] = s
return
}
tw.usedBinary = true
f.formatNumeric(b, x)
}
var formatTime = func(b []byte, t time.Time, paxKeyword string) {
var unixTime int64
if !t.Before(minTime) && !t.After(maxTime) {
unixTime = t.Unix()
}
formatNumeric(b, unixTime, paxNone)
// Write a PAX header if the time didn't fit precisely.
if paxKeyword != "" && tw.preferPax && allowPax && (t.Nanosecond() != 0 || !t.Before(minTime) || !t.After(maxTime)) {
paxHeaders[paxKeyword] = formatPAXTime(t)
}
}
// keep a reference to the filename to allow to overwrite it later if we detect that we can use ustar longnames instead of pax
pathHeaderBytes := s.next(fileNameSize)
formatString(pathHeaderBytes, hdr.Name, paxPath)
f.formatOctal(s.next(8), hdr.Mode) // 100:108
formatNumeric(s.next(8), int64(hdr.Uid), paxUid) // 108:116
formatNumeric(s.next(8), int64(hdr.Gid), paxGid) // 116:124
formatNumeric(s.next(12), hdr.Size, paxSize) // 124:136
formatTime(s.next(12), hdr.ModTime, paxMtime) // 136:148
s.next(8) // chksum (148:156)
s.next(1)[0] = hdr.Typeflag // 156:157
formatString(s.next(100), hdr.Linkname, paxLinkpath)
copy(s.next(8), []byte("ustar\x0000")) // 257:265
formatString(s.next(32), hdr.Uname, paxUname) // 265:297
formatString(s.next(32), hdr.Gname, paxGname) // 297:329
formatNumeric(s.next(8), hdr.Devmajor, paxNone) // 329:337
formatNumeric(s.next(8), hdr.Devminor, paxNone) // 337:345
// keep a reference to the prefix to allow to overwrite it later if we detect that we can use ustar longnames instead of pax
prefixHeaderBytes := s.next(155)
formatString(prefixHeaderBytes, "", paxNone) // 345:500 prefix
// Use the GNU magic instead of POSIX magic if we used any GNU extensions.
if tw.usedBinary {
copy(header[257:265], []byte("ustar \x00"))
}
_, paxPathUsed := paxHeaders[paxPath]
// try to use a ustar header when only the name is too long
if !tw.preferPax && len(paxHeaders) == 1 && paxPathUsed {
prefix, suffix, ok := splitUSTARPath(hdr.Name)
if ok {
// Since we can encode in USTAR format, disable PAX header.
delete(paxHeaders, paxPath)
// Update the path fields
formatString(pathHeaderBytes, suffix, paxNone)
formatString(prefixHeaderBytes, prefix, paxNone)
}
}
// The chksum field is terminated by a NUL and a space.
// This is different from the other octal fields.
chksum, _ := checksum(header)
f.formatOctal(header[148:155], chksum) // Never fails
header[155] = ' '
// Check if there were any formatting errors.
if f.err != nil {
tw.err = f.err
return tw.err
}
if allowPax {
if !hdr.AccessTime.IsZero() {
paxHeaders[paxAtime] = formatPAXTime(hdr.AccessTime)
}
if !hdr.ChangeTime.IsZero() {
paxHeaders[paxCtime] = formatPAXTime(hdr.ChangeTime)
}
if !hdr.CreationTime.IsZero() {
paxHeaders[paxCreationTime] = formatPAXTime(hdr.CreationTime)
}
for k, v := range hdr.Xattrs {
paxHeaders[paxXattr+k] = v
}
for k, v := range hdr.Winheaders {
paxHeaders[paxWindows+k] = v
}
}
if len(paxHeaders) > 0 {
if !allowPax {
return errInvalidHeader
}
if err := tw.writePAXHeader(hdr, paxHeaders); err != nil {
return err
}
}
tw.nb = int64(hdr.Size)
tw.pad = (blockSize - (tw.nb % blockSize)) % blockSize
_, tw.err = tw.w.Write(header)
return tw.err
}
func formatPAXTime(t time.Time) string {
sec := t.Unix()
usec := t.Nanosecond()
s := strconv.FormatInt(sec, 10)
if usec != 0 {
s = fmt.Sprintf("%s.%09d", s, usec)
}
return s
}
// splitUSTARPath splits a path according to USTAR prefix and suffix rules.
// If the path is not splittable, then it will return ("", "", false).
func splitUSTARPath(name string) (prefix, suffix string, ok bool) {
length := len(name)
if length <= fileNameSize || !isASCII(name) {
return "", "", false
} else if length > fileNamePrefixSize+1 {
length = fileNamePrefixSize + 1
} else if name[length-1] == '/' {
length--
}
i := strings.LastIndex(name[:length], "/")
nlen := len(name) - i - 1 // nlen is length of suffix
plen := i // plen is length of prefix
if i <= 0 || nlen > fileNameSize || nlen == 0 || plen > fileNamePrefixSize {
return "", "", false
}
return name[:i], name[i+1:], true
}
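For example, assuming the conventional USTAR widths of 100 bytes for the name and 155 for the prefix (consistent with the vectors in TestSplitUSTARPath further down), a 200-character path made of repeated "a/" segments splits at the last slash that keeps the prefix within 155 bytes: the first 155 characters become the prefix, the separating slash is dropped, and the remaining 44 characters become the suffix, exactly as the final vector in that test expects.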
// writePAXHeader writes an extended pax header to the archive.
func (tw *Writer) writePAXHeader(hdr *Header, paxHeaders map[string]string) error {
// Prepare extended header
ext := new(Header)
ext.Typeflag = TypeXHeader
// Setting ModTime is required for reader parsing to
// succeed, and seems harmless enough.
ext.ModTime = hdr.ModTime
// The spec asks that we namespace our pseudo files
// with the current pid. However, this results in differing outputs
// for identical inputs. As such, the constant 0 is now used instead.
// golang.org/issue/12358
dir, file := path.Split(hdr.Name)
fullName := path.Join(dir, "PaxHeaders.0", file)
ascii := toASCII(fullName)
if len(ascii) > 100 {
ascii = ascii[:100]
}
ext.Name = ascii
// Construct the body
var buf bytes.Buffer
// Keys are sorted before writing to body to allow deterministic output.
var keys []string
for k := range paxHeaders {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
fmt.Fprint(&buf, formatPAXRecord(k, paxHeaders[k]))
}
ext.Size = int64(len(buf.Bytes()))
if err := tw.writeHeader(ext, false); err != nil {
return err
}
if _, err := tw.Write(buf.Bytes()); err != nil {
return err
}
if err := tw.Flush(); err != nil {
return err
}
return nil
}
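In other words, whenever any record is needed the writer emits an extra pseudo-entry ahead of the real one: its name is the original path with a "PaxHeaders.0" directory component spliced in (truncated to 100 bytes if necessary), its typeflag is TypeXHeader, and its body is the sorted "length key=value\n" records; TestPax further down relies on the literal string "PaxHeaders.0" appearing in the output to confirm this path was taken.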
// formatPAXRecord formats a single PAX record, prefixing it with the
// appropriate length.
func formatPAXRecord(k, v string) string {
const padding = 3 // Extra padding for ' ', '=', and '\n'
size := len(k) + len(v) + padding
size += len(strconv.Itoa(size))
record := fmt.Sprintf("%d %s=%s\n", size, k, v)
// Final adjustment if adding size field increased the record size.
if len(record) != size {
size = len(record)
record = fmt.Sprintf("%d %s=%s\n", size, k, v)
}
return record
}
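A quick worked example of the self-referential length prefix: for key "path" and value "/etc/hosts" the initial size is 4 + 10 + 3 = 17, adding the two digits of "17" gives 19, and since the formatted record "19 path=/etc/hosts\n" is indeed 19 bytes long no further adjustment is needed; this matches the corresponding vector in TestFormatPAXRecord later in this diff.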
// Write writes to the current entry in the tar archive.
// Write returns the error ErrWriteTooLong if more than
// hdr.Size bytes are written after WriteHeader.
func (tw *Writer) Write(b []byte) (n int, err error) {
if tw.closed {
err = ErrWriteAfterClose
return
}
overwrite := false
if int64(len(b)) > tw.nb {
b = b[0:tw.nb]
overwrite = true
}
n, err = tw.w.Write(b)
tw.nb -= int64(n)
if err == nil && overwrite {
err = ErrWriteTooLong
return
}
tw.err = err
return
}
// Close closes the tar archive, flushing any unwritten
// data to the underlying writer.
func (tw *Writer) Close() error {
if tw.err != nil || tw.closed {
return tw.err
}
tw.Flush()
tw.closed = true
if tw.err != nil {
return tw.err
}
// trailer: two zero blocks
for i := 0; i < 2; i++ {
_, tw.err = tw.w.Write(zeroBlock)
if tw.err != nil {
break
}
}
return tw.err
}


@@ -1,739 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package tar
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"math"
"os"
"reflect"
"sort"
"strings"
"testing"
"testing/iotest"
"time"
)
type writerTestEntry struct {
header *Header
contents string
}
type writerTest struct {
file string // filename of expected output
entries []*writerTestEntry
}
var writerTests = []*writerTest{
// The writer test file was produced with this command:
// tar (GNU tar) 1.26
// ln -s small.txt link.txt
// tar -b 1 --format=ustar -c -f writer.tar small.txt small2.txt link.txt
{
file: "testdata/writer.tar",
entries: []*writerTestEntry{
{
header: &Header{
Name: "small.txt",
Mode: 0640,
Uid: 73025,
Gid: 5000,
Size: 5,
ModTime: time.Unix(1246508266, 0),
Typeflag: '0',
Uname: "dsymonds",
Gname: "eng",
},
contents: "Kilts",
},
{
header: &Header{
Name: "small2.txt",
Mode: 0640,
Uid: 73025,
Gid: 5000,
Size: 11,
ModTime: time.Unix(1245217492, 0),
Typeflag: '0',
Uname: "dsymonds",
Gname: "eng",
},
contents: "Google.com\n",
},
{
header: &Header{
Name: "link.txt",
Mode: 0777,
Uid: 1000,
Gid: 1000,
Size: 0,
ModTime: time.Unix(1314603082, 0),
Typeflag: '2',
Linkname: "small.txt",
Uname: "strings",
Gname: "strings",
},
// no contents
},
},
},
// The truncated test file was produced using these commands:
// dd if=/dev/zero bs=1048576 count=16384 > /tmp/16gig.txt
// tar -b 1 -c -f- /tmp/16gig.txt | dd bs=512 count=8 > writer-big.tar
{
file: "testdata/writer-big.tar",
entries: []*writerTestEntry{
{
header: &Header{
Name: "tmp/16gig.txt",
Mode: 0640,
Uid: 73025,
Gid: 5000,
Size: 16 << 30,
ModTime: time.Unix(1254699560, 0),
Typeflag: '0',
Uname: "dsymonds",
Gname: "eng",
},
// fake contents
contents: strings.Repeat("\x00", 4<<10),
},
},
},
// The truncated test file was produced using these commands:
// dd if=/dev/zero bs=1048576 count=16384 > (longname/)*15 /16gig.txt
// tar -b 1 -c -f- (longname/)*15 /16gig.txt | dd bs=512 count=8 > writer-big-long.tar
{
file: "testdata/writer-big-long.tar",
entries: []*writerTestEntry{
{
header: &Header{
Name: strings.Repeat("longname/", 15) + "16gig.txt",
Mode: 0644,
Uid: 1000,
Gid: 1000,
Size: 16 << 30,
ModTime: time.Unix(1399583047, 0),
Typeflag: '0',
Uname: "guillaume",
Gname: "guillaume",
},
// fake contents
contents: strings.Repeat("\x00", 4<<10),
},
},
},
// This file was produced using gnu tar 1.17
// gnutar -b 4 --format=ustar (longname/)*15 + file.txt
{
file: "testdata/ustar.tar",
entries: []*writerTestEntry{
{
header: &Header{
Name: strings.Repeat("longname/", 15) + "file.txt",
Mode: 0644,
Uid: 0765,
Gid: 024,
Size: 06,
ModTime: time.Unix(1360135598, 0),
Typeflag: '0',
Uname: "shane",
Gname: "staff",
},
contents: "hello\n",
},
},
},
// This file was produced using gnu tar 1.26
// echo "Slartibartfast" > file.txt
// ln file.txt hard.txt
// tar -b 1 --format=ustar -c -f hardlink.tar file.txt hard.txt
{
file: "testdata/hardlink.tar",
entries: []*writerTestEntry{
{
header: &Header{
Name: "file.txt",
Mode: 0644,
Uid: 1000,
Gid: 100,
Size: 15,
ModTime: time.Unix(1425484303, 0),
Typeflag: '0',
Uname: "vbatts",
Gname: "users",
},
contents: "Slartibartfast\n",
},
{
header: &Header{
Name: "hard.txt",
Mode: 0644,
Uid: 1000,
Gid: 100,
Size: 0,
ModTime: time.Unix(1425484303, 0),
Typeflag: '1',
Linkname: "file.txt",
Uname: "vbatts",
Gname: "users",
},
// no contents
},
},
},
}
// Render byte array in a two-character hexadecimal string, spaced for easy visual inspection.
func bytestr(offset int, b []byte) string {
const rowLen = 32
s := fmt.Sprintf("%04x ", offset)
for _, ch := range b {
switch {
case '0' <= ch && ch <= '9', 'A' <= ch && ch <= 'Z', 'a' <= ch && ch <= 'z':
s += fmt.Sprintf(" %c", ch)
default:
s += fmt.Sprintf(" %02x", ch)
}
}
return s
}
// Render a pseudo-diff between two blocks of bytes.
func bytediff(a []byte, b []byte) string {
const rowLen = 32
s := fmt.Sprintf("(%d bytes vs. %d bytes)\n", len(a), len(b))
for offset := 0; len(a)+len(b) > 0; offset += rowLen {
na, nb := rowLen, rowLen
if na > len(a) {
na = len(a)
}
if nb > len(b) {
nb = len(b)
}
sa := bytestr(offset, a[0:na])
sb := bytestr(offset, b[0:nb])
if sa != sb {
s += fmt.Sprintf("-%v\n+%v\n", sa, sb)
}
a = a[na:]
b = b[nb:]
}
return s
}
func TestWriter(t *testing.T) {
testLoop:
for i, test := range writerTests {
expected, err := ioutil.ReadFile(test.file)
if err != nil {
t.Errorf("test %d: Unexpected error: %v", i, err)
continue
}
buf := new(bytes.Buffer)
tw := NewWriter(iotest.TruncateWriter(buf, 4<<10)) // only catch the first 4 KB
big := false
for j, entry := range test.entries {
big = big || entry.header.Size > 1<<10
if err := tw.WriteHeader(entry.header); err != nil {
t.Errorf("test %d, entry %d: Failed writing header: %v", i, j, err)
continue testLoop
}
if _, err := io.WriteString(tw, entry.contents); err != nil {
t.Errorf("test %d, entry %d: Failed writing contents: %v", i, j, err)
continue testLoop
}
}
// Only interested in Close failures for the small tests.
if err := tw.Close(); err != nil && !big {
t.Errorf("test %d: Failed closing archive: %v", i, err)
continue testLoop
}
actual := buf.Bytes()
if !bytes.Equal(expected, actual) {
t.Errorf("test %d: Incorrect result: (-=expected, +=actual)\n%v",
i, bytediff(expected, actual))
}
if testing.Short() { // The second test is expensive.
break
}
}
}
func TestPax(t *testing.T) {
// Create an archive with a large name
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
if err != nil {
t.Fatalf("os.Stat: %v", err)
}
// Force a PAX long name to be written
longName := strings.Repeat("ab", 100)
contents := strings.Repeat(" ", int(hdr.Size))
hdr.Name = longName
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if _, err = writer.Write([]byte(contents)); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Simple test to make sure PAX extensions are in effect
if !bytes.Contains(buf.Bytes(), []byte("PaxHeaders.0")) {
t.Fatal("Expected at least one PAX header to be written.")
}
// Test that we can get a long name back out of the archive.
reader := NewReader(&buf)
hdr, err = reader.Next()
if err != nil {
t.Fatal(err)
}
if hdr.Name != longName {
t.Fatal("Couldn't recover long file name")
}
}
func TestPaxSymlink(t *testing.T) {
// Create an archive with a large linkname
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
hdr.Typeflag = TypeSymlink
if err != nil {
t.Fatalf("os.Stat:1 %v", err)
}
// Force a PAX long linkname to be written
longLinkname := strings.Repeat("1234567890/1234567890", 10)
hdr.Linkname = longLinkname
hdr.Size = 0
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Simple test to make sure PAX extensions are in effect
if !bytes.Contains(buf.Bytes(), []byte("PaxHeaders.0")) {
t.Fatal("Expected at least one PAX header to be written.")
}
// Test that we can get a long name back out of the archive.
reader := NewReader(&buf)
hdr, err = reader.Next()
if err != nil {
t.Fatal(err)
}
if hdr.Linkname != longLinkname {
t.Fatal("Couldn't recover long link name")
}
}
func TestPaxNonAscii(t *testing.T) {
// Create an archive with non ascii. These should trigger a pax header
// because pax headers have a defined utf-8 encoding.
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
if err != nil {
t.Fatalf("os.Stat:1 %v", err)
}
// some sample data
chineseFilename := "文件名"
chineseGroupname := "組"
chineseUsername := "用戶名"
hdr.Name = chineseFilename
hdr.Gname = chineseGroupname
hdr.Uname = chineseUsername
contents := strings.Repeat(" ", int(hdr.Size))
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if _, err = writer.Write([]byte(contents)); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Simple test to make sure PAX extensions are in effect
if !bytes.Contains(buf.Bytes(), []byte("PaxHeaders.0")) {
t.Fatal("Expected at least one PAX header to be written.")
}
// Test that we can get a long name back out of the archive.
reader := NewReader(&buf)
hdr, err = reader.Next()
if err != nil {
t.Fatal(err)
}
if hdr.Name != chineseFilename {
t.Fatal("Couldn't recover unicode name")
}
if hdr.Gname != chineseGroupname {
t.Fatal("Couldn't recover unicode group")
}
if hdr.Uname != chineseUsername {
t.Fatal("Couldn't recover unicode user")
}
}
func TestPaxXattrs(t *testing.T) {
xattrs := map[string]string{
"user.key": "value",
}
// Create an archive with an xattr
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
if err != nil {
t.Fatalf("os.Stat: %v", err)
}
contents := "Kilts"
hdr.Xattrs = xattrs
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if _, err = writer.Write([]byte(contents)); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Test that we can get the xattrs back out of the archive.
reader := NewReader(&buf)
hdr, err = reader.Next()
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(hdr.Xattrs, xattrs) {
t.Fatalf("xattrs did not survive round trip: got %+v, want %+v",
hdr.Xattrs, xattrs)
}
}
func TestPaxHeadersSorted(t *testing.T) {
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
if err != nil {
t.Fatalf("os.Stat: %v", err)
}
contents := strings.Repeat(" ", int(hdr.Size))
hdr.Xattrs = map[string]string{
"foo": "foo",
"bar": "bar",
"baz": "baz",
"qux": "qux",
}
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if _, err = writer.Write([]byte(contents)); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Simple test to make sure PAX extensions are in effect
if !bytes.Contains(buf.Bytes(), []byte("PaxHeaders.0")) {
t.Fatal("Expected at least one PAX header to be written.")
}
// xattr bar should always appear before others
indices := []int{
bytes.Index(buf.Bytes(), []byte("bar=bar")),
bytes.Index(buf.Bytes(), []byte("baz=baz")),
bytes.Index(buf.Bytes(), []byte("foo=foo")),
bytes.Index(buf.Bytes(), []byte("qux=qux")),
}
if !sort.IntsAreSorted(indices) {
t.Fatal("PAX headers are not sorted")
}
}
func TestUSTARLongName(t *testing.T) {
// Create an archive with a path that failed to split with USTAR extension in previous versions.
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
hdr.Typeflag = TypeDir
if err != nil {
t.Fatalf("os.Stat:1 %v", err)
}
// Force a PAX long name to be written. The name was taken from a practical example
// that failed, with its characters replaced by digits to anonymize the sample.
longName := "/0000_0000000/00000-000000000/0000_0000000/00000-0000000000000/0000_0000000/00000-0000000-00000000/0000_0000000/00000000/0000_0000000/000/0000_0000000/00000000v00/0000_0000000/000000/0000_0000000/0000000/0000_0000000/00000y-00/0000/0000/00000000/0x000000/"
hdr.Name = longName
hdr.Size = 0
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Test that we can get a long name back out of the archive.
reader := NewReader(&buf)
hdr, err = reader.Next()
if err != nil {
t.Fatal(err)
}
if hdr.Name != longName {
t.Fatal("Couldn't recover long name")
}
}
func TestValidTypeflagWithPAXHeader(t *testing.T) {
var buffer bytes.Buffer
tw := NewWriter(&buffer)
fileName := strings.Repeat("ab", 100)
hdr := &Header{
Name: fileName,
Size: 4,
Typeflag: 0,
}
if err := tw.WriteHeader(hdr); err != nil {
t.Fatalf("Failed to write header: %s", err)
}
if _, err := tw.Write([]byte("fooo")); err != nil {
t.Fatalf("Failed to write the file's data: %s", err)
}
tw.Close()
tr := NewReader(&buffer)
for {
header, err := tr.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatalf("Failed to read header: %s", err)
}
if header.Typeflag != 0 {
t.Fatalf("Typeflag should've been 0, found %d", header.Typeflag)
}
}
}
func TestWriteAfterClose(t *testing.T) {
var buffer bytes.Buffer
tw := NewWriter(&buffer)
hdr := &Header{
Name: "small.txt",
Size: 5,
}
if err := tw.WriteHeader(hdr); err != nil {
t.Fatalf("Failed to write header: %s", err)
}
tw.Close()
if _, err := tw.Write([]byte("Kilts")); err != ErrWriteAfterClose {
t.Fatalf("Write: got %v; want ErrWriteAfterClose", err)
}
}
func TestSplitUSTARPath(t *testing.T) {
var sr = strings.Repeat
var vectors = []struct {
input string // Input path
prefix string // Expected output prefix
suffix string // Expected output suffix
ok bool // Split success?
}{
{"", "", "", false},
{"abc", "", "", false},
{"用戶名", "", "", false},
{sr("a", fileNameSize), "", "", false},
{sr("a", fileNameSize) + "/", "", "", false},
{sr("a", fileNameSize) + "/a", sr("a", fileNameSize), "a", true},
{sr("a", fileNamePrefixSize) + "/", "", "", false},
{sr("a", fileNamePrefixSize) + "/a", sr("a", fileNamePrefixSize), "a", true},
{sr("a", fileNameSize+1), "", "", false},
{sr("/", fileNameSize+1), sr("/", fileNameSize-1), "/", true},
{sr("a", fileNamePrefixSize) + "/" + sr("b", fileNameSize),
sr("a", fileNamePrefixSize), sr("b", fileNameSize), true},
{sr("a", fileNamePrefixSize) + "//" + sr("b", fileNameSize), "", "", false},
{sr("a/", fileNameSize), sr("a/", 77) + "a", sr("a/", 22), true},
}
for _, v := range vectors {
prefix, suffix, ok := splitUSTARPath(v.input)
if prefix != v.prefix || suffix != v.suffix || ok != v.ok {
t.Errorf("splitUSTARPath(%q):\ngot (%q, %q, %v)\nwant (%q, %q, %v)",
v.input, prefix, suffix, ok, v.prefix, v.suffix, v.ok)
}
}
}
func TestFormatPAXRecord(t *testing.T) {
var medName = strings.Repeat("CD", 50)
var longName = strings.Repeat("AB", 100)
var vectors = []struct {
inputKey string
inputVal string
output string
}{
{"k", "v", "6 k=v\n"},
{"path", "/etc/hosts", "19 path=/etc/hosts\n"},
{"path", longName, "210 path=" + longName + "\n"},
{"path", medName, "110 path=" + medName + "\n"},
{"foo", "ba", "9 foo=ba\n"},
{"foo", "bar", "11 foo=bar\n"},
{"foo", "b=\nar=\n==\x00", "18 foo=b=\nar=\n==\x00\n"},
{"foo", "hello9 foo=ba\nworld", "27 foo=hello9 foo=ba\nworld\n"},
{"☺☻☹", "日a本b語ç", "27 ☺☻☹=日a本b語ç\n"},
{"\x00hello", "\x00world", "17 \x00hello=\x00world\n"},
}
for _, v := range vectors {
output := formatPAXRecord(v.inputKey, v.inputVal)
if output != v.output {
t.Errorf("formatPAXRecord(%q, %q): got %q, want %q",
v.inputKey, v.inputVal, output, v.output)
}
}
}
func TestFitsInBase256(t *testing.T) {
var vectors = []struct {
input int64
width int
ok bool
}{
{+1, 8, true},
{0, 8, true},
{-1, 8, true},
{1 << 56, 8, false},
{(1 << 56) - 1, 8, true},
{-1 << 56, 8, true},
{(-1 << 56) - 1, 8, false},
{121654, 8, true},
{-9849849, 8, true},
{math.MaxInt64, 9, true},
{0, 9, true},
{math.MinInt64, 9, true},
{math.MaxInt64, 12, true},
{0, 12, true},
{math.MinInt64, 12, true},
}
for _, v := range vectors {
ok := fitsInBase256(v.width, v.input)
if ok != v.ok {
t.Errorf("fitsInBase256(%d, %d): got %v, want %v", v.width, v.input, ok, v.ok)
}
}
}
func TestFormatNumeric(t *testing.T) {
var vectors = []struct {
input int64
output string
ok bool
}{
// Test base-256 (binary) encoded values.
{-1, "\xff", true},
{-1, "\xff\xff", true},
{-1, "\xff\xff\xff", true},
{(1 << 0), "0", false},
{(1 << 8) - 1, "\x80\xff", true},
{(1 << 8), "0\x00", false},
{(1 << 16) - 1, "\x80\xff\xff", true},
{(1 << 16), "00\x00", false},
{-1 * (1 << 0), "\xff", true},
{-1*(1<<0) - 1, "0", false},
{-1 * (1 << 8), "\xff\x00", true},
{-1*(1<<8) - 1, "0\x00", false},
{-1 * (1 << 16), "\xff\x00\x00", true},
{-1*(1<<16) - 1, "00\x00", false},
{537795476381659745, "0000000\x00", false},
{537795476381659745, "\x80\x00\x00\x00\x07\x76\xa2\x22\xeb\x8a\x72\x61", true},
{-615126028225187231, "0000000\x00", false},
{-615126028225187231, "\xff\xff\xff\xff\xf7\x76\xa2\x22\xeb\x8a\x72\x61", true},
{math.MaxInt64, "0000000\x00", false},
{math.MaxInt64, "\x80\x00\x00\x00\x7f\xff\xff\xff\xff\xff\xff\xff", true},
{math.MinInt64, "0000000\x00", false},
{math.MinInt64, "\xff\xff\xff\xff\x80\x00\x00\x00\x00\x00\x00\x00", true},
{math.MaxInt64, "\x80\x7f\xff\xff\xff\xff\xff\xff\xff", true},
{math.MinInt64, "\xff\x80\x00\x00\x00\x00\x00\x00\x00", true},
}
for _, v := range vectors {
var f formatter
output := make([]byte, len(v.output))
f.formatNumeric(output, v.input)
ok := (f.err == nil)
if ok != v.ok {
if v.ok {
t.Errorf("formatNumeric(%d): got formatting failure, want success", v.input)
} else {
t.Errorf("formatNumeric(%d): got formatting success, want failure", v.input)
}
}
if string(output) != v.output {
t.Errorf("formatNumeric(%d): got %q, want %q", v.input, output, v.output)
}
}
}
func TestFormatPAXTime(t *testing.T) {
t1 := time.Date(2000, 1, 1, 11, 0, 0, 0, time.UTC)
t2 := time.Date(2000, 1, 1, 11, 0, 0, 100, time.UTC)
t3 := time.Date(1960, 1, 1, 11, 0, 0, 0, time.UTC)
t4 := time.Date(1970, 1, 1, 0, 0, 0, 0, time.UTC)
verify := func(time time.Time, s string) {
p := formatPAXTime(time)
if p != s {
t.Errorf("for %v, expected %s, got %s", time, s, p)
}
}
verify(t1, "946724400")
verify(t2, "946724400.000000100")
verify(t3, "-315579600")
verify(t4, "0")
}


@@ -1,280 +0,0 @@
// +build windows
package winio
import (
"encoding/binary"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"runtime"
"syscall"
"unicode/utf16"
)
//sys backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupRead
//sys backupWrite(h syscall.Handle, b []byte, bytesWritten *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupWrite
const (
BackupData = uint32(iota + 1)
BackupEaData
BackupSecurity
BackupAlternateData
BackupLink
BackupPropertyData
BackupObjectId
BackupReparseData
BackupSparseBlock
BackupTxfsData
)
const (
StreamSparseAttributes = uint32(8)
)
const (
WRITE_DAC = 0x40000
WRITE_OWNER = 0x80000
ACCESS_SYSTEM_SECURITY = 0x1000000
)
// BackupHeader represents a backup stream of a file.
type BackupHeader struct {
Id uint32 // The backup stream ID
Attributes uint32 // Stream attributes
Size int64 // The size of the stream in bytes
Name string // The name of the stream (for BackupAlternateData only).
Offset int64 // The offset of the stream in the file (for BackupSparseBlock only).
}
type win32StreamId struct {
StreamId uint32
Attributes uint32
Size uint64
NameSize uint32
}
// BackupStreamReader reads from a stream produced by the BackupRead Win32 API and produces a series
// of BackupHeader values.
type BackupStreamReader struct {
r io.Reader
bytesLeft int64
}
// NewBackupStreamReader produces a BackupStreamReader from any io.Reader.
func NewBackupStreamReader(r io.Reader) *BackupStreamReader {
return &BackupStreamReader{r, 0}
}
// Next returns the next backup stream and prepares for calls to Read(). It skips the remainder of the current stream if
// it was not completely read.
func (r *BackupStreamReader) Next() (*BackupHeader, error) {
if r.bytesLeft > 0 {
if s, ok := r.r.(io.Seeker); ok {
// Check that a no-op Seek with io.SeekCurrent succeeds
// before attempting the actual seek.
if _, err := s.Seek(0, io.SeekCurrent); err == nil {
if _, err = s.Seek(r.bytesLeft, io.SeekCurrent); err != nil {
return nil, err
}
r.bytesLeft = 0
}
}
if _, err := io.Copy(ioutil.Discard, r); err != nil {
return nil, err
}
}
var wsi win32StreamId
if err := binary.Read(r.r, binary.LittleEndian, &wsi); err != nil {
return nil, err
}
hdr := &BackupHeader{
Id: wsi.StreamId,
Attributes: wsi.Attributes,
Size: int64(wsi.Size),
}
if wsi.NameSize != 0 {
name := make([]uint16, int(wsi.NameSize/2))
if err := binary.Read(r.r, binary.LittleEndian, name); err != nil {
return nil, err
}
hdr.Name = syscall.UTF16ToString(name)
}
if wsi.StreamId == BackupSparseBlock {
if err := binary.Read(r.r, binary.LittleEndian, &hdr.Offset); err != nil {
return nil, err
}
hdr.Size -= 8
}
r.bytesLeft = hdr.Size
return hdr, nil
}
// Read reads from the current backup stream.
func (r *BackupStreamReader) Read(b []byte) (int, error) {
if r.bytesLeft == 0 {
return 0, io.EOF
}
if int64(len(b)) > r.bytesLeft {
b = b[:r.bytesLeft]
}
n, err := r.r.Read(b)
r.bytesLeft -= int64(n)
if err == io.EOF {
err = io.ErrUnexpectedEOF
} else if r.bytesLeft == 0 && err == nil {
err = io.EOF
}
return n, err
}
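A minimal read-side sketch (not part of this diff) tying the pieces above together: open a file, wrap it in a BackupFileReader, and iterate its backup streams. It mirrors TestBackupStreamRead further down; the path is purely illustrative.

// +build windows

package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "os"

    "github.com/Microsoft/go-winio"
)

func main() {
    f, err := os.Open(`C:\some\file.txt`) // illustrative path
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    r := winio.NewBackupFileReader(f, false) // false: skip the security descriptor
    defer r.Close()
    br := winio.NewBackupStreamReader(r)
    for {
        hdr, err := br.Next()
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        body, err := ioutil.ReadAll(br) // payload of the current stream
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("stream id=%d name=%q size=%d (%d bytes read)\n", hdr.Id, hdr.Name, hdr.Size, len(body))
    }
}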
// BackupStreamWriter writes a stream compatible with the BackupWrite Win32 API.
type BackupStreamWriter struct {
w io.Writer
bytesLeft int64
}
// NewBackupStreamWriter produces a BackupStreamWriter on top of an io.Writer.
func NewBackupStreamWriter(w io.Writer) *BackupStreamWriter {
return &BackupStreamWriter{w, 0}
}
// WriteHeader writes the next backup stream header and prepares for calls to Write().
func (w *BackupStreamWriter) WriteHeader(hdr *BackupHeader) error {
if w.bytesLeft != 0 {
return fmt.Errorf("missing %d bytes", w.bytesLeft)
}
name := utf16.Encode([]rune(hdr.Name))
wsi := win32StreamId{
StreamId: hdr.Id,
Attributes: hdr.Attributes,
Size: uint64(hdr.Size),
NameSize: uint32(len(name) * 2),
}
if hdr.Id == BackupSparseBlock {
// Include space for the int64 block offset
wsi.Size += 8
}
if err := binary.Write(w.w, binary.LittleEndian, &wsi); err != nil {
return err
}
if len(name) != 0 {
if err := binary.Write(w.w, binary.LittleEndian, name); err != nil {
return err
}
}
if hdr.Id == BackupSparseBlock {
if err := binary.Write(w.w, binary.LittleEndian, hdr.Offset); err != nil {
return err
}
}
w.bytesLeft = hdr.Size
return nil
}
// Write writes to the current backup stream.
func (w *BackupStreamWriter) Write(b []byte) (int, error) {
if w.bytesLeft < int64(len(b)) {
return 0, fmt.Errorf("too many bytes by %d", int64(len(b))-w.bytesLeft)
}
n, err := w.w.Write(b)
w.bytesLeft -= int64(n)
return n, err
}
// BackupFileReader provides an io.ReadCloser interface on top of the BackupRead Win32 API.
type BackupFileReader struct {
f *os.File
includeSecurity bool
ctx uintptr
}
// NewBackupFileReader returns a new BackupFileReader from a file handle. If includeSecurity is true,
// Read will attempt to read the security descriptor of the file.
func NewBackupFileReader(f *os.File, includeSecurity bool) *BackupFileReader {
r := &BackupFileReader{f, includeSecurity, 0}
return r
}
// Read reads a backup stream from the file by calling the Win32 API BackupRead().
func (r *BackupFileReader) Read(b []byte) (int, error) {
var bytesRead uint32
err := backupRead(syscall.Handle(r.f.Fd()), b, &bytesRead, false, r.includeSecurity, &r.ctx)
if err != nil {
return 0, &os.PathError{"BackupRead", r.f.Name(), err}
}
runtime.KeepAlive(r.f)
if bytesRead == 0 {
return 0, io.EOF
}
return int(bytesRead), nil
}
// Close frees Win32 resources associated with the BackupFileReader. It does not close
// the underlying file.
func (r *BackupFileReader) Close() error {
if r.ctx != 0 {
backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx)
runtime.KeepAlive(r.f)
r.ctx = 0
}
return nil
}
// BackupFileWriter provides an io.WriteCloser interface on top of the BackupWrite Win32 API.
type BackupFileWriter struct {
f *os.File
includeSecurity bool
ctx uintptr
}
// NewBackupFileWriter returns a new BackupFileWriter from a file handle. If includeSecurity is true,
// Write() will attempt to restore the security descriptor from the stream.
func NewBackupFileWriter(f *os.File, includeSecurity bool) *BackupFileWriter {
w := &BackupFileWriter{f, includeSecurity, 0}
return w
}
// Write restores a portion of the file using the provided backup stream.
func (w *BackupFileWriter) Write(b []byte) (int, error) {
var bytesWritten uint32
err := backupWrite(syscall.Handle(w.f.Fd()), b, &bytesWritten, false, w.includeSecurity, &w.ctx)
if err != nil {
return 0, &os.PathError{"BackupWrite", w.f.Name(), err}
}
runtime.KeepAlive(w.f)
if int(bytesWritten) != len(b) {
return int(bytesWritten), errors.New("not all bytes could be written")
}
return len(b), nil
}
// Close frees Win32 resources associated with the BackupFileWriter. It does not
// close the underlying file.
func (w *BackupFileWriter) Close() error {
if w.ctx != 0 {
backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx)
runtime.KeepAlive(w.f)
w.ctx = 0
}
return nil
}
// OpenForBackup opens a file or directory, potentially skipping access checks if the backup
// or restore privileges have been acquired.
//
// If the file opened was a directory, it cannot be used with Readdir().
func OpenForBackup(path string, access uint32, share uint32, createmode uint32) (*os.File, error) {
winPath, err := syscall.UTF16FromString(path)
if err != nil {
return nil, err
}
h, err := syscall.CreateFile(&winPath[0], access, share, nil, createmode, syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT, 0)
if err != nil {
err = &os.PathError{Op: "open", Path: path, Err: err}
return nil, err
}
return os.NewFile(uintptr(h), path), nil
}


@@ -1,255 +0,0 @@
package winio
import (
"io"
"io/ioutil"
"os"
"syscall"
"testing"
)
var testFileName string
func TestMain(m *testing.M) {
f, err := ioutil.TempFile("", "tmp")
if err != nil {
panic(err)
}
testFileName = f.Name()
f.Close()
defer os.Remove(testFileName)
os.Exit(m.Run())
}
func makeTestFile(makeADS bool) error {
os.Remove(testFileName)
f, err := os.Create(testFileName)
if err != nil {
return err
}
defer f.Close()
_, err = f.Write([]byte("testing 1 2 3\n"))
if err != nil {
return err
}
if makeADS {
a, err := os.Create(testFileName + ":ads.txt")
if err != nil {
return err
}
defer a.Close()
_, err = a.Write([]byte("alternate data stream\n"))
if err != nil {
return err
}
}
return nil
}
func TestBackupRead(t *testing.T) {
err := makeTestFile(true)
if err != nil {
t.Fatal(err)
}
f, err := os.Open(testFileName)
if err != nil {
t.Fatal(err)
}
defer f.Close()
r := NewBackupFileReader(f, false)
defer r.Close()
b, err := ioutil.ReadAll(r)
if err != nil {
t.Fatal(err)
}
if len(b) == 0 {
t.Fatal("no data")
}
}
func TestBackupStreamRead(t *testing.T) {
err := makeTestFile(true)
if err != nil {
t.Fatal(err)
}
f, err := os.Open(testFileName)
if err != nil {
t.Fatal(err)
}
defer f.Close()
r := NewBackupFileReader(f, false)
defer r.Close()
br := NewBackupStreamReader(r)
gotData := false
gotAltData := false
for {
hdr, err := br.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
switch hdr.Id {
case BackupData:
if gotData {
t.Fatal("duplicate data")
}
if hdr.Name != "" {
t.Fatalf("unexpected name %s", hdr.Name)
}
b, err := ioutil.ReadAll(br)
if err != nil {
t.Fatal(err)
}
if string(b) != "testing 1 2 3\n" {
t.Fatalf("incorrect data %v", b)
}
gotData = true
case BackupAlternateData:
if gotAltData {
t.Fatal("duplicate alt data")
}
if hdr.Name != ":ads.txt:$DATA" {
t.Fatalf("incorrect name %s", hdr.Name)
}
b, err := ioutil.ReadAll(br)
if err != nil {
t.Fatal(err)
}
if string(b) != "alternate data stream\n" {
t.Fatalf("incorrect data %v", b)
}
gotAltData = true
default:
t.Fatalf("unknown stream ID %d", hdr.Id)
}
}
if !gotData || !gotAltData {
t.Fatal("missing stream")
}
}
func TestBackupStreamWrite(t *testing.T) {
f, err := os.Create(testFileName)
if err != nil {
t.Fatal(err)
}
defer f.Close()
w := NewBackupFileWriter(f, false)
defer w.Close()
data := "testing 1 2 3\n"
altData := "alternate stream\n"
br := NewBackupStreamWriter(w)
err = br.WriteHeader(&BackupHeader{Id: BackupData, Size: int64(len(data))})
if err != nil {
t.Fatal(err)
}
n, err := br.Write([]byte(data))
if err != nil {
t.Fatal(err)
}
if n != len(data) {
t.Fatal("short write")
}
err = br.WriteHeader(&BackupHeader{Id: BackupAlternateData, Size: int64(len(altData)), Name: ":ads.txt:$DATA"})
if err != nil {
t.Fatal(err)
}
n, err = br.Write([]byte(altData))
if err != nil {
t.Fatal(err)
}
if n != len(altData) {
t.Fatal("short write")
}
f.Close()
b, err := ioutil.ReadFile(testFileName)
if err != nil {
t.Fatal(err)
}
if string(b) != data {
t.Fatalf("wrong data %v", b)
}
b, err = ioutil.ReadFile(testFileName + ":ads.txt")
if err != nil {
t.Fatal(err)
}
if string(b) != altData {
t.Fatalf("wrong data %v", b)
}
}
func makeSparseFile() error {
os.Remove(testFileName)
f, err := os.Create(testFileName)
if err != nil {
return err
}
defer f.Close()
const (
FSCTL_SET_SPARSE = 0x000900c4
FSCTL_SET_ZERO_DATA = 0x000980c8
)
err = syscall.DeviceIoControl(syscall.Handle(f.Fd()), FSCTL_SET_SPARSE, nil, 0, nil, 0, nil, nil)
if err != nil {
return err
}
_, err = f.Write([]byte("testing 1 2 3\n"))
if err != nil {
return err
}
_, err = f.Seek(1000000, 0)
if err != nil {
return err
}
_, err = f.Write([]byte("more data later\n"))
if err != nil {
return err
}
return nil
}
func TestBackupSparseFile(t *testing.T) {
err := makeSparseFile()
if err != nil {
t.Fatal(err)
}
f, err := os.Open(testFileName)
if err != nil {
t.Fatal(err)
}
defer f.Close()
r := NewBackupFileReader(f, false)
defer r.Close()
br := NewBackupStreamReader(r)
for {
hdr, err := br.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
t.Log(hdr)
}
}


@@ -1,4 +0,0 @@
// +build !windows
// This file only exists to allow go get on non-Windows platforms.
package backuptar


@@ -1,439 +0,0 @@
// +build windows
package backuptar
import (
"encoding/base64"
"errors"
"fmt"
"io"
"io/ioutil"
"path/filepath"
"strconv"
"strings"
"syscall"
"time"
"github.com/Microsoft/go-winio"
"github.com/Microsoft/go-winio/archive/tar" // until archive/tar supports pax extensions in its interface
)
const (
c_ISUID = 04000 // Set uid
c_ISGID = 02000 // Set gid
c_ISVTX = 01000 // Save text (sticky bit)
c_ISDIR = 040000 // Directory
c_ISFIFO = 010000 // FIFO
c_ISREG = 0100000 // Regular file
c_ISLNK = 0120000 // Symbolic link
c_ISBLK = 060000 // Block special file
c_ISCHR = 020000 // Character special file
c_ISSOCK = 0140000 // Socket
)
const (
hdrFileAttributes = "fileattr"
hdrSecurityDescriptor = "sd"
hdrRawSecurityDescriptor = "rawsd"
hdrMountPoint = "mountpoint"
hdrEaPrefix = "xattr."
)
func writeZeroes(w io.Writer, count int64) error {
buf := make([]byte, 8192)
c := len(buf)
for i := int64(0); i < count; i += int64(c) {
if int64(c) > count-i {
c = int(count - i)
}
_, err := w.Write(buf[:c])
if err != nil {
return err
}
}
return nil
}
func copySparse(t *tar.Writer, br *winio.BackupStreamReader) error {
curOffset := int64(0)
for {
bhdr, err := br.Next()
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
if err != nil {
return err
}
if bhdr.Id != winio.BackupSparseBlock {
return fmt.Errorf("unexpected stream %d", bhdr.Id)
}
// archive/tar does not support writing sparse files
// so just write zeroes to catch up to the current offset.
err = writeZeroes(t, bhdr.Offset-curOffset)
if bhdr.Size == 0 {
break
}
n, err := io.Copy(t, br)
if err != nil {
return err
}
curOffset = bhdr.Offset + n
}
return nil
}
// BasicInfoHeader creates a tar header from basic file information.
func BasicInfoHeader(name string, size int64, fileInfo *winio.FileBasicInfo) *tar.Header {
hdr := &tar.Header{
Name: filepath.ToSlash(name),
Size: size,
Typeflag: tar.TypeReg,
ModTime: time.Unix(0, fileInfo.LastWriteTime.Nanoseconds()),
ChangeTime: time.Unix(0, fileInfo.ChangeTime.Nanoseconds()),
AccessTime: time.Unix(0, fileInfo.LastAccessTime.Nanoseconds()),
CreationTime: time.Unix(0, fileInfo.CreationTime.Nanoseconds()),
Winheaders: make(map[string]string),
}
hdr.Winheaders[hdrFileAttributes] = fmt.Sprintf("%d", fileInfo.FileAttributes)
if (fileInfo.FileAttributes & syscall.FILE_ATTRIBUTE_DIRECTORY) != 0 {
hdr.Mode |= c_ISDIR
hdr.Size = 0
hdr.Typeflag = tar.TypeDir
}
return hdr
}
// WriteTarFileFromBackupStream writes a file to a tar writer using data from a Win32 backup stream.
//
// This encodes Win32 metadata as tar pax vendor extensions starting with MSWINDOWS.
//
// The additional Win32 metadata is:
//
// MSWINDOWS.fileattr: The Win32 file attributes, as a decimal value
//
// MSWINDOWS.rawsd: The Win32 security descriptor, in raw binary format
//
// MSWINDOWS.mountpoint: If present, this is a mount point and not a symlink, even though the type is '2' (symlink)
func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size int64, fileInfo *winio.FileBasicInfo) error {
name = filepath.ToSlash(name)
hdr := BasicInfoHeader(name, size, fileInfo)
// If r is seekable, then this function is two-pass: pass 1 collects the
// tar header data, and pass 2 copies the data stream. If r is not
// seekable, then some header data (in particular EAs) will be silently lost.
var (
restartPos int64
err error
)
sr, readTwice := r.(io.Seeker)
if readTwice {
if restartPos, err = sr.Seek(0, io.SeekCurrent); err != nil {
readTwice = false
}
}
br := winio.NewBackupStreamReader(r)
var dataHdr *winio.BackupHeader
for dataHdr == nil {
bhdr, err := br.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
switch bhdr.Id {
case winio.BackupData:
hdr.Mode |= c_ISREG
if !readTwice {
dataHdr = bhdr
}
case winio.BackupSecurity:
sd, err := ioutil.ReadAll(br)
if err != nil {
return err
}
hdr.Winheaders[hdrRawSecurityDescriptor] = base64.StdEncoding.EncodeToString(sd)
case winio.BackupReparseData:
hdr.Mode |= c_ISLNK
hdr.Typeflag = tar.TypeSymlink
reparseBuffer, err := ioutil.ReadAll(br)
rp, err := winio.DecodeReparsePoint(reparseBuffer)
if err != nil {
return err
}
if rp.IsMountPoint {
hdr.Winheaders[hdrMountPoint] = "1"
}
hdr.Linkname = rp.Target
case winio.BackupEaData:
eab, err := ioutil.ReadAll(br)
if err != nil {
return err
}
eas, err := winio.DecodeExtendedAttributes(eab)
if err != nil {
return err
}
for _, ea := range eas {
// Use base64 encoding for the binary value. Note that there
// is no way to encode the EA's flags, since their use doesn't
// make any sense for persisted EAs.
hdr.Winheaders[hdrEaPrefix+ea.Name] = base64.StdEncoding.EncodeToString(ea.Value)
}
case winio.BackupAlternateData, winio.BackupLink, winio.BackupPropertyData, winio.BackupObjectId, winio.BackupTxfsData:
// ignore these streams
default:
return fmt.Errorf("%s: unknown stream ID %d", name, bhdr.Id)
}
}
err = t.WriteHeader(hdr)
if err != nil {
return err
}
if readTwice {
// Get back to the data stream.
if _, err = sr.Seek(restartPos, io.SeekStart); err != nil {
return err
}
for dataHdr == nil {
bhdr, err := br.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
if bhdr.Id == winio.BackupData {
dataHdr = bhdr
}
}
}
if dataHdr != nil {
// A data stream was found. Copy the data.
if (dataHdr.Attributes & winio.StreamSparseAttributes) == 0 {
if size != dataHdr.Size {
return fmt.Errorf("%s: mismatch between file size %d and header size %d", name, size, dataHdr.Size)
}
_, err = io.Copy(t, br)
if err != nil {
return err
}
} else {
err = copySparse(t, br)
if err != nil {
return err
}
}
}
// Look for streams after the data stream. The only ones we handle are alternate data streams.
// Other streams may have metadata that could be serialized, but the tar header has already
// been written. In practice, this means that we don't get EA or TXF metadata.
for {
bhdr, err := br.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
switch bhdr.Id {
case winio.BackupAlternateData:
altName := bhdr.Name
if strings.HasSuffix(altName, ":$DATA") {
altName = altName[:len(altName)-len(":$DATA")]
}
if (bhdr.Attributes & winio.StreamSparseAttributes) == 0 {
hdr = &tar.Header{
Name: name + altName,
Mode: hdr.Mode,
Typeflag: tar.TypeReg,
Size: bhdr.Size,
ModTime: hdr.ModTime,
AccessTime: hdr.AccessTime,
ChangeTime: hdr.ChangeTime,
}
err = t.WriteHeader(hdr)
if err != nil {
return err
}
_, err = io.Copy(t, br)
if err != nil {
return err
}
} else {
// Unsupported for now, since the size of the alternate stream is not present
// in the backup stream until after the data has been read.
return errors.New("tar of sparse alternate data streams is unsupported")
}
case winio.BackupEaData, winio.BackupLink, winio.BackupPropertyData, winio.BackupObjectId, winio.BackupTxfsData:
// ignore these streams
default:
return fmt.Errorf("%s: unknown stream ID %d after data", name, bhdr.Id)
}
}
return nil
}
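Putting the writer side together, the sketch below streams one file's backup data into an in-memory tar with the MSWINDOWS.* pax extensions described above; it is essentially the first half of TestRoundTrip further down. The backuptar import path is assumed to follow the repository layout, and the file path is only illustrative.

// +build windows

package main

import (
    "bytes"
    "log"
    "os"

    "github.com/Microsoft/go-winio"
    "github.com/Microsoft/go-winio/archive/tar"
    "github.com/Microsoft/go-winio/backuptar" // assumed import path for this package
)

func main() {
    f, err := os.Open(`C:\some\file.txt`) // illustrative path
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    fi, err := f.Stat()
    if err != nil {
        log.Fatal(err)
    }
    bi, err := winio.GetFileBasicInfo(f)
    if err != nil {
        log.Fatal(err)
    }
    br := winio.NewBackupFileReader(f, true) // include the security descriptor
    defer br.Close()
    var buf bytes.Buffer
    tw := tar.NewWriter(&buf)
    if err := backuptar.WriteTarFileFromBackupStream(tw, br, f.Name(), fi.Size(), bi); err != nil {
        log.Fatal(err)
    }
    if err := tw.Close(); err != nil {
        log.Fatal(err)
    }
    log.Printf("wrote %d bytes of tar data", buf.Len())
}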
// FileInfoFromHeader retrieves basic Win32 file information from a tar header, using the additional metadata written by
// WriteTarFileFromBackupStream.
func FileInfoFromHeader(hdr *tar.Header) (name string, size int64, fileInfo *winio.FileBasicInfo, err error) {
name = hdr.Name
if hdr.Typeflag == tar.TypeReg || hdr.Typeflag == tar.TypeRegA {
size = hdr.Size
}
fileInfo = &winio.FileBasicInfo{
LastAccessTime: syscall.NsecToFiletime(hdr.AccessTime.UnixNano()),
LastWriteTime: syscall.NsecToFiletime(hdr.ModTime.UnixNano()),
ChangeTime: syscall.NsecToFiletime(hdr.ChangeTime.UnixNano()),
CreationTime: syscall.NsecToFiletime(hdr.CreationTime.UnixNano()),
}
if attrStr, ok := hdr.Winheaders[hdrFileAttributes]; ok {
attr, err := strconv.ParseUint(attrStr, 10, 32)
if err != nil {
return "", 0, nil, err
}
fileInfo.FileAttributes = uint32(attr)
} else {
if hdr.Typeflag == tar.TypeDir {
fileInfo.FileAttributes |= syscall.FILE_ATTRIBUTE_DIRECTORY
}
}
return
}
// WriteBackupStreamFromTarFile writes a Win32 backup stream from the current tar file. Since this function may process multiple
// tar file entries in order to collect all the alternate data streams for the file, it returns the next
// tar file that was not processed, or io.EOF if there are no more.
func WriteBackupStreamFromTarFile(w io.Writer, t *tar.Reader, hdr *tar.Header) (*tar.Header, error) {
bw := winio.NewBackupStreamWriter(w)
var sd []byte
var err error
// Maintaining old SDDL-based behavior for backward compatibility. All new tar headers written
// by this library will have raw binary for the security descriptor.
if sddl, ok := hdr.Winheaders[hdrSecurityDescriptor]; ok {
sd, err = winio.SddlToSecurityDescriptor(sddl)
if err != nil {
return nil, err
}
}
if sdraw, ok := hdr.Winheaders[hdrRawSecurityDescriptor]; ok {
sd, err = base64.StdEncoding.DecodeString(sdraw)
if err != nil {
return nil, err
}
}
if len(sd) != 0 {
bhdr := winio.BackupHeader{
Id: winio.BackupSecurity,
Size: int64(len(sd)),
}
err := bw.WriteHeader(&bhdr)
if err != nil {
return nil, err
}
_, err = bw.Write(sd)
if err != nil {
return nil, err
}
}
var eas []winio.ExtendedAttribute
for k, v := range hdr.Winheaders {
if !strings.HasPrefix(k, hdrEaPrefix) {
continue
}
data, err := base64.StdEncoding.DecodeString(v)
if err != nil {
return nil, err
}
eas = append(eas, winio.ExtendedAttribute{
Name: k[len(hdrEaPrefix):],
Value: data,
})
}
if len(eas) != 0 {
eadata, err := winio.EncodeExtendedAttributes(eas)
if err != nil {
return nil, err
}
bhdr := winio.BackupHeader{
Id: winio.BackupEaData,
Size: int64(len(eadata)),
}
err = bw.WriteHeader(&bhdr)
if err != nil {
return nil, err
}
_, err = bw.Write(eadata)
if err != nil {
return nil, err
}
}
if hdr.Typeflag == tar.TypeSymlink {
_, isMountPoint := hdr.Winheaders[hdrMountPoint]
rp := winio.ReparsePoint{
Target: filepath.FromSlash(hdr.Linkname),
IsMountPoint: isMountPoint,
}
reparse := winio.EncodeReparsePoint(&rp)
bhdr := winio.BackupHeader{
Id: winio.BackupReparseData,
Size: int64(len(reparse)),
}
err := bw.WriteHeader(&bhdr)
if err != nil {
return nil, err
}
_, err = bw.Write(reparse)
if err != nil {
return nil, err
}
}
if hdr.Typeflag == tar.TypeReg || hdr.Typeflag == tar.TypeRegA {
bhdr := winio.BackupHeader{
Id: winio.BackupData,
Size: hdr.Size,
}
err := bw.WriteHeader(&bhdr)
if err != nil {
return nil, err
}
_, err = io.Copy(bw, t)
if err != nil {
return nil, err
}
}
// Copy all the alternate data streams and return the next non-ADS header.
for {
ahdr, err := t.Next()
if err != nil {
return nil, err
}
if ahdr.Typeflag != tar.TypeReg || !strings.HasPrefix(ahdr.Name, hdr.Name+":") {
return ahdr, nil
}
bhdr := winio.BackupHeader{
Id: winio.BackupAlternateData,
Size: ahdr.Size,
Name: ahdr.Name[len(hdr.Name):] + ":$DATA",
}
err = bw.WriteHeader(&bhdr)
if err != nil {
return nil, err
}
_, err = io.Copy(bw, t)
if err != nil {
return nil, err
}
}
}
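And for the restore direction, a hedged sketch of driving WriteBackupStreamFromTarFile from a tar.Reader into a BackupFileWriter; the archive source and destination paths are illustrative, the backuptar import path is assumed as above, and directories and other special entries are glossed over.

// +build windows

package main

import (
    "io"
    "log"
    "os"

    "github.com/Microsoft/go-winio"
    "github.com/Microsoft/go-winio/archive/tar"
    "github.com/Microsoft/go-winio/backuptar" // assumed import path for this package
)

func main() {
    in, err := os.Open(`C:\archive.tar`) // illustrative source archive
    if err != nil {
        log.Fatal(err)
    }
    defer in.Close()
    tr := tar.NewReader(in)
    hdr, err := tr.Next()
    for err == nil {
        out, ferr := os.Create(`C:\restore\` + hdr.Name) // illustrative destination
        if ferr != nil {
            log.Fatal(ferr)
        }
        bw := winio.NewBackupFileWriter(out, true) // restore the security descriptor too
        // Returns the next non-ADS header, having consumed any alternate data streams.
        hdr, err = backuptar.WriteBackupStreamFromTarFile(bw, tr, hdr)
        bw.Close()
        out.Close()
    }
    if err != io.EOF {
        log.Fatal(err)
    }
}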


@@ -1,84 +0,0 @@
package backuptar
import (
"bytes"
"io/ioutil"
"os"
"path/filepath"
"reflect"
"testing"
"github.com/Microsoft/go-winio"
"github.com/Microsoft/go-winio/archive/tar"
)
func ensurePresent(t *testing.T, m map[string]string, keys ...string) {
for _, k := range keys {
if _, ok := m[k]; !ok {
t.Error(k, "not present in tar header")
}
}
}
func TestRoundTrip(t *testing.T) {
f, err := ioutil.TempFile("", "tst")
if err != nil {
t.Fatal(err)
}
defer f.Close()
defer os.Remove(f.Name())
if _, err = f.Write([]byte("testing 1 2 3\n")); err != nil {
t.Fatal(err)
}
if _, err = f.Seek(0, 0); err != nil {
t.Fatal(err)
}
fi, err := f.Stat()
if err != nil {
t.Fatal(err)
}
bi, err := winio.GetFileBasicInfo(f)
if err != nil {
t.Fatal(err)
}
br := winio.NewBackupFileReader(f, true)
defer br.Close()
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
err = WriteTarFileFromBackupStream(tw, br, f.Name(), fi.Size(), bi)
if err != nil {
t.Fatal(err)
}
tr := tar.NewReader(&buf)
hdr, err := tr.Next()
if err != nil {
t.Fatal(err)
}
name, size, bi2, err := FileInfoFromHeader(hdr)
if err != nil {
t.Fatal(err)
}
if name != filepath.ToSlash(f.Name()) {
t.Errorf("got name %s, expected %s", name, filepath.ToSlash(f.Name()))
}
if size != fi.Size() {
t.Errorf("got size %d, expected %d", size, fi.Size())
}
if !reflect.DeepEqual(*bi, *bi2) {
t.Errorf("got %#v, expected %#v", *bi, *bi2)
}
ensurePresent(t, hdr.Winheaders, "fileattr", "rawsd")
}


@@ -1,137 +0,0 @@
package winio
import (
"bytes"
"encoding/binary"
"errors"
)
type fileFullEaInformation struct {
NextEntryOffset uint32
Flags uint8
NameLength uint8
ValueLength uint16
}
var (
fileFullEaInformationSize = binary.Size(&fileFullEaInformation{})
errInvalidEaBuffer = errors.New("invalid extended attribute buffer")
errEaNameTooLarge = errors.New("extended attribute name too large")
errEaValueTooLarge = errors.New("extended attribute value too large")
)
// ExtendedAttribute represents a single Windows EA.
type ExtendedAttribute struct {
Name string
Value []byte
Flags uint8
}
func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) {
var info fileFullEaInformation
err = binary.Read(bytes.NewReader(b), binary.LittleEndian, &info)
if err != nil {
err = errInvalidEaBuffer
return
}
nameOffset := fileFullEaInformationSize
nameLen := int(info.NameLength)
valueOffset := nameOffset + int(info.NameLength) + 1
valueLen := int(info.ValueLength)
nextOffset := int(info.NextEntryOffset)
if valueLen+valueOffset > len(b) || nextOffset < 0 || nextOffset > len(b) {
err = errInvalidEaBuffer
return
}
ea.Name = string(b[nameOffset : nameOffset+nameLen])
ea.Value = b[valueOffset : valueOffset+valueLen]
ea.Flags = info.Flags
if info.NextEntryOffset != 0 {
nb = b[info.NextEntryOffset:]
}
return
}
// DecodeExtendedAttributes decodes a list of EAs from a FILE_FULL_EA_INFORMATION
// buffer retrieved from BackupRead, ZwQueryEaFile, etc.
func DecodeExtendedAttributes(b []byte) (eas []ExtendedAttribute, err error) {
for len(b) != 0 {
ea, nb, err := parseEa(b)
if err != nil {
return nil, err
}
eas = append(eas, ea)
b = nb
}
return
}
func writeEa(buf *bytes.Buffer, ea *ExtendedAttribute, last bool) error {
if int(uint8(len(ea.Name))) != len(ea.Name) {
return errEaNameTooLarge
}
if int(uint16(len(ea.Value))) != len(ea.Value) {
return errEaValueTooLarge
}
entrySize := uint32(fileFullEaInformationSize + len(ea.Name) + 1 + len(ea.Value))
withPadding := (entrySize + 3) &^ 3
nextOffset := uint32(0)
if !last {
nextOffset = withPadding
}
info := fileFullEaInformation{
NextEntryOffset: nextOffset,
Flags: ea.Flags,
NameLength: uint8(len(ea.Name)),
ValueLength: uint16(len(ea.Value)),
}
err := binary.Write(buf, binary.LittleEndian, &info)
if err != nil {
return err
}
_, err = buf.Write([]byte(ea.Name))
if err != nil {
return err
}
err = buf.WriteByte(0)
if err != nil {
return err
}
_, err = buf.Write(ea.Value)
if err != nil {
return err
}
_, err = buf.Write([]byte{0, 0, 0}[0 : withPadding-entrySize])
if err != nil {
return err
}
return nil
}
// EncodeExtendedAttributes encodes a list of EAs into a FILE_FULL_EA_INFORMATION
// buffer for use with BackupWrite, ZwSetEaFile, etc.
func EncodeExtendedAttributes(eas []ExtendedAttribute) ([]byte, error) {
var buf bytes.Buffer
for i := range eas {
last := false
if i == len(eas)-1 {
last = true
}
err := writeEa(&buf, &eas[i], last)
if err != nil {
return nil, err
}
}
return buf.Bytes(), nil
}
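A short usage sketch (not part of this diff) of the two functions above; it round-trips a single EA the same way Test_RoundTripEas below does. The EA name is only an example, and the winio package as a whole builds only on Windows.

package main

import (
    "fmt"
    "log"

    "github.com/Microsoft/go-winio"
)

func main() {
    eas := []winio.ExtendedAttribute{
        {Name: "user.comment", Value: []byte("hello")}, // example EA
    }
    buf, err := winio.EncodeExtendedAttributes(eas)
    if err != nil {
        log.Fatal(err)
    }
    decoded, err := winio.DecodeExtendedAttributes(buf)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("encoded %d bytes, decoded %d EA(s), first name %q\n",
        len(buf), len(decoded), decoded[0].Name)
}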


@@ -1,89 +0,0 @@
package winio
import (
"io/ioutil"
"os"
"reflect"
"syscall"
"testing"
"unsafe"
)
var (
testEas = []ExtendedAttribute{
{Name: "foo", Value: []byte("bar")},
{Name: "fizz", Value: []byte("buzz")},
}
testEasEncoded = []byte{16, 0, 0, 0, 0, 3, 3, 0, 102, 111, 111, 0, 98, 97, 114, 0, 0, 0, 0, 0, 0, 4, 4, 0, 102, 105, 122, 122, 0, 98, 117, 122, 122, 0, 0, 0}
testEasNotPadded = testEasEncoded[0 : len(testEasEncoded)-3]
testEasTruncated = testEasEncoded[0:20]
)
func Test_RoundTripEas(t *testing.T) {
b, err := EncodeExtendedAttributes(testEas)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(testEasEncoded, b) {
t.Fatalf("encoded mismatch %v %v", testEasEncoded, b)
}
eas, err := DecodeExtendedAttributes(b)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(testEas, eas) {
t.Fatalf("mismatch %+v %+v", testEas, eas)
}
}
func Test_EasDontNeedPaddingAtEnd(t *testing.T) {
eas, err := DecodeExtendedAttributes(testEasNotPadded)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(testEas, eas) {
t.Fatalf("mismatch %+v %+v", testEas, eas)
}
}
func Test_TruncatedEasFailCorrectly(t *testing.T) {
_, err := DecodeExtendedAttributes(testEasTruncated)
if err == nil {
t.Fatal("expected error")
}
}
func Test_NilEasEncodeAndDecodeAsNil(t *testing.T) {
b, err := EncodeExtendedAttributes(nil)
if err != nil {
t.Fatal(err)
}
if len(b) != 0 {
t.Fatal("expected empty")
}
eas, err := DecodeExtendedAttributes(nil)
if err != nil {
t.Fatal(err)
}
if len(eas) != 0 {
t.Fatal("expected empty")
}
}
// Test_SetFileEa makes sure that the test buffer is actually parsable by NtSetEaFile.
func Test_SetFileEa(t *testing.T) {
f, err := ioutil.TempFile("", "winio")
if err != nil {
t.Fatal(err)
}
defer os.Remove(f.Name())
defer f.Close()
ntdll := syscall.MustLoadDLL("ntdll.dll")
ntSetEaFile := ntdll.MustFindProc("NtSetEaFile")
var iosb [2]uintptr
r, _, _ := ntSetEaFile.Call(f.Fd(), uintptr(unsafe.Pointer(&iosb[0])), uintptr(unsafe.Pointer(&testEasEncoded[0])), uintptr(len(testEasEncoded)))
if r != 0 {
t.Fatalf("NtSetEaFile failed with %08x", r)
}
}

View File

@@ -1,307 +0,0 @@
// +build windows
package winio
import (
"errors"
"io"
"runtime"
"sync"
"sync/atomic"
"syscall"
"time"
)
//sys cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) = CancelIoEx
//sys createIoCompletionPort(file syscall.Handle, port syscall.Handle, key uintptr, threadCount uint32) (newport syscall.Handle, err error) = CreateIoCompletionPort
//sys getQueuedCompletionStatus(port syscall.Handle, bytes *uint32, key *uintptr, o **ioOperation, timeout uint32) (err error) = GetQueuedCompletionStatus
//sys setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err error) = SetFileCompletionNotificationModes
type atomicBool int32
func (b *atomicBool) isSet() bool { return atomic.LoadInt32((*int32)(b)) != 0 }
func (b *atomicBool) setFalse() { atomic.StoreInt32((*int32)(b), 0) }
func (b *atomicBool) setTrue() { atomic.StoreInt32((*int32)(b), 1) }
func (b *atomicBool) swap(new bool) bool {
var newInt int32
if new {
newInt = 1
}
return atomic.SwapInt32((*int32)(b), newInt) == 1
}
const (
cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS = 1
cFILE_SKIP_SET_EVENT_ON_HANDLE = 2
)
var (
ErrFileClosed = errors.New("file has already been closed")
ErrTimeout = &timeoutError{}
)
type timeoutError struct{}
func (e *timeoutError) Error() string { return "i/o timeout" }
func (e *timeoutError) Timeout() bool { return true }
func (e *timeoutError) Temporary() bool { return true }
type timeoutChan chan struct{}
var ioInitOnce sync.Once
var ioCompletionPort syscall.Handle
// ioResult contains the result of an asynchronous IO operation
type ioResult struct {
bytes uint32
err error
}
// ioOperation represents an outstanding asynchronous Win32 IO
type ioOperation struct {
o syscall.Overlapped
ch chan ioResult
}
func initIo() {
h, err := createIoCompletionPort(syscall.InvalidHandle, 0, 0, 0xffffffff)
if err != nil {
panic(err)
}
ioCompletionPort = h
go ioCompletionProcessor(h)
}
// win32File implements Reader, Writer, and Closer on a Win32 handle without blocking in a syscall.
// It takes ownership of this handle and will close it if it is garbage collected.
type win32File struct {
handle syscall.Handle
wg sync.WaitGroup
wgLock sync.RWMutex
closing atomicBool
readDeadline deadlineHandler
writeDeadline deadlineHandler
}
type deadlineHandler struct {
setLock sync.Mutex
channel timeoutChan
channelLock sync.RWMutex
timer *time.Timer
timedout atomicBool
}
// makeWin32File makes a new win32File from an existing file handle
func makeWin32File(h syscall.Handle) (*win32File, error) {
f := &win32File{handle: h}
ioInitOnce.Do(initIo)
_, err := createIoCompletionPort(h, ioCompletionPort, 0, 0xffffffff)
if err != nil {
return nil, err
}
err = setFileCompletionNotificationModes(h, cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS|cFILE_SKIP_SET_EVENT_ON_HANDLE)
if err != nil {
return nil, err
}
f.readDeadline.channel = make(timeoutChan)
f.writeDeadline.channel = make(timeoutChan)
return f, nil
}
func MakeOpenFile(h syscall.Handle) (io.ReadWriteCloser, error) {
return makeWin32File(h)
}
// closeHandle closes the resources associated with a Win32 handle
func (f *win32File) closeHandle() {
f.wgLock.Lock()
// Atomically set that we are closing, releasing the resources only once.
if !f.closing.swap(true) {
f.wgLock.Unlock()
// cancel all IO and wait for it to complete
cancelIoEx(f.handle, nil)
f.wg.Wait()
// at this point, no new IO can start
syscall.Close(f.handle)
f.handle = 0
} else {
f.wgLock.Unlock()
}
}
// Close closes a win32File.
func (f *win32File) Close() error {
f.closeHandle()
return nil
}
// prepareIo prepares for a new IO operation.
// The caller must call f.wg.Done() when the IO is finished, prior to Close() returning.
func (f *win32File) prepareIo() (*ioOperation, error) {
f.wgLock.RLock()
if f.closing.isSet() {
f.wgLock.RUnlock()
return nil, ErrFileClosed
}
f.wg.Add(1)
f.wgLock.RUnlock()
c := &ioOperation{}
c.ch = make(chan ioResult)
return c, nil
}
// ioCompletionProcessor processes completed async IOs forever
func ioCompletionProcessor(h syscall.Handle) {
for {
var bytes uint32
var key uintptr
var op *ioOperation
err := getQueuedCompletionStatus(h, &bytes, &key, &op, syscall.INFINITE)
if op == nil {
panic(err)
}
op.ch <- ioResult{bytes, err}
}
}
// asyncIo processes the return value from ReadFile or WriteFile, blocking until
// the operation has actually completed.
func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, err error) (int, error) {
if err != syscall.ERROR_IO_PENDING {
return int(bytes), err
}
if f.closing.isSet() {
cancelIoEx(f.handle, &c.o)
}
var timeout timeoutChan
if d != nil {
d.channelLock.Lock()
timeout = d.channel
d.channelLock.Unlock()
}
var r ioResult
select {
case r = <-c.ch:
err = r.err
if err == syscall.ERROR_OPERATION_ABORTED {
if f.closing.isSet() {
err = ErrFileClosed
}
}
case <-timeout:
cancelIoEx(f.handle, &c.o)
r = <-c.ch
err = r.err
if err == syscall.ERROR_OPERATION_ABORTED {
err = ErrTimeout
}
}
// runtime.KeepAlive is needed because c is passed via native code to
// ioCompletionProcessor; c must remain alive until the channel read is complete.
runtime.KeepAlive(c)
return int(r.bytes), err
}
// Read reads from a file handle.
func (f *win32File) Read(b []byte) (int, error) {
c, err := f.prepareIo()
if err != nil {
return 0, err
}
defer f.wg.Done()
if f.readDeadline.timedout.isSet() {
return 0, ErrTimeout
}
var bytes uint32
err = syscall.ReadFile(f.handle, b, &bytes, &c.o)
n, err := f.asyncIo(c, &f.readDeadline, bytes, err)
runtime.KeepAlive(b)
// Handle EOF conditions.
if err == nil && n == 0 && len(b) != 0 {
return 0, io.EOF
} else if err == syscall.ERROR_BROKEN_PIPE {
return 0, io.EOF
} else {
return n, err
}
}
// Write writes to a file handle.
func (f *win32File) Write(b []byte) (int, error) {
c, err := f.prepareIo()
if err != nil {
return 0, err
}
defer f.wg.Done()
if f.writeDeadline.timedout.isSet() {
return 0, ErrTimeout
}
var bytes uint32
err = syscall.WriteFile(f.handle, b, &bytes, &c.o)
n, err := f.asyncIo(c, &f.writeDeadline, bytes, err)
runtime.KeepAlive(b)
return n, err
}
func (f *win32File) SetReadDeadline(deadline time.Time) error {
return f.readDeadline.set(deadline)
}
func (f *win32File) SetWriteDeadline(deadline time.Time) error {
return f.writeDeadline.set(deadline)
}
func (f *win32File) Flush() error {
return syscall.FlushFileBuffers(f.handle)
}
func (d *deadlineHandler) set(deadline time.Time) error {
d.setLock.Lock()
defer d.setLock.Unlock()
if d.timer != nil {
if !d.timer.Stop() {
<-d.channel
}
d.timer = nil
}
d.timedout.setFalse()
select {
case <-d.channel:
d.channelLock.Lock()
d.channel = make(chan struct{})
d.channelLock.Unlock()
default:
}
if deadline.IsZero() {
return nil
}
timeoutIO := func() {
d.timedout.setTrue()
close(d.channel)
}
now := time.Now()
duration := deadline.Sub(now)
if deadline.After(now) {
// Deadline is in the future, set a timer to wait
d.timer = time.AfterFunc(duration, timeoutIO)
} else {
// Deadline is in the past. Cancel all pending IO now.
timeoutIO()
}
return nil
}
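
For illustration only: a sketch of wrapping an overlapped handle with MakeOpenFile and applying a read deadline. The file path is hypothetical, the handle must be opened with FILE_FLAG_OVERLAPPED for the completion-port IO above to work, and the snippet only builds on Windows.

package main

import (
	"fmt"
	"syscall"
	"time"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	name, _ := syscall.UTF16PtrFromString(`C:\temp\data.bin`) // hypothetical path
	h, err := syscall.CreateFile(name, syscall.GENERIC_READ, 0, nil,
		syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED, 0)
	if err != nil {
		panic(err)
	}
	f, err := winio.MakeOpenFile(h) // takes ownership of h
	if err != nil {
		syscall.Close(h)
		panic(err)
	}
	defer f.Close()
	// MakeOpenFile returns io.ReadWriteCloser; the deadline setters live on the
	// concrete type, so reach them through a small interface assertion.
	if d, ok := f.(interface{ SetReadDeadline(time.Time) error }); ok {
		d.SetReadDeadline(time.Now().Add(5 * time.Second))
	}
	buf := make([]byte, 16)
	n, err := f.Read(buf) // returns ErrTimeout if the deadline expires first
	fmt.Println(n, err)
}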

View File

@@ -1,61 +0,0 @@
// +build windows
package winio
import (
"os"
"runtime"
"syscall"
"unsafe"
)
//sys getFileInformationByHandleEx(h syscall.Handle, class uint32, buffer *byte, size uint32) (err error) = GetFileInformationByHandleEx
//sys setFileInformationByHandle(h syscall.Handle, class uint32, buffer *byte, size uint32) (err error) = SetFileInformationByHandle
const (
fileBasicInfo = 0
fileIDInfo = 0x12
)
// FileBasicInfo contains file access time and file attributes information.
type FileBasicInfo struct {
CreationTime, LastAccessTime, LastWriteTime, ChangeTime syscall.Filetime
FileAttributes uint32
pad uint32 // padding
}
// GetFileBasicInfo retrieves times and attributes for a file.
func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) {
bi := &FileBasicInfo{}
if err := getFileInformationByHandleEx(syscall.Handle(f.Fd()), fileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
}
runtime.KeepAlive(f)
return bi, nil
}
// SetFileBasicInfo sets times and attributes for a file.
func SetFileBasicInfo(f *os.File, bi *FileBasicInfo) error {
if err := setFileInformationByHandle(syscall.Handle(f.Fd()), fileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
return &os.PathError{Op: "SetFileInformationByHandle", Path: f.Name(), Err: err}
}
runtime.KeepAlive(f)
return nil
}
// FileIDInfo contains the volume serial number and file ID for a file. This pair should be
// unique on a system.
type FileIDInfo struct {
VolumeSerialNumber uint64
FileID [16]byte
}
// GetFileID retrieves the unique (volume, file ID) pair for a file.
func GetFileID(f *os.File) (*FileIDInfo, error) {
fileID := &FileIDInfo{}
if err := getFileInformationByHandleEx(syscall.Handle(f.Fd()), fileIDInfo, (*byte)(unsafe.Pointer(fileID)), uint32(unsafe.Sizeof(*fileID))); err != nil {
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
}
runtime.KeepAlive(f)
return fileID, nil
}
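
For illustration only: reading the basic info and the (volume, file ID) pair for an open file, using the helpers above. The path is hypothetical; any readable file works on a Windows build.

package main

import (
	"fmt"
	"os"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	f, err := os.Open(`C:\Windows\notepad.exe`) // hypothetical path
	if err != nil {
		panic(err)
	}
	defer f.Close()
	bi, err := winio.GetFileBasicInfo(f)
	if err != nil {
		panic(err)
	}
	id, err := winio.GetFileID(f)
	if err != nil {
		panic(err)
	}
	fmt.Printf("attributes=%#x volume=%d\n", bi.FileAttributes, id.VolumeSerialNumber)
}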

View File

@@ -1,15 +0,0 @@
// Package etw provides support for TraceLogging-based ETW (Event Tracing
// for Windows). TraceLogging is a format of ETW events that are self-describing
// (the event contains information on its own schema). This allows them to be
// decoded without needing a separate manifest with event information. The
// implementation here is based on the information found in
// TraceLoggingProvider.h in the Windows SDK, which implements TraceLogging as a
// set of C macros.
package etw
//go:generate go run $GOROOT/src/syscall/mksyscall_windows.go -output zsyscall_windows.go etw.go
//sys eventRegister(providerId *windows.GUID, callback uintptr, callbackContext uintptr, providerHandle *providerHandle) (win32err error) = advapi32.EventRegister
//sys eventUnregister(providerHandle providerHandle) (win32err error) = advapi32.EventUnregister
//sys eventWriteTransfer(providerHandle providerHandle, descriptor *EventDescriptor, activityID *windows.GUID, relatedActivityID *windows.GUID, dataDescriptorCount uint32, dataDescriptors *eventDataDescriptor) (win32err error) = advapi32.EventWriteTransfer
//sys eventSetInformation(providerHandle providerHandle, class eventInfoClass, information uintptr, length uint32) (win32err error) = advapi32.EventSetInformation

View File

@@ -1,65 +0,0 @@
package etw
import (
"bytes"
"encoding/binary"
)
// EventData maintains a buffer which builds up the data for an ETW event. It
// needs to be paired with EventMetadata which describes the event.
type EventData struct {
buffer bytes.Buffer
}
// Bytes returns the raw binary data containing the event data. The returned
// value is not copied from the internal buffer, so it can be mutated by the
// EventData object after it is returned.
func (ed *EventData) Bytes() []byte {
return ed.buffer.Bytes()
}
// WriteString appends a string, including the null terminator, to the buffer.
func (ed *EventData) WriteString(data string) {
ed.buffer.WriteString(data)
ed.buffer.WriteByte(0)
}
// WriteInt8 appends an int8 to the buffer.
func (ed *EventData) WriteInt8(value int8) {
ed.buffer.WriteByte(uint8(value))
}
// WriteInt16 appends an int16 to the buffer.
func (ed *EventData) WriteInt16(value int16) {
binary.Write(&ed.buffer, binary.LittleEndian, value)
}
// WriteInt32 appends an int32 to the buffer.
func (ed *EventData) WriteInt32(value int32) {
binary.Write(&ed.buffer, binary.LittleEndian, value)
}
// WriteInt64 appends an int64 to the buffer.
func (ed *EventData) WriteInt64(value int64) {
binary.Write(&ed.buffer, binary.LittleEndian, value)
}
// WriteUint8 appends a uint8 to the buffer.
func (ed *EventData) WriteUint8(value uint8) {
ed.buffer.WriteByte(value)
}
// WriteUint16 appends a uint16 to the buffer.
func (ed *EventData) WriteUint16(value uint16) {
binary.Write(&ed.buffer, binary.LittleEndian, value)
}
// WriteUint32 appends a uint32 to the buffer.
func (ed *EventData) WriteUint32(value uint32) {
binary.Write(&ed.buffer, binary.LittleEndian, value)
}
// WriteUint64 appends a uint64 to the buffer.
func (ed *EventData) WriteUint64(value uint64) {
binary.Write(&ed.buffer, binary.LittleEndian, value)
}
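
For illustration only: a minimal sketch of how the writers above lay out the data buffer (strings are null-terminated, integers little-endian). It assumes the etw package is importable from within this module, as in the sample program further down.

package main

import (
	"fmt"

	"github.com/Microsoft/go-winio/internal/etw"
)

func main() {
	ed := &etw.EventData{}
	ed.WriteString("hello") // 68 65 6c 6c 6f 00 (trailing null terminator)
	ed.WriteUint32(42)      // 2a 00 00 00 (little-endian)
	fmt.Printf("% x\n", ed.Bytes())
}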

View File

@@ -1,29 +0,0 @@
package etw
import (
"unsafe"
)
type eventDataDescriptorType uint8
const (
eventDataDescriptorTypeUserData eventDataDescriptorType = iota
eventDataDescriptorTypeEventMetadata
eventDataDescriptorTypeProviderMetadata
)
type eventDataDescriptor struct {
ptr ptr64
size uint32
dataType eventDataDescriptorType
reserved1 uint8
reserved2 uint16
}
func newEventDataDescriptor(dataType eventDataDescriptorType, buffer []byte) eventDataDescriptor {
return eventDataDescriptor{
ptr: ptr64{ptr: unsafe.Pointer(&buffer[0])},
size: uint32(len(buffer)),
dataType: dataType,
}
}

View File

@@ -1,67 +0,0 @@
package etw
// Channel represents the ETW logging channel that is used. It can be used by
// event consumers to give an event special treatment.
type Channel uint8
const (
// ChannelTraceLogging is the default channel for TraceLogging events. It is
// not required to be used for TraceLogging, but will prevent decoding
// issues for these events on older operating systems.
ChannelTraceLogging Channel = 11
)
// Level represents the ETW logging level. There are several predefined levels
// that are commonly used, but technically anything from 0-255 is allowed.
// Lower levels indicate more important events, and 0 indicates an event that
// will always be collected.
type Level uint8
// Predefined ETW log levels.
const (
LevelAlways Level = iota
LevelCritical
LevelError
LevelWarning
LevelInfo
LevelVerbose
)
// EventDescriptor represents various metadata for an ETW event.
type EventDescriptor struct {
id uint16
version uint8
Channel Channel
Level Level
Opcode uint8
Task uint16
Keyword uint64
}
// NewEventDescriptor returns an EventDescriptor initialized for use with
// TraceLogging.
func NewEventDescriptor() *EventDescriptor {
// Standard TraceLogging events default to the TraceLogging channel, and
// verbose level.
return &EventDescriptor{
Channel: ChannelTraceLogging,
Level: LevelVerbose,
}
}
// Identity returns the identity of the event. If the identity is not 0, it
// should uniquely identify the other event metadata (contained in
// EventDescriptor, and field metadata). Only the lower 24 bits of this value
// are relevant.
func (ed *EventDescriptor) Identity() uint32 {
return (uint32(ed.version) << 16) | uint32(ed.id)
}
// SetIdentity sets the identity of the event. If the identity is not 0, it
// should uniquely identify the other event metadata (contained in
// EventDescriptor, and field metadata). Only the lower 24 bits of this value
// are relevant.
func (ed *EventDescriptor) SetIdentity(identity uint32) {
ed.id = uint16(identity)
ed.version = uint8(identity >> 16)
}
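
For illustration only: a sketch showing that only the lower 24 bits of an identity survive a SetIdentity/Identity round trip, since the value is split into a 16-bit id and an 8-bit version.

package main

import (
	"fmt"

	"github.com/Microsoft/go-winio/internal/etw"
)

func main() {
	d := etw.NewEventDescriptor()
	d.SetIdentity(0x12ABCDEF)         // id=0xCDEF, version=0xAB; the top byte 0x12 is dropped
	fmt.Printf("%#x\n", d.Identity()) // prints 0xabcdef
}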

View File

@@ -1,177 +0,0 @@
package etw
import (
"bytes"
"encoding/binary"
)
// InType indicates the type of data contained in the ETW event.
type InType byte
// Various InType definitions for TraceLogging. These must match the definitions
// found in TraceLoggingProvider.h in the Windows SDK.
const (
InTypeNull InType = iota
InTypeUnicodeString
InTypeANSIString
InTypeInt8
InTypeUint8
InTypeInt16
InTypeUint16
InTypeInt32
InTypeUint32
InTypeInt64
InTypeUint64
InTypeFloat
InTypeDouble
InTypeBool32
InTypeBinary
InTypeGUID
InTypePointerUnsupported
InTypeFileTime
InTypeSystemTime
InTypeSID
InTypeHexInt32
InTypeHexInt64
InTypeCountedString
InTypeCountedANSIString
InTypeStruct
InTypeCountedBinary
InTypeCountedArray InType = 32
InTypeArray InType = 64
)
// OutType specifies a hint to the event decoder for how the value should be
// formatted.
type OutType byte
// Various OutType definitions for TraceLogging. These must match the
// definitions found in TraceLoggingProvider.h in the Windows SDK.
const (
// OutTypeDefault indicates that the default formatting for the InType will
// be used by the event decoder.
OutTypeDefault OutType = iota
OutTypeNoPrint
OutTypeString
OutTypeBoolean
OutTypeHex
OutTypePID
OutTypeTID
OutTypePort
OutTypeIPv4
OutTypeIPv6
OutTypeSocketAddress
OutTypeXML
OutTypeJSON
OutTypeWin32Error
OutTypeNTStatus
OutTypeHResult
OutTypeFileTime
OutTypeSigned
OutTypeUnsigned
OutTypeUTF8 OutType = 35
OutTypePKCS7WithTypeInfo OutType = 36
OutTypeCodePointer OutType = 37
OutTypeDateTimeUTC OutType = 38
)
// EventMetadata maintains a buffer which builds up the metadata for an ETW
// event. It needs to be paired with EventData which describes the event.
type EventMetadata struct {
buffer bytes.Buffer
}
// Bytes returns the raw binary data containing the event metadata. Before being
// returned, the current size of the buffer is written to the start of the
// buffer. The returned value is not copied from the internal buffer, so it can
// be mutated by the EventMetadata object after it is returned.
func (em *EventMetadata) Bytes() []byte {
// Finalize the event metadata buffer by filling in the buffer length at the
// beginning.
binary.LittleEndian.PutUint16(em.buffer.Bytes(), uint16(em.buffer.Len()))
return em.buffer.Bytes()
}
// WriteEventHeader writes the metadata for the start of an event to the buffer.
// This specifies the event name and tags.
func (em *EventMetadata) WriteEventHeader(name string, tags uint32) {
binary.Write(&em.buffer, binary.LittleEndian, uint16(0)) // Length placeholder
em.writeTags(tags)
em.buffer.WriteString(name)
em.buffer.WriteByte(0) // Null terminator for name
}
func (em *EventMetadata) writeField(name string, inType InType, outType OutType, tags uint32, arrSize uint16) {
em.buffer.WriteString(name)
em.buffer.WriteByte(0) // Null terminator for name
if outType == OutTypeDefault && tags == 0 {
em.buffer.WriteByte(byte(inType))
} else {
em.buffer.WriteByte(byte(inType | 128))
if tags == 0 {
em.buffer.WriteByte(byte(outType))
} else {
em.buffer.WriteByte(byte(outType | 128))
em.writeTags(tags)
}
}
if arrSize != 0 {
binary.Write(&em.buffer, binary.LittleEndian, arrSize)
}
}
// writeTags writes out the tags value to the event metadata. Tags is a 28-bit
// value, interpreted as bit flags, which are only relevant to the event
// consumer. The event consumer may choose to attribute special meaning to tags
// (e.g. 0x4 could mean the field contains PII). Tags are written as a series of
// bytes, each containing 7 bits of tag value, with the high bit set if there is
// more tag data in the following byte. This allows for a more compact
// representation when not all of the tag bits are needed.
func (em *EventMetadata) writeTags(tags uint32) {
// Only the low 28 bits of the tags value are used; the top 4 bits are discarded.
tags &= 0xfffffff
for {
// Tags are written with the most significant bits (e.g. 21-27) first.
val := tags >> 21
if tags&0x1fffff == 0 {
// If there is no more data to write after this, write this value
// without the high bit set, and return.
em.buffer.WriteByte(byte(val & 0x7f))
return
}
em.buffer.WriteByte(byte(val | 0x80))
tags <<= 7
}
}
// WriteField writes the metadata for a simple field to the buffer.
func (em *EventMetadata) WriteField(name string, inType InType, outType OutType, tags uint32) {
em.writeField(name, inType, outType, tags, 0)
}
// WriteArray writes the metadata for an array field to the buffer. The number
// of elements in the array must be written as a uint16 in the event data,
// immediately preceding the array elements.
func (em *EventMetadata) WriteArray(name string, inType InType, outType OutType, tags uint32) {
em.writeField(name, inType|InTypeArray, outType, tags, 0)
}
// WriteCountedArray writes the metadata for an array field to the buffer. The
// size of a counted array is fixed, and the size is written into the metadata
// directly.
func (em *EventMetadata) WriteCountedArray(name string, count uint16, inType InType, outType OutType, tags uint32) {
em.writeField(name, inType|InTypeCountedArray, outType, tags, count)
}
// WriteStruct writes the metadata for a nested struct to the buffer. The struct
// contains the next N fields in the metadata, where N is specified by the
// fieldCount argument.
func (em *EventMetadata) WriteStruct(name string, fieldCount uint8, tags uint32) {
em.writeField(name, InTypeStruct, OutType(fieldCount), tags, 0)
}
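
For illustration only: a standalone sketch that mirrors the unexported writeTags encoding described above, so the byte layout can be inspected without an ETW session. The function copies the algorithm; it is not part of the package API.

package main

import "fmt"

// encodeTags mirrors EventMetadata.writeTags: the 28-bit tag value is emitted
// most-significant 7 bits first, with the high bit of each byte set while more
// non-zero bits remain in the lower positions.
func encodeTags(tags uint32) []byte {
	tags &= 0xfffffff
	var out []byte
	for {
		val := tags >> 21
		if tags&0x1fffff == 0 {
			return append(out, byte(val&0x7f))
		}
		out = append(out, byte(val|0x80))
		tags <<= 7
	}
}

func main() {
	fmt.Printf("% x\n", encodeTags(0x8000000)) // highest tag bit: a single byte, 40
	fmt.Printf("% x\n", encodeTags(0x1))       // lowest tag bit: 80 80 80 01
}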

View File

@@ -1,63 +0,0 @@
package etw
import (
"golang.org/x/sys/windows"
)
type eventOptions struct {
descriptor *EventDescriptor
activityID *windows.GUID
relatedActivityID *windows.GUID
tags uint32
}
// EventOpt defines the option function type that can be passed to
// Provider.WriteEvent to specify general event options, such as level and
// keyword.
type EventOpt func(options *eventOptions)
// WithEventOpts returns the variadic arguments as a single slice.
func WithEventOpts(opts ...EventOpt) []EventOpt {
return opts
}
// WithLevel specifies the level of the event to be written.
func WithLevel(level Level) EventOpt {
return func(options *eventOptions) {
options.descriptor.Level = level
}
}
// WithKeyword specifies the keywords of the event to be written. Multiple uses
// of this option are OR'd together.
func WithKeyword(keyword uint64) EventOpt {
return func(options *eventOptions) {
options.descriptor.Keyword |= keyword
}
}
func WithChannel(channel Channel) EventOpt {
return func(options *eventOptions) {
options.descriptor.Channel = channel
}
}
// WithTags specifies the tags of the event to be written. Tags is a 28-bit
// value (top 4 bits are ignored) which is interpreted by the event consumer.
func WithTags(newTags uint32) EventOpt {
return func(options *eventOptions) {
options.tags |= newTags
}
}
func WithActivityID(activityID *windows.GUID) EventOpt {
return func(options *eventOptions) {
options.activityID = activityID
}
}
func WithRelatedActivityID(activityID *windows.GUID) EventOpt {
return func(options *eventOptions) {
options.relatedActivityID = activityID
}
}

View File

@@ -1,379 +0,0 @@
package etw
import (
"math"
"unsafe"
)
// FieldOpt defines the option function type that can be passed to
// Provider.WriteEvent to add fields to the event.
type FieldOpt func(em *EventMetadata, ed *EventData)
// WithFields returns the variadic arguments as a single slice.
func WithFields(opts ...FieldOpt) []FieldOpt {
return opts
}
// BoolField adds a single bool field to the event.
func BoolField(name string, value bool) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeUint8, OutTypeBoolean, 0)
bool8 := uint8(0)
if value {
bool8 = uint8(1)
}
ed.WriteUint8(bool8)
}
}
// BoolArray adds an array of bool to the event.
func BoolArray(name string, values []bool) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeUint8, OutTypeBoolean, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
bool8 := uint8(0)
if v {
bool8 = uint8(1)
}
ed.WriteUint8(bool8)
}
}
}
// StringField adds a single string field to the event.
func StringField(name string, value string) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeANSIString, OutTypeUTF8, 0)
ed.WriteString(value)
}
}
// StringArray adds an array of string to the event.
func StringArray(name string, values []string) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeANSIString, OutTypeUTF8, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteString(v)
}
}
}
// IntField adds a single int field to the event.
func IntField(name string, value int) FieldOpt {
switch unsafe.Sizeof(value) {
case 4:
return Int32Field(name, int32(value))
case 8:
return Int64Field(name, int64(value))
default:
panic("Unsupported int size")
}
}
// IntArray adds an array of int to the event.
func IntArray(name string, values []int) FieldOpt {
inType := InTypeNull
var writeItem func(*EventData, int)
switch unsafe.Sizeof(values[0]) {
case 4:
inType = InTypeInt32
writeItem = func(ed *EventData, item int) { ed.WriteInt32(int32(item)) }
case 8:
inType = InTypeInt64
writeItem = func(ed *EventData, item int) { ed.WriteInt64(int64(item)) }
default:
panic("Unsupported int size")
}
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, inType, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
writeItem(ed, v)
}
}
}
// Int8Field adds a single int8 field to the event.
func Int8Field(name string, value int8) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeInt8, OutTypeDefault, 0)
ed.WriteInt8(value)
}
}
// Int8Array adds an array of int8 to the event.
func Int8Array(name string, values []int8) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeInt8, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteInt8(v)
}
}
}
// Int16Field adds a single int16 field to the event.
func Int16Field(name string, value int16) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeInt16, OutTypeDefault, 0)
ed.WriteInt16(value)
}
}
// Int16Array adds an array of int16 to the event.
func Int16Array(name string, values []int16) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeInt16, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteInt16(v)
}
}
}
// Int32Field adds a single int32 field to the event.
func Int32Field(name string, value int32) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeInt32, OutTypeDefault, 0)
ed.WriteInt32(value)
}
}
// Int32Array adds an array of int32 to the event.
func Int32Array(name string, values []int32) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeInt32, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteInt32(v)
}
}
}
// Int64Field adds a single int64 field to the event.
func Int64Field(name string, value int64) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeInt64, OutTypeDefault, 0)
ed.WriteInt64(value)
}
}
// Int64Array adds an array of int64 to the event.
func Int64Array(name string, values []int64) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeInt64, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteInt64(v)
}
}
}
// UintField adds a single uint field to the event.
func UintField(name string, value uint) FieldOpt {
switch unsafe.Sizeof(value) {
case 4:
return Uint32Field(name, uint32(value))
case 8:
return Uint64Field(name, uint64(value))
default:
panic("Unsupported uint size")
}
}
// UintArray adds an array of uint to the event.
func UintArray(name string, values []uint) FieldOpt {
inType := InTypeNull
var writeItem func(*EventData, uint)
switch unsafe.Sizeof(values[0]) {
case 4:
inType = InTypeUint32
writeItem = func(ed *EventData, item uint) { ed.WriteUint32(uint32(item)) }
case 8:
inType = InTypeUint64
writeItem = func(ed *EventData, item uint) { ed.WriteUint64(uint64(item)) }
default:
panic("Unsupported uint size")
}
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, inType, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
writeItem(ed, v)
}
}
}
// Uint8Field adds a single uint8 field to the event.
func Uint8Field(name string, value uint8) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeUint8, OutTypeDefault, 0)
ed.WriteUint8(value)
}
}
// Uint8Array adds an array of uint8 to the event.
func Uint8Array(name string, values []uint8) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeUint8, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteUint8(v)
}
}
}
// Uint16Field adds a single uint16 field to the event.
func Uint16Field(name string, value uint16) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeUint16, OutTypeDefault, 0)
ed.WriteUint16(value)
}
}
// Uint16Array adds an array of uint16 to the event.
func Uint16Array(name string, values []uint16) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeUint16, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteUint16(v)
}
}
}
// Uint32Field adds a single uint32 field to the event.
func Uint32Field(name string, value uint32) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeUint32, OutTypeDefault, 0)
ed.WriteUint32(value)
}
}
// Uint32Array adds an array of uint32 to the event.
func Uint32Array(name string, values []uint32) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeUint32, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteUint32(v)
}
}
}
// Uint64Field adds a single uint64 field to the event.
func Uint64Field(name string, value uint64) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeUint64, OutTypeDefault, 0)
ed.WriteUint64(value)
}
}
// Uint64Array adds an array of uint64 to the event.
func Uint64Array(name string, values []uint64) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeUint64, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteUint64(v)
}
}
}
// UintptrField adds a single uintptr field to the event.
func UintptrField(name string, value uintptr) FieldOpt {
inType := InTypeNull
var writeItem func(*EventData, uintptr)
switch unsafe.Sizeof(value) {
case 4:
inType = InTypeHexInt32
writeItem = func(ed *EventData, item uintptr) { ed.WriteUint32(uint32(item)) }
case 8:
inType = InTypeHexInt64
writeItem = func(ed *EventData, item uintptr) { ed.WriteUint64(uint64(item)) }
default:
panic("Unsupported uintptr size")
}
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, inType, OutTypeDefault, 0)
writeItem(ed, value)
}
}
// UintptrArray adds an array of uintptr to the event.
func UintptrArray(name string, values []uintptr) FieldOpt {
inType := InTypeNull
var writeItem func(*EventData, uintptr)
switch unsafe.Sizeof(values[0]) {
case 4:
inType = InTypeHexInt32
writeItem = func(ed *EventData, item uintptr) { ed.WriteUint32(uint32(item)) }
case 8:
inType = InTypeHexInt64
writeItem = func(ed *EventData, item uintptr) { ed.WriteUint64(uint64(item)) }
default:
panic("Unsupported uintptr size")
}
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, inType, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
writeItem(ed, v)
}
}
}
// Float32Field adds a single float32 field to the event.
func Float32Field(name string, value float32) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeFloat, OutTypeDefault, 0)
ed.WriteUint32(math.Float32bits(value))
}
}
// Float32Array adds an array of float32 to the event.
func Float32Array(name string, values []float32) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeFloat, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteUint32(math.Float32bits(v))
}
}
}
// Float64Field adds a single float64 field to the event.
func Float64Field(name string, value float64) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteField(name, InTypeDouble, OutTypeDefault, 0)
ed.WriteUint64(math.Float64bits(value))
}
}
// Float64Array adds an array of float64 to the event.
func Float64Array(name string, values []float64) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteArray(name, InTypeDouble, OutTypeDefault, 0)
ed.WriteUint16(uint16(len(values)))
for _, v := range values {
ed.WriteUint64(math.Float64bits(v))
}
}
}
// Struct adds a nested struct to the event. The FieldOpts in the opts argument
// specify the fields of the struct.
func Struct(name string, opts ...FieldOpt) FieldOpt {
return func(em *EventMetadata, ed *EventData) {
em.WriteStruct(name, uint8(len(opts)), 0)
for _, opt := range opts {
opt(em, ed)
}
}
}
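
For illustration only: a FieldOpt is just a func(*EventMetadata, *EventData), so the helpers above can be applied directly to a metadata/data pair to inspect what they emit, without registering a provider. Assumes the etw package is importable from within this module.

package main

import (
	"fmt"

	"github.com/Microsoft/go-winio/internal/etw"
)

func main() {
	em := &etw.EventMetadata{}
	ed := &etw.EventData{}
	em.WriteEventHeader("Sample", 0)
	// Applying a FieldOpt writes the field description into em and the value into ed.
	etw.Float64Field("Pi", 3.14159)(em, ed)
	etw.StringArray("Names", []string{"a", "b"})(em, ed)
	fmt.Printf("metadata: % x\ndata:     % x\n", em.Bytes(), ed.Bytes())
}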

View File

@@ -1,279 +0,0 @@
package etw
import (
"bytes"
"crypto/sha1"
"encoding/binary"
"encoding/hex"
"fmt"
"strings"
"unicode/utf16"
"unsafe"
"golang.org/x/sys/windows"
)
// Provider represents an ETW event provider. It is identified by a provider
// name and ID (GUID), which should always have a 1:1 mapping to each other
// (e.g. don't use multiple provider names with the same ID, or vice versa).
type Provider struct {
ID *windows.GUID
handle providerHandle
metadata []byte
callback EnableCallback
index uint
enabled bool
level Level
keywordAny uint64
keywordAll uint64
}
// String returns the provider's ID formatted as a GUID string.
func (provider *Provider) String() string {
data1 := make([]byte, 4)
binary.BigEndian.PutUint32(data1, provider.ID.Data1)
data2 := make([]byte, 2)
binary.BigEndian.PutUint16(data2, provider.ID.Data2)
data3 := make([]byte, 2)
binary.BigEndian.PutUint16(data3, provider.ID.Data3)
return fmt.Sprintf(
"%s-%s-%s-%s-%s",
hex.EncodeToString(data1),
hex.EncodeToString(data2),
hex.EncodeToString(data3),
hex.EncodeToString(provider.ID.Data4[:2]),
hex.EncodeToString(provider.ID.Data4[2:]))
}
type providerHandle windows.Handle
// ProviderState informs the provider EnableCallback what action is being
// performed.
type ProviderState uint32
const (
// ProviderStateDisable indicates the provider is being disabled.
ProviderStateDisable ProviderState = iota
// ProviderStateEnable indicates the provider is being enabled.
ProviderStateEnable
// ProviderStateCaptureState indicates the provider is having its current
// state snap-shotted.
ProviderStateCaptureState
)
type eventInfoClass uint32
const (
eventInfoClassProviderBinaryTrackInfo eventInfoClass = iota
eventInfoClassProviderSetReserved1
eventInfoClassProviderSetTraits
eventInfoClassProviderUseDescriptorType
)
// EnableCallback is the form of the callback function that receives provider
// enable/disable notifications from ETW.
type EnableCallback func(*windows.GUID, ProviderState, Level, uint64, uint64, uintptr)
func providerCallback(sourceID *windows.GUID, state ProviderState, level Level, matchAnyKeyword uint64, matchAllKeyword uint64, filterData uintptr, i uintptr) {
provider := providers.getProvider(uint(i))
switch state {
case ProviderStateDisable:
provider.enabled = false
case ProviderStateEnable:
provider.enabled = true
provider.level = level
provider.keywordAny = matchAnyKeyword
provider.keywordAll = matchAllKeyword
}
if provider.callback != nil {
provider.callback(sourceID, state, level, matchAnyKeyword, matchAllKeyword, filterData)
}
}
// providerCallbackAdapter acts as the first-level callback from the C/ETW side
// for provider notifications. Because Go has trouble with callback arguments of
// different size, it has only pointer-sized arguments, which are then cast to
// the appropriate types when calling providerCallback.
func providerCallbackAdapter(sourceID *windows.GUID, state uintptr, level uintptr, matchAnyKeyword uintptr, matchAllKeyword uintptr, filterData uintptr, i uintptr) uintptr {
providerCallback(sourceID, ProviderState(state), Level(level), uint64(matchAnyKeyword), uint64(matchAllKeyword), filterData, i)
return 0
}
// providerIDFromName generates a provider ID based on the provider name. It
// uses the same algorithm as used by .NET's EventSource class, which is based
// on RFC 4122. More information on the algorithm can be found here:
// https://blogs.msdn.microsoft.com/dcook/2015/09/08/etw-provider-names-and-guids/
// The algorithm is roughly:
// Hash = Sha1(namespace + arg.ToUpper().ToUtf16be())
// Guid = Hash[0..15], with Hash[7] tweaked according to RFC 4122
func providerIDFromName(name string) *windows.GUID {
buffer := sha1.New()
namespace := []byte{0x48, 0x2C, 0x2D, 0xB2, 0xC3, 0x90, 0x47, 0xC8, 0x87, 0xF8, 0x1A, 0x15, 0xBF, 0xC1, 0x30, 0xFB}
buffer.Write(namespace)
binary.Write(buffer, binary.BigEndian, utf16.Encode([]rune(strings.ToUpper(name))))
sum := buffer.Sum(nil)
sum[7] = (sum[7] & 0xf) | 0x50
return &windows.GUID{
Data1: binary.LittleEndian.Uint32(sum[0:4]),
Data2: binary.LittleEndian.Uint16(sum[4:6]),
Data3: binary.LittleEndian.Uint16(sum[6:8]),
Data4: [8]byte{sum[8], sum[9], sum[10], sum[11], sum[12], sum[13], sum[14], sum[15]},
}
}
// NewProvider creates and registers a new ETW provider. The provider ID is
// generated based on the provider name.
func NewProvider(name string, callback EnableCallback) (provider *Provider, err error) {
return NewProviderWithID(name, providerIDFromName(name), callback)
}
// NewProviderWithID creates and registers a new ETW provider, allowing the
// provider ID to be manually specified. This is most useful when there is an
// existing provider ID that must be used to conform to existing diagnostic
// infrastructure.
func NewProviderWithID(name string, id *windows.GUID, callback EnableCallback) (provider *Provider, err error) {
providerCallbackOnce.Do(func() {
globalProviderCallback = windows.NewCallback(providerCallbackAdapter)
})
provider = providers.newProvider()
defer func() {
if err != nil {
providers.removeProvider(provider)
}
}()
provider.ID = id
provider.callback = callback
if err := eventRegister(provider.ID, globalProviderCallback, uintptr(provider.index), &provider.handle); err != nil {
return nil, err
}
metadata := &bytes.Buffer{}
binary.Write(metadata, binary.LittleEndian, uint16(0)) // Write empty size for buffer (to update later)
metadata.WriteString(name)
metadata.WriteByte(0) // Null terminator for name
binary.LittleEndian.PutUint16(metadata.Bytes(), uint16(metadata.Len())) // Update the size at the beginning of the buffer
provider.metadata = metadata.Bytes()
if err := eventSetInformation(
provider.handle,
eventInfoClassProviderSetTraits,
uintptr(unsafe.Pointer(&provider.metadata[0])),
uint32(len(provider.metadata))); err != nil {
return nil, err
}
return provider, nil
}
// Close unregisters the provider.
func (provider *Provider) Close() error {
providers.removeProvider(provider)
return eventUnregister(provider.handle)
}
// IsEnabled calls IsEnabledForLevelAndKeywords with LevelAlways and all
// keywords set.
func (provider *Provider) IsEnabled() bool {
return provider.IsEnabledForLevelAndKeywords(LevelAlways, ^uint64(0))
}
// IsEnabledForLevel calls IsEnabledForLevelAndKeywords with the specified level
// and all keywords set.
func (provider *Provider) IsEnabledForLevel(level Level) bool {
return provider.IsEnabledForLevelAndKeywords(level, ^uint64(0))
}
// IsEnabledForLevelAndKeywords allows event producer code to check if there are
// any event sessions that are interested in an event, based on the event level
// and keywords. Although this check happens automatically in the ETW
// infrastructure, it can be useful to check if an event will actually be
// consumed before doing expensive work to build the event data.
func (provider *Provider) IsEnabledForLevelAndKeywords(level Level, keywords uint64) bool {
if !provider.enabled {
return false
}
// ETW automatically sets the level to 255 if it is specified as 0, so we
// don't need to worry about the level=0 (all events) case.
if level > provider.level {
return false
}
if keywords != 0 && (keywords&provider.keywordAny == 0 || keywords&provider.keywordAll != provider.keywordAll) {
return false
}
return true
}
// WriteEvent writes a single ETW event from the provider. The event is
// constructed based on the EventOpt and FieldOpt values that are passed as
// opts.
func (provider *Provider) WriteEvent(name string, eventOpts []EventOpt, fieldOpts []FieldOpt) error {
options := eventOptions{descriptor: NewEventDescriptor()}
em := &EventMetadata{}
ed := &EventData{}
// We need to evaluate the EventOpts first since they might change tags, and
// we write out the tags before evaluating FieldOpts.
for _, opt := range eventOpts {
opt(&options)
}
if !provider.IsEnabledForLevelAndKeywords(options.descriptor.Level, options.descriptor.Keyword) {
return nil
}
em.WriteEventHeader(name, options.tags)
for _, opt := range fieldOpts {
opt(em, ed)
}
// Don't pass a data blob if there is no event data. There will always be
// event metadata (e.g. for the name) so we don't need to do this check for
// the metadata.
dataBlobs := [][]byte{}
if len(ed.Bytes()) > 0 {
dataBlobs = [][]byte{ed.Bytes()}
}
return provider.WriteEventRaw(options.descriptor, nil, nil, [][]byte{em.Bytes()}, dataBlobs)
}
// WriteEventRaw writes a single ETW event from the provider. This function is
// less abstracted than WriteEvent, and presents a fairly direct interface to
// the event writing functionality. It expects a series of event metadata and
// event data blobs to be passed in, which must conform to the TraceLogging
// schema. The functions on EventMetadata and EventData can help with creating
// these blobs. The blobs of each type are effectively concatenated together by
// the ETW infrastructure.
func (provider *Provider) WriteEventRaw(
descriptor *EventDescriptor,
activityID *windows.GUID,
relatedActivityID *windows.GUID,
metadataBlobs [][]byte,
dataBlobs [][]byte) error {
dataDescriptorCount := uint32(1 + len(metadataBlobs) + len(dataBlobs))
dataDescriptors := make([]eventDataDescriptor, 0, dataDescriptorCount)
dataDescriptors = append(dataDescriptors, newEventDataDescriptor(eventDataDescriptorTypeProviderMetadata, provider.metadata))
for _, blob := range metadataBlobs {
dataDescriptors = append(dataDescriptors, newEventDataDescriptor(eventDataDescriptorTypeEventMetadata, blob))
}
for _, blob := range dataBlobs {
dataDescriptors = append(dataDescriptors, newEventDataDescriptor(eventDataDescriptorTypeUserData, blob))
}
return eventWriteTransfer(provider.handle, descriptor, activityID, relatedActivityID, dataDescriptorCount, &dataDescriptors[0])
}
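
For illustration only: the enabled-check described above in practice, skipping expensive field construction when no session is listening. The provider name and keyword bit are hypothetical; the package is internal to this module, as in the sample program further down.

package main

import "github.com/Microsoft/go-winio/internal/etw"

const keywordDetail = 0x10 // hypothetical keyword bit

func main() {
	provider, err := etw.NewProvider("DemoProvider", nil) // hypothetical provider name
	if err != nil {
		panic(err)
	}
	defer provider.Close()
	// Only build and write the event when a session actually wants it.
	if provider.IsEnabledForLevelAndKeywords(etw.LevelVerbose, keywordDetail) {
		provider.WriteEvent("DetailedEvent",
			etw.WithEventOpts(etw.WithLevel(etw.LevelVerbose), etw.WithKeyword(keywordDetail)),
			etw.WithFields(etw.StringField("Payload", buildExpensivePayload())),
		)
	}
}

// buildExpensivePayload stands in for work that is only worth doing when the
// event will actually be consumed.
func buildExpensivePayload() string { return "..." }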

View File

@@ -1,52 +0,0 @@
package etw
import (
"sync"
)
// Because the provider callback function needs to be able to access the
// provider data when it is invoked by ETW, we need to keep provider data stored
// in a global map based on an index. The index is passed as the callback
// context to ETW.
type providerMap struct {
m map[uint]*Provider
i uint
lock sync.Mutex
once sync.Once
}
var providers = providerMap{
m: make(map[uint]*Provider),
}
func (p *providerMap) newProvider() *Provider {
p.lock.Lock()
defer p.lock.Unlock()
i := p.i
p.i++
provider := &Provider{
index: i,
}
p.m[i] = provider
return provider
}
func (p *providerMap) removeProvider(provider *Provider) {
p.lock.Lock()
defer p.lock.Unlock()
delete(p.m, provider.index)
}
func (p *providerMap) getProvider(index uint) *Provider {
p.lock.Lock()
defer p.lock.Unlock()
return p.m[index]
}
var providerCallbackOnce sync.Once
var globalProviderCallback uintptr

View File

@@ -1,16 +0,0 @@
// +build 386 arm
package etw
import (
"unsafe"
)
// ptr64 defines a struct containing a pointer. The struct is guaranteed to
// be 64 bits, regardless of the actual size of a pointer on the platform. This
// is intended for use with certain Windows APIs that expect a pointer as a
// ULONGLONG.
type ptr64 struct {
ptr unsafe.Pointer
_ uint32
}

View File

@@ -1,15 +0,0 @@
// +build amd64 arm64
package etw
import (
"unsafe"
)
// ptr64 defines a struct containing a pointer. The struct is guaranteed to
// be 64 bits, regardless of the actual size of a pointer on the platform. This
// is intended for use with certain Windows APIs that expect a pointer as a
// ULONGLONG.
type ptr64 struct {
ptr unsafe.Pointer
}

View File

@@ -1,91 +0,0 @@
// Shows a sample usage of the ETW logging package.
package main
import (
"bufio"
"fmt"
"os"
"github.com/Microsoft/go-winio/internal/etw"
"github.com/sirupsen/logrus"
"golang.org/x/sys/windows"
)
func callback(sourceID *windows.GUID, state etw.ProviderState, level etw.Level, matchAnyKeyword uint64, matchAllKeyword uint64, filterData uintptr) {
fmt.Printf("Callback: isEnabled=%d, level=%d, matchAnyKeyword=%d\n", state, level, matchAnyKeyword)
}
func main() {
provider, err := etw.NewProvider("TestProvider", callback)
if err != nil {
logrus.Error(err)
return
}
defer func() {
if err := provider.Close(); err != nil {
logrus.Error(err)
}
}()
fmt.Printf("Provider ID: %s\n", provider)
reader := bufio.NewReader(os.Stdin)
fmt.Println("Press enter to log events")
reader.ReadString('\n')
// Write using high-level API.
if err := provider.WriteEvent(
"TestEvent",
etw.WithEventOpts(
etw.WithLevel(etw.LevelInfo),
etw.WithKeyword(0x140),
),
etw.WithFields(
etw.StringField("TestField", "Foo"),
etw.StringField("TestField2", "Bar"),
etw.Struct("TestStruct",
etw.StringField("Field1", "Value1"),
etw.StringField("Field2", "Value2")),
etw.StringArray("TestArray", []string{
"Item1",
"Item2",
"Item3",
"Item4",
"Item5",
})),
); err != nil {
logrus.Error(err)
return
}
// Write using low-level API.
descriptor := etw.NewEventDescriptor()
descriptor.Level = etw.LevelInfo
descriptor.Keyword = 0x140
em := &etw.EventMetadata{}
ed := &etw.EventData{}
em.WriteEventHeader("TestEvent", 0)
em.WriteField("TestField", etw.InTypeANSIString, etw.OutTypeUTF8, 0)
ed.WriteString("Foo")
em.WriteField("TestField2", etw.InTypeANSIString, etw.OutTypeUTF8, 0)
ed.WriteString("Bar")
em.WriteStruct("TestStruct", 2, 0)
em.WriteField("Field1", etw.InTypeANSIString, etw.OutTypeUTF8, 0)
ed.WriteString("Value1")
em.WriteField("Field2", etw.InTypeANSIString, etw.OutTypeUTF8, 0)
ed.WriteString("Value2")
em.WriteArray("TestArray", etw.InTypeANSIString, etw.OutTypeDefault, 0)
ed.WriteUint16(5)
ed.WriteString("Item1")
ed.WriteString("Item2")
ed.WriteString("Item3")
ed.WriteString("Item4")
ed.WriteString("Item5")
if err := provider.WriteEventRaw(descriptor, nil, nil, [][]byte{em.Bytes()}, [][]byte{ed.Bytes()}); err != nil {
logrus.Error(err)
return
}
}

View File

@@ -1,78 +0,0 @@
// Code generated by 'go generate'; DO NOT EDIT.
package etw
import (
"syscall"
"unsafe"
"golang.org/x/sys/windows"
)
var _ unsafe.Pointer
// Do the interface allocations only once for common
// Errno values.
const (
errnoERROR_IO_PENDING = 997
)
var (
errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
)
// errnoErr returns common boxed Errno values, to prevent
// allocations at runtime.
func errnoErr(e syscall.Errno) error {
switch e {
case 0:
return nil
case errnoERROR_IO_PENDING:
return errERROR_IO_PENDING
}
// TODO: add more here, after collecting data on the common
// error values seen on Windows. (perhaps when running
// all.bat?)
return e
}
var (
modadvapi32 = windows.NewLazySystemDLL("advapi32.dll")
procEventRegister = modadvapi32.NewProc("EventRegister")
procEventUnregister = modadvapi32.NewProc("EventUnregister")
procEventWriteTransfer = modadvapi32.NewProc("EventWriteTransfer")
procEventSetInformation = modadvapi32.NewProc("EventSetInformation")
)
func eventRegister(providerId *windows.GUID, callback uintptr, callbackContext uintptr, providerHandle *providerHandle) (win32err error) {
r0, _, _ := syscall.Syscall6(procEventRegister.Addr(), 4, uintptr(unsafe.Pointer(providerId)), uintptr(callback), uintptr(callbackContext), uintptr(unsafe.Pointer(providerHandle)), 0, 0)
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}
func eventUnregister(providerHandle providerHandle) (win32err error) {
r0, _, _ := syscall.Syscall(procEventUnregister.Addr(), 1, uintptr(providerHandle), 0, 0)
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}
func eventWriteTransfer(providerHandle providerHandle, descriptor *EventDescriptor, activityID *windows.GUID, relatedActivityID *windows.GUID, dataDescriptorCount uint32, dataDescriptors *eventDataDescriptor) (win32err error) {
r0, _, _ := syscall.Syscall6(procEventWriteTransfer.Addr(), 6, uintptr(providerHandle), uintptr(unsafe.Pointer(descriptor)), uintptr(unsafe.Pointer(activityID)), uintptr(unsafe.Pointer(relatedActivityID)), uintptr(dataDescriptorCount), uintptr(unsafe.Pointer(dataDescriptors)))
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}
func eventSetInformation(providerHandle providerHandle, class eventInfoClass, information uintptr, length uint32) (win32err error) {
r0, _, _ := syscall.Syscall6(procEventSetInformation.Addr(), 4, uintptr(providerHandle), uintptr(class), uintptr(information), uintptr(length), 0, 0)
if r0 != 0 {
win32err = syscall.Errno(r0)
}
return
}

View File

@@ -1,421 +0,0 @@
// +build windows
package winio
import (
"errors"
"io"
"net"
"os"
"syscall"
"time"
"unsafe"
)
//sys connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) = ConnectNamedPipe
//sys createNamedPipe(name string, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *syscall.SecurityAttributes) (handle syscall.Handle, err error) [failretval==syscall.InvalidHandle] = CreateNamedPipeW
//sys createFile(name string, access uint32, mode uint32, sa *syscall.SecurityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) [failretval==syscall.InvalidHandle] = CreateFileW
//sys getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) = GetNamedPipeInfo
//sys getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) = GetNamedPipeHandleStateW
//sys localAlloc(uFlags uint32, length uint32) (ptr uintptr) = LocalAlloc
const (
cERROR_PIPE_BUSY = syscall.Errno(231)
cERROR_NO_DATA = syscall.Errno(232)
cERROR_PIPE_CONNECTED = syscall.Errno(535)
cERROR_SEM_TIMEOUT = syscall.Errno(121)
cPIPE_ACCESS_DUPLEX = 0x3
cFILE_FLAG_FIRST_PIPE_INSTANCE = 0x80000
cSECURITY_SQOS_PRESENT = 0x100000
cSECURITY_ANONYMOUS = 0
cPIPE_REJECT_REMOTE_CLIENTS = 0x8
cPIPE_UNLIMITED_INSTANCES = 255
cNMPWAIT_USE_DEFAULT_WAIT = 0
cNMPWAIT_NOWAIT = 1
cPIPE_TYPE_MESSAGE = 4
cPIPE_READMODE_MESSAGE = 2
)
var (
// ErrPipeListenerClosed is returned for pipe operations on listeners that have been closed.
// This error should match net.errClosing since docker takes a dependency on its text.
ErrPipeListenerClosed = errors.New("use of closed network connection")
errPipeWriteClosed = errors.New("pipe has been closed for write")
)
type win32Pipe struct {
*win32File
path string
}
type win32MessageBytePipe struct {
win32Pipe
writeClosed bool
readEOF bool
}
type pipeAddress string
func (f *win32Pipe) LocalAddr() net.Addr {
return pipeAddress(f.path)
}
func (f *win32Pipe) RemoteAddr() net.Addr {
return pipeAddress(f.path)
}
func (f *win32Pipe) SetDeadline(t time.Time) error {
f.SetReadDeadline(t)
f.SetWriteDeadline(t)
return nil
}
// CloseWrite closes the write side of a message pipe in byte mode.
func (f *win32MessageBytePipe) CloseWrite() error {
if f.writeClosed {
return errPipeWriteClosed
}
err := f.win32File.Flush()
if err != nil {
return err
}
_, err = f.win32File.Write(nil)
if err != nil {
return err
}
f.writeClosed = true
return nil
}
// Write writes bytes to a message pipe in byte mode. Zero-byte writes are ignored, since
// they are used to implement CloseWrite().
func (f *win32MessageBytePipe) Write(b []byte) (int, error) {
if f.writeClosed {
return 0, errPipeWriteClosed
}
if len(b) == 0 {
return 0, nil
}
return f.win32File.Write(b)
}
// Read reads bytes from a message pipe in byte mode. A read of a zero-byte message on a message
// mode pipe will return io.EOF, as will all subsequent reads.
func (f *win32MessageBytePipe) Read(b []byte) (int, error) {
if f.readEOF {
return 0, io.EOF
}
n, err := f.win32File.Read(b)
if err == io.EOF {
// If this was the result of a zero-byte read, then
// it is possible that the read was due to a zero-size
// message. Since we are simulating CloseWrite with a
// zero-byte message, ensure that all future Read() calls
// also return EOF.
f.readEOF = true
} else if err == syscall.ERROR_MORE_DATA {
// ERROR_MORE_DATA indicates that the pipe's read mode is message mode
// and the message still has more bytes. Treat this as a success, since
// this package presents all named pipes as byte streams.
err = nil
}
return n, err
}
func (s pipeAddress) Network() string {
return "pipe"
}
func (s pipeAddress) String() string {
return string(s)
}
// DialPipe connects to a named pipe by path, timing out if the connection
// takes longer than the specified duration. If timeout is nil, then we use
// a default timeout of 2 seconds. (We do not use WaitNamedPipe.)
func DialPipe(path string, timeout *time.Duration) (net.Conn, error) {
var absTimeout time.Time
if timeout != nil {
absTimeout = time.Now().Add(*timeout)
} else {
absTimeout = time.Now().Add(time.Second * 2)
}
var err error
var h syscall.Handle
for {
h, err = createFile(path, syscall.GENERIC_READ|syscall.GENERIC_WRITE, 0, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED|cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
if err != cERROR_PIPE_BUSY {
break
}
if time.Now().After(absTimeout) {
return nil, ErrTimeout
}
// Wait 10 ms and try again. This is a simplistic approach:
// we retry on a fixed 10 ms interval until the deadline passes.
time.Sleep(time.Millisecond * 10)
}
if err != nil {
return nil, &os.PathError{Op: "open", Path: path, Err: err}
}
var flags uint32
err = getNamedPipeInfo(h, &flags, nil, nil, nil)
if err != nil {
return nil, err
}
f, err := makeWin32File(h)
if err != nil {
syscall.Close(h)
return nil, err
}
// If the pipe is in message mode, return a message byte pipe, which
// supports CloseWrite().
if flags&cPIPE_TYPE_MESSAGE != 0 {
return &win32MessageBytePipe{
win32Pipe: win32Pipe{win32File: f, path: path},
}, nil
}
return &win32Pipe{win32File: f, path: path}, nil
}
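
For illustration only: dialing a named pipe with an explicit timeout, then using the returned net.Conn. The pipe name is hypothetical and a server must already be listening on it.

package main

import (
	"fmt"
	"time"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	timeout := 2 * time.Second
	conn, err := winio.DialPipe(`\\.\pipe\demo`, &timeout) // hypothetical pipe name
	if err != nil {
		panic(err) // ErrTimeout if the pipe stays busy past the deadline
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(time.Second))
	if _, err := conn.Write([]byte("ping\n")); err != nil {
		panic(err)
	}
	buf := make([]byte, 64)
	n, _ := conn.Read(buf)
	fmt.Printf("got %q\n", buf[:n])
}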
type acceptResponse struct {
f *win32File
err error
}
type win32PipeListener struct {
firstHandle syscall.Handle
path string
securityDescriptor []byte
config PipeConfig
acceptCh chan (chan acceptResponse)
closeCh chan int
doneCh chan int
}
func makeServerPipeHandle(path string, securityDescriptor []byte, c *PipeConfig, first bool) (syscall.Handle, error) {
var flags uint32 = cPIPE_ACCESS_DUPLEX | syscall.FILE_FLAG_OVERLAPPED
if first {
flags |= cFILE_FLAG_FIRST_PIPE_INSTANCE
}
var mode uint32 = cPIPE_REJECT_REMOTE_CLIENTS
if c.MessageMode {
mode |= cPIPE_TYPE_MESSAGE
}
sa := &syscall.SecurityAttributes{}
sa.Length = uint32(unsafe.Sizeof(*sa))
if securityDescriptor != nil {
len := uint32(len(securityDescriptor))
sa.SecurityDescriptor = localAlloc(0, len)
defer localFree(sa.SecurityDescriptor)
copy((*[0xffff]byte)(unsafe.Pointer(sa.SecurityDescriptor))[:], securityDescriptor)
}
h, err := createNamedPipe(path, flags, mode, cPIPE_UNLIMITED_INSTANCES, uint32(c.OutputBufferSize), uint32(c.InputBufferSize), 0, sa)
if err != nil {
return 0, &os.PathError{Op: "open", Path: path, Err: err}
}
return h, nil
}
func (l *win32PipeListener) makeServerPipe() (*win32File, error) {
h, err := makeServerPipeHandle(l.path, l.securityDescriptor, &l.config, false)
if err != nil {
return nil, err
}
f, err := makeWin32File(h)
if err != nil {
syscall.Close(h)
return nil, err
}
return f, nil
}
func (l *win32PipeListener) makeConnectedServerPipe() (*win32File, error) {
p, err := l.makeServerPipe()
if err != nil {
return nil, err
}
// Wait for the client to connect.
ch := make(chan error)
go func(p *win32File) {
ch <- connectPipe(p)
}(p)
select {
case err = <-ch:
if err != nil {
p.Close()
p = nil
}
case <-l.closeCh:
// Abort the connect request by closing the handle.
p.Close()
p = nil
err = <-ch
if err == nil || err == ErrFileClosed {
err = ErrPipeListenerClosed
}
}
return p, err
}
func (l *win32PipeListener) listenerRoutine() {
closed := false
for !closed {
select {
case <-l.closeCh:
closed = true
case responseCh := <-l.acceptCh:
var (
p *win32File
err error
)
for {
p, err = l.makeConnectedServerPipe()
// If the connection was immediately closed by the client, try
// again.
if err != cERROR_NO_DATA {
break
}
}
responseCh <- acceptResponse{p, err}
closed = err == ErrPipeListenerClosed
}
}
syscall.Close(l.firstHandle)
l.firstHandle = 0
// Notify Close() and Accept() callers that the handle has been closed.
close(l.doneCh)
}
// PipeConfig contains configuration for the pipe listener.
type PipeConfig struct {
// SecurityDescriptor contains a Windows security descriptor in SDDL format.
SecurityDescriptor string
// MessageMode determines whether the pipe is in byte or message mode. In either
// case the pipe is read in byte mode by default. The only practical difference in
// this implementation is that CloseWrite() is only supported for message mode pipes;
// CloseWrite() is implemented as a zero-byte write, but zero-byte writes are only
// transferred to the reader (and returned as io.EOF in this implementation)
// when the pipe is in message mode.
MessageMode bool
// InputBufferSize specifies the size of the input buffer, in bytes.
InputBufferSize int32
// OutputBufferSize specifies the size of the output buffer, in bytes.
OutputBufferSize int32
}
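// A configuration sketch: a message-mode pipe enables half-close via
// CloseWrite on the resulting connections. The buffer sizes below are
// arbitrary illustrative values, and conn stands in for a connection obtained
// from ListenPipe/Accept or DialPipe.
//
//	cfg := &PipeConfig{
//		MessageMode:      true,
//		InputBufferSize:  65536,
//		OutputBufferSize: 65536,
//	}
//	type closeWriter interface{ CloseWrite() error }
//	if cw, ok := conn.(closeWriter); ok {
//		_ = cw.CloseWrite() // signal EOF to the peer without closing the read side
//	}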
// ListenPipe creates a listener on a Windows named pipe path, e.g. \\.\pipe\mypipe.
// The pipe must not already exist.
func ListenPipe(path string, c *PipeConfig) (net.Listener, error) {
var (
sd []byte
err error
)
if c == nil {
c = &PipeConfig{}
}
if c.SecurityDescriptor != "" {
sd, err = SddlToSecurityDescriptor(c.SecurityDescriptor)
if err != nil {
return nil, err
}
}
h, err := makeServerPipeHandle(path, sd, c, true)
if err != nil {
return nil, err
}
// Create a client handle and connect it. This results in the pipe
// instance always existing, so that clients see ERROR_PIPE_BUSY
// rather than ERROR_FILE_NOT_FOUND. This ties the first instance
// up so that no other instances can be used. This would have been
// cleaner if the Win32 API matched CreateFile with ConnectNamedPipe
// instead of CreateNamedPipe. (Apparently created named pipes are
// considered to be in listening state regardless of whether any
// active calls to ConnectNamedPipe are outstanding.)
h2, err := createFile(path, 0, 0, nil, syscall.OPEN_EXISTING, cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
if err != nil {
syscall.Close(h)
return nil, err
}
// Close the client handle. The server side of the instance will
// still be busy, leading to ERROR_PIPE_BUSY instead of
// ERROR_FILE_NOT_FOUND, as long as we don't close the server handle,
// or disconnect the client with DisconnectNamedPipe.
syscall.Close(h2)
l := &win32PipeListener{
firstHandle: h,
path: path,
securityDescriptor: sd,
config: *c,
acceptCh: make(chan (chan acceptResponse)),
closeCh: make(chan int),
doneCh: make(chan int),
}
go l.listenerRoutine()
return l, nil
}
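// A minimal server sketch using ListenPipe and Accept. The pipe path, the
// SDDL string, and the handleConn helper are hypothetical examples.
//
//	l, err := ListenPipe(`\\.\pipe\example`, &PipeConfig{
//		SecurityDescriptor: "D:P(A;;GA;;;SY)(A;;GA;;;BA)", // SYSTEM and Administrators only
//	})
//	if err != nil {
//		return err
//	}
//	defer l.Close()
//	for {
//		conn, err := l.Accept()
//		if err != nil {
//			return err // ErrPipeListenerClosed once Close() has been called
//		}
//		go handleConn(conn)
//	}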
func connectPipe(p *win32File) error {
c, err := p.prepareIo()
if err != nil {
return err
}
defer p.wg.Done()
err = connectNamedPipe(p.handle, &c.o)
_, err = p.asyncIo(c, nil, 0, err)
if err != nil && err != cERROR_PIPE_CONNECTED {
return err
}
return nil
}
func (l *win32PipeListener) Accept() (net.Conn, error) {
ch := make(chan acceptResponse)
select {
case l.acceptCh <- ch:
response := <-ch
err := response.err
if err != nil {
return nil, err
}
if l.config.MessageMode {
return &win32MessageBytePipe{
win32Pipe: win32Pipe{win32File: response.f, path: l.path},
}, nil
}
return &win32Pipe{win32File: response.f, path: l.path}, nil
case <-l.doneCh:
return nil, ErrPipeListenerClosed
}
}
func (l *win32PipeListener) Close() error {
select {
case l.closeCh <- 1:
<-l.doneCh
case <-l.doneCh:
}
return nil
}
func (l *win32PipeListener) Addr() net.Addr {
return pipeAddress(l.path)
}

View File

@@ -1,516 +0,0 @@
package winio
import (
"bufio"
"bytes"
"io"
"net"
"os"
"sync"
"syscall"
"testing"
"time"
"unsafe"
)
var testPipeName = `\\.\pipe\winiotestpipe`
var aLongTimeAgo = time.Unix(1, 0)
func TestDialUnknownFailsImmediately(t *testing.T) {
_, err := DialPipe(testPipeName, nil)
if err.(*os.PathError).Err != syscall.ENOENT {
t.Fatalf("expected ENOENT got %v", err)
}
}
func TestDialListenerTimesOut(t *testing.T) {
l, err := ListenPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
defer l.Close()
var d = time.Duration(10 * time.Millisecond)
_, err = DialPipe(testPipeName, &d)
if err != ErrTimeout {
t.Fatalf("expected ErrTimeout, got %v", err)
}
}
func TestDialAccessDeniedWithRestrictedSD(t *testing.T) {
c := PipeConfig{
SecurityDescriptor: "D:P(A;;0x1200FF;;;WD)",
}
l, err := ListenPipe(testPipeName, &c)
if err != nil {
t.Fatal(err)
}
defer l.Close()
_, err = DialPipe(testPipeName, nil)
if err.(*os.PathError).Err != syscall.ERROR_ACCESS_DENIED {
t.Fatalf("expected ERROR_ACCESS_DENIED, got %v", err)
}
}
func getConnection(cfg *PipeConfig) (client net.Conn, server net.Conn, err error) {
l, err := ListenPipe(testPipeName, cfg)
if err != nil {
return
}
defer l.Close()
type response struct {
c net.Conn
err error
}
ch := make(chan response)
go func() {
c, err := l.Accept()
ch <- response{c, err}
}()
c, err := DialPipe(testPipeName, nil)
if err != nil {
return
}
r := <-ch
if err = r.err; err != nil {
c.Close()
return
}
client = c
server = r.c
return
}
func TestReadTimeout(t *testing.T) {
c, s, err := getConnection(nil)
if err != nil {
t.Fatal(err)
}
defer c.Close()
defer s.Close()
c.SetReadDeadline(time.Now().Add(10 * time.Millisecond))
buf := make([]byte, 10)
_, err = c.Read(buf)
if err != ErrTimeout {
t.Fatalf("expected ErrTimeout, got %v", err)
}
}
func server(l net.Listener, ch chan int) {
c, err := l.Accept()
if err != nil {
panic(err)
}
rw := bufio.NewReadWriter(bufio.NewReader(c), bufio.NewWriter(c))
s, err := rw.ReadString('\n')
if err != nil {
panic(err)
}
_, err = rw.WriteString("got " + s)
if err != nil {
panic(err)
}
err = rw.Flush()
if err != nil {
panic(err)
}
c.Close()
ch <- 1
}
func TestFullListenDialReadWrite(t *testing.T) {
l, err := ListenPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
defer l.Close()
ch := make(chan int)
go server(l, ch)
c, err := DialPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
defer c.Close()
rw := bufio.NewReadWriter(bufio.NewReader(c), bufio.NewWriter(c))
_, err = rw.WriteString("hello world\n")
if err != nil {
t.Fatal(err)
}
err = rw.Flush()
if err != nil {
t.Fatal(err)
}
s, err := rw.ReadString('\n')
if err != nil {
t.Fatal(err)
}
ms := "got hello world\n"
if s != ms {
t.Errorf("expected '%s', got '%s'", ms, s)
}
<-ch
}
func TestCloseAbortsListen(t *testing.T) {
l, err := ListenPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
ch := make(chan error)
go func() {
_, err := l.Accept()
ch <- err
}()
time.Sleep(30 * time.Millisecond)
l.Close()
err = <-ch
if err != ErrPipeListenerClosed {
t.Fatalf("expected ErrPipeListenerClosed, got %v", err)
}
}
func ensureEOFOnClose(t *testing.T, r io.Reader, w io.Closer) {
b := make([]byte, 10)
w.Close()
n, err := r.Read(b)
if n > 0 {
t.Errorf("unexpected byte count %d", n)
}
if err != io.EOF {
t.Errorf("expected EOF: %v", err)
}
}
func TestCloseClientEOFServer(t *testing.T) {
c, s, err := getConnection(nil)
if err != nil {
t.Fatal(err)
}
defer c.Close()
defer s.Close()
ensureEOFOnClose(t, c, s)
}
func TestCloseServerEOFClient(t *testing.T) {
c, s, err := getConnection(nil)
if err != nil {
t.Fatal(err)
}
defer c.Close()
defer s.Close()
ensureEOFOnClose(t, s, c)
}
func TestCloseWriteEOF(t *testing.T) {
cfg := &PipeConfig{
MessageMode: true,
}
c, s, err := getConnection(cfg)
if err != nil {
t.Fatal(err)
}
defer c.Close()
defer s.Close()
type closeWriter interface {
CloseWrite() error
}
err = c.(closeWriter).CloseWrite()
if err != nil {
t.Fatal(err)
}
b := make([]byte, 10)
_, err = s.Read(b)
if err != io.EOF {
t.Fatal(err)
}
}
func TestAcceptAfterCloseFails(t *testing.T) {
l, err := ListenPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
l.Close()
_, err = l.Accept()
if err != ErrPipeListenerClosed {
t.Fatalf("expected ErrPipeListenerClosed, got %v", err)
}
}
func TestDialTimesOutByDefault(t *testing.T) {
l, err := ListenPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
defer l.Close()
_, err = DialPipe(testPipeName, nil)
if err != ErrTimeout {
t.Fatalf("expected ErrTimeout, got %v", err)
}
}
func TestTimeoutPendingRead(t *testing.T) {
l, err := ListenPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
defer l.Close()
serverDone := make(chan struct{})
go func() {
s, err := l.Accept()
if err != nil {
t.Fatal(err)
}
time.Sleep(1 * time.Second)
s.Close()
close(serverDone)
}()
client, err := DialPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
defer client.Close()
clientErr := make(chan error)
go func() {
buf := make([]byte, 10)
_, err = client.Read(buf)
clientErr <- err
}()
time.Sleep(100 * time.Millisecond) // make *sure* the pipe is reading before we set the deadline
client.SetReadDeadline(aLongTimeAgo)
select {
case err = <-clientErr:
if err != ErrTimeout {
t.Fatalf("expected ErrTimeout, got %v", err)
}
case <-time.After(100 * time.Millisecond):
t.Fatalf("timed out while waiting for read to cancel")
<-clientErr
}
<-serverDone
}
func TestTimeoutPendingWrite(t *testing.T) {
l, err := ListenPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
defer l.Close()
serverDone := make(chan struct{})
go func() {
s, err := l.Accept()
if err != nil {
t.Fatal(err)
}
time.Sleep(1 * time.Second)
s.Close()
close(serverDone)
}()
client, err := DialPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
defer client.Close()
clientErr := make(chan error)
go func() {
_, err = client.Write([]byte("this should timeout"))
clientErr <- err
}()
time.Sleep(100 * time.Millisecond) // make *sure* the pipe is writing before we set the deadline
client.SetWriteDeadline(aLongTimeAgo)
select {
case err = <-clientErr:
if err != ErrTimeout {
t.Fatalf("expected ErrTimeout, got %v", err)
}
case <-time.After(100 * time.Millisecond):
t.Fatalf("timed out while waiting for write to cancel")
<-clientErr
}
<-serverDone
}
type CloseWriter interface {
CloseWrite() error
}
func TestEchoWithMessaging(t *testing.T) {
c := PipeConfig{
MessageMode: true, // Use message mode so that CloseWrite() is supported
InputBufferSize: 65536, // Use 64KB buffers to improve performance
OutputBufferSize: 65536,
}
l, err := ListenPipe(testPipeName, &c)
if err != nil {
t.Fatal(err)
}
defer l.Close()
listenerDone := make(chan bool)
clientDone := make(chan bool)
go func() {
// server echo
conn, e := l.Accept()
if e != nil {
t.Fatal(e)
}
defer conn.Close()
time.Sleep(500 * time.Millisecond) // make *sure* we don't begin to read before eof signal is sent
io.Copy(conn, conn)
conn.(CloseWriter).CloseWrite()
close(listenerDone)
}()
timeout := 1 * time.Second
client, err := DialPipe(testPipeName, &timeout)
if err != nil {
t.Fatal(err)
}
defer client.Close()
go func() {
// client read back
bytes := make([]byte, 2)
n, e := client.Read(bytes)
if e != nil {
t.Fatal(e)
}
if n != 2 {
t.Fatalf("expected 2 bytes, got %v", n)
}
close(clientDone)
}()
payload := make([]byte, 2)
payload[0] = 0
payload[1] = 1
n, err := client.Write(payload)
if err != nil {
t.Fatal(err)
}
if n != 2 {
t.Fatalf("expected 2 bytes, got %v", n)
}
client.(CloseWriter).CloseWrite()
<-listenerDone
<-clientDone
}
func TestConnectRace(t *testing.T) {
l, err := ListenPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
defer l.Close()
go func() {
for {
s, err := l.Accept()
if err == ErrPipeListenerClosed {
return
}
if err != nil {
t.Fatal(err)
}
s.Close()
}
}()
for i := 0; i < 1000; i++ {
c, err := DialPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
c.Close()
}
}
func TestMessageReadMode(t *testing.T) {
var wg sync.WaitGroup
defer wg.Wait()
l, err := ListenPipe(testPipeName, &PipeConfig{MessageMode: true})
if err != nil {
t.Fatal(err)
}
defer l.Close()
msg := ([]byte)("hello world")
wg.Add(1)
go func() {
defer wg.Done()
s, err := l.Accept()
if err != nil {
t.Fatal(err)
}
_, err = s.Write(msg)
if err != nil {
t.Fatal(err)
}
s.Close()
}()
c, err := DialPipe(testPipeName, nil)
if err != nil {
t.Fatal(err)
}
defer c.Close()
setNamedPipeHandleState := syscall.NewLazyDLL("kernel32.dll").NewProc("SetNamedPipeHandleState")
p := c.(*win32MessageBytePipe)
mode := uint32(cPIPE_READMODE_MESSAGE)
if s, _, err := setNamedPipeHandleState.Call(uintptr(p.handle), uintptr(unsafe.Pointer(&mode)), 0, 0); s == 0 {
t.Fatal(err)
}
ch := make([]byte, 1)
var vmsg []byte
for {
n, err := c.Read(ch)
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
if n != 1 {
t.Fatal("expected 1: ", n)
}
vmsg = append(vmsg, ch[0])
}
if !bytes.Equal(msg, vmsg) {
t.Fatalf("expected %s: %s", msg, vmsg)
}
}

View File

@@ -1,192 +0,0 @@
package etwlogrus
import (
"fmt"
"reflect"
"github.com/Microsoft/go-winio/internal/etw"
"github.com/sirupsen/logrus"
)
// Hook is a Logrus hook which logs received events to ETW.
type Hook struct {
provider *etw.Provider
}
// NewHook registers a new ETW provider and returns a hook to log from it.
func NewHook(providerName string) (*Hook, error) {
hook := Hook{}
provider, err := etw.NewProvider(providerName, nil)
if err != nil {
return nil, err
}
hook.provider = provider
return &hook, nil
}
// Levels returns the set of levels that this hook wants to receive log entries
// for.
func (h *Hook) Levels() []logrus.Level {
return []logrus.Level{
logrus.TraceLevel,
logrus.DebugLevel,
logrus.InfoLevel,
logrus.WarnLevel,
logrus.ErrorLevel,
logrus.FatalLevel,
logrus.PanicLevel,
}
}
// Fire receives each Logrus entry as it is logged, and logs it to ETW.
func (h *Hook) Fire(e *logrus.Entry) error {
level := etw.Level(e.Level)
if !h.provider.IsEnabledForLevel(level) {
return nil
}
// Reserve extra space for the message field.
fields := make([]etw.FieldOpt, 0, len(e.Data)+1)
fields = append(fields, etw.StringField("Message", e.Message))
for k, v := range e.Data {
fields = append(fields, getFieldOpt(k, v))
}
// We could try to map Logrus levels to ETW levels, but we would lose some
// fidelity as there are fewer ETW levels. So instead we use the level
// directly.
return h.provider.WriteEvent(
"LogrusEntry",
etw.WithEventOpts(etw.WithLevel(level)),
fields)
}
// Currently, we support logging basic builtin types (int, string, etc), slices
// of basic builtin types, error, types derived from the basic types (e.g. "type
// foo int"), and structs (recursively logging their fields). We do not support
// slices of derived types (e.g. "[]foo").
//
// For types that we don't support, the value is formatted via fmt.Sprint, and
// we also log a message that the type is unsupported along with the formatted
// type. The intent of this is to make it easier to see which types are not
// supported in traces, so we can evaluate adding support for more types in the
// future.
func getFieldOpt(k string, v interface{}) etw.FieldOpt {
switch v := v.(type) {
case bool:
return etw.BoolField(k, v)
case []bool:
return etw.BoolArray(k, v)
case string:
return etw.StringField(k, v)
case []string:
return etw.StringArray(k, v)
case int:
return etw.IntField(k, v)
case []int:
return etw.IntArray(k, v)
case int8:
return etw.Int8Field(k, v)
case []int8:
return etw.Int8Array(k, v)
case int16:
return etw.Int16Field(k, v)
case []int16:
return etw.Int16Array(k, v)
case int32:
return etw.Int32Field(k, v)
case []int32:
return etw.Int32Array(k, v)
case int64:
return etw.Int64Field(k, v)
case []int64:
return etw.Int64Array(k, v)
case uint:
return etw.UintField(k, v)
case []uint:
return etw.UintArray(k, v)
case uint8:
return etw.Uint8Field(k, v)
case []uint8:
return etw.Uint8Array(k, v)
case uint16:
return etw.Uint16Field(k, v)
case []uint16:
return etw.Uint16Array(k, v)
case uint32:
return etw.Uint32Field(k, v)
case []uint32:
return etw.Uint32Array(k, v)
case uint64:
return etw.Uint64Field(k, v)
case []uint64:
return etw.Uint64Array(k, v)
case uintptr:
return etw.UintptrField(k, v)
case []uintptr:
return etw.UintptrArray(k, v)
case float32:
return etw.Float32Field(k, v)
case []float32:
return etw.Float32Array(k, v)
case float64:
return etw.Float64Field(k, v)
case []float64:
return etw.Float64Array(k, v)
case error:
return etw.StringField(k, v.Error())
default:
switch rv := reflect.ValueOf(v); rv.Kind() {
case reflect.Bool:
return getFieldOpt(k, rv.Bool())
case reflect.Int:
return getFieldOpt(k, int(rv.Int()))
case reflect.Int8:
return getFieldOpt(k, int8(rv.Int()))
case reflect.Int16:
return getFieldOpt(k, int16(rv.Int()))
case reflect.Int32:
return getFieldOpt(k, int32(rv.Int()))
case reflect.Int64:
return getFieldOpt(k, int64(rv.Int()))
case reflect.Uint:
return getFieldOpt(k, uint(rv.Uint()))
case reflect.Uint8:
return getFieldOpt(k, uint8(rv.Uint()))
case reflect.Uint16:
return getFieldOpt(k, uint16(rv.Uint()))
case reflect.Uint32:
return getFieldOpt(k, uint32(rv.Uint()))
case reflect.Uint64:
return getFieldOpt(k, uint64(rv.Uint()))
case reflect.Uintptr:
return getFieldOpt(k, uintptr(rv.Uint()))
case reflect.Float32:
return getFieldOpt(k, float32(rv.Float()))
case reflect.Float64:
return getFieldOpt(k, float64(rv.Float()))
case reflect.String:
return getFieldOpt(k, rv.String())
case reflect.Struct:
fields := make([]etw.FieldOpt, 0, rv.NumField())
for i := 0; i < rv.NumField(); i++ {
field := rv.Field(i)
if field.CanInterface() {
fields = append(fields, getFieldOpt(k, field.Interface()))
}
}
return etw.Struct(k, fields...)
}
}
return etw.StringField(k, fmt.Sprintf("(Unsupported: %T) %v", v, v))
}
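// A within-package sketch: a type derived from a basic type falls through to
// the reflect-based branch above. The type and field name are illustrative.
//
//	type port int
//	opt := getFieldOpt("listenPort", port(8080)) // handled as reflect.Int -> etw.IntField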
// Close cleans up the hook and closes the ETW provider.
func (h *Hook) Close() error {
return h.provider.Close()
}
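// A usage sketch from a consuming package, wiring the hook into a logrus
// logger. The provider name is a hypothetical example.
//
//	hook, err := etwlogrus.NewHook("Contoso.MyService")
//	if err != nil {
//		return err
//	}
//	defer hook.Close()
//	log := logrus.New()
//	log.AddHook(hook)
//	log.WithField("requestID", 42).Info("request handled")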

View File

@@ -1,126 +0,0 @@
package etwlogrus
import (
"github.com/Microsoft/go-winio/internal/etw"
"testing"
)
func fireEvent(t *testing.T, p *etw.Provider, name string, value interface{}) {
if err := p.WriteEvent(
name,
nil,
etw.WithFields(getFieldOpt("Field", value))); err != nil {
t.Fatal(err)
}
}
// The purpose of this test is to log lots of different field types, to test the
// logic that converts them to ETW. Because we don't have a way to
// programmatically validate the ETW events, this test has two main purposes:
// (1) validate that nothing panics while logging, and (2) allow manual
// validation that the data is logged correctly (through a tool like WPA).
func TestFieldLogging(t *testing.T) {
// Sample WPRP to collect this provider:
//
// <?xml version="1.0"?>
// <WindowsPerformanceRecorder Version="1">
// <Profiles>
// <EventCollector Id="Collector" Name="MyCollector">
// <BufferSize Value="256"/>
// <Buffers Value="100"/>
// </EventCollector>
// <EventProvider Id="HookTest" Name="5e50de03-107c-5a83-74c6-998c4491e7e9"/>
// <Profile Id="Test.Verbose.File" Name="Test" Description="Test" LoggingMode="File" DetailLevel="Verbose">
// <Collectors>
// <EventCollectorId Value="Collector">
// <EventProviders>
// <EventProviderId Value="HookTest"/>
// </EventProviders>
// </EventCollectorId>
// </Collectors>
// </Profile>
// </Profiles>
// </WindowsPerformanceRecorder>
//
// Start collection:
// wpr -start HookTest.wprp -filemode
//
// Stop collection:
// wpr -stop HookTest.etl
p, err := etw.NewProvider("HookTest", nil)
if err != nil {
t.Fatal(err)
}
defer func() {
if err := p.Close(); err != nil {
t.Fatal(err)
}
}()
fireEvent(t, p, "Bool", true)
fireEvent(t, p, "BoolSlice", []bool{true, false, true})
fireEvent(t, p, "EmptyBoolSlice", []bool{})
fireEvent(t, p, "String", "teststring")
fireEvent(t, p, "StringSlice", []string{"sstr1", "sstr2", "sstr3"})
fireEvent(t, p, "EmptyStringSlice", []string{})
fireEvent(t, p, "Int", int(1))
fireEvent(t, p, "IntSlice", []int{2, 3, 4})
fireEvent(t, p, "EmptyIntSlice", []int{})
fireEvent(t, p, "Int8", int8(5))
fireEvent(t, p, "Int8Slice", []int8{6, 7, 8})
fireEvent(t, p, "EmptyInt8Slice", []int8{})
fireEvent(t, p, "Int16", int16(9))
fireEvent(t, p, "Int16Slice", []int16{10, 11, 12})
fireEvent(t, p, "EmptyInt16Slice", []int16{})
fireEvent(t, p, "Int32", int32(13))
fireEvent(t, p, "Int32Slice", []int32{14, 15, 16})
fireEvent(t, p, "EmptyInt32Slice", []int32{})
fireEvent(t, p, "Int64", int64(17))
fireEvent(t, p, "Int64Slice", []int64{18, 19, 20})
fireEvent(t, p, "EmptyInt64Slice", []int64{})
fireEvent(t, p, "Uint", uint(21))
fireEvent(t, p, "UintSlice", []uint{22, 23, 24})
fireEvent(t, p, "EmptyUintSlice", []uint{})
fireEvent(t, p, "Uint8", uint8(25))
fireEvent(t, p, "Uint8Slice", []uint8{26, 27, 28})
fireEvent(t, p, "EmptyUint8Slice", []uint8{})
fireEvent(t, p, "Uint16", uint16(29))
fireEvent(t, p, "Uint16Slice", []uint16{30, 31, 32})
fireEvent(t, p, "EmptyUint16Slice", []uint16{})
fireEvent(t, p, "Uint32", uint32(33))
fireEvent(t, p, "Uint32Slice", []uint32{34, 35, 36})
fireEvent(t, p, "EmptyUint32Slice", []uint32{})
fireEvent(t, p, "Uint64", uint64(37))
fireEvent(t, p, "Uint64Slice", []uint64{38, 39, 40})
fireEvent(t, p, "EmptyUint64Slice", []uint64{})
fireEvent(t, p, "Uintptr", uintptr(41))
fireEvent(t, p, "UintptrSlice", []uintptr{42, 43, 44})
fireEvent(t, p, "EmptyUintptrSlice", []uintptr{})
fireEvent(t, p, "Float32", float32(45.46))
fireEvent(t, p, "Float32Slice", []float32{47.48, 49.50, 51.52})
fireEvent(t, p, "EmptyFloat32Slice", []float32{})
fireEvent(t, p, "Float64", float64(53.54))
fireEvent(t, p, "Float64Slice", []float64{55.56, 57.58, 59.60})
fireEvent(t, p, "EmptyFloat64Slice", []float64{})
type struct1 struct {
A float32
priv int
B []uint
}
type struct2 struct {
A int
B int
}
type struct3 struct {
struct2
A int
B string
priv string
C struct1
D uint16
}
// Unexported fields, and fields in embedded structs, should not log.
fireEvent(t, p, "Struct", struct3{struct2{-1, -2}, 1, "2s", "-3s", struct1{3.4, -4, []uint{5, 6, 7}}, 8})
}

Some files were not shown because too many files have changed in this diff.