Compare commits

..

48 Commits

Author SHA1 Message Date
Kazuhiro MUSASHI 62fe9cb81f Merge pull request 'Update the Vault tokens for the provisioning process.' (#28) from BumpVaultTokens into main
Reviewed-on: #28
2024-11-03 03:15:09 +00:00
Kazuhiro MUSASHI 6ee0679c7c Update the Vault tokens for the provisioning process.
diff --git a/cookbooks/consul/files/etc/vault.d/tokens/roleid b/cookbooks/consul/files/etc/vault.d/tokens/roleid
index 7ae456f..120be5a 100644
--- a/cookbooks/consul/files/etc/vault.d/tokens/roleid
+++ b/cookbooks/consul/files/etc/vault.d/tokens/roleid
@@ -1 +1 @@
-md5:1ae55d337df5f9dd4fffc187a183b0b2:salt:205-89-236-103-190-38-95-67:aes-256-cfb:Ma2d+BQ24dejEcakleRob9FbO/uXSyymKm3hMllr4BU89COZ6g==
\ No newline at end of file
+md5:d0bf5c103435e9c51e21752192e89575:salt:20-135-197-125-136-152-137-246:aes-256-cfb:aVa3ufSt0fr6iarjwajOHZZs4bGSOo38N577EEbCJwXNW/M41g==
\ No newline at end of file
diff --git a/cookbooks/consul/files/etc/vault.d/tokens/secretid b/cookbooks/consul/files/etc/vault.d/tokens/secretid
index 8f6d625..45ffa4a 100644
--- a/cookbooks/consul/files/etc/vault.d/tokens/secretid
+++ b/cookbooks/consul/files/etc/vault.d/tokens/secretid
@@ -1 +1 @@
-md5:c5e23c82c19bfdbd585c22c2244d48c4:salt:159-101-196-196-176-220-40-108:aes-256-cfb:ddjwjLHE5NsLCVioXEv9oaJoGtpJ+P6FvVs6ecKK26eaI49ElQ==
\ No newline at end of file
+md5:ab19117b12b65eef5d46283a1f9d8430:salt:2-183-180-51-94-222-93-197:aes-256-cfb:hlO5lzU8SmLmqPjquIJgwEzSlM5w7ij8gGFZXJVY2yt0KNRqrw==
\ No newline at end of file
2024-11-03 12:14:12 +09:00
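The roleid/secretid values in the diff above are stored in an encrypted envelope of the shape `md5:<digest>:salt:<salt bytes>:aes-256-cfb:<base64 payload>` (this looks like an itamae-secrets-style format, though the diff itself does not say so). A minimal Ruby parser for that envelope, written for illustration — the field names are my reading of the layout, not a documented spec:

```ruby
# Split an encrypted value of the form
#   md5:<digest>:salt:<b1-b2-...>:aes-256-cfb:<base64>
# into its parts. Field names are guesses from the layout above.
def parse_secret_envelope(line)
  digest_algo, digest, salt_label, salt, cipher, payload = line.split(':', 6)
  raise ArgumentError, 'unexpected envelope' unless salt_label == 'salt'
  {
    digest_algo: digest_algo,                 # e.g. "md5"
    digest:      digest,                      # hex digest
    salt:        salt.split('-').map(&:to_i), # salt as raw byte values
    cipher:      cipher,                      # e.g. "aes-256-cfb"
    payload:     payload,                     # Base64-encoded ciphertext
  }
end
```

Decrypting the payload would additionally need the key material, which is (correctly) not part of this repository.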
Kazuhiro MUSASHI abd895245e Merge pull request 'Add recipes, tasks, and etc... for LXC containers.' (#27) from lxc-support into main
Reviewed-on: #27
2024-11-03 03:13:12 +00:00
Kazuhiro MUSASHI b5609d6edd Provide the task to provision LXC containers. 2024-11-03 12:11:14 +09:00
Kazuhiro MUSASHI c02c9bbb1a Provide the role for `LXC` containers. 2024-11-03 12:10:41 +09:00
Kazuhiro MUSASHI 2a199ab128 Add recipes for `LXC` containers. 2024-11-03 12:10:06 +09:00
Kazuhiro MUSASHI 06b8ae6c1c Merge pull request 'Support for Ubuntu2404' (#26) from ubuntu2404 into main
Reviewed-on: #26
2024-11-03 02:02:59 +00:00
Kazuhiro MUSASHI 015fe2ee31 Modify `nomad` recipes to reflect the step changes. 2024-11-03 10:58:04 +09:00
Kazuhiro MUSASHI eaa7ddcd32 Update DNS settings. 2024-11-03 10:56:38 +09:00
Kazuhiro MUSASHI db10caca55 Delete `resolved.conf.2404`. 2024-11-02 16:56:12 +09:00
Kazuhiro MUSASHI a61e3c3dd7 Install `cni` plugins, using `eget`. 2024-07-20 17:34:01 +09:00
Kazuhiro MUSASHI 8e0a8a06c3 Bump `nginx` version. 2024-07-20 17:33:31 +09:00
Kazuhiro MUSASHI 248a624f22 Install `Docker`, before setting up `Nomad`. 2024-07-15 21:40:05 +09:00
Kazuhiro MUSASHI df0bccd61b Add `Consul` token setting for registering `Vault` 2024-07-15 21:39:32 +09:00
Kazuhiro MUSASHI a955001416 Add firewall settings for `Vault`. 2024-07-15 21:39:00 +09:00
Kazuhiro MUSASHI e21fa08291 Deploy `/etc/vault.d/vault.env` to enable AWS KMS. 2024-07-15 21:28:07 +09:00
Kazuhiro MUSASHI 44ca217183 Reload the config after updating the config file. 2024-07-15 21:27:23 +09:00
Kazuhiro MUSASHI 7d65474067 Change mode of `/etc/vault.d/vault.hcl`. 2024-07-15 18:49:42 +09:00
Kazuhiro MUSASHI d11206e3c2 Change `Vault` IP addresses. 2024-07-15 18:48:40 +09:00
Kazuhiro MUSASHI 44325ace47 Change `Vault` tokens for enabling Consul Auto Config. 2024-07-15 18:47:02 +09:00
Kazuhiro MUSASHI 977648f95e Change `consul` server IP addresses. 2024-07-15 18:45:25 +09:00
Kazuhiro MUSASHI 1998a11c29 Disable `Ubuntu Pro` announcement. 2024-07-01 15:23:03 +09:00
Kazuhiro MUSASHI 450426b12a Explicitly specify the owner and group for `/etc/apt/sources.list.d/hashicorp.list`. 2024-06-10 11:55:15 +09:00
Kazuhiro MUSASHI d8094f8a6b Install exporters for `Ubuntu24.04`. 2024-06-10 11:54:39 +09:00
Kazuhiro MUSASHI 3b61e2b7ac Add condition for `Ubuntu 24.04`. 2024-06-10 11:54:08 +09:00
Kazuhiro MUSASHI cf28ca20b6 Disable password authentication for `SSH` daemon. 2024-06-10 11:48:35 +09:00
Kazuhiro MUSASHI 6bc876df6f Use `eget` to download and install `consul-template`. 2024-06-10 11:45:12 +09:00
Kazuhiro MUSASHI 4f2aeaac41 Check whether `eget` is installed or not, before the actual installation. 2024-06-10 11:44:07 +09:00
Kazuhiro MUSASHI 6d1e1599e3 Modify `dnsmasq` settings. 2024-06-10 11:42:42 +09:00
Kazuhiro MUSASHI 9a6a874abe Install `eget`. 2024-05-11 14:28:12 +09:00
Kazuhiro MUSASHI f122269855 Cumulative changes for the `nginx` recipe. 2024-05-06 17:09:24 +09:00
Kazuhiro MUSASHI 6fe04fdaa0 Add cases for Ubuntu 24.04. 2024-05-06 17:08:42 +09:00
Kazuhiro MUSASHI 2063cf2f6c Update HashiCorp APT sources. 2024-04-28 12:13:35 +09:00
Kazuhiro MUSASHI a52c841151 Add steps for `SSH` daemon config. 2024-04-28 11:52:29 +09:00
Kazuhiro MUSASHI f6a6c49823 Add `git` APT source setting for Ubuntu2404. 2024-04-28 11:51:43 +09:00
Kazuhiro MUSASHI 8ae10311a6 Add steps for `timesyncd` configs. 2024-04-28 11:51:05 +09:00
Kazuhiro MUSASHI 61e5dec1c4 Merge pull request 'Change the period from `240h` to `24h`.' (#25) from loki-index-period-change into main
Reviewed-on: #25
2024-04-27 06:36:26 +00:00
Kazuhiro MUSASHI 359bdec10b Change the period from `240h` to `24h`. 2024-04-27 15:30:33 +09:00
Kazuhiro MUSASHI 14e874f439 Merge pull request 'Use `ip` command when Ubuntu 22.04.' (#24) from vector-syslog-modification into main
Reviewed-on: #24
2024-04-27 06:00:47 +00:00
Kazuhiro MUSASHI feb2ed45ad Use `ip` command when Ubuntu 22.04. 2024-04-27 14:58:10 +09:00
Kazuhiro MUSASHI 9d19f05ca4 Merge pull request 'Reflecting on the Loki config file change' (#23) from loki-update into main
Reviewed-on: #23
2024-04-27 05:32:42 +00:00
Kazuhiro MUSASHI 2da188f298 Modify the retention period from 24h to 240h. 2024-04-27 14:29:48 +09:00
Kazuhiro MUSASHI 28fea90778 Update `/etc/loki/loki-config.yml`. 2024-04-27 14:26:24 +09:00
Kazuhiro MUSASHI 3dca8b3de4 Delete unnecessary file. 2024-04-27 14:25:47 +09:00
Kazuhiro MUSASHI 8c3d0d3884 Merge pull request 'Modify the monitoring target of `Consul`.' (#22) from consul-log-monitoring-target-change into main
Reviewed-on: #22
2024-04-07 13:52:06 +00:00
Kazuhiro MUSASHI a6428c4c3a Modify the monitoring target of `Consul`. 2024-04-07 22:51:05 +09:00
Kazuhiro MUSASHI a514509f4b Merge pull request 'Modify the permissions of the `Prometheus` directory.' (#21) from change-permissions-for-prometheus into main
Reviewed-on: #21
2024-04-07 13:49:33 +00:00
Kazuhiro MUSASHI f61d1aa2ed Modify the permissions of the `Prometheus` directory. 2024-04-07 22:46:58 +09:00
82 changed files with 27483 additions and 703 deletions


@@ -35,6 +35,7 @@ end
 # Install the necessary packages:
 include_recipe './packages.rb'
+include_recipe './eget.rb'

 # Lang Setting:
 include_recipe './lang.rb'

@@ -69,9 +70,12 @@ include_recipe './starship.rb'
 # Install cloudflared command:
 include_recipe './cloudflared.rb'

+# Disable Ubuntu Pro
+include_recipe './ubuntupro.rb'

 # recipes for Ubuntu 20.04 and later
 case node['platform_version']
-when "20.04", "22.04"
+when "20.04", "22.04", "24.04"
   remote_file '/etc/multipath.conf' do
     owner 'root'
     group 'root'

@@ -89,7 +93,6 @@ when "20.04", "22.04"
   service 'systemd-timesyncd' do
     action :enable
   end
-end

 case node['platform_version']
 when "20.04"

@@ -110,7 +113,19 @@ when "22.04"
     notifies :restart, 'service[systemd-timesyncd]'
   end
+when "24.04"
+  remote_file '/etc/systemd/timesyncd.conf' do
+    owner 'root'
+    group 'root'
+    mode '0644'
+    source 'files/etc/systemd/timesyncd.2404.conf'
+    notifies :restart, 'service[systemd-timesyncd]'
+  end
 end
+end
+end

 # AWS EC2 Swap Setting:
 if node['is_ec2']

cookbooks/base/eget.rb (new file)

@@ -0,0 +1,14 @@
result = run_command('which eget', error: false)

if result.exit_status != 0
  # Install eget
  execute 'curl https://zyedidia.github.io/eget.sh | sh' do
    cwd '/usr/local/bin/'
  end

  execute 'chown root:root /usr/local/bin/eget'
  execute 'chmod 755 /usr/local/bin/eget'
end

%w( zyedidia/eget mgdm/htmlq ).each do |p|
  execute "eget #{p} --to /usr/local/bin/ --upgrade-only"
end


@@ -0,0 +1,122 @@
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.
Include /etc/ssh/sshd_config.d/*.conf
Port 10022
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
#RekeyLimit default none
# Logging
#SyslogFacility AUTH
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
PermitRootLogin no
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
#PubkeyAuthentication yes
# Expect .ssh/authorized_keys2 to be disregarded by default in future.
#AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2
#AuthorizedPrincipalsFile none
#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no
#PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
KbdInteractiveAuthentication no
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the KbdInteractiveAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via KbdInteractiveAuthentication may bypass
# the setting of "PermitRootLogin prohibit-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and KbdInteractiveAuthentication to 'no'.
UsePAM yes
#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
PrintMotd no
#PrintLastLog yes
#TCPKeepAlive yes
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
#PidFile /run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
# no default banner path
#Banner none
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# PermitTTY no
# ForceCommand cvs server


@@ -0,0 +1,26 @@
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
#
# Entries in this file show the compile time defaults. Local configuration
# should be created by either modifying this file (or a copy of it placed in
# /etc/ if the original file is shipped in /usr/), or by creating "drop-ins" in
# the /etc/systemd/timesyncd.conf.d/ directory. The latter is generally
# recommended. Defaults can be restored by simply deleting the main
# configuration file and all drop-ins located in /etc/.
#
# Use 'systemd-analyze cat-config systemd/timesyncd.conf' to display the full config.
#
# See timesyncd.conf(5) for details.
[Time]
NTP=192.168.10.1
#FallbackNTP=ntp.ubuntu.com
#RootDistanceMaxSec=5
#PollIntervalMinSec=32
#PollIntervalMaxSec=2048
#ConnectionRetrySec=30
#SaveIntervalSec=60


@@ -20,12 +20,15 @@ end
 ### Here we are going to install git.

 # Constants:
+case run_command('grep VERSION_ID /etc/os-release | awk -F\" \'{print $2}\'').stdout.chomp
+when "24.04"
+  execute 'add-apt-repository -y ppa:git-core/ppa' do
+    not_if 'test -e /etc/apt/sources.list.d/git-core-ubuntu-ppa-noble.sources'
+  end
+else
 KEYSRV = 'hkp://keyserver.ubuntu.com:80'
 ID = 'E1DF1F24'
-GIT_PREPUSH = '/usr/share/git-core/templates/hooks/pre-push'
-PREPUSH = 'https://gist.github.com/kazu634/8267388/raw/e9202cd4c29a66723c88d2be05f3cd19413d2137/pre-push'

 # Retrieve the Ubuntu code:
 DIST = run_command('lsb_release -cs').stdout.chomp

@@ -39,6 +42,7 @@ template '/etc/apt/sources.list.d/git.list' do
   action :create
   variables(distribution: DIST)
 end
+end

 execute 'apt update' do
   not_if 'LANG=C apt-cache policy git | grep Installed | grep ppa'

@@ -48,6 +52,9 @@ execute 'apt install git -y' do
   not_if 'LANG=C apt-cache policy git | grep Installed | grep ppa'
 end

+GIT_PREPUSH = '/usr/share/git-core/templates/hooks/pre-push'
+PREPUSH = 'https://gist.github.com/kazu634/8267388/raw/e9202cd4c29a66723c88d2be05f3cd19413d2137/pre-push'

 execute "wget #{PREPUSH} -O #{GIT_PREPUSH}" do
   not_if "test -e #{GIT_PREPUSH}"
 end


@@ -9,6 +9,16 @@ end
 # Deploy the `sshd` configuration file:
 case node['platform_version']
+when "24.04"
+  remote_file '/etc/ssh/sshd_config' do
+    user 'root'
+    owner 'root'
+    group 'root'
+    mode '644'
+    source 'files/etc/ssh/sshd_config.2404'
+  end
 when "22.04"
   remote_file '/etc/ssh/sshd_config' do
     user 'root'

@@ -48,9 +58,15 @@ else
   end
 end

+case node['platform_version']
+when "24.04"
+  execute 'systemctl disable --now ssh.socket'
+  execute 'systemctl enable --now ssh.service'
+  execute 'systemctl daemon-reload'
+end

 # Apply the changes:
-execute 'systemctl reload ssh.service ' do
+execute 'systemctl restart ssh.service ' do
   action :nothing
   subscribes :run, 'remote_file[/etc/ssh/sshd_config]'
 end
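On Ubuntu 24.04, OpenSSH ships socket-activated by default, which is why the hunk above disables `ssh.socket` before enabling `ssh.service`: under socket activation the listen port comes from the socket unit rather than from `sshd_config`, so the `Port 10022` setting deployed here would otherwise be ignored. The version gate can be expressed as a tiny pure function (my framing, for illustration only):

```ruby
# Commands needed to put sshd back under classic service management,
# keyed on the platform version the recipe branches on.
def sshd_unit_commands(platform_version)
  return [] unless platform_version == '24.04'
  [
    'systemctl disable --now ssh.socket',
    'systemctl enable --now ssh.service',
    'systemctl daemon-reload',
  ]
end
```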


@@ -1,5 +1,5 @@
 case node['platform_version']
-when "18.04", "20.04", "22.04"
+when "18.04", "20.04", "22.04", "24.04"
   execute 'timedatectl set-timezone Asia/Tokyo' do
     not_if 'timedatectl | grep Tokyo'
   end


@@ -0,0 +1,11 @@
case node['platform_version']
when "24.04"
  directory "/etc/apt/apt.conf.d/bk/"

  %w( 20apt-esm-hook.conf ).each do |conf|
    execute "mv /etc/apt/apt.conf.d/#{conf} /etc/apt/apt.conf.d/bk/#{conf}"
    execute "touch /etc/apt/apt.conf.d/#{conf}"
  end

  execute 'pro config set apt_news=false'
end


@@ -45,7 +45,7 @@ when "18.04"
   not_if 'test -e /var/log/cron-apt/log'
 end
-when '20.04', '22.04'
+when '20.04', '22.04', '24.04'
   %w(20auto-upgrades 50unattended-upgrades).each do |conf|
     remote_file "/etc/apt/apt.conf.d/#{conf}" do
       owner 'root'


@@ -1,10 +1,12 @@
 # -------------------------------------------
 # Specifying the default settings:
 # -------------------------------------------
 node.reverse_merge!({
   'consulTemplate' => {
     'baseUrl' => 'https://releases.hashicorp.com/consul-template/',
-    'version' => '0.25.2',
+    'version' => `curl -s https://releases.hashicorp.com/consul-template/ | htmlq -t 'a' | grep consul-template | head -n 1 | sed -e 's/^[^_]*_//g'`.chomp!,
     'zipPrefix' => 'consul-template_',
     'zipPostfix' => '_linux_amd64.zip',
     'storage' => '/opt/consul-template/consul-template',
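The new `version` attribute scrapes the HashiCorp releases index at converge time: `htmlq -t 'a'` extracts anchor texts, `grep`/`head -n 1` picks the newest consul-template entry, and `sed 's/^[^_]*_//g'` strips the `consul-template_` prefix. The same text-munging step offline, on a canned list of anchor texts (sample data is made up):

```ruby
# Given the anchor texts of the releases index page, return the newest
# consul-template version string, mirroring the
# curl | htmlq | grep | head | sed pipeline in the attribute above.
def latest_consul_template_version(anchor_texts)
  hit = anchor_texts.find { |t| t.include?('consul-template') } # grep … | head -n 1
  hit && hit.sub(/\A[^_]*_/, '')                                # sed 's/^[^_]*_//'
end
```

One caveat with the real pipeline: it needs `htmlq` on the node (installed by `eget.rb`) and network access at converge time, and `String#chomp!` returns `nil` when the command output is empty, so a failed fetch yields a `nil` version rather than an error.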


@@ -5,20 +5,13 @@ consulTemplate_url = "#{node['consulTemplate']['baseUrl']}#{node['consulTemplate
 result = run_command('which consul-template', error: false)

 if result.exit_status != 0
-  # Download:
-  TMP = "/tmp/#{consulTemplate_zip}"
-  execute "wget #{consulTemplate_url} -O #{TMP}"

   directory '/opt/consul-template' do
     owner 'root'
     group 'root'
     mode '0755'
   end

-  execute "unzip #{TMP} -d /opt/consul-template/" do
-    not_if 'test -e /opt/consul-template/consul-template'
-  end
+  execute "eget #{consulTemplate_url} --to /opt/consul-template/"

 # Change Owner and Permissions:
 file "#{node['consulTemplate']['storage']}" do


@@ -2,7 +2,7 @@
 # Specifying the default settings:
 # -------------------------------------------
 case run_command('grep VERSION_ID /etc/os-release | awk -F\" \'{print $2}\'').stdout.chomp
-when "20.04", "22.04"
+when "20.04", "22.04", "24.04"
   cmd = 'LANG=C ip a | grep "inet " | grep -v -E "(127|172)" | cut -d" " -f6 | perl -pe "s/\/.+//g"'
 when "18.04"

@@ -19,7 +19,7 @@ dns = run_command(cmd).stdout.chomp
 node.reverse_merge!({
   'consul' => {
     'manager' => false,
-    'manager_hosts' => '"192.168.10.101", "192.168.10.251", "192.168.10.252", "192.168.10.253"',
+    'manager_hosts' => '"192.168.10.102", "192.168.10.251", "192.168.10.252", "192.168.10.253"',
     'ipaddr' => ipaddr,
     'dns' => dns,
     'encrypt' => 's2T3XUTb9MjHYOw8I820O5YkN2G6eJrjLjJRTnEAKoM=',


@@ -7,6 +7,42 @@ package 'dnsmasq'
 end

 case run_command('grep VERSION_ID /etc/os-release | awk -F\" \'{print $2}\'').stdout.chomp
+when "24.04"
+  execute "change link to /etc/resolv.conf" do
+    command "ln -fs /run/systemd/resolve/resolv.conf /etc/resolv.conf"
+  end
+
+  directory "/etc/systemd/resolved.conf.d/" do
+    mode "0755"
+    owner "root"
+    group "root"
+  end
+
+  template '/etc/systemd/resolved.conf.d/partial.conf' do
+    owner 'root'
+    group 'root'
+    mode '644'
+    source 'templates/etc/systemd/resolved.conf.d/partial.conf.erb'
+    variables(dns: node['consul']['dns'])
+    notifies :restart, 'service[systemd-resolved]', :immediately
+  end
+
+  remote_file "/etc/default/dnsmasq" do
+    mode "0644"
+    owner "root"
+    group "root"
+  end
+
+  remote_file '/etc/dnsmasq.conf' do
+    owner 'root'
+    group 'root'
+    mode '644'
+    notifies :restart, 'service[dnsmasq]', :immediately
+  end
 when "22.04"
   template '/etc/systemd/resolved.conf' do
     owner 'root'

@@ -24,6 +60,8 @@ when "22.04"
   group 'root'
   mode '644'
+  source 'files/etc/dnsmasq.conf.2204'
   notifies :restart, 'service[dnsmasq]', :immediately
 end
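The 24.04 branch above points systemd-resolved at a drop-in rendered from `partial.conf.erb` with the node's upstream DNS. That template is not part of this diff; a plausible shape for it, given the `dns` variable passed in (hypothetical content, not from the repository):

```erb
# partial.conf.erb -- hypothetical sketch; only the `dns` variable
# is confirmed by the diff above.
[Resolve]
DNS=<%= @dns %>
```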


@@ -0,0 +1,42 @@
# This file has six functions:
# 1) to completely disable starting this dnsmasq instance
# 2) to set DOMAIN_SUFFIX by running `dnsdomainname`
# 3) to select an alternative config file
# by setting DNSMASQ_OPTS to --conf-file=<file>
# 4) to tell dnsmasq to read the files in /etc/dnsmasq.d for
# more configuration variables.
# 5) to stop the resolvconf package from controlling dnsmasq's
# idea of which upstream nameservers to use.
# 6) to avoid using this dnsmasq instance as the system's default resolver
# by setting DNSMASQ_EXCEPT="lo"
# For upgraders from very old versions, all the shell variables set
# here in previous versions are still honored by the init script
# so if you just keep your old version of this file nothing will break.
#DOMAIN_SUFFIX=`dnsdomainname`
#DNSMASQ_OPTS="--conf-file=/etc/dnsmasq.alt"
# The dnsmasq daemon is run by default conforming to the Debian Policy.
# To disable the service,
# for SYSV init, use "update-rc.d dnsmasq disable",
# for systemd, use "systemctl disable dnsmasq".
# By default search this drop directory for configuration options.
# Libvirt leaves a file here to make the system dnsmasq play nice.
# Comment out this line if you don't want this. The dpkg-* are file
# endings which cause dnsmasq to skip that file. This avoids pulling
# in backups made by dpkg.
CONFIG_DIR=/etc/dnsmasq.d,.dpkg-dist,.dpkg-old,.dpkg-new
# If the resolvconf package is installed, dnsmasq will use its output
# rather than the contents of /etc/resolv.conf to find upstream
# nameservers. Uncommenting this line inhibits this behaviour.
# Note that including a "resolv-file=<filename>" line in
# /etc/dnsmasq.conf is not enough to override resolvconf if it is
# installed: the line below must be uncommented.
IGNORE_RESOLVCONF=yes
# If the resolvconf package is installed, dnsmasq will tell resolvconf
# to use dnsmasq under 127.0.0.1 as the system's default resolver.
# Uncommenting this line inhibits this behaviour.
#DNSMASQ_EXCEPT="lo"


@@ -16,9 +16,9 @@
 # these requests from bringing up the link unnecessarily.

 # Never forward plain names (without a dot or domain part)
-#domain-needed
+domain-needed

 # Never forward addresses in the non-routed address spaces.
-#bogus-priv
+bogus-priv

 # Uncomment these to enable DNSSEC validation and caching:
 # (Requires dnsmasq to be built with DNSSEC option.)


@@ -0,0 +1,679 @@
# Configuration file for dnsmasq.
#
# Format is one option per line, legal options are the same
# as the long options legal on the command line. See
# "/usr/sbin/dnsmasq --help" or "man 8 dnsmasq" for details.
# Listen on this specific port instead of the standard DNS port
# (53). Setting this to zero completely disables DNS function,
# leaving only DHCP and/or TFTP.
#port=5353
# The following two options make you a better netizen, since they
# tell dnsmasq to filter out queries which the public DNS cannot
# answer, and which load the servers (especially the root servers)
# unnecessarily. If you have a dial-on-demand link they also stop
# these requests from bringing up the link unnecessarily.
# Never forward plain names (without a dot or domain part)
#domain-needed
# Never forward addresses in the non-routed address spaces.
#bogus-priv
# Uncomment these to enable DNSSEC validation and caching:
# (Requires dnsmasq to be built with DNSSEC option.)
#conf-file=%%PREFIX%%/share/dnsmasq/trust-anchors.conf
#dnssec
# Replies which are not DNSSEC signed may be legitimate, because the domain
# is unsigned, or may be forgeries. Setting this option tells dnsmasq to
# check that an unsigned reply is OK, by finding a secure proof that a DS
# record somewhere between the root and the domain does not exist.
# The cost of setting this is that even queries in unsigned domains will need
# one or more extra DNS queries to verify.
#dnssec-check-unsigned
# Uncomment this to filter useless windows-originated DNS requests
# which can trigger dial-on-demand links needlessly.
# Note that (amongst other things) this blocks all SRV requests,
# so don't use it if you use eg Kerberos, SIP, XMMP or Google-talk.
# This option only affects forwarding, SRV records originating for
# dnsmasq (via srv-host= lines) are not suppressed by it.
#filterwin2k
# Change this line if you want dns to get its upstream servers from
# somewhere other that /etc/resolv.conf
#resolv-file=
# By default, dnsmasq will send queries to any of the upstream
# servers it knows about and tries to favour servers to are known
# to be up. Uncommenting this forces dnsmasq to try each query
# with each server strictly in the order they appear in
# /etc/resolv.conf
strict-order
# If you don't want dnsmasq to read /etc/resolv.conf or any other
# file, getting its servers from this file instead (see below), then
# uncomment this.
#no-resolv
# If you don't want dnsmasq to poll /etc/resolv.conf or other resolv
# files for changes and re-read them then uncomment this.
#no-poll
# Add other name servers here, with domain specs if they are for
# non-public domains.
server=/consul/127.0.0.1#8600
# Example of routing PTR queries to nameservers: this will send all
# address->name queries for 192.168.3/24 to nameserver 10.1.2.3
#server=/3.168.192.in-addr.arpa/10.1.2.3
# Add local-only domains here, queries in these domains are answered
# from /etc/hosts or DHCP only.
#local=/localnet/
# Add domains which you want to force to an IP address here.
# The example below send any host in double-click.net to a local
# web-server.
#address=/double-click.net/127.0.0.1
# --address (and --server) work with IPv6 addresses too.
#address=/www.thekelleys.org.uk/fe80::20d:60ff:fe36:f83
# Add the IPs of all queries to yahoo.com, google.com, and their
# subdomains to the vpn and search ipsets:
#ipset=/yahoo.com/google.com/vpn,search
# You can control how dnsmasq talks to a server: this forces
# queries to 10.1.2.3 to be routed via eth1
# server=10.1.2.3@eth1
# and this sets the source (ie local) address used to talk to
# 10.1.2.3 to 192.168.1.1 port 55 (there must be an interface with that
# IP on the machine, obviously).
# server=10.1.2.3@192.168.1.1#55
# If you want dnsmasq to change uid and gid to something other
# than the default, edit the following lines.
#user=
#group=
# If you want dnsmasq to listen for DHCP and DNS requests only on
# specified interfaces (and the loopback) give the name of the
# interface (eg eth0) here.
# Repeat the line for more than one interface.
#interface=
# Or you can specify which interface _not_ to listen on
#except-interface=
# Or which to listen on by address (remember to include 127.0.0.1 if
# you use this.)
#listen-address=
# If you want dnsmasq to provide only DNS service on an interface,
# configure it as shown above, and then use the following line to
# disable DHCP and TFTP on it.
#no-dhcp-interface=
# On systems which support it, dnsmasq binds the wildcard address,
# even when it is listening on only some interfaces. It then discards
# requests that it shouldn't reply to. This has the advantage of
# working even when interfaces come and go and change address. If you
# want dnsmasq to really bind only the interfaces it is listening on,
# uncomment this option. About the only time you may need this is when
# running another nameserver on the same machine.
#bind-interfaces
# If you don't want dnsmasq to read /etc/hosts, uncomment the
# following line.
#no-hosts
# or if you want it to read another file, as well as /etc/hosts, use
# this.
#addn-hosts=/etc/banner_add_hosts
# Set this (and domain: see below) if you want to have a domain
# automatically added to simple names in a hosts-file.
#expand-hosts
# Set the domain for dnsmasq. this is optional, but if it is set, it
# does the following things.
# 1) Allows DHCP hosts to have fully qualified domain names, as long
# as the domain part matches this setting.
# 2) Sets the "domain" DHCP option thereby potentially setting the
# domain of all systems configured by DHCP
# 3) Provides the domain part for "expand-hosts"
#domain=thekelleys.org.uk
# Set a different domain for a particular subnet
#domain=wireless.thekelleys.org.uk,192.168.2.0/24
# Same idea, but range rather then subnet
#domain=reserved.thekelleys.org.uk,192.68.3.100,192.168.3.200
# Uncomment this to enable the integrated DHCP server, you need
# to supply the range of addresses available for lease and optionally
# a lease time. If you have more than one network, you will need to
# repeat this for each network on which you want to supply DHCP
# service.
#dhcp-range=192.168.0.50,192.168.0.150,12h
# This is an example of a DHCP range where the netmask is given. This
# is needed for networks we reach the dnsmasq DHCP server via a relay
# agent. If you don't know what a DHCP relay agent is, you probably
# don't need to worry about this.
#dhcp-range=192.168.0.50,192.168.0.150,255.255.255.0,12h
# This is an example of a DHCP range which sets a tag, so that
# some DHCP options may be set only for this network.
#dhcp-range=set:red,192.168.0.50,192.168.0.150
# Use this DHCP range only when the tag "green" is set.
#dhcp-range=tag:green,192.168.0.50,192.168.0.150,12h
# Specify a subnet which can't be used for dynamic address allocation,
# is available for hosts with matching --dhcp-host lines. Note that
# dhcp-host declarations will be ignored unless there is a dhcp-range
# of some type for the subnet in question.
# In this case the netmask is implied (it comes from the network
# configuration on the machine running dnsmasq) it is possible to give
# an explicit netmask instead.
#dhcp-range=192.168.0.0,static
# Enable DHCPv6. Note that the prefix-length does not need to be specified
# and defaults to 64 if missing/
#dhcp-range=1234::2, 1234::500, 64, 12h
# Do Router Advertisements, BUT NOT DHCP for this subnet.
#dhcp-range=1234::, ra-only
# Do Router Advertisements, BUT NOT DHCP for this subnet, also try and
# add names to the DNS for the IPv6 address of SLAAC-configured dual-stack
# hosts. Use the DHCPv4 lease to derive the name, network segment and
# MAC address and assume that the host will also have an
# IPv6 address calculated using the SLAAC algorithm.
#dhcp-range=1234::, ra-names
# Do Router Advertisements, BUT NOT DHCP for this subnet.
# Set the lifetime to 46 hours. (Note: minimum lifetime is 2 hours.)
#dhcp-range=1234::, ra-only, 48h
# Do DHCP and Router Advertisements for this subnet. Set the A bit in the RA
# so that clients can use SLAAC addresses as well as DHCP ones.
#dhcp-range=1234::2, 1234::500, slaac
# Do Router Advertisements and stateless DHCP for this subnet. Clients will
# not get addresses from DHCP, but they will get other configuration information.
# They will use SLAAC for addresses.
#dhcp-range=1234::, ra-stateless
# Do stateless DHCP, SLAAC, and generate DNS names for SLAAC addresses
# from DHCPv4 leases.
#dhcp-range=1234::, ra-stateless, ra-names
# Do router advertisements for all subnets where we're doing DHCPv6
# Unless overridden by ra-stateless, ra-names, et al, the router
# advertisements will have the M and O bits set, so that the clients
# get addresses and configuration from DHCPv6, and the A bit reset, so the
# clients don't use SLAAC addresses.
#enable-ra
# Supply parameters for specified hosts using DHCP. There are lots
# of valid alternatives, so we will give examples of each. Note that
# IP addresses DO NOT have to be in the range given above, they just
# need to be on the same network. The order of the parameters in these
# do not matter, it's permissible to give name, address and MAC in any
# order.
# Always allocate the host with Ethernet address 11:22:33:44:55:66
# The IP address 192.168.0.60
#dhcp-host=11:22:33:44:55:66,192.168.0.60
# Always set the name of the host with hardware address
# 11:22:33:44:55:66 to be "fred"
#dhcp-host=11:22:33:44:55:66,fred
# Always give the host with Ethernet address 11:22:33:44:55:66
# the name fred and IP address 192.168.0.60 and lease time 45 minutes
#dhcp-host=11:22:33:44:55:66,fred,192.168.0.60,45m
# Give a host with Ethernet address 11:22:33:44:55:66 or
# 12:34:56:78:90:12 the IP address 192.168.0.60. Dnsmasq will assume
# that these two Ethernet interfaces will never be in use at the same
# time, and give the IP address to the second, even if it is already
# in use by the first. Useful for laptops with wired and wireless
# addresses.
#dhcp-host=11:22:33:44:55:66,12:34:56:78:90:12,192.168.0.60
# Give the machine which says its name is "bert" IP address
# 192.168.0.70 and an infinite lease
#dhcp-host=bert,192.168.0.70,infinite
# Always give the host with client identifier 01:02:02:04
# the IP address 192.168.0.60
#dhcp-host=id:01:02:02:04,192.168.0.60
# Always give the InfiniBand interface with hardware address
# 80:00:00:48:fe:80:00:00:00:00:00:00:f4:52:14:03:00:28:05:81 the
# ip address 192.168.0.61. The client id is derived from the prefix
# ff:00:00:00:00:00:02:00:00:02:c9:00 and the last 8 pairs of
# hex digits of the hardware address.
#dhcp-host=id:ff:00:00:00:00:00:02:00:00:02:c9:00:f4:52:14:03:00:28:05:81,192.168.0.61
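# The client-id derivation above can be sketched in Python. This is an
# illustrative helper (the function name and structure are mine, not part
# of dnsmasq): take the fixed 12-octet prefix, then the last 8 octets of
# the 20-octet InfiniBand hardware address.

```python
# Sketch: build the "id:" client identifier for an IPoIB interface as
# described above. Illustrative only; not dnsmasq source code.
PREFIX = "ff:00:00:00:00:00:02:00:00:02:c9:00"

def infiniband_client_id(hw_addr: str) -> str:
    octets = hw_addr.split(":")
    assert len(octets) == 20, "InfiniBand hardware addresses are 20 octets"
    # Fixed prefix plus the trailing 8 octets (the interface GUID).
    return "id:" + PREFIX + ":" + ":".join(octets[-8:])

hw = "80:00:00:48:fe:80:00:00:00:00:00:00:f4:52:14:03:00:28:05:81"
print(infiniband_client_id(hw))
```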
# Always give the host with client identifier "marjorie"
# the IP address 192.168.0.60
#dhcp-host=id:marjorie,192.168.0.60
# Enable the address given for "judge" in /etc/hosts
# to be given to a machine presenting the name "judge" when
# it asks for a DHCP lease.
#dhcp-host=judge
# Never offer DHCP service to a machine whose Ethernet
# address is 11:22:33:44:55:66
#dhcp-host=11:22:33:44:55:66,ignore
# Ignore any client-id presented by the machine with Ethernet
# address 11:22:33:44:55:66. This is useful to prevent a machine
# being treated differently when running under different OS's or
# between PXE boot and OS boot.
#dhcp-host=11:22:33:44:55:66,id:*
# Send extra options which are tagged as "red" to
# the machine with Ethernet address 11:22:33:44:55:66
#dhcp-host=11:22:33:44:55:66,set:red
# Send extra options which are tagged as "red" to
# any machine with Ethernet address starting 11:22:33:
#dhcp-host=11:22:33:*:*:*,set:red
# Give a fixed IPv6 address and name to client with
# DUID 00:01:00:01:16:d2:83:fc:92:d4:19:e2:d8:b2
# Note that MAC addresses CANNOT be used to identify DHCPv6 clients.
# Note also that the [] around the IPv6 address are obligatory.
#dhcp-host=id:00:01:00:01:16:d2:83:fc:92:d4:19:e2:d8:b2, fred, [1234::5]
# Ignore any clients which are not specified in dhcp-host lines
# or /etc/ethers. Equivalent to ISC "deny unknown-clients".
# This relies on the special "known" tag which is set when
# a host is matched.
#dhcp-ignore=tag:!known
# Send extra options which are tagged as "red" to any machine whose
# DHCP vendorclass string includes the substring "Linux"
#dhcp-vendorclass=set:red,Linux
# Send extra options which are tagged as "red" to any machine one
# of whose DHCP userclass strings includes the substring "accounts"
#dhcp-userclass=set:red,accounts
# Send extra options which are tagged as "red" to any machine whose
# MAC address matches the pattern.
#dhcp-mac=set:red,00:60:8C:*:*:*
# If this line is uncommented, dnsmasq will read /etc/ethers and act
# on the ethernet-address/IP pairs found there just as if they had
# been given as --dhcp-host options. Useful if you keep
# MAC-address/host mappings there for other purposes.
#read-ethers
# Send options to hosts which ask for a DHCP lease.
# See RFC 2132 for details of available options.
# Common options can be given to dnsmasq by name:
# run "dnsmasq --help dhcp" to get a list.
# Note that all the common settings, such as netmask and
# broadcast address, DNS server and default route, are given
# sane defaults by dnsmasq. You very likely will not need
# any dhcp-options. If you use Windows clients and Samba, there
# are some options which are recommended, they are detailed at the
# end of this section.
# Override the default route supplied by dnsmasq, which assumes the
# router is the same machine as the one running dnsmasq.
#dhcp-option=3,1.2.3.4
# Do the same thing, but using the option name
#dhcp-option=option:router,1.2.3.4
# Override the default route supplied by dnsmasq and send no default
# route at all. Note that this only works for the options sent by
# default (1, 3, 6, 12, 28); the same line will send a zero-length option
# for all other option numbers.
#dhcp-option=3
# Set the NTP time server addresses to 192.168.0.4 and 10.10.0.5
#dhcp-option=option:ntp-server,192.168.0.4,10.10.0.5
# Send DHCPv6 option. Note [] around IPv6 addresses.
#dhcp-option=option6:dns-server,[1234::77],[1234::88]
# Send DHCPv6 option for nameservers as the machine running
# dnsmasq and another.
#dhcp-option=option6:dns-server,[::],[1234::88]
# Ask client to poll for option changes every six hours. (RFC4242)
#dhcp-option=option6:information-refresh-time,6h
# Set option 58 client renewal time (T1). Defaults to half of the
# lease time if not specified. (RFC2132)
#dhcp-option=option:T1,1m
# Set option 59 rebinding time (T2). Defaults to 7/8 of the
# lease time if not specified. (RFC2132)
#dhcp-option=option:T2,2m
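# The T1/T2 defaults above are simple fractions of the lease time, which a
# quick sketch makes concrete (illustrative helper, not dnsmasq code):

```python
# Sketch: default renewal (T1) and rebinding (T2) timers when not set
# explicitly: T1 = 1/2 and T2 = 7/8 of the lease time (RFC 2132).
def default_timers(lease_seconds: int) -> tuple[int, int]:
    return lease_seconds // 2, lease_seconds * 7 // 8

t1, t2 = default_timers(12 * 3600)  # a 12h lease -> T1 = 6h, T2 = 10.5h
print(t1, t2)
```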
# Set the NTP time server address to be the same machine as
# is running dnsmasq
#dhcp-option=42,0.0.0.0
# Set the NIS domain name to "welly"
#dhcp-option=40,welly
# Set the default time-to-live to 50
#dhcp-option=23,50
# Set the "all subnets are local" flag
#dhcp-option=27,1
# Send the etherboot magic flag and then etherboot options (a string).
#dhcp-option=128,e4:45:74:68:00:00
#dhcp-option=129,NIC=eepro100
# Specify an option which will only be sent to the "red" network
# (see dhcp-range for the declaration of the "red" network)
# Note that the tag: part must precede the option: part.
#dhcp-option = tag:red, option:ntp-server, 192.168.1.1
# The following DHCP options set up dnsmasq in the same way as is specified
# for the ISC dhcpd in
# http://www.samba.org/samba/ftp/docs/textdocs/DHCP-Server-Configuration.txt
# adapted for a typical dnsmasq installation where the host running
# dnsmasq is also the host running samba.
# you may want to uncomment some or all of them if you use
# Windows clients and Samba.
#dhcp-option=19,0 # option ip-forwarding off
#dhcp-option=44,0.0.0.0 # set netbios-over-TCP/IP nameserver(s) aka WINS server(s)
#dhcp-option=45,0.0.0.0 # netbios datagram distribution server
#dhcp-option=46,8 # netbios node type
# Send an empty WPAD option. This may be REQUIRED to get windows 7 to behave.
#dhcp-option=252,"\n"
# Send RFC-3397 DNS domain search DHCP option. WARNING: Your DHCP client
# probably doesn't support this......
#dhcp-option=option:domain-search,eng.apple.com,marketing.apple.com
# Send RFC-3442 classless static routes (note the netmask encoding)
#dhcp-option=121,192.168.1.0/24,1.2.3.4,10.0.0.0/8,5.6.7.8
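# The "netmask encoding" noted above is the RFC 3442 compact form: each
# route is the prefix length, then only the significant octets of the
# destination, then the router. A rough Python sketch (the function is
# mine, for illustration only):

```python
# Sketch: RFC 3442 classless static route encoding. Only the octets
# covered by the prefix length are transmitted for the destination.
def encode_route(destination: str, router: str) -> list[int]:
    net, plen_str = destination.split("/")
    plen = int(plen_str)
    significant = (plen + 7) // 8          # octets actually sent
    dest_octets = [int(o) for o in net.split(".")][:significant]
    return [plen] + dest_octets + [int(o) for o in router.split(".")]

# 192.168.1.0/24 via 1.2.3.4 -> [24, 192, 168, 1, 1, 2, 3, 4]
payload = encode_route("192.168.1.0/24", "1.2.3.4") + encode_route("10.0.0.0/8", "5.6.7.8")
print(payload)
```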
# Send vendor-class specific options encapsulated in DHCP option 43.
# The meaning of the options is defined by the vendor-class so
# options are sent only when the client supplied vendor class
# matches the class given here. (A substring match is OK, so "MSFT"
# matches "MSFT" and "MSFT 5.0"). This example sets the
# mtftp address to 0.0.0.0 for PXEClients.
#dhcp-option=vendor:PXEClient,1,0.0.0.0
# Send microsoft-specific option to tell windows to release the DHCP lease
# when it shuts down. Note the "i" flag, to tell dnsmasq to send the
# value as a four-byte integer - that's what microsoft wants. See
# http://technet2.microsoft.com/WindowsServer/en/library/a70f1bb7-d2d4-49f0-96d6-4b7414ecfaae1033.mspx?mfr=true
#dhcp-option=vendor:MSFT,2,1i
# Send the Encapsulated-vendor-class ID needed by some configurations of
# Etherboot to allow it to recognise the DHCP server.
#dhcp-option=vendor:Etherboot,60,"Etherboot"
# Send options to PXELinux. Note that we need to send the options even
# though they don't appear in the parameter request list, so we need
# to use dhcp-option-force here.
# See http://syslinux.zytor.com/pxe.php#special for details.
# Magic number - needed before anything else is recognised
#dhcp-option-force=208,f1:00:74:7e
# Configuration file name
#dhcp-option-force=209,configs/common
# Path prefix
#dhcp-option-force=210,/tftpboot/pxelinux/files/
# Reboot time. (Note 'i' to send 32-bit value)
#dhcp-option-force=211,30i
# Set the boot filename for netboot/PXE. You will only need
# this if you want to boot machines over the network and you will need
# a TFTP server; either dnsmasq's built-in TFTP server or an
# external one. (See below for how to enable the TFTP server.)
#dhcp-boot=pxelinux.0
# The same as above, but use a custom TFTP server instead of the machine running dnsmasq
#dhcp-boot=pxelinux,server.name,192.168.1.100
# Boot for iPXE. The idea is to send two different
# filenames, the first loads iPXE, and the second tells iPXE what to
# load. The dhcp-match sets the ipxe tag for requests from iPXE.
#dhcp-boot=undionly.kpxe
#dhcp-match=set:ipxe,175 # iPXE sends a 175 option.
#dhcp-boot=tag:ipxe,http://boot.ipxe.org/demo/boot.php
# Encapsulated options for iPXE. All the options are
# encapsulated within option 175
#dhcp-option=encap:175, 1, 5b # priority code
#dhcp-option=encap:175, 176, 1b # no-proxydhcp
#dhcp-option=encap:175, 177, string # bus-id
#dhcp-option=encap:175, 189, 1b # BIOS drive code
#dhcp-option=encap:175, 190, user # iSCSI username
#dhcp-option=encap:175, 191, pass # iSCSI password
# Test for the architecture of a netboot client. PXE clients are
# supposed to send their architecture as option 93. (See RFC 4578)
#dhcp-match=peecees, option:client-arch, 0 #x86-32
#dhcp-match=itanics, option:client-arch, 2 #IA64
#dhcp-match=hammers, option:client-arch, 6 #x86-64
#dhcp-match=mactels, option:client-arch, 7 #EFI x86-64
# Do real PXE, rather than just booting a single file; this is an
# alternative to dhcp-boot.
#pxe-prompt="What system shall I netboot?"
# or with timeout before first available action is taken:
#pxe-prompt="Press F8 for menu.", 60
# Available boot services for PXE.
#pxe-service=x86PC, "Boot from local disk"
# Loads <tftp-root>/pxelinux.0 from dnsmasq TFTP server.
#pxe-service=x86PC, "Install Linux", pxelinux
# Loads <tftp-root>/pxelinux.0 from TFTP server at 1.2.3.4.
# Beware this fails on old PXE ROMS.
#pxe-service=x86PC, "Install Linux", pxelinux, 1.2.3.4
# Use bootserver on network, found by multicast or broadcast.
#pxe-service=x86PC, "Install windows from RIS server", 1
# Use bootserver at a known IP address.
#pxe-service=x86PC, "Install windows from RIS server", 1, 1.2.3.4
# If you have multicast-FTP available,
# information for that can be passed in a similar way using options 1
# to 5. See page 19 of
# http://download.intel.com/design/archives/wfm/downloads/pxespec.pdf
# Enable dnsmasq's built-in TFTP server
#enable-tftp
# Set the root directory for files available via TFTP.
#tftp-root=/var/ftpd
# Do not abort if the tftp-root is unavailable
#tftp-no-fail
# Make the TFTP server more secure: with this set, only files owned by
# the user dnsmasq is running as will be sent over the net.
#tftp-secure
# This option stops dnsmasq from negotiating a larger blocksize for TFTP
# transfers. It will slow things down, but may rescue some broken TFTP
# clients.
#tftp-no-blocksize
# Set the boot file name only when the "red" tag is set.
#dhcp-boot=tag:red,pxelinux.red-net
# An example of dhcp-boot with an external TFTP server: the name and IP
# address of the server are given after the filename.
# Can fail with old PXE ROMS. Overridden by --pxe-service.
#dhcp-boot=/var/ftpd/pxelinux.0,boothost,192.168.0.3
# If there are multiple external tftp servers having the same name
# (using /etc/hosts) then that name can be specified as the
# tftp_servername (the third option to dhcp-boot) and in that
# case dnsmasq resolves this name and returns the resultant IP
# addresses in round robin fashion. This facility can be used to
# load balance the tftp load among a set of servers.
#dhcp-boot=/var/ftpd/pxelinux.0,boothost,tftp_server_name
# Set the limit on DHCP leases, the default is 150
#dhcp-lease-max=150
# The DHCP server needs somewhere on disk to keep its lease database.
# This defaults to a sane location, but if you want to change it, use
# the line below.
#dhcp-leasefile=/var/lib/misc/dnsmasq.leases
# Set the DHCP server to authoritative mode. In this mode it will barge in
# and take over the lease for any client which broadcasts on the network,
# whether it has a record of the lease or not. This avoids long timeouts
# when a machine wakes up on a new network. DO NOT enable this if there's
# the slightest chance that you might end up accidentally configuring a DHCP
# server for your campus/company. The ISC server uses
# the same option, and this URL provides more information:
# http://www.isc.org/files/auth.html
#dhcp-authoritative
# Set the DHCP server to enable DHCPv4 Rapid Commit Option per RFC 4039.
# In this mode it will respond to a DHCPDISCOVER message including a Rapid Commit
# option with a DHCPACK including a Rapid Commit option and fully committed address
# and configuration information. This must only be enabled if either the server is
# the only server for the subnet, or multiple servers are present and they each
# commit a binding for all clients.
#dhcp-rapid-commit
# Run an executable when a DHCP lease is created or destroyed.
# The arguments sent to the script are "add" or "del",
# then the MAC address, the IP address and finally the hostname
# if there is one.
#dhcp-script=/bin/echo
# Set the cachesize here.
#cache-size=150
# If you want to disable negative caching, uncomment this.
#no-negcache
# Normally responses which come from /etc/hosts and the DHCP lease
# file have Time-To-Live set as zero, which conventionally means
# do not cache further. If you are happy to trade lower load on the
# server for potentially stale data, you can set a time-to-live (in
# seconds) here.
#local-ttl=
# If you want dnsmasq to detect attempts by Verisign to send queries
# to unregistered .com and .net hosts to its sitefinder service and
# have dnsmasq instead return the correct NXDOMAIN response, uncomment
# this line. You can add similar lines to do the same for other
# registries which have implemented wildcard A records.
#bogus-nxdomain=64.94.110.11
# If you want to fix up DNS results from upstream servers, use the
# alias option. This only works for IPv4.
# This alias makes a result of 1.2.3.4 appear as 5.6.7.8
#alias=1.2.3.4,5.6.7.8
# and this maps 1.2.3.x to 5.6.7.x
#alias=1.2.3.0,5.6.7.0,255.255.255.0
# and this maps 192.168.0.10->192.168.0.40 to 10.0.0.10->10.0.0.40
#alias=192.168.0.10-192.168.0.40,10.0.0.0,255.255.255.0
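# The netmask form of alias above keeps the host bits from the upstream
# answer and substitutes the network bits. A minimal sketch of that
# behaviour (my illustrative helper, not how dnsmasq implements it):

```python
# Sketch: rewrite an upstream A-record answer per an
# alias=<old-net>,<new-net>,<mask> line. Addresses outside the old
# network are returned unchanged.
import ipaddress

def alias_rewrite(answer: str, old_net: str, new_net: str, mask: str) -> str:
    a = int(ipaddress.IPv4Address(answer))
    m = int(ipaddress.IPv4Address(mask))
    if a & m != int(ipaddress.IPv4Address(old_net)) & m:
        return answer                      # not covered by this alias
    host_bits = a & ~m & 0xFFFFFFFF        # keep the host part
    new_bits = int(ipaddress.IPv4Address(new_net)) & m
    return str(ipaddress.IPv4Address(new_bits | host_bits))

# alias=1.2.3.0,5.6.7.0,255.255.255.0 maps 1.2.3.4 -> 5.6.7.4
print(alias_rewrite("1.2.3.4", "1.2.3.0", "5.6.7.0", "255.255.255.0"))
```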
# Change these lines if you want dnsmasq to serve MX records.
# Return an MX record named "maildomain.com" with target
# servermachine.com and preference 50
#mx-host=maildomain.com,servermachine.com,50
# Set the default target for MX records created using the localmx option.
#mx-target=servermachine.com
# Return an MX record pointing to the mx-target for all local
# machines.
#localmx
# Return an MX record pointing to itself for all local machines.
#selfmx
# Change the following lines if you want dnsmasq to serve SRV
# records. These are useful if you want to serve ldap requests for
# Active Directory and other windows-originated DNS requests.
# See RFC 2782.
# You may add multiple srv-host lines.
# The fields are <name>,<target>,<port>,<priority>,<weight>
# If the domain part is missing from the name (so that it just has the
# service and protocol sections) then the domain given by the domain=
# config option is used. (Note that expand-hosts does not need to be
# set for this to work.)
# A SRV record sending LDAP for the example.com domain to
# ldapserver.example.com port 389
#srv-host=_ldap._tcp.example.com,ldapserver.example.com,389
# A SRV record sending LDAP for the example.com domain to
# ldapserver.example.com port 389 (using domain=)
#domain=example.com
#srv-host=_ldap._tcp,ldapserver.example.com,389
# Two SRV records for LDAP, each with different priorities
#srv-host=_ldap._tcp.example.com,ldapserver.example.com,389,1
#srv-host=_ldap._tcp.example.com,ldapserver.example.com,389,2
# A SRV record indicating that there is no LDAP server for the domain
# example.com
#srv-host=_ldap._tcp.example.com
# The following line shows how to make dnsmasq serve an arbitrary PTR
# record. This is useful for DNS-SD. (Note that the
# domain-name expansion done for SRV records does _not_
# occur for PTR records.)
#ptr-record=_http._tcp.dns-sd-services,"New Employee Page._http._tcp.dns-sd-services"
# Change the following lines to enable dnsmasq to serve TXT records.
# These are used for things like SPF and zeroconf. (Note that the
# domain-name expansion done for SRV records does _not_
# occur for TXT records.)
#Example SPF.
#txt-record=example.com,"v=spf1 a -all"
#Example zeroconf
#txt-record=_http._tcp.example.com,name=value,paper=A4
# Provide an alias for a "local" DNS name. Note that this _only_ works
# for targets which are names from DHCP or /etc/hosts. Give host
# "bert" another name, bertrand
#cname=bertrand,bert
# For debugging purposes, log each DNS query as it passes through
# dnsmasq.
#log-queries
# Log lots of extra information about DHCP transactions.
#log-dhcp
# Include another lot of configuration options.
#conf-file=/etc/dnsmasq.more.conf
#conf-dir=/etc/dnsmasq.d
# Include all the files in a directory except those ending in .bak
#conf-dir=/etc/dnsmasq.d,.bak
# Include all files in a directory which end in .conf
#conf-dir=/etc/dnsmasq.d/,*.conf
# If a DHCP client claims that its name is "wpad", ignore that.
# This fixes a security hole. See CERT Vulnerability VU#598349.
#dhcp-name-match=set:wpad-ignore,wpad
#dhcp-ignore-names=tag:wpad-ignore

@@ -1 +1 @@
-md5:3589fac78cfe7ae33551d6478f20e2cd:salt:229-185-78-119-188-9-161-204:aes-256-cfb:aqhITLoIN7UEBZRyMeO+xwAqfZrz7VXUVcre+Fip/RhqzfWZaQ==
+md5:d0bf5c103435e9c51e21752192e89575:salt:20-135-197-125-136-152-137-246:aes-256-cfb:aVa3ufSt0fr6iarjwajOHZZs4bGSOo38N577EEbCJwXNW/M41g==

@@ -1 +1 @@
-md5:98b157199b9f17446254894788740c7d:salt:233-189-165-36-170-54-151-47:aes-256-cfb:gB1Ml+Bg2iNwwd76Qn7C8+mVlzKT9Ndb0W3R0g2PTQyF7ejNJg==
+md5:ab19117b12b65eef5d46283a1f9d8430:salt:2-183-180-51-94-222-93-197:aes-256-cfb:hlO5lzU8SmLmqPjquIJgwEzSlM5w7ij8gGFZXJVY2yt0KNRqrw==

@@ -7,7 +7,7 @@ execute "wget -O- #{SRC} | gpg --dearmor -o #{DEST}" do
 end

 # Retrieve the Ubuntu code:
-DIST = run_command('lsb_release -cs').stdout.chomp
+DIST = run_command('lsb_release -cs 2>/dev/null').stdout.chomp

 # Deploy the `apt` sources:
 template '/etc/apt/sources.list.d/hashicorp.list' do

@@ -0,0 +1,3 @@
+[Resolve]
+DNS=127.0.0.1
+DNSStubListener=no

@ -10,7 +10,6 @@
;instance_name = ${HOSTNAME} ;instance_name = ${HOSTNAME}
# force migration will run migrations that might cause dataloss # force migration will run migrations that might cause dataloss
# Deprecated, use clean_upgrade option in [unified_alerting.upgrade] instead.
;force_migration = false ;force_migration = false
#################################### Paths #################################### #################################### Paths ####################################
@ -35,9 +34,6 @@
# Protocol (http, https, h2, socket) # Protocol (http, https, h2, socket)
;protocol = http ;protocol = http
# This is the minimum TLS version allowed. By default, this value is empty. Accepted values are: TLS1.2, TLS1.3. If nothing is set TLS1.2 would be taken
;min_tls_version = ""
# The ip address to bind to, empty will bind to all interfaces # The ip address to bind to, empty will bind to all interfaces
;http_addr = ;http_addr =
@ -90,19 +86,6 @@
# `0` means there is no timeout for reading the request. # `0` means there is no timeout for reading the request.
;read_timeout = 0 ;read_timeout = 0
# This setting enables you to specify additional headers that the server adds to HTTP(S) responses.
[server.custom_response_headers]
#exampleHeader1 = exampleValue1
#exampleHeader2 = exampleValue2
#################################### GRPC Server #########################
;[grpc_server]
;network = "tcp"
;address = "127.0.0.1:10000"
;use_tls = false
;cert_file =
;key_file =
#################################### Database #################################### #################################### Database ####################################
[database] [database]
# You can configure the database connection by specifying type, host, name, user and password # You can configure the database connection by specifying type, host, name, user and password
@ -124,9 +107,6 @@ password = 123qwe$%&RTY
# For "mysql", use either "true", "false", or "skip-verify". # For "mysql", use either "true", "false", or "skip-verify".
;ssl_mode = disable ;ssl_mode = disable
# For "postregs", use either "1" to enable or "0" to disable SNI
;ssl_sni =
# Database drivers may support different transaction isolation levels. # Database drivers may support different transaction isolation levels.
# Currently, only "mysql" driver supports isolation levels. # Currently, only "mysql" driver supports isolation levels.
# If the value is empty - driver's default isolation level is applied. # If the value is empty - driver's default isolation level is applied.
@ -156,9 +136,6 @@ password = 123qwe$%&RTY
# For "sqlite3" only. cache mode setting used for connecting to the database. (private, shared) # For "sqlite3" only. cache mode setting used for connecting to the database. (private, shared)
;cache_mode = private ;cache_mode = private
# For "sqlite3" only. Enable/disable Write-Ahead Logging, https://sqlite.org/wal.html. Default is false.
;wal = false
# For "mysql" only if migrationLocking feature toggle is set. How many seconds to wait before failing to lock the database for the migrations, default is 0. # For "mysql" only if migrationLocking feature toggle is set. How many seconds to wait before failing to lock the database for the migrations, default is 0.
;locking_attempt_timeout_sec = 0 ;locking_attempt_timeout_sec = 0
@ -168,9 +145,6 @@ password = 123qwe$%&RTY
# For "sqlite" only. How many times to retry transaction in case of database is locked failures. Default is 5. # For "sqlite" only. How many times to retry transaction in case of database is locked failures. Default is 5.
;transaction_retries = 5 ;transaction_retries = 5
# Set to true to add metrics and tracing for database queries.
;instrument_queries = false
################################### Data sources ######################### ################################### Data sources #########################
[datasources] [datasources]
# Upper limit of data sources that Grafana will return. This limit is a temporary configuration and it will be deprecated when pagination will be introduced on the list data sources API. # Upper limit of data sources that Grafana will return. This limit is a temporary configuration and it will be deprecated when pagination will be introduced on the list data sources API.
@ -187,12 +161,6 @@ password = 123qwe$%&RTY
# memcache: 127.0.0.1:11211 # memcache: 127.0.0.1:11211
;connstr = ;connstr =
# prefix prepended to all the keys in the remote cache
; prefix =
# This enables encryption of values stored in the remote cache
;encryption =
#################################### Data proxy ########################### #################################### Data proxy ###########################
[dataproxy] [dataproxy]
@ -238,9 +206,6 @@ password = 123qwe$%&RTY
# Limits the number of rows that Grafana will process from SQL data sources. # Limits the number of rows that Grafana will process from SQL data sources.
;row_limit = 1000000 ;row_limit = 1000000
# Sets a custom value for the `User-Agent` header for outgoing data proxy requests. If empty, the default value is `Grafana/<BuildVersion>` (for example `Grafana/9.0.0`).
;user_agent =
#################################### Analytics #################################### #################################### Analytics ####################################
[analytics] [analytics]
# Server reporting, sends usage counters to stats.grafana.org every 24 hours. # Server reporting, sends usage counters to stats.grafana.org every 24 hours.
@ -256,7 +221,7 @@ password = 123qwe$%&RTY
# for new versions of grafana. The check is used # for new versions of grafana. The check is used
# in some UI views to notify that a grafana update exists. # in some UI views to notify that a grafana update exists.
# This option does not cause any auto updates, nor send any information # This option does not cause any auto updates, nor send any information
# only a GET request to https://grafana.com/api/grafana/versions/stable to get the latest version. # only a GET request to https://raw.githubusercontent.com/grafana/grafana/main/latest.json to get the latest version.
;check_for_updates = true ;check_for_updates = true
# Set to false to disable all checks to https://grafana.com # Set to false to disable all checks to https://grafana.com
@ -290,12 +255,6 @@ password = 123qwe$%&RTY
# Rudderstack Config url, optional, used by Rudderstack SDK to fetch source config # Rudderstack Config url, optional, used by Rudderstack SDK to fetch source config
;rudderstack_config_url = ;rudderstack_config_url =
# Rudderstack Integrations URL, optional. Only valid if you pass the SDK version 1.1 or higher
;rudderstack_integrations_url =
# Intercom secret, optional, used to hash user_id before passing to Intercom via Rudderstack
;intercom_secret =
# Controls if the UI contains any links to user feedback forms # Controls if the UI contains any links to user feedback forms
;feedback_links_enabled = true ;feedback_links_enabled = true
@ -371,14 +330,6 @@ password = 123qwe$%&RTY
# $ROOT_PATH is server.root_url without the protocol. # $ROOT_PATH is server.root_url without the protocol.
;content_security_policy_template = """script-src 'self' 'unsafe-eval' 'unsafe-inline' 'strict-dynamic' $NONCE;object-src 'none';font-src 'self';style-src 'self' 'unsafe-inline' blob:;img-src * data:;base-uri 'self';connect-src 'self' grafana.com ws://$ROOT_PATH wss://$ROOT_PATH;manifest-src 'self';media-src 'none';form-action 'self';""" ;content_security_policy_template = """script-src 'self' 'unsafe-eval' 'unsafe-inline' 'strict-dynamic' $NONCE;object-src 'none';font-src 'self';style-src 'self' 'unsafe-inline' blob:;img-src * data:;base-uri 'self';connect-src 'self' grafana.com ws://$ROOT_PATH wss://$ROOT_PATH;manifest-src 'self';media-src 'none';form-action 'self';"""
# Enable adding the Content-Security-Policy-Report-Only header to your requests.
# Allows you to monitor the effects of a policy without enforcing it.
;content_security_policy_report_only = false
# Set Content Security Policy Report Only template used when adding the Content-Security-Policy-Report-Only header to your requests.
# $NONCE in the template includes a random nonce.
# $ROOT_PATH is server.root_url without the protocol.
;content_security_policy_report_only_template = """script-src 'self' 'unsafe-eval' 'unsafe-inline' 'strict-dynamic' $NONCE;object-src 'none';font-src 'self';style-src 'self' 'unsafe-inline' blob:;img-src * data:;base-uri 'self';connect-src 'self' grafana.com ws://$ROOT_PATH wss://$ROOT_PATH;manifest-src 'self';media-src 'none';form-action 'self';"""
# Controls if old angular plugins are supported or not. This will be disabled by default in future release # Controls if old angular plugins are supported or not. This will be disabled by default in future release
;angular_support_enabled = true ;angular_support_enabled = true
@ -388,12 +339,6 @@ password = 123qwe$%&RTY
# List of allowed headers to be set by the user, separated by spaces. Suggested to use for if authentication lives behind reverse proxies. # List of allowed headers to be set by the user, separated by spaces. Suggested to use for if authentication lives behind reverse proxies.
;csrf_additional_headers = ;csrf_additional_headers =
# The CSRF check will be executed even if the request has no login cookie.
;csrf_always_check = false
# Comma-separated list of plugins ids that won't be loaded inside the frontend sandbox
;disable_frontend_sandbox_for_plugins =
[security.encryption] [security.encryption]
# Defines the time-to-live (TTL) for decrypted data encryption keys stored in memory (cache). # Defines the time-to-live (TTL) for decrypted data encryption keys stored in memory (cache).
# Please note that small values may cause performance issues due to a high frequency decryption operations. # Please note that small values may cause performance issues due to a high frequency decryption operations.
@ -405,9 +350,6 @@ password = 123qwe$%&RTY
#################################### Snapshots ########################### #################################### Snapshots ###########################
[snapshots] [snapshots]
# set to false to remove snapshot functionality
;enabled = true
# snapshot sharing options # snapshot sharing options
;external_enabled = true ;external_enabled = true
;external_snapshot_url = https://snapshots.raintank.io ;external_snapshot_url = https://snapshots.raintank.io
@ -446,7 +388,7 @@ password = 123qwe$%&RTY
# Set this value to automatically add new users to the provided organization (if auto_assign_org above is set to true) # Set this value to automatically add new users to the provided organization (if auto_assign_org above is set to true)
;auto_assign_org_id = 1 ;auto_assign_org_id = 1
# Default role new users will be automatically assigned # Default role new users will be automatically assigned (if disabled above is set to true)
;auto_assign_org_role = Viewer ;auto_assign_org_role = Viewer
# Require email validation before sign up completes # Require email validation before sign up completes
@ -459,8 +401,8 @@ password = 123qwe$%&RTY
# Default UI theme ("dark" or "light") # Default UI theme ("dark" or "light")
;default_theme = dark ;default_theme = dark
# Default UI language (supported IETF language tag, such as en-US) # Default locale (supported IETF language tag, such as en-US)
;default_language = en-US ;default_locale = en-US
# Path to a custom home page. Users are only redirected to this if the default home dashboard is used. It should match a frontend route and contain a leading slash. # Path to a custom home page. Users are only redirected to this if the default home dashboard is used. It should match a frontend route and contain a leading slash.
;home_page = ;home_page =
@@ -482,27 +424,6 @@ password = 123qwe$%&RTY
 # Enter a comma-separated list of users login to hide them in the Grafana UI. These users are shown to Grafana admins and themselves.
 ; hidden_users =
-[secretscan]
-# Enable secretscan feature
-;enabled = false
-# Interval to check for token leaks
-;interval = 5m
-# base URL of the grafana token leak check service
-;base_url = https://secret-scanning.grafana.net
-# URL to send outgoing webhooks to in case of detection
-;oncall_url =
-# Whether to revoke the token if a leak is detected or just send a notification
-;revoke = true
-[service_accounts]
-# Service account maximum expiration date in days.
-# When set, Grafana will not allow the creation of tokens with expiry greater than this setting.
-; token_expiration_day_limit =
 [auth]
 # Login cookie name
 ;login_cookie_name = grafana_session
@@ -530,14 +451,12 @@ password = 123qwe$%&RTY
 # Set to true to attempt login with OAuth automatically, skipping the login screen.
 # This setting is ignored if multiple OAuth providers are configured.
-# Deprecated, use auto_login option for specific provider instead.
 ;oauth_auto_login = false
 # OAuth state max age cookie duration in seconds. Defaults to 600 seconds.
 ;oauth_state_cookie_max_age = 600
 # Skip forced assignment of OrgID 1 or 'auto_assign_org_id' for social logins
-# Deprecated, use skip_org_role_sync option for specific provider instead.
 ;oauth_skip_org_role_update_sync = false
 # limit of api_key seconds to live before expiration
@@ -552,23 +471,6 @@ password = 123qwe$%&RTY
 # Set to true to enable Azure authentication option for HTTP-based datasources.
 ;azure_auth_enabled = false
-# Set to skip the organization role from JWT login and use system's role assignment instead.
-; skip_org_role_sync = false
-# Use email lookup in addition to the unique ID provided by the IdP
-;oauth_allow_insecure_email_lookup = false
-# Set to true to include id of identity as a response header
-;id_response_header_enabled = false
-# Prefix used for the id response header, X-Grafana-Identity-Id
-;id_response_header_prefix = X-Grafana
-# List of identity namespaces to add id response headers for, separated by space.
-# Available namespaces are user, api-key and service-account.
-# The header value will encode the namespace ("user:<id>", "api-key:<id>", "service-account:<id>")
-;id_response_header_namespaces = user api-key service-account
 #################################### Anonymous Auth ######################
 [auth.anonymous]
 # enable anonymous access
@@ -585,138 +487,96 @@ password = 123qwe$%&RTY
 #################################### GitHub Auth ##########################
 [auth.github]
-;name = GitHub
-;icon = github
 ;enabled = false
 ;allow_sign_up = true
-;auto_login = false
 ;client_id = some_id
 ;client_secret = some_secret
 ;scopes = user:email,read:org
 ;auth_url = https://github.com/login/oauth/authorize
 ;token_url = https://github.com/login/oauth/access_token
 ;api_url = https://api.github.com/user
-;signout_redirect_url =
 ;allowed_domains =
 ;team_ids =
 ;allowed_organizations =
 ;role_attribute_path =
 ;role_attribute_strict = false
 ;allow_assign_grafana_admin = false
-;skip_org_role_sync = false
 #################################### GitLab Auth #########################
 [auth.gitlab]
-;name = GitLab
-;icon = gitlab
 ;enabled = false
 ;allow_sign_up = true
-;auto_login = false
 ;client_id = some_id
 ;client_secret = some_secret
-;scopes = openid email profile
+;scopes = api
 ;auth_url = https://gitlab.com/oauth/authorize
 ;token_url = https://gitlab.com/oauth/token
 ;api_url = https://gitlab.com/api/v4
-;signout_redirect_url =
 ;allowed_domains =
 ;allowed_groups =
 ;role_attribute_path =
 ;role_attribute_strict = false
 ;allow_assign_grafana_admin = false
-;skip_org_role_sync = false
-;tls_skip_verify_insecure = false
-;tls_client_cert =
-;tls_client_key =
-;tls_client_ca =
-;use_pkce = true
 #################################### Google Auth ##########################
 [auth.google]
-;name = Google
-;icon = google
 ;enabled = false
 ;allow_sign_up = true
-;auto_login = false
 ;client_id = some_client_id
 ;client_secret = some_client_secret
-;scopes = openid email profile
-;auth_url = https://accounts.google.com/o/oauth2/v2/auth
-;token_url = https://oauth2.googleapis.com/token
-;api_url = https://openidconnect.googleapis.com/v1/userinfo
+;scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email
+;auth_url = https://accounts.google.com/o/oauth2/auth
+;token_url = https://accounts.google.com/o/oauth2/token
+;api_url = https://www.googleapis.com/oauth2/v1/userinfo
-;signout_redirect_url =
 ;allowed_domains =
-;validate_hd =
 ;hosted_domain =
-;allowed_groups =
-;role_attribute_path =
-;role_attribute_strict = false
-;allow_assign_grafana_admin = false
-;skip_org_role_sync = false
-;use_pkce = true
 #################################### Grafana.com Auth ####################
 [auth.grafana_com]
-;name = Grafana.com
-;icon = grafana
 ;enabled = false
 ;allow_sign_up = true
-;auto_login = false
 ;client_id = some_id
 ;client_secret = some_secret
 ;scopes = user:email
 ;allowed_organizations =
-;skip_org_role_sync = false
 #################################### Azure AD OAuth #######################
 [auth.azuread]
-;name = Microsoft
-;icon = microsoft
+;name = Azure AD
 ;enabled = false
 ;allow_sign_up = true
-;auto_login = false
 ;client_id = some_client_id
 ;client_secret = some_client_secret
 ;scopes = openid email profile
 ;auth_url = https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/authorize
 ;token_url = https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token
-;signout_redirect_url =
 ;allowed_domains =
 ;allowed_groups =
-;allowed_organizations =
 ;role_attribute_strict = false
 ;allow_assign_grafana_admin = false
-;use_pkce = true
-# prevent synchronizing users organization roles
-;skip_org_role_sync = false
 #################################### Okta OAuth #######################
 [auth.okta]
 ;name = Okta
 ;enabled = false
 ;allow_sign_up = true
-;auto_login = false
 ;client_id = some_id
 ;client_secret = some_secret
 ;scopes = openid profile email groups
 ;auth_url = https://<tenant-id>.okta.com/oauth2/v1/authorize
 ;token_url = https://<tenant-id>.okta.com/oauth2/v1/token
 ;api_url = https://<tenant-id>.okta.com/oauth2/v1/userinfo
-;signout_redirect_url =
 ;allowed_domains =
 ;allowed_groups =
 ;role_attribute_path =
 ;role_attribute_strict = false
 ;allow_assign_grafana_admin = false
-;skip_org_role_sync = false
-;use_pkce = true
 #################################### Generic OAuth ##########################
 [auth.generic_oauth]
 ;enabled = false
 ;name = OAuth
 ;allow_sign_up = true
-;auto_login = false
 ;client_id = some_id
 ;client_secret = some_secret
 ;scopes = user:email,read:org
@@ -729,7 +589,6 @@ password = 123qwe$%&RTY
 ;auth_url = https://foo.bar/login/oauth/authorize
 ;token_url = https://foo.bar/login/oauth/access_token
 ;api_url = https://foo.bar/user
-;signout_redirect_url =
 ;teams_url =
 ;allowed_domains =
 ;team_ids =
@@ -749,7 +608,6 @@ password = 123qwe$%&RTY
 #################################### Basic Auth ##########################
 [auth.basic]
 ;enabled = true
-;password_policy = false
 #################################### Auth Proxy ##########################
 [auth.proxy]
@@ -776,10 +634,7 @@ password = 123qwe$%&RTY
 ;cache_ttl = 60m
 ;expect_claims = {"aud": ["foo", "bar"]}
 ;key_file = /path/to/key/file
-# Use in conjunction with key_file in case the JWT token's header specifies a key ID in "kid" field
-;key_id = some-key-id
 ;role_attribute_path =
-;groups_attribute_path =
 ;role_attribute_strict = false
 ;auto_sign_up = false
 ;url_login = false
@ -808,20 +663,6 @@ password = 123qwe$%&RTY
# If true, assume role will be enabled for all AWS authentication providers that are specified in aws_auth_providers # If true, assume role will be enabled for all AWS authentication providers that are specified in aws_auth_providers
; assume_role_enabled = true ; assume_role_enabled = true
# Specify max no of pages to be returned by the ListMetricPages API
; list_metrics_page_limit = 500
# Experimental, for use in Grafana Cloud only. Please do not set.
; external_id =
# Sets the expiry duration of an assumed role.
# This setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month).
; session_duration = "15m"
# Set the plugins that will receive AWS settings for each request (via plugin context)
# By default this will include all Grafana Labs owned AWS plugins, or those that make use of AWS settings (ElasticSearch, Prometheus).
; forward_settings_to_plugins = cloudwatch, grafana-athena-datasource, grafana-redshift-datasource, grafana-x-ray-datasource, grafana-timestream-datasource, grafana-iot-sitewise-datasource, grafana-iot-twinmaker-app, grafana-opensearch-datasource, aws-datasource-provisioner, elasticsearch, prometheus
#################################### Azure ############################### #################################### Azure ###############################
[azure] [azure]
# Azure cloud environment where Grafana is hosted # Azure cloud environment where Grafana is hosted
@@ -838,56 +679,9 @@ password = 123qwe$%&RTY
 # Should be set for user-assigned identity and should be empty for system-assigned identity
 ;managed_identity_client_id =
-# Specifies whether Azure AD Workload Identity authentication should be enabled in datasources that support it
-# For more documentation on Azure AD Workload Identity, review this documentation:
-# https://azure.github.io/azure-workload-identity/docs/
-# Disabled by default, needs to be explicitly enabled
-;workload_identity_enabled = false
-# Tenant ID of the Azure AD Workload Identity
-# Allows to override default tenant ID of the Azure AD identity associated with the Kubernetes service account
-;workload_identity_tenant_id =
-# Client ID of the Azure AD Workload Identity
-# Allows to override default client ID of the Azure AD identity associated with the Kubernetes service account
-;workload_identity_client_id =
-# Custom path to token file for the Azure AD Workload Identity
-# Allows to set a custom path to the projected service account token file
-;workload_identity_token_file =
-# Specifies whether user identity authentication (on behalf of currently signed-in user) should be enabled in datasources
-# that support it (requires AAD authentication)
-# Disabled by default, needs to be explicitly enabled
-;user_identity_enabled = false
-# Override token URL for Azure Active Directory
-# By default is the same as token URL configured for AAD authentication settings
-;user_identity_token_url =
-# Override ADD application ID which would be used to exchange users token to an access token for the datasource
-# By default is the same as used in AAD authentication or can be set to another application (for OBO flow)
-;user_identity_client_id =
-# Override the AAD application client secret
-# By default is the same as used in AAD authentication or can be set to another application (for OBO flow)
-;user_identity_client_secret =
-# Set the plugins that will receive Azure settings for each request (via plugin context)
-# By default this will include all Grafana Labs owned Azure plugins, or those that make use of Azure settings (Azure Monitor, Azure Data Explorer, Prometheus, MSSQL).
-;forward_settings_to_plugins = grafana-azure-monitor-datasource, prometheus, grafana-azure-data-explorer-datasource, mssql
 #################################### Role-based Access Control ###########
 [rbac]
 ;permission_cache = true
-# Reset basic roles permissions on boot
-# Warning left to true, basic roles permissions will be reset on every boot
-#reset_basic_roles = false
-# Validate permissions' action and scope on role creation and update
-; permission_validation_enabled = true
 #################################### SMTP / Emailing ##########################
 [smtp]
 ;enabled = false
@@ -904,13 +698,6 @@ password = 123qwe$%&RTY
 ;ehlo_identity = dashboard.example.com
 # SMTP startTLS policy (defaults to 'OpportunisticStartTLS')
 ;startTLS_policy = NoStartTLS
-# Enable trace propagation in e-mail headers, using the 'traceparent', 'tracestate' and (optionally) 'baggage' fields (defaults to false)
-;enable_tracing = false
-[smtp.static_headers]
-# Include custom static headers in all outgoing emails
-;Foo-Header = bar
-;Foo = bar
 [emails]
 ;welcome_email_on_sign_up = false
@@ -929,9 +716,6 @@ password = 123qwe$%&RTY
 # optional settings to set different levels for specific loggers. Ex filters = sqlstore:debug
 ;filters =
-# Set the default error message shown to users. This message is displayed instead of sensitive backend errors which should be obfuscated. Default is the same as the sample value.
-;user_facing_default_error = "please inspect Grafana server log for details"
 # For "console" mode only
 [log.console]
 ;level =
@@ -978,11 +762,20 @@ password = 123qwe$%&RTY
 ;tag =
 [log.frontend]
-# Should Faro javascript agent be initialized
+# Should Sentry javascript agent be initialized
 ;enabled = false
-# Custom HTTP endpoint to send events to. Default will log the events to stdout.
-;custom_endpoint = /log-grafana-javascript-agent
+# Defines which provider to use, default is Sentry
+;provider = sentry
+# Sentry DSN if you want to send events to Sentry.
+;sentry_dsn =
+# Custom HTTP endpoint to send events captured by the Sentry agent to. Default will log the events to stdout.
+;custom_endpoint = /log
+# Rate of events to be reported between 0 (none) and 1 (all), float
+;sample_rate = 1.0
 # Requests per second limit enforced an extended period, for Grafana backend log ingestion endpoint (/log).
 ;log_endpoint_requests_per_second_limit = 3
@@ -1043,13 +836,6 @@ password = 123qwe$%&RTY
 # global limit of alerts
 ;global_alert_rule = -1
-# global limit of correlations
-; global_correlations = -1
-# Limit of the number of alert rules per rule group.
-# This is not strictly enforced yet, but will be enforced over time.
-;alerting_rule_group_rules = 100
 #################################### Unified Alerting ####################
 [unified_alerting]
 #Enable the Unified Alerting sub-system and interface. When enabled we'll migrate all of your alert rules and notification channels to the new system. New alert rules will be created and your notification channels will be converted into an Alertmanager configuration. Previous data is preserved to enable backwards compatibility but new data is removed.
@@ -1066,26 +852,6 @@ password = 123qwe$%&RTY
 # The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.
 ;alertmanager_config_poll_interval = 60s
-# The redis server address that should be connected to.
-;ha_redis_address =
-# The username that should be used to authenticate with the redis server.
-;ha_redis_username =
-# The password that should be used to authenticate with the redis server.
-;ha_redis_password =
-# The redis database, by default it's 0.
-;ha_redis_db =
-# A prefix that is used for every key or channel that is created on the redis server
-# as part of HA for alerting.
-;ha_redis_prefix =
-# The name of the cluster peer that will be used as identifier. If none is
-# provided, a random one will be generated.
-;ha_redis_peer_name =
 # Listen address/hostname and port to receive unified alerting messages for other Grafana instances. The port is used for both TCP and UDP. It is assumed other Grafana instances are also running on the same port. The default value is `0.0.0.0:9094`.
 ;ha_listen_address = "0.0.0.0:9094"
@@ -1101,11 +867,6 @@ password = 123qwe$%&RTY
 # The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.
 ;ha_peer_timeout = "15s"
-# The label is an optional string to include on each packet and stream.
-# It uniquely identifies the cluster and prevents cross-communication
-# issues when sending gossip messages in an enviromenet with multiple clusters.
-;ha_label =
 # The interval between sending gossip messages. By lowering this value (more frequent) gossip messages are propagated
 # across cluster more quickly at the expense of increased bandwidth usage.
 # The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.
@@ -1123,88 +884,18 @@ password = 123qwe$%&RTY
 # The timeout string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.
 ;evaluation_timeout = 30s
-# Number of times we'll attempt to evaluate an alert rule before giving up on that evaluation. The default value is 1.
-;max_attempts = 1
+# Number of times we'll attempt to evaluate an alert rule before giving up on that evaluation. This option has a legacy version in the `[alerting]` section that takes precedence.
+;max_attempts = 3
 # Minimum interval to enforce between rule evaluations. Rules will be adjusted if they are less than this value or if they are not multiple of the scheduler interval (10s). Higher values can help with resource management as we'll schedule fewer evaluations over time. This option has a legacy version in the `[alerting]` section that takes precedence.
 # The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.
 ;min_interval = 10s
-# This is an experimental option to add parallelization to saving alert states in the database.
-# It configures the maximum number of concurrent queries per rule evaluated. The default value is 1
-# (concurrent queries per rule disabled).
-;max_state_save_concurrency = 1
-# If the feature flag 'alertingSaveStatePeriodic' is enabled, this is the interval that is used to persist the alerting instances to the database.
-# The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.
-;state_periodic_save_interval = 5m
-# Disables the smoothing of alert evaluations across their evaluation window.
-# Rules will evaluate in sync.
-;disable_jitter = false
 [unified_alerting.reserved_labels]
 # Comma-separated list of reserved labels added by the Grafana Alerting engine that should be disabled.
 # For example: `disabled_labels=grafana_folder`
 ;disabled_labels =
-[unified_alerting.state_history]
-# Enable the state history functionality in Unified Alerting. The previous states of alert rules will be visible in panels and in the UI.
-; enabled = true
-# Select which pluggable state history backend to use. Either "annotations", "loki", or "multiple"
-# "loki" writes state history to an external Loki instance. "multiple" allows history to be written to multiple backends at once.
-# Defaults to "annotations".
-; backend = "multiple"
-# For "multiple" only.
-# Indicates the main backend used to serve state history queries.
-# Either "annotations" or "loki"
-; primary = "loki"
-# For "multiple" only.
-# Comma-separated list of additional backends to write state history data to.
-; secondaries = "annotations"
-# For "loki" only.
-# URL of the external Loki instance.
-# Either "loki_remote_url", or both of "loki_remote_read_url" and "loki_remote_write_url" is required for the "loki" backend.
-; loki_remote_url = "http://loki:3100"
-# For "loki" only.
-# URL of the external Loki's read path. To be used in configurations where Loki has separated read and write URLs.
-# Either "loki_remote_url", or both of "loki_remote_read_url" and "loki_remote_write_url" is required for the "loki" backend.
-; loki_remote_read_url = "http://loki-querier:3100"
-# For "loki" only.
-# URL of the external Loki's write path. To be used in configurations where Loki has separated read and write URLs.
-# Either "loki_remote_url", or both of "loki_remote_read_url" and "loki_remote_write_url" is required for the "loki" backend.
-; loki_remote_write_url = "http://loki-distributor:3100"
-# For "loki" only.
-# Optional tenant ID to attach to requests sent to Loki.
-; loki_tenant_id = 123
-# For "loki" only.
-# Optional username for basic authentication on requests sent to Loki. Can be left blank to disable basic auth.
-; loki_basic_auth_username = "myuser"
-# For "loki" only.
-# Optional password for basic authentication on requests sent to Loki. Can be left blank.
-; loki_basic_auth_password = "mypass"
-[unified_alerting.state_history.external_labels]
-# Optional extra labels to attach to outbound state history records or log streams.
-# Any number of label key-value-pairs can be provided.
-; mylabelkey = mylabelvalue
-[unified_alerting.upgrade]
-# If set to true when upgrading from legacy alerting to Unified Alerting, grafana will first delete all existing
-# Unified Alerting resources, thus re-upgrading all organizations from scratch. If false or unset, organizations that
-# have previously upgraded will not lose their existing Unified Alerting data when switching between legacy and
-# Unified Alerting. Should be kept false when not needed as it may cause unintended data-loss if left enabled.
-;clean_upgrade = false
 #################################### Alerting ############################
 [alerting]
 # Disable legacy alerting engine & UI features
@@ -1287,16 +978,6 @@ password = 123qwe$%&RTY
 # Enable the Profile section
 ;enabled = true
-#################################### News #############################
-[news]
-# Enable the news feed section
-; news_feed_enabled = true
-#################################### Query #############################
-[query]
-# Set the number of data source queries that can be executed concurrently in mixed queries. Default is the number of CPUs.
-;concurrent_query_limit =
 #################################### Query History #############################
 [query_history]
 # Enable the Query history
@@ -1311,8 +992,6 @@ password = 123qwe$%&RTY
 ;interval_seconds = 10
 # Disable total stats (stat_totals_*) metrics to be generated
 ;disable_total_stats = false
-# The interval at which the total stats collector will update the stats. Default is 1800 seconds.
-;total_stats_collector_interval_seconds = 1800
 #If both are set, basic auth will be required for the metrics endpoints.
 ; basic_auth_username =
@@ -1334,7 +1013,6 @@ password = 123qwe$%&RTY
 # Url used to import dashboards directly from Grafana.com
 [grafana_com]
 ;url = https://grafana.com
-;api_url = https://grafana.com/api
 #################################### Distributed tracing ############
 # Opentracing is deprecated use opentelemetry instead
@@ -1364,18 +1042,6 @@ password = 123qwe$%&RTY
 [tracing.opentelemetry]
 # attributes that will always be included in when creating new spans. ex (key1:value1,key2:value2)
 ;custom_attributes = key1:value1,key2:value2
-# Type specifies the type of the sampler: const, probabilistic, rateLimiting, or remote
-; sampler_type = remote
-# Sampler configuration parameter
-# for "const" sampler, 0 or 1 for always false/true respectively
-# for "probabilistic" sampler, a probability between 0.0 and 1.0
-# for "rateLimiting" sampler, the number of spans per second
-# for "remote" sampler, param is the same as for "probabilistic"
-# and indicates the initial sampling rate before the actual one
-# is received from the sampling server (set at sampling_server_url)
-; sampler_param = 0.5
-# specifies the URL of the sampling server when sampler_type is remote
-; sampling_server_url = http://localhost:5778/sampling
 [tracing.opentelemetry.jaeger]
 # jaeger destination (ex http://localhost:14268/api/traces)
@@ -1457,15 +1123,6 @@ password = 123qwe$%&RTY
 ;plugin_catalog_url = https://grafana.com/grafana/plugins/
 # Enter a comma-separated list of plugin identifiers to hide in the plugin catalog.
 ;plugin_catalog_hidden_plugins =
-# Log all backend requests for core and external plugins.
-;log_backend_requests = false
-# Disable download of the public key for verifying plugin signature.
-; public_key_retrieval_disabled = false
-# Force download of the public key for verifying plugin signature on startup. If disabled, the public key will be retrieved every 10 days.
-# Requires public_key_retrieval_disabled to be false to have any effect.
-; public_key_retrieval_on_startup = false
-# Enter a comma-separated list of plugin identifiers to avoid loading (including core plugins). These plugins will be hidden in the catalog.
-; disable_plugins =
 #################################### Grafana Live ##########################################
 [live]
@ -1488,9 +1145,6 @@ password = 123qwe$%&RTY
# This option is EXPERIMENTAL. # This option is EXPERIMENTAL.
;ha_engine_address = "127.0.0.1:6379" ;ha_engine_address = "127.0.0.1:6379"
# ha_engine_password allows setting an optional password to authenticate with the engine
;ha_engine_password = ""
#################################### Grafana Image Renderer Plugin ##########################
[plugin.grafana-image-renderer]
# Instruct headless browser instance to use a default timezone when not provided by Grafana, e.g. when rendering panel image of alert.
@ -1553,14 +1207,6 @@ password = 123qwe$%&RTY
;grpc_host =
;grpc_port =
[support_bundles]
# Enable support bundle creation (default: true)
#enabled = true
# Only server admins can generate and view support bundles (default: true)
#server_admin_only = true
# If set, bundles will be encrypted with the provided public keys separated by whitespace
#public_keys = ""
[enterprise]
# Path to a valid Grafana Enterprise license.jwt file
;license_path =
@ -1614,42 +1260,12 @@ password = 123qwe$%&RTY
;enable_custom_baselayers = true
# Move an app plugin referenced by its id (including all its pages) to a specific navigation section
# Dependencies: needs the `topnav` feature to be enabled
[navigation.app_sections]
-# The following will move an app plugin with the id of `my-app-id` under the `cfg` section
-# my-app-id = cfg
+# The following will move an app plugin with the id of `my-app-id` under the `starred` section
+# my-app-id = admin
# Move a specific app plugin page (referenced by its `path` field) to a specific navigation section
[navigation.app_standalone_pages]
-# The following will move the page with the path "/a/my-app-id/my-page" from `my-app-id` to the `cfg` section
-# /a/my-app-id/my-page = cfg
+# The following will move the page with the path "/a/my-app-id/starred-content" from `my-app-id` to the `starred` section
+# /a/my-app-id/starred-content = starred
#################################### Secure Socks5 Datasource Proxy #####################################
[secure_socks_datasource_proxy]
; enabled = false
; root_ca_cert =
; client_key =
; client_cert =
; server_name =
# The address of the socks5 proxy datasources should connect to
; proxy_address =
; show_ui = true
; allow_insecure = false
################################## Feature Management ##############################################
[feature_management]
# Options to configure the experimental Feature Toggle Admin Page feature, which is behind the `featureToggleAdminPage` feature toggle. Use at your own risk.
# Allow editing of feature toggles in the feature management page
;allow_editing = false
# Allow customization of URL for the controller that manages feature toggles
;update_webhook =
# Allow configuring an auth token for feature management update requests
;update_webhook_token =
# Hide specific feature toggles from the feature management page
;hidden_toggles =
# Disable updating specific feature toggles in the feature management page
;read_only_toggles =
#################################### Public Dashboards #####################################
[public_dashboards]
# Set to false to disable public dashboards
;enabled = true


@ -0,0 +1,9 @@
# config file version
apiVersion: 1
providers:
- name: 'node_exporter'
orgId: 1
type: file
options:
path: /var/lib/grafana/provision/dashboards/node_exporter.json


@ -0,0 +1,9 @@
# config file version
apiVersion: 1
providers:
- name: 'node_exporter_all_nodes'
orgId: 1
type: file
options:
path: /var/lib/grafana/provision/dashboards/node_exporter_all_nodes.json


@ -0,0 +1,9 @@
# config file version
apiVersion: 1
providers:
- name: 'synology'
orgId: 1
type: file
options:
path: /var/lib/grafana/provision/dashboards/synology.json


@ -18,8 +18,7 @@ datasources:
# <int> org id. will default to orgId 1 if not specified
orgId: 1
# <string> url
-url: http://loki.service.consul:3100
-url: http://loki.service.cosul:3100
+url: http://localhost:3100
version: 1
# <bool> allow users to edit datasources from the UI.
editable: false


@ -18,7 +18,7 @@ datasources:
# <int> org id. will default to orgId 1 if not specified
orgId: 1
# <string> url
-url: http://prometheus.service.consul:9090
+url: http://localhost:9090
version: 1
# <bool> allow users to edit datasources from the UI.
editable: false

File diff suppressed because it is too large


@ -0,0 +1,928 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"description": "A Dashboard for Synology NAS based on SNMP and Prometheus",
"editable": false,
"gnetId": 6157,
"graphTooltip": 0,
"id": 5,
"iteration": 1605170528850,
"links": [],
"panels": [
{
"cacheTimeout": null,
"colorBackground": true,
"colorValue": false,
"colors": [
"#299c46",
"rgba(237, 129, 40, 0.89)",
"#d44a3a"
],
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"format": "none",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 3,
"w": 4,
"x": 0,
"y": 0
},
"id": 5,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": false,
"lineColor": "rgb(31, 120, 193)",
"show": false
},
"tableColumn": "systemFanStatus{instance=\"192.168.10.200\", job=\"synology\"}",
"targets": [
{
"expr": "systemFanStatus",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}
],
"thresholds": "2,2",
"title": "systemFanStatus",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "Normal",
"value": "1"
},
{
"op": "=",
"text": "Failed",
"value": "2"
}
],
"valueName": "avg"
},
{
"cacheTimeout": null,
"colorBackground": true,
"colorValue": false,
"colors": [
"#299c46",
"rgba(237, 129, 40, 0.89)",
"#d44a3a"
],
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"format": "none",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 3,
"w": 4,
"x": 4,
"y": 0
},
"id": 6,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": false,
"lineColor": "rgb(31, 120, 193)",
"show": false
},
"tableColumn": "cpuFanStatus{instance=\"192.168.10.200\", job=\"synology\"}",
"targets": [
{
"expr": "cpuFanStatus",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}
],
"thresholds": "2,2",
"title": "cpuFanStatus",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "Normal",
"value": "1"
},
{
"op": "=",
"text": "Failed",
"value": "2"
}
],
"valueName": "avg"
},
{
"cacheTimeout": null,
"colorBackground": true,
"colorValue": false,
"colors": [
"#299c46",
"rgba(237, 129, 40, 0.89)",
"#d44a3a"
],
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"format": "none",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 3,
"w": 4,
"x": 8,
"y": 0
},
"id": 8,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": false,
"lineColor": "rgb(31, 120, 193)",
"show": false
},
"tableColumn": "systemStatus{instance=\"192.168.10.200\", job=\"synology\"}",
"targets": [
{
"expr": "systemStatus",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}
],
"thresholds": "2,2",
"title": "systemStatus",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "Normal",
"value": "1"
},
{
"op": "=",
"text": "Failed",
"value": "2"
}
],
"valueName": "avg"
},
{
"cacheTimeout": null,
"colorBackground": true,
"colorValue": false,
"colors": [
"#299c46",
"rgba(237, 129, 40, 0.89)",
"#d44a3a"
],
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"format": "none",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 3,
"w": 4,
"x": 12,
"y": 0
},
"id": 7,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": false,
"lineColor": "rgb(31, 120, 193)",
"show": false
},
"tableColumn": "powerStatus{instance=\"192.168.10.200\", job=\"synology\"}",
"targets": [
{
"expr": "powerStatus",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}
],
"thresholds": "2,2",
"title": "powerStatus",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "Normal",
"value": "1"
},
{
"op": "=",
"text": "Failed",
"value": "2"
}
],
"valueName": "avg"
},
{
"cacheTimeout": null,
"colorBackground": true,
"colorValue": false,
"colors": [
"#299c46",
"rgba(237, 129, 40, 0.89)",
"#d44a3a"
],
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"format": "none",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 3,
"w": 7,
"x": 16,
"y": 0
},
"id": 9,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": false,
"lineColor": "rgb(31, 120, 193)",
"show": false
},
"tableColumn": "raidStatus{instance=\"192.168.10.200\", job=\"synology\", raidName=\"0x566F6C756D652031\"}",
"targets": [
{
"expr": "raidStatus",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}
],
"thresholds": "2,2",
"title": "raidStatus",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "Normal",
"value": "1"
},
{
"op": "=",
"text": "Repairing",
"value": "2"
},
{
"op": "=",
"text": "Migrating",
"value": "3"
},
{
"op": "=",
"text": "Expanding",
"value": "4"
},
{
"op": "=",
"text": "Deleting",
"value": "5"
},
{
"op": "=",
"text": "Creating",
"value": "6"
},
{
"op": "=",
"text": "RaidSyncing",
"value": "7"
},
{
"op": "=",
"text": "RaidParityChecking",
"value": "8"
},
{
"op": "=",
"text": "RaidAssembling",
"value": "9"
},
{
"op": "=",
"text": "Canceling",
"value": "10"
},
{
"op": "=",
"text": "Degrade",
"value": "11"
},
{
"op": "=",
"text": "Crashed",
"value": "12"
}
],
"valueName": "avg"
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 8,
"w": 23,
"x": 0,
"y": 3
},
"hiddenSeries": false,
"id": 11,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.3.2",
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "laLoadInt",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Load",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"Out": "#C15C17"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"editable": true,
"error": false,
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 6,
"w": 18,
"x": 0,
"y": 11
},
"hiddenSeries": false,
"id": 1,
"legend": {
"alignAsTable": false,
"avg": false,
"current": true,
"max": true,
"min": false,
"rightSide": false,
"show": true,
"total": false,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.3.2",
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": "Interface",
"repeatDirection": "h",
"seriesOverrides": [
{
"alias": "Out",
"transform": "negative-Y"
}
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "irate(ifHCInOctets{job='synology',instance='$Device',ifDescr=~'$Interface'}[5m]) or irate(ifInOctets{job='synology',instance='$Device',ifDescr=~'$Interface'}[5m]) ",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "In",
"refId": "A",
"step": 60
},
{
"expr": "irate(ifHCOutOctets{job='synology',instance='$Device',ifDescr=~'$Interface'}[5m]) or irate(ifOutOctets{job='synology',instance='$Device',ifDescr=~'$Interface'}[5m]) ",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "Out",
"refId": "B",
"step": 60
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Traffic of $Interface Interfaces",
"tooltip": {
"msResolution": false,
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"cacheTimeout": null,
"colorBackground": false,
"colorValue": false,
"colors": [
"#299c46",
"rgba(237, 129, 40, 0.89)",
"#d44a3a"
],
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"format": "none",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": true,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 6,
"w": 5,
"x": 18,
"y": 11
},
"id": 3,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": false,
"lineColor": "rgb(31, 120, 193)",
"show": false
},
"tableColumn": "temperature{instance=\"192.168.10.200\", job=\"synology\"}",
"targets": [
{
"expr": "temperature",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}
],
"thresholds": "45,75",
"title": "temperature",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "N/A",
"value": "null"
}
],
"valueName": "avg"
}
],
"refresh": "5m",
"schemaVersion": 26,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"allValue": null,
"current": {
"selected": false,
"text": "192.168.10.200",
"value": "192.168.10.200"
},
"datasource": "Prometheus",
"definition": "query_result(sum by (instance)(ifInOctets{job=\"synology\"}))",
"error": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "Device",
"options": [],
"query": "query_result(sum by (instance)(ifInOctets{job=\"synology\"}))",
"refresh": 1,
"regex": ".*instance=\"(.*?)\".*",
"skipUrlSync": false,
"sort": 1,
"tagValuesQuery": null,
"tags": [],
"tagsQuery": null,
"type": "query",
"useTags": false
},
{
"allValue": null,
"current": {
"selected": false,
"text": "All",
"value": "$__all"
},
"datasource": "Prometheus",
"definition": "query_result(ifInOctets{job=\"synology\",instance=\"$Device\"})",
"error": null,
"hide": 2,
"includeAll": true,
"label": null,
"multi": false,
"name": "Interface",
"options": [],
"query": "query_result(ifInOctets{job=\"synology\",instance=\"$Device\"})",
"refresh": 1,
"regex": ".*ifDescr=\"(.*?)\",.*",
"skipUrlSync": false,
"sort": 1,
"tagValuesQuery": null,
"tags": [],
"tagsQuery": null,
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-12h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "browser",
"title": "Synology SNMP DashBoard",
"uid": "N4Cl097iz",
"version": 6
}


@ -1,4 +1,12 @@
# Start provisioning:
%w(node_exporter.yaml node_exporter_all_nodes.yaml synology.yaml).each do |conf|
remote_file "/etc/grafana/provisioning/dashboards/#{conf}" do
owner 'root'
group 'grafana'
mode '640'
end
end
%w(loki.yaml prometheus.yaml).each do |conf|
remote_file "/etc/grafana/provisioning/datasources/#{conf}" do
owner 'root'
@ -7,6 +15,20 @@
end
end
directory "/var/lib/grafana/provision/dashboards" do
owner 'grafana'
group 'grafana'
mode '755'
end
%w(node_exporter.json node_exporter_all_nodes.json synology.json).each do |conf|
remote_file "/var/lib/grafana/provision/dashboards/#{conf}" do
owner 'grafana'
group 'grafana'
mode '640'
end
end
remote_file '/etc/grafana/grafana.ini' do
owner 'grafana'
group 'grafana'


@ -31,3 +31,13 @@ remote_file '/home/kazu634/.ssh/config' do
mode '644'
end
# Disable Password authentication
file '/etc/ssh/sshd_config.d/50-cloud-init.conf' do
action :delete
end
execute 'systemctl restart ssh.service ' do
action :nothing
subscribes :run, 'file[/etc/ssh/sshd_config.d/50-cloud-init.conf]'
end
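Note on the pairing above: the `execute` resource is declared with `action :nothing` and only fires because it `subscribes` to the `file` resource, so `ssh` is restarted only when the cloud-init override is actually deleted. A plain-Ruby sketch of that trigger logic (the method and action strings are illustrative, not the mitamae API):

```ruby
# Sketch: a subscriber action runs only when the watched resource
# actually changes state (here: the override file existed and was deleted).
def converge(override_present)
  actions = []
  if override_present
    actions << 'delete 50-cloud-init.conf'      # the file resource fires :delete
    actions << 'systemctl restart ssh.service'  # fired by the subscription
  end
  actions
end
```
On an already-converged node `converge(false)` does nothing, which is what keeps the restart idempotent.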


@ -2,7 +2,7 @@
# Specifying the default settings:
# -------------------------------------------
case run_command('grep VERSION_ID /etc/os-release | awk -F\" \'{print $2}\'').stdout.chomp
-when "20.04", "22.04"
+when "20.04", "22.04", "24.04"
cmd = 'LANG=C ip a | grep "inet " | grep -v -E "(127|172)" | cut -d" " -f6 | perl -pe "s/\/.+//g"'
when "18.04"
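The `grep VERSION_ID … | awk -F\" '{print $2}'` pipeline above selects the `VERSION_ID` line of `/etc/os-release` and takes the second `"`-delimited field. A small plain-Ruby sketch of the same extraction (helper name is hypothetical):

```ruby
# Sketch of what the grep|awk pipeline returns for a given os-release body.
def version_id(os_release)
  line = os_release.lines.find { |l| l.start_with?('VERSION_ID') }
  line ? line.split('"')[1] : nil  # field 2 when split on double quotes
end
```
For Ubuntu 24.04, `/etc/os-release` contains `VERSION_ID="24.04"`, so the helper returns `"24.04"`.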


@ -2,44 +2,50 @@ auth_enabled: false
server:
  http_listen_port: 3100
+  grpc_listen_port: 9096
-ingester:
-  lifecycler:
-    address: 127.0.0.1
-    ring:
-      kvstore:
-        store: inmemory
-      replication_factor: 1
-    final_sleep: 0s
-  chunk_idle_period: 5m
-  chunk_retain_period: 30s
-  max_transfer_retries: 0
+common:
+  instance_addr: 127.0.0.1
+  path_prefix: /var/opt/loki
+  storage:
+    filesystem:
+      chunks_directory: /var/opt/loki/chunks
+      rules_directory: /var/opt/loki/rules
+  replication_factor: 1
+  ring:
+    kvstore:
+      store: inmemory
+query_range:
+  results_cache:
+    cache:
+      embedded_cache:
+        enabled: true
+        max_size_mb: 100
schema_config:
  configs:
-  - from: 2018-04-15
-    store: boltdb
+  - from: 2020-10-24
+    store: tsdb
    object_store: filesystem
-    schema: v11
+    schema: v13
    index:
      prefix: index_
-      period: 168h
+      period: 24h
-storage_config:
-  boltdb:
-    directory: /var/opt/loki/index
-  filesystem:
-    directory: /var/opt/loki/chunks
+frontend:
+  encoding: protobuf
+
+# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
+# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
+#
+# Statistics help us better understand how Loki is used, and they show us performance
+# levels for most users. This helps us prioritize features and documentation.
+# For more information on what's sent, look at
+# https://github.com/grafana/loki/blob/main/pkg/analytics/stats.go
+# Refer to the buildReport method to see what goes into a report.
+#
+# If you would like to disable reporting, uncomment the following lines:
+#analytics:
+#  reporting_enabled: false
-limits_config:
-  enforce_metric_name: false
-  reject_old_samples: true
-  reject_old_samples_max_age: 168h
-chunk_store_config:
-  max_look_back_period: 0s
-table_manager:
-  retention_deletes_enabled: false
-  retention_period: 0s


@ -8,13 +8,11 @@
end
# Deploy `prometheus` files:
-template '/etc/loki/loki-config.yml' do
+remote_file '/etc/loki/loki-config.yml' do
owner 'root'
group 'root'
mode '644'
-variables(ipaddr: node['loki']['ipaddr'])
notifies :restart, 'service[loki]'
end


@ -1,45 +0,0 @@
auth_enabled: false
server:
http_listen_port: 3100
ingester:
lifecycler:
address: 127.0.0.1
ring:
kvstore:
store: inmemory
replication_factor: 1
final_sleep: 0s
chunk_idle_period: 5m
chunk_retain_period: 30s
max_transfer_retries: 0
schema_config:
configs:
- from: 2018-04-15
store: boltdb
object_store: filesystem
schema: v11
index:
prefix: index_
period: 168h
storage_config:
boltdb:
directory: /var/opt/loki/index
filesystem:
directory: /var/opt/loki/chunks
limits_config:
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
chunk_store_config:
max_look_back_period: 0s
table_manager:
retention_deletes_enabled: false
retention_period: 0s

cookbooks/lxc/default.rb Normal file

@ -0,0 +1,16 @@
# Install the necessary packages:
include_recipe './packages.rb'
include_recipe './eget.rb'
# `unattended-upgrade` settings:
include_recipe './unattended-upgrade.rb'
# `ufw` configurations:
include_recipe './ufw.rb'
# timezone configurations:
include_recipe './timezone.rb'
execute 'ufw allow 22/tcp' do
notifies :run, 'execute[ufw reload-or-enable]'
end

cookbooks/lxc/eget.rb Normal file

@ -0,0 +1,14 @@
result = run_command('which eget', error: false)
if result.exit_status != 0
# Install eget
execute 'curl https://zyedidia.github.io/eget.sh | sh' do
cwd '/usr/local/bin/'
end
execute 'chown root:root /usr/local/bin/eget'
execute 'chmod 755 /usr/local/bin/eget'
end
%w( zyedidia/eget mgdm/htmlq ).each do |p|
execute "eget #{p} --to /usr/local/bin/ --upgrade-only"
end
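The recipe above guards the bootstrap with `run_command('which eget', error: false)` and only downloads the installer when the probe's exit status is non-zero. A plain-Ruby sketch of that idempotent-install pattern (helper name is hypothetical):

```ruby
# Sketch: probe for a binary first; run the installer block only when
# the probe fails (non-zero exit status), mirroring the eget.rb guard.
def install_unless_present(probe_cmd)
  return :already_installed if system("#{probe_cmd} > /dev/null 2>&1")
  yield if block_given?  # the actual download/chown/chmod steps would go here
  :installed
end
```
Running it twice is safe: the second run hits the probe and skips the installer.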


@ -0,0 +1,2 @@
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";


@ -0,0 +1,143 @@
// Automatically upgrade packages from these (origin:archive) pairs
//
// Note that in Ubuntu security updates may pull in new dependencies
// from non-security sources (e.g. chromium). By allowing the release
// pocket these get automatically pulled in.
Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}";
"${distro_id}:${distro_codename}-security";
// Extended Security Maintenance; doesn't necessarily exist for
// every release and this system may not have it installed, but if
// available, the policy for updates is such that unattended-upgrades
// should also install from here by default.
"${distro_id}ESMApps:${distro_codename}-apps-security";
"${distro_id}ESM:${distro_codename}-infra-security";
// "${distro_id}:${distro_codename}-updates";
// "${distro_id}:${distro_codename}-proposed";
// "${distro_id}:${distro_codename}-backports";
};
// Python regular expressions, matching packages to exclude from upgrading
Unattended-Upgrade::Package-Blacklist {
// The following matches all packages starting with linux-
// "linux-";
// Use $ to explicitely define the end of a package name. Without
// the $, "libc6" would match all of them.
// "libc6$";
// "libc6-dev$";
// "libc6-i686$";
// Special characters need escaping
// "libstdc\+\+6$";
// The following matches packages like xen-system-amd64, xen-utils-4.1,
// xenstore-utils and libxenstore3.0
// "(lib)?xen(store)?";
// For more information about Python regular expressions, see
// https://docs.python.org/3/howto/regex.html
};
// This option controls whether the development release of Ubuntu will be
// upgraded automatically. Valid values are "true", "false", and "auto".
Unattended-Upgrade::DevRelease "auto";
// This option allows you to control if on a unclean dpkg exit
// unattended-upgrades will automatically run
// dpkg --force-confold --configure -a
// The default is true, to ensure updates keep getting installed
//Unattended-Upgrade::AutoFixInterruptedDpkg "true";
// Split the upgrade into the smallest possible chunks so that
// they can be interrupted with SIGTERM. This makes the upgrade
// a bit slower but it has the benefit that shutdown while a upgrade
// is running is possible (with a small delay)
//Unattended-Upgrade::MinimalSteps "true";
// Install all updates when the machine is shutting down
// instead of doing it in the background while the machine is running.
// This will (obviously) make shutdown slower.
// Unattended-upgrades increases logind's InhibitDelayMaxSec to 30s.
// This allows more time for unattended-upgrades to shut down gracefully
// or even install a few packages in InstallOnShutdown mode, but is still a
// big step back from the 30 minutes allowed for InstallOnShutdown previously.
// Users enabling InstallOnShutdown mode are advised to increase
// InhibitDelayMaxSec even further, possibly to 30 minutes.
//Unattended-Upgrade::InstallOnShutdown "false";
// Send email to this address for problems or packages upgrades
// If empty or unset then no email is sent, make sure that you
// have a working mail setup on your system. A package that provides
// 'mailx' must be installed. E.g. "user@example.com"
//Unattended-Upgrade::Mail "";
// Set this value to one of:
// "always", "only-on-error" or "on-change"
// If this is not set, then any legacy MailOnlyOnError (boolean) value
// is used to chose between "only-on-error" and "on-change"
//Unattended-Upgrade::MailReport "on-change";
// Remove unused automatically installed kernel-related packages
// (kernel images, kernel headers and kernel version locked tools).
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
// Do automatic removal of newly unused dependencies after the upgrade
Unattended-Upgrade::Remove-New-Unused-Dependencies "true";
// Do automatic removal of unused packages after the upgrade
// (equivalent to apt-get autoremove)
Unattended-Upgrade::Remove-Unused-Dependencies "true";
// Automatically reboot *WITHOUT CONFIRMATION* if
// the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "false";
// Automatically reboot even if there are users currently logged in
// when Unattended-Upgrade::Automatic-Reboot is set to true
//Unattended-Upgrade::Automatic-Reboot-WithUsers "true";
// If automatic reboot is enabled and needed, reboot at the specific
// time instead of immediately
// Default: "now"
//Unattended-Upgrade::Automatic-Reboot-Time "02:00";
// Use apt bandwidth limit feature, this example limits the download
// speed to 70kb/sec
//Acquire::http::Dl-Limit "70";
// Enable logging to syslog. Default is False
// Unattended-Upgrade::SyslogEnable "false";
// Specify syslog facility. Default is daemon
// Unattended-Upgrade::SyslogFacility "daemon";
// Download and install upgrades only on AC power
// (i.e. skip or gracefully stop updates on battery)
// Unattended-Upgrade::OnlyOnACPower "true";
// Download and install upgrades only on non-metered connection
// (i.e. skip or gracefully stop updates on a metered connection)
// Unattended-Upgrade::Skip-Updates-On-Metered-Connections "true";
// Verbose logging
// Unattended-Upgrade::Verbose "false";
// Print debugging information both in unattended-upgrades and
// in unattended-upgrade-shutdown
// Unattended-Upgrade::Debug "false";
// Allow package downgrade if Pin-Priority exceeds 1000
// Unattended-Upgrade::Allow-downgrade "false";
// When APT fails to mark a package to be upgraded or installed try adjusting
// candidates of related packages to help APT's resolver in finding a solution
// where the package can be upgraded or installed.
// This is a workaround until APT's resolver is fixed to always find a
// solution if it exists. (See Debian bug #711128.)
// The fallback is enabled by default, except on Debian's sid release because
// uninstallable packages are frequent there.
// Disabling the fallback speeds up unattended-upgrades when there are
// uninstallable packages at the expense of rarely keeping back packages which
// could be upgraded or installed.
// Unattended-Upgrade::Allow-APT-Mark-Fallback "true";


@ -0,0 +1,2 @@
autoclean -y
upgrade -y -o APT::Get::Show-Upgraded=true


@ -0,0 +1,11 @@
# Configuration for cron-apt. For further information about the possible
# configuration settings see the README file.
SYSLOGON="always"
DEBUG="verbose"
MAILON=""
APTCOMMAND=/usr/bin/apt
OPTIONS="-o quiet=1 -o Dir::Etc::SourceList=/etc/apt/security.sources.list"

cookbooks/lxc/packages.rb Normal file

@ -0,0 +1,14 @@
# Execute `apt update`:
execute 'apt update'
# Install the necessary packages:
%w[build-essential zsh vim-nox debian-keyring curl direnv jq avahi-daemon wget gpg coreutils].each do |pkg|
package pkg
end
execute 'ufw allow 5353/udp' do
user 'root'
not_if 'LANG=c ufw status | grep 5353'
notifies :run, 'execute[ufw reload-or-enable]'
end


@ -0,0 +1 @@
deb "http://ppa.launchpad.net/git-core/ppa/ubuntu" <%= @distribution %> main

cookbooks/lxc/timezone.rb Normal file

@@ -0,0 +1,23 @@
case node['platform_version']
when "18.04", "20.04", "22.04", "24.04"
  execute 'timedatectl set-timezone Asia/Tokyo' do
    not_if 'timedatectl | grep Tokyo'
  end
else
  remote_file '/etc/timezone' do
    user 'root'
    owner 'root'
    group 'root'
    mode '644'
  end

  [
    'cp -f /usr/share/zoneinfo/Asia/Tokyo /etc/localtime'
  ].each do |cmd|
    execute cmd do
      user 'root'
      not_if 'diff /usr/share/zoneinfo/Asia/Tokyo /etc/localtime'
    end
  end
end

cookbooks/lxc/ufw.rb Normal file (+6)

@@ -0,0 +1,6 @@
execute 'ufw reload-or-enable' do
  user 'root'
  command 'LANG=C ufw reload | grep skipping && ufw --force enable || exit 0'
  action :nothing
end
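The `command` above is a shell one-liner: when `ufw reload` reports "skipping" (ufw prints a "skipping reload" message while the firewall is inactive), it falls back to `ufw --force enable`; otherwise it exits 0. A minimal Ruby rendering of that branch, with the ufw output stubbed:

```ruby
# Sketch of the reload-or-enable branch. `reload_output` stands in for
# what `LANG=C ufw reload` prints on an inactive firewall; the sample
# string is hypothetical, not captured from a real host.
reload_output = 'Firewall not enabled (skipping reload)'

if reload_output.include?('skipping')
  puts 'ufw --force enable'   # firewall inactive: enable it
else
  puts 'reload done'          # firewall active: the reload sufficed
end
```

The resource is `action :nothing`, so it only runs when notified, e.g. by the `ufw allow 5353/udp` rule in `packages.rb`.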


@@ -0,0 +1,56 @@
case run_command('grep VERSION_ID /etc/os-release | awk -F\" \'{print $2}\'').stdout.chomp
when "18.04"
  # Install `cron-apt`:
  package 'cron-apt'

  # From here, we are going to set up `cron-apt` to
  # install the important security updates every day.
  remote_file '/etc/cron-apt/config' do
    user 'root'
    owner 'root'
    group 'root'
    mode '644'
  end

  remote_file '/etc/cron-apt/action.d/3-download' do
    user 'root'
    owner 'root'
    group 'root'
    mode '644'
  end

  execute 'grep security /etc/apt/sources.list > /etc/apt/security.sources.list' do
    user 'root'
    not_if 'test -e /etc/apt/security.sources.list'
  end

  file '/var/log/cron-apt/log' do
    user 'root'
    content 'foo\n'
    owner 'root'
    group 'root'
    mode '666'
    not_if 'test -e /var/log/cron-apt/log'
  end

  execute '/usr/sbin/logrotate -f /etc/logrotate.d/cron-apt' do
    user 'root'
    not_if 'test -e /var/log/cron-apt/log'
  end
when '20.04', '22.04', '24.04'
  %w(20auto-upgrades 50unattended-upgrades).each do |conf|
    remote_file "/etc/apt/apt.conf.d/#{conf}" do
      owner 'root'
      group 'root'
      mode '644'
    end
  end
end


@@ -3,8 +3,9 @@
 # -------------------------------------------
 node.reverse_merge!({
   'nginx' => {
-    'version' => '1.25.0',
-    'skip_lego' => 'true',
-    'skip_webadm' => 'true'
+    'version' => '1.26.1',
+    'skip_lego' => true,
+    'skip_webadm' => false,
+    'skip_deploy_conf' => true
   }
 })


@@ -78,7 +78,7 @@ directory MODULEDIR do
 end
 # Build starts here:
-execute "#{NGINXBUILD} -d working -v #{version} -c configure.sh -zlib -pcre -libressl -libresslversion 3.8.0" do
+execute "#{NGINXBUILD} -d working -v #{version} -c configure.sh -zlib -pcre -libressl -libresslversion 3.9.1" do
   cwd WORKDIR
   user USER


@@ -35,6 +35,13 @@ end
 # Prerequisites for Building nginx:
 if !node['nginx']['skip_webadm']
   include_recipe './webadm.rb'
+end
+
+# Build nginx:
+include_recipe './build.rb'
+
+# Check whether to deploy the nginx confings:
+if !node['nginx']['skip_deploy_conf']
   include_recipe '../blog/default.rb'
   include_recipe '../everun/default.rb'
 end
@@ -44,9 +51,5 @@ if !node['nginx']['skip_lego']
   include_recipe './lego.rb'
 end
-
-# Build nginx:
-include_recipe './build.rb'
 # Setup nginx:
 include_recipe './setup.rb'


@@ -47,6 +47,7 @@ end
 end
 # Create `repo` directory:
+if !node['nginx']['skip_deploy_conf']
   git '/home/webadm/repo/nginx-config' do
     user 'webadm'
     repository 'https://github.com/kazu634/nginx-config.git'
@@ -60,4 +61,4 @@ end
   service 'consul-template' do
     action :restart
   end
+end


@@ -31,14 +31,7 @@ directory '/opt/cni/bin' do
   mode '0755'
 end
-%w( bandwidth bridge dhcp firewall host-device host-local ipvlan loopback macvlan portmap ptp sbr static tuning vlan vrf ).each do |f|
-  remote_file "/opt/cni/bin/#{f}" do
-    owner 'root'
-    group 'root'
-    mode '0775'
-  end
-end
+execute "eget containernetworking/plugins --to /opt/cni/bin --upgrade-only -a ^sha --all"
 directory '/etc/cni' do
   owner 'root'


@@ -2,7 +2,14 @@ include_recipe './attributes.rb'
 include_recipe './install.rb'
+if node['nomad']['client']
+  include_recipe '../docker/default.rb'
+  include_recipe './csi.rb'
+
+  package "consul-cni"
+  package "dmidecode"
+end
+
 if node['nomad']['manager'] || node['nomad']['client']
   include_recipe './setup.rb'
-  include_recipe './csi.rb'
 end


@@ -7,7 +7,7 @@ execute "wget -O- #{SRC} | gpg --dearmor -o #{DEST}" do
 end
 # Retrieve the Ubuntu code:
-DIST = run_command('lsb_release -cs').stdout.chomp
+DIST = run_command('lsb_release -cs 2>/dev/null').stdout.chomp
 # Deploy the `apt` sources:
 template '/etc/apt/sources.list.d/hashicorp.list' do


@@ -1,5 +1,6 @@
 # Kernel parameters:
 execute 'modprobe br_netfilter'
+execute 'modprobe bridge'
 remote_file '/etc/sysctl.d/90-nomad.conf' do
   owner 'root'


@@ -3,23 +3,12 @@
 # -------------------------------------------
 node.reverse_merge!({
   'node_exporter' => {
-    'url' => 'https://github.com/prometheus/node_exporter/releases/download/',
-    'prefix' => 'node_exporter-',
-    'postfix' => '.linux-amd64.tar.gz',
+    'url' => 'prometheus/node_exporter',
     'storage' => '/opt/node_exporter/bin/',
     'location' => '/usr/local/bin/'
   },
-  'blackbox_exporter' => {
-    'url' => 'https://github.com/prometheus/blackbox_exporter/releases/download/',
-    'prefix' => 'blackbox_exporter-',
-    'postfix' => '.linux-amd64.tar.gz',
-    'storage' => '/opt/blackbox_exporter/bin/',
-    'location' => '/usr/local/bin/'
-  },
   'filestat_exporter' => {
-    'url' => 'https://github.com/michael-doubez/filestat_exporter/releases/download/',
-    'prefix' => 'filestat_exporter-',
-    'postfix' => '.linux-amd64.tar.gz',
+    'url' => 'michael-doubez/filestat_exporter',
     'storage' => '/opt/filestat_exporter/',
     'location' => '/usr/local/bin/'
   },


@@ -3,9 +3,7 @@ BIN = '/usr/local/bin/exporter_proxy'
 CONFDIR = '/etc/prometheus_exporters.d/exporter_proxy/'
 CONF = 'config.yml'
-execute "wget #{URL} -O #{BIN}" do
-  not_if "test -e #{BIN}"
-end
+execute "eget rrreeeyyy/exporter_proxy --to /usr/local/bin/ --upgrade-only"
 file BIN do
   user 'root'


@@ -1,35 +1,3 @@
-filestat_exporter_url = ''
-filestat_exporter_bin = ''
-vtag = ''
-
-# Calculate the Download URL:
-begin
-  require 'net/http'
-
-  uri = URI.parse('https://github.com/michael-doubez/filestat_exporter/releases/latest')
-
-  Timeout.timeout(3) do
-    response = Net::HTTP.get_response(uri)
-    vtag = $1 if response['location'] =~ %r{tag\/(v\d+\.\d+\.\d+)}
-
-    filestat_exporter_bin = "#{node['filestat_exporter']['prefix']}#{vtag}#{node['filestat_exporter']['postfix']}"
-    filestat_exporter_url = "#{node['filestat_exporter']['url']}/#{vtag}/#{filestat_exporter_bin}"
-  end
-rescue
-  # Abort the chef client process:
-  raise 'Cannot connect to http://github.com.'
-end
-
-# Check the installed version to decide whether an update is needed:
-result = run_command("filestat_exporter --version 2>&1 | grep #{vtag}", error: false)
-
-if result.exit_status != 0
-  # Download:
-  TMP = "/tmp/#{filestat_exporter_bin}"
-  execute "wget #{filestat_exporter_url} -O #{TMP}"
-
 # Install:
 directory node['filestat_exporter']['storage'] do
   owner 'root'
@@ -37,7 +5,7 @@ if result.exit_status != 0
   mode '755'
 end
-execute "tar zxf #{TMP} -C #{node['filestat_exporter']['storage']} --strip-components 1"
+execute "eget #{node['filestat_exporter']['url']} --to #{node['filestat_exporter']['storage']}"
 # Change Owner and Permissions:
 file "#{node['filestat_exporter']['storage']}filestat_exporter" do
@@ -50,4 +18,3 @@ if result.exit_status != 0
 link "#{node['filestat_exporter']['location']}filestat_exporter" do
   to "#{node['filestat_exporter']['storage']}filestat_exporter"
 end
-end


@@ -1,37 +1,3 @@
-node_exporter_url = ''
-node_exporter_bin = ''
-tag = ''
-vtag = ''
-
-# Calculate the Download URL:
-begin
-  require 'net/http'
-
-  uri = URI.parse('https://github.com/prometheus/node_exporter/releases/latest')
-
-  Timeout.timeout(3) do
-    response = Net::HTTP.get_response(uri)
-    vtag = $1 if response['location'] =~ %r{tag\/(v\d+\.\d+\.\d+)}
-    tag = vtag.sub(/^v/, '')
-
-    node_exporter_bin = "#{node['node_exporter']['prefix']}#{tag}#{node['node_exporter']['postfix']}"
-    node_exporter_url = "#{node['node_exporter']['url']}/#{vtag}/#{node_exporter_bin}"
-  end
-rescue
-  # Abort the chef client process:
-  raise 'Cannot connect to http://github.com.'
-end
-
-# Check the installed version to decide whether an update is needed:
-result = run_command("node_exporter --version 2>&1 | grep #{tag}", error: false)
-
-if result.exit_status != 0
-  # Download:
-  TMP = "/tmp/#{node_exporter_bin}"
-  execute "wget #{node_exporter_url} -O #{TMP}"
-
 # Install:
 directory node['node_exporter']['storage'] do
   owner 'root'
@@ -39,7 +5,7 @@ if result.exit_status != 0
   mode '755'
 end
-execute "tar zxf #{TMP} -C #{node['node_exporter']['storage']} --strip-components 1"
+execute "eget #{node['node_exporter']['url']} --to #{node['node_exporter']['storage']} --upgrade-only"
 # Change Owner and Permissions:
 file "#{node['node_exporter']['storage']}node_exporter" do
@@ -52,4 +18,3 @@ if result.exit_status != 0
 link "#{node['node_exporter']['location']}node_exporter" do
   to "#{node['node_exporter']['storage']}node_exporter"
 end
-end
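Both exporter recipes previously resolved the latest release by following GitHub's `/releases/latest` redirect and parsing the tag out of the `Location` header, work that `eget` now subsumes. A sketch of just the parsing step the removed code performed (the sample header value below is hypothetical, not a live response):

```ruby
# The removed recipes read the Location header returned by
# https://github.com/prometheus/node_exporter/releases/latest and
# extracted the version tag with this regexp, then dropped the 'v'.
location = 'https://github.com/prometheus/node_exporter/releases/tag/v1.8.2'  # sample value

vtag = location[%r{tag\/(v\d+\.\d+\.\d+)}, 1]
tag  = vtag.sub(/^v/, '')

puts vtag  # "v1.8.2"
puts tag   # "1.8.2"
```

`tag` was then used to build the tarball name and to grep the installed `--version` output, which is why both the `prefix`/`postfix` attributes and the version check disappear in this commit.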


@@ -3,7 +3,7 @@
 # -------------------------------------------
 case run_command('grep VERSION_ID /etc/os-release | awk -F\" \'{print $2}\'').stdout.chomp
-when "20.04"
+when "20.04", "22.04", "24.04"
   cmd = 'LANG=C ip a | grep "inet " | grep -v -E "(127|172)" | cut -d" " -f6 | perl -pe "s/\/.+//g"'
 when "18.04"
@@ -21,6 +21,6 @@ node.reverse_merge!({
     'manager' => false,
     'ipaddr' => ipaddr,
     'hostname' => hostname,
-    'ips' => ['192.168.10.141', '192.168.10.142', '192.168.10.143'],
+    'ips' => ['192.168.10.140', '192.168.10.141', '192.168.10.142'],
   }
 })


@@ -0,0 +1,5 @@
md5:cb234b386c1601dc3c6bf1072c00a441:salt:123-90-76-221-9-96-59-101:aes-256-cfb:SfQ2qhmH163jZgh9yequT6JyUCNfaCYW1Ch6BDE6Lid8fj6xcwWYLLTycXhs
o0y3Wvf3lgt3rHQy6J2tPuSahbtMcZwcBUp6jblNahBJW5yw1pUR/cLNXruy
J3/LLbA2BPBb+l2TAzVfUTNHKdPY7Z1hZ2hcSgf7uK6cCoSHrPGF1jePQx7+
Ys1sJLsg0M7jUXUiHrNZGdf5ShR0oeyQ+1tFYu9bMVn/EnJHoTtrL6Zbrb8b
14YmdtqwhuY46L+wTE2nmWqBUdCYCnlta8RHzgnXxWQRLnnEZ356oW+WIQ==


@@ -7,12 +7,15 @@ execute "wget -O- #{SRC} | gpg --dearmor -o #{DEST}" do
 end
 # Retrieve the Ubuntu code:
-DIST = run_command('lsb_release -cs').stdout.chomp
+DIST = run_command('lsb_release -cs 2>/dev/null').stdout.chomp
 # Deploy the `apt` sources:
 template '/etc/apt/sources.list.d/hashicorp.list' do
   action :create
   variables(distribution: DIST)
+  owner 'root'
+  group 'root'
 end
 execute 'apt update' do


@@ -2,9 +2,21 @@
 template '/etc/vault.d/vault.hcl' do
   owner 'vault'
   group 'vault'
-  mode '644'
+  mode '600'
   variables(HOSTNAME: node['vault']['hostname'], IPADDR: node['vault']['ipaddr'], IPS: node['vault']['ips'])
+  notifies :restart, 'service[vault]'
+end
+
+encrypted_remote_file '/etc/vault.d/vault.env' do
+  owner 'vault'
+  group 'vault'
+  mode '600'
+  source 'files/etc/vault.d/vault.env'
+  password ENV['ITAMAE_PASSWORD']
+  notifies :restart, 'service[vault]'
 end
 directory '/etc/vault.d/policies' do
@@ -26,3 +38,18 @@ remote_file '/etc/logrotate.d/vault' do
   group 'root'
   mode '644'
 end
+
+%w(8200 8201).each do |port|
+  execute "ufw allow #{port}" do
+    user 'root'
+    not_if "LANG=c ufw status | grep #{port}"
+    notifies :run, 'execute[ufw reload-or-enable]'
+  end
+end
+
+service 'vault' do
+  action [:enable, :start]
+end


@@ -1,15 +1,15 @@
 ui = true
 disable_mlock = true
-# service_registration "consul" {
-#   address = "127.0.0.1:8500"
-#   token = "19149728-ce09-2a72-26b6-d2fc3aeecdd8"
-# }
+service_registration "consul" {
+  address = "127.0.0.1:8500"
+  token = "63c7eb0b-3e39-95e8-9c70-6e42885cb8f8"
+}
 storage "raft" {
   path = "/opt/vault/data"
   node_id = "<%= @HOSTNAME %>"
 <% @IPS.each do |ip| %>
 retry_join {
   leader_api_addr = "http://<%= ip %>:8200"
@@ -18,7 +18,7 @@ storage "raft" {
 }
 api_addr = "http://<%= @IPADDR %>:8200"
-cluster_addr = "http://<%= @IPADDR %>::8201"
+cluster_addr = "http://<%= @IPADDR %>:8201"
 # HTTPS listener
 listener "tcp" {
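The `retry_join` stanza in this template is emitted once per address in `node['vault']['ips']`. A quick ERB rendering sketch, using the updated IP list from the attributes diff in this compare (local variables stand in for the template's `@IPS`):

```ruby
require 'erb'

# Render the retry_join stanzas the vault.hcl template produces.
# The IP list mirrors the updated 'ips' attribute in this compare.
ips = ['192.168.10.140', '192.168.10.141', '192.168.10.142']

template = <<~ERB
  <% ips.each do |ip| %>
  retry_join {
    leader_api_addr = "http://<%= ip %>:8200"
  }
  <% end %>
ERB

puts ERB.new(template).result(binding)
```

Each node thus tries every peer in the list as a raft join target, which is why the list must stay in sync across the consul and vault attribute files.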


@@ -2,7 +2,7 @@
 # Specifying the default settings:
 # -------------------------------------------
 case run_command('grep VERSION_ID /etc/os-release | awk -F\" \'{print $2}\'').stdout.chomp
-when "20.04"
+when "20.04", "22.04", "24.04"
   cmd = 'LANG=C ip a | grep "inet " | grep -v -E "(127|172)" | cut -d" " -f6 | perl -pe "s/\/.+//g"'
 when "18.04"

roles/lxc.rb Normal file (+6)

@@ -0,0 +1,6 @@
include_recipe "../cookbooks/lxc/default.rb"
include_recipe "../cookbooks/vault/default.rb"
include_recipe "../cookbooks/consul-template/default.rb"
include_recipe "../cookbooks/consul/default.rb"
include_recipe "../cookbooks/vector/default.rb"
include_recipe "../cookbooks/prometheus-exporters/default.rb"

tasks/lxc.rake Normal file (+9)

@@ -0,0 +1,9 @@
#!/usr/bin/env rake

desc 'Invoke itamae command for the lxc container'
task :lxc do
  node = `ls -1 nodes/*.json | xargs -I % basename % .json | fzf`
  node.chomp!

  sh "ITAMAE_PASSWORD=musashi bundle ex itamae ssh --host #{node} -j nodes/#{node}.json -u root entrypoint.rb"
end
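The task pipes the node list through `fzf` for interactive selection; the node name it ends up with is simply the JSON file's basename. A sketch of that derivation (the node file name below is hypothetical):

```ruby
# What `basename % .json` in the task produces for a node file
# selected via fzf: strip the directory and the .json suffix.
node_file = 'nodes/lxc-web01.json'   # hypothetical entry under nodes/
node = File.basename(node_file, '.json')

puts node
puts "itamae ssh --host #{node} -j nodes/#{node}.json"
```

The resulting name is used both as the SSH host and to locate the matching attribute file under `nodes/`, so node JSON files must be named after their hosts.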