SaltStack Official Linux Formula

map.jinja

Latest commit: rewrite LVM lv_present — prevents unwanted LV shrink (#221)

* Update file.sls: add replace option
* Update file.sls: update replace option
* Update job.sls: added the ability to set a job with a special keyword such as '@reboot' or '@hourly'. Quotes must be used, otherwise PyYAML will strip the '@' sign. https://docs.saltstack.com/en/master/ref/states/all/salt.states.cron.html
* Update README.rst: documented the same special-keyword jobs
* fix(deprecation): update to new method (#214)
* Allow swap to be completely disabled
* Sort repos so they do not change order on every run
* Allow use of the new state syntax for module.run. The new syntax has been supported since ~2017. From the docs, in case they change:

  New style:
  test.random_hash:
    module.run:
      - test.random_hash:
        - size: 42
        - hash_type: sha256

  Legacy style:
  test.random_hash:
    module.run:
      - size: 42
      - hash_type: sha256

* Update map.jinja: add support for Ubuntu Focal
* Update file.sls: added the possibility to delete files
* Network resolv.conf handling (#220): previously resolv.conf was generated correctly but then overwritten again by the "network.system" handling in interface.sls. With two search domains that should have been written as "search example.com. subdomain.example.com", the result instead became `search ['example.com.', 'subdomain.example.com']`. The problem only arises if you do not want a "domain:" entry in resolv.conf.
* Rewrite LVM lv_present: since Salt now also supports LV extend and reduce, the force option must be used with care. force is now only set if the corresponding LV does not yet exist (checked via grains), in order to wipe any existing FS signatures. If the LV already exists (checked via grains), force is set to False unless it is explicitly set to True in the pillars.
* Update mount.sls: added the possibility to set the dump and pass options (dump: the dump value to be written to fstab, default 0; pass_num: the pass value to be written to fstab, default 0)
* Add bind mount option
* Add support for template defaults/context args
* Add IPv6 interface support (first version)
* Fix warning in Salt v3003: the 'gid_from_name' argument in the user.present state has been replaced with 'usergroup'
* Update map.jinja: add Jammy support

Signed-off-by: Felipe Zipitria <fzipitria@perceptyx.com>
Co-authored-by: Felipe Zipitría <fzipi@fing.edu.uy>
Co-authored-by: Kyle Gullion <kgullion@gmail.com>
Co-authored-by: Matthew Thode <thode@fsi.io>
Co-authored-by: Matthew Thode <mthode@mthode.org>
Co-authored-by: Bruno Binet <bruno.binet@gmail.com>
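The job.sls special-keyword change can be illustrated with a small pillar sketch (the `backup` job name and command are hypothetical; the `special` key mirrors the argument of the same name in `salt.states.cron.present`, and the layout of `linux:system:job` follows this formula's README):

```yaml
linux:
  system:
    job:
      backup:
        command: /usr/local/bin/backup.sh
        enabled: true
        user: root
        special: '@daily'   # quotes required, otherwise PyYAML strips the '@'
```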
{% set system = salt['grains.filter_by']({
    'Arch': {
        'pkgs': ['sudo', 'vim', 'wget'],
        'utc': true,
        'user': {},
        'group': {},
        'job': {},
        'limit': {},
        'locale': {},
        'motd': {},
        'env': {},
        'profile': {},
        'proxy': {},
        'repo': {},
        'package': {},
        'autoupdates': {
            'pkgs': []
        },
        'selinux': 'permissive',
        'ca_certs_dir': '/usr/local/share/ca-certificates',
        'ca_certs_bin': 'update-ca-certificates',
        'atop': {
            'enabled': false,
            'interval': '20',
            'autostart': true,
            'logpath': '/var/log/atop',
            'outfile': '/var/log/atop/daily.log'
        },
        'at': {
            'pkgs': [],
            'services': []
        },
        'cron': {
            'pkgs': [],
            'services': []
        },
    },
    'Debian': {
        'pkgs': ['python-apt', 'apt-transport-https', 'libmnl0'],
        'utc': true,
        'user': {},
        'group': {},
        'job': {},
        'limit': {},
        'locale': {},
        'motd': {},
        'env': {},
        'profile': {},
        'proxy': {},
        'repo': {},
        'package': {},
        'autoupdates': {
            'pkgs': ['unattended-upgrades']
        },
        'selinux': 'permissive',
        'ca_certs_dir': '/usr/local/share/ca-certificates',
        'ca_certs_bin': 'update-ca-certificates',
        'atop': {
            'enabled': false,
            'interval': '20',
            'autostart': true,
            'logpath': '/var/log/atop',
            'outfile': '/var/log/atop/daily.log'
        },
        'at': {
            'pkgs': ['at'],
            'services': ['atd'],
            'user': {}
        },
        'cron': {
            'pkgs': ['cron'],
            'services': ['cron'],
            'user': {}
        },
    },
    'RedHat': {
        'pkgs': ['policycoreutils', 'policycoreutils-python', 'telnet', 'wget'],
        'utc': true,
        'user': {},
        'group': {},
        'job': {},
        'limit': {},
        'locale': {},
        'motd': {},
        'env': {},
        'profile': {},
        'proxy': {},
        'repo': {},
        'package': {},
        'autoupdates': {
            'pkgs': []
        },
        'selinux': 'permissive',
        'ca_certs_dir': '/etc/pki/ca-trust/source/anchors',
        'ca_certs_bin': 'update-ca-trust extract',
        'atop': {
            'enabled': false,
            'interval': '20',
            'autostart': true,
            'logpath': '/var/log/atop',
            'outfile': '/var/log/atop/daily.log'
        },
        'at': {
            'pkgs': [],
            'services': []
        },
        'cron': {
            'pkgs': [],
            'services': []
        },
    },
}, merge=salt['grains.filter_by']({
    'bullseye': {
        'pkgs': ['python3-apt', 'apt-transport-https', 'libmnl0'],
    },
    'bookworm': {
        'pkgs': ['python3-apt', 'apt-transport-https', 'libmnl0'],
    },
    'sid': {
        'pkgs': ['python3-apt', 'apt-transport-https', 'libmnl0'],
    },
    'jammy': {
        'pkgs': ['python3-apt', 'apt-transport-https', 'libmnl0'],
    },
}, grain='oscodename', merge=salt['pillar.get']('linux:system'))) %}
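Because the filter_by result is finally merged with `pillar.get('linux:system')`, any of these per-distribution defaults can be overridden per node. A minimal pillar sketch (the values are illustrative, not recommendations; note that dictionaries are merged key-by-key while lists replace the default wholesale):

```yaml
linux:
  system:
    selinux: 'enforcing'   # replaces the 'permissive' default
    atop:
      enabled: true        # other atop keys keep their defaults
    pkgs:                  # the whole list replaces the os_family default
      - sudo
      - vim
      - htop
```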
{% set banner = salt['grains.filter_by']({
    'BaseDefaults': {
        'enabled': false,
    },
}, grain='os_family', merge=salt['pillar.get']('linux:system:banner'), base='BaseDefaults') %}
{% set auth = salt['grains.filter_by']({
    'Arch': {
        'enabled': false,
        'duo': {
            'enabled': false,
            'duo_host': 'localhost',
            'duo_ikey': '',
            'duo_skey': ''
        }
    },
    'RedHat': {
        'enabled': false,
        'duo': {
            'enabled': false,
            'duo_host': 'localhost',
            'duo_ikey': '',
            'duo_skey': ''
        }
    },
    'Debian': {
        'enabled': false,
        'duo': {
            'enabled': false,
            'duo_host': 'localhost',
            'duo_ikey': '',
            'duo_skey': ''
        }
    },
}, grain='os_family', merge=salt['pillar.get']('linux:system:auth')) %}
{% set ldap = salt['grains.filter_by']({
    'RedHat': {
        'enabled': false,
        'pkgs': ['openldap-clients', 'nss-pam-ldapd', 'authconfig', 'nscd'],
        'version': '3',
        'scope': 'sub',
        'uid': 'nslcd',
        'gid': 'nslcd',
    },
    'Debian': {
        'enabled': false,
        'pkgs': ['libnss-ldapd', 'libpam-ldapd', 'nscd'],
        'version': '3',
        'scope': 'sub',
        'uid': 'nslcd',
        'gid': 'nslcd',
    },
}, grain='os_family', merge=salt['pillar.get']('linux:system:auth:ldap')) %}
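The ldap defaults above are merged with `linux:system:auth:ldap` from pillar. Enabling it might look like the sketch below; only `enabled` and `scope` appear in the defaults in this file, so the `server` and `base` key names are assumptions based on common nslcd configuration, not confirmed here:

```yaml
linux:
  system:
    auth:
      ldap:
        enabled: true
        server: ldap.example.com   # assumed key name
        base: dc=example,dc=com    # assumed key name
        scope: sub                 # overrides the default shown above
```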
{%- load_yaml as login_defs_defaults %}
Debian:
  CHFN_RESTRICT:
    value: 'rwh'
  DEFAULT_HOME:
    value: 'yes'
  ENCRYPT_METHOD:
    value: 'SHA512'
  ENV_PATH:
    value: 'PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games'
  ENV_SUPATH:
    value: 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  ERASECHAR:
    value: '0177'
  FAILLOG_ENAB:
    value: 'yes'
  FTMP_FILE:
    value: '/var/log/btmp'
  GID_MAX:
    value: '60000'
  GID_MIN:
    value: '1000'
  HUSHLOGIN_FILE:
    value: '.hushlogin'
  KILLCHAR:
    value: '025'
  LOGIN_RETRIES:
    value: '5'
  LOGIN_TIMEOUT:
    value: '60'
  LOG_OK_LOGINS:
    value: 'no'
  LOG_UNKFAIL_ENAB:
    value: 'no'
  MAIL_DIR:
    value: '/var/mail'
  PASS_MAX_DAYS:
    value: '99999'
  PASS_MIN_DAYS:
    value: '0'
  PASS_WARN_AGE:
    value: '7'
  SU_NAME:
    value: 'su'
  SYSLOG_SG_ENAB:
    value: 'yes'
  SYSLOG_SU_ENAB:
    value: 'yes'
  TTYGROUP:
    value: 'tty'
  TTYPERM:
    value: '0600'
  UID_MAX:
    value: '60000'
  UID_MIN:
    value: '1000'
  UMASK:
    value: '022'
  USERGROUPS_ENAB:
    value: 'yes'
{%- endload %}
{%- set login_defs = salt['grains.filter_by'](login_defs_defaults,
  grain='os_family', merge=salt['pillar.get']('linux:system:login_defs')) %}
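Since login_defs is likewise merged from `linux:system:login_defs`, individual /etc/login.defs keys can be tightened from pillar without restating the whole map. An illustrative sketch (values are examples, not recommendations):

```yaml
linux:
  system:
    login_defs:
      PASS_MAX_DAYS:
        value: '90'    # overrides the '99999' default above
      UMASK:
        value: '027'
```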
{# 'network_name', #}
{% set interface_params = [
    'gateway',
    'mtu',
    'network',
    'broadcast',
    'master',
    'miimon',
    'ovs_ports',
    'ovs_bridge',
    'mode',
    'port_type',
    'peer',
    'lacp-rate',
    'dns-search',
    'up_cmds',
    'pre_up_cmds',
    'post_up_cmds',
    'down_cmds',
    'pre_down_cmds',
    'post_down_cmds',
    'maxwait',
    'stp',
    'gro',
    'rx',
    'tx',
    'sg',
    'tso',
    'ufo',
    'gso',
    'lro',
    'lacp_rate',
    'ad_select',
    'downdelay',
    'updelay',
    'hashing-algorithm',
    'hardware-dma-ring-rx',
    'hwaddr',
    'noifupdown',
    'arp_ip_target',
    'primary',
] %}
{% set debian_headers = "linux-headers-" + grains.get('kernelrelease')|string %}
{% set network = salt['grains.filter_by']({
    'Arch': {
        'pkgs': ['wpa_supplicant', 'dhclient', 'wireless_tools', 'ifenslave'],
        'bridge_pkgs': ['bridge-utils', 'vlan'],
        'ovs_pkgs': ['openvswitch-switch', 'vlan'],
        'hostname_file': '/etc/hostname',
        'network_manager': False,
        'systemd': {},
        'interface': {},
        'interface_params': interface_params,
        'bridge': 'none',
        'proxy': {
            'host': 'none',
        },
        'host': {},
        'mine_dns_records': False,
        'dhclient_config': '/etc/dhcp/dhclient.conf',
        'ovs_nowait': False,
    },
    'Debian': {
        'pkgs': ['ifenslave'],
        'hostname_file': '/etc/hostname',
        'bridge_pkgs': ['bridge-utils', 'vlan'],
        'ovs_pkgs': ['openvswitch-switch', 'bridge-utils', 'vlan'],
        'dpdk_pkgs': ['dpdk', 'dpdk-dev', 'dpdk-igb-uio-dkms', 'dpdk-rte-kni-dkms', debian_headers.encode('utf8') ],
        'network_manager': False,
        'systemd': {},
        'interface': {},
        'interface_params': interface_params,
        'bridge': 'none',
        'proxy': {
            'host': 'none'
        },
        'host': {},
        'mine_dns_records': False,
        'dhclient_config': '/etc/dhcp/dhclient.conf',
        'ovs_nowait': False,
    },
    'RedHat': {
        'pkgs': ['iputils'],
        'bridge_pkgs': ['bridge-utils', 'vlan'],
        'ovs_pkgs': ['openvswitch-switch', 'bridge-utils', 'vlan'],
        'hostname_file': '/etc/sysconfig/network',
        'network_manager': False,
        'systemd': {},
        'interface': {},
        'interface_params': interface_params,
        'bridge': 'none',
        'proxy': {
            'host': 'none'
        },
        'host': {},
        'mine_dns_records': False,
        'dhclient_config': '/etc/dhcp/dhclient.conf',
        'ovs_nowait': False,
    },
}, grain='os_family', merge=salt['pillar.get']('linux:network')) %}
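The network map merges `linux:network` pillar data over these per-os_family defaults; each key of `interface` is one interface definition, and any of the `interface_params` listed above may appear in it. A sketch of a static interface (addresses are illustrative):

```yaml
linux:
  network:
    interface:
      eth0:
        enabled: true
        type: eth
        proto: static
        address: 192.168.0.102
        netmask: 255.255.255.0
        mtu: 1500   # one of the interface_params listed above
```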
{% set storage = salt['grains.filter_by']({
    'Arch': {
        'mount': {},
        'swap': {},
        'disk': {},
        'lvm': {},
        'lvm_services': ['lvm2-lvmetad', 'lvm2-lvmpolld', 'lvm2-monitor'],
        'loopback': {},
        'nfs': {
            'pkgs': ['nfs-utils']
        },
        'multipath': {
            'enabled': False,
            'pkgs': ['multipath-tools', 'multipath-tools-boot'],
            'service': ''
        },
    },
    'Debian': {
        'mount': {},
        'swap': {},
        'lvm': {},
        'disk': {},
        'lvm_services': ['lvm2-lvmetad', 'lvm2-lvmpolld', 'lvm2-monitor'],
        'loopback': {},
        'nfs': {
            'pkgs': ['nfs-common']
        },
        'multipath': {
            'enabled': False,
            'pkgs': ['multipath-tools', 'multipath-tools-boot'],
            'service': 'multipath-tools'
        },
        'lvm_pkgs': ['lvm2'],
    },
    'RedHat': {
        'mount': {},
        'swap': {},
        'lvm': {},
        'disk': {},
        'lvm_services': ['lvm2-lvmetad', 'lvm2-lvmpolld', 'lvm2-monitor'],
        'loopback': {},
        'nfs': {
            'pkgs': ['nfs-utils']
        },
        'multipath': {
            'enabled': False,
            'pkgs': [],
            'service': 'multipath'
        },
    },
}, merge=salt['grains.filter_by']({
    'CentOS Stream 8': {
        'lvm_services': ['lvm2-lvmpolld', 'lvm2-monitor'],
    },
    'jammy': {
        'lvm_services': ['lvm2-monitor'],
    },
    'focal': {
        'lvm_services': ['lvm2-monitor'],
    },
    'buster': {
        'lvm_services': ['lvm2-monitor'],
    },
    'bullseye': {
        'lvm_services': ['lvm2-monitor'],
    },
    'bookworm': {
        'lvm_services': ['lvm2-monitor'],
    },
    'sid': {
        'lvm_services': ['lvm2-monitor'],
    },
    'trusty': {
        'lvm_services': ['udev'],
    },
}, grain='oscodename', merge=salt['pillar.get']('linux:storage'))) %}
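The lv_present rewrite from the commit log interacts with this storage map: force is only applied to a not-yet-existing LV (to wipe stale FS signatures), and for an existing LV it stays False unless the pillar opts in. A hedged sketch of that opt-in (the vg/lv names and the exact key layout are assumptions based on the formula's README, not confirmed from this file):

```yaml
linux:
  storage:
    lvm:
      vg0:                 # hypothetical volume group
        devices:
          - /dev/sdb
        volume:
          data:            # hypothetical logical volume
            size: 10G
            force: true    # explicit opt-in; otherwise force stays False for an existing LV
```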
{% set monitoring = salt['grains.filter_by']({
    'default': {
        'bond_status': {
            'interfaces': False
        },
        'zombie': {
            'warn': 3,
            'crit': 7,
        },
        'procs': {
            'warn': 5000,
            'crit': 10000,
        },
        'load': {
            'warn': '6,4,2',
            'crit': '12,8,4',
        },
        'swap': {
            'warn': '50%',
            'crit': '20%',
        },
        'disk': {
            'warn': '15%',
            'crit': '5%',
        },
        'netlink': {
            'interfaces': [],
            'interface_regex': '^[a-z0-9]+$',
            'ignore_selected': False,
        },
        'cpu_usage_percentage': {
            'warn': 90.0,
        },
        'memory_usage_percentage': {
            'warn': 90.0,
            'major': 95.0,
        },
        'disk_usage_percentage': {
            'warn': 85.0,
            'major': 95.0,
        },
        'swap_usage_percentage': {
            'warn': 50.0,
            'minor': 90.0,
        },
        'inodes_usage_percentage': {
            'warn': 85.0,
            'major': 95.0,
        },
        'system_load_threshold': {
            'warn': 1,
            'crit': 2,
        },
        'rx_packets_dropped_threshold': {
            'warn': 100,
        },
        'tx_packets_dropped_threshold': {
            'warn': 100,
        },
        'swap_in_rate': {
            'warn': 1024 * 1024,
        },
        'swap_out_rate': {
            'warn': 1024 * 1024,
        },
        'failed_auths_threshold': {
            'warn': 5,
        },
        'net_rx_action_per_cpu_threshold': {
            'warning': '500',
            'minor': '5000'
        },
        'packets_dropped_per_cpu_threshold': {
            'minor': '0',
            'major': '100'
        }
    },
}, grain='os_family', merge=salt['pillar.get']('linux:monitoring')) %}
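Monitoring thresholds follow the same merge pattern, so individual alarm levels can be adjusted from the `linux:monitoring` pillar without restating the whole map. An illustrative sketch (values are examples only):

```yaml
linux:
  monitoring:
    disk:
      warn: '10%'   # tighter than the '15%' default above
      crit: '3%'
    cpu_usage_percentage:
      warn: 80.0
```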