Description
Background
My organization has a home-grown module for managing SplunkForwarder that I thought I could replace with puppet-splunk, which I expected would more cleanly handle the new(ish) splunkfwd user + RPM weirdness. I am still running into problems when I use the $splunk::forwarder::splunk_user parameter under several circumstances.
Affected Puppet, Ruby, OS and module versions/distributions
- Puppet: 7.32.1 (from Puppetlabs puppet-agent RPM)
- Ruby: 2.7.8p225 (from Puppetlabs puppet-agent RPM)
- Distribution: Amazon Linux 2023
- Module version: 10.0.0
- Splunk Forwarder Version: 9.2.2-d76edf6f0a15.x86_64 (Splunk Inc, RPM)
How to reproduce (e.g Puppet code you use)
# A profile for managing splunk forwarder
class profile::splunk::forwarder (
  Stdlib::Ensure::Package    $package_ensure              = 'installed',
  Integer[0]                 $phone_home_interval_in_secs = 600,
  Optional[Stdlib::HTTPSUrl] $deployment_server           = undef,
) {
  class { 'splunk::forwarder':
    manage_password        => false,
    package_ensure         => $package_ensure,
    package_provider       => 'yum',
    manage_package_source  => false,
    purge_deploymentclient => true,     # remove any other deploymentclients found
    splunk_user            => 'splunk', # the rpm now defaults to splunkfwd, but we still use splunk
    boot_start              => true,
    use_default_config      => false,   # Prevent module from trying to set etc/local/inputs.conf
    require                 => [Class['yum'], User['splunk']],
  }
  # REMOVED FOR BREVITY
}
What are you seeing
Fresh Install
When the module is used with $splunk_user on a system that does not presently have Splunk Forwarder installed, it fails on Exec['stop_splunkforwarder'] because that exec runs as $splunk_user, which does not have access to /opt/splunkforwarder/bin/splunk: the RPM installs everything as the splunkfwd user and will only chown -R splunk:splunk if it detects that the RPM is being UPGRADED. As a result SplunkForwarder is never configured to run on boot and subsequent puppet runs fail in the same way. (A rough sketch of a pre-emptive workaround follows the log output below.)
Info: Class[Splunk::Forwarder::Config]: Scheduling refresh of Class[Splunk::Forwarder::Service]
Notice: /Stage[main]/Splunk::Forwarder::Service::Nix/Exec[stop_splunkforwarder]/returns: Error: Could not execute posix command: Permission denied - /opt/splunkforwarder/bin/splunk
Error: '/opt/splunkforwarder/bin/splunk stop' returned 1 instead of one of [0]
Error: /Stage[main]/Splunk::Forwarder::Service::Nix/Exec[stop_splunkforwarder]/returns: change from 'notrun' to ['0'] failed: '/opt/splunkforwarder/bin/splunk stop' returned 1 instead of one of [0] (corrective)
Notice: /Stage[main]/Splunk::Forwarder::Service::Nix/Exec[enable_splunkforwarder]: Dependency Exec[stop_splunkforwarder] has failures: true
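Something along these lines might be enough to pre-empt the failure by chowning the install tree to $splunk_user before the module's exec runs. This is a rough, untested sketch only; the resource name and the ownership checks are mine, it hard-codes the default /opt/splunkforwarder home, and it would still need to be ordered after the forwarder package is installed for the fresh-install case.
# Hypothetical workaround sketch (not part of puppet-splunk): make sure the
# install tree is owned by the desired user before the module's
# Exec['stop_splunkforwarder'] tries to run the splunk binary as that user.
exec { 'our_chown_splunkforwarder_first':
  command  => 'chown -R splunk:splunk /opt/splunkforwarder',
  onlyif   => 'test -x /opt/splunkforwarder/bin/splunk',
  unless   => 'test "$(stat -c %U /opt/splunkforwarder/bin/splunk)" = "splunk"',
  path     => ['/bin', '/usr/bin'],
  provider => 'shell',
  before   => Exec['stop_splunkforwarder'],
  # NOTE: also needs a require on whatever installs the splunkforwarder package.
}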
SplunkForwarder is pre-installed (but not enabled)
I actually came across this one by accident while trying to reproduce other conditions. In this case $splunk::forwarder::forwarder_service_file does not exist, so Exec['stop_splunkforwarder'] ~> Exec['enable_splunkforwarder'] are executed and we get the same failure as above.
(abbreviated to reduce repetition)
Notice: /Stage[main]/Splunk::Forwarder::Service::Nix/Exec[stop_splunkforwarder]/returns: Error: Could not execute posix command: Permission denied - /opt/splunkforwarder/bin/splunk
Error: '/opt/splunkforwarder/bin/splunk stop' returned 1 instead of one of [0]
SplunkForwarder is pre-installed AND enabled (without -user)
SplunkForwarder is pre-installed and splunk enable boot-start has been run ahead of time (machine image). puppet-splunk fails to enforce the $splunk_user parameter because it can't detect that anything is wrong; in fact, the puppet run completes without error. But because $splunk::forwarder::forwarder_service_file already exists, Exec['stop_splunkforwarder'] ~> Exec['enable_splunkforwarder'] are NOT executed.
On start/restart the SplunkForwarder systemd unit runs chown -R splunkfwd:splunkfwd /opt/splunkforwarder. On the next puppet run puppet fixes the various conf files it thinks should be owned by $splunk_user and dutifully restarts SplunkForwarder, which... well, I think you see. :)
What behaviour did you expect instead
I would expect that, regardless of the starting conditions, the module would leave the system in the expected state.
Output log
See above.
Possible Fix
Run Exec['stop_splunkforwarder'] as root.
I did some (gross) hackery in my profile module to pre-empt the puppet-splunk module's version of Exec['stop_splunkforwarder'] with one that runs as root. This naturally fixed the Could not execute posix command: Permission denied - /opt/splunkforwarder/bin/splunk error, but I don't know if running splunk stop as root might have other implications. I would think not; presumably stop is just going to attempt to stop the service in some way (kill, some other IPC), but I can't say with certainty.
exec { 'our_stop_splunkforwarder':
  command => "${splunk::forwarder::forwarder_homedir}/bin/splunk stop",
  creates => $splunk::forwarder::forwarder_service_file,
  timeout => 0,
  require => Exec['splunk_chown_after_install'],
  notify  => Exec['our_enable_splunkforwarder'],
}

exec { 'our_enable_splunkforwarder':
  command     => "${splunk::forwarder::forwarder_homedir}/bin/splunk enable boot-start -user splunk ${splunk::params::boot_start_args} --accept-license --answer-yes --no-prompt",
  tag         => 'splunk_forwarder',
  refreshonly => true,
  require     => Exec['our_stop_splunkforwarder'],
  before      => Exec['stop_splunkforwarder'],
}
Note: This does NOT fix the "pre-install w/ boot-start" scenario. The module still has no way to detect that the systemd unit has User=splunkfwd and not User=$splunk_user. I recognize that this might be a corner case that this module just may not handle, and I'll have to do some more hackery in my profile module to catch it: an exec with an onlyif, or some ini_setting with a notify. I dunno...
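For example, something along these lines might work. It is a rough, untested sketch; the resource name and the grep check are mine and not anything puppet-splunk provides, and in practice it probably needs a splunk stop / splunk disable boot-start beforehand.
# Hypothetical sketch: if the existing unit file still runs the forwarder as
# splunkfwd, re-run "enable boot-start" for the desired user.
exec { 'our_fix_boot_start_user':
  command => "${splunk::forwarder::forwarder_homedir}/bin/splunk enable boot-start -user splunk ${splunk::params::boot_start_args} --accept-license --answer-yes --no-prompt",
  onlyif  => "/usr/bin/grep -q ^User=splunkfwd ${splunk::forwarder::forwarder_service_file}",
  timeout => 0,
}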