Unable to start TeamCity with systemd

Answered

Hi,

 

I am trying to start TeamCity with systemd. When I start it manually with

/opt/jetbrains/teamcity/bin/runAll.sh start

then there is no issue and it works fine:


[root@ip-192-168-0-10 system]# /opt/jetbrains/teamcity/bin/runAll.sh start
Spawning TeamCity restarter in separate process
TeamCity restarter running with PID 257045
Starting TeamCity build agent...
Java executable is found: '/usr/java/latest/bin/java'
Starting TeamCity Build Agent Launcher...
Agent home directory is /opt/jetbrains/teamcity/buildAgent
Agent Launcher Java runtime version is 1.8
Lock file: /opt/jetbrains/teamcity/buildAgent/logs/buildAgent.properties.lock
Using no lock
Done [257644], see log at /opt/jetbrains/teamcity/buildAgent/logs/teamcity-agent.log
[root@ip-192-168-0-10 system]#

However, if I try to start it with systemd, it fails. My systemd unit file looks like this:

 

[Unit]
Description=TeamCity Server
Documentation=https://unix.stackexchange.com/a/316369/5132
After=network.target

[Install]
WantedBy=multi-user.target

[Service]
Type=simple
User=teamcity
Group=teamcity
Environment=TEAMCITY_DATA_PATH=/var/jetbrains/application-data/teamcity/
Environment=TEAMCITY_DIR=/opt/jetbrains/teamcity/
Environment=TEAMCITY_SERVER_OPTS=-Djava.awt.headless=true
#Environment=CATALINA_PID=/opt/jetbrains/teamcity/bin/../logs/teamcity.pid
Environment=JAVA_HOME=/usr/java/latest
SyslogIdentifier=teamcity_server
#PIDFile=/var/run/jetbrains/teamcity.pid
ExecStart=/opt/jetbrains/teamcity/bin/runAll.sh start
ExecStop=/opt/jetbrains/teamcity/bin/runAll.sh stop
PrivateTmp=yes
RestartSec=5
Restart=on-failure
TimeoutStartSec=900

When I do the above, I see nothing in the logs in <TEAMCITY DIR>/logs/. When I run journalctl -f I see:

Mar 23 08:10:51 ip-192-168-0-10 systemd[1]: Started TeamCity Server.
Mar 23 08:10:51 ip-192-168-0-10 teamcity_server[270700]: Spawning TeamCity restarter in separate process
Mar 23 08:10:51 ip-192-168-0-10 teamcity_server[270700]: TeamCity restarter running with PID 270707
Mar 23 08:10:51 ip-192-168-0-10 teamcity_server[270700]: Starting TeamCity build agent...
Mar 23 08:10:54 ip-192-168-0-10 teamcity_server[270700]: Java executable is found: '/usr/java/latest/bin/java'
Mar 23 08:10:54 ip-192-168-0-10 teamcity_server[270700]: Starting TeamCity Build Agent Launcher...
Mar 23 08:10:54 ip-192-168-0-10 teamcity_server[270700]: Agent home directory is /opt/jetbrains/teamcity/buildAgent
Mar 23 08:10:54 ip-192-168-0-10 teamcity_server[270700]: Agent Launcher Java runtime version is 1.8
Mar 23 08:10:54 ip-192-168-0-10 teamcity_server[270700]: Lock file: /opt/jetbrains/teamcity/buildAgent/logs/buildAgent.properties.lock
Mar 23 08:10:54 ip-192-168-0-10 teamcity_server[270700]: Using no lock
Mar 23 08:10:54 ip-192-168-0-10 teamcity_server[270700]: Done [271309], see log at /opt/jetbrains/teamcity/buildAgent/logs/teamcity-agent.log
Mar 23 08:10:55 ip-192-168-0-10 teamcity_server[271311]: Removing lock file so server won't automatically restart
Mar 23 08:10:56 ip-192-168-0-10 teamcity_server[271311]: Java executable is found: '/usr/java/latest/bin/java'
Mar 23 08:10:56 ip-192-168-0-10 teamcity_server[271311]: PID file found but either no matching process was found or the current user does not have permission to stop the process. Stop aborted.
Mar 23 08:10:56 ip-192-168-0-10 teamcity_server[271311]: Stopping TeamCity build agent...
Mar 23 08:11:03 ip-192-168-0-10 teamcity_server[271311]: Java executable is found: '/usr/java/latest/bin/java'
Mar 23 08:11:03 ip-192-168-0-10 teamcity_server[271311]: Starting TeamCity Build Agent Launcher...
Mar 23 08:11:03 ip-192-168-0-10 teamcity_server[271311]: Agent home directory is /opt/jetbrains/teamcity/buildAgent
Mar 23 08:11:04 ip-192-168-0-10 teamcity_server[271311]: Received stop command from console.
Mar 23 08:11:04 ip-192-168-0-10 teamcity_server[271311]: Unable to locate agent port file: /opt/jetbrains/teamcity/buildAgent/logs/buildAgent.xmlRpcPort
Mar 23 08:11:04 ip-192-168-0-10 teamcity_server[271311]: Agent is not running?
Mar 23 08:11:04 ip-192-168-0-10 teamcity_server[271311]: Sending agent shutdown command to: http://localhost:9090
Mar 23 08:11:04 ip-192-168-0-10 teamcity_server[271311]: Failed to shutdown agent gracefully: Connection refused (Connection refused)
Mar 23 08:11:04 ip-192-168-0-10 teamcity_server[271311]: Cannot stop agent gracefully, you can try to kill agent by './agent.sh stop kill' command
Mar 23 08:11:04 ip-192-168-0-10 systemd[1]: teamcity.service: Succeeded.

If I change the unit file to use

ExecStart=/opt/jetbrains/teamcity/bin/teamcity-server.sh start
ExecStop=/opt/jetbrains/teamcity/bin/teamcity-server.sh stop

then I see the following in the logs, over and over:

Mar 23 08:15:45 ip-192-168-0-10 systemd[1]: Started TeamCity Server.
Mar 23 08:15:45 ip-192-168-0-10 teamcity_server[272055]: Spawning TeamCity restarter in separate process
Mar 23 08:15:45 ip-192-168-0-10 teamcity_server[272055]: TeamCity restarter running with PID 272059
Mar 23 08:15:45 ip-192-168-0-10 teamcity_server[272060]: Java executable is found: '/usr/java/latest/bin/java'
Mar 23 08:15:45 ip-192-168-0-10 teamcity_server[272060]: PID file found but either no matching process was found or the current user does not have permission to stop the process. Stop aborted.
Mar 23 08:15:45 ip-192-168-0-10 systemd[1]: teamcity.service: Control process exited, code=exited status=1
Mar 23 08:15:45 ip-192-168-0-10 systemd[1]: teamcity.service: Failed with result 'exit-code'.
Mar 23 08:15:50 ip-192-168-0-10 systemd[1]: teamcity.service: Service RestartSec=5s expired, scheduling restart.
Mar 23 08:15:50 ip-192-168-0-10 systemd[1]: teamcity.service: Scheduled restart job, restart counter is at 1.
Mar 23 08:15:50 ip-192-168-0-10 systemd[1]: Stopped TeamCity Server.
Mar 23 08:15:50 ip-192-168-0-10 systemd[1]: Started TeamCity Server.
Mar 23 08:15:50 ip-192-168-0-10 teamcity_server[272373]: Spawning TeamCity restarter in separate process
Mar 23 08:15:50 ip-192-168-0-10 teamcity_server[272373]: TeamCity restarter running with PID 272377
Mar 23 08:15:50 ip-192-168-0-10 teamcity_server[272378]: Java executable is found: '/usr/java/latest/bin/java'
Mar 23 08:15:50 ip-192-168-0-10 teamcity_server[272378]: PID file found but either no matching process was found or the current user does not have permission to stop the process. Stop aborted.
Mar 23 08:15:50 ip-192-168-0-10 systemd[1]: teamcity.service: Control process exited, code=exited status=1
Mar 23 08:15:51 ip-192-168-0-10 systemd[1]: teamcity.service: Failed with result 'exit-code'.
Mar 23 08:15:56 ip-192-168-0-10 systemd[1]: teamcity.service: Service RestartSec=5s expired, scheduling restart.
Mar 23 08:15:56 ip-192-168-0-10 systemd[1]: teamcity.service: Scheduled restart job, restart counter is at 2.
Mar 23 08:15:56 ip-192-168-0-10 systemd[1]: Stopped TeamCity Server.
Mar 23 08:15:56 ip-192-168-0-10 systemd[1]: Started TeamCity Server.
Mar 23 08:15:56 ip-192-168-0-10 teamcity_server[272685]: Spawning TeamCity restarter in separate process
Mar 23 08:15:56 ip-192-168-0-10 teamcity_server[272685]: TeamCity restarter running with PID 272689
Mar 23 08:15:56 ip-192-168-0-10 teamcity_server[272690]: Java executable is found: '/usr/java/latest/bin/java'
Mar 23 08:15:56 ip-192-168-0-10 teamcity_server[272690]: PID file found but either no matching process was found or the current user does not have permission to stop the process. Stop aborted.
Mar 23 08:15:56 ip-192-168-0-10 systemd[1]: teamcity.service: Control process exited, code=exited status=1
Mar 23 08:15:56 ip-192-168-0-10 systemd[1]: teamcity.service: Failed with result 'exit-code'.
^C

 

I am assuming there is an issue with some sort of runtime environment variable. Any idea what I am doing wrong?


The runAll.sh and teamcity-server.sh scripts will spawn additional processes. Try setting the service Type= to "forking" (note that systemd's Type= values are lowercase). https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/managing-services-with-systemd_configuring-basic-system-settings#unit-file-structure_working-with-systemd-unit-files

[Unit]
Description=TeamCity Server
Documentation=https://unix.stackexchange.com/a/316369/5132
After=network.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking
User=teamcity
Group=teamcity
Environment=TEAMCITY_DATA_PATH=/var/jetbrains/application-data/teamcity/
Environment=TEAMCITY_DIR=/opt/jetbrains/teamcity/
Environment=TEAMCITY_SERVER_OPTS=-Djava.awt.headless=true
#Environment=CATALINA_PID=/opt/jetbrains/teamcity/bin/../logs/teamcity.pid
Environment=JAVA_HOME=/usr/java/latest
SyslogIdentifier=teamcity_server
#PIDFile=/var/run/jetbrains/teamcity.pid
ExecStart=/opt/jetbrains/teamcity/bin/runAll.sh start
ExecStop=/opt/jetbrains/teamcity/bin/runAll.sh stop
PrivateTmp=yes
RestartSec=5
Restart=on-failure
TimeoutStartSec=900

When using forking, it hangs indefinitely.


After going through this myself (for TeamCity agents only), I arrived at the following working systemd service configuration:

[Unit]
Description=Teamcity Agent
After=network.target

[Service]
ExecStart=<path_to_agent.sh> start
ExecStop=<path_to_agent.sh> stop
Type=forking
Restart=on-failure
RestartSec=5
TimeoutStartSec=300
SuccessExitStatus=143
User=<user running teamcity agent>
Group=<group running teamcity agent>
SyslogIdentifier=teamcity_agent
PrivateTmp=true

[Install]
WantedBy=multi-user.target

This effectively launches the agent and lets systemd guess the main PID.
It will also restart the agent whenever it exits, unless it is performing an auto-upgrade (exit code 143).
As a side note, this will also make the agent restart if its child process is killed (which makes the main process exit with code 0).
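For reference, a sketch of installing and enabling a unit like this; the file name teamcity-agent.service is just an example, not an official name:

```shell
sudo cp teamcity-agent.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now teamcity-agent.service
systemctl status teamcity-agent.service
```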


Manuel Torrinha That's just for agents. I am trying to start the server as well. When I do that with forking, the server starts, but systemd then seems to hang indefinitely.


What do you get for journalctl output? Are any logs created in <teamcity installation directory>/logs? Can you confirm which operating system you are using?

We don't have any official documentation covering this, but I am able to run it on Ubuntu 20.04 with Type=forking. I have also successfully run it in the past as forking in CentOS. Here is my Ubuntu teamcity.service for reference:

[Unit]
Description=TeamCity Server
Documentation=https://unix.stackexchange.com/a/316369/5132
After=network.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking
User=teamcity
Group=teamcity
Environment=TEAMCITY_DATA_PATH=/home/teamcity/.BuildServer
Environment=TEAMCITY_DIR=/opt/TeamCity/
SyslogIdentifier=teamcity_server
PIDFile=/opt/TeamCity/logs/teamcity.pid
ExecStart=/opt/TeamCity/bin/teamcity-server.sh start
ExecStop=/opt/TeamCity/bin/teamcity-server.sh stop
PrivateTmp=yes

Please note, we have had some reports of issues with automatic upgrades when running as a systemd service. Refer to https://youtrack.jetbrains.com/issue/TW-58706 for more information regarding that issue.


Eric,

I am using Oracle Linux 8.3 (which is a RHEL clone). Below is what I have in my unit file:

[Unit]
Description=TeamCity Server
Documentation=https://unix.stackexchange.com/a/316369/5132
After=network.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking
User=teamcity
Group=teamcity
Environment=TEAMCITY_DATA_PATH=/var/jetbrains/application-data/teamcity/
Environment=TEAMCITY_DIR=/opt/jetbrains/teamcity/
#Environment=TEAMCITY_SERVER_OPTS=-Djava.awt.headless=true
#Environment=JAVA_HOME=/usr/java/latest
SyslogIdentifier=teamcity_server
PIDFile=/opt/jetbrains/teamcity/logs/teamcity.pid
ExecStart=/opt/jetbrains/teamcity/bin/teamcity-server.sh start
ExecStop=/opt/jetbrains/teamcity/bin/teamcity-server.sh stop
PrivateTmp=yes
RestartSec=5
Restart=on-failure
TimeoutStartSec=900

At first I thought it was not starting, but it actually was. I am not sure whether it is working now because I am only launching the server (rather than runAll.sh) or because of something else.

EDIT: I decided to dig a bit further to see what was broken, but I was not able to find anything. I then went ahead and added a service for the agent so that would work as well. In doing so, I changed the server's pid file setting to

PIDFile=/opt/jetbrains/teamcity/logs/teamcity-server.pid

When I did the above, the systemd service seemed to hang indefinitely. While it was hanging, I ran:

 [root@ip-192-168-0-10 ~]# cp /opt/jetbrains/teamcity/logs/teamcity.pid /opt/jetbrains/teamcity/logs/teamcity-server.pid

For some reason, as the owner of the directory (user teamcity), I was NOT able to copy the file; I had to do it as root. As soon as I did, the service showed as running. Switching it back to

PIDFile=/opt/jetbrains/teamcity/logs/teamcity.pid

seems to have fixed the issue and allowed me to start the service. In the logs I see (when it works):

Apr 07 07:03:28 ip-192-168-0-10.internal systemd[1]: Starting TeamCity Server...
Apr 07 07:03:28 ip-192-168-0-10.internal teamcity_server[124836]: Spawning TeamCity restarter in separate process
Apr 07 07:03:28 ip-192-168-0-10.internal teamcity_server[124836]: TeamCity restarter running with PID 124840
Apr 07 07:03:28 ip-192-168-0-10.internal systemd[1]: teamcity-server.service: Can't open PID file /opt/jetbrains/teamcity/logs/teamcity.pid (yet?) after start: No such file or directory
Apr 07 07:04:28 ip-192-168-0-10.internal systemd[1]: teamcity-server.service: Supervising process 124951 which is not our child. We'll most likely not notice when it exits.
Apr 07 07:04:28 ip-192-168-0-10.internal systemd[1]: Started TeamCity Server.

The same went for the agent. Setting the pid file to anything but buildAgent.pid (e.g. if I set it to /opt/jetbrains/teamcity/buildAgent/logs/buildAgent-foo.pid) would cause systemd to hang indefinitely. Is this a bug?

EDIT2: In the process of troubleshooting I set SELinux to permissive, thinking that was part of the issue, and in the logs I saw:
Apr 07 07:16:57 ip-192-168-0-10.internal setroubleshoot[130923]: SELinux is preventing systemd from unlink access on the file buildAgent.pid. For complete SELinux messages run: sealert -l 80daf6b3-bf5b-462f-9816-6a4c15e6fc8f

Apr 07 07:16:57 ip-192-168-0-10.internal setroubleshoot[130923]: SELinux is preventing systemd from unlink access on the file buildAgent.pid.

The above only happens when the agent is stopped, so it's not the end of the world, but I would like to keep SELinux enforcing. Who creates the PID file? Is it systemd or TeamCity? Below is what SELinux is logging:

[root@ip-192-168-0-10 logs]# sealert -l 80daf6b3-bf5b-462f-9816-6a4c15e6fc8f
SELinux is preventing systemd from unlink access on the file buildAgent.pid.

***** Plugin catchall_labels (83.8 confidence) suggests *******************

If you want to allow systemd to have unlink access on the buildAgent.pid file
Then you need to change the label on buildAgent.pid
Do
# semanage fcontext -a -t FILE_TYPE 'buildAgent.pid'
where FILE_TYPE is one of the following: NetworkManager_unit_file_t, NetworkManager_var_run_t, abrt_unit_file_t, abrt_var_run_t, accountsd_unit_file_t, aiccu_var_run_t, ajaxterm_var_run_t, alsa_lock_t, alsa_unit_file_t, alsa_var_run_t, amanda_unit_file_t, antivirus_unit_file_t, antivirus_var_run_t, apcupsd_lock_t, apcupsd_unit_file_t, apcupsd_var_run_t, apmd_lock_t, apmd_unit_file_t, apmd_var_run_t, arpwatch_unit_file_t, arpwatch_var_run_t, asterisk_var_run_t, audisp_var_run_t, auditd_etc_t, auditd_unit_file_t, auditd_var_run_t, automount_lock_t, automount_unit_file_t, automount_var_run_t, avahi_unit_file_t, avahi_var_run_t, bacula_var_run_t, bcfg2_unit_file_t, bcfg2_var_run_t, bitlbee_var_run_t, blkmapd_var_run_t, blktap_var_run_t, blueman_var_run_t, bluetooth_lock_t, bluetooth_unit_file_t, bluetooth_var_run_t, boinc_unit_file_t, boltd_var_run_t, bootloader_var_run_t, bpf_t, brltty_unit_file_t, brltty_var_run_t, bumblebee_unit_file_t, bumblebee_var_run_t, cache_home_t, cachefilesd_var_run_t, callweaver_var_run_t, canna_var_run_t, cardmgr_var_run_t, ccs_var_run_t, certmaster_var_run_t, certmonger_unit_file_t, certmonger_var_run_t, cgdcbxd_unit_file_t, cgdcbxd_var_run_t, cgred_var_run_t, cgroup_t, chronyd_unit_file_t, chronyd_var_run_t, cinder_api_unit_file_t, cinder_backup_unit_file_t, cinder_scheduler_unit_file_t, cinder_var_run_t, cinder_volume_unit_file_t, clogd_var_run_t, cloud_init_unit_file_t, cluster_unit_file_t, cluster_var_run_t, clvmd_var_run_t, cmirrord_var_run_t, cockpit_unit_file_t, cockpit_var_run_t, collectd_unit_file_t, collectd_var_run_t, colord_unit_file_t, comsat_var_run_t, condor_unit_file_t, condor_var_lock_t, condor_var_run_t, config_home_t, conman_unit_file_t, conman_var_run_t, conntrackd_unit_file_t, conntrackd_var_lock_t, conntrackd_var_run_t, consolekit_log_t, consolekit_unit_file_t, consolekit_var_run_t, container_file_t, container_ro_file_t, couchdb_unit_file_t, couchdb_var_run_t, courier_var_run_t, cpuplug_lock_t, cpuplug_var_run_t, 
cpuspeed_var_run_t, cron_var_run_t, crond_unit_file_t, crond_var_run_t, ctdbd_var_run_t, cupsd_config_var_run_t, cupsd_lock_t, cupsd_lpd_var_run_t, cupsd_unit_file_t, cupsd_var_run_t, cvs_var_run_t, cyphesis_var_run_t, cyrus_tmp_t, cyrus_var_run_t, data_home_t, dbskkd_var_run_t, dbus_home_t, dcc_var_run_t, dccd_var_run_t, dccifd_var_run_t, dccm_var_run_t, dcerpcd_var_run_t, ddclient_var_run_t, deltacloudd_var_run_t, denyhosts_var_lock_t, device_t, devicekit_var_run_t, dhcpc_var_run_t, dhcpd_unit_file_t, dhcpd_var_run_t, dictd_var_run_t, dirsrv_snmp_var_run_t, dirsrv_tmp_t, dirsrv_unit_file_t, dirsrv_var_lock_t, dirsrv_var_run_t, dirsrvadmin_lock_t, dirsrvadmin_unit_file_t, dkim_milter_data_t, dlm_controld_var_run_t, dnsmasq_unit_file_t, dnsmasq_var_run_t, dnssec_trigger_unit_file_t, dnssec_trigger_var_run_t, dovecot_var_run_t, drbd_lock_t, drbd_var_run_t, dspam_var_run_t, entropyd_var_run_t, etc_aliases_t, etc_runtime_t, eventlogd_var_run_t, evtchnd_var_run_t, exim_var_run_t, fail2ban_var_run_t, faillog_t, fcoemon_var_run_t, fenced_lock_t, fenced_var_run_t, fetchmail_var_run_t, fingerd_var_run_t, firewalld_unit_file_t, firewalld_var_run_t, foghorn_var_run_t, freeipmi_bmc_watchdog_unit_file_t, freeipmi_bmc_watchdog_var_run_t, freeipmi_ipmidetectd_unit_file_t, freeipmi_ipmidetectd_var_run_t, freeipmi_ipmiseld_unit_file_t, freeipmi_ipmiseld_var_run_t, fsadm_var_run_t, fsdaemon_var_run_t, ftpd_lock_t, ftpd_unit_file_t, ftpd_var_run_t, fwupd_unit_file_t, games_srv_var_run_t, gconf_home_t, gdomap_var_run_t, getty_lock_t, getty_unit_file_t, getty_var_run_t, gfs_controld_var_run_t, gkeyringd_gnome_home_t, glance_api_unit_file_t, glance_registry_unit_file_t, glance_scrubber_unit_file_t, glance_var_run_t, glusterd_var_run_t, gnome_home_t, gpm_var_run_t, gpsd_var_run_t, greylist_milter_data_t, groupd_var_run_t, gssproxy_unit_file_t, gssproxy_var_run_t, gstreamer_home_t, haproxy_unit_file_t, haproxy_var_run_t, hostapd_unit_file_t, hostapd_var_run_t, hsqldb_unit_file_t, 
httpd_lock_t, httpd_tmp_t, httpd_unit_file_t, httpd_var_run_t, hwloc_dhwd_unit_t, hwloc_var_run_t, hypervkvp_unit_file_t, hypervvssd_unit_file_t, ibacm_var_run_t, icc_data_home_t, icecast_var_run_t, ifconfig_var_run_t, inetd_child_var_run_t, inetd_var_run_t, init_tmp_t, init_var_lib_t, init_var_run_t, initrc_state_t, initrc_var_run_t, innd_unit_file_t, innd_var_run_t, iodined_unit_file_t, ipa_dnskey_unit_file_t, ipa_ods_exporter_unit_file_t, ipa_otpd_unit_file_t, ipa_tmp_t, ipa_var_run_t, ipmievd_lock_t, ipmievd_unit_file_t, ipmievd_var_run_t, ipsec_mgmt_lock_t, ipsec_mgmt_unit_file_t, ipsec_mgmt_var_run_t, ipsec_var_run_t, iptables_lock_t, iptables_unit_file_t, iptables_var_lib_t, iptables_var_run_t, irqbalance_var_run_t, iscsi_lock_t, iscsi_unit_file_t, iscsi_var_run_t, isnsd_var_run_t, iwhd_var_run_t, jetty_unit_file_t, jetty_var_run_t, kadmind_var_run_t, kdump_lock_t, kdump_unit_file_t, keepalived_unit_file_t, keepalived_var_run_t, keystone_unit_file_t, keystone_var_run_t, kismet_var_run_t, klogd_var_run_t, kmod_var_run_t, kmscon_unit_file_t, krb5_host_rcache_t, krb5_keytab_t, krb5kdc_lock_t, krb5kdc_var_run_t, ksmtuned_unit_file_t, ksmtuned_var_run_t, ktalkd_unit_file_t, l2tpd_var_run_t, likewise_pstore_lock_t, lircd_var_run_t, lldpad_var_run_t, local_login_lock_t, locale_t, locate_var_run_t, lockdev_lock_t, logrotate_lock_t, logwatch_lock_t, logwatch_var_run_t, lpd_var_run_t, lsassd_var_run_t, lsmd_unit_file_t, lsmd_var_run_t, lttng_sessiond_unit_file_t, lttng_sessiond_var_run_t, lvm_lock_t, lvm_unit_file_t, lvm_var_run_t, lwiod_var_run_t, lwregd_var_run_t, lwsmd_var_run_t, machineid_t, mailman_lock_t, mailman_var_run_t, mandb_lock_t, mcelog_var_run_t, mdadm_unit_file_t, mdadm_var_run_t, memcached_var_run_t, minidlna_var_run_t, minissdpd_var_run_t, mip6d_unit_file_t, mirrormanager_var_run_t, mnt_t, mock_var_run_t, modemmanager_unit_file_t, mon_statd_var_run_t, mongod_unit_file_t, mongod_var_run_t, motion_unit_file_t, motion_var_run_t, mount_var_run_t, 
mpd_var_run_t, mrtg_lock_t, mrtg_var_run_t, mscan_var_run_t, munin_var_run_t, mysqld_unit_file_t, mysqld_var_run_t, mysqlmanagerd_var_run_t, naemon_var_run_t, nagios_var_run_t, named_conf_t, named_tmp_t, named_unit_file_t, named_var_run_t, netlabel_mgmt_unit_file_t, netlogond_var_run_t, neutron_unit_file_t, neutron_var_run_t, nfsd_unit_file_t, ninfod_run_t, ninfod_unit_file_t, nis_unit_file_t, nmbd_var_run_t, nova_unit_file_t, nova_var_run_t, nrpe_var_run_t, nscd_unit_file_t, nscd_var_run_t, nsd_var_run_t, nslcd_var_run_t, ntop_var_run_t, ntpd_unit_file_t, ntpd_var_run_t, numad_unit_file_t, numad_var_run_t, nut_unit_file_t, nut_var_run_t, nx_server_var_run_t, oddjob_unit_file_t, oddjob_var_run_t, opafm_var_run_t, openct_var_run_t, opendnssec_unit_file_t, opendnssec_var_run_t, openhpid_var_run_t, openshift_var_run_t, opensm_unit_file_t, openvpn_var_run_t, openvswitch_unit_file_t, openvswitch_var_run_t, openwsman_run_t, openwsman_unit_file_t, osad_var_run_t, pads_var_run_t, pam_var_console_t, pam_var_run_t, passenger_var_run_t, passwd_file_t, pcp_var_run_t, pcscd_var_run_t, pdns_unit_file_t, pdns_var_run_t, pegasus_openlmi_storage_var_run_t, pegasus_var_run_t, pesign_unit_file_t, pesign_var_run_t, phc2sys_unit_file_t, piranha_fos_var_run_t, piranha_lvs_var_run_t, piranha_pulse_var_run_t, piranha_web_var_run_t, pkcs11proxyd_unit_file_t, pkcs11proxyd_var_run_t, pkcs_slotd_lock_t, pkcs_slotd_unit_file_t, pkcs_slotd_var_run_t, pki_ra_lock_t, pki_ra_var_run_t, pki_tomcat_lock_t, pki_tomcat_unit_file_t, pki_tomcat_var_run_t, pki_tps_lock_t, pki_tps_var_run_t, plymouthd_var_run_t, policykit_var_run_t, polipo_pid_t, polipo_unit_file_t, portmap_var_run_t, portreserve_var_run_t, postfix_var_run_t, postgresql_lock_t, postgresql_unit_file_t, postgresql_var_run_t, postgrey_var_run_t, power_unit_file_t, pppd_lock_t, pppd_unit_file_t, pppd_var_run_t, pptp_var_run_t, prelude_audisp_var_run_t, prelude_lml_var_run_t, prelude_var_run_t, print_spool_t, privoxy_var_run_t, 
prosody_unit_file_t, prosody_var_run_t, psad_var_run_t, ptal_var_run_t, ptp4l_unit_file_t, pulseaudio_var_run_t, puppet_var_run_t, pwauth_var_run_t, pyicqt_var_run_t, qdiskd_var_run_t, qemu_var_run_t, qpidd_var_run_t, quota_nld_var_run_t, rabbitmq_unit_file_t, rabbitmq_var_lock_t, rabbitmq_var_run_t, radiusd_unit_file_t, radiusd_var_run_t, radvd_var_run_t, random_seed_t, rasdaemon_unit_file_t, rdisc_unit_file_t, readahead_var_run_t, redis_unit_file_t, redis_var_run_t, regex_milter_data_t, restorecond_var_run_t, rhev_agentd_unit_file_t, rhev_agentd_var_run_t, rhnsd_unit_file_t, rhnsd_var_run_t, rhsmcertd_lock_t, rhsmcertd_var_run_t, ricci_modcluster_var_run_t, ricci_modstorage_lock_t, ricci_var_run_t, rkt_unit_file_t, rlogind_var_run_t, rngd_unit_file_t, rngd_var_run_t, rolekit_unit_file_t, roundup_var_run_t, rpcbind_unit_file_t, rpcbind_var_run_t, rpcd_lock_t, rpcd_unit_file_t, rpcd_var_run_t, rpm_var_run_t, rrdcached_var_run_t, rsync_var_run_t, rtas_errd_unit_file_t, rtas_errd_var_lock_t, rtas_errd_var_run_t, samba_unit_file_t, sanlk_resetd_unit_file_t, sanlock_unit_file_t, sanlock_var_run_t, saslauthd_var_run_t, sbd_unit_file_t, sbd_var_run_t, sblim_var_run_t, screen_var_run_t, semanage_read_lock_t, semanage_trans_lock_t, sendmail_var_run_t, sensord_unit_file_t, sensord_var_run_t, setrans_var_run_t, setroubleshoot_var_run_t, shorewall_lock_t, slapd_lock_t, slapd_unit_file_t, slapd_var_run_t, slpd_var_run_t, smbd_var_run_t, smokeping_var_run_t, smsd_var_run_t, snmpd_var_run_t, snort_var_run_t, sosreport_var_run_t, soundd_var_run_t, spamass_milter_data_t, spamd_var_run_t, speech_dispatcher_unit_file_t, squid_var_run_t, srvsvcd_var_run_t, sshd_keygen_unit_file_t, sshd_unit_file_t, sshd_var_run_t, sslh_unit_file_t, sslh_var_run_t, sssd_public_t, sssd_unit_file_t, sssd_var_run_t, stapserver_var_run_t, stratisd_var_run_t, stunnel_var_run_t, svirt_home_t, svirt_image_t, svirt_tmp_t, svirt_tmpfs_t, svnserve_tmp_t, svnserve_unit_file_t, svnserve_var_run_t, swat_var_run_t, 
swift_lock_t, swift_unit_file_t, swift_var_run_t, sysfs_t, syslogd_unit_file_t, syslogd_var_run_t, system_cronjob_lock_t, system_cronjob_var_run_t, system_dbusd_var_run_t, systemd_bootchart_unit_file_t, systemd_bootchart_var_run_t, systemd_gpt_generator_unit_file_t, systemd_home_t, systemd_hwdb_unit_file_t, systemd_importd_var_run_t, systemd_logind_inhibit_var_run_t, systemd_logind_sessions_t, systemd_logind_var_run_t, systemd_machined_unit_file_t, systemd_machined_var_run_t, systemd_modules_load_unit_file_t, systemd_networkd_unit_file_t, systemd_networkd_var_run_t, systemd_passwd_var_run_t, systemd_resolved_unit_file_t, systemd_resolved_var_run_t, systemd_rfkill_unit_file_t, systemd_runtime_unit_file_t, systemd_timedated_unit_file_t, systemd_timedated_var_run_t, systemd_unit_file_t, systemd_vconsole_unit_file_t, tangd_cache_t, tangd_unit_file_t, targetd_unit_file_t, telnetd_var_run_t, tftpd_var_run_t, tgtd_var_run_t, thin_aeolus_configserver_var_run_t, thin_var_run_t, timemaster_unit_file_t, timemaster_var_run_t, tlp_unit_file_t, tlp_var_run_t, tmp_t, tmpfs_t, tomcat_unit_file_t, tomcat_var_run_t, tor_unit_file_t, tor_var_run_t, tuned_var_run_t, udev_rules_t, udev_var_run_t, uml_switch_var_run_t, usbmuxd_unit_file_t, usbmuxd_var_run_t, user_home_t, user_tmp_t, useradd_var_run_t, uucpd_lock_t, uucpd_var_run_t, uuidd_var_run_t, var_lib_nfs_t, var_lib_t, var_lock_t, var_run_t, varnishd_var_run_t, varnishlog_var_run_t, vdagent_var_run_t, vhostmd_var_run_t, virt_lock_t, virt_lxc_var_run_t, virt_qemu_ga_var_run_t, virt_var_run_t, virtd_unit_file_t, virtlogd_unit_file_t, virtlogd_var_run_t, vmtools_unit_file_t, vmware_host_pid_t, vmware_pid_t, vnstatd_var_run_t, vpnc_var_run_t, watchdog_var_run_t, wdmd_var_run_t, winbind_var_run_t, xdm_lock_t, xdm_var_run_t, xenconsoled_var_run_t, xend_var_run_t, xenstored_var_run_t, xserver_var_run_t, ypbind_unit_file_t, ypbind_var_run_t, yppasswdd_var_run_t, ypserv_var_run_t, ypxfr_var_run_t, zabbix_var_run_t, zarafa_deliver_var_run_t, 
zarafa_gateway_var_run_t, zarafa_ical_var_run_t, zarafa_indexer_var_run_t, zarafa_monitor_var_run_t, zarafa_server_var_run_t, zarafa_spooler_var_run_t, zebra_unit_file_t, zebra_var_run_t, zoneminder_unit_file_t, zoneminder_var_run_t.
Then execute:
restorecon -v 'buildAgent.pid'

***** Plugin catchall (17.1 confidence) suggests **************************

If you believe that systemd should be allowed unlink access on the buildAgent.pid file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'systemd' --raw | audit2allow -M my-systemd
# semodule -X 300 -i my-systemd.pp

Additional Information:
Source Context system_u:system_r:init_t:s0
Target Context system_u:object_r:usr_t:s0
Target Objects buildAgent.pid [ file ]
Source systemd
Source Path systemd
Port <Unknown>
Host ip-192-168-0-10.internal
Source RPM Packages
Target RPM Packages
SELinux Policy RPM selinux-policy-
targeted-3.14.3-54.0.5.el8_3.2.noarch
Local Policy RPM selinux-policy-
targeted-3.14.3-54.0.5.el8_3.2.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Permissive
Host Name ip-192-168-0-10.internal
Platform Linux ip-192-168-0-10.internal
5.4.17-2102.200.13.el8uek.x86_64 #2 SMP Sun Mar 28
14:48:36 PDT 2021 x86_64 x86_64
Alert Count 4
First Seen 2021-04-07 07:08:06 EDT
Last Seen 2021-04-07 07:16:46 EDT
Local ID 80daf6b3-bf5b-462f-9816-6a4c15e6fc8f

Raw Audit Messages
type=AVC msg=audit(1617794206.491:698): avc: denied { unlink } for pid=1 comm="systemd" name="buildAgent.pid" dev="nvme0n1p2" ino=654314792 scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:usr_t:s0 tclass=file permissive=1

Hash: systemd,init_t,usr_t,file,unlink

[root@ip-192-168-0-10 logs]#
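Following the catchall_labels suggestion above, a hedged sketch of the relabel approach. Note that var_run_t is picked from sealert's suggested FILE_TYPE list as a plausible candidate; whether it is the correct type for a pid file living under /opt is an assumption, not something the tooling confirms:

```shell
# Sketch: relabel the agent pid file so systemd (init_t) is allowed to unlink it.
# var_run_t is an assumption taken from sealert's suggested FILE_TYPE list.
semanage fcontext -a -t var_run_t '/opt/jetbrains/teamcity/buildAgent/logs/buildAgent\.pid'
restorecon -v /opt/jetbrains/teamcity/buildAgent/logs/buildAgent.pid
```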

 

 


At first I thought it was not starting but it actually was. I am not sure if the reason is that it is working now is because I am only launching the server vs run-all or something else.

So the server runs now? For what it's worth, I can substitute the ExecStart/Stop commands with runAll.sh and it will still run. 

I'm not sure why you're trying to copy the pid files. They store the pids of the running processes and should be unique between the agent and server. The pid files are generated automatically by the teamcity-server.sh script and are removed when the service is stopped. The pid stored in the file is used to monitor and stop the service when the "teamcity-server.sh stop" command is issued.
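To illustrate the lifecycle described above, here is a small sketch (not part of TeamCity's scripts) that checks whether the pid recorded in a pid file still belongs to a live process; the path in the usage comment is an example:

```shell
#!/bin/sh
# Check whether the process recorded in a pid file is still alive.
check_pidfile() {
    pidfile="$1"
    if [ ! -f "$pidfile" ]; then
        echo "no pid file at $pidfile"
        return 1
    fi
    pid=$(cat "$pidfile")
    if ps -p "$pid" >/dev/null 2>&1; then
        echo "running as PID $pid"
    else
        echo "stale pid file: $pidfile"
        return 1
    fi
}

# Example usage (path assumed): check_pidfile /opt/TeamCity/logs/teamcity.pid
```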

For the build agent, you need the service to monitor the build agent's pid file. See my working example below:

[Unit]
Description=Teamcity Agent
After=network.target

[Service]
ExecStart=/opt/TeamCity/buildAgent/bin/agent.sh start
ExecStop=/opt/TeamCity/buildAgent/bin/agent.sh stop
PIDFile=/opt/TeamCity/buildAgent/logs/buildAgent.pid
Type=forking
Restart=on-failure
RestartSec=5
TimeoutStartSec=300
SuccessExitStatus=143
User=teamcity
Group=teamcity
SyslogIdentifier=teamcity_agent
PrivateTmp=true

[Install]
WantedBy=multi-user.target

 

 


Hi,

I was copying the pid files because it seemed that systemd did not think the process was running; I was under the impression that systemd was creating the pid file, which is why systemd seemed to hang. I looked at the documentation at https://www.freedesktop.org/software/systemd/man/systemd.service.html and it seems that systemd may try to remove the pid file. Does systemd delete the pid file, or does the TeamCity agent? I am trying to determine where the issue with SELinux is and why it is preventing the deletion of the pid file.


The pid file is created by the application when it is launched. Systemd only reads the PID after the service has started; it does not create the file or modify it. It is important to make sure the teamcity.pid and buildAgent.pid files are not present before attempting to start the services with systemd. If they do exist, make sure the processes are not actually still running before removing them.

It is true that systemd will remove the pid file after the service has stopped, but only as a fail-safe, in cases where it has not already been removed. In the case of TeamCity and build agents, the pid file should be removed by the application rather than by systemd.
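If stale pid files keep reappearing, one possible guard (my own suggestion, not official TeamCity guidance) is to clear the file before each start, but only if you are certain no other instance can be running:

```
[Service]
ExecStartPre=/bin/rm -f /opt/jetbrains/teamcity/logs/teamcity.pid
```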

Last night, I set up a VM with Oracle Linux 8.3 and was able to use the same .service files as on my Ubuntu server, so I can confirm it is at least possible for this to work. I'm not sure what is causing the hang for you. It does take a few moments for the service to finish starting up, which may give the appearance of it being hung.

 

Permanently deleted user

The one-minute hang is due to logic in teamcity-server-restarter.sh. Note that the "run" case is used in the restarter script even if you pass "start" to teamcity-server.sh. It looks quite hacky, but probably for good reasons, as the comments suggest...

run)
    # Hack with spawning and waiting required for traps to work
    # Check silently for the case when started as 'start', so TC won't spoil wrapper log
    if [ "$TEAMCITY_RESTARTER_SILENT_ACTUAL" != "" ]; then
        _roll "$@"
        ./catalina.sh "$@" >> "$CATALINA_OUT" 2>&1 &
        tc_pid=$!
    else
        ./catalina.sh "$@" 2>&1 &
        tc_pid=$!
    fi
    _log "TeamCity process PID is $tc_pid"

    # Wait for some time prior to saving the pid file; catalina may exit abruptly with exit code 0 if it failed to bind the address.
    # Otherwise we can end up in a situation where the pid of the wrong server overrides the pid of the correct one.
    minus_p=''
    if ps -p 1 >/dev/null 2>/dev/null; then
        minus_p='-p'
    fi
    # In case of abrupt exit we want to be informed as quickly as possible, though max waiting time is one minute
    for interval in 5 5 10 10 15 15; do
        sleep $interval
        if ! ps $minus_p $tc_pid >/dev/null 2>&1; then
            break
        fi
    done
    if ps $minus_p $tc_pid >/dev/null 2>&1; then
        echo $tc_pid > "$CATALINA_PID"
    fi
