Our Galaxy instance is configured to send Trinity jobs to a special destination defined in the job_conf.xml file:
<destination id="jse_drop_trinity" runner="jse_drop">
    <param id="qsub_options">-V -j n -l mem256 -pe smp.pe 12</param>
</destination>
...
<tool id="trinity" destination="jse_drop_trinity" />
The qsub_options are options for our Grid Engine-based submission system, which dispatches Trinity to a 12-core parallel environment on one of the higher-memory nodes on the cluster. The galaxy_slots option tells the job that 12 slots are available, and is passed to Trinity on start-up so that it knows how many processes it can start.
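Concretely, Galaxy exports the slot count to the job script as the GALAXY_SLOTS environment variable, which a tool wrapper can forward to Trinity's --CPU option. A minimal sketch (the assignment simulates the value Galaxy would set for this destination; the Trinity command line is illustrative, not our actual wrapper):

```shell
# Simulate the value Galaxy would export for a 12-slot destination
GALAXY_SLOTS=12

# Wrappers commonly fall back to a small default if the variable is unset
CPU="${GALAXY_SLOTS:-4}"

echo "Trinity will be started with --CPU $CPU"
# e.g.: Trinity --CPU "$CPU" --seqType fq --left R1.fq --right R2.fq ...
```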
These options appeared to be working correctly, so the question was then: where were the extra processes coming from? The admin identified that Trinity is actually a Java-based software package, and that the Java runtime appeared to be starting multiple additional processes for its garbage collection (the mechanism within the Java runtime that manages memory usage and performs other internal book-keeping operations).
Looking at the output from a Trinity job showed the default command line:
Thursday, February 13, 2020: 10:09:18 CMD: java -Xmx64m -XX:ParallelGCThreads=2 -jar /email@example.com/opt/trinity-2.8.4/util/support_scripts/ExitTester.jar 0
which includes -XX:ParallelGCThreads=2 and indicates that each Java process should use 2 threads for garbage collection (GC).
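A back-of-the-envelope calculation shows how this leads to oversubscription. The numbers below are assumptions for illustration (Trinity launches many Java helper processes during a run; here we assume one per slot is active at a time):

```python
# Assumed numbers, for illustration only
slots = 12            # cores granted by "-pe smp.pe 12"
java_helpers = 12     # assume one active Java helper process per slot
gc_threads_each = 2   # from "-XX:ParallelGCThreads=2" on the command line

# Each helper contributes its own worker thread plus its GC threads
total_threads = java_helpers * (1 + gc_threads_each)
print(total_threads)  # 36 threads competing for 12 cores
```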
It's possible to override the defaults by setting the desired option in the _JAVA_OPTIONS environment variable when the job is run, and this can be done by adding an env element to the job destination for Trinity:
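The updated destination might look something like this (the specific ParallelGCThreads value shown is an assumption; the point is to cap the number of GC threads each JVM starts):

```xml
<destination id="jse_drop_trinity" runner="jse_drop">
    <param id="qsub_options">-V -j n -l mem256 -pe smp.pe 12</param>
    <!-- Assumed value: limit each JVM to a single parallel GC thread -->
    <env id="_JAVA_OPTIONS">-XX:ParallelGCThreads=1</env>
</destination>
```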
(See the section on Environment modifications in the Galaxy documentation for more details.)
With this in place, subsequent Trinity jobs behaved correctly when submitted to the compute cluster.