# Sample App Files

The following examples were generated by AI. They are included here because they document the two scripts thoroughly and cover the workflow end to end.

## Example 1: tapisjob.sh (Tapis-generated launcher)


⚠️ You do not create or edit this file. This example shows the kind of script Tapis generates and submits to Slurm.

```bash
#!/bin/bash
#SBATCH -J tapis-job
#SBATCH -N 2
#SBATCH --ntasks-per-node=48
#SBATCH -p normal
#SBATCH -t 02:00:00
#SBATCH -o tapisjob.out
#SBATCH -e tapisjob.err

# Move to the job execution directory
cd "$SLURM_SUBMIT_DIR"

# Load Tapis-provided environment variables
if [ -f tapisjob.env ]; then
  source tapisjob.env
fi

echo "Tapis Job UUID: $_tapisJobUUID"
echo "Allocated nodes: $_tapisNodes"

# Ensure the app script is executable
chmod +x ./tapisjob_app.sh

# Launch the application
./tapisjob_app.sh

# Capture the exit code for Tapis bookkeeping
echo $? > tapisjob.exitcode
```

What this example illustrates

  • Slurm directives (#SBATCH) come from your app definition and job request

  • tapisjob.env injects Tapis metadata and resolved paths

  • Output and error streams are standardized

  • The only thing this script really does is:

    • prepare the environment

    • call your application

    • report status back to Tapis

You can think of this as “Tapis’s Slurm wrapper”.
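Tapis generates tapisjob.env along with the launcher, so you never write it yourself either. As a rough illustration only — the variable names below are the ones referenced in the scripts above, while the values and the use of `export` are assumptions — its contents look something like:

```shell
# Illustrative sketch only -- Tapis writes the real tapisjob.env.
# Variable names match those referenced above; values are placeholders.
export _tapisJobUUID="11111111-2222-3333-4444-555555555555-007"
export _tapisNodes="2"
export _tapisCoresPerNode="48"
```

Because the launcher sources this file before calling your script, these variables are also visible inside tapisjob_app.sh.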


## Example 2: tapisjob_app.sh (user-provided application logic)

This is the script you write and control. It lives in your ZIP app bundle.

### Example A: Simple serial or threaded job

```bash
#!/bin/bash
set -e

echo "Running on host: $(hostname)"
echo "Working directory: $(pwd)"

# Explicit environment setup (compute node starts clean)
module purge
module load python/3.12.11
module load hdf5/1.14.4

# Optional: create a virtual environment
python -m venv venv
source venv/bin/activate

# Assumes the compute node can reach a package index (or a local mirror)
pip install numpy pandas

# Run your analysis
python run_analysis.py input.json
```

Key points illustrated

  • No assumptions about login-node state

  • All modules and environments are loaded here

  • This script could be tested locally or on HPC

  • Tapis doesn’t interfere with its contents
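One way to exercise that portability is a quick syntax check outside Tapis, supplying stand-in values for the variables the launcher would normally inject (the values below are placeholders, not anything Tapis guarantees):

```shell
# Stand-in environment for exercising tapisjob_app.sh outside Tapis.
export SLURM_NTASKS=4
export _tapisNodes=1
export _tapisCoresPerNode=4

# Syntax-check the script without executing it (no modules required).
if [ -f tapisjob_app.sh ]; then
  bash -n tapisjob_app.sh && echo "syntax OK"
fi
```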

### Example B: MPI-based OpenSees job

```bash
#!/bin/bash
set -e

echo "MPI job starting"
echo "Nodes: $_tapisNodes"
echo "Cores per node: $_tapisCoresPerNode"

module purge
module load opensees
module load openmpi

# Launch MPI ranks across the Slurm allocation
# (mpirun comes from the openmpi module; SLURM_NTASKS is set by Slurm)
mpirun -np "$SLURM_NTASKS" OpenSeesMP model.tcl
```

What this shows

  • MPI is launched inside tapisjob_app.sh

  • The MPI launch command lives here, whether or not the app is declared with isMpi: true

  • tapisjob.sh does not care whether this is MPI, Python, or anything else
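The mpirun above comes from the loaded MPI module; many Slurm sites can also launch ranks with Slurm's native srun. A minimal sketch of choosing between the two (launch_cmd is an illustrative helper name, not part of Tapis or Slurm):

```shell
# Illustrative helper: build the MPI launch line, preferring Slurm's
# native srun launcher when it is on the PATH.
launch_cmd() {
  local ntasks="${SLURM_NTASKS:-1}"
  if command -v srun >/dev/null 2>&1; then
    echo "srun -n ${ntasks} $*"
  else
    echo "mpirun -np ${ntasks} $*"
  fi
}

# Prints the launch line that would be used for this allocation.
SLURM_NTASKS=96 launch_cmd OpenSeesMP model.tcl
```

Either way, the choice of launcher is made inside tapisjob_app.sh, never in the Tapis-generated wrapper.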

### Example C: Hybrid workflow (pre-processing + MPI + post-processing)

```bash
#!/bin/bash
set -e

module purge
module load python/3.12.11
module load opensees
module load hdf5

echo "Pre-processing inputs"
python generate_model.py

echo "Running OpenSees in parallel"
mpirun -np "$SLURM_NTASKS" OpenSeesMP model.tcl

echo "Post-processing results"
python extract_results.py results/
```

This is where the two-script model really shines:

  • Tapis handles scheduling and lifecycle

  • You orchestrate entire pipelines in one place

  • The script remains portable and readable
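One pattern that keeps such pipelines robust is a guard between stages, so post-processing never runs against an empty results directory. A minimal sketch, where check_stage_output is an illustrative helper (not a Tapis facility) and results/ matches the directory used in Example C:

```shell
# Illustrative guard: fail a stage if the previous one produced no output.
check_stage_output() {
  local dir="$1"
  if [ ! -d "$dir" ] || [ -z "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "No output found in ${dir}; aborting" >&2
    return 1
  fi
}

# Example usage: only post-process when OpenSees actually wrote results.
# check_stage_output results/ && python extract_results.py results/
```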


## Side-by-side mental model

| Script | Who owns it | Purpose | You edit it? |
|---|---|---|---|
| `tapisjob.sh` | Tapis | Scheduler glue, environment injection, monitoring | ❌ Never |
| `tapisjob_app.sh` | You | Scientific / computational workflow | ✅ Always |


## One sentence to remember

tapisjob.sh exists so Tapis can talk to Slurm; tapisjob_app.sh exists so you can talk to your science.