
Step 3: Retrieve Output#

by Silvia Mazzoni, DesignSafe, 2025

Using the local utilities library:

# OpsUtils is assumed to be imported from the local utilities library used in this training module
t = OpsUtils.connect_tapis()
 -- Checking Tapis token --
 Token loaded from file. Token is still valid!
 Token expires at: 2025-08-20T22:46:16+00:00
 Token expires in: 2:37:04.679006
-- LOG IN SUCCESSFUL! --

Set Job ID#

Use the ID of a completed job you have already run. You cannot use the ID of another user’s job.

jobUuid = '4dfa35e1-15cd-48fd-a090-f348544dee1f-007'


Once a job has completed, the final step is to explore and retrieve its outputs. Tapis automatically archives all files produced by your job into the designated archive system and path. These outputs may include:

  • Primary results (e.g., simulation data, processed files)

  • Log and status files (.out, .err)

  • Intermediate data generated during execution

Tapis provides two key functions for working with archived results:

  • getJobOutputList(job_id) — Browse and list all files and folders in the archive.

  • getJobOutputDownload(job_id, path) — Retrieve specific files for local use.

This separation makes it easy to see what a job produced before downloading, and ensures workflows can scale — whether you just need a quick log file for debugging or a large dataset for post-processing.
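As a quick preview of that pattern, the sketch below lists the archive first and then downloads a single file; it assumes the authenticated client t and the jobUuid defined above, and uses the standard tapisjob.out log name that also appears in the listing later in this notebook. Full examples of each call follow in the next sections.

# List what the job archived, then download only the file you need
files = t.jobs.getJobOutputList(jobUuid=jobUuid, outputPath='.')
print([f.name for f in files])
data = t.jobs.getJobOutputDownload(jobUuid=jobUuid, outputPath='tapisjob.out')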

Best Practice: Start with Logs#

Before downloading large result files, always check the .out (standard output) and .err (error output) files. These are the first place to look to confirm that:

  • Your job executed correctly,

  • Inputs were staged properly, and

  • No errors occurred during runtime.

By reviewing logs first, you can avoid unnecessary downloads of large files from failed or incomplete jobs and quickly pinpoint issues.
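A minimal log check might look like the sketch below. It assumes the client t and jobUuid from above, and that the job archived the standard tapisjob.out and tapisjob.err files; tapisjob.out appears in the listing later in this notebook, while tapisjob.err may be absent for some apps, so the download is wrapped in a try/except.

# Download the standard output and error logs before anything else
for log_name in ('tapisjob.out', 'tapisjob.err'):
    try:
        log_bytes = t.jobs.getJobOutputDownload(jobUuid=jobUuid, outputPath=log_name)
    except Exception as e:
        print(f'{log_name}: not available ({e})')
        continue
    print(f'--- {log_name} (last 20 lines) ---')
    print('\n'.join(log_bytes.decode('utf-8', errors='replace').splitlines()[-20:]))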


getJobOutputList(job_id)#

Purpose:#

List archived output files.

Example:#

output_path = '.'  # specify the relative path you want to view; we'll start at the "root"
files = t.jobs.getJobOutputList(jobUuid=jobUuid, outputPath=output_path)
for f in files:
    print(f.name, f.size, f.lastModified)
.ipynb_checkpoints 80 2025-08-19T17:24:30Z
inputDirectory 4096 2025-05-07T22:17:48Z
opensees.zip 376 2025-05-07T22:17:48Z
tapisjob.env 1571 2025-05-07T22:17:48Z
tapisjob.out 1822513 2025-05-07T22:17:49Z
tapisjob.sh 1173 2025-05-07T22:17:48Z
tapisjob_app.sh 314 2025-05-07T22:17:48Z
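The listing above includes inputDirectory, which appears to be a folder. To browse inside it, pass its relative path as outputPath; a minimal sketch, reusing the client t and jobUuid from above:

# Browse inside a subfolder of the archive by passing its relative path
sub_files = t.jobs.getJobOutputList(jobUuid=jobUuid, outputPath='inputDirectory')
for f in sub_files:
    print(f.name, f.size, f.lastModified)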

getJobOutputDownload(job_id, path)#

Purpose:#

Download a specific file. You must know the relative path from getJobOutputList().

Example:#

import os

filePath = files[-1].name  # pick the last file from the list obtained from getJobOutputList
print('filePath', filePath)

output_file = t.jobs.getJobOutputDownload(
    jobUuid=jobUuid,
    outputPath=filePath
)
# You now have the file contents in memory; you can save them to a file or view them.

localFilePath = os.path.expanduser('~/MyData/tmp.txt')
with open(localFilePath, "wb") as f:
    f.write(output_file)
filePath tapisjob_app.sh
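Because the downloaded content is returned as bytes (which is why it is written with "wb" above), small text files can also be inspected directly without saving them. A minimal sketch, assuming the output_file bytes from the cell above (here, the short shell script tapisjob_app.sh) contain text:

# Decode the bytes and preview the first few lines of the file
text = output_file.decode('utf-8', errors='replace')
print('\n'.join(text.splitlines()[:10]))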

Accessing and Downloading Outputs#

After metadata retrieval, pull job results with:

  • getJobOutputList() → List available files/folders.

  • getJobOutputDownload() → Download specific files locally.

Example Combined Workflow:

job_id = "myuser-job-abc123"  # replace with the UUID of one of your completed jobs

# Get metadata
job = t.jobs.getJob(jobUuid=job_id)
print(f"Job {job.id} on app {job.appId} - Status: {job.status}")

# Check history
history = t.jobs.getJobHistory(jobUuid=job_id)
for h in history:
    print(h.status, h.timestamp)

# See outputs
files = t.jobs.getJobOutputList(jobUuid=job_id, outputPath='.')
for f in files:
    print(f.name, f.size)

# Download output
content = t.jobs.getJobOutputDownload(jobUuid=job_id, outputPath="results/output.txt")
with open("output.txt", "wb") as f:
    f.write(content)

We will look at these commands in more detail in this training module.