Scientific image processing in the cloud with Fiji/ImageJ

The Centre for Dynamic Imaging at the Walter and Eliza Hall Institute, together with DiUS, jointly developed a cloud-based (AWS) High Performance Computing (HPC) framework to enable on-demand cluster-based compute and storage resources for image analysis.

Using high-resolution optical microscopes, the Walter and Eliza Hall Institute routinely generates 2-3 TB of images per week for analysis. The subsequent image analysis has become a bottleneck due to the computational requirements and, as usage has grown, investment in powerful hardware clusters has been required. Scientists and laboratories have also been constrained by the need to schedule and share the compute and storage infrastructure.

Fiji/ImageJ Customisation

As the users in the lab were running Fiji/ImageJ as a desktop, user interface (UI) based application, our initial work focused on preparing it to run as a headless service in anticipation of running on EC2. Along the way we realised that some Fiji/ImageJ plugins were written to run only in UI mode; however, the majority were able to run headless.
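
A minimal illustration of the kind of check this involved (the macro name below is a placeholder, not one of our actual scripts): invoking the ImageJ launcher with the --headless flag quickly shows whether a plugin or macro depends on the UI.

# Headless smoke test for a macro/plugin (macro name is a placeholder)
./Fiji.app/ImageJ-linux64 --headless --console -macro macros/count_cells.ijm 2>&1 | tee headless-check.log
# UI-only plugins typically fail here with HeadlessException or other AWT-related errors.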

Next, we benchmarked a few baseline image-processing workloads (cell death count, ki67, etc) on LAN machines and an EC2 instance to show the viability of using cloud-based compute.

In these benchmarks, the c4.8xlarge instance was significantly faster than the available local hardware, reducing a representative analysis from approximately 18 minutes down to 6 minutes.

SciOps – Usability and Ad-Hoc Workflows

To meet the design goals and facilitate ease of use we developed a SciOps (by analogy with DevOps) framework on top of the AWS CfnCluster HPC capabilities. It provided a convention-over-configuration approach to managing image analysis workloads between the scientists' workstations/LAN and the AWS-based ephemeral HPC clusters.

The primary design considerations and tooling addressed:

  1. Configuration and lifecycle management of AWS based ephemeral clusters.
  2. Data synchronisation between the LAN and the AWS based cluster.
  3. Image processing workload submission and job control, including pre/post-processing activities.

The CfnCluster framework defines a set of processes, templates and a CLI for an AWS-based cluster. It enables (1) the creation of AWS CloudFormation stacks with supporting resources and service instances, (2) the wiring of these to the underlying HPC tools for workload lifecycle management, and (3) the use of pre-provided machine images (AMIs) containing the HPC tooling.
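
In practice the cluster lifecycle is driven from the CfnCluster CLI; a minimal sketch (the cluster name is illustrative, and the cluster_template is taken from the configuration shown later):

# Create an ephemeral cluster from the template configured in ~/.cfncluster/config
cfncluster create imaging-cluster
# Check the CloudFormation stack status and the master node's address
cfncluster status imaging-cluster
# Tear the whole stack down once results have been synced back
cfncluster delete imaging-cluster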

Creating and Managing the Cluster

After wrapping our heads around CfnCluster, the initial prototyping involved the headless execution of Fiji/ImageJ on EC2 for image processing. This allowed us to establish benchmarks for different workloads across AWS instance types and helped us better understand the computational footprint (number of CPUs, RAM and storage) needed for our initial target workloads in the cluster.
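
Each benchmark boiled down to timing a headless run of a workload on the instance type under test; an illustrative example (the macro and input paths are placeholders):

# Time one representative workload on the instance under test
time ./Fiji.app/ImageJ-linux64 --headless -macro macros/cell_death_count.ijm "input=/data/run01"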

The next step was to create a customised AMI with Fiji/ImageJ on top of the CfnCluster base image. This also required adapting the existing Fiji/ImageJ image processing scripts to run in headless mode on the AWS cluster.

java -Djava.util.prefs.userRoot=/tmp \
  -Dpython.cachedir.skip=true -Dplugins.dir=/home/ec2-user/Fiji.app -Xms30g -Xmx30g \
  -Djava.awt.headless=true -Dapple.awt.UIElement=true -XX:+UseParallelGC -XX:PermSize=2g \
  -Djava.class.path=/home/ec2-user/Fiji.app/jars/imagej-launcher-3.1.6.jar \
  -Dimagej.dir=/home/ec2-user/Fiji.app -Dij.dir=/home/ec2-user/Fiji.app \
  -Dfiji.dir=/home/ec2-user/Fiji.app \
  -Dfiji.defaultLibPath=lib/amd64/server/libjvm.so \
  -Dfiji.executable=/home/ec2-user/Fiji.app/ImageJ-linux64 \
  -Dij.executable=/home/ec2-user/Fiji.app/ImageJ-linux64 \
  -Djava.library.path=/home/ec2-user/Fiji.app/lib/linux64:/home/ec2-user/Fiji.app/mm/linux64 \
  net.imagej.launcher.ClassLauncher -ijjarpath jars -ijjarpath plugins net.imagej.Main -macro $1 $2

Headless Fiji launch script on EC2
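
Building the customised AMI was essentially a matter of installing Fiji onto an instance launched from the CfnCluster base image and snapshotting it; a rough sketch, with the instance ID and AMI name as placeholders:

# On an instance launched from the CfnCluster base AMI: install Fiji and update its plugins
wget https://downloads.imagej.net/fiji/latest/fiji-linux64.zip
unzip fiji-linux64.zip -d /home/ec2-user/
/home/ec2-user/Fiji.app/ImageJ-linux64 --update update
# Snapshot the instance as the custom_ami referenced in the CfnCluster configuration
aws ec2 create-image --instance-id i-0xxxxxxx --name cfncluster-fiji-imaging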

The image processing workloads were ideally suited to parallel processing (i.e. each image can be processed separately). Our work also showed that the size of the cluster (the number and type of EC2 instances) needed to balance the speed and complexity of the parallel processing against the per-hour AWS cost of the cluster. As such, the cluster's node configuration was made available to the scientific users.

A typical CfnCluster configuration for a time-critical workload with multithreaded image-processing jobs looked like the following.

[aws]
aws_region_name = ap-southeast-2
aws_access_key_id = ****** TODO ******
aws_secret_access_key = ****** TODO ******

[vpc public]
master_subnet_id = subnet-exxxxxxx
vpc_id = vpc-0xxxxxxx

[global]
update_check = true
sanity_check = true
cluster_template = custom-imaging

[cluster custom-imaging]
custom_ami = ami-16xxl33t
vpc_settings = public
key_name = ****** TODO ******
compute_instance_type = c4.8xlarge
master_instance_type = m4.large
maintain_initial_size = false
initial_queue_size = 5
max_queue_size = 10
s3_read_resource = arn:aws:s3:::*
s3_read_write_resource = arn:aws:s3:::*
ebs_settings = custom
master_root_volume_size = 100
compute_root_volume_size = 100
# cluster_type = spot
# spot_price = 0.60

[ebs custom]
volume_size = 100

In order to reduce the CfnCluster configuration burden, the following behaviours were pre-defined:

  • Using a customised AMI containing the image analysis tools needed.
  • Pre-defined VPC(s) and sub-network(s) that allowed us to better control the network security model for deploying the cluster.
  • Long-lived compute nodes were disabled allowing unused nodes (EC2 instances) to terminate when the cluster wasn’t busy.
  • Read/write access to multiple S3 buckets as input and output storage endpoints for pre/post image processing data.
  • The use of SGE (https://arc.liv.ac.uk/trac/SGE) as the HPC job scheduling and management tool (CfnCluster also supports Torque and OpenLava).

The scientific users were allowed to configure the following cluster settings on a per workload basis:

  • Choosing the compute and master EC2 instance types.
  • The number of initial compute nodes to be launched with the cluster.
  • The upper limit for auto-scaling of on-demand EC2 nodes when dynamically scaling a CfnCluster based on load.
  • Customising the per-node EBS storage size, allowing CfnCluster to run NFS across the nodes.
  • The root storage device size on the EC2 instances.

Data Synchronisation

We introduced the notion of an image processing project (and supporting tooling) which encompasses (1) the images, (2) the image processing scripts and (3) the results of the executions, all within a single project sub-directory structure.

project_name/ – Project root directory
├── bin/    – The image processing tool invocation scripts (eg. Fiji/ImageJ – fiji.sh)
├── input/  – Images and configurations to be used in the image processing jobs
├── output/ – The generated results data and images
└── src/    – The image processing scripts (for Fiji/ImageJ)

This structure and tooling enable the users to make reliable assumptions (improving usability) when automating the AWS cluster across different types of image processing workloads.

Projects are created and synchronised between AWS and the LAN hosts, using command line tools that:

  • Create a new project on the LAN/local machine.
  • Sync a project and data with AWS.
  • Create and execute image-processing jobs on the HPC cluster on AWS.
  • Sync the project results from AWS.
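
The wrapper tooling itself is not reproduced here, but conceptually it is a thin layer over aws s3 sync; a hypothetical sketch of the LAN-side workflow (the project-tool command and project name are made up for illustration):

# Create a new project skeleton on the local machine (hypothetical wrapper command)
project-tool create cell_death_count
# Push the project's scripts and input images up to the imaging bucket
aws s3 sync cell_death_count/ s3://imaging-bucket/cell_death_count/ --exclude "output/*"
# ...submit and run the jobs on the cluster...
# Pull the generated results back down to the LAN
aws s3 sync s3://imaging-bucket/cell_death_count/output/ cell_death_count/output/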

S3 is used as the shared project store between the LAN and the AWS-based ephemeral clusters:

  • A single S3 bucket (s3://imaging-bucket/) is used as the root for all image processing projects on AWS.
  • A distributed NFS filesystem is implemented across the cluster nodes providing transparent access to project files.
  • Pre-image processing: A cluster will synchronise a project’s files (images, scripts, etc) onto its local storage from s3://imaging-bucket/.
  • Pre-processing images and configuration files are stored in the …/input/ directory.
  • Post-processed results and images are stored in the …/output/ directory.
  • Post-image processing: A cluster will synchronise the results (primarily the …/output/ folder) onto s3://imaging-bucket/.
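
On the cluster side, the pre- and post-processing synchronisation steps amount to the same aws s3 sync calls run from the master node against the NFS-shared project directory (assuming CfnCluster's default /shared mount point; the project name is a placeholder):

# Pre-image processing: pull the project onto the cluster's shared NFS storage
aws s3 sync s3://imaging-bucket/cell_death_count/ /shared/cell_death_count/
# ...SGE jobs read from input/ and write to output/ on the shared filesystem...
# Post-image processing: push the results (primarily output/) back to the bucket
aws s3 sync /shared/cell_death_count/output/ s3://imaging-bucket/cell_death_count/output/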

Workload Submission and Job Control

CfnCluster supports multiple HPC schedulers such as SGE, Torque and OpenLava. We chose SGE as our scheduler and provided tooling that wraps the SGE job submission and scheduling functionality, so that image processing workloads are managed through custom commands for:

  • Copying the project from s3://imaging-bucket/.
  • Pre-processing the project image files to generate parallel processing configurations.
  • Submitting the image processing jobs to the SGE scheduler.
  • Post-processing the results and collating the result file(s).
  • Copying the project results from the CfnCluster to s3://imaging-bucket/.
  • Providing a Slack notification of job activity.
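
As a hedged sketch of what the submission step might look like under SGE, assuming an array job with one task per image batch and a Slack incoming webhook for notifications (the job name, task range and webhook URL are all placeholders):

# Submit the workload as an SGE array job; each task can use $SGE_TASK_ID to select its image batch
qsub -N cell_death_count -cwd -t 1-50 -o output/logs/ -e output/logs/ bin/fiji.sh src/cell_death_count.ijm input/
# Notify a Slack channel once the jobs have been queued
curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"cell_death_count: 50 jobs submitted to the imaging cluster"}' \
  https://hooks.slack.com/services/T0000/B0000/XXXXXXXX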

Gotchas – things to look out for:

  • Multiple concurrent executions of headless ImageJ jobs on a node cause errors and slowdowns.
  • Starting a new job on a node where a previous ImageJ job was killed generates unrelated (RMI-related) errors on start-up, but the new job still executes successfully.
  • Jobs will fail with resource exhaustion if the EC2 instance type does not provide enough memory and/or the memory settings in fiji.sh are mismatched.
  • If any of the individual jobs fail on a node, the user can:
    • Re-submit all the image processing jobs to the cluster and get a collated set of results – preferred; or
    • Submit the individual job manually and collate the results (see the sketch below).
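
For the manual route, a single failed task of an SGE array job can be resubmitted on its own, assuming the failed task id is known (task 17 of a hypothetical job is shown):

# Re-run only array task 17 of the original submission, then collate its output by hand
qsub -N cell_death_count_retry -cwd -t 17 -o output/logs/ -e output/logs/ bin/fiji.sh src/cell_death_count.ijm input/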

Scientific Computing at Scale

Enabling large-scale image processing for scientific computing in the cloud reduces the demand for on-site computational resources in laboratory environments, while also giving scientists and analysts the flexibility of on-demand compute on an ad-hoc basis. However, this flexibility also requires the scientists to make trade-offs between the compute power provisioned for a cluster, the speed of obtaining processing results and the cost of the AWS services. By adopting techniques from software engineering and DevOps to build low-cost open source tools and a culture of Scientific Operations (SciOps), we can better facilitate on-demand scientific computing at scale.
