Documentation
A key feature of OpenMOLE is the ability to delegate the workload to a remote execution environment. Tasks in OpenMOLE have been designed so that delegating part of the workload to a remote environment is declarative.
Contents
Setting up an Authentication
You first need to define an authentication method for the environment(s) you want to use. The way to achieve this is described in the GUI guide. Have a look here to set up authentication in console mode.
Defining an execution Environment
The actual delegation of the task is denoted by the keyword on, followed by a defined Environment:
// Define the variables that are transmitted between the tasks
val i = Val[Double]
val res = Val[Double]
// Define the model, here it is a simple task executing "res = i * 2", but it can be your model
val model =
ScalaTask("val res = i * 2") set (
inputs += i,
outputs += (i, res)
)
// Declare a local environment using 10 cores of the local machine
val env = LocalEnvironment(10)
// Make the model run on the local environment
DirectSampling(
evaluation = model on env hook ToStringHook(),
sampling = i in (0.0 to 100.0 by 1.0)
)
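The same workflow can target a remote machine simply by swapping the environment definition. As an illustrative sketch (the hostname, login and core count below are assumptions, not values from this guide), an SSH environment could replace the local one:

```scala
// Sketch: delegate the work to a remote machine reachable over SSH,
// using up to 10 cores on that machine (hostname and login are illustrative)
val env = SSHEnvironment("login", "machine.domain", 10)

// The rest of the workflow is unchanged: only the environment differs
DirectSampling(
  evaluation = model on env hook ToStringHook(),
  sampling = i in (0.0 to 100.0 by 1.0)
)
```

This is the declarative aspect mentioned above: the task definition stays the same, and only the `on env` clause decides where the executions run.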
You do not need to install anything or perform any kind of configuration on the target execution environment for
OpenMOLE to work. It reuses the infrastructure in place. You will however be required to provide the authentication
information in order for OpenMOLE to access the remote environment. At this point, just specify the credentials you're
using to login to this environment outside of OpenMOLE. Voila! That's all you need to do to use your environment
through OpenMOLE. In case you face authentication problems when targeting an environment through SSH, please refer
to the corresponding entry in the FAQ.
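In console mode, authentications are declared before running the workflow. The following is only a sketch of what such a declaration may look like for an SSH environment (the key path, login and hostname are placeholders; check the console guide for the exact syntax of your OpenMOLE version):

```scala
// Sketch (console mode): register an SSH private-key authentication
// for machine.domain; path, login and hostname are illustrative
SSHAuthentication +=
  PrivateKey("/home/user/.ssh/id_rsa", "login", encrypted, "machine.domain")
```

Once the authentication is registered, any environment targeting that host can use it without further configuration.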
When no specific environment is specified for a task, or a group of tasks, they will be executed sequentially on your
local machine.
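For instance, dropping the `on env` clause from the earlier example yields a workflow whose evaluations all run sequentially on the local machine:

```scala
// No environment specified: every evaluation runs locally, one after the other
DirectSampling(
  evaluation = model hook ToStringHook(),
  sampling = i in (0.0 to 100.0 by 1.0)
)
```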
Grouping
The use of a batch environment is generally not suited to short tasks (less than 1 minute on a cluster, or less than 1 hour on a grid). If your tasks are short, you can group several executions into each job. To group the executions by 100 in each job submitted to the environment, use the keyword by:
// Define the variables that are transmitted between the tasks
val i = Val[Double]
val res = Val[Double]
// Define the model, here it is a simple task executing "res = i * 2", but it can be your model
val model =
ScalaTask("val res = i * 2") set (
inputs += i,
outputs += (i, res)
)
// Declare the environment (defined as previously)
val env = LocalEnvironment(10)
// Group the executions by 100 in each job submitted to the environment
DirectSampling(
evaluation = model on env by 100 hook ToStringHook(),
sampling = i in (0.0 to 1000.0 by 1.0)
)