Embed your model

Tasks

Tasks are the atomic computing elements of OpenMOLE: they describe what OpenMOLE should execute. Several tasks are specifically designed to embed your own models and/or programs, depending on the language they are written in (binary executable, Java, R, etc.), so you need to choose the task adapted to your model.
All the available tasks are documented in this section; just click on an item in the list in the left menu.

The execution of a given task depends on input variables, and each task produces output variables which can be transmitted as inputs of subsequent tasks. Below is a dummy task to illustrate this concept:

// Define two variables, i and j, of type Int
val i = Val[Int]
val j = Val[Int]

// Instantiate a task that does nothing.
// This task uses the variable i as input and j as output.
// Any task immediately following this one in the workflow (i.e. linked with a
// transition) will be able to use the variable j containing the result of this
// task (see the sketch right after this example).
val t = EmptyTask() set (
  inputs += i,
  outputs += j
)
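For instance, a second task can consume j once it is chained to t with a transition (--). A minimal sketch, reusing the task t and the variable j defined above:

// A follow-up task that reads the variable j produced by t
val t2 = EmptyTask() set (
  inputs += j,
  outputs += j
)

// The transition passes j from t to t2
t -- t2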
It is also possible to specify default values, which are used by the task in case no input data is provided by the dataflow:

val i = Val[Int]
val j = Val[Int]

val t = EmptyTask() set (
  inputs += i,
  outputs += j,
  // set i's default value to 0
  i := 0
)
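Note that the default is used only when nothing upstream provides i. As a minimal sketch, a hypothetical task producing i can be chained before t; the value it sends through the transition then takes precedence over the default:

// Hypothetical upstream task producing i; its value overrides i := 0
val provider = ScalaTask("val i = 42") set (outputs += i)

provider -- t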
Once your model is properly embedded in an OpenMOLE task, you can use an exploration method on it and delegate the multiple executions of the task to remote computing environments.
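As a preview of what the next section details, a minimal sketch reusing the task t and the variable i defined above explores a few values of i; here the executions are simply delegated to local threads, but a remote environment can be used in place of LocalEnvironment:

// Run t once for each value of i, using 4 local threads
DirectSampling(
  evaluation = t on LocalEnvironment(4),
  sampling = i in (0 to 10)
)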

Exploration method

The composition of a full exploration experiment is achieved by writing a script in the OpenMOLE language. A working OpenMOLE exploration script needs to define the task embedding your model, a sampling, an exploration method and, optionally, a hook to save the results and an execution environment.

Let's say you have a model that takes a parameter as input and does something with it, like launching a simulation with that parameter value. People from the lab gave you a huge CSV file, each line containing the parameters of an experimental setup.
What you want to do is run a simulation for each line of this file, execute it on the lab's cluster, and gather the results. Your OpenMOLE script would look like this:

val inputParameter = Val[Int]
val result = Val[Double]

// Read the big file and turn each line into a sampling point
val all_the_lines = CSVSampling("EmpiricalData.CSV") set (columns += inputParameter)

// Encapsulate your model in an "execution" task that calls its main routine
val my_model_execution =
  ScalaTask("val result = mainRun(inputParameter)") set (
    inputs += inputParameter,
    outputs += result
  )

// A hook to catch the outputs of your model execution and write them in a CSV file
val catch_output = CSVHook(workDirectory / "path/to/save/it")

// Declare your computing environment
val env = ClusterEnvironment(login, machineIP)

// The exploration method
// It says: explore the lines, run a model execution for each one on the cluster,
// and save the outputs (results are not brought back to the local computer yet).
DirectSampling(
  evaluation = my_model_execution on env hook catch_output,
  sampling = all_the_lines
)