Overview

This page introduces the EarthCARE Simulator, an end-to-end performance simulator for the EarthCARE mission, an earth observation satellite mission under development at ESA. The EarthCARE Simulator was created by instantiating the EODiSP platform. Most of the simulation packages integrated in the EarthCARE Simulator were developed by KNMI as part of their contribution to the phase A EarthCARE study.

This page assumes that the reader is familiar with the EODiSP concept.

Objectives

The instantiation of the EODiSP to build the EarthCARE Simulator has two objectives.

  • To create a simulator that can be used to perform the system-level verification of the EODiSP.
  • To provide prospective users of the EODiSP with a blueprint for how they can use the EODiSP to create their own simulators.
An EODiSP simulation is built by integrating a set of simulation packages with the EODiSP infrastructure. A simulation package is a piece of software that implements part of the functionalities required for an end-to-end simulation and that is delivered as a single unit. A simulation package encapsulates one or more simulation models.

The simulation models normally take the form of algorithms, possibly defined in some modelling environment such as Matlab. The simulation packages implement the simulation model algorithms in software. Simulation packages can take a variety of forms: source code in a high-level language, binary executables, macros in an Excel spreadsheet, etc. In the EarthCARE Simulator, three kinds of simulation packages are used: Excel spreadsheets, Linux executables available in binary form only, and Java source code.

With reference to the above objectives, this web site describes:

  • The simulation packages that compose the EarthCARE Simulator;
  • The wrappers that transform the simulation packages into HLA Federates ready to be integrated with the EODiSP infrastructure;
  • The repositories where the HLA Federates are registered.
Note that this web site is intended to describe a simulator, not a particular simulation. The simulator described here can be used to perform several simulations. Some of these simulations were used to demonstrate the correct implementation of the EODiSP and are described in the EODiSP web site.

Simulator Architecture

The EarthCARE simulator is based on the end-to-end performance simulator developed at KNMI for the phase A EarthCARE mission. The KNMI simulator consists of a set of stand-alone programs running on a Linux platform. Each program models one element of the end-to-end observation chain.

The starting point of the observation chain is a scene creator program that generates a scene file. The scene file models the physical scene that is observed by the EarthCARE instruments. The next item in the chain is a set of forward models programs which simulate the propagation of signals from the scene to the satellite instruments. The third item in the chain is a set of instrument model programs that simulate the satellite instruments. The last item in the chain is a set of retrieval model programs that simulate the post processing of the instrument data. A set of utility programs are also available to plot the results of the data retrieval and to extract geophysical data from scene files. The simulator programs exchange information through data files.
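The strictly sequential, file-based chaining described above can be sketched as follows. The stage names mirror the text, but the file names, stage behaviour, and data model are placeholders for illustration only; the real KNMI programs exchange complex data files.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Sketch of the KNMI-style observation chain: each stand-alone program reads
// the file written by its predecessor and writes a new file. File names and
// stage behaviour are placeholders, not the real KNMI interfaces.
public class ObservationChain {

    // One stage of the chain: a named program mapping an input file to an output file.
    record Stage(String program, UnaryOperator<String> run) {}

    public static List<String> run() {
        List<Stage> chain = List.of(
            new Stage("scene_creator",    in -> "scene.dat"),
            new Stage("forward_model",    in -> "forward(" + in + ")"),
            new Stage("instrument_model", in -> "instrument(" + in + ")"),
            new Stage("retrieval_model",  in -> "retrieval(" + in + ")"));

        List<String> files = new ArrayList<>();
        String current = null;                  // the scene creator needs no input file
        for (Stage stage : chain) {             // each program runs to completion before the next starts
            current = stage.run.apply(current);
            files.add(current);
        }
        return files;                           // one output file per stage
    }
}
```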

The KNMI simulator is described in detail in a publicly available user guide.

The EODiSP EarthCARE simulator is based on a nearly complete subset of the programs in the KNMI EarthCARE simulator. Its architecture is shown in the figure below. Blue boxes in the figure represent KNMI simulator programs wrapped as HLA federates. The names in the boxes are the same as the names used in the KNMI documentation. Red boxes represent federates that were developed specifically for the EODiSP EarthCARE Simulator and therefore do not come from KNMI.

The arrows in the figure represent data flows. The structure of the data generated and consumed by the KNMI programs is too complex to be represented graphically. The arrows therefore only indicate which program is providing data to which other program but they do not say anything about the kind of data that is being exchanged. The sim_controller federate sends data to and receives data from all the other federates in the simulator. For simplicity, its connections are not shown in the figure.

EarthCARE Simulator Architecture

Figure 1: EarthCARE Simulator Architecture

The starting point of the processing chain simulated by the demonstrator is the scene_creator utility that is used to generate a scene file. The scene file defines the physical scene to be observed by the EarthCARE instruments. The scene data are processed, in four parallel branches, by the four EarthCARE forward models: lid_filter, rad_filter, MC_sim_main, and MC_LW_sim_main. The phase A EarthCARE mission included four instruments (RADAR, LIDAR, BBR and MSI). Of these, only two - the RADAR and the LIDAR - are explicitly modelled in the simulator (federates lidar and radar). Finally, the simulator includes three retrieval models: lidar_ret1, msi_ret, and lw_msi_lidar_radar.

In addition to the federates built around KNMI models, the EarthCARE Simulator includes two federates that were developed specifically for it:

  • The orbit_propagator federate provides the spacecraft orbital position as an input for other models. The present version is a dummy intended for demonstration purposes that outputs hard-coded values for the sun aspect angles.
  • The sim_controller federate is built around a set of Excel spreadsheets that allow the user to control and display the simulation parameters through a simple graphical interface. Dynamic update of simulation parameters while a simulation is running is also possible.

Note that the EODiSP EarthCARE simulator - like the KNMI simulator from which it is derived - only simulates the processing of one single scene. Hence, the data flows shown in the figure are only executed once in a single simulation run.

File Access Modes

The EarthCARE Simulator federates are implemented by wrapping the corresponding KNMI programs as HLA federates. No changes are made to the original KNMI code.

The HLA interfaces of the EarthCARE federates are defined by the attributes the federates publish and by the attributes they subscribe to.

The attributes that an EarthCARE federate publishes and subscribes to are implicitly defined by the inputs and outputs of the associated KNMI program. These inputs and outputs take the form of files. Conceptually, the HLA wrappers map each file that is produced by one of the KNMI programs to an HLA attribute. At implementation level, two options are possible for managing these input and output files:

  • A file remains on the same distribution node as the program that generates it. The federates that encapsulate the programs that read the file must then provide mechanisms to allow remote access to the file. In this case, only the URL of the file is physically transferred from one federate to another.
  • A file is represented as a sequence of bytes (an HLA attribute of type HLAopaqueData) that is transferred from one federate to the next. In this case, the entire content of the file is transferred from the federate producing the file to the federates consuming it. Note that if several federates are using the same file as an input, then the file content is transferred several times and multiple copies of the file are created (possibly residing on the same computational node).
The choice between the two approaches depends on the pattern of access to a file on the part of the KNMI programs, on the size of the files, and on the distribution architecture of a simulation.

If a federate needs to access a file several times during its execution and if the file is generated at a remote location, then the transfer of the file content to the federate that needs it may be the most efficient solution. If, on the other hand, the files are very large and used by several federates, then it may be more advantageous to keep them where they are created and to simply transfer their URL to the federates that need to access them. The latter solution, quite obviously, is also the optimal solution in case all federates reside on the same computational node.
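The trade-off described above can be captured in a simple decision rule. The following sketch is an illustration only: the size threshold and the method are assumptions made for this page, and the EODiSP itself does not expose such an API.

```java
// Illustrative decision rule for the two file access modes discussed above.
// The 100 MB threshold and the method signature are assumptions for this
// sketch; they are not part of the EODiSP.
public class FileAccessPlanner {

    public enum Mode { URL_REFERENCE, CONTENT_TRANSFER }

    public static Mode choose(long fileSizeBytes, int consumers, boolean sameNode) {
        final long LARGE = 100L << 20;           // assumed threshold: 100 MB
        if (sameNode)
            return Mode.URL_REFERENCE;           // everything local: just pass the path
        if (fileSizeBytes > LARGE && consumers > 1)
            return Mode.URL_REFERENCE;           // large file, many readers: avoid shipping copies
        return Mode.CONTENT_TRANSFER;            // otherwise ship the bytes to the consumer
    }
}
```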

The EarthCARE Simulator currently supports only the second of these two modes, the file transfer mode. Thus, the file content is always transferred between federates, even if they reside on the same node.

Execution Mode

The original KNMI simulator was intended to run sequentially on a single processing node. The KNMI simulation models are encapsulated in executable programs that run one after the other. Each executable runs to completion before handing over to the next executable in the processing chain.

The EODiSP supports distributed simulations where various elements of a simulation may run in parallel. Such a distributed approach makes eminent sense in the case of the EarthCARE simulation for at least two reasons. Firstly, the forward models are typically very computationally intensive. It is therefore advantageous to locate them on different nodes because this allows them to execute in parallel thus reducing the overall duration of a complete simulation. Secondly, it is often the case that some of the models that participate in a simulation are proprietary. The EODiSP approach allows their owners to keep control over them while still allowing them to be included in a simulation.

The EarthCARE Simulator is designed in the EODiSP spirit to support distributed simulation if the simulation packages are located on distributed nodes.
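Since the four forward models are independent once the scene file exists, the gain from distribution can be sketched with a simple concurrent run of the four branches. The model names follow the architecture description above; the "work" each task performs is a placeholder for the real computations.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of why distribution pays off: the four forward models only depend on
// the scene file, so their branches can execute in parallel. The lambda bodies
// stand in for the real, computationally intensive model runs.
public class ParallelForwardModels {

    public static List<String> runAll(String sceneFile) {
        List<String> models = List.of("lid_filter", "rad_filter", "MC_sim_main", "MC_LW_sim_main");
        ExecutorService pool = Executors.newFixedThreadPool(models.size());
        try {
            List<Callable<String>> tasks = new ArrayList<>();
            for (String m : models)
                tasks.add(() -> m + "(" + sceneFile + ")");   // placeholder for the real model run
            List<String> outputs = new ArrayList<>();
            for (Future<String> f : pool.invokeAll(tasks))    // results come back in task order
                outputs.add(f.get());
            return outputs;
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```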

Failure Modes

It is common for simulation models to support the concept of failure modes. The failure mode is a flag that the user of the model can set to ask the model to emulate a failure condition. The KNMI models do not support failure modes. However, for purposes of demonstration, the EarthCARE Simulator includes a facility to handle failure modes.

A failureMode flag is associated with each federate (with the exception of the sim_controller). The value of this flag is visualized by the sim_controller federate and, through this same federate, the simulation manager can also request that the value of the flag be changed.

Since the KNMI models do not support failure modes, changes to the value of the failureMode flag have no effect. However, this facility simulates a situation where the simulation manager can control, in real-time and while a simulation is running, the failure mode of the models that participate in a simulation.

The visualization and change of the value of the failure flag is done through the Excel interface of the sim_controller federate. This demonstrates how Excel can be used to build a simple GUI-based control interface for an EODiSP federation.
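The failure mode facility can be sketched as a per-federate flag that the controller reads and updates. Only the flag name failureMode comes from the text; the class and method names below are assumptions for this illustration, and, mirroring the simulator, setting the flag changes the stored value only.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the failureMode facility: each federate (except the controller)
// carries a failureMode flag that the controller can visualize and update.
// Class and method names are illustrative assumptions.
public class FailureModeRegistry {

    private final Map<String, Boolean> failureMode = new LinkedHashMap<>();

    public void register(String federate) {
        failureMode.put(federate, false);        // flag starts cleared
    }

    public boolean get(String federate) {
        return failureMode.get(federate);
    }

    // Invoked on behalf of the simulation manager while a simulation is
    // running. Since the KNMI models ignore the flag, only the stored value
    // changes; model behaviour is unaffected.
    public void set(String federate, boolean failed) {
        if (!failureMode.containsKey(federate))
            throw new IllegalArgumentException("unknown federate: " + federate);
        failureMode.put(federate, failed);
    }
}
```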

Dummy Federates

Some of the KNMI programs that form the heart of the EarthCARE Simulator have very long execution times (tens of minutes or more). This is inconvenient for debugging or demonstration purposes. For this reason, the EarthCARE Simulator includes dummy versions of some of the federates that wrap the KNMI programs.

The KNMI federates wrap an executable that reads one or more input files and generates one or more output files. The dummy versions of the KNMI federates do not include any executable, but they output a file with the same format as the output file generated by the KNMI executable.

The dummy federates implement the same HLA interface as the KNMI federate they replace (i.e. they comply with the same SOM). The dummy federates are therefore interchangeable with the KNMI federates.
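The interchangeability of real and dummy federates can be illustrated with a shared Java interface playing the role of the common SOM. All names below are assumptions for this sketch; the real wrapper and the real SOM are not shown here.

```java
// Sketch of why the dummy federates are drop-in replacements: both the real
// wrapper and the dummy implement the same interface (the analogue of sharing
// one SOM). Interface and class names are illustrative assumptions.
public class DummyFederates {

    public interface Federate {
        String produceOutputFile(String inputFile);
    }

    // Wraps the (long-running) real KNMI executable, elided here.
    public static class KnmiFederate implements Federate {
        public String produceOutputFile(String inputFile) {
            return "real-output(" + inputFile + ")";   // stands in for invoking the executable
        }
    }

    // Same interface, no executable: emits a pre-canned file in the same format.
    public static class KnmiFederateDummy implements Federate {
        public String produceOutputFile(String inputFile) {
            return "canned-output";                    // instantaneous, fixed content
        }
    }

    // Callers depend only on the interface, so either variant can be plugged in.
    public static String runChainStep(Federate f, String inputFile) {
        return f.produceOutputFile(inputFile);
    }
}
```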

The EarthCARE Demonstrator delivery includes dummy federates for several KNMI programs. The dummy federates are characterized by names like xxx_dummy where 'xxx' is the name of a KNMI program.

Simulation Parameters

The behaviour of simulation models is normally controlled by specifying the values of certain parameters. The EarthCARE Simulator supports this type of configuration of its models.

The model parameter values are displayed through the Excel interface of the sim_controller federate and, through this same interface, the simulation controller can also update their values or load an entirely new set of default values for the simulation parameters.

In view of the demonstrator status of the EarthCARE Simulator, the set of simulation parameters that are displayed through the sim_controller federate is only a subset of all the parameters that are defined for the KNMI models. The intention is to demonstrate how simulation parameters can be displayed and updated in an EODiSP simulation, rather than to provide access to the full set of KNMI parameters.

Simulation Experiments

The EODiSP is designed to support the execution of simulation experiments. A simulation experiment is a set of simulation runs executed in sequence with different configurations. A configuration is defined by a set of initialization files. An initialization file is a file that is read by the federate during the federate initialization phase (i.e. prior to the federate achieving its EODISP_START synchronization point). In the EODiSP, a federate can have multiple initialization files. These files can be added to a federate in the simulation manager application, retaining the order of the files.

In the case of the EarthCARE Simulator, a simulation configuration is defined by a set of values for the simulation parameters. This set of values is stored in the Excel file that is embedded within the sim_controller federate. This federate is designed to take the Excel file as its initialization file. No other federate in the EarthCARE Simulator has an initialization file. Hence the sim_controller initialization file defines a configuration in a simulation run.
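The experiment concept can be sketched as a sequence of runs, each carrying an ordered list of initialization files for a federate. The data model and file names below are an illustration, not the EODiSP implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an EODiSP-style simulation experiment: runs executed in sequence,
// each configured by an ordered list of initialization files per federate.
// Class names and file names are illustrative assumptions.
public class SimulationExperiment {

    // One run: a federate name and its ordered initialization files
    // (the order in which files are added is retained).
    public record Run(String federate, List<String> initFiles) {}

    private final List<Run> runs = new ArrayList<>();

    public void addRun(String federate, List<String> initFiles) {
        runs.add(new Run(federate, List.copyOf(initFiles)));
    }

    // Executes the runs in sequence; here "executing" just reports which
    // configuration each run would be initialized with.
    public List<String> execute() {
        List<String> log = new ArrayList<>();
        for (Run run : runs)
            log.add(run.federate() + " initialized with " + run.initFiles());
        return log;
    }
}
```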

Simulation Repository

In the EODiSP concept, at least one centralized repository must exist where all federates that may take part in a simulation are registered. The EarthCARE Simulation Repository is the centralized repository where the EarthCARE Simulator federates are registered.

The EODiSP can operate in two basic modes. In remote mode, one or more of the federates are located on a remote node (with respect to the simulation manager application). In local mode, all the federates are located on the same platform as the simulation manager application. Operation in remote mode requires the repository to be installed on a publicly accessible server. Operation in local mode requires the repository to be installed locally on the same machine on which the simulation models and the simulation manager application are located.

The EarthCARE Simulator is designed for remote mode operation. Its repository is installed on the P&P Software server located in Germany. Obviously, users who wish to run an EarthCARE simulation in local mode can do so by setting up a local repository on their own machine.

Simulation Federate Owners

In the EODiSP concept, all federates that may take part in a simulation are owned by a person or entity. The EarthCARE simulation federate owners are the owners of the federates of the EarthCARE Simulation.

In the EODiSP concept, federates are located on some platform and a simulation can be built by integrating federates that reside on different - distributed - platforms. Federate owners must therefore be associated with a particular platform where the federates they own are located. The federate owners and their associated platforms for the EarthCARE Simulator are as follows:

  • PnP_1: federate owner residing at P&P Software associated to a Linux laptop.
  • ETH_2: federate owner residing at ETH-Zurich associated to a Windows laptop.
  • Estec_4: federate owner residing at ESA-Estec associated to a Linux desktop.
  • Estec_5: federate owner residing at ESA-Estec associated to a Windows desktop.

Federate owners manage their models through the EODiSP model manager application. This application is therefore installed on each of the demonstrator owner platforms. Note that the laptop-based federate owners can be physically anywhere as long as they have access to the Internet.

The figure below illustrates on which node a federate is running for the EarthCARE Simulator. It is the same figure as above, extended with a box for each federate. This box shows the federate owner, according to the list given above.

EarthCARE Simulator Architecture with Federate Owners

Figure 2: EarthCARE Simulator Architecture with Federate Owners

Normally, the various owners contributing federates to a simulator own disjoint subsets of federates. In the case of the EarthCARE Simulation, and in view of its demonstrative character, federates are duplicated. This is not shown in the figure above. The figure only shows the federates that will be used nominally.

The KNMI programs are designed to run under Linux. Their associated federates can therefore only be allocated to Linux-based federate owners. The sim_controller federate is built around an excel spreadsheet and must therefore be allocated to a Windows-based federate owner. The orbit_propagator federate is implemented as a Java class and is therefore platform-independent.

The EarthCARE Simulator provides two copies of each KNMI federate, one allocated to the PnP_1 federate owner and one to Estec_4. Similarly, the orbit_propagator federate is provided in four copies, one allocated to each of the four federate owners.

This duplication of federates gives great flexibility in building simulations, as it allows the simulation manager to select different mixes of federates representing different distribution architectures. Mixing federates from different platforms is possible and easy because federates that implement the same HLA interface are interchangeable and because the EODiSP allows seamless integration of distributed simulation packages.

There is another node that participates in the EarthCARE simulation:

  • ETH_3: Running the JXTA rendezvous and relay server, the repository application, and the CRC on a Linux server.

This node does not run any model manager application, but it is nevertheless essential for a functional EarthCARE simulation since it provides services needed by all EODiSP simulations. These services were chosen to run on a publicly accessible server. However, they could also be run on a local node if the simulation is to be executed locally.