
Some insights into the code of COSMO and LETKF

COSMO

How to compile

It is recommended to recompile your COSMO version locally, so that your binary is independent of further development by others. This should be done directly on xce (!), e.g. via ssh xceh. (AHJune212016)

How does COSMO pass variables to monRTOVP?

COSMO includes a parametrisation of cloud properties. In this context, the monRTOVP files are interesting, since they include variables that carry cloud information: cldtop (cloud top pressure), cld (cloud water content in g/m**3), cldfrac (cloud fraction), and q (specific humidity).

These values are written by the subroutine write_rtovp in mo_rad.f90, which is a module of the libradiance library. The subroutine write_rtovp is called from the subroutine calc_obs_satpp, located in src_obs_rad.f90 in the COSMO package.

For instance, the variable cld seen in monRTOVP represents the content of r%cld_fg in calc_obs_satpp. In this subroutine, most elements of the data structure r are filled by the subroutine prepare_rttov_input. Note: the subroutine rttov_fill_input, called in calc_obs_satpp, maps the data from type t_radv to a data structure that RTTOV can handle.

The variable cld_fg in calc_obs_satpp is equivalent to the variable cloud in prepare_rttov_input. This variable is computed from the model variables qih (cirrus clouds), qwch (convective clouds) and qwh (stratiform clouds). (AHJune212016)

To compute the height (in m) at which a cloud is detected, src_obs_rad.f90 includes a piece of code that computes the sum of liquid and ice water content (LWC+IWC) and checks where it exceeds a certain threshold. If LWC+IWC stays below the threshold everywhere, the height is set to a negative value. Otherwise the height is the distance of the first threshold crossing from the land surface (based on the orography, in m). In the code, hh(k,iprof) is the mid-level height, where k=1..ke counts the levels and iprof runs over the spatial locations (FOVs). Here, k=1 is the topmost level and k=ke the lowest level at the ground. The variable hsurf(:,:) gives the orographic height at the different spatial locations. The cloud height may thus be given as hh-hsurf with respect to the earth's surface, or simply as hh as the height above sea level. (AH2June282016)
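The logic described above can be sketched as follows. This is a schematic Python illustration, not the actual Fortran in src_obs_rad.f90; the names (hh, hsurf, threshold) follow the text, the numbers are invented:

```python
def cloud_height(hh, lwc, iwc, hsurf, threshold, missing=-999.0):
    """Cloud height above the surface for one profile (FOV).

    hh[k]    : mid-level height (m) for k = 0 (top) .. ke-1 (ground)
    lwc, iwc : liquid/ice water content per level
    hsurf    : orographic height (m) at this location
    """
    for k in range(len(hh)):              # scan from the top level downwards
        if lwc[k] + iwc[k] > threshold:
            return hh[k] - hsurf          # first threshold crossing, above surface
    return missing                        # no cloud detected: negative value


# Toy profile: LWC+IWC exceeds the threshold at the second level from the top.
hh = [12000.0, 9000.0, 5000.0, 1000.0]
print(cloud_height(hh, lwc=[0.0, 0.2, 0.0, 0.0],
                   iwc=[0.0, 0.1, 0.0, 0.0],
                   hsurf=500.0, threshold=0.05))   # -> 8500.0
```

Adding hsurf back would give the height above sea level instead of above the surface.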

Namelist parameters

  • altopsu: height threshold (in m) for SYNOP data above which observations are rejected; the array refers to (wind, geopotential, T, RH). Example: altopsu = 100., 5000., -10., -10. leads to a rejection of all T and RH observations. (AH23112016)
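As a hedged illustration of how this threshold acts (a Python sketch of the rule as described above, not the actual COSMO code; the function and dictionary names are invented):

```python
# One threshold per variable; observations from stations above the threshold
# are rejected, and a negative threshold effectively rejects the variable
# everywhere (station heights are non-negative in practice).
ALTOPSU = {"wind": 100.0, "geopot": 5000.0, "T": -10.0, "RH": -10.0}

def accept_synop(varname, station_height):
    """Accept a SYNOP observation only at or below the variable's threshold."""
    return station_height <= ALTOPSU[varname]

print(accept_synop("wind", 50.0))    # -> True  (below 100 m)
print(accept_synop("wind", 500.0))   # -> False (above 100 m)
print(accept_synop("T", 0.0))        # -> False (negative threshold: reject all)
```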

LETKF

Namelist parameters

  • sgm_fg(1): threshold for stddev(obs-fg) in the first-guess check. If stddev(obs-fg) > sgm_fg(1), the observation is rejected; see analysis/mo_fg_check.f90 (AH23112016)
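The check can be sketched as follows (an illustrative Python version of the rule stated above; the real check lives in analysis/mo_fg_check.f90 and the function name here is invented):

```python
import statistics

def fg_check(obs, fg, sgm_fg1):
    """First-guess check: accept only if stddev(obs - fg) <= sgm_fg(1)."""
    departures = [o - f for o, f in zip(obs, fg)]
    return statistics.pstdev(departures) <= sgm_fg1

print(fg_check([2.0, 0.0, 2.0, 0.0], [1.0, 1.0, 1.0, 1.0], 1.5))  # -> True
print(fg_check([2.0, 0.0, 2.0, 0.0], [1.0, 1.0, 1.0, 1.0], 0.5))  # -> False
```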

How to compile dace

First, you should get the dace package. It is installed under the directory 3dvar, which serves as the root directory. On the machine xce there are two possible compilation environments: cce (cray) and gnu. Choose one of them; the default depends on your profile. On xce, you may install the following profile script as .profile.xc:

# .profile.xc                                                 -*- sh -*-
########################################################################
# The following shell functions enable fast switching between
# different programming environment on the Cray xc login nodes
# by H. Anlauf
########################################################################
 case "$HOSTNAME" in
  xc*)
      set_ps1()
      {
          if type ppwd >/dev/null 2>/dev/null ; then
              PS1='$(ppwd \l)\u@\h:'$1':\w> '         # bash
          else
              PS1="${USER}@${HOST}:"$1':${PWD}> '     # ksh
          fi
      }
      pe_to_prgenv_()
      {
          case "$1" in
              CRAY)  prgenv_="PrgEnv-cray"  ;;
              GNU)   prgenv_="PrgEnv-gnu"   ;;
              INTEL) prgenv_="PrgEnv-intel" ;;
          esac
      }
      cce()
      {
          cce_vers=${1:-default}      # common default cce version
          pe_to_prgenv_ $PE_ENV
          oldenv_="$prgenv_"
          pe_to_prgenv_ "CRAY"
          if [ "$oldenv_" != "$prgenv_" ]; then
              module unload libdwd grib_api
              module unload cray-netcdf perftools stat craype-hugepages2M cray-hdf5
              module swap $oldenv_ $prgenv_
          fi
          module swap cce cce/$cce_vers
          module load cray-netcdf perftools stat
          module load craype-hugepages2M
          module load libdwd
          module load grib_api
          module list
          set_ps1 cray
      }
      gnu()
      {
          gnu_vers=${1:-default}      # common default gcc version
          pe_to_prgenv_ $PE_ENV
          oldenv_="$prgenv_"
          pe_to_prgenv_ "GNU"
          if [ "$oldenv_" != "$prgenv_" ]; then
              module unload libdwd grib_api
              module unload cray-netcdf perftools stat craype-hugepages2M cray-hdf5
              module swap $oldenv_ $prgenv_
          fi
          module swap gcc gcc/$gnu_vers
          case $gnu_vers in
              5.*)    # gcc 5.1, 5.2, 5.3
                  module unload cray-hdf5 cray-netcdf cray-mpich cray-libsci
                  module load cray-hdf5/1.8.14 cray-netcdf/4.3.3.1 cray-mpich/7.2.6 \
                              cray-libsci/13.2.0
                  ;;
              *)      # gcc 4.9.3 also works with default netcdf
                  module unload cray-hdf5 cray-netcdf cray-mpich cray-libsci
                  module load cray-netcdf cray-mpich cray-libsci
                  ;;
          esac
          module load perftools stat
          module load libdwd
          module load grib_api
          module list
          set_ps1 gnu
      }
      intel()
      {
          intel_vers=${1:-15.0.1.133} #     my default intel version
          pe_to_prgenv_ $PE_ENV
          oldenv_="$prgenv_"
          pe_to_prgenv_ "INTEL"
          if [ "$oldenv_" != "$prgenv_" ]; then
              module unload libdwd grib_api
              module unload cray-netcdf perftools stat craype-hugepages2M cray-hdf5
              module swap $oldenv_ $prgenv_
          fi
          module swap intel intel/$intel_vers
          module load cray-netcdf perftools stat
          module list
          set_ps1 intel
      }
      if [ -n "$PS1" ]; then
          if [ "$HOSTNAME" = "xct00" ]; then
              cce default             # xct default environment: cray
          else
              cce                     # xce default environment: cray
          fi
          export CRAYLIBS_ARCH_OVERRIDE=ivybridge
      fi
      ;;
 esac

and source it at the beginning of your .profile, e.g. by

. ./.profile.xc

Once you have downloaded dace and updated it via svn update, you should ensure that all dependencies are correct by calling ./Make in the root directory. A subsequent make clean removes all compiled modules. Then make -j 12 compiles all files, links the modules and generates the libraries. The option -j 12 lets make distribute the work over 12 parallel processes; you may omit the option or choose another number.

As soon as you have edited a module and ensured that it compiles without errors, you may submit it via svn commit -m "some message", where the message describes your modifications. Before committing, please update your working copy via svn update. This prevents version mismatches and compilation problems for your colleagues.

Hint: If you have called svn update and, after calling make, get a message about an undefined reference to a subroutine, your dependencies may be out of date. In that case regenerate them by the call sequence ./Make, make clean and make -j 12 in your root directory.

Hint: Please make sure that there are no old copies of .f90 files in the source directories, e.g. file.f90 alongside file.old.f90. Both files may contain a definition of the same module, e.g. great_module. In that case make compiles both the new and the old file, and linking may fail because there are two different definitions of great_module. Therefore rename old files so that they no longer end in .f90, e.g. file.f90.old. (AHJuly152016)
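Such stale duplicates can be detected mechanically. The following small Python sketch (not part of dace; file and function names are invented) scans a directory for .f90 files that define the same Fortran module:

```python
import re
from collections import defaultdict
from pathlib import Path

def find_duplicate_modules(srcdir):
    """Map module name -> list of .f90 files defining it (duplicates only)."""
    defs = defaultdict(list)
    # Match lines like "module great_module" (but not "end module ...").
    pattern = re.compile(r"^\s*module\s+(\w+)\s*$", re.IGNORECASE)
    for path in sorted(Path(srcdir).glob("*.f90")):   # catches file.old.f90 too
        for line in path.read_text().splitlines():
            m = pattern.match(line)
            if m:
                defs[m.group(1).lower()].append(path.name)
    return {name: files for name, files in defs.items() if len(files) > 1}
```

Running it over a source directory before make would list every module defined twice, pointing at the files to rename.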

Data structure for observations

In analysis/mo_letkf.f90, the observation data and related elements are stored in the structure obs. This is a global public module variable of type t_obs_set, set in analysis/mo_psas.f90. The type t_obs_set is defined in analysis/mo_obs_set.f90. It is a rather complex structure containing substructures of various derived types. The major structure elements are:

  name   type         description
  o      t_obs        observation data
  si     t_dec_info   report/fov
  oi     t_dec_info   observation
  ii     t_dec_info   interpolation
  di     t_dec_info   dummy
  l      t_ooj        linearised observation operator
  b      t_bg         background in interpolation and observation space
  obc    t_vector     bias corrected observation
  R      t_matrix     nominal observation error
  vc     t_vqc        quadratic approximation of VQC J_o (for EnVar only)
  vbc    t_vbc        variational bias correction

One of the most important components is o of type t_obs. This structure is an array of observation boxes and is explained in oo-model/mo_t_obs.f90. Each array element includes, inter alia, the following subelements:

  name      type      description
  n_spots   integer   number of spatial locations/reports
  spot      t_spot    header of spatial location
  n_obs     integer   size of observation space
  body      t_datum   data of single observations
  varno     integer   observation type
  ana       real      analysis

The variable n_spots is the number of reports, i.e. the number of spatial locations where observations are taken. The structure spot is an array of size n_spots and holds the header of each report.

The header of each report, obs%spot of type t_spot, includes, inter alia, obs%spot%n_hd of type t_head, which contains information on the data at the spatial location of the corresponding report. An important component is obs%spot%o, containing information on the observations. In the case of SEVIRI, obs%spot%o describes the observations in the different channels. obs%spot%o contains the indices at which the observations are found in the structure body (see below): obs%spot%o%i+1 gives the index of the observation data of the first channel, obs%spot%o%i+2 that of the second, ..., and obs%spot%o%i+n that of the n-th channel.

The component body of type t_datum (defined in oo-model/mo_t_datum.f90) describes the single observations including inter alia:

  name   type   description
  o      real   observed value
  bg     real   background value
  bc     real   bias correction
  eo     real   observational error
  plev   real   observation height (Pa)

For instance, let the index k lie in the interval [obs%spot%o%i+1, obs%spot%o%i+n] of a specific report (spatial location). Then obs%body(k)%o is the observation in the corresponding channel and spatial location, and obs%body(k)%eo the corresponding observation error.
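The offset arithmetic above can be mimicked in a few lines. This is a schematic Python sketch where plain dicts and lists stand in for the Fortran derived types; the toy values are invented:

```python
# Flat list of single observations (playing the role of body, 1-based below).
body = [
    {"o": 287.1, "eo": 1.0},   # report 1, channel 1
    {"o": 265.4, "eo": 1.2},   # report 1, channel 2
    {"o": 290.3, "eo": 0.9},   # report 2, channel 1
]
# Report headers (playing the role of spot%o): offset i and channel count n.
spots = [{"i": 0, "n": 2}, {"i": 2, "n": 1}]

def channel_obs(spot, channel):
    """Return the datum of a report's given channel (1-based, as in Fortran)."""
    k = spot["i"] + channel    # k lies in [i+1, i+n]
    return body[k - 1]         # shift to Python's 0-based indexing

print(channel_obs(spots[0], 2)["o"])   # -> 265.4
print(channel_obs(spots[1], 1)["eo"])  # -> 0.9
```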

(AHJune142016)

How are fof_RAD files read in and how is the ensemble data stored?

The fof_RAD files are read in mo_fdbk_in.f90 in the subroutine read_feedback. The information on the files to be read is stored in mo_t_obs.source and their number in mo_t_obs.n_source. The file content is written into a data structure of type t_obs.

The routine read_feedback in mo_fdbk_in.f90 is called in the subroutine process_obs0 in mo_obs.f90. There, the file data are stored in a data structure obs of type t_obs_block, in the element obs%o (of type t_obs).

Before the LETKF is applied, the feedback file content is read in letkf_driver, located in mo_letkf.f90. There, read_obs_ens (located in mo_t_enkf.f90) is called, which reads the ensemble by calling read_feedback again. (AHJuly042016)

reading/kenda/cosmoletkf.txt · Last modified: 2016/11/23 11:38 by hutt