Commit d795e253 authored by Michael Blaschek

WRF updates

parent 67404351
@@ -18,7 +18,7 @@ FROM golang:1.20.0-alpine as builder
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
################################################################################
# based on: https://github.com/singularityhub/singularity-docker/blob/v3.11.4-slim/Dockerfile
# alpine image with the go tools
RUN apk update && \
@@ -4,7 +4,7 @@ We are going to use spack to build a container with different libraries and headers
## Define a spack configuration
We can choose the libraries and versions we need and build them into a concise image. We choose Ubuntu 18.04 and spack 0.16.3, but we can build our own OS and spack version as we invest further.
This needs to be in a file called spack.yaml
```yaml
@@ -15,8 +15,8 @@ spack:
  container:
    images:
      os: "ubuntu:18.04"
      spack: 0.16.3
    format: singularity
@@ -70,7 +70,7 @@ SiFive - riscv64
u74mc
```
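The listed microarchitecture targets matter for portability: by default spack optimizes packages for the build host. To keep the image generic, the target can be pinned in spack.yaml (a sketch; the `packages:all:target` key is standard spack configuration, the choice of `x86_64` is an assumption):

```yaml
spack:
  packages:
    all:
      # build generic binaries instead of host-tuned ones (illustrative choice)
      target: [x86_64]
```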
Convert this configuration into a singularity (apptainer) recipe
```bash
spack containerize > Singularity.hdf5.1.10.7
```
@@ -80,7 +80,7 @@ which has the following content
```singularity
Bootstrap: docker
From: spack/ubuntu-bionic:0.16.3
Stage: build
%post
@@ -114,7 +114,7 @@ EOF
Bootstrap: docker
From: ubuntu:18.04
Stage: final
%files from build
@@ -125,19 +125,56 @@ Stage: final
%post
# Update, install and cleanup of system packages needed at run-time
apt-get -yqq update && apt-get -yqq upgrade
apt-get -yqq install gcc gfortran
rm -rf /var/lib/apt/lists/*
# Modify the environment without relying on sourcing shell specific files at startup
cat /opt/spack-environment/environment_modifications.sh >> $SINGULARITY_ENVIRONMENT
%labels
app hdf5@1.10.7+mpi
mpi openmpi@3.1.6
```
Then we can use that recipe to build a singularity container:
```bash
singularity build hdf5.1.10.7.sif Singularity.hdf5.1.10.7
```
## Build your first Singularity container for spack development and integration
This is a much more complex build and configure example. We will build a CentOS 7 image (or use any other) and install [spack](https://spack.readthedocs.io/en/latest/), an HPC package manager, into the container with our new favorite compiler. In the next step we can use that container and install even more packages into a development container for different apps.
```bash
# this will take some time ...
singularity build centos7.sif definition-files/centos/Singularity-centos7
# resulting in a roughly 180 MB CentOS root image, called centos7.sif
singularity build centos7.spack.sif definition-files/centos/Singularity-centos7-spack
```
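For reference, a Singularity definition file for such a base image generally looks like the sketch below. This illustrates the typical structure only; it is not the contents of definition-files/centos/Singularity-centos7, and the package list is an assumption:

```singularity
Bootstrap: docker
From: centos:7

%post
    # minimal tooling so spack can be installed later (illustrative selection)
    yum update -y && yum install -y curl git gcc gcc-c++ make
    yum clean all && rm -rf /var/cache/yum

%labels
    baseimage CentOS7
```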
The next step is to use the [definition file](definition-files/centos/Singularity-centos7-spack) to install the desired compiler into that container
```bash
...
# use spack to install the compiler
$SPACK_ROOT/bin/spack install -y gcc@8.4.0
# adding the compiler to the settings
$SPACK_ROOT/bin/spack compiler add $(spack location -i gcc@8.4.0)
...
```
Now we have an image that includes spack version 0.17.3 with a recent gcc@8.4.0 compiler, ready for development.
```bash
# e.g. open a shell in the development image and list the registered compilers
singularity shell centos7.spack.sif
# inside the container:
spack compilers
```
### Containerizing applications using spack
## Signing and hosting containers
Containers can be signed with a PGP key so that users can verify their origin:
```bash
singularity sign hdf5.1.10.7.sif
```
@@ -183,7 +183,7 @@ report "AlmaLinux 8 spack image available"
if [ $? -ne 0 ]; then
test -f $DEFPATH/almalinux/Singularity-alma8-spack
report "AlmaLinux 8 spack definition found" || exit 1
sed "s/v0.17.3/v${spackversion}/" $DEFPATH/almalinux/Singularity-alma8-spack >Singularity-alma8-spack
singularity build alma8-spack${spackversion} Singularity-alma8-spack
report "AlmaLinux 8 spack container ready" || exit 1
fi
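The sed call swaps the spack tag pinned in the definition file (v0.17.3) for the requested version before building. The substitution can be checked in isolation (a standalone sketch; the input line is fabricated, only the v0.17.3 pattern comes from the script):

```shell
spackversion=0.19.0
# a fabricated definition-file line pinning the old spack tag
pinned='From: spack/almalinux8:v0.17.3'
echo "$pinned" | sed "s/v0.17.3/v${spackversion}/"
# -> From: spack/almalinux8:v0.19.0
```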
@@ -221,18 +221,18 @@ fi
#
# Verify containers work
# not as root
singularity key list | grep IMGW
report "IMGW GPG key" || exit 1
echo "Please supply GPG passphrase for your key"
singularity sign alma8-spack${spackversion}
singularity sign alma8-spack${spackversion}-gcc${gccversion}
singularity sign centos7-spack${spackversion}-gcc${gccversion}
#
# Upload to cloud / repository for open access
#
# Spack primary development container
singularity push alma8-spack${spackversion} library://mblaschek//spack${spackversion}/alma8:8.5.0
# Spack development container with gcc versions
singularity push alma8-spack${spackversion}-gcc${gccversion} library://mblaschek//spack${spackversion}/alma8:${gccversion}
singularity push centos7-spack${spackversion}-gcc${gccversion} library://mblaschek//spack${spackversion}/centos7:${gccversion}
# singularity push centos8-spack-icc${iccversion} library://mblaschek/spack/alma8/icc:${iccversion}
@@ -48,3 +48,7 @@ Steps:
5. Build your own container from the WRF development container and your source code.
# Intel
Intel-compiled WRF is faster.
The intel-oneapi-runtime image is about 1.3 GB; the intel-oneapi-compilers image is about 10 GB.
Bootstrap: localimage
From: alma8.base.sif
%labels
maintainer IT-IMGW <it.img-wien@univie.ac.at>
baseimage AlmaLinux8
%apprun downloadwrf
WRF_VERSION=4.4.1
if [ $# -ne 0 ]; then
WRF_VERSION=$1
fi
if [ ! -f WRF-${WRF_VERSION}.tar.gz ]; then
echo "[wrf.dev] Downloading WRF $WRF_VERSION from github:"
curl -SL -o WRF-${WRF_VERSION}.tar.gz https://github.com/wrf-model/WRF/releases/download/v${WRF_VERSION}/v${WRF_VERSION}.tar.gz
else
echo "[wrf.dev] Source found: WRF-${WRF_VERSION}.tar.gz"
fi
%apphelp downloadwrf
[WRF] APP: downloadwrf
[WRF] This will download the WRF source code from GitHub. The version can be specified by an argument.
[WRF] run: singularity run --app downloadwrf 4.4.1
[WRF] curl -SL -o WRF-{WRF_VERSION}.tar.gz https://github.com/wrf-model/WRF/releases/download/v{WRF_VERSION}/v{WRF_VERSION}.tar.gz
[WRF]
%apprun downloadwps
WPS_VERSION=4.4
if [ $# -ne 0 ]; then
WPS_VERSION=$1
fi
if [ ! -f WPS-${WPS_VERSION}.tar.gz ]; then
echo "[wrf.dev] Downloading WPS $WPS_VERSION from github:"
curl -SL -o WPS-${WPS_VERSION}.tar.gz https://github.com/wrf-model/WPS/archive/refs/tags/v${WPS_VERSION}.tar.gz
else
echo "[wrf.dev] Source found: WPS-${WPS_VERSION}.tar.gz"
fi
%apphelp downloadwps
[WRF] APP: downloadwps
[WRF] This will download the WPS source code from GitHub. The version can be specified by an argument.
[WRF] run: singularity run --app downloadwps 4.4
[WRF] curl -SL -o WPS-{WPS_VERSION}.tar.gz https://github.com/wrf-model/WPS/archive/refs/tags/v{WPS_VERSION}.tar.gz
[WRF]
%post
# Every line will be a layer in the container
# See https://fedoraproject.org/wiki/EPEL#Quickstart for powertools
# yum --enablerepo epel groupinstall -y "Development Tools" \
yum update -y \
&& yum install -y dnf-plugins-core \
&& dnf config-manager --set-enabled powertools \
&& yum install -y epel-release \
&& yum update -y \
&& yum --enablerepo epel install -y \
curl wget \
file \
findutils \
gcc-c++ \
gcc \
gcc-gfortran \
glibc.i686 libgcc.i686 \
libpng-devel jasper-libs jasper-devel \
m4 make perl cmake \
flex flex-devel bison bison-devel \
libcurl-devel \
libxml2 libxml2-devel perl-XML-LibXML ImageMagick \
python3 python3-pip python3-devel \
tar bash tcsh time which zlib zlib-devel \
git \
gnupg2 \
hostname \
iproute \
patch \
openmpi-devel \
openmpi \
hdf5-openmpi-devel \
hdf5-openmpi-static \
netcdf-openmpi-devel \
netcdf-openmpi-static \
netcdf-fortran-openmpi-devel \
netcdf-fortran-openmpi-static \
openblas-devel.x86_64 \
openblas-openmp.x86_64 \
&& rm -rf /var/cache/yum \
&& yum clean all \
&& dnf clean all \
&& rm -rf /usr/share/doc \
&& rm -rf /usr/share/man \
&& ln -s /usr/include/openmpi-x86_64/ /usr/lib64/openmpi/include
# command prompt name
CNAME=wrf.dev
# setting PS1 via $SINGULARITY_ENVIRONMENT does not work; it goes into /.singularity.d/env/91-environment.sh
echo "export PS1=\"[IMGW-$CNAME]\w\$ \"" >> /.singularity.d/env/99-zz-custom-env.sh
# not sure why that does not happen as default
echo "export PKG_CONFIG_PATH=/usr/lib64/openmpi/lib/pkgconfig/" >> $SINGULARITY_ENVIRONMENT
%environment
export LD_LIBRARY_PATH=/usr/lib64/openmpi/lib:/usr/lib64:/lib64:/lib
export PATH=/usr/lib64/openmpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export LIBRARY=/usr/lib64/openmpi/lib:/usr/lib64:/lib64:/lib
export INCLUDE=/usr/include/openmpi-x86_64/:/usr/lib64/gfortran/modules/openmpi:/usr/include
export NETCDF=/usr/lib64/openmpi
export NETCDF_ROOT=/usr/lib64/openmpi
export HDF5_ROOT=/usr/lib64/openmpi
export JASPERINC=/usr/include/jasper/
export JASPERLIB=/usr/lib64/
@@ -93,5 +93,6 @@ Stage: final
echo "export PS1=\"[IMGW-$CNAME]\w\$ \"" >> /.singularity.d/env/99-zz-custom-env.sh
%environment
export LC_ALL=C
export PATH=/wrf/bin:$PATH
export WRF_BUILD_TARGET=em_real
@@ -16,6 +16,11 @@ bind special directories
development using includes and libraries from inside the container
> ./image.sif gfortran -I\$INCLUDE -L\$LIBRARY -o test.x test.f90
Directory not found?
By default singularity only mounts your home directory into the container, so any other directories you need must be bound explicitly:
> export SINGULARITY_BIND="/path1,/path2"
> ./image.sif
WRF Simulations - Example
default run files are located in /wrf/run
executables are located in /wrf/bin
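Putting the pieces together for a run: bind a host case directory into the container, then call the executable from /wrf/bin. A sketch of assembling the bind specification (the host path and the choice of mounting over /wrf/run are assumptions):

```shell
# hypothetical case directory on the host
CASE_DIR=$PWD/my-case
# mount it over the container's default run directory (assumed target)
export SINGULARITY_BIND="$CASE_DIR:/wrf/run"
echo "$SINGULARITY_BIND"
```

With the bind exported, `./image.sif /wrf/bin/wrf.exe` would then see the case files under /wrf/run.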