diff --git a/README.md b/README.md index 9757972b..f8494e57 100644 --- a/README.md +++ b/README.md @@ -48,9 +48,13 @@ pip install --editable . ## Using TCTrack -Information and examples of how to use TCTrack from within Python after installation can -be found in the +Details of how to use TCTrack can be found in the [getting-started documentation online](https://tctrack.readthedocs.io/en/latest/getting-started/index.html). + +New users may wish to follow the [TCTrack tutorial](https://tctrack.readthedocs.io/en/latest/getting-started/tutorial.html) +using the scripts in the [`tutorial/`](https://github.com/Cambridge-ICCS/TCTrack/tree/main/tutorial) +directory. + For a complete description of the library API see the [API documentation](https://tctrack.readthedocs.io/en/latest/api/index.html). diff --git a/docs/getting-started/index.rst b/docs/getting-started/index.rst index 72847c0e..8ab8b55a 100644 --- a/docs/getting-started/index.rst +++ b/docs/getting-started/index.rst @@ -3,7 +3,11 @@ Getting Started This getting started guide is intended for new users of TCTrack to get them up and running as quickly as possible whilst introducing the main concepts. -More advanced users should consult the full :doc:`API documentation`<../api/index>`. +More advanced users should consult the full :doc:`API documentation <../api/index>`. + +After reading the installation information here, new users may wish to work through the +:doc:`tutorial <tutorial>` to see examples of how TCTrack is used and familiarise +themselves with the workflow before using their own data. .. toctree:: :maxdepth: 3 diff --git a/docs/getting-started/tutorial.rst b/docs/getting-started/tutorial.rst new file mode 100644 index 00000000..44566b99 --- /dev/null +++ b/docs/getting-started/tutorial.rst @@ -0,0 +1,180 @@ +TCTrack Tutorial +================ + +This page outlines a tutorial for using TCTrack to generate cyclone tracks using +Tempest Extremes and TSTORMS.
+ +We work through all steps of the process, from obtaining and preparing data, +installing and using TCTrack and the wrapped algorithms, to plotting some simple +outputs. + +Details are provided for installing and running multiple tracking algorithms, though +you may choose to follow just one. + +.. toctree:: + :maxdepth: 2 + :hidden: + +.. Import tctrack to use references throughout this page. +.. py:module:: tctrack + :no-index: + + +Installation +------------ + +Install TCTrack from GitHub. +It is recommended that a Python virtual environment is used. +For more information see the :ref:`installation instructions `.:: + + git clone git@github.com:Cambridge-ICCS/TCTrack.git + cd TCTrack + pip install . + + +The next step is to install the trackers we want to call from TCTrack, in this case +:ref:`Tempest Extremes <../tracking-algorithms/tempest_extremes>` and +:ref:`TSTORMS <../tracking-algorithms/tstorms>`. + +Ensure that the following dependencies have been installed: + +* NetCDF (with C++ bindings for Tempest Extremes and Fortran bindings for TSTORMS) +* A C++ compiler (for Tempest Extremes) +* A Fortran compiler (for TSTORMS) -- ifort is assumed, + though :ref:`others are available <../tracking-algorithms/tstorms:installation>` + + +From your cloned version of TCTrack navigate to the ``tutorial/`` directory at the top +level:: + + cd tutorial + +This directory contains a number of pre-prepared scripts demonstrating a simple +TCTrack workflow to provide an example of usage and take you through the process of +detecting cyclone tracks from climate data. + + +To install Tempest Extremes, source the installation script, which will +clone and build Tempest Extremes locally under the tutorial directory, as +described on the :ref:`Tempest Extremes pages <../tracking-algorithms/tempest_extremes>`, +and add the executables to the ``PATH``.
+Read carefully to understand and check that you are happy with what it will do before +running:: + + source install_tempest_extremes.sh + + +To install TSTORMS, source the installation script to clone and build TSTORMS locally +under the tutorial directory, as described on the :ref:`TSTORMS pages <../tracking-algorithms/tstorms>`. +Read carefully to understand and check that you are happy with what it will do before +running:: + + source install_tstorms.sh + + +Obtaining Data +-------------- + +We will use example data from CMIP6, specifically from the HadGEM3-GC31-HM model and the +hist-1950 experiment. We will use just a subset of ASO (August, September, October) +from 1950 to demonstrate the code and some preprocessing techniques. + +This data can be obtained from the `ESGF CEDA archive `_ +or directly from `CEDA `_ using the included fetch-data script:: + + ./fetch_data.sh + +This will fetch the NetCDF data files using ``wget`` and place them in a ``data/`` +directory. +Again, read carefully to understand and check that you are happy with what it will do +before running. + + +Pre-processing of Data +---------------------- + +Some preprocessing of the data is required. +We will do this using cf-python and esmpy as described on the +:doc:`../data/preprocessing_data` pages. + +The recommended method for installing esmpy is via Conda, for which we recommend using +`miniforge `_. +From a conda base environment create a new environment with cf-python and esmpy installed:: + + conda create -n regrid -c conda-forge cf-python cf-plot udunits2 esmpy + conda activate regrid + +From inside the conda environment run the regridding script to pre-process the data:: + + python regrid.py + +This will pre-process the downloaded data as required for our codes and place it in +``data_processed/``.
+This includes the following processes: + +* Target months (ASO) are extracted from the yearly files +* The 3hr data is mapped to the same daily time values as the daily data +* Single variables at single levels are extracted for TSTORMS inputs +* Vorticity is calculated from velocity data (following regridding to the surface grid) +* A mean is taken over pressure levels of temperature + +Note that since this requires significant I/O it may take a little time to complete. +Note also the use of the Python ``del`` statement where appropriate, as the data consumes +a large amount of memory which we want to free when possible. + + +Running the code +---------------- + +To run Tempest Extremes over the data we use the enclosed Python script, which will +execute using the parameters from the [Ullrich2021]_ paper:: + + python run_tempest_extremes.py + +Intermediate files will be placed in ``te_outputs/`` whilst the resulting tracks file +will be output as ``tracks_tempest_extremes.nc``. +These can be inspected using:: + + ncdump -h tracks_tempest_extremes.nc + + +To run TSTORMS over the data we use the enclosed Python script, which will +execute similarly to the [Vitart2001]_ paper:: + + python run_tstorms.py + +Intermediate files will be placed in ``tstorms_outputs/`` whilst the resulting tracks file +will be output as ``tracks_tstorms.nc``. +These can be inspected using:: + + ncdump -h tracks_tstorms.nc + + +Visualising Results +------------------- + +Finally we can visualise the results by plotting the tracks from the output data files. +This can be done by running the included plotting script:: + + python plot_tracks.py + +which will generate a PNG figure of tracks plotted on a map using wind speed as a measure +of intensity. + +By default this will read from ``tracks_tempest_extremes.nc``, but this can be changed in +the file.
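For example, to visualise the TSTORMS results instead, edit the input filename variable near the top of ``plot_tracks.py``:

```python
# In plot_tracks.py the input file is set by this variable; point it at the
# TSTORMS output produced by run_tstorms.py to plot those tracks instead.
TCTRACK_DATA = "tracks_tstorms.nc"
```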
+ +Note that the plotting script requires the following Python packages to be installed in +the local environment: ``numpy``, ``netCDF4``, ``matplotlib``, and ``cartopy``. + + + +.. rubric:: References + +.. [Ullrich2021] Ullrich, Paul A., et al. “TempestExtremes v2.1: A Community Framework for Feature Detection, Tracking, and Analysis in Large Datasets.” Geoscientific Model Development 14, no. 8 (2021): 5023–48. https://doi.org/10.5194/gmd-14-5023-2021. + +.. [Vitart2001] Vitart, Frédéric, and Timothy N. Stockdale. + "Seasonal Forecasting of Tropical Storms Using Coupled GCM Integrations", + Monthly Weather Review 129, 10 (2001): 2521-2537. + `https://doi.org/10.1175/1520-0493(2001)129\<2521:SFOTSU\>2.0.CO;2 <https://doi.org/10.1175/1520-0493(2001)129%3C2521:SFOTSU%3E2.0.CO;2>`_ + diff --git a/docs/index.rst b/docs/index.rst index 978b7975..aca224dd 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -26,6 +26,7 @@ the output of different algorithms for a variety of data sources. :caption: Contents: Getting Started + Tutorial Tracking Algorithms API Documentation Data Formats and Manipulation diff --git a/tutorial/README.md b/tutorial/README.md new file mode 100644 index 00000000..6cfbab62 --- /dev/null +++ b/tutorial/README.md @@ -0,0 +1,16 @@ +# TCTrack Tutorial + +This directory contains a number of scripts used in the +[TCTrack tutorial](https://tctrack.readthedocs.io/en/latest/getting-started/tutorial.html). + +For full details please see the [online tutorial documentation](https://tctrack.readthedocs.io/en/latest/getting-started/tutorial.html).
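A typical end-to-end run of the scripts in this directory, assuming the compilers, NetCDF bindings, and the ``regrid`` conda environment described in the tutorial documentation are available, looks like:

```shell
source install_tempest_extremes.sh  # build Tempest Extremes and add it to PATH
source install_tstorms.sh           # build TSTORMS locally
./fetch_data.sh                     # download sample CMIP6 data into data/
conda activate regrid               # environment providing cf-python and esmpy
python regrid.py                    # pre-process data into data_processed/
python run_tempest_extremes.py      # generates tracks_tempest_extremes.nc
python run_tstorms.py               # generates tracks_tstorms.nc
python plot_tracks.py               # plots the tracks to my_tracks.png
```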
+ +Contained in this directory are: + +- `install_tempest_extremes.sh` - to install Tempest Extremes locally +- `install_tstorms.sh` - to install TSTORMS locally +- `fetch_data.sh` - to download sample CMIP data +- `regrid.py` - to preprocess downloaded data for use with the tracking codes (requires ESMPy) +- `run_tempest_extremes.py` - to run Tempest Extremes to generate tracks +- `run_tstorms.py` - to run TSTORMS to generate tracks +- `plot_tracks.py` - to visualise the tracks generated by TCTrack diff --git a/tutorial/fetch_data.sh b/tutorial/fetch_data.sh new file mode 100644 index 00000000..bf9b3742 --- /dev/null +++ b/tutorial/fetch_data.sh @@ -0,0 +1,29 @@ +# Create a directory to store the data in if it does not already exist +mkdir -p data/ + +# Fetch data from various sources using wget and place in directory + +# Most data can be obtained from ESGF CEDA portal: +# psl +wget --directory-prefix data https://esgf.ceda.ac.uk/thredds/fileServer/esg_cmip6/CMIP6/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/day/psl/gn/files/d20180730/psl_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500101-19501230.nc +# sfc wind +wget --directory-prefix data https://esgf.ceda.ac.uk/thredds/fileServer/esg_cmip6/CMIP6/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/day/sfcWind/gn/files/d20180730/sfcWind_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500101-19501230.nc +# orography +wget --directory-prefix data https://esgf.ceda.ac.uk/thredds/fileServer/esg_cmip6/CMIP6/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/fx/orog/gn/files/d20200910/orog_fx_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn.nc +# t +wget --directory-prefix data https://esgf.ceda.ac.uk/thredds/fileServer/esg_cmip6/CMIP6/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/day/ta/gn/files/d20180730/ta_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500701-19501230.nc + +# u +wget --directory-prefix data 
https://esgf.ceda.ac.uk/thredds/fileServer/esg_cmip6/CMIP6/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/day/ua/gn/files/d20180730/ua_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500701-19501230.nc +# us +wget --directory-prefix data https://esgf.ceda.ac.uk/thredds/fileServer/esg_cmip6/CMIP6/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/day/uas/gn/files/d20180730/uas_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500101-19501230.nc + +# v +wget --directory-prefix data https://esgf.ceda.ac.uk/thredds/fileServer/esg_cmip6/CMIP6/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/day/va/gn/files/d20180730/va_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500701-19501230.nc +# vs +wget --directory-prefix data https://esgf.ceda.ac.uk/thredds/fileServer/esg_cmip6/CMIP6/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/day/vas/gn/files/d20180730/vas_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500701-19501230.nc + +# zg has to be fetched directly from CEDA as it is not present on ESGF +wget https://dap.ceda.ac.uk/badc/cmip6/data/PRIMAVERA/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/Prim3hrPt/zg7h/gn/files/d20180730/zg7h_Prim3hrPt_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_195008010000-195008302100.nc?download=1 -O data/zg7h_Prim3hrPt_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_195008010000-195008302100.nc +wget https://dap.ceda.ac.uk/badc/cmip6/data/PRIMAVERA/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/Prim3hrPt/zg7h/gn/files/d20180730/zg7h_Prim3hrPt_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_195009010000-195009302100.nc?download=1 -O data/zg7h_Prim3hrPt_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_195009010000-195009302100.nc +wget https://dap.ceda.ac.uk/badc/cmip6/data/PRIMAVERA/HighResMIP/MOHC/HadGEM3-GC31-HM/hist-1950/r1i1p1f1/Prim3hrPt/zg7h/gn/files/d20180730/zg7h_Prim3hrPt_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_195010010000-195010302100.nc?download=1 -O data/zg7h_Prim3hrPt_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_195010010000-195010302100.nc diff --git
a/tutorial/install_tempest_extremes.sh b/tutorial/install_tempest_extremes.sh new file mode 100644 index 00000000..2edc935f --- /dev/null +++ b/tutorial/install_tempest_extremes.sh @@ -0,0 +1,22 @@ +# This script will clone, build, and install Tempest Extremes. + +# It is assumed that the following dependencies are installed: +# - CMake +# - C++ compiler +# - NetCDF with C++ bindings + +# Clone and checkout specific commit TCTrack has been tested against +git clone https://github.com/ClimateGlobalChange/tempestextremes.git +cd tempestextremes/ +git checkout 5feb3a04d29fd62a1f13fa9c0b85daeefcbecd6f + +# Build using CMake +mkdir build +cmake -B build/ -DCMAKE_BUILD_TYPE=Release -DENABLE_MPI=OFF . +cmake --build build/ + +# Add Tempest extremes binaries to the path +export PATH="$PATH:$PWD/build/bin" + +# return to tutorial directory +cd ../ diff --git a/tutorial/install_tstorms.sh b/tutorial/install_tstorms.sh new file mode 100644 index 00000000..bf466e69 --- /dev/null +++ b/tutorial/install_tstorms.sh @@ -0,0 +1,18 @@ +# This script will clone, build, and install TSTORMS. + +# It is assumed that the following dependencies are installed: +# - Fortran compiler +# - NetCDF with Fortran bindings + +# Clone and checkout specific commit TCTrack has been tested against +git clone https://github.com/Cambridge-ICCS/TSTORMS.git +cd TSTORMS/ + +# Build using Make +cd tstorms_driver/ +make +cd ../trajectory_analysis/ +make + +# return to tutorial directory +cd ../../ diff --git a/tutorial/plot_tracks.py b/tutorial/plot_tracks.py new file mode 100644 index 00000000..c5d9b5ee --- /dev/null +++ b/tutorial/plot_tracks.py @@ -0,0 +1,76 @@ +""" +Script to plot tracks from TCTrack output. + +Change the value of `TCTRACK_DATA` to use different input files. 
+""" + +import cartopy.crs as ccrs +import matplotlib.pyplot as plt +import netCDF4 +import numpy as np + +TCTRACK_DATA = "tracks_tempest_extremes.nc" + +# Open the NetCDF file +with netCDF4.Dataset(TCTRACK_DATA) as ncfile: + # Read variables + lat_var = ncfile.variables["lat"] + lon_var = ncfile.variables["lon"] + time_var = ncfile.variables["time"] + intensity_var = ncfile.variables["wind_speed"] + traj_var = ncfile.variables["trajectory"] + + lats = lat_var[:] + lons = lon_var[:] + intensity = intensity_var[:] + traj_labels = traj_var[:] + times = time_var[:] + + # Convert times to datetime objects + missing_time = getattr(time_var, "missing_value", np.nan) + times = np.ma.masked_where(times == missing_time, times) + time_units = time_var.units + time_calendar = time_var.calendar + times_dt = netCDF4.num2date(times, units=time_units, calendar=time_calendar) + + # Get intensity metadata for labels + intensity_name = intensity_var.long_name + intensity_units = getattr(intensity_var, "units", "") + min_intensity = np.nanmin(intensity) + max_intensity = np.nanmax(intensity) + + plt.figure(figsize=(10, 3)) + ax = plt.axes(projection=ccrs.PlateCarree()) + ax.coastlines() + gl = ax.gridlines(draw_labels=True) + gl.top_labels = False + gl.right_labels = False + gl.left_labels = True + gl.bottom_labels = True + + # Plot each trajectory + for i in traj_labels: + times_i = times_dt[i, :].compressed() + label = ( + f"{times_i[0].strftime('%Y-%m-%d %H:%M')} to " + f"{times_i[-1].strftime('%Y-%m-%d %H:%M')}" + ) + pl = ax.plot( + lons[i], lats[i], "--", transform=ccrs.PlateCarree(), label=f"{label}" + ) + sc = ax.scatter( + lons[i], + lats[i], + c=intensity[i], + cmap="viridis", + s=40, + vmin=min_intensity, + vmax=max_intensity, + transform=ccrs.PlateCarree(), + ) + + plt.colorbar(sc, label=f"{intensity_name} ({intensity_units})") + plt.title(f"All Trajectories Colored by {intensity_name}") + plt.legend() + plt.tight_layout() + plt.savefig("my_tracks.png") diff --git 
a/tutorial/regrid.py b/tutorial/regrid.py new file mode 100644 index 00000000..4c254bef --- /dev/null +++ b/tutorial/regrid.py @@ -0,0 +1,188 @@ +"""Script to pre-process and regrid CMIP6 data for use in TCTrack analysis.""" + +import os +import shutil + +import cf + +# Set up file structure +data_dir = "data/" +data_out = "data_processed/" +os.makedirs(data_out, exist_ok=True) + + +# Define time window for data - ASO 1950 +ASO_1950 = cf.wi(cf.dt("1950-08-01"), cf.dt("1950-11-01"), open_upper=True) + + +# ======== Tempest Extremes ======== +# Extract ASO from annual data files +print("Extracting subspace from psl...", end="", flush=True) +field_psl = cf.read( + f"{data_dir}/psl_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500101-19501230.nc" +)[0] +field_psl = field_psl.subspace(T=ASO_1950) +print("writing data...", end="", flush=True) +cf.write( + field_psl, + f"{data_out}/psl_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500801-19501030.nc", +) +print("done.") + +print("Extracting subspace from sfcWind...", end="", flush=True) +field_sfcWind = cf.read( + f"{data_dir}/sfcWind_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500101-19501230.nc" +)[0] +field_sfcWind = field_sfcWind.subspace(T=ASO_1950) +print("writing data...", end="", flush=True) +cf.write( + field_sfcWind, + f"{data_out}/sfcWind_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500801-19501030.nc", +) +del field_sfcWind +print("done.") + +# Combine the monthly zg files into one, subspacing in time to the above resolutions: +print("Combining zg files into one...", end="", flush=True) +input_files = [ + f"{data_dir}/zg7h_Prim3hrPt_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_195008010000-195008302100.nc", + f"{data_dir}/zg7h_Prim3hrPt_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_195009010000-195009302100.nc", + f"{data_dir}/zg7h_Prim3hrPt_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_195010010000-195010302100.nc", +] + +field_zg_in = cf.read(input_files)[0] +# zg is 3hr data from 00:00 but we want daily at 12:00, so subspace with a 
slice +field_zg = field_zg_in.subspace(T=slice(4, None, 8)) +del field_zg_in +print("writing data...", end="", flush=True) +cf.write( + field_zg, + f"{data_out}/zg7h_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500801-19501030.nc", +) +del field_zg +print("done.") + +# Copy orography across directly: +print("Copying orography file...", end="", flush=True) +shutil.copy( + f"{data_dir}/orog_fx_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn.nc", f"{data_out}" +) +print("done.") + + +# ======== TSTORMS ======== +print("Renaming slp...", end="", flush=True) +field_psl.nc_set_variable("slp") +print("writing data...", end="", flush=True) +cf.write( + field_psl, + f"{data_out}/slp_day_ASO50.nc", +) +del field_psl +print("done.") + +print("Extracting subspace from uas and renaming...", end="", flush=True) +field_uas_in = cf.read( + f"{data_dir}/uas_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500101-19501230.nc" +)[0] +field_uas = field_uas_in.subspace(T=ASO_1950) +del field_uas_in +field_uas.nc_set_variable("u_ref") +print("writing data...", end="", flush=True) +cf.write( + field_uas, + f"{data_out}/u_ref_day_ASO50.nc", +) +print("done.") + +print("Extracting subspace from vas and renaming...", end="", flush=True) +field_vas_in = cf.read( + f"{data_dir}/vas_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500701-19501230.nc" +)[0] +field_vas = field_vas_in.subspace(T=ASO_1950) +field_vas.regrids(field_uas, method="linear", inplace=True) +field_vas.nc_clear_dataset_chunksizes() +del field_vas_in +field_vas.nc_set_variable("v_ref") +print("writing data...", end="", flush=True) +cf.write( + field_vas, + f"{data_out}/v_ref_day_ASO50.nc", +) +del field_vas +print("done.") + + +print("Extracting subspace from ua and va to calculate vorticity and renaming...") +print("\tExtracting subspace from ua for u850...", end="", flush=True) +field_ua = cf.read( + f"{data_dir}/ua_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500701-19501230.nc" +)[0] +field_u850 = field_ua.subspace(T=ASO_1950, Z=[1]) +del 
field_ua +field_u850.squeeze(inplace=True) +field_u850_rg = field_u850.regrids(field_uas, method="linear") +del field_u850 +field_u850_rg.nc_clear_dataset_chunksizes() +field_u850_rg.nc_set_variable("u850") +print("writing data...", end="", flush=True) +cf.write( + field_u850_rg, + f"{data_out}/u850_day_ASO50.nc", +) +print("done.") + +print("\tExtracting subspace from va for v850...", end="", flush=True) +field_va = cf.read( + f"{data_dir}/va_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500701-19501230.nc" +)[0] +field_v850 = field_va.subspace(T=ASO_1950, Z=[1]) +del field_va +field_v850.squeeze(inplace=True) +field_v850_rg = field_v850.regrids(field_uas, method="linear") +del field_uas +del field_v850 +field_v850_rg.nc_clear_dataset_chunksizes() +field_v850_rg.nc_set_variable("v850") +print("writing data...", end="", flush=True) +cf.write( + field_v850_rg, + f"{data_out}/v850_day_ASO50.nc", +) +print("done.") + +print("\tCalculating vorticity for vort850...", end="", flush=True) +field_vort850 = cf.curl_xy(field_u850_rg, field_v850_rg, radius="earth") +del field_u850_rg +del field_v850_rg +field_vort850.filled(fill_value=0.0, inplace=True) +field_vort850.nc_set_variable("vort850") +field_vort850.set_property("standard_name", "atmosphere_upward_absolute_vorticity") +field_vort850.set_property("units", "s-1") +print("writing data...", end="", flush=True) +cf.write( + field_vort850, + f"{data_out}/vort850_day_ASO50.nc", +) +del field_vort850 +print("done.") + +print("done.") + +print("Extracting subspace and taking mean of ta and renaming...", end="", flush=True) +field_ta_full = cf.read( + f"{data_dir}/ta_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500701-19501230.nc" +)[0] +# Extract 500 and 250 pressure levels and take mean +field_ta = field_ta_full.subspace(T=ASO_1950, Z=slice(3, -3)) +del field_ta_full +field_ta.collapse("mean", axes="Z", inplace=True) +field_ta.squeeze(inplace=True) +field_ta.nc_set_variable("tm") +print("writing data...", end="", flush=True) 
+cf.write( + field_ta, + f"{data_out}/tm_day_ASO50.nc", +) +del field_ta +print("done.") diff --git a/tutorial/run_tempest_extremes.py b/tutorial/run_tempest_extremes.py new file mode 100644 index 00000000..769a5ddd --- /dev/null +++ b/tutorial/run_tempest_extremes.py @@ -0,0 +1,56 @@ +"""Script to run Tempest Extremes as part of the TCTrack tutorial.""" + +import tctrack.tempest_extremes as te + +data_dir = "data_processed/" + +input_files = [ + f"{data_dir}/zg7h_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500801-19501030.nc", + f"{data_dir}/psl_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500801-19501030.nc", + f"{data_dir}/sfcWind_day_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn_19500801-19501030.nc", + f"{data_dir}/orog_fx_HadGEM3-GC31-HM_hist-1950_r1i1p1f1_gn.nc", +] + +closed_contours = [ + te.TEContour(var="psl", delta=200.0, dist=5.5, minmaxdist=0.0), + te.TEContour( + var="_DIFF(zg7h(250hPa),zg7h(500hPa))", delta=-6.0, dist=6.5, minmaxdist=1.0 + ), +] + +output_commands = [ + te.TEOutputCommand(var="psl", operator="min", dist=0.0), + te.TEOutputCommand(var="orog", operator="max", dist=0.0), + te.TEOutputCommand(var="sfcWind", operator="max", dist=2.0), +] + +threshold_filters = [ + te.TEThreshold(var="lat", op="<=", value=50, count=10), + te.TEThreshold(var="lat", op=">=", value=-50, count=10), + te.TEThreshold(var="orog", op="<=", value=150, count=10), + te.TEThreshold(var="sfcWind", op=">=", value=10, count=10), +] + +dn_params = te.DetectNodesParameters( + in_data=input_files, + search_by_min="psl", + time_filter="6hr", + merge_dist=6.0, + closed_contours=closed_contours, + out_header=True, + output_commands=output_commands, + output_dir="te_outputs", +) + +sn_params = te.StitchNodesParameters( + caltype="360_day", + max_sep=8.0, + min_time="54h", + max_gap="24h", + min_endpoint_dist=8.0, + threshold_filters=threshold_filters, +) + +te_tracker = te.TETracker(dn_params, sn_params) + +te_tracker.run_tracker("tracks_tempest_extremes.nc") diff --git 
a/tutorial/run_tstorms.py b/tutorial/run_tstorms.py new file mode 100644 index 00000000..06d38606 --- /dev/null +++ b/tutorial/run_tstorms.py @@ -0,0 +1,45 @@ +"""Script to run TSTORMS as part of the TCTrack tutorial.""" + +import os + +from tctrack import tstorms + +tstorms_params = tstorms.TSTORMSBaseParameters( + tstorms_dir=f"{os.getcwd()}/TSTORMS", + output_dir="tstorms_outputs", + input_dir=f"{os.getcwd()}/data_processed", +) + +detect_params = tstorms.TSTORMSDetectParameters( + u_in_file="u_ref_day_ASO50.nc", + v_in_file="v_ref_day_ASO50.nc", + vort_in_file="vort850_day_ASO50.nc", + tm_in_file="tm_day_ASO50.nc", + slp_in_file="slp_day_ASO50.nc", + vort_crit=3.5e-5, + tm_crit=0.0, + thick_crit=50.0, + dist_crit=4.0, + lat_bound_n=70.0, + lat_bound_s=-70.0, + do_spline=False, + do_thickness=False, + use_sfc_wind=True, +) + +stitch_params = tstorms.TSTORMSStitchParameters( + r_crit=900.0, + wind_crit=15.0, + vort_crit=3.5e-5, + tm_crit=0.0, + n_day_crit=1, + do_filter=True, + lat_bound_n=70.0, + lat_bound_s=-70.0, +) + +tstorms_tracker = tstorms.TSTORMSTracker(tstorms_params, detect_params, stitch_params) +tstorms_tracker.run_tracker("tracks_tstorms.nc") + +# tstorms_tracker.stitch(verbose=True) +# tstorms_tracker.to_netcdf("tracks_tstorms.nc")