I have recently moved to Switzerland and started skiing again after many years. Curious about historical snow levels, I went looking for public data.
Finding out how much snow there is right now is very easy: many webpages provide this information for today. It is much harder to find the answer to the question: How much snow was there yesterday? Or last week? Or a year ago?
This is a recurring theme: data becomes hidden with time. It is true for weather data (my weather app only shows the future, not the past) and for supermarket prices. Historical data is often locked away or difficult to access.
So in this post I explore the available public data regarding snow depth in Switzerland.
Open Data
I pretty quickly found the open data portal opendata.swiss, which seems like the right spot for such data. It is really cool that Switzerland is embracing an open data model, which allows people like me, but also journalists and scientists, to use this data for research. I really appreciate this.
And indeed it has a dataset called Automatic weather stations - Measurement values, which seems to contain the information I want.
This dataset contains not only historical data but also recent measurements for a number of automatic measurement stations. And that is already a clue: "automatic measurement stations" sounds like the records might not reach very far back.
Automatic weather stations: Snow
To evaluate whether this dataset can answer my question, let's take a few skiing areas and see if we can find data for them. Without looking at the dataset first, I decided to check for:
- Engelberg
- Zermatt
- Savognin
- Meiringen
So to get the needed data we can use Python. As always I include the code in the post.
import polars as pl
import requests
station_metadata_csv_url = "https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/ogd-smn_meta_stations.csv"
parameter_metadata_csv_url = "https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/ogd-smn_meta_parameters.csv"
def load_remote_csv(url: str) -> pl.DataFrame:
    return pl.read_csv(
        url,
        separator=";",
        infer_schema_length=10000,
        encoding="latin1",
    )
stations = load_remote_csv(station_metadata_csv_url)
parameters = load_remote_csv(parameter_metadata_csv_url)
stations.select(
    [
        "station_abbr",
        "station_name",
        "station_coordinates_wgs84_lat",
        "station_coordinates_wgs84_lon",
    ]
).head(5)
| station_abbr | station_name | station_coordinates_wgs84_lat | station_coordinates_wgs84_lon |
|---|---|---|---|
| str | str | f64 | f64 |
| "ABO" | "Adelboden" | 46.491703 | 7.560703 |
| "AEG" | "Oberägeri" | 47.133636 | 8.608206 |
| "AIG" | "Aigle" | 46.326647 | 6.924472 |
| "ALT" | "Altdorf" | 46.887069 | 8.621894 |
| "AND" | "Andeer" | 46.610139 | 9.431981 |
This metadata lists the weather stations covered by the dataset, along with their locations.
To find the stations matching my query locations I use the latitude and longitude together with a 10 km radius, rather than relying on name matching.
So I looked up the latitude and longitude of each spot and made a list of query locations:
query_locations = [
    {"name": "Engelberg", "lat": 46.819950, "lon": 8.400650},
    {"name": "Zermatt", "lat": 46.020714, "lon": 7.749117},
    {"name": "Savognin", "lat": 46.596920, "lon": 9.597540},
    {"name": "Meiringen", "lat": 46.7285518, "lon": 8.1870934},
]
I can now use some math to find stations around these coordinates:
from math import radians, sin, cos, sqrt, atan2

query_radius = 10000  # in meters

def haversine_distance(lat1, lon1, lat2, lon2):
    R = 6371000  # Earth radius in meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = (
        sin(dlat / 2) ** 2
        + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    )
    c = 2 * atan2(sqrt(a), sqrt(1 - a))
    return R * c
stations_of_interest = []
for row in stations.iter_rows(named=True):
    station_lat = row["station_coordinates_wgs84_lat"]
    station_lon = row["station_coordinates_wgs84_lon"]
    for location in query_locations:
        distance = haversine_distance(
            station_lat, station_lon, location["lat"], location["lon"]
        )
        if distance <= query_radius:
            print(
                f"Station {row['station_abbr']} ({row['station_name']}) is within {query_radius} meters of {location['name']} (distance: {distance:.2f} m)"
            )
            stations_of_interest.append(row["station_abbr"].lower())
stations_of_interest.sort()
Station BRZ (Brienz) is within 10000 meters of Meiringen (distance: 9714.69 m)
Station ENG (Engelberg) is within 10000 meters of Engelberg (distance: 773.68 m)
Station GOR (Gornergrat) is within 10000 meters of Zermatt (distance: 5008.19 m)
Station MER (Meiringen) is within 10000 meters of Meiringen (distance: 1420.10 m)
Station PMA (Piz Martegnas) is within 10000 meters of Savognin (distance: 5640.72 m)
Station TIT (Titlis) is within 10000 meters of Engelberg (distance: 5815.58 m)
Station ZER (Zermatt) is within 10000 meters of Zermatt (distance: 985.44 m)
The Haversine formula gives a quick estimate of the distance between two points on a sphere; it is not 100% accurate (the Earth is not a perfect sphere), but it suffices for this purpose.
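As a quick sanity check of the implementation (repeated here so the snippet stands alone): one degree of latitude should come out to roughly 111 km on a sphere with the Earth's radius, regardless of longitude.

```python
from math import radians, sin, cos, sqrt, atan2

def haversine_distance(lat1, lon1, lat2, lon2):
    # same implementation as above
    R = 6371000  # Earth radius in meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = (
        sin(dlat / 2) ** 2
        + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    )
    return R * 2 * atan2(sqrt(a), sqrt(1 - a))

# one degree of latitude is ~111 km everywhere on the sphere
d = haversine_distance(46.0, 8.0, 47.0, 8.0)
print(f"{d / 1000:.1f} km")  # ~111.2 km
```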
This way it was quick to find the stations for my target areas. For Engelberg and Zermatt we do find exact name matches in the stations table, but for Savognin the closest hit is Piz Martegnas.
Now that I have the codes for the stations I can see what data is available for each.
STAC_ITEMS_URL = "https://data.geo.admin.ch/api/stac/v1/collections/ch.meteoschweiz.ogd-smn/items"

def iter_stac_items(limit: int = 1000):
    """
    Iterate over all items in the STAC collection, handling pagination.
    """
    url = STAC_ITEMS_URL
    params = {"limit": limit}
    while url:
        r = requests.get(url, params=params, timeout=30)
        r.raise_for_status()
        data = r.json()
        for item in data["features"]:
            yield item
        url = next(
            (
                link["href"]
                for link in data.get("links", [])
                if link["rel"] == "next"
            ),
            None,
        )
        params = None
for item in iter_stac_items():
    if item["id"] in stations_of_interest:
        print(f"Found item for station {item['id']}")
        for name, asset in item.get("assets", {}).items():
            print(f"  Asset: {name}, href: {asset['href']}")
        break
Found item for station brz
Asset: ogd-smn_brz_d_historical.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_d_historical.csv
Asset: ogd-smn_brz_d_recent.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_d_recent.csv
Asset: ogd-smn_brz_h_historical_1990-1999.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_h_historical_1990-1999.csv
Asset: ogd-smn_brz_h_historical_2000-2009.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_h_historical_2000-2009.csv
Asset: ogd-smn_brz_h_historical_2010-2019.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_h_historical_2010-2019.csv
Asset: ogd-smn_brz_h_historical_2020-2029.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_h_historical_2020-2029.csv
Asset: ogd-smn_brz_h_now.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_h_now.csv
Asset: ogd-smn_brz_h_recent.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_h_recent.csv
Asset: ogd-smn_brz_m.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_m.csv
Asset: ogd-smn_brz_t_historical_2000-2009.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_t_historical_2000-2009.csv
Asset: ogd-smn_brz_t_historical_2010-2019.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_t_historical_2010-2019.csv
Asset: ogd-smn_brz_t_historical_2020-2029.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_t_historical_2020-2029.csv
Asset: ogd-smn_brz_t_now.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_t_now.csv
Asset: ogd-smn_brz_t_recent.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_t_recent.csv
Asset: ogd-smn_brz_y.csv, href: https://data.geo.admin.ch/ch.meteoschweiz.ogd-smn/brz/ogd-smn_brz_y.csv
Above I fetched the available files for the first station. Let's take a moment to go through what this data means: it is split into daily (d), hourly (h), monthly (m), yearly (y), and 10-minute interval (t) datasets, with the historical data further split into 10-year chunks.
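Going by the file names listed above, a small helper can split an asset name into its parts. The regex below is my own reading of the naming scheme, not an official specification, so treat it as a sketch:

```python
import re

# Hypothetical parser for the asset naming scheme observed above:
# ogd-smn_<station>_<granularity>[_<kind>[_<decade>]].csv
ASSET_RE = re.compile(
    r"ogd-smn_(?P<station>[a-z0-9]+)_(?P<granularity>[dhmty])"
    r"(?:_(?P<kind>historical|recent|now))?"
    r"(?:_(?P<period>\d{4}-\d{4}))?\.csv"
)

def parse_asset_name(name: str) -> dict:
    m = ASSET_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"unrecognised asset name: {name}")
    return m.groupdict()

print(parse_asset_name("ogd-smn_brz_h_historical_1990-1999.csv"))
# {'station': 'brz', 'granularity': 'h', 'kind': 'historical', 'period': '1990-1999'}
```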
But as hinted at earlier, this dataset contains much more than snow data. To find out whether there is any snow data for the query locations, we need to work with the parameter file.
parameters.filter(
    pl.col("parameter_group_en").str.to_lowercase().str.contains("snow")
).select(
    [
        "parameter_shortname",
        "parameter_description_en",
        "parameter_group_en",
        "parameter_granularity",
    ]
)
| parameter_shortname | parameter_description_en | parameter_group_en | parameter_granularity |
|---|---|---|---|
| str | str | str | str |
| "htoauts0" | "Snow depth (automatic measurem… | "snow" | "T" |
| "htoauths" | "Snow depth (automatic measurem… | "snow" | "H" |
| "htoautd0" | "Snow depth (automatic measurem… | "snow" | "D" |
Clearly there are only three parameters we need to look at for now: htoautd0, htoauths, and htoauts0, each being the snow depth measurement at a different time resolution (daily, hourly, and 10-minute). With that we can write a simple function that fetches a CSV and reduces it to the snow data.
def get_df(url: str) -> pl.DataFrame:
    return (
        pl.read_csv(
            url,
            separator=";",
            encoding="latin1",
            infer_schema_length=1000000,
        )
        .with_columns(
            pl.col("reference_timestamp").str.to_datetime(
                format="%d.%m.%Y %H:%M", strict=True
            )
        )
        .unpivot(
            index=["station_abbr", "reference_timestamp"],
            variable_name="parameter_shortname",
            value_name="value",
        )
        .filter(
            pl.col("parameter_shortname").is_in(
                {"htoautd0", "htoauths", "htoauts0"}
            )
        )
        .with_columns(pl.col("value").cast(pl.Float64, strict=False))
        .drop_nulls(subset="value")
    )
dfs = []
for item in iter_stac_items():
    if item["id"] in stations_of_interest:
        for name, asset in item.get("assets", {}).items():
            try:
                df = get_df(asset["href"])
                if df.height > 0:
                    dfs.append(df)
            except Exception as e:
                print(
                    f"Error processing asset {name} for station {item['id']}: {e}"
                )

df_snow_depth = pl.concat(dfs, how="vertical").join(
    parameters.select(
        [
            "parameter_shortname",
            "parameter_granularity",
        ]
    ),
    on="parameter_shortname",
    how="left",
)
print(f"Found {df_snow_depth.height} rows of snow depth data.")
Found 2043341 rows of snow depth data.
This worked well: I found more than 2 million snow depth measurements. But since I am interested in historical data, let's first explore which time spans and resolutions the snow data covers.
import plotnine as p9

(
    p9.ggplot(
        df_snow_depth.with_columns(
            # make a year column by truncating the timestamp to the year
            pl.col("reference_timestamp").dt.year().alias("year")
        )
        .group_by(["station_abbr", "parameter_granularity", "year"])
        .agg(
            pl.len().alias("num_observations"),
        )
    )
    + p9.aes(x="year", y="num_observations", fill="station_abbr")
    + p9.geom_col(position="dodge")
    + p9.facet_grid("parameter_granularity ~ .", scales="free_y")
    + p9.theme_bw()
    + p9.labs(
        x="Year",
        y="Number of Observations",
        fill="Station",
    )
)

While I had seven stations of interest, I only have snow depth data for three of them. The span covered starts in the late 2000s, not even reaching back into the 90s, which is unfortunate, and MER only seems to cover the last two years. In addition, Engelberg seems to have better data coverage than Zermatt.
While we have this data let’s plot the daily snow level:
(
    p9.ggplot(
        df_snow_depth.with_columns(
            # make a year column by truncating the timestamp to the year
            pl.col("reference_timestamp").dt.year().alias("year")
        ).filter(pl.col("parameter_granularity") == "D")
    )
    + p9.aes(x="reference_timestamp", y="value", color="station_abbr")
    + p9.geom_line()
    + p9.theme_bw()
    + p9.labs(
        x="Time",
        y="Snow Depth (cm)",
        color="Station",
    )
    + p9.theme(
        figure_size=(12, 2.5),
    )
)

While this plot is interesting, it contains many outlier spikes. It will look nicer once we smooth the data a bit. The next figure shows the rolling mean over a week's worth of data.
df_smoothed = (
    df_snow_depth.filter(pl.col("parameter_granularity") == "D")
    .sort(["station_abbr", "reference_timestamp"])
    .group_by_dynamic(
        index_column="reference_timestamp",
        every="1d",
        period="7d",
        offset="-3d",
        group_by="station_abbr",
    )
    .agg(pl.col("value").mean().alias("value_weekly"))
)
(
    p9.ggplot(df_smoothed)
    + p9.aes(
        x="reference_timestamp",
        y="value_weekly",
        color="station_abbr",
    )
    + p9.geom_line()
    + p9.theme_bw()
    + p9.labs(
        x="Time",
        y="Snow Depth (cm)",
        color="Station",
    )
    + p9.theme(figure_size=(12, 2.5))
)

This has reduced the dynamic range a bit, reining in the outliers. It is now easy to see that the maximum values are around 150 cm, and that Zermatt has less snow this winter than last year. But maybe we can see that better if we compare the winter months directly to each other.
Let's make a plot showing the mean daily value for January of each year.
df_jan_mean = (
    df_snow_depth.filter(pl.col("parameter_granularity") == "D")
    .with_columns(
        year=pl.col("reference_timestamp").dt.year(),
        month=pl.col("reference_timestamp").dt.month(),
    )
    .filter(pl.col("month") == 1)
    .group_by(["station_abbr", "year"])
    .agg(pl.col("value").mean().alias("jan_mean"))
    .sort("year")
)
(
    p9.ggplot(df_jan_mean)
    + p9.aes(
        x="year",
        y="jan_mean",
        color="station_abbr",
    )
    + p9.geom_line()
    + p9.geom_point()
    + p9.theme_bw()
    + p9.labs(
        x="Year",
        y="January mean snow depth (cm)",
        color="Station",
    )
    + p9.theme(figure_size=(10, 3))
)

It's quite clear that 2018 was a heavy snow year in Zermatt. A quick Google search confirms this: in 2018 Zermatt had so much snow it was cut off from the outside world (News Story Example). So the data we have seems correct and helpful. It just does not go back far enough, and its spatial resolution is not quite what I want, as I cannot distinguish valley from mountain in this dataset.
After a nice email exchange with MeteoSwiss I was pointed to the data from the "WSL Institute for Snow and Avalanche Research SLF". I had found it in a first search, but assumed it was the same data just served via a different API. Maybe that was a wrong conclusion on my part.
The SLF data service
On the SLF webpage I quickly found the "SLF data service", which is similar to what MeteoSwiss serves in that it consists of CSV files with measurements.
For a start I focus on the historical data served there, where I should be able to repeat the same analysis as before. This data comes from the "IMIS measuring network", a network of nearly 200 automatic measuring stations spread across Switzerland, mostly in the Alps and at high altitudes.
stations_slf_url = "https://measurement-data.slf.ch/imis/stations.csv"

stations_slf = pl.read_csv(
    stations_slf_url,
    separator=",",
    encoding="utf-8",  # This file is UTF-8 encoded, not latin1
    infer_schema_length=1000000,
)
stations_slf.head(5)
| network | station_code | label | active | lon | lat | elevation | station_type |
|---|---|---|---|---|---|---|---|
| str | str | str | bool | f64 | f64 | i64 | str |
| "IMIS" | "ADE2" | "Engstligenalp" | true | 7.582633 | 46.434365 | 2304 | "SNOW_FLAT" |
| "IMIS" | "ADE3" | "Lavey-Sattligrat" | true | 7.490138 | 46.475527 | 2355 | "FLOWCAPT" |
| "IMIS" | "ALB2" | "Teststation Albula" | true | 9.836224 | 46.580945 | 2322 | "SNOW_FLAT" |
| "IMIS" | "ALI1" | "Vanil des Artses" | true | 6.987282 | 46.482248 | 1989 | "WIND" |
| "IMIS" | "ALI2" | "Chenau" | true | 6.993298 | 46.488632 | 1708 | "SNOW_FLAT" |
stations_of_interest_slf = []
for row in stations_slf.iter_rows(named=True):
    station_lat = row["lat"]
    station_lon = row["lon"]
    for location in query_locations:
        distance = haversine_distance(
            station_lat, station_lon, location["lat"], location["lon"]
        )
        if distance <= query_radius:
            print(
                f"Station {row['station_code']} ({row['label']}) is within {query_radius} meters of {location['name']} (distance: {distance:.2f} m)"
            )
            stations_of_interest_slf.append(row["station_code"].lower())
stations_of_interest_slf.sort()
Station ELA1 (Piz Salteras) is within 10000 meters of Savognin (distance: 8630.19 m)
Station ELA2 (Tschitta) is within 10000 meters of Savognin (distance: 8972.43 m)
Station GAD2 (Gschletteregg) is within 10000 meters of Engelberg (distance: 8427.68 m)
Station GOR2 (Gornergratsee) is within 10000 meters of Zermatt (distance: 4581.18 m)
Station GUT1 (Bänzlauistock) is within 10000 meters of Meiringen (distance: 8004.13 m)
Station GUT2 (Homad) is within 10000 meters of Meiringen (distance: 9549.20 m)
Station PMA2 (Colms da Parsonz) is within 10000 meters of Savognin (distance: 4963.64 m)
Station SCB2 (Schönbüel) is within 10000 meters of Meiringen (distance: 8517.86 m)
Station TIT2 (Titlisboden) is within 10000 meters of Engelberg (distance: 3833.47 m)
Station ZER1 (Platthorn) is within 10000 meters of Zermatt (distance: 3793.98 m)
Station ZER2 (Triftchumme) is within 10000 meters of Zermatt (distance: 2916.36 m)
Station ZER3 (Wisshorn) is within 10000 meters of Zermatt (distance: 2198.23 m)
Station ZER4 (Stafelalp) is within 10000 meters of Zermatt (distance: 4405.38 m)
Station ZER5 (Alp Hermetje) is within 10000 meters of Zermatt (distance: 3894.93 m)
Clearly this dataset is different from the MeteoSwiss one, but luckily the structure is similar enough that I could reuse my code with minimal changes.
In this dataset I found 14 stations in the vicinity of my target locations, which is promising.
The dataset is very comprehensive but lacks a description of the measurements; the MeteoSwiss dataset had the parameters.csv file for this. Without an explanation of the parameters it is difficult to make sense of names such as:
TA_30MIN_MEAN, VW_30MIN_MEAN, VW_30MIN_MAX, DW_30MIN_MEAN, RH_30MIN_MEAN, DW_30MIN_SD, HS, TS0_30MIN_MEAN, TS25_30MIN_MEAN, TS50_30MIN_MEAN, TS100_30MIN_MEAN, RSWR_30MIN_MEAN, TSS_30MIN_MEAN
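The names do follow conventions common in meteorological datasets, so one can at least guess at their meanings. The mapping below is purely my speculation and is not backed by any SLF documentation:

```python
# Speculative decoding of the IMIS column names, based on common
# meteorological naming conventions -- these are guesses, NOT documented.
guessed_parameters = {
    "TA_30MIN_MEAN": "air temperature, 30 min mean",
    "VW_30MIN_MEAN": "wind velocity, 30 min mean",
    "VW_30MIN_MAX": "wind velocity, 30 min maximum",
    "DW_30MIN_MEAN": "wind direction, 30 min mean",
    "DW_30MIN_SD": "wind direction, 30 min standard deviation",
    "RH_30MIN_MEAN": "relative humidity, 30 min mean",
    "HS": "snow height ('Schneehoehe')",
    "TS0_30MIN_MEAN": "snow/ground temperature at 0 cm",
    "TS25_30MIN_MEAN": "snow temperature at 25 cm",
    "TS50_30MIN_MEAN": "snow temperature at 50 cm",
    "TS100_30MIN_MEAN": "snow temperature at 100 cm",
    "RSWR_30MIN_MEAN": "reflected short-wave radiation, 30 min mean",
    "TSS_30MIN_MEAN": "snow surface temperature, 30 min mean",
}
```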
So instead of working with the detailed per-station sheets, I download the daily summary file that is also provided, which comes with only two data columns.
daily_snow_slf_url = (
    "https://measurement-data.slf.ch/imis/data/daily_snow_values/daily_snow.csv"
)

daily_snow_slf = (
    pl.read_csv(
        daily_snow_slf_url,
        separator=",",
        encoding="utf-8",
        infer_schema_length=1000000,
    )
    .filter(
        pl.col("station_code")
        .str.to_lowercase()
        .is_in(stations_of_interest_slf)
    )
    .with_columns(
        pl.col("measure_date").str.to_datetime(
            format="%Y-%m-%d %H:%M:%S%z", strict=True
        )
    )
)
daily_snow_slf.head()
| station_code | measure_date | hyear | HS | HN_1D |
|---|---|---|---|---|
| str | datetime[μs, UTC] | i64 | f64 | f64 |
| "TIT2" | 1993-06-01 00:00:00 UTC | 1993 | 139.0 | null |
| "TIT2" | 1993-06-02 00:00:00 UTC | 1993 | 132.0 | null |
| "TIT2" | 1993-06-03 00:00:00 UTC | 1993 | 128.0 | null |
| "TIT2" | 1993-06-04 00:00:00 UTC | 1993 | 129.0 | null |
| "TIT2" | 1993-06-05 00:00:00 UTC | 1993 | 124.0 | null |
This way I quickly fetched 70k lines of snow depth data, or at least I assume it is snow depth data; it could also be fresh snowfall or something else. I assume HS stands for snow height ('Schneehöhe'), but could not find that stated anywhere.
(
    p9.ggplot(
        daily_snow_slf.with_columns(
            # make a year column by truncating the timestamp to the year
            pl.col("measure_date").dt.year().alias("year")
        )
        .group_by(["station_code", "year"])
        .agg(
            pl.len().alias("num_observations"),
        )
    )
    + p9.aes(x="year", y="num_observations", fill="station_code")
    + p9.geom_col(position="dodge")
    + p9.facet_grid("station_code ~ .", scales="free_y")
    + p9.theme_bw()
    + p9.labs(
        x="Year",
        y="Number of Observations",
        fill="Station",
    )
)

Here I can quickly see that the ZER stations seem to be replacements of each other, with ZER4 replacing ZER3 and ZER5 replacing ZER4.
(
    p9.ggplot(
        daily_snow_slf.with_columns(
            # make a year column by truncating the timestamp to the year
            pl.col("measure_date").dt.year().alias("year")
        )
    )
    + p9.aes(x="measure_date", y="HS", color="station_code")
    + p9.geom_line()
    + p9.theme_bw()
    + p9.labs(
        x="Time",
        y="Snow Depth (cm)",
        color="Station",
    )
    + p9.theme(
        figure_size=(12, 2.5),
    )
)
/home/paul/miniforge3/envs/openpaulgithub/lib/python3.13/site-packages/plotnine/geoms/geom_path.py:100: PlotnineWarning: geom_path: Removed 2 rows containing missing values.

Very nice: we can clearly see the winters each year, and no smoothing is needed to make the plot readable. Of course this visualisation is too busy to show much detail, but it gives me an overview of what data there is.
What's pretty cool is that the "historical" dataset is very up to date, with values well into February 2026; it appears to be updated roughly weekly.
Let's again focus on a single slice of time. This time I choose a week: the third week of February, in other words the 8th week of the year.
slf_mean = (
    daily_snow_slf.filter(pl.col("measure_date").dt.week() == 8)
    .with_columns(pl.col("measure_date").dt.year().alias("year"))
    .group_by(["station_code", "year"])
    .agg(pl.col("HS").mean().alias("mean_hs"))
)

(
    p9.ggplot(slf_mean)
    + p9.aes(x="year", y="mean_hs", color="station_code")
    + p9.geom_line()
    + p9.geom_point()
    + p9.theme_bw()
    + p9.labs(
        x="Year",
        y="Mean snow depth in 3rd week\nof February (cm)",
        color="Station",
    )
    + p9.theme(figure_size=(10, 3))
)
/home/paul/miniforge3/envs/openpaulgithub/lib/python3.13/site-packages/plotnine/layer.py:364: PlotnineWarning: geom_point : Removed 2 rows containing missing values.

Here we can see a lot of year-to-year variability in the snow heights. The data ranges from the 90s to 2025. Overall the trends align across stations, which makes sense, as this is all data from Switzerland and we do not expect large variations across the country. But it is hard to make out a difference between, for example, the 2000s and the 2010s.
When Is There the Most Snow?
Can I see from the data which month has the most snow on average?
For this I simply aggregate the data by month and visualize it as a boxplot.
month_enum = pl.Enum(
    [
        "Aug",
        "Sep",
        "Oct",
        "Nov",
        "Dec",
        "Jan",
        "Feb",
        "Mar",
        "Apr",
        "May",
        "Jun",
        "Jul",
    ]
)

(
    p9.ggplot(
        daily_snow_slf.with_columns(
            pl.col("measure_date")
            .dt.strftime("%b")
            .alias("month")
            .cast(month_enum)
        )
    )
    + p9.aes(x="month", y="HS")
    + p9.geom_boxplot()
    + p9.theme_bw()
    + p9.theme(figure_size=(6, 4))
)
/home/paul/miniforge3/envs/openpaulgithub/lib/python3.13/site-packages/plotnine/layer.py:284: PlotnineWarning: stat_boxplot : Removed 2224 rows containing non-finite values.

And yes, it is clear that snow builds up from October until March or April, with the melting setting in during April and May. This aligns well with a lot of the higher hiking tracks opening in June/July.
I am no climate scientist, so I will stop the analysis here. It was rather interesting to learn about the available data and see how easy it can be to work with, but also how the MeteoSwiss and SLF open data portals serve datasets that seem very similar while being quite different.
There is much one can still do with this data and the SLF has many more resources which you can check out: www.slf.ch/en/about-the-slf/services-and-products