hale»connect Release Notes: January 2022
03.01.2022 by Akshat Bajaj, Kate Lyndegaard

Here’s what’s new in hale»connect this month!

For Users

New Features

hale»connect now offers a TN-ITS endpoint. The TN-ITS specification focuses on “describing the exchange of (changes) of road attributes, with the emphasis on static road data.” Through this endpoint, TN-ITS data providers can make their data sets available via the standardized TN-ITS REST service interface, and TN-ITS data consumers can obtain these data sets via the same interface.

Supported endpoints:

/download/queryDataSets returns all TN-ITS datasets of an organization as a tnits:TNITSRestDatasetRefList (see API data model).

/download/queryDataSets?lastValidDataSetID=<base64-encoded datasetId> returns all TN-ITS datasets of an organization as a tnits:TNITSRestDatasetRefList that chronologically come after the dataset specified in the parameter.

/download/readDataSet?dataSetID=<base64-encoded datasetId> returns the specified dataset as a tnits:RoadFeatureDataset.
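For illustration, here is a minimal sketch of how a TN-ITS consumer might call these endpoints from Python with the requests library. The base URL and dataset ID are placeholders, not real hale»connect values:

```python
import base64

import requests

BASE_URL = "https://example.haleconnect.com/tnits"  # hypothetical endpoint URL

# List all TN-ITS datasets of the organisation (tnits:TNITSRestDatasetRefList)
all_datasets = requests.get(f"{BASE_URL}/download/queryDataSets", timeout=30)
all_datasets.raise_for_status()

# The dataset ID parameters must be base64-encoded
dataset_id = base64.b64encode(b"my-dataset-id").decode("ascii")  # placeholder ID

# List only datasets that chronologically follow the given dataset
newer = requests.get(
    f"{BASE_URL}/download/queryDataSets",
    params={"lastValidDataSetID": dataset_id},
    timeout=30,
)

# Retrieve a single dataset as tnits:RoadFeatureDataset
dataset = requests.get(
    f"{BASE_URL}/download/readDataSet",
    params={"dataSetID": dataset_id},
    timeout=30,
)
print(dataset.text[:500])  # XML payload
```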


Changes

  • In gmd:MD_DataIdentification, users can now edit the revision date, publication date and creation date of a dataset at gmd:citation/gmd:CI_Citation/gmd:date/gmd:CI_Date/gmd:date, using either gco:Date or gco:DateTime. Users can use an autofill rule to populate these fields in the metadata editor; a schematic fragment of the affected element follows this list.
  • The migration of the hale»connect platform to Angular is ongoing. Recent upgrades include the migration of the file upload component to the new framework.
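To illustrate the first change above, here is a schematic ISO 19139 fragment showing where the editable dates live; the date value and date type are placeholders:

```xml
<gmd:citation>
  <gmd:CI_Citation>
    <!-- title etc. omitted -->
    <gmd:date>
      <gmd:CI_Date>
        <gmd:date>
          <!-- gco:DateTime may be used here instead of gco:Date -->
          <gco:Date>2022-01-03</gco:Date>
        </gmd:date>
        <gmd:dateType>
          <gmd:CI_DateTypeCode
              codeList="http://standards.iso.org/iso/19139/resources/gmxCodelists.xml#CI_DateTypeCode"
              codeListValue="revision"/>
        </gmd:dateType>
      </gmd:CI_Date>
    </gmd:date>
  </gmd:CI_Citation>
</gmd:citation>
```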

Fixes

  • We implemented a fix to generate external GetFeatureById links for INSPIRE Environmental Monitoring Facilities features published via WFS using deegree (see the example request after this list). Previously, the WFS response for published Environmental Monitoring Network features contained relative path links. The underlying issue was that in EnvironmentalMonitoringNetwork.contains, deegree assumed that the referenced element was a NetworkFacility object, the association class that links EnvironmentalMonitoringNetwork and EnvironmentalMonitoringFacility. Because NetworkFacility is not a feature type, deegree did not generate external GetFeatureById links for the referenced objects.
  • The global capacity update now runs only nightly, during a defined interval, reducing the load on MongoDB.
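For context, an external GetFeatureById link resolves to a WFS 2.0 stored-query request. A minimal sketch of such a request in Python follows; the service URL and feature ID are placeholders:

```python
import requests

WFS_URL = "https://example.haleconnect.com/services/wfs"  # hypothetical endpoint

# GetFeatureById is the stored query standardised by WFS 2.0
response = requests.get(
    WFS_URL,
    params={
        "SERVICE": "WFS",
        "VERSION": "2.0.0",
        "REQUEST": "GetFeature",
        "STOREDQUERY_ID": "urn:ogc:def:query:OGC-WFS::GetFeatureById",
        "ID": "EnvironmentalMonitoringFacility_1",  # placeholder feature ID
    },
    timeout=30,
)
print(response.status_code, response.headers.get("Content-Type"))
```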

Like last year, the INSPIRE conference was again held virtually. We all keep rooting for a physical conference in Dubrovnik, and it looks like next year we can finally make the trip!

The key theme of the 2021 conference was change. The community dove deep into the current status of INSPIRE, and where things are going given current legislative, governance and business needs.

A lot of concepts were thrown around – Monitoring Reports, the Green Deal, the EU Data Strategy, Environmental Data Spaces, OGC APIs, alternative encodings, and use cases. The list only gets longer – and somehow the INSPIRE conference connected all these dots.

But it’s easy to miss the big picture when you’re down in the technical trenches of a 5-day long virtual conference. Let’s try to zoom out (no pun intended) and see what the conference focused on and what it achieved.

First, let’s define some of the terminology that popped up across sessions:

Green Deal: A set of policy initiatives by the European Commission with the overarching aim of making the European Union climate neutral by 2050.

EU Data Strategy: The European strategy for data aims at creating a single market for data that will ensure Europe’s global competitiveness and data sovereignty.

Data Spaces: A data space is a data exchange where trusted partners share data for processing without sacrificing data sovereignty.

The state of INSPIRE today, and the way forward

There are three aspects to consider: legislation, governance, and technology. The INSPIRE Assessment report and INSPIRE conference sessions gave us a wealth of information on all three.

The assessment report called for:

  • Avoiding overspecification: Generally, avoid overly complex models. In some INSPIRE guidelines, there is a lot of unnecessary structure.
  • Demarcating technical and legal aspects: The INSPIRE Implementing Rules often go too deep into technical detail. This overlap means that certain technological aspects are too rigid, and consequently the directive cannot accommodate technological change easily.
  • Licensing frameworks: To “catalyse data sharing”, we can’t depend only on open data. To ensure the flow of all kinds of data, licensing frameworks need to be created, communicated, and implemented.

Bettina Rafaelson (COWI), Lise Oules (Milieu) and Nadine Alameh (OGC) presented on this topic. Bettina and Lise summarized a survey conducted in 31 countries, focused on finding where INSPIRE currently stands in the legislative sense. Lise said that “the added value of INSPIRE resided in the development of governance structures at national level for data sharing.”

While acknowledging the positives, this session also spoke about the negatives such as inflexibility and overcomplexity. Bettina brought in important recommendations, as shown below.

Legal INSPIRE Presentations at Online INSPIRE Conference 2021

Nadine built on this perspective in the closing notes of the conference and spoke of how INSPIRE legislation can be made future-proof with respect to technological advancements such as the rapid development of standards. Given that public sector legislation often lags behind private sector innovation, this perspective was more than welcome.

The governance aspect was anchored in a community modus operandi perspective. Codrina Ilie (OSGeo) called for an agile approach to SDIs that captures the complex nature of member states, and said that communities such as FOSS4G were critical in helping us to stay aligned with ever-changing technologies. The Assessment Report pointed to the outdated INSPIRE specifications and INSPIRE Artefact management – two things that were not maintained properly, with several bugs and other issues having been around for more than five years. To deal with this, the INSPIRE community suggested a more agile model that could better accommodate stakeholders and different requirements.

Graph of mean values of INSPIRE Compliance Indicators

The above graph from the report focuses on the results and gives an idea of how far technology has come. There’s still a way to go for most member states to be fully INSPIRE compliant. As a community we had a steep learning curve to get to this point, but we can’t stop now – our tech stack needs to evolve to stay compatible with a changing environment. The report outlined how we can achieve that:

  • Improve accessibility and findability of data, e.g. through Web Indexing
  • Maintain INSPIRE’s focus on open standards and ensure compatibility with new technology
  • Harness the power of APIs and Alternative Encodings to make INSPIRE data useful for those outside the spatial data specialist niche

Future of INSPIRE

Undoubtedly, the biggest development for INSPIRE is the creation of the Green Deal Data Space.

The European Green Deal data space will make environmental data accessible, usable, and useful. It will allow data providers to maintain data sovereignty and protect sensitive data, while unlocking data access for thousands of applications that will help make society more sustainable.

As Hugo de Groof said, “INSPIRE is the blueprint for the European Data Spaces”. INSPIRE has already achieved important steps, and many of the resources in the infrastructure could be considered “data space ready”. The most important aspects are the shared semantics – the data specifications – and the fact that more than 40,000 data sets have already been made accessible.

However, there are also some major TODOs left open. These include defining the governance rules for such green deal data spaces. Governance rules will define who will contribute what to the data space, and what they are allowed to do with the data.

An essential part of any data space is trusted, certified processing services, such as analytic models, transformation services, or machine learning models. These still need to be developed for a wide range of applications, but when they are, they can be rolled out across the EU to achieve optimal impact quickly – exactly what we need to achieve the goals of the Green Deal.

Another aspect is to think about the prioritisation, procurement, development, and deployment of the infrastructure. If public authorities continue to take years to specify their systems, more years to procure them, and even longer to have them custom-built, it will take too long to establish data spaces at scale, and best practices will be slow to proliferate. Instead, we expect to see more standard products and Software-as-a-Service solutions, such as our current INSPIRE-as-a-service offering hale»connect, or our future Dataspace-as-a-Service solution. Such infrastructures can be deployed to GAIA-X to achieve optimal digital sovereignty.

Now, this gives us the high-level strategic overview of what’s going on. However, the devil of strategy lies in the details of implementation. Here’s how the conference addressed turning that strategy into practice.

Governance: Going Beyond Priority Datasets

Governance changes need to occur on all levels. As mentioned earlier, we need to have an agile methodology to make these changes effectively. But there’s more to it – we’ve found that most organisations still don’t have INSPIRE, Open Data and other data strategies as a priority. To change this, organisations need to identify synergies and collaboration potentials and also ensure that long-term budgets are available where needed.

Additionally, as a community we also need to be open to the new, harness new cloud infrastructures such as GAIA-X and automated SaaS instead of increasing fragmentation by inventing individual solutions. Jürgen Moßgraber (Fraunhofer IOSB) spoke of the link between GAIA-X and INSPIRE and how this link promotes “cross-fertilisation of activities around data-sharing.” Similar synergies can be exploited with other automated SaaS.

Introduction to GAIA-X and what GAIA-X has to do with INSPIRE

There’s also a bigger picture. We saw an out-of-the-box perspective on this in the location interoperability session. CheeHai Teo from the UN-GGIM spoke about the Integrated Geospatial Information Framework and explained the necessity of having actors at the national level that push forward key geospatial initiatives. This kind of community push is what empowers successful SDIs. This session provided depth of content across different technical and political factors. It highlighted the synergies required between public sector and private sector players and demonstrated the full breadth of what local interoperability projects should look like.

Integrated Geospatial Information Frameworks: the 9 strategic pathways

Technical Stacks and the Industry Perspective

The INSPIRE conferences have always been a platform for sharing new technologies. The session “Past, present and future of INSPIRE: an industry perspective” showcased INSPIRE tech stacks from the industry’s point of view, and what can be expected in the future to make the data more useful. Safe Software did a good job of summing up some of the general challenges, as shown below.

Implementation challenges for INSPIRE

Thorsten, wetransform CEO, spoke in depth about the innovations technical INSPIRE stacks will require to keep pace with INSPIRE’s dynamic nature and requirements, especially given organisational constraints.

Keeping INSPIRE tech stacks up to date

Thorsten emphasised that implementers need to consider maintenance requirements when building their tech stack, as the creeping costs associated with operations and maintenance can make for a very rude awakening: “Always take the infrastructure and product perspective, commit to long-term effort and funding. Even in other areas like Industry 4.0, success only came after sustained investments and long-term commitments.”

There were also mentions of increasing INSPIRE’s usefulness through alternative encodings and OGC APIs. Johanna Ott, consultant at wetransform, stated that she “really liked that we are not talking about how to implement INSPIRE any longer but that the focus is on using the data. We are finally at a point where we can start generating added value from the data we’ve all worked on in the last years.”

As an example, wetransform showed how planned land use datasets are being used in Germany. These datasets receive over a million views each month, so it’s clear that the data is important to data users. We also highlighted another project, in which we created drone flight zones by deriving them from the INSPIRE Protected Sites theme. The INSPIRE data filled in the data gaps that were present, and the result was a portal in which one can see drone flight corridors. Learn more about this project here.

Across all presentations in this session, alternative encodings and new APIs were the two hot topics in terms of making better use of INSPIRE data – let’s dig deeper.

APIs

APIs are one of the keys to unlocking the full value of INSPIRE data. Plus, APIs are more mature than alternative encodings, as the interface needs to be defined only once. Alex Kotsev (JRC) left no ambiguity about the value of APIs in his session on technology trends.

Keeping INSPIRE tech stacks up to date

So far, there are four new APIs, mostly focused on data download:

  • SensorThings API: This type of download service is effective for delivering streams of sensor data and has already been approved as a good practice.
  • WCS 2.0 Download Service: The WCS 2.0 architecture is closer to WFS 2.0 than to the new OGC APIs. However, it adds value because it provides very efficient access to coverages.
  • OGC API, Features: This Download Service type is an entirely new API that builds on web standards and relies heavily on HTTP methods, JSON payloads and OpenAPI (see the request sketch after this list). It will likely be approved as a Good Practice very soon. A compliance test is already available in the ETF validator.
  • OGC API, Records: This is a new type of Metadata Catalogue interface, which pairs well with GeoDCAT-AP, a newer metadata format that is central to Open Data platforms. It is likely quite far away from being approved as a good practice, as the standard itself is not yet mature.
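To make the OGC API, Features pattern concrete, here is a minimal sketch of the two request types the standard defines; the service URL and collection name are placeholders:

```python
import requests

API_URL = "https://example.haleconnect.com/ogcapi"  # hypothetical landing page

# Discover the available collections (JSON)
collections = requests.get(f"{API_URL}/collections", timeout=30).json()
print([c["id"] for c in collections.get("collections", [])])

# Fetch a page of features from one collection as GeoJSON
items = requests.get(
    f"{API_URL}/collections/protected-sites/items",  # placeholder collection
    params={"limit": 10},
    timeout=30,
).json()
for feature in items.get("features", []):
    print(feature.get("id"))
```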

Alternative Encodings

Alternative encodings are a means to bridge the gap between a certain format and the needs of end users who want to work with the data. To enhance data usability, other formats or encodings can be used to complement the default encoding. In the context of INSPIRE, this can be an alternative encoding, i.e., one that fulfils all requirements of the INSPIRE Implementing Rules and can thus be used instead of the default encoding. Most users want simpler formats – but since these are alternative encodings for INSPIRE, the simpler formats must still contain sufficient information to be INSPIRE compliant. These encodings were also mentioned multiple times in the industry session on INSPIRE, and were seen as a creative way to make the most of INSPIRE data.

Keeping INSPIRE tech stacks up to date

The beauty of a good alternative encoding lies in the fact that INSPIRE compliance can be proven practically through automated transformation from the alternative encoding to the default encoding. For example, a GeoPackage is relatively easy to create with a hale»studio mapping. Because the GeoPackage uses a flattened and simplified relational schema, it can easily be picked up by a transformation service and converted to an INSPIRE compliant GML file – and boom, with only one easy manual mapping, you have an alternative encoding and an INSPIRE compliant dataset. The flowchart below describes such a potential workflow.

Workflow to create alternative encodings
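As a small illustration of why the flattened schema is so easy for downstream tools to pick up, here is a sketch that reads such a GeoPackage with GeoPandas; the file and layer names are placeholders:

```python
import geopandas as gpd

# A flattened INSPIRE GeoPackage reads like any simple-features dataset
gdf = gpd.read_file("planned_land_use.gpkg", layer="PlannedLandUse")  # placeholders

# Nested GML structures have been flattened into plain columns,
# so standard tabular operations work directly
print(gdf.columns.tolist())
print(gdf.head())
```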

The usefulness of INSPIRE: What can we actually do with INSPIRE?

Cross-domain problem solving

As we move toward data-driven decision-making, location intelligence is impossible to ignore. It permeates domains and industries.

The “Statistics and geospatial information” session showed how to optimize current statistical business processes, such as the Generic Statistical Business Process Model, with the help of geospatial data. We learned how geodata can play a part in essential business processes and break silos across sectors and industries – for example, the type of geography can influence the costs and risks associated with production and distribution.

We also saw how INSPIRE forms a basis for reporting processes in the e-Reporting session. In a joint presentation with Epsilon Italia, wetransform focused on the latest revision of European Noise Directive (END) reporting. Stefania Morrone from Epsilon spoke about how INSPIRE can be linked with the END, and how similarities between the two directives, such as shared core information and common cross-domain information, lead to an optimised workflow. Thorsten’s segment focused more on the technological aspect and how reporters can leverage GeoPackage for further reporting optimization. He also spoke about best practices for END reporting, such as flattening hierarchical structures and setting default dataset properties.

Workflow to create alternative encodings

What does all of this mean for me?

From our perspective, the JRC and central groups like the MIG-T have now transferred the torch to the community to push INSPIRE forward. Here’s what you can do:

  • Join in the governance party and encourage good practices: Keep things streamlined and homogeneous by creating accommodating initiatives. And if you want to see a change, make it happen! We’re there to help. It’s up to us as a community to start initiatives, build roadmaps and get funding.
  • Keep your tech stack up to date by investing in maintenance and future developments such as APIs and alternative encodings. Try to future-proof your toolkit by anticipating the needs of data users, and act accordingly.
  • Join the environmental data spaces community to help define what Green Deal Data Spaces will really look like.

A concluding remark

Overall, the theme of the event was clear: showcasing use cases of INSPIRE data, and the future development of the INSPIRE SDI with respect to sustainable development within the EU. It’s clear that a transition is in the works, and here’s how we’re supporting it:

  • hale»studio: Further support for alternative encodings such as GeoPackage, and templates for transformed models such as the European Noise Directive, in our latest release.
  • hale»connect: We will add general availability support for the OGC WCS API and the OGC Features API by the second quarter of 2022. The development of the SensorThings API is in progress, and we expect to push this update in 2022, though the exact timeline is to be confirmed.

And lastly, the next INSPIRE conference will take place in May 2022 in Dubrovnik! (You’re welcome, Game of Thrones fans 😉) You’ll receive updates on that topic from us soon.

hale»studio 4.1.0: Multiple File Import, Spatial Indexing and more!
16.11.2021 by Akshat Bajaj, Florian Esser, Kapil Agnihotri

The hale»studio 4.1.0 release is here!

Based on customer feedback, we’ve brought in a host of exciting new changes. After months of hard work, we’ve made hale»studio a more powerful tool that now enables you to:

  • Select multiple files during the schema and the data import
  • Remove a single schema from the project view
  • Export the source and the transformed data to Shapefiles using GeoTools
  • Work with presets for Environmental Noise Directive (END) schemas
  • Create a spatial index when writing GeoPackage files

And of course, there are many other bug fixes (such as fixing the hale»studio launch on macOS 10.15.5 and above) and enhancements! You can find the complete changelog here.

Multiple File Import

Until now, users could import only a single source schema or a single source data file at a time. From this release on, users can select multiple files during schema import or when importing source data. This saves a significant amount of time when working with many files of the same format.

Screenshot: multiple file import in hale»studio

Shapefile File Export

This release enables users to export source or transformed data as a Shapefile, as shown below.

Screenshot: Shapefile export in hale»studio

END Schema Presets

The END, introduced in 2002, monitors the effectiveness of EU emission controls by requiring the assessment of environmental noise at the Member State level. The deadline is in 2022, and almost 90% of the work is expected to be completed in the coming months.

The END consists of multiple application schemas that inherit from different INSPIRE themes. Given this closeness to INSPIRE, hale»studio has been used extensively for transforming data to END compliant formats, and we decided to add schema presets to make the experience even better for those working on END datasets. A big thanks to the EEA, which funded this development!

Spatial Index for GeoPackage Files

Some tools can’t read a GeoPackage file unless the file has a spatial index. Moreover, retrieving and processing spatial data through full sequential scans can be time consuming. To address both issues, hale»studio allows the user to create an index over the spatial data of their choice, resulting in better compatibility and faster processing.
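For readers who want to check whether a GeoPackage carries such an index: the GeoPackage standard registers the spatial index as the gpkg_rtree_index extension in the file’s gpkg_extensions table. A quick check with Python’s built-in sqlite3 module might look like this (the file name is a placeholder):

```python
import sqlite3

# A GeoPackage is a SQLite database; the spatial index is an R-tree
# registered under the extension name 'gpkg_rtree_index'
with sqlite3.connect("output.gpkg") as conn:  # placeholder file name
    rows = conn.execute(
        "SELECT table_name, column_name FROM gpkg_extensions "
        "WHERE extension_name = 'gpkg_rtree_index'"
    ).fetchall()

for table, column in rows:
    print(f"spatial index on {table}.{column}")
```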

The development work for this release was co-funded by the European Health and Digital Executive Agency (HaDEA) under Action No 2018-EU-IA-0093 (GO-PEG: Generation of cross border Pan European Geospatial Datasets and Services). And of course, a big thanks to the wetransform service team for their efforts through the months! See what we came up with below.

Download hale»studio

Download the latest version and send us your feedback.

To avoid any compatibility issues when using an existing workspace, we recommend starting with a fresh workspace when you install hale»studio 4.1.0.

hale»connect Release Notes: November 2021
15.11.2021 by Akshat Bajaj, Jonathan Boudewijn, Kate Lyndegaard

For Users

New Features

  • The map view layer widget can now handle large data set series, with the added capability to filter datasets for display.
Image of the hale connect map layer view widget
  • hale»connect now supports providing multiple autofill rules for fields with a cardinality greater than 1 in theme metadata configurations. Comma-separated autofill rules can be added within square brackets in autofill fields.
  • Multiple GML files can now be used as source data on hale»connect. Multi-file data sets that are tiled, or split by feature type, can be published on the platform. This new functionality is designed to support users who want to publish large data sets.
  • Large data sets can now be automatically split at upload using a configurable threshold defined in the theme. This functionality is helpful for users uploading large datasets directly on the hale»connect platform. Users can configure a threshold that can be used to control the number of feature instances included in each partitioned file.

Changes

  • All success confirmation messages have been aligned to use floating, green toast messages that disappear without user interaction.
  • All system error messages have been aligned to use red, banner alert style messages which include a link to inform an administrator, and which require the user to dismiss the message.
  • Support was added to enable platform-wide configuration of cloud transformation resources. Cloud transformation runs can now use a custom amount of resources on demand.

Fixes

  • The password protection functionality of services has been improved. Activation and deactivation of password protection in the UI requires the republishing of services.
  • A fix was implemented to prevent the error message: “Transformation target data set could not be loaded” from occurring. The error was caused by a reference to a target bucket that was deleted in the past and still referenced in an older transformation result.
  • hale»connect WMS now supports the Croatian CRS; when configured in a theme, its EPSG code appears in the GetCapabilities document.
  • The correct CI_OnLineFunctionCode codelist value was added in WMS metadata.
  • The correct codelist for gmd:CI_RoleCode was added in WMS metadata.
  • The correct CI_OnLineFunctionCode codelist value was added in dataset metadata.
  • The WFS configuration for series did not include the configuration for disabled resources, which led to delayed insertion times. The DisabledResources setting was activated for series.
  • A mapproxy issue causing black and white borders in WMS services was fixed.
  • The metadata editor now displays a defaultValue when a field in the metadata is set to use enumValues and editable is set to false.
  • When setting a default value in a theme’s metadata configuration, a page refresh is no longer needed to display the value in the dataset’s metadata.
  • Nginx and mapproxy now have an increased URL limit to enable requests for hundreds of WMS layers.
  • Capacity point calculation for sub-data sets of data set series was improved.
  • Schemas have been prevented from becoming invalid after editing schema types and attributes.
  • Theme datasets are now deserialized in worker threads to prevent the workflow-manager from becoming unresponsive.
  • Added service publisher endpoints to regenerate nginx configuration.
  • The bucket-service no longer becomes unresponsive when requests to S3 fail.
  • XPlanung GML files are no longer altered when requesting the data with GetFeature requests. The replacement of codes by definitions has been fixed.
  • A fix was implemented to enable the deletion of dataset series.
  • The ManageStoredQueries constraint was set to FALSE in WFS GetCapabilities to reflect the correct status of our services and to prevent errors in the WFS Conformance Class in the INSPIRE validator.
  • When adding an additional value to a metadata field with an array in the metadata configuration, the whole array is no longer inserted as the added value; the value itself appears.

For Administrators

Fixes

  • Endpoints in the service publisher were changed to require a token with access to the respective dataset or alternatively a bsp user/admin via basic authentication.

Over the past 10 months, we’ve worked with a group of organisations from Germany, including the Lower Saxon State Department for Waterway, Coastal and Nature Conservation (NLWKN), the Federal Agency for Nature Conservation (BfN), and the Federal Institute for Hydrology (BAFG), to build a benthic information system called BenINFOS. Now, that system is available, and we’d like to introduce the project to a wider audience.

The Fach AG Benthos has the task of implementing the requirements of the Marine Strategy Framework Directive for assessing the status of the seabed fauna within the framework of the Federal / State Working Group North and Baltic Sea (BLANO).

In this project, benthic data from these decentralized structures is digitally combined for the first time.

The BenINFOS Project

In summer 2020, people from the German expert group on benthic information reached out to us to discuss a potential project. After a tendering procedure, wetransform and AquaEcology were tasked with the implementation of a first version of a Benthic Information System.

This system should help experts in the “Fach AG Benthos” with the task of implementing the requirements of the Marine Strategy Framework Directive (2008/56/EU, MSFD) for assessing the status of the seabed fauna within the framework of the Federal/State Working Group North and Baltic Sea (BLANO). Concretely, the project set out to:

  • Aggregate and consolidate benthic data from decentralized structures and provide it in a standardized BenINFOS data model
  • Implement the calculation of the two index values (M-AMBI and BQI) that allow assessment of the state of the benthic ecosystem, with transparent presentation of the calculation steps
  • Export the calculated indices together with all log files and source data, to ensure full transparency and repeatability
  • Provide further data integration options for additional stakeholders and processes
  • Make the resulting uniform assessment accessible by means of an online application (BenINFOS specialist application) and through download and view services

For us, a project like this is very interesting, because it is about really using data that comes from different sources and organisations. As expected, there were a lot of challenges hidden in that area.

Challenges to Overcome

The different stakeholders in the domain working group had already implemented a common schema for their data, called ICES (developed by a working group in the organisation of the same name). However, even given the same GML application schema for the data, individual data sets still contained significant heterogeneity. We had to find solutions for problems such as:

  • Incomplete data for applying the methodologies, such as missing salinity or depth information for individual samples
  • Mismatched classification systems, e.g., for species names
  • Spelling errors in species names and other properties
  • Minor technical issues in the format and encoding

Over the course of several months, we iterated over the data, the actual R scripts that perform the index calculations, and the web application used to manage and visualise individual index calculation runs. The teams at wetransform and AquaEcology worked intensively with domain experts to find practical solutions and to ensure a high-quality result.

During these iterations, there were still some doubts as to whether such a system could be applied to all existing data sets, but most stakeholders were able to get the expected results from the integrated data towards the end of the pilot project.

The Results and Next Steps

In the end, the pilot project came to a positive conclusion. The system offers an easy-to-use, straightforward process to integrate data sets and to configure index calculation runs.

Image of the BENInfos platform for the Marine Strategy Framework Directive
Configuration of index calculation runs.

The platform also lets you visualise and download the results of such runs, as shown below.

Image of the BENInfos platform for the Marine Strategy Framework Directive
Visualisation of the results of a calculation run.

The pre-processing steps and calculation scripts take care of a lot of required contextual information and create very detailed logs and outputs. All of this was implemented on top of an existing hale»connect on-premise deployment that is operated by plangis on behalf of the Federal Waterways Engineering and Research Institute (BAW). This project added a custom microservice for the M-AMBI and BQI R script execution, custom workflows, and a web application based on the hale»connect feature explorer.

There is still work to do to make the system fully operational, such as further improvement of the individual source data sets and adding the possibility to also publish result data sets as services automatically.

A second project phase will likely start in late 2021. In it, we also hope to bring more organisations around the Baltic Sea and the North Sea on board with the system. In the next months, we will also write and submit a scientific paper to explain what was done in greater detail.

If you are interested in staying up to date about this development, sign up for our newsletter here.
