From July until September 2020 we plan to offer a variety of new, free webinars on INSPIRE, data transformation, hale»studio and hale»connect. Three webinars will be held in German – one of them on XPlanung. Of course, the new hale»studio 4.0 release will also play an important role:

  • 02.07.2020, 15:30 - 17:00 hale»studio Community Milestone Planning Meeting (English)
  • 09.07.2020, 15:00 - 15:45 AGIT 2020: Geodatenharmonisierung mit hale»studio (German) (AGIT requires participation fee)
  • 15.07.2020, 13:00 - 14:00 hale»connect: An introduction to managing INSPIRE data (English)
  • 23.07.2020, 13:00 - 14:00 hale»connect: Building an INSPIRE platform for municipalities (English)
  • 19.08.2020, 13:00 - 14:00 hale»connect: Installing hale»connect on premise (English)
  • 26.08.2020, 13:00 - 14:00 hale»studio 4: New functions (English)
  • 03.09.2020, 13:00 - 14:00 hale»connect: Metadata configurations (English)
  • 09.09.2020, 13:00 - 14:00 hale»studio: Neue Funktionen für XPlanung (German)
  • 16.09.2020, 13:00 - 14:00 hale»connect: Automated transformation workflows (English)
  • 23.09.2020, 13:00 - 14:00 hale»studio: Einführung in die Datentransformation nach INSPIRE (German)

If you want to learn more or register for any of the webinars, just send an email to info@wetransform.to with your name and organization. All dates and webinar topics are preliminary and may still change or be cancelled; in that case, registrants will be informed automatically.

At wetransform, we fully support INSPIRE implementers because we believe that key problems can be solved better with cross-border, accessible, usable and harmonised data. In February, we started a new project that builds heavily on harmonised INSPIRE data to address a critical issue: the massive ecological and economic impact of climate change on our forests.

Forests are subject to numerous stressors: extreme weather events such as heat, drought and heavy rainfall; pests such as bark beetles; and air pollutants. Even tree species that were considered stable have suffered from these stressors in recent years. As a result, forest experts face new challenges. They have to identify stands that are at high risk, and for each location they need to find optimal forest development types, tree species and intraspecific varieties. Their measures must ensure that forest biodiversity and economic yields are maintained despite climate change.

The factors that influence the general vitality and productivity of forests (climate, weather, soil, geology, morphology, biodiversity, age mixing, etc.) are complex and interconnected. Forest experts consider these factors in the planning process. However, an in-depth analysis of these variables and their interdependencies remains a major challenge.

A keystone of new approaches is, as in many other industries, a shift to data-driven decision support. Data on the aforementioned factors can form the basis for faster and more effective decision-making, but gathering useful data is not an easy task – the sheer size and complexity of forest-related environmental data see to that. To compound matters, this data is held by different organizations, which usually have diverse use cases and use different schemas, formats and semantics. For example, one organization may deliver the topographic data of a forest region as a shapefile, while another delivers the topographic data of a neighbouring region in the same forest as a GML file.
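
Purely to illustrate the kind of format and schema mismatch described above, here is a minimal Python sketch that merges two such deliveries into one layer with a shared schema. It assumes the geopandas library, and the file names and attribute names are made up; in practice, this kind of mapping is defined declaratively in hale»studio rather than in ad-hoc scripts.

    # Illustration only: merge topographic data delivered as a shapefile and
    # as GML into one layer with a shared schema. File and attribute names
    # below are hypothetical.
    import geopandas as gpd
    import pandas as pd

    region_a = gpd.read_file("region_a_topography.shp")   # provider A: shapefile
    region_b = gpd.read_file("region_b_topography.gml")   # provider B: GML

    # Map each provider's attribute names onto one shared, minimal schema.
    region_a = region_a.rename(columns={"NUTZUNG": "land_use"})
    region_b = region_b.rename(columns={"landUseClass": "land_use"})

    # Reproject to a common CRS and concatenate into a single harmonised layer.
    region_b = region_b.to_crs(region_a.crs)
    forest = pd.concat(
        [region_a[["land_use", "geometry"]], region_b[["land_use", "geometry"]]],
        ignore_index=True,
    )
    print(forest.head())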

Forest experts thus depend on a wide range of datasets acquired from different sources. This interdisciplinary knowledge transfer is a challenge, but it is also mission critical: high-quality data acquisition and data integration are a precursor to the forest experts’ data-driven decision-making and, ultimately, to the survival of our forests.

To help forest experts and owners deal with these issues, wetransform has initiated the FutureForst project. FutureForst is a Phase I Artificial Intelligence Lighthouse project co-funded by the German Zukunft-Umwelt-Gesellschaft (ZUG). Through FutureForst, forest owners receive comprehensive decision support adapted to their specific situation and goals. The recommendations are based on harmonized data such as forest inventories, weather, pest development and air pollution.

To provide this decision support, we are currently evaluating deep-learning methods and “Explainable AI” approaches such as Semantic Reasoning and Bayesian Belief Networks, together with our partners Minerva Intelligence GmbH and the Forstliche Versuchsanstalt Baden-Württemberg.

With the “Explainable AI” approach, the system can generate comprehensible results with accessible recommendations for action. Explainable AI methods let users see which variables of the input data lead to which outcome; these variables can be adjusted by experts and laypersons at any level of depth and checked for plausibility. On this basis, experts and laypersons can then decide which forest development types and tree species can be established as “future forests” – forest ecosystems that can withstand climate change.
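
As a rough, purely illustrative sketch of the Bayesian belief network idea (not the actual FutureForst model; all variables and probabilities below are invented), the following Python snippet shows how such a model makes explicit which inputs drive an outcome:

    # Toy Bayesian belief network (illustration only; variables and
    # probabilities are invented and are not FutureForst results).
    # P(drought)
    p_drought = {True: 0.3, False: 0.7}
    # P(bark beetle infestation | drought)
    p_beetle = {True: {True: 0.6, False: 0.4},
                False: {True: 0.2, False: 0.8}}
    # P(low stand vitality | drought, bark beetle infestation)
    p_low_vitality = {(True, True): 0.85, (True, False): 0.50,
                      (False, True): 0.40, (False, False): 0.10}

    def p_low_vitality_given_drought(drought: bool) -> float:
        # Sum over the unobserved bark beetle variable.
        return sum(p_beetle[drought][beetle] * p_low_vitality[(drought, beetle)]
                   for beetle in (True, False))

    for drought in (True, False):
        print(f"drought={drought}: P(low vitality) = "
              f"{p_low_vitality_given_drought(drought):.2f}")

    marginal = sum(p_drought[d] * p_low_vitality_given_drought(d)
                   for d in (True, False))
    print(f"overall P(low vitality) = {marginal:.2f}")

Because every conditional probability is explicit, an expert can adjust a single table (for example, the assumed effect of drought on bark beetle pressure) and immediately see how the predicted outcome changes – exactly the kind of plausibility check described above.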

In a further step, different climate forecasts can then be taken into account and forest conversion scenarios can be simulated. In addition, a solution forum will be offered where users and partners can exchange information on their approach.

The data at the core of this approach is harmonised and published with wetransform’s tools, hale»connect and hale»studio. These tools have already been used by hundreds of organizations to consolidate heterogeneous data stacks into harmonized data that can be analysed easily. Components of Minerva’s AI suite are then used to analyse the harmonized data and to develop explainable recommendations.

When this project is finished, we aim to provide a FutureForst solution that offers:

  • An always up-to-date, homogeneous data basis
  • A complete picture of the environment, including real-time stressors such as pest infestation
  • Highly local, explainable recommendations for action based on international data
  • A solution forum for users and partners

We are currently running open remote workshops on this project every two weeks.

These workshops are an opportunity to learn about the project’s concrete outcomes and to contribute your own experiences and requirements. The workshops are themed as follows:

  • 15.04.2020: End user scenario development
  • 29.04.2020: Existing and missing data
  • 13.05.2020: AI Approaches and Algorithms

Reach out to us to stay updated on the project’s progress, and let us know how your country is dealing with this challenge.


The stage will be set for the INSPIRE 2020 Conference in Dubrovnik.

INSPIRE is gaining momentum: the number of datasets being transformed is increasing, and the next deadlines are approaching fast. It is now critical that all processes run smoothly, that all results are of high quality and that you can share your experiences with others. This year’s INSPIRE conference has been postponed and is expected to take place in September, just in time for the next INSPIRE deadline.

The wetransform team will be there to listen to you and to share ideas and solutions that provide real value both for data providers and for data users. We will address topics ranging from metadata profile management and dataset series publication to the risks of an INSPIRE implementation and how to make your INSPIRE data useful.

We will also showcase our solution, hale»connect, the user-friendly and automated data management tool that makes INSPIRE easy, efficient and affordable for everyone and has already helped over 200 organizations become INSPIRE compliant.

Stop by at our booth and experience the Zen of INSPIRE.

We will contribute to several presentations and workshops on diverse topics:

  • Keynote: Applications and Cross-cutting Issues
  • Workshop: Organisation, Licensing, Technology: Build sustainable solutions on INSPIRE
  • Workshop: Leveraging INSPIRE Data into Artificial Intelligence Applications
  • Workshop: Geopackage, JSON-LD and GeoJSON: Alternative and Additional Encodings
  • Presentation: A Key INSPIRE Use case: WFD eReporting Revisited
  • Workshop: GO-PEG - Generation Of cross border PanEuropean Geospatial Datasets and Services
  • Presentation: 5.000 data sets later: What INSPIRE in the cloud changes
  • Presentation: Alternative Encodings, Alternative APIs, Alternative Models?
  • Presentation: Profiling metadata for reuse in the INSPIRE Validator and GeoNetwork

We will keep you posted on further details and any later changes once the program has been rescheduled.

Attend the webinar on March 19th at 12:00 (CET)

Dataset series enable efficient and consistent management of large amounts of related data. With dataset series you can ensure service compliance and meet your reporting obligations. You also make it easier for end-users to access and use your services.

Dataset series are a powerful data management tool that can be used to group related datasets in a single service. hale»connect dataset series help data providers organize large amounts of data, including XPlanung and INSPIRE datasets.

But, how exactly are dataset series classified and described?

A dataset series is a collection of spatial data that shares similar characteristics of theme, source date, resolution, and methodology. Typically, all datasets that belong to one series differ in time or in space (2D or 3D) or in both space and time. However, the exact definition of what constitutes a series entry is determined by the data providers themselves.

Dataset series are most commonly used to organize:

  • Orthoimagery and raster datasets
  • Cartographic map series
  • Time series where datasets are produced or updated at regular intervals
  • Thematic series which group datasets to provide a richer context for understanding a topic or thematic area

Dataset series metadata facilitates higher-level catalog searches and enables end users to find datasets that belong to the same specification. Dataset and dataset series metadata can, however, be complicated to maintain, particularly for datasets that are updated frequently. hale»connect dataset series provide users with easy-to-use tools to create, edit and update series metadata.
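
As a simplified illustration of what series-level metadata adds (a hypothetical structure, not the hale»connect data model), the sketch below derives a series record – including the combined spatial and temporal extent – from the metadata of its member datasets. In ISO 19115 terms, such a record would typically carry the hierarchy level "series".

    # Illustration only: derive series-level metadata from member datasets.
    # Titles, dates and bounding boxes are made up.
    from datetime import date

    members = [
        {"title": "Orthophotos tile 481", "date": date(2019, 6, 1),
         "bbox": (9.0, 48.5, 9.5, 49.0)},
        {"title": "Orthophotos tile 482", "date": date(2019, 7, 1),
         "bbox": (9.5, 48.5, 10.0, 49.0)},
    ]

    def union_bbox(bboxes):
        # Smallest bounding box containing every member bounding box.
        bboxes = list(bboxes)
        return (min(b[0] for b in bboxes), min(b[1] for b in bboxes),
                max(b[2] for b in bboxes), max(b[3] for b in bboxes))

    series_metadata = {
        "hierarchyLevel": "series",
        "title": "Orthophotos 2019",
        "temporalExtent": (min(m["date"] for m in members),
                           max(m["date"] for m in members)),
        "spatialExtent": union_bbox(m["bbox"] for m in members),
        "memberCount": len(members),
    }
    print(series_metadata)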

Transforming and publishing dataset series also provides a unique value-add: compared to transforming and publishing dozens of datasets one by one, performing these operations on a dataset series reduces effort and costs significantly.

Standardizing a dataset series is, however, not an easy task. Complex operations such as metadata management, transformation and publication must be performed on multiple datasets. This is often a time-consuming and error-prone process.

To this end, we’re hosting a webinar in which we’ll demonstrate how to standardize a dataset series. We’ll go over the processes of metadata management, transformation and publication of a dataset series.

To inquire or join in, just click the button below and fill in the details mentioned in the e-mail template. We look forward to hearing from you!


Attend the webinar on April 2nd at 14:00 (CEST)

We’re hosting a webinar in which we’ll demonstrate how to manage complex metadata profiles. We’ll explain the typical challenges and methods of simplification, go over the profile management process, and demonstrate how to work with hale»connect’s Profile Management Tool (PMT). To inquire or join in, just click the button below and fill in the details mentioned in the email template.


Building a new Metadata Profile

A while ago, we worked with LGL, the geodetic state agency in Baden-Württemberg, to substantially extend hale»connect’s metadata tools. To fulfil their requirements, we developed a formal model that could capture all the required information, as well as a set of editing functions that extend the existing schema modelling tools in hale»connect.

We continued to build upon these developments to make metadata profile management as effective as possible, in order to better support organizations that must implement standards such as ISO 19115.

Such standards are used to define geographical metadata. However, these standards are often too broad and hard to implement for a specific use case. Instead, profiles are used to implement metadata standards. A profile is a subset of a standard that describes only the aspects relevant to a given context. By using formal descriptions instead of verbal explanations, a profile can also be implemented and automated more easily than the original standard.

Often, such profiles are managed in Excel sheets and Word documents that can be hundreds of pages long. Implementing them in editors and validators takes a lot of effort and can be expensive. Updates to the data and to the various metadata requirements at the regional, national and European levels usually introduce inconsistencies with the profiles and increase costs and effort.

Figure 1: An Excel table containing profile requirements, with rules from ISO, INSPIRE, national and local schemas.

The hale»connect Profile Management Tool (PMT) enables you to easily define profiles. You just need to define type constraints (see figure 2) and consistency constraints (see figure 3).

Figure 2: Defining type and property level constraints.

Figure 3: Defining profile consistency constraints.
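
Conceptually, such formally defined constraints can be checked automatically against a metadata record. The sketch below only illustrates that idea in Python; it is not the PMT’s data model or its generated tests, and the field names and rules are invented:

    # Illustration only: a metadata profile expressed as formal constraints.
    profile = {
        # Property constraints: field -> (required?, maximum cardinality)
        "fields": {
            "title": (True, 1),
            "abstract": (True, 1),
            "keyword": (True, 10),
            "lineage": (False, 1),
        },
        # Consistency constraints across fields
        "rules": [
            ("a temporal extent needs both a begin and an end date",
             lambda md: "temporalExtent" not in md
             or {"begin", "end"} <= md["temporalExtent"].keys()),
        ],
    }

    def validate(metadata, profile):
        errors = []
        for field, (required, max_card) in profile["fields"].items():
            values = metadata.get(field, [])
            if required and not values:
                errors.append(f"missing required field: {field}")
            if len(values) > max_card:
                errors.append(f"too many values for {field} (max {max_card})")
        for description, check in profile["rules"]:
            if not check(metadata):
                errors.append(f"consistency rule violated: {description}")
        return errors

    record = {"title": ["Orthophotos 2019"], "keyword": ["imagery"],
              "temporalExtent": {"begin": "2019-06-01", "end": "2019-07-01"}}
    print(validate(record, profile))   # -> ['missing required field: abstract']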

Understandable documentation, executable test suites and template files are then generated automatically from the original data and context information. You do not need to write complex tests in ETF, Schematron, XQuery or the like to validate that your metadata complies with your profiles, and hard-to-understand Word or Excel specifications are no longer an issue.

To summarize, the hale»connect PMT

  • reduces your costs for developing and maintaining metadata quality assurance,
  • reduces your manual work in editing metadata,
  • ensures that your metadata complies with the metadata requirements at any time, and
  • helps other SDI contributors to quickly understand what they need to do.

Our next webinar will take place on April 2nd at 14:00. In the webinar, we’ll go over the concept of profiles, ETS and ETF validation, and how to create a metadata profile. We’re looking forward to seeing you there and discussing your requirements for such profiles. If you would like to sign up for this webinar, just click on the button below!

