We have now changed our release cycles so that hale studio and hale connect releases happen in quick succession - first hale studio, then hale connect. In this way we ensure that all capabilities you can use in hale studio also work in hale connect. This latest version of hale connect has been rolled out to all public cloud and private cloud instances and includes the following updates:

  • Support for external dataset metadata sources, such as Catalogue Services or portals
  • Support for dataset attachments (such as PDFs, textures, GeoTIFFs and other raster data sets)
  • A new GetFeatureInfo client in the WMS map preview
  • A new Feature Explorer tool specifically designed for object-oriented and linked data

To try out the new features, head over to www.haleconnect.com and either log in with your existing account or create a new (30-day trial) account.

Support for external metadata sources

hale connect now supports the direct re-use of your existing metadata files. For theme managers, these options are configurable in the metadata section of your theme:

  • Select ‘Republish existing metadata’ to upload your XML or XSD during data set creation
  • Select ‘Link to existing metadata’ to provide a URL pointing to dataset metadata

The option you have selected appears in the metadata step of dataset creation. More information on metadata workflows is available in a recent tutorial.

Adding a link to an existing dataset metadata resource

Support for dataset attachments

hale connect 1.9.0 makes it possible to reference file attachments from uploaded or transformed data sets.

To upload attachments, navigate to the Files section of your data set and click the ‘Upload attachments’ button.

To reference the uploaded attachments, your GML source data needs to include the following expression as the value for the attribute which references the attachment: attachment:///<filename>. The filename of the attachment must be identical to the filename in the GML. When the dataset is published, the expression is transformed into a publicly available link to the uploaded attachment file.
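
For example, a feature might reference an uploaded file named floorplan.pdf like this (the feature type and attribute names are purely illustrative):

  <ex:Building gml:id="b1">
    <ex:documentReference>attachment:///floorplan.pdf</ex:documentReference>
  </ex:Building>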

Adding attachments to a dataset

The Feature Explorer

Much Linked Data, as well as Open Standards data, uses rich object-oriented models with many explicit and implicit references between objects (or, as the GIS community calls them, features). Such references are hard to navigate and use in a classical, layer-based GIS. We have therefore developed a dedicated client to explore such data sets.

The Feature Explorer can be accessed via the GetFeatureInfo client in the WMS map preview of your published view services. It can be used to explore GML that contains complex features and links to related features. INSPIRE compliant GML often contains such links to related features or codelists, which provide additional information about the feature.

To access the Feature Explorer, click the ‘Show Details’ button in the HTML view of the GetFeatureInfo client. The Feature Explorer opens in a new tab which displays the attributes associated with the selected feature. Click on any link to further explore the attributes or related features. A ‘+’ button appears to the right of attributes which contain additional levels of nesting.

The Feature Explorer component with links to codelists, attachments, and related resources and objects

GetFeatureInfo added to WMS Map Preview

GetFeatureInfo is an optional operation which allows users of your view services to query your WMS layers. The GetFeatureInfo client is only available for WMS layers which have been configured to support the GetFeatureInfo operation.

As a theme manager, you can activate GetFeatureInfo for your WMS in the View Services section of the associated theme. To access the GetFeatureInfo client, click the Map view link in the View Services section of your dataset. Click on any feature in the map preview to view attributes for the selected feature. The GetFeatureInfo client allows you to select the feature layer you are viewing and the display format (HTML, plain text or XML).
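
Under the hood, the client issues a standard WMS GetFeatureInfo request. Against a WMS 1.3.0 endpoint, such a request might look roughly like this (the endpoint and layer names are placeholders):

  https://example.org/services/wms?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetFeatureInfo
    &LAYERS=ps:ProtectedSite&QUERY_LAYERS=ps:ProtectedSite&INFO_FORMAT=text/html
    &CRS=EPSG:4326&BBOX=50.0,7.0,51.0,8.0&WIDTH=640&HEIGHT=480&I=320&J=240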

The improved GetFeatureInfo tool in the map preview

Spring is here and so is the latest release of hale studio. The new release 3.4.0 includes several new third-party plug-ins, new features, and numerous bug fixes:

  • Support for isolated workspaces in the GeoServer app-schema plugin
  • XtraServer configuration plugin
  • The alignment merger tool
  • View tasks and messages associated with alignment cells
  • Split GML output by feature type
  • Import Groovy snippets for use in transformation scripts
  • Preset for AAA/NAS XML schema
  • CLI option to output statistics and define custom success conditions

The whole list is available in the changelog.

Download the latest version and send us your feedback!


Support for isolated workspaces in the GeoServer app-schema plugin

The app-schema plugin developed at GeoSolutions now comes with support for GeoServer’s isolated workspaces feature. Isolated workspaces allow service providers to restrict access to OWS layers through virtual services. A virtual service exists for each GeoServer workspace and publishes only those layers available on the associated workspace. Once a workspace is set to isolated, the contained layers are no longer visible or queryable by global services. The contents of an isolated workspace are accessible only via the associated virtual service. This functionality is useful for service providers who want to share specific services with different clients.
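
As an illustration, an isolated workspace can be created through GeoServer's REST interface by including the isolated flag in the workspace representation. This is a sketch: it assumes GeoServer 2.14 or later and that your version accepts the isolated flag via REST.

  <!-- POST to /geoserver/rest/workspaces (Content-Type: text/xml) -->
  <workspace>
    <name>inspire_ps</name>
    <isolated>true</isolated>
  </workspace>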

Thanks to the GeoSolutions team, specifically to Stefano Costa and Nuno Oliveira, for this contribution!


XtraServer configuration plug-in

XtraServer is a product of interactive instruments GmbH. It is a suite of implementations of various OGC service specifications, e.g. the Web Feature Service (WFS) and the Web Map Service (WMS). XtraServer services can be based on any application schema according to the Geography Markup Language (GML). For this, a mapping from the GML application schema to the table structure of the underlying database must be provided in the configuration of the service. The mapping language of XtraServer is very flexible and can map virtually all GML application schemas to heavily deviating database schemas. For this reason, mappings can be quite complex.

The purpose of this plugin is to transform XtraServer mappings into hale alignments (via import) and to easily generate new XtraServer mapping files (via export).

Thanks to interactive instruments, Jon Herrmann, and Andreas Zahnen for this contribution!


The Alignment Merger

Declarative Mappings, which we call Alignments, lend themselves to re-use. One potential area of re-use: if you have one alignment that maps from A to B and another that maps from B to C, you can combine them into a single alignment from A to C. This is exactly what the Alignment Merger command-line component does - it allows you to merge two alignments that share a schema into one. As an example, say you have a mapping from a database to a national standard, and one from that national standard to INSPIRE. Now you can directly create a mapping from your database to INSPIRE, without much extra work!

The Alignment Merger will perform as many steps as possible automatically, but will sometimes require manual input from you. For this purpose, the Alignment Merger generates Tasks (see the next feature).

Thanks to the Implementierungspartnerschaft AAA-Dienste for funding this work.

Viewing tasks and messages associated with alignment cells

With the release of 3.4.0, users are able to view and manage tasks that are created by the alignment merger process. This functionality allows users to either dismiss tasks or edit cells directly before transformation.


Split GML output by feature type

hale studio now supports the option to split GML by feature type during the export of a GML feature collection. This new option is helpful for users who want to reduce their file size or who need to work with GML files containing a single feature type.


Groovy snippets as re-usable resources

Now you can import Groovy scripts into your transformation project. Using Groovy snippets allows you to keep extensive logic in external files and to easily reuse it across different transformation scripts. You can reference a specific Groovy snippet by the identifier that you set when importing the snippet, as shown in the sketch below.
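
As a rough sketch, a snippet file could define a closure that transformation scripts then call. Note that the exact way imported snippets are exposed to scripts (assumed here to be a snippets binding keyed by the identifier) is an assumption for illustration, not the documented interface:

  // normalize.groovy - external snippet file, imported with the identifier 'normalize'
  // the snippet returns a closure that trims and lower-cases a string
  return { String value ->
      value?.trim()?.toLowerCase()
  }

  // in a Groovy transformation script (assumed accessor):
  def normalize = snippets.normalize
  def cleaned = normalize('  Some VALUE  ')   // yields 'some value'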


Preset for AAA XML schema

The list of presets for source and target schemas has been extended: the newest addition is the AAA (NAS) XML Schema 6.0.1.


CLI option to output statistics and define custom success conditions

Users performing transformation of source data via the hale studio command line interface can now define custom success conditions through a Groovy script which is evaluated against the transformation summary. Success criteria in such a script might include, for example, that the XML schema validation reported no errors, or that a certain number of objects were created.
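
A success evaluation script might look like the following sketch; the statistic names and the way the summary is exposed to the script are assumptions, not the documented interface:

  // hypothetical success condition, evaluated against the transformation summary:
  // succeed only if validation reported no errors and at least one object was created
  def validationErrors = statistics['validation.errors'] ?: 0   // assumed statistic key
  def objectsCreated = statistics['objects.created'] ?: 0       // assumed statistic key
  return validationErrors == 0 && objectsCreated > 0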

Thanks to the Landesamt für Vermessung und Geobasisinformation Rheinland-Pfalz for funding this work.

Metadata is an important component of most Spatial Data Infrastructures. We use it to find resources such as data sets and services and to assess their usefulness for our objectives. As an example, metadata can contain license information. Metadata also clearly shows who is responsible for a resource and how we can contact them.

At the same time, metadata is usually something invisible. Large parts of the internet use metadata that most users never see and are barely aware of, e.g. in the headers of every HTML page. In INSPIRE and in other SDIs, metadata has become something explicit and visible. INSPIRE required that, as a first implementation step, metadata for all data sets be delivered between 2010 (for Annex I and II) and 2013 (for Annex III). This resulted in the availability of more than 150,000 metadata sets that describe different types of resources – quite a treasure!

This short tutorial explains what the typical processes are for working with metadata in our integrated Data Infrastructure platform, hale connect.

Concepts used in the workflows

  • Dataset Metadata: This metadata resource describes the dataset itself.
  • Service Metadata: This metadata resource describes one service through which the dataset can be accessed. If you provide a download and a view service, you will have two service metadata resources, both of which are generated from a single configuration (see below).
  • Linking (dataset-service coupling): In INSPIRE metadata, links point from service descriptions to the data set description. There can be multiple services that publish the same data set (see the sketch after this list).
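
In ISO 19139 service metadata, this coupling is typically expressed with an operatesOn reference that points at the dataset metadata record. A minimal illustrative fragment (the catalogue URL and identifier are placeholders) might look like this:

  <srv:operatesOn xlink:href="https://example.org/csw?service=CSW&amp;version=2.0.2&amp;request=GetRecordById&amp;id=dataset-md-uuid"/>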

Workflow 1: Generating Metadata

The default metadata workflow in hale connect is to automatically generate both data set and service metadata. This has several advantages: By using so-called autofill rules, the metadata can be kept up to date with changes to the data or the organisation. Just set your central contact point, and all metadata is updated. Furthermore, the dataset-service coupling will always be up to date, as it is refreshed with each service update.

To use this workflow, follow these steps:

  1. Go to «Themes»
  2. Pick the theme you’d like to edit the metadata configuration for
  3. Go to «Metadata»
  4. In the «Dataset metadata» tab, select «Use metadata editor» in the dropdown menu.

To define how hale connect should generate the metadata, the system provides a special-purpose text editor. The default metadata configuration displays INSPIRE compliant metadata elements.

hale connect metadata workflow 1 - generating metadata

Workflow 2: Linking Metadata

Many of you have an established, well-working infrastructure for metadata in place, including a CSW endpoint and a portal. There is no need to change that. In this second workflow, you can simply provide the URL pointing to your dataset metadata to hale connect, and that link will be used to connect the service to the dataset metadata.

To use this workflow, follow these steps:

  1. Go to «Themes»
  2. Pick the theme you’d like to edit the metadata configuration for
  3. Go to «Metadata»
  4. Select «Link to existing metadata» in the dropdown menu.

Note: When you use this workflow, you cannot use the metadata editor to change any fields of the dataset metadata.

More variants and combinations are possible. Reach out to us if you have any questions on how to set up your optimal metadata generation and publishing workflows!


2018 still feels like a fresh year, so here’s a fresh hale studio release to go with it! Despite some large changes under the hood, we’ve decided to make this mostly a bugfix release, with some smaller enhancements:

  • Improved hale connect integration with support for multiple organisations
  • Support for CQL Functions in filter contexts
  • Improved behaviour on missing Bursa-Wolf Parameters
  • Pre-fill charset for shapefile loading when a .cpg file is available
  • Fixed Groovy Restriction state after reloading a project
  • Fixed Un-associating codelists
  • Use a precalculated index created during data loading for Joins and Merges

The whole list is available in the changelog.

Get the latest version, and let us know what you think of it!


Improved hale connect integration

This release of hale studio improves the integration with the online collaboration platform hale connect. It is now possible to select which organisation should own an uploaded transformation project in cases where the currently logged-in user is a member of more than one organisation. Furthermore, hale studio now supports a re-login with the same or different credentials, without having to clear the credentials stored in the preferences.


Support for CQL Functions in filter contexts

(E)CQL doesn’t just contain simple operators, but also a large set of functions that can be applied to any operand in any place where hale studio enables the use of such filters:

  • Condition contexts on source schema elements
  • Filters on source and target data table views

Filter functions can be used to build expressions such as strToLowerCase(VALUE) like '%m%'. They also simplify many expressions, e.g. through functions such as IN(val1, val2, ...), which previously required a chain of OR comparisons, or between(val, low, high) statements. It is even possible to use spatial functions to filter by spatial relationships - contains(geomA, geomB) will return true when geomA contains geomB.
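
A few more illustrative filter expressions (the attribute names are placeholders; note that in ECQL a boolean function result is compared explicitly):

  strToLowerCase(NAME) like '%station%'
  between(WIDTH, 10, 20) = true
  contains(GEOM, POINT(7.6 51.9)) = true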

The GeoServer documentation includes a full reference of the available functions.

This work was funded by the Landesbetrieb Geoinformation und Vermessung Hamburg through a support contract.


Improved behaviour on missing Bursa-Wolf Parameters and on axis flips

When you load source data from shapefiles and later export the transformed data to a different projection / coordinate reference system, you may have encountered an error message like Missing Bursa-Wolf Parameters. More information on this specific issue is available in the GeoTools documentation. Furthermore, there were cases where axes were swapped due to inconsistent CRS definitions across different systems. With some enhancements to how hale studio interprets and uses CRS definitions from various sources (Shapefiles, GML, the internal EPSG database), hale studio now avoids most of the pitfalls of these two issues.

When loading source data, hale studio now provides the content of the srsName attribute (in case of a GML source) or the WKT definition found in the projection file (.prj) that accompanies a Shapefile. This allows the user to select the correct CRS without having to manually look up this information in the source files.


Character set detection for Shapefiles

Importing a schema or source data from a Shapefile requires the user to select the encoding of the Shapefile. In cases where the Shapefile is accompanied by a codepage file (.cpg), hale studio can now read the encoding from that file and pre-fill the character set selection dialog.


Last week, more than 800 people met in Strasbourg for an event packed with workshops, keynotes and presentations. This somewhat personal retrospective summarizes our impressions in broad strokes. First of all, it was a very intense week for our team - with more than 50 meetings and 10 contributions to the programme. For almost all of these, videos are now available online.

Participants at the keynote session on Wednesday

Strategy

Most of the INSPIRE community is aware of the discussions surrounding the Fitness for Purpose of INSPIRE, and the related efforts to improve the usefulness of INSPIRE network service and data specifications. There is also an ongoing debate to define a list of Priority data themes and data sets to indicate which steps implementers should focus on. Some stakeholders are very critical of the current state of INSPIRE and point to technical difficulties in implementation as well as to limited usefulness for many use cases.

This criticism is somewhat in contrast to the many organisations moving forward on their INSPIRE implementation. There was a substantial number of presentations and workshops about projects that showed how to successfully implement interoperable services. As an example, Christine Najar from Swisstopo presented their feasibility study, which looked at the concrete efforts required to provide both INSPIRE and ELF/ELS data and services, and came to the conclusion that overall efforts are lower than many people anticipated.

At this point, there is a lot of evidence that some parts of the INSPIRE requirements need to be modified or relaxed, to make implementation easier and more robust. One example is the actual data discovery process, which we analysed in the context of the INScope project. In that project, which we presented together with Wageningen University & Research and the European Environmental Agency, we showed that only a few percent of data sets actually meet all requirements according to their metadata. Another example is the simplification of the encoding, for which several suggestions have been made, e.g. by Denmark and by Germany.

Some recommendations from the study commissioned by BKG

It is also important to focus on the usefulness and usability of INSPIRE data. To increase the usefulness, several national initiatives such as the Spatial Planning Act in the Netherlands build on top of the INSPIRE legislation. INSPIRE extensions are one way to piggyback local use cases onto the INSPIRE infrastructure.

Technology

In the technology and tool oriented sessions, the single most-used keyword was probably Docker. Docker is a container technology that makes the deployment and maintenance of server-based applications much easier than virtual machines did. Docker is a core building block of an entire ecosystem, with tools such as docker compose, docker swarm and Rancher that allow us to build scalable, robust applications that can be managed much more effectively than previous generations of infrastructure. The paradigm shift is to move away from individual, manually administered servers (“Pets”) to fully automated cluster deployments (“Cattle”). No more manual patching of individual application servers or Oracle databases! By now, Docker images are available for all relevant open source and closed source applications, be it ArcGIS Server, FME Server, GeoServer, deegree or our own hale connect platform.

Closely related to this topic was the second trendy keyword – the Cloud is coming! INSPIRE mandates a relatively high level of availability and performance for all INSPIRE services, with requirements such as 99% availability and 20 WMS requests per second for a 640x480 raster image. For smaller organisations who do not have dedicated staff and hardware, these objectives can be hard to fulfill, so cloud architectures offer a practical, efficient solution.

Joeri Robbrecht from DG ENV gives an introduction to the INSPIRE requirements and their impact

One key consideration that popped up several times was the question of whether cloud services are secure enough. Several presenters, including Ken Bragg (Safe Software), explained that “AWS is probably more secure than your data center” in one variant or another. AWS by now offers basically any certification one could ask for, and is very transparent about security issues. Especially in the context of INSPIRE data, which is intended for sharing and publishing, there are very few reasons left not to use cloud services – be it Software as a Service solutions (such as haleconnect.com) or Platform as a Service resources. For organisations with additional requirements about who may access data, there are also solutions in place or being developed, e.g. by the CLARUS project, which develops a Cloud Encryption Gateway and a Cloud Access Security Broker.

Linked Data is mostly a topic of research projects and prototypes. The main promise of Linked Data is to better integrate with “mainstream” IT technology by making resources such as individual spatial objects discoverable through search engines and by embedding fragments of linked data in normal web content. Just changing the encoding from GML to RDF or JSON-LD for all INSPIRE data however is certainly no silver bullet.

A Personal View by Anida

Anida and Andreas happy to give a thumbs-up even on the last day of the conference :)

This year, I attended the INSPIRE conference for the first time, so I am not going to compare it to previous conferences. I would rather like to focus on the key points and takeaways from an INSPIRE Newbie perspective.

The conference brought together many INSPIRE, GIS, data and technology experts, as well as lots of people looking for opportunities to learn something new and to exchange experiences. The conference was also a meeting point for people looking for new career opportunities, and that made me wonder: was this market not too small for that? Then I realized that it is not about the market size, it is about the impact of what was going on with INSPIRE and beyond.

I heard a lot about the approach to open data and making it available to citizens and businesses, and listened to discussions about how far public administrations should open their data. In my personal view, INSPIRE on its own will not bring high-end innovations, but combined with Open Data principles, they become feasible. Attending the SMESpire workshop as a representative of a start-up made me think more about the innovations that can be created by implementing INSPIRE.

Can we bring innovation, open data and the fulfillment of legal obligations together? In my opinion, we can, but it is very important to understand why we are implementing INSPIRE. As I see it, INSPIRE should not be the ultimate goal: implementing something just for the sake of implementation. It should be a tool that helps countries maintain, manage and exchange large amounts of data effectively, and that fosters international collaboration. That will then lead to innovations created by businesses. Businesses will find a way to create added value, which will then lead to growth. What does it take? Collaboration and communication, and then a bit more of it. It also takes some kind of joint platform that enables SMEs to take part in different projects and address the needs and priorities of INSPIRE implementers.

So it was a week full of learnings and a really great opportunity for exchange and networking. Moreover, it was an opportunity for many people to find the one solution, implementation or expert that will bring them closer to their goals.

The way forward

We’ve very much enjoyed supporting this year’s conference through our Gold Partnership, and would like to thank the organisers in Germany, France and at the JRC for the great conference.

The INSPIRE GIS partners at our joint booth, together with some JRC staff

There is not much of a break now, though – the next INSPIRE roadmap milestone is approaching fast: On November 23rd, existing data sets tied to Annex I must be provided in INSPIRE interoperable form. Many organisations we work with aim to fulfil their obligations in time. Looking beyond this milestone, focus will shift towards Annex II and III – a good moment to take a break and evaluate both the major strategic directions and new technology.

We’re looking forward to the 2018 edition in Antwerp! You can bet that we will accept the challenge of the Hunt for the Golden Pineapple!
