Minutes of Telecon 10 August 2010

Attendees

  • Frédéric Guillaud
  • Aaron Braeckel
  • John Schatell
  • Andrew Woolf
  • Jeremy Tandy (chair)
  • Bruce Wright

Agenda

1) Review Use Case progress

It was agreed that there had been little further progress on the use cases, so the meeting focussed on item 2...

2) Justification for O&M Focus

In answer to AB's previously tabled question "why do people get excited about O&M?", the following discussion ensued...

AW pointed the meeting to the O&M draft document (10-004r2), starting with Figure 2 (The basic Observation type) on page 10. This provides an overview of O&M, which can be summarised as:

An Observation is an action whose result is an estimate of the value of some Property of the Feature-of-interest, obtained using a specified Procedure.
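
To make the pattern concrete, a minimal sketch of how such an observation might look in the draft O&M 2.0 XML encoding is given below (element names are taken from that draft encoding and quoted from memory; all example.org URIs are hypothetical placeholders):

   <om:OM_Observation gml:id="obs1"
       xmlns:om="http://www.opengis.net/om/2.0"
       xmlns:gml="http://www.opengis.net/gml/3.2"
       xmlns:xlink="http://www.w3.org/1999/xlink"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
     <!-- when the observed phenomenon applies in the real world -->
     <om:phenomenonTime>
       <gml:TimeInstant gml:id="t1">
         <gml:timePosition>2010-08-10T06:00:00Z</gml:timePosition>
       </gml:TimeInstant>
     </om:phenomenonTime>
     <!-- when the result became available (here, the same instant) -->
     <om:resultTime xlink:href="#t1"/>
     <!-- the Procedure used to obtain the estimate -->
     <om:procedure xlink:href="http://example.org/process/surface-station"/>
     <!-- the Property being estimated -->
     <om:observedProperty xlink:href="http://example.org/property/air-temperature"/>
     <!-- the Feature-of-interest -->
     <om:featureOfInterest xlink:href="http://example.org/station/some-station"/>
     <!-- the result: an estimate of the value of that property -->
     <om:result xsi:type="gml:MeasureType" uom="Cel">24.1</om:result>
   </om:OM_Observation>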

AW emphasised that this could be applied to any 'observation procedure', including a numerical forecast algorithm. He pointed out that there is always a trade-off between generality (for convenient exchange) and specificity (for application in particular areas), and O&M tries to find the 'sweet spot' by capturing recurring patterns. This enables standardised services and queries (i.e. SOS, which is a query service for O&M, allowing filtering of the data). This makes it possible to:
  1. Share implementation effort for servers and clients across a wider community of developers
  2. Share understanding (at a basic level) across a wider community of collaborators - enables pollination of ideas between domains of interest

There is an opportunity here to re-cast existing standard forms such as GRIB & BUFR in terms of O&M so that consumers can understand at least a portion of the original dataset that matches the O&M profile.

AW highlighted the fact that O&M is too abstract to be applied directly, and that an Application Profile for our domain is required. O&M says nothing about the observed property, but Table 7 (section 7, page 18) shows some examples of specialized observations. Section 7.3 discusses observations whose results vary, i.e. the Coverage result type, which covers most of our data. Further, Annex D3 (page 45) discusses the Sampling Feature, acknowledging the fact that we never observe the whole atmosphere (the Feature of Interest), but a subset of it (e.g. a profile). Bringing these two concepts together yields the idea of a Sampling Coverage Observation, which is key to our domain (and forms the basis of CSML 3).
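
As an illustration of the sampling-feature idea, a rough sketch of a vertical profile modelled as a sampling curve through the atmosphere follows (element and namespace names are from the draft O&M 2.0 sampling schemas, quoted from memory and worth checking; the example.org URIs are hypothetical):

   <sams:SF_SpatialSamplingFeature gml:id="profile1"
       xmlns:sams="http://www.opengis.net/samplingSpatial/2.0"
       xmlns:sf="http://www.opengis.net/sampling/2.0"
       xmlns:gml="http://www.opengis.net/gml/3.2"
       xmlns:xlink="http://www.w3.org/1999/xlink">
     <!-- the kind of sampling feature: a curve (profile) -->
     <sf:type xlink:href="http://www.opengis.net/def/samplingFeatureType/OGC-OM/2.0/SF_SamplingCurve"/>
     <!-- the ultimate Feature of Interest: the atmosphere (hypothetical URI) -->
     <sf:sampledFeature xlink:href="http://example.org/feature/atmosphere"/>
     <!-- the geometry actually sampled: a vertical line above a point -->
     <sams:shape>
       <gml:LineString gml:id="ls1" srsName="urn:ogc:def:crs:EPSG::4979">
         <gml:posList srsDimension="3">51.5 -0.1 0 51.5 -0.1 10000</gml:posList>
       </gml:LineString>
     </sams:shape>
   </sams:SF_SpatialSamplingFeature>

An observation over such a sampling feature, whose result is a coverage of values along the curve, is the 'Sampling Coverage Observation' pattern referred to above.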

JT pointed out that Application Profiles might fall into one of two types:
  • Type 1 Application Profile: simple restriction
  • Type 2 Application Profile: extension, as well as restriction
Type 1 would be simpler to implement, if we can get away with it.

AB said that WXXM had been using O&M, but they were still assessing this. He wondered whether O&M provided enough benefit, given the complexity it added. JS expanded on this to point out that they had done a lot of 'shoe-horning' to get O&M into WXXM, which had resulted in a lot of 'boilerplate' XML around a small amount of content.

AW pointed out that now that O&M is a draft standard, it has a stable model, and it is worth looking at the latest XML implementation [see SWE branch of OGC subversion], which has been more thoroughly worked through by Simon Cox, with much of the complexity removed and many previously mandatory attributes now optional or nillable. For example, there are now only two mandatory times: phenomenonTime (real world) and resultTime (when the result became available), and the latter can be set as unknown. There is also lots of flexibility in how you represent standard coverage results, some options better than others for particular cases. However, AW openly acknowledged that O&M is still a draft standard, so there is a need for more deployment examples to gain confidence that it has 'got it right'.
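
For illustration, the two mandatory times in the draft encoding might look roughly as follows (element names quoted from memory; the nilReason convention for an unknown resultTime is assumed from the usual GML property pattern):

   <!-- the time at which the phenomenon applies in the real world -->
   <om:phenomenonTime>
     <gml:TimeInstant gml:id="pt1">
       <gml:timePosition>2010-08-10T06:00:00Z</gml:timePosition>
     </gml:TimeInstant>
   </om:phenomenonTime>
   <!-- result availability time not known -->
   <om:resultTime nilReason="unknown"/>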

JS pondered whether O&M is complex enough to be awkward for vendor implementation (as was the case with GML). JT responded that O&M is far more structured than GML (and profiles even more so), which makes it easier to implement than a generic GML model such as GML 3.1 … "there will never be a universal WFS client", as one GIS vendor put it!

AB pointed out that existing met reports such as METARs are complicated to implement fully, as they have a large number of rules associated with them (e.g. to do with quality, precision, etc.). JT responded that yes, because O&M is a pattern for capturing both the observation and the metadata about the observation, traditional met reports imply metadata about the observation as well. However, for performance optimisation, an xlink to 'static' metadata elements (those that will always be the same, i.e. for all METARs!) could be used.
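
A sketch of the xlink idea: properties that are identical for every METAR are referenced rather than repeated inline (the example.org URIs below are hypothetical placeholders, not real registers):

   <!-- shared, 'static' definitions pulled in by reference -->
   <om:procedure xlink:href="http://example.org/def/process/metar-observation"/>
   <om:observedProperty xlink:href="http://example.org/def/property/metar-record"/>

Only the per-report content (times, feature of interest, result) would then be carried inline in each observation.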

FG suggested that although O&M is fine for coverages (80% of met data), it is not so good for things like fronts, which are potentially better represented as 'vector features', as WOML has done, for example. There was discussion over whether a front can be considered as just a visual representation, but WOML certainly separates the information about a front from the visualisation of that front. Fronts are used to capture a diagnostic analysis (a digest of a complicated mix of information); they have a set of properties that are constant along their length, such as direction and status, and associations with other fronts, as well as an implicit identity associated with their persistence from analysis to analysis / forecast to forecast (although fronts are not normally named 'these days'). It was argued that we are no longer worried about the discrete identity of these objects, as we are really only interested in their location and attributes (i.e. the underlying data), so there is little need to describe these types of feature. However, if each object has identity, we must model these objects as collections of features, and develop a discrete model for each feature type that can be collected - again, see WOML. Alternatively, if we are not concerned with identity, we can avoid having to create new feature types to represent the result set, and treat a front, say, as a discrete curve coverage, much as is done for a road surface temperature model.
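
A rough sketch of the 'no identity' option, treating a set of fronts as a GML discrete curve coverage (GML 3.2 element names quoted from memory; the record-description URI is a hypothetical placeholder):

   <gml:MultiCurveCoverage gml:id="fronts1"
       xmlns:gml="http://www.opengis.net/gml/3.2"
       xmlns:xlink="http://www.w3.org/1999/xlink">
     <gml:multiCurveDomain>
       <gml:MultiCurve gml:id="mc1" srsName="urn:ogc:def:crs:EPSG::4326">
         <gml:curveMember>
           <!-- the geometry of one front -->
           <gml:LineString gml:id="c1">
             <gml:posList>50.0 -10.0 52.0 -5.0 54.0 0.0</gml:posList>
           </gml:LineString>
         </gml:curveMember>
       </gml:MultiCurve>
     </gml:multiCurveDomain>
     <gml:rangeSet>
       <gml:DataBlock>
         <!-- record description (front type, direction, status) by reference -->
         <gml:rangeParameters xlink:href="http://example.org/def/record/front-properties"/>
         <gml:tupleList>cold,035,active</gml:tupleList>
       </gml:DataBlock>
     </gml:rangeSet>
   </gml:MultiCurveCoverage>

The alternative, identity-preserving option would instead define a Front feature type (as WOML does) and exchange collections of those features.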

[Post Meeting Note: In a discussion between JT and BW, it was agreed that storm tracks are a genuine, accepted example of a need to manage data that is not obviously characterised as a coverage.]

NEW ACTION A58: BW to find out how fronts are currently encoded in SIGWX/BUFR [discuss with Pete Trevelyan]

3) Next Telecon

-- BruceWright - 11 Aug 2010