[protege-owl] Layered Ontologies?

Thomas Russ tar at ISI.EDU
Wed Mar 4 14:12:48 PST 2009


On Mar 3, 2009, at 7:02 AM, Johann Petrak wrote:

> Dear all,
>
> there have been a few emails that I think already touched this topic,
> but I think it would still be interesting to start a thread on this.
>
> My problem is this: I have an ontology that is supposed to describe
> how a process should create/modify a different ontology. The  
> background
> is populating a domain ontology from legacy data or documents.
>
> So ontology D describes the classes and relations in the domain, while
> ontology P describes how the legacy data should be interpreted and
> how the data should be mapped to concepts in the ontology D.
>
> I think this is quite a natural use-case for the use of ontologies,
> but as far as I understand this, there are problems:
>
> Ontology P is supposed to map, e.g., a field in the data to the
> "type" of the data, i.e. to a class in Ontology D. But how
> is one supposed to express this in ontology P?
> If I understand things correctly, there is no way to use
> a class as an individual in OWL2 and there is no way to
> just "use" a class as an individual in OWL1 without
> making the ontology Full and getting problems with
> reasoning.

A couple of ideas:

You could reverse the sense of the mapping.  You didn't specify how
you represent the legacy data that you are mapping, but instead of
going from the legacy field to a property, perhaps you could do the
mapping with an AnnotationProperty on the OWL class or property that
connects it to the legacy data field individual.

That keeps you out of OWL Full, and as long as you don't need
classification reasoning or restrictions on the values of the mapping
properties, it can do what you need.
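
For illustration, here is a minimal sketch of that annotation-property
mapping in Python with rdflib; the namespaces, the LegacyField class,
and the mappedFromField property are all invented for the example, not
anything from the original ontologies.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

D = Namespace("http://example.org/domain#")   # domain ontology D (IRI assumed)
P = Namespace("http://example.org/mapping#")  # mapping ontology P (IRI assumed)

g = Graph()
g.bind("d", D)
g.bind("p", P)

# An ordinary individual in P standing for one legacy data field.
g.add((P.LegacyField, RDF.type, OWL.Class))
g.add((P.customer_name_column, RDF.type, P.LegacyField))
g.add((P.customer_name_column, RDFS.label, Literal("CUSTOMER.NAME")))

# The mapping lives on the class in D as an annotation, so the class is
# never used as an individual and both ontologies stay in OWL DL.
g.add((P.mappedFromField, RDF.type, OWL.AnnotationProperty))
g.add((D.Customer, RDF.type, OWL.Class))
g.add((D.Customer, P.mappedFromField, P.customer_name_column))

print(g.serialize(format="turtle"))

Because mappedFromField is only an annotation, d:Customer is never
treated as an individual, which is what keeps the mapping out of
OWL Full.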

I also don't think that going to OWL Full really hurts you that much.
At least the Pellet reasoner just ignores the OWL-Full constructs.
When I have needed classification, I have moved the OWL-Full items
one level of indirection further away by introducing "proxy"
individuals that can be used to drive the classification.

So a PropertyProxy would be an OWLIndividual that can be used in  
restriction expressions.  And it, in turn, has an OWL-Full property  
that links it to a particular OWLProperty.
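
A minimal sketch of that proxy pattern, again in Python with rdflib;
the names PropertyProxy, proxyFor, and hasName_proxy are illustrative
assumptions, not part of Protege, Pellet, or the original post.

from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF

D = Namespace("http://example.org/domain#")
P = Namespace("http://example.org/mapping#")

g = Graph()
g.bind("d", D)
g.bind("p", P)

# The real domain property we want to talk about from within P.
g.add((D.hasName, RDF.type, OWL.ObjectProperty))

# A plain individual that stands in for d:hasName.  Being an ordinary
# individual, it can appear as a property value or in hasValue
# restrictions without any OWL-Full consequences.
g.add((P.PropertyProxy, RDF.type, OWL.Class))
g.add((P.hasName_proxy, RDF.type, P.PropertyProxy))

# The single OWL-Full-ish statement: the proxy points at the property
# it stands for.  A DL reasoner such as Pellet will typically ignore it.
g.add((P.proxyFor, RDF.type, OWL.ObjectProperty))
g.add((P.hasName_proxy, P.proxyFor, D.hasName))

print(g.serialize(format="turtle"))

Restrictions in P can then be written against p:hasName_proxy, and only
the single proxyFor triple steps outside OWL DL.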

> But this kind of protection is not really necessary here,
> because from the point of view of ontology P, the class
> indeed IS nothing more than an individual. Its
> "class-ness" in D is of no importance for the reasoning
> in ontology P.
>
> So my workaround for now is essentially to represent
> the class from D as a string value in the corresponding
> property in P.
> But this looks a lot like a dirty hack, and it feels
> rather "relational-DB"-ish.
>
> Are there other, better approaches to do this?

I would prefer using a full-blown proxy to a string hack.

> My feeling is that this kind of "layering" ontologies
> must be a pattern that probably comes up rather
> frequently when modelling the real world: at
> one level of abstraction, or in one application
> context, something needs to be modeled as a class,
> but at some other level of abstraction or in some
> other context, the same entity is better modeled
> as something that can be the value of a property
> and hence should be an individual.
>
> As has been pointed out in previous threads, the
> same goes not only for classes, but for whole
> facts: Something that is modeled as a triple in
> ontology A is often best modeled as an individual
> in some other ontology B. In my case, for example
> it would be nice to be able to model all the facts
> that were used to populate ontology D as instances
> in P (or some third ontology) where each instance
> contains information on how, when, from where, and why
> the corresponding fact was put into D.
>
> The approach to do this with annotation properties
> in OWL2 looks a bit like a hack, because it cannot
> be layered or generalized.
>
> Is there a principal reason why these kinds of
> things are so hard in OWL? Are there other modern
> ontology representation formalism that do allow
> this kind of layered modeling?
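
For the provenance idea above, one workable approximation is to mirror
each fact pushed into D with an ordinary individual in P that records
the how, when, from-where, and why.  A minimal sketch in Python with
rdflib, with all class and property names invented for the example:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, XSD

D = Namespace("http://example.org/domain#")
P = Namespace("http://example.org/mapping#")

g = Graph()
g.bind("d", D)
g.bind("p", P)

# The fact that was asserted in the domain ontology D.
g.add((D.Customer, RDF.type, OWL.Class))
g.add((D.customer42, RDF.type, D.Customer))

# An individual in P recording how/when/from where that fact got into D.
g.add((P.PopulatedFact, RDF.type, OWL.Class))
g.add((P.fact_0001, RDF.type, P.PopulatedFact))
g.add((P.fact_0001, P.aboutSubject, D.customer42))
g.add((P.fact_0001, P.sourceDocument, Literal("legacy_export_2009-02.csv")))
g.add((P.fact_0001, P.extractedBy, Literal("field-mapping rule R17")))
g.add((P.fact_0001, P.extractedOn, Literal("2009-03-01", datatype=XSD.date)))

print(g.serialize(format="turtle"))

This stays in OWL DL, but the record only points at the individual the
fact is about rather than at the statement itself, so it shares the
limitation mentioned above: it does not generalize to arbitrary triples.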

You could take a look at some other reasoning systems.

In particular, PowerLoom (which I work on occasionally) is more  
expressive, supports both open and closed world reasoning, and can  
even attach relations to propositions.  It doesn't have the same  
degree of classification reasoning implemented yet, so if you really
need that, it may not be the best choice.  If you need a description
logic that doesn't require a division between classes, properties and  
individuals -- and you can operate in Common Lisp -- you could also  
look at the older Loom system.  It is expressive and has a
high-performance classifier, but it does not guarantee complete
inference.

PowerLoom:  http://www.isi.edu/isd/LOOM/PowerLoom/
Loom:  http://www.isi.edu/isd/LOOM/


