[protege-owl] Modeling change, source, uncertainty, contradiction?
tar at ISI.EDU
Wed Aug 22 12:37:05 PDT 2007
On Aug 22, 2007, at 6:11 AM, Johann Petrak wrote:
> Matt Williams wrote:
>> As a very simple approach to modelling time, you could use
>> time-interval-valid versions of the ontology. Not pretty, but
>> might be
> For my purpose, I would be more interested in searchability than
> deductability. In other words, I do not want to make deductions based
> on time -- it would be sufficient to find properties of instances
> that are valid at a specific time. (So time would not be an issue
> for classes, just for instances)
> Naively I want an attribute "valid during" for each property/relation
> between instances.
This is generally tough to get without some sort of temporal
reasoning system to keep the information separate. One approach that
I worked on integrated such reasoning with a description logic system
(LOOM, starting with version 2.1). The only reference to that is in
the release notes, though.
As Matt Williams indicated, sometimes a simple partitioning or
context system will be sufficient, as long as the time granularity
does not need to be too fine.
Although you suggest that you don't care about deduction, your desire
to find items "valid at a specific time" in fact implies some form of
reasoning that understands notions of temporal extent and
persistence, unless you are able to specify ALL of the specific
times at which your fact is supposed to hold. But that quickly gets
unwieldy. That generally means you want to have some type of (at
least) interval reasoning capability so that you don't have to code
the temporal constraints directly in any query that you write.
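To make the interval idea concrete, here is a minimal sketch in plain
Python (not any particular ontology tool; all names such as Fact,
holds_at, and the example instances are made up for illustration). The
point is that the interval-containment check lives in one place, so
queries do not have to encode temporal constraints themselves:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Fact:
    """A property assertion between two instances, with a validity interval.
    valid_from/valid_to are inclusive bounds (e.g. years); None = unbounded."""
    subject: str
    predicate: str
    obj: str
    valid_from: Optional[int] = None
    valid_to: Optional[int] = None

def holds_at(fact: Fact, t: int) -> bool:
    """Interval containment: does the fact hold at time t?"""
    after_start = fact.valid_from is None or fact.valid_from <= t
    before_end = fact.valid_to is None or t <= fact.valid_to
    return after_start and before_end

def query(kb, subject, predicate, t):
    """All objects y such that (subject predicate y) is valid at time t.
    The temporal constraint is coded here once, not in every query."""
    return [f.obj for f in kb
            if f.subject == subject and f.predicate == predicate
            and holds_at(f, t)]

kb = [
    Fact("instanceX", "Rel1", "instanceY", valid_from=1990, valid_to=1999),
    Fact("instanceX", "Rel1", "instanceZ", valid_from=2000, valid_to=None),
]
print(query(kb, "instanceX", "Rel1", 1995))  # ['instanceY']
print(query(kb, "instanceX", "Rel1", 2005))  # ['instanceZ']
```

A real temporal reasoner would of course also handle interval
relations (overlaps, meets, etc.), not just point containment.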
> More generically I want several attributes: "valid during", "source"
> The only thing I would need to do is to get the value of these
> attributes out of the knowledge base and to search for triples
> where these attributes match a specific pattern.
This is where things start to get a bit trickier, since the searching
interface will either need to understand the conventions for the
temporal or attributive relationships or else you will need to do
some reified version of the propositions. But if you reify them,
then you will end up having to do any domain level reasoning on your
own. It's a tricky problem.
>> DL ontologies will not handle conflicting information. To do that, you
>> need to use a defeasible formalism. There is a little bit of work on
>> ontologies & defeasible reasoning.
>> I have used argumentation & ontologies; there's a paper at
>> http://acl.icnet.uk/~mw/WillliamsHunterICTAI07.pdf which also has
>> references to the other approaches.
> Thank you for the reference. I am not sure I need it at such a
> complex level though, since I do not need a reasoner to
> come up with sets of entailed or conflicting facts.
> I think it would be sufficient for me to be able to model
> something like
> "source A indicates instanceX Rel1 instanceY"
> "source B indicates instanceX notRel1 instanceY"
> or put differently
> instanceX Rel1-withattr: from source A instanceY
This requires either some form of the frames-world facets or, more
generally, a higher-order logic, since you are making assertions
about particular sentences rather than just making domain-level
assertions.
The straightforward approach involves using higher-order logics.
This allows attaching source and certainty values directly to
sentences in the logic. But there are generally few such logics
available, and they have somewhat limited reasoning. Some examples
are Cyc (<http://www.cycorp.com>) and PowerLoom (<http://www.isi.edu/
isd/LOOM/PowerLoom/>). [Disclaimer: I work on the latter system]
The alternative is to reify the relationships as instances in the
logic. The reification will also work, but by doing so you give up
any understanding by OWL of the fact that these reifications are
properties of particular instances. It will also slightly complicate
your querying, because instead of a query like the following (using
made-up pseudo-syntax):
SELECT ?y WHERE (instanceX Rel1 ?y)
you would need to query for something like
SELECT ?y WHERE (?rel isa Relationship) and (subject ?rel instanceX)
and (predicate ?rel Rel1) and (object ?rel ?y)
With reification, you have an individual that represents the
relationship, and that allows you to make arbitrary additional
assertions about that relationship. This uses a technique similar to
that used by OWL for representing N-ARY relationships. (Ref: <http://
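As a sketch of that reified query pattern, here is what it might look
like in plain Python (dicts standing in for instances; the names
Relationship, select_objects, and the example data are all made up,
not any particular triple store's API). Note how extra annotations
such as source attach naturally to the reified instance:

```python
# Each reified relationship is an "instance" (here a dict) with
# subject, predicate, and object properties, plus arbitrary extra
# annotations such as source and validity.
relationships = [
    {"isa": "Relationship", "subject": "instanceX", "predicate": "Rel1",
     "object": "instanceY", "source": "A", "valid_during": "1990-1999"},
    {"isa": "Relationship", "subject": "instanceX", "predicate": "Rel1",
     "object": "instanceZ", "source": "B", "valid_during": "2000-"},
]

def select_objects(rels, subject, predicate, **annotations):
    """The direct query SELECT ?y WHERE (subject predicate ?y),
    rewritten as a match over reified Relationship instances,
    optionally filtered on extra annotations such as source."""
    return [r["object"] for r in rels
            if r.get("isa") == "Relationship"
            and r.get("subject") == subject
            and r.get("predicate") == predicate
            and all(r.get(k) == v for k, v in annotations.items())]

print(select_objects(relationships, "instanceX", "Rel1"))
# ['instanceY', 'instanceZ']
print(select_objects(relationships, "instanceX", "Rel1", source="A"))
# ['instanceY']
```

This mirrors the RDF reification vocabulary (rdf:Statement with
rdf:subject, rdf:predicate, rdf:object), at the cost of the longer
query shown above.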
> So it comes down to attaching searchable arbitrary attributes
> to properties/relations again.
Well, they wouldn't be properties per se anymore if you used
reification. Instead, the relationship would be an instance of
something like Relationship with its own properties such as subject,
predicate, and object. In other words, it would look like an RDF
triple, about which you could express something else.
In that sense, perhaps Matt is correct in that you might want to look
at less constrained modeling systems such as RDF/RDFS instead of
OWL. You can then make arbitrary assertions about any triple, as
long as you are content to do any reasoning yourself.
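For instance, the "source A asserts / source D denies" case from
earlier could be handled by annotating triples with provenance and
doing the conflict-spotting in application code, since no DL reasoner
is involved. A minimal sketch (all names invented for illustration):

```python
# Triples annotated with provenance; the "reasoning" (here, spotting
# sources that disagree about a triple) is application code, not a
# description logic engine.
statements = [
    ("instanceX", "Rel1", "instanceY", {"source": "A", "asserted": True}),
    ("instanceX", "Rel1", "instanceY", {"source": "D", "asserted": False}),
    ("instanceX", "Rel2", "instanceZ", {"source": "B", "asserted": True}),
]

def conflicting(stmts):
    """Return (s, p, o) triples asserted by one source and denied by another."""
    seen = {}
    conflicts = set()
    for s, p, o, meta in stmts:
        key = (s, p, o)
        if key in seen and seen[key] != meta["asserted"]:
            conflicts.add(key)
        seen.setdefault(key, meta["asserted"])
    return conflicts

print(conflicting(statements))
# {('instanceX', 'Rel1', 'instanceY')}
```

The same annotation slot could carry a certainty value, but combining
certainties is again entirely up to your own code.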
> Could it be that I am missing something totally elementary
> here because it seems that should be something that is needed
> all the time?
Well, it's a hard problem, especially with regard to producing
reasoners and query answering services over the more complicated
logics. For one thing, having both the base relations and
reifications of them introduces the issue of keeping the two items
synchronized. The usual solution to that is to use only the reified
form, since that is the simplest. But then you don't get to treat
those particular relationships as relationships that, say, OWL would
understand. They would be instances that you would have to provide
all the reasoning for.
>> If you want to discuss this in more detail, email me off-list.
>> Johann Petrak wrote:
>>> I am pretty new to using ontologies for knowledge representation,
>>> so most of the tutorials and examples I have seen are only about
>>> modeling some consistent set of unchanging facts.
>>> However, in real world situations it is often necessary to deal
>>> with knowledge or information that has one of the following
>>> characteristics:
>>> * a fact might change over time. More exactly, a property
>>> might be valid during some period of time but not another.
>>> Is it possible to model this in OWL ontologies and if yes,
>>> what are common design patterns to do it?
>>> * The fact that some instance has some property might be
>>> known based on sources A and B but might not be confirmed
>>> from source C. More problematic, it might contradict
>>> information from source D. So instead of some fact just
>>> "existing" we would like to model that it exists
>>> "according to source A" but "not confirmed by source C"
>>> and "not, according to source D"
>>> Is it possible to model this?
>>> * Sometimes it would be useful to attach a level of belief
>>> to a fact. E.g. some instance having some property might
>>> be likely but not certain.
>>> These things probably differ in how hard they are to model, if
>>> they can be modeled at all.
>>> My biggest concern at the moment is change over time: for
>>> most applications where I need some knowledge representation
>>> it would be extremely important to be able to know that
>>> e.g. some name was used during a certain time or that some
>>> property existed during a certain period but not another.
>>> I would be thankful for any hints you could give me or
>>> any papers or sources you could point out where these
>>> issues are discussed.
>>> protege-owl mailing list
>>> protege-owl at lists.stanford.edu
>>> Instructions for unsubscribing: http://protege.stanford.edu/doc/