[protege-owl] Protege-OWL API, Problem saving an Ontology
dennis.spohr at ling.uni-stuttgart.de
Sun Jan 31 02:58:34 PST 2010
Kevin Alonso <kalonso <at> vicomtech.org> writes:
> Hello Timothy,
> Hello Timothy,
> I have been doing some tests with other large ontologies and the save
> process goes well. Debugging my program I see nothing unusual; it just
> runs very slowly once the program enters the protegeowl-api sources. I
> have tried saving the ontology in an old 3.4 beta version (build 130)
> and the saving process is very quick. I enclose my Protege project if
> you want to test with it.
> Could this be a bug in the new releases?
> Thank you,
I came across this thread while looking for a solution to a problem very
similar to the one Kevin reported, just more serious. I have also
experienced incredibly long saving times (3 hours) with a file that ended
up at around 2 MB, and I have the same problem with a file that should be
a bit bigger (I would expect around 4-6 MB), which crashed after trying
to save the file for 45 hours. At that point it had been occupying around
2.5 GB of RAM.
The strange thing is that at the moment saving started, the program had
used around 400 MB of RAM, and that number went slowly up and down over
the following hours until it crashed. I've also tried running it on a
"better" machine where I could allocate 10 GB of RAM, but there the
program crashed with a "java.lang.OutOfMemoryError: unable to create new
native thread" exception.
I am using SPARQL quite often in the program, so I am not sure whether
the problem lies there, in the number of threads that are created. I
checked that with Thread.activeCount() on the 2 MB file, and when the
saving started, the program had 436 active threads. Is that normal or too
many? I haven't checked it on the bigger file, though I could if anyone
thinks the problem might actually be related to that.
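For reference, this check can be done with a small standalone sketch
(class and variable names below are mine, not from the original program).
Note that Thread.activeCount() is only an estimate for the current thread
group, while Thread.getAllStackTraces() covers every live thread in the
JVM; the thread names it reports may also hint at where the threads come
from. Worth noting too that "unable to create new native thread" is
typically an OS-level limit on thread creation rather than a heap
exhaustion, which would be consistent with a thread leak:

```java
import java.util.Map;

// Sketch: count and name the live threads in the JVM, e.g. right
// before triggering the save, to see whether threads are leaking.
public class ThreadCheck {
    public static void main(String[] args) {
        // Estimate for the current thread's group (and subgroups).
        int estimate = Thread.activeCount();
        // Every live thread in the JVM, with its current stack trace.
        Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();

        System.out.println("activeCount estimate: " + estimate);
        System.out.println("all live threads:     " + all.size());
        for (Thread t : all.keySet()) {
            System.out.println("  " + t.getName()
                    + " (daemon=" + t.isDaemon()
                    + ", state=" + t.getState() + ")");
        }
    }
}
```

A freshly started JVM usually has only a handful of threads (main plus a
few daemons), so a count in the hundreds at save time would stand out.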
So, has there been any follow-up on the problem Kevin reported, and maybe
even a solution? I haven't been able to find more on the list.
Thanks a lot!
> Timothy Redmond escribió:
> > No, this is very small. The thesaurus, a moderately large ontology, is
> > over 89M. Size is only one of many metrics, of course, but I suspect
> > that this is not the issue.
> > Another thing that occurred to me (probably not it) is that maybe your
> > process needs more memory (change the -Xmx JVM parameter). But I think
> > this is unlikely, also because things are small. What I think we need
> > to see is some thread dumps.
> > -Timothy
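The -Xmx suggestion above can be sanity-checked from inside the running
program, since Runtime.maxMemory() reports the heap ceiling the JVM will
actually use. A minimal sketch (class name is mine) that verifies the
flag took effect:

```java
// Sketch: report the JVM's heap limits, e.g. to confirm that a
// -Xmx setting was actually picked up by the process.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb   = rt.maxMemory()   / (1024 * 1024); // -Xmx ceiling
        long totalMb = rt.totalMemory() / (1024 * 1024); // heap allocated so far
        long freeMb  = rt.freeMemory()  / (1024 * 1024); // unused part of that

        System.out.println("max heap (-Xmx):  " + maxMb + " MB");
        System.out.println("current heap:     " + totalMb + " MB");
        System.out.println("free in heap:     " + freeMb + " MB");
    }
}
```

For the thread dumps Timothy asks for, the JDK's jstack tool (or sending
SIGQUIT / Ctrl-Break to the JVM) produces them without any code changes.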