[protege-discussion] Memory leak in the Protégé RMI server
Bernhard Wellhöfer
Bernhard.Wellhoefer at gaia-group.com
Mon Sep 18 00:36:30 PDT 2006
Hi Tania,
Thanks for the update! I also tried to find the leak, but without success...
Regards,
Bernhard
> -----Original Message-----
> From: protege-discussion-bounces at lists.stanford.edu
> [mailto:protege-discussion-bounces at lists.stanford.edu] On Behalf
> Of Tania Tudorache
> Sent: Friday, 15 September 2006 19:21
> To: User support for Core Protege and the Protege-Frames editor
> Subject: Re: [protege-discussion] Memory leak in the Protégé RMI server
>
> Hi Bernhard,
>
> We have not solved the memory leak yet. We have investigated
> possible causes, but we did not have the time to look into
> this issue in more depth.
> If you already know the cause and have a fix for it, we are
> happy to integrate it into the code.
>
> Cheers,
> Tania
>
>
>
> Bernhard Wellhöfer wrote:
>
> >Hello Tania,
> >
> >What is the current status of this issue? Can you give me a
> >quick update?
> >
> >Regards and thanks,
> >
> >Bernhard
> >
> >>-----Original Message-----
> >>From: protege-discussion-bounce at SMI.Stanford.EDU
> >>[mailto:protege-discussion-bounce at SMI.Stanford.EDU] On Behalf Of
> >>Bernhard Wellhöfer
> >>Sent: Tuesday, 27 June 2006 09:31
> >>To: protege-discussion at smi.Stanford.EDU
> >>Subject: Re: [protege-discussion] Memory leak in the Protégé RMI server
> >>
> >>
> >>Hello Tania,
> >>
> >>Thanks for your reply.
> >>
> >>I did not use any plugin for the Protégé RMI server. Two different
> >>clients connect to the Protégé server. Both clients use plugins.
> >>The Protégé server runs out of memory. The memory usage of the
> >>clients is also increasing, but much more slowly. Since I have to
> >>restart the clients when the server dies, I cannot say whether we
> >>have a memory leak for the clients too or whether their memory
> >>usage will reach saturation.
> >>
> >>Thanks,
> >>
> >>Bernd
> >>
> >>
> >>-----Original Message-----
> >>From: protege-discussion-bounce at SMI.Stanford.EDU
> >>[mailto:protege-discussion-bounce at SMI.Stanford.EDU] On Behalf Of
> >>Tania Tudorache
> >>Sent: Tuesday, 27 June 2006 02:04
> >>To: protege-discussion at smi.Stanford.EDU
> >>Subject: Re: [protege-discussion] Memory leak in the Protégé RMI server
> >>
> >>
> >>Bernd,
> >>
> >>Thank you for sending us the spreadsheet showing the memory leak.
> >>Unfortunately, we did not have time to look at this issue.
> >>
> >>I know of one possible memory leak, but I have not verified it.
> >>Protege keeps an internal knowledge base (which you can get with
> >>project.getInternalKnowledgebase()), in which it stores the
> >>instances from the pprj file. These instances correspond to widget
> >>information (graphical information). Each time you browse a new
> >>class or instance, new internal instances are created which are
> >>never cleaned up. They get cleaned up only when you load the
> >>project again.
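> >>
> >>A quick way to check this hypothesis - just a sketch, using the
> >>accessor named above (the exact method and frame-count getter may
> >>differ between Protege versions) - is a background thread that logs
> >>the frame count of the internal knowledge base while you browse:
> >>
> >>  import edu.stanford.smi.protege.model.KnowledgeBase;
> >>  import edu.stanford.smi.protege.model.Project;
> >>
> >>  public class InternalKbMonitor {
> >>      // Logs the internal KB frame count once a minute. If the
> >>      // number keeps climbing while you only browse classes and
> >>      // instances, the widget instances are probably never released.
> >>      public static void watch(final Project project) {
> >>          Thread t = new Thread(new Runnable() {
> >>              public void run() {
> >>                  while (true) {
> >>                      KnowledgeBase internal =
> >>                              project.getInternalKnowledgebase();
> >>                      System.out.println("internal frames: "
> >>                              + internal.getFrameCount());
> >>                      try { Thread.sleep(60 * 1000); }
> >>                      catch (InterruptedException e) { return; }
> >>                  }
> >>              }
> >>          });
> >>          t.setDaemon(true);
> >>          t.start();
> >>      }
> >>  }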
> >>
> >>You mentioned that you did not use any plugin. Does this mean that
> >>you started the server, but did not connect any Protege client to it?
> >>
> >>Tania
> >>
> >>
> >>Bernhard Wellhöfer wrote:
> >>
> >>
> >>
> >>>Dear Protégé team,
> >>>
> >>>Have you already found time to look into this issue?
> >>>
> >>>Please find attached an Excel spreadsheet which shows the memory
> >>>leak. Each row in the spreadsheet is one line printed by
> >>>"-verbose:gc": the first column is the memory used before the
> >>>garbage collector started to clean up, the second column is the
> >>>memory usage after the GC run finished, and the third column is
> >>>the time of each GC run. For this test the Protégé RMI server
> >>>(3.1.1, build 216) was started with 512 MB as maximum heap size
> >>>- without any plugin.
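> >>>
> >>>In case it is useful, a small sketch like the following can rebuild
> >>>those three columns from the raw log. It assumes the classic
> >>>HotSpot "-verbose:gc" line format
> >>>"[GC 12345K->6789K(49152K), 0.0123456 secs]"; adjust the pattern if
> >>>your JVM prints something else:
> >>>
> >>>  import java.io.BufferedReader;
> >>>  import java.io.FileReader;
> >>>  import java.util.regex.Matcher;
> >>>  import java.util.regex.Pattern;
> >>>
> >>>  public class GcLogToCsv {
> >>>      // Matches both "[GC ...]" and "[Full GC ...]" lines.
> >>>      private static final Pattern GC_LINE = Pattern.compile(
> >>>          "\\[(?:Full )?GC (\\d+)K->(\\d+)K\\(\\d+K\\), ([\\d.]+) secs\\]");
> >>>
> >>>      public static void main(String[] args) throws Exception {
> >>>          BufferedReader in = new BufferedReader(new FileReader(args[0]));
> >>>          String line;
> >>>          while ((line = in.readLine()) != null) {
> >>>              Matcher m = GC_LINE.matcher(line);
> >>>              if (m.find()) {
> >>>                  // used before GC (KB); used after GC (KB); pause (secs)
> >>>                  System.out.println(m.group(1) + ";" + m.group(2)
> >>>                          + ";" + m.group(3));
> >>>              }
> >>>          }
> >>>          in.close();
> >>>      }
> >>>  }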
> >>
> >>
> >>>Can you send me a short statement on whether, and if so when, you
> >>>will look into this issue? I tried to find the leak with a (simple)
> >>>profiler, but was not successful...
> >>
> >>
> >>>Regards,
> >>>
> >>>Bernd
> >>>
> >>>-----Original Message-----
> >>>From: protege-discussion-bounce at SMI.Stanford.EDU
> >>>[mailto:protege-discussion-bounce at SMI.Stanford.EDU] On Behalf Of
> >>>Bernhard Wellhöfer
> >>>Sent: Wednesday, 31 May 2006 10:12
> >>>To: protege-discussion at smi.Stanford.EDU
> >>>Subject: [protege-discussion] Memory leak in the Protégé RMI server
> >>>
> >>>Hello,
> >>>
> >>>I have a big ontology and serve it on the web via a servlet. The
> >>>servlet code connects via RMI to a Protégé server (3.1.1, build
> >>>216) to retrieve the ontology details. The Protégé server reads
> >>>the ontology data from a database. The Protégé server and the
> >>>servlet code run without any Protégé plugin.
> >>
> >>
> >>>The ontology (slot values, slots, instances, ...) is "bigger" than
> >>>the size of the main memory (1 GB). Via "-Xmx" I give both the
> >>>servlet container Java process and the Protégé server Java process
> >>>512 MB as maximum heap size. I start both Java processes with the
> >>>"-verbose:gc" option to trace the memory usage.
> >>
> >>
> >>>It now turns out that after several days the Protégé server runs
> >>>out of memory.
> >>
> >>
> >>>Please find attached a quick and dirty test program to replicate
> >>>the problem:
> >>
> >>
> >>>1) Start the Protégé RMI server with "-verbose:gc" and "-Xmx48m"
> >>>as JVM options. With "-verbose:gc" you will see the current memory
> >>>usage each time the garbage collector runs. With "-Xmx48m" the
> >>>Protégé server starts with a maximum heap of 48 MB. 48 MB is small
> >>>enough to replicate the problem quickly; you could try an even
> >>>smaller number to speed up the crash.
> >>
> >>
> >>>2) At the top of the test program, change the class variables
> >>>which define the Protégé RMI server host name, the user, the
> >>>password and the project name.
> >>
> >>
> >>>3) Compile and run the test program. The test program starts two
> >>>threads: the first thread checks whether a class Foo and a slot
> >>>foo exist in the project; if not, it creates the class and the
> >>>slot and adds the slot to the class. It then starts an endless
> >>>loop that creates instances of Foo and sets a big string as the
> >>>foo slot value of each created instance. The second thread also
> >>>runs an endless loop; it gets the instances created by the first
> >>>thread from the knowledge base and reads the foo slot value. The
> >>>threads connect to the Protégé server separately to force the
> >>>update process to be executed. (A sketch of such a program follows
> >>>after step 4.)
> >>
> >>
> >>>4) Now just wait. For me it took one hour and 18,000 created
> >>>instances of the class Foo until the Protégé server ran out of
> >>>memory. (The ontology was not empty, so it may take longer when
> >>>you start with a clean ontology.) My test program, the Protégé
> >>>server and the database ran on different machines; running all
> >>>three on one machine will definitely speed up the process.
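> >>>
> >>>Since Ecartis strips attachments, here is a rough sketch of what
> >>>the test program does (the real file is at the URL further below).
> >>>Host, user, password and project name are placeholders, and the
> >>>client calls use the Protégé 3.x frames API - check the names
> >>>against your javadoc:
> >>>
> >>>  import java.util.Iterator;
> >>>
> >>>  import edu.stanford.smi.protege.model.Cls;
> >>>  import edu.stanford.smi.protege.model.Instance;
> >>>  import edu.stanford.smi.protege.model.KnowledgeBase;
> >>>  import edu.stanford.smi.protege.model.Project;
> >>>  import edu.stanford.smi.protege.model.Slot;
> >>>  import edu.stanford.smi.protege.server.RemoteProjectManager;
> >>>
> >>>  public class TestProtegeMemoryLeak {
> >>>      // Step 2: placeholders - adjust to your server.
> >>>      static final String HOST = "localhost";
> >>>      static final String USER = "user";
> >>>      static final String PASSWORD = "password";
> >>>      static final String PROJECT = "MyProject";
> >>>
> >>>      public static void main(String[] args) {
> >>>          new Thread(new Writer()).start();
> >>>          new Thread(new Reader()).start();
> >>>      }
> >>>
> >>>      // Each thread opens its own connection, so every change has
> >>>      // to be propagated through the server.
> >>>      static KnowledgeBase connect() {
> >>>          Project p = RemoteProjectManager.getInstance()
> >>>                  .getProject(HOST, USER, PASSWORD, PROJECT, true);
> >>>          return p.getKnowledgeBase();
> >>>      }
> >>>
> >>>      static class Writer implements Runnable {
> >>>          public void run() {
> >>>              KnowledgeBase kb = connect();
> >>>              Cls foo = kb.getCls("Foo");
> >>>              Slot slot = kb.getSlot("foo");
> >>>              if (foo == null) {
> >>>                  foo = kb.createCls("Foo", kb.getRootClses());
> >>>                  slot = kb.createSlot("foo");
> >>>                  foo.addDirectTemplateSlot(slot);
> >>>              }
> >>>              StringBuffer sb = new StringBuffer();
> >>>              for (int i = 0; i < 10000; i++) sb.append('x');
> >>>              String bigValue = sb.toString();
> >>>              // Endless loop: create instances, set the big string.
> >>>              for (int i = 0; ; i++) {
> >>>                  Instance inst = kb.createInstance("Foo_" + i, foo);
> >>>                  inst.setOwnSlotValue(slot, bigValue);
> >>>              }
> >>>          }
> >>>      }
> >>>
> >>>      static class Reader implements Runnable {
> >>>          public void run() {
> >>>              KnowledgeBase kb = connect();
> >>>              // Endless loop: read back what the writer created.
> >>>              while (true) {
> >>>                  Cls foo = kb.getCls("Foo");
> >>>                  Slot slot = kb.getSlot("foo");
> >>>                  if (foo == null || slot == null) continue;
> >>>                  Iterator it = kb.getInstances(foo).iterator();
> >>>                  while (it.hasNext()) {
> >>>                      ((Instance) it.next()).getOwnSlotValue(slot);
> >>>                  }
> >>>              }
> >>>          }
> >>>      }
> >>>  }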
> >>
> >>
> >>>Who can help here, and where is the memory leak? Does somebody
> >>>have a good memory profiler to investigate this issue in detail?
> >>
> >>
> >>>Without a fix, the test program shows that the Protégé server
> >>>cannot handle ontologies of arbitrary size. Scaling linearly
> >>>(512 MB / 48 MB ≈ 10.7), 18,000 instances for 48 MB correspond to
> >>>roughly 200,000 instances for a maximum heap size of 512 MB - that
> >>>is not a huge ontology, and Protégé already fails.
> >>
> >>
> >>>Regards,
> >>>
> >>>Bernd
> >>>
> >>>
> >>>
> >>>-- Attached file removed by Ecartis and put at URL below --
> >>>-- Type: application/octet-stream
> >>>-- Desc: TestProtegememoryLeack.java
> >>>-- Size: 6k (6236 bytes)
> >>>-- URL :
> >>>http://protege.stanford.edu/mail_archive/attachments/TestProtegememoryLeack.java
> >>>
> >>>
> >>>-- Attached file removed by Ecartis and put at URL below --
> >>>-- Type: application/x-zip-compressed
> >>>-- Desc: gc_protege.zip
> >>>-- Size: 539k (552554 bytes)
> >>>-- URL :
> >>>http://protege.stanford.edu/mail_archive/attachments/gc_protege.zip
> >>>
> >>>