[protege-discussion] RMI server scalability for Protege Frames 3.5 editor

John Pierre johnmapierre at gmail.com
Tue Jul 2 16:24:04 PDT 2013


Thank you very much for your help, Timothy; the TGViz tab was indeed the
problem.  I'm glad there was such a simple explanation; luckily, we aren't
even using that plugin.

Please let me know if there are any examples of the proper style for coding
plugins against the client-server architecture; we may be developing a plugin
in the near future.
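
(Since the question above is about plugin coding style for the client-server,
here is a minimal hypothetical sketch of the kind of issue Timothy describes
below.  The logs further down show one remote getFrame() call per cache miss,
and the slow pattern is issuing such calls one frame at a time; the faster
style is to fetch in bulk.  The Frame and RemoteStore types and their method
names are assumptions for illustration only, not the actual Protege 3 plugin
API.)

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;
    import java.util.Map;

    // Placeholder type standing in for the real Protege frame model.
    class Frame { }

    // Hypothetical remote store: one round trip per call vs. one bulk call.
    interface RemoteStore {
        Frame getFrame(String name);                            // one RMI round trip
        Map<String, Frame> getFrames(Collection<String> names); // single bulk round trip
    }

    class PluginStyleSketch {

        // Anti-pattern on the client-server: N frames means N remote round
        // trips, which is what the 47,952 getFrame() cache misses in the
        // log excerpt below suggest.
        static List<Frame> fetchOneByOne(RemoteStore store, List<String> names) {
            List<Frame> result = new ArrayList<>();
            for (String name : names) {
                result.add(store.getFrame(name));  // every cache miss costs a network hop
            }
            return result;
        }

        // Preferred style: request everything the tab needs at once, so the
        // server answers in one round trip and the client-side cache is warmed.
        static List<Frame> fetchInBulk(RemoteStore store, List<String> names) {
            return new ArrayList<>(store.getFrames(names).values());
        }
    }

On a high-latency link the difference is simple round-trip arithmetic: at the
80 ms of latency Timothy mentions, 47,952 sequential getFrame() calls spend
over an hour waiting on the network, while a handful of bulk requests spend
seconds.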





On Fri, Jun 28, 2013 at 2:37 PM, Timothy Redmond <tredmond at stanford.edu> wrote:

>
> Sorry for the delay in the response.  The problem that you are having
> appears to occur in the TGViz tab.  When the tab is disabled, the protege
> client comes up quickly (over here at least) even when I simulate fairly
> extreme network delays (800 kilobits per second and 80 milliseconds of
> latency).  But when the TGViz tab is enabled, it takes a long time to come
> up.  I have almost finished writing this message and the TGViz tab is still
> trying to initialize.
>
> This is an old plugin and it hasn't been rewritten to accommodate the
> Protege client-server architecture.  Unfortunately, the Protege 3
> client-server architecture requires that plugin code be written in a
> certain style to avoid major slow-downs of this type.
>
> You should be able to fix the problem by shutting down the server, loading
> the project used by the server, disabling the TGViz tab, and then
> restarting the server.  Alternatively, you can test this theory with a
> client from which the TGViz tab has been removed.
>
> -Timothy.
>
>
>
> On 06/23/2013 02:57 PM, John Pierre wrote:
>
>  Thanks for the thoughtful reply.  I am using Protege 3.5 Build 663.
> The .pprj file is only about 52 KB, with roughly 150 definitions for
> top-level classes, slots, layout properties, etc.
>
>  Looking at the fine-grained logging on the server and the client, it seems
> the client loads about 80 MB of data when it opens the ontology.  It appears
> to be loading all the frames into the cache, with 47,952 calls like this:
>
> 2013.06.22 12:21:57.975 PDT FINE: Invocation took 3 ms -- RemoteClientFrameStore$2.invoke()
> 2013.06.22 12:21:57.975 PDT FINE: Cache miss for frame named 1125R [171078] -- RemoteClientFrameStore.getFrame()
> 2013.06.22 12:21:57.975 PDT FINE: Remote invoke: getFrame Args: -- RemoteClientFrameStore$2.invoke()
>
> I noticed that the -Dserver.client.preload.skip and -Dpreload.frame.limit
> properties do not seem to have any effect whatsoever.
>
>  I'll send some additional info off-line.  Sincere thanks!
>
>
>
>
>
>
>
>
>  On Sat, Jun 22, 2013 at 12:49 PM, Timothy Redmond <tredmond at stanford.edu> wrote:
>
>>
>> This type of performance problem is a tricky area, and to understand it
>> better we would have to know exactly where the bottleneck is.  But my
>> first questions would be: how large is the .pprj file, and what version of
>> Protege are you using?  We recently made a change to Protege that optimizes
>> how the .pprj part of a frames project is loaded by the client.  Using the
>> very latest version of Protege may dramatically improve the startup
>> performance if the .pprj file is large.
>>
>> If moving to the latest Protege is not sufficient, you could send me
>> a copy of the ontology so that I can see the performance issue and find
>> the bottleneck here.  If you send the ontology off-list, I can keep it
>> private if needed.
>>
>>
>>
>>  Therefore the culprit seems to be network demands of the rmi server.
>>
>>
>>  This sounds like a simple
>>
>>
>>
>> 1. What is the practical scalability limit of the Protege RMI
>> client-server in terms of ontology size?
>>
>>
>>  The client-server is used with ontologies of over 128K classes.  In that
>> example the ontology is also a MySQL database project, so the 25,000-frame
>> ontology you describe is not that large.
>>
>>
>>
>>  2.  Are there additional configuration settings that we can try to get
>> our ontology to load?
>>
>>
>>  It has been a long time since I worked on the Protege 3 client-server,
>> but I think you have found the main options.  You should also be warned
>> that
>>
>>             -Dserver.client.preload.skip=true
>>
>>
>> might work against you.  You save time on the preload, but all the
>> operations thereafter are slower until the client-side cache has been
>> sufficiently built up.
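
(Both -Dserver.client.preload.skip and -Dpreload.frame.limit are JVM system
properties, so, assuming the client is launched directly with java rather
than through a wrapper script, they would be added to the client's command
line.  Here <limit> is just a placeholder, and the remaining arguments are
whatever the installation normally uses:

    java -Dserver.client.preload.skip=true -Dpreload.frame.limit=<limit> <usual Protege client launch arguments>

If a wrapper script is used instead, the same -D flags go wherever that
script builds its java command line.)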
>>
>>
>>
>>  3.  Are there other collaboration models we could try for allowing
>> multiple people to work on a large scale Frames ontology besides the rmi
>> client-server approach?
>>
>>
>>  You could also try WebProtege.
>>
>> -Timothy
>>
>>
>>
>>
>> On 06/19/2013 10:26 AM, John Pierre wrote:
>>
>>    We are trying to set up a collaborative installation so multiple
>> developers can edit a Frames ontology.  The ontology currently has about
>> 25,000 frames.
>>
>>  We have successfully configured the RMI server and can connect clients
>> to the server and access the example ontologies across the network.
>>
>>  The problem is that our 25,000 frame ontology isn't able to load into
>> the clients when accessed across the network.
>>
>>  Our ontology is MySQL backed.
>>
>>  The ontology loads fine when accessed on the same machine without going
>> through the rmi server.
>>
>>  The ontology loads but takes several minutes to do so when running the
>> client and server on the same machine through the localhost loopback.
>>
>>  The ontology load hangs and never completes when running the client and
>> server on different machines, whether on a wide area network (20+ Mbps) or
>> on a local area Ethernet network (100+ Mbps).  After waiting 30 minutes or
>> so we get broken pipes and/or timed-out connections.
>>
>>  The ontology loads if the MySQL database is accessed across the network
>> directly without using the client-server (the .pprj file is on the client
>> side but points to a MySQL database hosted on the network).
>>
>>  Therefore the culprit seems to be network demands of the rmi server.
>>
>>  We have  -Dserver.use.compression=true turned on at the server.
>>
>>  We've tried -Dserver.client.preload.skip=true on the client side.
>>
>>
>>  It seems this 25,000-frame ontology might be too large for the RMI
>> client-server architecture, but the Protege documentation hints
>> that much larger ontologies have been developed using Protege.
>>
>>  Questions:
>>
>>  1. What is the practical scalability limit of the Protege RMI
>> client-server in terms of ontology size?
>>
>>  2.  Are there additional configuration settings that we can try to get
>> our ontology to load?
>>
>>  3.  Are there other collaboration models we could try for allowing
>> multiple people to work on a large scale Frames ontology besides the rmi
>> client-server approach?
>>
>>  Thanks in advance for your help.
>>
>>  John
>>
>>
>>
>>