
[bioontology-support] [BioPortal] Feedback from rakesh

John Graybeal jgraybeal at
Tue Aug 18 13:32:01 PDT 2020

That sounds correct. I believe the command you want is in the link we included early in the thread:

It looks like the second link there is broken; if I'm not mistaken, this is the script that performs the same task.


On Aug 18, 2020, at 12:39 PM, Rakesh Nagarajan <rakesh at<mailto:rakesh at>> wrote:

     Thanks! We'll try this in the next few days. One point of clarification. I should:

1) Upload zipped file as OWL as Jennifer states
2) Put an unzipped copy in /srv/ontoportal/data/repository/SNOMEDCD/xxx/

Is that right? I will do #2, but that folder won't be created until the submission is uploaded, correct? And the parsing will likely fail by then, correct? If so, how do I parse the ontology at the command line on the server itself? Can you have the DevOps expert send that command to me, please?

Finally, if that doesn't work I'll move to the community support as you recommended. What is the listserv for that?


On Tue, Aug 18, 2020 at 2:11 PM John Graybeal <jgraybeal at<mailto:jgraybeal at>> wrote:
Hi Rakesh,

I talked to Jennifer last week (she is unavailable this week) and our DevOps support person Alex about your issue, and have one further suggestion you can try. If that doesn't work, for the near term we're going to need to let the community try to support this issue, as we have a number of other near-term priorities we have to focus on. It seems likely you (and Jennifer) are experiencing resource limitations, and a number of components of the stack are written in a way that is resource-dependent, so it isn't clear we can solve this any time soon. Apologies for that—we would be happy to have some funding to help us remove some of these resource constraints in our system! We can also offer consultation services for user-specific needs, so that could be another path for you.

Our DevOps expert has had success parsing the file after (a) compressing the file as Jennifer describes, (b) setting his Virtual Machine memory to 24 GB (setting it larger may create other resource issues if the host system doesn't have enough memory overall), and (c) moving the resulting uncompressed file up one level, because the parser could not find it where it ended up. (He used the command line to parse the file; the error message, appended below, showed where the file could not be found.)

I suggest you continue to send your results to the list. Please include in your report the machine you're on, any virtualization software you are using, and the available memory and memory settings of both the machine and its virtualization software.

If anyone has quick further suggestions we will be happy to provide them, but for now we will have to hold off on more detailed troubleshooting for a little bit. We'll consider this an open item and will start a ticket, if we don't hear that you've found a solution on your end.


I, [2020-08-17T16:44:21.224921 #6961]  INFO -- : ["Files extracted from zip [#<Zip::Entry:0x0000000001f18118 @local_header_offset=0, @local_header_size=nil, @internal_file_attributes=1, @external_file_attributes=2175008768, @header_signature=33639248, @version_needed_to_extract=20, @version=30, @ftype=:file, @filepath=nil, @gp_flags=0, @follow_symlinks=false, @restore_times=false, @restore_permissions=false, @restore_ownership=false, @unix_uid=nil, @unix_gid=nil, @unix_perms=420, @dirty=false, @fstype=3, @zipfile=\"/srv/ontoportal/data/repository/SNOMEDCD/4/\", @name=\"SNOMEDCT.ttl\", @comment=\"\", @extra={\"UniversalTime\"=>#<Zip::ExtraField::UniversalTime:0x0000000001f26da8 @ctime=nil, @mtime=2020-08-17 16:23:06 +0000, @atime=nil, @flag=3>, \"Unknown\"=>\"ux\\v\\u0000\\u0001\\u0004x\\u0003\\u0000\\u0000\\u0004x\\u0003\\u0000\\u0000\"}, @compressed_size=81595795, @crc=2114384685, @compression_method=8, @size=1474945916, @time=2020-08-17 16:23:06 +0000, @last_mod_time=33507, @last_mod_date=20753, @name_length=12, @extra_length=24, @comment_length=0>]"]
I, [2020-08-17T16:44:21.227700 #6961]  INFO -- : ["Using UMLS turtle file found, skipping OWLAPI parse"]
E, [2020-08-17T16:44:21.227942 #6961] ERROR -- : ["Errno::ENOENT: No such file or directory @ rb_sysopen - /srv/ontoportal/data/repository/SNOMEDCD/4/SNOMEDCT.ttl\n/srv/ontoportal/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-613a5c836099/lib/ontologies_linked_data/models/ontology_submission.rb:423:in `foreach'\n\t/srv/ontoportal/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-613a5c836099/lib/ontologies_linked_data/models/ontology_submission.rb:423:in `generate_umls_metrics_file'\n\t/srv/ontoportal/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-613a5c836099/lib/ontologies_linked_data/models/ontology_submission.rb:440:in `generate_rdf'\n\t/srv/ontoportal/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-613a5c836099/lib/ontologies_linked_data/models/ontology_submission.rb:973:in `process_submission'\n\t/srv/ontoportal/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:177:in `process_submission'\n\tbin/ncbo_ontology_process:98:in `block in <main>'\n\tbin/ncbo_ontology_process:81:in `each'\n\tbin/ncbo_ontology_process:81:in `<main>'"]
E, [2020-08-17T16:44:21.286307 #6961] ERROR -- : Failed, exception: Errno::ENOENT: No such file or directory @ rb_sysopen - /srv/ontoportal/data/repository/SNOMEDCD/4/SNOMEDCT.ttl
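The stack trace above shows the entry point our DevOps expert used (bin/ncbo_ontology_process). A minimal sketch of invoking it directly on the appliance follows; the working directory is taken from the trace, but the -o flag for selecting the ontology is an assumption, so check the script's --help output first:

```shell
# Sketch: run the submission parser directly on the appliance.
cd /srv/ontoportal/ncbo_cron
# The -o/--ontologies flag is an assumption; consult bin/ncbo_ontology_process --help.
bin/ncbo_ontology_process -o SNOMEDCD
```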

On Aug 13, 2020, at 5:52 PM, Jennifer Leigh Vendetti <vendetti at<mailto:vendetti at>> wrote:

Hi Rakesh,

I’m writing with a status report of sorts. Unfortunately, I haven’t found a resolution for this issue yet.

I spoke with our IT person who mentioned a possible Nginx limit on POST size of 1 GB. I examined the log files for the Rails application (/var/log/rails/appliance.log), and did see an indication of such an issue, e.g.:

F, [2020-08-13T00:08:39.434081 #3975] FATAL -- : MultiJson::ParseError (Problem loading json
<head><title>413 Request Entity Too Large</title></head>
<center><h1>413 Request Entity Too Large</h1></center>
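If that 1 GB limit is indeed the culprit, it can be raised via Nginx's standard client_max_body_size directive. A sketch follows; the config file location on the appliance is an assumption:

```nginx
# Sketch: raise the Nginx request-body limit (file location on the
# appliance is an assumption; the directive itself is standard Nginx).
http {
    # Nginx's default client_max_body_size is 1 MB; a 1 GB setting would
    # produce exactly the 413 above for a ~1.4 GB TTL upload.
    # Size this to your largest expected ontology file.
    client_max_body_size 2g;
}
```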

To prevent this error, I compressed the SNOMED CT TTL file to a ZIP archive, which reduced the size to roughly 81 MB.
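RDF serializations are highly repetitive, which is why the reduction (roughly 1.4 GB down to 81 MB) is so dramatic. A small shell illustration with a placeholder file follows; note the appliance expects a ZIP archive as described above (zip SNOMEDCT.zip SNOMEDCT.ttl), and gzip is used here only because it is universally available:

```shell
# Generate a placeholder TTL file (the real SNOMED CT TTL is ~1.4 GB).
for i in $(seq 1 5000); do
  echo '@prefix skos: <http://www.w3.org/2004/02/skos/core#> .'
done > sample.ttl

# Compress it. For the actual upload, use: zip SNOMEDCT.zip SNOMEDCT.ttl
gzip -c sample.ttl > sample.ttl.gz

# Compare sizes; the repetitive prefixes compress dramatically.
wc -c sample.ttl sample.ttl.gz
```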

I made a second attempt to create a new submission for the SNOMED ontology:

1). Logged into the OntoPortal web application
2). Navigated to the ontology summary page (
3). Clicked the plus button at the top of the Submission table to create a new submission
4). Filled the required fields, choosing OWL as the Format, and specifying the compressed ontology source file as the upload

Clicking the Add submission button resulted in a success page displaying instead of the 500 error seen previously. However, when I later checked the submission processing status, the virtual machine showed an out-of-memory error and the processing failed to complete. I shut down the virtual machine, doubled the memory from 8 to 16 GB, restarted, and attempted the above process a second time. The processing got a bit further the second time, but ran out of memory again. I tried bumping the memory a couple more times, but was still getting failures at 32 GB, and my laptop can’t handle experimenting with anything larger.

I’ll need to have another conversation with IT to see if there are any other possibilities.

Kind regards,

On Aug 12, 2020, at 7:05 PM, Rakesh Nagarajan <rakesh at<mailto:rakesh at>> wrote:

     We'd love to upgrade to v3.0 of the appliance as soon as our DevOps has the bandwidth to do so. Thanks for checking on this and I am glad my issue is at least reproducible. I'll await further feedback from you after you've had a chance to connect with your colleagues tomorrow.

Thanks a ton,

On Wed, Aug 12, 2020 at 7:49 PM Jennifer Leigh Vendetti <vendetti at<mailto:vendetti at>> wrote:
Hi Rakesh,

OK - I’m sorry to say that I don’t have a satisfactory answer for you at the moment. I installed a virtual appliance instance in my local development environment and attempted to upload the SNOMED CT TTL file. I reproduced the 500 error that you reported. Unfortunately, the log files where I would usually look for errors don’t contain anything enlightening (appliance.log, unicorn.stderr.log, unicorn.stdout.log). I’ve pinged our IT person in charge of packaging the appliance to see what other log files might contain error output, and will get back to you with what I find out.

A few things to note:

1). There's a new version of the appliance out (3.0.2). Not sure if this is of interest to you in terms of possibly upgrading. I was testing with 3.0.2, so it won’t help with your immediate issue.

2). The REST API logging is turned off by default in version 2.5 of the appliance. If you want to turn it on in order to examine log output, you’ll need to uncomment stderr_path and stdout_path in /srv/ncbo/ontologies_api/current/config/unicorn.rb. Again, this may not help with your immediate issue since I have REST API logging turned on in my local instance and the log files didn’t reveal anything obvious. If you turn on logging, the files should be located in /srv/ncbo/ontologies_api/current/log.
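For reference, the two lines to uncomment look like this (file and log paths as given above):

```ruby
# /srv/ncbo/ontologies_api/current/config/unicorn.rb
# Uncomment to enable REST API logging; output lands in the log/ directory.
stderr_path "/srv/ncbo/ontologies_api/current/log/unicorn.stderr.log"
stdout_path "/srv/ncbo/ontologies_api/current/log/unicorn.stdout.log"
```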

I should be able to speak with a couple of colleagues tomorrow and hope to have other tips for you.

Kind regards,

On Aug 12, 2020, at 2:32 PM, Rakesh Nagarajan <rakesh at<mailto:rakesh at>> wrote:

     No worries at all. I did track this down last night, tried to use the API, and again got an Internal Server Error. The submission itself isn't succeeding, so no submission ID is created on the server, and I can't check the logs there. Is there a way to check any other internal application server logs (nginx, tomcat?) to see why the submission is failing? I have restarted the VM and flushed the goo cache per the instructions as well. I also validated the TTL file in Protégé, which successfully loaded the complete SNOMEDCT file without errors. Any other ideas?

Thanks so much for your help,

On Wed, Aug 12, 2020 at 2:48 PM Jennifer Leigh Vendetti <vendetti at<mailto:vendetti at>> wrote:
Sorry about that Rakesh - I meant to send a link to this script in our sample code repository:

On Aug 11, 2020, at 6:31 PM, Rakesh Nagarajan <rakesh at<mailto:rakesh at>> wrote:

     Thanks! Can you confirm the last link? You refer to a script there, but the link is the same as the previous one.


On Tue, Aug 11, 2020 at 6:18 PM Jennifer Leigh Vendetti <vendetti at<mailto:vendetti at>> wrote:
Hi Rakesh,

On Aug 10, 2020, at 5:18 PM, support at<mailto:support at> wrote:

Name: rakesh

Email: rakesh at<mailto:rakesh at>



I am using v2.5 of the OntoPortal appliance and have successfully loaded NCIT a few times now. I am now attempting to load SNOMED CT as a .ttl file after conversion from the UMLS using umls2rdf. I selected UMLS as the format, but I have now tried twice to upload via the UI, and the server threw an error saying 'something went wrong and administrators have been notified'. However, I don't know what happened. Are there logs on the server to check?

There are some instructions for locating log files here:

Any other ideas to help troubleshoot? Finally, are there commands I can use to load directly on the server?

I’m not certain this is what you’re asking for, but you can manually re-parse an ontology submission using this methodology:

It’s also possible to upload ontologies programmatically. An appliance user submitted the following script to our code sample repository:
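Programmatic submission generally goes through the REST API. A hedged curl sketch follows; the endpoint shape and parameter names follow BioPortal REST conventions but are assumptions here (and additional required metadata, such as contact details, may be needed), so treat the script linked above as the authoritative example:

```shell
# Sketch: create a new ontology submission via the REST API.
# Host, acronym, parameter names, and API key are placeholders/assumptions.
curl -X POST "http://localhost:8080/ontologies/SNOMEDCD/submissions" \
  -H "Authorization: apikey token=YOUR_API_KEY" \
  -F "hasOntologyLanguage=UMLS" \
  -F "released=2020-08-01" \
  -F "filePath=@SNOMEDCT.zip"
```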

It’s a script that imports ontologies from one appliance instance to another, which is not quite what you’re doing. However, I thought that some portions of the code might be of interest to you.

Kind regards,


bioontology-support mailing list
bioontology-support at<mailto:bioontology-support at>

John Graybeal
Technical Program Manager
Center for Expanded Data Annotation and Retrieval /+/ NCBO BioPortal
Stanford Center for Biomedical Informatics Research
650-736-1632  | ORCID  0000-0001-6875-5360


Rakesh Nagarajan

Founder, President, and Chief Technology & Visionary Officer

m: 314-504-5620
e: rakesh at<mailto:rakesh at>


Wisdom in Every Report™

CONFIDENTIALITY NOTICE: This message and any attachments are solely for the use of the intended recipient and may contain privileged, confidential or other legally protected information. If you are not the intended recipient, please destroy all copies without reading or disclosing their contents and notify the sender of the error by reply email.


