[bioontology-support] search by id in a subtree -> sparql service in VA similar to BioPortal

Madani, Sina Sina.Madani at
Fri Feb 2 05:25:23 PST 2018

Thank you, John, for the response.
I will contact the AgroPortal team about the SPARQL endpoint. As for my problem loading SNOMED: I am using the same process (umls2rdf) that I use to generate the other UMLS terminologies (RxNorm, LOINC, ICDx, etc.), per the BioPortal instructions. The default output of the umls2rdf script is TTL (load on codes). The ICD9/10, RxNorm, and LOINC TTL files were successfully loaded and parsed in BioPortal, though.

Thanks again for looking into this


From: John Graybeal <jgraybeal at>
Date: Friday, February 2, 2018 at 2:28 AM
To: "Madani, Sina" <Sina.Madani at>
Cc: "support at" <support at>
Subject: Re: [bioontology-support] search by id in a subtree -> sparql service in VA similar to BioPortal

Hi Sina,

Yes, the 4store backend essentially is a SPARQL endpoint, used by the rest of the Virtual Appliance. But it could be configured so that other query originators can access it as well. I suggest you contact the AgroPortal folks to learn about their process and experience.

I think we haven't managed to answer your question, though I may have missed it. (We are a little time-constrained this week, sorry!)

But at a quick guess (only partially informed, so apologies in advance if I get something wrong here), I would not expect loading the TTL file to work that way, and I would not expect the size to be the issue: we routinely load large ontologies without problems related to their size. Naively perhaps, I would consider converting your ontologies of interest to OWL and loading them through the normal process, unless you want to run the whole UMLS load software (not recommended; it is very complex).

I think we'll be able to come back to you with more thoughts soon, hopefully by the weekend.


On Feb 1, 2018, at 11:51 AM, Madani, Sina <Sina.Madani at> wrote:

Hi John,

Thank you for getting back to me. Yes, I meant functionality similar to the SPARQL endpoint at the Stanford instance.
I understand that many (if not all) of the queries can probably be done via the APIs, but I was curious whether the appliance can be configured as a SPARQL endpoint for incoming SPARQL queries, e.g. for extracting and validating the mappings that the appliance generates automatically, or for regular queries.
At this point we are evaluating your appliance for search, browse, and visualization of our order-set catalogue, with mappings to standard terminologies.


From: John Graybeal [mailto:jgraybeal at]
Sent: Wednesday, January 31, 2018 6:50 PM
To: Michael Dorf
Cc: support at; Madani, Sina
Subject: Re: [bioontology-support] search by id in a subtree -> sparql service in VA similar to BioPortal


(Changing the topic line to focus on the SPARQL question that I'm addressing here.)

The exact answer to your last question may depend on what you mean by "similar to the BioPortal web site".

On BioPortal we have the 'front-end beta SPARQL query UI' set up at <>. There are a few complexities before queries reach the backend service, which is a separate 4store service from the one that serves BioPortal itself (which is why the backend data is not the same). We could tell you about all those details and provide code to make it work, but you may not need things configured that way on your system.

It is certainly possible to configure the primary backend store (4store) that your Virtual Appliance uses so that it accepts queries from anywhere. If you are running a public service, you may not want to do that, because it is difficult to protect your back end from queries that can take the system down. (That's why we don't do it that way on BioPortal.)
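Once the 4store HTTP endpoint is opened up, a client can send SPARQL over plain HTTP. A minimal sketch, assuming the appliance's 4store httpd is reachable at http://localhost:8000/sparql/ (the host, port, and path are assumptions here and vary by installation):

```python
# Sketch: posting a SPARQL query to a 4store HTTP endpoint.
# The endpoint URL below is a placeholder, not a verified appliance default.
import urllib.parse
import urllib.request

def build_sparql_request(endpoint, query):
    """Build a POST request carrying a SPARQL query as form data."""
    data = urllib.parse.urlencode({"query": query}).encode("utf-8")
    return urllib.request.Request(endpoint, data=data, method="POST")

# Example: count the triples in the store (request built but not sent here).
req = build_sparql_request(
    "http://localhost:8000/sparql/",  # hypothetical local endpoint
    "SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }",
)
# To actually run it against a live endpoint:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
```

The same request shape works for the mapping-extraction and validation queries mentioned above; only the query text changes.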

I think Clement Jonquet may have found a way to do this for his AgroPortal installation that avoids many of these problems. He reads this list and will likely weigh in; if not, we can make sure he gets this question too.

I know the technical folks are thinking about your other questions; I'm not going to try to guess at those answers!


On Jan 31, 2018, at 3:05 PM, Michael Dorf <mdorf at> wrote:


I am using a local version of Appliance 2.5 RC3, together with the UMLS2RDF script run against the UMLS2017AB database, to generate and load a SNOMED.ttl file (1.29 GB).
However, upon submitting a new ontology, after a few seconds I get the error message "something went wrong" in the web UI. Also, the http://ontotoportal.admin report shows "ontology has no submission" under the Issues section. No directory was created for SNOMED under the /srv/ncbo/repository path. Manually creating a "1" submission directory and copying the SNOMED TTL file into it seems to have no effect, even with manual parsing per the instructions. Is it possible to manually load large TTL files and create submissions?
Neither scheduler.log nor appliance.log shows any error.
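For anyone debugging the same symptom, a quick way to confirm whether the appliance created a submission directory at all. This is a minimal sketch; the /srv/ncbo/repository/&lt;ACRONYM&gt;/&lt;submission_id&gt;/ layout is inferred from the description above, not from verified documentation, so adjust paths to your install:

```python
# Sketch: find the latest submission directory for an ontology acronym.
# The repository layout assumed here (<repo_root>/<ACRONYM>/<numeric id>/)
# is taken from the message above, not from official docs.
import tempfile
from pathlib import Path

def latest_submission(repo_root, acronym):
    """Return the highest-numbered submission directory, or None if absent."""
    onto_dir = Path(repo_root) / acronym
    if not onto_dir.is_dir():
        return None
    subs = [p for p in onto_dir.iterdir() if p.is_dir() and p.name.isdigit()]
    return max(subs, key=lambda p: int(p.name), default=None)

# Demo against a throwaway layout; on the appliance you would point
# repo_root at /srv/ncbo/repository and ask for "SNOMED".
repo_root = tempfile.mkdtemp()
for sub_id in ("1", "2"):
    (Path(repo_root) / "SNOMED" / sub_id).mkdir(parents=True)

print(latest_submission(repo_root, "SNOMED"))   # highest submission dir
print(latest_submission(repo_root, "MISSING"))  # -> None
```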

I also re-tried the submission with a compressed tar file (72 MB). This time, http://ontotoportal.admin showed ERROR_RDF under the Error Status field, and on the admin page the log link under the URL field shows the messages below. Manually unzipping the file and/or reprocessing it doesn't seem to have any effect either.
Is there a workaround for loading/parsing SNOMED (or similarly large ontologies) into OntoPortal? Also, is it possible to access 4store directly in our local instance via a SPARQL endpoint, similar to the BioPortal website?



I, [2018-01-28T11:19:17.181176 #3910]  INFO -- : ["Starting to process"]
I, [2018-01-28T11:19:17.219473 #3910]  INFO -- : ["Starting to process SNOMED/submissions/1"]
I, [2018-01-28T11:19:17.338663 #3910]  INFO -- : ["Using UMLS turtle file found, skipping OWLAPI parse"]
E, [2018-01-28T11:19:17.338923 #3910] ERROR -- : ArgumentError: invalid byte sequence in UTF-8
    /srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-ff5920539091/lib/ontologies_linked_data/models/ontology_submission.rb:398:in `block in generate_umls_metrics_file'
    /srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-ff5920539091/lib/ontologies_linked_data/models/ontology_submission.rb:397:in `foreach'
    /srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-ff5920539091/lib/ontologies_linked_data/models/ontology_submission.rb:397:in `generate_umls_metrics_file'
    /srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-ff5920539091/lib/ontologies_linked_data/models/ontology_submission.rb:414:in `generate_rdf'
    /srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-ff5920539091/lib/ontologies_linked_data/models/ontology_submission.rb:903:in `process_submission'
    /srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:177:in `process_submission'
    /srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:47:in `block in process_queue_submissions'
    /srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:41:in `each'
    /srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:41:in `process_queue_submissions'
    /srv/ncbo/ncbo_cron/bin/ncbo_cron:228:in `block (3 levels) in <main>'
    /srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:65:in `block (3 levels) in scheduled_locking_job'
    /srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:51:in `fork'
    /srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:51:in `block (2 levels) in scheduled_locking_job'
    /srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/mlanett-redis-lock-0.2.7/lib/redis-lock.rb:43:in `lock'
    /srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/mlanett-redis-lock-0.2.7/lib/redis-lock.rb:234:in `lock'
    /srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:50:in `block in scheduled_locking_job'
    /srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rufus-scheduler-2.0.24/lib/rufus/sc/jobs.rb:230:in `trigger_block'
    /srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rufus-scheduler-2.0.24/lib/rufus/sc/jobs.rb:204:in `block in trigger'
    /srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rufus-scheduler-2.0.24/lib/rufus/sc/scheduler.rb:430:in `block in trigger_job'
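The key line in that trace is "ArgumentError: invalid byte sequence in UTF-8" raised while reading the TTL file line by line. One way to locate the offending lines before resubmitting; a minimal sketch (the sample file name and contents here are fabricated for the demo, so point it at the real SNOMED.ttl instead):

```python
# Sketch: report which lines of a file are not valid UTF-8, which is the
# condition that makes Ruby's line-by-line read raise ArgumentError above.
def invalid_utf8_lines(path):
    """Return 1-based numbers of lines that fail UTF-8 decoding."""
    bad = []
    with open(path, "rb") as f:
        for lineno, raw in enumerate(f, start=1):
            try:
                raw.decode("utf-8")
            except UnicodeDecodeError:
                bad.append(lineno)
    return bad

# Demo with a deliberately corrupted file (0xFF is never valid UTF-8);
# replace "sample.ttl" with the real SNOMED.ttl on the appliance.
with open("sample.ttl", "wb") as f:
    f.write(b'ex:a ex:label "ok" .\n')
    f.write(b'ex:b ex:label "bad \xff byte" .\n')

print(invalid_utf8_lines("sample.ttl"))  # -> [2]
```

Cleaning or re-exporting those lines from the umls2rdf output may let the submission parse past this error.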

bioontology-support mailing list
bioontology-support at

John Graybeal
Technical Program Manager
Center for Expanded Data Annotation and Retrieval /+/ NCBO BioPortal
Stanford Center for Biomedical Informatics Research


