
[bioontology-support] [BioPortal] Feedback from Solene Grosdidier

Michael Dorf mdorf at
Fri Jun 14 15:15:38 PDT 2019

Hi Solene,

What you’re describing is a known issue:

Background: Some time ago, we implemented a system that prevents expensive COUNT queries from running live against our 4store backend. These queries used to bog down our servers badly, often causing downtime. They were issued by paged REST services, such as the one that retrieves all mappings for a given ontology: to determine the correct number of pages for a call, the system would first execute a COUNT query and store the result in the output. The new system pre-caches these counts, so when a paged service call is made, the count is retrieved from a static repository. Unfortunately, there appears to be a bug in this process that triggers the behavior you are seeing.
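As a rough illustration of the pre-caching idea described above (the class and method names here are hypothetical, not our actual implementation):

```python
class CountCache:
    """Toy sketch: run an expensive COUNT once, offline, and serve the
    stored value to paged REST calls instead of hitting the triplestore."""

    def __init__(self):
        self._counts = {}  # e.g. ("mappings", "NCIT,SNOMEDCT") -> int

    def precompute(self, key, count_query):
        # Executed in a batch job, not on the live request path.
        self._counts[key] = count_query()

    def total_count(self, key):
        # Paged services read this cached value; if an entry goes stale,
        # the reported totalCount diverges from the real data.
        return self._counts.get(key, 0)
```

If a cached entry is stale or wrong, every page calculation derived from it is wrong too, which matches the symptom you observed.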

For your specific example, it's best to iterate through ALL pages of available mappings until you hit an empty collection, instead of relying on the reported totalCount.
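A minimal sketch of that workaround (the helper name and the fetch_page callback are hypothetical; in your script, fetch_page would wrap get_json(REST_URL + "/mappings?ontologies=NCIT%2CSNOMEDCT&page=" + str(page))):

```python
from typing import Callable, Iterator

def iter_all_mappings(fetch_page: Callable[[int], dict]) -> Iterator[dict]:
    """Yield every mapping record, paging until the first empty
    collection, instead of trusting the (possibly stale) pageCount."""
    page = 1
    while True:
        data = fetch_page(page)
        collection = data.get("collection") or []
        if not collection:
            break  # first empty page marks the real end of the data
        yield from collection
        page += 1
```

Because the generator stops on the first empty collection, it is insensitive to whatever pageCount/totalCount the service reports.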

Thanks again for your report. Hope this works as a workaround for what you are trying to accomplish.


On Jun 14, 2019, at 4:08 AM, support at wrote:

Name: Solene Grosdidier

Email: s.grosdidier at



Dear Bioportal team,

I am trying to retrieve all the mappings available in BioPortal between SNOMEDCT and NCIT through the API. Unfortunately, after the 321st page, I get empty JSON for the following pages (reproducible on 2 different computers). Below is a copy-paste of my script.



import urllib.request, urllib.error, urllib.parse
import simplejson as json

API_KEY = "2c84c2c2-3510-46fa-b7af-732659784401"
REST_URL = "http://data.bioontology.org"  # BioPortal REST API base URL

def get_json(url):
    opener = urllib.request.build_opener()
    opener.addheaders = [('Authorization', 'apikey token=' + API_KEY)]
    return json.loads(opener.open(url).read())

# print(REST_URL + "/mappings?ontologies=MEDDRA,SNOMEDCT")
mapping = get_json(REST_URL + "/mappings?ontologies=NCIT,SNOMEDCT")

pages = mapping["pageCount"]

for i in range(1, pages + 1):
    print("page: " + str(i))
    mapping2 = get_json(REST_URL + "/mappings?ontologies=NCIT%2CSNOMEDCT&page=" + str(i))
    for element in mapping2["collection"]:
        print(element["source"] + "\t" + element["classes"][0]["@id"] + "\t" + element["classes"][1]["@id"])

Can you help me understand what is happening?
Thank you very much for your help. I am looking forward to hearing from you.

Solene Grosdidier

bioontology-support mailing list
bioontology-support at

