Below is part one of a small series on ctj rdf, a new program I made to transform ContentMine CProjects into SPARQL-queryable, Wikidata-linked RDF.
ctj has been around for longer; it started as a way to learn my way around the ContentMine pipeline of tools, but it turned out to uncover a lot of possibilities for further processing the output of that pipeline (1, 2).
The recent addition of ctj rdf expands on this. While a lot of data is lost between the ContentMine output and the resulting RDF, the possibilities are certainly no fewer. This is mainly because of SPARQL, which makes it possible to integrate with other databases, such as Wikidata, without many changes to ctj rdf itself.
Here’s a simple demonstration of how this works:
- We download 100 articles about aardvark (the classic ContentMine example)
- We run the ContentMine pipeline (norma, ami2-species, ami2-sequence)
- We run ctj rdf
This generates data.ttl, which holds the following information:
- Common identifier for each article (currently PMC)
- Matched terms from each article (which terms/names are found in which article)
- Type of each term (genus, binomial, etc.)
- Label of each term (matched text)
[Figure: example data.ttl contents]
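To give a rough idea of the shape of this output, here is a hypothetical Turtle sketch. The `ctj:` namespace, the predicate and class names, and the PMC identifier are all made up for illustration; the real data.ttl uses its own vocabulary:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ctj:  <http://example.org/ctj#> .

# Article, identified by its (hypothetical) PMC id via identifiers.org,
# linked to a term that was matched in its text
<http://identifiers.org/pmc/PMC1234567>
    ctj:mentions ctj:term_Orycteropus_afer .

# The term itself: its type (binomial) and its label (the matched text)
ctj:term_Orycteropus_afer
    a ctj:Binomial ;
    rdfs:label "Orycteropus afer" .
```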
Note that there are no links to Wikidata whatsoever. When we list, for instance, how often each term is mentioned in an article (in the dataset), we only have string values, an identifiers.org URI, and some custom namespace URIs.
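Such a listing could be produced with a plain SPARQL query over the local dataset alone. The `ctj:` predicate names below are stand-ins for the real vocabulary:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ctj:  <http://example.org/ctj#>

# Count in how many articles each matched term appears
SELECT ?label (COUNT(?article) AS ?mentions) WHERE {
  ?article ctj:mentions ?term .
  ?term rdfs:label ?label .
}
GROUP BY ?label
ORDER BY DESC(?mentions)
```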
However, in this format we can easily use the information in these papers in conjunction with the enormous amount of data in Wikidata, by means of federated queries.
To accomplish this, we first link the identifier in our dataset to the corresponding one in Wikidata; then we link the matched text of the term to the taxon name of species in Wikidata. This alone already gives us a set of semantic triples where both values in every triple are linked to the extensive, community-driven database that is Wikidata.
[Figure: example query, counting how often each species is mentioned, and mapping them to Wikidata]
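A federated version of such a query might look roughly like the sketch below. The `ctj:` names are again hypothetical; the `SERVICE` clause and the Wikidata endpoint and property (P225, taxon name) are real, and this joins the matched text in the local data to taxon names in Wikidata:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ctj:  <http://example.org/ctj#>
PREFIX wdt:  <http://www.wikidata.org/prop/direct/>

SELECT ?taxon ?name (COUNT(?article) AS ?mentions) WHERE {
  ?article ctj:mentions ?term .
  ?term rdfs:label ?name .
  # Federated part: resolve the matched text to a Wikidata taxon
  SERVICE <https://query.wikidata.org/sparql> {
    ?taxon wdt:P225 ?name .   # P225 = taxon name
  }
}
GROUP BY ?taxon ?name
ORDER BY DESC(?mentions)
```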
[Figure: results of the above query]
Now, say we want to list the Swedish name of each of those species. We can, because that information probably exists on Wikidata (see the stats below). And if we can't find something, remember that each of those Wikidata items is also linked to numerous other databases.
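Fetching those Swedish names is a small extension of the same federated pattern: once the matched text is joined to a Wikidata taxon, we can ask for that item's `rdfs:label` in Swedish. As before, the `ctj:` names are made up for the sketch:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ctj:  <http://example.org/ctj#>
PREFIX wdt:  <http://www.wikidata.org/prop/direct/>

SELECT DISTINCT ?name ?svName WHERE {
  ?article ctj:mentions ?term .
  ?term rdfs:label ?name .
  SERVICE <https://query.wikidata.org/sparql> {
    ?taxon wdt:P225 ?name ;          # match on the taxon name
           rdfs:label ?svName .      # then take the Swedish label
    FILTER(LANG(?svName) = "sv")
  }
}
```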
Again, this works without having to change anything in the RDF output (to be fair, I forgot to include an article identifier in the first version of the program, but that could/should have been anticipated). Not having to add this data to the output has the added benefit of not having to create and maintain local dictionaries and lists of it.
Some stats:
- Number of articles: 100 (for reference)
- Number of ‘term found in article’ statements: 1964
- Number of those statements that map to Wikidata: 1293 (65.8% of total)
- Number of mapped statements with Swedish labels: 1056 (81.7% of mapped statements, 53.8% of total)
- Average number of statements per article: 19.64, 12.93 mapped
Note that not all terms are actually valid. A lot of genus matches are actually just capitalised words, and a lot of common species names are abbreviated, e.g. to E. coli, making it impossible to unambiguously map them to Wikidata or any other database. This could explain the difference between found 'terms' and mapped terms.