Data Collection Overview
The Overall Strategy
Data is collected from a growing number of clinical trial registries and data object repositories (collectively known as 'data sources'), transformed into ECRIN schema metadata, and stored in a central database so that it can be accessed by the web portal or APIs. There are four distinct processes involved in data collection and extraction; they apply to all data sources and are shown in Figure 1.
Data Download
All data used by the MDR is first downloaded onto an ECRIN managed server and stored as an XML file. The data may start as an XML file at the source (as for ClinicalTrials.gov or PubMed), in which case downloading the file is relatively straightforward, using API calls to identify the files required. The data may be in a downloadable CSV file (e.g. the WHO ICTRP data), which is then processed to generate an XML file per record (row) in the file. The data may need to be scraped from one or more web pages, in which case an XML file is again constructed for each record.
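As a rough illustration of the CSV case, the sketch below (Python, with invented column names and file paths rather than the actual WHO ICTRP fields) writes one XML file per row of a downloaded CSV file:
<pre>
# A minimal sketch, not the MDR's actual download code: turn each row of a
# downloaded CSV export into its own XML file. 'TrialID', the input file name
# and the output folder are all invented for this example.
import csv
import xml.etree.ElementTree as ET
from pathlib import Path

def csv_to_xml_files(csv_path, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            root = ET.Element("study")
            for field, value in row.items():
                # one child element per CSV column; cleaning and mapping
                # towards the ECRIN schema could begin here
                ET.SubElement(root, field.strip().replace(" ", "_")).text = value
            file_name = f"{row.get('TrialID', 'unknown')}.xml"
            ET.ElementTree(root).write(str(out / file_name),
                                       encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    csv_to_xml_files("ictrp_export.csv", "who_files")
</pre>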
Data files that are created rather than simply downloaded demand more processing, but that processing can also begin the work of cleaning and transforming the data from its 'raw' state into one that matches the ECRIN schema. The XML files generated in this way are therefore relatively easy to harvest in the next stage of the process, compared with the 'native' XML files from, for example, PubMed.
Successive download operations result in steadily growing collections of source data, stored locally on the MDR database server. Each source has its own folder, or set of folders. For the WHO data the download process splits the records up and distributes the resulting files to different folders according to the source registry.
For small sources, it is often simpler to re-download the whole set of source data and replace the existing files. For large sources this takes too much time, so only a subset of records is downloaded each time - usually those revised or added since the last download. This does not prevent even large datasets being completely replaced at intervals - perhaps annually - to ensure synchronisation between the source and the MDR's version of it. Either way, at any one time, the MDR has the totality of the relevant material from each source available locally.
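The full versus incremental choice can be sketched as follows; the 'updated_since' query parameter and the cut-off file are assumptions made for the example, since each registry exposes its own interface:
<pre>
# A sketch of the full vs incremental download decision. The query parameter
# name and the cut-off file are invented; real sources expose different APIs.
import json
from datetime import date
from pathlib import Path
from urllib.parse import urlencode

CUTOFF_FILE = Path("last_download.json")

def build_download_url(api_base, full_download=False):
    """Return a query URL fetching either everything or only recent changes."""
    if not full_download and CUTOFF_FILE.exists():
        last_run = json.loads(CUTOFF_FILE.read_text())["date"]
        # only records revised or added since the previous download
        return f"{api_base}?{urlencode({'updated_since': last_run})}"
    return api_base                      # small source, or periodic full refresh

def record_cutoff():
    # remember today's date so the next run can ask only for newer records
    CUTOFF_FILE.write_text(json.dumps({"date": date.today().isoformat()}))
</pre>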
Data Harvesting
At intervals the local data can be processed and inserted into a database, a process that in this system is known as 'harvesting'. The harvested data is inserted into - effectively - a temporary holding database, so that it can be examined and additional processing carried out as required.
In the mdr system each source has a single database, but that database uses at least two schemas, or distinct sets of tables. The schema that the harvested data is placed into is known as the session data schema (the name of each table in it is prefixed with 'sd.'). This differentiates it from the accumulated data schema tables (all prefixed with 'ad.'). As the name suggests, the accumulated data tables hold the totality of the data obtained from the source. They are usually created when the source is first accessed, and then gradually grow and are revised over time. The session data tables, on the other hand, hold only the data from the last harvest session. These tables are dropped and recreated each time a harvest session takes place.
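A much simplified sketch of that sd / ad split is shown below. It uses Python and psycopg2 purely for illustration (the MDR itself is not written in Python), and the table columns are a cut-down stand-in for the real ECRIN schema tables:
<pre>
# Illustrative only: session data (sd) tables are dropped and rebuilt at each
# harvest, while accumulated data (ad) tables persist and grow over time.
import psycopg2

def prepare_harvest_tables(conn_str):
    with psycopg2.connect(conn_str) as conn, conn.cursor() as cur:
        # sd tables hold only the data from the current harvest session
        cur.execute("DROP TABLE IF EXISTS sd.studies")
        cur.execute("""
            CREATE TABLE sd.studies (
                sd_sid        varchar PRIMARY KEY,   -- study id in the source
                display_title varchar,
                study_type_id int
            )""")
        # ad tables hold the totality of data from the source, so they are
        # created once and then left in place
        cur.execute("""
            CREATE TABLE IF NOT EXISTS ad.studies (
                sd_sid        varchar PRIMARY KEY,
                display_title varchar,
                study_type_id int,
                record_hash   char(32)               -- used during import
            )""")
</pre>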
In most circumstances, harvests are set up to process only files that have been added or revised since the most recent import operation (described below). This means that only data that is potentially new to the system is harvested and placed in the sd tables. In most cases therefore, the sd tables will hold a small fraction of the volume in the ad tables, but it will be the data of current interest, because it is data recently changed in or added to the source. (It is possible to do a '100% harvest', but this would be relatively rare in normal operations).
The other important aspect of harvesting is that it completes the transformation of the data into the structure of the ECRIN metadata schema. The different databases will have different numbers of tables in their sd and ad schemas (some sources are more complex than others), but a table of a particular type will be the same in all the databases, i.e. it will contain the same fields, and those fields will conform to the ECRIN schema. For the XML files generated by the download process this second transformation stage is usually straightforward. For ClinicalTrials.gov and PubMed files, all the transformation has to be done during harvesting, which can therefore be relatively complex.
Data Import
The data import process brings the data into the accumulated data 'ad' tables. The ad and sd tables are broadly the same in structure (though the ad tables have more audit fields), so the transfer is relatively straightforward. The initial step is to identify what data in the sd tables is new, and what has been changed, compared with the ad tables. While new data is easy to spot, edited data is not so simple to identify, because most data objects do not have intrinsic identifiers and data is not necessarily presented in the same order between harvest sessions. The system therefore uses a series of hashing techniques to summarise record content; a changed hash allows edits to records, or to closely related groups of records, to be picked up, so that the relevant portion of the ad data can be replaced by the new sd data.
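A toy version of the hashing idea is shown below; the field list and records are invented, and the real system computes several such composite hashes per study and per data object:
<pre>
# A minimal sketch of content hashing for change detection, not the MDR's
# actual implementation. Field names and example records are invented.
import hashlib
import json

def record_hash(record, fields):
    """Summarise the listed fields of a record, in a fixed order, as a hash."""
    payload = json.dumps([record.get(f) for f in fields], default=str)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

fields = ["display_title", "study_type_id"]
sd_rec = {"sd_sid": "NCT01234567", "display_title": "A study of X (phase 2)", "study_type_id": 11}
ad_rec = {"sd_sid": "NCT01234567", "display_title": "A study of X", "study_type_id": 11}

if record_hash(sd_rec, fields) != record_hash(ad_rec, fields):
    # the harvested (sd) version differs - replace the stored (ad) version
    print("record edited - update ad data for", sd_rec["sd_sid"])
</pre>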
If, but only if, the harvest has been a full 100% harvest, so that the sd tables represent all the data available from the source, it is also possible to see whether any study or object data has been deleted from the source, and should therefore be deleted from the ad tables as well. This is relatively rare, but can occur with a few (non-trial-registry) sources.
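That deletion check reduces to a simple set difference, as in the sketch below (the id sets are invented examples):
<pre>
# Only valid after a 100% harvest, when the sd data is a complete snapshot
# of the source. Any id present in ad but absent from sd was deleted at source.
sd_study_ids = {"S001", "S002", "S004"}            # everything now at the source
ad_study_ids = {"S001", "S002", "S003", "S004"}    # everything previously imported

deleted_at_source = ad_study_ids - sd_study_ids
print(deleted_at_source)                           # {'S003'} - remove from the ad tables
</pre>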
Once import has taken place the ad tables should once again be synchronised with the source material.
Data Aggregation
The accumulated data from the different sources now needs to be brought together to form the central, aggregated database. The problem is that both studies and (some types of) data objects can be referenced in more than one source. About 27,000 studies, for example, are registered in more than one trial registry. Simply aggregating records would cause these studies to have duplicated (sometimes three or more) records in the system. Similarly, some journal articles are referenced by multiple studies in different sources, and simple aggregation would cause confusing duplication of records. The aggregation process must therefore guard against duplication. For studies this is done by identifying the links between study registrations and ensuring that all studies registered in more than one registry share the same id. When studies are added they are checked to see if they are one of the linked studies, and if they already exist in the system; if they do, their id is changed to match the study already present. For published journal articles (sourced from PubMed), all the possible study-article links are first collected together and then de-duplicated using the articles' PubMed identifiers, to produce a distinct set. Once that has been done the articles can be added to the system safely.
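The two de-duplication steps can be caricatured as below; the link structures and ids are invented for the example, whereas in the MDR they are held in the link ('nk') schema tables:
<pre>
# A simplified sketch of de-duplication during aggregation; ids are invented.

# 1) Studies registered in more than one registry are given one shared id.
study_links = {"ISRCTN00000001": "NCT01234567"}    # secondary id -> preferred id

def resolve_study_id(source_study_id):
    return study_links.get(source_study_id, source_study_id)

# 2) Study-article links are collected from all sources, then de-duplicated
#    on the article's PubMed identifier (and the resolved study id).
raw_links = [
    ("NCT01234567", "pmid:30000001"),
    ("ISRCTN00000001", "pmid:30000001"),           # same study and article, other registry
]
distinct_links = {(resolve_study_id(sid), pmid) for sid, pmid in raw_links}
print(distinct_links)                              # a single study-article link remains
</pre>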
The aggregation process always starts from scratch - there is no editing of existing data involved. The aggregate tables are dropped and recreated and all data is added to them. This is for simplicity and ease of maintenance - dealing with de-duplication is complex enough without having to deal with possible edits and deletes!
After initial aggregation the data exists within three different schemas in the main mdr database - one for study data, one for data object data, and one for link-related data (between studies and data objects, and between studies and studies). In general, links between data objects (i.e. objects existing in different sources but under different names) do not yet exist in the source data. One final step is to import slightly simplified versions of this data into a single 'core' schema, which can be used as the data source for the web site. The generation of the core schema also sees the production of the 'provenance strings' for each study and object. For studies these may be composite, because the data has been collected from more than one source. Also at the end of the aggregation process, after the core tables have been created, the system can generate the JSON versions of the study and data object data.
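For example, a composite provenance string for a study found in two registries might be assembled along the following lines; the wording and date format are assumptions, not the MDR's actual output:
<pre>
# Illustrative only: compose a provenance string from the sources a study's
# data was collected from, with the date each source was last accessed.
from datetime import date

def provenance(sources):
    parts = [f"{name}, as of {accessed.isoformat()}" for name, accessed in sources]
    return "Data retrieved from: " + "; ".join(parts)

print(provenance([("ClinicalTrials.gov", date(2020, 10, 1)),
                  ("EU CTR", date(2020, 9, 15))]))
</pre>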
Workflow Principles
There are a few principles underlying the process outlined above. These include
Databases and Data Schemas Required
In Postgres, data can be held within multiple databases, each of which can be subdivided into different data schemas, or groups of related tables.
The databases required will be:
- A separate database for each data source, named after that source.
- A 'context' database, that acts as a central store of contextual data (e.g. organisations, countries) and the look up tables used within the system.
- The mdr database itself, which aggregates all the data into a single set of tables.
In more detail, the schemas required are:
- In each data source database:
  - A schema (sd) that holds the data as imported each time.
  - A schema (ad) for the accumulated data from that source.
  - In some cases it may be useful to have a third, supplementary schema (pp), to hold source specific support tables and procedures.
- In the context database:
  - A schema (lup) that holds a central store of lookup tables, i.e. controlled terminology.
  - A schema (ctx) that holds contextual data relating to organisations and people, countries and regions, and languages.
- In the mdr database:
  - A schema (st) that holds aggregate study data points, i.e. from all data sources (mostly trial registries), and links them together so that study duplications are resolved.
  - A schema (ob) that holds aggregate data object data points, i.e. from all sources, and links them together so that data object duplications are resolved.
  - A schema (nk) that holds the linkage data between studies and data objects, i.e. from all sources, and links them together so that duplications are resolved.
  - The metadata repository itself (schema = core), the final end point of the import processes and the data flowing through them. It brings together the data from the three 'aggregating' schemas. Data processing within this schema, beyond simple import, should be minimal.
  - A monitor / logging schema (mn), to hold a list of data sources and a log record of the imports carried out. This assumes a central scheduling system that runs the import routines as required and which logs the reported results to this database.
Postgres allows direct links between schemas in the same database - for example queries can be written that reference tables in different schemas. Having 5 schemas within the main mdr database - mn, st, ob, nk and core - simplifies the interactions between these data stores. Postgres does not allow, however, simple references between databases. Instead schemas from an external database can be imported as 'foreign tables' (not copies but references to the source tables and their data) into pre-existing schemas in the importing database. By convention these foreign table schemas are distinct from the 'native' schemas and data, and usually take the same name as the source schema.
In the data system required for the MDR, each of the source specific databases and the mdr database requires the lup (look up tables) and ctx (context data) schemas to be available as foreign tables. In that way it is necessary to store only one copy of this important contextual data.
In addition the central mdr database, as well as its 5 native schemas, needs the ad schemas of each of the source databases to be available as foreign table schemas, so that the data can be easily transferred.
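A rough sketch of setting up such a foreign table import is shown below, using Python, psycopg2 and the postgres_fdw extension; the server name, credentials and connection details are invented for the example, and the MDR's own tooling is not shown:
<pre>
# Illustrative only: make the context database's ctx schema visible inside
# the mdr database as foreign tables (references to the source tables, not copies).
import psycopg2

FDW_SQL = """
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER IF NOT EXISTS context_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'context');
CREATE USER MAPPING IF NOT EXISTS FOR CURRENT_USER
    SERVER context_server OPTIONS (user 'mdr_user', password 'secret');
CREATE SCHEMA IF NOT EXISTS ctx;
IMPORT FOREIGN SCHEMA ctx FROM SERVER context_server INTO ctx;
"""

def attach_context_schema(mdr_conn_str):
    with psycopg2.connect(mdr_conn_str) as conn, conn.cursor() as cur:
        cur.execute(FDW_SQL)
</pre>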
Figure 2 summarises the schemas and their interactions.
In a fully developed system that might have many source systems (possibly up to 100), it may be necessary to 'stage' the aggregation process and include intermediate database layers between the source databases and the main database, to reduce the number of foreign table schemas in any one database. For instance, different intermediate databases could be established for trial registries, or for institutional or bibliographic systems.