Introduction
The program takes all the data within the ad tables in the various source databases and loads it into central tables within the mdr database, dealing with multiple entries for studies and creating the link data between studies and data objects. The aggregated data is held within tables in the st (study), ob (object) and nk (links) schemas. A fourth schema, 'core', is then populated as a direct import from the others, to provide a single simplified mdr dataset that can be exported to other systems.
Note that the aggregation process starts from scratch each time - there is no attempt to edit existing data. All of the central tables in the st, ob and nk schemas are dropped and then re-created during the main aggregation processes (triggered by -D). All of the core tables are dropped and re-created when the data is transferred to the core schema (triggered by -C). This makes each aggregation take longer (about 1 hour in total), but it simplifies the processes involved, allowing a focus on the aggregation itself without any need to consider updates and edits, and it makes the system much easier to maintain.
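The drop-and-recreate step can be illustrated with a minimal sketch, assuming Dapper and Npgsql are used for database access; the table definition shown is illustrative, not the full production schema:

    using Dapper;
    using Npgsql;

    public static class TableBuilder
    {
        // Illustrative sketch only - drops and re-creates one central table.
        // The real st, ob and nk schemas contain many more tables and columns.
        public static void RecreateStudiesTable(string connString)
        {
            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Execute(@"DROP TABLE IF EXISTS st.studies;
                               CREATE TABLE st.studies(
                                   id                INT PRIMARY KEY,
                                   display_title     VARCHAR,
                                   study_start_year  INT,
                                   study_type_id     INT
                               );");
            }
        }
    }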
Although an aggregation can be run at any time it makes most sense to do so after the following sequence of events:
- Downloads are run for all data sources to get the local source file collections as up to date as possible.
- Contextual data is updated and / or augmented, so far as resources and time allow. (This is a gradual process).
- Harvests and imports are run for all study based sources to get the 'baseline' study data as up to date as possible, in each of the study source databases.
- Harvests and imports are run for all object based sources to get additional data object data as up to date as possible, in each of the object source databases.
In other words, the aggregation module should be scheduled to run after the application of all other modules.
One important difference between the data in the central mdr tables and that in the source databases concerns the id fields used for both studies and data objects. In the source databases these ids are called sd_sid (source data study id) and sd_oid (source data object id) respectively. sd_sid values are usually trial registry ids, or sometimes data repository accession numbers; sd_oids are usually constructed hash values or, in the case of journal papers, PubMed ids. The records in the sd and ad schemas in the source databases also have an integer id, but this is a simple identity column with no significance other than as a counter - a way of breaking records into groups by id if and when necessary. These values are not transferred from the sd to the ad tables during the import process, nor from the ad tables to the central tables during aggregation. The 'true' identifiers for the records, which of course are transferred, are in the sd_sid and sd_oid fields.
Because they are always source-specific, the sd_sid / sd_oid fields cannot, however, be guaranteed to be unique across all studies or objects (even though, in practice, they currently are). More fundamentally, because an MDR study entry (and in the future a data object entry) may be derived from more than one study (or object), it does not make sense to use one particular source's id to identify that study (or object). The ids used for both studies and objects in the central tables are therefore integers, and the system drops the sd_sids and sd_oids from the table data, although they are retained in the links tables. As a bonus, the use of integer ids normally gives much faster processing than using strings, which given the size of some of the tables in the system is a non-trivial issue. The way in which the ids for studies and data objects are generated is described in the sections below. The attribute table records in the core database also have identity integer ids, but again these have no significance other than as a record label (e.g. to quickly locate a particular attribute record if the need arose).
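The retained mapping between the string ids and the central integer ids can be pictured as a simple link table in the nk schema. A minimal sketch, assuming Dapper and Npgsql as above (the column names are illustrative rather than the exact production schema):

    using Dapper;
    using Npgsql;

    public static class LinkTableBuilder
    {
        // Illustrative sketch of a study id link table, retaining the original
        // source-specific string ids against the central integer ids.
        public static void CreateStudyIdLinks(string connString)
        {
            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Execute(@"CREATE TABLE IF NOT EXISTS nk.study_ids(
                                   study_id      INT,       -- integer id used in the central tables
                                   source_id     INT,       -- identifies the source database
                                   sd_sid        VARCHAR,   -- original source data study id
                                   is_preferred  BOOLEAN    -- true for the most preferred source's record
                               );");
            }
        }
    }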
Because the aggregation process starts from a 'blank canvas' each time, and re-creates all the tables each time, there are two important consequences:
- The ids given for studies and data objects are not consistent in different versions of the aggregated data. They are consistent internally in any one version - but not across versions.
- The ids given for studies and data objects cannot therefore be public identifiers, and so should never be exposed to users.
It may be possible to develop systems to generate permanent accession numbers, if that is required in the future, but it is important to be aware of this property of the ids in the central mdr system.
N.B. The aggregation code can be found at https://github.com/ecrin-github/DataAggregator
Parameters
The program is a console app, to enable it to be more easily scheduled. There are a variety of flag-type parameters that can be used alone or in combination (though only some combinations make sense). These include:
-D: indicates that the aggregating data transfer should take place, from the source ad tables to the tables in the st (studies), ob (objects) and nk (links) schemas. This is the necessary first step of the aggregation process.
-C: indicates that the core tables should be created and filled from the aggregate tables, i.e. data is combined from the st, ob and nk schemas in to a single, simpler core schema.
-J: indicates that the core data be used to create JSON versions of the data within the core database.
-F: indicates that the core data should be used to create JSON files of two types, one for each study and another for each data object. It has no effect unless the -J parameter is also supplied.
-S: collects statistics about the existing data, from both the ad tables and the central aggregated tables.
-Z: zips the json files created by the -F parameter into a series of zip files, with up to 100,000 files in each. This is for ease of transfer to other systems.
The -S parameter can be provided at any time or on its own. It makes little sense to trigger the other processes without an initial call using -D. A -C call would then normally follow, then -J (-F), and finally -Z. The system can cope with multiple parameters, and applies them in the order given: -D -C -J -F -S -Z. It is easier to see what is happening, however, by making multiple calls to the program, working through the parameter list as described.
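A typical full run therefore consists of a series of separate calls, for example (a sketch - the executable name 'DataAggregator' is illustrative):

    DataAggregator -D        (aggregate the ad data into the st, ob and nk schemas)
    DataAggregator -C        (build and fill the core tables)
    DataAggregator -J -F     (create JSON within the database, then write it out as files)
    DataAggregator -S        (collect statistics on the ad and aggregated data)
    DataAggregator -Z        (zip the JSON files for transfer to other systems)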
Overview
Initial Setup
The main aggregation process, as triggered by -D, begins with the creation of a new set of central tables in the st, ob and nk schemas.
After that the program interrogates the mon database to retrieve a set of 'Source' objects, each one corresponding to a source database. The list is in order of decreasing 'preference', where preference indicates the usefulness of the source as the primary data source for duplicated studies (see the Preferred Source concept section in Study-study links). It first runs through those sources to obtain a list of the 'other registry ids' in each database - in other words, it builds up a list of all studies that are found in more than one data source. About 25,000 studies are registered in 2 registries, and about another 1,000 are registered in 3 or more. The details of the process are provided in Study-study links, but the outcome is a list of ordered study pairs, each linking a duplicated study to its equivalent in the 'more preferred' data source.
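Each entry in that list of ordered study pairs can be thought of as a small record of the following shape (a sketch - the property names are illustrative, not the exact production types):

    public class StudyLink
    {
        public int source_id { get; set; }             // the less preferred source
        public string sd_sid { get; set; }             // the study's id in that source
        public int preferred_source_id { get; set; }   // the more preferred source
        public string preferred_sd_sid { get; set; }   // the same study's id in that source
    }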
Some studies (several hundred) have more complex relationships - for example, they are registered in two registries, but in one of those they are entered as a group of related studies rather than having a simple 1-to-1 relationship with the study in the other registry. These are removed from the study pairs list and instead added to the study-study relationship data. Again, the details are in Study-study links.
Armed with the study-study linkage data, the system can begin the data transfer to the central mdr database.
Aggregating Data
The aggregation process is summarised by the high-level loop below. For each source:
- The ad schema of the source database is linked as a foreign table wrapper (FTW) in the mdr database. The schema name returned from this process is always <database name>_ad, e.g. ctg_ad, isrctn_ad, etc. It is needed for later use in SQL commands.
- A connection string for the source database is created for later use.
If the source has study tables (i.e. is a study based source):
- The study ids are retrieved from the source database, and processed to see which are new and which correspond to studies already in the central mdr database.
- As part of that processing, the source data study ids (the string sd_sids) are replaced by integer ids that are unique across the whole MDR system (a sketch of this id replacement is given after the code below).
- The data for the studies and their attributes is added to the st schema tables in the mdr database.
- The object ids are retrieved and linked to the studies using the mdr ids for studies rather than the original source based ones.
- The string source data object ids (sd_oids) are replaced by integer ids.
Otherwise, for an object based data source:
- The object ids are retrieved and linked to studies using source-specific processes.
- The string source data object ids (sd_oids) are replaced by integer ids.
In either case:
- The data for data objects and their attributes is transferred to the ob schema tables in the mdr database.
- The relationship between the old (sd_sid and sd_oid) ids and the new integer ids in the central system is retained in link tables in the nk schema. This link data is created in the id processing steps above.
- The foreign table wrapper for the source database is dropped.
    foreach (Source s in sources)
    {
        // Link the source database's ad schema into the mdr database as a
        // foreign table wrapper, and obtain a connection string for the source.
        string schema_name = repo.SetUpTempFTW(s.database_name);
        string conn_string = logging_repo.FetchConnString(s.database_name);
        DataTransferBuilder tb = new DataTransferBuilder(s, schema_name, conn_string, logging_repo);
        if (s.has_study_tables)
        {
            // Study based source - process the study ids, transfer the study
            // data, then process the ids of the associated data objects.
            tb.ProcessStudyIds();
            tb.TransferStudyData();
            tb.ProcessStudyObjectIds();
        }
        else
        {
            // Object based source - link the objects to studies using
            // source-specific processes.
            tb.ProcessStandaloneObjectIds();
        }
        // In either case, transfer the object data, then drop the FTW.
        tb.TransferObjectData();
        repo.DropTempFTW(s.database_name);
    }
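The id replacement steps referred to in the walkthrough above can be sketched as follows. Incoming studies that already exist in the central system (via a more preferred source) re-use the id assigned there, while genuinely new studies draw a fresh integer id from a sequence. This is a minimal sketch under assumed table, column and sequence names, not the production code:

    using Dapper;
    using Npgsql;

    public static class IdAssigner
    {
        // Illustrative sketch of replacing string sd_sids with central integer ids.
        public static void AssignStudyIds(string connString)
        {
            using (var conn = new NpgsqlConnection(connString))
            {
                // Studies already imported from a more preferred source keep that id...
                conn.Execute(@"UPDATE nk.temp_study_ids t
                               SET study_id = e.study_id, is_new = false
                               FROM nk.all_study_ids e
                               WHERE t.preferred_sd_sid = e.sd_sid
                               AND e.is_preferred = true;");

                // ...while genuinely new studies receive the next value of a sequence.
                conn.Execute(@"UPDATE nk.temp_study_ids
                               SET study_id = nextval('nk.study_id_seq'), is_new = true
                               WHERE study_id IS NULL;");
            }
        }
    }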
The study data is then added to the aggregate tables in order of most preferred source, working through the list to the least preferred. Apart from studies in the first source (ClinicalTrials.gov), the study id of any imported study is checked against the table of poly-registered studies. If it exists in that table, the study is not added as a separate record but is instead given the same id as the most preferred version of that study.

Studies that are genuinely new to the system are simply added to the aggregate data, together with all their attribute data, their associated data objects and the objects' attribute data. A study that already exists in the system is first checked to see whether it brings new data. The main study record is not added - that can come only from the 'preferred' source. Study attributes are added only if they do not already exist, so far as the program can readily check this (see the sketch below). Data objects in the 'non-preferred' versions of the study may already exist, but genuine duplication of data objects from different sources is extremely rare, so almost all data objects are added.

Studies with multiple entries in different registries therefore have their data built up from a single 'preferred' source for the main study record, from potentially multiple registries for study attributes, and definitely from multiple registries for the associated data objects.
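The 'add only if not already present' check on study attributes can be sketched as an anti-join, shown here for study identifiers (table and column names are illustrative):

    using Dapper;
    using Npgsql;

    public static class AttributeTransferrer
    {
        // Illustrative sketch - inserts only those incoming study identifiers
        // that do not already exist in the aggregate table.
        public static void AddNewStudyIdentifiers(string connString)
        {
            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Execute(@"INSERT INTO st.study_identifiers(study_id, identifier_type_id, identifier_value)
                               SELECT t.study_id, t.identifier_type_id, t.identifier_value
                               FROM nk.temp_study_identifiers t
                               LEFT JOIN st.study_identifiers e
                               ON t.study_id = e.study_id
                               AND t.identifier_value = e.identifier_value
                               WHERE e.study_id IS NULL;");
            }
        }
    }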
Aggregating Link Data
- The link between data objects and studies - found within the source data object data - is transferred to link tables. The 'parent study' id is transformed into its most 'preferred' form if and when necessary, to ensure that the links work properly in the aggregated system. Also transferred to link tables is the provenance data that indicates when each study and data object was last retrieved from the source.
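The transformation of the 'parent study' id into its preferred form can be sketched as a single update against the id link data (again a sketch, with illustrative table and column names):

    using Dapper;
    using Npgsql;

    public static class LinkUpdater
    {
        // Illustrative sketch - re-points study-object links at the integer id
        // of the 'preferred' version of each study.
        public static void RepointObjectLinks(string connString)
        {
            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Execute(@"UPDATE nk.study_object_links k
                               SET study_id = p.preferred_study_id
                               FROM nk.poly_registered_studies p
                               WHERE k.study_id = p.study_id;");
            }
        }
    }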
Aggregating Object Based Data
- For sources where there are no studies - just data objects - the process is necessarily different. It must also run after the aggregation of study data, to ensure that all studies are already in the central system.
- This only applies to PubMed data at the moment. For PubMed, the links between the PubMed data and the studies are first identified. Two sources are used - the 'bank id' data within the PubMed data itself, referring to trial registry ids, and the 'source id' data in the study based sources, where references are provided to relevant papers. These two sets of data are combined and de-duplicated, and two final sets of data are created: the distinct list of PubMed data objects, and the list of links between those objects and studies. Unlike most data objects in the study based resources, PubMed data objects can be linked to multiple studies, and of course studies may have multiple article references. The linkage is therefore complex and requires considerable additional processing.
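The combination and de-duplication of the two sets of links can be sketched as a set union - UNION removes exact duplicates - with illustrative table names:

    using Dapper;
    using Npgsql;

    public static class PubMedLinker
    {
        // Illustrative sketch - merges the two sources of PubMed-study links.
        public static void CombinePubMedLinks(string connString)
        {
            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Execute(@"INSERT INTO nk.pubmed_study_links(pmid, sd_sid)
                               SELECT pmid, sd_sid FROM nk.bank_id_links        -- from PubMed 'bank id' data
                               UNION                                            -- UNION de-duplicates
                               SELECT pmid, sd_sid FROM nk.source_reference_links;  -- from study source references");
            }
        }
    }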
Creating the Core Tables
Most of the other options provided by the program are relatively simple and self-contained. The -C option copies the data from the aggregating schemas (st, ob and nk) to the core schema without further processing, other than creating the provenance strings for both studies and data objects. These strings may be composite if more than one source was involved (see the sketch below).
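The composite provenance strings can be sketched using Postgres's string_agg function, concatenating one fragment per contributing source (a sketch with illustrative table and column names):

    using Dapper;
    using Npgsql;

    public static class ProvenanceBuilder
    {
        // Illustrative sketch - builds a single provenance string per study,
        // composite when the study was drawn from more than one source.
        public static void AddStudyProvenance(string connString)
        {
            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Execute(@"UPDATE core.studies c
                               SET provenance_string = p.prov
                               FROM (SELECT study_id,
                                            string_agg(source_name || ' (data retrieved '
                                               || datetime_of_data_fetch || ')', '; ') AS prov
                                     FROM nk.study_source_data
                                     GROUP BY study_id) p
                               WHERE c.id = p.study_id;");
            }
        }
    }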