Cam-CAN (Cambridge Centre for Ageing Neuroscience) dataset inventory.

The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) is a large-scale collaborative research project at the University of Cambridge, launched in October 2010, with substantial initial funding from the Biotechnology and Biological Sciences Research Council (BBSRC), followed by support from the Medical Research Council (MRC) Cognition & Brain Sciences Unit (CBU) and the European Union Horizon 2020 LifeBrain project. The Cam-CAN project uses epidemiological, cognitive, and neuroimaging data to understand how individuals can best retain cognitive abilities into old age.

This inventory contains details of data from Stages I and II of the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) project. Nearly 3000 adults aged 18-90 completed a home interview, and a subset of nearly 700 (roughly 100 per decade from 18-88; the "CC700") were scanned using structural Magnetic Resonance Imaging (MRI), functional MRI (both resting and task-based), and magnetoencephalography (MEG), and completed multiple cognitive experiments. Details of the available datasets are listed below; procedures for obtaining them are described via the "Apply for Data Access" link below. The data are available freely, subject to an online data usage agreement. Data from Stage III (the "CC280") are available on email request.

A detailed description of the CC700 dataset and pre-processing pipeline can be found in Taylor et al. (2017). A more general overview of the Cam-CAN dataset can be found in Shafto et al. (2014).


Apply for Data Access


Imaging Data

Name                                      N (Raw)

Magnetic Resonance Imaging (MRI) data (conforming to BIDS standard)
  Structural
    T1                                    653
    T2                                    653
    Diffusion Weighted Imaging (DWI)      642
    Magnetisation Transfer Imaging (MTI)  623
  Functional
    Resting state                         652
    Movie watching                        649
    Sensori-motor task                    651

Magnetoencephalography (MEG) data (conforming to BIDS standard)
    Resting state                         647
    Sensori-motor task                    647
    Sensory (passive) task                639
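Since the MRI release is organised to the BIDS standard, one quick way to reproduce the N (Raw) column on a local copy is to count subjects per BIDS filename suffix. The sketch below is a minimal illustration: the root path is hypothetical, and the suffixes (T1w, T2w, dwi, bold, ...) follow standard BIDS naming rather than a confirmed Cam-CAN layout.

```python
# Minimal sketch: count how many subjects have each imaging modality in a
# local copy of the MRI release, mirroring the N (Raw) column above.
# The root path is hypothetical; suffixes follow standard BIDS naming.
from collections import defaultdict
from pathlib import Path

bids_root = Path("/data/camcan/cc700/mri")  # hypothetical local path

subjects_by_suffix = defaultdict(set)
for nii in bids_root.glob("sub-*/**/*.nii.gz"):
    subject = nii.name.split("_")[0]                # e.g. "sub-CC110033"
    suffix = nii.name.split("_")[-1].split(".")[0]  # e.g. "T1w", "bold"
    subjects_by_suffix[suffix].add(subject)

for suffix in sorted(subjects_by_suffix):
    print(f"{suffix:8s} {len(subjects_by_suffix[suffix])} subjects")
```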

Imaging Data Parameters

For a complete set of sequence parameters, download the PDF

T1 3D MPRAGE, TR=2250ms, TE=2.99ms, TI=900ms; FA=9deg; FOV=256×240×192mm; 1mm isotropic; GRAPPA=2; TA=4mins 32s
T2 3D SPACE, TR=2800ms, TE=408ms, TI=900ms; FOV=256×256×192mm; 1mm isotropic; GRAPPA=2; TA=4mins 30s
Fieldmaps PE-GRE, TR=400ms, TE=5.19ms/7.65ms, 1 magnitude and 1 phase volume, 32 slices 3.7mm thick, 0.74mm gap, FA=60deg, FOV=192×192mm; 3×3×4.44mm voxels; TA=53s
DWI 2D twice-refocused SE EPI, TR=9100ms, TE=104ms, TI=900ms; FOV=192×192mm; 66 axial slices, 2mm isotropic; b-values=0, 1000, 2000 s/mm², 30 directions per non-zero b-value; TA=10mins 2s. Total readout time 0.0684s (echo spacing=0.72ms, EPI factor=96; a worked derivation follows the MRI note below).
MTI 2×3D MT-prepared SPGR, TR=30ms/50ms, FA=12deg; FOV=192×192mm; 1.5mm isotropic; bandwidth=190Hz/px; TA=2mins 36s / 4mins 19s (RF pulse offset=1950Hz, BW=375Hz, FA=500deg, duration=9.984ms)
fMRI (Rest + Sensori-motor) T2* GE EPI, N=261 volumes of 32 axial slices 3.7mm thick, 0.74mm gap, TR=1970ms; TE=30ms; FA=78deg; FOV=192×192mm; 3×3×4.44mm voxels; TA=8mins 40s.
fMRI (Movie) Multi-echo T2* GE EPI, N=193 volumes of 32 axial slices 3.7mm thick, 0.74mm gap, TR=2470ms; TE=[9.4, 21.2, 33, 45, 57]ms; FA=78deg; FOV=192×192mm; 3×3×4.44mm voxels; TA=8mins 13s. The first TR corresponds to the first frame of the movie. Data can be provided either as a single image in which the 5 echoes are combined, weighted by estimated T2* (sketched below), or as 5 separate images, one per TE.
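Where the movie data are taken as 5 separate echoes, a standard way to form a single T2*-weighted image is a Posse-style combination with per-voxel weights proportional to TE·exp(−TE/T2*). The sketch below illustrates that technique under the assumption that a voxelwise T2* map is available; it is not necessarily the exact weighting used to build the released combined image.

```python
# Sketch of a standard T2*-weighted echo combination (Posse-style weights,
# w_i proportional to TE_i * exp(-TE_i / T2*)) for the 5-echo movie data.
# Illustrates the technique only; not necessarily the exact weighting used
# to build the released combined image.
import numpy as np

TES = np.array([9.4, 21.2, 33.0, 45.0, 57.0])  # echo times in ms (from above)

def combine_echoes(echoes, t2star):
    """echoes: (5, x, y, z, t) array; t2star: (x, y, z) map in ms."""
    te = TES[:, None, None, None]
    w = te * np.exp(-te / t2star[None])          # per-voxel echo weights
    w /= w.sum(axis=0, keepdims=True)            # normalise across echoes
    return (w[..., None] * echoes).sum(axis=0)   # weighted sum -> (x, y, z, t)

# Toy example: 2x2x2 volume, 3 time points, uniform T2* of 30 ms.
rng = np.random.default_rng(0)
combined = combine_echoes(rng.random((5, 2, 2, 2, 3)), np.full((2, 2, 2), 30.0))
print(combined.shape)  # (2, 2, 2, 3)
```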
NOTE FOR ALL MRI DATA In October 2011, after the first 97 Cam-CAN participants had been scanned, the 3T TRIO was ramped down following a gradient coil failure, so the B0 frequency and passive shimming configuration differ for subsequent participants. The coil type is available as a binary variable within the standard tabular data folder provided with all requests.
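The DWI total readout time quoted above follows directly from the echo spacing and EPI factor: 0.72ms × (96 − 1) = 68.4ms = 0.0684s. The minimal sketch below derives it and formats an FSL topup/eddy-style acqparams row; the phase-encode direction used is an assumption for illustration, so confirm it from the BIDS JSON sidecars of your copy before use.

```python
# Sketch: derive the DWI total readout time quoted above (0.0684 s) from the
# echo spacing and EPI factor, then format an FSL topup/eddy "acqparams" row.
# The phase-encode direction (A>>P, encoded as "0 -1 0") is an assumption for
# illustration only; confirm it from the BIDS JSON sidecars before use.
echo_spacing_ms = 0.72
epi_factor = 96

readout_s = echo_spacing_ms * (epi_factor - 1) / 1000.0
print(f"total readout time = {readout_s:.4f} s")  # -> 0.0684 s, as above

acqparams_row = f"0 -1 0 {readout_s:.4f}"
print(acqparams_row)
```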
MEG Raw
(Rest + Sensorimotor + Sensory)

306-channel Elekta Neuromag Vectorview (102 magnetometers and 204 planar gradiometers), 1kHz sampling, 0.03-330 Hz, 4 HPI coils, bipolar VEOG, HEOG, ECG

NOTE: For the first 30-40s of each MEG run, the continuous Head Position Indicator (cHPI) coils were not yet switched on (this does not affect trial onsets, which always occurred after cHPI was turned on, but does affect resting-state data; see the cropping sketch below).

Empty-room data are also available.
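Because the first 30-40s of each run precede cHPI onset, resting-state analyses may want to discard that initial segment. A minimal MNE-Python sketch, assuming a hypothetical filename and a conservative fixed 40s crop rather than detecting the true cHPI onset:

```python
# Sketch: discard the initial segment of a resting-state MEG run recorded
# before the cHPI coils were switched on. Uses a conservative fixed 40 s
# crop; the filename is hypothetical, and a more precise approach would
# detect the actual cHPI onset in the recording.
import mne

raw = mne.io.read_raw_fif("sub-CC110033_task-rest_meg.fif", preload=True)
raw.crop(tmin=40.0)  # drop the first 40 s (no cHPI; see note above)
print(f"remaining duration: {raw.times[-1]:.1f} s")
```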

MEG Maxfiltered
(Head + Device Space)

By default, Elekta Neuromag MaxFilter 2.2 was applied with the following settings: temporal signal space separation (tSSS) with 0.98 correlation and a 10s window; bad channel correction ON; motion correction OFF; 50Hz+harmonics (mains) notch. Additional versions are available that 1) include continuous motion correction and 2) include transformation to a common default (device) space across participants. Note that motion correction failed for some participants (for whom the HPI coil signals were lost).
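If you prefer to rerun SSS yourself on the raw data, the listed MaxFilter 2.2 settings can be approximated in MNE-Python as sketched below. This is an approximation, not a reproduction of the released Maxfiltered files: site-specific fine-calibration and cross-talk files and bad-channel marking are omitted, and the filename is hypothetical.

```python
# Sketch: approximate the listed MaxFilter 2.2 settings (tSSS with 0.98
# correlation and a 10 s window, plus a 50 Hz mains + harmonics notch)
# using MNE-Python. Not bit-identical to the released Maxfiltered data:
# fine-calibration / cross-talk files and bad-channel marking are omitted.
import numpy as np
import mne

raw = mne.io.read_raw_fif("sub-CC110033_task-rest_meg.fif", preload=True)
raw_sss = mne.preprocessing.maxwell_filter(raw, st_duration=10.0,
                                           st_correlation=0.98)
raw_sss.notch_filter(np.arange(50.0, 301.0, 50.0))  # mains + harmonics
```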

Cognitive Data

Most of the cognitive (behavioural) datasets below were acquired at Stage II outside of the scanner (except for the MEG and MRI sensori-motor tasks).

Dataset N
Behavioural data from non-imaging sessions
Benton faces 657
Cardiovascular measures 587
Cattell 660
Emotional expression recogn. 665
Emotional memory 330
Emotion regulation 316
Famous faces 660
Force matching 328
Hotel task 658
Motor learning 318
Picture priming 652
Proverbs 655
RT choice 682
RT simple 686
Synsem 660
TOT 656
VSTM colour 656
Behavioural data from imaging sessions
MRI Sensori-motor task 657
MEG Sensori-motor task (RTs) 655

Cognitive Data Descriptions

Emotional expression View a face and label the emotion expressed (happy, sad, anger, fear, disgust, surprise), where faces are morphs along axes between emotional expressions.
Emotional memory Study: View (positive, neutral, or negative) background image, then object image superimposed, and imagine a ‘story’ linking the two; Test (incidental): View and identify degraded image of (studied, new) object, then judge memory and confidence for visually intact image of same object, then recall valence and any details of background image from study phase.
Emotional reactivity and regulation View (positive, neutral, negative) film clips under instructions to simply ‘watch’ or ‘reappraise’ (attempt to reduce emotional impact by reinterpreting its meaning; for some negative films only), then rate emotional impact (how negative, positive they felt during clip) and the degree to which they successfully reappraised.
Face recognition: familiar faces View faces of famous people (and some unknown foils), judge whether each is familiar, and if so, what is known about the person (occupation, nationality, origin of fame, etc.), then attempt to provide person’s name.
Face recognition: unfamiliar faces Given a target image of a face, identify same individual in an array of 6 face images (with possible changes in head orientation and lighting between target and same face in the test array).
Fluid intelligence Complete nonverbal puzzles involving series completion, classification, matrices, and conditions.
Force matching Match mechanical force applied to left index finger by using right index finger either directly, pressing a lever which transmits force to left index finger, or indirectly, by moving a slider which adjusts the force transmitted to the left index finger.
Hotel task Perform tasks in role of hotel manager: write customer bills, sort money, proofread advert, sort playing cards, alphabetise list of names. Total time must be allocated equally between tasks; there is not enough time to complete any one task.
Motor learning Time-pressured movement of a cursor to a target by moving an (occluded) stylus under veridical, perturbed (30°), and reset (veridical again) mappings between visual and real space.
Picture-picture priming Name the pictured object presented alone (baseline), then when preceded by a prime object that is phonologically related (one, two initial phonemes), semantically related (low, high relatedness), or unrelated.
Proverb comprehension Read and interpret three English proverbs.
Sentence comprehension Listen to and judge grammatical acceptability of partial sentences, beginning with an (ambiguous, unambiguous) sentence stem (e.g., “Tom noticed that landing planes…”) followed by a disambiguating continuation word (e.g., “are”) in a different voice. Ambiguity is either semantic or syntactic, with empirically determined dominant and subordinate interpretations.
Tip-of-tongue task View faces of famous people (actors, musicians, politicians, etc.) and respond with the person’s name, or “don’t know” if they do not know the person’s name (even if familiar), or “TOT” if they know the person’s name but are (temporarily) unable to retrieve it.
Visual short-term memory View (1–4) coloured discs briefly presented on a computer screen, then after a delay, attempt to remember the colour of the disc that was at a cued location, with response indicated by selecting the colour on a colour wheel (touchscreen input).
Functional Data
Resting state Participants are asked to lie still with their eyes closed, resting but remaining awake.

Duration: 9m20s.

Sensori-motor task Audio-visual stimuli (bilateral sine gratings with a concurrent auditory tone). Participants are asked to respond each time a stimulus is presented.

Duration: 8m40s.

Passive audio-visual task Separate auditory and visual stimuli (bilateral sine gratings or an auditory tone). No response is required.

Duration: 2 minutes.

Demographic data

Home Interview The Home Interview dataset contains data from approximately 2700 participants from Stage I of the Cam-CAN study. It includes a range of interview and self-completion questionnaires designed to collect lifestyle variables, demographic data, and measures of physical and social activity. A comprehensive searchable list is provided when data are requested. The lists are divided into four categories: Home Interview (homeint_ prefix), Electronic Personal Assessment Questionnaire (epaq_ prefix), Self-Completion Questionnaire (scq_ prefix), and Additional Scores (additional_ prefix).
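Given the four documented prefixes, one way to split a requested tabular export into its questionnaire families is by column-name prefix. A minimal pandas sketch; the filename, and the assumption that the variables arrive in a single table, are hypothetical:

```python
# Sketch: split a Home Interview tabular export into its four questionnaire
# families by column prefix. The filename is hypothetical; only the four
# prefixes themselves are documented above.
import pandas as pd

df = pd.read_csv("camcan_homeinterview.csv")  # hypothetical file
families = {p: df.filter(regex=f"^{p}_")
            for p in ("homeint", "epaq", "scq", "additional")}
for prefix, sub in families.items():
    print(f"{prefix}: {sub.shape[1]} variables")
```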

Physiological Measures

Cardiovascular measures (diastolic and systolic blood pressure), height, and weight were taken during the scanning session. Three blood pressure measures per participant are provided.
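With three blood pressure readings per participant, a common derived summary is their per-participant mean. A small sketch; the file and column names are hypothetical, so check the variable list supplied with your request:

```python
# Sketch: average the three blood pressure readings per participant into a
# single summary value. File and column names are hypothetical; check the
# searchable variable list supplied with your data request.
import pandas as pd

bp = pd.read_csv("cardio_measures.csv")  # hypothetical file
for which in ("systolic", "diastolic"):
    cols = [f"{which}_bp{i}" for i in (1, 2, 3)]  # hypothetical columns
    bp[f"{which}_bp_mean"] = bp[cols].mean(axis=1)
print(bp.filter(like="_bp_mean").head())
```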

For more information on the imaging and behavioural datasets, including references and key variables, please see Taylor et al. (2017) and Shafto et al. (2014).

 


References:

Taylor, J.R., Williams, N., Cusack, R., Auer, T., Shafto, M.A., Dixon, M., Tyler, L.K., Cam-CAN, & Henson, R.N. (2017). The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) data repository: Structural and functional MRI, MEG, and cognitive data from a cross-sectional adult lifespan sample. NeuroImage, 144, 262-269. doi:10.1016/j.neuroimage.2015.09.018.

Shafto, M.A., Tyler, L.K., Dixon, M., Taylor, J.R., Rowe, J.B., Cusack, R., Calder, A.J., Marslen-Wilson, W.D., Duncan, J., Dalgleish, T., Henson, R.N., Brayne, C., Cam-CAN, & Matthews, F.E. (2014). The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) study protocol: a cross-sectional, lifespan, multidisciplinary examination of healthy cognitive ageing. BMC Neurology, 14, 204. doi:10.1186/s12883-014-0204-1.

See also: Cam-CAN Public Website