
Convert your fMRI data into BIDS format

To organize our fMRI dataset, we follow the BIDS Specification.

If you are not familiar with the BIDS Specification, the BIDS Starter Kit provides all the information needed to get started, along with example BIDS datasets, Talks and Slides, and most importantly Tutorials.

It is crucial that you get familiar with the BIDS folder and file naming conventions and structure. Most, if not all, of the tools we are going to use in the next steps are BIDS Apps, which rely on data organized following the BIDS Specification. Following this structure will make it easier to use these tools, share your code and data, and communicate with other scientists.

The BIDS Specification provides guidelines on how to organize all your data formats, including (f/d)MRI, EEG, eye-tracking, Task events (whether or not associated with neuroimaging recordings), and Derivatives (e.g., pre-processed files, Regions Of Interest mask files, GLM files, etc.).

At any moment, you can check your dataset for BIDS compliance. To do so, you can use the BIDS dataset validator.


BIDS Conversion Overview

Here's a high-level overview of the steps involved in arranging your data in a BIDS-compatible way. While this provides a general understanding, most of these steps should be performed using the code provided in each sub-section to minimize errors. After scanning participants, you'll obtain data from two primary sources:

  1. The scanner: functional and structural outputs (DICOM files).
  2. The stimulus presentation computer: behavioural outputs (mainly log files and mat files) and potentially eye-tracking data (edf files or tsv files).

As you turn your raw data into a BIDS-compatible format, your project directory will change considerably. The folder trees below show how each step affects your working directory.


1. Create the raw data directory


myproject
└── sourcedata

Your first step is to organize your files in a sourcedata folder. Follow the structure outlined in How to store raw data: have one main project folder (e.g. myproject), and a sourcedata folder in it.


2. Create the subject's directory


myproject
└── sourcedata
    └── sub-01
        ├── bh
        ├── dicom
        ├── eye
        ├── dicom_anon
        └── nifti

Create the relevant sub-folders within the sourcedata folder: for each participant you collected data from, create a sub-xx folder (e.g. sub-01). Within each participant's folder, create a bh folder (behavioural data), an eye folder (eye-tracking data), a dicom folder (DICOM files collected from the scanner), a dicom_anon folder (anonymized DICOM files), and a nifti folder (NIfTI files, the format produced by the DICOM conversion).

To create these folders, open your terminal (or PowerShell if you are on Windows) and type:

cd /path/to/myproject/sourcedata
mkdir sub-01
mkdir sub-01/bh
mkdir sub-01/eye
mkdir sub-01/dicom
mkdir sub-01/dicom_anon
mkdir sub-01/nifti

3. Organize your source files


myproject
└── sourcedata
    └── sub-01
        ├── bh
        │   ├── yyyy-mm-dd-sub-01_run-01_task-{taskname}_log.tsv
        │   ├── yyyy-mm-dd-sub-01_run-01_task-{taskname}.mat
        │   ├── ...
        │   ├── yyyy-mm-dd-sub-01_run-{runnumber}_task-{taskname}_log.tsv
        │   └── yyyy-mm-dd-sub-01_run-{runnumber}_task-{taskname}.mat
        ├── dicom
        │   ├── IM_0001
        │   ├── IM_0005
        │   ├── PS_0002
        │   ├── PS_0006
        │   ├── XX_0003
        │   ├── XX_0004
        │   └── XX_0007
        ├── dicom_anon
        ├── nifti
        └── eye

Place the files you collected in this sourcedata structure: data collected from your experimental task goes into bh (e.g. .mat files and log files if you used the fMRI task template), data collected from the scanner itself (DICOM) goes in dicom, eye-tracking data (generally, EDF or csv files) goes in eye.
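
If you prefer to script this step, below is a minimal MATLAB sketch. The source locations (scanner export, stimulus computer output) are placeholders: adapt them to wherever your raw files actually live.

% Minimal sketch -- the source paths below are placeholders, adapt them to
% wherever your raw files currently are.
subj = 'sub-01';
dst  = fullfile('/path/to/myproject/sourcedata', subj);

movefile('/path/to/scanner_export/*',       fullfile(dst, 'dicom'));   % DICOM files
movefile('/path/to/stimulus_pc/logs/*',     fullfile(dst, 'bh'));      % behavioural output
movefile('/path/to/stimulus_pc/eyetrack/*', fullfile(dst, 'eye'));     % eye-tracking data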


4. Convert DICOM files


myproject
└── sourcedata
    └── sub-01
        ├── bh
        ├── dicom
        ├── dicom_anon
        │   ├── IM_0001
        │   ├── IM_0005
        │   ├── PS_0002
        │   ├── PS_0006
        │   ├── XX_0003
        │   ├── XX_0004
        │   └── XX_0007
        ├── nifti
        │   ├── dcmHeaders.mat
        │   ├── sub-01_run-01.json
        │   ├── sub-01_run-01.nii.gz
        │   ├── sub-01_struct.json
        │   └── sub-01_struct.nii.gz
        └── eye

If you have collected DICOM files from the scanner, you need to anonymise and convert them so that you can use them properly. There are several tools available that can help with this. One recommended option is dicm2nii, a lightweight and flexible toolbox for handling DICOM-to-NIfTI conversion.

Why dicm2nii and not dcm2niix?

Although dcm2niix is widely used and robust, especially for modern enhanced DICOMs and vendor-specific edge cases (like Philips), dicm2nii is often suggested here because it runs directly in MATLAB and therefore fits naturally with the MATLAB-based steps used in the rest of this guide.

For data acquired with Philips scanners, or if your DICOMs have missing metadata (e.g., PhaseEncodingDirection), see this Rorden Lab guide and this NITRC forum thread. See also Missing fields in JSON files for more information.

To convert your data:

  1. Navigate to your sourcedata folder

    cd /path/to/myproject/sourcedata
    
  2. Clone the repository from GitHub:

    git clone https://github.com/xiangruili/dicm2nii.git
    
  3. Add the dicm2nii folder to your MATLAB path. In MATLAB, run:

    addpath('/path/to/dicm2nii')  % Adjust this to the actual folder path
    

    Tip

    You can also use uigetdir to interactively select the folder:

    addpath(uigetdir)
    
  4. Anonymize your DICOM files

    Use the anonymize_dicm function. This removes identifying fields and creates a safe copy for conversion:

    anonymize_dicm('sub-01/dicom', 'sub-01/dicom_anon', 'sub-01')
    
    • First argument = path to raw DICOM folder
    • Second argument = path to output anonymized DICOM folder
    • Third argument = subject ID string used in metadata fields (optional but recommended)

    This will create dicom_anon and log any changes made.

  5. Convert anonymized DICOMs to NIfTI

    Now convert the anonymized files:

    dicm2nii('sub-01/dicom_anon', 'sub-01/nifti', 'nii.gz')
    
    • First argument = path to anonymized DICOMs
    • Second argument = output directory
    • Third argument = output format (nii, nii.gz)

    This will:

    • Generate one .nii.gz file per series
    • Produce accompanying .json metadata files
    • Create a dcmHeaders.mat with all parsed metadata

5. Create the BIDS directory


myproject
├── BIDS
│   └── sub-01
│       ├── anat
│       │   └── sub-01_T1w.nii
│       └── func
│           ├── sub-01_task-{taskname}_run-01_bold.nii
│           ├── ...
│           └── sub-01_task-{taskname}_run-{runnumber}_bold.nii
└── sourcedata
    └── sub-01
        ├── bh
        ├── dicom
        ├── dicom_anon
        └── nifti

Create a BIDS folder in your main project directory, alongside the sourcedata folder. For each participant, create a sub-xx folder (e.g. BIDS/sub-01). In each participant's BIDS folder, create a func folder for functional files and an anat folder for anatomical files. Copy your functional .nii files from sourcedata to their corresponding func folder, renaming them if necessary to follow the BIDS format (e.g. sub-01_task-{taskname}_run-01_bold.nii), and similarly copy your structural .nii files to the anat folder, renaming them if necessary (e.g. sub-01_T1w.nii). See below for more details on how to rename and move NIfTI files.


6. Organise the BIDS directory


myproject
├── BIDS
│   └── sub-01
│       ├── anat
│       │   └── sub-01_T1w.nii
│       └── func
│           ├── sub-01_task-{taskname}_run-01_bold.nii
│           ├── ...
│           └── sub-01_task-{taskname}_run-{runnumber}_bold.nii
└── sourcedata
    └── sub-01
        ├── bh
        ├── dicom
        ├── dicom_anon
        └── nifti
  1. Navigate to your sourcedata/sub-xx/nifti/ folder.
  2. Identify the functional and structural NIfTI files.
  3. Rename the files following BIDS conventions:
    • Functional: sub-<label>_task-<label>_run-<label>_bold.nii
    • Structural: sub-<label>_T1w.nii
  4. Move the renamed files to their respective folders in BIDS/sub-xx/:
    • Functional files go to BIDS/sub-xx/func/
    • Structural files go to BIDS/sub-xx/anat/
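
If you want to script the renaming and moving, here is a minimal MATLAB sketch. The source file names match the step 4 example output and the task label is a placeholder; BIDS accepts both .nii and .nii.gz extensions, so keep whichever your conversion produced.

% Minimal sketch -- adapt file names and task label to your own data.
subj = 'sub-01';
task = 'taskname';   % placeholder -- replace with your task label
srcdir  = fullfile('sourcedata', subj, 'nifti');
funcdir = fullfile('BIDS', subj, 'func');
anatdir = fullfile('BIDS', subj, 'anat');

% Functional run (file name follows the step 4 example output)
copyfile(fullfile(srcdir, [subj '_run-01.nii.gz']), ...
         fullfile(funcdir, sprintf('%s_task-%s_run-01_bold.nii.gz', subj, task)));

% Structural scan
copyfile(fullfile(srcdir, [subj '_struct.nii.gz']), ...
         fullfile(anatdir, [subj '_T1w.nii.gz']));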

7. Create JSON sidecar files


myproject
├── BIDS
│   └── sub-01
│       ├── anat
│       │   ├── sub-01_T1w.nii
│       │   └── sub-01_T1w.json
│       └── func
│           ├── sub-01_task-{taskname}_run-01_bold.json
│           ├── sub-01_task-{taskname}_run-01_bold.nii
│           ├── ...
│           ├── sub-01_task-{taskname}_run-{runnumber}_bold.json
│           └── sub-01_task-{taskname}_run-{runnumber}_bold.nii
└── sourcedata
    └── sub-01
        ├── bh
        ├── dicom
        ├── dicom_anon
        └── nifti

Create .json sidecar files for each functional run .nii file, using the output from the dicom conversion step.

Each .nii file must have a sidecar JSON file. Make sure you have anonymised and converted your DICOM files, then go through the following steps:

  1. Locate the JSON sidecar files in sourcedata/sub-xx/nifti/.
  2. Open each JSON file and complete the PhaseEncodingDirection and SliceTiming fields (see Missing fields in JSON files for more information).
  3. Copy-paste the updated JSON files to accompany each NIfTI file in the BIDS/sub-xx/func folder: each run should have its accompanying sub-xx_task-{taskname}_run-{runnumber}_bold.json sidecar file.
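
As a starting point, the MATLAB sketch below reads a sidecar produced by dicm2nii, fills in the two fields, and writes the result next to the corresponding BOLD file. The PhaseEncodingDirection and SliceTiming values are placeholders: replace them with the correct values for your acquisition (see Missing fields in JSON files).

% Sketch only: read the dicm2nii sidecar, fill missing fields, write to BIDS.
subj = 'sub-01';
task = 'taskname';   % placeholder -- replace with your task label
src = fullfile('sourcedata', subj, 'nifti', [subj '_run-01.json']);
dst = fullfile('BIDS', subj, 'func', sprintf('%s_task-%s_run-01_bold.json', subj, task));

meta = jsondecode(fileread(src));
meta.PhaseEncodingDirection = 'j-';        % placeholder -- check your sequence
meta.SliceTiming = [0; 0.5; 1.0; 1.5];     % placeholder -- use your real slice timings

fid = fopen(dst, 'w');
fwrite(fid, jsonencode(meta));
fclose(fid);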

8. Create event files


myproject
├── BIDS
│   └── sub-01
│       ├── anat
│       │   ├── sub-01_T1w.nii
│       │   └── sub-01_T1w.json
│       └── func
│           ├── sub-01_task-{taskname}_run-01_bold.json
│           ├── sub-01_task-{taskname}_run-01_bold.nii
│           ├── sub-01_task-{taskname}_run-01_events.tsv
│           ├── ...
│           ├── sub-01_task-{taskname}_run-{runnumber}_bold.json
│           ├── sub-01_task-{taskname}_run-{runnumber}_bold.nii
│           └── sub-01_task-{taskname}_run-{runnumber}_events.tsv
├── code
└── sourcedata
    └── sub-01
        ├── bh
        ├── dicom
        ├── dicom_anon
        ├── eye
        └── nifti

Create one events.tsv file for each functional run .nii file, using the output from your experimental task. If you used the fMRI task template, the output log files can be used to create event files quite easily. More info on events files can be found here.

Event files are crucial for analyzing fMRI data. They contain information about the timing and nature of stimuli or tasks during the scan. To create your event files manually:

  1. Navigate to your sourcedata/sub-xx/bh/ folder.
  2. Locate the behavioral output files (.mat or .log) for each run.
  3. Create a corresponding events.tsv file for each run in the BIDS/sub-xx/func/ folder.

Each events.tsv file must contain at least three columns: onset, duration, and trial_type, and can include additional columns as needed for your specific analysis. It must also contain one row per trial (stimulus) in your experiment.

If you use the fMRI task template, the log files you get as output contain all the information needed to build event files in a few steps. Below is a quick overview of how to make event files from log files. Note that it might not apply perfectly to all cases, and that other approaches may be more practical for you. It can be a good idea to create your own utility script to create event files from your behavioural results.

To create event files from log files, here is what you need to do (an example transformation and a code sketch follow this list):

  • Create the onset column from the onset values in the log files: there, onset times (usually stored in a column called ACTUAL_ONSET) are measured from the start of the experiment script, whereas BIDS event files require onsets relative to the start of the scan. To obtain correct onset values, shift the onset of each line so that 0.0 corresponds to the first TR trigger of the run.
  • Create the duration column from the onset values. Log files typically don't record the exact duration of events, as that would put some extra calculation load onto MATLAB (which struggles enough already as it is). A good approach is to calculate durations post-hoc from the onset values, by taking the difference between successive event onsets.
  • Create the trial_type column with the condition names. Fill this column by extracting the information that is relevant for your experimental design. In the example below, we extract the condition names face and building from the EVENT_ID column, as these are the conditions we're interested in.
  • Add or keep any extra column you might need. In the example below, we keep the event_id column as it might still be useful later in the pipeline. Note that you should describe these extra columns in your events.json file.

For example, this is what a log file looks like:

EVENT_TYPE  EVENT_NAME  DATETIME                EXP_ONSET     ACTUAL_ONSET  DELTA     EVENT_ID
START       -           yyyy-mm-dd-hh-mm-ss  -               0.000000  -         -
FLIP        Instr       yyyy-mm-dd-hh-mm-ss  -               0.099950  -         -
RESP        KeyPress    yyyy-mm-dd-hh-mm-ss  -               7.663277  -         7
FLIP        TgrWait     yyyy-mm-dd-hh-mm-ss  -               7.697805  -         -
PULSE       Trigger     yyyy-mm-dd-hh-mm-ss  -              12.483778  -         5
PULSE       Trigger     yyyy-mm-dd-hh-mm-ss  -              24.452093  -         5
FLIP        Pre-fix     yyyy-mm-dd-hh-mm-ss  -              24.462263  -         -
PULSE       Trigger     yyyy-mm-dd-hh-mm-ss  -              26.452395  -         5
PULSE       Trigger     yyyy-mm-dd-hh-mm-ss  -              28.452362  -         5
PULSE       Trigger     yyyy-mm-dd-hh-mm-ss  -              30.451807  -         5
PULSE       Trigger     yyyy-mm-dd-hh-mm-ss  -              32.451339  -         5
PULSE       Trigger     yyyy-mm-dd-hh-mm-ss  -              34.451376  -         5
FLIP        Stim        yyyy-mm-dd-hh-mm-ss  34.462263      34.474302  0.012039  building_image.png
RESP        KeyPress    yyyy-mm-dd-hh-mm-ss  -              35.566808  -         9
FLIP        Fix         yyyy-mm-dd-hh-mm-ss  -              34.521628  -         -
PULSE       Trigger     yyyy-mm-dd-hh-mm-ss  -              36.451615  -         5
FLIP        Stim        yyyy-mm-dd-hh-mm-ss  37.462263      37.524439  0.062177  face_image.png
FLIP        Fix         yyyy-mm-dd-hh-mm-ss  -              37.572648  -         -
PULSE       Trigger     yyyy-mm-dd-hh-mm-ss  -              38.453535  -         5
RESP        KeyPress    yyyy-mm-dd-hh-mm-ss  -              38.806193  -         1
PULSE       Trigger     yyyy-mm-dd-hh-mm-ss  -              40.451415  -         5
...

This is what the corresponding event file should look like:

onset           duration        trial_type     event_id
10.022209       0.0473259       building       building_image.png
13.072346       0.0482089       face           face_image.png
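
To make this concrete, below is a minimal MATLAB sketch of the transformation above. It assumes a tab-separated log file with the columns shown, takes the trigger immediately before the Pre-fix flip as time zero (which reproduces the example onsets), and derives durations and condition names as described in the list above. Treat it as a starting point for your own utility script rather than a drop-in solution.

% Minimal sketch, assuming a tab-separated log with the columns shown above.
% The file names below are examples -- use your actual log and output paths.
logtab = readtable('sourcedata/sub-01/bh/yyyy-mm-dd-sub-01_run-01_task-taskname_log.tsv', ...
                   'FileType', 'text', 'Delimiter', '\t');

% Time zero: here, the trigger right before the Pre-fix flip (this reproduces
% the example above); adapt this to how your own runs start.
prefix = find(strcmp(logtab.EVENT_NAME, 'Pre-fix'), 1);
trig   = find(strcmp(logtab.EVENT_NAME, 'Trigger') & (1:height(logtab))' < prefix, 1, 'last');
t0 = logtab.ACTUAL_ONSET(trig);

% Stimulus events: onsets relative to t0, durations up to the following Fix flip.
stim  = find(strcmp(logtab.EVENT_NAME, 'Stim'));
fixes = find(strcmp(logtab.EVENT_NAME, 'Fix'));
onset    = logtab.ACTUAL_ONSET(stim) - t0;
duration = zeros(size(stim));
for k = 1:numel(stim)
    nextfix = fixes(find(fixes > stim(k), 1));
    duration(k) = logtab.ACTUAL_ONSET(nextfix) - logtab.ACTUAL_ONSET(stim(k));
end

% Condition names from the stimulus file names (e.g. 'face_image.png' -> 'face').
event_id   = logtab.EVENT_ID(stim);
trial_type = extractBefore(event_id, '_');

events = table(onset, duration, trial_type, event_id);
writetable(events, 'BIDS/sub-01/func/sub-01_task-taskname_run-01_events.tsv', ...
           'FileType', 'text', 'Delimiter', '\t');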

9. Create additional BIDS files


myproject
├── BIDS
│   ├── dataset_description.json
│   ├── events.json
│   ├── participants.json
│   ├── participants.tsv
│   ├── sub-01
│   │   ├── anat
│   │   └── func
│   └── task-taskname_bold.json
└── sourcedata
    └── sub-01
        ├── bh
        ├── dicom
        ├── dicom_anon
        ├── eye
        └── nifti
  1. Create the following modality agnostic BIDS files in your BIDS/ folder:

    • dataset_description.json
    • participants.tsv
    • participants.json
    • task-<taskname>_bold.json
  2. Fill in the required information for each file according to the BIDS specification.
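
For example, a minimal dataset_description.json (Name and BIDSVersion are the required fields) can be written from MATLAB as sketched below; the values shown are placeholders to adapt.

% Sketch: write a minimal dataset_description.json at the top of the BIDS folder.
ds.Name        = 'My project';   % placeholder -- use your dataset name
ds.BIDSVersion = '1.8.0';        % adapt to the version of the specification you follow
ds.DatasetType = 'raw';

fid = fopen(fullfile('BIDS', 'dataset_description.json'), 'w');
fwrite(fid, jsonencode(ds));
fclose(fid);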

Then set up additional components:

  1. Create a derivatives/ folder in your BIDS/ directory.
  2. If needed, create a .bidsignore file in your BIDS/ root folder to exclude any non-BIDS compliant files.
Why should I use a .bidsignore file?

A .bidsignore file is useful to communicate to the BIDS validator which files should not be indexed, because they are not part of the standard BIDS structure. More information can be found here.


10. Validate your BIDS structure


Make sure all the steps have been followed successfully by validating your BIDS folder. To do so, use the BIDS validator.

  1. Use the online BIDS Validator to check your BIDS structure.
  2. Upload your entire BIDS/ folder and review any errors or warnings.
  3. Make necessary corrections based on the validator's output.

By following these detailed steps, you'll ensure your data is properly organized in BIDS format, facilitating easier analysis and collaboration.


Now that you have your data in BIDS format, we can proceed to data pre-processing and quality assessment. See the next guide for instructions. → Pre-processing and QA