10. Reading and aligning neuroimaging data with Nibabel#

In previous chapters, we saw that there is plenty that you can do with general-purpose software. Although the examples all used neuroimaging data, the ideas that we introduced – arrays and computing with arrays, tables and how to manipulate tabular data, scientific computing and data visualization – were not specific to neuroimaging. However, within the broader ecosystem of Python tools for science, there is a family of tools specifically focused on neuroimaging. These software tools, developed by and for neuroimaging researchers, are designed specifically to analyze neuroimaging data. They cover a wide range of data analysis tasks on a variety of different kinds of experimental data. In the following chapters, we will see how some of these tools are used.

But we will start with something much more fundamental. All of these tools depend to some degree on the ability to read experimental data from common neuroimaging file formats into the memory of our computer. The tool of choice for doing that is called nibabel. The name hints at the plethora of different file formats that exist to represent and store neuroimaging data. Nibabel provides a relatively straightforward common route to read many of these formats. While there is indeed a plethora of file formats in which neuroimaging data appears, one of the most commonly used formats nowadays is called NIfTI. The acronym stands for Neuroimaging Informatics Technology Initiative, which was the group at the US National Institutes of Health that originally designed the file format. Because this file format is very common, we will focus on it here and we will not go into detail on the other file formats that are supported by nibabel.

Getting acquainted with nibabel and its operations will also give us an opportunity to work through some issues that arise when analyzing data from neuroimaging experiments. Let’s start by downloading a neuroimaging file to see how nibabel works. We’ll use the URL downloader implemented in ndslib to get data from the OpenNeuro archive. This file, deposited by Patrick Beukema in Tim Verstynen’s lab at CMU, contains a T1-weighted MR image of a participant’s brain.

from ndslib.data import download_file

download_file("https://openneuro.org/crn/datasets/ds001233/snapshots/00003/files/sub-17:ses-pre:anat:sub-17_ses-pre_T1w.nii.gz",
              "sub-17_ses-pre_T1w.nii.gz")

In case you are wondering why the file is named the way it is, this is something that we will discuss next in Section 11. For now, we import the nibabel library and use the load function to load the file:

import nibabel as nib
img_t1 = nib.load("sub-17_ses-pre_T1w.nii.gz")

One of the principles that guide the design of nibabel is that operations that take time, or that require a lot of the computer’s memory, are delayed and only executed when there is no other choice. Instead, nibabel starts by reading a relatively small section of the file, called the file’s “header”, which contains metadata: information about the data, rather than the data itself. For example, the header contains information about the size of the data array that is stored inside the file. So, when we called nibabel.load above, none of the actual data was read. Instead, nibabel read the header information about the data, and stands ready to give you the data if and when you want it. This is useful for cases where we need to know some property of the data, such as the size of the data array, but we don’t yet need to read any of the data itself. What kind of object is img_t1?

type(img_t1)
nibabel.nifti1.Nifti1Image

As you can see, it’s a Nifti1Image object. This is a special nibabel object for handling NIfTI files. For the time being, it can provide some information about the data. For example:

print(img_t1.shape)
(256, 256, 176)

This tells us that the data in this file is a three-dimensional array with dimensions of 256 by 256 by 176 voxels. There is other information stored in the header, and we’ll look at some of this information a little bit later. First, we will use the object’s get_fdata method to read the data from the file into memory. The ‘f’ in this method name indicates to us that nibabel is going to take the data stored in the file and do its best to represent it as floating point numbers, regardless of how it was stored on disk. For example:

data_t1 = img_t1.get_fdata()
type(data_t1)
numpy.ndarray

Now it’s just a regular numpy array. Everything that applies to the arrays that you already saw in previous chapters applies to this array. You can compute on it, visualize it, and so forth. What data-type does it use?

print(data_t1.dtype)
float64

As promised, the data, regardless of how it was stored in the file, is given to us in the 64-bit floating point numpy data-type. There are other ways to read data with nibabel that allow you more fine-grained control over how much of the data you are reading, and how and whether you store it in memory, but we will not go into that in depth here. We recommend reading the nibabel documentation to get acquainted with these details. One of the reasons that we don’t think this is worth expounding upon is that get_fdata does the right thing for you in 99% of the cases that we have encountered ourselves. So, this provides a simple way to read the data and always know what you are getting.
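As a small taste of that finer-grained control (just a sketch here; the details of the array proxy interface are in the nibabel documentation), you can slice the image’s dataobj attribute to read only part of the data, instead of loading the whole volume:

# Slicing the array proxy returns only the requested portion of the data as an
# array and, unlike get_fdata, does not cache the full volume in memory.
one_slice = img_t1.dataobj[:, :, 88]
print(one_slice.shape)   # (256, 256)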

What shape does this array have?

print(data_t1.shape)
(256, 256, 176)

It’s exactly the shape you would expect from the information provided in the Nifti1Image that we saw above.
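If you are curious about how the values were actually stored on disk (as opposed to the 64-bit floating point array that get_fdata handed us), the header can tell you that too. For example, the header’s get_data_dtype method reports the on-disk data-type:

# The data-type used to store the values in the file; this is often a more
# compact type than the float64 array that get_fdata returns.
print(img_t1.header.get_data_dtype())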

10.1. Combining MRI data from different measurements#

One of the main problems that we might encounter in analyzing MRI data is that we would like to combine information acquired using different kinds of measurements. For example, we just read in a file that contains a T1-weighted image of a person’s brain. This image is really great for finding different parts of this person’s anatomy. For example, it can be used to define where this person’s gray matter is and where the gray matter turns into white matter, just underneath the surface of the cortex. But if we want to also measure the brain responses to some stimulus, we will need to make another measurement.

In this case, the same person whose T1-weighted image we just loaded also underwent a BOLD fMRI scan in the same session. Let’s get that data too.

download_file("https://openneuro.org/crn/datasets/ds001233/snapshots/00003/files/sub-17:ses-pre:func:sub-17_ses-pre_task-cuedSFM_run-01_bold.nii.gz",
              "sub-17_ses-pre_task-cuedSFM_run-01_bold.nii.gz")
img_bold = nib.load("sub-17_ses-pre_task-cuedSFM_run-01_bold.nii.gz")

If we look at the meta-data for this file, we will already get a sense that there might be a problem:

print(img_bold.shape)
(96, 96, 66, 241)

The data is 4-dimensional, because a volume was acquired once every 2 seconds and we have the full time series of 241 time-points here. But the shape of the first three dimensions, which are the spatial dimensions of the volume, is not the same as the shape of the T1-weighted acquisition. In this case, we happen to know that the BOLD data was acquired with a spatial resolution of 2mm by 2mm by 2mm and the T1-weighted data was acquired with a spatial resolution of 1mm by 1mm by 1mm. But even taking that into account, multiplying the size of each dimension of the BOLD data by the ratio of the resolutions (a factor of 2, in this case) does not make things equal. That is because the data was also acquired using a different field of view. That is, the T1-weighted data covers 256 by 256 mm in each slice (and there are 176 slices, covering 176 mm), while the BOLD data covers only 96 times 2 = 192mm by 192mm in each slice (and there are 66 slices, covering only 132 mm). How would we then know how to match a particular location in the BOLD data to a particular location in the T1-weighted image? This is something that we need to do, for example, if we want to know whether a particular voxel in the BOLD data is inside the brain’s gray matter or in another anatomical segment of the brain.
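Rather than relying only on what we happen to know about the acquisitions, we can read the voxel sizes from each file’s header (via its get_zooms method) and combine them with the array shapes to compute each field of view. This is just a sanity-check sketch of the numbers quoted above:

zooms_t1 = img_t1.header.get_zooms()
zooms_bold = img_bold.header.get_zooms()[:3]   # the fourth value describes the time dimension (typically the TR)
print("T1w voxel size (mm):", zooms_t1)
print("BOLD voxel size (mm):", zooms_bold)
# Field of view: the number of voxels along each spatial dimension, multiplied
# by the size of each voxel along that dimension.
print("T1w FOV (mm):", [n * z for n, z in zip(img_t1.shape, zooms_t1)])
print("BOLD FOV (mm):", [n * z for n, z in zip(img_bold.shape[:3], zooms_bold)])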

In addition to the differences in acquisition parameters, we also need to take into account that the brain might be in a different location within the scanner in different measurements. This can be because the subject moved between the two scans, or because the two scans were done on separate days and the head of the subject was in a different position on these two days. The process of figuring out the way to bring two images of the same object taken when the object is in different locations is called image registration. We will need to take care of that too, and we will return to discuss how in Section 14. But, for the purpose of our current discussion, we are going to assume that the subject’s brain was in the same place when the two measurements were made. In the case of this particular data, this happens to be a good assumption – probably because these two measurements were done in close succession, and the subject did not move between the end of one scan and the beginning of the next one.

But even in this case, we still need to take care of image alignment, because of the differences in image acquisition parameters. Fortunately, the NIfTI file contains information that can help us align the two volumes to each other, despite the difference in resolution and field of view.

This information is stored in another piece of metadata in the header of the file, which is called the “affine matrix”. This is a 4-by-4 matrix that contains information that tells us about the way in which the acquisition volume was positioned relative to the MRI instrument, and relative to the subject’s brain, which is inside the magnet. The affine matrix stored together with the data provides a way to unambiguously denote the location of the volume of data relative to the scanner.

10.2. Coordinate frames#

To understand how we will use this information, let’s first look at a concrete example of the problem. For example, consider what happens when we take a single volume of the BOLD data, slice it on the last dimension, so that we get the middle of the volume on that dimension, and compare it to the middle slice of the last dimension of the T1-weighted data.

data_bold = img_bold.get_fdata()

data_bold_t0 = data_bold[:, :, :, 0]

import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2)
ax[0].matshow(data_bold_t0[:, :, data_bold_t0.shape[-1]//2])
im = ax[1].matshow(data_t1[:, :, data_t1.shape[-1]//2])
[Figure: the middle slice along the last dimension of the BOLD volume (left) and of the T1-weighted volume (right)]

Obviously, these two views of the subject’s brain are very different from each other. The data are not even oriented the same way in terms of the order of the anatomical axes. While the middle slice on the last spatial dimension of the BOLD data gives us an axial slice, for the T1-weighted data we get a sagittal slice.

To understand why that is the case, we need to understand that the coordinate frame in which the data is collected can be quite different in different acquisitions. What do we mean by “coordinate frame”? Technically, a coordinate frame is defined by an origin and axes emanating from that origin, as well as the directions and units used to step along the axes. For example, when we make measurements in the MRI instrument, we define a coordinate frame of the scanner as one that has the iso-center of the scanner as its origin. This is the point in the scanner at which we usually place the subject’s brain when we make measurements. We then use the way in which the subject is usually placed into the MRI bore to explain the convention used to define the axes. In neuroimaging, this is usually done by placing the subject face up on a bed and then moving the bed into the magnet with the head going into the MRI bore first (this is technically known as the “supine, head first” position). Using the subject’s position on the bed, we can define the axes based on the location of the brain within the scanner. For example, we can define the first (x) axis to be one that goes from left to right through the subject’s brain as they are lying inside the MRI bore. The second (y) axis is defined to go from the floor of the MRI room, behind the back of the subject’s brain, through the brain, and up through the front of their head towards the ceiling of the MRI room. In terms of the subject’s head, as they are placed in the supine, head-first position, this is the “anterior-posterior” axis. The last (z) axis goes from their feet lying outside the magnet, through the length of their body, and through the top of their head (this is “inferior-superior” relative to their anatomy). Based on these axes, we say that this coordinate frame has the “RAS” (right, anterior, superior) orientation. This means that the coordinates increase towards the right side of the subject, towards the anterior part of their brain, and towards the superior part of their head.

In the spatial coordinate frame of the MRI magnet, we often use millimeters to describe how we step along the axes. So, specific coordinates in millimeters point to particular points within the scanner. For example, the coordinate [1, 0, 0] is one millimeter to the right of the iso-center (where right is defined as the direction to the subject’s right as they are lying in the scanner). Following this same logic, negative coordinates can be used to indicate points that are to the left, posterior and inferior of the scanner iso-center (again, relative to the subject’s brain). So, the coordinate [-1, 0, 0] indicates a point that is 1mm to the left of the iso-center. The RAS coordinate system is a common way to define the scanner coordinate frame, but you can also have LPI (which goes right-to-left, anterior-to-posterior and superior-to-inferior instead) or any other combination of directions along these principal axes (RAI, LPS, and so on).

Exercise

What is the location of [1, 1, 1] RAS in the LAS coordinate frame?

Given that we know where particular points are in space relative to the magnet iso-center, how can we tell where these particular points are in the numpy array that represents our data? For example, how would we translate the coordinate [1,0,0] in millimeters into an index [i,j,k], which we can use to index into the array to find out what the value of the MRI data is in that location?

The volume of data is acquired with a certain orientation, a certain field of view and a certain voxel resolution. This means that the data also has a spatial coordinate frame. The [0, 0, 0] coordinate indicates the voxel at array coordinate [0, 0, 0] (that is, i=0, j=0 and k=0), and the axes are the axes of the array. For example, the BOLD data we have been looking at here is organized such that [0, 0, 0] is somewhere inferior and posterior in the subject’s head. We can tell that by plotting several slices of the brain, starting at slice 20, going to the middle slice and then up to slice 46 (20 slices from the end of the array in that direction).

import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 3)
ax[0].matshow(data_bold_t0[:, :, 20])
ax[1].matshow(data_bold_t0[:, :, data_bold_t0.shape[-1]//2])
ax[2].matshow(data_bold_t0[:, :, 46])
fig.set_tight_layout("tight")
[Figure: three axial slices of the BOLD volume: slice 20 (left), the middle slice (center), and slice 46 (right)]

Based on our knowledge of the anatomy, we see that the first slice is lower in the head than the middle slice, which is lower in the head than the third slice. This means that as we increase the coordinate on the last dimension, we are going up in the brain. So, we know at least that the last coordinate is an “S”. Based on the labeling of the axes, we can also see that the second dimension increases from the back of the brain to the front of the brain, meaning that the second coordinate is an “A”. So far, so good. What about the first dimension? We might be tempted to think that, since it looks like it is increasing from left to right, it must be an “R”. But this is where we need to pause for a bit, because there is nothing in these images to tell us whether the brain is right-left flipped. This is something that we need to be particularly careful with (right-left flips are a major source of error in neuroimaging data analysis!).

Exercise

Based on the anatomy, what can you infer about the coordinate frame in which the T1w data was acquired?

We see that we can guess our way towards the coordinate frame in which the data was acquired, but as we already told you above, there is one bit of data that will tell you unambiguously how the data corresponds to the scanner coordinate frame. In order to use this bit of data to calculate the transformation between different coordinate frames, we are going to use a mathematical operation called matrix multiplication. If you have never seen this operation before, you can learn about it in the section below. Otherwise, you can skip ahead to Section 10.3.


Matrix multiplication

Matrices, vectors and their multiplication

A matrix is a collection of numbers that are organized in a table with rows and columns. Matrices (the plural of “matrix”) look a lot like two-dimensional numpy arrays, and numpy arrays can be used to represent matrices.

In the case where a matrix has only one column or only one row, we call it a vector. If it has only one column and only one row – it is just a single number – we refer to it as a scalar.

Let’s look at an example. The following is a 2-by-3 matrix:

\[\begin{split} A = \begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix}\end{split}\]

The following is a 3-vector, with one row (we’ll use a convention where matrices are denoted by capital letters and vectors are denoted by lower-case letters):

\[ v = \begin{bmatrix} a & b & c \end{bmatrix} \]

We can also write the same values into a vector with three rows and one column:

\[\begin{split} v = \begin{bmatrix} a \\ b \\ c \end{bmatrix} \end{split}\]

In the most general terms, the matrix multiplication between the \(m\)-by-\(n\) matrix \(A\) and the \(n\)-by-\(p\) matrix \(B\) is an \(m\)-by-\(p\) matrix. The entry in row \(i\) and column \(j\) of the resulting matrix, \(C\), is defined as:

\[C_{i,j} = \sum_{k=1}^{n}{A_{i,k}B_{k,j}} \]

If this looks a bit mysterious, let’s consider an example and work out the computations of individual components. Let’s consider:

\[\begin{split} A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \end{split}\]
\[\begin{split} B = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 1 & 0 \end{bmatrix} \end{split}\]

The first matrix \(A\) is a 2-by-3 matrix, and the second matrix, \(B\) is a 3-by-2 matrix. Based on that, we already know that the resulting multiplication, which we write as \(C=AB\) will be a 2-by-2 matrix.

Let’s calculate the entries in this matrix one by one. Confusingly enough, in contrast to Python sequences, when we do math with matrices, we index them using a one-based indexing system. This means that the first item in the first row of this matrix is \(C_{1,1}\). The value of this item is:

\[ \begin{align}\begin{aligned}C_{1,1} = \sum_{k=1}^{3}{A_{1,k}B_{k,1}}\\= A_{1,1} B_{1,1} + A_{1, 2} B_{2, 1} + A_{1, 3} B_{3, 1}\\= 1 \cdot 1 + 0 \cdot 0 + 0 \cdot 1 = 1 \end{aligned}\end{align} \]

Similarly, the value of \(C_{1,2}\) is:

\[ \begin{align}\begin{aligned}C_{1,2} = \sum_{k=1}^{3}{A_{1,k}B_{k,2}}\\= A_{1,1} B_{1,2} + A_{1, 2} B_{2, 2} + A_{1, 3} B_{3, 2}\\= 1 \cdot 0 + 0 \cdot 0 + 0 \cdot 0 = 0 \end{aligned}\end{align} \]

And so on… At the end of it all, the matrix \(C\) will be:

\[\begin{split} C = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \end{split}\]

Exercises

  1. Convince yourself that the answer we gave above is correct by computing the components of the second row following the formula.

  2. Write a function that takes two matrices as input and computes their multiplication.

10.2.1. Multiplying matrices in Python#

Fortunately, numpy already implements a (very fast and efficient!) matrix multiplication for matrices and vectors that are represented with numpy arrays. The standard multiplication sign * is already being used in numpy for element-by-element multiplication (see Section 8), so another symbol was adopted for matrix multiplication: @. As an example, let’s look at the matrices \(A\), \(B\) and \(C\) we used above[^1]:

[1] There is also a function that implements this multiplication: np.dot(A, B) is equivalent to A @ B

import numpy as np
A = np.array([[1, 0, 0], [0, 1, 0]])

B = np.array([[1, 0], [0, 0], [1, 0]])

C = A @ B

print(C)
[[1 0]
 [0 0]]

10.3. Using the affine#

The way that the affine information is used requires a little bit of knowledge of matrix multiplication. If you are not at all familiar with this idea, a brief introduction, which includes pretty much everything you need to know about this topic to follow along from here, is provided above.

10.3.1. Coordinate transformations#

One particularly interesting case of matrix multiplication is the multiplication of vectors of length \(n\) with an \(n\)-by-\(n\) matrix. This is because the input and the output in this case have exactly the same number of entries. Another way of saying that is that the dimensionality of the input and the output is identical. One way to interpret this kind of multiplication is to think of it as taking one coordinate in the \(n\)-dimensional space and moving it to another coordinate in the \(n\)-dimensional space. This idea is used as the basis for transformations between the different coordinate spaces that describe where our MRI data is.

Let’s think about some simple transformations of coordinates. One kind of transformation is a scaling transformation. This takes a coordinate in the \(n\)-dimensional space (for example, in the \(2\)-dimensional space that might be [x, y]) and sends it to a scaled version of that coordinate (for example, to [a*x, b*y]). This transformation is implemented through a diagonal matrix. Here is the \(2\)-dimensional case:

\[\begin{split} \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}\end{split}\]

If you are new to matrix multiplications, you might want to convince yourself that this matrix does indeed implement the scaling, by working through the math of this example element by element. Importantly, this idea is not limited to two dimensions, or any particular dimensionality for that matter. For example, in the \(3\)-dimensional case, this would look like this:

\[\begin{split}\begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix} \end{split}\]

and it would scale [x, y, z] to [a*x, b*y, c*z].
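As a small numerical check of the 3-dimensional case (with made-up values a=2, b=3 and c=4), we can build this diagonal matrix with numpy and apply it to a coordinate:

# A diagonal scaling matrix: each coordinate is multiplied by its diagonal entry.
scale = np.array([[2, 0, 0],
                  [0, 3, 0],
                  [0, 0, 4]])
xyz = np.array([1, 2, 3])
print(scale @ xyz)   # [ 2  6 12]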

Another interesting transformation is what we call a rotation transformation. This takes a particular coordinate and rotates it around an axis.

\[\begin{split}A_{\theta} = \begin{bmatrix} cos(\theta) & -sin(\theta) \\ sin(\theta) & cos(\theta) \end{bmatrix} \end{split}\]

What does it mean to rotate a coordinate around the axis? Imagine that the coordinate [x,y] is connected to the origin with a straight line. When we multiply that coordinate by our rotation matrix \(A_\theta\), we are rotating that line around the axis by that angle. It’s easier to understand this idea through an example with code, and with a visual. The code below defines an angle theta (in radians) and the two-dimensional rotation matrix A_theta, and then multiplies a two-dimensional coordinate xy (plotted as a circle marker), which produces a new coordinate xy_theta (plotted with a square marker). We connect each coordinate to the origin (the coordinate [0, 0]) through a line: the original coordinate with a solid line and the new coordinate with a dashed line. The angle between these two lines is \(\theta\).

theta = np.pi/4
A_theta = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
xy = np.array([2, 3])
xy_theta = A_theta @ xy

fig, ax = plt.subplots()
ax.scatter(xy[0], xy[1])
ax.plot([0, xy[0]], [0, xy[1]])
ax.scatter(xy_theta[0], xy_theta[1], marker='s')
ax.plot([0, xy_theta[0]], [0, xy_theta[1]], '--')
p = ax.axis("equal")
[Figure: the original coordinate (circle, connected to the origin with a solid line) and the rotated coordinate (square, connected to the origin with a dashed line)]

This idea can be expanded to three dimensions, but in this case, we define a separate rotation matrix for each of the axes. The rotation by angle \(\theta\) around the x axis is defined as:

\[\begin{split}A_{\theta,x} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & cos(\theta) & -sin(\theta) \\ 0 & sin(\theta) & cos(\theta) \end{bmatrix}\end{split}\]

The rotation by angle \(\theta\) around the y axis is defined as:

\[\begin{split} A_{\theta,y} = \begin{bmatrix} cos(\theta) & 0 & sin(\theta) \\ 0 & 1 & 0 \\ -sin(\theta) & 0 & cos(\theta)\end{bmatrix}\end{split}\]

Which leaves the rotation by angle \(\theta\) around the z axis, defined as:

\[\begin{split}A_{\theta,z} = \begin{bmatrix} cos(\theta) & -sin(\theta) & 0 \\ sin(\theta) & cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix}\end{split}\]
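As a quick numerical illustration of the 3-dimensional case, consider rotating a point by 90 degrees around the x axis using the matrix \(A_{\theta,x}\) defined above: the x coordinate stays put, while the [y, z] part of the coordinate rotates, just as the 2-dimensional matrix rotated points in the [x, y] plane:

theta = np.pi / 2   # 90 degrees
A_theta_x = np.array([[1, 0, 0],
                      [0, np.cos(theta), -np.sin(theta)],
                      [0, np.sin(theta), np.cos(theta)]])
point = np.array([1, 2, 3])
print(A_theta_x @ point)   # approximately [1, -3, 2]: x is unchanged, [y, z] rotates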

10.3.2. Composing transformations#

The transformations defined by matrices can be composed into new transformations. This is a technical term that means that we can do one transformation followed by another transformation, and that would be the same as multiplying the two transformation matrices by each other and then using the product. In other words, matrix multiplication is associative:

\((A B) v = A (B v)\)

For example, we can combine a scaling and a rotation:

theta = np.pi/4
A_theta = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
B = np.array([[2, 0], [0, 2]])
xy = np.array([2, 3])
xy_transformed = A_theta @ B @ xy

fig, ax = plt.subplots()
ax.scatter(xy[0], xy[1])
ax.plot([0, xy[0]], [0, xy[1]])
ax.scatter(xy_transformed[0], xy_transformed[1], marker='s')
ax.plot([0, xy_transformed[0]], [0, xy_transformed[1]], '--')
p = ax.axis("equal")
[Figure: the original coordinate (circle, solid line to the origin) and the rotated and scaled coordinate (square, dashed line to the origin)]

Comparing this to the previous result, you can see that the angle between the solid line and the dashed line is the same as it was in our previous example, but the dashed line is now twice as long as the solid line. This is because we applied both a scaling and a rotation. Notice also that, following the associativity rule that we described above, the line xy_transformed = A_theta @ B @ xy could be written as xy_transformed = (A_theta @ B) @ xy or as xy_transformed = A_theta @ (B @ xy) and the result would be exactly the same.
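If you want to convince yourself of that numerically, you can check that both groupings give the same answer (up to floating point precision):

# The two groupings of the same product agree, as associativity promises.
print(np.allclose((A_theta @ B) @ xy, A_theta @ (B @ xy)))   # True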

Another thing to consider is that every coordinate in the 2D space that we would multiply with this matrix would undergo the same transformation, relative to the origin: it would be rotated counterclockwise by an angle theta and moved to be twice as far from the origin as it previously was. In other words, the matrix performs a coordinate transformation.

Now, using this idea, we can start thinking about how we might relate data represented in one coordinate frame – the scanner space, measured in millimeters – to another coordinate frame – the representation of the data in the array of acquired MRI data. This is done using a matrix that describes how each [x,y,z] position in the scanner space is transformed into an [i,j,k] coordinate in the array space.

Let’s think first about the scaling: if each voxel is 2-by-2-by-2 millimeters in size, that means that for each millimeter that we move in space, we are moving by half a voxel. A matrix that would describe this scaling operation would look like this:

\[\begin{split} A_{scale} = \begin{bmatrix} 0.5 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & 0 & 0.5 \end{bmatrix} \end{split}\]

The next thing to think about is the rotation of the coordinates: this is because when we acquire MRI data, we can set the orientation of the acquisition volume to be different from the orientation of the scanner coordinate frame. In fact, it can be tilted relative to the scanner axes in any of the three dimensions, which means that it could have rotations around each of the axes. So, we would combine our scaling matrix with a series of rotations:

\[ A_{total} = A_{scale} A_{\theta, x} A_{\theta, y} A_{\theta, z} \]

But even after we multiply each of the coordinates in the brain in millimeters by this matrix, there is still one more operation that we need to do to move to the space of the MRI acquisition. As we told you above, the origin of the scanner space is in the iso-center of the magnet. But the MRI acquisition usually has its origin somewhere outside of the subject’s head, so that it covers the entire brain. So, we need to move the origin of the space from the iso-center to the corner of the scanned volume. Mathematically, this translates to shifting each coordinate [x,y,z] in the scanner space by [\(\Delta x\), \(\Delta y\), \(\Delta z\)], where each one of these components describes where the corner of the MRI volume is relative to the position of the iso-center.

Unfortunately, this is not an operation that we can do by multiplying the coordinates with a 3-by-3 matrix of the sort that we saw above. Instead, we will need to use a mathematical trick: instead of describing each location in the space via a three-dimensional coordinate [x,y,z], we will add a dimension to the coordinates, so that they are written as [x, y, z, 1]. We then also make our matrix a 4-by-4 matrix:

\[\begin{split} \begin{bmatrix} & & & \Delta x \\ & A_{total} & & \Delta y\\ & & & \Delta z \\ 0 & 0 & 0 & 1 \end{bmatrix} \end{split}\]

where \(A_{total}\) is the matrix composed out of our rotations and scalings. This might look like a weird thing to do, but an easy way to demonstrate that this shifts the origin of the space is to apply a matrix like this to the origin itself, which we have augmented with a 1 as the last element: [0, 0, 0, 1]. To make the example more straightforward, we will also set \(A_{total}\) to be a matrix with ones in the elements on the diagonal and all other elements set to 0. This matrix is called an identity matrix, because when you multiply an n-dimensional vector with a matrix that is constructed in this manner, the result is identical to the original vector (use the matrix multiplication formula above, or create this matrix with code to convince yourself of that!).
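For instance, numpy’s np.eye function builds exactly this kind of identity matrix, and a quick check shows that multiplying by it leaves a vector unchanged:

I = np.eye(3)               # a 3-by-3 identity matrix
v = np.array([2.0, -1.0, 7.0])
print(I @ v)                # [ 2. -1.  7.]: identical to v

With that in hand, we can construct the full 4-by-4 matrix in code, set \(A_{total}\) to the identity, and apply the result to the augmented origin: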

A_total = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
delta_x = 128
delta_y = 127
delta_z = 85.5
A_affine = np.zeros((4, 4))
A_affine[:3, :3] = A_total
A_affine[:, 3] = np.array([delta_x, delta_y, delta_z, 1])

origin = np.array([0, 0, 0, 1])

result = A_affine @ origin
print(result)
[128.  127.   85.5   1. ]

As you may have noticed, in the code, we called this augmented transformation matrix an affine matrix. We will revisit affine transformations later in the book (when we use these transformations to process images in Section 14). For now, we simply recognize that this is the name used to refer to the 4-by-4 matrix that is recorded in the header of neuroimaging data and tells you how to go from coordinates in the image space to coordinates in the scanner space. In nibabel, these transformations are accessed via:

affine_t1 = img_t1.affine
affine_bold = img_bold.affine

Where do these affines send the origin?

print(affine_t1 @ np.array([0, 0, 0, 1]))
[ -85.5  128.  -127.     1. ]

This tells us that the element indexed as data_t1[0, 0, 0] lies 85.5 mm to the left of the isocenter, 128 mm anterior to the isocenter (towards the ceiling of the room) and 127 mm inferior to isocenter (towards the side of the room where the subject’s legs are sticking out of the MRI).
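As an aside, nibabel can also summarize the orientation of each array axis directly from an affine, using its aff2axcodes function. It returns one letter per array axis, indicating the scanner-space direction in which that axis increases, which is a handy way to check the anatomical guesses we made earlier (and to catch right-left flips):

# One letter per array axis, e.g. ('R', 'A', 'S') for an array whose axes
# increase towards the right, anterior and superior directions.
print(nib.aff2axcodes(affine_t1))
print(nib.aff2axcodes(affine_bold))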

Exercise

Where in the scanner is the element data_bold[0, 0, 0]? Where is data_bold[-1, -1, -1]? (hint: affine_bold @ np.array([-1, -1, -1, 1]) is not going to give you the correct answer).

10.3.3. The inverse of the affine#

The inverse of a matrix is a matrix that undoes what the original matrix did. It is denoted by adding a “-1” in the exponent of the matrix. For example, for the scaling matrix:

\[\begin{split} B = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \end{split}\]

the inverse of \(B\) is called \(B^{-1}\). Since this matrix sends a coordinate to be twice as far from the origin, its inverse would send a coordinate to be half as far from the origin. Based on what we already know, this would be:

\[\begin{split} B^{-1} = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix} \end{split}\]

Similarly, for a rotation matrix:

\[\begin{split} A_{\theta} = \begin{bmatrix} cos(\theta) & -sin(\theta) \\ sin(\theta) & cos(\theta) \end{bmatrix} \end{split}\]

the inverse is the matrix that rotates the coordinate around the same axis by the same angle, but in the opposite direction, so:

\[\begin{split} A_{\theta}^{-1} = \begin{bmatrix} cos(-\theta) & -sin(-\theta) \\ sin(-\theta) & cos(-\theta) \end{bmatrix} \end{split}\]
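As a quick numerical check, multiplying the rotation for \(-\theta\) by the rotation for \(\theta\) should give back the identity matrix (up to floating point precision), meaning that the second rotation undoes the first:

theta = np.pi / 4
A_theta = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta), np.cos(theta)]])
A_theta_inv = np.array([[np.cos(-theta), -np.sin(-theta)],
                        [np.sin(-theta), np.cos(-theta)]])
# Rotating by theta and then by -theta lands back where we started.
print(np.allclose(A_theta_inv @ A_theta, np.eye(2)))   # True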

Exercise

The inverse of a rotation matrix is also its transpose, the matrix that has the rows and columns exchanged. Using the math you have seen so far, demonstrate that this is the case.

10.3.4. Computing the inverse of an affine in Python#

It will come as no surprise that there is a way to compute the inverse of a matrix in Python. That is the numpy function np.linalg.inv. As you can see from the name, it comes from the linalg sub-module (linalg stands for “Linear Algebra”, which is the field of mathematics that deals with matrices and operations on matrices). For example, the inverse of the scaling matrix \(B\) is computed as follows:

B_inv = np.linalg.inv(B)
print(B_inv)
[[0.5 0. ]
 [0.  0.5]]

Why do we need the inverse of a neuroimaging data file’s affine? That is because we already know how to go from one data file’s coordinates to the scanner space, but if we want to go from one data file’s coordinates to another data file’s coordinates, we will need to go from the first data file to the scanner and then from the scanner coordinates to the other data file’s coordinates. The latter part requires the inverse of the second affine.

For example, consider what we need to do if we want to know where the central voxel in the BOLD acquisition falls in the space of the T1-weighted acquisition. First, we calculate the location of this voxel in the [i, j, k] coordinates of the BOLD image space:

central_bold_voxel = np.array([img_bold.shape[0]//2,
                               img_bold.shape[1]//2,
                               img_bold.shape[2]//2, 1])

Next, we move this into the scanner space, using the affine transform of the BOLD image:

bold_center_scanner = affine_bold @ central_bold_voxel
print(bold_center_scanner)
[  2.         -19.61000896  16.75154221   1.        ]

Again, to remind you, these coordinates are now in millimeters, in scanner space. This means that this voxel is located 2 mm to the right, almost 20 mm posterior and almost 17 mm superior to the MRI iso-center.

Next, we use the inverse of the T1-weighted affine to move this coordinate into the T1-weighted image space.

bold_center_t1 = np.linalg.inv(affine_t1) @ bold_center_scanner
print(bold_center_t1)
[147.61000896 143.75154221  87.5          1.        ]

This tells us that the time-series in the central voxel of the BOLD volume comes from approximately the same location as the T1-weighted data in data_t1[147, 143, 87]. Because the data are sampled a little bit differently and the resolution differs, the boundaries between voxels are a little bit different in each of the two modalities. If you really want to put them in the same space, you will also need to compensate for that. We will not show this in detail, but you should know that the process of alignment uses forms of data interpolation or smoothing to make up for these discrepancies, and these usually have a small effect on the values in the data. For example, resampling the data from the BOLD space to the T1 space and then back would not necessarily give you exactly the same values as you started out with.

Taking all of these facts together, we are now ready to align our data to each other. Fortunately, nibabel implements a function that allows us to take one NIfTI image and resample it into the space of another image. This function takes a Nifti1Image object as input, together with the shape and affine matrix of the space into which you would like to resample the data. So, we can go ahead and resample the T1-weighted image to the space and resolution of the BOLD data:

from nibabel.processing import resample_from_to
img_t1_resampled = resample_from_to(img_t1, (img_bold.shape[:3], img_bold.affine))

The image that results from this computation has the shape of the BOLD data, as well as the affine of the BOLD data:

print(img_t1_resampled.shape)
print(img_t1_resampled.affine == img_bold.affine)
(96, 96, 66)
[[ True  True  True  True]
 [ True  True  True  True]
 [ True  True  True  True]
 [ True  True  True  True]]

And, importantly, if we extract the data from this image, we can show that the two data modalities are now well aligned:

data_t1_resampled = img_t1_resampled.get_fdata()

fig, ax = plt.subplots(1, 2)
ax[0].matshow(data_bold_t0[:, :, data_bold_t0.shape[-1]//2])
im = ax[1].matshow(data_t1_resampled[:, :, data_t1_resampled.shape[-1]//2])
[Figure: the middle slice of the BOLD volume (left) and the middle slice of the T1-weighted volume resampled into the BOLD space (right), now well aligned]

Finally, if you would like to write out this result into another NIfTI file, to be used later in some other computation, you can do so by using nibabel’s save function:

nib.save(nib.Nifti1Image(data_t1_resampled, img_t1_resampled.affine),
         't1_resampled.nii.gz')

Exercises

  1. Move the BOLD data into the T1-weighted space and show that they are well aligned in that space. Save the resulting data into a new NIfTI file.

  2. In which direction would you expect to lose more information to interpolation/smoothing? Going from the BOLD data to the T1-weighted data, or the other way around?

10.4. Additional resources#

Another explanation of coordinate transformations is provided in the nibabel documentation.