
# The Inverse Problem of Magnetorelaxometry Imaging

## Abstract

The aim of this paper is to provide a solid mathematical discussion of the inverse problem of Magnetorelaxometry Imaging (MRXI), a technique currently under development for quantitative biomedical imaging using magnetic nanoparticles. We provide a detailed discussion of the mathematical modeling of the forward problem, including possible ways to activate and measure, leading to a severely ill-posed linear inverse problem. Moreover, we formulate an idealized version of the inverse problem for infinitesimally small activation coils, which allows for a more detailed analysis of uniqueness issues.

We propose a variational regularization approach to compute stable approximations of the solution and discuss its discretization and numerical solution. Results on synthetic data are presented, and improvements over methods previously used in practice are demonstrated. Finally, we give an outlook on further questions, in particular experimental design.

Keywords: Magnetorelaxometry Imaging; Inverse Source Problem; Magnetic Nanoparticles; Total Variation Regularization; Uniqueness; ADMM; Magnetic Remanence

## 1 Introduction

Measuring and analyzing magnetic nanoparticles (MNP) for medical applications is currently an active area of research. For example, MNP are employed in novel cancer therapy techniques referred to as Magnetic Hyperthermia and Magnetic Drug Targeting. For both applications, the amount and distribution of the magnetic nanoparticles in the tissue are crucial for the efficacy and safety of the therapy. Magnetorelaxometry (MRX) is able to determine the amount of magnetic nanoparticles based on their magnetic response to an external magnetic field. On this basis, Magnetorelaxometry Imaging (MRXI) has been developed as a novel imaging modality aiming at the acquisition of three-dimensional and quantitative reconstructions of the particle distribution. This knowledge is crucial for monitoring the mentioned therapies and can further be used to validate assumptions about the distribution, finally leading to a more precise and safer treatment.

MRXI can be characterized by two alternating phases: first, the magnetic moments of the MNP in the area of interest are aligned by coil-induced magnetic fields. Second, after these coils are switched off, the MNP show a magnetic relaxation that is measured with highly sensitive sensors outside the tissue. These steps can be executed multiple times with different excitation fields. The problem of determining the quantitative distribution of the MNP from these measurements can be formulated as an inverse problem, which we will elaborate further in Sections 2 and 3.

Based on this general approach, several studies have been published concerning activation patterns and coil positioning for inhomogeneous excitation fields [4, 12, 10, 3, 11]. The main interest here has been to improve the system condition and thereby gain reconstruction quality using basic regularization techniques, i.e. least-squares solutions via the pseudoinverse, Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization. On the other hand, nonlinear regularization techniques have shown promising results for image reconstruction in undersampled MRI, CT and PET, in particular using the total variation as part of the variational model.

In this paper, we will recall the model originally presented by Baumgarten et al. and Liebl et al. [5, 22] and put it into a mathematically rigorous inverse problems framework. On this basis we will formulate the inverse problem of MRXI and investigate ill-posedness and uniqueness issues. Motivated by these insights we will present a variational model to find a meaningful solution to the image reconstruction problem, where we apply Total Variation (TV) regularization in combination with a positivity constraint, leading to

$$\min_c\ \frac{1}{2}\|Kc - g\|_2^2 + \alpha\,\mathrm{TV}(c) + \chi_{\geq 0}(c).$$

In the end we will provide a simulation setup and compare results of nonlinear regularization techniques with techniques previously used for MRXI. Moreover, we will provide a preliminary discussion of the impact of different activation strategies, a crucial issue for future research.

## 2 The Forward Model

In this section we describe the basic principles of MRXI and how data is acquired and processed. We outline the general idea in Subsection 2.1 and describe a mathematical model for a 3D environment in Subsection 2.2. Finally, we provide a mathematically idealized model in Subsection 2.3.

### 2.1 Basic Idea and Physical Principles

Magnetic nanoparticles for biomedical applications usually consist of a magnetic iron oxide core with a diameter of a few up to about 30 nanometers, surrounded by a non-magnetic shell layer. The magnetic core of these particles usually contains a single magnetic domain and can therefore be modeled as a magnetic dipole, thus having a magnetization depending on core size and material. This magnetization, and therewith the magnetic moment, can be oriented by external magnetic fields, either within the particle's core in terms of Néel relaxation or by rotation of the whole particle in terms of Brownian motion.

Following the idea of aligning the particles and afterwards measuring the response, we obtain the two distinct phases implemented in MRXI. First, during the so-called 'Excitation Phase' the particles are exposed to a magnetic field strong enough to reorientate the magnetization of the particles. At this point, the fields of the aligned particles add up to a measurable superposition field. For the second 'Relaxation Phase', the external coils aligning the particles are switched off. Due to several reasons, the particles' dipole orientations shift in arbitrary directions, yielding a relaxation signal that is measured with highly sensitive sensors. Currently, SQUIDs (Superconducting QUantum Interference Devices) are usually employed. The combination of these two distinct phases constitutes one iteration of MRXI. We provide an intuitive illustration of the entire cycle in Figure 1. Since the excitation fields are a few orders of magnitude larger than the fields induced by the aligned particles, a delay between the phases is required to ensure that the excitation fields do not influence the data acquisition. We will give more detailed information on the respective time steps in the subsequent section.

Figure 1: Illustration of the absolute strength of the magnetic field induced by external coils and magnetic nanoparticles during MRXI. Time step t0 describes the default state of the system without any fields applied. In t1 the external magnetic field is activated and induces a magnetic field with strength |B2|. The particle alignment reaches a maximum at t2 with an induced magnetic field |B1|. In t3 the coils are disabled and are fully deactivated in t4. The data acquisition of the particles' induced fields is carried out during the time interval [t4, t5] (compare pink area).

### 2.2 Mathematical Model

Let us now detail the mathematical model for MRX Imaging. For this sake we denote by $\Omega \subset \mathbb{R}^3$ a bounded domain describing the region of interest holding magnetic nanoparticles. Excitation coils and measuring sensors are then positioned outside of $\Omega$. In general, a coil and a sensor may be placed at the same position, since excitation and measurement are distinct processes and, in theory, can be exchanged during an MRXI iteration.

We define $c$ as a nonnegative density function of magnetic nanoparticles with compact support in the domain $\Omega$; it will be the unknown to be determined in the inverse problem. We have already stated that the particle properties depend on multiple factors, including core size and material; however, for our purposes we assume a constant particle base magnetization.

Then we define a vector field $m$ describing the magnetic moment of a corresponding particle density $c$. We assume that in the initial state and in the fully relaxed state all particles are oriented randomly on a microscopic scale. As a result, the particles' magnetization cancels out on a macroscopic scale and therefore we demand $m \equiv 0$ in these states.

In the following we specify an excitation coil $\alpha \in A$, where $A$ defines the set of all coils. A coil consists of a conductor path and an activation current. However, for our purposes we only consider coil activations with a constant current. The coil conductor is defined as a curve in $\mathbb{R}^3$, namely $\varphi_\alpha : [0, L_\alpha] \to \mathbb{R}^3$, where $L_\alpha$ is the length of the curve, which is assumed to be given in arclength parametrization. Then the Biot-Savart law provides a connection between the coil conductor path and the resulting magnetic field in $\Omega$:

$$B^{\mathrm{coil}}_\alpha : \Omega \to \mathbb{R}^3, \quad w \mapsto \vartheta \int_0^{L_\alpha} \varphi'_\alpha(s) \times \frac{w - \varphi_\alpha(s)}{|w - \varphi_\alpha(s)|^3}\, \mathrm{d}s \qquad (1)$$

Note that, since the coil is located outside of $\Omega$, $B^{\mathrm{coil}}_\alpha$ is well defined as an element of $L^2(\Omega)^3$. Here $\vartheta$ collects a number of physical constants, including the activation current.

Given an external magnetic field $B$, e.g. as in Equation (1), the behavior of the magnetic particles can be described by the well-known Langevin function $L$ illustrated in Figure 2. For a given magnetic field $B$ and a particle density $c$, the resulting magnetization of the particles in $\Omega$ is given by

$$m : \Omega \to \mathbb{R}^3, \quad w \mapsto L(B(w))\, c(w) \qquad (2)$$

Figure 2: Langevin function, its linearization and the working range of MRXI: the blue curve is the Langevin function defined as $L : x \mapsto \coth(x) - \frac{1}{x}$, the red line its linearization at $x = 0$, $x \mapsto \frac{1}{3}x$. The pink area illustrates the approximate value range of the Langevin function that is used for the forward operator of MRXI.

For weak magnetic fields the magnetization operates in the linear range of the Langevin function, which can there be approximated well by $L(x) \approx \frac{1}{3}x$. Note that the correct shape and scale of the Langevin function depend on particle size, particle base magnetization, temperature, particle domain and other physical constants. Here we assume all these variables to be suitable for the application of the mentioned linearization. Then the particle magnetization after exposure to the magnetic field $B^{\mathrm{coil}}_\alpha$ is given by

$$m_\alpha : \Omega \to \mathbb{R}^3, \quad w \mapsto \tfrac{1}{3}\, B^{\mathrm{coil}}_\alpha(w)\, c(w). \qquad (3)$$
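The quality of the weak-field linearization can be illustrated with a short numerical sketch (function names are ours, not from the paper):

```python
import math

def langevin(x: float) -> float:
    """Langevin function L(x) = coth(x) - 1/x, with L(0) = 0 by continuity."""
    if x == 0.0:
        return 0.0
    return 1.0 / math.tanh(x) - 1.0 / x

def langevin_linearized(x: float) -> float:
    """Linearization of L at x = 0 used in the weak-field regime: L(x) ~ x/3."""
    return x / 3.0

# In the working range of MRXI the linearization error is tiny
# (the Taylor remainder is of order x^3/45):
for x in (1e-3, 1e-2, 1e-1):
    print(x, abs(langevin(x) - langevin_linearized(x)))
```

For arguments of order one or larger the Langevin function saturates towards one, which is why the linearization is only used in the weak-field regime.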

For the sensors, which measure the magnetic field differences over time, we define a measuring space $\Sigma$. Moreover, we define $\sigma = (\sigma_x, \sigma_n) \in \Sigma$ as a single sensor with position $\sigma_x$ and orientation $\sigma_n$ that only acquires magnetic fields in the direction of its given orientation $\sigma_n$. For a given magnetization peak $m(w)$ at $w \in \Omega$, the sensor $\sigma$ yields a magnetic field measured in direction $\sigma_n$:

$$B^{\mathrm{meas}} : \Omega \times \Sigma \to \mathbb{R}, \quad (w, \sigma) \mapsto \sigma_n \cdot \left( \left( \frac{3\, (\sigma_x - w) \otimes (\sigma_x - w)}{|\sigma_x - w|^5} - \frac{I}{|\sigma_x - w|^3} \right) m(w) \right) \qquad (4)$$

where $I$ denotes the identity matrix.

With the modeling of each part at hand we can, in a next step, assemble the forward operator. Again we refer to the time steps as seen in Figure 1. As stated before, in the initial state ($t_0$) we have no magnetization in $\Omega$, therefore $m \equiv 0$. Then at time step $t_1$ coil $\alpha$ is activated and the resulting magnetic field is given by Equation (1). The applied field initiates the reorientation process, which reaches a stable state at time step $t_2$; this state is described by Equation (3), resulting in a magnetization $m_\alpha$. Shortly after this process is completed, the coils are deactivated at $t_3$. In practice it takes a short time interval to fully disable the excitation coils; therefore the data acquisition is started at time step $t_4$, after full disappearance of $B^{\mathrm{coil}}_\alpha$. Note that in practice magnetic field sensors can only register fields changing over time, so the acquired data represents the change in the magnetic field during the relaxation. For this model we demand the following: first, the particles reach a fully relaxed state within $[t_4, t_5]$, so that the full amplitude induced by $m_\alpha$ is acquired, and second, the switch-off interval for the excitation coil can be ignored (i.e. $t_4 - t_3 \approx 0$). As a result, the measured amplitude is given by Equation (4).

In short-hand notation, the measured magnetic field in the sensor $\sigma$ induced by coil $\alpha$ and the particles at $w$ is

$$B^{\mathrm{meas}}_\alpha(\sigma) = k_\alpha(w, \sigma)\, c(w) \qquad (5)$$

where we define the kernel of the measurement process as

$$k_\alpha(w, \sigma) = \sigma_n \cdot \left( \left( \frac{3\, (\sigma_x - w) \otimes (\sigma_x - w)}{|\sigma_x - w|^5} - \frac{I}{|\sigma_x - w|^3} \right) B^{\mathrm{coil}}_\alpha(w) \right). \qquad (6)$$

Now we define the forward operator for a single given coil activation $\alpha$. Here, each sensor measures the combined response of all particles, hence using (5) we have

$$K_\alpha : L^2(\Omega) \to L^2(\Sigma), \quad c \mapsto \left[ \sigma \mapsto \int_\Omega k_\alpha(w, \sigma)\, c(w)\, \mathrm{d}^3 w \right] \qquad (7)$$
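A pointwise evaluation of the kernel (6) can be sketched as follows (a minimal implementation with all physical constants set to one; the function names are our own):

```python
import numpy as np

def dipole_kernel(w, sigma_x, sigma_n, b_coil):
    """Measurement kernel k_alpha(w, sigma) of Eq. (6): the field of a point
    dipole at w (moment proportional to B_coil(w)), projected onto the
    sensor orientation sigma_n.  Constants are absorbed into b_coil."""
    r = sigma_x - w                                       # particle-to-sensor vector
    d = np.linalg.norm(r)
    G = 3.0 * np.outer(r, r) / d**5 - np.eye(3) / d**3    # dipole (Hessian-type) matrix
    return float(sigma_n @ (G @ b_coil))

# Particle at the origin magnetized along e_z, sensor on the z-axis at height 2:
k = dipole_kernel(np.zeros(3), np.array([0.0, 0.0, 2.0]),
                  np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
print(k)  # on-axis dipole field: 2/|r|^3 = 0.25
```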

We finally mention that in reality the magnetic fields of all particles obviously combine directly and together produce the directional measurement of the sensors. In our derivation this would mean first computing the convolution integral, then evaluating at the sensor position $\sigma_x$ and taking the scalar product with the orientation $\sigma_n$. However, due to the linearity of these operations, this procedure is equivalent to our derivation leading directly to the kernel function $k_\alpha$.

Mathematically, $K_\alpha$ maps a particle density to elements of the measurement space $L^2(\Sigma)$. Note that the supports of the activating coils as well as the sensors are located outside of $\Omega$, hence $k_\alpha$ is a bounded integral kernel. Thus, the well-definedness and boundedness of $K_\alpha$ in the above spaces follow from known results on Fredholm integral operators. The full forward operator $K$ is then the collection of all $K_\alpha$, and the inverse problem consists in (approximately) inverting $K$.

So far the set $A$ parameterizing the activations was rather general, hence we discuss the possible strategies for activating the coils, which determine $A$. Note that this is also important for a sound mathematical formulation, since in a general formulation of activations solely parametrized by an index set it is not even clear in which spaces to define measurements, respectively how to set up the operator $K$. The first option is to simply take $A$ as a finite set, corresponding to the practical realization of a measurement. However, the shape and even the size of this set may change in different experiments, since one is rather free in how and where to place the coils. In order to fully exploit the capabilities of MRX Imaging it seems more reasonable to construct a continuous model and interpret the practical model as a sampling thereof. For this sake we notice that there are mainly three options for varying the coils: the type of the coil (shape of the curve $\varphi_\alpha$), its position in space (corresponding to the center of mass of $\varphi_\alpha$) and the orientation (corresponding to a rotation of the coil). It seems reasonable to assume that there is only a finite number $M$ of different coil types and that activations are not carried out at arbitrarily high distance from $\Omega$. The possible sensors are collected in a bounded subset $\Sigma_{\mathrm{sensors}}$ already. Hence we can encode the actual measurements into a probability measure $\mu$ with compact support on

$$\mathcal{M} := \{1, \dots, M\} \times \Sigma_{\mathrm{coils}} \times \Sigma_{\mathrm{sensors}},$$

where we assume that $\mu$ is a product measure of the form $\mu = \mu_{\mathrm{coil}} \otimes \mu_{\mathrm{meas}}$, with $\mu_{\mathrm{coil}}$ a probability measure on $\{1,\dots,M\} \times \Sigma_{\mathrm{coils}}$ and $\mu_{\mathrm{meas}}$ a probability measure on $\Sigma_{\mathrm{sensors}}$. Then we can define the forward operator

$$K : L^2(\Omega) \to L^2(\mathcal{M}; \mu).$$

The case of a finite number of measurements is then a specific realization that we obtain by choosing $\mu_{\mathrm{coil}}$ respectively $\mu_{\mathrm{meas}}$ as concentrated (Dirac) measures. Noticing that

$$\|Kc\|^2_{L^2(\mathcal{M};\mu)} = \int_{\{1,\dots,M\} \times \Sigma_{\mathrm{coils}}} \|K_\alpha c\|^2_{L^2(\Sigma;\mu_{\mathrm{meas}})}\, \mathrm{d}\mu_{\mathrm{coil}}(\alpha) \leq \sup_\alpha \|K_\alpha\|^2_{L^2(\Omega) \to L^2(\Sigma;\mu_{\mathrm{meas}})}\, \|c\|^2_{L^2(\Omega)},$$

boundedness of $K$ follows from the uniform boundedness of the operators $K_\alpha$, which is a straightforward estimate if all possible coil locations are outside of $\Omega$. Indeed, by analogous arguments we can even show that the extension

$$\|Kc\|_{L^1(\mathcal{M};\mu)} = \int_{\{1,\dots,M\} \times \Sigma_{\mathrm{coils}}} \|K_\alpha c\|_{L^1(\Sigma;\mu_{\mathrm{meas}})}\, \mathrm{d}\mu_{\mathrm{coil}}(\alpha) \leq \sup_\alpha \|K_\alpha\|_{L^1(\Omega) \to L^1(\Sigma;\mu_{\mathrm{meas}})}\, \|c\|_{L^1(\Omega)}$$

is well-defined and bounded as an operator from $L^1(\Omega)$ to $L^1(\mathcal{M};\mu)$.

In practice one would sometimes like to use multiple coil activations within the set of coils simultaneously, also allowing different coil currents in order to vary the resulting field strength. Let $B$ be the set of all possible activation patterns. Then we can introduce weighting parameters $\omega^\beta_\alpha$, where $\beta \in B$ defines a specific pattern, and write

$$\tilde{K} = \left( \sum_{\alpha \in A} \omega^\beta_\alpha\, K_\alpha \right)_{\beta \in B}. \qquad (8)$$

As a result, the choice of the weights defines a specific activation pattern. Note that even if $A$ is not a finite set, we will only have a finite number of nonzero weights in any measurement; the above sums thus have to be interpreted as finite ones and no convergence issues arise. The case of directly measuring $K_\alpha c$ is of course a special case where one weight is equal to one and the others vanish.

### 2.3 An Idealized Mathematical Model

In order to gain further understanding of the capabilities and mathematical structure of MRXI, it will be useful to study an idealized model that does not need to take into account the fine details of the coil. We approximate the activation by the limit of a small coil, with $y_\alpha$ the center of mass of $\varphi_\alpha$. Then we can approximate

$$\int_0^{L_\alpha} \varphi'_\alpha(s) \times \frac{w - \varphi_\alpha(s)}{|w - \varphi_\alpha(s)|^3}\, \mathrm{d}s \approx \int_0^{L_\alpha} \varphi'_\alpha(s)\, \mathrm{d}s \times \frac{w - y_\alpha}{|w - y_\alpha|^3} = \eta_\alpha \times \frac{w - y_\alpha}{|w - y_\alpha|^3}$$

with $\eta_\alpha = \int_0^{L_\alpha} \varphi'_\alpha(s)\, \mathrm{d}s$ the orientation of the approximated coil. Thus, further ignoring multiplicative constants, we can write an idealized measurement related to the activation $(y_\alpha, \eta_\alpha)$ as

$$b_\alpha(\sigma) = \sigma_n \cdot \int_\Omega \left( \frac{3\, (\sigma_x - w) \otimes (\sigma_x - w)}{|\sigma_x - w|^5} - \frac{I}{|\sigma_x - w|^3} \right) \left( \eta_\alpha \times \frac{w - y_\alpha}{|w - y_\alpha|^3} \right) c(w)\, \mathrm{d}w.$$

The formula for the measurement can be brought into a more compact notation by employing the fundamental solution $\gamma(x) = \frac{1}{4\pi |x|}$ of the Laplace equation (cf. [15, Chapter 2.2]) and noticing that

$$\nabla_x \gamma(x) = -\frac{1}{4\pi} \frac{x}{|x|^3}$$

and

$$\nabla_x \nabla_x \gamma(x) = \frac{1}{4\pi} \left( \frac{3\, x \otimes x}{|x|^5} - \frac{I}{|x|^3} \right).$$

Hence, we obtain

$$b_\alpha(\sigma) = 16\pi^2\, \sigma_n \cdot \int_\Omega \nabla\nabla \gamma(\sigma_x - w) \left( \eta_\alpha \times \nabla \gamma(w - y_\alpha) \right) c(w)\, \mathrm{d}w.$$

Note that the following identities hold for the fundamental solution :

$$\nabla_{(x_1 - x_2)} \gamma(x_1 - x_2) = \nabla_{x_1} \gamma(x_1 - x_2) = -\nabla_{x_2} \gamma(x_1 - x_2).$$
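The gradient and Hessian formulas for $\gamma$ above can be verified numerically by central finite differences (a quick sanity check of ours, not part of the original derivation):

```python
import numpy as np

def gamma(x):
    """Fundamental solution of the Laplace equation in R^3: 1 / (4 pi |x|)."""
    return 1.0 / (4.0 * np.pi * np.linalg.norm(x))

def grad_gamma(x):
    """Analytic gradient: -x / (4 pi |x|^3)."""
    return -x / (4.0 * np.pi * np.linalg.norm(x) ** 3)

def hess_gamma(x):
    """Analytic Hessian: (3 x (x) x / |x|^5 - I / |x|^3) / (4 pi)."""
    d = np.linalg.norm(x)
    return (3.0 * np.outer(x, x) / d**5 - np.eye(3) / d**3) / (4.0 * np.pi)

x, h = np.array([1.0, 2.0, 2.0]), 1e-5
I3 = np.eye(3)
fd_grad = np.array([(gamma(x + h * e) - gamma(x - h * e)) / (2 * h) for e in I3])
fd_hess = np.array([(grad_gamma(x + h * e) - grad_gamma(x - h * e)) / (2 * h) for e in I3])
# both differences are tiny; the vanishing trace reflects harmonicity away from 0
print(np.abs(fd_grad - grad_gamma(x)).max(), np.abs(fd_hess - hess_gamma(x)).max())
```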

By using these identities, integration by parts and the compact support of $c$ in $\Omega$ we get

$$\begin{aligned} b_\alpha(\sigma) &= -16\pi^2\, \sigma_n \cdot \nabla_{\sigma_x} \int_\Omega \nabla_w \gamma(\sigma_x - w) \cdot \left( c(w)\, \eta_\alpha \times \nabla_w \gamma(w - y_\alpha) \right) \mathrm{d}w \\ &= -16\pi^2\, \sigma_n \cdot \nabla_{\sigma_x} \int_\Omega \gamma(\sigma_x - w)\, \nabla_w \cdot \left( c(w)\, \eta_\alpha \times \nabla_w \gamma(w - y_\alpha) \right) \mathrm{d}w \\ &= -16\pi^2\, \sigma_n \cdot \nabla_{\sigma_x} U(\sigma_x; y_\alpha, \eta_\alpha). \end{aligned}$$

Using the properties of the fundamental solution we can characterize $U$ as the unique solution of

$$-\Delta U = \nabla \cdot (c A) \quad \text{in } \mathbb{R}^3, \qquad (9)$$

decaying at infinity, with the activation vector field

$$A(x) = \eta_\alpha \times \nabla \gamma(x - y_\alpha).$$

In the following we further assume that the sensors can be arranged on a hypersurface $\Gamma$ outside of $\Omega$ and that sensors with three different orientations spanning the whole $\mathbb{R}^3$ are available. This means that one indeed effectively measures $\nabla U$. Finally, ignoring known scaling constants, we can assume that $\eta_\alpha$ is normalized and that the measurement corresponds directly to $\nabla U$ on $\Gamma$. This leads to the following idealized problem, which will be the basis of further analysis:

Idealized Inverse Problem:
Given measurements of $\nabla U$ on $\Gamma$ for a set of activations $(y_\alpha, \eta_\alpha)$, $\alpha \in A$, where $U$ is the solution of Equation (9), determine the magnetic particle density $c$ compactly supported in $\Omega$.

We see that in this setting the problem shares similarities with inverse source problems. Indeed, basic unique continuation results for the Laplace equation from Cauchy data on a hypersurface (or an even stronger result from the knowledge of $U$ only, cf. [18, Lemma 2.1.1]) show that $U$ is uniquely determined in $\mathbb{R}^3 \setminus \Omega$ from such data. Another analogy can be drawn to inverse problems in fluorescence tomography. By rewriting the activation via the solution $V$ of another Poisson equation, we thus have the system

$$-\Delta V = \delta_{y_\alpha}, \qquad -\Delta U = \nabla \cdot \left( c\, (\eta_\alpha \times \nabla V) \right) = -\nabla \cdot \left( c\, (\nabla \times (V \eta_\alpha)) \right). \qquad (10)$$

Apart from the fact that elliptic operators on bounded domains with more complicated coefficients are used in fluorescence tomography, the key difference is the way of activation. In fluorescence tomography the right-hand side in the second equation is of scalar type, while here we find a non-scalar version mediated by the effective coil orientation $\eta_\alpha$.

We mention that an analogous formulation is possible for the original forward model; however, there is no equivalent of the scalar potential $V$, and we need to write an equation for a vector field $W$ corresponding to $V$ in the above formulation. We notice that

$$\frac{1}{4\pi} \int_0^{L_\alpha} \varphi'_\alpha(s) \times \frac{w - \varphi_\alpha(s)}{|w - \varphi_\alpha(s)|^3}\, \mathrm{d}s = -\int_0^{L_\alpha} \varphi'_\alpha(s) \times \nabla_w \gamma(w - \varphi_\alpha(s))\, \mathrm{d}s = \nabla_w \times \int_0^{L_\alpha} \varphi'_\alpha(s)\, \gamma(w - \varphi_\alpha(s))\, \mathrm{d}s,$$

since $\nabla_w \times \left( \gamma(w - \varphi_\alpha(s))\, \varphi'_\alpha(s) \right) = \nabla_w \gamma(w - \varphi_\alpha(s)) \times \varphi'_\alpha(s)$. Then we can write

$$-\Delta W = \epsilon_\alpha, \qquad -\Delta U = \nabla \cdot \left( c\, (\nabla \times W) \right) \qquad (11)$$

with the vectorial distribution

$$\epsilon_\alpha : \psi \mapsto \int_0^{L_\alpha} \varphi'_\alpha(s)\, \psi(\varphi_\alpha(s))\, \mathrm{d}s.$$

### 2.4 Idealized Dipole Model

The asymptotic analysis in the previous section is based on the implicit assumption that $\eta_\alpha \neq 0$, since otherwise no activation is left at leading order. Since $\eta_\alpha = 0$ may happen in practice, in particular for any coil represented by a closed curve, we further discuss this case in the following. The appropriate model arises from the first-order expansion

$$\frac{w - \varphi_\alpha(s)}{|w - \varphi_\alpha(s)|^3} \approx \frac{w - y_\alpha}{|w - y_\alpha|^3} - \nabla \left( \frac{w - y_\alpha}{|w - y_\alpha|^3} \right) (\varphi_\alpha(s) - y_\alpha).$$

$$\begin{aligned} \int_0^{L_\alpha} \varphi'_\alpha(s) \times \frac{w - \varphi_\alpha(s)}{|w - \varphi_\alpha(s)|^3}\, \mathrm{d}s &= -\int_0^{L_\alpha} \varphi'_\alpha(s) \times \left( \nabla \left( \frac{w - y_\alpha}{|w - y_\alpha|^3} \right) (\varphi_\alpha(s) - y_\alpha) \right) \mathrm{d}s \\ &= \nabla \times \left( M_\alpha \nabla \gamma(w - y_\alpha) \right) = \nabla \times \left( \nabla \cdot \left( M_\alpha\, \gamma(w - y_\alpha) \right) \right) \end{aligned}$$

with the matrix

$$M_\alpha = 4\pi \int_0^{L_\alpha} \varphi'_\alpha(s) \otimes (\varphi_\alpha(s) - y_\alpha)\, \mathrm{d}s.$$
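For a closed coil one can check numerically that the first-order quantity $\eta_\alpha$ vanishes while $M_\alpha$ does not; a small sketch for a unit circular coil (geometry and quadrature are our own choices):

```python
import numpy as np

# Unit circular coil in the x-y plane, centre of mass y_alpha = 0.
s = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
phi  = np.stack([np.cos(s), np.sin(s), np.zeros_like(s)], axis=1)   # curve phi_alpha
dphi = np.stack([-np.sin(s), np.cos(s), np.zeros_like(s)], axis=1)  # tangent phi_alpha'

ds = 2.0 * np.pi / len(s)
eta = (dphi * ds).sum(axis=0)                              # eta_alpha = integral of phi'
M = 4.0 * np.pi * np.einsum('ki,kj->ij', dphi, phi) * ds   # M_alpha by quadrature

print(np.linalg.norm(eta))   # ~ 0: a closed coil has no first-order activation
print(M / (4.0 * np.pi))     # antisymmetric, off-diagonal entries +/- pi
```

The antisymmetric part of $M_\alpha$ encodes the dipole moment of the coil, consistent with the dipole interpretation of (12) below.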

With the notations of the previous section we can also rewrite the formula for the activation slightly and obtain the forward solution, now with the changed activation model

$$-\Delta W = \nabla \cdot (M_\alpha\, \delta_{y_\alpha}), \qquad -\Delta U = \nabla \cdot \left( c\, (\nabla \times W) \right), \qquad (12)$$

which actually corresponds to a magnetic dipole at $y_\alpha$.

Idealized Dipole Inverse Problem:
Given measurements of $\nabla U$ on $\Gamma$ for a set of activations $(y_\alpha, M_\alpha)$, where $U$ is the solution of Equation (12), determine the magnetic particle density $c$ compactly supported in $\Omega$.

The analysis of this inverse problem shares many similarities with the idealized model above and will hence not be discussed in detail in the following. However, we will use the two dimensional version of the dipole model for the first set of numerical experiments.

## 3 The Inverse Problem of MRXI

In the following we further outline some properties of the inverse problem in MRXI, discuss possible variational regularizations and constraints, and finally provide a more detailed analysis of the idealized inverse problem.

### 3.1 Ill-Posedness of the Inverse Problem

The modeling above provides an operator $K_\alpha$ for each given activation pattern $\alpha$. Collecting the measurements into data $g$, we obtain an operator $K$ linear in the particle distribution $c$. Then we have a standard linear inverse problem in the form of the operator equation

$$Kc = g.$$

As mentioned above, each integral operator $K_\alpha$ is a Fredholm operator of the first kind (see Equation (7)) with bounded kernel. Then the operator $K_\alpha$ is bounded and compact. If the set of all coils is finite, it directly follows that $K$ is compact as well with respect to the corresponding product topology, and hence the inverse problem is ill-posed. Similar arguments also hold for the other versions of the index set discussed before. We mention that in the realistic case of both activation coils and sensors being outside the region of interest, the operator is a Fredholm integral operator with analytic kernel, hence the inverse problem is severely ill-posed.

### 3.2 Variational Regularization

In order to compute stable approximations of the solution despite the ill-posedness, we use the popular approach of variational regularization methods, i.e. we look for

$$c^* = \underset{c}{\operatorname{argmin}}\ \frac{1}{2} \|Kc - g\|_2^2 + \alpha R(c). \qquad (13)$$

In general this is a flexible solution approach, allowing to include prior knowledge and further constraints in the solution of the original problem. For example, the assumption of a smooth solution can be included as prior knowledge, leading to

$$\underset{c}{\operatorname{argmin}}\ \frac{1}{2} \|Kc - g\|^2 + \alpha \|\nabla c\|^2_{L^2(\Omega)} \qquad (14)$$

as an explicit example, commonly known as first-order Tikhonov regularization. Since Tikhonov regularization and methods producing similar results are widely used in practice, we will consider it the state-of-the-art method and use it for comparison with the reconstruction method proposed in this paper. In general, the choice of a penalization term depends on the considered problem; it often originates from physical principles or constraints and includes system-relevant properties.

For MRXI, particles are distributed within the region of interest. This region may consist of various materials with individual physical properties that yield different characteristic densities. Due to the limited resolution of MRXI we do not expect to resolve local variations in the density, but rather focus on reconstructing sharp edges between different kinds of tissue, assuming that the magnetic nanoparticles distribute homogeneously within a given tissue. It is well known that the special structure of constant regions separated by (mainly) sharp edges is promoted by Total Variation regularization (cf. [9, 8]). Hence, we incorporate the Total Variation seminorm

$$\mathrm{TV}(c) = \sup_{\psi \in C_0^\infty(\Omega)^3,\ \|\psi\|_\infty \leq 1} \int_\Omega c(x)\, \nabla \cdot \psi(x)\, \mathrm{d}x \qquad (15)$$

as a regularization term in our variational model (13). In addition we incorporate the natural constraint that a density function is, from a physical point of view, nonnegative. Thus we restrict the minimization to nonnegative functions. In order to incorporate such a constraint into the variational model we employ the characteristic function of a convex set $C$, i.e.

$$\chi_C(x) = \begin{cases} 0 & \text{if } x \in C \\ \infty & \text{else.} \end{cases} \qquad (16)$$

To implement a nonnegativity constraint on the particle distribution we define $C = \{ c \mid c \geq 0 \text{ a.e.} \}$ and consider $\chi_C$, or simply write $\chi_{\geq 0}$. This leads to the variational model

$$c^* \in \underset{c}{\operatorname{argmin}}\ \frac{1}{2} \|Kc - g\|^2 + \alpha\, \mathrm{TV}(c) + \chi_{\geq 0}(c). \qquad (17)$$

Indeed this problem is well-defined, i.e. for any data $g$ a nonnegative minimizer exists in

$$BV(\Omega) = \left\{ c \in L^1(\Omega) \ \middle|\ \mathrm{TV}(c) < \infty \right\}.$$

This follows by minor modifications from standard existence results for Total Variation regularization (cf. e.g. [1, 8]), using the compact embedding of $BV(\Omega)$ into $L^1(\Omega)$ and the boundedness of the forward operator on the latter space.
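In a discrete setting, the three terms of the variational model can be evaluated as follows (a sketch of ours using 1D forward differences for the TV term):

```python
import numpy as np

def tv(c):
    """Discrete (anisotropic, 1D) total variation: sum of |forward differences|."""
    return np.abs(np.diff(c)).sum()

def chi_nonneg(c):
    """Characteristic function of the nonnegativity constraint, Eq. (16)."""
    return 0.0 if np.all(c >= 0) else np.inf

def objective(K, c, g, alpha):
    """Discretized functional of Eq. (17)."""
    return 0.5 * np.linalg.norm(K @ c - g) ** 2 + alpha * tv(c) + chi_nonneg(c)

c = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
print(tv(c))  # TV of a step profile = total jump height = 2.0
```

Note how the indicator term makes the objective infinite for any density with negative entries, which is exactly how the constraint is enforced in the minimization.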

Instead of the variational method, which is known to produce a rather strong bias (a loss of contrast in the case of total variation regularization, cf. [6, 8]), we can employ the Bregman iteration as an iterative regularization method. This means that, for $g_0 = g$ and $\alpha$ fixed and large, we iteratively compute a sequence of reconstructions

$$c_{k+1} \in \underset{c}{\operatorname{argmin}}\ \frac{1}{2} \|Kc - g_k\|^2 + \alpha\, \mathrm{TV}(c) + \chi_{\geq 0}(c) \qquad (18)$$
$$g_{k+1} = g_k + g - K c_{k+1}. \qquad (19)$$

The regularizing effect arises from an appropriate stopping of the iteration, where previous analysis indicates that $\alpha$ divided by the number of Bregman iterations corresponds to the regularization parameter in the variational method, however with a reduced bias compared to the variational method (cf. [6, 8]). If an estimate of the noise level is available, one can easily choose a stopping index by the discrepancy principle or a similar method.

### 3.3 Identifiability Analysis of the Idealized Problem

We finally turn to a more detailed analysis of the idealized problem formulated in Subsection 2.3. In particular we are interested in the uniqueness or possible non-uniqueness in the determination of the density $c$ from different activation strategies. We will consider two extreme cases in order to understand when uniqueness can hold or fail in such a problem:

• Full activation: in this scenario we assume that activation is carried out at any $y$ in an open set $Y \subset \mathbb{R}^3 \setminus \overline{\Omega}$, with two linearly independent orientations $\eta_1(y)$ and $\eta_2(y)$ for every $y \in Y$.

• Far-field activation: in this scenario we assume that activation is carried out at any $y$ in the limit $|y| \to \infty$, again with two linearly independent orientations $\eta_1(y)$ and $\eta_2(y)$ for every $y$.

We start with a rather standard reciprocity principle that holds for any kind of activation:

###### Lemma 3.1:

Let $c_1, c_2$ be nonnegative densities with compact support in $\Omega$, such that the potentials $U_1, U_2$ related to $c_1, c_2$ satisfy

$$\nabla U_1(\cdot\,; y, \eta) = \nabla U_2(\cdot\,; y, \eta) \qquad (20)$$

on a closed surface outside of $\overline{\Omega}$. Then for all $y, z \notin \overline{\Omega}$ and $\eta \in \mathbb{R}^3$ the identity

$$0 = \int_\Omega \tilde{c}(x)\, \nabla \gamma(x - z) \cdot \left( \eta \times \nabla \gamma(x - y) \right) \mathrm{d}x = \int_\Omega \tilde{c}(x)\, \nabla \gamma(x - y) \cdot \left( \eta \times \nabla \gamma(x - z) \right) \mathrm{d}x \qquad (21)$$

holds, where $\tilde{c} = c_1 - c_2$ and $\gamma$ is the fundamental solution of the Laplace equation.

###### Proof.

First of all, due to the compact support of $\tilde{c}$ in $\Omega$, the potential $U = U_1 - U_2$ is a smooth harmonic function in $\mathbb{R}^3 \setminus \overline{\Omega}$. As explained in Section 2, standard unique continuation for harmonic functions then implies $U \equiv 0$ in $\mathbb{R}^3 \setminus \overline{\Omega}$. Moreover, $U$ is a weak solution of the Poisson equation on $\mathbb{R}^3$; more precisely, for any smooth test function $\varphi$ we have

$$-\int_{\mathbb{R}^3} U(x)\, \Delta \varphi(x)\, \mathrm{d}x = \int_\Omega \tilde{c}(x)\, \left( \eta \times \nabla \gamma(x - y) \right) \cdot \nabla \varphi(x)\, \mathrm{d}x.$$

Note that we do not require compact support of $\varphi$, due to the vanishing Cauchy data of $U$ outside of $\overline{\Omega}$ and the compact support of $\tilde{c}$. Thus, for $z$ with positive distance to $\overline{\Omega}$, $\varphi = \gamma(\cdot - z)$ is a suitable test function, harmonic in $\Omega$. This implies the second identity in (21) for such $z$. Using the compact support of $\tilde{c}$ in $\Omega$ it is straightforward to extend to all $z \notin \overline{\Omega}$ by an approximation argument. The first identity follows from an elementary vector identity. ∎

Now let us further characterize the type of measurements we find in the different application scenarios:

###### Lemma 3.2:

Let the set of activations be chosen according to the full activation scenario. Then, with the assumptions of Lemma 3.1, we have

$$0 = \int_\Omega \tilde{c}(x)\, \frac{x - y}{|x - y|^6} \cdot \left( \eta_i(y) \times e \right) \mathrm{d}x \qquad (22)$$

for all $y \in Y$, $i \in \{1, 2\}$, and $e \in S^2$.

###### Proof.

Given $y \in Y$ with positive distance to $\overline{\Omega}$ and $e \in S^2$, choose $z_\epsilon = y + \epsilon e$ for sufficiently small $\epsilon > 0$. From (21) and the definition of $\gamma$ we see that

$$\begin{aligned} 0 &= \lim_{\epsilon \to 0} \frac{1}{16\pi^2 \epsilon} \int_\Omega \tilde{c}(x)\, \frac{x - y}{|x - y|^3} \cdot \left( \eta_i(y) \times \frac{x - z_\epsilon}{|x - z_\epsilon|^3} \right) \mathrm{d}x \\ &= \lim_{\epsilon \to 0} \frac{1}{16\pi^2 \epsilon} \int_\Omega \tilde{c}(x)\, \frac{1}{|x - y|^3\, |x - z_\epsilon|^3} \left( (x - y) \cdot \left( \eta_i(y) \times (x - y) \right) - \epsilon\, (x - y) \cdot \left( \eta_i(y) \times e \right) \right) \mathrm{d}x \\ &= -\lim_{\epsilon \to 0} \frac{1}{16\pi^2} \int_\Omega \tilde{c}(x)\, \frac{(x - y) \cdot \left( \eta_i(y) \times e \right)}{|x - y|^3\, |x - z_\epsilon|^3}\, \mathrm{d}x \end{aligned}$$

leads to (22), using the orthogonality of $x - y$ to $\eta_i(y) \times (x - y)$ and $|x - z_\epsilon| \to |x - y|$ as $\epsilon \to 0$. ∎

###### Lemma 3.3:

Let the set of activations be chosen according to the far-field activation scenario. Then, with the assumptions of Lemma 3.1, we have

$$\lim_{|y| \to \infty} |y|^2 \int_\Omega \tilde{c}(x)\, \nabla \gamma(x - z) \cdot \left( \eta \times \nabla \gamma(x - y) \right) \mathrm{d}x = -\int_\Omega \tilde{c}(x)\, \frac{x - z}{|x - z|^3} \cdot \left( \eta_i(y) \times \frac{y}{|y|} \right) \mathrm{d}x$$

and thus,

$$\int_\Omega \tilde{c}(x)\, \frac{1}{|x - z|} \cdot \left( \eta_i(y) \times \frac{y}{|y|} \right) \mathrm{d}x = 0 \qquad (23)$$

for all $z \notin \overline{\Omega}$.

###### Proof.

The first identity follows from a straightforward computation of the limit. Together with (21) we obtain

$$\nabla_z \int_\Omega \tilde{c}(x)\, \frac{1}{|x - z|} \cdot \left( \eta_i(y) \times \frac{y}{|y|} \right) \mathrm{d}x = 0$$

for all $z \notin \overline{\Omega}$. The asymptotic decay of the Coulomb potential for $|z| \to \infty$ implies that no constant appears when integrating the gradient, i.e. (23) holds. ∎

Since data of the form (23) are known to be insufficient to determine $\tilde{c}$ (this would amount exactly to the inverse source problem for the Poisson equation), we have to expect non-uniqueness in this activation scenario. Note that, strictly speaking, we have not given a non-uniqueness proof, since (21) is only a sufficient condition and we have not used the nonnegativity of the densities $c_1, c_2$; but it appears natural that this case cannot suffice to uniquely determine the magnetic particle density.

In the case of full activation we have a different picture: there we can determine a suitable Riesz potential instead of the Coulomb potential, which allows us to give a uniqueness result:

###### Theorem 3.4:

Let the set of activations be chosen according to the full activation scenario. Then the measurements of $\nabla U(\cdot\,; y, \eta_i(y))$, $y \in Y$, $i \in \{1, 2\}$, uniquely determine a nonnegative density $c$ with compact support in $\Omega$.

###### Proof.

According to Lemma 3.1 and Lemma 3.2 it suffices to show that (22) implies $\tilde{c} = 0$ in $\Omega$. First of all we see that with two linearly independent vectors $\eta_1(y), \eta_2(y)$ we obtain

$$\left\{ \left. \frac{\eta_i(y) \times e}{|\eta_i(y) \times e|}\ \right|\ e \in S^2,\ i \in \{1, 2\} \right\} = S^2,$$

i.e. we can achieve arbitrary directions in (22). Hence, we find

$$\nabla_y \int_\Omega \tilde{c}(x)\, \frac{1}{|x - y|^4}\, \mathrm{d}x = 4 \int_\Omega \tilde{c}(x)\, \frac{x - y}{|x - y|^6}\, \mathrm{d}x = 0.$$

Noticing its decay at infinity, the Riesz potential

$$R_4(y) = \int_\Omega \tilde{c}(x)\, \frac{1}{|x - y|^4}\, \mathrm{d}x$$

consequently vanishes for all $y \in Y$. Since $R_4$ is analytic outside of $\overline{\Omega}$, unique analytic continuation implies that it vanishes in $\mathbb{R}^3 \setminus \overline{\Omega}$. This is well known to imply $\tilde{c} = 0$ in $\Omega$ (cf. [18, 19]). ∎

## 4 Discrete Forward Model and Numerical Solution

In the following we discuss an appropriate discretization of the inverse problem, its implementation and finally the numerical optimization of the discretized variational model.

### 4.1 Discrete Operator

In the following we provide a discretization of the MRXI forward operator (7). For our purposes we set $\Omega \subset \mathbb{R}^3$ as the domain holding the magnetic nanoparticles. We assume that activation coils and sensors have finite but positive distance to $\Omega$. Therefore we define a set $\Omega_0 \subset \mathbb{R}^3$ that holds both coils and sensors, with $\overline{\Omega} \cap \overline{\Omega}_0 = \emptyset$.

In the discrete setting the curve that represents the conductor of an activation coil $\alpha$ with length $L_\alpha$ is cut into $l$ piecewise linear segments. Then the $k$-th segment is defined as

$$\varphi_{\alpha,k} : \left[ \tfrac{(k-1) L_\alpha}{l}, \tfrac{k L_\alpha}{l} \right] \to \Omega_0, \quad t \mapsto a_k + t\, (b_k - a_k)$$

with $a_k, b_k \in \Omega_0$ the endpoints of the segment. Then the approximated conductor path is defined as the concatenation of the segments $\varphi_{\alpha,k}$ for $k = 1, \ldots, l$. In general we require $l$ to be sufficiently large to provide a proper discretization of the activation coil. The $k$-th segment then provides a magnetic field in $\Omega$:

$$B^{\mathrm{coil}}_{\alpha,k} : \Omega \to \mathbb{R}^3, \quad w \mapsto \vartheta\, \frac{|a_k - w| + |b_k - w|}{|a_k - w|\, |b_k - w|}\, \frac{(a_k - w) \times (b_k - w)}{|a_k - w|\, |b_k - w| + (a_k - w) \cdot (b_k - w)}. \qquad (24)$$

Thus the magnetic field induced by a -segmented activation coil is given by

$$B^{\mathrm{coil}}_\alpha(w) = \sum_{k=1}^{l} B^{\mathrm{coil}}_{\alpha,k}(w).$$
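The segment formula (24) and its summation over a polygonal coil can be sketched as follows, with $\vartheta = 1$; as a sanity check we compare against the known field at the centre of a circular loop, which equals $2\pi/R$ in these units (function names and geometry are our own):

```python
import numpy as np

def b_segment(a, b, w, theta=1.0):
    """Magnetic field at w of a straight current segment from a to b, Eq. (24)."""
    ra, rb = a - w, b - w
    na, nb = np.linalg.norm(ra), np.linalg.norm(rb)
    return theta * (na + nb) / (na * nb) * np.cross(ra, rb) / (na * nb + ra @ rb)

def b_coil(vertices, w):
    """Field of a closed polygonal coil given by its vertices (sum over Eq. (24))."""
    return sum(b_segment(vertices[k], vertices[(k + 1) % len(vertices)], w)
               for k in range(len(vertices)))

# Regular 200-gon approximating the unit circle; field at the centre.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
verts = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
Bz = b_coil(verts, np.zeros(3))[2]
print(Bz)  # ~ 2*pi, the exact value for the continuous unit loop with theta = 1
```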

Now we define a disjoint decomposition $\{\Omega_k\}_{k=1}^N$ of $\Omega$ as well as a set of midpoints $W = \{w_1, \ldots, w_N\}$ and demand that $w_k \in \Omega_k$ holds. Then the discretized particle distribution is defined by

$$c : W \to \mathbb{R}, \quad w_k \mapsto \frac{1}{|\Omega_k|} \int_{\Omega_k} c(w)\, \mathrm{d}w. \qquad (25)$$

Thus, on a grid with $N$ nodes, the continuous particle distribution can be understood as an element of $\mathbb{R}^N$ in the discrete setting.

For a given but fixed sensor $\sigma$, Equation (7) can be translated into a linear operator

$$K_{\alpha,\sigma} : \mathbb{R}^N \to \mathbb{R}, \quad c \mapsto \sum_{k=1}^{N} k_\alpha(w_k, \sigma)\, c(w_k) \qquad (26)$$

where $k_\alpha$ is defined as in Equation (6); however, in this case we use the discrete approximation of $B^{\mathrm{coil}}_\alpha$ (Equation (24)) instead. In the end, $K_{\alpha,\sigma} c$ provides a measurement of the full amplitude of the relaxation induced by coil activation $\alpha$.

### 4.2 Simplified Coil Activation

We see that a realistic implementation of the activation coil using piecewise linear sections leads to Equation (24). As a result, the number of calculations, and therefore the computational cost, increases with higher-precision modeling of the coil. However, in Subsection 2.3 we have shown that the coil can be approximated in the limit by a small coil maintaining its magnetic properties, resulting in the idealized model. Thus, the excitation coil can be implemented as a magnetic dipole. This way both coil excitation and dipole relaxation are described by Equation (4); however, the coil activation fields are not restricted to any measuring direction $\sigma_n$. This leads to a much more lightweight computation of the coil activation.

### 4.3 Matrix Assembly: Sensors and Activation Patterns

In an experimental setup the amplitude of the particle relaxation is acquired at multiple points $\sigma_1, \ldots, \sigma_s$, where $s$ denotes the number of sensors in the system. Due to the linearity in $c$, Equation (26) can be translated into a matrix representation combining all measuring points:

$$K_\alpha := \begin{bmatrix} K_{\alpha,\sigma_1} \\ \vdots \\ K_{\alpha,\sigma_s} \end{bmatrix}.$$

Then, for multiple subsequent activations $\alpha_1, \ldots, \alpha_r$, this leads to a fully discretized forward operator matrix $K \in \mathbb{R}^{rs \times N}$:

$$K := \begin{bmatrix} K_{\alpha_1} \\ \vdots \\ K_{\alpha_r} \end{bmatrix}. \qquad (27)$$
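Assembling the stacked matrix (27) for a toy configuration might look as follows; we use the simplified dipole activation of Subsection 4.2 for the excitation field (grid size, sensor and coil placement and all names are our own choices):

```python
import numpy as np

def dipole_field(r, m):
    """Field of a point dipole with moment m at offset r (constants dropped)."""
    d = np.linalg.norm(r)
    return 3.0 * r * (r @ m) / d**5 - m / d**3

def kernel(w, sigma_x, sigma_n, b_act):
    """k_alpha(w, sigma) of Eq. (6): relaxation dipole at w projected on the sensor."""
    return sigma_n @ dipole_field(sigma_x - w, b_act)

# Voxel grid (N = 4^3 midpoints in the unit cube); sensors above, coils below.
grid = np.array([[x, y, z] for x in np.linspace(0.125, 0.875, 4)
                           for y in np.linspace(0.125, 0.875, 4)
                           for z in np.linspace(0.125, 0.875, 4)])
sensors = [(np.array([x, y, 2.0]), np.array([0.0, 0.0, 1.0]))
           for x in (0.25, 0.75) for y in (0.25, 0.75)]
coils = [(np.array([x, y, -1.0]), np.array([0.0, 0.0, 1.0]))
         for x in (0.25, 0.75) for y in (0.25, 0.75)]

rows = []
for y_a, eta in coils:                       # activation alpha (dipole at y_a)
    for s_x, s_n in sensors:                 # sensor sigma
        rows.append([kernel(w, s_x, s_n, dipole_field(w - y_a, eta)) for w in grid])
K = np.array(rows)                           # stacked K_alpha blocks, Eq. (27)
print(K.shape)  # (16, 64): r*s rows, N columns
```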

For a generic variational approach it suffices to have the discretized forward operator, respectively its matrix representation $K$. For advanced numerical methods, and in particular for the future design of activation and measurement strategies, it is however crucial to keep its internal structure in mind.

### 4.4 Alternating Direction Method of Multipliers (ADMM)

We defined our desired model for MRXI in (17):

$$c^* \in \underset{c}{\operatorname{argmin}}\ \frac{1}{2} \|Kc - g\|_2^2 + \alpha \|\nabla c\|_1 + \chi_{\geq 0}(c) \qquad (28)$$

where $\|\nabla c\|_1$ describes the discrete version of the Total Variation (15). To approach this problem we use ADMM to deduce an iterative scheme.

In the general formulation of ADMM we have convex functionals $D$ and $\Psi$, linear operators $E$ and $H$, as well as variables $c$, $v$ and $u$. Then a generalized representation of a minimization problem can be denoted as

$$\min_{c,v}\ D(c) + \Psi(v) \quad \text{s.t.} \quad Ec + Hv = u. \qquad (29)$$

This constrained problem can be expressed by the augmented Lagrangian

$$L_\rho(c, v, \lambda) = D(c) + \Psi(v) + \lambda^T (Ec + Hv - u) + \frac{\rho}{2} \|Ec + Hv - u\|_2^2.$$

Basically, the Lagrangian is a reformulation of the minimization problem: a triple $(c^*, v^*, \lambda^*)$ is a saddle point of the Lagrangian if $(c^*, v^*)$ is a minimizer of the original problem. This leads to the following update scheme:

$$c^{k+1} = \underset{c}{\operatorname{argmin}}\ L_\rho(c, v^k, \lambda^k) \qquad (30)$$
$$v^{k+1} = \underset{v}{\operatorname{argmin}}\ L_\rho(c^{k+1}, v, \lambda^k) \qquad (31)$$
$$\lambda^{k+1} = \lambda^k + \rho\, \left( E c^{k+1} + H v^{k+1} - u \right). \qquad (32)$$

To transfer our variational model (28) into the ADMM scheme (29) we have to define

$$D(c) := \frac{1}{2} \|Kc - g\|_2^2, \quad v := \begin{pmatrix} \hat{v} \\ v_+ \end{pmatrix}, \quad \Psi(v) := \alpha \|\hat{v}\|_1 + \chi_{\geq 0}(v_+), \quad u := 0, \quad E := \begin{pmatrix} \nabla \\ I \end{pmatrix}, \quad H := -I.$$

In this setting we can directly apply the ADMM scheme (30)-(32) to the solution of the inverse problem of MRXI. Note that the major computational effort in each iteration step is due to the first step, i.e. computing an update for $c$, which involves the solution of a large linear system with the full matrix $K$.
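A minimal 1D sketch of the resulting scheme, using the scaled form of the dual update, can look as follows (phantom, forward matrix and all parameters are our own choices; the soft-thresholding and projection steps are the proximal maps of the two parts of $\Psi$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
c_true = np.zeros(n); c_true[15:35] = 1.0          # piecewise-constant phantom
K = rng.normal(size=(40, n)) / np.sqrt(n)
g = K @ c_true

D = np.diff(np.eye(n), axis=0)                     # discrete gradient (forward differences)
E = np.vstack([D, np.eye(n)])                      # E = (grad, I)^T;  H = -I, u = 0
alpha, rho = 1e-3, 1.0

v = np.zeros(E.shape[0]); lam = np.zeros_like(v)   # lam: scaled dual variable
A = K.T @ K + rho * E.T @ E                        # system matrix of the c-update
for _ in range(300):
    c = np.linalg.solve(A, K.T @ g + rho * E.T @ (v - lam))    # step (30)
    w = E @ c + lam
    v_hat  = np.sign(w[:n-1]) * np.maximum(np.abs(w[:n-1]) - alpha / rho, 0.0)
    v_plus = np.maximum(w[n-1:], 0.0)                          # projection onto >= 0
    v = np.concatenate([v_hat, v_plus])                        # step (31)
    lam = lam + E @ c - v                                      # step (32)

print(np.linalg.norm(K @ c - g))  # small data misfit; v_plus is nonnegative by design
```

Note that the c-update solves a linear system with the fixed matrix $K^T K + \rho E^T E$, which is exactly the computationally dominant step mentioned above; in large-scale settings one would prefactorize it or use an iterative solver.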