Vector Calculus in Maxwell’s Equations and the Klein-Gordon Equation

Abstract

This paper aims to study the intricacies of Maxwell’s equations and the Klein-Gordon equation, presenting a detailed derivation of each equation. Vector calculus has many applications in advanced physics topics, including electromagnetism and cosmology. It serves as the mathematical framework for expressing vector quantities such as force, velocity, and electric fields. However, a foundation in vector calculus must be built up in order to comprehend what the equations in this paper describe. This paper will start with a brief introduction to vector calculus concepts and the essential theorems needed to compute higher-dimensional integrals. Using our newfound knowledge of vector calculus, this paper will transition into an exploration and derivation of Maxwell’s equations in both the integral and differential forms. From here, the paper will transition into an exploration of the universe and its workings through the Klein-Gordon equation. At the end of Section 3 and Section 4, the paper aims to highlight the importance of each equation in the physics world.

Introduction

Vector calculus plays an important role in analyzing rates of change in three dimensions. It extends the differentiation and integration at the heart of Calculus I and II to three-dimensional vectors, utilizing newly formulated concepts to accomplish this. Vector calculus is crucial in fields like physics and most engineering avenues, as it enables us to model and understand real-world situations in three dimensions and to attack unexplored problems of our natural world.

Maxwell’s equations are a set of four foundational equations developed by James Clerk Maxwell in the 19th century. These equations concisely state the fundamentals of electromagnetism. They describe the interactions between electric fields, magnetic fields, electric flux, and magnetic flux. Maxwell’s equations include Gauss’ law for electricity, Gauss’ law for magnetism, Faraday’s law of electromagnetic induction (Maxwell-Faraday equation), and Ampère’s law with Maxwell’s addition (Ampère-Maxwell equation)1.

On the other hand, the Klein-Gordon equation is a wave equation in quantum field theory that describes the motion of specific particles. The equation, named after Oskar Klein and Walter Gordon, combines special relativity and quantum mechanics to describe the movement of particles with zero spin, such as the only confirmed particle with zero spin, the Higgs boson. The Klein-Gordon equation plays a crucial role in understanding the behavior of scalar particles in quantum physics, shedding light on topics like particle creation and annihilation, which can be explored in a whole paper by itself.

This paper aims to provide an in-depth derivation of all four of Maxwell’s equations and the Klein-Gordon equation, followed up with a detailed analysis on the applications and importance of each equation. The paper will start with an introduction to cornerstone vector calculus concepts and other mathematical topics integrated into the derivations in the following sections. The paper will transition into an introduction to Maxwell’s equations, with a thorough derivation of each equation. The paper will finish with an introduction to the Klein-Gordon equation and a meticulous derivation of the equation with an emphasis on operators, quantization, and the significance of the equation.

Vector Calculus Introduction

In this introductory exploration of vector calculus, we will delve into the key operations and theorems that form the basis of this indispensable mathematical discipline, offering insights into its significance in the realm of science and technology, starting with four crucial identities involving differentiation and integration.

Del Operator and Gradient, Divergence, and Curl

The backbone of the three vector operators we will go over is the (\nabla) (del) operator, a vector of partial-derivative operators used to build the gradient, divergence, and curl. It is defined as the vector,

(1)   \begin{equation*}\vec{\nabla} = \frac{\partial}{\partial x}\mathbf{i} + \frac{\partial}{\partial y}\mathbf{j} + \frac{\partial}{\partial z}\mathbf{k}\end{equation*}

Equation (1) gives the del (nabla) operator in three-dimensional Cartesian coordinates. Like a lone derivative \frac{d}{dx}, it has no magnitude and cannot stand by itself; it gains meaning once it is applied to a function or vector field.

First, we will explore the gradient operator which, despite appearing less often in this paper than the other two vector operators, is still crucial to go over and helps introduce divergence and curl.

Definition: For a function ( f(x, y, z) ) on ( R^3 ), the gradient, notated ( \nabla f(x, y, z) ), is a vector function on (R^3) such that

(2)   \begin{equation*}\text{grad} f = \nabla f(x, y, z) = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k}\end{equation*}

The gradient in simple terms is a vector field that explains the rate of change of the function in terms of each dimension.
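To make this concrete, the gradient can be approximated numerically with central differences. The following is a minimal Python sketch (an illustration added here, not part of the derivation), assuming the sample function f(x, y, z) = x^2 + y^2 + z^2, whose gradient is (2x, 2y, 2z):

```python
# A sketch (not part of the derivation): approximate the gradient of a
# sample function with central differences and compare to (2x, 2y, 2z).

def grad(f, p, h=1e-6):
    """Central-difference approximation of the gradient of f at point p."""
    g = []
    for i in range(3):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        g.append((f(*fwd) - f(*bwd)) / (2 * h))
    return g

f = lambda x, y, z: x**2 + y**2 + z**2   # sample function, grad = (2x, 2y, 2z)
print(grad(f, (1.0, 2.0, 3.0)))          # approximately [2.0, 4.0, 6.0]
```

At the point (1, 2, 3), the approximation returns values close to (2, 4, 6), matching the analytic gradient.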

Next, we will explore divergence, which is a vector operator that produces a scalar from a vector (because it is a dot product).

Definition: For a vector field (\mathbf{f}(x, y, z) = f_1(x, y, z)\mathbf{i} + f_2(x, y, z)\mathbf{j} + f_3(x, y, z)\mathbf{k}), the divergence of (\mathbf{f}), notated (\nabla \cdot \mathbf{f}), is written as

(3)   \begin{equation*}\text{div} \, \mathbf{f} = \nabla \cdot \mathbf{f}\end{equation*}

(4)   \begin{equation*}= \left( \frac{\partial}{\partial x}\mathbf{i} + \frac{\partial}{\partial y}\mathbf{j} + \frac{\partial}{\partial z}\mathbf{k} \right) \cdot \left( f_1(x, y, z)\mathbf{i} + f_2(x, y, z)\mathbf{j} + f_3(x, y, z)\mathbf{k} \right)\end{equation*}

(5)   \begin{equation*}= \frac{\partial}{\partial x} (f_1) + \frac{\partial}{\partial y} (f_2) + \frac{\partial}{\partial z} (f_3)\end{equation*}

(6)   \begin{equation*}= \frac{\partial f_1}{\partial x} + \frac{\partial f_2}{\partial y} + \frac{\partial f_3}{\partial z}\end{equation*}

The divergence of a vector field can be used to physically explain how the flux of the field behaves at a single point. A point with positive divergence has outgoing flux and is called a “source” of the field. A point with negative divergence has incoming flux and is labeled as a “sink” of the field.
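As a quick numerical illustration (a sketch with a sample field chosen for this purpose), the field \mathbf{f} = (x, 2y, 3z) has constant divergence 1 + 2 + 3 = 6, which a central-difference approximation reproduces at any point:

```python
# Sketch: central-difference divergence of the sample field (x, 2y, 3z),
# whose divergence is 1 + 2 + 3 = 6 everywhere.

def divergence(F, p, h=1e-6):
    """Approximate div F at p: sum of the partials dF_i/dx_i."""
    total = 0.0
    for i in range(3):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        total += (F(*fwd)[i] - F(*bwd)[i]) / (2 * h)
    return total

F = lambda x, y, z: (x, 2 * y, 3 * z)
print(divergence(F, (0.5, -1.0, 2.0)))   # approximately 6.0
```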

Finally, we will cover curl, which is a vector operator that produces a vector from a vector (because it is a cross product).

Definition: For a vector field \mathbf{f}(x, y, z) = P(x, y, z)\mathbf{i} + Q(x, y, z)\mathbf{j} + R(x, y, z)\mathbf{k}, the curl of \mathbf{f}, denoted \nabla \times \mathbf{f}, is written as

(7)   \begin{equation*}\text{curl } \mathbf{f} = \nabla \times \mathbf{f}\end{equation*}

(8)   \begin{equation*}= \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ P(x, y, z) & Q(x, y, z) & R(x, y, z) \end{vmatrix}\end{equation*}

(9)   \begin{equation*}= \left( \frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z} \right) \mathbf{i} - \left( \frac{\partial R}{\partial x} - \frac{\partial P}{\partial z} \right) \mathbf{j} + \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) \mathbf{k}\end{equation*}

The curl of a vector field describes the infinitesimal circulation of the field in three dimensions at specific points and is defined as the circulation density at a particular point in the field.
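For instance (a minimal sketch with an assumed sample field), the rotating field \mathbf{f} = (-y, x, 0) circulates about the z-axis, and Equation (9) gives its curl as the constant vector (0, 0, 2); a finite-difference version confirms this:

```python
# Sketch: central-difference curl of the sample rotating field (-y, x, 0),
# whose curl is the constant vector (0, 0, 2).

def curl(F, p, h=1e-6):
    """Approximate curl F at p from Equation (9)'s partial derivatives."""
    def d(i, j):  # partial of component i with respect to coordinate j
        fwd = list(p); fwd[j] += h
        bwd = list(p); bwd[j] -= h
        return (F(*fwd)[i] - F(*bwd)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

F = lambda x, y, z: (-y, x, 0.0)
print(curl(F, (1.0, 1.0, 0.0)))   # approximately (0.0, 0.0, 2.0)
```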

Stokes’ Theorem and Gauss’ Theorem

There are four main fundamental theorems of vector calculus, of which we will cover the two most important in this paper. The first one we will be exploring is Stokes’ Theorem, which allows an integral taken over a closed curve to be turned into an integral taken over a surface bounded by the curve.

Definition: Stokes’ Theorem states that the line integral of a vector field \mathbf{a} around a closed curve is equal to the surface integral of the curl of the same vector field over any surface bounded by the curve, or in equation form:

(10)   \begin{equation*}\oint_C \mathbf{a} \cdot d\mathbf{l} = \iint_S (\nabla \times \mathbf{a}) \cdot d\mathbf{S}\end{equation*}

Let’s now explore Gauss’ Theorem, also known as the divergence theorem, which enables us to turn an integral taken over a volume into one taken over the surface bounding the volume, and vice versa.

Definition: Gauss’ Theorem states that the surface integral of a vector field \mathbf{a} over a surface S that bounds a volume V can be expressed as the volume integral of the divergence of \mathbf{a}, or in equation form:

(11)   \begin{equation*}\iint_{S} \mathbf{a} \cdot d\mathbf{S} = \iiint_{V} \nabla \cdot \mathbf{a} \, dV\end{equation*}

Simply put, Gauss’ Theorem explains that the sum of all the outward flow inside a volume is equal to the total outward flow from the volume quantified by the flux through its surface.
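We can verify Gauss’ Theorem numerically on a simple example (a sketch using the unit cube and the assumed field \mathbf{a} = (x, y, z), for which \nabla \cdot \mathbf{a} = 3): the volume integral of the divergence and the outward flux through the six faces should both equal 3.

```python
# Sketch: verify Gauss' Theorem on the unit cube [0,1]^3 for a = (x, y, z),
# where div a = 3. Both sides of Equation (11) should come out to 3.

n = 50
h = 1.0 / n
pts = [(i + 0.5) * h for i in range(n)]   # midpoint grid on [0, 1]

# Volume integral of div a = 3 over the cube (Riemann sum).
vol = sum(3.0 * h**3 for _ in pts for _ in pts for _ in pts)

# Outward flux through the six faces. On x = 0 the outward normal is -i and
# a . n = -x = 0 (similarly for y = 0 and z = 0); on each of the faces
# x = 1, y = 1, z = 1 we have a . n = 1, giving three unit contributions.
flux = sum(3 * 1.0 * h**2 for _ in pts for _ in pts)

print(vol, flux)   # both approximately 3.0
```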

Other Important Mathematical Topics

The Leibniz integral rule: The Leibniz integral rule (also known as Feynman’s integral trick) is a technique for interchanging derivatives and integrals that we will be using in this paper. The Leibniz integral rule states that for an integral of the form:

(12)   \begin{equation*}\int_{a(x)}^{b(x)} f(x, t) \, dt\end{equation*}

where a(x) and b(x) are functions of x, and f(x,t) is a function of the two variables x and t, the derivative of the integral with respect to x can be written as shown in Equation (13).

(13)   \begin{equation*} \frac{d}{dx} \left( \int_{a(x)}^{b(x)} f(x,t) \, dt \right) = \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x,t) \, dt + f(x, b(x)) \cdot \frac{d}{dx} b(x) - f(x, a(x)) \cdot \frac{d}{dx} a(x) \end{equation*}

and when a(x) and b(x) are constants, this simplifies to

(14)   \begin{equation*}\frac{d}{dx} \left( \int_{a}^{b} f(x,t) \, dt \right) = \int_{a}^{b} \frac{\partial}{\partial x} f(x,t) \, dt\end{equation*}

This rule enables us to bring a derivative inside an integral by turning it into a partial derivative.
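A quick numerical check of the constant-limit case in Equation (14) (a sketch with an assumed integrand f(x, t) = \sin(xt) on [0, 1]): differentiating the integral with respect to x should match integrating the partial derivative t\cos(xt).

```python
import math

# Sketch: numerical check of Equation (14) with the assumed integrand
# f(x, t) = sin(x t) on t in [0, 1], so the partial is t cos(x t).

n = 2000
h = 1.0 / n
ts = [(i + 0.5) * h for i in range(n)]   # midpoint grid on [0, 1]

def I(x):
    """Riemann sum for the integral of sin(x t) dt over [0, 1]."""
    return sum(math.sin(x * t) for t in ts) * h

x0, dx = 1.0, 1e-5
lhs = (I(x0 + dx) - I(x0 - dx)) / (2 * dx)        # d/dx of the integral
rhs = sum(t * math.cos(x0 * t) for t in ts) * h   # integral of the partial
print(lhs, rhs)   # the two values agree
```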

Linear Transformations, Eigenvectors and Eigenvalues: Eigenvectors and eigenvalues are important components of linear algebra that help relate operators to their physical observables. To understand eigenvectors correctly, linear transformations must be explored first.

A linear transformation2 is a mapping between two vector spaces that respects vector addition and scalar multiplication. For a linear transformation between vector spaces V and W (a map T: V \to W), the following must hold for any vectors v_1, v_2 in V and scalar a:

(15)   \begin{equation*}T(\mathbf{v_1} + \mathbf{v_2}) = T(\mathbf{v_1}) + T(\mathbf{v_2})\end{equation*}

(16)   \begin{equation*}T(a\mathbf{v}) = aT(\mathbf{v})\end{equation*}

In other words, linear transformations must follow the rules of vector addition and scalar multiplication.

With this knowledge of linear transformations, eigenvectors can be defined as the nonzero vectors that maintain their direction under a linear transformation. For a linear transformation (T), there are eigenvalues of (T), denoted (\lambda), such that

(17)   \begin{equation*}T \mathbf{v} = \lambda \mathbf{v}\end{equation*}

where \mathbf{v} is a nonzero vector in the vector space V and an eigenvector of T, since T\mathbf{v} is a scalar multiple of \mathbf{v}. If V is finite-dimensional, then the above equation can be represented as

(18)   \begin{equation*}\underline{\underline{A}}\mathbf{u} = \lambda \mathbf{u}\end{equation*}

where \underline{\underline{A}} is the matrix representation of T and \mathbf{u} is the coordinate vector of \mathbf{v}.
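The defining relation in Equation (18) is easy to check directly. As a small sketch (with an assumed example matrix), the symmetric matrix A = [[2, 1], [1, 2]] has eigenpairs \lambda = 3 with \mathbf{u} = (1, 1) and \lambda = 1 with \mathbf{u} = (1, -1):

```python
# Sketch: verify A u = lambda u for an assumed example matrix
# A = [[2, 1], [1, 2]], whose eigenpairs are (3, (1, 1)) and (1, (1, -1)).

A = [[2.0, 1.0], [1.0, 2.0]]

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

for lam, u in [(3.0, [1.0, 1.0]), (1.0, [1.0, -1.0])]:
    Au = matvec(A, u)
    assert all(abs(Au[i] - lam * u[i]) < 1e-12 for i in range(2))
print("both eigenpairs satisfy A u = lambda u")
```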

Maxwell’s Equations

Maxwell’s equations are a set of four main partial differential equations in electromagnetism that describe the behavior of electric and magnetic fields, their sources, and their interactions. These equations were formulated by the Scottish physicist James Clerk Maxwell in the 19th century and played a pivotal role in the understanding of electricity and magnetism. Table 1 provides a list of all four of Maxwell’s equations, which can be expressed in differential and integral form in the time or frequency domain. For the sake of simplicity, we will focus solely on Maxwell’s equations in the time domain. The first two equations we will explore are Gauss’ laws concerning electric and magnetic fields; these equations explain how electric and magnetic fields are produced by moving charges, currents, and changes to the fields. The third equation, the Maxwell-Faraday equation, predicts how a magnetic field produces an electromotive force (emf) when interacting with a circuit. The last equation, Ampère’s circuital law (with Maxwell’s addition to create the Ampère-Maxwell equation), relates a magnetic field’s circulation and direction around a closed loop to the current in the loop. These four equations are the backbone of many inventions used today that involve electricity, like electric generators and motors. The derivation of each equation will be detailed in the subsections following Table 1.

Equation | Integral form | Differential form
Gauss’ law for electric fields | \iint_{S} \mathbf{E} \cdot d\mathbf{A} = \frac{Q}{\varepsilon_0} | \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
Gauss’ law for magnetic fields | \iint_{S} \mathbf{B} \cdot d\mathbf{S} = 0 | \nabla \cdot \mathbf{B} = 0
Maxwell-Faraday equation | \oint_{C} \mathbf{E} \cdot d\mathbf{l} = -\iint_{S} \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{S} | \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
Ampère-Maxwell equation | \oint_{C} \mathbf{B} \cdot d\mathbf{l} = \iint_{S} \left(\mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right) \cdot d\mathbf{S} | \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
Table 1: Comparison of integral and differential forms of Maxwell’s equations in the time domain.

Derivation of Gauss’ Law for Electric Fields

A Gaussian surface is a completely enclosed surface that encapsulates a three-dimensional volume. Some Gaussian surfaces include the surface of a sphere or cube, and some non-Gaussian surfaces include the surface of a disk or the top of a cone.

Gauss’ Law states that the electric flux \Phi_E across any Gaussian surface is proportional to the net electric charge Q enclosed by the surface. This can be expressed as

(19)   \begin{equation*}\Phi_E = \frac{Q}{\varepsilon_0}\end{equation*}

where \Phi_E is the electric flux through a Gaussian surface S, Q is the total charge enclosed within the surface, and \varepsilon_0 is the electric constant, equal to 8.854 \times 10^{-12} \, \text{C}^2/(\text{N} \cdot \text{m}^2).

First, we must determine how to find the flux of a surface. For electric flux, this is defined as the electric field times the component of area perpendicular to the field. We will use a sphere (see Figure 1) for a Gaussian surface for the sake of simplicity, and solve for electric flux with (\mathbf{E}) being the electric field and (d\mathbf{A}) being the area vector for an infinitesimal area:

Figure 1: The Gaussian surface is an enclosed sphere and has an electric field pointing up (via Google Drawings).

The electric flux for a single infinitesimal area on the sphere can be represented as

(20)   \begin{equation*}\Phi_0 = E \, dA \cos\theta = \mathbf{E} \cdot d\mathbf{A}\end{equation*}

which is simply the dot product of the electric field and area. The total electric flux of the sphere then can be represented by the sum of the singular electric fluxes of all infinitesimal areas on the sphere, or a surface integral.

(21)   \begin{equation*}\Phi_0 + \Phi_1 + \Phi_2 + \dots = \sum_{n} \Phi_n = \iint_{S} \mathbf{E} \cdot d\mathbf{A}\end{equation*}

Now we can derive the other part of the integral form, where the total electric flux equals the total charge enclosed by the surface divided by the electric constant. We can do this by using Coulomb’s Law and for simplicity, we will have a scenario where a spherical surface encloses a single point charge.

Coulomb’s Law states that

(22)   \begin{equation*}\mathbf{E} = \frac{q}{4 \pi \varepsilon_0 r^2}\end{equation*}

where q is the point charge and r is the distance between the surface and the point charge.

We can now substitute \mathbf{E} into our equation relating total electric flux and the integral form of Gauss’ law. Since the field of a point charge is radial, \mathbf{E} is parallel to d\mathbf{A} everywhere on the sphere, so

(23)   \begin{equation*}\iint_{S} \mathbf{E} \cdot d\mathbf{A} = \iint_{S} \frac{q}{4\pi\varepsilon_0 r^2} \, dA = \frac{q}{4\pi\varepsilon_0 r^2} \iint_{S} dA\end{equation*}

Since we are dealing with a spherical surface in this case, the surface area A of the sphere is 4\pi r^2, so

(24)   \begin{equation*}\frac{q}{4\pi\varepsilon_0 r^2} \iint_{S} dA = \frac{qA}{4\pi\varepsilon_0 r^2} = \frac{q(4\pi r^2)}{4\pi\varepsilon_0 r^2} = \frac{q}{\varepsilon_0}\end{equation*}

We find that the total electric flux for a spherical surface enclosing a single point charge is equal to the point charge divided by the electric constant, which proves Gauss’ Law for this case.
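This result can also be checked numerically (a sketch with hypothetical values for the charge and radius): summing \mathbf{E} \cdot d\mathbf{A} over a spherical grid reproduces q/\varepsilon_0 regardless of the radius chosen.

```python
import math

# Sketch with hypothetical values: a 1 nC point charge inside a sphere of
# radius 2 m. Summing E . dA over the sphere should give q / eps0.

q, eps0, R = 1.0e-9, 8.854e-12, 2.0
E = q / (4 * math.pi * eps0 * R**2)   # radial field magnitude on the sphere

# Riemann sum of E . dA = E * R^2 sin(theta) dtheta dphi over the sphere.
n = 200
dth, dph = math.pi / n, 2 * math.pi / n
flux = sum(E * R**2 * math.sin((i + 0.5) * dth) * dth * dph
           for i in range(n) for _ in range(n))

print(flux, q / eps0)   # the two values agree
```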

Let’s talk about another scenario where an irregular surface encloses the same single point charge. Let’s note that S_1 is the spherical surface and S_2 is the irregular surface. We can compare the two and note a very important detail.

Figure 2: A comparison of the field lines that pass through the spherical and irregular surfaces (via Google Drawings).

We can see that the same number of field lines pass through surface S_1 as S_2, which implies that their surface integrals are the same, and Gauss’ law applies to irregular surfaces encapsulating a single charge as well.

(25)   \begin{equation*}\iint_{S_2} \mathbf{E} \cdot d\mathbf{A} = \iint_{S_1} \mathbf{E} \cdot d\mathbf{A} = \frac{q}{\epsilon_0}\end{equation*}

Now, we need to cover the most complicated scenario of multiple charges in an enclosed space. For any single charge, denoted by q_i in the region,

(26)   \begin{equation*}\iint\limits_{S} \mathbf{E}_i \cdot d\mathbf{A}_i = \frac{q_i}{\varepsilon_0}\end{equation*}

Using the Superposition Principle, the total electric field can be expressed as

(27)   \begin{equation*}\mathbf{E} = \sum_{i} \mathbf{E}_i\end{equation*}

and we can further plug in to find that

(28)   \begin{equation*} \iint\limits_{S} \mathbf{E} \cdot d\mathbf{A} = \iint\limits_{S} \sum_{i} \mathbf{E}_i \cdot d\mathbf{A} = \frac{\sum_{i} q_i}{\epsilon_0} = \frac{Q}{\epsilon_0} \end{equation*}

which proves Gauss’ Law for electric fields for any Gaussian surface, and states that the electric flux across a Gaussian surface is proportional to the net electric charge enclosed by the surface3.

To express the equation in differential form, we can first start with the integral form

(29)   \begin{equation*}\iint_{S} \mathbf{E} \cdot d\mathbf{A} = \frac{Q}{\epsilon_0}\end{equation*}

and use Gauss’ theorem to express the integral as

(30)   \begin{equation*}\iiint_V \nabla \cdot \mathbf{E} \, dV = \frac{Q}{\varepsilon_0}\end{equation*}

where V is the volume of the closed surface containing the total charge Q. Relating charge and charge density, we can rewrite the right side of the equation to get

(31)   \begin{equation*}\iiint_V (\nabla \cdot \mathbf{E}) \, dV = \iiint_V \frac{\rho}{\varepsilon_0} \, dV\end{equation*}

where \rho is the charge per unit of volume. We can then remove the triple integrals from both sides and obtain the differential form of Gauss’ Law for electric fields,

(32)   \begin{equation*}\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}\end{equation*}
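The differential form can be sanity-checked away from the charge, where \rho = 0 and the divergence of the point-charge field should vanish. Here is a minimal sketch (with the constant q/4\pi\varepsilon_0 set to 1 for simplicity):

```python
# Sketch: away from a point charge, rho = 0, so div E should vanish there.
# Constants are absorbed: E = r_hat / r^2, i.e. q / (4 pi eps0) = 1.

def E(x, y, z):
    r3 = (x * x + y * y + z * z) ** 1.5
    return (x / r3, y / r3, z / r3)

def div_at(p, h=1e-5):
    """Central-difference divergence of E at point p."""
    total = 0.0
    for i in range(3):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        total += (E(*fwd)[i] - E(*bwd)[i]) / (2 * h)
    return total

print(div_at((1.0, 2.0, 2.0)))   # approximately 0
```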

Derivation of Gauss’ Law for Magnetic Fields

Deriving Gauss’ Law for magnetic fields requires us to explore the Biot-Savart Law4, which describes the magnetic field generated by a constant electric current. We can start with visualizing a magnetic field around a straight wire flowing with current5.

Figure 3: The current is going upward and the magnetic field is counterclockwise from above (via Google Drawings).

The Biot-Savart Law states that the magnetic flux density d\mathbf{B} created at a point P around the wire by an infinitesimal current element d\mathbf{l} carrying a current I is directly proportional to dl, I, and the sine of the angle between the current direction and the position vector of the radius R, and inversely proportional to R^2. This creates the relationship

(33)   \begin{equation*}d\mathbf{B} \propto \frac{Id\mathbf{l} \sin \theta}{R^2}\end{equation*}

or rewriting this results in

(34)   \begin{equation*}d\mathbf{B} = \frac{k \cdot I \cdot d\mathbf{l} \cdot \sin(\theta)}{R^2}\end{equation*}

where the constant k is dependent on magnetic properties of the medium and system and can be expressed as

(35)   \begin{equation*}k = \frac{\mu_0 \mu_r}{4\pi}\end{equation*}

so that the final Biot-Savart Law can be written as

(36)   \begin{equation*}d\mathbf{B} = \frac{\mu_0 \mu_r}{4\pi} \cdot \frac{Id\mathbf{l} \sin\theta}{R^2}\end{equation*}

Upon taking the line integral of both sides over the path of the wire, we get another expression for the Biot-Savart Law

(37)   \begin{equation*}\mathbf{B} = \frac{\mu_0}{4\pi} \int_C \frac{\mathbf{I} \, d\mathbf{l} \times \mathbf{R}'}{\left|\mathbf{R}'\right|^3}\end{equation*}

where \mathbf{R}' is the displacement vector from the current element d\mathbf{l} to the point where the field is evaluated. To start the derivation of Gauss’ Law for magnetic fields, we will take the divergence of both sides to get

(38)   \begin{equation*}\nabla \cdot \mathbf{B} = \frac{\mu_0}{4\pi} \nabla \cdot \int_C \frac{(\mathbf{I} \, d\mathbf{l} \times \mathbf{R'})}{|\mathbf{R'}|^3}\end{equation*}

We can move the divergence into the integral and apply the identity for the divergence of a cross product, with the I\,d\mathbf{l} factor held constant:

(39)   \begin{equation*}\nabla \cdot (\mathbf{A} \times \mathbf{B}) = \mathbf{B} \cdot (\nabla \times \mathbf{A}) - \mathbf{A} \cdot (\nabla \times \mathbf{B})\end{equation*}

and we will get the following equation

(40)   \begin{equation*}\nabla \cdot \mathbf{B} = -\frac{\mu_0}{4\pi} \int_C I\,d\mathbf{l} \cdot \nabla \times \frac{\mathbf{R}'}{\lvert \mathbf{R}' \rvert^3}\end{equation*}

The entire expression on the right equals 0 because the following curl equals 0.

(41)   \begin{equation*}\nabla \times \frac{\mathbf{R}'}{|\mathbf{R}'|^3} = 0\end{equation*}

This gives us our desired result of the differential form of Gauss’ Law for magnetic fields

(42)   \begin{equation*}\nabla \cdot \mathbf{B} = 0\end{equation*}

which states that the magnetic flux through any closed Gaussian surface is equal to zero. Using the same method from the first derivation, we can take the triple integral of both sides over V

(43)   \begin{equation*}\iiint_V (\nabla \cdot \mathbf{B}) \, dV = \iiint_V 0 \, dV = 0\end{equation*}

and use Gauss’ theorem to arrive at our integral form.

(44)   \begin{equation*}\iiint_{V} (\nabla \cdot \mathbf{B}) \, dV = \iint_{S} (\mathbf{B} \cdot d\mathbf{S}) = 0\end{equation*}
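As a concrete check (a sketch using the idealized field of the straight wire from Figure 3, with constants absorbed), the field \mathbf{B} \propto (-y, x, 0)/(x^2 + y^2) circles the z-axis, and its divergence vanishes at any point off the wire:

```python
# Sketch: the field of a straight wire along the z-axis circles the wire,
# B proportional to (-y, x, 0) / (x^2 + y^2). Its divergence is 0 off the wire.

def B(x, y, z):
    s = x * x + y * y
    return (-y / s, x / s, 0.0)

def div_at(p, h=1e-5):
    """Central-difference divergence of B at point p."""
    total = 0.0
    for i in range(3):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        total += (B(*fwd)[i] - B(*bwd)[i]) / (2 * h)
    return total

print(div_at((1.0, 1.5, 0.0)))   # approximately 0
```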

Derivation of the Maxwell-Faraday equation

A derivation for Maxwell’s third equation requires us to turn to Faraday’s law of induction, which states that for a solenoid (see Figure 4)

(45)   \begin{equation*}\epsilon = -N \frac{d\Phi_B}{dt}\end{equation*}

where \epsilon is the induced voltage through the solenoid, N is the number of loops of wire in the solenoid, and \frac{d\Phi_B}{dt} is the instantaneous rate of change of magnetic flux with respect to time.

Figure 4: Solenoids like the ones pictured above often contain hundreds of loops of wire6.

Let’s set N equal to 1 and just assume that our solenoid is a single loop for simplicity. We now have

(46)   \begin{equation*}\epsilon = -\frac{d\Phi_B}{dt}\end{equation*}

and we can plug in the formula for scalar magnetic flux7

(47)   \begin{equation*}\Phi_B = \iint_S \mathbf{B} \cdot d\mathbf{S}\end{equation*}

into our modified Faraday’s law equation.

(48)   \begin{equation*}\epsilon = -\frac{d}{dt} \iint_{S} \mathbf{B} \cdot d\mathbf{S}\end{equation*}

We can then use the Leibniz integral rule from Section 2 to bring the derivative into the integral.

(49)   \begin{equation*}\epsilon = -\iint_S \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{S}\end{equation*}

Now, using Faraday’s law in integral form, which states that the circulation of the electric field \mathbf{E} around a path C is equal to our expression above, the rate of change of magnetic flux over time, we can further write that

(50)   \begin{equation*}\epsilon = \oint_C \mathbf{E} \cdot d\mathbf{l} = -\iint_S \frac{\partial\mathbf{B}}{\partial t} \cdot d\mathbf{S}\end{equation*}

which is the integral form of the Maxwell-Faraday equation. Now using Stokes’ Theorem, the closed line integral on the left hand side of the Maxwell-Faraday equation can be rewritten as a surface integral of the curl of the electric field.

(51)   \begin{equation*}\oint_C \mathbf{E} \cdot d\mathbf{l} = \iint_S (\nabla \times \mathbf{E}) \cdot d\mathbf{S} = -\iint_S \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{S}\end{equation*}

Bringing the negative sign of the expression on the right side into the integral and removing the surface integral from both sides of the equation results in the differential form of the Maxwell-Faraday equation, which states that a time-varying magnetic field will always produce an electric field.

(52)   \begin{equation*}{\nabla} \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}\end{equation*}
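Equation (52) can be verified on a concrete field (a sketch with an assumed plane wave, in units where c = 1): for \mathbf{E} = (0, E_0\cos(kx - \omega t), 0) and \mathbf{B} = (0, 0, (E_0/c)\cos(kx - \omega t)) with \omega = ck, the z-component of \nabla \times \mathbf{E} equals -\partial B_z/\partial t.

```python
import math

# Sketch: plane-wave check of the Maxwell-Faraday equation, units with c = 1.
# E = (0, E0 cos(kx - wt), 0), B = (0, 0, (E0/c) cos(kx - wt)), w = c k.

c, k, E0 = 1.0, 2.0, 1.0
w = c * k

def Ey(x, t): return E0 * math.cos(k * x - w * t)
def Bz(x, t): return (E0 / c) * math.cos(k * x - w * t)

x0, t0, h = 0.3, 0.7, 1e-6
curlE_z = (Ey(x0 + h, t0) - Ey(x0 - h, t0)) / (2 * h)   # (curl E)_z = dEy/dx
dBz_dt = (Bz(x0, t0 + h) - Bz(x0, t0 - h)) / (2 * h)
print(curlE_z, -dBz_dt)   # the two values agree
```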

Derivation of the Ampère–Maxwell equation

Finally, the derivation of the Ampère–Maxwell equation requires us to explore Ampère’s circuital law first. Let’s assume we have a wire carrying a current I. Ampère’s circuital law states that the closed line integral of the magnetic field H is equal to the total amount of scalar electric current I enclosed within the path of any shape C, and can be written as

(53)   \begin{equation*}\oint_C \mathbf{H} \cdot d\mathbf{l} = I\end{equation*}

We will temporarily use a different symbol, H, for the magnetic field where previously we have used B. These are different but closely related quantities: B is the total magnetic field and includes H along with material-specific contributions from the medium, while H is the magnetic field produced purely by the flow of current in the wires. The two quantities can be related with the following equation

(54)   \begin{equation*}\mathbf{B} = \mu_0 \mathbf{H}\end{equation*}

where \mu_0 is the constant for magnetic permeability.

Let’s isolate H and substitute it into Equation (53), and then multiply \mu_0 on both sides.

(55)   \begin{equation*}\oint_C \mathbf{H} \cdot d\mathbf{l} = \oint_C \frac{\mathbf{B}}{\mu_0} \cdot d\mathbf{l}\end{equation*}

(56)   \begin{equation*}\oint_C \mathbf{B} \cdot d\mathbf{l} = \mu_0 I\end{equation*}

We can now use Stokes’ Theorem to turn the closed line integral on the left side of the equation into a surface integral of the curl of the magnetic field.

(57)   \begin{equation*}\oint_C \mathbf{B} \cdot d\mathbf{l} = \iint_S (\nabla \times \mathbf{B}) \cdot d\mathbf{S}\end{equation*}

(58)   \begin{equation*}\iint_S (\nabla \times \mathbf{B}) \cdot d\mathbf{S} = \mu_0 I\end{equation*}

Now, let’s express the right side of Equation (58) as a surface integral as well, since the left side of Equation (58) is already a surface integral of a curl. We can look at the current density vector \mathbf{J}, which is defined as the electric current flowing per unit of surface area. This can be written as

(59)   \begin{equation*}J = \frac{dI}{dS}\end{equation*}

We can move the differential dS to the left side and integrate both sides to obtain an equation for I.

(60)   \begin{equation*}dI = J \, dS\end{equation*}

(61)   \begin{equation*}I = \iint_{S} J \cdot dS\end{equation*}

Now, we can substitute our expression for current into Equation (58) and remove the surface integrals on both sides.

(62)   \begin{equation*}\iint_S (\nabla \times \mathbf{B}) \cdot d\mathbf{S} = \mu_0 \iint_S \mathbf{J} \cdot d\mathbf{S}\end{equation*}

(63)   \begin{equation*}\iint_S (\nabla \times \mathbf{B}) \cdot d\mathbf{S} = \iint_S \mu_0 \mathbf{J} \cdot d\mathbf{S}\end{equation*}

(64)   \begin{equation*}\nabla \times \mathbf{B} = \mu_0 \mathbf{J}\end{equation*}

Here, we land at the differential form of Ampère’s circuital law with the magnetic field in terms of B.

We can now introduce another equation, the continuity equation:

(65)   \begin{equation*}\vec{\nabla} \cdot \vec{J} = -\frac{\partial \rho_v}{\partial t}\end{equation*}

which states that a net current flowing out of a volume (positive divergence) depletes the charge density \rho_v inside it, while a net current flowing in increases it. Whenever the charge density varies with time, as in the general situations we are dealing with, the right-hand side, and therefore \nabla \cdot \mathbf{J}, cannot equal 0.

We can now continue by taking the divergence of both sides of Equation (64), which results in the left-side term equaling 0 as the divergence of a curl is always equal to 0.

(66)   \begin{equation*}\nabla \cdot (\nabla \times \mathbf{B}) = \nabla \cdot \mu_0 \mathbf{J}\end{equation*}

(67)   \begin{equation*}\nabla \cdot (\nabla \times \mathbf{B}) = 0\end{equation*}

(68)   \begin{equation*}\nabla \cdot \mu_0 \mathbf{J} = \mu_0 (\nabla \cdot \mathbf{J}) = 0\end{equation*}

Referring back to Equation (65), this creates a contradiction, so we have to add an additional vector, \mathbf{J}_E, to Equation (64) and solve for it.

(69)   \begin{equation*}\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mathbf{J}_E\end{equation*}

Let’s now take the divergence of both sides with the arbitrary vector and simplify.

(70)   \begin{equation*}\nabla \cdot (\nabla \times \mathbf{B}) = \nabla \cdot (\mu_0 \mathbf{J}) + \nabla \cdot \mathbf{J}_E\end{equation*}

(71)   \begin{equation*}0 = \nabla \cdot (\mu_0 \mathbf{J}) + \nabla \cdot \mathbf{J}_E\end{equation*}

(72)   \begin{equation*}\nabla \cdot \mathbf{J}_E = -\mu_0 \nabla \cdot \mathbf{J}\end{equation*}

Using Equation (65), we can substitute -\frac{\partial \rho_v}{\partial t} into the right side of the equation.

(73)   \begin{equation*}\vec{\nabla} \cdot \mathbf{J}_E = \mu_0 \frac{\partial \rho_v}{\partial t}\end{equation*}

Let’s now look back to the earlier section where we derived Gauss’ Law for electric fields, focusing specifically on the differential form of the equation. We can solve for \rho and plug it into Equation (73).

(74)   \begin{equation*}\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}\end{equation*}

(75)   \begin{equation*}\varepsilon_0 \nabla \cdot \mathbf{E} = \rho\end{equation*}

(76)   \begin{equation*}\nabla \cdot \mathbf{J}_E = \mu_0 \frac{\partial}{\partial t} \left[\varepsilon_0 \nabla \cdot \mathbf{E}\right] = \mu_0 \varepsilon_0 \frac{\partial}{\partial t} \left[\nabla \cdot \mathbf{E}\right]\end{equation*}

Since the partial derivative with respect to t acts only on time and the divergence in \nabla \cdot \mathbf{E} acts only on space, the two operators are independent of each other, and we can rearrange the equation to bring the divergence outside of the partial.

(77)   \begin{equation*}{\nabla} \cdot \mathbf{J}_\mathbf{E} = \mu_0 \epsilon_0 \left( {\nabla} \cdot \frac{\partial \mathbf{E}}{\partial t} \right)\end{equation*}

\mu_0 and \epsilon_0 are constants, so we can cancel divergence on both sides and obtain an equation for our unknown vector J_E.

(78)   \begin{equation*}\mathbf{J}_\mathbf{E} = \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}\end{equation*}

Upon plugging J_E into Equation (69), we have obtained the differential form for the Ampère–Maxwell equation.

(79)   \begin{equation*}\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\end{equation*}

Now we can take the surface integral of both sides of Equation (79) and apply Stokes’ Theorem to arrive at the integral form of the Ampère–Maxwell equation.

(80)   \begin{equation*}\iint_{S} (\nabla \times \mathbf{B}) \cdot d\mathbf{S} = \iint_{S} \left(\mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right) \cdot d\mathbf{S}\end{equation*}

(81)   \begin{equation*}\iint_{S} (\nabla \times \mathbf{B}) \cdot d\mathbf{S} = \oint_{C} \mathbf{B} \cdot d\mathbf{l}\end{equation*}

(82)   \begin{equation*}\oint_{C} \mathbf{B} \cdot d\mathbf{l} = \iint_{S} \left(\mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right) \cdot d\mathbf{S}\end{equation*}

Applications and significance of Maxwell’s Equations

James Clerk Maxwell derived his equations in 1864 hydrodynamically, through a molecular vortex model informed by experimental results from Michael Faraday, Wilhelm Weber, and Friedrich Kohlrausch [8]. However, his original work contained twenty equations; the four cornerstone equations we know today are written in the notation of Oliver Heaviside, who simplified them [9]. Maxwell’s equations play a pivotal role in the laws of electromagnetism, as they established the connection between electricity and magnetism. Before Maxwell’s work, Faraday and Ampère separately explored electricity and magnetism through experiments involving currents and magnets, or magnetic fields produced by currents. Maxwell’s equations unified these experimental results through the Maxwell-Faraday equation derived above in Section 3.3, which states that a changing magnetic field induces an electric field, and a changing electric field induces a magnetic field. Faraday also discovered the existence of an electromagnetic force, combining aspects of electricity and magnetism into one force. Later, in 1895, Hendrik Lorentz derived the modern formula for the electromagnetic force, now known as the Lorentz force [10]. Additionally, Maxwell’s equations have many tangible uses, including the solenoids mentioned in Section 3.3. Referring to Figure 4, a solenoid is a helically coiled length of wire whose length is noticeably larger than its diameter. When an electric current flows through the wire of a solenoid, a nearly uniform magnetic field is created inside the helix. Solenoids are tangible objects that have been observed to produce magnetic fields and can be practically used to model real-world problems. This practical aspect is clear from the many real-world applications of solenoid coils that require specific magnetic fields to be produced.
Spectroscopy, ablation therapy, and magnetic resonance imaging (MRI) are some examples of situations where solenoids can be applied [11]. For the first Maxwell equation, Gauss’ law for electric fields, applications include capacitors, electric power systems, and semiconductor devices. Capacitors are electronic devices that store energy in the form of electric charge accumulated on two parallel plates insulated from each other. Gauss’ law is crucial for optimizing the capacitance and energy storage of capacitors, since the parallel plates sustain an electric field. In analyzing electric power systems, Gauss’ law is employed to study the distribution of electric fields in high-voltage equipment. For semiconductors, where the behavior of electrons and holes in materials is important, Gauss’ law is applied to analyze and design the electric fields within devices such as transistors and diodes. Practical applications of Gauss’ law for magnetic fields include maglev bullet trains, geomagnetic field studies, and magnetic sensors. For inventions that use magnetic levitation, like bullet trains, Gauss’ law is relevant to understanding the magnetic fields needed to achieve stable levitation and propulsion. In geophysics, Gauss’ law can be used to study the Earth’s magnetic field and understand its closed-loop nature. Gauss’ law for magnetic fields is also critical to the design and calibration of magnetic sensors, such as magnetometers and Hall effect sensors. Furthermore, the Ampère-Maxwell equation is crucial for understanding how changing electric fields contribute to electromagnetic waves in the propagation of radio waves, microwaves, and light. In antenna design, the equation helps engineers analyze radiation patterns and how varying electric fields contribute to the emission of electromagnetic waves from antennas.
Relating to other physics fields, Maxwell’s equations have had an impact on quantum mechanics [12] and relativity, which will be explored later in Section 4. Following the formulation of Maxwell’s theory of electromagnetism, physicists explored the nature of matter and energy at the quantum scale, and quantum mechanics emerged as a framework for understanding the behavior of electrons. Maxwell’s equations maintained their validity even in quantum mechanics, leading to the development of quantum electrodynamics (QED) and quantum optics.

The Klein-Gordon equation

The Klein-Gordon equation [13] is a fundamental equation in theoretical physics, especially in the areas of quantum field theory and relativistic quantum mechanics. Formulated by the physicist Oskar Klein and independently by Walter Gordon, this partial differential equation describes the behavior of scalar particles with both relativistic and quantum characteristics. The equation is important for understanding the behavior of elementary particles and how they interact with each other. Exploring the subtleties of the Klein-Gordon equation reveals deep insights into the nature of matter, energy, and the basic structure of the universe.

The Klein-Gordon equation is shown below.

(83)   \begin{equation*} -\frac{1}{c^2} \frac{\partial^2 \psi}{\partial t^2} + \nabla^2 \psi - \frac{m^2 c^2}{\hslash^2} \psi = 0 \end{equation*}
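As a quick consistency check (a sketch, not part of the derivation below), a one-dimensional plane wave \psi = e^{\frac{i}{\hslash}(px - Et)} satisfies the Klein-Gordon operator -\frac{1}{c^2}\frac{\partial^2}{\partial t^2} + \nabla^2 - \frac{m^2 c^2}{\hslash^2} exactly when the relativistic energy-momentum relation E^2 = p^2 c^2 + m^2 c^4 holds, which is the relation the derivation starts from. In SymPy:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
p, E, m, c, hbar = sp.symbols('p E m c hbar', positive=True)
psi = sp.exp(sp.I/hbar*(p*x - E*t))

# Klein-Gordon operator applied to the plane wave (1D form)
kg = -sp.diff(psi, t, 2)/c**2 + sp.diff(psi, x, 2) - m**2*c**2/hbar**2*psi

# The residual is proportional to E^2 - p^2 c^2 - m^2 c^4 ...
residual = sp.simplify(kg/psi)
# ... so it vanishes once the energy-momentum relation is imposed
print(sp.simplify(residual.subs(E, sp.sqrt(p**2*c**2 + m**2*c**4))))  # -> 0
```

This dispersion relation is exactly what the derivation in the next section encodes through the quantized operators.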

Derivation of the Klein-Gordon Equation

Let’s start with the formula for energy in classical mechanics for a free particle for non-relativistic speeds, where

(84)   \begin{equation*}E = \frac{1}{2} m \mathbf{v}^2\end{equation*}

The equation for momentum,

(85)   \begin{equation*}\mathbf{p} = m\mathbf{v}\end{equation*}

can be substituted into Equation (84) to get

(86)   \begin{equation*}E = \frac{\mathbf{p}^2}{2m}\end{equation*}

where E is the particle energy, p is momentum, and m is mass. We can now quantize both sides of the equation to get the Schrödinger equation for a free particle moving at non-relativistic speeds,

(87)   \begin{equation*}\hat{E}\psi = \frac{\mathbf{\hat{p}}^2}{2m}\psi\end{equation*}

which replaces E and \mathbf{p} with their respective operators, \hat{E} (the energy operator) and \mathbf{\hat{p}} (the momentum operator).
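To make the quantization step concrete, here is a small SymPy sketch (one spatial dimension, free particle; the symbols are generic placeholders) verifying that a plane wave satisfies Equation (87) once the non-relativistic energy E = p^2/2m is imposed:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
p, m, hbar = sp.symbols('p m hbar', positive=True)
E = p**2/(2*m)                       # non-relativistic free-particle energy
psi = sp.exp(sp.I/hbar*(p*x - E*t))

lhs = sp.I*hbar*sp.diff(psi, t)          # energy-operator side of Eq. (87)
rhs = -hbar**2/(2*m)*sp.diff(psi, x, 2)  # momentum-operator side of Eq. (87)
print(sp.simplify(lhs - rhs))  # -> 0
```

The same pattern, with the relativistic energy-momentum relation in place of E = p^2/2m, is what produces the Klein-Gordon equation below.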

In quantum mechanics, we can think of systems as states: a particle can be labeled with a general state vector |\psi\rangle, which is a vector in a complex Hilbert space. When we probe these states to make a measurement, such as measuring the momentum p of the particle, it corresponds to acting with the momentum operator \hat{p} on the state |\psi\rangle. In short, operators correspond to observable values.

Eigenvectors and eigenvalues are concepts from linear algebra that appear frequently in quantum mechanics in connection with operators and state vectors. For the momentum of the particle p_{\text{particle}}, the eigenvector equation is

(88)   \begin{equation*}\hat{p}|\psi_{\text{particle}}\rangle = p_{\text{particle}}|\psi_{\text{particle}}\rangle\end{equation*}

where \hat{p} is the momentum operator (a matrix) and |\psi_{\text{particle}}\rangle is an eigenvector of the momentum operator with eigenvalue p_{\text{particle}}. The equation above shows how an operator corresponding to a physical quantity like position or spin (here, momentum and energy) is related to its measured value.

To derive these operators, we can first look at the solution to Schrödinger’s equation, or a general wave function in three dimensions,

(89)   \begin{equation*}\psi = e^{\frac{i}{\hslash}(\mathbf{p} \cdot \mathbf{x} - Et)}\end{equation*}

where \hslash is Planck’s constant divided by 2\pi, E is the particle energy, and \mathbf{p} is the momentum vector. We can assume x_1, x_2, and x_3 represent the three dimensions and rewrite the wave function as

(90)   \begin{equation*}\psi = e^{\frac{i}{\hslash}(p_1 x_1 + p_2 x_2 + p_3 x_3 - Et)}\end{equation*}

Let \psi be a wave function. The partial derivatives of the wave function with respect to x_1, x_2, and x_3 are used to determine the gradient of \psi. Mathematically, this can be expressed as follows:

(91)   \begin{equation*}\frac{\partial\psi}{\partial x_1} = \frac{i}{\hslash} p_1 e^{\frac{i}{\hslash}(p_1 x_1 + p_2 x_2 + p_3 x_3 - Et)}\end{equation*}

(92)   \begin{equation*}\frac{\partial\psi}{\partial x_2} = \frac{i}{\hslash} p_2 e^{\frac{i}{\hslash}(p_1 x_1 + p_2 x_2 + p_3 x_3 - Et)}\end{equation*}

(93)   \begin{equation*}\frac{\partial\psi}{\partial x_3} = \frac{i}{\hslash} p_3 e^{\frac{i}{\hslash}(p_1 x_1 + p_2 x_2 + p_3 x_3 - Et)}\end{equation*}

(94)   \begin{equation*}\nabla\psi = \langle\frac{\partial\psi}{\partial x_1}, \frac{\partial\psi}{\partial x_2}, \frac{\partial\psi}{\partial x_3}\rangle\end{equation*}

Factoring out like terms gives

(95)   \begin{equation*}\nabla\psi = \frac{i}{\hslash} e^{\frac{i}{\hslash}(p_1 x_1 + p_2 x_2 + p_3 x_3 - Et)} \langle p_1, p_2, p_3 \rangle\end{equation*}

Substituting back in \psi and expressing momentum as a single vector value gives

(96)   \begin{equation*}\nabla\psi = \frac{i}{\hslash} \mathbf{p}\psi\end{equation*}

We can now isolate \mathbf{p}, treating it as an operator acting on \psi, to get

(97)   \begin{equation*}\mathbf{\hat{p}} = \frac{\hslash}{i} \nabla = -i\hslash \nabla\end{equation*}

which suggests that the momentum operator is equivalent to the value above.
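A short SymPy sketch confirms this result: applying -i\hslash\nabla to the plane wave of Equation (90) returns each momentum component as the eigenvalue.

```python
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t', real=True)
p1, p2, p3, E, hbar = sp.symbols('p1 p2 p3 E hbar', positive=True)
psi = sp.exp(sp.I/hbar*(p1*x1 + p2*x2 + p3*x3 - E*t))

# Apply -i*hbar*d/dx_k componentwise and read off the eigenvalue
eigenvalues = [sp.simplify(-sp.I*hbar*sp.diff(psi, xk)/psi) for xk in (x1, x2, x3)]
print(eigenvalues)  # -> [p1, p2, p3]
```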

We can apply the same process to energy, which involves only one dimension since time is a scalar. Taking the partial derivative of \psi with respect to t gives

(98)   \begin{equation*}\frac{\partial \psi}{\partial t} = -\frac{i}{\hslash} E e^{\frac{i}{\hslash}(\mathbf{p} \cdot \mathbf{x} - Et)} = -\frac{i}{\hslash} E\psi\end{equation*}

Cancelling the wave function on both sides and solving for the energy operator gives

(99)   \begin{equation*}\hat{E} = i\hslash \frac{\partial}{\partial t}\end{equation*}

which suggests that the energy operator is equivalent to the value above.
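The same check works for the energy operator: i\hslash\,\partial/\partial t acting on the plane wave returns E as the eigenvalue (SymPy sketch, one spatial dimension):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
p, E, hbar = sp.symbols('p E hbar', positive=True)
psi = sp.exp(sp.I/hbar*(p*x - E*t))

# i*hbar * d/dt on the plane wave returns the energy eigenvalue
print(sp.simplify(sp.I*hbar*sp.diff(psi, t)/psi))  # -> E
```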

Let’s now visit the energy-momentum relation from special relativity,

(100)   \begin{equation*}\mathbf{p}^2 c^2 + m^2 c^4 = E^2\end{equation*}

where p is momentum, c is the speed of light, and m is mass. Quantizing both sides of the equation gives

(101)   \begin{equation*}[\mathbf{\hat{p}}^2 c^2 + m^2 c^4]\psi = [\hat{E}^2]\psi\end{equation*}

and we can substitute our newfound operators into the equation to get

(102)   \begin{equation*}[(-i\hslash\nabla)^2 c^2 + m^2 c^4]\psi = [(i\hslash\frac{\partial}{\partial t})^2]\psi\end{equation*}

and further simplify the equation to get

(103)   \begin{equation*}[-\hslash^2 \nabla^2 c^2 + m^2 c^4]\psi = [-\hslash^2 \frac{\partial^2}{\partial t^2}]\psi\end{equation*}

Dividing by c^2 and \hslash^2 gives

(104)   \begin{equation*}[-\nabla^2 + \frac{m^2 c^2}{\hslash^2}]\psi = [-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}]\psi\end{equation*}

and moving all the terms to the left side and multiplying by a negative results in

(105)   \begin{equation*}[-\frac{1}{c^2}\frac{\partial^2}{\partial t^2} + \nabla^2 - \frac{m^2 c^2}{\hslash^2}]\psi = 0\end{equation*}

which is the form of the Klein-Gordon equation we were looking to derive.

Applications and significance of the Klein-Gordon equation

Oskar Klein in 1926 derived the relativistic wave equation that we now understand as a fundamental equation of modern physics. Walter Gordon derived the same equation independently, and their contributions are commonly combined and referred to today as the Klein-Gordon equation. The equation originated from attempts to extend wave-particle duality to relativistic particles, in which Erwin Schrödinger and Walter Ritz tried to construct relativistic wave equations. Schrödinger’s and Ritz’s work laid the foundation for the derivation of the equation we know today [14].

The Klein-Gordon equation’s theoretical significance lies in providing a relativistic framework for understanding and describing the behavior of fundamental particles. Scalar bosons, defined as particles with zero spin, are one example. In particular, the Higgs boson is a scalar boson associated with the Higgs field, the field that explains how fundamental particles acquire mass [15].

The Klein-Gordon equation describes the behavior of scalar particles in special relativity, so it is suitable for describing the dynamics of the Higgs boson.

The Klein-Gordon equation also contributes to a broader group of physics fields, helping to formulate theoretical models in cosmology and to describe other types of fields in quantum field theory, including fermionic fields and gauge fields. In inflationary cosmology, the early universe is thought to have undergone exponential expansion, and the behavior of the inflaton field is intricately described by the Klein-Gordon equation. Paired with an inflaton potential [16], the Klein-Gordon equation helps explain the evolution of the inflaton field, which influences the expansion rate and energy density of the universe. Furthermore, the Klein-Gordon equation is crucial for understanding the origin of structures in space such as galaxies, through predictions rooted in the equation compared with observational data.

In quantum field theory, extensions of the Klein-Gordon equation reach beyond electrodynamics to fermionic and gauge fields. Fermionic fields [17] account for particles with half-integer spin, and gauge fields govern force-carrying particles like photons and gluons. The Dirac equation, an extension of the Klein-Gordon equation, incorporates fermionic fields by allowing for the description of fermions with both positive and negative energy states. A different extension, the Yang-Mills [18] equations, generalizes electromagnetism to describe the non-abelian symmetries associated with gauge theories, like quantum chromodynamics.

Additionally, the Klein-Gordon equation relates to theoretical quantum chemistry. Quantum chemistry relies heavily on the Schrödinger equation to describe molecular systems, and the demand for greater accuracy has prompted exploration beyond its non-relativistic framework [19]. Inspired by principles from the relativistic Klein-Gordon equation, scientists are extending quantum chemistry methods to scenarios involving significant relativistic effects. While the Schrödinger equation excels in many situations, heavy atoms and processes approaching the speed of light highlight its shortcomings. The incorporation of Klein-Gordon-inspired principles has given rise to the field of relativistic quantum chemistry, offering more accurate predictions for such systems.

Conclusion

To summarize, vector calculus is a cornerstone for understanding the nuances of numerous physics equations, such as the four Maxwell’s equations and the Klein-Gordon equation. Using established laws of electromagnetism and the tools of vector calculus, we performed meticulous derivations of each of Maxwell’s equations in both differential and integral forms. We considered many different scenarios involving wires, closed loops, and Gaussian surfaces to carry out our derivations. We also delved into the intricacies of electromagnetism and the significance of Maxwell’s equations in the development of electromagnetism, along with the numerous applications and inventions they enabled.

For the Klein-Gordon equation, we delved into specifics of quantum mechanics like quantization and operators. To derive the Klein-Gordon equation, we had to quantize equations to obtain the operator forms of certain variables, namely momentum and energy. We learned how to derive the momentum and energy operators by taking the gradient or partial derivative of the general form of a wave function and solving for the operator. We applied vector calculus knowledge to derive the formulas for these operators and were able to solve algebraically for the form of the Klein-Gordon equation stated in this paper. We also elaborated on the theoretical applications of the Klein-Gordon equation in quantum mechanics and quantum field theory.

Additionally, vector calculus is used extensively in other fields of physics, like classical mechanics (specifically fluid mechanics), and in other real-world applications, like computer programming and graphics. Fluid mechanics is heavily explored using the Divergence Theorem and flux. In computer graphics, the gradient operator is crucial for creating smooth shading and realistic effects on screen: the gradient of a function gives the rate of change in color or intensity over a surface, which is used to create smooth lighting and detailed shadows in images, animations, and video games.

Looking toward future research, general relativity and quantum mechanics both rely on forms of vector calculus and contain many unknown or uncertain aspects. Quantum gravity remains an open problem, with several as-yet unproven theories proposed, including string theory and loop quantum gravity. With future developments and research, we can hope to fill in the gaps of knowledge and understand the unresolved mysteries of quantum mechanics.

Acknowledgements

I would like to thank my mentor, Ayngaran Thavanesan, for his remarkable teaching and explaining ability, and for his smooth guidance and availability for the duration of this research paper. His detailed clarifications of quantum mechanics notations and concise explanations on vector calculus specifics helped refine the rough spots of each derivation. I am extremely grateful to have such a specialized mentor and guide throughout this whole research process, who has immensely contributed to my growth as a scholar in the academic research community.

  1. D. P. Hampshire, A derivation of Maxwell’s equations using the Heaviside notation (2018).
  2. T. Rowland, E. W. Weisstein, Linear Transformation, https://mathworld.wolfram.com/LinearTransformation.html.
  3. C. Singh, Student understanding of symmetry and Gauss’s law of electricity (2016).
  4. B. Ricketti, Magnetostatics and the Biot-Savart Law (2015).
  5. G. Müller, R. Coyne, Gauss’s law for the magnetic field, Ampere’s law with applications (2020).
  6. D. Kumar, An Introduction to Solenoids (2019).
  7. S. W. Ellingson, Electromagnetics (2018).
  8. J. C. Maxwell, A Dynamical Theory of the Electromagnetic Field (1864).
  9. E. P. Dollard, Electromagnetic Induction And Its Propagation (2023).
  10. M. A. Natiello, H. G. Solari, Relational electromagnetism and the electromagnetic force (2021).
  11. D. E. Bordelon, et al., Modified Solenoid Coil That Efficiently Produced High Amplitude AC Magnetic Fields With Enhanced Uniformity for Biomedical Applications (2012).
  12. W. C. Chew, et al., Quantum Maxwell’s Equations Made Simple (2020).
  13. P. J. Bussey, Improving our understanding of the Klein–Gordon equation (2022).
  14. Z. J. Sabata, Relativistic Introduction to the Klein-Gordon Equation (2016).
  15. A. Tumasyan, et al., A portrait of the Higgs boson by the CMS experiment ten years after the discovery (2022).
  16. K. Kefala, et al., Features of the inflaton potential and the power spectrum of cosmological perturbations (2021).
  17. K. Rejzner, Fermionic fields in the functional approach to classical field theory (2011).
  18. L. Álvarez-Cónsul, et al., On the Kähler-Yang-Mills-Higgs equations (2020).
  19. M. Scherbela, et al., Solving the electronic Schrödinger equation for multiple nuclear geometries with weight-sharing deep neural networks (2021).
