# Engineering Page

An engineer is a scientist who masters known laws of physics to create useful products for the betterment of mankind. However, many engineers end up learning how to solve a specific type of problem without understanding where the process came from. On this page, I point out some things that helped me understand my classes. Since I am an electrical engineer, most of this material will be electrical engineering related, but occasionally I will include something that applies to all engineering disciplines.

- General Engineering
  - Core Classes
- Electrical Engineering
  - Core Classes
  - Controls and Communications
  - Electromagnetics and Solid State Physics

# General Engineering

## Core Classes

### Statics and Dynamics

Some things you should know about statics:

Of course, you need to know (total force)=(sum of) F=m*a=0 and (total torque)=(sum of) r x F=I*(angular acceleration)=0. If you are given a static system, you can analyze it by "summing forces and moments." To do this, you should be very careful with vector notation. Note that most problems are 2-dimensional, so moments always point in one direction (the + or - "z" direction), but in 3 dimensions you must add the moments as vectors. Also note that you may sum forces and moments about any point in the system.
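The "summing forces and moments" recipe can be sketched in a few lines of code. Here is a minimal example (the beam length, load size, and load position are made-up numbers) that finds the reaction forces for a simply supported beam with one point load:

```python
# Simply supported beam of length L with a point load W at distance d
# from the left support. Unknowns: reactions R1 (left) and R2 (right).
L = 4.0   # beam length (m) -- illustrative value
d = 1.0   # load position from left support (m)
W = 100.0 # downward load (N)

# Sum of moments about the left support (CCW positive) must be zero:
#   R2*L - W*d = 0
R2 = W * d / L

# Sum of vertical forces must be zero: R1 + R2 - W = 0
R1 = W - R2

# Check equilibrium, including moments about a *different* point (the
# right support) -- any point works for a static system.
assert abs(R1 + R2 - W) < 1e-9
assert abs(-R1 * L + W * (L - d)) < 1e-9
print(R1, R2)  # 75.0 25.0
```

Note that summing moments about the right support instead would give the same reactions, which is exactly the "any point" freedom mentioned above.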

In addition, you must learn how to analyze different objects. For instance, a rope does not "compress"...it can only be in tension or totally loose. Because of this, sometimes you must guess the state of a system and see if your results make sense. Other things you may see are pulleys, blocks of mass, ramps, rods, walls, etc.

A specific type of static problem is the "two-force member" problem--one with a static bar which has only two forces acting on it. It can be shown that the forces on a two-force member MUST lie along the line between the points of action (it does not matter what the shape of the bar is...it doesn't have to be a bar at all, actually). This fact usually simplifies the analysis of a system greatly. This technique is commonly used to analyze simple bridges, where the members of the bridge are modeled as massless rods connected by pins.

I believe most classes also teach how to plot the shear and bending moments of a rod under stress. To do this, you split the rod into sections and analyze each section to find the shear and bending tendencies of the rod as a function of position.

Another special case of statics is a "hydrostatic problem." You must learn how to calculate the force of water on a wall. Realize that the magnitude of the pressure in water is not constant as you go deeper. The pressure increases (P=density*g*depth), and so the force increases (F=P*A). Using this knowledge, you can "sum" forces (or rather, integrate them) and find the effective location of the point of action.
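The integration can be sketched numerically. In this minimal example the wall height, width, and fluid density are illustrative numbers; the sums approximate F = integral of rho*g*y*w dy and the moment used to locate the center of pressure:

```python
rho, g = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)
H, w = 3.0, 2.0        # wall height and width (m) -- illustrative

# Discretize the wall into horizontal strips; pressure P = rho*g*y
# grows linearly with depth y, and each strip feels dF = P * (w * dy).
N = 100_000
dy = H / N
F = 0.0   # total force
M = 0.0   # moment of the force distribution about the surface
for i in range(N):
    y = (i + 0.5) * dy        # depth of strip midpoint
    dF = rho * g * y * w * dy
    F += dF
    M += y * dF

center = M / F  # effective depth of the point of action

# Closed forms: F = rho*g*w*H^2/2, center of pressure at depth 2H/3.
assert abs(F - rho * g * w * H**2 / 2) / F < 1e-6
assert abs(center - 2 * H / 3) < 1e-4
```

The effective point of action lands at two-thirds of the depth, not at the middle, precisely because the pressure grows with depth.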

Some things you should know about dynamics:

In dynamics, acceleration and angular acceleration are no longer zero...they are additional variables for you to solve for. In general, the method of solving these problems is the same...sum forces and moments and set up the equations appropriately. Be very careful to sum moments about the center of mass, though, or your work will all be wrong. If you sum moments anywhere else, certain terms in the definition do not cancel out and must be accounted for.

In some problems, there is a relationship between angular acceleration and acceleration (like for a rolling wheel). In a spinning wheel problem, note that you may sum moments about the axis of rotation without worrying about those extra terms I mentioned above, even if the axis of rotation does not pass through the center of mass.

In dynamics, energy methods can be used to solve some problems. For example, a ball rolling down a hill exchanges potential energy for translational and rotational energy (kinetic energy).
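The rolling-ball exchange can be computed directly. Here is a minimal sketch (solid sphere, made-up drop height) using m*g*h = (1/2)*m*v^2 + (1/2)*I*omega^2 with I = (2/5)*m*r^2 and the rolling constraint omega = v/r:

```python
import math

g = 9.81
h = 2.0  # drop height (m) -- illustrative

# m*g*h = (1/2)*m*v^2 + (1/2)*(2/5)*m*r^2*(v/r)^2 = (7/10)*m*v^2
# Mass and radius cancel, leaving:
v_roll = math.sqrt(10 * g * h / 7)

# A frictionless sliding block converts everything to translation:
v_slide = math.sqrt(2 * g * h)

# Rolling arrives slower because some potential energy became
# rotational kinetic energy.
assert v_roll < v_slide
assert abs(v_roll**2 * 7 / 10 - g * h) < 1e-9
```

Notice that neither the mass nor the radius of the ball appears in the answer, which is why energy methods are so convenient here.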

### Thermodynamics

Subcooled liquids, saturated mixtures, and superheated vapors:

Suppose you have a liquid, say water at room temperature (sitting on your table at 1 atm pressure). In this state, it is a "subcooled liquid." As you increase the temperature, the volume of the water increases (and the density decreases). At some point (which depends on the pressure), the water will stop increasing in temperature (this temperature is called the saturation temperature, and the corresponding pressure is called the saturation pressure), but the volume will still increase. When this just starts to happen, the water is called a "saturated liquid." If we add any more energy, then the water will boil, and thus we will have a "saturated liquid-vapor mixture." When all of the water has been transformed into gas, the temperature will again begin to increase. At this point, the water is in the "saturated vapor" state. As the water increases in temperature, the gas expands even more, becoming a "superheated vapor."

All this assumes the pressure of the water is less than the critical pressure. If you look at this process for several pressures on a temperature vs. volume plot, you will find that the points where water is saturated lie under a curve which looks like a dome (the tip of the curve is the critical point). If you plot the process on a pressure vs. temperature plot, you will see a line, but the whole saturation process occurs at one point on the line.

As it turns out, you can completely describe the state of a system with 2 independent "properties," which may be temperature, pressure, volume (actually, specific volume, which is v=(V=volume)/mass), quality, or a number of other things. Quality (denoted x) is a number which only has meaning for saturated substances. If the substance is a sat. liq., then x=0. If it is sat. vap., then x=1. This number is designed so that: y = y(@ sat. liq.) + x*(y(@ sat. vap.) - y(@ sat. liq.)), where y is specific volume (v), internal energy (u), or enthalpy (h = u + P*v). Note that for saturated mixtures, temperature and pressure are dependent, and do not completely describe a system.
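The quality relation is easy to apply in code. In this minimal sketch, the vf and vg numbers are approximate steam-table values for water around 100 degrees Celsius and should be treated as illustrative only:

```python
# Specific volumes of saturated liquid and vapor (m^3/kg), roughly
# water at 100 C / 1 atm -- approximate table values, for illustration.
vf = 0.001043
vg = 1.6720

def v_from_quality(x):
    """Specific volume of a saturated mixture with quality x."""
    return vf + x * (vg - vf)

def quality_from_v(v):
    """Invert the relation: x = (v - vf) / (vg - vf)."""
    return (v - vf) / (vg - vf)

x = 0.4
v = v_from_quality(x)
assert abs(quality_from_v(v) - x) < 1e-12
# x = 0 gives the saturated liquid, x = 1 the saturated vapor:
assert v_from_quality(0.0) == vf and v_from_quality(1.0) == vg
```

The same two functions work unchanged for u or h; only the table values swap out.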

For superheated vapor, temperature and pressure are independent, and the specific volume and other properties can be found using tables or approximations (like the ideal gas approximation, P*V=n*Ru*T). Speaking of this approximation, there are many ways to write it: P*v=R*T, P*V=m*R*T, P*V=n*Ru*T, etc. These are all really the same, since Ru=universal gas constant, R=Ru/(M=molar mass), and mass=m=M*(n=mole number). There is an improvement on the ideal gas equation, which involves adding a compressibility factor, Z. In essence, P*v=Z*R*T, where Z=v(actual)/v(ideal). Usually, to find Z, one calculates the reduced pressure and temperature (Pr=P/P(critical) and Tr=T/T(critical)) and looks on a plot to find Z. To find u, note that Cv=du/dT, where Cv is the specific heat for constant volume processes. If the final temperature is unknown, then one way of finding (delta)u is to estimate the average value of Cv for the process, and use (delta)u=Cv*(delta)T. Finding h is done the same way, except Cp=dh/dT. One good thing to know is that for an ideal gas, Cp = Cv + R.
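These relations are simple to check numerically. Here is a minimal sketch for air; R = 0.287 kJ/kg-K and Cv = 0.718 kJ/kg-K are standard approximate values, and the state point is made up:

```python
R = 0.287   # gas constant for air (kJ/kg-K), approximate
Cv = 0.718  # specific heat at constant volume (kJ/kg-K), approximate

# Ideal gas law per unit mass: P*v = R*T
P, T = 100.0, 300.0        # kPa, K -- illustrative state
v = R * T / P              # specific volume, comes out to 0.861 m^3/kg

# For an ideal gas, Cp = Cv + R (here 1.005 kJ/kg-K)
Cp = Cv + R

# (delta)u = Cv*(delta)T with an (assumed constant) Cv:
dT = 50.0
du = Cv * dT               # kJ/kg
dh = Cp * dT               # kJ/kg
# Consistency check: dh = du + R*dT for an ideal gas
assert abs(dh - (du + R * dT)) < 1e-12
```

The same three lines with Ru = 8.314 kJ/kmol-K and a molar mass would give the per-mole form of the law.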

Compressed liquids are usually treated as saturated liquids (at the same temperature) when looking for u, v, or h. This works because these properties depend mostly on temperature and change very little with pressure. To illustrate, realize that the volume of a cup of water barely changes no matter how hard you squeeze it. But there is a better approximation for h: h=h(@ sat. liq.) + v(@ sat. liq.)*(P - P(@ sat.))

First Law of Thermodynamics:

The first law of thermodynamics is basically a statement of energy conservation. Energy includes heat and work, and there are many forms of heat and work transfer:

There are 3 kinds of heat transfer: conduction, convection, and radiation. Conduction is heat transfer between two touching substances (it comes from the transfer of energy between the particles in the substances). This is what causes a hot cup of coffee to cool down in a cold room. Convection is heat transfer between a solid and a moving gas or liquid, and combines conduction with fluid motion. This is the mechanism which cools down your body when you sweat. There are two kinds of convection: forced and natural. Forced convection occurs when you place a fan which blows on the object, and natural convection occurs when currents form on their own (imagine a hot object--the air around it heats up and becomes less dense, so it moves up, causing natural currents). Radiation is energy emitted by matter in the form of electromagnetic waves. This is the only form of heat transfer which can occur when a vacuum separates two objects. An example is radiation from the sun.

There are also several ways to transfer energy:

1. Electrical work: We=(integral)V*I dt (power p=V*I)

2. Boundary work: Wb=(integral)P*dV

3. Gravitational work: Wg=m*g*(h2-h1)=potential energy

4. Accelerational work: Wa=(integral)F*ds=(1/2)*m*(v2^2-v1^2)=kinetic energy

5. Spring work: Ws=(1/2)*k*(x2^2-x1^2)

6. Some other forms of work: magnetic work and electrical polarization work

For a closed system (no mass flow), the big equation is: dq-dw=du+d(ke)+d(pe), where ke=kinetic energy and pe=potential energy. Note that work here includes boundary work.

For a steady state or steady flow (SS/SF) system, the equation is: dq-dw=dh+d(ke)+d(pe). The dh term comes from the energy added by the flowing mass. Again, work includes boundary work. In addition to this, you must know m(dot, in)-m(dot, out)=(delta)m(dot, system)=0 for SS/SF systems, so m(dot, in)=m(dot, out).

There are lots of steady flow devices, which include nozzles, diffusers, turbines, compressors, throttling valves, mixing chambers, heat exchangers, and pipe and duct flow. To analyze all of these, you simply solve the equation above. Usually, a lot of terms can be assumed to be zero, like Q(dot), W(dot), and PE(dot) for a nozzle.
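For an adiabatic nozzle, for example, the SS/SF equation collapses to h1 + V1^2/2 = h2 + V2^2/2 once Q, W, and the potential energy change are taken as zero. A minimal sketch with made-up inlet numbers:

```python
import math

# Adiabatic nozzle: dq = dw = d(pe) = 0, so the energy balance is
#   h1 + V1^2/2 = h2 + V2^2/2
# Enthalpies in kJ/kg must be converted to J/kg (x1000) to match m/s.
h1, h2 = 3200.0, 3100.0  # kJ/kg -- illustrative inlet/exit enthalpies
V1 = 50.0                # inlet velocity (m/s)

V2 = math.sqrt(V1**2 + 2 * (h1 - h2) * 1000.0)
print(V2)  # 450.0 m/s
```

The unit conversion is where most mistakes happen: a 100 kJ/kg enthalpy drop is 100,000 J/kg, which is why the exit velocity is so large.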

To solve unsteady flow problems, just keep in mind energy conservation and mass conservation.

What are specific heats?

By definition, specific heat is the energy required to raise the temperature of a unit mass of a substance by one degree. This value will depend on the process executed. We can find a workable equation for Cv by the following method:

First, assume we have a constant volume process. Then boundary work=0, since the boundary does not move. Then dq - dw(non-boundary work) = du, where q and w are heat input and work output of the system. Then, by definition, dq - dw = Cv*dT, and so Cv=(du/dT)|@const. v. In a constant pressure process, there will be boundary work done, but wb + du = dh, and so dq - dw(non-boundary work) = dh, and Cp=(dh/dT)|@const. P.

One thing that confuses people is when to use Cv or Cp. The equations are derived for a certain process, but (for an ideal gas) they are valid for any process. For example, if you are finding u, you should use (delta)u=Cv*(delta)T even if volume isn't constant during the process.

For ideal gases, it turns out that Cp=Cv+R, since h=u+P*v=u+R*T and dh=du+R*dT.

For incompressible solids and liquids, v=constant, and it turns out that Cv=Cp=C. Usually, (delta)u is found either by finding an average value of C and using (delta)u=C*(T2-T1), or by using tables.

# Electrical Engineering

## Core Classes

### Circuits

How do you use Ohm's law and Kirchhoff's laws to analyze resistor circuits?

What happens when you add a dependent source in the circuit?

How to analyze RL and RC circuits:

How to analyze RLC circuits:

What is the transient and steady state response?

What is impedance?

How do you solve an op-amp problem?

What is mutual inductance, and what does the notation mean?

### Digital Design

What are gates, anyway?

What you need to know about transistors:

Truth tables:

K-maps:

Boolean Algebra:

Minimization:

A digital adder, subtracter, and multiplier examples:

PALs and PLAs:

Mealy vs. Moore machine:

Flip-flops, latches, MUXs, and other digital devices:

### Signals and Systems

Why you convolve the input with the system impulse response to get the output:

This is a question that eluded me for a very long time when I took signals and systems. Actually, this result is only true for LTI (linear and time-invariant) systems.

To prove this, first note a property of the Dirac delta function (which I will call d(t)): anything convolved with the delta function is itself (i.e., x(t)*d(t) = x(t)). Now, suppose we input x(t) and the output is y(t). We denote the operation performed by the system with y(t) = T{x(t)}. Then, (using quasi Mathematica notation for the integrals)

y(t) = T{x(t)}
= T{x(t)*d(t)}
= T{Integrate[x(u)d(t - u), {u, -infinity, infinity}]}

If we assume the system is linear, then we can view the integral as a sum and x(u) as a scalar. So the system response to x(t) is the same as the superposition of the scaled system responses to d(t - u) for all values of u. This means,

y(t) = Integrate[x(u)T{d(t - u)}, {u, -infinity, infinity}]

We define the system impulse response function h(t, u) = T{d(t - u)}. If we assume the system is time-invariant, then when we input a delayed impulse, the system impulse response must have the same delay time. Thus,

y(t) = Integrate[x(u)h(t, u), {u, -infinity, infinity}]
= Integrate[x(u)h(t - u), {u, -infinity, infinity}]
= x(t)*h(t)

And that's it! One interesting fact to come from this is that LTI systems are completely described by their impulse response. Once we know this, we can compute the output for any input signal.
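The same derivation holds for discrete-time LTI systems, where the convolution is a finite sum and easy to verify directly. A minimal sketch (the sequences are made up; h is a two-tap moving average):

```python
def convolve(x, h):
    """Discrete convolution: y[n] = sum over k of x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# The delta property: anything convolved with the unit impulse is itself.
x = [1.0, 2.0, 3.0]
assert convolve(x, [1.0]) == x

# An LTI system is fully described by its impulse response; for a
# two-tap averager h, the output for any input is x convolved with h:
h = [0.5, 0.5]
print(convolve(x, h))  # [0.5, 1.5, 2.5, 1.5]
```

The delta-property assertion is exactly the x(t)*d(t) = x(t) step the proof starts from, just in discrete time.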

When analyzing an LTI circuit in time, how can I tell when the output signal has an impulse in it?

Usually, to find the impulse response of a circuit completely through the time domain, we input a step function to the system and compute the resulting output using techniques from differential equations. Then, rather than trying to do some kind of deconvolution, we take the derivative of the output to get the impulse response of the circuit. This was not an obvious step for me to accept in circuit analysis, but it can be easily proven in Laplace, since a derivative implies multiplication by s, and this cancels out the 1/s from the step input.

In the time domain, you can determine if the output signal has an impulse in it by one of two methods (that I can think of). You can actually try to figure it out with logic, which is fine, but not reliable (for me anyway). Basically, you have to realize that the current through a capacitor and the voltage across an inductor may change instantaneously (to prove this to yourself, go back to their voltage/current relationships and note that the derivative of a continuous function may change instantaneously), and you must add the delta function (an impulse) if you are looking at the output of one of these.

I only actually do that to check my answers. You can find the exact impulse response if you include the step function in your answer for the output of the step input. (Oh yeah...remember that the step function is secretly multiplied by everything, because we input a step.) When you differentiate the output to get the impulse response, the impulse function always comes out if needed.
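Numerically, differentiating the step response really does recover the impulse response. A minimal sketch for a first-order (RC-like) system with an assumed time constant tau, whose step response is s(t) = 1 - e^(-t/tau) and whose impulse response is h(t) = (1/tau)*e^(-t/tau):

```python
import math

tau = 1.0          # assumed time constant
T = 0.001          # sample spacing
N = 5000

# Step response of the first-order system for t >= 0:
s = [1.0 - math.exp(-n * T / tau) for n in range(N)]

# Differentiate numerically (forward difference) to estimate h(t):
h_est = [(s[n + 1] - s[n]) / T for n in range(N - 1)]

# Compare against the analytic impulse response; the forward
# difference best approximates the derivative at the strip midpoint.
err = max(
    abs(h_est[n] - (1.0 / tau) * math.exp(-(n + 0.5) * T / tau))
    for n in range(N - 1)
)
assert err < 1e-6
```

A smooth step response like this one never produces an impulse in h(t); the impulse appears exactly when the step response itself contains a step (a discontinuity at t = 0), which is the case the paragraph above is warning about.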

As a side note, note that the impulse response and transfer function of a system are not the same thing. Technically, the transfer function is the Fourier (or Laplace) transform of the impulse response of a system.

When analyzing an LTI circuit in the frequency domain, the impulse function comes out in the math too (specifically it can happen when the order of the numerator polynomial is greater than or equal to the order of the denominator).

Some insight as to why Fourier transforms work and how they relate time and frequency:

There is nothing magic about Fourier and Laplace transforms. They simply convert a function in time to a function in frequency. This is most easily explained with Fourier series and transforms.

Recall the Fourier series synthesis formula, f(t) = (sum from n = -infinity to infinity of) cn * exp(j * wn * t). Knowing that this complex exponential can be converted to sine and cosine functions using Euler's equation (exp(j * x) = cos(x) + j * sin(x)), it is easy to see how a function may be built by deciding how much of each frequency you need to make up the signal (each sinusoid represents its own frequency), and adding them together. See the Fourier analysis section of the math page for more about this.

There are really 4 kinds of Fourier transforms. Simply put, there are transforms for periodic (Fourier series) and non-periodic (Fourier transform) signals, which are further subdivided into continuous and discrete functions.

First of all, aperiodic signals yield continuous frequency equations, and periodic signals have discrete frequency equations. To illustrate, suppose you are given a periodic signal, x(t). If we try to find an oddball frequency component in x(t) where the nodes of the oddball sinusoid don't match the signal period, the superposition of the oddball signal would be different over each period of x(t). Thus it is obviously not a component of the original signal. i.e., it's not an ingredient in the soup of frequencies needed to concoct our desired function, x(t). Only specific discrete frequencies will properly line up within the signal period.

Also, a continuous time signal implies an aperiodic frequency response, while a discrete time signal gives a periodic frequency response. One might even think of the frequency function as redundant instead of periodic. The reason for this is best described with a picture. First, imagine a signal x(t) = sin(2*pi*t/16) (period T = 16 sec). Suppose we sample it every second.

We can argue that the frequency 1/16 Hz fully describes these points. However, it turns out that one gets exactly the same samples from the sine curve y(t) = sin(2*pi*17*t/16) (period T = 16/17 sec), so the samples alone cannot distinguish the two frequencies.

This phenomenon is called aliasing, which is a result of the redundancy in possible frequencies for a discrete signal.
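The aliasing claim is easy to check numerically: sampled every second, sin(2*pi*t/16) and sin(2*pi*17*t/16) produce exactly the same values, since sin(2*pi*17*n/16) = sin(2*pi*n + 2*pi*n/16). A minimal sketch:

```python
import math

# Sample both sinusoids at t = 0, 1, ..., 15 seconds.
x = [math.sin(2 * math.pi * n / 16) for n in range(16)]
y = [math.sin(2 * math.pi * 17 * n / 16) for n in range(16)]

# The samples are indistinguishable: 17/16 Hz aliases to 1/16 Hz
# at a 1 Hz sampling rate.
assert max(abs(a - b) for a, b in zip(x, y)) < 1e-9
```

Any frequency of the form (1 + 16k)/16 Hz (integer k) gives the same sixteen samples, which is the "redundancy" in the discrete-time frequency response.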

See the Fourier analysis section of the math page for information about the validity of the Fourier transform.

Transforms, transforms, transforms!

There are a lot of transforms out there. Here are some of the most common.

1. Fourier Transform
This transform finds the steady state response. There are actually 4 different kinds of Fourier transforms. The transform for continuous aperiodic signals (Fourier transform), continuous periodic signals (Fourier series), discrete aperiodic signals (discrete-time Fourier transform or DTFT), and discrete periodic signals (discrete Fourier transform or DFT). All of these are very closely related, and we can in fact quickly map one transform to another when we modify the signal function (i.e., when we sample it, etc.).

2. Laplace transform
This transform can be thought of as an extension of the Fourier transform. It includes the steady state and transient response of a system.

3. Z-Transform
The z-transform can be thought of as an extension of the DTFT. It often brings light to the stability and causality of a discrete-time system.

All these transforms are closely related and have very similar properties and rules. For instance, calculating the transform of a function from scratch is difficult, but engineers can use shortcut properties for time delay or differentiation of a signal with a known transform.

You might also hear about the FFT (fast Fourier transform). This is an algorithm to compute the DFT of a discrete-time function efficiently.

I have also seen the Hilbert transform, which is actually not a transform. It essentially adds a 90 degree phase shift to a signal. Denoting the Hilbert transform with x(hat)(t), it is defined to be x(hat)(t)=1/(Pi*t) (convolved with) x(t) (where x(t) is the original signal). Taking the Fourier transform, we see X(hat)(f)=-j*sgn(f)*X(f), where sgn(f) is the sign or signum function. One application of the Hilbert transform is the conversion of a band pass signal into a low pass signal.
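The frequency-domain definition X(hat)(f) = -j*sgn(f)*X(f) can be demonstrated with a small DFT. Below is a minimal sketch (a pure-Python O(N^2) DFT, only meant for tiny signals) showing that the Hilbert transform of a cosine is the corresponding sine, i.e., a 90 degree phase shift:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def hilbert(x):
    """Apply X_hat(f) = -j*sgn(f)*X(f) in the frequency domain."""
    N = len(x)
    X = dft(x)
    H = [0j] * N
    for k in range(1, N):
        if k < N / 2:
            H[k] = -1j      # positive frequencies
        elif k > N / 2:
            H[k] = 1j       # negative frequencies
        # H[0] and H[N/2] stay 0 (sgn = 0 at DC and Nyquist)
    return [v.real for v in idft([X[k] * H[k] for k in range(N)])]

N = 16
x = [math.cos(2 * math.pi * 2 * n / N) for n in range(N)]
xh = hilbert(x)
expected = [math.sin(2 * math.pi * 2 * n / N) for n in range(N)]
assert max(abs(a - b) for a, b in zip(xh, expected)) < 1e-9
```

cos becomes sin, i.e., every frequency component is shifted by 90 degrees, which is exactly the property the bandpass-to-lowpass trick relies on.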

Just for fun, suppose we decided not to use sine and cosine functions to represent our signal. We might choose any set of basis functions, as long as the span of the basis space includes the signals of interest. Although, it would be desirable to use an orthonormal basis set (the inner product of two different basis functions is 0, and the inner product of a basis function with itself is 1) and ensure a one-to-one correspondence between signals in time and their representation in the new space. We could then model the given signal by finding its projection onto each basis function (take the inner product--i.e., for continuous functions, we take the integral over all time of the product of the functions). Note that sinusoids of different frequencies are all orthogonal to each other. And amazingly, the basis of these sinusoids spans all "well behaved" functions and even some wilder functions (though the Fourier transform is only guaranteed to exist if the Dirichlet conditions are satisfied). Also, with this basis set, we can adopt a frequency component interpretation that makes it easy to understand.

See the Fourier analysis section on the math page for more stuff about Fourier transforms.

What do Laplace Transforms have to do with electrical circuits?

We use Laplace to quickly analyze complicated circuits, and to easily find the response of a circuit for an arbitrary input signal. Also, Laplace and Fourier transforms reveal the frequency response of a circuit.

There are at least two ways to understand circuit analysis with Laplace transforms. We could come up with the time-domain equations for the circuit (which will always be linear differential equations if we have only linear components) and use Laplace transforms to solve the differential equations directly.

Alternatively, we can convert linear circuit components, i.e. resistors, capacitors, and inductors, to simple Laplace equivalents (see "why you put a source on inductors and capacitors" below for more info about this), and apply basic circuit laws (Kirchhoff's current and voltage laws, Thevenin equivalent circuits, etc.) to find the response of the circuit for any input. These methods are completely equivalent, though the second is probably computationally easier.

I love how Laplace transforms solve everything for you so easily and elegantly--the steady state and transient responses fall out automatically.

Why are resistors, capacitors, and inductors "linear?"

The current and voltage relationships for each of these components are v(t) = i(t) * R, i(t) = C * dv(t)/dt, and v(t) = L * di(t)/dt (respectively). All of these are linear relationships because they obey the superposition principle. Mathematically, this means that if the current through a device with voltage v1(t) is i1(t), and the current through a device with voltage v2(t) is i2(t), then the current for a voltage a * v1(t) + b * v2(t) (where a and b are constants) will be a * i1(t) + b * i2(t). Try it, it's true!
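Here is a quick numerical check of superposition for a capacitor (i = C * dv/dt, approximated with finite differences; the waveforms and the value of C are made up):

```python
import math

C = 1e-6   # capacitance (F) -- illustrative
T = 1e-4   # time step (s)

def current(v):
    """Approximate i(t) = C * dv/dt with a forward difference."""
    return [C * (v[n + 1] - v[n]) / T for n in range(len(v) - 1)]

t = [n * T for n in range(1000)]
v1 = [math.sin(2 * math.pi * 50 * tt) for tt in t]  # 50 Hz sinusoid
v2 = [tt ** 2 for tt in t]                          # a ramp-like voltage

a, b = 3.0, -2.0
v_combo = [a * p + b * q for p, q in zip(v1, v2)]

i1, i2 = current(v1), current(v2)
i_combo = current(v_combo)

# Superposition: i(a*v1 + b*v2) == a*i(v1) + b*i(v2)
err = max(abs(ic - (a * p + b * q))
          for ic, p, q in zip(i_combo, i1, i2))
assert err < 1e-9
```

The check works because differentiation (and the finite difference that approximates it) is itself a linear operation; the same test with v(t) = i(t) * R or the inductor relation passes for the same reason.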

Circuit analysis using Laplace transforms:

When doing circuit analysis, you can convert each component to its Laplace equivalent. That is, we may take the Laplace transform of the response functions of each component, generate an equivalent circuit in the frequency domain, and treat the components as usual. We can do this because the circuit components are linear (i.e., they obey the superposition principle), so, for example, the Laplace transform of the sum of voltages around a loop equals the sum of the Laplace transforms of the individual voltages.

For example, say we are given a circuit, and we want to use the loop rule. Well, we convert the voltage across each component to Laplace and sum them up. Voltages across resistors become R*Ir(s), voltages across inductors become L*s*Il(s) (assuming no initial conditions), and caps become Vc(s). These formulas come from v(t)=i(t)*R and v(t)=L*di(t)/dt. Similarly, we could find the Laplace transform of the currents through each component for node analysis.

You may also treat all of the components as resistors by calling the impedance of inductors L*s and capacitors 1/(C*s). For example, by calling the impedance of an inductor L*s, you will compute a voltage L*s*I(s).

Remember that you are taking a Laplace transform, so a step input voltage becomes V/s, but a dependent source is just V(s)! Some people get carried away and divide all of the variables by s just because the step input divides by s.

When you have a Laplace expression for the output, you can simply take the inverse Laplace, and you will have the response in the time domain.

One reason we go though all this trouble is to easily find the response of a circuit with any kind of input. A step signal, sinusoid, triangle, etc. Imagine solving a set of differential equations with an obscure input signal...it's scary. Back in circuits I, you learned the forms of the output of a RLC circuit for nice step inputs. The solution in differential equations is long and tedious, and they were only done for extremely simple circuits. The Laplace technique enables us to do much more.

Why you put a source on inductors and capacitors (when transforming a circuit into their Laplace equivalent) if they have initial conditions:

Most people find the conversion of a capacitor or inductor from time to Laplace very natural because they have seen the same conversions when working with phasors in circuits 101 (or ELEN 214 in Aggieland). But they are confused about the reason for adding the voltage or current sources when they have initial conditions. The proof is simple. Again, we just go back to the i-v characteristics of capacitors and inductors.

First we will analyze capacitors: i(t) = C * dv(t) / dt <=> I(s) = C * (s * V(s) - v(0)). This suggests that a capacitor can be modeled by the parallel connection of a resistor-like component with impedance 1 / (C * s) and a current source of -v(0) * C. Also, we can rearrange the Laplace equation to get V(s) = I(s) / (C * s) + v(0) / s. This shows why we add a series connection with a voltage source v(0) / s for the capacitor model.

Inductors are very similar: v(t) = L * di(t) / dt <=> V(s) = L * (s * I(s) - i(0)). This leads directly to a model of the inductor, L * s, in series with a -L * i(0) voltage source. If we rearrange the equation, we get I(s) = V(s) / (L * s) + i(0) / s. Thus, the circuit equivalent is a parallel connection of L * s and a current source i(0) / s.

These models actually make sense if you think about them. At t = 0, the impedance part of the capacitor model holds no charge (a capacitor's voltage drop can't change instantaneously), so the series voltage source makes sure that the circuit effectively sees the cap's initial condition. Note that the voltage source is v(0) / s because it acts like a step input of magnitude v(0) to the circuit. The inductor model makes sense too. No current flows through the L * s part at t = 0, so the parallel current source supplies the effective step current, i(0) / s, to the circuit.

In the heat of a test, I like to keep the above in mind and use a source transformation to convert between types of sources, depending on whether I want to do a loop or nodal analysis of the circuit.

Finally, you should notice that I have not gone into much detail about the polarity of the sources. This is because I haven't got a good picture to show you, and the thought of writing a large painful paragraph about the direction of an arrow and some plus and minus signs makes me shudder. All I will say about this is that the orientation of the initial condition matches the corresponding source orientation, and that one should pay attention to the negative signs in the equations for the capacitor and inductor transformations.

Stability in output functions:

A function is stable if it converges to zero in the time domain (it is semi-stable if it converges to some constant value). This basically means that the output can't have anything like an exp(a * t) term (where a is a non-zero number).

The spectrum of the output of a system is almost always in rational form (a ratio of polynomials). Typically, this is converted to a sum of easier functions (using partial fractions to get terms like k / (s - root), k constant) and directly converted to time using a table. But one can find out if the system is stable (and thus whether this task is worth their time) very quickly by just looking at the roots of the denominator of the function.

The following paragraph is probably hard to follow without a table of Laplace transforms handy. Also, remember that the Laplace of a sum of functions is equivalent to the sum of the Laplace of each individual function.

The roots of the denominator (poles) of the signal spectrum will be of the form p = a + b * j (j is the engineer's equivalent of the imaginary number i). If a root is real and distinct, it will add a k / (s - a) term to the partial fraction expansion, which is k * exp(a * t) in the time domain. This only converges to zero if a is negative. If b is non-zero, the above process is still true, but remember that a complex root a + b * j of a real signal implies that there is another root of the form a - b * j. It can be shown that these terms together contribute a k * exp(a * t) * cos(w * t + z) term (k, a, w, and z are constants), which also only converges to zero for a < 0. Finally, the same thought process will show that a root repeated N times contributes a k * t^(N - 1) * exp(a * t) term, which converges to zero if and only if (iff) a < 0. Basically, a system is stable iff the real parts of all of the roots are negative. If there is one real root with a = 0, then the system is semi-stable and converges to k * exp(0 * t) = k. But adding a second root with a = 0 will result in unstable output as t goes to infinity. If the two roots are complex conjugates (on the jw axis), we will see a pure sinusoid in the signal, which neither converges to zero nor blows up.
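This pole test is mechanical enough to code up. Here is a minimal sketch that classifies a signal from its pole list (poles are supplied directly, with multiplicity, rather than found by root-finding; "semi-stable" follows the usage above for convergence to a constant):

```python
from collections import Counter

def classify(poles, tol=1e-9):
    """Classify x(t) from the poles of X(s), listed with multiplicity."""
    if any(p.real > tol for p in poles):
        return "unstable"            # some exp(a*t) term with a > 0
    on_axis = [p for p in poles if abs(p.real) <= tol]
    # A repeated pole on the jw axis contributes t^(N-1) terms -> blows up.
    counts = Counter((round(p.real, 9), round(p.imag, 9)) for p in on_axis)
    if any(c > 1 for c in counts.values()):
        return "unstable"
    if any(abs(p.imag) > tol for p in on_axis):
        return "oscillates"          # pure sinusoid: neither decays nor blows up
    if on_axis:
        return "semi-stable"         # single pole at the origin: converges to a constant
    return "stable"                  # all poles strictly in the LHP

assert classify([-1, -2 + 3j, -2 - 3j]) == "stable"
assert classify([0, -1]) == "semi-stable"
assert classify([0, 0]) == "unstable"
assert classify([1j, -1j]) == "oscillates"
assert classify([2]) == "unstable"
```

Each branch mirrors one case from the paragraph above: a right-half-plane pole, a repeated jw-axis pole, a conjugate pair on the jw axis, a single pole at the origin, and the all-LHP case.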

The initial and final value theorems, and why they work:

The initial value theorem says that given a signal x(t) with Laplace transform X(s) (assume x(t) is continuous at x(0), that is, x(0) = x(0-) = x(0+)),

x(0) = (lim as s -> infinity of) s X(s)

Or in general (denoting the Nth derivative of x(t) with x^(N)(t)),

x^(N)(0) = (lim as s -> infinity of) s^(N+1) X(s) - s^N x(0) - s^(N-1) x^(1)(0) - ... - s x^(N-1)(0)

We can easily demonstrate the feasibility of the first formula. First recall the following property of Laplace transforms (denoting the Laplace transform of x(t) with L{x(t)} and using quasi Mathematica notation for integrals).

L{x^(1)(t)} = Integrate[x^(1)(t) e^(-st), {t, 0, infinity}]
= s X(s) - x(0)

Taking the limit, we get:

(lim as s -> infinity of) s X(s) - x(0) = (lim as s -> infinity of) Integrate[x^(1)(t) e^(-st), {t, 0, infinity}]

The RHS is zero because e^(-st) vanishes for every t > 0 as s -> infinity. Thus,

x(0) = (lim as s -> infinity of) s X(s)

This seems a lot easier than the proof that was in my signals and systems book...they assumed x(t) could be expressed as a Taylor expansion and plugged that into the definition of X(s).

The final value theorem says that if x(infinity) exists, then we have,

x(infinity) = (lim as s -> 0) s X(s)

We can prove this in the following way. Recall,

L{x^(1)(t)} = Integrate[x^(1)(t) e^(-st), {t, 0, infinity}]
= s X(s) - x(0)

Taking the limit, we see,

(lim as s -> 0) L{x^(1)(t)} = (lim as s -> 0) Integrate[x^(1)(t) e^(-st), {t, 0, infinity}]
= Integrate[x^(1)(t), {t, 0, infinity}]
= x(infinity) - x(0)

but we also know,

(lim as s -> 0) L{x^(1)(t)} = (lim as s -> 0) s X(s) - x(0)

Thus, canceling out x(0) we get,

x(infinity) = (lim as s -> 0) s X(s)

One should be cautious when using the final value theorem because it is only valid if x(infinity) exists. Otherwise, you will get an incorrect answer (and think it is right because no computational complications arose). It turns out that x(infinity) exists if and only if the real component of all the poles in X(s) are less than zero, except one pole can be at the origin (i.e., all poles are on the LHS of the complex plane except at most one can be at the origin).

This is all nice, but these proofs don't give us a lot of insight. I prefer to think of these theorems in the following way. Suppose we write X(s) like this (we assume there are no repeated or complex poles for simplicity; the analysis still works otherwise):

X(s) = c1/(s-p1) + c2/(s-p2) + ... + cN/(s-pN)

Assuming p1 = 0 (this is without loss of generality because we can say c1 = 0 if this is not the case), we know,

x(t) = (c1 + c2 e^(p2 t) + ... + cN e^(pN t)) u(t)

Clearly, x(0) = c1 + c2 + ... + cN. By looking at X(s), we see that indeed x(0) = (lim as s -> infinity of) s X(s), since each term s ci/(s - pi) tends to ci as s -> infinity.

Also, notice that x(t) is unstable (x(t) does not converge to a horizontal asymptote as t -> infinity) if any of the poles are in the RHP because the exponentials will tend to infinity as t -> infinity. Knowing this, we see that if poles p2, p3, ..., pN are in the LHP, then x(infinity) = c1. But we can find c1 easily from the residue method,

c1 = s X(s)|s=0 = (lim as s -> 0 of) s X(s)

And this is the final value theorem. See the section above called "Stability in output functions" for hints on stability and extending this analysis to cases with complex or repeated roots.
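The residue computation is easy to try out in Python with sympy (the particular X(s) below is an assumed example with one pole at the origin and the rest in the LHP):

```python
import sympy as sp

s = sp.symbols('s')

# Assumed example: one pole at the origin, the others in the LHP
X = 10 / (s*(s + 2)*(s + 5))

# Partial fraction expansion of the form c1/s + c2/(s + 2) + c3/(s + 5)
print(sp.apart(X, s))

# c1 by the residue method: evaluate s X(s) at s = 0
print(sp.limit(s*X, s, 0))   # 1, so x(infinity) = 1
```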

A demonstration of how LTI systems work:

Here is an interesting question:

A complex mechanical system is struck sharply with a hammer. The time response of the position, x(t), of the system is measured and available for computations. An arbitrary excitation is now applied to the system. Explain how the position of the system can be determined as a function of time.

In the following answer, I will denote the unit impulse function by d(t), and convolution by "*".

We will first identify the system, input, and output:

The system is the "complex mechanical system" which I will call h(t).
The input is an excitation (first the hammer blow, later the "arbitrary excitation"), which I will call v(t).
The output is the position of the system, which is called x(t).

We will input two signals to the system (a blow from a hammer and an arbitrary excitation) and see two different outputs. Assuming we are dealing with an LTI system, note that:

x(t) = h(t) * v(t).

To determine the position of the system for an arbitrary input, we will first examine what happens when we input a single blow from a hammer (the input and output for this case will be denoted with a 1). We can measure the output position of the system from the single hit, so we know x1(t). Using the above equation, we also know x1(t) = h(t) * v1(t).

From physics, we can show that a sharp strike between two objects can be modeled as an impulse, so our first input is an impulse with some known (measured) energy. Mathematically, this means v1(t) = E d(t), where E is the square root of the energy of the blow. Convolution with an impulse leaves a signal unchanged (h(t) * d(t) = h(t)), so x1(t) = E h(t), or:

h(t) = (1/E) x1(t).

So by using an impulse signal, we can easily find the system unit impulse response, h(t). Now, when we input some arbitrary excitation, v(t), we can use the fact that x(t) = h(t) * v(t) to find the position of the system, x(t).

We can't write an arbitrary input v(t) as a finite sum of impulse excitations (a unit step, for example, cannot be written this way). But if the input is a series of taps from a hammer, we can rewrite it as a linear combination of impulses:

v(t) = (SUM OF) an d(t-bn).

The scaling factor, an, is a measure of the energy of each impulse and bn identifies the time when each impulse occurs. Then from the above equations, we find that:

x(t) = h(t) * (SUM OF) an d(t-bn)
= (SUM OF) an h(t) * d(t-bn)
= (SUM OF) an h(t-bn)
= (SUM OF) (an / E) x1(t-bn).

So, in words, the position of the system is a linear combination of the responses to each impulse excitation in v(t). This result is always true for LTI systems.
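We can sketch this numerically in Python with numpy. The impulse response below (a decaying oscillation) is an assumed stand-in for the measured hammer response; the point is only that convolving with a train of scaled, shifted impulses gives the same answer as the scaled, shifted sum of impulse responses:

```python
import numpy as np

N = 200
n = np.arange(N)

# Assumed stand-in for the measured impulse response: a decaying oscillation
h = np.exp(-0.05*n) * np.sin(0.3*n)

# Input: three hammer taps with different strengths at n = 0, 40, 90
v = np.zeros(N)
v[0], v[40], v[90] = 1.0, 0.5, 2.0

# Output by convolution, x = h * v
x = np.convolve(v, h)[:N]

# Same answer as the linear combination of scaled, shifted impulse responses
x_sum = (1.0 * h
         + 0.5 * np.concatenate([np.zeros(40), h[:N-40]])
         + 2.0 * np.concatenate([np.zeros(90), h[:N-90]]))
print(np.allclose(x, x_sum))  # True
```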

Linearity, Time invariance, and Causality:

Finite dimensionality, relationship to LTI:

How finite dimensionality is related to circuits and other physical systems:

What is a filter?

Essentially, all systems are filters. For simplicity, suppose we have a continuous-time LTI system. Such systems are completely described by their impulse response, or equivalently, their transfer function.

Because the output spectrum of a system is the product of the input spectrum and transfer function (Y(f) = X(f)H(f)), the transfer function has the effect of selectively amplifying and attenuating frequency components of the input. In other words, the system "filters out" certain frequency bands of the input.
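A minimal sketch of this idea in Python with scipy (the sample rate, cutoff, and test tones are assumed values): a lowpass system keeps the 5 Hz component of the input and "filters out" the 200 Hz component.

```python
import numpy as np
from scipy import signal

fs = 1000.0                        # assumed sample rate, Hz
t = np.arange(0, 1.0, 1/fs)
x = np.sin(2*np.pi*5*t) + np.sin(2*np.pi*200*t)   # 5 Hz signal + 200 Hz "noise"

# 4th-order Butterworth lowpass, 50 Hz cutoff: H(f) ~ 1 in the passband,
# ~ 0 in the stopband, so it attenuates the 200 Hz component
b, a = signal.butter(4, 50, fs=fs)
y = signal.lfilter(b, a, x)

# Compare the two spectral lines in the output
Y = np.abs(np.fft.rfft(y))
f = np.fft.rfftfreq(len(y), 1/fs)
ratio = Y[np.argmin(abs(f - 200))] / Y[np.argmin(abs(f - 5))]
print(ratio < 0.01)                # True: the 200 Hz band has been filtered out
```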

Understanding bode plots:

Sample signals and systems problem:

Here is a sample problem in signals and systems.

The spectrum of a signal, M(f), contains components within a band of frequencies. The phase spectrum is defined as < M(f). If this signal is passed through a system, show that the time delay of each frequency component will be the same only if the phase spectrum of the system is linear with frequency.

I denote the Fourier transform of a function by X(f) = F{x(t)}.

Say, we input M(f) to the system with response H(f). If the phase spectrum of the system is linear with frequency, then < H(f) = - k f, where k is some constant (the negative sign is irrelevant, but will result in a delay in time rather than advancement in time). So,

H(f) = |H(f)| exp(- j k f)

|H(f)| is arbitrary, but for simplicity, say |H(f)| = 1 (this means that the system does not affect the magnitude spectrum of the signal).

Assuming we are dealing with a linear system and by using a property of Fourier transforms, we find that the output of the system will be,

Y(f) = H(f) M(f) = exp(- j k f) M(f) = F{m(t - k/(2 pi))}

So, the output will simply be a delayed version of the input (delayed by k/(2 pi)). Since the entire signal is delayed, each frequency component of the signal is delayed by the same amount.

The converse is also true. If the time delay, t0, of each frequency component in M(f) is the same for all frequency components, then the system shifts the entire signal in time by t0. We find,

Y(f) = F{m(t - t0)} = exp(- j 2 pi f t0) M(f)

Now, it is also true that Y(f) = H(f) M(f). Comparing this to the equation above, we see that H(f) = exp(- j 2 pi f t0). So, the phase spectrum of this system is < H(f) = - 2 pi f t0, which is linear with respect to frequency.

This demonstration shows that the time delay in each frequency component in M(f) will be the same only if the phase spectrum of the system is linear with frequency. QED.

If you are interested, you may want to look what happens if |H(f)| <> 1.
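Here is a numeric sketch of the result in Python with numpy (the two-tone test signal and delay are assumed values): applying H(f) = exp(-j 2 pi f t0) with |H(f)| = 1 delays every frequency component by the same t0. The shift is circular because the DFT treats the signal as periodic.

```python
import numpy as np

N, fs = 512, 512.0
t = np.arange(N) / fs
m = np.sin(2*np.pi*10*t) + 0.5*np.sin(2*np.pi*25*t)   # two frequency components

# Linear-phase, unity-magnitude system: H(f) = exp(-j 2 pi f t0)
shift = 32                       # delay in samples
t0 = shift / fs                  # delay in seconds
f = np.fft.fftfreq(N, 1/fs)
H = np.exp(-2j*np.pi*f*t0)

y = np.fft.ifft(H * np.fft.fft(m)).real

# Every component is delayed by the same t0, so the whole signal shifts
print(np.allclose(y, np.roll(m, shift)))  # True
```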

### Microelectronic Circuits

Op-amp circuits:

What is the difference between open circuit gain and closed circuit gain?

What is the slew rate limit?

Op-amp imperfections, and how they are modeled:

How can you make a function generator?

How to analyze a diode circuit:

What kinds of Diodes are there?

A note on fabrication of semiconductor devices:

How to analyze a BJT (common emitter configuration):

How to analyze a MOSFET:

What different kinds of FETs are out there?

How do you make an analog multiplier (and an adder, subtracter, integrator, and differentiator)?

Bode plots:

### Electromagnetics

What is EMI?

A while ago, I got a bunch of questions about EMI (electromagnetic interference). Below, I have written some stuff about it.

EMI is simply an electromagnetic field. Every material has an electromagnetic field, defined by Maxwell's equations. EMI is indistinguishable from any other electromagnetic field. It is viewed as interference because it is unwanted noise in an electrical device and tends to distort information. Usually, this is associated with some kind of communication system, such as noise in a coax cable or noise added to a signal between a transmitting and receiving antenna.

There are many ways to detect an electromagnetic field. One simple method is an antenna, which is basically just a wire. It works by letting the electrons in the wire move up and down (thus producing a current) with the electromagnetic signal (field) being received. If you are receiving from a cable, you could also coil a second wire around a ring encircling the cable to measure the magnetic field.

Since EMI is simply an electromagnetic field, it is also detected by these receivers. In fact, the problem is that we receive it when we don't want it. It is difficult to distinguish between noise and the intended signal, but we overcome this with many techniques. We could get two (or multiple) copies of the same signal and compare them, or shut off the signal and analyze the nature of the noise so we can guess what it will be when we receive the signal.

A good model of interference is additive white Gaussian noise (AWGN), which has a constant frequency spectrum (exists at all frequencies) and is described in terms of probability. As a side note, the term "white noise" comes from the idea of white light containing all frequencies of light.

If there are special circumstances in your system, then you should alter your model to take them into account. For example, if you know the message signal is digital, then there is a probability distribution for the received signal being one of each of the possible waveform representations. Then, you know a bit more about how the noise may have distorted your signal. As another example, note that while general noise has an even distribution over all frequencies, there might be some nearby source of interference at a specific frequency. If you know this information, the performance of your receiver will improve.

Though EMI exists at all frequencies, only EMI at the frequencies you are using will matter after proper filtering is performed. As an example, imagine receiving an AM radio station. All the other stations are basically noise if you are only interested in this one station. Since they are made to exist at other frequencies, we can filter them out completely so they no longer contaminate your desired music. Another analogy refers to light; humans can only see a certain range of light (the frequencies in the visible spectrum) and the amount of ultraviolet or infrared light bombarding their eyes does not affect what they see (aside from possibly damaging their eyes, but that's another phenomenon).

EMI is a nasty thing, but it's what makes communications fun!

What is the skin effect?

Electrons tend to stick to the outside of a conductor at high frequencies. Actually, the electron density in parts of the conductor doesn't really change, but current mostly flows on the edge of the conductor. This is because the electromagnetic field inside the conductor will decrease (attenuate) exponentially as you probe deeper into it. The distance below the surface in which most current flows is called the skin depth. For a good conductor, the skin depth is approximated by (skin depth) = 1/sqrt[Pi f u c], where Pi is 3.14159..., f is the frequency, u is the permeability of the conductor, and c is the conductivity of the conductor. For copper, u = 12.566e-7 H/m and c = 5.80e7 S/m.
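Plugging the copper numbers from above into the formula (a quick Python sketch; the two example frequencies are arbitrary choices):

```python
import math

def skin_depth(f, mu, sigma):
    """Skin depth of a good conductor: 1 / sqrt(pi * f * mu * sigma)."""
    return 1.0 / math.sqrt(math.pi * f * mu * sigma)

mu_cu, sigma_cu = 12.566e-7, 5.80e7       # copper, H/m and S/m (from the text)
print(skin_depth(60, mu_cu, sigma_cu))    # ~8.5e-3 m at 60 Hz: nearly 1 cm
print(skin_depth(1e9, mu_cu, sigma_cu))   # ~2.1e-6 m at 1 GHz: a few microns
```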

What is a loss angle?

The loss angle, defined by [tan (loss angle)] = c/(w e), where c is conductivity, w is angular frequency, and e is permittivity, measures the angle between the real and imaginary components of a phasor representing an electric field. Note from the definition that the loss angle shrinks as frequency grows: at low frequencies (c >> w e), conduction current dominates and the medium behaves like a conductor, while at high frequencies, displacement current dominates and the medium behaves like a low-loss dielectric. The idea of a loss angle is very abstract, and doesn't have anything to do with angles in the real world. It turns out that it is a measure of attenuation of current in a lossy medium, which relates it to skin depth.
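A small numeric illustration of the frequency dependence (a Python sketch; the seawater values, conductivity ~4 S/m and relative permittivity ~81, are rough textbook numbers assumed for illustration):

```python
import math

def loss_tangent(sigma, omega, eps):
    """tan(loss angle) = sigma / (omega * eps)."""
    return sigma / (omega * eps)

# Assumed rough values for seawater: sigma ~ 4 S/m, relative permittivity ~ 81
eps = 81 * 8.854e-12
print(loss_tangent(4, 2*math.pi*1e6, eps))    # >> 1 at 1 MHz: conductor-like
print(loss_tangent(4, 2*math.pi*1e10, eps))   # << 1 at 10 GHz: dielectric-like
```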

How is vector calculus used in EM?

About the intuitive interpretation of Maxwell's equations for electrostatics and magnetostatics:

Why do we use phasors, and what good do they do?

What is the Poynting Vector?

What are plane waves?

What is polarization?

### Device Physics

What is a Semiconductor?

I wrote the following definition of a semiconductor for my technical writing class (I added a bit of info to explain technical statements). It is directed toward a student with some background in basic chemistry and physics. I also included some notes on the history and impact of the semiconductor.

A semiconductor is a solid crystalline material that conducts a small amount of electricity at room temperature. The most common semiconductor material is silicon because it is abundant and inexpensive.

Semiconductor materials were studied in laboratories as early as 1830. However, the first semiconductor device, the galena crystal diode invented by Ferdinand Braun, was not developed until 50 years later. Though simple semiconductor devices, such as the BJT and JFET, continued to develop in the early and mid 1900's, integrated circuits were not made possible until 1959. This technology enabled companies to manufacture an entire circuit at once instead of making single components and manually connecting them. This allowed them to produce circuits more efficiently and at a lower cost, which helped start the computer revolution.

Semiconductors have a diamond lattice structure held together by covalent bonds between atoms (meaning the atoms share electrons in a neat diamond pattern). Some insulators, like diamond (a crystal of carbon), also have this structure, but the covalent bonds between the atoms are so strong that electrons cannot escape for conduction (the nucleus of a carbon atom is small, so valence electrons are held more tightly and the ionization energy of carbon is high). For germanium and silicon, these covalent bonds are relatively weak (since the nucleus is larger), and some of the electrons escape when given enough energy from the thermal vibrations of the crystal. Therefore, these materials can conduct only small amounts of electricity, which is why we call them semiconductors.

Good conductors, like copper, do not have the same physical structure as semiconductors. They have many loose electrons that form metallic bonds between the atoms. These electrons are not shared between atoms in a structured manner, but move about freely between atoms. The nature of these bonds make the material more conductive and malleable. Semiconductors are also different from conductors in that they rely on both electrons and holes for current. However, at high enough temperatures, a semiconductor will become a conductor because more electrons will have enough thermal energy to escape their covalent bonds.

Most semiconductor devices do not rely on intrinsic, or pure, semiconductor materials. We add small amounts of impurity atoms, or dopants, to pure semiconductors to alter their atomic structure. This technique of creating impure, or extrinsic, semiconductor materials allows us to control the conductivity and other parameters of the material.

The magic and power of the semiconductor is actually in its application. The simplest device is the diode, which is simply two joined semiconductors with different doping characteristics. The most important semiconductor device is the transistor, which is essentially a switch made of three different pieces of semiconductor. It has revolutionized the computer industry because it posed a cheap, small, and easy alternative to vacuum tubes. The transistor and the development of integrated circuits has enabled computers to become smaller, cheaper, and faster.

What is conductivity and mobility, and how do they relate to current density and electric potential?

What is the Hall Effect?

Some results if we treat an electron as a wave:

Basic quantum mechanics:

Bond types and crystal structure:

Density of states, electron density, and Fermi energy level:

Thermionic emission, photo emission (photoelectric effect), and field emission:

What is the band theory of solids?

What is effective mass, and effective number of electrons?

What are holes?

What is the difference between metals, insulators, and semiconductors?

What is an intrinsic and extrinsic semiconductor?

What is scattering?

## Controls and Communications

### Communication Theory

The Hilbert transform:

What is amplitude modulation?

What is frequency modulation?

Example problems in communications:

Below are 7 typical communications problems (in Word 97) and their solutions (in PDF).

### Digital Signal Processing

Sampling and aliasing:

Zero-order-hold sampling:

Suppose we have a signal, x(t). In ideal sampling, we essentially generate a pulse train carrying the discrete measured values x(nT), where n is an integer and T is the sampling period. It is well known that under appropriate conditions (i.e., if we sample at least at the Nyquist frequency), we theoretically can perfectly reconstruct the original signal from this pulse train (more about sampling to come in the section above).

This is a wonderful result, but we run into problems because we cannot practically generate or detect such sample impulses. One technique to overcome this difficulty is zero-order-hold sampling. Here, instead of generating a short impulse for samples (measured at t = nT), we hold their values until the next sample is read. As a result, we need a more complicated filter to reconstruct x(t) perfectly, or we can use a simpler lowpass filter to approximate x(t).

Also, zero-order hold sampling forces a time delay on the system. The paper below (in PDF format) explains this issue.
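A quick numpy sketch of zero-order-hold sampling (the sample period and test signal are assumed values): the held output is a staircase that tracks x(t), with an error bounded by the signal's maximum slope times T.

```python
import numpy as np

fs = 1000.0                  # fine "continuous-time" grid (assumed)
T = 0.02                     # sampling period: 50 Hz sampling (assumed)
t = np.arange(0, 1.0, 1/fs)
x = np.sin(2*np.pi*3*t)      # 3 Hz signal, well below the 25 Hz Nyquist frequency

# Zero-order hold: hold each sample x(nT) until the next sample arrives
step = int(T * fs)
x_zoh = np.repeat(x[::step], step)[:len(t)]

# The result is a staircase; the held value lags x(t) by up to one period T,
# which is the origin of the sampler's effective delay
print(np.max(np.abs(x_zoh - x)) < 0.4)   # error under max slope (2 pi 3) times T
```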

What is the Z transform?

What is the Fourier Transform?

What is the relationship between the Z and Fourier transforms?

What is the Fourier series?

The DFT:

The FFT:

Filters and filter design:

### Controls

The Laplace transform:

State equations:

How are Laplace, Differential equations, and Linear algebra related and used in controls?

When is a transfer function stable?

What are some common realizations of transfer functions?

How does an integrator stabilize a system?

Nyquist plots:

Bode plots:

### Digital Communications

Probability (general theory, conditional, random variables, random vectors, joint, marginal, independent, expectations):

What is source coding?

Quantization (uniform, scalar, vector):

PCM (uniform, non-uniform, differential, delta modulation):

LPC:

What is entropy (entropy, joint entropy, source-coding theorem)?

Huffman codes:

Lempel-Ziv codes:

What is channel coding?

Communication channels (DMC, BSC):

Channel capacity (mutual information, channel capacity, channel coding theorem):

Block codes:

Hamming distance and Hamming weight:

Optimal decoding rule (minimum distance decoding):

Binary vector space:

Linear block codes (systematic code, generator matrix, parity check matrix):

Repetition codes, Hamming codes, and Hadamard codes:

Distance properties of linear block codes (minimum distance, singleton bound, error correcting capability, error detection capability):

Syndrome decoding:

Convolutional codes:

Representation of convolutional codes (state-transition diagram, trellis diagram):

Transfer function of convolutional codes:

Catastrophic convolutional codes:

Optimal decoding of convolutional codes (Viterbi algorithm):

Proof of Poisson sum formula:

The Poisson sum formula is:

(sum over k = -infinity to infinity of) d(t - k T) = (1/T) (sum over n = -infinity to infinity of) e^(j 2 pi n t/T)

Where d(t) is the Dirac delta functional. We can prove this is the following way. First, define,

x(t) = (sum over k = -infinity to infinity of) d(t - k T)

then the Fourier series coefficients of x(t) are, (using quasi Mathematica notation)

xn = (1/T) Integrate [x(t) e^(-j 2 pi n t/T), {t, -T/2, T/2}]
= (1/T) Integrate [(sum over k = -infinity to infinity of) d(t - k T) e^(-j 2 pi n t/T), {t, -T/2, T/2}]
= (1/T) Integrate [d(t) e^(-j 2 pi n t/T), {t, -T/2, T/2}]
= (1/T) Integrate [d(t) e^(-j 2 pi n 0/T), {t, -T/2, T/2}]
= (1/T) Integrate [d(t), {t, -T/2, T/2}]
= 1/T

(in the second step, only the k = 0 impulse lies inside the range of integration). But we also have,

x(t) = (sum over n = -infinity to infinity of) xn e^(j 2 pi n t/T)
= (1/T)(sum over n = -infinity to infinity of) e^(j 2 pi n t/T)

And the Poisson sum formula results. This seems at first to be a surprising result, but it really isn't. It basically says that T-periodic complex exponentials are orthogonal (see the Fourier analysis section on the Math page for info about orthogonality). To illustrate, suppose we take the integral of both sides of the Poisson sum formula over one period. Then we have,

Integrate[(sum over k = -infinity to infinity of) d(t - k T), {t, -T/2, T/2}] = Integrate[(1/T) (sum over n = -infinity to infinity of) e^(j 2 pi n t/T), {t, -T/2, T/2}]

Pulling the sums out of the integral,

<=> (sum over k = -infinity to infinity of) Integrate[d(t - k T), {t, -T/2, T/2}] = (1/T) (sum over n = -infinity to infinity of) Integrate[e^(j 2 pi n t/T), {t, -T/2, T/2}]

Note that the Dirac delta function, d(t - k T), is zero for t <> k T. So each integral on the LHS is 1 if k = 0 and 0 if k <> 0, because for k <> 0 the impulse d(t - k T) lies entirely outside the range of integration. Defining dk(k) as the Kronecker delta function, we have,

<=> (sum over k = -infinity to infinity of) dk(k) = (1/T) (sum over n = -infinity to infinity of) Integrate[e^(j 2 pi n t/T), {t, -T/2, T/2}]
<=> 1 = (1/T) (sum over n = -infinity to infinity of) Integrate[e^(j 2 pi n t/T), {t, -T/2, T/2}]
<=> (sum over n = -infinity to infinity of) Integrate[e^(j 2 pi n t/T), {t, -T/2, T/2}] = T

It is now easy to see how this is a manifestation of the orthogonality of T-periodic complex exponentials. We can define integers m and l such that n = m - l. Then e^(j 2 pi n t/T) = e^(j 2 pi (m - l) t/T) = e^(j 2 pi m t/T) e^(-j 2 pi l t/T). The integral above is then the inner product of two complex exponentials, which is zero if the frequencies are different (m <> l, i.e., n <> 0) and T if the frequencies are the same (m = l, i.e., n = 0). Summing over all values of n, the above equation is clearly true.
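The orthogonality relation at the heart of this is easy to check numerically (a Python sketch; the period T = 2 and the indices 3 and 5 are arbitrary choices):

```python
import numpy as np

T = 2.0
M = 20000                          # grid points over one period
t = np.arange(M) * (T / M) - T/2   # uniform grid on [-T/2, T/2)
dt = T / M

def inner(m, l):
    """Inner product of e^(j 2 pi m t/T) and e^(j 2 pi l t/T) over one period."""
    f = np.exp(2j*np.pi*m*t/T) * np.conj(np.exp(2j*np.pi*l*t/T))
    return np.sum(f) * dt

print(abs(inner(3, 3)))   # T = 2: same frequency
print(abs(inner(3, 5)))   # ~0: different frequencies are orthogonal
```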

## Electromagnetics and Solid State Physics

### Applied Electromagnetic Theory

Transmission lines:

Waveguides:

Cavity Resonators:

Antennas:

Antenna power flow:

I was asked a question about antennas a while ago that seemed paradoxical. Imagine that you have a vertical current carrying wire (a dipole or monopole antenna). Further, suppose at some instant, an increasing positive voltage is applied so the current is flowing upwards. Using the right hand rule and viewing from the top, we determine that the magnetic field (H(x, y, z, t)) very near the conductor encircles the wire in a counterclockwise fashion (note that the EM field is a function of time and space because the waves take time to propagate outward, but here we assume the distance is very small). Also, noting that the electric field component tangent to a boundary is always continuous over the boundary, the electric field (E(x, y, z, t)) seems to be directed upwards because of the positive applied voltage. Then the Poynting vector, E x H, would be flowing into the conductor. But our intuition says that power flows outward. What is the problem here?

It turns out that the fields sort of go crazy near the conductor. Extra terms and vectors appear that converge to zero in the far-field region (these terms do not contribute to average power flow). As a matter of fact, the dominant near-field term in the electric field equation is phase shifted 180 degrees from the dominant far-field term, and the magnetic field is shifted 90 degrees. Because they are shifted by different amounts, the electric and magnetic fields clearly do not interact in the near-field region as they do in the far field.

Also, it's true that power flows outward, meaning the time-average power flow, (1/2) Re{E x H*} (E and H are phasors, Re{X} is the real part of X, and H* is the complex conjugate of H), is always positive. However, this does not necessarily mean that the instantaneous Poynting vector, E x H (with E and H as functions of time), is always positive. It turns out to be virtually always positive in the far-field region (because E and H are in phase), but can be negative in the near-field region (because E and H are 90 degrees out of phase).
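The phase relationship is the whole story in the time-average formula. A tiny Python sketch with made-up field magnitudes (the values 1.0 and 0.5 are assumed, not physical):

```python
import numpy as np

E = 1.0                            # electric field phasor (reference phase)

# Near field: E and H 90 degrees out of phase -> zero average power flow
H_near = 0.5 * np.exp(1j*np.pi/2)
print(0.5 * np.real(E * np.conj(H_near)))   # ~0 (to machine precision)

# Far field: E and H in phase -> positive average power flow
H_far = 0.5
print(0.5 * np.real(E * np.conj(H_far)))    # 0.25
```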

### RF and Microwave Wireless Systems

Transmissions lines:

What is a Smith chart?

Antennas:

Various Components:

### Semiconductor Lasers and Photodetectors

Rayleigh scattering and dispersion in silica fibers:

Direct versus indirect semiconductor materials:

Semiconductor structure and materials (Zinc blende, Binary, Ternary, Quaternary, Band gap, lattice constant, and refractive index):

How semiconductor lasers work (electron-hole confinement, optical confinement, the PN junction):

Semiconductor laser structure (homojunction, double heterostructure, and buried heterostructure):

The semiconductor laser as an optical waveguide:

The semiconductor laser as an electrical waveguide:

Longitudinal modes (equation, difference between modes, mode shift with index shift):

The laser threshold condition (absorption coefficient, gain, and mirror loss):

The index profile (maximum width of active region, confinement factor):

Near field and far field:

Common modern laser structures:

Recombination mechanisms (spontaneous recombination, stimulated recombination and absorption--assume electrons and holes already exist in appropriate places):

Stimulated emission and absorption (consider electron-hole densities):

Net stimulated emission (gain profile, relation to energy and frequency modes):

Fabry Perot Lasers:

Chirp:

Gain:

Threshold carrier density:

Power-current relationship:

Gain saturation:

Rate equations:

Optical power (internal and external quantum efficiency):

Direct modulation (parasitics):

External modulation (Mach-Zehnder electro-optic LiNbO3 intensity modulator):

Back to Stuff, Explained Home