Mass Drivers

work in progress - more will be added later

Coil guns, also known as mass drivers, are probably too expensive, due to the high cost of high voltage switching electronics near the breech (output) end. As we shall see, the switching cost scales proportionally to the payload mass and to the cube of the breech velocity.

Magnetic fields do not always go where we want them to - focusing them precisely usually involves channels of ferromagnetic material, which typically is pretty dense and heavy. The assumptions below are probably VERY optimistic. Rather than focus on magnetic fields, think in terms of energy fields ( {B^2}/2\mu_0 ) and energy gradients ( Pressure = \partial E / \partial x ). The vehicle is accelerated by a series of switched energy gradients, and it is the expansion of energy in a volume that ends up pushing a vehicle. Most of the discussion that follows is based on energy and power considerations, not the particular form that it takes.

Assume the target mass driver accelerates a 100 kg payload at 1000 m/s2 to 10,000 m/s. The acceleration length is L_a ~ = ~ V^2 / 2 a ~ = ~ 50 km. The power input P_V into the vehicle is mass times acceleration times velocity, P_V ~ = ~ M a V , ranging from zero at the start to 1e9 watts at the breech.
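These baseline numbers are easy to check with a few lines of Python, using only the figures stated above:

```python
# Target mass driver from the text: 100 kg payload, 1000 m/s^2,
# 10,000 m/s breech velocity.
M = 100.0          # payload mass, kg
a = 1000.0         # acceleration, m/s^2
V = 10_000.0       # breech (exit) velocity, m/s

L_a = V**2 / (2 * a)    # acceleration length, m
P_V = M * a * V         # peak power into the vehicle, W

print(f"acceleration length L_a = {L_a / 1000:.0f} km")   # 50 km
print(f"peak vehicle power  P_V = {P_V:.1e} W")           # 1e9 W
```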

Obviously, a coil gun is not applying that power to its entire length - it is only applied in the vicinity of the payload, over a few meters. And the energy density must be modulated in time - the vehicle is pushed by force, an energy gradient, not power. Energy gradients over distance in the moving payload frame of reference correspond to energy changing in time - power - in the stationary track frame of reference. So, the power being handled is not merely the power going into the vehicle, but the power being put into, and taken out of, the moving energy fields that supply the force gradients. Regardless of the structure used, the power level will have a multiplier - energy will be temporarily pushed into space, then taken back out, then put back in, with only part of it turning into vehicle kinetic energy.

In railguns, for example, most of the capacitor energy is turned into magnetic field between the rails, and ends up as heat, rail warping, and rail erosion - less than 10% becomes payload kinetic energy. That is part of the reason that most railguns are one-shot machines.

Assume the interacting region of the vehicle is confined to length L_V . Without getting into the detailed structure within that length L_V , we can assume that there must be switches moving power into that region, and out of it again, with switching times of at most L_V / 2V . If there is spatial modulation of the energy, with wavelength \lambda , then the switching times are \lambda / 2 V and the switches make L_V / \lambda cycles. For the purpose of scaling, we will ignore that for now.

In the ideal case, we can assume our coil gun is broken up into regions of length L_V , each of which must independently handle power levels of at least P_V . The time dt to accelerate from velocity V to V + dV is ( 1 / a ) dV . The number of regions traversed is ( V / L_V ) dt ~ = ~ ( V / a L_V ) dV . The vehicle power switched by each region (NOT including the power associated with building and collapsing energy fields) is M a V . So the total vehicle switched power P_T for the entire length of the coil gun is the integral of d P_T ~ = ~ M a V * V / a L_V dV ~ = ~ ( M / L_V ) V^2 dV , or P_T ~ = ~ ( M / 3 L_V ) V^3 . Naively assuming a switching technology whose cost scales only with peak power, independent of switching speed,
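A numerical sanity check of that integral in Python. The interaction length L_V = 10 m is an assumed illustrative value (the text does not fix one); both the closed form and the integral scale as 1 / L_V , so the comparison is independent of that choice:

```python
M = 100.0          # payload mass, kg
L_V = 10.0         # assumed interaction region length, m (illustrative)
V_exit = 10_000.0  # breech velocity, m/s

# Closed form: P_T = (M / (3 L_V)) V^3
P_T = M / (3 * L_V) * V_exit**3

# Midpoint-rule integration of dP_T = (M / L_V) v^2 dv
N = 100_000
dv = V_exit / N
P_num = sum(M / L_V * ((i + 0.5) * dv)**2 * dv for i in range(N))

print(f"closed form: {P_T:.4e} W, numeric: {P_num:.4e} W")
```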

The switching technology cost is proportional to the mass times the exit velocity cubed

If a demonstration system costing \$1000 for the switching electronics can launch 1 kg to 100 meters per second, then a system launching 100 kg to 10,000 meters per second will cost \$100,000,000,000 for the switching electronics - 100 times the mass, and a million times the velocity cubed. As component volumes approach the production of the entire industry, "learning curve" manufacturing improvements empirically reduce cost by a factor of 2 for every factor of 10 increase in production volume. So, assuming that power semiconductor production in the relevant sector is now around \$1,000,000,000, the two factor-of-ten volume increases cut the cost of the switching electronics to a mere \$25,000,000,000.
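The cost scaling, sketched in Python. The demonstration cost, the M V^3 rule, and the \$1e9 industry-volume figure are the assumptions stated above; the learning-curve rule is "a factor of 2 cheaper per factor of 10 in volume":

```python
import math

demo_cost = 1_000.0                      # $, switching cost for 1 kg to 100 m/s
scale = (100 / 1) * (10_000 / 100)**3    # mass ratio times velocity ratio cubed
naive_cost = demo_cost * scale           # cost by the M V^3 rule

industry = 1e9                           # assumed current production, $ (from text)
decades = math.log10(naive_cost / industry)
learned_cost = naive_cost / 2**decades   # factor of 2 cheaper per decade of volume

print(f"naive scaled cost:   ${naive_cost:,.0f}")
print(f"with learning curve: ${learned_cost:,.0f}")
```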

And if you think you can do better, please go make a fortune selling power supplies, then build launchers with that fortune. As we will see in the electronics section below, we are dealing with limits set by reliability statistics and the physics of real materials - the multi-billion dollar power electronics industry is already close to the limits set by nature.

But we are neglecting something. The vehicle is not directly driven by a closed pole, energy confined motor. Instead, the vehicle is driven by an energy gradient, which means we must be putting energy into a medium and taking it back out again. The energy gradient is proportional to the force, and the gradient must move. { dE \over dt } ~ = ~ { dx \over dt } ~ { dE \over dx } ~ = ~ V { dE \over dx } . This power also increases with velocity. How does it compare to vehicle acceleration power?


Power handling is not the whole story. As switching speed increases, more power goes into charging parasitic capacitance, and space charge regions in semiconductors. Some devices cannot run at microsecond speeds - high voltage thyristors are good under a kilohertz, but they cannot switch in microseconds. Faster electronics is more expensive electronics.

Faster pulses don't deposit as much heat as slower pulses, but the heat does not conduct as far, either. This is generally good in fast switching systems - the semiconductor area needed for switches goes up approximately linearly with power, for a constant power per area, but the pulse widths go down. The region storing the heat goes down as the square root of the pulse width, meaning the heat stored goes down as the square root also. However, the damaging thermal gradients (and associated stresses) remain about the same.

Faster pulses mean more skin effect in conductors, and the higher currents (or voltages) imply larger cross sections (or insulation thicknesses). For Litz wire conductors, the insulation will need to be fairly thick, because high voltage differences will exist between inner and outer strands, and shorts leading to current loops can create devastating hot spots.

General note on electronics and reliability

"Cheap, fast, good - pick two" old electronics adage.

Any system incorporating kilometers of electronics must be made with highly reliable components used in highly reliable ways. If the system is dependent on millions of assemblies containing hundreds of components each, working for many years, then each component must have a mean failure rate of less than a part per trillion per hour. No such components exist. Even with redundancy, fail-safes, and careful design for uncorrelated failure modes, a multi-year lifetime is difficult to achieve.
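An order-of-magnitude illustration of that failure budget in Python. The counts are round numbers in the spirit of the paragraph above, not data:

```python
assemblies = 1e6          # "millions of assemblies" (illustrative)
components = 100          # "hundreds of components each" (illustrative)
years = 10                # "many years" (illustrative)
hours = years * 8766      # hours in an average year

total = assemblies * components
# Failure rate per component-hour that gives about one expected
# failure across the whole system over its life:
budget = 1.0 / (total * hours)

print(f"failure budget: {budget:.1e} per component-hour")
```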

Mass drivers and electromagnetic launchers in general can tolerate some kinds of imperfect reliability. If a vehicle is travelling down the track at 6 kilometers per second, accelerating at 40 meters per second squared, and a typical impulse section is 10 meters long, then a "stuck off" section will hit the vehicle with a jerk impulse of -40 m/s2 for 1/600 of a second, a 6.7 cm/s velocity impulse. If the section is "stuck on", the impulse could be many times that, and the vehicle would hit a magnetic wall. If the section is stuck on with high overcurrent (which proper fusing should prevent), then it will shut down the power bus feeding it -- a long stretch of jerk impulse, possibly catastrophic if the payload starts turning.
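The "stuck off" impulse arithmetic, as a quick check:

```python
V = 6_000.0      # vehicle velocity, m/s
a = 40.0         # nominal acceleration, m/s^2
section = 10.0   # impulse section length, m

dt = section / V      # transit time over the dead section: 1/600 s
dv = a * dt           # velocity shortfall: ~6.7 cm/s

print(f"transit time {dt * 1000:.2f} ms, velocity impulse {dv * 100:.1f} cm/s")
```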

Heat is the enemy of reliability. The failure rate of components typically doubles with every 10C temperature increase. The failure rate also tends to follow a "bathtub curve", with high initial failure rates, a few years of moderate rates, followed by high "wear-out" rates near the end of life. Most wear-out mechanisms are related to mechanical stress, impurity infiltration, and material migration, particularly where currents or voltages or temperature gradients are high.

In consumer electronics, there is such a thing as "too much reliability". Modern electronics is designed to last a fairly long time, on average, to reduce the number of expensive warranty returns. If you want to replace less than 0.1% of your cell phones during a one year warranty period, you might need to design most components to last 10 years, with an average one year failure rate of one part per million. However, components with low "bathtub" rates, which reliably last three years but suffer from wear-out in five, are just fine for 1 year warranty products. Electrolytic capacitors and microprocessor sockets are examples of this kind of component. Adding material for robustness often makes assemblies larger, increases price, and reduces performance. Given that most consumers replace old models with higher performance new models more frequently than the components wear out, creating more sales opportunities, there are strong counter-incentives to maximize reliability beyond warranty demands.

Indeed, decreasing reliability is often mandated by law. RoHS lead-free solder has a high tin content, and tin crystals have the unfortunate tendency to relieve stress by growing filaments of tin out of the side of solder joints. These filaments short across the small insulating gaps in miniaturized electronics. Low-lead solders melt at higher temperatures, so there is more built-in stress when they cool. If an electronic system handles a lot of power, or pulse power, the temperature cycles can severely aggravate whisker formation.

This is what high-volume, Commercial-Off-The-Shelf (COTS) electronics is geared for. That makes these inexpensive and well characterized components risky to use for military, space, and medical applications. On the other hand, the very low production volumes of these high-reliability markets lead to poor characterization, which is itself a source of low reliability. Perhaps the best match for high-reliability markets is automotive-grade parts. For safety reasons, these parts must be highly reliable, while their volumes are high enough to ensure scrutiny and lower cost. The mediocre track record of automotive electronics suggests it will be a while before these components live up to their potential.

Semiconductor Reliability

Inside power electronic chips, semiconductors are poor heat conductors compared to metals. Mono-isotopic diamond has the best heat conduction of any semiconductor - far better than any metal - but we still don't know how to make useful transistors with it. Graphene may work better still, but we don't know how to reliably fabricate that. Exotic III-V compounds like gallium arsenide have poor heat conduction, and insulators are worse. Among production-ready materials, silicon still reigns. So, assume silicon for bulk power handling - 99%+ of all power electronics uses silicon power components.

Next, arbitrarily assume good heat sinks and a maximum junction temperature of 100C. Assume a maximum temperature difference across the semiconductor thickness, and a maximum heating per pulse, of 50K. A mass driver in vacuum must cool by radiation, so this may be optimistic. Most wearout mechanisms double every 10K, and stress is greatly increased by temperature differentials and temperature cycling.

The most popular semiconductor switch is the power MOSFET. Unlike planar logic devices, these vertical devices use the whole depth of the die as a drift region, to stand off higher voltages. The drift region must be thick enough, and lightly doped enough, to keep the maximum electric field below breakdown. However, a thicker die and lower doping also increase the on-resistance, which scales as the 2.5 power of the breakdown voltage V_B .

How thick? How much resistance? With E_C the critical (avalanche) field of silicon, from Hu:

(1) T ~ = ~ 3 V_B / 2 E_C ~ = ~ 0.0183 V_B^{1.2} microns

(2) R_{on} ~ = ~ 8.3e-9 V_B^{2.5} ~ ~ ohm-cm^2

See and "A Parametric Study of Power MOSFETS" by Chenming Hu, IEEE Power Electronics Specialist Conference, 1979 pp 385-395.

The above equations are most accurate with V_B ranging from 200 to 2000 volts. They are also based on the very optimistic assumption that the silicon has been thinned to just the width of the maximum depletion region. In normal manufacturing, the silicon is an epitaxial layer above an N+ base wafer, adding to the thermal resistance.

The heat capacity of silicon is 1.65 MJ/m^3-K. For a 50C rise, and a die T microns thick and 1 cm^2 in area, that is:

(3) C ~ = ~ 8.25e-3 T ~ ~ J / cm^2 . . . T in microns

The thermal conductivity of silicon is 149 W/m-K . For a 50C rise from the center to the surface of a die T microns thick and 1 cm^2 in area, the thermal resistance is:

(4) R_T ~ = ~ 6.7e-7 T ~ ~ cm^2 / W

The thermal time constant is the product of these:

(5) t_{th} ~ = ~ 5.5e-9 T^2 ~ ~ sec . . . T in microns

If the pulse period is larger than the thermal time constant, the heat has time to diffuse from the silicon die into the copper substrate, adding that heat capacity to the heat capacity of the silicon itself. For shorter pulse periods, the heat is confined to the silicon die itself.
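Equations (1) through (5) collected into a short Python sketch, using the fit constants above (Hu's fits are stated to be most accurate for V_B between 200 and 2000 volts):

```python
def mosfet_thermal(V_B):
    """Ideal vertical power MOSFET scaling per equations (1)-(5)."""
    T = 0.0183 * V_B**1.2       # (1) die (drift) thickness, microns
    R_on = 8.3e-9 * V_B**2.5    # (2) specific on-resistance, ohm-cm^2
    C = 8.25e-3 * T             # (3) heat stored per 50 K rise, J/cm^2
    R_T = 6.7e-7 * T            # (4) thermal resistance, cm^2/W
    t_th = C * R_T              # (5) thermal time constant, s
    return T, R_on, t_th

for V_B in (200, 500, 1000, 2000):
    T, R_on, t_th = mosfet_thermal(V_B)
    print(f"V_B {V_B:5d} V: T {T:6.1f} um, "
          f"R_on {R_on:.2e} ohm-cm^2, t_th {t_th:.2e} s")
```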

( table of T ( \mu m ), C ( J / cm^2 ), and R_T ( cm^2 / W ) versus breakdown voltage omitted )
Assume an infinite slab of copper underneath, and a long time between groups of pulses, long enough to radiate away all the heat. The problem changes; now we are concerned about the ability of the copper to absorb pulses of heat. The faster the pulse, the shorter the diffusion time, and the faster the surface of the copper may heat. The situation is probably complicated by heat pipes, eutectic die attach materials, etc, but this is a rough (and optimistic) approximation. What we will determine is the heat capacity versus pulse width, for a 20C temperature rise.

The heat capacity of copper is 3.4 MJ/m^3-K, and the thermal conductivity is 401 W/m-K . From a similar set of calculations as before, the heat energy stored per pulse per cm^2 is:

(6) E ~ = ~ 20 K ~ \times ~ 1e-4 m^2/cm^2 ~ \times ~ \sqrt { 2 * 3.4e6 * 401 * t } ~ = ~ 104 \sqrt t . . . Joules/cm2 . . . t in seconds

Or 104 \sqrt { t } ~ mJ/cm^2 for t in microseconds.
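The copper absorption estimate as a function of pulse width, sketched in Python. The penetration depth \sqrt { 2 k t / \rho c } is a rough diffusion approximation, as noted above:

```python
import math

rho_c = 3.4e6    # volumetric heat capacity of copper, J/m^3-K
k = 401.0        # thermal conductivity of copper, W/m-K
dT = 20.0        # allowed surface temperature rise, K

def absorbed(t):
    """Heat a semi-infinite copper slab absorbs in t seconds, J/cm^2.
    Uses a diffusion penetration depth of sqrt(2*k*t/rho_c)."""
    return dT * 1e-4 * math.sqrt(2 * rho_c * k * t)

for t_us in (1, 10, 100, 1000):
    print(f"{t_us:5d} us pulse: {1e3 * absorbed(t_us * 1e-6):7.1f} mJ/cm^2")
```

Note the square-root scaling: a pulse four times longer lets the copper absorb only twice the heat.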



All capacitors are not created equal. Real capacitors have series inductance, series resistance, and leakage resistance. "Supercapacitors" tend to have very high Equivalent Series Resistance (ESR), with charge/discharge time constants on the order of 10 to 100 milliseconds. They also leak, with a leakage time constant on the order of 10 hours at room temperature, decreasing rapidly as the capacitor heats up. Since a sizable portion of the energy ends up in the series resistance, they will heat up. Supercapacitors are best suited for electric cars - providing extra startup current for stalled motors, and storing the energy from regenerative braking until the batteries have time to absorb it. These low-speed operations match what supercapacitors can do.

Higher voltage capacitors have much less capacitance. The { \small { 1 \over 2 } } C V ^2 per volume is pretty much the same for a given dielectric, so if V doubles, we can expect C to be reduced by a factor of 4.
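A one-line check of that scaling; the unit energy density is an arbitrary illustrative value:

```python
def cap_per_volume(V, energy_density=1.0):
    """Capacitance per unit volume for a dielectric that stores
    energy_density joules per unit volume at rated voltage V
    (from (1/2) C V^2 = stored energy)."""
    return 2.0 * energy_density / V**2

ratio = cap_per_volume(100.0) / cap_per_volume(200.0)
print(f"capacitance ratio when voltage doubles: {ratio:.1f}")  # 4.0
```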

Capacitors have other quirks. Most are piezoelectric - charging them causes them to deform mechanically. Inductive currents from rapid discharge create magnetic fields and forces as well. If the capacitor is not designed for rapid discharge, it can explode. For example, high voltage, 60Hz line correction capacitors sometimes appear on the surplus market. Some hobbyists use these to build electromagnetic coin crushers. These re-purposed capacitors are usually good for less than 1000 shots before they explode (inside a metal box, inside concrete, in another room, please).

So where high voltage fast pulses are needed, any old capacitor will not do. Low ESR, low inductance, mechanically reinforced capacitors must be used, the kind used in pulse lasers and magnetic electroforming equipment. MORE LATER

MassDriver (last edited 2013-07-11 18:31:36 by KeithLofstrom)