The calculus is a set of powerful analytical techniques, especially differentiation and integration, that utilize the concepts of rate and limit to describe the properties of functions. The formal development of the calculus in the latter half of the seventeenth century, primarily through the independent work of English physicist and mathematician Sir Isaac Newton (1642–1727) and German mathematician Gottfried Wilhelm Leibniz (1646–1716), was the crowning mathematical achievement of the Scientific Revolution. The subsequent advance of the calculus influenced the whole course and scope of mathematical and scientific inquiry. Although the logical underpinnings of calculus were hotly debated early on, the techniques of calculus were quickly applied to a variety of problems in physics, astronomy, and engineering. By the end of the eighteenth century, calculus had proved a powerful tool that allowed mathematicians and scientists to construct accurate mathematical models of physical phenomena ranging from planetary motions to particle dynamics.
Historical Background and Scientific Foundations
Important mathematical developments that laid the foundation for the calculus of Newton and Leibniz can be traced back to techniques advanced in ancient Greece and Rome. Most of these techniques were concerned with determining areas under curves and the volumes of curved shapes. Besides their mathematical utility, these advancements both reflected and challenged prevailing philosophical notions about the concept of infinitely divisible time and space. Greek philosopher and mathematician Zeno of Elea (c.495–c.430 BC) constructed a set of paradoxes that were fundamentally important in the development of mathematics, logic, and scientific thought. The most famous of these is the dichotomy (two-parts) paradox: If you wish to walk from your chair to the door, you first traverse half the distance. But when you have done so, half the distance still remains before you. When you have traversed half that remaining distance, half of it still remains undone—and so on, forever. To reach the doorway, you must traverse an infinite number of finite distance intervals in finite time.
Zeno argued that this was impossible, and that motion is therefore an illusion. Zeno's paradoxes reflected the idea that space and time could be infinitely subdivided into smaller and smaller portions; his paradoxes remained mathematically unsolvable until the concepts of continuity, limits, and infinite series were introduced in the development of the calculus. In the calculus, it is elementary to show that the sum of an infinite number of finite terms (such as the times it takes to make the ever-shorter journeys in Zeno's dichotomy paradox) can indeed be a finite number. You can reach the door. Motion is not an illusion. Yet Zeno's paradoxes were not foolish; they pointed to deep mathematical questions that were not resolved until many centuries later, and his claim that motion is impossible is no more counter-intuitive than some claims made by modern physics, such as the relativistic revelation that there is no such thing as a universal simultaneous moment of time.
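The resolution of the dichotomy paradox can be sketched numerically: the partial sums of the halving journeys 1/2 + 1/4 + 1/8 + … approach, but never exceed, 1. The short Python sketch below (the function name is ours, added for illustration) computes these partial sums.

```python
# Partial sums of Zeno's halving journeys: 1/2 + 1/4 + 1/8 + ...
# Calculus shows the full infinite sum is exactly 1 (the whole distance).
def zeno_partial_sum(n_terms: int) -> float:
    """Sum the first n_terms of the geometric series (1/2) + (1/2)**2 + ..."""
    return sum(0.5 ** k for k in range(1, n_terms + 1))

for n in (1, 2, 10, 50):
    print(n, zeno_partial_sum(n))
# The partial sums climb toward 1: the walker reaches the door in finite
# time even though infinitely many intervals are traversed.
```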
Greek astronomer, philosopher, and mathematician Eudoxus of Cnidus (c.408–c.355 BC) developed a method of exhaustion that could be used to calculate the area and volume under certain curves and of solids (e.g., the cone and pyramid). In this context, “exhaustion” refers not to physical tiredness but to the adding up of smaller and smaller areas and volumes to draw closer to a true solution. Eudoxus's method relied, like the calculus, on the concept that time and space can be divided into infinitesimally small portions. The method of exhaustion pointed the way toward a primitive geometric form of what in calculus is known as integration.
Although other advances by classical mathematicians helped set the intellectual stage for the ultimate development of the calculus during the Scientific Revolution, ancient Greek mathematicians failed to find a common link between problems related to finding the area under curves and to problems requiring the determination of a tangent (a line touching a curve at only one point). That these processes are actually the inverse of each other—one equals the other in reverse—became the basis of the calculus eventually developed by Newton and Leibniz. Today, the reciprocal or symmetrical relationship between integration (area-finding) and differentiation (tangent-finding) is known as “the fundamental theorem of the calculus.”
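The inverse relationship expressed by the fundamental theorem can be illustrated numerically: differentiating the “area-so-far” function of a curve recovers the curve itself. The Python sketch below uses a midpoint Riemann sum and a central finite difference; both are modern illustrative choices, not historical methods.

```python
# Numerical illustration of the fundamental theorem of calculus:
# differentiating F(x) = (area under f from 0 to x) recovers f itself.
def f(t: float) -> float:
    return 3 * t ** 2                  # example curve; its exact integral is t**3

def area_under(g, a: float, b: float, n: int = 100_000) -> float:
    """Midpoint Riemann-sum approximation of the integral of g from a to b."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def derivative(F, x: float, h: float = 1e-5) -> float:
    """Central-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

F = lambda x: area_under(f, 0.0, x)    # the "area-so-far" function
print(derivative(F, 2.0))              # close to f(2.0) = 12
```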
In the Middle Ages, philosophers and mathematicians continued to ponder kinematics (questions relating to motion). These inquiries led to early efforts to plot functions relating to time and velocity. In particular, the work of French bishop Nicholas Oresme (c.1325–1382) was an important milestone in the development of kinematics and geometry, especially Oresme's proof of the Merton theorem, which allowed for the calculation of the distance traveled by an object when uniformly accelerated (e.g., by acceleration due to gravity). Oresme's proof established that the distance traversed (i.e., the area under the velocity curve) by a body with variable velocity was the same as that traversed by a body with a uniform velocity equal to that of the varying-velocity body at the middle instant of whatever period was measured. From this it could be shown that the area under the velocity-versus-time curve was the sum of all distances covered by a series of instantaneous velocities. This work would later prove indispensable to the quantification of parabolic motion by Italian astronomer and physicist Galileo Galilei (1564–1642) and later influenced Newton's development of differentiation.
During the Renaissance in Western Europe, a rediscovery of ancient Greek and Roman mathematics spurred increased use of mathematical symbols, especially to denote algebraic concepts. The rise in symbolism also allowed the development of and increased application of the techniques of analytical geometry principally advanced by French philosopher and mathematician René Descartes (1596–1650) and French mathematician Pierre de Fermat (1601–1665). Beyond the practical utility of establishing that algebraic equations corresponded to curves, the work of Descartes and Fermat laid the geometrical basis for the calculus. In fact, Fermat's methodologies included concepts related to the determination of minimums and maximums for functions that are mirrored in modern mathematical methodology (e.g., setting the derivative or rate-of-change of a function to zero). Both Newton and Leibniz would rely heavily on the use of Cartesian algebra in the development of their calculus techniques.
Although many of the elements of the calculus were in place by the mid-1600s, recognition of the fundamental theorem relating differentiation and integration as inverse processes continued to elude mathematicians and scientists. Part of the difficulty was lingering resistance to the philosophical ramifications of the limit and the infinitesimal, a resistance descended from Zeno's paradoxes. In one sense, the genius of Newton and Leibniz lay in their ability to set aside the philosophical and theological ramifications of the infinitesimal in favor of developing practical mathematics. Neither Newton nor Leibniz was seriously worried by the deeper philosophical issues regarding limits and infinitesimals. In this regard, Newton and Leibniz worked in the spirit of empiricism that grew throughout the Scientific Revolution, though ultimately mathematicians would address those deep issues in developing the calculus further and making it a part of the rigorous system of modern mathematics.
A bitter, angry controversy arose concerning whether Newton or Leibniz deserved credit for the calculus. This controversy was grounded in the actions of both men during the late 1600s. Historical documents established that Newton's unpublished formulations of the calculus came two decades before Leibniz's publications in 1684 and 1686. However, most scholars conclude that Leibniz developed his techniques independently. Also, although the mathematical outcomes were identical, the differences in symbolism and nomenclature used by Newton and Leibniz are evidence of independent development.
The controversy regarding credit for the origin of calculus quickly became more than a simple dispute between mathematicians. Supporters of Newton and Leibniz often argued along blatantly nationalistic lines; the feud itself had a profound influence on the subsequent development of the calculus and other branches of mathematical analysis in England and in Continental Europe. English mathematicians who relied on Newton's “fluxions” methods were divided from mathematicians in Europe who followed the notational conventions established by Leibniz. The publications and symbolism of Leibniz greatly influenced the mathematical work of Swiss mathematicians (and brothers) Jakob Bernoulli (1654–1705) and Johann Bernoulli (1667–1748).
The publications of Newton and Leibniz emphasized the utilitarian aspects of the calculus. Nevertheless, the nomenclatures and techniques developed by Newton and Leibniz also mirrored their own philosophical leanings. Newton developed the calculus as a practical tool with which to analyze planetary motion and the effects of gravity. Accordingly, he emphasized analysis, and his mathematical methods describe the effects of forces on motion in terms of infinitesimal changes with respect to time. Leibniz's calculus, on the other hand, was driven by the idea that incorporeal (non-material) entities were the driving basis of existence and the changes in the larger world experienced by mankind; consequently, he sought to derive integral methods by which discrete infinitesimal units could be summed to yield the area of a larger shape.
While philosophical debates regarding the underpinnings of the calculus simmered, the first calculus texts appeared before the end of the seventeenth century. The first textbook in calculus was published by French mathematician Guillaume François Antoine de l'Hôpital (1661–1704), though modern scholars now credit much of the content to Johann Bernoulli. L'Hôpital's Analyse des infiniment petits pour l'intelligence des lignes courbes (1696) helped bring the calculus into wider use throughout continental Europe.
Within a few decades the calculus was embraced and applied to a wide range of practical problems in physics, astronomy, and mathematics. Why the calculus worked, however, remained a vexing question that opened it to attack on philosophical and theological grounds. This school of critics, eventually led in eighteenth-century England by the Irish Anglican bishop George Berkeley (1685–1753), argued that the fundamental theorems of calculus derived from logical fallacies, and that the great accuracy of the calculus actually resulted from the mutual, lucky cancellation of fundamental errors. This argument was incorrect, but such attacks upon the calculus prompted increased rigor in mathematical analysis, which ultimately benefited modern mathematics. Scholars took Berkeley's criticisms seriously and set out to shore up the logical foundations of calculus with well-reasoned rebuttals.
IN CONTEXT: AN ANGRY GENIUS
The fame and status of English mathematician and physicist Isaac Newton (1642–1727), plus the fact that he corresponded with his great rival in the development of the calculus, German mathematician Gottfried Wilhelm Leibniz (1646–1716), resulted in a charge of plagiarism against Leibniz by members of the British Royal Society (the most prestigious scientific organization of its day). Supporters of Leibniz subsequently leveled similar charges against Newton. Leibniz petitioned the Royal Society for redress, but Newton, a high-ranking member of the Society, hand-picked the investigating committee and prepared reports on the controversy for committee members to sign. Not surprisingly, the committee ruled against Leibniz. Before the dispute was resolved, Leibniz died.
Newton's anger remained unabated after Leibniz's death. In many of Newton's papers, he continued to set out mathematical and personal criticisms of Leibniz. Another feud, with English scientist Robert Hooke (1635–1703), resulted in Newton purging from his masterwork Philosophiae Naturalis Principia Mathematica (1687) any reference to Hooke's important contributions. Another dispute, between Newton and the British Astronomer Royal John Flamsteed (1646–1719), resulted in Flamsteed's name also being deleted from the Principia.
The feud over credit for the calculus adversely affected communications between English and European mathematicians, who took sides along national lines. English mathematicians used Newton's fluxion notations exclusively when doing the calculus; European (especially Swiss and French) mathematicians used only Leibniz's dy/dx notation. The latter is standard today.
In the late 1700s, French mathematician Jean le Rond d'Alembert (1717–1783) published two influential articles, “Limite” and “Différentielle,” that offered a strong rebuttal to Berkeley's arguments and defended the concepts of differentiation and infinitesimals by discussing the notion of the limit. The debates regarding the logic of the calculus resulted in the introduction of new standards of rigor in mathematical analysis and laid the foundation for the subsequent rise of pure mathematics in the nineteenth century.
IN CONTEXT: CALCULUS HELPS THREAD A COSMIC NEEDLE
On June 28, 2004, the Cassini space probe, a robot craft about the size of a small bus, built jointly by the United States and Europe, arrived at Saturn after a seven-year journey. The plan was for it to be captured by Saturn's gravity and so become a permanent satellite, observing Saturn and its rings and moons for years to come. But to make the journey, Cassini had reached a speed of 53,000 miles per hour (many times faster than a rifle bullet), too fast to be captured by Saturn's gravity. At that speed it would swoop past Saturn and head out into deep space. Therefore, it was programmed to hit the brakes, that is, to fire a rocket against its direction of travel as it approached its destination.
For objects moving in straight lines, changes in velocity can be calculated using basic algebra. Calculus is not needed. But Cassini was not moving in a straight line; it was falling through space on a curving path toward Saturn, being pulled more strongly by Saturn's gravity with every passing minute. To figure out when to start Cassini's rocket and how long to run it, the probe's human controllers on Earth had to use calculus. The effects of the important forces acting on Cassini, in particular its own rocket motor and Saturn's gravity, had to be integrated over time. And the calculation, carried out using computers rather than, for the most part, on paper, had to be extremely exact, or Cassini would be destroyed. To get deep enough into Saturn's gravitational field, Cassini would have to steer right through a relatively narrow gap in Saturn's rings called the Cassini division (named after the same Italian astronomer as the probe itself), a navigational feat comparable to threading a cosmic needle. If it missed the gap, the Cassini craft would have been destroyed by collision with the rings.
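The kind of calculation described above can be sketched in miniature: stepping a spacecraft's speed forward in time through a retro-rocket burn is a crude numerical integration of force over time. The Python sketch below is illustrative only; the deceleration and burn duration are hypothetical, not Cassini's actual parameters.

```python
# A toy version of "integrating forces over time": stepping a spacecraft's
# speed through a retro-rocket burn with constant deceleration.
def speed_after_burn(v0: float, decel: float, burn_time: float,
                     dt: float = 0.1) -> float:
    """Step velocity forward through the burn: each step, dv = -decel * dt."""
    v = v0
    for _ in range(round(burn_time / dt)):
        v -= decel * dt
    return v

# 23,700 m/s is roughly the 53,000 mph mentioned above; the burn figures
# (0.4 m/s^2 for 96 minutes) are purely illustrative.
print(speed_after_burn(23700.0, 0.4, 96 * 60))
```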
Not all of NASA's navigational calculations have been correct; in 1999, a space probe crashed into Mars because of a math mistake. But in Cassini's case, the calculations were correct. Cassini passed the rings safely, was captured by Saturn's gravity, and began its orbits of Saturn.
Working separately, the Bernoulli brothers, whose mathematics was shaped by Leibniz's publications and symbolism, improved and made wide application of the calculus. Johann Bernoulli was the first to apply the term “integral” to a subset of calculus techniques allowing the determination of areas and volumes under curves. During his travels, Johann Bernoulli sparked intense interest in the calculus among French mathematicians, and his influence was critical to the widespread use of Leibniz-based methodologies and nomenclature.
Physicists and mathematicians seized upon the new set of analytical techniques comprising the calculus. Advancements in methodologies usually found quick application and, correspondingly, fruitful results fueled further research and advancements. Although the philosophical foundations of calculus remained in dispute, these arguments proved no hindrance to the application of calculus to problems of physics. The Bernoulli brothers, for example, quickly recognized the power of the calculus as a set of tools to be applied to a number of statistical and physical problems. Jakob Bernoulli's distribution theorem and theorems of probability and statistics, ultimately of great importance to the development of physics, incorporated calculus techniques. Johann Bernoulli's sons, Nicolaus Bernoulli (1695–1726), Daniel Bernoulli (1700–1782), and Johann Bernoulli II (1710–1790), all made contributions to the calculus. In particular, Daniel Bernoulli used calculus methodologies to develop important formulae regarding the properties of fluids and hydrodynamics.
The application of calculus to probability theory resulted in probability integrals. This refinement made immediate and significant contributions to the advancement of probability theory, building on the late-seventeenth-century work of French mathematician Abraham de Moivre (1667–1754).
English mathematician Brook Taylor (1685–1731) developed what was later termed the Taylor expansion theorem and the Taylor series. Taylor's work was subsequently used by Swiss mathematician Leonhard Euler (1707–1783) in the extension of differential calculus and by French mathematician Joseph Louis Lagrange (1736–1813) in the development of his theory of functions.
Scottish mathematician Colin Maclaurin (1698–1746) advanced an expansion that was a special case of a Taylor expansion (where x = 0), now known as the Maclaurin series. More importantly, in the face of developing criticism from Bishop Berkeley regarding the logic of calculus, Maclaurin set out an important and influential defense of Newtonian fluxions and geometric analysis in his 1742 Treatise on Fluxions.
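A Maclaurin series can be evaluated directly by summing its terms. The Python sketch below sums the well-known Maclaurin series for cos(x) and compares the result with the library value; the function name and number of terms are illustrative choices.

```python
import math

# A Maclaurin series is a Taylor expansion about x = 0; for example,
# cos(x) = 1 - x**2/2! + x**4/4! - x**6/6! + ...
def maclaurin_cos(x: float, n_terms: int = 10) -> float:
    """Sum the first n_terms of the Maclaurin series for cos(x)."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n_terms))

print(maclaurin_cos(1.0), math.cos(1.0))  # the two values agree to many digits
```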
The application of the calculus to many areas of math and science was profoundly influenced by the work of Euler, a student of Johann Bernoulli. Euler was one of the most dedicated and productive mathematicians of the eighteenth century. Based on earlier work done by Newton and Jakob Bernoulli, in 1744 Euler developed an extension of calculus dealing with maxima and minima of definite integrals termed the calculus of variations (variational calculus). Among other applications, variational calculus techniques allow the determination of the shortest distance between two points on curved surfaces.
Euler also advanced the principle of least action formulated in 1746 by Pierre Louis Moreau de Maupertuis (1698–1759). In general, this principle asserts economy in nature (i.e., an avoidance in natural systems of unnecessary expenditures of energy). Accordingly, Euler asserted that natural motions must always be such that they make the calculation of a minimum possible (i.e., nature always points the way to a minimum). The principle of least action quickly became an influential scientific and philosophical principle destined to find expression in later centuries in various laws and principles, including Le Chatelier's principle regarding equilibrium reactions. The principle profoundly influenced nineteenth-century studies of thermodynamics.
On the heels of an influential publication covering algebra, trigonometry, and geometry (including the geometry of curved surfaces), Euler's 1755 publication, Institutiones Calculi Differentialis (Foundations of Differential Calculus), influenced the teaching of calculus for more than two centuries. Euler followed with three volumes published from 1768 to 1770, titled Institutiones Calculi Integralis (Foundations of Integral Calculus), which presented his work on differential equations. Differential equations contain derivatives or differentials of a function. Partial differential equations contain partial derivatives of a function of more than one variable; ordinary differential equations contain no partial derivatives. The wave equation, for example, is a second-order differential equation important in the description of many physical phenomena including pressure waves (e.g., water and sound waves). Euler and d'Alembert offered different perspectives regarding whether solutions to the wave equation should be, as argued by d'Alembert, continuous (i.e., derived from a single equation) or, as asserted by Euler, discontinuous (i.e., formed from many curves). The refinement of the wave equation was of great value to nineteenth-century scientists investigating the properties of electricity and magnetism, culminating in Scottish physicist James Clerk Maxwell's (1831–1879) development of equations that accurately described the electromagnetic wave.
The disagreement between Euler and d'Alembert over the wave equation reflected the type of philosophical arguments and distinct views regarding the relationship of calculus to physical phenomena that developed during the eighteenth century.
Although both Newton and Leibniz developed techniques of differentiation and integration, the Newtonian tradition emphasized differentiation and the reduction to the infinitesimal. In contrast, the Leibniz tradition emphasized integration as a summation of infinitesimals.
A third view of the calculus, mostly reflected in the work and writings of Lagrange, was more abstractly algebraic and depended upon the concept of the infinite series (a sum of an infinite sequence of terms). These differences regarding a grand design for calculus were not trivial. According to the Newtonian view, calculus derived from analysis of the dynamics of bodies (e.g., kinematics, velocities, and accelerations). Just as the properties of a velocity curve relate distance to time, in accord with the Newtonian view, the elaborations of calculus advanced applications where changing properties or states could accurately be related to one another (e.g., in defining planetary orbits). Calculus derived from the Newtonian tradition was used to allow the analysis of phenomena by artificially breaking properties associated with those phenomena into increasingly smaller parts. In the Leibniz tradition, calculus allowed accurate explanation of phenomena as the summed interaction of naturally very small components.
Lagrange's analytic treatment of mechanics in his 1788 publication Analytical Mechanics (containing the Lagrange dynamics equations) placed important emphasis on the development of differential equations. Lagrange's work also profoundly influenced the work of another French mathematician, Pierre-Simon Laplace (1749–1827), who, near the end of the eighteenth century, began important and innovative work in celestial mechanics.
Modern Cultural Connections
Since the eighteenth century, the calculus has become ubiquitous in the physical and social sciences. It is employed daily in almost all forms of physical science and engineering, much of biology and medicine, and economics. It functions almost as a universal language in fields that involve applied mathematics and is as much a part of the fundamental toolbox of higher mathematics as arithmetic itself.
The calculus is a true tool; that is, it does not merely express knowledge that is obtained from other sources, but makes possible the discovery of new knowledge and the design of devices that could not be produced by any other means. It is essential in the design of transistors, engines, and chemical processes; weapons, games, and medicines; indeed, of that whole panoply of technological devices that shapes so much of the modern human environment, from two-stroke engines and electrical generators to cell phones, nuclear weapons, computers, and communications satellites. Without the calculus, none of our advanced technologies would be possible, and efforts to understand or predict global climate change and most other aspects of the physical world would be futile. The calculus is the indispensable grammar of modern science and technology.
The Calculus of War and Peace
Like all scientific knowledge, calculus can be applied not only to creation but to destruction. For example, the calculus-based concept of inertial guidance has been developed by missile-makers to a fine art.
The first ballistic missiles used in war, the V-2 rockets produced by Nazi Germany near the end of World War II (1939–1945), were fired at London from mainland Europe. They were intended as terror or “vengeance” weapons and so needed only to explode somewhere over the city, not over particular military objectives. Yet to hit even a large city such as London at such a distance, a V-2 missile needed a guidance system, a way of knowing where it was at every moment so that it could steer toward its target. It was not practical to steer by the stars or the sun, because these are hard to observe from a missile in supersonic flight and would require complex calculations. Nor was it practical to steer by sending radio signals to the missiles, for without advanced radar (not yet available) controllers on the ground would be just as ignorant of a missile's location as the missile itself. Besides, the enemy might learn to fake or jam control signals, that is, drown them out with radio noise.
The solution was inertial guidance, which exploits two facts of the calculus: (a) the time derivative of position is velocity, and (b) the time derivative of velocity is acceleration. By the fundamental theorem of calculus, which says that integration and differentiation are inverse operations, the trail can be followed backwards: the integral of acceleration is velocity, and the integral of velocity is position.
What designers need a missile to know is its position. But position is hard to measure directly: you have to look out the window, identify landmarks (if any happen to be visible), and do some fast geometry; likewise with velocity. Acceleration, however, is easy to measure, because every part of an object accelerated by a force experiences that force. Unlike velocity or position, acceleration can be measured directly and locally, that is, without making observations of the outside world. Therefore, the V-2's engineers installed gyroscopes (spinning masses of metal) in their missile and used these to measure its accelerations. Lasers, semiconductors, and other devices have also been used since that time. Some are more expensive and accurate than others, but all do the same job: they measure accelerations. Any device that measures acceleration is called an accelerometer.
Thanks to its accelerometers, an inertial guidance system knows its own acceleration as a function of time, which can be written a(t). This function is known by direct measurement. The integral of a(t) gives velocity as a function of time: ∫a(t) dt = v(t). And the integral of v(t) gives distance as a function of time, which reveals one's position at any given moment: ∫v(t) dt = x(t). The actual calculations used in inertial guidance systems are, of course, more complex, but the principle remains the same.
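In discrete form, this double integration amounts to accumulating accelerometer samples twice: once for velocity, once for position. The Python sketch below is a minimal illustration only; the sample rate and the trapezoidal rule are our assumptions, not details of any real guidance system.

```python
# Double integration of accelerometer samples: a(t) -> v(t) -> x(t).
def integrate(samples, dt):
    """Cumulative trapezoidal integral of evenly spaced samples."""
    total, out = 0.0, [0.0]
    for prev, cur in zip(samples, samples[1:]):
        total += 0.5 * (prev + cur) * dt
        out.append(total)
    return out

dt = 0.01
accel = [2.0] * 301                    # 3 s of constant 2 m/s^2 acceleration
vel = integrate(accel, dt)             # v(t) = 2t, so v(3) = 6 m/s
pos = integrate(vel, dt)               # x(t) = t**2, so x(3) = 9 m
print(vel[-1], pos[-1])
```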
The bottom line for inertial guidance is that, given an accurate knowledge of its initial location and velocity, an inertial guidance system is completely independent of the outside world. It knows where it is, no matter where it goes, without ever having to make an observation.
The V-2 inertial guidance system was crude, but since World War II inertial guidance systems have become far more accurate. In the early 1960s they were placed in the first intercontinental ballistic missiles (ICBMs), large missiles designed by the Soviet Union and the United States to fly to the far side of the planet in a few minutes and strike specific targets with nuclear warheads. They were also used in the Apollo moon rocket program and in nuclear submarines, which stay underwater for weeks or months without being able to make observations of the outside world. Inertial guidance systems are today found not only in missiles but in tanks, some oceangoing ships, military helicopters, the Space Shuttle and other spacecraft, and commercial airliners making transoceanic journeys.
Calculus makes inertial guidance possible, but also, in a sense, limits its accuracy. The problem is called integration drift. Integration drift is a pesky result of the fact that small “biases” are, for various technical reasons, almost certain to creep into acceleration measurements. Any bias in these acceleration measurements, any unwanted, constant number that adds itself to all the measurements, will result in a position error that increases in proportion to the square of time. As a result, no inertial guidance system can go forever without taking an observation of the outside world to see where it really is. Increasingly, inertial guidance systems are designed to update themselves automatically by checking the Global Positioning System (GPS), a network of satellites that blanket the whole Earth with radio signals that can be used to determine a receiver's position accurately.
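The quadratic growth of this error is easy to verify: integrating a constant acceleration bias b twice yields a velocity error of b·t and a position error of (1/2)·b·t². The bias value in the Python sketch below is purely illustrative.

```python
# Integration drift: a constant accelerometer bias b, integrated twice,
# produces a position error of (1/2) * b * t**2, growing with time squared.
def position_error(bias: float, t: float) -> float:
    """Position error from integrating a constant acceleration bias twice."""
    return 0.5 * bias * t ** 2

bias = 0.001                       # m/s^2, a hypothetical tiny sensor bias
for t in (60, 600, 3600):          # one minute, ten minutes, one hour
    print(t, position_error(bias, t))
# After an hour the tiny bias has grown to a roughly 6,480 m position error,
# which is why inertial systems must eventually check the outside world.
```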
Today, inertial guidance systems have reached a very high degree of accuracy. A ballistic missile, beginning with accurate knowledge of its initial position and velocity, can be ejected from a still-submerged submarine by compressed air, burst through the surface of the water, ignite its rocket, fly blind to the far side of the planet, and explode its nuclear warhead within a few yards of its target.
IN CONTEXT: EVERYDAY CALCULUS
Calculus has been used literally thousands of times in the design of every one of the electronic toys we take for granted—including MP3 players, TV screens, computers, cell phones, etc. For long-term information storage, most computers contain a hard drive (some simple terminal computers do not, and new generation computers may contain flash drive type storage similar in principle to USB memory drives). A hard drive contains a stack of thin discs coated with magnetic particles. A computer stores information in the form of binary digits (bits for short, 1s and 0s) on the surface of each disc by impressing or writing on it billions of tiny magnetic fields that point one way to signify 1, another way to signify 0. The bits are arranged in circular tracks. To read information off the spinning disk, sensors glide back and forth between the edge of the disc and its center to place themselves over selected tracks. The track spins under the sensor; the bits are read off one by one at high speed; and within a few seconds, the program appears. In designing a data-storage disc, engineers attempt to optimize the amount of data stored on the disk, that is, to store the most bits possible.
At first it may seem logical that the best way to store data would be to completely cover the disk's surface with tracks. Although logical, that turns out to be the wrong approach. For the sake of keeping the read-write mechanism simpler (and therefore cheaper), every circular track has to hold the same number of bits. However, the smaller you make the radius of the innermost track, the fewer bits you can fit on it. But all the tracks on the disc must, as specified above, have the same number of bits, so if you make the innermost track too small, it will hold only a few bits, and so will all the other tracks, and you'll end up with an inefficient disc. On the other hand, if you make the innermost track too big, there won't be much room for additional tracks between the innermost track and the outer edge of the disc, and again your design will be inefficient.
Calculus is used to solve the problem. Using differential calculus to find the optimal value of the innermost radius permits engineers to place the maximum amount of data (in terms of the number of bits) on the disk.
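Under the simplifying assumptions stated above (every track holds the same number of bits, so bits per track is proportional to the innermost radius r, while the number of tracks is proportional to the remaining width R − r), the problem reduces to maximizing C(r) = r(R − r); the derivative R − 2r vanishes at r = R/2. The Python sketch below, with an illustrative outer radius, confirms this numerically.

```python
# The disk-capacity trade-off as a one-variable optimization:
# capacity C(r) is proportional to r * (R - r); dC/dr = R - 2r = 0 at r = R/2.
def capacity(r: float, R: float) -> float:
    """Quantity proportional to total bits stored for innermost radius r."""
    return r * (R - r)

R = 4.0                                 # hypothetical outer radius
# Scan a fine grid of candidate radii and pick the one with most capacity.
best_r = max((i / 1000 * R for i in range(1001)), key=lambda r: capacity(r, R))
print(best_r)                           # r = R/2, as the derivative predicts
```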
See Also Mathematics: Trigonometry.
Boyer, Carl. The History of the Calculus and Its Conceptual Development. New York: Dover, 1959.
Boyer, Carl. A History of Mathematics. 2nd ed. New York: John Wiley and Sons, 1991.
Edwards, C.H. The Historical Development of the Calculus. New York: Springer, 1979.
Hall, Rupert. Philosophers at War: The Quarrel between Newton and Leibniz. Cambridge, UK: Cambridge University Press, 1980.
Kline, Morris. Mathematical Thought from Ancient to Modern Times. New York: Oxford University Press, 1972.
K. Lee Lerner