Values of h (2014) | Units | Ref.
---|---|---
6.626 070 040(81)×10^{−34} | J·s | ^{[1]}
4.135 667 662(25)×10^{−15} | eV·s | ^{[2]}
2π | E_{P}·t_{P} |

Values of ℏ (h-bar) | Units | Ref.
---|---|---
1.054 571 800(13)×10^{−34} | J·s | ^{[2]}
6.582 119 514(40)×10^{−16} | eV·s | ^{[2]}
1 | E_{P}·t_{P} |

Values of hc | Units | Ref.
---|---|---
1.986 445 824×10^{−25} | J·m |
1.239 841 974 | eV·μm |
2π | E_{P}·l_{P} |

Values of ℏc (h-bar) | Units | Ref.
---|---|---
3.161 526 77×10^{−26} | J·m |
0.197 326 979 | eV·μm |
1 | E_{P}·l_{P} |
The Planck constant (denoted h, also called Planck's constant) is a physical constant that is the quantum of action, which relates the energy carried by a photon to its frequency. A photon's energy is equal to the Planck constant times its frequency. The Planck constant is of fundamental importance in quantum mechanics.
At the end of the 19th century, physicists were unable to explain the spectrum of black-body radiation, which had been measured with some accuracy. In 1900, Max Planck derived a formula for the spectrum that agreed with the measurements. He did this by assuming that a hypothetical electrically charged oscillator in a cavity containing black-body radiation could only change its energy in a minimal increment, E, that was proportional to the frequency of its associated electromagnetic wave. He was able to calculate the proportionality constant, h, from the experimental measurements, and that constant is named in his honor. In 1905, the value E was associated by Albert Einstein with a "quantum" or minimal element of the energy of the electromagnetic wave itself. The light quantum behaved in some respects as an electrically neutral particle, as opposed to an electromagnetic wave. It was eventually called a photon.
Since energy and mass are equivalent, the Planck constant also relates mass to frequency. By 2017, the Planck constant had been measured with sufficient accuracy in terms of the SI base units, including the kilogram as traditionally defined by a metal cylinder called the International Prototype of the Kilogram (IPK), to replace the IPK as the standard of mass.^{[3]} The new definition was approved by the General Conference on Weights and Measures (CGPM) on 16 November 2018.^{[4]}
In the revised SI, the value of the Planck constant is fixed at exactly 6.626 070 15 × 10^{−34} J·s.^{[5]}^{[6]}
The Planck constant is related to the quantization of light and matter. It can be seen as a subatomic-scale constant. In a unit system adapted to subatomic scales, the electronvolt is the appropriate unit of energy and the petahertz the appropriate unit of frequency. Atomic unit systems are based (in part) on the Planck constant.
The Planck constant is one of the smallest constants used in physics. This reflects the fact that on a scale adapted to humans, where energies are typically of the order of kilojoules and times are typically of the order of seconds or minutes, the Planck constant (the quantum of action) is very small.
Equivalently, the smallness of the Planck constant reflects the fact that everyday objects and systems are made of a large number of particles. For example, green light with a wavelength of 555 nanometres (a wavelength that can be perceived by the human eye to be green) has a frequency of 540 THz (5.40 × 10^{14} Hz). Each photon has an energy E = hf ≈ 3.58 × 10^{−19} J. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. An amount of light more typical in everyday experience (though much larger than the smallest amount perceivable by the human eye) is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, N_{A} ≈ 6.022 × 10^{23} mol^{−1}. The result is that green light of wavelength 555 nm has an energy of about 216 kJ/mol, a typical energy of everyday life.
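The figures above can be reproduced in a few lines; a quick sketch using CODATA 2014 values:

```python
# Energy of a single 555 nm photon and of one mole of such photons.
h = 6.626_070_040e-34   # Planck constant, J*s (CODATA 2014)
c = 299_792_458.0       # speed of light, m/s (exact)
N_A = 6.022_140_857e23  # Avogadro constant, 1/mol (CODATA 2014)

wavelength = 555e-9        # m, green light
f = c / wavelength         # frequency, Hz (~5.4e14, i.e. ~540 THz)
E_photon = h * f           # energy per photon, J (~3.58e-19 J)
E_mole = E_photon * N_A    # energy per mole of photons, J (~216 kJ)

print(f"f = {f:.3e} Hz, E_photon = {E_photon:.3e} J, E_mole = {E_mole/1000:.0f} kJ/mol")
```

The per-photon energy is tiny on human scales, but multiplying by the Avogadro constant brings it up to the familiar kilojoule range.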
In the last years of the 19th century, Planck was investigating the problem of black-body radiation first posed by Kirchhoff some 40 years earlier. It is well known that hot objects glow, and that hotter objects glow brighter than cooler ones. The electromagnetic field obeys laws of motion similarly to a mass on a spring, and can come to thermal equilibrium with hot atoms. The hot object in equilibrium with light absorbs just as much light as it emits. If the object is black, meaning it absorbs all the light that hits it, then its thermal light emission is maximized.
The assumption that black-body radiation is thermal leads to an accurate prediction: the total amount of emitted energy increases with temperature according to a definite rule, the Stefan-Boltzmann law (1879-84). It was also known that the colour of the light given off by a hot object changes with the temperature, such that "white hot" is hotter than "red hot". Wilhelm Wien discovered the mathematical relationship between the peaks of the curves at different temperatures, using the principle of adiabatic invariance: at each temperature the curve is shifted according to Wien's displacement law (1893). Wien also proposed an approximation for the spectrum of the object, which was correct at high frequencies (short wavelengths) but not at low frequencies (long wavelengths).^{[7]} It still was not clear why the spectrum of a hot object had the form that it has (see diagram).
Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for black-body spectrum.^{[8]}
Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators.^{[8]} To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics,^{[8]} which he described as "an act of despair ... I was ready to sacrifice any of my previous convictions about physics."^{[9]} One of his new boundary conditions was
to interpret U_{N} [the vibrational energy of N oscillators] not as a continuous, infinitely divisible quantity, but as a discrete quantity composed of an integral number of finite equal parts. Let us call each such part the energy element ε;
-- Planck, On the Law of Distribution of Energy in the Normal Spectrum^{[8]}
With this new condition, Planck had imposed the quantization of the energy of the oscillators, "a purely formal assumption ... actually I did not think much about it..." in his own words,^{[10]} but one which would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now sometimes termed the "Planck-Einstein relation":
E = hf
Planck was able to calculate the value of h from experimental data on black-body radiation: his result, 6.55 × 10^{−34} J·s, is within 1.2% of the currently accepted value.^{[8]} He also made the first determination of the Boltzmann constant k_{B} from the same data and theory.^{[11]}
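The quoted 1.2% agreement is easy to check against the modern CODATA 2014 value:

```python
# Compare Planck's 1901 determination of h with the modern value.
h_planck_1901 = 6.55e-34       # Planck's published value, J*s
h_modern = 6.626_070_040e-34   # CODATA 2014 value, J*s

rel_error = abs(h_modern - h_planck_1901) / h_modern
print(f"relative error: {rel_error:.1%}")  # about 1.1%, i.e. within the quoted 1.2%
```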
Prior to Planck's work, it had been assumed that the energy of a body could take on any value whatsoever - that it was a continuous variable. The Rayleigh-Jeans law agrees closely with experiment at low frequencies (long wavelengths), but its predictions diverge more and more strongly as the frequency increases. To obtain Planck's law, which correctly predicts black-body emission, it was necessary to multiply the classical expression by a factor that involves h in both the numerator and the denominator. Making an equation out of Planck's law that would reproduce the Rayleigh-Jeans law could not be done by changing the values of h, of the Boltzmann constant, or of any other constant or variable in the equation. In this case the picture given by classical physics is not duplicated by a range of results in the quantum picture.
The black-body problem was revisited in 1905, when Rayleigh and Jeans (on the one hand) and Einstein (on the other hand) independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) in convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The very first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta".^{[12]} Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".
The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz,^{[13]} who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard in 1902.^{[14]} Einstein's 1905 paper^{[15]} discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921,^{[13]} when his predictions had been confirmed by the experimental work of Robert Andrews Millikan.^{[16]} The Nobel committee awarded the prize for his work on the photoelectric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and because of dissent among its members as to whether relativity had actually been proven.^{[17]}^{[18]}
Prior to Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterise different types of radiation. The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the ordinary bulb, even though the colour of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their own intensity. However, the energy account of the photoelectric effect didn't seem to agree with the wave description of light.
The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent of the intensity of the light,^{[14]} but depends linearly on the frequency;^{[16]} and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless several photons, whose combined energy exceeds the work function, act virtually simultaneously (multiphoton effect).^{[19]} Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy.^{[14]}
Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta. The size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of the Planck-Einstein relation:
E = hf
Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light (f) and the kinetic energy of photoelectrons (E) was shown to be equal to the Planck constant (h).^{[16]}
Niels Bohr introduced the first quantized model of the atom in 1913, in an attempt to overcome a major shortcoming of Rutherford's classical model.^{[20]} In classical electrodynamics, a charge moving in a circle should radiate electromagnetic radiation. If that charge were to be an electron orbiting a nucleus, the radiation would cause it to lose energy and spiral down into the nucleus. Bohr solved this paradox with explicit reference to Planck's work: an electron in a Bohr atom could only have certain defined energies E_{n}
E_{n} = −hc_{0}R_{∞}/n^{2}
where c_{0} is the speed of light in vacuum, R_{∞} is an experimentally determined constant (the Rydberg constant) and n is any integer (n = 1, 2, 3, ...). Once the electron reached the lowest energy level (n = 1), it could not get any closer to the nucleus (lower energy). This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant R_{∞} in terms of other fundamental constants.
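The Bohr energy levels E_{n} = −hc_{0}R_{∞}/n^{2} can be evaluated directly; a sketch using CODATA 2014 values:

```python
# Bohr energy levels for hydrogen: E_n = -h*c*R_inf / n**2
h = 6.626_070_040e-34      # Planck constant, J*s
c = 299_792_458.0          # speed of light, m/s
R_inf = 1.097_373_156_85e7 # Rydberg constant, 1/m
eV = 1.602_176_620_8e-19   # joules per electronvolt

def E_n(n: int) -> float:
    """Energy of the n-th Bohr level in joules (negative = bound)."""
    return -h * c * R_inf / n**2

print(E_n(1) / eV)   # ground-state energy, ~ -13.6 eV

# Photon emitted in the n=3 -> n=2 transition (the Balmer H-alpha line):
E_photon = E_n(3) - E_n(2)
print(h * c / E_photon * 1e9)  # wavelength in nm, ~656 nm (red)
```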
Bohr also introduced the quantity ℏ = h/2π, now known as the reduced Planck constant, as the quantum of angular momentum. At first, Bohr thought that this was the angular momentum of each electron in an atom: this proved incorrect and, despite developments by Sommerfeld and others, an accurate description of the electron angular momentum proved beyond the Bohr model. The correct quantization rules for electrons - in which the energy reduces to the Bohr model equation in the case of the hydrogen atom - were given by Heisenberg's matrix mechanics in 1925 and the Schrödinger wave equation in 1926: the reduced Planck constant remains the fundamental quantum of angular momentum. In modern terms, if J is the total angular momentum of a system with rotational invariance, and J_{z} the angular momentum measured along any given direction, these quantities can only take on the values
J^{2} = j(j+1)ℏ^{2}, j = 0, 1/2, 1, 3/2, ...
J_{z} = mℏ, m = −j, −j+1, ..., j
The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given a large number of particles prepared in the same state, the uncertainty in their position, Δx, and the uncertainty in their momentum (in the same direction), Δp, obey
Δx Δp ≥ ℏ/2
where the uncertainty is given as the standard deviation of the measured value from its expected value. There are a number of other such pairs of physically measurable values which obey a similar rule; one example is time vs. energy. The uncertainty principle forces a trade-off in measurement: sharpening one quantity of a conjugate pair necessarily broadens the other, much as a signal narrowly localized in time must be broad in frequency (as in Fourier analysis).
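The bound Δx Δp ≥ ℏ/2 can be made concrete; a sketch (CODATA 2014 values) of the minimum momentum spread for an electron confined to atomic dimensions:

```python
# Heisenberg floor: confining a particle to dx forces dp >= hbar / (2*dx).
hbar = 1.054_571_800e-34   # reduced Planck constant, J*s
m_e = 9.109_383_56e-31     # electron mass, kg

dx = 1e-10                 # ~1 angstrom, roughly the size of an atom
dp_min = hbar / (2 * dx)   # minimum momentum spread, kg*m/s
v_min = dp_min / m_e       # corresponding velocity spread, m/s

print(f"dp >= {dp_min:.2e} kg*m/s, dv >= {v_min:.2e} m/s")
```

The resulting velocity spread is of order 10^{5} m/s, which is why electrons bound in atoms cannot sit still at the nucleus.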
In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones of the entire theory lies in the commutator relationship between the position operator x̂ and the momentum operator p̂:
[x̂_{i}, p̂_{j}] = iℏδ_{ij}
where ?_{ij} is the Kronecker delta.
The Planck-Einstein relation connects the particular photon energy E with its associated wave frequency f:
E = hf
This energy is extremely small in terms of ordinarily perceived everyday objects.
Since the frequency f, wavelength λ, and speed of light c are related by c = fλ, the relation can also be expressed as
E = hc/λ
The de Broglie wavelength λ of the particle is given by
λ = h/p
where p denotes the linear momentum of a particle, such as a photon, or any other elementary particle.
In applications where it is natural to use the angular frequency (i.e. where the frequency is expressed in terms of radians per second instead of cycles per second or hertz) it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant. It is equal to the Planck constant divided by 2π, and is denoted ℏ (pronounced "h-bar"):
ℏ = h/2π
The energy of a photon with angular frequency ω = 2πf is given by
E = ℏω
while its linear momentum relates to
p = ℏk
where k is an angular wavenumber. In 1923, Louis de Broglie generalized the Planck-Einstein relation by postulating that the Planck constant represents the proportionality between momentum and quantum wavelength not just for the photon, but for any particle. This was confirmed by experiments soon afterwards, and the relation holds throughout quantum theory, including electrodynamics.
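The cycle-based relations (E = hf, p = h/λ) and the radian-based relations (E = ℏω, p = ℏk) are the same statements in different units, which a short consistency check makes explicit:

```python
import math

# E = h*f equals hbar*omega, and p = h/lambda equals hbar*k,
# because omega = 2*pi*f and k = 2*pi/lambda absorb the same factor of 2*pi as hbar.
h = 6.626_070_040e-34           # Planck constant, J*s (CODATA 2014)
hbar = h / (2 * math.pi)        # reduced Planck constant, J*s

c = 299_792_458.0               # speed of light, m/s
wavelength = 555e-9             # green light, m
f = c / wavelength              # ordinary frequency, Hz
omega = 2 * math.pi * f         # angular frequency, rad/s
k = 2 * math.pi / wavelength    # angular wavenumber, rad/m

assert math.isclose(h * f, hbar * omega)       # identical photon energy
assert math.isclose(h / wavelength, hbar * k)  # identical photon momentum
```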
Problems can arise when dealing with frequency or the Planck constant because the units of angular measure (cycle or radian) are omitted in SI.^{[21]}^{[22]}^{[23]} In the language of quantity calculus,^{[24]} the expression for the value of the Planck constant, or of a frequency, is the product of a numerical value and a unit of measurement. The symbol f (or ν) for the value of a frequency implies the unit cycles per second (hertz), while the symbol ω implies the unit radians per second; the numerical values of these two ways of expressing a frequency have a ratio of 2π, but the values themselves are equal. Omitting the units of angular measure "cycle" and "radian" can therefore lead to an error of 2π. A similar state of affairs occurs for the Planck constant: the symbol h is used when its value is expressed in J·s per cycle, and ℏ when it is expressed in J·s per radian. Both represent the value of the Planck constant, but in different units; their values are equal, while their numerical values have a ratio of 2π. In the tables above, "value" means "numerical value", and the equations involving the Planck constant and/or frequency actually involve their numerical values using the appropriate implied units.
These two relations are the temporal and spatial component parts of the special relativistic expression using 4-vectors.
Classical statistical mechanics requires the existence of h (but does not define its value).^{[25]} Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value. Instead, it must be some multiple of a very small quantity, the "quantum of action", now called the Planck constant. This is the so-called "old quantum theory" developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden, and quantum laws constrain them based on their action. This view has been largely replaced by fully modern quantum theory, in which definite trajectories of motion do not even exist; rather, the particle is represented by a wavefunction spread out in space and in time. Thus there is no value of the action as classically defined. Related to this is the concept of energy quantization, which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics cannot explain either quantization of energy or the lack of classical particle motion.
In many cases, such as for monochromatic light or for atoms, quantization of energy also implies that only certain energy levels are allowed, and values in between are forbidden.^{[26]}
The Planck constant has dimensions of physical action; i.e., energy multiplied by time, or momentum multiplied by distance, or angular momentum. In SI units, the Planck constant is expressed in joule-seconds (J·s, or N·m·s, or kg·m^{2}·s^{−1}). Implicit in the dimensions of the Planck constant is the fact that the SI unit of frequency, the hertz, represents one complete cycle, 360 degrees or 2π radians, per second. An angular frequency in radians per second is often more natural in mathematics and physics, and many formulas use the reduced Planck constant (or Dirac constant) ℏ = h/2π (pronounced "h-bar").
On 16 November 2018, the General Conference on Weights and Measures (CGPM) voted to redefine the kilogram by fixing the value of the Planck constant, thereby defining the kilogram in terms of the second and the speed of light. Starting 20 May 2019, the new value is exactly
h = 6.626 070 15 × 10^{−34} J·s
In July 2017, NIST measured the Planck constant using its Kibble balance instrument with an uncertainty of only 13 parts per billion, obtaining a value of 6.626 069 934(89) × 10^{−34} J·s.^{[27]} This measurement, along with others, allowed the redefinition of the SI base units.^{[28]} The two digits inside the parentheses denote the standard uncertainty in the last two digits of the value.
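The parenthetical notation can be expanded mechanically; a small illustrative helper (not a standard library API) that converts it to a value and a standard uncertainty:

```python
def parse_concise(s: str) -> tuple[float, float]:
    """Expand concise uncertainty notation, e.g. '6.626069934(89)e-34'
    -> (value, standard uncertainty). Illustrative helper, not a library API."""
    mantissa, _, exp = s.partition("e")
    digits, _, unc = mantissa.partition("(")
    unc = unc.rstrip(")")
    value = float(digits + ("e" + exp if exp else ""))
    # The uncertainty applies to the last len(unc) digits of the mantissa.
    decimals = len(digits.split(".")[1]) if "." in digits else 0
    u = int(unc) * 10.0 ** (-decimals) * (10.0 ** int(exp) if exp else 1.0)
    return value, u

value, u = parse_concise("6.626069934(89)e-34")
print(value, u)  # 6.626069934e-34 with standard uncertainty 8.9e-42
```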
As of the 2014 CODATA release, the best measured value of the Planck constant was:^{[2]}
h = 6.626 070 040(81) × 10^{−34} J·s = 4.135 667 662(25) × 10^{−15} eV·s
The value of the reduced Planck constant (or Dirac constant) was:
ℏ = h/2π = 1.054 571 800(13) × 10^{−34} J·s = 6.582 119 514(40) × 10^{−16} eV·s
The 2014 CODATA results were made available in June 2015^{[29]} and represent the best-known, internationally accepted values for these constants, based on all data published as of 31 December 2014. New CODATA figures are normally produced every four years. However, in order to support the redefinition of the SI base units, CODATA made a special release that was published in October 2017.^{[30]} It incorporates all data up to 1 July 2017 and determines the final numerical values of the Planck constant, h, the elementary charge, e, the Boltzmann constant, k, and the Avogadro constant, N_{A}, that are to be used for the new SI definitions.
The reduced Planck constant can also be calculated using the gravitational time dilation formula, by finding the mass-energy needed to dilate time by one Planck second from one Planck length away.^{[31]}
There are several related constants for which more than 99% of the uncertainty in the 2014 CODATA values^{[32]} is due to the uncertainty in the value of the Planck constant, as indicated by the square of the correlation coefficient, r^{2} > 0.99. The Planck constant is (with one or two exceptions)^{[33]} the fundamental physical constant which is known to the lowest level of precision, with a 1σ relative uncertainty u_{r} of 1.2 × 10^{−8}.
The normal textbook derivation of the Rydberg constant R_{∞} defines it in terms of the electron mass m_{e} and a variety of other physical constants:
R_{∞} = m_{e}e^{4}/(8ε_{0}^{2}h^{3}c_{0}) = α^{2}m_{e}c_{0}/(2h)
However, the Rydberg constant can be determined very accurately from the atomic spectrum of hydrogen, whereas there is no direct method to measure the mass of a stationary electron in SI units. Hence the equation for the computation of m_{e} becomes
m_{e} = 2R_{∞}h/(α^{2}c_{0})
where c_{0} is the speed of light and α is the fine-structure constant. The speed of light has an exactly defined value in SI units, and the fine-structure constant can be determined more accurately than the Planck constant. Thus, the uncertainty in the value of the electron rest mass is due entirely to the uncertainty in the value of the Planck constant.
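This determination of the electron mass is a one-line computation; a sketch using CODATA 2014 values:

```python
# Electron mass from the Rydberg constant: m_e = 2 * R_inf * h / (c * alpha**2)
h = 6.626_070_040e-34       # Planck constant, J*s (CODATA 2014)
c = 299_792_458.0           # speed of light, m/s (exact)
R_inf = 1.097_373_156_85e7  # Rydberg constant, 1/m
alpha = 7.297_352_566_4e-3  # fine-structure constant (dimensionless)

m_e = 2 * R_inf * h / (c * alpha**2)
print(m_e)  # ~9.109e-31 kg
```

Since c is exact and R_{∞} and α are known far more precisely than h, the relative uncertainty of m_{e} computed this way tracks that of h.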
The Avogadro constant N_{A} is determined as the ratio of the mass of one mole of electrons to the mass of a single electron; the mass of one mole of electrons is the "relative atomic mass" of an electron, A_{r}(e), which can be measured in a Penning trap, multiplied by the molar mass constant M_{u}, which is defined as 0.001 kg/mol:
N_{A} = A_{r}(e)M_{u}/m_{e} = A_{r}(e)M_{u}α^{2}c_{0}/(2R_{∞}h)
The dependence of the Avogadro constant on the Planck constant also holds for the physical constants which are related to amount of substance, such as the atomic mass constant. The uncertainty in the value of the Planck constant limits the knowledge of the masses of atoms and subatomic particles when expressed in SI units. It is possible to measure the masses more precisely in atomic mass units, but not to convert them more precisely into kilograms.
Sommerfeld originally defined the fine-structure constant α as:
α = e^{2}/(4πε_{0}ℏc_{0}) = μ_{0}c_{0}e^{2}/(2h)
where e is the elementary charge, ε_{0} is the electric constant (also called the permittivity of free space), and μ_{0} is the magnetic constant (also called the permeability of free space). The latter two constants have fixed values in the International System of Units. However, α can also be determined experimentally, notably by measuring the electron spin g-factor g_{e}, then comparing the result with the value predicted by quantum electrodynamics.
At present, the most precise value for the elementary charge is obtained by rearranging the definition of α to obtain the following definition of e in terms of α and h:
e = (2αh/(μ_{0}c_{0}))^{1/2}
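Evaluating this rearrangement with CODATA 2014 values recovers the familiar elementary charge; a sketch:

```python
import math

# Elementary charge from alpha and h: e = sqrt(2 * alpha * h / (mu_0 * c))
h = 6.626_070_040e-34       # Planck constant, J*s (CODATA 2014)
c = 299_792_458.0           # speed of light, m/s (exact)
mu_0 = 4 * math.pi * 1e-7   # magnetic constant, exact in the pre-2019 SI
alpha = 7.297_352_566_4e-3  # fine-structure constant

e = math.sqrt(2 * alpha * h / (mu_0 * c))
print(e)  # ~1.602e-19 C
```

Note the h^{1/2} dependence: halving the uncertainty in h halves the contribution to the uncertainty in e^{2}, not in e.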
The Bohr magneton and the nuclear magneton are units which are used to describe the magnetic properties of the electron and atomic nuclei respectively. The Bohr magneton is the magnetic moment which would be expected for an electron if it behaved as a spinning charge according to classical electrodynamics. It is defined in terms of the reduced Planck constant, the elementary charge and the electron mass, all of which depend on the Planck constant:
μ_{B} = eℏ/(2m_{e})
The final dependence on h^{1/2} can be found by expanding the variables.
The nuclear magneton has a similar definition, but corrected for the fact that the proton is much more massive than the electron. The ratio of the electron relative atomic mass to the proton relative atomic mass can be determined experimentally to a high level of precision.
Method | Value of h (10^{−34} J·s) | Relative uncertainty | Ref.
---|---|---|---
Kibble (watt) balance | | | ^{[34]}^{[35]}^{[36]}
X-ray crystal density | | | ^{[37]}
Josephson constant | | | ^{[38]}^{[39]}
Magnetic resonance | | | ^{[40]}^{[41]}
Faraday constant | | 1.3×10^{−6} | ^{[42]}
CODATA 2010 | 6.626 069 57(29) | 4.4×10^{−8} | ^{[43]}
Kibble balance with superconducting magnet | | 4.5×10^{−8} | ^{[1]}
The nine recent determinations of the Planck constant cover five separate methods. Where there is more than one recent determination for a given method, the value of h given here is a weighted mean of the results, as calculated by CODATA.
In principle, the Planck constant could be determined by examining the spectrum of a black-body radiator or the kinetic energy of photoelectrons, and this is how its value was first calculated in the early twentieth century. In practice, these are no longer the most accurate methods. The CODATA value quoted here is based on three Kibble balance measurements of K_{J}^{2}R_{K} and one inter-laboratory determination of the molar volume of silicon,^{[44]} but is mostly determined by a 2007 Kibble balance measurement made at the U.S. National Institute of Standards and Technology (NIST).^{[36]} Five other measurements by three different methods were initially considered, but not included in the final refinement as they were too imprecise to affect the result.
There are both practical and theoretical difficulties in determining h. The practical difficulties can be illustrated by the fact that the two most accurate methods, the Kibble balance and the X-ray crystal density method, do not appear to agree with one another. The most likely reason is that the measurement uncertainty for one (or both) of the methods has been estimated too low - it is (or they are) not as precise as is currently believed - but for the time being there is no indication which method is at fault.
The theoretical difficulties arise from the fact that all of the methods except the X-ray crystal density method rely on the theoretical basis of the Josephson effect and the quantum Hall effect. If these theories are slightly inaccurate - though there is no evidence at present to suggest they are - the methods would not give accurate values for the Planck constant. More importantly, the values of the Planck constant obtained in this way cannot be used as tests of the theories without falling into a circular argument. There are other statistical ways of testing the theories, and the theories have yet to be refuted.^{[44]}
The Josephson constant K_{J} relates the potential difference U generated by the Josephson effect at a "Josephson junction" with the frequency ν of the microwave radiation. The theoretical treatment of the Josephson effect suggests very strongly that K_{J} = 2e/h.
The Josephson constant may be measured by comparing the potential difference generated by an array of Josephson junctions with a potential difference which is known in SI volts. The measurement of the potential difference in SI units is done by allowing an electrostatic force to cancel out a measurable gravitational force. Assuming the validity of the theoretical treatment of the Josephson effect, K_{J} is related to the Planck constant by
h = 8α/(μ_{0}c_{0}K_{J}^{2})
A Kibble balance (formerly known as a watt balance)^{[45]} is an instrument for comparing two powers, one of which is measured in SI watts and the other of which is measured in conventional electrical units. From the definition of the conventional watt W_{90}, this gives a measure of the product K_{J}^{2}R_{K} in SI units, where R_{K} is the von Klitzing constant which appears in the quantum Hall effect. If the theoretical treatments of the Josephson effect and the quantum Hall effect are valid, and in particular assuming that R_{K} = h/e^{2}, the measurement of K_{J}^{2}R_{K} is a direct determination of the Planck constant:
h = 4/(K_{J}^{2}R_{K})
The gyromagnetic ratio γ is the constant of proportionality between the frequency ν of nuclear magnetic resonance (or electron paramagnetic resonance for electrons) and the applied magnetic field B. It is difficult to measure gyromagnetic ratios precisely because of the difficulties in precisely measuring B, but the value for protons in water at 25 °C is known to better than one part per million. The protons are said to be "shielded" from the applied magnetic field by the electrons in the water molecule, the same effect that gives rise to chemical shift in NMR spectroscopy, and this is indicated by a prime on the symbol for the gyromagnetic ratio, γ′_{p}. The gyromagnetic ratio is related to the shielded proton magnetic moment μ′_{p}, the spin number I (I = 1/2 for protons) and the reduced Planck constant:
γ′_{p} = μ′_{p}/(Iℏ) = 2μ′_{p}/ℏ
The ratio of the shielded proton magnetic moment μ′_{p} to the electron magnetic moment μ_{e} can be measured separately and to high precision, as the imprecisely known value of the applied magnetic field cancels itself out in taking the ratio. The value of μ_{e} in Bohr magnetons is also known: it is half the electron g-factor g_{e}. Hence
γ′_{p} = (μ′_{p}/μ_{e}) · g_{e}e/(2m_{e})
A further complication is that the measurement of γ′_{p} involves the measurement of an electric current: this is invariably measured in conventional amperes rather than in SI amperes, so a conversion factor is required. The symbol γ′_{p-90} is used for the measured gyromagnetic ratio using conventional electrical units. In addition, there are two methods of measuring the value, a "low-field" method and a "high-field" method, and the conversion factors are different in the two cases. Only the high-field value γ′_{p-90}(hi) is of interest in determining the Planck constant.
Substitution gives the expression for the Planck constant in terms of γ′_{p-90}(hi):
The Faraday constant F is the charge of one mole of electrons, equal to the Avogadro constant N_{A} multiplied by the elementary charge e. It can be determined by careful electrolysis experiments, measuring the amount of silver dissolved from an electrode in a given time and for a given electric current. In practice, it is measured in conventional electrical units, and so given the symbol F_{90}. Substituting the definitions of N_{A} and e, and converting from conventional electrical units to SI units, gives the relation to the Planck constant.
The X-ray crystal density method is primarily a method for determining the Avogadro constant N_{A} but as the Avogadro constant is related to the Planck constant it also determines a value for h. The principle behind the method is to determine N_{A} as the ratio between the volume of the unit cell of a crystal, measured by X-ray crystallography, and the molar volume of the substance. Crystals of silicon are used, as they are available in high quality and purity by the technology developed for the semiconductor industry. The unit cell volume is calculated from the spacing between two crystal planes referred to as d_{220}. The molar volume V_{m}(Si) requires a knowledge of the density of the crystal and the atomic weight of the silicon used. The Planck constant is given by
As mentioned above, the numerical value of the Planck constant depends on the system of units used to describe it. Its value in SI units is known to 12 parts per billion but its value in atomic units is known exactly, because of the way the scale of atomic units is defined. The same is true of conventional electrical units, where the Planck constant (denoted h_{90} to distinguish it from its value in SI units) is given by
h_{90} = 4/(K_{J-90}^{2}R_{K-90})
with K_{J-90} and R_{K-90} being exactly defined constants. Atomic units and conventional electrical units are very useful in their respective fields, because the uncertainty in the final result does not depend on an uncertain conversion factor, only on the uncertainty of the measurement itself.
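Because K_{J-90} = 483 597.9 GHz/V and R_{K-90} = 25 812.807 Ω are exact by definition, h_{90} can be computed exactly; a sketch:

```python
# Conventional-unit Planck constant: h_90 = 4 / (K_J90**2 * R_K90),
# using K_J = 2e/h and R_K = h/e**2, so that K_J**2 * R_K = 4/h.
K_J90 = 483_597.9e9   # conventional Josephson constant, Hz/V (exact by definition)
R_K90 = 25_812.807    # conventional von Klitzing constant, ohm (exact by definition)

h_90 = 4 / (K_J90**2 * R_K90)
print(h_90)  # ~6.62607e-34 J*s, close to but not identical with the SI value
```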
It is currently planned to redefine certain of the SI base units in terms of fundamental physical constants.^{[3]} This has already been done for the metre, which since 1983 has been defined in terms of a fixed value of the speed of light. The most urgent unit on the list for redefinition is the kilogram, whose value has been fixed for all science (since 1889) by the mass of a small cylinder of platinum-iridium alloy kept in a vault just outside Paris. While nobody knows if the mass of the International Prototype Kilogram has changed since 1889 - its mass expressed in kilograms is by definition exactly 1 kg, unchanged, and therein lies one of the problems - it is known that over such a timescale the many similar Pt-Ir alloy cylinders kept in national laboratories around the world have changed their relative masses by several tens of parts per billion, however carefully they are stored. A change of several tens of micrograms in one kilogram is equivalent to the current uncertainty in the value of the Planck constant in SI units.
The legal process to change the definition of the kilogram to one based on a fixed value of the Planck constant is already underway.^{[46]} The 24th and 25th General Conferences on Weights and Measures (CGPM) in 2011 and 2014 approved the redefinition in principle, but were not satisfied with the measurement uncertainty of the Planck constant. The limits they specified were reached in 2016,^{[3]} and the redefinition was scheduled to occur on 16 November 2018, during the 26th CGPM.^{[47]}
Kibble balances already measure mass in terms of the Planck constant: at present, standard kilogram prototypes are taken as fixed masses and the measurement is performed to determine the Planck constant but, once the Planck constant is fixed in SI units, the same experiment would be a measurement of the mass. The relative uncertainty in the measurement would remain the same.
Mass standards could also be constructed from silicon crystals or by other atom-counting methods. Such methods require a knowledge of the Avogadro constant, which fixes the proportionality between atomic mass and macroscopic mass but, with a defined value of the Planck constant, N_{A} would be known to the same level of uncertainty (if not better) than current methods of comparing macroscopic mass.
The question is first: How can one assign a discrete succession of energy values H_{σ} to a system specified in the sense of classical mechanics (the energy function is a given function of the coordinates q_{r} and the corresponding momenta p_{r})? The Planck constant h relates the frequency H_{σ}/h to the energy values H_{σ}. It is therefore sufficient to give to the system a succession of discrete frequency values.