## Measuring the brightness of stars

If you look at the night-time sky, it quickly becomes evident that stars appear to have different brightnesses. Two things affect how bright a star appears to be: its intrinsic brightness and its distance from us. If we know the distance of the star, using e.g. stellar parallax as I discussed here, we can work out the intrinsic luminosity $L$ from its apparent luminosity $\ell$. If the distance of the star is $d$, then we can write that the apparent luminosity is $\ell = \frac{ L }{ 4 \pi d^{2} }$

as the star spreads its light out over the surface of a sphere of radius $d$, which has area $4 \pi d^{2}$.

The units of luminosity $L$ as used in astronomy are watts (W), and the units of apparent luminosity $\ell$ are $\text{W/m}^{2}$. In radio astronomy we do indeed measure the apparent brightness of sources in these units, but for historical reasons we use an entirely different system at visible wavelengths. Instead we use something called the magnitude system.
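The inverse-square relationship above is easy to check numerically. Here is a minimal sketch in Python; the solar luminosity and Earth-Sun distance used below are standard values I am assuming, not figures from this post, and the function name is my own:

```python
import math

def apparent_brightness(L, d):
    """Flux in W/m^2 received from a source of luminosity L (W) at distance d (m)."""
    return L / (4 * math.pi * d**2)

# Worked example: the Sun as seen from Earth.
L_SUN = 3.828e26   # IAU nominal solar luminosity, in watts (assumed value)
AU = 1.496e11      # mean Earth-Sun distance, in metres (assumed value)

flux = apparent_brightness(L_SUN, AU)
print(f"{flux:.0f} W/m^2")  # ~1361 W/m^2, the familiar "solar constant"
```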

## The magnitude system

The magnitude system used by astronomers dates back to ancient Greece, and the work of Hipparchus on the island of Rhodes in the 2nd century B.C. He compiled a catalogue of the brightness of stars, and decided to rank the brightest as first magnitude and the faintest ones he could see (barely perceptible to the naked eye) as sixth magnitude. In his scheme, a second magnitude star was twice as faint as a first magnitude star, a third magnitude star twice as faint as a second magnitude star, and so on.

With the invention of the telescope it was realised that there were many stars fainter than sixth magnitude; they simply could not be seen with the naked eye. In addition, telescopes allowed astronomers to visually distinguish small brightness differences between stars which had all been categorised as, say, 1st magnitude. In 1856, with the introduction of photographic imaging of celestial objects, the English astronomer Norman Pogson decided to formalise the magnitude system.

His system, which we still use today, stated that

a star of 1st magnitude is exactly 100 times brighter than a star of 6th magnitude.

There are two important things to note about the magnitude system:

1. It is a backwards system, lower numbers are brighter than higher numbers.
2. It is a logarithmic system. A 6th magnitude star is 100 times fainter than a 1st magnitude star. An 11th magnitude star is 10,000 times (100 x 100) fainter than a 1st magnitude star.
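These ratios follow directly from the logarithmic definition: a magnitude difference $\Delta m$ corresponds to a brightness ratio of $10^{0.4 \Delta m}$. A short Python sketch (the function name is my own):

```python
def brightness_ratio(delta_m):
    """Brightness ratio corresponding to a magnitude difference delta_m."""
    return 10 ** (0.4 * delta_m)

print(brightness_ratio(5))   # ~100: a 6th magnitude star is 100x fainter than 1st
print(brightness_ratio(10))  # ~10000: an 11th magnitude star vs a 1st magnitude star
```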

The star Vega defines the zero point of the magnitude system. As those of you familiar with the sky as seen from mid-latitudes in the Northern Hemisphere will know, Vega is not the brightest star in the sky; in fact it is the 3rd brightest as seen from these latitudes, and the 5th brightest if one includes stars not visible at mid-Northern latitudes. The two stars which are brighter than it from these latitudes, Sirius and Arcturus, therefore have negative magnitudes. Obviously the Sun and the Moon, and planets like Venus and Jupiter (when they are at their brightest), are also brighter than Vega, so these too have negative magnitudes.

The figure below shows the apparent magnitude of some well known objects such as Polaris, Vega, Sirius, Jupiter, Venus, the Moon and the Sun. The figure also shows the limiting magnitudes which can be seen with various instruments. At the bottom of the scale are the faintest objects which can be seen with the Hubble Space Telescope.

This figure shows the apparent magnitude of the Sun, the Moon, Venus, Jupiter, Vega, Polaris and some others.

From the definition of the magnitude system we can write that $2.5 \log_{10} \left( \frac{ \ell_{1} }{ \ell_{2} } \right) = m_{2} - m_{1}$, where $\ell_{1}$ and $\ell_{2}$ are the apparent luminosities of two stars with apparent magnitudes $m_{1}$ and $m_{2}$.

It is sometimes incorrectly stated (as in the figure below) that a star of 1st magnitude is 2.5 times brighter than a star of 2nd magnitude. This is only an approximation; the exact figure can be derived from the equation above: $2.5 \log_{10} \left( \frac{ \ell_{1} }{ \ell_{2} } \right) = m_{2} - m_{1} = 2 - 1 = 1 \rightarrow \frac{ \ell_{1} }{ \ell_{2} } = 10^{1/2.5} = 10^{0.4} = 2.512$

A magnitude difference of 2 equates to a luminosity ratio of $10^{2/2.5} = 10^{0.8} = 6.31$, and so on.

A cartoon of the magnitude system for stars. Note, the statement that a magnitude difference of 1 means a star is 2.5 times brighter is not strictly correct, but only an approximation, as I have shown above.
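The size of the approximation error is easy to see by compounding each factor over 5 magnitudes, which by definition must give exactly 100. A quick check in Python:

```python
# Exact one-magnitude brightness ratio from Pogson's definition:
pogson = 10 ** 0.4          # ~2.512, often rounded to "2.5"
print(round(pogson, 3))     # 2.512

# Compounding the rounded value over 5 magnitudes undershoots
# the defining ratio of exactly 100:
print(round(2.5 ** 5, 2))   # 97.66
print(round(pogson ** 5))   # 100
```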

## Apparent and absolute magnitude

As I said at the start of this blog, how bright a star appears to be in the sky depends on how bright it actually is, and its distance from us. How bright it appears to be is referred to as the apparent magnitude, but the star’s intrinsic brightness is called its absolute magnitude. By definition, the absolute magnitude of a star is what its apparent magnitude would be if it were at a distance of 10 parsecs (10 pc) from us. So, when we compare the absolute magnitudes of stars we are placing them all at 10 pc, so that the differences in magnitudes tell us the ratios of the stars’ intrinsic brightnesses. We denote the apparent magnitude of a star with a lower-case m, and the absolute magnitude with an upper-case M. To convert from one to the other we can use the equation $\boxed{ m - M = 5 \log_{10}(d) - 5 }$

where $d$ needs to be in parsecs. This equation just comes from our definition of the magnitude system and our definition of absolute magnitude. The quantity $m - M$ is referred to as the distance modulus. From the relationship between distance and stellar parallax, $d \text{ (in parsecs) } = \frac{ 1 }{ p \text{ (in arc seconds) } }$

we can rewrite the relationship between apparent and absolute magnitude as $m - M = 5\log_{10} \left( \frac{ 1 }{ p } \right) - 5 = -5\log_{10}(p) - 5$, which rearranges to $\boxed{ M = m + 5\log_{10}(p) + 5 = m + 5(1 + \log_{10}(p)) }$
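Both boxed relations can be wrapped as small helpers. A sketch in Python; the function names are my own, and Vega's parallax of about 0.13 arc seconds is an assumed value, not a figure from this post:

```python
import math

def distance_from_modulus(m, M):
    """Distance in parsecs from the distance modulus m - M."""
    return 10 ** ((m - M + 5) / 5)

def absolute_magnitude_from_parallax(m, p):
    """Absolute magnitude from apparent magnitude m and parallax p (arcsec)."""
    return m + 5 * (1 + math.log10(p))

# Vega: m = 0.0, parallax ~0.130 arcsec (assumed value) -> M ~ +0.6
print(round(absolute_magnitude_from_parallax(0.0, 0.130), 2))

# Sirius: m = -1.46, M = +1.4 -> distance of roughly 2.7 pc
print(round(distance_from_modulus(-1.46, 1.4), 2))
```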

## The apparent and absolute magnitude of some well known stars

In the table below I have listed the apparent and absolute magnitudes of some well known stars (strictly speaking they are the visual magnitudes).

The apparent and absolute magnitude of some well known stars

| Name | Apparent Magnitude | Absolute Magnitude | Luminosity (compared to the Sun) |
|------|--------------------|--------------------|----------------------------------|
| The Sun | $-26.74$ | $+4.83$ | 1 |
| Sirius | $-1.46$ | $+1.4$ | 23.6 |
| Canopus | $-0.72$ | $-5.6$ | $1.5 \times 10^{4}$ |
| Arcturus | $-0.05$ | $-0.3$ | 112.7 |
| Vega | $0.0$ | $+0.6$ | 49.2 |
| Rigel | $+0.14$ | $-6.8$ | $4.5 \times 10^{4}$ |
| Deneb | $+1.26$ | $-8.7$ | $2.6 \times 10^{5}$ |
| Betelgeuse | $+0.41$ | $-5.2$ | $1.0 \times 10^{4}$ |

As this table shows, the intrinsic brightness of all of these well known stars is greater than that of the Sun; for most of them it is much greater. This is true of most of the well known stars in the sky: they are well known because they appear bright, and they appear bright either because they are close to us (e.g. Sirius) or because they are intrinsically very bright (e.g. Deneb, Canopus, Rigel). It is not because bright stars are more common; in fact the most common type of star is the so-called red dwarf, which is intrinsically much fainter than the Sun. So much so that with the naked eye we cannot see most of the red dwarfs in our neighbourhood, and those close enough to be visible to the naked eye are still faint and insignificant.
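The luminosity column of the table above can be reproduced from the absolute magnitudes alone, since a difference in absolute magnitude corresponds to a ratio of intrinsic luminosities. A sketch in Python, taking the Sun's absolute magnitude of $+4.83$ from the table (the function name is my own):

```python
def luminosity_vs_sun(M, M_sun=4.83):
    """Intrinsic luminosity relative to the Sun, from absolute magnitude M."""
    return 10 ** (0.4 * (M_sun - M))

print(f"{luminosity_vs_sun(-8.7):.2e}")   # Deneb: ~2.6e5 times the Sun
print(round(luminosity_vs_sun(1.4), 1))   # Sirius: ~23.6 times the Sun
```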

As I shall show next week, once we have determined the intrinsic brightness of a star we only need one more piece of information, its temperature, to determine its actual size.
