What is a Law of Nature?

In philosophy, there is much debate as to what constitutes a ‘law of nature’. In this essay, I will critically examine the Naive Regularity Theory of Laws, a popular philosophical account of laws of nature. A proponent of this theory claims that p is a statement of a law of nature if and only if it is universally quantified (of the form ‘All Fs are Gs’), true across all space and time, and not a logical necessity (as ‘all bachelors are unmarried men’ is). Given these conditions, a law of nature for a Naive Regularity theorist is a cosmic or Humean uniformity: a law of nature is simply a regularity which holds across the whole universe and its entire history. The theory can also be described as minimalist, because it says that a law of nature is nothing over and above the collection of its instances.

It is important to point out that although the regularity theory of causation (which says that a causal connection is law-like) assumes the regularity theory of laws, the regularity theory of laws need not assume the regularity theory of causation. Therefore, the following discussion will not necessarily take the regularity theory of causation for granted. With respect to the Naive Regularity Theory, it will be stressed that this theory is considerably flawed and problematic. I will, therefore, analyse the various criticisms which aim to highlight the theory’s flaws. Attention will be given to some ‘sophistications’ of the Naive Regularity Theory, which try to resolve these flaws, but which are not immune to criticism either.

Another account of natural law should also be considered as a competitor to the Naive Regularity theory; namely, the theory that laws are relations between universals (a theory supported by D.M. Armstrong). If this theory fares any better, this may help to explain the weaknesses of the Naive Regularity Theory. The general aim of this discussion is to bolster the claim that the Naive Regularity Theory, even in its most sophisticated form, should be abandoned. By critically examining the basic elements of the theory, it should become clear that the theory is inconsistent with many features of laws of nature. The theory also faces numerous metaphysical issues which it has failed to resolve.

Metaphysical Issues with the Theory

One major problem with the Naive theory is that its basic premise, that general regularities are equivalent to laws of nature, is in tension with the fact that there are regularities in nature which are not necessarily manifestations of a law. As a case in point, if we compare the regularity that all lumps of pure gold-195 have a mass less than 1,000 kg with the regularity that all lumps of pure uranium-235 have a mass less than 1,000 kg, we find that the former is an accidental regularity, whereas the latter holds as a law. We simply have never found lumps of gold heavier than 1,000 kg. However, 1,000 kg far exceeds the critical mass of the uranium-235 isotope, meaning that any such lump would undergo a chain reaction, and so destroy itself, before ever reaching that mass.

Thus, the Naive theory is unable to distinguish between genuine laws and mere coincidences. Perhaps an example like this is contrived; however, the mere fact that we can contrive such an example illustrates that the Naive theory is too simple, because it demands that all regularities must be laws, even though this is not always the case.

Karl Popper also criticised the Naive theorist’s account of laws of nature, pointing out that such a theorist is committed to absurd conclusions. Popper reminds us of the moa, an extinct species of bird, of which many biologists supposed that no individual ever lived beyond the age of 50. Given the statement that “all moas die before the age of 50”, the Naive theorist, who is committed to the empiricist school of thought, would have to accept this as a law-like statement, since it is true and contingent. However, it is possible that a moa could have lived beyond the age of 50; the generalisation is true only by accident. These examples indicate that, as Armstrong puts it, “being a Humean uniformity is not sufficient for a law of nature”.

We can also see that the Naive theorist disallows other logical possibilities, such as local uniformities qualifying as laws of nature. It seems entirely possible that even a small-scale, local uniformity could be a manifestation of a law. We could suppose, for example, that uranium found in Australia behaves in a slightly different manner from uranium found elsewhere, without differing from other uranium in any of its most basic properties. Or we could look to less speculative cases, since there are in fact examples of local uniformities treated as laws, albeit on a scale larger than the distance between continents. Galileo’s law of free fall is the generalisation that on Earth, specifically, freely falling bodies accelerate at a rate of 9.8 m/s². Yet if we are committed to the Naive theory, we must disallow this kind of law, on the assumption that laws of nature must be cosmic uniformities.
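For concreteness, the content of this local law can be stated as a simple formula, d = ½gt². The following Python snippet is a minimal sketch (a hypothetical illustration assuming constant acceleration near the Earth's surface and neglecting air resistance):

```python
# Galileo's law of free fall near the Earth's surface:
# distance fallen from rest, d = (1/2) * g * t^2, with g ≈ 9.8 m/s^2.

G_EARTH = 9.8  # m/s^2; a local value, not a cosmic uniformity


def fall_distance(t_seconds: float, g: float = G_EARTH) -> float:
    """Distance fallen (in metres) after t seconds, starting from rest."""
    return 0.5 * g * t_seconds ** 2


# After 2 seconds a body has fallen roughly 19.6 m on Earth.
print(fall_distance(2.0))  # → 19.6
```

The point of the sketch is only that the generalisation is indexed to a place: the constant `G_EARTH` holds for the Earth's surface, not for the universe as a whole.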

This weakness of the Naive theory is further highlighted by Alfred North Whitehead’s suggestion that the laws of nature may differ across the different cosmic epochs of the Universe. So there could be spatio-temporally limited laws, with the speed of light, for example, varying between different evolutionary stages of the cosmos. There is also an inconsistency between the Naive theory and some vacuous laws which are established in physics. Newton’s first law of motion says that all inertial bodies have no acceleration. However, the Naive theorist must presumptuously conclude that such a law is false simply because there are no inertial bodies; that is, because the law has no instances.

The Naive theory is flawed because it is unable to accommodate these kinds of possibilities. Similarly, in George Molnar’s paper Kneale’s Argument Revisited, we find that William Kneale attacks the Naive theory, arguing that it is incomplete, on the basis that it excludes unrealized physical possibilities. Going back to the example with gold and uranium, Kneale argues that we cannot justifiably rule out the physical possibility that a lump of gold could be heavier than 1,000 kg, based only on instances to the contrary.

In addition to these criticisms, there appears to be an inconsistency between the Naive theory and the probabilistic or statistical laws that we come across in physics. If we accept the standards of contemporary physics, then many of the fundamental laws of nature, such as those governing quantum behaviour and atomic decay, are probabilistic. A probabilistic law supposes that if a uranium atom, say, comes to have a certain property, then there is a certain probability that it will disintegrate within a specific interval of time, but no guarantee that it will. And, as Armstrong points out, the probability of this happening is “dependent upon the relative frequency with which such atoms acquire this property”.
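To illustrate what such a probabilistic law asserts, the following Python sketch simulates many atoms with a fixed per-interval decay chance (the value `p_decay = 0.5` and the interval are hypothetical numbers, not measured ones). The relative frequency of decays approximates the probability, yet no individual outcome is fixed by the particular matters of fact:

```python
import random

# A probabilistic law assigns each atom a fixed chance p of decaying
# within a given time interval, rather than guaranteeing an outcome.

def simulate_decays(n_atoms: int, p_decay: float, seed: int = 0) -> float:
    """Return the fraction of n_atoms that decay in one interval."""
    rng = random.Random(seed)
    decayed = sum(1 for _ in range(n_atoms) if rng.random() < p_decay)
    return decayed / n_atoms


freq = simulate_decays(100_000, 0.5)
print(freq)  # close to 0.5, though no single decay was determined
```

The sketch makes the philosophical point concrete: the law (the probability `p_decay`) is something over and above the list of individual outcomes, which is exactly what the Naive theorist denies.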

However, if this analysis of probabilistic laws is correct, then it seems to be in tension with the Naive theorist’s claim that laws of nature are nothing over and above atomic facts. Probabilistic laws imply that we can have laws of nature that are not logically supervenient upon particular matters of fact; indeed, they are logically independent of them. So it seems that probabilistic laws cannot be accommodated by the Naive theory.

Furthermore, because Naive theorists insist that laws of nature are nothing over and above atomic facts, it is difficult to see how, in Armstrong’s words, “the fact that every F is G can explain why any F is G”. The Naive theorist offers the Humean uniformity that all Fs are Gs as an explanation of why any particular F is a G; but since, on this view, the law just is that uniformity, “this involves using the law to explain itself”, as Armstrong says. When this kind of circular reasoning is employed, the Naive theorist is unable to posit a law as an explanation of its manifestations. There needs to be some distinction between a law and its manifestations if the law is to explain them.

In response to this criticism, Ludwig Wittgenstein argued that it is merely an illusion that the laws of nature are explanations of natural phenomena. Therefore, even if the Naive theorist’s account of laws lacks any explanatory power, this should not entitle us to reject it. On the other hand, we should still ask whether an account of laws which has this kind of explanatory power is preferable to the Naive account, which lacks it. Indeed, an account which lacks this kind of explanatory power seems uninteresting and incomplete, because if there is no difference between laws and their instances, then in what sense do laws exist at all? For this reason, F. Dretske maintains that a vital feature of laws of nature is how they “figure in the explanation of the phenomenon falling within their scope”. It is probably far too simplistic, then, to argue, as Naive theorists do, that laws are merely summaries of their instances.

When considering the features of statements of law, it is clear that such statements support counterfactuals; that is, if the statement of law is true, then certain counterfactuals are true as well. If it is a law that all Fs are Gs, then this validates the counterfactual which says that if a, which is not an F, were an F, then it would also be a G. Statements of mere uniformity do not support counterfactuals in this way. Suppose that everybody in a certain room at a certain time happens to be wearing a watch. To defend the Naive theory, we would have to accept the counterfactual that if a, who was not in the room, had been in the room, then a would have been wearing a watch. This reasoning is plainly flawed. The example implies that laws cannot be identified with mere uniformities; in this case, the uniformity of people wearing a watch in a certain place at a certain time.

A possible rebuttal to this criticism involves the argument that laws only support counterfactuals because counterfactuals implicitly refer to laws. In the counterfactual “Freddie’s car would have got hot less quickly had it been white”, there is the hidden assumption that the laws of nature remain the same. In this example, we can see that counterfactuals do not actually tell us anything about the laws of nature themselves. Moreover, laws do not support all counterfactuals. We could ask how quickly two objects would accelerate towards each other had the gravitational constant been triple what it is, a counterfactual in which the law itself is varied. In any case, even if these counter-arguments are sound, which they appear to be, the previously unresolved criticisms of the Naive theory still suggest that a uniformity is insufficient for a law of nature.
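If we nonetheless tried to evaluate the tripled-gravity counterfactual by holding Newton's functional form fixed and varying only the constant, the arithmetic itself would be trivial, which underlines that the difficulty is conceptual rather than computational. A minimal Python sketch, using Earth-like values purely for illustration:

```python
# Newton's law of gravitation: F = G * m1 * m2 / r^2, so the acceleration
# of one body toward another of mass m is a = G * m / r^2.
# Acceleration scales linearly with G, so tripling G triples it.

G_ACTUAL = 6.674e-11  # m^3 kg^-1 s^-2


def accel_toward(m_other: float, r: float, g_const: float = G_ACTUAL) -> float:
    """Acceleration (m/s^2) of one body toward another of mass m_other."""
    return g_const * m_other / r ** 2


a_actual = accel_toward(5.97e24, 6.37e6)                  # ≈ g at Earth's surface
a_tripled = accel_toward(5.97e24, 6.37e6, 3 * G_ACTUAL)
print(a_tripled / a_actual)  # ≈ 3.0
```

The philosophical worry stands untouched by the calculation: the law tells us what follows given the actual constant, not what would be the case were the constant itself different.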

It is important to remember that as humans we cannot observe all things and events, of past, present and future, to which a law of nature will apply. For the Naive theorist, the law is not only a summary of its observed instances but also its unobserved instances. Arguably, this is a sensible conclusion to make – there must be instances of a law which we have not observed, or which we cannot observe just yet.

Despite this, David Hume famously remarked that this inference, from the observed to the unobserved, is irrational, since there is no independent basis on which we can say that the law will hold in unobserved cases. This, in a nutshell, is the problem of induction. We can see that with the Naive theory, if the observed instances are used to infer the unobserved instances, no explanation is given as to why this should be the case. This lends credence to the view that a law should be able to explain its instances. If we were to put some ‘distance’ between the observed instances and the unobserved instances, by means of a law (which is distinct from its instances), then the inference from the observed to the unobserved could be rational.

Is a ‘Sophisticated’ Version Defensible?

Some philosophers have tried to amend and refine the Naive theory in order to avoid the criticisms made of it. R.B. Braithwaite, for example, says that the difference between laws and cosmic coincidences lies “in the different roles which they play in our thinking”; that is, the difference lies in why we believe laws, not in what they say. In a similar fashion, George Molnar believed he could avoid Kneale’s rebuttal of the Naive theory by claiming that a Humean regularity could be defined as a law only under certain epistemic restrictions. These restrictions would be “a clause requiring p [the Humean regularity] to be known in a certain way, or that the evidence be acquired in a certain manner”.

Yet even if these restrictions and qualifications of the theory could resolve its previous flaws, they seem to create some new issues due to the fact that they rely on human subjectivity. With Molnar’s epistemic restrictions, as well as Braithwaite’s qualifications, we find that it is a requirement for minds to exist in order for laws to exist. This seems to be in tension with a possible feature of any law of nature: that the law is something to be discovered, not to be invented. This feature of a law is emphasised by Ayer’s counterargument to Molnar’s epistemic restrictions when he says that “it makes sense to say that there are laws of nature which remain unknown”.

The sophisticated regularity theorist may reply that if certain conditions had obtained, then we could have a certain attitude towards the Humean uniformity in question. But this only complicates matters. We would then need to postulate laws about epistemic attitudes. This introduction of attitudes and human subjectivity into the debate seems to be anthropocentric – there could, in fact, be laws of nature which rule out the possibility of minds existing at all.

Some philosophers of science believe that the Ramsey-Lewis system view of laws is more defensible than Braithwaite and Molnar’s sophisticated Naive theories. According to this view, a regularity is a law of nature if and only if it interlocks well with other regularities and can act as an axiom in a true deductive system, achieving the best combination of simplicity and strength. This more sophisticated account is supposed to allow us to properly distinguish between accidental regularities and laws, and it allows for the existence of vacuous laws as well. Another strength of this account is its concession that we are somewhat ignorant about the laws of nature and so we need to create a system of axioms which can help us to organise the facts in the most economical way possible.

In spite of these strengths, this account, like Braithwaite and Molnar’s sophistications of the Naive theory, unavoidably relies on human subjectivity. What appears simple to one person may appear complex to someone else, for example. In the words of J. Roberts, “we have no practice of weighing competing virtues of simplicity and information content for the purpose of choosing one deductive system over others…”. The ambiguity of the terms ‘simplicity’ and ‘strength’ therefore makes it difficult to use the Ramsey-Lewis system view in a way which can easily differentiate between accidental regularities and laws.

An Alternative Account of a Law of Nature

The discussion so far suggests that we need to look for an alternative account of natural laws, one which maintains that a law of nature is not equivalent to a cosmic uniformity. For Armstrong, “any satisfactory account of laws of nature must involve universals, and irreducible relations between them”. A universal is a property or relation (such as mass or velocity) that can apply to more than one object. Given this fact, the view that a law of nature involves a necessitation between universals is much stronger than the view that laws are nothing more than contingent regularities.

If we consider the law that magnesium is combustible in air, according to Armstrong we should picture this law as a relation, of natural necessity, between the properties of being magnesium and of being combustible in air. Immediately we can see that such an account avoids the issue of accidental regularities not being laws. In addition, with this view we can make sense of explanation and induction. In Alexander Bird’s view, “the fact of a’s being both F and G and the general fact of all Fs being Gs are both quite distinct from the fact of Fness necessitating Gness. The former are facts about individuals; the latter about universals”. Reference to necessitation between universals as an explanation of particular facts and regularities is therefore genuinely informative.

The Naive theorist may attack such an account on the basis that it is too strong, arguing that it is Platonic and unnecessarily posits mysterious universals which we cannot observe. However, as Dretske points out, we can avoid this criticism by offering a slightly weaker account which says that “if there are any laws of nature, then there exists universal properties with a definite relationship between them”. So, the proponent of this account does not have to insist that universal properties certainly exist. In any case, the other strengths of this account include its ability to support counterfactuals, its usefulness in prediction (since we are told what must happen), and its support for our intuition that laws are independent of minds. The account avoids essentially all the criticisms made of the Naive theory, and in this way it is a far superior one.

In conclusion, the Naive theory, even in its most ‘sophisticated’ form, cannot overcome the numerous arguments made against it. The account is inconsistent with established laws of nature, with physical and logical possibilities, and with the features that a law of nature should essentially have. Given these points, the account should be abandoned. Although the Naive theory may seem intuitively correct because of its simplicity and its commitment to observation and empiricism, it cannot be defended on these grounds alone. The weaknesses of the theory also help to highlight the strengths of the ‘universals’ theory, implying that we should defend and adopt this theory instead of the Naive theory.

4 Comments

  1. danlanglois
    August 19, 2018 / 7:45 am

    Very interesting subject, but you lost me:

    ‘..if we compare the regularity that all lumps of pure gold-195 have a mass less than 1,000 kg to the regularity that all lumps of pure uranium-235 have a mass less than 1,000 kg, we find that the former is an accidental regularity, whereas the latter holds as a law. .. Perhaps an example like this is contrived..’

    What is the distinction being drawn, about the ‘regularity that all lumps of pure gold-195 have’, versus ‘the regularity that all lumps of pure uranium-235 have’? You don’t drop a hint about how they are different in nature. Your point here was perhaps supposed to be obvious, but who is reading this, somebody with a degree in chemistry? I do know, about Molecular Weight/Monoisotopic Mass/Exact Mass. For gold-195 it is 194.965 g/mol. For uranium-235 it is 235.044 g/mol. My best guess is that I’m supposed to be thinking about what killed 2 scientists working on the Manhattan project; they slipped up and let a core they were working with go critical for a brief time, and it killed them. When we talk about how nuclear weapons work, we inevitably mention the “critical mass.” But it’s a very tricky concept, right? I mean, sure, I get how knowing that the critical mass is so many kilograms of fissile material, as opposed to so many tons, was an early and important step in deciding that an atomic bomb was feasible in the first place. So you don’t want to *inadvertently* create a critical mass. But, in my understanding, there is no *one* critical mass of uranium-235. It’s more complicated than that. So okay, maybe what you are doing, is you are looking up what “the” critical mass of uranium-235 is, and you find a number like 50 kg. You then say, OK, you must need 50 kg to start a nuclear reaction. But this is wrong. If your uranium is fashioned not into a solid sphere, but a cylinder, or is a hollow sphere .. in other words, under different conditions, the mass of fissile material that will react varies, and varies dramatically. We might consider different geometries, densities, temperatures … it’s not a fixed number, unless you also fix all of your assumptions about the conditions etc. Does it seem like reactivity is a function of the mass alone? But no, it has nothing to do with the critical mass per se. That is a question of bomb efficiency.
So we can imagine a whole sea of uranium-235 atoms. Neutrons enter the system (either from a neutron source, spontaneous fissioning, or the outside world). But if you consider whether there are neutron-moderating substances (i.e. water) present and so forth, then maybe you can (if you know what you are doing) exceed that 50 kg number without it reacting.

    Another thing, is that sure, it’s true that if you put enough fissile material in one place, in the right shape, under the right conditions, it’ll become critical. But what does that mean? Just — generating heat. In a bomb, you need more than just a critical reaction. Consider: when a reactor “goes critical” it means that the reactor is producing a self-sustaining reaction, but it sounds like the reactor is going to explode. You can even be supercritical in a reactor (they usually are on startup, to get the neutron economy started). Anyways, it’s more nuanced than a mere mass or radius or volume.

    Let me imagine a response: ‘whatever, but please do not slowly bring together two half-spherical Uranium cores to produce a supercritical mass of Uranium.’

    I get it, but really, it would have to be about 500 pounds of explosives slamming the two pieces together with high energy, or somesuch (plus neutron reflectors? plus a neutron initiator and a lump of heavy metal surrounding the fission material?). Yes, this is a very bad idea.

    P.S. If this is not about whether it’s important to realise what ‘critical mass’ means here, and supposing it’s a specific mass, then okay, I’m back to wondering what it’s about..

  2. LunarTerry
    September 24, 2018 / 11:34 am

    I agree with DANLANGLOIS that the gold V uranium remark is not a very good example: could you suggest something else, Sam? Thanks. LunarTerry

    • Sam Woolfe
      Author
      September 24, 2018 / 11:42 am

      I will try to find a better example. As far as I can remember, this was the example used in my philosophy textbook. It seems the example is regularly used by philosophers of science (see below, p.3).

      http://faculty.poly.edu/~jbain/philsci/articles/Bird.pdf

      • LunarTerry
        September 26, 2018 / 7:42 am

        Thanks for posting the link to the Bird article, Sam. He discusses the law V regularity issue and the gold-uranium example in more detail, so I’m a bit happier with it now. I’d be grateful if you could try to answer this question: it seems to me there is a domain intermediate between physics and philosophy. It is the territory where thought experiments are conducted, e.g. Galileo’s refutation of Aristotle (on acceleration due to gravity); proof of the principle of moments using simple symmetry arguments; proof of momentum conservation also using symmetry. I would also include the argument for Bell’s inequality, which is based on very basic intuitions about the world. My question is: is there a recognized branch of philosophy/physics which embraces these kinds of considerations? Many thanks.
